Developers Contest [CORRECTED]: Slashing Condition Design [3-23.08.2020.]

Accepted by Governance voting

Moved to Active contests!


Tell me please, where can I find this code?


There is a checkload command in lite-client:

checkload[all|severe] <start-utime> <end-utime> [<savefile-prefix>]  Checks whether all validators worked properly during the specified time interval, and optionally saves proofs into <savefile-prefix>-<n>.boc
loadproofcheck <filename>  Checks a validator misbehavior proof previously created by checkload

This is probably what @Mitja meant.



Why is there no submission for this important contest yet? Just 3 days left… Come on, community :wink:

I want to submit my own version of slashing conditions for consideration. It will probably require some code changes to calculate the average time for a block to be signed by a validator, but such a scheme will help avoid slowing down the network in the future by participants who cannot process blocks in a timely manner.
My submission is uploaded to the contest and, if necessary, can be added here as well.

Are you sure you have posted it to the contest? I cannot find it here.

I think you should also publish it here on the forum so that it can be discussed.


Yes, I’m sure I posted it on the contest page, and I even asked Roman about this as well, since I cannot see my proposal. Can we check it somehow?


We have just sent our TON Improvement Proposal for the “Slashing Condition” specifications; please check here for details:

Feel free to contact: @zxcat, @andreypf


Please check my submission to the Slashing Conditions Contest.
I will be glad to see any comments and hope the ideas from my work can help improve the FreeTON network and make it more stable and secure.


My Slashing conditions TIP:

I think this proposal will help make FreeTON more secure and decentralized.
Certain types of slashing may require additional discussions and cybersecurity audits.


FreeTON: Slashing Condition Design

Developer Contest

23 AUGUST 2020


These are the 3 main issues with validators in the FreeTON blockchain:

  1. Validators Signing Fewer Blocks and Taking Longer - Making the Network Slow
  2. Validator Nodes Down - Making the Network Less Secure and Vulnerable to Attacks
  3. Double Signing of Blocks / Malicious Activities

We will discuss all 3 problems and their slashing conditions below in detail.

Validators Signing Fewer Blocks than Others and Taking More Time to Sign Each Block

Slashing conditions should be applied to validators who are not using good hardware, network, and connectivity to run their validator nodes. They may be using cheap servers with poor connectivity, which results in fewer blocks signed and more time taken to sign each block, making the overall network slower.

Although this is not a big threat to network stability, since only ⅔ of signatories are needed to pass a block, we should constantly remove bad validators and add new ones to keep the network fast and secure.

I am proposing a slashing condition which will be automated and will slash slow validators and also reward good validators.


Total Validators : 1000
Time Period : 15 days (one slashing round, so that 1-2 days of a validator's network being slow doesn't make it a bad validator)

We can calculate validator efficiency by calculating the power of each validator, where we put greater weight on the number of blocks signed and less weight on the time taken to sign a block.

Validator Power = (Block Weight)*(Blocks Signed) - (Time Weight)*(Average time taken to sign a block over the 15-day period)*(Total Blocks Signed)

For Example : Validator 1 Power = 2*(10000 blocks signed) - 1*(0.002 secs)*(10000 blocks signed)

We will rank the validators in order of validator power from 1 to 1000.

Note : One simple way is to slash the validators that come last and reward those at the top, but I believe this type of blind slashing would punish even average validators.

Thus I am proposing a mean-and-deviation method. Only validators that perform worse than the deviation allows should be punished, and if in a cycle no validator performs below the threshold, then none should be punished.

Mean = Total of all Validator Powers / Number of Validators

Deviation = (Validator Power of Rank 1 - Mean Validator Power)

If Validator Power < (Mean - Deviation), the validator will be slashed.

In this case, only those validators whose power falls below the mean by more than the top validator's power exceeds it will be slashed.
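As a sketch, the power formula and the mean/deviation threshold could be computed like this. The weights, the helper names, and the sample data are illustrative assumptions, not part of the proposal, and I read "Deviation" as the gap between the top-ranked validator's power and the mean:

```python
# Hypothetical sketch of the proposed ranking and slashing rule.
# BLOCK_WEIGHT and TIME_WEIGHT are the illustrative 2 and 1 from the example.

BLOCK_WEIGHT = 2.0
TIME_WEIGHT = 1.0

def validator_power(blocks_signed, avg_sign_time):
    """Power = w_b * blocks - w_t * avg_time * blocks, per the formula above."""
    return BLOCK_WEIGHT * blocks_signed - TIME_WEIGHT * avg_sign_time * blocks_signed

def slashed_validators(stats):
    """stats: {validator_id: (blocks_signed, avg_sign_time_seconds)}.
    Slash validators whose power falls below mean - deviation, where
    deviation is the gap between the rank-1 power and the mean."""
    powers = {v: validator_power(b, t) for v, (b, t) in stats.items()}
    mean = sum(powers.values()) / len(powers)
    deviation = max(powers.values()) - mean
    threshold = mean - deviation
    return sorted(v for v, p in powers.items() if p < threshold)
```

With three validators where the third signs far fewer blocks and more slowly, only that third validator falls below the threshold, while an average validator is left alone.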

Rewards for Good Validators : 2 Methods

  1. Divide slashed tokens equally among all validators above the mean power.
    Say 200 validators performed better than the mean validator power; then all of them get equal amounts of tokens, as all of them are better than average.

  2. Preferential Allotment of Rewards.
    This also applies to validators above the mean validator power, but it rewards the top validators more and the others less, thus creating competition to become more effective and earn more rewards, and pushing the network toward better hardware, software, and networking.
    For example, if 200 validators are above the mean power, the top 30 get 50% of the slashed tokens, while the other 50% is distributed among the remaining 170.

I like Method 2 better, but I couldn't come up with an exact arrangement for how much more the top validators should receive than the others; I believe the whole community should decide this.
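A minimal sketch of Method 2, assuming an even split within each tier and the top-30 / 50% parameters from the example (the function name and the tiering rule are my assumptions):

```python
# Hypothetical sketch of the preferential allotment (Method 2).
# Within each tier the split is assumed to be even; the proposal leaves
# the exact arrangement to the community.

def preferential_rewards(above_mean, slashed_pool, top_n=30, top_share=0.5):
    """above_mean: validator ids sorted best-first (all above mean power).
    The top_n validators split top_share of the pool; the rest split the remainder."""
    top, rest = above_mean[:top_n], above_mean[top_n:]
    rewards = {}
    if top:
        per_top = slashed_pool * top_share / len(top)
        rewards.update({v: per_top for v in top})
    if rest:
        per_rest = slashed_pool * (1 - top_share) / len(rest)
        rewards.update({v: per_rest for v in rest})
    return rewards
```

With 5 validators, a pool of 100 tokens, and `top_n=2`, the top two each receive 25 tokens and the other three share the remaining 50, so the whole pool is always paid out.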

Validator Nodes Down - Making Network less secure and vulnerable to attacks

Nodes are the backbone of the network, and validators have the responsibility to keep these nodes running in good health and the network stable; they are also getting token rewards for it.

If a few nodes go down occasionally, it's not a problem, but a slashing condition is very important so that validators don't take nodes for granted and let the network slow down over time.

Condition 1:

Slashing Cycle : 30 days
Validator Rewards per Month : Total Reward/12

If a node is down for more than 15 days, the full reward for that month will be slashed.
If a node is slashed for 3 months continuously, it will get no rewards and will be removed from the network.

Condition 2:

For 1 Day of Downtime : 1% Slashing of Total Rewards.
Thus if a node goes down for 100 days in a year, it will get no rewards.
This 1% slashing can be further divided into hours and minutes, so we can also slash if a node goes down for a few minutes or hours.

For Example :
24 hours = 1% slashing
1 hour = 1/24% slashing
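Condition 2, pro-rated down to minutes, might look like the following sketch (the helper name and the cap at 100% are my assumptions):

```python
# Sketch of Condition 2: 1% of total rewards slashed per 24 hours of
# downtime, pro-rated down to minutes and capped at 100% (no rewards left).

def downtime_slash_percent(downtime_minutes):
    """1% per 1440 minutes (24 h) of downtime, capped at 100%."""
    return min(100.0, downtime_minutes / 1440.0)
```

This reproduces the examples above: 24 hours of downtime costs 1%, 1 hour costs 1/24%, and 100 days of downtime wipes out the full reward.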

Rewards for Good Validators:

These slashed tokens can be divided equally among validators who had more than 99% or 99.5% uptime in that slashing cycle. Alternatively, we can distribute these rewards to good validators according to power, as discussed in the first part. That's up to the community to decide.

Double Signing of Block / Malicious Activities

I believe the fishermen technique mentioned in the TON Whitepaper is a good way to find fraudulent activities and double signing of blocks.

Here the community can appoint fishermen, and anyone with significant tokens and resources can join as a fisherman and try to catch double signing of blocks and other suspicious activities.

Slashing can be anywhere between 5% and 100% depending on the frequency of these activities by the validator, and all of the reward goes to the fisherman.

I believe only the fisherman who caught and reported the activity first should be rewarded, as they are the one gambling their tokens and resources to catch the unusual activity. Thus the fisherman's reward should be high, and it should not be shared with other validators, because validators are already getting rewards in the two cases above.


The core of my proposal for the slashing problem is based on Magister Ludi's M1 metric (uptime) and an extension of the TVM. The proposed solution does not rely on any off-chain mechanisms; it is on-chain only, with only two new TVM instructions required, so the implementation is entirely possible in smart-contract logic.

Please check the proposed PDF for the details; its SHA256 is 8cfc643bbf44fb3228723db80ad1293b06d9cbd651b5edc97df5b43fe4594a97.
The proposal and its attached PDF are accessible here:



Actually, I'm still studying the ton.pdf and catchain.pdf documents, and I'm trying to apply my knowledge to the results we got from the four weeks of the Validators contest. I didn't plan to take part in this Slashing Condition Design contest because I don't feel very confident about my technical background, so please excuse any possible technical misconceptions.
However, I was surprised to find that none of the published proposals solves what I believe is a huge problem that was identified during the past Validators contest.


The TON documentation states that the TON Blockchain is able to generate new blocks once every 4-5 seconds, as originally planned. Nevertheless, we have all seen how this promise is not fulfilled even with fewer than 100 validators and a high load on the network. For example, here are excerpts from the statistics I collected during the Validators contest:

4698 blocks were produced during the 18 hours of the 5th cycle game1 week3. This gives approximately ~13.8 seconds of average block time.
5890 blocks during the 8th cycle game1 week3 - ~11 seconds average block time
9515 blocks during the 5th cycle game2 week3 - ~6.6 seconds average block time
354 blocks during the 6th cycle game2 week3 - ~183 seconds average block time
3957 blocks during the 3rd cycle game0 week4 - ~16.4 seconds average block time
2781 blocks during the 4th cycle game0 week4 - ~23.3 seconds average block time
4481 blocks during the 5th cycle game0 week4 - ~14.5 seconds average block time
7101 blocks during the 6th cycle game1 week4 - ~9.1 seconds average block time
6673 blocks during the 7th cycle game1 week4 - ~9.7 seconds average block time
4776 blocks during the 3rd cycle game2 week4 - ~13.6 seconds average block time
1190 blocks during the 5th cycle game2 week4 - ~54.4 seconds average block time
2038 blocks during the 6th cycle game2 week4 - ~31.8 seconds average block time
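The averages in the list are simply the interval length divided by the block count; for example, the first row works out like this:

```python
# Sanity check of the first row: 18 hours of cycle time over 4698 blocks.
avg_block_time = 18 * 3600 / 4698  # seconds per block
print(round(avg_block_time, 1))  # → 13.8
```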

The reason for such huge block times is that validator nodes with weak hardware do not sign blocks at the expected speed, which increases the delay between signed blocks in the network.
The problem is not that the weak nodes are not signing the blocks at all; the problem is that they are signing them slowly. In this case there is no reason to fine a node, because by the formal criteria it signed the block, but in practice this could freeze the network: increasing block time from 0.2 sec to 5-10 minutes makes the blockchain non-functional even though it still produces blocks.
Increasing the number of validator nodes makes the network more distributed, which is very good for blockchain stability and is expected. At the same time, it increases the number of nodes required to sign every block, and thus the network load and the time necessary to sign a block. When the number of nodes that cannot process the heavy load in time grows, the network may stop working altogether because of block-generation timeouts. So with an increase in the number of validators and in network load, this problem will worsen.


The multi-blockchain structure of a catchain outlined in catchain.pdf leaves very little possibility for “cheating” in a consensus protocol built upon a catchain. At the same time, the Validators contest showed that it is impossible to detect weak validator nodes using binary logic (made/didn’t make) for actions like propose, approve, sign, and vote.
All proposals I’ve heard of take into account whether a validator did or didn’t perform some action like proposing, approving, signing, or voting, but they do not measure the speed of these actions, which is a critical factor in keeping the network functional.

TON Improvement Proposal

Slashing should be verifiable by anyone interested, so the only way to provide this is to keep slashing metrics in the blockchain itself.
My suggestion is that, along with the block signature, each validator writes a signed timestamp to the block.
Accordingly, at the end of each cycle it will be possible to trace these timestamps for each block and find the few percent of validators who sign blocks the slowest. Thus, separate smart contracts will be able to analyze the collected data, find the bottleneck in network performance, and fine validators accordingly.
At the same time, it is possible that malicious validators will save incorrect data as their timestamp. In that case the delay will still be visible from the difference between the first and third consecutive timestamps (meaning the second is slow and malicious), but it will not be clear who exactly causes the delay for that particular block. On the other hand, the sequence of validators that sign blocks differs from block to block, so applying statistical methods of analysis to a large (>10,000) number of blocks over a validation cycle will allow us to identify the validators around which delays constantly occur.
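A rough sketch of that statistical analysis, assuming each block carries its signers' timestamps in signing order. The rule of charging each validator the gap it adds over the previous signer, the median-based outlier test, and the `factor` threshold are all my assumptions, not part of the proposal:

```python
from statistics import median

# Hypothetical sketch: for each block, the gap between a validator's signed
# timestamp and the previous signer's timestamp is charged to that validator.
# Over many blocks the signing order varies, so consistently slow validators
# stand out even if a few individual timestamps are forged.

def slow_validators(blocks, factor=3.0):
    """blocks: list of [(validator_id, timestamp), ...] per block, in signing
    order. Returns validators whose median added delay exceeds
    factor * the overall median delay."""
    gaps = {}
    for signatures in blocks:
        for (_, prev_ts), (v, ts) in zip(signatures, signatures[1:]):
            gaps.setdefault(v, []).append(ts - prev_ts)
    medians = {v: median(g) for v, g in gaps.items()}
    overall = median(medians.values())
    return sorted(v for v, m in medians.items() if m > factor * overall)
```

In a toy run where one validator always adds about 5 seconds while the others add about 0.2 seconds, only that validator is flagged, regardless of where it appears in the signing sequence.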


Here is my vision for solving the slashing problem.

#Attention Judges

  • Some clarifications of my proposal:

Part 1 : A timestamp should be attached to every action: proposal of a block, approval of a block, and signing of a block. (part of the TIP)

Signing blocks takes the most time and is ultimately the main cause of the network slowing down, so we should focus on that part.

While validators who propose, vote on, and approve more blocks can be rewarded separately, I don't believe these actions should be a reason for slashing validators who propose, vote, and approve less.

So either we implement the timestamp, or the other way is statistical analysis of blocks (as mentioned by AlexNew) over a long period of cycles to determine slow validators. But I would suggest going with the timestamp model.


Interesting considerations! Your proposal leads to the idea of recording some, or ultimately all, catchain events to the blockchain for later analysis. In your proposal, the timestamp is in essence the APPROVE(round, candidate) catchain event without the round information.


The contest is over, but what's next? Shouldn't we choose the best solution and implement it?


There will be an implementation contest, of course. A friendly reminder: the specification contest is not directly connected to implementation contests. It is meant to provide guidelines, not a directive.