
0.9 Process Documentation

This document describes version 0.9 of our procedure for conducting a Process Quality Review on a DeFi protocol’s deployed smart contracts and documentation. We also call the resulting reports Protocol Reviews.

By passing protocols through these quantitative tests, we create a simple quality score for the DeFi protocol being reviewed. This score indicates the overall quality of the development process and of the documents the development team created while building their protocol. The reader is encouraged to dig into the details of the review to see exactly how the score was generated. Every step is documented and available for critique, as DeFiSafety wants as much feedback on the process as we can get.

The basic premise of these reviews is that developers who follow, document and maintain a good software development process should have secure code and maintain a level of security that users can trust. For blockchain projects, a good process should include public and readily auditable documents, among other measures, all of which make the development process clear and transparent. These reviews focus on deployed and operating DeFi protocols because they bring in many new users investing significant sums, and those users must be able to trust the smart contracts they interface with.

Protocol Reviews are initially written without the developers’ support, using exclusively documents that are publicly available, such as the website, the software repository of the code (GitHub, etc.) and the contract address analytics available from the relevant blockchain explorer (Etherscan, BscScan, etc.). After the initial Process Quality Review is completed, it is presented to the developers of the reviewed protocol for correction or improvement. The results of these corrections will be clearly and publicly documented in a new version of the report. We do this because we want protocols to score highly, and their cooperation makes it clear that they do too. This is their opportunity to improve their score, and our opportunity to provide any help we can.

The initial author of this process has a long history in the avionics industry. Aerospace software development has always maintained that a rigorous and frequently audited software development process leads to safe and secure software that can be supported for many decades. The avionics process is DO-178C. It is significantly more rigorous than what our DeFi review process expects; however, its steps provide an overarching philosophy that has guided the specification of this review process.

For more detail, the following list presents the software and system requirements for aerospace code certification in an extremely simplified format. It makes a useful comparison.

1. All System Requirements documented in relevant and proper language.

2. There is strong and clear documented traceability from each system requirement to software requirements or low level requirements. There is traceability from requirements to code to test to test results.

3. All software requirements (or low-level requirements/comments) are met for each part of the software code.

4. There is documented traceability from the low level requirements to the software code they cover.

5. Every piece of software code is covered by both unit tests and system tests. If a unit test covers the requirements of the system test, the single test is sufficient.

6. There is documented traceability from the software to the tests.

7. There are frequent review meetings, held according to a documented process, to review every requirement and ensure that the software is in compliance.

8. There are documented software requirement reviews held according to a predetermined and documented process to ensure compliance with each software requirement.

9. There are test reviews held in compliance with a predetermined process for each test.

10. When there is a requirement to change software code that has already met its requirements, the code or test must change. After this, a change impact analysis is conducted, which reviews the changes and recommends which activities must be repeated on the requirements, software and tests.

11. During an audit, the auditor can choose any test and must be able to review traceability from the test through the code to the requirements, with reviews for each step in the process.

This report is for informational purposes only and does not constitute investment advice of any kind, nor does it constitute an offer to provide investment advisory or other services. Nothing in this report shall be considered a solicitation or offer to buy or sell any security, future, option or other financial instrument or to offer or provide any investment advice or service to any person in any jurisdiction. Nothing contained in this report constitutes investment advice or offers any opinion with respect to the suitability of any security, and the views expressed in this report should not be taken as advice to buy, sell or hold any security. The information in this report should not be relied upon for the purpose of investing. In preparing the information contained in this report, we have not taken into account the investment needs, objectives and financial circumstances of any particular investor. This information has no regard to the specific investment objectives, financial situation and particular needs of any specific recipient of this information and investments discussed may not be suitable for all investors.

Any views expressed in this report by us were prepared based upon the information available to us at the time such views were written. Changed or additional information could cause such views to change. All information is subject to possible correction. Information may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.

Very simply, the review looks for the following declarations from the developer's site. With these declarations, it is reasonable to trust the smart contracts.

  • Here are my smart contracts on the blockchain(s)
  • Here is the documentation that explains what my smart contracts do
  • Here are the tests I ran to verify my smart contracts
  • Here are all the security steps I took to safeguard these contracts
  • Here is an explanation of the control I have to change these smart contracts
  • Here is how these smart contracts get information from outside the blockchain (if applicable)

The process breaks the scores into the following sections:

  • Code and Team -- Deployed Code Verification
  • Code Documentation -- Software Documentation for the Deployed Code
  • Testing -- Overall testing measures for the Deployed Code
  • Code Security -- Review of the Software Security Audits and Bug Bounty
  • Access Controls -- Review of the public information about the ability and process to change the contracts
  • Oracles -- (if applicable) Explanation of the data sources used by the protocol and identification of potential vulnerabilities

Development teams, especially those in DeFi, often prefer a private repository to reduce the ability for an easy fork/copy of their development. We clearly understand the business incentive for this, but we cannot give scores for what we cannot see. Hence, the developers will be penalized for not having a publicly accessible repository.

This is due to the importance of public repositories in making the code easily readable and the testing records visible. Although we fully encourage developers to have a private repository alongside a public one, a public repository is the industry standard for protocols wishing to establish themselves as leaders in the DeFi space.

Audits are also of special importance. With a public repository, anyone can check the differences between the audited and deployed code because all the information is publicly available.

However, if there is no public repository, the audit is of lesser value because the code cannot be cross-referenced. If the audit took place on private code, 25% is deducted from the audit question score. This is because differences between deployed and audited code are too important to give points for.

Our reviews cover protocols on many blockchains and in many different coding languages; the process is applicable to almost all of them. See the website for all the chains on which we have reviewed protocols.

A review’s Final Score is indicated as a percentage. This percentage is calculated as total Achieved Points divided by the total Possible Points. For each question the answer can be either yes (Y), no (N), or a percentage (%). Each of these questions has a “Scoring Weight”, as some are more important than others. For example, “Question 1” is more important than “Question 15”, and therefore has a higher weight.

The individual question’s Achieved Points is the Scoring Weight times the answer (yes, no, %). The review’s Total Achieved Points is the sum of every question’s points. For our purposes, a passing score is one that receives 70% or more.
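To make the arithmetic concrete, here is a minimal sketch in TypeScript of how Achieved Points and the Final Score combine; the weights and answers used are hypothetical, not taken from a real review.

```typescript
// Minimal sketch of the scoring arithmetic. Question weights and
// answers here are hypothetical examples, not from a real review.
type Answer = "Y" | "N" | number; // number = percentage answer, e.g. 0.7 for 70%

interface Question {
  weight: number; // Scoring Weight
  answer: Answer;
}

function achievedPoints(q: Question): number {
  if (q.answer === "Y") return q.weight; // yes = full weight
  if (q.answer === "N") return 0;        // no = zero points
  return q.weight * q.answer;            // percentage answer
}

function finalScore(questions: Question[]): number {
  const possible = questions.reduce((sum, q) => sum + q.weight, 0);
  const achieved = questions.reduce((sum, q) => sum + achievedPoints(q), 0);
  return (achieved / possible) * 100; // Final Score as a percentage
}

// Hypothetical example; a passing score is 70% or more.
const score = finalScore([
  { weight: 20, answer: 0.7 }, // e.g. addresses found but take some looking
  { weight: 5, answer: "Y" },  // e.g. public repository exists
  { weight: 5, answer: "N" },  // e.g. team is anonymous
]);
console.log(`Final Score: ${score.toFixed(1)}%`); // 63.3%
```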

Please see this example of a scoring matrix for reference:

[Example scoring matrix table]

This section goes over each question used in this process.

Every Process Quality Review starts with the code that the protocol deploys, and how frequently users interact with it.

The score for this section is derived from the individual scores of the following questions:

  • 1) Are the smart contract addresses easy to find?
  • 2) Does the protocol have a public software repository?
  • 3) Is the team public (not anonymous)?
  • 4) How responsive are the devs when we present our initial report?

Scoring weight: 20

Summary:

Are the addresses of the deployed smart contract(s) easy to find in the project’s public documents?

This is a very important question for users and developers alike, as it affects the score for the audit question (Question 14). It is extremely rare for a protocol to get 0% on this question; when it does happen, it is a very easy fix for the developers of the protocol.

Key Specifications:

  • Essentially, all the contract addresses must be publicly visible in a protocol’s documentation. Not just the token address - DeFiSafety does not consider tokens in our analyses. Rather, the listed contract addresses should be the ones that enable the key offerings of your protocol. This could include Farms, Pools, Staking, Exchanges, and anything you could think of that is not a token.
  • Most importantly, the listed contract addresses of your key products should ideally be your implementation (logic) contracts that execute a majority of your transactions. A good example would be a “MasterChef.sol” contract.

Additional Specifications:

  • The addresses can be over multiple pages as long as one page has the latest addresses or links to their locations.
  • Each contract address must have the software visible on the relevant block explorer. If we cannot verify active addresses using the block explorer, we cannot give marks for them.

All of this is essential for a protocol to pass, because users must be able to quickly identify the contracts that they use (even if they do not understand Solidity). Being able to verify what the protocol does, and how, plays a key part in users being able to place their trust in that protocol.
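As a sketch of how this verification can be checked programmatically, a block explorer's contract API can be queried for verified source code. The example below assumes Etherscan's public getsourcecode endpoint; the address and API key are placeholders.

```typescript
// Sketch: check whether a contract's source is verified on Etherscan.
// The address and API key are placeholders; other explorers (BscScan,
// etc.) expose the same API shape on their own domains.
async function isSourceVerified(address: string, apiKey: string): Promise<boolean> {
  const url =
    `https://api.etherscan.io/api?module=contract&action=getsourcecode` +
    `&address=${address}&apikey=${apiKey}`;
  const res = await fetch(url);
  const data = await res.json();
  // Etherscan returns an empty SourceCode string for unverified contracts.
  return data.result?.[0]?.SourceCode !== "";
}

// Usage (hypothetical placeholder address):
// isSourceVerified("0x0000000000000000000000000000000000000000", "YourApiKey")
//   .then((ok) => console.log(ok ? "verified" : "not verified"));
```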

Scoring

Percentage Score Guidance:

  • 100% Clearly labelled and on website, documents or repository, quick to find
  • 70% Clearly labelled and on website, docs or repo but takes a bit of looking
  • 40% Addresses in mainnet.json, in Discord, a subgraph, etc.
  • 20% Address found but labelling not clear or easy to find
  • 0% Executing addresses could not be found

How to improve this score:

Make the addresses of the smart contracts utilized by your application available on either your website, Gitbook or GitHub (in the README for instance). Ensure the addresses are up to date, and that they can be verified using a block explorer. Your documentation should ideally comprise a section called “Smart Contracts”, “Smart Contract Addresses”, or “Deployed Addresses” in order to make them easy to find.

Scoring weight: 0 (This question is more for reference)

Summary:

For this metric, we look at the availability of the protocol’s software repository. A public software repository is essential for best practices, as it enables any user to go through it and read a specific smart contract’s code. This practice enables a lot of cross-referencing between the deployed smart contracts and the source code, and subsequently provides increased transparency to the space. It is also the best place to add tests for the code.

Key Specifications:

  • We define a public GitHub as a repository that contains all or most of the protocol’s executing smart contract code. If a protocol only has frontend code for their website, or only code forks that have not been elaborated upon, we do not consider this a public GitHub.

Scoring:

If a protocol’s GitHub is visible, even one made just for deployment, then this scores a “Yes”. Teams with no public software repository score a “No”.

As mentioned beforehand, this question is critical to our PQR process. Without a public software repository, a protocol is unlikely to score highly as this indicates low protocol transparency.

Score Guidance:

  • Yes - There is a public software repository with, at a minimum, the code, but normally also tests and scripts. This holds even if the repository was created just to hold the files and has only one commit.
  • No - For teams with private repositories.

How to improve this score

Ensure your contracts are available for viewing on a public software repository (like GitHub). The link to it can be from your protocol’s website or GitBook documentation. Any alternative public repository service is acceptable as well, provided it is identified.

Scoring weight: 5

Summary

While many leading protocols are led by anonymous development teams, developers willing to put their faces on things and break anonymity usually indicate a long-term commitment to a project / ecosystem, which in turn boosts its security. It is important to remember that all projects can suffer exploits, but what protocols do about these issues is key to investor security. Anonymous developers can easily disappear, whereas public ones will be held accountable.

Scoring

Full marks on this question require the protocol to name at least two individuals as working for it, and the named individuals to publicly confirm this information through LinkedIn, Twitter or a personal website. To get 50%, at least one public name should be attributed as working for the protocol anywhere on the internet. Finally, a protocol with no public members will not receive any marks on this question. The information sources we use for this question are documented in our Team Appendix at the end of our PQRs.

Percentage Score Guidance:

  • 100% At least two names can easily be found on the protocol's website, documentation or Medium. These are then confirmed by the personal websites of the individuals / their LinkedIn / Twitter.
  • 50% At least one public name can be found to be working on the protocol.
  • 0% No public team members could be found.

How to improve this score

Create a section of documentation linking to employees, founders or contributors in an easily accessible place such as a website, LinkedIn etc. To score fully, name the contributors to this protocol in your documentation and ensure that this corroborates information that can be found elsewhere on the internet (e.g. LinkedIn/ a personal website). Alternatively, software repository contributors can be public.

Scoring weight: 5

Summary

Except for protocols that are immutable and completely unguided (which are still very rare), you want a team that is responsive when users ask questions. We added this question in 0.9 to score how responsive the team is when we submit our initial report. We time how long it takes to receive a response.

Scoring

In our standard review process, we do the initial review without telling the protocol (on public data), and then once we have an initial report, we contact the devs of the protocol and present them the initial report for comments or corrections. We give them a short time (maybe a few weeks) to implement corrections before publishing the report. If the devs are slow to respond, but are very active in improving the report, we will also give 100%.

Percentage Score Guidance:

  • 100% Devs responded within 24 hours
  • 100% Devs slow but very active in improving the report
  • 75% Devs responded within 48 hours
  • 50% Devs responded within 72 hours
  • 50% Data not entered yet
  • 0% No dev response within 72 hours

How to improve this score

Answer the damn phone. Joking aside, having team members respond to posts in Discord and Telegram is all that is requested.

The documentation section analyses the quality of the protocol’s software documentation. For these metrics, we will be looking at the general accessibility of the documentation, as well as verifying if the functionalities of the software are explained or not. This iteration of the Process Audit standard requests only basic documentation, although increasingly detailed documentation will naturally score higher.

The score for this section is derived from the individual scores of the following questions:

  • 5) Is there a whitepaper? (Y/N)
  • 6) Is the protocol's software architecture documented? (%)
  • 7) Does the software documentation fully cover the deployed contracts' source code? (%)
  • 8) Is it possible to trace the documented software to its implementation in the protocol's source code? (%)
  • 9) Is the documentation organized to ensure information availability and clarity? (%)

Scoring weight: 5

A whitepaper is a dedicated technical document describing the protocol operation from a technical perspective. In previous versions of our review process, we accepted almost anything. This is no longer true. A basic description of the protocol purpose (without technical detail) on the webpage or gitbook is no longer acceptable. The whitepaper may be from a previous version of the protocol.

Score Guidance:

  • Yes - There is an actual whitepaper or at least a very detailed doc on the technical basis of the protocol.
  • No - No whitepaper. A simple GitBook description of the protocol is not sufficient.

How to improve this score

Ensure that your whitepaper is available for viewing from your website’s front page or from the GitBook. A whitepaper proves that your team understands the technology at a technical level.

Scoring weight: 5

This score requires a section of the documentation that specifically covers the protocol’s architecture. Architecture is a loose term that boils down to a section that details “software function used in contract (code) + what it does/how it does it”. In addition, this can also be presented as a diagram that may include the following:

  • Arrows indicating how the smart contracts interact with each other
  • Specific reference to the software functions themselves
  • A written explanation on how the smart contracts interact alongside the directional arrows.

Scoring

In order to receive full marks on this question, protocols should include either a description of smart contract architecture (code + how it works) or a diagram to go along with it. Ideally, both would be included in a protocol’s documentation.

Percentage Score Guidance:

  • 100% Detailed software architecture diagram with explanation
  • 75% Basic block diagram of software aspects or basic text architecture description
  • 0% No software architecture documentation

How to improve this score

Write this document based on the deployed code and how it operates. This document can be written after deployment, though as with all documentation earlier is better.

Scoring weight: 5

Summary

This score requires documentation specifically written about a protocol’s smart contract source code. As such, any generalized math formulas or state diagrams without directly referencing the code do not count towards this score.

Key Specifications:

  • Something we see quite a bit of is instructions on how to interact with the contract, how to deploy/build on it, etc. We will not consider this as smart contract source code documentation. On the other hand, we do factor API documentation into the overall score for this question. However, API documentation on its own weighs no more than 20%.

Scoring

In order to comply with our standards and meet a 100% score, the documentation (in either the website, GitBooks or the GitHub) must cover all of the developed source code. Covering public libraries is not needed. This question requires smart contract source code being identified, explained and located in order to earn full marks. This ensures protocol users know how the protocol works, and is a necessary step to explain what the code does.

Percentage Score Guidance:

  • 100% All contracts and functions documented
  • 80% Only the major functions documented
  • 79%-1% Estimate of the level of software documentation
  • 0% No software documentation

How to improve this score

This score can be improved by ensuring protocol documents fully and comprehensively cover everything deployed by the protocol. A good way to do this is to list literally every function of every smart contract that you have deployed, and to include a brief description of what each contract function does and how it does it.

Scoring weight: 5

Summary

Traceability means documented links between a smart contract’s software architecture, documentation, source code and tests. It can involve code snippets within the documents acting as a simple identifier (such as a shortened content hash) that connects them to the protocol’s source code. It could also be a direct link from code explanations in the documents to where the code lives in the GitHub repository. This matters because it lets users relate what the protocol documentation explains to its exact location in the source code, thereby promoting transparency through traceability.
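To illustrate, one possible shape for a single traceability link is sketched below; the field names and URLs are purely hypothetical, not a DeFiSafety format.

```typescript
// Illustrative only: one way to represent a traceability link from a
// documented requirement to its code and tests. All names and URLs
// here are hypothetical.
interface TraceabilityEntry {
  requirement: string; // what the documented behaviour is
  docUrl: string;      // where the behaviour is documented
  codeUrl: string;     // repository permalink, ideally pinned to a commit
  testUrl: string;     // the test (and ideally test result) covering it
}

const example: TraceabilityEntry = {
  requirement: "Deposits mint shares pro rata",
  docUrl: "https://docs.example-protocol.xyz/vault#deposit",
  codeUrl: "https://github.com/example/protocol/blob/<commit>/contracts/Vault.sol#L42",
  testUrl: "https://github.com/example/protocol/blob/<commit>/test/vault.test.ts",
};
```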

Scoring

In order to meet our standards, every piece of deployed code must have 100% traceability. For full marks, detailed requirements must trace to the code, to the tests and to the test results. To date a few protocols have real requirements, but no protocol we know of has full traceability, so 100% is a stretch goal; 90% will be more common. Commenting in code does not count toward traceability.

Percentage Score Guidance:

  • 100% Requirements with traceability to code and to tests (as in avionics DO-178C)
  • 90% Formal requirements with some traceability
  • 80% Good auto-generated documentation
  • 60% Clear association between code and documents via non-explicit traceability
  • 40% Documentation lists all the functions and describes their behaviour
  • 0% No connection between documentation and code

How to improve this score

This score can be improved by adding traceability from documents to code such that it is clear where each outlined code function in the documentation is coded in the protocol’s GitHub repository.

Scoring weight: 5

Summary

This question scores the general organization and clarity of the protocol’s software and traceability documentation. The intent is for the information to be well organized and clearly presented so that readers can quickly find the relevant information. If there is no software documentation, the score is 0%, even if the rest of the documentation is well organized.

Scoring

This is a qualitative question with relatively simple guidance. 100% is for information that is well organized, compartmentalized and easy to navigate, allowing the reader to quickly find the information they need. 50% indicates reasonably good organization that is clearly not ideal. 0% is where there is no documentation or the organization is distinctly lacking.

Percentage Score Guidance:

  • 100% Information is well organized, compartmentalized and easy to navigate
  • 50% information is decently organized but could use some streamlining
  • 50% Minimal documentation but well organized.
  • 0% Information is generally obfuscated

How to improve this score

Group similar information together. Use headings and links to allow the reader to quickly find and navigate to the information they need.

This section covers the testing of the protocol software. It does not cover testing of front-end or backend software; only on-chain software is considered.

Scoring weight: 30

Summary

Software tests are fundamental elements of software development. Unit tests are written on a file-by-file basis and are generally used for code coverage. System tests check functionality and test usage of the code. Testing is an integral part of both pre-deployment and post-deployment of a smart contract, as continuously testing your code ensures that there are no bugs or potential vulnerabilities in the software. In addition, testing ensures that in the event a bug is found it can be resolved right away, making testing an effective method of detection and resolution.

Key Specifications:

  • Do the tests allow for comprehensive testing of the code? Meaning, is every smart contract tested in itself, and is this clearly outlined through individual testing files in the GitHub repository?
  • Are there both system and unit tests? Unit testing verifies that the individual smart contracts work well. Equally important are system tests, which confirm that the contracts interact seamlessly with one another without any issues (a minimal test sketch follows this list). A complete testing suite includes both.
  • It is better to test after deployment than never test at all. Testing is an integral part of evaluating the safety of a DeFi protocol. It is better to have some than none at all, even if the suite only gets developed post-deployment.
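To make the unit/system distinction concrete, here is a minimal sketch assuming a Hardhat + ethers v6 + Chai toolchain; the Vault and Strategy contracts and their functions are hypothetical.

```typescript
// Sketch of a unit test and a system test, assuming a Hardhat/ethers v6
// toolchain. The Vault and Strategy contracts are hypothetical.
import { expect } from "chai";
import { ethers } from "hardhat";

describe("Vault (unit)", () => {
  it("mints shares 1:1 on the first deposit", async () => {
    const [user] = await ethers.getSigners();
    const vault = await ethers.deployContract("Vault");
    await vault.connect(user).deposit({ value: ethers.parseEther("1") });
    // Unit test: one contract exercised in isolation.
    expect(await vault.shares(user.address)).to.equal(ethers.parseEther("1"));
  });
});

describe("Vault + Strategy (system)", () => {
  it("routes deposited funds into the strategy", async () => {
    const strategy = await ethers.deployContract("Strategy");
    const vault = await ethers.deployContract("Vault", [strategy.target]);
    await vault.deposit({ value: ethers.parseEther("1") });
    // System test: two contracts interacting, not one in isolation.
    expect(await strategy.totalAssets()).to.equal(ethers.parseEther("1"));
  });
});
```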

Scoring

For our purposes, this score is guided by the Test-to-Code ratio (TtC), which is calculated by dividing the lines of code in the protocol’s testing suite by the lines of code in the deployed smart contracts. Generally a good TtC ratio is over 100%, which means that approximately every line of deployed code has undergone some form of testing, which makes for better code. An ideal TtC would be over 120%, as this indicates a rigorous testing process. However, the reviewer’s best judgement is the final deciding factor.
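A minimal sketch of the TtC calculation, with hypothetical line counts:

```typescript
// Minimal sketch of the Test-to-Code (TtC) calculation: test-suite
// lines divided by deployed-contract lines, as a percentage.
function testToCodeRatio(testLines: number, contractLines: number): number {
  return (testLines / contractLines) * 100;
}

// Hypothetical counts: 6,000 lines of tests over 5,000 lines of contracts.
const ttc = testToCodeRatio(6_000, 5_000);
console.log(`TtC = ${ttc}%`); // 120% -> the top scoring band below
```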

Percentage Score Guidance:

  • 100% TtC > 120% Both unit and system test visible
  • 80% TtC > 80% Both unit and system test visible
  • 40% TtC < 80% Some tests visible
  • 0% No tests obvious

How to improve this score

This score can be improved by adding tests to fully cover the code. Document what is covered by traceability or test results in the software repository. Ideally, you should have a test file per smart contract deployed on the blockchain. In addition, providing scripts to run your tests is an essential part of testing transparency, and no testing suite should come without them.

Scoring weight: 10

Summary

Here we consider whether the unit tests fully cover the code. Ideally, a protocol should generate a report of a code coverage run and post the results in its GitHub repository. All projects should aim for 100% code coverage, as this ensures that every line of code was checked for quality. Without a report, the author determines a percentage based on the testing suite’s score, artifacts in the test scripts and qualitative estimation. If there are misses in the coverage report, the documentation should explain why the specific code was not tested. The amount of uncovered code should be as small as possible, with a clear justification for why it is uncovered.

Key Specifications:

  • A code coverage test can be performed by the protocol itself, a third-party auditor, or a code coverage service such as Coveralls.
  • Ideally, a coverage test should include the coverage of both lines of code and branches in the GitHub repository.

Scoring

The score for this question will reflect the exact percentage of the code coverage results. However, in the event that a protocol does not have a code coverage report, we will still give a maximum of 50% depending on the score given to the testing suite in the previous question. This is because testing is essentially code coverage, but a code coverage report is visual proof of its depth.

Percentage Score Guidance:

  • 100% Documented full coverage
  • 99% - 51% Value of test coverage from documented results
  • 50% No indication of code coverage but clearly there is a complete set of tests
  • 30% Some tests evident but not complete
  • 0% No test for coverage seen

How to improve this score

This score can be improved by performing code coverage tests that get as close to 100% coverage as possible. In the event that some lines of code or entire contracts are missed, you should clearly outline why this is the case in your coverage report. You should also aim to perform code coverage tests upon every deployment. This proves that the code is rigorously tested, and therefore has a degree of reliability attributed to it. Integrate your code coverage results into your GitHub repository README, as Tetu protocol has done here.

Scoring weight: 10

Summary

A test report is simply a document indicating that the full test suite was run on the deployed software successfully. The document should clearly indicate each test and the test result (presumably passes). There should also be an indication of code coverage on the same code base. Ideally there is a test report in the repository of the deployed code that was generated by running the full test suite. This is as straightforward as providing the test report created by running the coverage in the protocol’s test environment. A complete test report produced by the protocol’s developers can also be used as a testing guide and visualizer for users who wish to execute those same tests. Therefore, adding reports like these provides a great deal of traceability. An ideal report would include a complete visual breakdown of a test run performed by the protocol, as well as explanations of the general testing methodology used.

Key Specifications:

  • The important distinction between a coverage test report and a coverage test output is that a coverage output is almost exclusively software functions and percentages in a table format, while a coverage test report would include additional explanations that provide in-depth information about the methodology used. A report like this also leaves more room for detailing exactly why a line of code was not covered, why there was a miss, errors, etc.

Scoring

For the scoring of this specific metric, we give full points for test reports that are detailed with the help of methodology explanations. We give fewer points for a report that only contains a coverage test output with the functions and percentages.

Percentage Score Guidance:

  • 100% Detailed test report
  • 70% GitHub code coverage report visible
  • 0% No test report evident

How to improve this score

Add a code coverage test report with the results. This should not only be a code coverage output, but rather a combination of your coverage output and a deeper insight on your methodology used. An exemplary test report from Balancer Finance can be found here.

Scoring weight: 5

Summary

Formal Verification is a software validation technique well suited to most blockchain software in the DeFi space. More specifically, Formal Verification tests a protocol’s software algorithms with formal mathematical methods in order to prove that they work correctly. This testing can be tailored to any specification or property, which is why it is a very useful tool. Nevertheless, its accessibility is limited and few protocols have undergone Formal Verification. For this reason, the weight is lower than other elements in our scoring matrix.

Key Specifications:

  • Although some auditors use formal methods in their audits, this is not what we consider full-fledged Formal Verification. To get a good representation of what it actually looks like, established formal verifiers include Certora and Runtime Verification.

Scoring

This is a simple Yes/No question regarding the availability of a Formal Verification test report of the protocol being reviewed. If there is no such report, this metric receives a “No”. If a report is provided, this metric receives a “Yes”.

Score Guidance:

  • Yes - Formal Verification was performed and the report is readily available
  • No - Formal Verification was not performed and/or the report is not readily available.

How to improve this score

Undergo a Formal Verification for your protocol’s algorithms, and use the services provided by the reputable formal verifiers in the space such as Certora and Runtime Verification. Although this can become expensive, it is an essential part of legitimizing your software’s integrity.

This section looks at the security of a protocol through 3rd party software audits, bug bounties and protocol monitoring (both onchain and front end). Security is the most important section in this review, as it is essential to uphold robust safety measures in the DeFi space. Therefore, the questions in this section weigh the most out of all the others, making them a key point of our focus in intricately analysing and commenting on them. As such, audits and bug bounties are metrics that we use to evaluate how much a protocol values smart contract safety.

Protocol monitoring has become vital for a DeFi protocol in the past couple of years. On-chain monitoring allows detection of suspect activity and allows the protocol to act quickly. Front-end monitoring is also necessary: several websites (Curve, for example) have suffered various types of front-end attacks. For these reasons we have added questions on protocol monitoring.

The score for this section is derived from the individual scores of the following questions:

  • 14) Is the protocol sufficiently audited? (%)
  • 15) Is there a matrix of audit applicability on deployed code (%)?
  • 16) Is the bounty value acceptably high (%)
  • 17) Is there documented protocol monitoring (%)?
  • 18) Is there documented web site front end monitoring (%)?

Scoring weight: 65

Summary

Smart contract audits are an indicator of code quality that is adherent to best practices in DeFi. Protocol audits occur when a third-party blockchain software security organization reviews a specific protocol’s smart contract code. Specifically, auditors conduct tests to look for overall quality in the software architecture in addition to the subtle blockchain-specific weaknesses that could be utilized by an attacker. Audits are one of the pillars of smart contract security in the DeFi space, and that is why it is the most important metric within DeFiSafety reviews.

Key Specifications:

  • A protocol that has undergone an audit is automatically perceived as more legitimate in the DeFi space. This is because most audits go through a protocol’s software with a fine-tooth comb, which helps prevent vulnerabilities to some degree.
  • An audit typically weighs more if the report has been published before one of a protocol’s mainnet deployments. Equally so, having multiple audits performed before deployment is the best-case scenario. This assures users that the protocol’s team have a vested interest in making sure that their deployed smart contracts are safe, and that their users know it.
  • A public audit report is a necessity. A protocol that claims to be audited without having a public audit report can effectively be considered as unaudited. In the DeFi space, transparency is everything, and having a private audit report is non-transparent.
  • An audit report usually generates a good amount of feedback from the auditors for the developers of the protocol to implement. These can range from informational issues that are typically structure related, to critical issues that usually underline an important vulnerability in the code. It is of utmost importance that the developers of the team being audited implement these recommendations for the sake of safe code and best practices.

Scoring

If the smart contract addresses on the mainnet are not found or if the addresses are found but the code is hidden, the audit results will be zero. This is because even if an audit is available, there is no ability to verify that the code deployed is the same code that has been audited. This is especially important for new versions of protocols as new code being shipped for a V2 / V3 etc. is now no longer covered by the previous audit.

If the quality of the report does not reflect a proactive audit done on the code, then the authors reserve the right to reduce the score to as low as 0%. This attempts to cover valueless documents that say “Audit” and “PASS” but are not real audits, as no oversight is provided. Some audits in DeFi simply act as rubber stamps, so it is important to verify their methods.

The authors also reserve the right to reduce audit scores if they do not cover some economic issues. A smart contract audit that covers solidity well but ignores financial risks specific to DeFi has limited value (but not zero). Financial risk in DeFi is significant, making this audit question of massive importance.

With a public repository, anyone can check the differences between the audited and deployed code because all the information is publicly available. If the audit took place on code that cannot be seen publicly, then 25% is deducted from the usual score. This has the effect of making a 100% become 75%, and so forth. If the auditing firm explicitly indicates that their audit report is relevant to the deployed code, then full marks are regained.

This is the most important question in the process, as good audit practices almost always indicate good protocol development practices.

Percentage Score Guidance:

  • 100% Multiple Audits performed before deployment and the audit findings are public and implemented or not required
  • 90% Single audit performed before deployment and audit findings are public and implemented or not required
  • 70% Audit(s) performed after deployment and no changes required. The Audit report is public.
  • 65% Code is forked from an already audited protocol and a changelog is provided explaining why forked code was used and what changes were made. This changelog must justify why the changes made do not affect the audit.
  • 50% Audit(s) performed after deployment and changes are needed but not implemented.
  • 30% Audit(s) performed are low-quality and do not indicate proper due diligence.
  • 20% No audit performed
  • 0% Audit performed after deployment, existence is public, report is not public OR smart contract addresses not found.

Deduct 25% if the audited code is not available for comparison.

How to improve this score

Your score for this question can improve by having your future deployments rigorously audited before deployment. In order to achieve full marks, you must have had your smart contracts audited twice before being deployed. However, even having just one audit performed before deployment gives you a great score for this metric. Another point you can seek to improve on is implementing the recommendations brought to you by the audit. We read every audit thoroughly, and not implementing important fixes will definitely affect your score negatively. Finally, having one audit published either before or after a deployment is infinitely better than having none at all. Even if you have just one audit, you’re on the right track. Just make sure that it is traceable to your own code, and that the audited smart contracts have their addresses publicly available.

Scoring weight: 5

Summary

A matrix of audit applicability is simply a document that indicates which audits are directly applicable to the released code. This has become more important as updateable protocols often have a long list of audits that have been performed, and it is very difficult for an outsider to determine which audits are applicable and whether the code has been audited. Please see the example doc for reference. This format is ideal, but we are very flexible as long as the required information is presented.

Scoring

Percentage Score Guidance:

  • 100% Current and clear matrix of applicability
  • 100% 4 or fewer clearly relevant audits
  • 50% Out of date matrix of applicability
  • 0% No matrix of applicability

How to improve this score

If you have many audits, make sure people can see which audits are applicable to the deployed code. As the Euler hack clearly showed, this can become valuable if things go south.

Scoring weight: 10

Summary

This section checks the value of the bug bounty, if one exists. A Bug Bounty program is another fundamental element of smart contract security. When a bounty offers a substantial reward, the incentive to outsource the search for bugs to the community becomes increasingly apparent. More effort in bug searches naturally leads to a higher chance of finding a critical issue in the protocol’s backend. Moreover, a high Bug Bounty reward not only signifies that the development team cares about finding bugs, it also signals confidence that the code has been battle-tested.

Bug bounty rewards scaled to a high fraction of TVL are specifically designed to incentivize black hat hackers to take the white hat route, as there is a legal route to significant revenue. Our scoring metric reflects this.

Key Specifications:

  • We do consider Code Arena competitions as being a Bug Bounty program. Although the concepts between the two are slightly different, the fundamental goal of finding bugs for a cash reward is the same.

Scoring

First, a score of 0% is given if there is no Bug Bounty program for the protocol being reviewed or if the bug bounty is no longer available. Based on discussions with Immunefi we have given a priority towards active and very rewarding Bug Bounty programs as these improve the general incentives for safety in DeFi.

An active program means a third party (such as Immunefi or CodeArena) is actively driving hackers to the site. An inactive program would be a static mention of a Bug Bounty on the documents. A dead program is one that was offered in the past, but has since expired.

Percentage Score Guidance:

  • 100% Bounty is 10% of TVL or at least $1M AND an active program (see above)
  • 90% Bounty is 5% of TVL or at least $500k AND an active program
  • 80% Bounty is 5% of TVL or at least $500k
  • 70% Bounty is $100k or over AND an active program
  • 60% Bounty is $100k or over
  • 50% Bounty is $50k or over AND an active program
  • 40% Bounty is $50k or over
  • 20% Bug bounty is less than $50k
  • 0% No bug bounty program offered / the bug bounty program is dead
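One way to read these tiers as code is sketched below; the thresholds come from the list above, but the function itself is illustrative, not our actual scoring tool.

```typescript
// Sketch of the bounty tier logic above. Thresholds follow the guidance
// list; the function shape itself is illustrative only.
type ProgramStatus = "active" | "inactive" | "dead" | "none";

function bountyScore(bountyUsd: number, tvlUsd: number, status: ProgramStatus): number {
  if (status === "none" || status === "dead") return 0; // no program, or a dead program
  const active = status === "active";
  if (active && (bountyUsd >= 0.10 * tvlUsd || bountyUsd >= 1_000_000)) return 100;
  if (bountyUsd >= 0.05 * tvlUsd || bountyUsd >= 500_000) return active ? 90 : 80;
  if (bountyUsd >= 100_000) return active ? 70 : 60;
  if (bountyUsd >= 50_000) return active ? 50 : 40;
  return 20; // bounty under $50k
}

// Hypothetical example: a $600k bounty, $50M TVL, actively driven -> 90.
console.log(bountyScore(600_000, 50_000_000, "active"));
```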

How to improve this score

The whole idea of a Bug Bounty program is to increase the number of eyes continuously checking your source code for bugs. How do you improve that? By improving the monetary incentives of your program. The more incentive you offer community members to find errors in your code, the more secure your code will be in the long run. The other side of the coin is that posting a million-dollar bounty will naturally make you more inclined to be absolutely certain that your code does not have any flaws in it. Therefore, Bug Bounties are an incentive for both the users and the developers.

Scoring weight: 5

Summary

Protocol monitoring has become more common and more valuable as a means to detect and react to DeFi security attacks. The intent is for bots to monitor on-chain activity in order to detect or anticipate security risks. We feel that in 2023 and beyond, protocol monitoring is a must for a successful DeFi protocol.

This is a new question. For new questions we generally keep the requirements open and the weight of the question relatively low. Our requirements cover two elements: active monitoring and incident response. Active monitoring involves automated bots that regularly monitor activity on the blockchain. Generally, this activity would be specific to the protocol and ideally designed to detect the start of a security incident.

For incident response, we are looking for documentation indicating that there is a response plan if the blockchain monitoring detects bad activity. It is up to the protocol to define the criteria justifying an incident response. Internally, their documentation should cover the team, their execution plan and their publication plan. Publicly, the documentation should indicate the existence of such a plan and present only information which is acceptable in a public forum.
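As a rough illustration of active monitoring, here is a minimal bot sketch assuming ethers v6; the RPC endpoint, contract address, event and threshold are all hypothetical placeholders.

```typescript
// Minimal sketch of an on-chain monitoring bot using ethers v6.
// The RPC URL, vault address, event and threshold are hypothetical.
import { ethers } from "ethers";

const provider = new ethers.WebSocketProvider("wss://rpc.example.xyz");
const vault = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // placeholder protocol contract
  ["event Withdraw(address indexed user, uint256 amount)"],
  provider
);

const THRESHOLD = ethers.parseEther("1000");

// Flag unusually large withdrawals as the possible start of an incident.
vault.on("Withdraw", (user: string, amount: bigint) => {
  if (amount >= THRESHOLD) {
    console.warn(`ALERT: large withdrawal of ${ethers.formatEther(amount)} by ${user}`);
    // A real bot would page the team here and trigger the incident response plan.
  }
});
```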

Scoring

Percentage Score Guidance:

  • 80% Documentation covering protocol specific threat monitoring
  • 60% Documentation covering generic threat monitoring
  • 40% Documentation covering operational monitoring
  • 0% No on-chain monitoring

Add 20% for a documented incident response process

How to improve this score

Add and document protocol monitoring that is specific to the protocol and attempts to detect suspicious transactions. Add an incident response description. Users should know that the protocol will react quickly and aggressively to limit the losses of any attack.

Scoring weight: 5

Summary

Attacks on the front end of Web3 websites have become common over the past two years. This means that DeFi protocols must actively protect their websites in order to protect their users’ funds. This type of protection can be purchased from many Web2 cybersecurity providers. As this is website protection, no special blockchain experience is required for the services.

We look for denial-of-service protection, domain name system (DNS) protection, and detection of intrusions or unwanted modifications on the front end. We don’t need details; we just need a documented indication that this protection has been implemented. If no specifics can be indicated, a generic statement indicating that the team protects its website from security risks will allow a 60% score.

Scoring

Percentage Score Guidance:

25% for each of the elements below that is documented. The documentation does need to be specific about which protections are in place.

  • DDOS Protection
  • DNS steps to protect the domain
  • Intrusion detection protection on the front end
  • Unwanted front-end modification detection

Or

60% for a generic website protection statement

How to improve this score

Implement each level of protection and document its existence.

We have updated the admin controls section significantly based on what we learned doing reviews for 0.8. The questions are clearer and more focused. We removed the timelock and pause questions: we found timelocks very rarely utilized, and it is difficult to prove that pause control is always beneficial, as it depends on the implementation. For these reasons we removed those questions.

Scoring weight: 25

Summary

It is still true: upgradeability is a bug. Some of the safest protocols are immutable. The ability to upgrade a contract adds significant risk. We determine immutability by inspecting the code. If the code is upgradeable, then extra points are gained with a timelock, roles and a multisig. An EOA-upgradeable contract gets no score. This question has a high weighted value of 25. However, in discussion with Ernesto (senior dev at AAVE), he referenced the weakness Balancer had, where AAVE updated their code to protect investors because Balancer (being immutable) could not. The balance seems to be leaning toward updateable being preferable.
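As one example of inspecting the code for upgradeability, a reviewer can read the EIP-1967 implementation slot used by common proxy patterns. This sketch assumes ethers v6 with a placeholder RPC URL, and it only catches EIP-1967-style proxies; other upgrade patterns still need manual inspection.

```typescript
// Sketch: one signal of upgradeability is the EIP-1967 implementation
// slot used by common proxy patterns. A non-zero value means the
// contract is a proxy and therefore likely upgradeable. The RPC URL is
// a placeholder.
import { ethers } from "ethers";

const EIP1967_IMPL_SLOT =
  "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc";

async function looksUpgradeable(address: string): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider("https://rpc.example.xyz");
  const raw = await provider.getStorage(address, EIP1967_IMPL_SLOT);
  return raw !== ethers.ZeroHash; // zero slot -> not an EIP-1967 proxy
}
```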

Scoring

Percentage Score Guidance:

  • 100% Fully Immutable
  • 90% Updateable via Governance with a timelock >= 5 days
  • 80% Updateable with Timelock > 5 days
  • 70% Updateable via Governance
  • 50% Updateable code with Roles
  • 40% Updateable code MultiSig
  • 0% Updateable code via EOA

How to improve this score

Once your code is mature, convert to immutable.

Scoring weight: 15

Summary

Question 19 is answered entirely by the code; this question is answered entirely in the documentation. Even if you have immutable code, it is important that this is described clearly. DeFi users, the ones who are not developers, need to understand in simple English the upgrade capability (or lack thereof) of your protocol. This question focuses on documentation of the code upgradeability situation. In other words, admin changes such as fees or other elements are not considered in this question; they are the subject of the next question.

Scoring

Percentage Score Guidance:

  • 100% Code is Immutable and clearly indicated so in documentation
  • 100% Code is upgradeable and clearly explained in non technical terms
  • 50% Code is upgradeable with minimal explanation
  • 50% Code is immutable but this is not mentioned clearly in the documentation
  • 0% No documentation on code upgradeability

How to improve this score

Clearly provide the required information about upgradeability or immutability in the documentation. It is that simple.

Scoring weight: 10

Summary

The previous question was about upgradability of the deployed code. This question is about all of the other aspects that can be updated in a protocol. If there are no aspects that can be updated, you will get a 100%. Immutability is wonderful, though this must be clearly indicated. Admin control through governance also gets 100%, if clearly explained.

If there are various coefficients, pools, fees or other aspects that administrators can update, this question looks for information on that. Indicate how many addresses can perform updates and which addresses can perform which updates. Clearly explain what effect the updates will have on the deployed code. This should be in language a financial investor can understand.

Scoring

Percentage Score Guidance:

  • 100% If immutable code and no changes possible, no admins required
  • 100% Admin addresses, roles and capabilities clearly explained
  • 100% Admin control is through Governance and process clearly explained
  • 80% Admin addresses, roles and capabilities incompletely explained but good content
  • 40% Admin addresses, roles and capabilities minimally explained, information scattered
  • 0% No information on admin addresses, roles and capabilities

How to improve this score

Clearly indicate the required information in simple English.

Scoring weight: 5

Summary

The previous two questions described the addresses and capabilities that can change the Defi protocol. This question looks for the signers that can execute those changes. At the minimum, there should be a list of addresses of the signers for each MultiSig. If there is no MultiSig, this should be listed clearly also.

Finally, the listed addresses should be provably distinct humans. The goal here is to mitigate the risk that all signers of a MultiSig are in fact the same person. There are several ways to prove this. The first is to list the real people’s names. We understand that some people would prefer to remain anonymous on a high-value MultiSig; there are, however, methods to prove that the signers are independent, distinct humans while still retaining their anonymity. Gitcoin Passport allows a proof of humanity to be attached to an address without doxxing. DeFiSafety also has a MultiSig certification process: each signer attends a simultaneous video call and, while the other signers watch, each individual signer sends a transaction to the MultiSig. DeFiSafety will then assert that the MultiSig signers are all distinct people. The people do not require KYC for this process.

Scoring

Percentage Score Guidance:

  • 100% All signers of the admin addresses are clearly listed and provably distinct humans
  • 100% If immutable code and no changes possible, therefore no admins
  • 100% Admin control is through Governance thus no signers
  • 60% All signers of the admin addresses are clearly listed
  • 30% Some signers of the admin addresses are listed
  • 0% No documentation on the admin addresses

How to improve this score

Provide a list in a clear, understandable manner and use an accepted method to prove the signers are distinct humans.

Scoring weight: 5

Summary

Too many losses are attributable to lost keys. Everyone knows this is solvable, yet in the postmortems of hacks, the reasons for lost keys are never discussed. This means we do not have the data to improve. With this question we are adding the concept of a transaction signing policy: a document of best practices for signing the DeFi protocol’s transactions. This is not a requirement where all admin control is through governance voting, where the number of voters (hopefully) provides equivalent protection.

We want each protocol to publicly write down the process their signers use to sign transactions that change DeFi protocols. Once a process is in writing, when a key is compromised, the question becomes how? What went wrong? How can the process be improved so this does not happen again? Put this information in the postmortem (sanitized for privacy) and everyone knows how they can improve and all DeFi improves.

For DeFiSafety review scoring, only the wallets that affect the DeFi protocol need to follow this process (changing the code or coefficients). These are wallets that affect other people’s money. So DAO transactions are not required.

Any policy is better than no policy, so our scoring reflects this. Auditing the process, so that DAO members are convinced the process is being followed, adds to the score.

Scoring

Percentage Score Guidance:

  • 100% If immutable and no changes possible
  • 100% If admin control is fully via governance
  • 80% Robust transaction signing process (7 or more elements)
  • 70% Adequate transaction signing process (5 or more elements)
  • 60% Weak transaction signing process (3 or more elements)
  • 0% No transaction signing process evident

Evidence of audits of the signers validating the process adds 20%

How to improve this score

Add a transaction signing policy. First, this is good transparency. While it may be obvious to some, it gives confidence to others. It is a small step towards fixing the lost-keys problem.

Scoring weight: 0

Summary

Some protocols don’t use oracles. Previously, this distorted the score, as we had to give 100% in order not to penalize the protocol. This question fixes that problem. If the answer is No, the Oracle questions are ignored and do not affect the score. If the answer is Yes, the questions are used in the score as normal. There is nothing wrong with not having oracles; for some protocols it makes perfect sense.

Scoring

This question does not affect the score.

  • Yes - The protocol uses Oracles and the next two questions are relevant
  • No - The protocol does not use Oracles, and the Oracle questions will not be answered or used in the final score for this protocol

Scoring weight: 10

Summary

The specific Oracle provider that a protocol uses for price data should be identified. This will inform users as to the origin and reliability of the data which protocols’ smart contracts depend upon. More specifically, the documentation should go over the different components of an Oracle including software function documentation (if the protocol provides its own price data source), Oracle source, the contracts that are using Oracles, and the timeframe between each price feed. This information is very important as Oracles are critical to the function of many protocols, and being more transparent about this data will lead to increasingly robust Oracle practices.

Key Specifications:

  • By Oracle source, we essentially mean the provider. This could be anyone from Chainlink to Tellor, MakerDAO, Compound, Band Protocol, Augur, etc. Every provider has their own philosophy, methods, and backend. Protocols identifying their Oracle provider goes a long way into users being able to do their due diligence, and therefore promotes overall transparency.
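As an illustration of the kind of information we want documented (source and timeframe), here is a sketch that reads a Chainlink aggregator with ethers v6; the RPC URL and feed address are placeholders.

```typescript
// Sketch: reading a Chainlink price feed with ethers v6 to surface the
// information a protocol should document, i.e. the source (aggregator
// address) and the timeframe (updatedAt). RPC URL and feed address are
// placeholders.
import { ethers } from "ethers";

const AGGREGATOR_V3_ABI = [
  "function latestRoundData() view returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound)",
  "function decimals() view returns (uint8)",
];

async function readFeed(feedAddress: string): Promise<void> {
  const provider = new ethers.JsonRpcProvider("https://rpc.example.xyz");
  const feed = new ethers.Contract(feedAddress, AGGREGATOR_V3_ABI, provider);
  const [, answer, , updatedAt] = await feed.latestRoundData();
  const decimals: bigint = await feed.decimals();
  console.log(`price: ${ethers.formatUnits(answer, decimals)}`);
  console.log(`last update: ${new Date(Number(updatedAt) * 1000).toISOString()}`);
}
```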

Scoring

When scoring this metric, we look specifically at how much information is available on a protocol’s Oracle. In order to get full points for this question, a protocol must identify its various components. These are: source, contracts, price feed timeframe, and basic software functionality (if they provide their own price data source). In the event that the protocol has no need for price feeds, they do not have a use for an Oracle. In this scenario, the protocol must adequately explain why they have no use for an Oracle in order to get full points for this question.

Percentage Score Guidance:

  • 100% The Oracle is specified. The contracts dependent on the oracle are identified. Basic software functions are identified (if the protocol provides its own price feed data). Timeframe of price feeds are identified.
  • 75% The Oracle documentation identifies both source and timeframe but does not provide additional context regarding smart contracts.
  • 50% Only the Oracle source is identified.
  • 0% No oracle is named / no oracle information is documented.

How to improve this score

Include a section within the protocol documentation explaining the oracle it employs. If no oracle is used, state this. PancakeSwap’s documentation identifies the oracles that it uses and justifies why it chose them; protocols might consider using this documentation as a starting point. On the other hand, a protocol may simply not need a data source and, should this be explained, full marks will be awarded.

Scoring weight: 10

Summary

Flash loans are a fundamental part of DeFi, originally brought in by Aave. However, flash loan attacks are significant threats to DeFi protocols, and have crippled a number of them. This is due to flash loans being used as a means to manipulate the liquidity of certain token pairs in a protocol’s vaults, and subsequently affecting the accuracy of the prices reported by their Oracles. Protocols should thus implement measures to mitigate them, as this is very important for their users’ fund protection. A protocol that documents this kind of information is inherently making itself more trustworthy to investors.

Key Specifications:

  • Although flash loan exploits cannot currently be completely blocked, there are certainly methods that protocols can use to mitigate their extent. For instance, using multiple Oracle sources can prevent the reporting of inaccurate or very volatile price changes. This can be done by using a Chainlink Oracle that uses multiple data nodes in combination with a TWAP Oracle such as Uniswap’s. Other ways of preventing large-scale flash loan exploits include robust tokenomics such as minting caps, withdrawal caps, and more.
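As a rough illustration of the multiple-source idea from the list above, the sketch below compares two price sources and rejects outliers; the threshold and function shape are hypothetical, and a real mitigation would live on-chain rather than in off-chain TypeScript.

```typescript
// Illustrative sketch of one mitigation from the list above: accept a
// price only when two independent sources (e.g. a Chainlink feed and a
// TWAP) agree within a tolerance. The 2% threshold is hypothetical.
function sanePrice(chainlinkPrice: number, twapPrice: number, maxDeviation = 0.02): number {
  const deviation = Math.abs(chainlinkPrice - twapPrice) / twapPrice;
  if (deviation > maxDeviation) {
    // A flash-loan-driven spot move shows up here as a large gap between
    // the manipulated pool price and the external feed.
    throw new Error(`price deviation ${(deviation * 100).toFixed(2)}% exceeds limit`);
  }
  return chainlinkPrice;
}

// sanePrice(2000, 1995) -> 2000; sanePrice(2000, 1500) -> throws
```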

Scoring

We will score this question based on how much of the flash loan mitigation information is available in the protocol’s documentation. Countermeasures should be documented and explained so that users can understand what steps have to be taken to mitigate the risks of a potential flashloan attack. This is a simple “Yes” or “No” question.

  • Yes - The protocol’s documentation includes information on how they mitigate the possibilities and extent of flash loan attacks.
  • No - The protocol’s documentation does not include any information regarding the mitigation of flash loan attacks.

How to improve this score

Identify in your documentation what countermeasures the protocol employs to prevent a flash loan attack. An example of how this might be prevented can be found here, and it should be documented in a similar way.