A Deep Dive Understanding of Smart Contract Vulnerabilities — Part 2

TutorialBoy
Mar 30, 2024

Logic bugs in lending protocols

When considering how lending and borrowing-based DeFi protocols can break, it’s helpful to think about how bugs propagate at the software level and affect the business logic level. There are a lot of steps to forming and closing a bond contract. Here are some attack vectors to consider.

Ways lenders lose out

  • Bugs that enable the principal to reduce (possibly to zero) without making any payments.
  • The borrower’s collateral cannot be liquidated when the loan is not paid back or the collateral drops below the liquidation threshold.
  • If the protocol has a mechanism for transferring debt ownership, this could be a vector for stealing bonds from lenders.
  • The due date of the loan principal or payments is improperly moved to a later date.

Ways borrowers lose out

  • A bug where paying back the principal does not lead to principal reduction.
  • A bug or griefing attack prevents the user from making payment.
  • The principal or interest rate is illegitimately increased.
  • Oracle manipulation leads to devaluing the collateral.
  • The due date of the loan principal or payments is improperly moved to an earlier date.

If collateral is drained from the protocol, then both the lender and borrower lose out, since the borrower has no incentive to pay back the loan, and the lender loses the principal.

As can be seen above, there are a lot more levels to a DeFi protocol getting “hacked” than a bunch of money being drained from the protocol (the kind of events that usually make the news).

Vulnerabilities in Staking Protocols

The kind of hacks that make the news are staking protocols getting hacked for millions of dollars, but that is not the only issue to look for.

  • Can rewards be delayed in payout, or claimed too early?
  • Can rewards be improperly reduced or increased? In the worst case, can the user be prevented from receiving any reward?
  • Can people claim principal or rewards that don’t belong to them, in the worst case draining the protocol?
  • Can deposited assets get stuck in the protocol (partially or fully) or be improperly delayed in withdrawal?
  • Conversely, if staking requires a time commitment, can users withdraw before the commitment time?
  • If the payout is in a different asset or currency, can the value of it be manipulated within the scope of the smart contract in question? This is relevant if the protocol mints its tokens to reward liquidity providers or stakers.
  • If there is an expected and disclosed element of risk of losing the staked principal, can that risk be improperly manipulated?
  • Do key parameters of the protocol have admin, centralization, or governance risk?

The key areas to look at are the parts of the code that touch the “money exit” paths.

There is a “money entrance” vulnerability to look for too.

  • Can users who have a right to participate in staking assets in the protocol be improperly prevented from doing so?

Rewards users receive have an implicit risk-reward profile and an expected time-value of money aspect. It’s helpful to be explicit about what those assumptions are and how the protocol could be caused to deviate from expectations.

Unchecked return values

There are two ways to call an external smart contract: 1) calling the function with an interface definition; 2) using the .call method. This is illustrated below:

contract A {
    uint256 public x;

    function setx(uint256 _x) external {
        require(_x > 10, "x must be bigger than 10");
        x = _x;
    }
}

interface IA {
    function setx(uint256 _x) external;
}

contract B {
    function setXV1(IA a, uint256 _x) external {
        a.setx(_x);
    }

    function setXV2(address a, uint256 _x) external {
        (bool success, ) =
            a.call(abi.encodeWithSignature("setx(uint256)", _x));
        // success is not checked!
    }
}

In contract B, setXV2 can silently fail if _x is less than 10. When a function is called via the .call method, the callee can revert, but the parent will not revert. The value of success must be checked and the code behavior must branch accordingly.

msg.value in a loop

Using msg.value inside a loop is dangerous because this might allow the sender to “re-use” the msg.value.

This can show up with payable multicalls. Multicalls enable a user to submit a list of transactions to avoid paying the 21,000 gas transaction fee over and over. However, msg.value gets “re-used” while looping through the functions to execute, potentially enabling the user to double spend.

This was the root cause of the Opyn Hack.
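To make the double-spend concrete, here is a toy Python model (purely illustrative, not EVM code; the names are made up) of a payable multicall that re-reads msg.value on every loop iteration:

```python
# Toy model: ETH is only attached once per transaction, but each
# delegated call inside the loop reads the same msg.value as if it
# were fresh ETH.
def multicall_credit(msg_value: int, num_calls: int) -> int:
    credited = 0
    for _ in range(num_calls):
        # each iteration "re-uses" msg.value
        credited += msg_value
    return credited

eth_actually_sent = 10**18  # 1 ETH in wei, attached once
credited = multicall_credit(eth_actually_sent, 3)
# the contract's bookkeeping now shows 3 ETH deposited
```

The contract credits three times the ETH that was actually attached, which is exactly the accounting mismatch exploited in such hacks.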

Private Variables

Private variables are still visible on the blockchain, so sensitive information should never be stored there. If they weren’t accessible, how would the validators be able to process transactions that depend on their values? Private variables cannot be read from an outside Solidity contract, but they can be read off-chain using an Ethereum client.

To read a variable, you need to know its storage slot. In the following example, the storage slot of myPrivateVar is 0.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract PrivateVarExample {
    uint256 private myPrivateVar;

    constructor(uint256 _initialValue) {
        myPrivateVar = _initialValue;
    }
}

Here is the JavaScript code to read the private variable of the deployed smart contract:

const Web3 = require("web3");

const PRIVATE_VAR_EXAMPLE_ADDRESS = "0x123..."; // Replace with your contract address

async function readPrivateVar() {
  const web3 = new Web3("http://localhost:8545"); // Replace with your provider's URL

  // Read storage slot 0 (where 'myPrivateVar' is stored)
  const storageSlot = 0;
  const privateVarValue = await web3.eth.getStorageAt(
    PRIVATE_VAR_EXAMPLE_ADDRESS,
    storageSlot
  );

  console.log("Value of private variable 'myPrivateVar':",
    web3.utils.hexToNumberString(privateVarValue));
}

readPrivateVar();

Try Catch is hard to get right

Whereas a low-level call will simply return true or false based on success, a try-catch makes a distinction between a panic and a revert, and may silently fail if the return data cannot be parsed properly. It’s best to avoid it and simply use low-level calls.

Insecure Delegate Call

Delegatecall should never be used with untrusted contracts as it hands over all control to the delegatecallee. In this example, the untrusted contract steals all the ether in the contract.

contract UntrustedDelegateCall {
    constructor() payable {
        require(msg.value == 1 ether);
    }

    function doDelegateCall(address _delegate, bytes calldata data) public {
        (bool ok, ) = _delegate.delegatecall(data);
        require(ok, "delegatecall failed");
    }
}

contract StealEther {
    function steal() public {
        // you could also selfdestruct here
        // if you really wanted to be mean
        (bool ok,) =
            tx.origin.call{value: address(this).balance}("");
        require(ok);
    }

    function attack(address victim) public {
        UntrustedDelegateCall(victim).doDelegateCall(
            address(this),
            abi.encodeWithSignature("steal()"));
    }
}

Upgrade bugs related to proxies

We can’t do justice to this topic in a single section. Most upgrade bugs can generally be avoided by using the OpenZeppelin hardhat plugin and reading about the issues it protects against (https://docs.openzeppelin.com/upgrades-plugins/1.x/).

As a quick summary, here are issues related to smart contract upgrades:

  • selfdestruct and delegatecall should not be used inside implementation contracts
  • care must be taken that storage variables never overwrite each other during upgrades
  • calling external libraries should be avoided in implementation contracts because it isn’t possible to predict how they will affect storage access
  • deployer must never neglect to call the initialization function
  • a gap variable must be included in base contracts to prevent storage collisions when new variables are added to the base contract (this is handled by the hardhat plugin automatically)
  • the values in immutable variables are not preserved between upgrades
  • doing anything in the constructor is highly discouraged because future upgrades would have to carry out identical constructor logic to maintain compatibility.

Overpowered Admins

Just because a contract has an owner or an admin, it doesn’t mean that their power needs to be unlimited. Consider an NFT. It’s reasonable for only the owner to withdraw the earnings from the NFT sale, but being able to pause the contract (block transfers) could wreak havoc if the owner’s private keys get compromised. Generally, administrator privileges should be as minimal as possible to avoid unnecessary risk.

Speaking of contract ownership…

Use Ownable2Step instead of Ownable

This is technically not a vulnerability, but OpenZeppelin’s Ownable can lead to loss of contract ownership if ownership is transferred to a non-existent address. Ownable2Step requires the receiver to confirm ownership, which guards against accidentally transferring ownership to a mistyped address.

Rounding Errors

Solidity does not have floats, so rounding errors are inevitable. The designer must be conscious of whether the right thing to do is to round up or to round down, and in whose favour the rounding should be.

Division should always be performed last. The following code incorrectly converts between stablecoins that have a different number of decimals: the exchange mechanism lets a user take a small amount of USDC (which has 6 decimals) for free when exchanging DAI (which has 18 decimals) for it. The variable daiToTake rounds down to zero, taking nothing from the user in exchange for a non-zero usdcAmount.

contract Exchange {
    uint256 private constant CONVERSION = 1e12;

    function swapDAIForUSDC(uint256 usdcAmount) external {
        uint256 daiToTake = usdcAmount / CONVERSION;
        conductSwap(daiToTake, usdcAmount);
    }
}
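The rounding behavior can be checked off-chain with integer arithmetic. This is a small Python sketch of the conversion bug above (floor division mimics Solidity’s integer division):

```python
CONVERSION = 10**12  # scale factor between 6-decimal USDC and 18-decimal DAI

usdc_amount = 999_999                     # just under 1 USDC, in 6-decimal units
dai_to_take = usdc_amount // CONVERSION   # Solidity-style floor division
# dai_to_take is 0: the user pays nothing for a non-zero amount of USDC

# scaling 6-decimal USDC *up* to 18 decimals requires multiplying instead
dai_equivalent = usdc_amount * CONVERSION
```

Any usdcAmount below 10**12 (one millionth of a USDC unit short of the scale factor) divides down to zero DAI taken.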

Frontrunning

Frontrunning in the context of Ethereum (and similar chains) means observing a pending transaction and executing another transaction before it by paying a higher gas price; that is, the attacker has “run in front” of the transaction. If the pending transaction is a profitable trade, it makes sense to copy it exactly, except paying a higher gas price. This phenomenon is sometimes referred to as MEV, which means miner extractable value (or, in other contexts, maximal extractable value). Block producers have unlimited power to reorder transactions and insert their own, and historically block producers were miners before Ethereum moved to proof of stake, hence the name.

Frontrunning: Unprotected withdraw

Withdrawing Ether from a smart contract can be considered a “profitable trade.” You execute a zero-cost transaction (aside from the gas) and end up with more cryptocurrency than you started with.

contract UnprotectedWithdraw {
    constructor() payable {
        require(msg.value == 1 ether, "must create with 1 eth");
    }

    function unsafeWithdraw() external {
        (bool ok, ) = msg.sender.call{value: address(this).balance}("");
        require(ok, "transfer failed");
    }
}

If you deploy this contract and try to withdraw, a frontrunner bot will notice your call to “unsafeWithdraw” in the mempool and copy it to get the Ether first.

Frontrunning: ERC4626 Inflation attack, a combination of frontrunning and rounding errors

We’ve written in-depth about the ERC-4626 inflation attack in our ERC4626 tutorial. But the gist of it is that an ERC4626 contract distributes “share” tokens based on the percentage of “assets” that a trader contributes. Roughly, it works as follows:

function getShares(...) external {
    // code
    shares_received = assets_contributed / total_assets;
    // more code
}

Of course, nobody will knowingly contribute assets and get no shares back, but they can’t predict that this will happen if someone frontruns the trade.

For example, if they contribute 200 assets when the pool has 20, they expect to get 10 shares. But if someone frontruns the transaction to deposit 200 assets, the formula becomes 200 / 220, which rounds down to zero, causing the victim to lose their assets and get zero shares back.
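The arithmetic of the attack can be replayed off-chain. This Python sketch uses the simplified share formula from the snippet above (floor division, as in Solidity):

```python
def shares_received(assets_contributed: int, total_assets: int) -> int:
    # simplified ERC4626-style formula with Solidity-style floor division
    return assets_contributed // total_assets

# victim's expectation: the pool holds 20 assets
expected = shares_received(200, 20)   # 10 shares

# attacker frontruns with a 200-asset deposit; the pool now holds 220
actual = shares_received(200, 220)    # rounds down to 0 shares
```

The frontrun deposit inflates total_assets just enough that the victim’s contribution divides down to zero.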

Frontrunning: ERC20 approval

It’s best to illustrate this with a real example rather than describe it in the abstract

  • Suppose Alice approves Eve for 100 tokens. (Eve is always the evil one, not Bob, so we will keep that convention.)
  • Alice changes her mind and sends a transaction to change Eve’s approval to 50.
  • Before the transaction to change the approval to 50 is included in the block, it sits in the mempool where Eve can see it.
  • Eve sends a transaction to claim her 100 tokens to frontrun the approval for 50.
  • The approval for 50 goes through
  • Eve collects the 50 tokens.

Now Eve has 150 tokens instead of 100 or 50. The solution to this is to set the approval to zero before increasing or decreasing it, when dealing with untrusted approvals.
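The race above can be simulated with toy allowance bookkeeping. This is an illustrative Python model, not a real ERC20 implementation:

```python
class ApprovalRace:
    """Toy allowance bookkeeping mimicking ERC20 approve/transferFrom."""

    def __init__(self):
        self.allowance = 0
        self.taken = 0

    def approve(self, amount: int) -> None:
        self.allowance = amount          # overwrites, like ERC20 approve

    def transfer_from(self, amount: int) -> None:
        assert self.allowance >= amount, "insufficient allowance"
        self.allowance -= amount
        self.taken += amount

race = ApprovalRace()
race.approve(100)          # Alice approves Eve for 100
race.transfer_from(100)    # Eve frontruns the reduction, claiming the 100
race.approve(50)           # Alice's "reduction" to 50 lands afterwards
race.transfer_from(50)     # Eve collects 50 more
# Eve ends with 150 tokens instead of 100 or 50
```

Because approve overwrites rather than adjusts the allowance, the ordering of the two transactions determines how much Eve can extract.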

Frontrunning: Sandwich attacks

The price of an asset moves in response to buying and selling pressure. If a large order is sitting in the mempool, traders have an incentive to copy the order but with a higher gas price. That way, they purchase the asset, let the large order move the price up, then sell right away. The follow-up sell, sometimes called “backrunning,” can be placed with a lower gas price so that the sequence looks like this:

  • frontrun buy
  • large buy
  • sell

The primary defence against this attack is to provide a “slippage” parameter. If the “frontrun buy” itself pushes the price up past a certain threshold, the “large buy” order will revert making the frontrunner fail on the trade. See this resource for additional bugs and vulnerabilities related to slippage.

It’s called a sandwich because the large buy is sandwiched by the frontrun buy and the backrun sell. This attack also works with large sell orders, just in the opposite direction.

Learn more about frontrunning

Frontrunning is a massive topic. Flashbots has researched the topic extensively and published several tools and research articles to help minimize its negative externalities. Whether frontrunning can be “designed away” with proper blockchain architecture is a subject for debate which has not been conclusively settled. The following two articles are enduring classics on the subject:

Ethereum is a dark forest

Escaping the dark forest

Signature Related

Digital signatures have two uses in the context of smart contracts:

  • enabling addresses to authorize some transaction on the blockchain without making an actual transaction
  • proving to a smart contract that the sender has some authority to do something, according to a predefined address

Here is an example of using digital signatures safely to give a user the privilege to mint an NFT:

import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";
import "@openzeppelin/contracts/token/ERC721/ERC721.sol";

contract NFT is ERC721("name", "symbol") {
    using ECDSA for bytes32;

    address public authorizer; // set elsewhere, e.g. in the constructor

    function mint(bytes calldata signature) external {
        address recovered = keccak256(abi.encode(msg.sender))
            .toEthSignedMessageHash()
            .recover(signature);
        require(recovered == authorizer, "signature does not match");
        // mint logic here
    }
}

A classic example is the Approve functionality in ERC20. To approve an address to withdraw a certain amount of tokens from our account, we have to make an actual Ethereum transaction, which costs gas.

It’s sometimes more efficient to pass a digital signature to the recipient off-chain, then the recipient supplies the signature to the smart contract to prove they were authorized to conduct the transaction.

ERC20Permit enables approvals with a digital signature. The function is described as follows

function permit(
    address owner,
    address spender,
    uint256 amount,
    uint256 deadline,
    uint8 v,
    bytes32 r,
    bytes32 s
) public

Rather than sending an actual approved transaction, the owner can “sign” the approval for the spender (along with a deadline). The approved spender can then call the permit function with the provided parameters.

Anatomy of a signature

You’ll see the variables, v, r, and s frequently. They are represented in solidity with the datatypes uint8, bytes32, and bytes32 respectively. Sometimes, signatures are represented as a 65-byte array which is all of these values concatenated together as abi.encodePacked(r, s, v);
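The packed layout is easy to verify off-chain. This Python sketch concatenates the components the same way abi.encodePacked(r, s, v) does (the hex values are hypothetical, borrowed for illustration):

```python
# r and s are each 32 bytes; v is a single byte (27 or 28)
r = bytes.fromhex("f8479d94c011613baeffe9239e4ff65e2adbac744c34217ca7d51378e72c5204")
s = bytes.fromhex("57af17590a914b759c45aaeabaf513d5ef72d7da1bdd19d9f2e1bc371ece5b86")
v = bytes([28])

signature = r + s + v  # same layout as abi.encodePacked(r, s, v)
```

The result is the familiar 65-byte signature blob, with v sitting in the final byte.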

The other two essential components of a signature are the message hash (32 bytes) and the signing address. The sequence looks like this

  • A private key (privKey) is used to generate a public address (ethAddress)
  • A smart contract stores ethAddress in advance
  • An offchain user hashes a message and signs the hash. This produces the pair msgHash and the signature (r, s, v)
  • The smart contract receives a message, hashes it to produce msgHash, then combines it with (r, s, v) to see what address comes out.
  • If the address matches ethAddress, the signature is valid (under certain assumptions which we will see soon!)

Smart contracts use the precompiled contract ecrecover to perform the combination described in step 4 and get the address back.

There are a lot of steps in this process where things can go sideways.

Signatures: ecrecover returns address(0) and doesn’t revert when the address is invalid

This can lead to a vulnerability if an uninitialized variable is compared to the output of ecrecover.

This code is vulnerable

contract InsecureContract {
    address signer;
    // defaults to address(0)

    // lets us give `who` the airdrop without
    // them spending gas
    function airdrop(address who, uint256 amount, uint8 v, bytes32 r, bytes32 s) external {
        // ecrecover returns address(0) if the signature is invalid
        require(signer == ecrecover(keccak256(abi.encode(who, amount)), v, r, s), "invalid signature");

        mint(msg.sender, amount);
    }
}

Signature Replay

Signature replay happens when a contract doesn’t track whether a signature has been used previously. In the following code, we fix the previous issue, but it’s still not secure.

contract InsecureContract {
    address signer;

    function airdrop(address who, uint256 amount, uint8 v, bytes32 r, bytes32 s) external {
        address recovered = ecrecover(keccak256(abi.encode(who, amount)), v, r, s);
        require(recovered != address(0), "invalid signature");
        require(recovered == signer, "recovered signature not equal signer");

        mint(msg.sender, amount);
    }
}

People can claim the airdrop as many times as they want!

We could add the following lines

// mapping(bytes => bool) used;
bytes memory signature = abi.encodePacked(v, r, s);
require(!used[signature], "signature already used");
used[signature] = true;

Alas, the code is still not secure!

Signature Malleability

Given a valid signature, an attacker can do some quick arithmetic to derive a different one. The attacker can then “replay” this modified signature. But first, let’s provide some code that demonstrates we can start with a valid signature, modify it, and show the new signature still passes.

contract Malleable {
    // v = 28
    // r = 0xf8479d94c011613baeffe9239e4ff65e2adbac744c34217ca7d51378e72c5204
    // s = 0x57af17590a914b759c45aaeabaf513d5ef72d7da1bdd19d9f2e1bc371ece5b86
    // m = 0x0000000000000000000000000000000000000000000000000000000000000003
    function foo(bytes calldata message, uint8 v, bytes32 r, bytes32 s) public pure returns (address, address) {
        bytes32 h = keccak256(message);
        address a = ecrecover(h, v, r, s);

        // The following is math magic to invert the
        // signature and create a valid one

        // flip s
        bytes32 s2 = bytes32(uint256(0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141) - uint256(s));

        // invert v
        require(v == 27 || v == 28, "invalid v");
        uint8 v2 = v == 27 ? 28 : 27;

        address b = ecrecover(h, v2, r, s2);
        assert(a == b);

        // different signatures, same address!
        return (a, b);
    }
}

As such, our running example is still vulnerable. Once someone presents a valid signature, its mirror-image signature can be produced to bypass the used-signature check.

contract InsecureContract {
    address signer;

    function airdrop(address who, uint256 amount, uint8 v, bytes32 r, bytes32 s) external {
        address recovered = ecrecover(keccak256(abi.encode(who, amount)), v, r, s);
        require(recovered != address(0), "invalid signature");
        require(recovered == signer, "recovered signature not equal signer");

        bytes memory signature = abi.encodePacked(v, r, s);
        require(!used[signature], "signature already used"); // this can be bypassed
        used[signature] = true;

        mint(msg.sender, amount);
    }
}
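The “math magic” that flips a signature is just modular arithmetic over the curve order and can be checked off-chain. A Python sketch, using the same constants as the Malleable contract above:

```python
# secp256k1 group order n, the same constant used in the Malleable contract
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

s = 0x57af17590a914b759c45aaeabaf513d5ef72d7da1bdd19d9f2e1bc371ece5b86
v = 28

s2 = N - s                    # the mirrored s value
v2 = 27 if v == 28 else 28    # the flipped recovery id

# (r, s2, v2) recovers the same signer as (r, s, v), but its packed
# bytes differ, so a mapping keyed on the raw signature will not catch it
```

Flipping twice returns the original value, which is why libraries such as OpenZeppelin’s ECDSA reject the “high” half of the s range entirely.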

Secure signatures

You’re probably wanting some secure signature code at this point, right? We refer you to our tutorial on creating signatures in Solidity and testing them in Foundry. But here is the checklist:

  • Use OpenZeppelin’s library to prevent malleability attacks and the ecrecover-returns-address(0) issue
  • Don’t use signatures as a password. The message needs to contain information that attackers cannot easily re-use (e.g. msg.sender)
  • Hash what you are signing on-chain
  • Use a nonce to prevent replay attacks. Better yet, follow EIP-712 so that users can see what they are signing and you can prevent signatures from being re-used between contracts and different chains.

Signatures can be forged or crafted without proper safeguards

The attack above can be generalized further if hashing is not done on chain. In the examples above, the hashing was done in the smart contract, so the above examples are not vulnerable to the following exploit.

Let’s look at the code for recovering signatures

// this code is vulnerable!
function recoverSigner(bytes32 hash, uint8 v, bytes32 r, bytes32 s) public returns (address signer) {
    require(signer == ecrecover(hash, v, r, s), "signer does not match");
    // more actions
}

The user supplies both the hash and the signatures. If the attacker has already seen a valid signature from the signer, they can simply reuse the hash and signature of another message.

This is why it is very important to hash the message in the smart contract, not off-chain.

Signatures as identifiers

Signatures should not be used to identify users. Because of malleability, they cannot be assumed to be unique. msg.sender has much stronger uniqueness guarantees.

MISC

Solidity compiler versions have bugs

See a security exercise we hosted on Twitter here. When auditing a codebase, check the Solidity version against the release announcements on the Solidity page to see if a bug might be present.

Assuming smart contracts are immutable

Smart contracts can be upgraded with the Proxy Pattern (or more rarely, the metamorphic pattern). Smart contracts should not rely on the functionality of an arbitrary smart contract to remain unchanged.

Transfer() and send() can break with multi-signature wallets

The solidity functions transfer and send should not be used. They intentionally limit the amount of gas forwarded with the transaction to 2,300, which will cause most operations to run out of gas.

The commonly used Gnosis Safe multi-signature wallet supports forwarding calls to another address in its fallback function. If someone uses transfer or send to move Ether to the multisig wallet, the fallback function can run out of gas and the transfer will fail, since it performs far more operations than the 2,300 gas stipend allows.

If you need to interact with a contract that uses transfer or send, see our article on Ethereum access list transactions, which can reduce the gas cost of storage and contract-access operations.

Is Arithmetic overflow still relevant?

Solidity 0.8.0 has built-in overflow and underflow protection. So unless an unchecked block is present, or low-level code in Yul is used, there is no danger of overflow. As such, SafeMath libraries should not be used, as they waste gas on extra checks.
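For intuition about what unchecked arithmetic does, here is a Python sketch of the wrapping behavior that Solidity applies inside an unchecked block (Solidity ≥0.8.0 reverts instead of wrapping outside such blocks):

```python
UINT256_MAX = 2**256 - 1

def unchecked_add(a: int, b: int) -> int:
    # inside `unchecked { }`, uint256 arithmetic wraps modulo 2**256
    return (a + b) % 2**256

wrapped = unchecked_add(UINT256_MAX, 1)   # wraps around to 0
underflow = (0 - 1) % 2**256              # wraps around to UINT256_MAX
```

Outside an unchecked block, both of these operations would revert the transaction rather than wrap.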

What about block.timestamp?

Some literature documents block.timestamp as a vulnerability vector because miners can manipulate it. This usually applies to using timestamps as a source of randomness, which should not be done anyway, as documented earlier. Post-merge, Ethereum updates the timestamp in exact 12-second intervals (or multiples of 12 seconds). However, measuring time at second-level granularity is still an anti-pattern: on the scale of one minute, there is considerable opportunity for error if a validator misses its block slot and a 24-second gap in block production occurs.

Corner Cases, Edge Cases, and Off By One Errors

Corner cases cannot be easily defined, but once you have seen enough of them, you start to develop an intuition for them. A corner case can be something like someone trying to claim a reward, but having nothing staked. This is valid, we should just give them zero reward. Similarly, we generally want to divide up rewards evenly, but what if there is only one recipient, and technically no division should happen?

Corner Case: Example 1

This example was taken from Akshay Srivastav’s twitter thread and modified.

Consider the case where someone can conduct a privileged action if a set of privileged addresses provide a signature for it.

contract VulnerableMultisigAuthorization {
    struct Authorization {
        bytes signature;
        address authorizer;
        bytes32 hashOfAction;
        // more fields
    }

    // more code

    function takeAction(Authorization[] calldata auths, bytes calldata action) public {
        // logic for avoiding replay attacks
        for (uint256 i; i < auths.length; ++i) {
            require(validateSignature(auths[i].signature, auths[i].authorizer), "invalid signature");
            require(authorizers[auths[i].authorizer], "address is not an authorizer");
        }
        doTheAction(action);
    }
}

If any of the signatures is invalid, or doesn’t match a valid authorizer address, the call reverts. But what if the array is empty? In that case, execution jumps straight to doTheAction without requiring any signatures at all.
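The empty-array corner case is easy to reproduce in any language. This toy Python sketch (hypothetical names, signature checking reduced to a set lookup) shows why a loop over validations is not the same as requiring a validation:

```python
AUTHORIZERS = {"0xAlice", "0xBob"}   # hypothetical authorized addresses

def take_action(auths: list, action: str) -> str:
    for signer in auths:
        if signer not in AUTHORIZERS:
            raise ValueError("address is not an authorizer")
    # with an empty list, the loop body never runs, so we land here
    return f"executed {action}"

def take_action_fixed(auths: list, action: str) -> str:
    # the fix: require at least one authorization before looping
    if not auths:
        raise ValueError("no authorizations provided")
    return take_action(auths, action)
```

Passing an empty list sails through the loop with zero signature checks, while the fixed version rejects it up front.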

Off-By-One: Example 2

contract ProportionalRewards {
    mapping(uint256 => address) originalId;
    address[] stakers;

    function stake(uint256 id) public {
        nft.transferFrom(msg.sender, address(this), id);
        originalId[id] = msg.sender;
        stakers.push(msg.sender);
    }

    function unstake(uint256 id) public {
        require(originalId[id] == msg.sender, "not the owner");

        removeFromArray(msg.sender, stakers);

        sendRewards(msg.sender,
            totalRewardsSinceLastclaim() / stakers.length);

        nft.transferFrom(address(this), msg.sender, id);
    }
}

Although the code above doesn’t show all the function implementations, even if those functions behave as their names describe, there is still a bug. Can you spot it?

The removeFromArray and sendRewards functions are in the wrong order. If there is only one user in the stakers array, there will be a divide-by-zero error, and the user won’t be able to withdraw their NFT. Furthermore, the rewards are probably not divided the way the author intends: if there were originally four stakers and one person withdraws, they will get a third of the rewards, since the array length is 3 at the time of withdrawal.
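Both failure modes can be reproduced with a few lines of Python that mirror the remove-then-divide ordering (illustrative names, not the contract’s actual helpers):

```python
def unstake(stakers: list, total_rewards: int, who: str) -> int:
    stakers.remove(who)                    # removal happens BEFORE the division
    return total_rewards // len(stakers)   # wrong denominator; crashes at 0 stakers

stakers = ["a", "b", "c", "d"]
payout = unstake(stakers, 90, "a")   # 90 // 3 == 30, not the intended 90 // 4
```

With a single staker, len(stakers) becomes zero after removal and the division raises, which is the Solidity revert that traps the NFT.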

Corner Case Example 3: Compound Finance Reward Miscalculation

The point of Compound is to reward users for lending their idle cryptocurrency to other traders who might have a use for it. The lenders are paid both in interest and in COMP tokens (borrowers can claim a COMP token reward too, but we won’t focus on that right now).

The Compound Comptroller is a proxy contract that delegates calls to implementations that can be set by the Compound Governance.

At governance proposal 62 on September 30, 2021, the implementation contract was set to an implementation contract that had the vulnerability. The same day it went live, it was observed on Twitter that some transactions were claiming COMP rewards despite staking zero tokens.

The vulnerable function distributeSupplierComp()

Here is the original code

/**
 * @notice Calculate COMP accrued by a supplier and possibly transfer it to them
 * @param cToken The market in which the supplier is interacting
 * @param supplier The address of the supplier to distribute COMP to
 */
function distributeSupplierComp(address cToken, address supplier) internal {
    // TODO: Don't distribute supplier COMP if the user is not in the supplier market.
    // This check should be as gas efficient as possible as distributeSupplierComp is called in many places.
    // - We really don't want to call an external contract as that's quite expensive.
    CompMarketState storage supplyState = compSupplyState[cToken];
    uint supplyIndex = supplyState.index;
    uint supplierIndex = compSupplierIndex[cToken][supplier];

    // Update supplier's index to the current index since we are distributing accrued COMP
    compSupplierIndex[cToken][supplier] = supplyIndex;

    if (supplierIndex == 0 && supplyIndex > compInitialIndex) {
        // Covers the case where users supplied tokens before the market's supply state index was set.
        // Rewards the user with COMP accrued from the start of when supplier rewards were first
        // set for the market.
        supplierIndex = compInitialIndex;
    }

    // Calculate change in the cumulative sum of the COMP per cToken accrued
    Double memory deltaIndex = Double({mantissa: sub_(supplyIndex, supplierIndex)});

    uint supplierTokens = CToken(cToken).balanceOf(supplier);

    // Calculate COMP accrued: cTokenAmount * accruedPerCToken
    uint supplierDelta = mul_(supplierTokens, deltaIndex);

    uint supplierAccrued = add_(compAccrued[supplier], supplierDelta);
    compAccrued[supplier] = supplierAccrued;

    emit DistributedSupplierComp(CToken(cToken), supplier, supplierDelta, supplyIndex);
}

The bug, ironically, is spelled out in the TODO comment: “Don’t distribute supplier COMP if the user is not in the supplier market.” But there is no such check in the code. As long as the user holds the cToken in their wallet (CToken(cToken).balanceOf(supplier)), they accrue COMP rewards.

Proposal 64 fixed the bug on October 9, 2021.

Although this could be argued to be an input validation bug, the users didn’t submit anything malicious. If someone tries to claim a reward for not staking anything, the correct computation should be zero. Arguably, it’s more of a business logic or corner-case error.

Real World Hacks

DeFi hacks that happen in the real world often don’t fall into the neat categories above.

Parity Wallet Freeze (November 2017)

The Parity wallet was not intended to be used directly. It was a reference implementation that smart contract clones would point to. The implementation allowed the clones to selfdestruct if desired, but this required all the wallet owners to sign off on it.

// throw unless the contract is not yet initialized.
modifier only_uninitialized { if (m_numOwners > 0) throw; _; }

function initWallet(address[] _owners, uint _required, uint _daylimit) only_uninitialized {
    initDaylimit(_daylimit);
    initMultiowned(_owners, _required);
}

And the kill function required sign-off from the wallet owners:

// kills the contract sending everything to `_to`.
function kill(address _to) onlymanyowners(sha3(msg.data)) external {
    suicide(_to);
}

Some literature describes this as an “unprotected selfdestruct,” i.e. an access control failure, but this isn’t quite accurate. The problem was that initWallet was never called on the implementation contract, which allowed someone to call it themselves and become the owner. That gave them the authority to call the kill function. The root cause was that the implementation was not initialized; the bug was introduced not by faulty Solidity code but by a faulty deployment process.

Badger DAO Hack (December 2021)

No Solidity code was exploited in this hack. Instead, the attackers obtained the Cloudflare API key and injected a script into the website frontend that altered user transactions, directing withdrawals to the attacker’s address. Read more in this article.

Attack vectors for wallets

Private keys with insufficient randomness

The motivation for discovering addresses with a lot of leading zeros is that they are more gas-efficient to use: an Ethereum transaction is charged 4 gas for a zero byte in the transaction data and 16 gas for a non-zero byte. As such, tools were built to grind private keys until they produced such vanity addresses.

Wintermute was hacked because it used an address generated by Profanity (writeup). Here is 1inch’s writeup of how the Profanity address generator was compromised.

Trust Wallet had a similar vulnerability, documented in this article (https://blog.ledger.com/Funds-of-every-wallet-created-with-the-Trust-Wallet-browser-extension-could-have-been-stolen/).

Note that this does not apply to smart contracts with leading zeros discovered by changing the salt in create2, as smart contracts do not have private keys.

Reused nonces or insufficiently random nonces

The “r” and “s” values of an Elliptic Curve signature are generated as follows:

r = k * G (mod N)
s = k^-1 * (h + r * privateKey) (mod N)

G, r, s, h, and N are all publicly known. If “k” becomes public, then “privateKey” is the only unknown variable and can be solved for. Because of this, wallets need to generate k perfectly randomly and never reuse it. If the randomness isn’t perfectly random, k can be inferred. Insecure randomness generation in a Java library left a lot of Android Bitcoin wallets vulnerable in 2013. (Bitcoin uses the same signature algorithm as Ethereum.) (https://arstechnica.com/information-technology/2013/08/all-android-created-bitcoin-wallets-vulnerable-to-theft/)
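The algebra of a nonce-reuse attack can be worked through in Python. In this sketch all the numeric values are hypothetical (r is treated as an opaque known value; on a real curve it would be the x-coordinate of k*G), but the recovery arithmetic is the standard one:

```python
# secp256k1 curve order (a prime), so every nonzero value is invertible mod N
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

priv = 0x1234567890ABCDEF    # hypothetical private key
k = 0x42424242               # nonce REUSED across two signatures
r = 0x9999999999             # x-coordinate of k*G; same for both signatures
h1, h2 = 0xAAAA, 0xBBBB      # hashes of two different signed messages

k_inv = pow(k, -1, N)
s1 = k_inv * (h1 + r * priv) % N
s2 = k_inv * (h2 + r * priv) % N

# attacker's side: s1 - s2 = k^-1 * (h1 - h2), so k falls out first...
k_rec = (h1 - h2) * pow(s1 - s2, -1, N) % N
# ...then priv from s1 = k^-1 * (h1 + r*priv)
priv_rec = (s1 * k_rec - h1) * pow(r, -1, N) % N
```

Two signatures sharing a nonce are all an attacker needs; this is exactly how the 2013 Android wallets were drained.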

Most vulnerabilities are application specific

Training yourself to quickly recognize the anti-patterns in this list will make you a more effective smart contract programmer, but most smart contract bugs of consequence are due to a mismatch between the intended business logic and what the code actually does.

Other areas where bugs can occur:

  • bad tokenomic incentives
  • off by one error
  • typographical errors
  • admins or users getting their private keys stolen

Many vulnerabilities could have been caught with unit tests

Smart contract unit testing is arguably the most basic safeguard for smart contracts, but a shocking number of smart contracts either lack them or have insufficient test coverage.

But unit tests tend only to test the “happy path” (expected/designed behaviour) of contracts. To test the surprising cases, additional test methodologies must be applied.

Before a smart contract is sent for audit, the following should be done first:

  • Static analysis with tools such as Slither to ensure basic mistakes were not missed
  • 100% line and branch coverage through unit testing
  • Mutation testing to ensure the unit tests have robust assert statements
  • Fuzz testing, especially for arithmetic
  • Invariant testing for stateful properties
  • Formal verification where appropriate

For those unfamiliar with some of the methodologies here, Patrick Collins of Cyfrin Audits has a humorous introduction to stateful and stateless fuzzing in his video.

Tools to accomplish these tasks are rapidly becoming more widespread and easier to use.

Resources

Some authors have compiled lists of previous DeFi hacks in these repos:

Secureum has been widely used to study and practice security, but keep in mind the repo hasn’t been substantially updated for two years.

You can practice exploiting solidity vulnerabilities with our Solidity Riddles repository.

DamnVulnerableDeFi is a classic wargame every developer should practice

Capture The Ether and Ethernaut are classics, but keep in mind some of the problems are unrealistically easy or teach outdated Solidity concepts

Some reputable crowdsourced security firms have a useful list of past audits to study.

Originally published at https://tutorialboy24.blogspot.com
