Heurist “Sybil” Analysis, Season 1

May 28, 2024
Heurist miner fighting against bugs https://x.com/hjjlucky/status/1791597809931862442

We announced a “Sybil self-reporting campaign” on May 20th 2024 https://x.com/heurist_ai/status/1792631341411971580

To clarify, we're not analyzing address clusters or transaction patterns the way LayerZero did. "Cheater" describes the situation better than "Sybil."

Let’s revisit the mining rules in https://docs.heurist.ai/overview/incentivized-testnet

We've observed two issues, one in Stable Diffusion mining and one in LLM mining. Below, we explain each situation and announce the next steps.

Stable Diffusion mining

There are some miners who generated low-quality images. For example: https://imagine.heurist.ai/share/sdk-image-3451e425b9-0x82D1b5c2F74e290111abb7864D1B7AC0aF707497-urepsh

The team has worked with several miners who produced such images. We found one thing in common: all of them ran multiple SD mining processes on a single GPU, and some of the resulting images appeared blurry or broken.

We were not able to reproduce these results in our testing environments, and we confirmed that not all miners who run multiple SD processes suffer from this problem. It's likely caused by some combination of OS, GPU driver, and software issues that occurs randomly.

Those miners who ran multiple processes per GPU did not violate the mining rules. Therefore, we will make the following decisions:

  • No existing SD miners prior to this announcement will be punished.
  • Running multiple SD processes on one GPU is discouraged. We recommend that the owners of community-made mining scripts such as https://github.com/Anonm81/heurist-miner-setup remove the multi-SD mining feature.
  • Moving forward, Imagine App users can report broken images by creating an issue in https://github.com/heurist-network/sybil-report that includes the image share link (obtained via the "share" button after generating an image). Once confirmed, the Waifu Points earned by the creator of such images will be reduced.

LLM mining

Using a simple yet powerful algorithm, we've identified 327 cheating LLM miners who did not perform any AI inference at all. The Llama Points of these miners will be reduced to 0.


We ask the AI this question:

Write a futurist story about [keyword] with [N] words. You must include the keyword [keyword] in your output

Then we check whether [keyword] appears in the response.

We ran this test at random times for all LLM miners. We've already built a prototype validator system that can send "probe" requests to each individual miner. A probe request is indistinguishable from a normal user request, so miners must respond to every request using their own inference logic.

When [keyword] is absent from the response, it's clear that the miner is not running an AI model at all, but is instead generating output from a preset collection of sentences.
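The probe mechanism described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, keyword list, and prompt template are our own stand-ins, not Heurist's actual validator code.

```python
import random

# Hypothetical pool of probe keywords; the real validator would draw from its own set.
PROBE_KEYWORDS = ["hovercraft", "terraforming", "neuroweave", "heliostat"]

def make_probe(n_words: int = 100) -> tuple[str, str]:
    """Build a probe prompt around a randomly chosen keyword.

    Returns (keyword, prompt). The prompt mirrors the template quoted above.
    """
    keyword = random.choice(PROBE_KEYWORDS)
    prompt = (
        f"Write a futurist story about {keyword} with {n_words} words. "
        f"You must include the keyword {keyword} in your output"
    )
    return keyword, prompt

def passes_probe(keyword: str, response: str) -> bool:
    """A miner running a real LLM should echo the keyword back.

    A miner replaying canned sentences will almost never contain it.
    """
    return keyword.lower() in response.lower()
```

A response that fails `passes_probe` at random probe times is strong evidence the miner is serving preset text rather than performing inference.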

We've listed the first batch of confirmed cheaters in https://github.com/heurist-network/sybil-report/blob/main/llm-cheaters-05-11-and-05-20.csv, including the probe job request and response for each address. To download the source data, replace `s3://validator-history/` with `https://d2m8ger4zqfcbt.cloudfront.net/` in the CSV entries.
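The URL substitution is a simple prefix swap; a minimal helper (the function name is our own, for illustration) looks like this:

```python
CDN_BASE = "https://d2m8ger4zqfcbt.cloudfront.net/"
S3_PREFIX = "s3://validator-history/"

def to_download_url(s3_uri: str) -> str:
    """Map an s3://validator-history/ URI from the CSV to its public CDN URL."""
    return s3_uri.replace(S3_PREFIX, CDN_BASE, 1)

# Example (hypothetical object key):
# to_download_url("s3://validator-history/some-key.json")
# -> "https://d2m8ger4zqfcbt.cloudfront.net/some-key.json"
```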

Future Plans

We’ve been developing a much more sophisticated algorithm that can detect cheating in LLM miners without any modification to existing miner software. We’ll write a blog post detailing the methods next month. Such an algorithm will make decentralized LLM inference much more secure, without sacrificing generation speed or quality.

Special Thanks

Special thanks to periko, CarlosBL and Gerald for your collaboration in troubleshooting the SD mining issues.
