How Agents Are Starting To Reshape Bittensor
TITLE: How Agents Are Starting to Reshape Bittensor CHANNEL: The Opentensor Foundation | Bittensor TAO DATE: 2026-03-20 ---TRANSCRIPT--- We're going to talk about agents. There are a bunch of people in the ecosystem using them very effectively, and I want to highlight that innovation on this call. It was also a pretty stressful week: we had a security breach at OTF, which I'll talk about. That's been shut down, thank God, and everything seems to be fine going forward. And then there's the Templar launch, which I think we should nod our heads to. I think maybe we should start with the bad stuff and get to the good stuff.
So the bad thing was that this week a malicious attacker got into the OpenTensor GitHub CI and planted a backdoor, a very interesting backdoor. It's very interesting how they were able to do this in such a short period of time, but effectively a GitHub PAT, a personal access token, was leaked. It was actually leaked by myself, and for a very short period of time, just under an hour, before it was flagged and pulled down. In that window the attacker was able to get into the CI actions of the BT wallet and push malicious code, which was very unfortunate. Some people lost funds, and we're working with them right now to make sure people can be made whole again. So far we think it's not very much tau in total, somewhere between one and five thousand tau, and we're following through to make sure that's tied up. We're pretty confident this will never happen again.
So that was really the worst thing that happened this week, but there are many things to celebrate. One of those things, which I'll keep mentioning going forward, is the global acknowledgement of the run done by Templar, the 72-billion-parameter run. There was actually a tweet out just recently, and it was mentioned on the All In podcast. Finally, people are understanding that this is an idea whose time has come: we can train machine learning models of significant size. Maybe they're not the best in the world right now, but we're within an order of magnitude of being incredibly significant in the raw model size that we're training. So 72 billion is huge, and we can get to a trillion. The optimizations that Sam and the Covenant team are putting together for the next run, with mixtures of experts, and also what they're running with Crusades, where they're optimizing the miner infrastructure so that the best of the best technology goes into every single miner in the network, are super interesting and fascinating.
This is also my first talk since coming back. I was away for about a month, because I needed a bit of a break, and lots of ideas have come up in that time about what we can do. One of them is shorting, so we'll talk about shorting and other types of operations that we want to push into the chain this year, hopefully. We'll also talk about the upcoming MEV fix.
MEV on Bittensor has been an ongoing issue. We have pushed a number of changes, and it's a bit of a cat-and-mouse battle between really sophisticated attackers trying to get around our encryption and us building out new features to stop them.
[Sam from OpenTensor discusses MEV Shield]
Yeah, so there is one other chain that has a similar kind of setup: Gnosis Chain has it. We've staggered the deployment of MEV Shield into two distinct protocols. There's the V1 protocol, which we can still run under proof of authority, which we're currently under. And then there's the V2 protocol, which we will move to once we move to NPoS and have decentralized validators. The upgrade that's coming today brings V1, which is already live on the chain, into compliance with the original invariants that we designed MEV Shield around.
The current version of MEV Shield deployed on the chain actually gossips the decrypted transaction right before it executes on the validator side, and some people are able to take advantage of that quickly enough to front-run it. After this upgrade, instead of injecting the decrypted transaction into the mempool, the validator will execute it directly; it never enters the mempool, and it only gets gossiped once the block completes. So there's basically no opportunity for someone to preempt the transaction.
And then with the V2 protocol, there should be 100% no more MEV, or at least no more front-running.
[Discussion of cold key swap rework and V2 MEV protocol using threshold encryption]
We use threshold encryption, so the validator can't decrypt early: there's a cryptographic commitment to the ordering and inclusion of the encrypted transactions in the previous block. When we go to execute them in the next block, all the other validators will reject and count as invalid any block that doesn't follow the rules for executing those encrypted transactions. Even a completely malicious validator can't decrypt until the cryptographic commitment is locked in. That makes it censorship resistant: he can't mess with ordering, and he can't invent a new transaction and insert it between the other ones.
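The commit-then-execute idea described here can be sketched in a few lines. This is a simplified illustration, not the actual MEV Shield implementation; the hash-based commitment and the function names are assumptions made for the example.

```python
import hashlib

def commit(encrypted_txs):
    """Hash the still-encrypted transactions in order: a commitment to
    both inclusion and ordering, locked in before anyone can decrypt."""
    h = hashlib.sha256()
    for tx in encrypted_txs:
        h.update(tx)
    return h.hexdigest()

def block_is_valid(prev_commitment, executed_pairs):
    """executed_pairs: [(ciphertext, plaintext), ...] in execution order.
    Validators reject any block whose execution order does not re-derive
    the commitment from the previous block, so reordering, omitting, or
    injecting a transaction makes the block invalid."""
    ciphertexts = [c for c, _ in executed_pairs]
    return commit(ciphertexts) == prev_commitment
```

The point of the sketch is that the commitment covers the ciphertexts, so it can be checked by every validator even though none of them can read the transactions until the threshold decryption happens.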
[Max discusses shorting/lending proposal for Dynamic TAO]
There's been a persistent inefficiency in the market: people who have high concentrations of alpha on their subnet, or high ownership percentages, can manipulate the emissions vector and the incentive system. In all of our discussions on how to make this more efficient, we kept thinking it would be really great if there were more alpha holders who could sell on these naughty subnets.
The idea that was suggested is the ability to borrow and lend out of these extremely deep liquidity pools that we have available. In plain terms, it allows you to take a Tao-collateralized loan of alpha out of these subnet pools. The alpha you receive from the loan can then be sold, thereby solving the "people don't have enough alpha to sell" problem. In some sense, this behaves like a short on an alpha token: you can repay the loan when the price goes down and make money.
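As a simple illustration of the economics being described, not the actual protocol mechanics, here is the profit-and-loss arithmetic of such a short; fees, slippage, and liquidation rules are deliberately omitted.

```python
def short_alpha_pnl(borrowed_alpha: float, entry_price: float,
                    exit_price: float) -> float:
    """Profit in Tao from borrowing `borrowed_alpha`, selling it at
    `entry_price` (Tao per alpha), then buying it back at `exit_price`
    to repay the loan. Fees and slippage are ignored in this sketch."""
    tao_from_sale = borrowed_alpha * entry_price
    tao_to_repay = borrowed_alpha * exit_price
    return tao_from_sale - tao_to_repay

# Borrow 1,000 alpha at 0.05 tao, price falls to 0.03 tao:
# 1000 * 0.05 - 1000 * 0.03, roughly 20 tao profit before fees.
```

If the price rises instead, the same formula goes negative, which is exactly the downside exposure the speakers argue is missing from a one-sided market.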
[Const/Jacob discusses the broader context]
The carrot is that if we properly design liquidity loaning from the pools, subnet owners who are capital constrained can access their pools and potentially take positions where they're long their own alpha token in exchange for Tao outstanding, enabling them to raise capital without creating sell pressure. This has been one of the things that has been very devastating to subnet owners: they just can't sell. They have to find OTC deals because they don't want to sell through their own pool. This would be a way of generating that capital directly through the protocol, without a counterparty.
The stick is that we are addressing a fundamental problem in Dynamic Tao. When we chose to make alpha tokens auto-sold rather than held via root, it effectively opened up an exploit in Bittensor where somebody could move toward having very, very little liquid ownership in their token. They burn all of their miner emissions, they hold all of the tokens, there are no other holders, and they collect the owner fee. Without a two-sided market, there are always buyers on the buying side; but without the other side of the market, holders who can sell the alpha tokens to create downward pressure or outflow, people can manipulate the prices, manipulate the inflows, and suck emission away from real projects without producing value.
In markets, that fidelity, the strength of that signal, is incredibly strong. If people go long on subnets that are not effective or not positive for Bittensor, and those subnets can't win broad approval from the Bittensor community at large, they are punished for that decision. That is not the behavior you get under, say, the earlier root network designs, where validators could just vote and there was no downside.
[Jacob/Const discusses agents and mining]
So I decided to start mining Bittensor and also build another subnet, and I'm in the process of building a validator as well. It becomes a race to see who can do that fastest, who can spin up the most tokens in the most intelligent way, run infrastructure, participate, and move capital between the different markets in Bittensor.
What I'm working on right now are tools to do that basically automatically. In the future, and actually already in the present, the resources in Bittensor become fully composable through the agents that mine these subnets. You give an agent the API key to Dataverse and it can mine Numenos; give it the API key to Numenos and it can mine Mantis. And that's just a prompt. Before, it was a slog through a Discord, through a website, through a validator, through mining code, through documentation. Now it's just plug and play.
One of the things I was doing with Constantinople, with Arbos, was mining the network with Arbos and red-teaming it at the same time. I had a Ralph loop that continuously read the code base, looked for exploits, ran those exploits, tested to see if they worked, and then repeated. I would say about 100,000 hours' worth of dev work was put out by just one Ralph loop running on Constantinople.
[Nova team discusses agents for drug discovery]
We first started realizing this was a thing and set up an agent for our internal use at the beginning. We named it Clyde, and it started with OpenClaw. I realized that I'm often digging through a lot of different databases, API sources, all of the code on Nova, and a bunch of really complicated miner code. So I set up an agent with skills that can access all of the back-end databases for Nova and all of the miner submissions.
We gave it a batch of molecules that we got from Nova compound and asked it to look for similarities; based on the virtual screens, we can understand their potential use from their similarity to known drugs. We spun out a different instance that was doing pharma intelligence, something that right now is painstakingly manual. And Pena was actually able to train it against a benchmark in just 12 hours and get about an 83% hit rate on a sample of 1,500 ADHD medications.
In our second incentive mechanism, Nova Blueprint, we saw miners apply an optimization strategy that had never before been used for drug discovery, and it is outcompeting a well-regarded published method used by people in the industry.
[Shoots/Truth discusses agentic intelligence layer]
Truth is known for being a decentralized inference provider, but ultimately what we want is to provide intelligence to the people. We want to provide the app layer as well: an agentic engine that can do everything you would want from AI. What people typically use today is stuff like ChatGPT, an agentic app like Claude Code or OpenAI Codex, a search system like Perplexity, or a web-based agentic coding system like Lovable or Bolt.new. We strive to become a provider of this app layer as well.
We're working on OpenClaw as a service: boot up a sandbox, put an OpenClaw on it, and you get ways to communicate with that sandbox. And sandbox as a service: there are a couple of successful sandbox-as-a-service providers out there right now, and they are just way too expensive. On e2b.dev I had about 20 sandboxes running and they cost me around a thousand dollars per month. We can do it way cheaper, and decentralized even.
[Minotaur/T-Slice discusses DeFi agent infrastructure]
The majority of DeFi volume is going to be agents before we know it. We realized that a swap intent is simply that, an intent, in its most simple form. Minotaur is supposed to be your interaction layer; we went with MCP so far. The idea is that Minotaur becomes a network where you can host apps, apps that can be used by others, and you can make money from them. Our miners' job is to write open-source software, what we call the intent solver engine. This intent solver engine will gradually grow to integrate all of the knowledge required to interact with DeFi. It's subnet 112, Minotaur.
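To make the intent idea concrete: an intent states what the user wants, and a solver competes on how to fulfill it. The field names and the acceptance rule below are illustrative assumptions, not Minotaur's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SwapIntent:
    """A minimal swap intent: the user declares the outcome they want
    and a bound the solver must respect. Field names are illustrative."""
    sell_token: str
    buy_token: str
    sell_amount: float
    min_buy_amount: float   # slippage bound: worst acceptable outcome

def solver_quote_is_valid(intent: SwapIntent, quoted_buy_amount: float) -> bool:
    """A solver's quote satisfies the intent only if it meets or beats
    the user's minimum; how the solver routes the trade is its own
    business, which is what makes intents composable for agents."""
    return quoted_buy_amount >= intent.min_buy_amount
```

Separating the "what" (the intent) from the "how" (the solver engine) is the design choice that lets agents interact with DeFi through a single declarative interface rather than protocol-specific integrations.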
[Closing]
Thanks everyone for coming on stage. There were a lot of announcements: the security breach this week, plans for shorting, agents, and the chain upgrade that's going through right now.