
Transcript

Are SKILL.md Files the Quantum Error Codes of Industrial AI?


Hello community, so great that you are back. Today we talk about a rift. On the one side we have our AI: stochastic noise in the AI engine, unmapped neural pathways, a void of uncertainty, hallucination, and non-deterministic output from our probabilistic artificial intelligence systems. On the other side we have industry, which relies on infrastructure reliability, on quality, on reproducibility. They want a clear risk framework in place, they optimize their profit structure, and they are not at all interested in exploring new technology for its own sake. So we have a rift between current AI systems and the industrial uptake of those systems, and we simply ask why, and we look for a solution. So, welcome to this video.

Wherever you currently read in science, whether in error correction for lattice quantum electrodynamics and quantum reference frames from April 7, 2026, or in quantum computing, where a new paper on quantum error correcting codes says, "Hey, we obtained new quantum maximum-distance-separable (MDS) codes and some new quantum error correcting codes that are better than everything else on this planet," you are amazed at the significant progress in quantum computing. But quantum computing is still not there, not even in academia. And you might also ask: why? Why is this happening?

To my viewers who are fresh to this channel: hello and welcome. A quantum bit is simple. It is a vector in a two-dimensional complex Hilbert space, where we operate with continuous probability amplitudes. In classical computing, you know, an error correction method is simply repetition, but in the quantum regime we have three problems. First, the no-cloning theorem: because of unitarity and linearity, it is impossible to create an independent, identical copy of an unknown quantum state. We cannot simply back up a qubit. Second, wave function collapse: every measurement (remember the double-slit experiment) collapses a superposition into a single state, destroying our quantumness. And third, classical errors are discrete, but quantum errors are continuous. So we have a lot of problems in the mathematical correction.

But the main idea you should take away from this video is a foundational insight from the very beginning of QEC, I think it was 1991 to 1995: we do not need to correct for a continuum of infinitely many errors. Mathematically it was shown that a single-qubit error E can be expressed, and this is the simplification of the century, as a linear combination of Pauli matrices, where X is a bit flip, Z is a phase flip, and, guess what, Y is a combination of both. So when we measure an error-checking operator in our network, the quantum state is projected onto those discrete Pauli operators. The measurement forces the continuous error to collapse into a discrete, manageable one that we can then reverse. Does this remind you of anything? Yes, we are doing more or less exactly the same thing, at a different level of mathematical complexity, in artificial intelligence.
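If you want to see that discretization move in symbols, here is a minimal sketch of the standard textbook form, nothing of my own:

```latex
% Minimal sketch of the standard QEC discretization argument.
% An arbitrary single-qubit error E is a linear combination of Paulis,
% with X a bit flip, Z a phase flip, and Y = iXZ a combination of both:
\[
  E \;=\; \alpha I + \beta X + \gamma Y + \delta Z,
  \qquad \alpha, \beta, \gamma, \delta \in \mathbb{C}.
\]
% Measuring the stabilizer (error-checking) operators projects the
% corrupted state E|psi> onto one of the discrete Pauli branches:
\[
  E\,|\psi\rangle \;\longrightarrow\; P\,|\psi\rangle,
  \qquad P \in \{I, X, Y, Z\},
\]
% after which the discrete error P can simply be reversed, since each
% Pauli is its own inverse: P^2 = I.
```

Hold on to this move, collapsing a continuum of errors onto a small discrete set you can act on; it is the hinge of the whole analogy.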
Remember from my last videos, I showed you this beautiful current idea: we have an LLM, a large language model, in the middle. This is our core intelligence. Around it we have a lot of SKILL.md files, and from there we either go with an API to a supercomputer, or to a database cluster, a knowledge graph, or a standard graph pipeline. Everything is now a skill, and everything is now beautiful because it is deterministic. In those skills we have a defined workflow, a deterministic sequence of steps, 0.1 to 0.17, and the AI just has to perform it.

And you might ask, "Why do we do this?" Well, first, it is so much cheaper for AI companies like OpenAI and Anthropic not to train another beautiful AI intelligence, but to go to the outer periphery, the outer rim of our AI harnessing sphere, do all the deterministic calculation out there, which is so simple, and then just feed the result into the input of our core LLM. And in the last four videos I asked you: why don't we let the AI itself learn this? Why do we outsource it into markdown files that everybody can read, so that we get a nominally deterministic run, while the execution of those deterministic steps is itself not deterministic, though nobody says so?

So the question is: why doesn't industry buy and integrate the current state of AI to the fullest extent? The answer is simple. You cannot trust that the result of an AI is reproducible, that it is true to the data, or that its explanation is consistent over time. In the video where I argued that skills will serve us better than AGI, the idea was this: industry cannot and will not scale purely statistical LLMs for mission-critical tasks in finance, in medical pipelines, in aviation, whatever you have. A statistical model inherently possesses a non-zero hallucination rate. And even if a model is 99% accurate, that 1% failure rate is catastrophic when you execute millions of enterprise database operations: with a 1% per-call failure rate, the chance that a batch of just 500 independent operations runs clean is 0.99^500, which is well under one percent. So the hope was that by encoding the capabilities into skill markdown files and Python scripts we could solve the enterprise trust problem. But we encounter a lot of other problems, as you find in that video, starting from linguistic interfaces and interpretability onward.
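To make the skill idea tangible, here is a minimal sketch of what one such deterministic skill could look like as code. Every name and the workflow itself are hypothetical, invented purely for illustration; this is not any vendor's actual SKILL.md format:

```python
# Hypothetical sketch: a deterministic "skill" that the probabilistic
# core delegates to. The workflow steps are fixed and auditable; only
# the decision to invoke the skill comes from the statistical model.

from dataclasses import dataclass

@dataclass
class SkillResult:
    step_log: list[str]   # every executed step, for reproducibility audits
    value: float

def revenue_report_skill(rows: list[dict]) -> SkillResult:
    """Deterministic workflow: validate, then aggregate."""
    log = []
    # Step 1: validate the input schema (fail loudly instead of hallucinating).
    for r in rows:
        if "amount" not in r:
            raise ValueError(f"malformed row: {r}")
    log.append("validated schema")
    # Step 2: aggregate. Same input gives the same output, every run.
    total = sum(r["amount"] for r in rows)
    log.append(f"aggregated {len(rows)} rows")
    return SkillResult(step_log=log, value=total)

# The LLM only routes the request; the numbers never pass through the sampler.
print(revenue_report_skill([{"amount": 120.0}, {"amount": 80.5}]).value)  # 200.5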
Now think about the other analogy I started this video with. We scientists do not trust quantum computing either. And you might say, "Hey, why? This is pure science." Because we know that we need quantum error correction codes. So I asked Nano Banana Pro to create this image, and it shows you a fault-tolerant quantum architecture, the state of the art. In the middle you have your supercooled qubit array with the lattice code and everything, at millikelvin temperatures and what not. And around it you have your quantum error correction manifold. This is what you need, because of the continuous chaos and the decoherence happening on your quantum chip: you cannot let it out, you have to tame the beast, and you need quantum error correction manifolds that interact with it. Plus the physical core assembly, the integrated control, and so on, never mind. But this is the image.

Now compare this to the image I generated, I think, two or three weeks ago, and you are going to smile. Because again, in the middle we have the central intelligence, a probabilistic system, a statistical system that works with probabilities in a complex Hilbert space. Here we have a simpler case: we do not have quantum computing at the core of our real LLMs, we just go with some simple neural networks that are almost understandable. And then we also have this sphere, this harnessing sphere, to tame the beast. When I created this, I understood that I have more or less the same image in my brain, and that in theoretical physics and quantum computing on one side, and in computer science, that is, in artificial intelligence, on the other, we are dealing with more or less the same problem.

Think about it. Look at a topological QEC lattice. Say we encode our information in a lattice error network, and it detects that a quantum bit-flip error is happening in this particular qubit, and everything we know from the mathematical theorems applies. Yes, it is a completely different beast, and yes, it is a simplification, I know this, but it is so similar to what we currently do in artificial intelligence. Because here is another image, the way I think about it: we have an AI core in the middle, and this AI is growing. The AI gets new tasks, you have supervised fine-tuning, you have an alignment process, reinforcement learning from human feedback and whatever, verifiers. But in principle the core is a probabilistic system, a statistical model. And then we build these beautiful scaffolds around it: these are our SKILL.md files and whatever we are going to invent next. In my last video I showed you that just one model now carries more than 460 defined skills. This is nuts, because now we start a deterministic atomization of AI capabilities: we map from a continuous manifold down to an ensemble of hundreds of skills, broken down to the lowest level, hopefully autonomous and self-learning, which is nonsense.

And why do we build this scaffolding around our AI systems? Because of one topic: Anthropic and OpenAI have to sell AI to industry. Those companies have to create revenues, profits. Otherwise all the investments, all the venture capital, and after what we are seeing right now, this would be a market crash nobody wants. So it has to be a success. Therefore those companies have to convince industry: you can trust the AI system. As scientists, we know we cannot trust it. So this becomes a problem of an intertwined structure, especially if you consider that we are currently working on self-learning AI. The AI bubble I depicted here is growing. We want the system to learn by itself, autonomously, not like an OpenClaw running free, but in a controlled environment where we define exactly what happens. You can go either with classical reinforcement learning constrained by a Kullback-Leibler divergence, or you build deterministic layers as control layers in the scaffold around the AI sphere, to make sure you stay within baby steps of the previous state, so that the state transition of your AI system is minimal. There are so many methodologies; just have a look at my videos from the last month.
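As a minimal sketch of that KL-divergence leash, assuming a toy policy represented as raw logits (all numbers and names invented for illustration):

```python
# Toy sketch of a KL-constrained update: penalize the new policy for
# drifting away from the old one, keeping the state transition minimal.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def kl(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i)
    return float(np.sum(p * np.log(p / q)))

old = softmax(np.array([2.0, 1.0, 0.1]))   # policy before the update
new = softmax(np.array([2.3, 0.8, 0.1]))   # policy after one learning step

reward = 1.0   # task reward of the new behavior (made up for the sketch)
beta = 0.5     # strength of the leash back to the previous state
objective = reward - beta * kl(new, old)   # RLHF-style penalized objective
print(f"KL = {kl(new, old):.4f}, penalized objective = {objective:.4f}")
```

The beta knob is exactly the tradeoff I mean: how much chaotic intelligence you let out versus how tightly you hold the system to its previous state.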
In my last video I said: hey, do an experiment. Ask your AI whatever you like, go with an Opus 4.6 or whatever you have, and ask: how far can you compress Gödel's incompleteness theorem, the original proof from 1931? How much can you compress it? How much of a compactification of information can you achieve? And try it in the mathematical language, because I was sitting in exactly the lecture hall where Gödel gave his presentation at the University of Vienna, and I had a full year in which a professor tried to explain all the complexity of Gödel's incompleteness theorem to us. So I can tell you: ask your AI. Ask it in mathematical notation, then ask it for a natural-language explanation, say in English, and then try to do a compactification. And I told you: if your AI has any problems, go with two different axes, either the fixed-point lemma (the diagonal lemma) or Kolmogorov complexity. The latter is from the 1960s, old, boring stuff; every AI system should know about it. Then get a feeling yourself for what a compactification of information means, especially if you do it on a probabilistic system like our AI core intelligence versus on a deterministic harness system.
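For reference, here are those two axes in compact form; this is just a sketch of the standard textbook statements, nothing of my own:

```latex
% Axis 1: the fixed-point (diagonal) lemma behind Goedel's proof.
% For any formula \varphi(x) there is a sentence G with
\[
  T \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner),
\]
% and choosing \varphi(x) := \neg\mathrm{Prov}_T(x) yields the
% Goedel sentence: G is true iff G is not provable in T.
%
% Axis 2: Kolmogorov complexity (1960s): the length of the shortest
% program p that outputs the string s on a universal machine U,
\[
  K_U(s) \;=\; \min \{\, |p| \;:\; U(p) = s \,\},
\]
% which gives a hard lower bound on any "compactification of information".
```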
If you did the experiment yourself and now have a feeling for what I was talking about, great. Now bring it to the next level, because we want this as a self-learning system implementation, and that is where the problems really start.

So those are the two images I have in my brain, my understanding of what is currently happening, and maybe I am holding on too tightly to one central visualization. But you see the problem we currently have in theoretical physics, in quantum computing, with fault-tolerant quantum architectures, where we have our beautiful qubit array and a quantum error correction manifold built around it. And you see the problem we currently have in computer science, with industrial AI implementations, where we build a scaffolding to show our industrial partners, the clients, the enterprises, that they can trust the system, because the scaffolding is deterministic. The truth is that the execution of the deterministic scaffolding is also based on a probability distribution, but yes, that is only for experts.

So this is my current state of understanding of where we are. But as a scientist, of course, I have to benchmark my ideas against other ideas. Luckily, just as I was ending this presentation and asking myself, "Hey, who can I benchmark myself against?", I discovered that McKinsey published a new report on industrial AI just days ago: March 25, 2026, McKinsey, the state of AI trust in 2026. And I thought: what an amazing topic, that suddenly McKinsey and the top four care about AI trust in industry. Seems to be of some importance. Just one quote from this study, have a look yourself: as AI adoption grows, McKinsey tells us, 74% of the respondents they asked across industrial sectors identify inaccuracy as a highly relevant risk. So industry understands more or less exactly what the problem with current AI systems is. And now the question: those IPOs of OpenAI and Anthropic build this beautiful scaffolding of deterministic systems around their core AI intelligence, of SKILL.md, of thousands of skill markdown files and whatever files, and they try to tame the beast, the complexity of a probabilistic system that also has the beauty of really coming up with new ideas. Can this convince industrial partners to really invest more in AI and thereby provide revenues for these IPO and post-IPO companies? This is the $1,000 question.

It is absolutely fascinating to see that even the consultants try to solve this. If you read the report, you will see they come from a completely different perspective: my framing rests more on the scientific facts, and they go with marketing and management and whatever. But it is interesting that we arrive at more or less the same conclusion. Can you trust an AI system, which is currently a completely non-deterministic system, with your absolutely essential industrial decisions? Or what are the risks you are going to face?

Therefore I ask myself: how can we focus more on the intelligence of the AI core itself? How can we start to loosen up this, yes, bamboo scaffolding? I was in Asia, and I was amazed at how the construction sector there builds skyscrapers with bamboo scaffolding. It is gorgeous, you have to see it. That inspired me to create this image. But can we open up our scaffolding? Can we let a little more of this chaotic intelligence of an AI system come out? This is what I am absolutely interested in. I am not interested in limiting the intelligence of an AI system with more and more scaffolding to give industry the illusion that these AI systems are safe. They are not safe; you cannot trust them 100 percent; we are not there yet. And if you think the future core of an AI system might even be a quantum computing system, you can just smile about that.

Therefore, in my last video, I showed you new research, and I think this is the beauty: we can now really do supervised fine-tuning and reinforcement learning, and then build, as the last sphere, our test-time scaffolds around this core AI. And if you do this, as I showed you in that video, a tiny model with 4 billion pre-trained parameters, a tiny, tiny nothing of an LLM, trained for a particular domain, will rival a Gemini 3 Pro model, because we are on the right track: we bring all the information, all the knowledge, all the insight, all the procedural know-how back into the central intelligence of our AI. We do not distribute it around the harness sphere; instead we invest more money in the training and increase the intelligence of the AI system itself. Knowing that the risk of hallucination is still there, knowing that the complexities are completely untamed, we have to somehow deal with this on the way to a better AI system. But I think the harness is just an interim step, this "Okay, let's stop here, let's build a harness around it." I think we have to focus on the core problem, and that is how to make the AI system itself more reliable, more trustworthy, and higher performing.

Those are my thoughts. I hope you had a little bit of fun. Maybe you could laugh about my ideas; maybe you say, "Hey, this is completely not the way I see it." Please leave a comment. Would be great to see you in my next video.