Bang!

I rarely read business books. Extremely rarely. However, the decade-old “The Decline and Fall of Nokia” by (mainly) David J. Cord has something biographical to it. It’s like a detailed painting of a crowd: it adds clarity and context for an external viewer.

Nevertheless, in my view, it contains a lot of parallels (or rather, counterexamples) to what is happening nowadays, with the advent of this generation of AI. Many tech businesses are bound to suffer, and this new tech seems to be adding fuel to the fire.

What tech, you say?

This one:

[Chart: LLM ability to complete an expert-level long task – source]

What’s that, you say? It is the ability of LLMs (so not all of AI) to complete long tasks. Read it like this: if you present a complicated problem to an expert, how much time does the expert need to solve it? In short, the latest commercially available LLM is now solving about 2 hours’ worth of expert-level work in an instant (the 95% confidence interval is 1 hour 10 minutes to 4 hours). In health, for example, the number is much, much higher. This is, without a doubt, the beginning of an exponential. It started with X’s Grok 4 and is thoroughly confirmed by OpenAI’s GPT-5.
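To make the exponential claim concrete, here is a minimal sketch of how a task-horizon trend compounds. The 2-hour figure comes from the paragraph above; the 7-month doubling time is only an illustrative assumption, not a measured number.

```python
# Illustrative sketch only: how a "task-horizon" metric compounds if it keeps
# doubling at a fixed rate. The 2-hour current horizon is taken from the post;
# the 7-month doubling time is an assumed placeholder, not a measured value.

current_horizon_hours = 2.0   # ~2h of expert work per task (from the post)
doubling_time_months = 7.0    # assumption, for illustration only

def horizon_after(months: float) -> float:
    """Projected horizon (in hours) after `months`, assuming steady doubling."""
    return current_horizon_hours * 2 ** (months / doubling_time_months)

for m in (0, 7, 14, 28):
    print(f"after {m:>2} months: ~{horizon_after(m):.0f} hours of expert work")
```

If the doubling holds, the horizon moves from hours to full working days within a couple of years; that is the whole point of calling it an exponential rather than a mere improvement.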

To put this in perspective: the jump from o1 to o3 is as big as the jump from o3 to GPT-5. In the four months that passed from April…

The blind spot.

This is the reality that everybody has to swallow. In an analogy with what happened to Nokia, there is a counterintuitive (for some) blind spot manifesting for businesses. This only confirms my predictions (to be fair, not mine to start with, but the predictions of smarter people) that the boost in productivity will manifest itself only at the superior level of expertise. The best people in a field will be able to milk these tools better. Provided they assume the stool and start putting in the effort to rub the teats 🙂

This is also the big blind spot. Many organizations seem to think that because high expertise is now easier to find, you can mix it with the low expertise of a random team and boost productivity, when the reality is exactly the other way around. In all this turmoil we’ll find an increasing number of Nokias. Bigger or smaller, but still blind.

Let’s hope more and more business leaders understand this aspect before the exponential growth of this capability turns into a 1/T law for their business.

Bang!

The Mind

As part of an effort to infuse technology, and the related practical abilities, into the daily lives and activities of future graduates, a certain university in Romania is piloting a program. I am part of this program, designing two of the future courses and running the pilot with them. This activity got me into a meditative state about the current state of technology (AI and s**t).

My long-standing statement is that the current wave of technology is just beginning to show its potential, and the future has way more surprises ahead.

Things change fast, and we have reached a state of profound revolution in knowledge. Last time I gave an argument against the infamous plateau of generative models, or against transformers in general. I weighed it against the problem of “hierarchies of scale” and tried to frame it in a story.

The current hypothesis

My current argument is that we seem to have managed to tame certain bits and pieces of the way our own mind figures out data. I do not believe we know exactly what part of our mind we have tamed, but we tamed it pretty well. It’s like when we domesticated the dog: we didn’t really know what a dog was, but we used it pretty cleverly.

Dive into my mind

Let’s take the example of AlphaFold.

Long story short: there is an active research field in biology and chemistry trying to figure out protein structures. This is extremely important for various fields, medicine being the most tangible example.
Researchers have found several hundred structures via a classical, exhaustive method (X-ray crystallography).
Then they tried to move the problem onto the power of computing, with various sub-mediocre results.
Then someone (Prof. David Baker) thought to make a game (called Foldit) where gamers would actually help predict a structure (following some rules, duh). Spectacular results.
Then someone thought to train some DNNs in a reinforcement learning setup to play the game. Very good results.
And then (2018–2020) the AlphaFold team threw Transformers at it, in a very clever setup.
This finally won them the Nobel Prize in Chemistry last year, for predicting the structures of 200 million proteins. And more. I leave the details to be researched independently.

This has helped the field of medicine spit out a vaccine for malaria, a few drugs targeting antibiotic-resistant bacteria, and so on.

That is why

I think what is happening is virtually unimaginable.

Some of my closer friends know that one of my predictions is that medicine will gain a new branch, or a new approach, in which the clinical model will be challenged. I am not arguing against the clinical model and clinical epistemology, but bear in mind that medicine will change dramatically over the course of the next years. And it is not because of AlphaFold alone. Things started to converge once technology started to offload knowledge, once use cases like “drug repurposing” crystallized their way into mainstream medicine.

And that is why I think that, once the dust settles on the current crisis in the IT world, the future looks challenging, and way bigger than it was before.

And finally, that is why I think we have reproduced a valuable piece of the Human Mind at scale.

Cheers!

Decoupling of scales and the plateau

It is a well-known fact that LLMs will reach multiple plateaus during their development. Right about now there should be one. Even Bill Gates is talking about it. Even Yann LeCun (look it up on X). Although, to be honest, I’m not really counting LeCun as an expert here. He’s more like a Mike Tyson of the field (What? Too much? Too early? 🙂)

This time, the reason behind these plateaus is not the lack of technology (as was the case with the previous AI waves: the 50s, the 80s – the LeCun era – and the 2000s).
This time, the reason is extremely profound and extremely close to humans. It concerns an interesting aspect of epistemology: the decoupling-of-scales issue.
What this means, in a nutshell, is that you cannot use an emergent level as an indicator of an underlying reality without stopping on the road, gathering more data (experiments and observations), and redoing your models entirely.

My argument is that our models, right now, at best do clever “unhobbling” (a basic feedback loop that somewhat emulates a train of thought by iterating on answers).
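To make that feedback loop concrete, here is a minimal sketch of the kind of iteration I mean. It assumes a hypothetical ask_llm helper standing in for whatever model call you actually use; it is an illustration of the loop, not anyone’s production setup.

```python
# A minimal sketch of an "unhobbling"-style feedback loop: the model answers,
# critiques its own answer, and revises it a few times. Nothing here changes
# the underlying model; it only iterates on the emergent level (the answers).

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to whatever chat model you use.
    raise NotImplementedError("plug in a real model call here")

def iterate_answer(question: str, rounds: int = 3) -> str:
    answer = ask_llm(f"Question: {question}\nGive your best answer.")
    for _ in range(rounds):
        # Ask for a critique of the previous attempt...
        critique = ask_llm(
            f"Question: {question}\nProposed answer: {answer}\n"
            "List concrete flaws or missing steps in this answer."
        )
        # ...then ask for a revision that addresses the critique.
        answer = ask_llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return answer
```

Loops like this squeeze more out of the same weights, which is exactly why they do not count as baking in a new underlying reality model.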

We have not yet seen, at all, a real bake-in of a new “underlying reality model”.

I don’t care about AGI, or when or if we will truly reach general intelligence! I care about the fact that we are very far away from drawing the real benefits out of these LLMs. We have not yet even battled the first plateau. And there are many more coming.

The Powerhouse

There are lots of discussions happening right now in relation to what people nowadays call #AI. Some sterile, some insightful, some driven by fear, some driven by enthusiasm, some driven by alcohol 😊

I define the “powerhouse” as the technology that allows us to create the most value. In industry, in sales, in research, in general :).

In this light, during the information age, the one we are just barely scratching the surface of, there were multiple powerhouses that we can remember and talk about.

  1. The internet and communication-age powerhouse, for example, which we have not yet exhausted. Not by far, in terms of potential productivity. This is something that we can understand easily. It’s not worth going into details here.
  2. The data-churn powerhouse: the ability to store, look through, transform and stream data. I would argue that this is also easily understandable. However, we may stop a bit here and make a few points:
    • Transforming and searching data, big data, involves something very resembling intelligence. It is not for nothing that a certain area of data exploration is called business “intelligence”. This, though, could be one of the first serious encounters with #AI. It was so long ago that most people don’t even bother to call it #AI, although it is very much #AI 😊
    • Big data is the foundation of unsupervised learning models, so let’s not forget about it.
  3. The computer vision capabilities that are somewhat taken for granted: things like OCR, or face recognition (not identification).
  4. Then there is a generation of computer vision that really produces value: things like medical-assisting software (for echo imaging, CT, MRI, and other types of imaging). You know, this is still #AI, some in the form of clever algorithms, some supervised learning, and some unsupervised learning. I think of this as yet another powerhouse.
  5. Then there is despotic computer vision: things like face identification that can be, and really is, used at scale. We know about the use in China, but let me tell you something about that: it is used just the same here too. We’re just more discreet about it. And yes, I see this as yet another powerhouse. I know. Too many versions of the computer vision one.
  6. Another interesting powerhouse is the expansion of the same level of capability into other domains: drug repurposing, voice synthesis, clever localization, etc.

All of this is #AI at its best. We basically off-load parts of our human intelligence to machines that can scale certain cognitive processes better than us, and that are better equipped when it comes to sensors.

We now have a new type of “powerhouse”; we refer to it as LLMs. Some of the valuable, prolific applications are becoming apparent right about now. Bear in mind that this is only the beginning. There is a whole new class of problems that can now become part of the information age. Many of these problems are not even known to us right now. This is happening because the link between humans and these artificial creations is now more intimate. It is language itself.

We have, basically, spent the short time that we have had in the information age to:

  • teach computers to do calculations for us 😊
  • teach computers to remember
  • teach computers to communicate
  • teach computers to read
  • teach computers to see
  • teach computers to speak

None of these leaps has been abandoned; they all accumulate in the problems that we solve using computers.

LLMs aren’t going anywhere, I promise you that. They are just opening up possibilities. So, hearing all kinds of “informed opinions” stating that there is a great gap between expectations and reality with this advent of LLMs, and that this is bad news for the entire industry, is bull$hit.
Real bull$hit, no different from the one I expected it to be 😉

Cheers!