Decoupling of scales and the plateau

It is a well-known fact that LLMs will reach multiple plateaus during their development. Right about now there should be one. Even Bill Gates is talking about it. Even Yann LeCun (look it up on X). Although, to be honest, I'm not really counting LeCun as an expert here. He's more like a Mike Tyson of the field (What? Too much? Too early? šŸ™‚)

This time, the reason behind these plateaus is not a lack of technology, as it was with previous AI waves (the '50s, the '80s – the LeCun era – the 2000s).
This time, the reason is extremely profound and extremely close to humans. It concerns an interesting aspect of epistemology: the decoupling-of-scales issue.
What this means, in a nutshell, is that you cannot use an emergent level as an indicator for an underlying reality without stopping on the road, gathering more data (experiments and observations), and redoing your models entirely.

My argument is that our models, right now, at best do clever "unhobbling" (a basic feedback loop that somewhat emulates a train of thought by iterating on answers).
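The kind of loop I mean can be sketched in a few lines. This is a minimal illustration, not any real system: `call_llm` is a hypothetical placeholder for whatever model API you use, and the critique prompt is invented for the example.

```python
# Sketch of an "unhobbling" feedback loop: the model critiques and rewrites
# its own answer for a few rounds, emulating a train of thought.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call.
    return f"refined({prompt})"

def iterate_answer(question: str, rounds: int = 3) -> str:
    """Iterate on an answer by feeding it back to the model with a critique prompt."""
    answer = call_llm(question)
    for _ in range(rounds):
        critique_prompt = (
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            "Critique the previous answer and produce an improved one."
        )
        answer = call_llm(critique_prompt)
    return answer
```

The point is that nothing in this loop changes the model's underlying world model; it only recombines what is already there.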

We have not yet seen, at all, a real bake-in of a new "underlying reality model".

I don’t care about AGI, and when or if we will truly reach general intelligence! I care about the fact that we are very far away from drawing the real benefits out of these LLMs. We have not yet even battled the first plateau. And there are many coming.

The Powerhouse

Lots of discussions are happening right now in relation to what people nowadays call #AI. Some sterile, some insightful, some driven by fear, some driven by enthusiasm, some driven by alcohol 😊

I am defining the "powerhouse" as the technology that allows us to create the most value. In industry, in sales, in research, in general :).

In this light, during the information age, the one whose surface we are just barely scratching, there have been multiple powerhouses that we can remember and talk about.

  1. The internet and communication-age powerhouse, for example, which we have not yet exhausted. Not by far, in terms of potential productivity. This is something that we can understand easily. It's not worth going into details here.
  2. The data-churn powerhouse. The ability to store, look-through, transform and stream data. I would argue that this is also easily understandable. However, we may stop a bit here and make a few points:
    • Transforming and searching data, big data, involves something very much resembling intelligence. It is not for nothing that a certain area of data exploration is called business "intelligence". This, though, could be one of our first serious encounters with #AI. It is so long ago that most people don't even bother to call this #AI, although it is very much #AI 😊
    • Big data is the foundation of unsupervised learning models, so let's not forget about this.
  3. The computer vision capabilities are somewhat taken for granted. Things like OCR, or face recognition (not identification).
  4. Then there is a generation of computer vision that really produces value, things like medical assisting software (for echo imaging, CT, MRI, and other types of imaging). You know, this is still #AI, some in the form of clever algorithms, some supervised learning, and some unsupervised learning. I think about this as yet another powerhouse.
  5. Then, there is despotic computer vision, things like face identification, which can be, and really is, used at scale. We know about its use in China, but let me tell you something about that: it is used just the same here too. We're just more discreet about it. And yes, I see this as yet another powerhouse. I know. Too many versions of the computer vision one.
  6. Another interesting powerhouse is the expansion of the same level of capabilities into other domains: drug repurposing, voice synthesis, clever localization, etc.

All of this is #AI at its best. We basically off-load parts of our human intelligence to machines that can scale certain cognitive processes better than us, and that are more equipped when it comes to sensors.

We now have a new type of "powerhouse"; we refer to it as LLMs. Some of the value-prolific applications are becoming apparent right about now. Bear in mind that this is only the beginning. There is a whole new class of problems that can now become part of the information age. Many of these problems are not even known to us right now. This is happening because the link between humans and these artificial creations is now more intimate. It is language itself.

We have, basically, spent the short time we have had in the information age to:

  • teach computers to do calculations for us 😊
  • teach computers to remember
  • teach computers to communicate
  • teach computers to read
  • teach computers to see
  • teach computers to speak

None of these leaps was given up on; they are all being accumulated in the problems that we solve using computers.

LLMs aren't going anywhere. I promise you that. They are just bringing up possibilities. So, hearing all kinds of "informed opinions" stating that there is a great difference between expectations and reality with this advent of LLMs, and that this is bad news for the entire industry, is bull$hit.
Real bull$hit, no different than the one I expect it to be šŸ˜‰

Cheers!

Some concerning technical and ethical reviews

A lot has been going on lately. So much so that I do not even know how to start reviewing it.

I’ll just go ahead and speak about some technical projects and topics that I’ve been briefly involved in and that are giving me a fair amount of concern.

Issue number x: Citizen-facing services of nation states

A while back, I made a "prediction": the digitalization of citizen-facing services would become more present, especially as the pandemic situation panned out (here) and (here). I was right.
Well, to be completely honest, it was not really a prediction, as I had two side projects (as a freelancer) that involved exactly this. So I had a small and limited view from the inside.

Those projects ended, successfully delivered, and then came the opportunity for more. I kindly declined. Partly because I’m trying to raise a child with my wife, and there’s only so much time in the universe, and partly because I have deep ethical issues with what is happening.

I am not allowed to even mention anything remotely linked with the projects I’ve been involved in, but I will give you a parallel and thus unrelated example, hoping you connect the dots. Unrelated in terms of: I was not even remotely involved in the implementation of the example I’m bringing forward.

The example is: the Romanian STS (Service for Special Telecommunications) introduced blockchain technology into the process of centralizing and counting citizen votes in the national and regional elections happening in Romania. You can read more about it here, and connect the dots for yourselves. You'll also need to know a fair amount about Romanian election law, but you're smart people.

The Issue?

Flinging the blockchain concept at the public in a way that the public misunderstands it. Creating a more secure image than is warranted. Creating a security illusion. Creating the illusion of decentralized control while implementing the EXACT opposite. I'm not saying this is intentional, oh no, it is just opportunistic: it happened because of the fast adoption.
Why? Blockchain is supposed to bring decentralization, and what it does in the STS implementation is the EXACT opposite: it consolidates centralization.

While I have no link to what happened in Romania, I know for a fact that similar things have happened elsewhere. This is bad.

I do not think that this is happening with any intention. I simply think there is A HUGE AMOUNT of opportunistic implementations going on SIMPLY because of the political pressure to satisfy the PR needs, and maybe, just maybe, give people the opportunity to simplify their lives. But the implementations are opportunistic, and from a security perspective, this is unacceptable!

Ethically

I think that while we, as a society, tend to focus on the ethics of using AI and whatnot, we are completely forgetting about ethics in terms of our increased dependency on IT&C in general. I strongly believe that we have missed a link here. In the security landscape, this is going to cost us. Big time.