A possible AI counter-narrative


Are advances in the field of Artificial Intelligence going to slow down sooner than people think?


Introduction

[Image: Pixabay license, source. Search string: "data center"]

Today, @cmp2020 sent me a message with a quote from an article like this one from Wired: OpenAI’s CEO Says the Age of Giant AI Models Is Already Over. According to the article, Sam Altman, the CEO of OpenAI, is now saying that the technique of simply building bigger and bigger Large Language Models (LLMs) has reached a point of diminishing returns, and OpenAI is looking for new directions to advance. Altman also confirms that GPT-5 is not under development. On one hand, given all of the recent hype about an AI apocalypse, this is surprising.

On the other hand, I have already argued (privately) that the rapid growth of AI capabilities might slow down sooner than people think.

Off the top of my head, I can think of three reasons why AI's growth might start to level off after the recent explosion that we have seen: first, by posting AI content all over the Internet, people are polluting the training data for future AI generations; second, future AI models will only see incremental increases in the amount of available training data; and third, as Altman pointed out, the cost of increasing scale may become prohibitively expensive. Two of these have been obvious to me for quite a while, but it was only today that I recognized the third obstacle.

Pollution of training data

Here on the Steem blockchain, we have had many conversations about the difficulty of distinguishing AI-generated content from human content. Of course, the AI systems of the future will have the same difficulty disambiguating the two. So, when this generation of AI systems produces errors and people publish them on the Internet, those errors will feed back into the training data for the next generation.

The more content the AIs produce, the more they're going to be choking on their own content. Any errors from the current generation will be amplified in the next generation. In the end, I can imagine that it might produce a feedback loop that gums everything up and dramatically slows down future iterations.
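To make the mechanism concrete, here is a toy simulation of that feedback loop. Every number in it is an assumption I invented for illustration (the human error rate, the amplification factor, the rate at which AI-generated text takes over the pool); it sketches the dynamic, not any real training pipeline:

```python
# Toy feedback-loop sketch (illustrative only; all constants are invented).
HUMAN_ERROR_RATE = 0.02   # assumed error rate in human-written content
AMPLIFICATION = 1.5       # assumed factor by which inherited errors compound
AI_SHARE_GROWTH = 0.15    # assumed growth in the AI share of the pool per generation

def simulate(generations=6):
    ai_share = 0.0                    # fraction of the training pool that is AI-generated
    ai_error_rate = HUMAN_ERROR_RATE  # the first generation learns from humans only
    for g in range(1, generations + 1):
        # The pool's error rate blends human content with prior AI output.
        pool_error = (1 - ai_share) * HUMAN_ERROR_RATE + ai_share * ai_error_rate
        # The next model inherits, and slightly compounds, what it ingested.
        ai_error_rate = min(1.0, pool_error * AMPLIFICATION)
        ai_share = min(0.9, ai_share + AI_SHARE_GROWTH)
        print(f"gen {g}: AI share of pool = {ai_share:.0%}, "
              f"model error rate = {ai_error_rate:.1%}")

simulate()
```

Even with small starting numbers, the blended error rate creeps upward as the AI-generated share of the pool grows, which is exactly the "gumming up" that worries me.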

The low-hanging fruit has now been plucked

The current generation of LLMs appeared on the scene in an explosive way, but that is because they were able - for the first time ever - to ingest a sizable percentage of the Internet. In short, these early generations of LLMs got to harvest the data that people have digitized over the course of more than 30 years - and even centuries, if you count ancient books that have been digitized.

Once the Internet has been fully ingested, however, future data growth can only be incremental. We should not expect the same explosion from a year or two's worth of new training data that we saw from ingesting decades or centuries of it. This means that future advances must depend far more on algorithmic improvements and less on increasing data access.

To put it in a metaphor: by ingesting massive amounts of data, the car went from 0 to 200 in 30 seconds, but getting from 200 to 210 or 220 might take another 30 seconds, or 60, or 90. And there might even be a maximum speed that no car can exceed with currently known techniques.

On one hand, AI advocates have argued that people don't sufficiently appreciate the impact of exponential growth - and they're right. But it may be a mistake to assume that exponential growth in this field will continue indefinitely.

In the book Future Hype: The Myths of Technology Change, Bob Seidensticker argued for a "spotlight and S-Curve" model of technology. Basically (as I recall), he suggested that most technologies develop slowly at first, then hit a phase that looks like exponential growth, and finally level off to finish the S-Curve. People can be blind to this because the spotlight tends to shine on a technology during its phase of (seemingly) exponential growth.
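For intuition, here is what a generic S-Curve (the logistic function, with made-up constants) looks like when you sample it over time. In the middle stretch, each step multiplies the level by nearly the same factor, which is exactly what exponential growth looks like; then the ratio decays toward 1 as the curve flattens:

```python
import math

# Logistic S-Curve: level(t) = L / (1 + e^(-k * (t - t0)))
# L = ceiling, k = growth rate, t0 = midpoint. All constants are invented.
L, k, t0 = 100.0, 1.0, 5.0

def level(t):
    return L / (1 + math.exp(-k * (t - t0)))

prev = level(0)
for t in range(1, 11):
    cur = level(t)
    print(f"t={t:2d}  level={cur:6.1f}  growth vs. previous step = x{cur / prev:.2f}")
    prev = cur
```

An observer sampling only the middle of the curve sees what looks like exponential growth, right up until it isn't.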

Dollars and Cents

According to the above-referenced article, OpenAI has already spent $100 million building the current generation of models. Even for a well-funded company, that is a lot of money. It's not stated outright, but the implication of the article (to me) is that getting to the next level in size would be prohibitively expensive. We've grown accustomed to technology that improves at the rate of Moore's Law, but who's to say that AI will advance at the same pace?

Certainly, for quantum computing, there has been an argument that Moore's Law does not apply. It's also conceivable that Moore's Law doesn't apply to progress in artificial intelligence.
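To make the scale of the cost problem concrete, here is a back-of-the-envelope projection. The $100 million starting point comes from the article; the ten-fold cost multiplier per jump in scale is purely my assumption for illustration:

```python
# Back-of-the-envelope projection (not a forecast).
BASE_COST = 100e6        # ~$100 million for the current generation (per the article)
COST_MULTIPLIER = 10     # assumed cost multiple per jump in scale

cost = BASE_COST
for step in range(1, 5):
    cost *= COST_MULTIPLIER
    print(f"{step} scale-up(s) later: ~${cost / 1e9:,.1f} billion")
```

Under that (invented) multiplier, brute-force scaling crosses from "a lot of money" into nation-state-budget territory within just a few generations.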

Conclusion

As the old quip goes, "Making predictions is hard, especially about the future." With the whole world poised to watch the AI revolution unfold in real time over the next couple of years, I'm not about to be the one who claims that they're all wrong.

I do think, however, that there are some reasons to wonder if the hype might be, well, overhyped. To recap: there is a possibility that future AI training might choke on its own pollution; future data ingestion will switch from a multi-decade "big bang" to much smaller increments; and the cost of growth may not follow the Moore's Law path that we're accustomed to where technology is concerned.

One last, unrelated point, which I might elaborate on in a future article, has to do with AI & censorship. I just want to jot it down so it doesn't slip my mind. When Internet data is being used to train our AI companions, censorship becomes particularly dangerous. People complain about bias in AI, but strictly speaking there's no such thing: the bias is in the training data. The problem of so-called AI bias, then, will be compounded by censorship. Of all the risks of AI, this is one that I think deserves more awareness.

What do you think? Will AI's rapid growth continue without bounds, or will things slow down?


Thank you for your time and attention.

As a general rule, I up-vote comments that demonstrate "proof of reading".




Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded 3 US patents.


[Image: Pixabay license, source]

Reminder


Visit the /promoted page and #burnsteem25 to support the inflation-fighters who are helping to enable decentralized regulation of Steem token supply growth.


I think I agree with you. Though I would point out that what caused this great leap forward in AI development was not using the entire internet to train the model, but rather the development of the transformer model. So it is not exactly fair to say that the development of AI was spurred on mainly by training it on the whole internet. Plenty of other models were attempted and didn't work because they hadn't solved the problems which the transformer solves. Considering that, it is certainly possible that a new type of model (that isn't a transformer, or is a variant of the transformer) would perform exponentially (or seemingly exponentially) better than the transformer model currently does for the same reason that the transformer performed better than other models with the same data.

Though personally, I think improvement may come from figuring out how to "school" these models, i.e., make them learn from humans like students do in school (though obviously in different ways) - make it so the models can ask a qualified teacher questions and remember the answers they are given. This might solve hallucination problems and somehow make a model aware when it does not know something. Obviously this is somewhat counterintuitive to how artificial intelligence works, but there's a reason why we don't just have students figure everything out for themselves, and now that we've made AI capable of learning on its own, I think it could be vastly improved by figuring out how to make it learn from human interaction. I forget where I read it, but I read or heard somewhere (maybe from you) that AI or computers do not perform best at chess by themselves; rather, it is AI paired with humans that performs best at chess. If that statement is true, it is probably applicable to this idea.


Short reply for now, since I'm on my phone, but I mentioned the chess point in my article on Monday.

"ask a qualified teacher"


Very well pointed out! Thank you very much!

I only hesitate to agree with your first point, about AI choking on data captured in a pollution loop. Ingestion of data from the Internet does not necessarily have to lose its value: it could be focused more strongly on digitized literature from scientific publications (in the broadest sense) than on everyone's writings. But if so, your second point gains even more weight. And in the end, we will have to de-hype the recent furor.

On the other hand, as the so-called AIs are used more and more, they could generate income through paid services. This could sustain ongoing development despite the fact that there is not much more to improve.

The bias of AI-driven information with regard to sources of fake information, plus the impact of censorship, will surely become the most important issues to discuss and to observe.


Thanks for the reply.

"It could be focused more strongly on digitized literature from scientific publications (in the broadest sense) than on everyone's writings."

I have a feeling that scientific literature is going to be inundated with AI-generated content, too. I follow the Retraction Watch blog, and it turns out that the scientific journals struggle with many of the same sorts of inauthentic content problems that we have here on Steem.

You raise a good point that even if the quality doesn’t change, the scale certainly will increase.

I think that this technology should be put into action, but with great prudence.

@Christopher Palmer, lovely write-up. Let's wait and see. Time will tell.
