Bursting the AI Bubble
Thank you to Matt Konwiser for sharing his experience and expertise in our knowledge base.

AI and Automation Specialist | Columnist and Speaker | Teacher | Student

August 8, 2024

...except it may not be a bubble.

I get why so many people think it's a bubble that's about to pop. Just look at the negative headlines that have come out this past week alone.

In addition, people are (finally) starting to be concerned about the natural resource and energy usage of AI. The MIT Technology Review discussed that impact in an article published just a few months ago. Google's emissions have jumped nearly 50% in the past five years due to AI use. Elon Musk even went so far as to say we will be out of electricity as early as next year.

For virtually every negative or concerning news piece that's published, though, we also see cutting-edge news. Shobhit Varshney posted about this a few days ago, highlighting Meta's 600,000-GPU AI processing beast, which (according to NVIDIA's spec sheets for the H100) can consume a staggering 420 million watts (by comparison, the average home uses about 30 kWh per day). It's impressive for sure, but also note: running at that power, this infrastructure can consume as much energy every day as well over 300,000 average-sized homes.
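For a rough sense of scale, here is a back-of-the-envelope sketch of that comparison. It assumes the H100's published 700 W per-GPU maximum and the 30 kWh/day household figure above; real cluster draw varies with utilization, cooling, and networking, so treat the output as an upper-bound illustration, not a measurement.

    # Back-of-the-envelope energy comparison (illustrative assumptions only).
    gpu_count = 600_000        # Meta's reported H100 fleet size
    watts_per_gpu = 700        # NVIDIA H100 SXM maximum power per spec sheet
    home_kwh_per_day = 30      # average household daily energy use cited above

    cluster_mw = gpu_count * watts_per_gpu / 1_000_000   # total draw in megawatts
    cluster_kwh_per_day = cluster_mw * 1_000 * 24        # MW -> kWh over 24 hours
    equivalent_homes = cluster_kwh_per_day / home_kwh_per_day

    print(f"Cluster power draw: {cluster_mw:,.0f} MW")            # ~420 MW
    print(f"Energy per day: {cluster_kwh_per_day:,.0f} kWh")      # ~10,080,000 kWh
    print(f"Average homes equivalent: {equivalent_homes:,.0f}")   # ~336,000 homes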

That post, though, was not about energy consumption; it was about innovation, and that's what we need to focus on to understand what's going on. Going back to the original point: is this an AI bubble analogous to the famed "Internet" or "dot-com" bubble?

In my opinion: No.

Set the Wayback Machine to 2000

There is a difference between being in startup mode on the way to profitability and simply building something cool without having a way to monetize it. Many of the dot-com startups, web portals, and search providers were popular, fast-growing, and widely used, but had no real way to make money; investors kept sinking cash into these businesses anyway because the stock prices kept climbing.

When the cash ran low and investors wanted their returns, it became evident that there weren't any, and wouldn't be for the foreseeable future.

Pop.

That may well have given birth to the practice of harvesting and selling your data, SEO, and ad banners. Without those, companies like Google, Facebook, Yahoo, etc. would just be popular sites operated out of a garage somewhere, because no one would want to back them.

In addition to that, because the Internet was so new (for consumer mass market use), everyone wanted to be a part of it. People had ideas for something to build and money to burn.

Sidenote: It was also the period when interest-only mortgages were issued. Banks were making loans based on hopes of aggressive real-estate appreciation, allowing people with low incomes to buy massive homes with minimal payments. When people couldn't afford their mortgages and housing prices depreciated, homes were foreclosed, the homeowners had to write massive checks to the banks, and the banks lost big.

When the bubble burst, all of these shenanigans were exposed for what they were. Much of the money tied up in the market was gone, and so were many other assets. People lost investments, homes, credit scores, and savings.

The AI "Situation" Today

Don't think of the AI climate today as a balloon or bubble that can be popped. It's more like a block of marble to be shaped. There's substance and solidity to it. It's tangible, and there are already myriad use cases proven to positively impact efficiency, profitability, and workflow. Like marble, it's also massive, expensive, and takes a lot of skill to turn into something with value that others will appreciate.

The difficulty with AI is twofold:

  • There is significant investment in AI today without an understanding of what the final objective is. People are just building the next big thing, but doing so is neither cost-effective nor free of collateral impact.

  • There are many risks and governance concerns with this technology. It can negatively impact people and businesses in almost equal proportion to its positive impacts, and with every new innovation or model developed, more risks are uncovered to counter the benefits.

All that said, AI can be monetized and profitable as long as the difficulties are weighed as part of the investment and R&D strategies. That's where I think we're now seeing a turning point.

Companies were expecting faster returns, and are learning that AI is far more expensive than originally projected (or no one bothered to do the calculations in the first place). This doesn't mean AI is a gimmick or a waste. It means that companies must be far more frugal and measured with their AI science projects and production deployments to build an ROI that justifies the use.

In other words, this is not a market issue; it's an individual business issue. Not every AI company is in financial trouble or publicly discussing its budget concerns. The ones that seem to be most at risk are those promoting new models and new training every month as if there were a competition to have the coolest model. The businesses consuming AI that struggle the most either jumped in without planning or are using models that are more consumer-oriented and less of a fit for enterprise workloads.

Unfortunately, though, because of the influence the "big consumer AI" companies have, their announcements have far-reaching repercussions for consumer confidence, the stability of the AI field, and the stock market.

Overbuilt, Not Overvalued

What would have happened if miners in the 1800s had come across so much gold that they had neither the resources to smelt it nor the need to spend all of it?

At some point it would have lost value to these people, mainly because the effort to mine and store it would have been greater than anything they could trade, make, or spend it for. Mind you, that doesn't make the gold less rare or less valuable. Until trade routes are established, more efficient smelting and casting are developed, and designers make greater use of it in their crafts and jewelry, it holds less value. Once those things happen, though, gold will flow from the land to the market, and demand will grow, as will the profits.

Welcome to the AI conundrum.

The AI situation is playing out the same way.

It was discovered. It was mined and worked. People abandoned their jobs and trades to learn it. Every time a new model was spun up, hundreds or thousands of variants would manifest. Everyone wanted to have the biggest, shiniest piece or the latest process to form it. Everyone wanted to buy it. Everyone wanted to invest in its claims.

Then it happened.

Businesses slowed their purchasing and adoption because they realized the risks and wanted to be more cautious in finding the use cases. The R&D companies that were spending money like water and consuming huge amounts of resources found their wares piling up on shelves and their funding and revenue drying up.

This should be a temporary problem.

Once businesses iron out the pipeline, the process of creation and curation, and the best ways to utilize it, demand will catch up with supply, the logjam will ease, and production will match need. In addition, use cases will be better tuned, and both governance and environmental impact will be factored in.

The race to be first will be replaced with a focus on human good and on practical, responsible, safe AI, because consumers (and, we hope, businesses) will only fund projects that meet those standards and will not work with AI companies that don't operate under those rules.

The surplus will be cleared, revenue and profitability will be high, and demand will be strong (at least that's the way it could play out).


Where Does This Leave Us?

Well, isn't that the million-dollar question? Fortunately, it has a very simple answer.

We just need to be smarter about how we identify and build AI projects and how we invest in AI companies.

Yeah. It's that simple.

The light at the end of the tunnel isn't a massive re-evaluation of AI companies or whether they have value. This period could be nothing more than an "AI correction" if we have the self control to actually correct ourselves.

Since small bites always hit better, here are the four things I believe we all need to do so this correction doesn't become a bubble.

  1. Don't panic. This may not be a bubble. I think it's a right-sizing and a sanity check for a profitable, beneficial technology. Be more concerned about inappropriate use and unnecessary, expensive projects.

  2. Invest wisely. AI isn't a popularity contest. Don't feed races for the biggest and best models. Focus on human good and business benefits.

  3. Be responsible. Know that AI can cause significant reputational and personal damage due to hallucinations, bias, and disinformation, and be smart about how and when to use it.

  4. Assess and measure. Look at energy consumption, operational costs, benefits, productivity, and risks as a single dashboard, and re-evaluate often to ensure you're not drifting (AI pun) away from your objectives. A minimal sketch of such a dashboard follows below.
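To make that last point concrete, here is a minimal, hypothetical sketch of what a single-view dashboard for one AI project might track. The field names, figures, and the simple ROI calculation are illustrative assumptions, not a prescribed tool or schema.

    # Hypothetical single-view dashboard for one AI project.
    # All names and figures below are illustrative placeholders.
    project_dashboard = {
        "energy_kwh_per_month": 120_000,   # metered compute energy (assumed)
        "operational_cost_usd": 85_000,    # compute, licensing, and staffing
        "measured_benefit_usd": 110_000,   # e.g. hours saved x loaded labor rate
        "productivity_gain_pct": 12,       # measured against a pre-AI baseline
        "open_risk_items": 4,              # hallucination, bias, compliance findings
    }

    # Simple period-over-period ROI; re-evaluate regularly to catch drift.
    roi = (project_dashboard["measured_benefit_usd"]
           - project_dashboard["operational_cost_usd"]) / project_dashboard["operational_cost_usd"]
    print(f"ROI this period: {roi:.0%}")   # ~29% with the placeholder numbers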

AI practitioners need to keep their passion in check. Everyone should remember that ecologically responsible compute systems and AI ethics, including transparent and fully explainable AI, are more important than winning a race or making money.

With something that could be as beneficial and as harmful as AI, if the priority is profit, there may well be human collateral damage and wasted effort.

This is a great time for some introspection. While there is some volatility in the market and shaky news about AI futures, ask yourself what outcomes you really want from AI and what you are willing to spend (and sacrifice) to get them. Perhaps that will bring the entire industry around to a point of universal support for more responsible, explainable, safe, and accessible AI.

As always, the thoughts and ideas within are mine alone and do not necessarily reflect the point of view of my company, and everything is subject to change.

