By Ayman Haydar, CEO, MMPWW 

A few weeks ago, some of the biggest tech leaders descended on Washington to attend a closed-door meeting on policing AI. Yes, you read that correctly. The same tech CEOs (Musk, Zuckerberg, Gates, Altman and others) who are continually under fire for wielding too much power within their respective monopolies are now being consulted on how best to mitigate the risks posed by AI.

It’s history repeating itself yet again: give the big guys a seat at the table to shape policy that benefits them, let them grow even more powerful, and then worry about the consequences afterwards. It would be laughable if it weren’t so alarming.

It feels like the ‘chat’ surrounding generative AI reached a new peak the minute ChatGPT went mobile, and it hasn’t slowed down since. Even before we became complicit in granting this technology further entry into our lives, the warning signs were already there.

Multiple AI ethicists and experts had already raised concerns about the acceleration of AI adoption without limits. Making it so easy for anyone to pick up the technology, create content and fire it off at will only added to the broader worry about misinformation spreading more quickly online.

Downloads of the ChatGPT app currently stand at over five million, which shows our appetite, or at the very least our curiosity, for pocket AI. At this rate we might as well have it on our smartwatches too. If we’re no longer going to use our own brains, why not just let the machines take control? It’s sad.

The way we are willingly granting this technology a free pass into every area of our lives is concerning. We’ve seen how automation can be used for good, but that doesn’t mean we should outsource every decision to a bot. We risk losing the joy of discovery as we go ever deeper down the rabbit hole into wonderland.

Think of medicine: it cures our ailments, yet our reliance on it has made us weaker in other ways. A great innovation, no doubt, but take too much of a good thing and just watch how our internal organs start to fail. See the parallel now?

The Global Race 

The race to the top is clearly a big motivator for the rest of big tech as it tries to close the gap on OpenAI. In turn, OpenAI is pressing ahead with its plans to mass-market AI, but at what cost?

Chinese tech giant Tencent isn’t going to be sidelined either, recently unveiling its own rival chatbot with some capabilities on par with ChatGPT. Clearly this is a fight that is going to play out on the global stage.

With U.S. inflation cooling to its lowest point in more than two years, investor sentiment is turning more positive, particularly around big tech and, by extension, AI-led businesses.

Companies like Nvidia and AMD have seen their shares climb more than 180% and 74% year to date, respectively. Nvidia’s surge past a $1 trillion market cap puts it in the same league as Amazon, Alphabet, Microsoft, and Apple, a club few other tech stocks can claim to belong to.

Speaking of Apple, there’s something to be said for how shrewd the company is at predicting the general mood. As the AI debate rages on between ‘we must regulate’ and ‘here’s something new to try’, Cook is deliberately playing down Apple’s AI ambitions.

That doesn’t mean Apple isn’t in the race. In fact, rumor has it the company has significantly ramped up its spending on artificial intelligence in recent weeks, and its own chatbot, Ajax, is reportedly more powerful than ChatGPT. If I had to guess, I think Apple will wait for its competitors to run the gauntlet, with all the headaches and complications, first. Hold back and launch a better product, or simply replicate, just as it does with its phones. Same same, no real difference; that’s generally the Apple way.

Whichever way you look at it, disruption is coming across all industries, and digital advertising is no exception. Marketers are reportedly wary of the black-box nature of using Google’s Performance Max, Meta’s Advantage+ and TikTok’s Smart Performance Campaigns (SPC) to run their campaigns, given the lack of visibility into how the AI systems deliver results.

Elsewhere, Microsoft and Google are locked in their own battle to develop the best AI-assisted search experience, leaving publishers concerned about a drop in CTRs if results are presented in a way that keeps visitors on the search engine itself. Then there’s the separate challenge of stopping AI from scraping legitimate content from publishers to repost elsewhere. The New York Times and Disney have reportedly blocked OpenAI’s web crawler, GPTBot, from scanning their platforms for content.

Understanding The Full Picture

However, this is only one ‘fix’ for one issue with AI. Truthfully, we’re moving far too quickly to understand the bigger implications of this technology, solving issues in silos rather than looking at the overall challenge. And that’s before self-serving motives and large sums of money come into play.

You’ll remember that a while back Sam Altman, the OpenAI chief behind ChatGPT, professed that he too was increasingly worried about what he had unleashed on the world. He once joked, “AI will most probably lead to the end of the world, but in the meantime, there will be great companies”.

He later followed this up, along with several hundred other signatories, by signing an open letter arguing that the development of AI needs to be reassessed. The letter carried a very stark warning: AI should be treated as a threat to our very existence. Overly dramatic, or a perceptive take on what’s happening right now?

To be clear, I doubt that Altman and co. are acting with total sincerity here. It’s one thing to sign your name to something, quite another to put something in motion. The data centers still run. AI development continues… At the end of the day, these are just nice words from the people who do have the power to act and choose not to.

I think it’s more about distracting us from the very real shortcomings of current AI systems. If these companies frame the argument around the long-term threat, the noise gathers there rather than around the need to regulate sooner rather than later. It also buys them time to play catch-up with their own AI models. Hypocrisy at its finest.

Regulation: What Comes Next? 

Collectively, we need to examine our use of AI moving forward. Regulation is the first step: a framework we can all look to for guidance. Big tech may already have a head start, but this time we are catching on more quickly.

Just as the EU was the first to bring tighter data privacy legislation into play with GDPR, it’s now moving forward with its AI Act, which could become the first regulatory framework for AI systems globally. Washington’s aim of passing bipartisan legislation within the next year is ambitious, especially as the process is seemingly being guided by those with the most to gain from controlling AI’s capabilities and limitations to boost their bottom line.

It’s all too hypothetical at this stage, so until then we need to decide: do we set our brains to autopilot and let AI ‘save’ the day, or do we still see the value in thinking for ourselves? The film WALL-E may not be labeled an apocalyptic movie in the traditional sense, but in no way do I want to end up like an Axiom human, oblivious to life with a robot in charge of my every move… Do you?