The danger of Meta’s new supercomputer is the company behind it

In late January, Meta, Facebook’s parent company, announced it is building an artificial intelligence supercomputer that it says will be the fastest AI supercomputer in the world once it is completed this summer. The AI Research SuperCluster, or RSC, will help the company build new AI models that could eventually form the backbone of the metaverse.

The announcement renewed debate over the ethics of adding yet another advanced supercomputer to our arsenal of rapidly emerging large-scale technologies. Reactions to the RSC ranged from “scary” to “dystopian.”

But evolving technology isn’t the problem. Instead, it’s the humans behind it — like profit-obsessed CEOs with access to troves of consumer data — that should concern us.

Business leaders are increasingly adopting AI technology to make their operations more efficient, increase productivity, boost innovation, and appeal to more customers. In one poll of business executives last year, more than 85% said AI is becoming a “mainstream technology” at their company. 

AI is now a pervasive part of our everyday lives. It’s used to curate suggested content on Netflix and Spotify. It helps estimate the speed of traffic on navigation applications. When you type a question into Google, AI scours the internet for relevant content.

Because we now encounter AI on a daily basis, the shortcomings and ethical implications of the technology have been increasingly thrust into the public eye.

The infamous Facebook whistleblower reports last fall sparked congressional hearings and increased scrutiny of the social media platform’s algorithms. Internal documents suggested the company knew its AI-driven recommendation systems were steering users toward extreme content because doing so boosted engagement. The company now says it hopes to use AI to address those problems.

There was outrage late last year when Amazon’s Alexa told a 10-year-old girl to plug a phone charger halfway into an outlet and touch a penny to the exposed prongs. The voice assistant reportedly found the challenge “on the web.”

Ethical considerations like these and more — from the effects of AI’s racial biases to concerns over privacy and surveillance — will take on more significance as AI continues to advance. And the next generation of artificial intelligence is rapidly emerging.

OpenAI, which Elon Musk co-founded, and Google’s DeepMind both say they are working toward artificial general intelligence: autonomous technology that can learn and perform the kinds of intellectual tasks humans can. One OpenAI system wrote a convincing essay on why recycling is bad for the world. Meta’s supercomputer is clearly headed down the same road.

Artificial general intelligence isn’t limited to a few specific capabilities; it can learn a variety of different tasks, much like humans can. (My own company, FutureAI, is in this space.) There are certainly risks involved with AGI technology. Because AGI can learn independently of humans, it could one day far exceed human intelligence. But before we start worrying about doomsday scenarios featuring armies of Terminator-like robots, we should remember that humans will be the ones who program and operate AGIs.

AI and its more general future cousin AGI are goal-directed systems. That means the behavior of AGIs will be a direct result of the goals humans give them. If we program AGIs with human-like goals of wealth and power, they will inherit potentially dangerous, human-like flaws.

It’s not hard to imagine a scenario in which this happens. We’ve already witnessed the consequences — election interference and other efforts to sway opinion, for example — of artificial intelligence falling into the wrong hands. 

With increasingly intelligent technology, malicious actors could pursue profit or political control far more effectively. If, for example, CEOs began experimenting with AGI systems that could monetize consumer data even more efficiently, we might have cause for concern.

But if humans train AGI systems to pursue goals of exploration and discovery instead, they will be more benign. In that scenario, AGIs could uncover phenomena beyond human comprehension.
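The difference is easy to see in miniature. The sketch below is a toy illustration of my own, not anything Meta or FutureAI has built; the actions, payoffs, and greedy decision loop are all invented for demonstration. It runs the same simple agent twice, once with a profit-style objective and once with an exploration-style one, and the agent’s behavior changes entirely with the goal it is handed.

```python
# Toy illustration: a goal-directed agent's behavior is a direct
# consequence of the objective function it is given. Everything here
# (actions, payoffs, scoring) is invented for demonstration purposes.
import random

ACTIONS = ["harvest", "trade", "survey", "wander"]

def profit_goal(state, action):
    # Rewards only actions that grow the agent's holdings.
    payoffs = {"harvest": 3, "trade": 5, "survey": 0, "wander": -1}
    return payoffs[action]

def exploration_goal(state, action):
    # Rewards whichever actions the agent has tried least often.
    return -state["counts"][action]

def run_agent(goal, steps=8, seed=0):
    rng = random.Random(seed)
    state = {"counts": {a: 0 for a in ACTIONS}}
    trace = []
    for _ in range(steps):
        # Greedy choice: take the action the goal scores highest,
        # breaking ties at random.
        best = max(ACTIONS, key=lambda a: (goal(state, a), rng.random()))
        state["counts"][best] += 1
        trace.append(best)
    return trace

print("profit-driven:     ", run_agent(profit_goal))
print("exploration-driven:", run_agent(exploration_goal))
```

The profit-driven run collapses onto the single highest-payoff action, while the exploration-driven run spreads across everything it has not yet tried. Nothing about the agent changed between the two runs; only the goal did, which is the whole point.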

Of course, it’s impossible to guarantee that the future of AGI will be entirely positive. There’s always a possibility that hackers could usurp AGI systems. And since AGI will eventually be able to design successive generations on its own, humans will have little control over it at that point, no matter the goals humans originally prescribed for it.

That’s why it’s incumbent on us to give the first few generations of AGI benevolent goals. When those systems begin to create their own successors, they will follow the same good-natured rules we followed in creating them.

Right now, the reality is that Meta can’t be stopped from developing its supercomputer, and the company has put enormous resources behind its pursuit of the metaverse. But that could change. Meta just lost roughly $200 billion of its market value after reporting a decline in daily Facebook users. So if ethical problems emerge with its metaverse AI and are widely publicized, we can expect users to rein Meta in as well.

More importantly, even more intelligent AI systems will no doubt follow on the heels of Meta’s supercomputer. While we can’t stop the insatiable demand for advanced technologies, we can choose how the systems still to come are programmed, and who programs them.

Charles J. Simon is founder and CEO of FutureAI, a D.C.-based early-stage deep technology company developing artificial general intelligence.

Source: https://venturebeat.com/2022/02/04/the-danger-of-metas-new-supercomputer-is-the-company-behind-it/
