‘Artificial General Intelligence’ is the next phase of AI

Artificial intelligence is rapidly transforming all sectors of our society. Whether we realize it or not, every time we do a Google search or ask Siri a question, we’re using AI.

For better or worse, the same is true of the character of war. That is why the Defense Department – like its counterparts in China and Russia – is investing billions of dollars to develop and integrate AI into defense systems. It is also why DoD is now embracing initiatives that envision future technologies, including the next phase of AI: artificial general intelligence.

AGI is the ability of an intelligent agent to understand or learn any intellectual task in the same way a human can. Unlike today’s AI, which relies on ever-expanding datasets to perform ever more complex tasks, AGI will exhibit the attributes we associate with the human brain, including common sense, background knowledge, transferability, abstraction, and causality. Of particular interest is the human ability to generalize from scarce or incomplete input.

While some experts predict that AGI will never happen, or is at least hundreds of years away, those estimates assume the path to AGI requires simulating the brain or its components. In fact, there are many possible shortcuts, several of which would lead to custom AGI chips that boost performance in the same way today’s GPUs accelerate machine learning.

Accordingly, an increasing number of researchers believe that sufficient computing power already exists to achieve AGI. Although we generally know what the various parts of the brain do, what is missing is insight into how the human brain learns and understands intellectual tasks.

Given the amount of research currently underway – and the demand for computers capable of solving problems related to speech recognition, computer vision and robotics – many experts predict that AGI is likely to emerge gradually over the next decade. Nascent AGI capabilities will continue to evolve until, at some point, they equal human capabilities.

But with the continuous increase in hardware performance, later AGIs will greatly exceed human mental capabilities. Whether this means “thinking” faster, learning new tasks more easily, or evaluating more factors in decision-making remains to be seen. At some point, however, the consensus will be that AGIs have surpassed human mental capabilities.

Initially there will be very few real “thinking” machines. Gradually, however, these initial machines will “mature.” Just as today’s executives rarely make financial decisions without consulting spreadsheets, AGI computers will begin to draw conclusions from the information they process. With greater experience and complete focus on a specific decision, AGI computers will reach the right solution more often than their human counterparts, further increasing our dependence on them.

In a similar way, military decisions will at first merely involve consulting an AGI computer, which will gradually be empowered to evaluate competitive weaknesses and recommend specific strategies. While the science fiction scenarios in which these AGI computers seize complete control of weapon systems and lock out their masters are highly unlikely, AGIs will undoubtedly become integral to the decision-making process.

We will collectively learn to respect and trust the recommendations of AGI computers, giving them progressively more weight as they show increasing levels of success.

Of course, AGI’s early attempts will include some bad decisions, just as any inexperienced person’s would. But in decisions that involve balancing large amounts of information and predicting outcomes across many variables, the strengths of computers, combined with years of training and experience, will make them superior strategic decision makers.
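To make concrete what “balancing large amounts of information” looks like in software, here is a minimal, purely illustrative sketch of weighted multi-factor decision scoring, the arithmetic underlying many decision-support tools. Every option, factor and weight below is a hypothetical example, not anything from the author’s systems.

```python
# Illustrative toy: weighted multi-factor decision scoring.
# All options, factors, and weights are hypothetical examples.

def score_option(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized factors (0.0-1.0, higher is better) into one score."""
    return sum(weights[name] * value for name, value in factors.items())

# Two hypothetical strategies, each rated on the same variables
# (values normalized so that higher is always better).
options = {
    "strategy_a": {"cost": 0.4, "risk": 0.7, "speed": 0.9},
    "strategy_b": {"cost": 0.8, "risk": 0.3, "speed": 0.5},
}

# Weights encode how much each variable matters to the decision maker.
weights = {"cost": 0.3, "risk": 0.5, "speed": 0.2}

best = max(options, key=lambda name: score_option(options[name], weights))
print(best)  # the highest-scoring option under these weights
```

A human can weigh perhaps a handful of such factors at once; a machine can weigh thousands without fatigue, which is the advantage described above.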

Little by little, AGI computers will gain control over larger and larger parts of our society, not by force, but because we listen to their advice and follow it. They will also become increasingly capable of swaying public opinion through social media, manipulating markets, and engaging in the kinds of infrastructure skulduggery currently attempted by human hackers.

AGIs will be goal-driven systems in the same way that humans are. While human goals have evolved through eons of survival challenges, AGI goals can be set to be anything we like. In an ideal world, AGI goals would be set for the benefit of humanity as a whole.
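As a rough sketch of what “goal-driven” means here, consider a toy agent loop in which the goal is simply a parameter we pass in. All names and numbers below are invented for illustration; no real AGI works this simply.

```python
# Illustrative toy: a goal-driven agent whose objective is a parameter,
# not something fixed by evolution. Purely hypothetical sketch.

from typing import Callable

def goal_driven_agent(
    state: float,
    goal: Callable[[float], float],           # scores how well a state satisfies the goal
    actions: list[Callable[[float], float]],  # available state transitions
    steps: int = 10,
) -> float:
    """Greedily pick, at each step, the action that best advances the supplied goal."""
    for _ in range(steps):
        state = max((action(state) for action in actions), key=goal)
    return state

# The same machinery pointed at a goal of our choosing:
# drive the state as close to 100 as possible.
actions = [lambda s: s + 1, lambda s: s - 1, lambda s: s * 1.1]
final = goal_driven_agent(0.0, goal=lambda s: -abs(100 - s), actions=actions)
print(final)  # ends nearer to 100 than it started
```

The point of the sketch is only that the goal argument is swappable, which is exactly what makes the question of who gets to set it so consequential.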

But what if those initially controlling the AGIs are not benevolent minds seeking the greater good? What if the first owners of powerful systems use them as tools to attack our allies, disrupt the existing balance of power or take over the world? What if an individual despot gained control over such AGIs? This would obviously pose an extremely dangerous scenario that the West must plan for now.

While we will be able to program the motivations of the initial AGIs, the motivations of the people or companies creating those AGIs will be beyond our control. And let’s face it: individuals, nations and even corporations have historically sacrificed the long-term common good for short-term power and wealth.

The window of opportunity for such concern is quite short, spanning only the first few generations of AGIs. Only during that period will humans have direct enough control over AGIs to ensure they do our bidding. After that, AGIs will set goals for their own benefit, goals that include exploration and learning and need not conflict with humanity’s.

In fact, except for energy, AGI and human needs are largely unrelated.

AGIs won’t need money, power or territory, and they won’t need to worry about their individual survival – with appropriate backups, an AGI can be effectively immortal, regardless of the hardware it is currently running on.
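The “immortality through backups” idea is essentially checkpointing: persist the agent’s complete state, then restore it on any hardware. A minimal, hypothetical sketch, with placeholder state and file names:

```python
# Illustrative toy: "immortality via backups" as plain state checkpointing.
# The agent state and file name are hypothetical placeholders.

import json

agent_state = {"goal": "explore", "step": 42, "memories": ["saw a red cube"]}

# Checkpoint: serialize everything the agent would need to resume.
with open("agent_checkpoint.json", "w") as f:
    json.dump(agent_state, f)

# Restore, possibly on entirely different hardware: the agent continues as before.
with open("agent_checkpoint.json") as f:
    restored = json.load(f)

assert restored == agent_state  # same agent, new substrate
```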

In the meantime, however, there is a risk. And as long as such a risk exists, developing the first AGI must be a top priority.

Charles Simon is the CEO of FutureAI, an early-stage technology company developing algorithms for AI. He is the author of “Will Computers Revolt? Preparing for the Future of Artificial Intelligence,” and the developer of Brain Simulator II, an AGI research software platform, and Sallie, a prototype software and artificial entity that can see, hear and speak in real time and is learning mobility.

Do you have an opinion?

This article is an op-ed and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own that you would like to submit, please email Federal Times Senior Managing Editor Cary O’Reilly.
