GOOGLE’S ARTIFICIAL INTELLIGENCE BUILT AN AI THAT OUTPERFORMS ANY MADE BY HUMANS
Researchers at Google Brain have announced the creation of AutoML — an artificial intelligence that can generate its own AIs. Even more impressive, the researchers have already presented AutoML with a difficult challenge: to build a "child" AI able to outperform all of its human-made counterparts at a given task.
Google researchers automated the design of machine learning models using a technique known as reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
This child AI, which the researchers are calling NASNet, was tasked with recognizing objects such as people, cars, traffic lights, handbags, and backpacks in real-time video. Meanwhile, AutoML evaluates NASNet's performance and uses that information to improve NASNet, repeating and refining this process thousands of times.
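The controller/child loop described above can be sketched in miniature. The toy code below is not Google's actual AutoML implementation; it is a minimal, hypothetical illustration of the reinforcement-learning idea: a controller keeps a preference score for each candidate layer type, samples a child architecture, scores it with a stand-in "accuracy" function, and nudges its preferences toward choices that earned higher rewards (a REINFORCE-style policy-gradient update). The layer names and the reward function are invented for illustration.

```python
import math
import random

# Hypothetical menu of layer types the controller can pick from.
CHOICES = ["conv3x3", "conv5x5", "maxpool", "identity"]

def softmax(prefs):
    """Turn raw preference scores into a probability distribution."""
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    """Sample an index from a probability distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def child_accuracy(arch):
    # Stand-in for actually training and evaluating a child network.
    # Here we simply pretend architectures with more conv3x3 layers
    # score better, so the controller has something to discover.
    return arch.count("conv3x3") / len(arch)

def search(num_layers=4, iterations=2000, lr=0.1, seed=0):
    random.seed(seed)
    # One preference vector per layer slot, initialized uniform.
    prefs = [[0.0] * len(CHOICES) for _ in range(num_layers)]
    baseline = 0.0  # moving average of rewards, reduces variance
    for _ in range(iterations):
        probs = [softmax(p) for p in prefs]
        picks = [sample(p) for p in probs]
        arch = [CHOICES[i] for i in picks]
        reward = child_accuracy(arch)          # "train" the child
        baseline = 0.9 * baseline + 0.1 * reward
        advantage = reward - baseline
        # REINFORCE update: raise the log-probability of the sampled
        # choices in proportion to how much better than average they did.
        for layer, pick in enumerate(picks):
            for i in range(len(CHOICES)):
                grad = (1.0 if i == pick else 0.0) - probs[layer][i]
                prefs[layer][i] += lr * advantage * grad
    # Return the controller's favorite architecture.
    return [CHOICES[max(range(len(CHOICES)), key=p.__getitem__)]
            for p in (softmax(pr) for pr in prefs)]

best = search()
print(best)
```

In the real system, evaluating `child_accuracy` means fully training a neural network, which is why the process takes thousands of expensive iterations; the toy reward here just makes the loop runnable in seconds.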
What Does This Mean for the Future?
There are some obvious concerns with this new technology. If an AI can create an even smarter AI, couldn't this process repeat over and over, and if so, what would the resulting AIs be capable of? Should we be wary about playing God? We've seen the movies; perhaps they serve as a warning about what could happen if the technology were able to outsmart us and, worse, decide to take over the world as we know it. This might sound like a far-fetched theme from a sci-fi thriller, but who's to say it isn't possible? It certainly seems to be where technology is heading. How can we ever be sure AI won't decide that we as a species have outlived our usefulness? Would these super-intelligent machines see us as primitive apes?
Researchers might assure us that these systems won’t lead to any sort of dystopian future and that we have nothing to fear, but how can we be so sure?
Large corporations including Amazon, Apple, and Facebook are members of the Partnership on AI to Benefit People and Society, an organization that claims to be focused on the responsible development of artificial intelligence.
There is also the Institute of Electrical and Electronics Engineers (IEEE), which has proposed ethical standards for AI. And DeepMind, another research company owned by Google's parent company, Alphabet, recently announced the creation of a group focused on the moral and ethical development of AI.
Should We Be Concerned?
Why do we need super-intelligent AI in the first place? Doesn't the fact that these systems are incapable of feeling real emotion and empathy concern their creators? Or is creating something so intelligent so important to them that it outweighs the potential risks?
Technology can be an amazing tool, and it has already brought us so much, but at what point does it go too far, and when should we stop and really look at what we are doing? When, if ever, is it too late? Movies like The Matrix, Terminator, and Transformers can serve as a warning of what is possible if too much power is handed to AI.
Popular alternative researcher David Icke has been warning about the risks of advancing artificial intelligence for many years, and after seeing him speak at a conference in September and hearing him out, I fully understand where this wariness comes from. In his book The Phantom Self, Icke discusses this topic extensively. To hear him explain these concerns further, check out the interview below.
We would love to hear your thoughts on this! Is super-intelligent AI necessary for the advancement of our society, or should researchers exercise more caution about playing God?