Future of AI reviewed

Now we’re at the end of the semester and we had to read another article about AI: “The seven deadly sins of predicting the future of AI”. Unlike the article from the beginning of the semester, this one did not suggest that AI will massively outperform us and then either kill the human race or make us immortal. According to the second article, one of the big flaws in the first one is that a lot of its reasoning is built on exponentialism, that is, on the assumption that there is and will continue to be exponential growth. In a lot of cases, however, it turns out that something that looks exponential in the beginning is actually just the early part of an s-curve. That means that the big technological steps we are currently making will eventually start flattening out again, and it will get harder and harder to make further progress.
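To see why the two are so easy to confuse, here is a minimal sketch comparing a pure exponential with a logistic (s-)curve. The growth rate and cap are made-up values purely for illustration; the point is only that the two curves are nearly identical early on and diverge completely later.

```python
import math

def exponential(t, r=0.5):
    """Pure exponential growth: keeps accelerating forever."""
    return math.exp(r * t)

def s_curve(t, r=0.5, cap=100.0):
    """Logistic growth: looks exponential early, then flattens at a cap."""
    return cap / (1 + (cap - 1) * math.exp(-r * t))

# Early on the two curves are almost indistinguishable...
for t in [0, 2, 4]:
    print(t, round(exponential(t), 2), round(s_curve(t), 2))

# ...but later the s-curve saturates near its cap while the exponential explodes.
for t in [10, 20]:
    print(t, round(exponential(t), 2), round(s_curve(t), 2))
```

An observer sampling only the early points has no way to tell which curve they are on, which is exactly the trap the second article describes.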

In my opinion, the second article was way better than the first one. The first one was quite futuristic and made some questionable assumptions. For example, the assumption that we will keep having exponential growth. Or the assumption that an AI could be given a simple task and then start maximizing its result by inventing ways to get rid of all possible distractions or factors that could influence the process negatively, hence eradicating the humans. According to the second article, an AI that was smart enough to do all that would also be smart enough to understand that this is not the goal of the task, and that it would just be harming humans.

When it comes to the authors, I’d have to say that I trust the second one more than the author of the first article. Granted, the first author had done extensive research on the field and talked to a lot of different experts. But in the end, the second author worked in AI himself and has first-hand experience in the field.

My view did not really change during the semester. I still believe that we can get an AI that outperforms us humans in a lot of tasks. But I do not think that we will have a super AI that will kill us all. During the semester we saw a lot of different approaches to AI. Some of them did learn certain things on their own (decision trees), but such trees will not start teaching themselves previously unknown domains. Others, like expectimax, relied even more on rules we define. So this semester we have not really seen AI approaches that are able to generalize well.
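Expectimax is a good illustration of how rule-bound these approaches are: the algorithm just walks a game tree we hand it, maximizing at our-move nodes and averaging at chance nodes. Here is a minimal sketch on a tiny hand-written tree; the tree structure and its leaf values are made up purely for illustration.

```python
# A toy game tree: "max" nodes pick the best child, "chance" nodes
# average over their children (assuming uniform probabilities);
# leaves are plain numbers (utilities).
def expectimax(node):
    kind, children = node if isinstance(node, tuple) else ("leaf", node)
    if kind == "leaf":
        return children
    values = [expectimax(c) for c in children]
    if kind == "max":
        return max(values)
    # chance node: expected value over the children
    return sum(values) / len(values)

tree = ("max", [
    ("chance", [3, 9]),   # expected value 6.0
    ("chance", [5, 5]),   # expected value 5.0
])
print(expectimax(tree))  # → 6.0
```

Everything here, including the tree itself, the move order, and the probabilities, is supplied by us; the algorithm cannot step outside the rules it was given, which is exactly my point about generalization.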

Future of AI

There are multiple levels of AI. The basic one is ANI (artificial narrow intelligence). An ANI is specialized in one area. For example, it can be a champion at a specific board game, but it can’t do other things, like recognize handwriting. The next level would be the so-called AGI (artificial general intelligence). It’s sometimes also called human-level intelligence, because it’s supposed to be as smart as a human in every aspect. And the third level of AI is ASI (artificial superintelligence). This one is defined as an AI that is smarter than the best human in practically every field, and it covers everything from a machine that is just a little smarter than a human up to one that is vastly smarter.

Nowadays we have AIs that are on the ANI level. Just recently an AI beat the best Go player (AlphaGo). And there are a lot of other AIs that are better than humans in some specific field. Interestingly, machines are usually better than humans at tasks where “thinking” is necessary, for example doing calculus or translating languages. But at things that don’t really require “thinking” for humans, such as recognizing a cat as a cat, humans still outperform computers. This comes from millions of years of evolution in which the human race perfected such skills.

The interesting thing about reaching AGI is that this state will only be temporary. Because once a computer reaches it, it will already be outperforming humans. It will be faster: a human brain’s neurons fire at around 200 Hz, whereas a CPU runs at around 2 GHz. It can have more storage, because a computer’s hardware can easily be expanded. And the computer will have more “uptime”, because we humans need sleep, while a computer can operate 24/7. So it won’t be long from there to reach the ASI level. And then it can very well be that the AI’s intelligence level grows exponentially. According to the blog post, that means an AI could, 90 minutes after reaching ASI, already be 170,000 times faster than a human being.
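Just to put numbers on that speed argument: the 200 Hz, 2 GHz, 170,000x and 90-minute figures come from the text above, but the implied doubling time is my own back-of-envelope arithmetic, not something the blog post states.

```python
import math

brain_hz = 200            # rough neuron firing rate from the text
cpu_hz = 2_000_000_000    # ~2 GHz clock rate from the text

# The raw clock-speed gap alone is seven orders of magnitude.
print(cpu_hz // brain_hz)  # → 10000000, i.e. ten million times faster

# How quickly would intelligence have to double to jump 170,000x in 90 minutes?
doublings = math.log2(170_000)     # ≈ 17.4 doublings needed
print(round(90 / doublings, 1))    # → 5.2 minutes per doubling
```

A doubling every five minutes or so is exactly the kind of figure that sounds plausible inside an exponential frame and implausible inside an s-curve frame, which ties back to the first article’s point.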

Another interesting part of the article was the discussion of different approaches for building better AIs. One approach would be to try to rebuild the human brain and train it. Another was to find an approach that is different from the human brain but more suitable for a computer. An analogy would be our airplanes, which do not flap their wings like birds but use a different approach that suits a machine. However, it was unclear to me which approach is considered more promising.

And a second question I had after the article was whether it’s possible to teach an AI our understanding of morality. So if we reach ASI, could it be that this ASI has our moral guidelines implemented and will obey them, even if it has another goal that could maybe be reached faster by ignoring them?

All in all, the article was very interesting, but it got dark fast. I think it can very well be that at some point we will have an AI that can outperform humans. But I don’t think that this means the end of humanity. I see it more as a chance to improve our quality of life and to solve problems that we humans are not (yet) able to solve, for example curing rare diseases or settling theoretically hard problems (e.g. P vs NP).