Future of AI reviewed

Now, at the end of the semester, we had to read another article about AI: “The seven deadly sins of predicting the future of AI”. Unlike the article from the beginning of the semester, this one did not suggest that AI will massively outperform us and then either kill the human race or make us immortal. According to the second article, one of the big flaws in the first one is that much of its reasoning rests on exponentialism, i.e. on the assumption that there is and will continue to be exponential growth. In many cases, however, something that looks exponential at first turns out to be just the early part of an S-curve. That means the big technological steps we are currently making will eventually start flattening out again, and further progress will become harder.
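To see why the two are so easy to confuse, here is a small sketch (the growth rate and the capacity value are made-up illustration parameters, not figures from either article): early on, a logistic S-curve is almost indistinguishable from a pure exponential, and only later does it saturate.

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential growth
    return math.exp(r * t)

def logistic(t, r=1.0, K=1000.0):
    # S-curve: looks exponential early, flattens out near the capacity K
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves track each other closely...
for t in [0, 2, 4]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))

# ...but later the S-curve saturates while the exponential keeps exploding
for t in [8, 12]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

If you only ever observe the early part of the curve, extrapolating it as exponential looks perfectly reasonable, which is exactly the trap the second article describes.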

In my opinion, the second article was much better than the first one. The first one was quite futuristic and made some questionable assumptions: for example, that exponential growth will simply continue, or that an AI given a simple task would maximize its result by inventing ways to get rid of every distraction or factor that could influence the process negatively, and would end up eradicating humans along the way. The second article counters that an AI smart enough to do all of that would also be smart enough to realize that this is not the goal of the task, and that it would only harm humans.

When it comes to the authors, I have to say that I trust the second one more than the first. The first author had obviously done extensive research on the field and talked to a lot of different experts, but in the end the second author actually worked in AI and had first-hand experience in the field.

My view did not really change during the semester. I still believe that we can get an AI that outperforms us humans at a lot of tasks, but I do not think we will have a super-AI that kills us all. During the semester we saw a lot of different approaches to AI. Some of them learned certain things on their own (e.g. decision trees), but trees like that will not start teaching themselves previously unknown domains. Others, like expectimax, relied even more on rules we supply. So this semester we have not really seen AI approaches that are able to generalize well.
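The point about expectimax depending on our rules can be made concrete with a toy sketch (the game tree, leaf values, and chance probabilities below are all invented for illustration): every part of it is hand-written by us, and nothing is learned.

```python
# Toy expectimax: the tree structure, the leaf payoffs, and the chance
# probabilities are all rules we wrote down ourselves.

def expectimax(node):
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "max":      # our agent picks the best child
        return max(expectimax(child) for child in node[1])
    if kind == "chance":   # weighted average over random outcomes
        return sum(p * expectimax(child) for p, child in node[1])

tree = ("max", [
    ("chance", [(0.5, ("leaf", 10)), (0.5, ("leaf", -10))]),  # risky move
    ("chance", [(0.9, ("leaf", 3)),  (0.1, ("leaf", 1))]),    # safe move
])

print(expectimax(tree))  # expected value of the agent's best move
```

The algorithm only evaluates the world model we gave it; change the tree and it knows nothing, which is why I would not call this generalization.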
