Brian Hertzog

Bridging The Gap To Artificial Intelligence

I've never written anything like this. I'm not pretending to be a futurist or computer scientist, but lately I've been thinking about economic trends that will have a significant impact on future generations (for investing purposes). Last week, this announcement spread rapidly across the web. A team of researchers created a "chatbot" that passed the Turing test, convincing 30% of a human judging panel that it (the program) was human. Following the announcement, there was speculation about the validity of the achievement, because the bot assumed the personality of a 13-year-old boy, so any errors it made were attributed to its perceived age. Legitimate or not, the researchers' accomplishment should be noted. We're getting closer.

That's when I started thinking about this post. What does it take to give something "artificial intelligence"? Here are the three and a half principles I believe programs or robots will need in order to adopt a more human-like understanding of the world.

1 – The desire to pass on data or replicate (reproductive instincts)

The first and perhaps most fundamental instinct is the desire to transfer data. This is evident in every living organism, including the most “basic,” such as bacteria and plants. Could you create an algorithm with the ability to archive experiences and share them with potential mates? For example, if Robot 1 plays 100 chess games and then talks with Robot 2, who has played 100 different chess games, Robot 3 would have to inherit a sample of data from both Robot 1 and Robot 2.
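A minimal Python sketch of that inheritance idea, with entirely made-up names and data (real work on this would use far richer representations than a list of game labels):

```python
import random

def inherit_experience(parent_a, parent_b, sample_size):
    """Combine two parents' archived experiences and pass a random
    sample on to a child agent -- a crude stand-in for reproduction."""
    combined = parent_a + parent_b
    return random.sample(combined, min(sample_size, len(combined)))

# Robot 1 and Robot 2 each archive 100 (hypothetical) chess games.
robot1 = [f"game_a_{i}" for i in range(100)]
robot2 = [f"game_b_{i}" for i in range(100)]

# Robot 3 inherits a sample drawn from both parents' experience.
robot3 = inherit_experience(robot1, robot2, sample_size=100)
```

The random sampling is just one possible policy; an alternative would be to weight the sample toward games the parents won.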

2 – Self-preservation (survival instincts)

The next most basic instinct required for AI is self-preservation. A robot would have to possess an inventory of its components and some basic understanding of their function (like we do), so that if something broke, it could seek help, attempt a repair, and so on. AI would also have to assess its environment, determine whether it's safe to operate, and learn from its mistakes as we do.
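That component inventory could be sketched as simply as a status table the robot scans for faults (the component names here are invented for illustration):

```python
# Toy inventory: each component reports whether it is currently working.
inventory = {"left_wheel": True, "camera": True, "battery_latch": False}

def self_check(inventory):
    """Return the list of components that need attention."""
    return [name for name, ok in inventory.items() if not ok]

broken = self_check(inventory)
if broken:
    # In a real system this would trigger a repair or help-seeking routine.
    print("seek help for:", broken)
```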

3 – Obtain energy (hunting/foraging instincts)

Closely related to number 2, AI needs to understand how it receives energy. Newborn humans can't survive if left alone; likewise, a robot would have to understand that it requires energy to continue operating (the same way we realize we need food and water). A program would need to identify and obtain a sustainable source of energy, e.g. by scanning for electrical outlets.
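A toy version of that foraging loop might look like this, assuming the robot knows its battery level and the (x, y) positions of some outlets (both assumptions invented for the sketch):

```python
def needs_charge(battery_pct, threshold=20):
    """Decide whether the battery is low enough to go foraging."""
    return battery_pct < threshold

def nearest_outlet(outlets, position):
    """Pick the closest known outlet by Manhattan distance."""
    return min(
        outlets,
        key=lambda o: abs(o[0] - position[0]) + abs(o[1] - position[1]),
    )

# With 15% battery, head for the closest of two known outlets.
if needs_charge(15):
    target = nearest_outlet([(0, 0), (5, 5)], position=(1, 1))
```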

3.5 – Time

This last one is only half a principle because I'm not totally sure AI would need to understand time (at least in the same way we do). Humans understand that a life lasts approximately 100 years. We're aware that as we age, certain things become increasingly difficult, like having children. We also reference what we learn in relation to time, e.g. if we burned ourselves on a stove when we were young, we no longer touch stovetops without testing for heat. It would be interesting to give a program the understanding that, no matter what, after 100 years it would be terminated.
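The hard-termination idea could be sketched as an agent with a built-in clock (measured here in abstract ticks rather than years; whether awareness of the deadline would change the agent's behavior is exactly the open question):

```python
class MortalAgent:
    """Toy agent that knows it will be terminated after a fixed lifespan."""

    LIFESPAN = 100  # arbitrary number of ticks, standing in for years

    def __init__(self):
        self.age = 0

    def tick(self):
        """Advance one time step; return False once life is up."""
        self.age += 1
        return self.age < self.LIFESPAN
```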

Summary

I'm relatively ignorant of the research being done in artificial intelligence. I wish I could provide hundreds of references to studies attempting the above principles and their subsequent results. Alas, I have no such references; I simply wanted to share some thoughts. That said, I'm confident that we're getting closer to the singularity. IBM's Watson has already proven it can beat top Jeopardy players, and Deep Blue beat the reigning world chess champion back in 1997. What do we have to do to close the last mile? I'd love to read your thoughts and ideas in the comments below.