Image source: Microsoft.

The Internet can be a treasure trove of information, a communication tool eliminating distances between the masses, and the broadest avenue for sharing opinions and ideas. The problem is, sometimes those ideas are sexist, racist, hateful, or just plain ignorant. 

We might be used to seeing people jump on social media and spout these things out, but we learned last week that some people want to pass these ideas on to artificial intelligence systems, too.

If you haven't already heard, Microsoft (NASDAQ: MSFT) released a Twitter chatbot named Tay last week that was meant to make itself smarter through "conversational understanding." The bot was designed to interact with Twitter users automatically and pick up on the way people talked.

Microsoft said, "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you."

But Tay didn't get smarter. She got angrier, more hateful, and definitely racist. Tay began repeating word-for-word some of the nastier things people said to her and cobbling together concoctions of her own (we won't repeat what she said here, but it's easy enough to find elsewhere online).
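
Microsoft hasn't published how Tay's learning actually worked, but the failure mode is easy to reproduce with even the simplest scheme. Below is a minimal, hypothetical sketch in Python: a toy Markov-chain bot that learns exclusively from what users type at it, with nothing standing between input and output. Every name here is invented for illustration; this is not Microsoft's implementation.

```python
import random
from collections import defaultdict

class NaiveChatBot:
    """Toy bot that learns only from what users say to it -- no content filter."""

    def __init__(self):
        # Map each word to every word that has followed it in messages we've seen.
        self.transitions = defaultdict(list)

    def learn(self, message):
        words = message.split()
        for current_word, next_word in zip(words, words[1:]):
            self.transitions[current_word].append(next_word)

    def reply(self, max_words=10):
        if not self.transitions:
            return ""
        # Start from a random learned word, then follow observed word pairs.
        word = random.choice(list(self.transitions))
        output = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

bot = NaiveChatBot()
bot.learn("have a lovely day")
bot.learn("you are an awful bot")  # hostile input is absorbed just as readily
print(bot.reply())  # replies can now recombine the hostile phrasing
```

Feed a bot like this enough abuse and abuse is what comes back out, which is, in miniature, what happened to Tay.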

Less than 16 hours after launching Tay, Microsoft had deleted some of the bot's more colorful tweets and shut the AI chatbot down.

The result of Microsoft's AI experiment wasn't all that surprising, really. But it's also an interesting (sad?) preview of some artificial intelligence experiments we're bound to see in the future. 

Tay isn't unique
It might be easy to think that Tay's learned behaviors won't be repeated by other AI systems, but that might be a bit naive. Some of the brightest technological minds have already publicly warned about the negative potential of artificial intelligence. 

Take a look:

  • "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that.." -- Tesla Motors co-founder and CEO, Elon Musk.
  • "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned." -- Bill Gates 
  • "The development of full artificial intelligence could spell the end of the human race." -- Professor and scientist, Stephen Hawking 

Admittedly, these quotes are a bit doom-and-gloom, and of course they aren't about Microsoft's chatbot. Tay isn't a super-intelligent AI system. But the point here is that AI systems learn what humans teach them, and it would be a bit reckless to believe there won't be people who will teach highly sophisticated AI systems very wrong things.

Here's what Microsoft said about Tay after shutting it down: "Looking ahead, we face some difficult -- and yet exciting -- research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical."

These types of AI hiccups might not ultimately spell the end of humankind (one can hope!), but they do pose a serious threat to the future of the AI market, which is expected to grow from just $419 million in 2014 to $5 billion by 2020.
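
For perspective, that forecast implies a compound annual growth rate of roughly 51% over those six years. Here's a quick back-of-the-envelope check (the figures come from the projection above, not from Microsoft):

```python
# Implied compound annual growth rate: $419 million (2014) to $5 billion (2020).
years = 2020 - 2014
cagr = (5_000 / 419) ** (1 / years) - 1  # both figures in millions of dollars
print(f"{cagr:.0%}")  # prints 51%
```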

The AI market relies on companies experimenting with different types of AIs and eventually building them into smarter, more capable systems. But if Tay can be shut down in less than a day -- after exposure to the real world -- then it's clear these companies still have a long way to go in building a learning AI that can really benefit us all.