Watson and Waffles Part 1

Link to Part 2: https://vimeo.com/180517533

Watson for Entertainment

We wanted to create a game that took a step beyond virtual reality, building a sense of immersion by bringing familiar faces and your own drawings to life.

By applying Watson’s Visual Recognition API we can tailor the experience to individual users. Players can feel connected to characters that take on the appearance of friends or family. Props and objects can also be drawn into the game, letting players go beyond gesture recognition and have their art recognized for what it is. Whether it took them a few seconds or a few hours to create, the image can be translated into a recognizable in-game object. And this isn’t limited to games: much like VR hit the gaming market first but found uses in several other fields, the same approach applies elsewhere.

Let’s say you forgot the name of an object and can only use your own scribbles as a search term. This opens new possibilities when it comes to searching inventories, referencing art, and more. Spawning objects in a virtual world also has professional uses, such as architecture. Imagine being able to draw and scale a house around you, quickly sketching in spaces for the windows and doors with the drawing skills you’ve had since you were a kid.

On the first day of the hack I asked several participants if I could take a picture of their face, exaggerated expressions encouraged. We used those photos to teach Watson to tell the difference between emotional states such as sad and angry. My favorite was teaching Watson about smiles.
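For anyone curious about the plumbing, here is a rough sketch of how a custom emotion classifier could be trained. It assumes the Visual Recognition v3 REST API as it existed around the time of the hack (API-key authentication); the key, classifier name, and zip files of example photos are placeholders.

```python
import requests

# Placeholder credentials and training data -- substitute your own.
API_KEY = "YOUR_VISUAL_RECOGNITION_API_KEY"
URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers"

# Each zip holds example face photos for one emotion class.
files = {
    "happy_positive_examples": open("happy_faces.zip", "rb"),
    "sad_positive_examples": open("sad_faces.zip", "rb"),
    "angry_positive_examples": open("angry_faces.zip", "rb"),
}

response = requests.post(
    URL,
    params={"api_key": API_KEY, "version": "2016-05-20"},
    files=files,
    data={"name": "hackathon_emotions"},
)
print(response.json())  # includes the new classifier_id once training kicks off
```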

Once we could tell the emotions apart, we needed to apply these images to an in-game character model. So we had Watson analyze each image for where the person’s face was located and crop it accordingly. For gamers, this process would be as simple as selecting a folder with a collection of images and letting Watson take care of the rest. Then, voilà, you can see you and your friends exploring the depths of a dark dungeon.
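Here is a small sketch of that cropping step, assuming the v3 detect_faces endpoint and Pillow for the image work; the file names and key are illustrative.

```python
import requests
from PIL import Image

API_KEY = "YOUR_VISUAL_RECOGNITION_API_KEY"
URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/detect_faces"

def crop_face(image_path, out_path):
    # Ask Watson where the face is in the photo.
    with open(image_path, "rb") as f:
        resp = requests.post(
            URL,
            params={"api_key": API_KEY, "version": "2016-05-20"},
            files={"images_file": f},
        )
    faces = resp.json()["images"][0].get("faces", [])
    if not faces:
        return None

    # Crop the original image to the detected bounding box.
    loc = faces[0]["face_location"]
    box = (loc["left"], loc["top"],
           loc["left"] + loc["width"], loc["top"] + loc["height"])
    Image.open(image_path).crop(box).save(out_path)
    return out_path

crop_face("teammate.jpg", "teammate_face.png")
```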

That wasn’t the only information involved in generating the characters properly. We also gathered gender and age estimates to spawn an appropriate mesh. This meant that if my own picture showed up in game it would be on a female character, while my teammates’ would be on male character bodies. In addition, the detected emotions were used to apply the proper animation, so someone smiling could be jumping with joy in a corner while someone angry could be yelling at a wall. These details are just the tip of the iceberg when it comes to tailoring an experience with small touches unique to the user.
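The same detect_faces response carries gender and age estimates, so a small helper can turn one detected face plus its classified emotion into character settings. The mesh and animation names below are purely illustrative; only the response fields come from the service.

```python
# Hypothetical mapping from one entry of the "faces" list (see above)
# plus an emotion label to the mesh and animation the game should use.
def pick_character(face, emotion_class):
    gender = face["gender"]["gender"]  # e.g. "MALE" or "FEMALE"
    age = face["age"]
    age_mid = (age["min"] + age.get("max", age["min"])) // 2

    mesh = "female_body" if gender == "FEMALE" else "male_body"
    if age_mid < 16:
        mesh += "_child"  # illustrative variant name

    animations = {"happy": "jump_for_joy", "angry": "yell_at_wall", "sad": "slump"}
    return {"mesh": mesh, "animation": animations.get(emotion_class, "idle")}
```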

Next we needed to use Watson in a way that was interactive and served as an essential game mechanic, which brings us to spawning objects from the user’s own art.

When the game starts you’re given clues, generated by Watson, to find a certain person. You soon discover that to find this person you need to access blocked-off areas, each requiring you to draw the appropriate object to overcome the obstacle. Here we had a pre-made mesh spawn once Watson recognized the drawing, but there is always room to get even more creative and make your drawings the actual objects you interact with. That could give gamers a far more memorable experience, with real ownership over their accomplishments.
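The drawing check works the same way as the emotion classifier: a second custom classifier is trained on example drawings, and the player’s sketch is sent to classify. A minimal sketch, assuming a hypothetical classifier ID and the 2016-era classify endpoint:

```python
import json
import requests

API_KEY = "YOUR_VISUAL_RECOGNITION_API_KEY"
URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify"
DRAWING_CLASSIFIER_ID = "drawn_objects_123456789"  # hypothetical custom classifier

def recognize_drawing(drawing_path, threshold=0.5):
    # The parameters file limits scoring to our custom drawing classifier.
    params_file = json.dumps({
        "classifier_ids": [DRAWING_CLASSIFIER_ID],
        "threshold": threshold,
    })
    with open(drawing_path, "rb") as f:
        resp = requests.post(
            URL,
            params={"api_key": API_KEY, "version": "2016-05-20"},
            files={"images_file": f, "parameters": ("params.json", params_file)},
        )
    classifiers = resp.json()["images"][0].get("classifiers", [])
    if not classifiers or not classifiers[0]["classes"]:
        return None
    # Return the best-scoring label, e.g. "ladder", so the game can spawn that mesh.
    classes = classifiers[0]["classes"]
    return max(classes, key=lambda c: c["score"])["class"]
```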

It was an intensive weekend, but I was happy to see what came out of it.
