5. Will AI mean we stop saying please?
My second session at SXSW was 'AI and Technology for Real Humans', about the experience and challenges of developing Artificial Intelligence personas that users are happy to engage with. The speakers were Wally Brill from Google, Ed Doran from Microsoft, Andrew Hill from Mercedes-Benz and Anna Pickard from Slack.
This was one of those sessions where, to be honest, the actual content was different from my expectations, which was probably my fault. The speakers were expert, and took a light-hearted approach to the nuances of developing AI interfaces. It was a bit of a jump if you don't currently - consciously - engage with any AI personas, and I suppose the learning was that this is going to be a thing. Systems are going to be talking to us, they're going to be doing it everywhere, and each one will have a unique tone of voice. Not sure what you think about that. Andrew Hill talked about the 'personality' of people's cars - the tone of voice that drivers will want their car to take when it tells them about traffic problems. Neutral? Jokey? Apologetic?
The video shows people playing with robot puppies at the Sony House. You stroked them and tickled them, and tried to lure the puppies over to come and play. How readily you can engage with AI, even puppy AI, is weird.
The concept for Meaning is that it will enable real people to communicate with each other, in a friendly, supportive way, and this made me think about the underlying infrastructure. What does the tone of the interface need to be like to encourage young people in challenging contexts to feel supported, and mentors in the Global North to feel that what they are doing is really making a difference? One of the many things that SXSW made me think about.
The quote of the session came from Anna Pickard, who said that, though she teaches her children to be polite, it's weird when you introduce that to AI. Siri doesn't need people to say please, and doesn't mind if we are rude or dismissive. Is this going to affect how we relate to one another...?
Feel free to comment...