Dr. Sue Keay, Chief Operating Officer, Australian Centre for Robotic Vision
Working in a robotics lab we often host tours from people who work in a wide range of sectors. They all have an interest in how disruptive technologies, like the ones we are working on in robotic vision, will impact their business. To make a very broad generalisation, we see a lot of interest in disruption coming from people in the services sector, particularly financial services.
One thing that drives robotics researchers crazy is the assumption that because we are in “robotics”, we do robotic process automation and can speak knowledgeably about chatbots and other virtual robots. The line in the sand for us is whether the technology is embodied and physically interacts with the world. If it is software, an iPhone app, or a chatbot, it does not meet our test of what is a “real” robot. A robot is an autonomous machine that senses, thinks, and interacts with the physical environment.
Traditionally, robotic process automation hasn’t involved what we would call real robots, but that is changing. With the advent of nudge robotics, the use of robots to influence human decision-making, we are starting to see physical robots become part of robotic process automation.
Robots can now act as concierges, guiding customers through the steps required to prepare for common transactions, such as opening a bank account. They can interact with customers, ask questions that require a response, and move the customer along the process until it reaches a level of complexity that requires the involvement of a human. The aim is not to replace human staff but to free them to focus on issues requiring higher powers of cognition. This is “taking the robot out of the human”, as Professor Leslie Willcocks describes robotic process automation: tasking a robot with routine, repetitive requests, exactly as most RPA systems do, except with a physically active robot involved.
Robots, by virtue of being “embodied” (having an active physical presence in the world), can use techniques that are not possible to deliver from a mobile phone, touch screen, or chatbot. They can physically interact with humans and be positioned in locations where humans can be persuaded to engage with them. A social robot placed at the front entrance of a building, for example, can greet customers as they enter and encourage interaction through voice and gestures. Many people will choose to interact with an animated object rather than join a queue, or take a number and wait for service from a person.
We are just starting to navigate the uncharted waters of the ethics of nudging human behaviour using robots. While people are often attached to possessions like cars, and anthropomorphise them by giving them human names, social robots can take emotional connection to another level. At what point will we need regulations to prevent people from using robots to take advantage of others?
Modern social robots like SoftBank Robotics’ Pepper can both recognise and display emotion, and behave accordingly, for example by cheering you up when they sense you are down. But we forget at our peril that robots are programmed machines. Robotic nudging can improve the effectiveness of RPA, but trust in robots will be undermined when they begin to be heavily deployed in retail settings, prompting us to buy things and to respond to advertising.