How to use robots to improve developmental outcomes in children
Dr. Ayanna Howard, Dean of the College of Engineering at The Ohio State University, talks about the importance of verbal feedback—even if it is from a robot.
In 2005, Dr. Ayanna Howard was faced with a difficult realization: to pursue her ambitious goals in robotics, she would have to resign from her role at NASA. It wasn’t an easy decision. Howard had worked at NASA for fifteen years. At the agency’s Jet Propulsion Laboratory, she led a team of engineers and scientists working to advance the intelligence of robots for future Mars missions.
The work was innovative and satisfying. However, Howard decided that it was time to pursue research in the area that had drawn her to robotics in the first place—to understand how intelligent technologies can adapt to and function within a human-centered world. Howard became the director of the Human-Automation Systems (HumAnS) Lab at the Georgia Institute of Technology. In 2013, she founded Zyrobotics, a startup that develops educational products and services for children with differing needs.
In this conversation with the Amazon re:MARS team, Howard talks about a particularly meaningful intersection of technology and education: the design of algorithms and user interfaces to engage children of diverse abilities in STEM education and coding.
How did you get interested in utilizing robotics to help students of diverse abilities develop STEM skills?
I ran a STEM workshop for middle school students when I was on the faculty at Georgia Tech. One of the students in our 2010 workshop was visually impaired. She was having a difficult time getting hands-on with the programming tools and interfaces we had provided in the workshop. Back then, I wasn’t as familiar with the difficulties that children with diverse abilities face in developing skills in fields like computer programming and robotics.
I began to look at the options available to help my student. What I found was startling. At the time, many screen readers cost over a thousand dollars. They were not only expensive; they also fell short in providing engaging interfaces for students.
The situation is not much better today. We still face problems in getting students with disabilities to pursue careers in STEM. Research has indicated that students with disabilities make up as much as twelve percent of the secondary school population, but only one percent of enrollment in Advanced Placement math and science courses.
What were some of the approaches you designed to tackle this problem?
The first thing we did was develop new robot hardware platforms to give students more accessible interfaces. We focused on enhancing the existing screen readers. We developed mechanisms to provide haptic feedback to students with visual disabilities. We also programmed robots to give verbal confirmations after they performed specific tasks.
My team used the LEGO MINDSTORMS NXT robot kit for our experiment. It was cost-effective, and it had a proven track record of engaging students through competitions such as the FIRST LEGO League. We reprogrammed a Nintendo Wii remote to provide haptic feedback. It had a variety of buttons that we could use for different inputs. Crucially, it also had a motor that could create vibrations for haptic feedback.
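As a rough illustration of how a completed task might drive both feedback channels, here is a minimal sketch. The FeedbackChannel class, the choice of the pyttsx3 text-to-speech engine, and the remote’s rumble interface (modeled on what the cwiid library exposes for Bluetooth-paired Wii remotes) are assumptions for illustration, not the team’s actual implementation.

```python
# Hypothetical sketch: pairing a verbal confirmation with haptic pulses
# after a robot finishes a task. Class and method names are illustrative
# assumptions, not the interfaces Howard's team used.

import time
import pyttsx3  # off-the-shelf text-to-speech; one plausible choice


class FeedbackChannel:
    """Confirms completed robot tasks with speech and vibration."""

    def __init__(self, remote):
        # `remote` is assumed to expose a boolean `rumble` attribute,
        # as the cwiid library does for Wii remotes.
        self.remote = remote
        self.tts = pyttsx3.init()

    def pulse(self, seconds=0.3, count=2):
        """Short vibration bursts a student can feel without looking."""
        for _ in range(count):
            self.remote.rumble = True
            time.sleep(seconds)
            self.remote.rumble = False
            time.sleep(seconds)

    def confirm(self, task_name):
        """Announce and signal that a programmed task finished."""
        self.pulse()
        self.tts.say(f"{task_name} complete.")
        self.tts.runAndWait()


# Example usage (robot and its drive_forward method are placeholders):
# feedback = FeedbackChannel(remote=cwiid.Wiimote())
# robot.drive_forward(cm=20)
# feedback.confirm("drive forward")
```

The point of the design is that every confirmation reaches the student through two non-visual channels at once, so a program’s progress can be followed entirely by touch and sound.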
We asked students to use our setup to take an introductory programming course. The results were encouraging. In a survey conducted after the experiment, students said they felt more capable of working with robots and computers, and we saw a significant uptick in the number who said they could see themselves working with robots and computers when they grew up.
How did you build on the success of this first experiment?
I began to talk to parents of children with varied abilities. I also started going to local clinics and hospitals to talk with clinicians about using robots to provide therapy for children in their homes. Using feedback from parents and clinicians alike, my team began to design experiments and interventions to determine how robots could best provide therapy to children at home.
Two innovations we worked on come to mind.
The first was designed to determine whether robots could be effective in providing therapy to children with cerebral palsy. As you might know, over half of children diagnosed with cerebral palsy have difficulty reaching for, grasping, and manipulating objects.
For our experiment, we developed a new virtual reality (VR) game called Super Pop. The system consisted of a laptop running 64-bit Windows and a Microsoft Kinect camera. Bubbles appeared on the screen during the game, and children had to pop as many of them as possible by moving their arms. Our nearly two-foot-tall robot, Darwin, provided verbal feedback throughout the experiment. He would say encouraging things like “Wow! Good game! Let’s play another!” at the end of every game.
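To make the mechanics concrete, here is a minimal sketch of a game loop in that spirit. The function names (get_hand_position, darwin_say), the pop radius, and the scoring are illustrative assumptions, not the published Super Pop implementation.

```python
# Illustrative sketch of a Super Pop-style game loop: a bubble pops when
# the tracked hand comes within reach of it, and the robot comments at
# the end of the game. All names and thresholds are assumptions.

import math
import random

POP_RADIUS = 60  # pixels; how close the hand must get to pop a bubble


def new_bubble(width=640, height=480):
    """Spawn a bubble at a random on-screen position."""
    return (random.randint(0, width), random.randint(0, height))


def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def play_round(get_hand_position, darwin_say, n_bubbles=10, max_frames=3000):
    """Run one game and return the fraction of bubbles the child reached."""
    popped = 0
    bubble = new_bubble()
    for _ in range(max_frames):
        hand = get_hand_position()  # assumed Kinect skeleton lookup
        if hand and distance(hand, bubble) < POP_RADIUS:
            popped += 1
            if popped == n_bubbles:
                break
            bubble = new_bubble()
    # Verbal feedback from the robot after every game, as in the study.
    darwin_say("Wow! Good game! Let's play another!")
    return popped / n_bubbles
```

Tracking the fraction of successful reaches per game is what makes it possible to compare sessions with and without the robot’s feedback.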
The interesting thing we found was that positive reinforcement matters—even if it is from a robot. Children exhibited a higher percentage of successful reaches when Darwin’s feedback was present.
We developed another experiment for children with autism spectrum disorder, working with children as young as three years old. Timing is everything when it comes to children diagnosed with autism. Research has shown that the younger children are when they enter an intervention, the greater the gains they can make in their developmental skills.
Many autism therapies are task-focused. Children complete specific assignments under the guidance of a therapist.
For our sessions, we asked children to play a turn-taking card-matching game on a tablet. Kinect cameras mounted on tripods captured as much of the interaction as possible. By measuring factors like the angle of the child’s body relative to the table, the system could detect how engaged the child was during a session. When it sensed that the child was disengaged, a robot provided visual prompts. This experiment, like the one I described earlier, was successful: the robot’s presence produced a tangible lift in engagement.
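A minimal sketch of that kind of engagement heuristic might look like the following. The joint inputs, the smoothing window, and the lean threshold are illustrative assumptions rather than the study’s actual model.

```python
# Hypothetical sketch of the engagement heuristic described above: use
# the angle of the child's upper body relative to vertical to decide
# whether the robot should issue a prompt. Values are assumptions.

import math
from collections import deque

LEAN_THRESHOLD_DEG = 30   # leaning away beyond this counts as disengaged
WINDOW = 90               # frames (~3 s at 30 fps) of smoothing


def lean_angle(head_xyz, torso_xyz):
    """Angle of the head-torso segment from vertical, in degrees."""
    dx = head_xyz[0] - torso_xyz[0]
    dy = head_xyz[1] - torso_xyz[1]   # y is up in Kinect skeleton space
    dz = head_xyz[2] - torso_xyz[2]
    horizontal = math.hypot(dx, dz)
    return math.degrees(math.atan2(horizontal, dy))


class EngagementMonitor:
    def __init__(self, prompt):
        self.angles = deque(maxlen=WINDOW)
        self.prompt = prompt  # callable that makes the robot cue the child

    def update(self, head_xyz, torso_xyz):
        """Call once per frame with the tracked head and torso joints."""
        self.angles.append(lean_angle(head_xyz, torso_xyz))
        if len(self.angles) == WINDOW:
            mean_lean = sum(self.angles) / WINDOW
            if mean_lean > LEAN_THRESHOLD_DEG:
                self.prompt()          # robot prompts the child back in
                self.angles.clear()    # avoid repeated back-to-back prompts
```

Smoothing over a window of frames keeps the robot from reacting to momentary fidgeting, which matters when the goal is to prompt only genuine disengagement.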
What were some of the biggest problems you encountered?
Early on, we found that the robots didn’t always work as expected in children’s homes. We couldn’t control for things like lighting and furniture the way we could in a lab, which is a much more controlled environment.
As a result, we had to design our robots with the assumption that every environment was going to be different. To simplify matters, we stripped away everything that got in the way of the most important function: the robot being able to play with children and provide helpful feedback.
There’s an important lesson here. You should always aim to scale back the complexity of human-robot interactions. This is especially important after COVID-19, as the adoption of robots in our society increases in places like retail outlets, airports, and assembly lines.
Staying focused on the essentials will also allow for increased use of robots in education. As parents, we often struggle to meet work-life demands in a post-pandemic world. My sincere wish is that every child gets quality one-on-one time with a robot. Every child deserves individualized attention, and there’s no greater joy than seeing the eyes of a child light up during one of these interactions.