Mattel scraps kid-centered AI device amid privacy concerns
- Mattel, the toymaker whose sales rose 3% to $974.5 million in Q2 2017 from a year earlier, canceled plans to sell a voice-activated device for children after pressure from parent and privacy groups, MediaPost reported. The company’s Aristotle gadget, which was described as a kid version of Amazon Alexa, had been touted as a baby monitor and voice-controlled computer.
- The connected device was met with alarm from advocacy groups who claimed Aristotle would spy on kids from infancy through adolescence. The Campaign for a Commercial-Free Childhood said children’s bedrooms should be free of corporate snooping and data collection that could be used for ad targeting. More than 1,500 people signed a petition asking Mattel not to release Aristotle.
- Sen. Ed Markey (D-Massachusetts) and Rep. Joe Barton (R-Texas) questioned Mattel CEO Margaret Georgiadis about the device, saying in a letter that it "has the potential to raise serious privacy concerns."
Mattel's experience with Aristotle is an important example for other brands considering entering the digital assistant space, where consumers are concerned about potential privacy issues. The main takeaway is that, as with any new technology, brands should research how customers will react before launch. More broadly, mobile technology is opening up a host of potential new privacy and legal issues that brands need to pay attention to, including some around proximity marketing.
While Mattel is withdrawing its Aristotle device from the market, tech firms continue to market voice-activated digital assistants that may end up acting as virtual babysitters, alongside the smartphones, tablets and TVs that populate millions of homes. The difference between Aristotle and digital assistants like Amazon Alexa or Google Assistant is that Mattel's device was going to be marketed specifically to children as a kind of toy.
Mattel said Aristotle could be programmed to play a lullaby, emit white noise or turn on a night light to soothe a waking baby back to sleep. The monitor could send data on nap times and diaper changes to an app on parents' smartphones and upload it to the cloud, with permission. In addition, Aristotle could be programmed to help buy diapers, reinforce good manners in kids (by requiring the word "please" in voice commands) and teach them a foreign language, per Mattel.
But many parents balked at the idea of a Big Brother-esque device monitoring their kids and collecting information on them. Josh Golin, executive director of the Campaign for a Commercial-Free Childhood, cited additional worries, such as using kids as subjects for AI experiments, training youngsters to accept constant surveillance of their activities and teaching babies to form bonds with inanimate objects rather than building real relationships with other humans.
Mattel in a statement acknowledged its sensitivity to privacy issues. The toymaker last year agreed to pay $200,000 to the New York Attorney General’s office after being accused of violating the federal Children’s Online Privacy Protection Act. Mattel was alleged to have collected children’s information without parental authorization and to have shared that information with third parties.
This isn't the only time Mattel has been in the hot seat of late. In September, the company was found to have violated regulations by not disclosing that some of the videos in a Barbie mobile app for kids were actually ads. A main concern was that the users of "Barbie Sparkle Blast" are very young and might not understand the ads' intent, especially without clear labeling.