How Household AI Threatens Privacy
As AI continues to evolve, our privacy is put at risk. We must pray that this industry will be well regulated and that we will be protected. This piece, from MIT Technology Review, calls for consideration and prayer.
From MIT Technology Review. In the fall of 2020, gig workers in Venezuela posted a series of images to online forums where they gathered to talk shop. The photos were mundane, if sometimes intimate, household scenes captured from low angles, including some you really wouldn't want shared on the Internet.
In one particularly revealing shot, a young woman in a lavender T-shirt sits on the toilet, her shorts pulled down to mid-thigh.
The images were not taken by a person, but by development versions of iRobot's Roomba J7 series robot vacuum. They were then sent to Scale AI, a startup that contracts workers around the world to label audio, photo, and video data used to train artificial intelligence.
They were the sorts of scenes that internet-connected devices regularly capture and send back to the cloud, though usually with stricter storage and access controls. Yet earlier this year, MIT Technology Review obtained 15 screenshots of these private photos, which had been posted to closed social media groups. …
iRobot, the world's largest vendor of robotic vacuums, which Amazon recently acquired for $1.7 billion in a pending deal, confirmed that these images were captured by its Roombas in 2020. All of them came from "special development robots with hardware and software modifications that are not and never were present on iRobot consumer products for purchase," the company said in a statement. They were given to "paid collectors and employees" who signed written agreements acknowledging that they were sending data streams, including video, back to the company for training purposes. According to iRobot, the devices were labeled with a bright green sticker that read "video recording in progress," and it was up to those paid data collectors to "remove anything they deem sensitive from any space the robot operates in, including children."
In other words, by iRobot's estimation, anyone whose photos or video appeared in the streams had agreed to let their Roombas monitor them. iRobot declined to let MIT Technology Review view the consent agreements and did not make any of its paid collectors or employees available to discuss their understanding of the terms.
While the images shared with us did not come from iRobot customers, consumers regularly consent to having their data monitored to varying degrees on devices ranging from iPhones to washing machines. It's a practice that has only grown more common over the past decade, as data-hungry artificial intelligence has been increasingly integrated into a whole new array of products and services. Much of this technology is based on machine learning, a technique that uses large troves of data, including our voices, faces, homes, and other personal information, to train algorithms to recognize patterns. The most useful data sets are the most realistic, making data sourced from real environments, like homes, especially valuable. Often, we opt in simply by using the product, as noted in privacy policies with vague language that gives companies broad discretion in how they disseminate and analyze consumer information.
The data collected by robot vacuums can be particularly invasive. They have "powerful hardware, powerful sensors," says Dennis Giese, a PhD candidate at Northeastern University who studies the security vulnerabilities of Internet of Things devices, including robot vacuums. "And they can drive around in your home, and you have no way to control that." This is especially true, he adds, of devices with advanced cameras and artificial intelligence, like iRobot's Roomba J7 series.
This data is then used to build smarter robots whose purpose may one day go far beyond vacuuming. But to make these data sets useful for machine learning, individual humans must first view, categorize, label, and otherwise add context to each bit of data. This process is called data annotation.
"There's always a group of humans sitting somewhere, usually in a windowless room, just doing a bunch of point-and-click: 'Yes, that is an object or not an object,'" explains Matt Beane, an assistant professor in the technology management program at the University of California, Santa Barbara, who studies the human work behind robotics.
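The point-and-click workflow Beane describes reduces, at its core, to attaching category tags to raw data records. The following Python sketch is purely illustrative of that idea; the class and function names are hypothetical and do not reflect any real annotation platform's schema or API.

```python
# Illustrative sketch of data annotation: a human labeler attaches
# simple category tags to frames captured by a device so the labeled
# set can later train a machine-learning model. All names here are
# hypothetical, not any real platform's API.

from dataclasses import dataclass, field

@dataclass
class Frame:
    frame_id: str                       # one image from one device
    labels: list = field(default_factory=list)

def annotate(frame: Frame, label: str, is_object: bool) -> None:
    """Record one point-and-click decision: is this an object or not?"""
    frame.labels.append({"label": label, "is_object": is_object})

# A labeler tags what the robot's camera saw in a single frame.
frame = Frame("dev-robot-0042/frame-0001")
annotate(frame, "kitchen countertop", True)
annotate(frame, "shadow on floor", False)

print(len(frame.labels))  # 2 decisions recorded for this frame
```

The value, and the privacy exposure, comes from scale: thousands of workers repeating this tiny decision over millions of frames.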
The 15 images shared with MIT Technology Review are just a tiny slice of a sweeping data ecosystem. iRobot has said that it has shared over 2 million images with Scale AI and an unknown quantity more with other data annotation platforms; the company has confirmed that Scale is just one of the data annotators it has used.
James Baussmann, iRobot's spokesperson, said in an email the company had "taken every precaution to ensure that personal data is processed securely and in accordance with applicable law," and that the images shared with MIT Technology Review were "shared in violation of a written non-disclosure agreement between iRobot and an image annotation service provider." In an emailed statement a few weeks after we shared the images with the company, iRobot CEO Colin Angle said that "iRobot is terminating its relationship with the service provider who leaked the images, is actively investigating the matter, and [is] taking measures to help prevent a similar leak by any service provider in the future." The company did not respond to additional questions about what those measures were.
Ultimately, though, this set of images represents something bigger than any one individual company's actions. They speak to the widespread, and growing, practice of sharing potentially sensitive data to train algorithms, as well as the surprising, globe-spanning journey that a single image can take: in this case, from homes in North America, Europe, and Asia to the servers of Massachusetts-based iRobot, from there to San Francisco-based Scale AI, and finally to Scale's contracted data workers around the world (including, in this instance, Venezuelan gig workers who posted the images to private groups on Facebook, Discord, and elsewhere).
Together, the images reveal a whole data supply chain, and new points where personal information could leak out, that few consumers are even aware of.
"It's not expected that human beings are going to be reviewing the raw footage," emphasizes Justin Brookman, director of tech policy at Consumer Reports and former policy director of the Federal Trade Commission's Office of Technology Research and Investigation. iRobot would not say whether data collectors were aware that humans, in particular, would be viewing these images, though the company said the consent form made clear that "service providers" would be.
"We literally treat machines differently than we treat humans," adds Jessica Vitak, an information scientist and professor at the University of Maryland's communication department and its College of Information Studies. "It's much easier for me to accept a cute little vacuum, you know, moving around my space [than] somebody walking around my house with a camera."
And yet, that's essentially what is happening. It's not just a robot vacuum watching you on the toilet; a person may be looking too. …
How and why our data ends up halfway around the world
With the raw data required for machine-learning algorithms comes the need for labor, and lots of it. That's where data annotation comes in. A young but growing industry, data annotation is projected to reach $13.3 billion in market value by 2030.
The field took off largely to meet the huge need for labeled data to train the algorithms used in self-driving vehicles. Today, data labelers, who are often low-paid contract workers in the developing world, help power much of what we take for granted as "automated" online. They keep the worst of the Internet out of our social media feeds by manually categorizing and flagging posts, improve voice recognition software by transcribing low-quality audio, and help robot vacuums recognize objects in their environments by tagging photos and videos.
Among the myriad companies that have popped up over the past decade, Scale AI has become the market leader. Founded in 2016, it built a business model around contracting with remote workers in less-wealthy nations at cheap project- or task-based rates on Remotasks, its proprietary crowdsourcing platform.
In 2020, Scale posted a new assignment there: Project IO. It featured images captured from the ground and angled upwards at roughly 45 degrees, and showed the walls, ceilings, and floors of homes around the world, as well as whatever happened to be in or on them, including people, whose faces were clearly visible to the labelers.
Labelers discussed Project IO in Facebook, Discord, and other groups that they had set up to share advice on handling delayed payments, talk about the best-paying assignments, or request assistance in labeling tricky objects.
iRobot confirmed that the 15 images posted in these groups and subsequently sent to MIT Technology Review came from its devices, sharing a spreadsheet listing the specific dates they were made (between June and November 2020), the countries they came from (the United States, Japan, France, Germany, and Spain), and the serial numbers of the devices that produced the images, as well as a column indicating that a consent form had been signed by each device's user. (Scale AI confirmed that 13 of the 15 images came from "an R&D project [it] worked on with iRobot over two years ago," though it declined to clarify the origins of or offer additional information on the other two images.)
iRobot says that sharing images in social media groups violates Scaleās agreements with it, and Scale says that contract workers sharing these images breached their own agreements.
But such actions are nearly impossible to police on crowdsourcing platforms. …
For its part, iRobot says that it shares only a subset of training images with data annotation partners, flags any image with sensitive information, and notifies the company's chief privacy officer if sensitive information is detected. …
The company specified, "When an image is discovered where a user is in a compromising position, including nudity, partial nudity, or sexual interaction, it is deleted, in addition to ALL other images from that log." It did not clarify whether this flagging would be done automatically by algorithm or manually by a person, or why that did not happen in the case of the woman on the toilet.
iRobot policy, however, does not deem faces sensitive, even if the people are minors. …
Surprise: you may have agreed to this
Robot vacuum manufacturers themselves recognize the heightened privacy risks presented by on-device cameras. "When you've made the decision to invest in computer vision, you do have to be very careful with privacy and security," says Jones, iRobot's CTO. "You're giving this benefit to the product and the consumer, but you also have to be treating privacy and security as a top-order priority."
In fact, iRobot tells MIT Technology Review it has implemented many privacy- and security-protecting measures in its customer devices, including using encryption, regularly patching security vulnerabilities, limiting and monitoring internal employee access to information, and providing customers with detailed information on the data that it collects.
But there is a wide gap between the way companies talk about privacy and the way consumers understand it.
It's easy, for instance, to conflate privacy with security, says Jen Caltrider, the lead researcher behind Mozilla's "*Privacy Not Included" project, which reviews consumer devices for both privacy and security. Data security refers to a product's physical and cyber security, or how vulnerable it is to a hack or intrusion, while data privacy is about transparency: knowing and being able to control the data that companies have, how it is used, why it is shared, whether and for how long it's retained, and how much a company is collecting to start with.
Conflating the two is convenient, Caltrider adds, because "security has gotten better, while privacy has gotten way worse" since she began tracking products in 2017. "The devices and apps now collect so much more personal information," she says.
Company representatives also sometimes use subtle differences, like the distinction between "sharing" data and selling it, that make how they handle privacy particularly hard for non-experts to parse. When a company says it will never sell your data, that doesn't mean it won't use it or share it with others for analysis.
These expansive definitions of data collection are often acceptable under companies' vaguely worded privacy policies, virtually all of which contain some language permitting the use of data for the purposes of "improving products and services," language that Rich calls so broad as to "permit basically anything."
Indeed, MIT Technology Review reviewed 12 robot vacuum privacy policies, and all of them, including iRobot's, contained similar language on "improving products and services." Most of the companies to which MIT Technology Review reached out for comment did not respond to questions on whether "product improvement" would include machine-learning algorithms. But Roborock and iRobot say it would. …
Robot vacuums are just the beginning
The appetite for data will only increase in the years ahead. Vacuums are just a tiny subset of the connected devices that are proliferating across our lives, and the biggest names in robot vacuums, including iRobot, Samsung, Roborock, and Dyson, are vocal about ambitions much grander than automated floor cleaning. Robotics, including home robotics, has long been the real prize.
Consider how Mario Munich, then the senior vice president of technology at iRobot, explained the company's goals back in 2018. In a presentation on the Roomba 980, the company's first computer-vision vacuum, he showed images from the device's vantage point, including one of a kitchen with a table, chairs, and stools, next to how they would be labeled and perceived by the robot's algorithms. "The challenge is not with the vacuuming. The challenge is with the robot," Munich explained. "We would like to know the environment so we can change the operation of the robot."
This bigger mission is evident in what Scale's data annotators were asked to label: not items on the floor that should be avoided (a feature that iRobot promotes), but items like "cabinet," "kitchen countertop," and "shelf," which together help the Roomba J series device recognize the entire space in which it operates.
The companies making robot vacuums are already investing in other features and devices that will bring us closer to a robotics-enabled future. The latest Roombas can be voice controlled through Nest and Alexa, and they recognize over 80 different objects around the home. Meanwhile, Ecovacs's Deebot X1 robot vacuum has integrated the company's proprietary voice assistant, while Samsung is one of several companies developing "companion robots" to keep humans company. Miele, which sells the RX2 Scout Home Vision, has turned its focus toward other smart appliances, like its camera-enabled smart oven.
And if iRobot's $1.7 billion acquisition by Amazon moves forward (pending approval by the FTC, which is considering the merger's effect on competition in the smart-home marketplace), Roombas are likely to become even more integrated into Amazon's vision for the always-on smart home of the future.
Perhaps unsurprisingly, public policy is starting to reflect the growing public concern with data privacy. From 2018 to 2022, there has been a marked increase in states considering and passing privacy protections, such as the California Consumer Privacy Act and the Illinois Biometric Information Privacy Act. At the federal level, the FTC is considering new rules to crack down on harmful commercial surveillance and lax data security practices, including those used in training data. In two cases, the FTC has taken action against the undisclosed use of customer data to train artificial intelligence, ultimately forcing the companies, Weight Watchers International and the photo app developer Everalbum, to delete both the data collected and the algorithms built from it.
Still, none of these piecemeal efforts address the growing data annotation market and its proliferation of companies based around the world or contracting with global gig workers, who operate with little oversight, often in countries with even fewer data protection laws. …
What do you think of this? Share your thoughts and prayers below.
(Excerpt from MIT Technology Review. Photo Credit: Canva)
Partner with Us
Intercessors for America is the trusted resource for millions of people across the United States committed to praying for our nation. If you have benefited from IFA's resources and community, please consider joining us as a monthly support partner. As a 501(c)(3) organization, it's through your support that all this is possible.


Comments
How do you protect yourself when doing business comes down to the "requirement" of accepting all of the "privacy and security" policies? If you get a consumer loan or a phone contract, even mortgage companies are requiring access to data that has nothing to do with your ability to pay back the loan, and the same goes for anything else you "agree to the privacy/security policy" for. We must agree or we cannot function.
Yes, I pray God will provide us discernment to sidestep, for as long as we can, these policies put in place to invade our lives. And pray for His protection when we cannot.
Thank you for alerting us to what is going on in this field. If this technology is shared with the CCP’s already aggressive facial and body movement recognition programs it could be universally invasive and weaponized against specific groups or individuals.
I don’t understand a lot of this article but I know enough to be very wary of signing even the most innocent-seeming memberships, agreements or other documents that have long, vague privacy and “permission” clauses.
Father God, You know things that we are not even aware of. Please give Your people discernment on how to watch, pray, and be active to strike against the evil plans of the enemy and his people. Let us have the ears to hear what the Spirit is telling us and quickly be led by You to respond. IJN I pray.