“Totalitarianism, at its essence, is an attempt at transforming reality into fiction.”
Amazon fires delivery drivers who refuse ‘biometric consent’ form
Cameras powered by artificial intelligence will record and store information about the driver’s face, location, movement, driving style, and even if the driver yawns or shows signs of drowsiness on-shift.
By David McLoone For LIFE SITE News
Amazon delivery drivers across the country face the prospect of losing their jobs if they refuse to consent to intrusive new biometrics technology inside their vans and trucks. The technology would capture and store personal information on a “driver account.”
Some 75,000 drivers in the U.S. were asked by the tech giant to sign new contracts at the end of March that permit Amazon to use camera technology, powered by artificial intelligence (AI), to identify and store information about the driver: his face, location, movement, driving style, and even whether he yawns or shows signs of drowsiness on-shift. The information collected is then shared with the dispatcher.
Failure to comply with the request for consent will result in the termination of that driver’s employment with Amazon — or with the related third-party delivery service partner (DSP) that employs him — a copy of the “Vehicle Technology and Biometric Consent Agreement” obtained by Motherboard confirmed.
Amazon disclosed in the form that vehicles will be “video-monitored by cameras that are both internal and external and that operate while the ignition is on and for up to 20 minutes after the ignition is turned off.”
“Using your photograph, this Technology may create Biometric Information, and collect, store, and use Biometric Information from such photographs.”
“This Technology tracks vehicle location and movement, including miles driven, speed, acceleration, braking, turns, and following distance … as a condition of delivery [sic] packages for Amazon, you consent to the use of Technology,” the form states.
The technology is being provided by Netradyne, a fleet management AI-technology start-up from San Diego. In a February announcement, reported by The Information, Amazon said the company’s four-lens “Driveri” camera would be installed in its delivery vehicles for “safety” reasons, as well as improving the “quality of the delivery experience.”
A presentation from Netradyne demonstrates the capabilities of the technology, including identifying a driver’s “seatbelt compliance” and “distraction” level, which ranges from using a cell phone to simply “looking down.” Driving style is also closely monitored, with events like “hard acceleration” and stop sign violations recorded and swiftly reported to dispatchers.
Deborah Bass, a spokeswoman for Amazon, stated that the decision to implement round-the-clock surveillance of its drivers was made “to help keep drivers and the communities where we deliver safe.”
Bass explained that Amazon previously “piloted the technology from April to October 2020 on over two million miles of delivery routes and the results produced remarkable driver and community safety improvements — accidents decreased 48 percent, stop sign violations decreased 20 percent, driving without a seatbelt decreased 60 percent, and distracted driving decreased 45 percent.”
“Don’t believe the self-interested critics who claim these cameras are intended for anything other than safety,” she added.
Eva Blum-Dumontet, Senior Research Officer at Privacy International, a U.K.-based charity dedicated to protecting privacy rights across the globe, mocked Bass’ contention that Amazon is “worried about road safety,” calling the notion “disingenuous.”
“The only thing they are concerned about here is their reputation and ensuring they can draw maximum profit from their drivers,” she said, adding that if Amazon “were truly concerned about road safety, the solution would be actually hiring employees and offering them enough protection so that they are not enticed to complete more tasks than it is safe to do so.”
Similarly, a number of employees (who remained nameless for fear of retaliation from Amazon) soon expressed concern that the company will use the countless hours of footage as “a punishment system,” likening it to “Big Brother.”
Giving substance to driver concerns, the “biometric consent” form detailed that “Amazon may … use certain Technology that processes Biometric Information, including on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account.”
One driver, Vic, quit his job delivering packages for Amazon in the Denver, Colorado, area after learning of the requirement to have AI-powered cameras constantly watch him while working, he told Reuters. “It was both a privacy violation, and a breach of trust … And I was not going to stand for it,” he said.
The installation of high-tech cameras is just the latest in a line of increasingly invasive biometric requirements imposed by Amazon, Vic said, explaining that drivers were already asked to install a monitoring app, Mentor, which logged a number of driving details.
“If we went over a bump, the phone would rattle, the Mentor app would log that I used the phone while driving, and boom, I’d get docked,” he said.
Biometrics technology, including facial recognition software, is becoming increasingly sophisticated, giving rise to new ethical concerns. In January, researchers at Stanford University in California published a paper claiming it is possible to teach a computer to recognize a person’s political leanings purely by scanning his face.
Using a collection of over one million images freely taken from dating websites and public Facebook profiles, the team claimed the machine correctly predicted political orientation 72% of the time, which is “remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).”
Michal Kosinski, the team’s lead researcher, warned that it is supremely easy to obtain images through “ubiquitous CCTV cameras and giant databases of facial images.”
Consequently, the technology could be used for nefarious purposes, he noted, since “unlike many other biometric systems, facial recognition can be used without subjects’ consent or knowledge.”
The researchers added that “even a crude estimate of an audience’s psychological traits [based on facial recognition] can drastically boost the efficiency of mass persuasion. We hope that scholars, policymakers, engineers, and citizens will take notice.”
The device can read ‘neural signals coming from my brain, down my spinal cord along my arm, to my wrist.’
By Raymond Wolfe
January 29, 2021 (LifeSiteNews) — New information about Facebook’s effort to develop brain-reading technology came to light last month after a recording from a company meeting was leaked to the press.
Speaking with Facebook founder and CEO Mark Zuckerberg and other top executives of the social media giant, Chief Technology Officer Mike Schroepfer previewed a sensor device that he said can read “neural signals coming from my brain, down my spinal cord along my arm, to my wrist.”
He added that “this sensor that we are building detects [neural signals], interprets them, and allows me to control [the] device.” This includes, for instance, typing or playing video games with mental commands.
Schroepfer’s revelations are the latest from the Big Tech company’s secretive, years-long quest to put out a neural device. The project began with plans for a “brain mouse” that would allow users to type with their minds, as the director of Facebook’s now-defunct research lab, Building 8, announced in 2017.
Since then, Facebook has purchased neural interface startup CTRL-Labs, the developer of an experimental wristband that purports to give users the ability to operate computers by thinking.
In a post announcing the acquisition of CTRL-Labs in 2019, the head of Facebook Reality Labs, Andrew Bosworth, claimed that the wristband “will decode” neural signals and “translate them into a digital signal your device can understand.” “It captures your intention so you can share a photo with a friend using an imperceptible movement,” or by “intending to,” he added.
Earlier that year, Facebook had revealed details of a separate thought-reading headset in a paper published in Nature Communications. Researchers backed by the company claimed that the algorithm for the headset technology could interpret speech from brain signals with 61-76% accuracy.
The research team published another paper in 2020 detailing an artificial intelligence system that can translate thoughts to text in real time by analyzing brain data. The AI had an error rate as low as 3%, according to the study.
In the leaked audio from December, Schroepfer noted that Facebook uses artificial intelligence extensively to censor certain users, celebrating that AI bots remove 95% of “hate speech.”
“Our investments in technology aren’t just about keeping our services running,” Schroepfer said. “We are paving the way for breakthrough new experiences that, without hyperbole, will improve the lives of billions.”
At the same time, Schroepfer noted Facebook’s damaged public image, which has suffered due to a wave of damning privacy scandals. Just last year, Facebook paid out $550 million to settle a class-action lawsuit that argued the company illegally collected biometric data through its facial recognition practices.
Besides Facebook, several other notable tech companies have ventured into neural technology. Last March, Microsoft patented a cryptocurrency system that incorporates wearable sensors to track users’ brain waves. Neuralink, a startup founded by Tesla CEO Elon Musk, wants to go even further, with implantable computer chips to treat neural disorders.