A Must-Watch Documentary
For more information about this film, please visit this link:
Amazon fires delivery drivers who refuse ‘biometric consent’ form
Cameras powered by artificial intelligence will record and store information about the driver’s face, location, movement, driving style, and even if the driver yawns or shows signs of drowsiness on-shift.
By David McLoone for LifeSiteNews
Amazon delivery drivers across the country face the prospect of losing their jobs if they refuse to consent to intrusive new biometrics technology inside their vans and trucks. The technology would capture and store personal information on a “driver account.”
Around 75,000 drivers in the U.S. were asked by the tech giant to sign new contracts at the end of March that permit Amazon to use camera technology, powered by artificial intelligence (AI), to identify and store information about the driver: his face, location, movement, driving style, and even whether he yawns or shows signs of drowsiness on-shift. The information collected is then shared with the dispatcher.
Failure to comply with the request for consent will result in the termination of that driver’s employment with Amazon — or the related third-party delivery service partner (DSP) which employs them — a copy of the “Vehicle Technology and Biometric Consent Agreement” obtained by Motherboard confirmed.
Amazon disclosed in the form that vehicles will be “video-monitored by cameras that are both internal and external and that operate while the ignition is on and for up to 20 minutes after the ignition is turned off.”
“Using your photograph, this Technology may create Biometric Information, and collect, store, and use Biometric Information from such photographs.”
“This Technology tracks vehicle location and movement, including miles driven, speed, acceleration, braking, turns, and following distance … as a condition of delivery [sic] packages for Amazon, you consent to the use of Technology,” the form states.
The technology is being provided by Netradyne, a fleet management AI-technology start-up from San Diego. In a February announcement, reported by The Information, Amazon said the company’s four-lens “Driveri” camera would be installed in its delivery vehicles for “safety” reasons, as well as improving the “quality of the delivery experience.”
A presentation from Netradyne demonstrates the capabilities of the technology, including identifying a driver’s “seatbelt compliance” and “distraction” level, which ranges from using a cell-phone to simply “looking down.” Driving style is also closely monitored, with events like “hard acceleration” and stop sign violations being recorded and swiftly reported to dispatchers.
Deborah Bass, a spokeswoman for Amazon, stated that the decision to implement round-the-clock surveillance on their drivers was made “to help keep drivers and the communities where we deliver safe.”
Bass explained that Amazon previously “piloted the technology from April to October 2020 on over two million miles of delivery routes and the results produced remarkable driver and community safety improvements — accidents decreased 48 percent, stop sign violations decreased 20 percent, driving without a seatbelt decreased 60 percent, and distracted driving decreased 45 percent.”
“Don’t believe the self-interested critics who claim these cameras are intended for anything other than safety,” she added.
Eva Blum-Dumontet, Senior Research Officer at Privacy International, a U.K.-based charity dedicated to protecting privacy rights across the globe, mocked Bass’s contention that Amazon is “worried about road safety,” calling the notion “disingenuous.”
“The only thing they are concerned about here is their reputation and ensuring they can draw maximum profit from their drivers,” she said, adding that if Amazon “were truly concerned about road safety, the solution would be actually hiring employees and offering them enough protection so that they are not enticed to complete more tasks than it is safe to do so.”
Similarly, a number of employees (who remain nameless for fear of retaliation from Amazon) soon expressed concern that the company will use the countless hours of footage as “a punishment system,” likening it to “Big Brother.”
Giving substance to driver concerns, the “biometric consent” form detailed that “Amazon may … use certain Technology that processes Biometric Information, including on-board safety camera technology which collects your photograph for the purposes of confirming your identity and connecting you to your driver account.”
One driver, Vic, quit his job delivering packages for Amazon in the Denver, Colorado, area after learning of the requirement to have AI-powered cameras constantly watch him while working, he told Reuters. “It was both a privacy violation, and a breach of trust … And I was not going to stand for it,” he said.
The installation of high-tech cameras is just the latest in a line of increasingly invasive biometric requirements imposed by Amazon, Vic said, explaining that drivers were already asked to install a monitoring app, Mentor, which logged a number of driving details.
“If we went over a bump, the phone would rattle, the Mentor app would log that I used the phone while driving, and boom, I’d get docked,” he said.
Biometrics technology, including facial recognition software, is becoming increasingly sophisticated, giving rise to new ethical concerns. In January, researchers at Stanford University, California, published a paper in which they claim it is possible to teach a computer to recognize a person’s political leanings, purely from scanning their face.
Using a collection of over one million images, freely taken from dating websites and from public Facebook profiles, the team claims the machine correctly predicted political orientation 72% of the time, which is “remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).”
Lead analyst on the team, Michal Kosinski, warned that it is supremely easy to obtain images through “ubiquitous CCTV cameras and giant databases of facial images.”
On account of this, the technology could be used for nefarious purposes, he noted, since “unlike many other biometric systems, facial recognition can be used without subjects’ consent or knowledge.”
The researchers added that “even a crude estimate of an audience’s psychological traits [based on facial recognition] can drastically boost the efficiency of mass persuasion. We hope that scholars, policymakers, engineers, and citizens will take notice.”
After reading numerous articles from behavioral psychiatrists and watching interviews with former tech executives regarding the “addiction code” designed into most apps and social media sites, my suspicions about the nefarious intentions behind these digital playgrounds have been confirmed. There are indeed evil tech wizards out there who are after your brains, and perhaps even your soul. It sounds like the plot of a sci-fi thriller, but it’s neither fiction nor theory; it’s a very real conspiracy of our time.
Don’t take my word for it; there is plenty of evidence out there proving that many apps and sites are designed to get you hooked and keep you distracted. A very informative book on this subject is Glow Kids by Nicholas Kardaras, an addiction expert, who does an excellent job pointing out that kids are the most vulnerable victims of what he calls “digital heroin.” Many apps that kids use, such as Snapchat, Musical.ly, Vine, Facebook, Roblox, Talking Tom, Minecraft, and Angry Birds, among others, are intentionally addictive. The developers behind such apps have carefully studied the psychology of addiction, and their technology is designed accordingly. Combine that knowledge with the user data collected from each app, and you have the perfect tool for psychological exploitation.
How the ethics of this phenomenon are not scrutinized more heavily, especially by legislators, is also very questionable in itself. In that respect, some have compared technology to the tobacco industry. As much as I can understand where this analogy is coming from, I would argue that the problem is far worse. First of all, the entire technology industry is not to blame, and I wouldn’t compare it to tobacco, which at least is labeled with very clear warnings about its health risks. I believe that a more accurate depiction of this matter would be to compare the techies who capitalize on addiction to dealers of crack-cocaine. In fact, studies have shown that there are few differences between a brain high on cocaine and a brain “high” on an addictive app.
Imagine if a candy company decided to use a drug like cocaine as a secret ingredient in their candy products, to keep people hooked, boost sales and/or guarantee that consumers will keep returning to them for more candy… Now imagine what would happen to that candy company if their secret ingredient became exposed, and subsequently, their malignant intention. What would happen? For starters, they’d be put on trial in court for criminal activity, and they would end up serving a sentence of some kind, along with other penalties.
People implementing “addiction codes” in their software are just as corrupt, and yet, they are not liable for their premeditated actions. Cognitive and behavioral disorders are on the rise, especially in kids, who are being prescribed pharmaceutical drugs for mental health issues, which, in many cases, are a consequence of the cyber drugs that are distributed by Silicon Valley’s digital drug lords.
Reality is stranger than Science-Fiction.
The attribute of life is what makes something smart. And yet, so many lifeless concepts and gadgets are given the name “smart.”
“Smart” might be the biggest marketing scam of our time.
Design Technology specialist and Biomimicry pioneer Prashant Dhawan ponders the validity of the “Smart City” in the video below:
There is a new trend happening. And it’s been happening for quite some time now. But today, it’s getting harder to ignore it. This trend has been quietly creeping into various aspects of our lives, disguised under many forms. It sneaks in wirelessly, through apps and devices, reducing the presence of human beings.
The trend I am referring to does not have an official name, but I like to call it “Remove the human.” Recent developments and innovations in technology have gradually been removing our species from all kinds of interactions and tasks, all for the supposed sake of efficiency. For the companies implementing these consumer technologies, diminishing human synergies could mean economically efficient results. For the consumer, less human interaction could mean better and faster service. That’s all good, theoretically. For now. But to what end are the means of this trend leading us? What exactly is the objective here? If efficiency is the true reason behind this trend, surely that alone is not reason enough to justify it.
Is human capital really so worthless? History has shown that mankind has been productive and efficient throughout the centuries. The Roman Empire alone demonstrates the value of human forces. And there were no apps or bots to assist us back then. One could argue that the worth of human capital depends on human efficiency toward a purpose. However, as we become more dependent on technology to complete our tasks for us, our efficiency declines (skills not used eventually dissolve). As a result, our worth in the workforce diminishes as well. Such is the troubling trend I speak of.
I’m not trying to imply that the companies behind certain technologies are intentionally eliminating humans, nor am I implying that the technological developments we are seeing today are all necessarily bad. But I do wonder if those companies ever stop for a moment to think about the greater implications of the trend they’re creating. I am simply concerned about where we are headed as human beings in a world where our presence seems less and less desirable in the face of technology.
To a certain degree, I can understand not wanting to deal with people. Anyone who has worked in sales knows the feeling of dread when faced with a difficult customer. Or the feeling of being a customer, waiting in line to check out at a grocery store or pharmacy, impatiently watching the cashier struggle to find the price codes, or having another customer ahead of you take their sweet time pulling out their wallet while chatting on the phone. It’s enough to make any person opt for an automated system or online shopping instead. But then again, think of all the times you’ve dealt with an automated voice assisting you over the phone: the agony of going around in circles, having to repeat yourself, raising your voice and pressing more buttons just to deal with another automated voice that only wastes more of your time. And then think of the relief you’ve felt when, finally, you were able to talk to a human, who was able to assist you because he or she could actually understand the issue you were calling about. Or how about that time when you did opt for the automated checkout, which took 20 minutes longer because of an error, increasing your frustration, until human assistance came along to resolve the problem in two seconds, restoring your serenity. Human performance isn’t as inefficient as we might perceive it to be.
I think of the days when I used to go to Tower Records or Sam Goody to buy CDs, long before the dawn of digital music. Going to the store and picking out an album was a special kind of ritual; people enjoyed spending time browsing through music and looking at the album art. The staff would often act like guides, giving you their take on some of the albums you selected and recommending other bands you might not have heard of. Exchanges took place, which led to exciting discoveries; all thanks to human interaction. Today, when I download music on iTunes, for example, the recommendations that pop up are based on algorithms, and yet, so often they get it all wrong. Furthermore, I feel like I’m wasting time when I browse through music on the computer; even if I quickly find what I want in just one click, it feels more like an episode of procrastination and less like a productive moment of discovery. We all have an emotional bond to the music we like, and music itself has always played an important role in bringing people together. Sharing a music mix on Spotify just doesn’t feel as special or personal as receiving or creating mix-tapes or CDs, which are now obsolete. I recently read an article about robots composing music, which raises the questions: How long before humans are entirely removed from music? And will the role of music as a unifier and instrument of human expression disappear as well?
I’d like to think not. But between the growing trends of virtual assistance from gizmos like Google Home, virtual-reality interplays that remove us from reality, social media connections that are essentially antisocial, online courses where one never learns directly from a teacher or experiences the classroom dynamic, and the arrival of self-driving cars and robots in the workforce, a future where organic human qualities are no longer needed seems less and less distant.
If this current “remove the human” trend becomes the cause of such a future, then what will its effects be? A few days ago, I caught a glimpse of its possible effects, when I walked into a store where the sales associates were digital touch screens. A well-known trendy fashion brand (which shall remain nameless) recently opened a new store, introducing their latest shopping concept: you go to a screen and select the style you’d like in your size, and the request is automatically sent to your dressing room. The styles are also displayed on clothing racks for you to physically see and feel.
When I walked in, I wasn’t aware of this new retail formula; I ignored the screens, thinking they were just there to display images of the clothes on models. I went straight to the racks instead, selecting pieces I liked. One of the employees zipped over to me and explained that I wasn’t supposed to take items off the rack, but rather, to explore them through the touch screens. “Oh,” I said. “Well, that kind of defeats the purpose of in-store shopping, doesn’t it?” I added. She shrugged her shoulders with an expression that suggested she didn’t see the point of the touch screens either, but couldn’t do anything about it. “Well, would it be possible for me to just try these on in my size?” I asked, as the girl returned my selections to their display racks. She rolled her eyes and impatiently replied: “Yes, but you have to select it from the screen and enter your name.” She caught herself getting snappy with me, and quickly changed her tone to an apologetic one: “I’m sorry, it’s not that I don’t want to assist you, it’s just that I’m not supposed to; it would interfere with our system. All requests and all stock are tracked and organized through our system here. I can help you navigate the screens though. I know it seems confusing at first, but then you get the hang of it.” I was basically having an online shopping experience in a store; it felt completely pointless, boring, and just downright dumb.
I don’t go shopping often; I abstain for practical reasons, but also because it’s time-consuming, and I don’t have the patience for it. But on that rare occasion when I do go on a retail excursion, I actually want a real experience. They call it “Retail Therapy” for a reason; it’s supposed to be fun and put your mind at ease. A real shopping experience should be a moment of exploration and inspiration; one where customers are given the privacy to browse through the items whilst also receiving the right amount of thoughtful attention, should they need assistance. The customer should feel respected and be respectable as well, no matter what the shopping context is. And respect is a human thing, isn’t it? So where does respect come in when your experience is controlled and monitored by a computer system? Personally, I don’t like being told how to shop, especially not by a computer. Enough time had been wasted between explanations of how this store concept worked, and figuring out the whole screen selection process (time that would have been saved if the humans in the store were allowed to participate as official sales associates). I was ready to leave, but instead I figured I might as well just play along with this new format, and try to enjoy it as a curious experiment.
A few moments later, my name was announced; I was notified that my fitting room was ready. I was assigned to room #4. As soon as I walked into the room, I noticed that the clothes were not there. Instead, there was a wardrobe, its doors shut. An automated female voice, much like Siri’s, spoke into the room: “Open the closet. Go ahead, I dare you.” I followed the instructions and opened the doors to find my clothing selections hanging inside. I guess that’s supposed to have a “wow!” effect on people, but to me it just felt gimmicky in a creepy sort of way. It didn’t feel magical or special; were they going for the “Narnia” magic-wardrobe shtick? Instead it just made me think: “Do people seriously fall for this cheap trick?” Next to the door of the dressing room was another screen on the wall, a charging station where you could plug in your phone and play your own music while trying on clothes, and a control panel to change the lighting around the mirror, in order to facilitate flattering selfies. I’m not kidding. These dressing rooms were designed for virtual vanity.
Very often the services and products we see on the market are responding to market demands. The fact that this kind of shopping experience now exists must signify that there was sufficient cause for it. In a selfie-obsessed culture where images are often staged and “curated,” it’s understandable how a shopping format like this can be conceived and implemented. We can do anything with technology these days, but just because we can, does that mean we should?
I tried on the clothes in my “magic” wardrobe, and needed a different size. So I did what anyone would normally do: I opened the door of my fitting room and asked someone in the store if they could please find a different size for me. “You need to make the request from the screen, then shut the wardrobe door and wait for the pants to arrive inside,” I was told. “So what exactly is the point of shopping in a store if humans can’t help their customers?” I asked, out of curiosity, trying to understand the point of this new store format. The girl simply shrugged and said: “Your guess is as good as mine; I feel kind of useless here sometimes, but what can we do? That’s just the way it is now; gotta just run with the program, you know?” I felt like saying: “If we keep running with this program, none of us will have jobs!” But I bit my tongue and smiled instead. She helped me make my selection from the screen, I shut the wardrobe door, and after a few minutes, I heard mechanical sounds coming from the closet, followed by the computer-generated voice: “Your clothes are inside. You can open the doors now!” At that point I wondered if I was being watched in my fitting room. There were no cameras in sight, but maybe there was one behind the mirror or within the screen? I couldn’t tell, but I felt like I was being watched. I opened the door, and there were my new options. Had some digital closet elf put them there? Is that why I had to keep the doors shut? Or were there humans behind the wardrobes, filling them up and not allowed to be seen? I don’t know. All I know is that I didn’t like this experience; it felt unsettling to me.
Most people probably don’t question this sort of thing; they just readily adapt. As a writer, and as a human concerned for my fellow humans in this technological era, I couldn’t help but analyze this situation from a critical perspective. When the girl who helped me with the screen told me that she felt “useless,” the echoes of that word really shook me. No wonder most of the people who worked there seemed to be in a bad mood; they all had a bit of an attitude. I wouldn’t be too happy either if I felt useless at work. No one should feel useless; it’s degrading. Imagine having a set of skills that you’re not allowed to use, because it interferes with a computer. Of course you’d feel like you were being wasted, and you’d grow resentful of any job that made you feel that way, because there’s no dignity in that. And when you’re working in an environment that has no regard for your presence, you’re certainly not going to make much of an effort there, are you? And subsequently, the lousy attitude is blamed on the people, not on the computer system, thus justifying the “need” for the human-less shopping experience.
Removing the human might be cost-effective for companies in the short run; no salaries to pay, no cost of benefits, etc. But in the long run, I believe that as long as the human race remains organically conscious, we will long for human experiences, which can be as simple as a courteous interaction, such as “hello, let me know how I can help you,” from one human being to another, acknowledging that our presence is significant to one another, and that it matters. With all of its special effects and convenience, technology doesn’t have intrinsic human values, and therefore does not bring people together in a genuine way, on an essentially human level. In fact, this bizarre sci-fi shopping experience is just one of many examples of how technology can alienate people from each other, or even from themselves, leading them to feel useless. If we continue to be perceived as useless, then we’re only allowing the “remove the human” trend to grow.
The issue, however, is that we are not useless. We all serve a purpose. Perhaps we’re not all aware of our purpose, but if we are here, we should at least strive to be purposeful, and I believe that in each of our unique ways, we do seek to fulfill a purpose of some kind, because it’s in our nature to do so. Companies that give their employees no sense of purpose are by default giving their customers less reason to give them any business, because more than an exchange of money for goods, the real exchange at the heart of any business transaction is one of values. If I’m buying an item from a store, it’s because I value that item, for whatever it might represent to my needs or desires, from the way it’s made to the way it’s sold to me. The people selling me that item should want to uphold the value of what they’re selling by respecting all of the elements that drive a customer to their business. If an employee is made to feel useless, surely they will not feel of any use to their customers either, which makes for an exchange of little value. From that perspective alone, the “remove the human” trend will ultimately be bad for business in the long run. The retail industry’s economic challenges today already indicate that something is wrong. Virtual in-store shopping is not going to be a long-term solution to the problem. A solution void of human dignity is not a solution.
In a world where we are all exposed to so much and where so many choices are at our fingertips, consumers have come to value the experience of getting what they want just as much as (if not more than) the merchandise itself. Genuine experiences are becoming rare, and we all want what’s rare. The experiences that give customers the most satisfaction are the ones where they feel valued and valuable. No algorithm or computer system or automated email greeting can provide that feeling better than a human being; it takes a human to know one, after all. In the age of information and sensory overload, removing the human only exposes us to more digital congestion, which leaves us longing for natural and authentic experiences, removed from technology. It comes as no surprise that demand for “digital-detox” retreats is steadily growing and that this new business has become a lucrative one. That’s a clear indication that old-fashioned human-to-human interactions are worth something.
As annoying or useless as we may seem to each other at times, it turns out that we are efficient in meeting human needs. In that department, we’re more efficient than the most advanced technologies of our time. As long as we remain natural and integral humans, and as long as our species remains biological and does not become cyborg, the human being will always fulfill human needs with maximum efficiency, because only we have the capacity to put ourselves in each other’s shoes. And what is the essential human need that we all have in common? Call it cliché, but it’s LOVE. And if it’s a cliché, there’s good reason for it. There’s always truth to clichés. The truth is, we can’t have love if we can’t have each other. And yet, the “remove the human” trend persists.
As a new year arrives, there isn’t a better time than the present to reflect upon what has led us here and what kind of future we want to lead ourselves into. If we want to meet our needs, we’re going to need each other. If we need each other, then surely we are useful after all. May the days ahead be filled with more personal experiences, and less impersonal ones, in spaces both personal and commercial, and beyond. Let’s think twice before adopting trends that don’t recognize our worth. With a new beginning ahead, let’s begin to recognize our purpose, whatever it may be, for its cause is ultimately founded on our essential human need.