It seems Amazon is in hot water yet again. Earlier this month, we wrote about Amazon requiring its drivers to sign a biometric consent form tied to newly installed AI dash cameras. Drivers were told to consent or risk losing their jobs.
Amazon claims it’s trying to keep its drivers safe. Its drivers, on the other hand, allege unfair business practices, surveillance and even discrimination. The further we unpack this latest development, the harder it becomes to say where the answers lie.
Introducing dash cams to Amazon drivers
AI in the workplace isn’t new. From voice-activated virtual assistants to warehouse “employees”, artificial intelligence is improving workflows across a wide range of industries. The assumption, of course, is that the technology is sound and the algorithms faultless. This was the assumption Amazon made when it partnered with Netradyne in February 2021 to install cameras in its delivery vehicles.
According to employees of the mega-retailer, drivers are being watched — and graded. We’ve written numerous pieces about the benefits of gamification and incentivizing good driving habits, but again, this presupposes that a dash cam system is robust enough to do the job, and that AI can learn from human behaviour and adjust its algorithms accordingly. But what happens when both assumptions fall short?
Penalizing behaviours that don’t exist
According to Amazon drivers across the US, Netradyne cameras are regularly punishing them for events or triggers that don’t constitute unsafe driving, such as looking at a side mirror, getting cut off in traffic or adjusting the radio.
The four-lens camera records drivers when an “event” is detected, then uploads the footage to a proprietary interface. In some cases, a robotic voice tells the driver how to resolve the event, such as “maintain safe distance”. Ultimately, that footage affects the driver’s weekly score.
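To make that workflow concrete, here is a minimal Python sketch of the lifecycle described above: detect, record, upload, prompt the driver and log the event toward the weekly score. The names, event labels and prompt text are our own illustrative placeholders, not Netradyne’s actual system:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical prompt text; the article quotes "maintain safe distance"
# as one example of the robotic in-cab voice.
AUDIO_PROMPTS = {
    "following_distance": "Maintain safe distance.",
    "distracted_driving": "Attention: distracted driving detected.",
}

@dataclass
class SafetyEvent:
    event_type: str
    timestamp: datetime
    clip_path: str  # footage captured by the four-lens camera

def upload_footage(clip_path: str) -> None:
    # Stand-in for the upload to the proprietary review interface.
    print(f"uploading {clip_path} for review")

def handle_detection(event: SafetyEvent, weekly_events: list) -> None:
    """Record, upload, prompt the driver, and log the event for weekly scoring."""
    upload_footage(event.clip_path)
    prompt = AUDIO_PROMPTS.get(event.event_type)
    if prompt:
        print(f"in-cab voice: {prompt}")
    weekly_events.append(event)  # every logged event later affects the weekly score

weekly_events: list = []
handle_detection(
    SafetyEvent("following_distance", datetime.now(), "clips/evt_001.mp4"),
    weekly_events,
)
```

Notice that nothing in this loop questions the detection itself: whatever the camera flags goes straight into the pile that determines the driver’s score.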
While the cameras were installed as safety and cost-saving measures, drivers within the company have reported losing income over wrongful citations registered by the Netradyne interface. The issue is now less about whether drivers follow the rules than about whether Netradyne’s AI is capable of deciding whether a rule has been broken.
A human driver assesses a driving scenario and scans for context; AI cannot evaluate the same situation, because it only applies its own inflexible rules and logic. In this case, AI is missing a critical human component.
Compounding the problem, metrics gathered from this logic are being used to penalize drivers and determine how much they get paid. When the AI camera flags an unsafe event, that event factors cumulatively into the driver’s performance score and can harm their chances of earning bonuses, prizes and other incentives.
These events also help determine a driver’s rating of either “poor,” “fair,” “good,” or “fantastic.” For perspective, an Amazon Delivery Service Provider (DSP) can only earn bonuses to put toward damages and repairs if its drivers’ combined weekly scores remain in the “fantastic” range.
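The tier cutoffs themselves are not public, so the sketch below uses hypothetical numbers purely to illustrate the mechanic the article describes: events roll up into a weekly score, the score maps to one of the four ratings, and the DSP’s bonus eligibility hinges on the combined result staying “fantastic”:

```python
# Hypothetical cutoffs: Amazon's actual thresholds for each tier are not
# public, so these numbers are placeholders for illustration only.
RATING_TIERS = [
    (850, "fantastic"),
    (700, "good"),
    (500, "fair"),
]

def rating_for(score: float) -> str:
    """Map a weekly driver score to one of the four ratings named above."""
    for cutoff, label in RATING_TIERS:
        if score >= cutoff:
            return label
    return "poor"

def dsp_bonus_eligible(driver_scores: list) -> bool:
    """Per the article, a DSP only qualifies for bonuses if the combined
    weekly scores stay in the "fantastic" range."""
    combined = sum(driver_scores) / len(driver_scores)
    return rating_for(combined) == "fantastic"

print(rating_for(720))                      # "good"
print(dsp_bonus_eligible([880, 900, 860]))  # True
print(dsp_bonus_eligible([880, 900, 640]))  # False: one rough week sinks the DSP
```

Note how a single run of wrongful citations for one driver can pull the whole DSP below the bonus threshold, which is exactly the pressure drivers describe.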
Amazon claims that its drivers have improved their habits and that traffic incidents are down substantially. While it’s tempting to attribute this to the installation of Netradyne’s cameras and to Amazon’s hiring practices, a more accurate explanation might be that drivers are simply circumventing the technology to protect their driver scores. Examples include wearing sunglasses so that eye movement can’t be misconstrued as distracted driving, or placing stickers over the camera lenses to avoid detection altogether.
Everyone getting along
No dash cam system should be deployed if its false positive rate drives reported incidents higher than they were before its implementation. Creating an environment where the very technology put in place to protect drivers has to be circumvented goes against the business case. Companies such as Amazon would do well to add human oversight and a proper review process through which drivers can contest false citations.
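As a rough sketch of what that review process might look like (our own hypothetical design, not any existing product), a flagged event could sit in a queue and touch a driver’s score only after a human reviewer confirms it:

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    PENDING = "pending"
    CONFIRMED = "confirmed"    # a human reviewer agrees an incident occurred
    OVERTURNED = "overturned"  # false positive; must not affect the score

@dataclass
class FlaggedEvent:
    event_id: str
    ai_label: str
    verdict: Verdict = Verdict.PENDING

@dataclass
class ReviewQueue:
    """Sketch of the oversight process argued for above: nothing the AI
    flags touches a driver's score until a person has reviewed it."""
    events: list = field(default_factory=list)

    def review(self, event: FlaggedEvent, human_agrees_with_ai: bool) -> None:
        event.verdict = (
            Verdict.CONFIRMED if human_agrees_with_ai else Verdict.OVERTURNED
        )

    def scoreable_events(self) -> list:
        # Only human-confirmed events count toward the driver's score.
        return [e for e in self.events if e.verdict is Verdict.CONFIRMED]

queue = ReviewQueue()
evt = FlaggedEvent("evt_042", "distracted_driving")  # driver checked a mirror
queue.events.append(evt)
queue.review(evt, human_agrees_with_ai=False)        # reviewer overturns it
print(len(queue.scoreable_events()))                 # 0: the false flag is harmless
```

The key design choice is the default: an unreviewed or overturned event never counts against the driver.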
At ZenduIT, we offer end-to-end managed services in which actual people review your AI-compiled footage to determine whether an incident took place. We are the extra set of eyes you need to properly engage your drivers and keep them safe on the road. We understand that success is not measured by the number of events triggered, but by how those events are actioned to achieve equitable outcomes for drivers and fleet managers. We provide the tools to coach your fleet effectively and improve driver scores by both algorithm and human calculation. You have options: contact your ZenduIT consultant for a free demonstration today.