This bot crossed a line it shouldn't have because humans told it to • TechCrunch

A video of a sidewalk delivery robot crossing yellow warning tape and rolling through a Los Angeles crime scene went viral this week, garnering more than 650,000 views on Twitter and sparking debate over whether the technology is ready for real-world streets.

It turns out that the robot’s error, at least in this case, was caused by humans.

The video was captured and posted on Twitter by William Gude, owner of Filming Police LA, a police watchdog account in Los Angeles. Gude was in the area of a suspected shooting at Hollywood High School at about 10 a.m. when he filmed the robot as it hesitated at the street corner, looking confused, until someone lifted the tape, allowing the bot to continue on its way through the crime scene.

Serve Robotics told TechCrunch that the robot’s self-driving system did not decide to cross into the crime scene. That was the choice of a human operator who was controlling the robot remotely.

The company’s delivery robots have so-called Level 4 autonomy, which means they can drive themselves under certain conditions without a human needing to take over. Serve has been testing its bots with Uber Eats in the area since May.

Serve Robotics has a policy that requires a human operator to remotely monitor and assist its robots at every intersection. A human operator also takes remote control if a robot encounters an obstacle, such as a construction zone or a fallen tree, and cannot figure out how to navigate around it within 30 seconds.
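The takeover policy described above can be sketched as a simple rule check. This is a hypothetical illustration only; the function name, zone labels, and 30-second constant are assumptions based on the article, not Serve's actual code.

```python
from typing import Optional

INTERSECTION = "intersection"
BLOCKED_TIMEOUT_S = 30.0  # robot replans on its own for up to 30 seconds

def needs_human_takeover(zone: str,
                         blocked_since: Optional[float],
                         now: float) -> bool:
    """Return True when, per the policy described, a remote operator
    must take control: always at intersections, or when the robot has
    been stuck on an obstacle for 30 seconds or more."""
    if zone == INTERSECTION:
        return True  # every intersection is human-supervised
    if blocked_since is not None:
        return (now - blocked_since) >= BLOCKED_TIMEOUT_S
    return False  # normal sidewalk driving stays autonomous
```

In the incident described here, the first branch is the one that fired: the robot reached an intersection, so control passed to a human before the tape was ever crossed.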

In this case, the robot, which had just finished a delivery, approached the intersection and a human operator took over, per the company’s internal operating policy. At first, the operator stopped the robot at the yellow caution tape. But when bystanders lifted the tape and apparently “waved it through,” the operator decided to proceed, Serve Robotics CEO Ali Kashani told TechCrunch.

“The robot would never have crossed on its own,” Kashani said. “There are just so many systems in place to ensure it would never cross until a human gives the go-ahead.”

The lapse in judgment here, he added, is that someone decided to cross anyway.

Regardless of the reason, Kashani said it shouldn’t have happened. He added that Serve has pulled data from the incident and is working on a new set of protocols for both the humans and the AI to prevent it in the future.

Some obvious steps would be to ensure employees follow standard operating procedures (SOPs), which includes proper training and establishing new rules for what to do if someone tries to wave the robot through a barricade.

But Kashani said there are also ways to use software to help avoid this happening again.

Software could be used to help people make better decisions, he said, or to steer the bots away from an area altogether. For example, the company could work with local law enforcement to send the robot up-to-date information about police incidents so it can route around those areas. Another option is to give the software the ability to identify law enforcement activity and then alert the human operator and remind them of local laws.
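The routing idea above amounts to treating each reported incident as a keep-out zone and rejecting waypoints that fall inside one. A minimal sketch follows; the function names, the coordinate scheme (meters on a local grid), and the 150-meter radius are all assumptions for illustration, not details of Serve's system or of any police data feed.

```python
import math

def in_incident_zone(point, incidents, radius_m=150.0):
    """True if a route waypoint lies within radius_m of any
    reported incident. Points are (x, y) tuples in meters."""
    px, py = point
    return any(math.hypot(px - ix, py - iy) <= radius_m
               for ix, iy in incidents)

def filter_route(route, incidents, radius_m=150.0):
    """Keep only the waypoints outside every incident zone;
    dropped waypoints would be handed back to the planner."""
    return [wp for wp in route
            if not in_incident_zone(wp, incidents, radius_m)]
```

A real planner would replan a detour rather than just drop waypoints, but the core check is the same: compare each candidate position against the set of active keep-out zones before committing to it.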

These lessons will be critical as robotics advances and expands its operational areas.

“The funny thing is that the robot did the right thing,” Kashani said. “So that really comes back to providing enough context for people to make good decisions so that we can be confident enough that we don’t need people to make those decisions.”

Serve Robotics hasn’t quite reached that point yet. However, Kashani told TechCrunch that bots are becoming more autonomous and usually operate on their own, with two exceptions: intersections and barriers of some sort.

Kashani said the scenario that unfolded this week runs counter to how many people view AI.

“I think the narrative in general is basically that people are really great at edge cases and then the AI makes mistakes, or is perhaps not ready for the real world,” Kashani said. “Funnily enough, we are learning kind of the opposite, which is that we find that people make a lot of mistakes, and we need to rely more on AI.”


