Gary Grossman* says personal robots could enter the mainstream within a decade.
Many observers were disappointed with the latest demo of the AI-powered “Optimus” robot at Tesla’s AI Day.
One reviewer cleverly titled his article “Sub-Optimus”.
However, these opinions actually miss the point.
Whatever may be said about Elon Musk, he is a genius at sensing timing and opportunity, applying technology and providing needed resources.
The quality and enthusiasm of the engineering team indicate that Optimus can succeed, even if it takes longer than the estimated three to five years to reach full production.
If successful, Optimus could bring personal robotics into the mainstream within a decade.
Initially priced at $20,000, the 2032 sibling of Optimus could be as common in stores and factories as today’s Tesla is on the road.
After another 10 years, robots that resemble humans in everyday life could become commonplace, whether at home, in stores and restaurants, in factories and warehouses, or in health and home care settings.
AI buzz: Interacting with robots
In this vision, the idea of an “artificial friend,” an emotionally intelligent android as portrayed by Kazuo Ishiguro in Klara and the Sun, does not seem far-fetched, nor do the “digients” (short for “digital entities”) described by Ted Chiang in The Lifecycle of Software Objects.
Digients are AI beings created within a purely digital world; they inhabit a shared digital space (much like the emerging metaverse) but can also be downloaded into physical robot bodies so that they can interact with people in the real world.
People’s ability to interact with a robot appears to be the key to its successful adoption.
At least that’s the view of Will Jackson, founder and CEO of Engineered Arts, who recently said: “The ‘real killer application’ of a robot is people’s desire to interact with it.”
Could this robot vision be completely unrealistic, nothing more than science fiction or entrepreneurial hype? That is the view of some, including Michael Hiltzik of the Los Angeles Times.
He said: “AI hype not only poses a danger to the ordinary person’s understanding of the [robotics] field, but it risks undermining the field itself.”
He is right that it is important to separate the hype from the reality.
But Hiltzik may be missing the arc of history.
Robotics today, much like the expanding field of artificial intelligence (AI), is still in its early days.
However, the rate of progress is enormous.
Although Optimus is years from being a finished product and many technical and cultural hurdles remain, that pace of progress is impossible to ignore.
In just a year, Optimus has gone from idea to a bipedal, mobile robot.
And Tesla is not alone in building a humanoid robot; the field is growing.
For example, a team of engineers from the Rochester Institute of Technology (RIT) has announced a humanoid robot that can teach tai chi.
A long way to go to achieve AI-powered robots
It is very difficult to build robots that mimic the actions of humans.
This EE Times article describes these challenges:
“From a mechanical perspective, for example, moving on two legs is a very physically demanding task.
“In response, the human body has evolved and adapted so that the strength density of human joints in areas such as the knees is very high.”
In other words, it is very difficult for robots simply to stay upright.
Despite these challenges, real progress has been made.
Oregon State University researchers recently set a Guinness World Record for the fastest 100 meters by a bipedal robot, with their robot completing the course in just under 25 seconds.
The team has been training the robot, named Cassie, since 2017, using reinforcement learning algorithms that reward it when it moves correctly.
The principal investigator noted the importance of the record, saying: “[Now we] can make robots move aggressively around the world on two legs.”
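The reward-driven training loop described above can be sketched with a deliberately tiny example. This is not Cassie’s actual controller — real legged-robot training uses deep reinforcement learning in physics simulators — but a toy tabular Q-learner on a one-dimensional track illustrates the same idea: the agent is rewarded whenever it “moves correctly” (forward), and over many episodes that reward shapes its policy.

```python
import random

# Toy illustration of reward-based training (NOT Cassie's real algorithm).
# States are positions 0..9 on a track; the episode ends at position 9.
ACTIONS = [-1, +1]   # step backward / step forward
TRACK_LEN = 10

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(TRACK_LEN) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s < TRACK_LEN - 1:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), TRACK_LEN - 1)
            r = 1.0 if a == +1 else -1.0   # reward "correct" movement
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy after training: which action each state prefers.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(TRACK_LEN - 1)}
```

After training, the greedy policy steps forward from every position, exactly because forward motion was the rewarded behavior; scaling this idea to a bipedal robot replaces the table with a neural network and the track with a simulated body.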
Impressive as that is, the human body does not merely stay upright; it navigates the world through a highly complex sensory system.
“The hardest part is creating a machine that interacts with humans in a natural way,” according to Nancy J. Cooke, a professor at Arizona State University.
Re-creating that sensory and social capability in an android is still in its infancy, and it is now among the most daunting challenges for Optimus and other robotics efforts.
AI automation takes center stage
Humanoid robots are made possible by artificial intelligence, and AI is advancing on the strength of exponential growth in three areas: computing power, software development and data.
Perhaps there is no better example of this rapid advance in AI than Natural Language Processing (NLP), particularly in terms of text generation and text-to-image conversion.
OpenAI released its first text generation tool, GPT-2, in February 2019, followed by GPT-3 in June 2020, DALL-E for text-to-image generation in January 2021, and DALL-E 2 in April 2022.
Each iteration was much more capable than previous versions.
Other efforts, such as Midjourney and Stable Diffusion, are pushing these technologies forward.
Now the same phenomenon is happening with converting text to video, with many new apps appearing recently from Meta, Google, Synthesia, GliaCloud and others.
NLP techniques are quickly finding real-world applications, from code development to advertising (from copywriting to image creation), and even filmmaking.
In a recent article, I described how creative artist Karen X. Cheng was commissioned to create an AI-generated cover image for Cosmopolitan.
To help develop the ideas and the final image, she used DALL-E 2.
“The Crow,” a video created with artificial intelligence, recently won the Jury Award at the Cannes Short Film Festival.
To create the video, computer artist Glenn Marshall fed frames of an existing video as image references to CLIP (Contrastive Language-Image Pre-training), another neural network created by OpenAI that connects text and images.
Marshall then prompted CLIP to generate a video of “a painting of a crow in a desolate area.”
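The mechanism behind CLIP guidance can be sketched in a few lines. This is not OpenAI’s model or API: CLIP encodes an image and a caption into a shared embedding space, and guided generation nudges candidate images toward higher cosine similarity with the prompt’s embedding. The vectors below are made-up stand-ins for real encoder outputs.

```python
import math

def cosine(u, v):
    # Cosine similarity: how aligned two embedding vectors are.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Pretend embeddings; in a real system these come from CLIP's text
# and image encoders.
text_prompt = [0.9, 0.1, 0.3]            # e.g., "a painting of a crow ..."
candidate_frames = {
    "frame_a": [0.8, 0.2, 0.4],          # visually close to the prompt
    "frame_b": [-0.5, 0.9, 0.1],         # visually far from the prompt
}

# One guidance step: prefer the candidate that best matches the text.
best = max(candidate_frames, key=lambda k: cosine(text_prompt, candidate_frames[k]))
```

In actual CLIP-guided generation this scoring is applied not to pick among finished frames but as a gradient signal that steers an image generator, step by step, toward the prompt.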
If only he had a brain
Of course, building an NLP application is not the same as building a robot.
While computing power, software, and data are common denominators, the physical aspect of building robots that need to interact with the real world adds challenges beyond software automation development.
What robots need is a brain.
“Robots don’t have anything even remotely close to a brain,” AI researcher Filip Piekniewski told Business Insider. That is largely true today, although NLP offers the beginnings of the brains robots will need in order to interact with humans.
After all, the main human-like function of the brain is the ability to perceive and interpret language and transform it into contextually appropriate responses and actions.
NLP is already used in chatbots, which are bots that facilitate communication with people.
Project December, a text-chat application built with GPT-3, has helped people find closure by “talking” to a deceased loved one.
“It may not be the first intelligent machine,” said Jason Rohrer, the developer behind Project December, “but it appears to be the first machine with a soul.”
Intelligent robots with a soul, ones that can walk and manipulate objects, would be a huge advance.
This progress is close, although it may take a decade or more for robots to roam the globe.
Optimus and other robots today are mostly simple machines that will grow in capabilities over the next two decades to become fully developed artificial humans.
We have now begun the era of modern robotics.
*Gary Grossman is Senior Vice President of Technology Practice at Edelman and Global Head of the Edelman AI Center of Excellence.
This article first appeared on venturebeat.com.