
Wednesday, April 19, 2023

Google launched Bard chatbot despite ethics concerns, warnings it was a 'pathological liar': report - New York Post

Google reportedly moved forward with the troubled launch of its AI chatbot Bard last month despite internal warnings from employees who described the tool as a “pathological liar” prone to spewing out responses riddled with false information that can “result in serious injury or death.”

Current and former employees allege that Google ignored its own AI ethics principles during a desperate effort to catch up to competitors, such as Microsoft-backed OpenAI’s popular ChatGPT, Bloomberg reported on Wednesday.

Google’s push to develop Bard reportedly ramped up late last year after ChatGPT’s success prompted top brass to declare a “competitive code red,” according to the outlet.

Microsoft’s planned integration of ChatGPT into its Bing search engine is widely seen as a threat to Google’s dominant online search business.

Google rolled out Bard to US users last month in what it has described as an “experiment.”

However, many Google workers voiced concerns before the rollout when the company tasked them with testing out Bard to identify potential bugs or issues – a process known in tech circles as “dogfooding.”

Bard testers flagged concerns that the chatbot was spitting out information ranging from inaccurate to potentially dangerous.

Photo: Google rolled out Bard for US users in March. (ZUMAPRESS.com)

One worker described Bard as a “pathological liar” after viewing erratic responses, according to a screenshot of an internal discussion obtained by Bloomberg. A second employee reportedly referred to Bard’s performance as “cringe-worthy.”

In one instance, a Google employee asked Bard for directions on how to land a plane – only for the service to respond with advice likely to result in a crash, according to Bloomberg.

In another case, Bard purportedly answered a prompt about scuba diving with suggestions “which would likely result in serious injury or death.”

Google CEO Sundar Pichai raised eyebrows when he admitted that the company didn’t “fully understand” its own technology.

“You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got [it] wrong,” Pichai said during an interview on “60 Minutes” last Sunday.

In February, an unnamed Google employee quipped on an internal forum that Bard was “worse than useless” and asked executives not to launch the chatbot in its current state.

“AI ethics has taken a back seat,” Meredith Whittaker, a former Google employee and current president of the privacy-focused Signal Foundation, told Bloomberg. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

Employees who spoke to the outlet said Google executives opted to refer to Bard and other new AI products as “experiments” so that the public would be willing to overlook their early struggles.

Photo: Google’s chatbot has been labeled an “experiment” by the company. (Gado via Getty Images)

As Bard advanced closer to a potential launch, Google purportedly relaxed the requirements meant to dictate when a particular AI product is safe for public use.

In March, Jen Gennai, Google’s AI principles ops & governance lead, overrode an assessment by members of her own team which stated that Bard was not ready for release due to its potential to cause harm, sources told Bloomberg.

Gennai pushed back on the report, saying that internal reviewers suggested “risk mitigations and adjustments to the technology, versus providing recommendations on the eventual product launch.”

A committee of senior leaders from Google’s product, research, and business teams then determines whether the AI project should move forward and what adjustments are needed, Gennai added.

“In this particular review, I added to the list of potential risks from the reviewers and escalated the resulting analysis to this multi-disciplinary council, which determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers,” Gennai said in a statement to The Post.

Photo: Google is scrambling to compete with OpenAI’s ChatGPT. (SOPA Images/LightRocket via Getty Images)

Google spokesperson Brian Gabriel said “responsible AI remains a top priority at the company.”

“We are continuing to invest in the teams that work on applying our AI Principles to our technology,” Gabriel told The Post.

At present, Google’s website for Bard still labels the tool as an “experiment.”

A “FAQ” section included on the site openly declares that Bard “may display inaccurate information or offensive statements.”

“Accelerating people’s ideas with generative AI is truly exciting, but it’s still early days, and Bard is an experiment,” the site says.

Photo: Google said it is committed to responsible AI. (AP)

Bard’s launch has already resulted in some embarrassment for the tech giant.

Last month, app researcher Jane Manchun Wong posted an exchange in which Bard sided with the Justice Department’s antitrust officials in pending litigation against Google, declaring that its creators held a “monopoly on the digital advertising market.”

In February, social media users pointed out that Bard had provided an inaccurate answer about the James Webb Space Telescope in response to a prompt that was included in a company advertisement.

Scrutiny over Google’s Bard chatbot has intensified amid a broader debate over the potential risks associated with the unrestrained development of AI technology.

Billionaire Elon Musk and more than 1,000 experts in the field signed an open letter calling for a six-month pause in the development of advanced AI until proper guardrails were in place.

Despite his safety concerns, Musk is rapidly advancing with the launch of his own AI startup as competition builds in the sector. Google and Microsoft are just two rivals in the increasingly crowded field.

In the “60 Minutes” interview, Pichai declared that AI would eventually impact “every product across every company.”

He also expressed his support for government regulations to address potential risks.

“I think we have to be very thoughtful,” Pichai said. “And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”
