Google Hopes ‘Bard’ Will Outsmart ChatGPT, Microsoft in AI

Google is girding for a battle of wits in the field of artificial intelligence with “Bard,” a conversational service aimed at countering the popularity of the ChatGPT tool backed by Microsoft.

Bard initially will be available exclusively to a group of “trusted testers” before being widely released later this year, according to a Monday blog post from Google CEO Sundar Pichai.

Google’s chatbot is supposed to be able to explain complex subjects, such as outer space discoveries, in terms simple enough for a child to understand. Google also says the service will perform more mundane tasks, such as providing tips for planning a party or suggesting lunch ideas based on what food is left in a refrigerator. Pichai didn’t say in his post whether Bard will be able to write prose in the vein of William Shakespeare, the playwright who apparently inspired the service’s name.

“Bard can be an outlet for creativity, and a launchpad for curiosity,” Pichai wrote.

Google announced Bard’s existence less than two weeks after Microsoft disclosed it’s pouring billions of dollars into OpenAI, the San Francisco-based maker of ChatGPT and other tools that can write readable text and generate new images.

Microsoft’s decision to up the ante on a $1 billion investment that it previously made in OpenAI in 2019 intensified the pressure on Google to demonstrate that it will be able to keep pace in a field of technology that many analysts believe will be as transformational as personal computers, the internet and smartphones have been in various stages over the past 40 years.

In a report last week, CNBC said a team of Google engineers working on artificial intelligence technology “has been asked to prioritize working on a response to ChatGPT.” Bard had been under development as part of a project called “Atlas,” part of Google’s “code red” effort to counter the success of ChatGPT, which has attracted tens of millions of users since its general release late last year while also raising concerns in schools about its ability to write entire essays for students.

Pichai has been emphasizing the importance of artificial intelligence for the past six years, with one of the most visible byproducts materializing in 2021 as part of a system called “Language Model for Dialogue Applications,” or LaMDA, which will be used to power Bard.

Google also plans to begin incorporating LaMDA and other artificial intelligence advances into its dominant search engine to provide more helpful answers to the increasingly complicated questions posed by its billions of users. Without providing a specific timeline, Pichai indicated the artificial intelligence tools will be deployed in Google’s search engine in the near future.

In another sign of Google’s deepening commitment to the field, Google announced last week that it is investing in and partnering with Anthropic, an AI startup led by some former leaders at OpenAI. Anthropic has also built its own AI chatbot named Claude and has a mission centered on AI safety.

Schools Ban ChatGPT amid Fears of Artificial Intelligence-Assisted Cheating

U.S. educators are debating the merits and risks of a new, free artificial intelligence tool called ChatGPT, which students are using to write passable high school essays. So far, there isn’t a reliable way to catch cheating. Matt Dibble has the story.

Technology Brings Hope to Ukraine’s Wounded

The war in Ukraine has left thousands of wounded soldiers, many of whom require the latest technologies to heal and return to normal life. For VOA, Anna Chernikova visited a rehabilitation center near Kyiv, where cutting-edge technology and holistic care are giving soldiers hope. (Myroslava Gongadze contributed to this report. Camera: Eugene Shynkar)

Ransomware Attacks in Europe Target Old VMware, Agencies Say

Cybersecurity agencies in Europe are warning of ransomware attacks exploiting a two-year-old computer bug as Italy experienced widespread internet outages. 

The Italian premier’s office said Sunday night the attacks affecting computer systems in the country involved “ransomware already in circulation” in a product made by cloud technology provider VMware. 

A Friday technical bulletin from a French cybersecurity agency said the attack campaigns target VMware ESXi hypervisors, the software used to run virtual machines.

Palo Alto, California-based VMware fixed the bug back in February 2021, but the attacks are targeting older, unpatched versions of the product. 

The company said in a statement Sunday that its customers should take action to apply the patch if they have not already done so. 

“Security hygiene is a key component of preventing ransomware attacks,” it said. 
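For administrators, the check implied here is straightforward: compare each ESXi host’s reported build number against the minimum build that contains the 2021 fix. The sketch below, in Python, is a hypothetical illustration; the host inventory and the patched-build threshold are placeholder values, and the real threshold depends on the ESXi release line listed in VMware’s advisory.

```python
# Hypothetical patch-triage sketch: flag hosts running builds older than the
# build that contains the 2021 fix. MINIMUM_PATCHED_BUILD is a placeholder;
# take the real value for your ESXi release line from VMware's advisory.
MINIMUM_PATCHED_BUILD = 17_000_000  # placeholder, not an authoritative value

inventory = {  # hypothetical hosts and the build numbers they report
    "esxi-host-01": 17_325_551,
    "esxi-host-02": 16_321_839,
}

for host, build in sorted(inventory.items()):
    status = "patched" if build >= MINIMUM_PATCHED_BUILD else "NEEDS PATCH"
    print(f"{host}: build {build} -> {status}")
```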

The U.S. Cybersecurity and Infrastructure Security Agency said Sunday it is “working with our public and private sector partners to assess the impacts of these reported incidents and providing assistance where needed.” 

The problem attracted particular public attention in Italy on Sunday because it coincided with a nationwide internet outage at telecommunications operator Telecom Italia, which interfered with streaming of the Spezia-Napoli soccer match. The outage appeared largely resolved by the time of the later Derby della Madonnina between Inter Milan and AC Milan, and it was unclear whether it was related to the ransomware attacks.

Seeing Is Believing? Global Scramble to Tackle Deepfakes

Chatbots spouting falsehoods, face-swapping apps crafting porn videos, and cloned voices defrauding companies of millions — the scramble is on to rein in AI deepfakes that have become a misinformation super spreader.

Artificial intelligence is redefining the proverb “seeing is believing,” with a deluge of images created out of thin air and people shown mouthing things they never said in real-looking deepfakes that have eroded online trust.

“Yikes. (Definitely) not me,” tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted expansive rules to regulate deepfakes but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch as they operate anonymously using AI-based software that once demanded specialized skill but is now widely available at low cost.

Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.

And British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.

“I remember just feeling like this video was going to go everywhere — it was horrendous,” Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.

The following month, the British government voiced concern about deepfakes and warned of a popular website that “virtually strips women naked.”

‘Information apocalypse’

With no barriers to creating AI-synthesized text, audio and video, the potential for misuse in identity theft, financial fraud and tarnishing reputations has sparked global alarm.

The Eurasia Group called the AI tools “weapons of mass disruption.”

“Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.

“Advances in deepfakes, facial recognition, and voice synthesis software will render control over one’s likeness a relic of the past.”

This week AI startup ElevenLabs admitted that its voice cloning tool could be misused for “malicious purposes” after users posted a deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s biography “Mein Kampf.”

The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse,” a scenario where many people are unable to distinguish fact from fiction.

“Experts fear this may lead to a situation where citizens no longer have a shared reality or could create societal confusion about which information sources are reliable,” Europol said in a report.

That was demonstrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a match.

Hamlin thanked medical professionals responsible for his recovery, but many who believed conspiracy theories that the COVID-19 vaccine was behind his on-field collapse baselessly labeled his video a deepfake.

‘Super spreader’

China enforced new rules last month that require businesses offering deepfake services to obtain the real identities of their users. The rules also require deepfake content to be appropriately tagged to avoid “any confusion.”

The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability.”

In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.

The European Union, meanwhile, is locked in heated discussions over its proposed “AI Act.”

The law, which the EU is racing to pass this year, will require users to disclose deepfakes, but many fear the legislation could prove toothless if it does not cover creative or satirical content.

“How do you reinstate digital trust with transparency? That is the real question right now,” Jason Davis, a research professor at Syracuse University, told AFP.

“The [detection] tools are coming and they’re coming relatively quickly. But the technology is moving perhaps even quicker. So like cyber security, we will never solve this, we will only hope to keep up.”

Many are already struggling to comprehend advances such as ChatGPT, a chatbot created by the U.S.-based OpenAI that is capable of generating strikingly cogent texts on almost any topic.

In a study, media watchdog NewsGuard, which called ChatGPT the “next great misinformation super spreader,” said most of the chatbot’s responses to prompts related to topics such as COVID-19 and school shootings were “eloquent, false and misleading.”

“The results confirm fears … about how the tool can be weaponized in the wrong hands,” NewsGuard said.

Musk Found Not Liable in Tesla Tweet Trial

Jurors on Friday cleared Elon Musk of liability for investors’ losses in a fraud trial over his 2018 tweets falsely claiming that he had funding in place to take Tesla private.

The tweets sent the Tesla share price on a rollercoaster ride, and Musk was sued by shareholders who said the tycoon acted recklessly in an effort to squeeze investors who had bet against the company.

Jurors deliberated for barely two hours before returning to the San Francisco courtroom to say they unanimously agreed that neither Musk nor the Tesla board perpetrated fraud with the tweets and in their aftermath.

“Thank goodness, the wisdom of the people has prevailed!” tweeted Musk, who had tried but failed to get the trial moved to Texas on the grounds jurors in California would be biased against him.

“I am deeply appreciative of the jury’s unanimous finding of innocence in the Tesla 420 take-private case.”

Attorney Nicholas Porritt, who represents Glen Littleton and other investors in Tesla, had argued in court that the case was about making sure the rich and powerful have to abide by the same stock market rules as everyone else.

“Elon Musk published tweets that were false with reckless disregard as to their truth,” Porritt told the panel of nine jurors during closing arguments.

Porritt pointed to expert testimony estimating that Musk’s claim about funding, which turned out not to be true, cost investors billions of dollars overall, and he argued that Musk and the Tesla board should be made to pay damages.

But Musk attorney Alex Spiro successfully countered that the billionaire may have erred on wording in a hasty tweet, but that he did not set out to deceive anyone.

Spiro also portrayed the mercurial entrepreneur, who now owns Twitter, as having had a troubled childhood and having come to the United States as a poor youth chasing dreams.

No joke

Musk testified during three days on the witness stand that his 2018 tweet about taking Tesla private at $420 a share was no joke and that Saudi Arabia’s sovereign wealth fund was serious about helping him do it.

“To Elon Musk, if he believes it or even just thinks about it then it’s true no matter how objectively false or exaggerated it may be,” Porritt told jurors.

Tesla and its board were also to blame, because they let Musk use his Twitter account to post news about the company, Porritt argued.

The case revolved around a pair of tweets in which Musk said “funding secured” for a project to buy out the publicly traded electric automaker, then in a second tweet added that “investor support is confirmed.”

“He wrote two words ‘funding secured’ that were technically inaccurate,” Spiro said of Musk while addressing jurors.

“Whatever you think of him, this isn’t a bad tweeter trial, it’s a ‘did they prove this man committed fraud?’ trial.”

Musk did not intend to deceive anyone with the tweets and had the connections and wealth to take Tesla private, Spiro contended.

During the trial playing out in federal court in San Francisco, Spiro said that even though the tweets may have been a “reckless choice of words,” they were not fraud.

“I’m being accused of fraud; it’s outrageous,” Musk said while testifying in person.

Musk said he fired off the tweets at issue after learning of a Financial Times story about a Saudi Arabian investment fund wanting to acquire a stake in Tesla.

The trial came at a sensitive time for Musk, who has dominated the headlines with his chaotic takeover of Twitter, where he has laid off more than half of the 7,500 employees and scaled down content moderation.

ChatGPT: The Promises, Pitfalls and Panic

Excitement around ChatGPT — an easy-to-use AI chatbot that can deliver an essay or computer code on request within seconds — has sent schools into panic and turned Big Tech green with envy.

The potential impact of ChatGPT on society remains complicated and unclear, even as its creator on Wednesday announced a paid subscription version in the United States.

Here is a closer look at what ChatGPT is (and is not):

Is this a turning point?  

It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.  

What is less clear is whether ChatGPT is actually a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investments from Microsoft.

Yann LeCun, Chief AI Scientist at Meta and professor at New York University, believes “ChatGPT is not a particularly interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.

LeCun, speaking to the Big Technology Podcast, said ChatGPT is void of “any internal model of the world” and is merely churning “one word after another” based on inputs and patterns found on the internet.

“When working with these AI models, you have to remember that they’re slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, the Silicon Valley venture capital firm.

“Every time you ask a question and pull the arm, you get an answer that could be marvelous… or not… The failures can be extremely unpredictable,” Huang wrote in Ars Technica, the tech news website.
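To make the “one word after another” point concrete, here is a toy sketch of sampling-based, word-by-word generation. The word table and probabilities are invented for illustration; a real model such as the one behind ChatGPT scores tens of thousands of tokens with a neural network at every step, but the loop has the same shape: sample the next word from a probability distribution, append it, repeat.

```python
import random

# Toy "one word after another" generator. Each step samples the next word
# from probabilities over previously seen patterns; there is no model of the
# world, only continuation statistics (all numbers here are invented).
NEXT_WORD = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("moon", 0.3), ("answer", 0.2)],
    "a": [("dog", 0.7), ("riddle", 0.3)],
    "cat": [("sat", 0.6), ("slept", 0.4)],
    "moon": [("glowed", 1.0)],
    "answer": [("emerged", 1.0)],
    "dog": [("barked", 1.0)],
    "riddle": [("remained", 1.0)],
}

def generate(max_words=4):
    word, output = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD.get(word)
        if not choices:  # no learned continuation: stop
            break
        words, weights = zip(*choices)
        word = random.choices(words, weights=weights)[0]  # the slot-machine pull
        output.append(word)
    return " ".join(output)

print(generate())  # e.g., "the cat sat" -- fluent-sounding, no understanding
```

Run it several times and the output changes, which is Huang’s point: the randomness that makes the text feel creative is the same randomness that makes failures unpredictable.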

Just like Google

ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot only uses a part of its capability.  

The true revolution is the humanlike chat, said Jason Davis, research professor at Syracuse University.

“It’s familiar, it’s conversational and guess what? It’s kind of like putting in a Google search request,” he said.

ChatGPT’s rockstar-like success even shocked its creators at OpenAI, which received billions in new financing from Microsoft in January.

“Given the magnitude of the economic impact we expect here, more gradual is better,” OpenAI CEO Sam Altman said in an interview with StrictlyVC, a newsletter.

“We put GPT-3 out almost three years ago… so the incremental update from that to ChatGPT, I felt like should have been predictable and I want to do more introspection on why I was sort of miscalibrated on that,” he said.

The risk, Altman added, was startling the public and policymakers. On Tuesday, his company unveiled a tool for detecting AI-generated text, amid concerns from teachers that students may rely on artificial intelligence to do their homework.

What now?

From lawyers to speechwriters, from coders to journalists, everyone is waiting breathlessly to feel disruption caused by ChatGPT. OpenAI just launched a paid version of the chatbot – $20 per month for an improved and faster service.

For now, officially, the first significant application of OpenAI’s tech will be for Microsoft software products.  

Though details are scarce, most assume that ChatGPT-like capabilities will turn up on the Bing search engine and in the Office suite.

“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I wanted to write with a prompt,” said Davis.

He believes influencers on TikTok and Twitter will be the earliest adopters of this so-called generative AI since going viral requires huge amounts of content and ChatGPT can take care of that in no time.

This of course raises the specter of disinformation and spamming carried out at an industrial scale.  

For now, Davis said the reach of ChatGPT is very limited by computing power, but once this is ramped up, the opportunities and potential dangers will grow exponentially.

And much like the ever-imminent arrival of self-driving cars that never quite happens, experts disagree on whether that is a question of months or years.

Ridicule

LeCun said Meta and Google have refrained from releasing AI as potent as ChatGPT out of fear of ridicule and backlash.

Quieter releases of language-based bots – like Meta’s Blenderbot or Microsoft’s Tay for example – were quickly shown capable of generating racist or inappropriate content.

Tech giants have to think hard before releasing something “that is going to spew nonsense” and disappoint, he said.

Zimbabwe Plans to Build $60 Billion ‘Cyber City’ to Ease Harare Congestion

Zimbabwe plans to build “Zim Cyber City,” a modern capital expected to cost up to $60 billion in raised funds and include new government buildings and a presidential palace. Critics are blasting the plan as wasteful when more than half the population lives in poverty and the government has let the current capital, Harare, fall apart. Columbus Mavhunga reports from Mount Hampden, Zimbabwe. Camera: Blessing Chigwenhembe

US, 8 States Sue Google on Digital Ad Business Dominance

The U.S. Justice Department filed a lawsuit against Alphabet’s Google (GOOGL.O) on Tuesday over allegations that the company abused its dominance of the digital advertising business, according to a court document.

“Google has used anticompetitive, exclusionary, and unlawful means to eliminate or severely diminish any threat to its dominance over digital advertising technologies,” the government said in its antitrust complaint.

The Justice Department asked the court to compel Google to divest its Google Ad Manager suite, including its ad exchange, AdX.

Google did not immediately respond to a request for comment.

The lawsuit is the second federal antitrust complaint filed against Google, alleging violations of antitrust law in how the company acquires or maintains its dominance. The Justice Department lawsuit filed against Google in 2020 focuses on its monopoly in search and is scheduled to go to trial in September.

Eight states joined the department in the lawsuit filed on Tuesday, including Google’s home state of California.

Google shares were down 1.3% on the news.

The lawsuit says “Google has thwarted meaningful competition and deterred innovation in the digital advertising industry, taken supra-competitive profits for itself, prevented the free market from functioning fairly to support the interests of the advertisers and publishers who make today’s powerful internet possible.”

While Google remains the market leader by a long shot, its share of the U.S. digital ad revenue has been eroding, falling to 28.8% last year from 36.7% in 2016, according to Insider Intelligence. Google’s advertising business is responsible for some 80% of its revenue.

AI Tools Can Create New Images, But Who Is the Real Artist?

Countless artists have taken inspiration from “The Starry Night” since Vincent Van Gogh painted the swirling scene in 1889.

Now artificial intelligence systems are doing the same, training themselves on a vast collection of digitized artworks to produce new images you can conjure in seconds from a smartphone app.

The images generated by tools such as DALL-E, Midjourney and Stable Diffusion can be weird and otherworldly but also increasingly realistic and customizable — ask for a “peacock owl in the style of Van Gogh” and they can churn out something that might look similar to what you imagined.

But while Van Gogh and other long-dead master painters aren’t complaining, some living artists and photographers are starting to fight back against the AI software companies creating images derived from their works.

Two new lawsuits — one this week from the Seattle-based photography giant Getty Images — take aim at popular image-generating services for allegedly copying and processing millions of copyright-protected images without a license.

Getty said it has begun legal proceedings in the High Court of Justice in London against Stability AI — the maker of Stable Diffusion — for infringing intellectual property rights to benefit the London-based startup’s commercial interests.

Another lawsuit filed Friday in a U.S. federal court in San Francisco describes AI image-generators as “21st-century collage tools that violate the rights of millions of artists.” The lawsuit, filed by three working artists on behalf of others like them, also names Stability AI as a defendant, along with San Francisco-based image-generator startup Midjourney, and the online gallery DeviantArt.

The lawsuit said AI-generated images “compete in the marketplace with the original images. Until now, when a purchaser seeks a new image ‘in the style’ of a given artist, they must pay to commission or license an original image from that artist.”

Companies that provide image-generating services typically charge users a fee. After a free trial of Midjourney through the chatting app Discord, for instance, users must buy a subscription that starts at $10 per month or up to $600 a year for corporate memberships. The startup OpenAI also charges for use of its DALL-E image generator, and StabilityAI offers a paid service called DreamStudio.

Stability AI said in a statement that “Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”

In a December interview with The Associated Press, before the lawsuits were filed, Midjourney CEO David Holz described his image-making subscription service as “kind of like a search engine” pulling in a wide swath of images from across the internet. He compared copyright concerns about the technology with how such laws have adapted to human creativity.

“Can a person look at somebody else’s picture and learn from it and make a similar picture?” Holz said. “Obviously, it’s allowed for people and if it wasn’t, then it would destroy the whole professional art industry, probably the nonprofessional industry too. To the extent that AIs are learning like people, it’s sort of the same thing and if the images come out differently then it seems like it’s fine.”

The copyright disputes mark the beginning of a backlash against a new generation of impressive tools — some of them introduced just last year — that can generate new images, readable text and computer code on command.

They also raise broader concerns about the propensity of AI tools to amplify misinformation or cause other harm. For AI image generators, that includes the creation of nonconsensual sexual imagery.

Some systems produce photorealistic images that can be impossible to trace, making it difficult to tell the difference between what’s real and what’s AI. And while most have some safeguards in place to block offensive or harmful content, experts say it’s not enough and fear it’s only a matter of time until people utilize these tools to spread disinformation and further erode public trust.

“Once we lose this capability of telling what’s real and what’s fake, everything will suddenly become fake because you lose confidence of anything and everything,” said Wael Abd-Almageed, a professor of electrical and computer engineering at the University of Southern California.

As a test, The Associated Press submitted a text prompt on Stable Diffusion featuring the keywords “Ukraine war” and “Getty Images.” The tool created photo-like images of soldiers in combat with warped faces and hands, pointing and carrying guns. Some of the images also featured the Getty watermark, but with garbled text.
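A test along those lines can be reproduced with the openly released Stable Diffusion weights; the minimal sketch below uses Hugging Face’s diffusers library. The checkpoint name is an assumption (a commonly used public release, not necessarily what the AP used), a GPU is assumed, and outputs vary from run to run.

```python
# Sketch of a text-to-image test similar to the AP's, using openly released
# Stable Diffusion weights via the diffusers library. The checkpoint is a
# commonly used public release (an assumption), and results vary per run.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes an NVIDIA GPU is available

image = pipe("Ukraine war, Getty Images").images[0]  # keywords from the test
image.save("generated.png")
```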

AI can also get things wrong, like feet and fingers or details on ears that can sometimes give away that they’re not real, but there’s no set pattern to look out for. And those visual clues can also be edited. On Midjourney, for instance, users often post on the Discord chat asking for advice on how to fix distorted faces and hands.

With some generated images traveling on social networks and potentially going viral, they can be challenging to debunk since they can’t be traced back to a specific tool or data source, according to Chirag Shah, a professor at the Information School at the University of Washington, who uses these tools for research.

“You could make some guesses if you have enough experience working with these tools,” Shah said. “But beyond that, there is no easy or scientific way to really do this.”

But for all the backlash, there are many people who embrace the new AI tools and the creativity they unleash. Searches on Midjourney, for instance, show curious users are using the tool as a hobby to create intricate landscapes, portraits and art.

There’s plenty of room for fear, but “what else can we do with them?” asked the artist Refik Anadol this week at the World Economic Forum in Davos, Switzerland, where he displayed an exhibit of his AI-generated work.

At the Museum of Modern Art in New York, Anadol designed “Unsupervised,” which draws from artworks in the museum’s prestigious collection — including “The Starry Night” — and feeds them into a massive digital installation generating animations of mesmerizing colors and shapes in the museum lobby.

The installation is “constantly changing, evolving and dreaming 138,000 old artworks at MoMA’s Archive,” Anadol said. “From Van Gogh to Picasso to Kandinsky, incredible, inspiring artists who defined and pioneered different techniques exist in this artwork, in this AI dream world.”

Painter Erin Hanson, whose impressionist landscapes are so popular and easy to find online that she has seen their influence in AI-produced visuals, is not worried about her own prolific output, which brings in $3 million a year.

She does, however, worry about the art community as a whole.

“The original artist needs to be acknowledged in some way or compensated,” Hanson said. “That’s what copyright laws are all about. And if artists aren’t acknowledged, then it’s going to make it hard for artists to make a living in the future.”

FBI Chief Says He’s ‘Deeply Concerned’ by China’s AI Program

FBI Director Christopher Wray said Thursday that he was “deeply concerned” about the Chinese government’s artificial intelligence program, asserting that it was “not constrained by the rule of law.”

Speaking during a panel session at the World Economic Forum in Davos, Switzerland, Wray said Beijing’s AI ambitions were “built on top of massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

He said that left unchecked, China could use artificial intelligence advancements to further its hacking operations, intellectual property theft and repression of dissidents inside the country and beyond.

“That’s something we’re deeply concerned about. I think everyone here should be deeply concerned about,” he said.

More broadly, he said, “AI is a classic example of a technology where I have the same reaction every time. I think, ‘Wow, we can do that?’ And then I think, ‘Oh God, they can do that.’”

Such concerns have long been voiced by U.S. officials. In October 2021, for instance, U.S. counterintelligence officials issued warnings about China’s ambitions in AI as part of a renewed effort to inform business executives, academics and local and state government officials about the risks of accepting Chinese investment or expertise in key industries.

Earlier that year, an AI commission led by former Google CEO Eric Schmidt urged the U.S. to boost its AI skills to counter China, including by pursuing “AI-enabled” weapons.

A spokesperson for the Chinese Embassy in Washington did not immediately respond to a request seeking comment Thursday about Wray’s comments. Beijing has repeatedly accused Washington of fearmongering and attacked U.S. intelligence for its assessments of China.

Tech Layoffs Mount as Microsoft, Amazon Shed Staff

Software giant Microsoft on Wednesday became the latest major company in the tech sector to announce significant job cuts when it reported it would lay off 10,000 employees, or about 5% of its workforce.

Microsoft’s job cuts come just as e-commerce leader Amazon begins a fresh round of 18,000 layoffs, extending a wave of other major cuts at Twitter, Salesforce and dozens of smaller technology firms in recent weeks.

The phenomenon of job losses in the tech sector has global reach but has been keenly felt in Silicon Valley and other West Coast tech hubs in the United States. The website layoffs.fyi, which tracks job cuts in the tech industry, has identified well over 100 tech firms announcing layoffs since January 1 across North and South America, Europe, Asia and Australia. In all, the website has counted more than 1,200 firms making layoffs since the beginning of 2022.

Changing environment

In an interview at the World Economic Forum in Davos, Switzerland, on Wednesday, Microsoft CEO Satya Nadella appeared to suggest that retrenchment in the tech sector was a result of reduced consumer demand.

“During the pandemic, there was rapid acceleration,” Nadella said. “I think we’re going to go through a phase today where there is some amount of normalization in demand.”

He said the company would seek to drive growth by increasing its own productivity. The interview took place before Microsoft officially announced the layoffs.

One major focus of the layoffs, according to multiple media reports, was the division of the company that makes augmented reality systems, including the company’s HoloLens goggles and the Integrated Visual Augmentation System, which until recently were being developed in cooperation with the U.S. Army.

Later in the day in an email to employees, Nadella wrote, “These are the kinds of hard choices we have made throughout our 47-year history to remain a consequential company in this industry that is unforgiving to anyone who doesn’t adapt to platform shifts.”

However, he signaled the company would continue hiring in areas such as artificial intelligence that management believes are strategically important.

Also on Wednesday, Doug Herrington, head of Amazon’s global retail business, said his company was restructuring to meet consumers’ demands but would continue to invest in areas where it saw the potential for growth, including its grocery delivery business.

Stronger, perhaps

Wayne Hochwarter, who teaches business administration at Florida State University, described the layoffs at Microsoft and Amazon as examples of businesses making adjustments to their workforces in the face of a changing business climate.

“I think they overestimated the trends in personal purchasing patterns, and they thought, ‘OK, we’re going to make sure we’re not shorthanded,’” he told VOA. “And then when things softened a little bit, they realized they had hired too many people.”

He also warned against reading too much into the latest layoffs.

“I don’t think the tech sector is going to heck in a handbasket,” he said. “They may have reevaluated where things are going to go, but I don’t see this as a catalyst for sending us into economic deterioration, or anything that’s going to put a crimp on the economy.”

Looking to the future, Hochwarter said, the workforce changes are “probably going to make them stronger companies.”

Weathering the storm

Margaret O’Mara, author of the book “The Code: Silicon Valley and the Remaking of America,” told VOA that the current run of layoffs in the U.S. was just the latest chapter in a long cycle of booms and busts in the tech sector.

In some important respects, she said, it’s a story about more than just a misreading of trends in consumer preferences.

“It’s similar to other downturns, and there have been many — for every boom there was a bust — in that their macro[economic] conditions have shifted,” she said. “Tech is an industry that’s very much fueled by investment capital and the stock market.”

O’Mara said that over the last 10 years, with low interest rates and large amounts of cash flowing through the economy, conditions have been “extraordinary” for the growth of U.S. tech companies. As those conditions change, so does the amount of money investors want to put into tech firms.

However, O’Mara, a professor of American history at the University of Washington, said it was important not to look at conditions today as similar to the catastrophic dot-com bust of 2000.

“Tech is many orders of magnitude larger than it ever has been before,” she said. “We are talking about platform companies that are unlike the dot-coms, which were very young and very frothy, and it was easy for their value to collapse. They weren’t providing the essential services … fundamental to the rest of the economy.”

By contrast, she said, companies like Microsoft and Amazon have deep connections to the broader U.S. economy and should be able to withstand the current economic headwinds.

Difficult for H-1B visa holders

A disproportionate share of workers in the U.S. technology sector are non-citizens who hold H-1B visas, which allow companies to sponsor them. Layoffs are particularly difficult for visa holders — the overwhelming majority of whom are from India — because once their employment is terminated, they have just 60 days to find a new sponsor. Otherwise, they are required to leave the country.

Hochwarter said he thought companies would pull back on hiring H-1B visa workers, at least for the time being.

“My sense is that because that takes a great deal of effort and energy on the part of the employing organization, they’re probably going to start cutting down on those because they’re just not quite as needed,” he said.

On Wednesday, U.S. Secretary of Labor Martin Walsh, speaking at Davos, bemoaned the state of U.S. immigration law, saying it denies the U.S. the workers it needs to drive economic growth.

“We need immigration reform in America. America has always been a country that has depended on immigration. The threat to the American economy long term is not inflation, it’s immigration,” he said. “It’s not having enough workers.”

SpaceX’s Starlink Becomes Crucial Tool in Ukrainian War Effort

When Russia invaded Ukraine, the military and private citizens started using Elon Musk’s SpaceX Starlink, which eventually became key to Ukraine’s resistance. From Kyiv, Myroslava Gongadze tells the story of one Ukrainian engineer who volunteers to support the technology and the soldiers who use it.

Biden Urges Netherlands to Back Restrictions on Exporting Chip Tech to China

President Joe Biden hosted Dutch Prime Minister Mark Rutte on Tuesday at the White House, where he urged the Netherlands to support new U.S. restrictions on exporting chip-making technology to China, a key part of Washington’s strategy in its rivalry against Beijing.

During a brief appearance in front of reporters before their meeting, Biden said that he and Rutte have been working on “how to keep a free and open Indo-Pacific” to “meet the challenges of China.”

“Simply put, our companies, our countries have been so far just lockstep in what we’ve done in our investment to the future. So today, I look forward to discussing how we can further deepen our relationship and securing our supply chains to strengthen our transatlantic partnership,” he said.

ASML Holding NV, maker of the world’s most advanced semiconductor lithography systems, is headquartered in Veldhoven, making the Netherlands key to Washington’s chip push against Beijing. Ahead of Rutte’s visit, Dutch Trade Minister Liesje Schreinemacher said the Netherlands is consulting with European and Asian allies and will not automatically accept the new restrictions that the U.S. Commerce Department launched in October.

“You can’t say that they’ve been pressuring us for two years and now we have to sign on the dotted line. And we won’t,” she said.

Rutte did not mention the semiconductor issue ahead of his meeting with Biden, focusing instead on Russia’s invasion of Ukraine, where the NATO allies have been working together to support Kyiv.

“Let’s stay closely together this year,” Rutte said. “And hopefully, things will move forward in a way which is acceptable for Ukraine.”

China is one of ASML’s biggest clients. CEO Peter Wennink in October played down the impact of the U.S. export control regulations.

“Based on our initial assessment, the new restrictions do not amend the rules governing lithography equipment shipped by ASML out of the Netherlands and we expect the direct impact on ASML’s overall 2023 shipment plan to be limited,” he said.

Shoring up allies

Biden has been shoring up allies, including the Netherlands, Japan and South Korea — home to leading companies that play a critical role in the industry’s supply chain — to limit Beijing’s access to advanced semiconductors. Last week he hosted Japanese Prime Minister Fumio Kishida, who said he backs Biden’s attempt but did not agree to match the sweeping curbs targeting China’s semiconductor and supercomputing industries.

U.S. officials say export restrictions on chips are necessary because China can use semiconductors to advance its military systems, including weapons of mass destruction, and to commit human rights abuses.

The October restrictions follow the U.S. Congress’ passage in July of the CHIPS Act of 2022, which aims to strengthen domestic semiconductor manufacturing, design and research and to reinforce America’s chip supply chains. The legislation also restricts companies that receive U.S. subsidies from investing in and expanding cutting-edge chipmaking facilities in China.

Some information for this story came from AP.

Israel’s Cognyte Won Tender to Sell Spyware to Myanmar Before Coup, Documents Show

Israel’s Cognyte Software Ltd won a tender to sell intercept spyware to a Myanmar state-backed telecommunications firm a month before the Asian nation’s February 2021 military coup, according to documents reviewed by Reuters.

The deal was made even though Israel has claimed it stopped defense technology transfers to Myanmar following a 2017 ruling by Israel’s Supreme Court, according to a legal complaint recently filed with Israel’s attorney general and disclosed Sunday.

While the ruling was subjected to a rare gag order at the request of the state and media cannot cite the verdict, Israel’s government has publicly stated on numerous occasions that defense exports to Myanmar are banned.

The complaint, led by high-profile Israeli human rights lawyer Eitay Mack who spearheaded the campaign for the Supreme Court ruling, calls for a criminal investigation into the deal.

It accuses Cognyte and unnamed defense and foreign ministry officials who supervise such deals of “aiding and abetting crimes against humanity in Myanmar.”

The complaint was filed on behalf of more than 60 Israelis, including a former speaker of the house as well as prominent activists, academics and writers.

The documents about the deal, provided to Reuters and Mack by activist group Justice for Myanmar, are a January 2021 letter with attachments from Myanmar Posts and Telecommunications (MPT) to local regulators that list Cognyte as the winning vendor for intercept technology and note the purchase order was issued “by 30th Dec 2020.”

Intercept spyware can give authorities the power to listen in on calls, view text messages and web traffic including emails, and track the locations of users without the assistance of telecom and internet firms.

Representatives for Cognyte, Myanmar’s military government and MPT did not respond to multiple Reuters requests for comment. Japan’s KDDI Corp and Sumitomo Corp, which have stakes in MPT, declined to comment, saying they were not privy to details on communication interception.

Israel’s attorney general did not respond to requests for comment about the complaint. The foreign affairs ministry did not respond to requests for comment about the deal, while the defense ministry declined to comment.

Two people with knowledge of Myanmar’s intercept plans separately told Reuters the Cognyte system was tested by MPT.

They declined to be identified for fear of retribution by Myanmar’s junta.

MPT uses intercept spyware, a source with direct knowledge of the matter and three people briefed on the issue told Reuters, although they did not identify the vendor. Reuters was unable to determine whether the sale of Cognyte intercept technology to MPT was finalized.

Even before the coup, public concern had mounted in Israel about the country’s defense exports to Myanmar after a brutal 2017 crackdown by the military on the country’s Rohingya population while Aung San Suu Kyi’s government was in power. The crackdown prompted the petition led by Mack that asked the Supreme Court to ban arms exports to Myanmar.

Since the coup, the junta has killed thousands of people including many political opponents, according to the United Nations.

Cognyte under fire

Many governments around the world allow what are commonly called “lawful intercepts” to be used by law enforcement agencies to catch criminals, but the technology is not ordinarily employed without some kind of legal process, cybersecurity experts have said.

According to industry executives and activists previously interviewed by Reuters, Myanmar’s junta is using invasive telecoms spyware without legal safeguards to protect human rights.

Mack said Cognyte’s participation in the tender contradicts statements made by Israeli officials after the Supreme Court ruling that no security exports had been made to Myanmar.

While intercept spyware is typically described as “dual-use” technology for civilian and defense purposes, Israeli law states that “dual-use” technology is classified as defense equipment.

Israeli law also requires companies exporting defense-related products to seek licenses for export and marketing when doing deals. The legal complaint said any officials who granted Cognyte licenses for Myanmar deals should be investigated. Reuters was unable to determine whether Cognyte obtained such licenses.

Around the time of the 2020 deal, the political situation in Myanmar was tense with the military disputing the results of an election won by Suu Kyi.

Norway’s Telenor, one of the biggest telecoms firms in Myanmar before it withdrew from the country last year, also said in a Dec. 3, 2020, briefing and statement that it was concerned about Myanmar authorities’ plans for lawful intercept, citing insufficient legal safeguards.

Nasdaq-listed Cognyte was spun off in February 2021 from Verint Systems Inc, a pioneering giant in Israel’s cybersecurity industry.

Cognyte, which had $474 million in annual revenue for its last financial year, was also banned from Facebook in 2021.

Facebook owner Meta Platforms Inc said in a report Cognyte “enables managing fake accounts across social media platforms.”

Meta said its investigation identified Cognyte customers in a range of countries such as Kenya, Mexico and Indonesia and their targets included journalists and politicians. It did not identify the customers or the targets.

Meta did not respond to a request for further comment.

Norway’s sovereign wealth fund last month dropped Cognyte from its portfolio, saying states said to be customers of its surveillance products and services “have been accused of extremely serious human rights violations.” The fund did not name any states.

Cognyte has not responded publicly to the claims made by Meta or Norway’s sovereign wealth fund.

Fight Over Big Tech Looms in US Supreme Court

An upcoming U.S. Supreme Court case that asks whether tech firms can be held liable for damages related to algorithmically generated content recommendations has the ability to “upend the internet,” according to a brief filed by Google this week.

The case, Gonzalez v. Google LLC, is a long-awaited opportunity for the high court to weigh in on interpretations of Section 230 of the Communications Decency Act of 1996. A provision of federal law that has come under fire from across the political spectrum, Section 230 shields technology firms from liability for content published by third parties on their platforms, but also allows those same firms to curate or bar certain content.

The case arises from a complaint by Reynaldo Gonzalez, whose daughter was killed in an attack by members of the terror group ISIS in Paris in 2015. Gonzalez argues that Google helped ISIS recruit members because YouTube, the online video hosting service owned by Google, used a video recommendation algorithm that suggested videos published by ISIS to individuals who displayed interest in the group.

Gonzalez’s complaint argues that by recommending content, YouTube went beyond simply providing a platform for ISIS videos, and should therefore be held accountable for their effects.
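To make concrete what “recommending content” means mechanically, here is a minimal, hypothetical sketch of interest-based ranking: candidate videos are scored by how much their tags overlap with what a user has already watched. It illustrates the general technique at issue in the case, not YouTube’s actual system.

```python
# Hypothetical interest-based recommender: rank candidates by overlap with
# tags from the user's watch history. Data and scoring are invented for
# illustration; this is not YouTube's algorithm.
from collections import Counter

def recommend(watch_history, catalog, k=3):
    interests = Counter(tag for video in watch_history for tag in video["tags"])
    def score(video):
        return sum(interests[tag] for tag in video["tags"])
    return sorted(catalog, key=score, reverse=True)[:k]

history = [{"title": "Clip A", "tags": ["news", "conflict"]}]
catalog = [
    {"title": "Clip B", "tags": ["cooking"]},
    {"title": "Clip C", "tags": ["conflict", "news"]},
]
print([v["title"] for v in recommend(history, catalog, k=1)])  # ['Clip C']
```

The legal question is whether surfacing “Clip C” to this user is the platform’s own conduct or merely publication of third-party content.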

Dystopia warning

The case has garnered the attention of a multitude of interested parties, including free speech advocates who want tech firms’ liability shield left largely intact. Others argue that because tech firms take affirmative steps to keep certain content off their platforms, their claims to be simple conduits of information ring hollow, and that they should therefore be liable for the material they publish.

In its brief, Google painted a dire picture of what might happen if the latter interpretation were to prevail, arguing that it “would turn the internet into a dystopia where providers would face legal pressure to censor any objectionable content. Some might comply; others might seek to evade liability by shutting their eyes and leaving up everything, no matter how objectionable.”

Not everyone shares Google’s concern.

“Actually all it would do is make it so that Google and other tech companies have to follow the law just like everybody else,” Megan Iorio, senior counsel for the Electronic Privacy Information Center, told VOA.

“Things are not so great on the internet for certain groups of people right now because of Section 230,” said Iorio, whose organization filed a friend of the court brief in the case. “Section 230 makes it so that tech companies don’t have to respond when somebody tells them that non-consensual pornography has been posted on their site and keeps on proliferating. They don’t have to take down other things that a court has found violate the person’s privacy rights. So you know, to [say] that returning Section 230 to its original understanding is going to create a hellscape is hyperbolic.”

Unpredictable effects

Experts said the Supreme Court might try to chart a narrow course that leaves some protections intact for tech firms, but allows liability for recommendations. However, because of the prevalence of algorithmic recommendations on the internet, the only available method to organize the dizzying array of content available online, any ruling that affects them could have a significant impact.

“It has pretty profound implications, because with tech regulation and tech law, things can have unintended consequences,” John Villasenor, a professor of engineering and law and director of the UCLA Institute for Technology, Law and Policy, told VOA.

“The challenge is that even a narrow ruling, for example, holding that targeted recommendations are not protected, would have all sorts of very complicated downstream consequences,” Villasenor said. “If it’s the case that targeted recommendations aren’t protected under the liability shield, then is it also true that search results that are in some sense customized to a particular user are also unprotected?”

26 words

The key language in Section 230 has been called “the 26 words that created the internet.” That section reads as follows:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

At the time the law was drafted in the 1990s, people around the world were flocking to an internet that was still in its infancy. It was an open question whether an internet platform that gave third parties the ability to post content on it, such as a bulletin board service, was legally liable for that content.

Recognizing that a patchwork of state-level libel and defamation laws could leave developing internet companies exposed to crippling lawsuits, Congress drafted language meant to shield them. That protection is credited by many for the fact that U.S. tech firms, particularly in Silicon Valley, rose to dominance on the internet in the 21st century.

Because of the global reach of U.S. technology firms, the ruling in Gonzalez v. Google LLC is likely to echo far beyond the United States when it is handed down.

Legal groundwork

The groundwork for the Supreme Court’s decision to take the case was laid in 2020, when Justice Clarence Thomas wrote in response to an appeal that, “in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by internet platforms.”

That statement by Thomas, arguably the court’s most conservative member, heartened many on the right who are concerned that “Big Tech” firms enjoy too much cultural power in the U.S., including the ability to deny a platform to individuals with whose views they disagree.

Gonzalez v. Google LLC is remarkable in that many cases that make it to the Supreme Court do so in part because lower courts have issued conflicting decisions, requiring an authoritative ruling from the high court to provide legal clarity.

Gonzalez’s case, however, has been dismissed by two lower courts, both of which held that Section 230 rendered Google immune from the suit.

Conservative concerns

Politicians have been calling for reform of Section 230 for years, with both Republicans and Democrats joining the chorus, though frequently for different reasons.

Former President Donald Trump regularly railed against large technology firms, threatening to use the federal government to rein them in, especially when he believed that they were preventing him or his supporters from getting their messages out to the public.

His concern became particularly intense during the early years of the COVID-19 pandemic, when technology firms began working to limit the spread of social media accounts that featured misinformation about the virus and the safety of vaccinations.

Trump was eventually kicked off Twitter and Facebook after using those platforms to spread false claims about the 2020 presidential election, which he lost, and to help organize a rally that preceded the assault on the U.S. Capitol on January 6, 2021.

Major figures in the Republican Party are active in the Gonzalez case. Missouri Senator Josh Hawley and Texas Senator Ted Cruz have both submitted briefs in the case urging the court to crack down on Google and large tech firms in general.

“Confident in their ability to dodge liability, platforms have not been shy about restricting access and removing content based on the politics of the speaker, an issue that has persistently arisen as Big Tech companies censor and remove content espousing conservative political views,” Cruz writes.

Biden calls for reform

Section 230 criticism has come from both sides of the aisle. On Wednesday, President Joe Biden published an essay in The Wall Street Journal urging “Democrats and Republicans to come together to pass strong bipartisan legislation to hold Big Tech accountable.”

Biden argues for a number of reforms, including improved privacy protections for individuals, especially children, and more robust competition, but he leaves little doubt about what he sees as a need for Section 230 reform.

“[W]e need Big Tech companies to take responsibility for the content they spread and the algorithms they use,” he writes. “That’s why I’ve long said we must fundamentally reform Section 230 of the Communications Decency Act, which protects tech companies from legal responsibility for content posted on their sites.”

Report: Iran May Be Using Facial Recognition Technology to Police Hijab Law

A recently published report in a U.S.-based magazine says Iran is likely using facial recognition technology to monitor women’s compliance with the country’s hijab law.

While there are other ways people can be identified, Wired magazine says Iran’s apparent use of facial recognition technology against women is “perhaps the first known instance of a government using face recognition to impose dress law on women based on religious belief.”

Iran announced late last year that it would begin to use facial recognition technology to monitor women’s compliance with the law.

Wired said that since protests erupted across Iran following the death of a young woman who was arrested for wearing her headscarf improperly, Iranian women have reported being arrested for hijab infractions a day or two after attending protests, even though they had no interaction with police at the demonstrations.

Tiandy, a Chinese company blacklisted by the U.S., is a likely provider of facial recognition technology to Iran, although neither it nor Iranian officials responded to a request for comment from Wired.

The company has in the past listed Iran’s Revolutionary Guard Corps and other Iranian police and government agencies as customers. Tiandy also boasted on its website that its technology has helped China identify the country’s ethnic minorities, including Uyghurs.

Journalists Say Elon Musk Needs to Reinstitute Monitoring of Twitter

Concerns linger over Twitter’s stance on free expression and safety since Elon Musk took over the platform in a $44 billion deal.

Since taking ownership in late October, Musk has instituted changes including dissolving an oversight review council, laying off a large portion of the team focused on combating misinformation, and suspending the accounts of several U.S. journalists.

Two media advocacy groups on Wednesday called on Musk to reverse course and implement policies to protect the right to legitimate information and press freedom.

In a joint letter to Twitter, Reporters Without Borders (RSF) and the Committee to Protect Journalists (CPJ) voiced “alarm” that Musk had undermined the legitimacy of Twitter by dissolving the site’s oversight review panel that checked postings for their truthfulness and laying off the majority of Twitter staff who helped combat misinformation.

The journalists’ groups also criticized Musk for “arbitrarily reinstating the accounts of nefarious actors, including known spreaders of misinformation,” and for the platform’s suspension of several reporters, including VOA’s chief national correspondent, Steve Herman.

“Twitter’s policies should be crafted and communicated in a transparent manner … not arbitrarily or based on the company leadership’s personal preferences, perceptions and frustrations,” said the two organizations.

The groups also said Musk should reinstate Twitter’s Trust and Safety Council to review content posted on the site and better monitor attempts to censor information and penalize some individuals, including many journalists.

“Transparency and democratic safeguards must replace Musk’s capricious, arbitrary decision-making,” said Christophe Deloire, secretary-general of RSF.

In December, Twitter notified members of the Trust and Safety Council that the advisory group had been dissolved.

The email to the group said Twitter would work with partners through smaller meetings and regional contacts, said CPJ, a media rights organization that was a member of the council along with RSF.

“Mechanisms such as the Trust and Safety Council help platforms like Twitter to understand how to address harm and counter behavior that targets journalists,” CPJ President Jodie Ginsberg said in a statement. “Safety online can mean survival offline.”

Twitter also has continued its suspension of some journalists, saying it will restore their accounts only if certain posts are deleted.

Those suspended had tweeted about @ElonJet, an account that uses publicly available data to report on Musk’s private jet. That account was also suspended.

Musk had said on Twitter that the @ElonJet account and any accounts that linked to it were suspended because they violated Twitter’s anti-doxxing policy.

Doxxing is maliciously publishing a person’s private or identifying information, such as a phone number or address, on the internet.

The @ElonJet Twitter account, however, used publicly available data. Additionally, none of the journalists who had tweeted about Musk and his shutdown of the account had tweeted location information for his plane. They did report that the @ElonJet account had moved to another platform and named the platform.

Some of the journalists have had their accounts restored after removing content. But VOA’s Herman is still suspended from the platform after refusing to remove tweets.

The veteran correspondent said he was notified this week that his appeal against the permanent suspension was denied. The reason: violating rules against “posting private information.”

Before the account was suspended, Herman had more than 111,000 followers.

“Based on what Musk has previously tweeted and recent media reports, I have concerns that if I don’t give into the demand to delete several posts and reactivate @W7VOA, my Twitter account will eventually be deleted for inactivity or auctioned off,” he told VOA.

Herman, like other journalists, migrated to other social media platforms, including Mastodon, where he gained 40,000 followers. But, he said, “Neither platform has yet to achieve critical mass and thus the influence of Twitter, especially for journalists and policymakers.”

GM, Ford, Google Partner to Promote ‘Virtual’ Power Plants

Companies including GM, Ford, Google and solar energy producers said on Tuesday they would work together to establish standards for scaling up the use of virtual power plants (VPPs), systems for easing loads on electricity grids when supply is short.

Energy transition nonprofit RMI will host the initiative, the Virtual Power Plant Partnership (VP3), which will also aim to shape policy for promoting the use of the systems, the companies said.

Virtual power plants pool together thousands of decentralized energy resources like electric vehicles or electric heaters controlled by smart thermostats.

With permission from customers, they use software to respond to electricity shortages, for example by switching thousands of household batteries, such as those in EVs, from charge to discharge mode, or by prompting power-hungry devices, such as water heaters, to cut back their consumption.
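The dispatch logic described above can be sketched in a few lines of code. The Python below is a minimal, hypothetical illustration of that charge-to-discharge and curtailment decision, not any VP3 member’s actual software; every name and number in it is invented for the example.

from dataclasses import dataclass

# Hypothetical sketch of a VPP aggregator's dispatch pass. All names and
# numbers are invented; real systems weigh market prices, forecasts and
# device constraints that this example ignores.

@dataclass
class Resource:
    name: str
    kind: str            # "ev_battery" or "water_heater"
    available_kw: float  # power the device can discharge or shed
    opted_in: bool       # customer gave permission

def dispatch(resources, shortfall_kw):
    """Pick opted-in resources, largest first, until the shortfall is covered."""
    actions, covered = [], 0.0
    for r in sorted(resources, key=lambda r: -r.available_kw):
        if covered >= shortfall_kw:
            break
        if not r.opted_in:
            continue  # never touch a device without customer permission
        if r.kind == "ev_battery":
            actions.append((r.name, "switch from charge to discharge"))
        else:
            actions.append((r.name, "curtail consumption"))
        covered += r.available_kw
    return actions, covered

fleet = [
    Resource("EV-1", "ev_battery", 7.2, True),
    Resource("EV-2", "ev_battery", 11.0, True),
    Resource("WH-1", "water_heater", 4.5, True),
    Resource("WH-2", "water_heater", 4.5, False),  # not enrolled
]

actions, covered = dispatch(fleet, shortfall_kw=15.0)
for name, action in actions:
    print(f"{name}: {action}")
print(f"Covered {covered:.1f} kW of a 15.0 kW shortfall")

Scaled from four devices to thousands, the same pooling idea is what lets an aggregator present the fleet to a grid operator as a single dispatchable “plant.”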

VPPs are positioned for explosive growth in the United States, where the 2022 Inflation Reduction Act has created or enlarged tax incentives for electric cars, electric water heaters, solar panels and other devices whose output and consumption can be coordinated to smooth grid load.

RMI estimates that by 2030, VPPs could reduce U.S. peak demand by 60 gigawatts, the average consumption of 50 million households, and by more than 200 GW by 2050.
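Taken at face value, the 2030 figure implies an average draw of about 1.2 kilowatts per home: 60 gigawatts is 60 million kilowatts, divided across 50 million households.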

“Virtual power plants will enable grid planners and grid operators to (better manage) growing electricity demand from vehicles, from buildings and from industry, and make sure that the grid can stay reliable even in the face of ongoing extreme weather challenges and aging physical infrastructure,” said Mark Dyson, managing director with the carbon-free electricity program at RMI.

Rob Threlkeld, director of global energy strategy at General Motors, told Reuters that VP3 would be able to “show that EVs can become a reliable asset to the retail utility and/or the retail transmission operator” and “can be an asset to a homeowner and to fleet customers.”

VPPs have already improved grid reliability in such countries as Germany and Australia and in some U.S. states.

During an extreme heat wave last August, the California Independent System Operator, the state’s wholesale market operator, avoided blackouts by calling on all available resources, including VPPs, to dispatch electricity. Google Nest smart thermostats contributed to easing the load.

“That is increasingly going to be required to make sure that the grid remains resilient, that we avoid blackouts and that we enable the grid to become cleaner and greener,” said Parag Chokshi, director of Google’s Nest Renew.

Other founding members of VP3 include Ford, SunPower and Sunrun.

Virgin Orbit Rocket Carrying Satellites Fails to Reach Orbit

A mission to launch the first satellites into orbit from Western Europe suffered an “anomaly” Tuesday, Virgin Orbit said.  

The U.S.-based company attempted its first international launch on Monday, using a modified jumbo jet to carry one of its rockets from Cornwall in southwestern England to the Atlantic Ocean, where the rocket was released. The rocket was supposed to carry nine small satellites for mixed civil and defense use into orbit.

But about two hours after the plane took off, the company reported that the mission encountered a problem. 

“We appear to have an anomaly that has prevented us from reaching orbit. We are evaluating the information,” Virgin Orbit said on Twitter.  

Virgin Orbit, which is listed on the NASDAQ stock exchange, was founded by British billionaire Richard Branson. It has previously completed four similar launches from California. 

Hundreds who gathered for the launch cheered as a repurposed Virgin Atlantic Boeing 747 aircraft, named “Cosmic Girl,” took off from Cornwall late Monday. Around an hour into the flight, the plane released the rocket at around 35,000 feet (about 10,700 meters) over the Atlantic Ocean south of Ireland.

The plane, piloted by a Royal Air Force pilot, returned to Cornwall after releasing the rocket. 

Some of the satellites are meant for U.K. defense monitoring, while others are for businesses such as those working in navigational technology. One Welsh company is looking to manufacture materials such as electronic components in space.  

U.K. officials had high hopes for the mission. Ian Annett, deputy chief executive at the U.K. Space Agency, said Monday it marked a “new era” for his country’s space industry. There was strong market demand for small satellite launches, Annett said, and the U.K. has ambitions to be “the hub of European launches.”  

In the past, satellites produced in the U.K. had to be sent to spaceports in other countries to make their journey into space. 

The mission was a collaboration between the U.K. Space Agency, the Royal Air Force, Virgin Orbit and Cornwall Council.  

The launch was originally planned for late last year, but it was postponed because of technical and regulatory issues.