Hate Passwords? You’re in Luck — Google Is Sidelining Them

Good news for all the password-haters out there: Google has taken a big step toward making them an afterthought by adding “passkeys” as a more straightforward and secure way to log into its services. 

Here’s what you need to know: 

What are passkeys?  

Passkeys offer a safer alternative to passwords and texted confirmation codes. Users won’t ever see them directly; instead, an online service like Gmail will use them to communicate directly with a trusted device such as your phone or computer to log you in. 

All you’ll have to do is verify your identity on the device using a PIN unlock code, biometrics such as a fingerprint or face scan, or a more sophisticated physical security dongle. 

Google designed its passkeys to work with a variety of devices, so you can use them on iPhones, Macs and Windows computers, as well as Google’s own Android phones. 
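
Under the hood, passkeys build on the WebAuthn web standard, which lets a website ask the browser and a trusted device to create a cryptographic key pair and unlock it with a PIN or biometric check. As a rough illustration only (the site name, user details and challenge handling below are placeholder assumptions, not Google's actual implementation), a registration request from a website looks something like this:

```typescript
// Rough sketch of a passkey registration request using the browser's WebAuthn API.
// The site name, user details and challenge handling are placeholder assumptions.
async function createPasskey(challengeFromServer: ArrayBuffer): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: challengeFromServer,                      // random bytes issued by the website
      rp: { name: "Example Site", id: "example.com" },     // the site the passkey is tied to
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256 public-key credential
      authenticatorSelection: {
        residentKey: "required",                           // store the passkey on the device
        userVerification: "required",                      // unlock with PIN, fingerprint or face
      },
    },
  });
}
```

Because the private key never leaves the device and the credential is bound to the site's own domain, a phishing site cannot reuse it elsewhere.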

Why are passkeys necessary?  

Thanks to clever hackers and human fallibility, passwords are just too easy to steal or defeat. And making them more complex just opens the door to users defeating themselves. 

For starters, many people choose passwords they can remember — and easy-to-recall passwords are also easy to hack. For years, analysis of hacked password caches found that the most common password in use was “password123.” A more recent study by the password manager NordPass found that it’s now just “password.” This isn’t fooling anyone. 

Passwords are also frequently compromised in security breaches. Stronger passwords are more secure, but only if you choose ones that are unique, complex and non-obvious. And once you’ve settled on “erVex411$%” as your password, good luck remembering it. 

In short, passwords put security and ease of use directly at odds. Software-based password managers, which can create and store complex passwords for you, are valuable tools that can improve security. But even password managers have a master password you need to protect, and that plunges you back into the swamp. 

In addition to sidestepping all those problems, passkeys have one additional advantage over passwords. They’re specific to particular websites, so scammer sites can’t steal a passkey from a dating site and use it to raid your bank account. 

How do I start using passkeys?  

The first step is to enable them for your Google account. On any trusted phone or computer, open the browser and sign into your Google account. Then visit the page g.co/passkeys and click the option to “start using passkeys.” Voila! The passkey feature is now activated for that account. 

If you’re on an Apple device, you’ll first be prompted to set up the Keychain app if you’re not already using it; it securely stores passwords and now passkeys, as well. 

The next step is to create the actual passkeys that will connect your trusted device. If you’re using an Android phone that’s already logged into your Google account, you’re most of the way there; Android phones are automatically ready to use passkeys, though you still have to enable the function first. 

On the same Google account page noted above, look for the “Create a passkey” button. Pressing it will open a window and let you create a passkey either on your current device or on another device. There’s no wrong choice; the system will simply notify you if that passkey already exists. 

If you’re on a PC that can’t create a passkey, it will open a QR code that you can scan with the ordinary cameras on iPhones and Android devices. You may have to move the phone closer until the message “Set up passkey” appears on the image. Tap that and you’re on your way. 

And then what?  

From that point on, signing into Google will only require you to enter your email address. If you’ve gotten passkeys set up properly, you’ll simply get a message on your phone or other device asking you for your fingerprint, your face or a PIN.

Of course, your password is still there. But if passkeys take off, odds are good you won’t be needing it very much. You may even choose to delete it from your account someday. 

‘Godfather of AI’ Quits Google to Warn of the Technology’s Dangers

A computer scientist often dubbed “the godfather of artificial intelligence” has quit his job at Google to speak out about the dangers of the technology, U.S. media reported Monday.

Geoffrey Hinton, who created a foundation technology for AI systems, told The New York Times that advancements made in the field posed “profound risks to society and humanity”.

“Look at how it was five years ago and how it is now,” he was quoted as saying in the piece, which was published on Monday. “Take the difference and propagate it forwards. That’s scary.”

Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation.

“It is hard to see how you can prevent the bad actors from using it for bad things,” he told The Times.

Jobs could be at risk

In 2022, Google and OpenAI — the startup behind the popular AI chatbot ChatGPT — started building systems using much larger amounts of data than before.

Hinton told The Times he believed these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.

“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he told the paper.

While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.

AI “takes away the drudge work” but “might take away more than that,” he told The Times.

Concern about misinformation

The scientist also warned about the potential spread of misinformation created by AI, telling The Times that the average person will “not be able to know what is true anymore.”

Hinton notified Google of his resignation last month, The Times reported.

Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to U.S. media.

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI,” the statement added.

“We’re continually learning to understand emerging risks while also innovating boldly.”

In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.

An open letter, signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.

Hinton did not sign that letter at the time, but told The New York Times that scientists should not “scale this up more until they have understood whether they can control it.”

EU Tech Tsar Vestager Sees Political Agreement on AI Law This Year 

The European Union is likely to reach a political agreement this year that will pave the way for the world’s first major artificial intelligence (AI) law, the bloc’s tech regulation chief, Margrethe Vestager, said on Sunday.

This follows a preliminary deal reached on Thursday by members of the European Parliament to push through the draft of the EU’s Artificial Intelligence Act to a vote on May 11. Parliament will then thrash out the bill’s final details with EU member states and the European Commission before it becomes law.

At a press conference after a Group of Seven digital ministers’ meeting in Takasaki, Japan, Vestager said the EU AI Act was “pro-innovation” since it seeks to mitigate the risks of societal damage from emerging technologies.

Regulators around the world have been trying to find a balance where governments could develop “guardrails” on emerging artificial intelligence technology without stifling innovation.

“The reason why we have these guardrails for high-risk use cases is that cleaning up … after a misuse by AI would be so much more expensive and damaging than the use case of AI in itself,” Vestager said.

While the EU AI Act is expected to be passed this year, lawyers have said it will take a few years for it to be enforced. But Vestager said businesses could already start considering the implications of the new legislation.

“There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence,” she told Reuters in an interview.

While research on AI has been going on for years, the sudden popularity of generative AI applications such as OpenAI’s ChatGPT and Midjourney has led to a scramble by lawmakers to find ways to regulate any uncontrolled growth.

An organization backed by Elon Musk and European lawmakers involved in drafting the EU AI Act are among those to have called for world leaders to collaborate to find ways to stop advanced AI from creating disruptions.

Digital ministers of the G-7 advanced nations on Sunday also agreed to adopt “risk-based” regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.

“It is important that our democracy paved the way and put in place the rules to protect us from its abusive manipulation – AI should be useful but it shouldn’t be manipulating us,” said German Transport Minister Volker Wissing.

This year’s G-7 meeting was also attended by representatives from Indonesia, India and Ukraine.

UK Blocks Microsoft-Activision Gaming Deal, Biggest in Tech

British antitrust regulators on Wednesday blocked Microsoft’s $69 billion purchase of video game maker Activision Blizzard, thwarting the biggest tech deal in history over worries that it would stifle competition for popular titles like Call of Duty in the fast-growing cloud gaming market.

The Competition and Markets Authority said in its final report that “the only effective remedy” to the substantial loss of competition “is to prohibit the Merger.” The companies have vowed to appeal.

The all-cash deal faced stiff opposition from rival Sony, which makes the PlayStation gaming system, and also was being scrutinized by regulators in the U.S. and Europe over fears that it would give Microsoft and its Xbox console control of hit franchises like Call of Duty and World of Warcraft.

The U.K. watchdog’s concerns centered on how the deal would affect cloud gaming, which streams to tablets, phones and other devices and frees players from buying expensive consoles and gaming computers. Gamers can keep playing major Activision titles, including mobile games like Candy Crush, on the platforms they typically use.

Cloud gaming has the potential to change the industry by giving people more choice over how and where they play, said Martin Coleman, chair of the Competition and Markets Authority’s independent expert panel investigating the deal.

“This means that it is vital that we protect competition in this emerging and exciting market,” he said.

The decision underscores Europe’s reputation as the global leader in efforts to rein in the power of Big Tech companies. A day earlier, the U.K. government unveiled draft legislation that would give regulators more power to protect consumers from online scams and fake reviews and boost digital competition.

The U.K. decision further dashes Microsoft’s hopes that a favorable outcome could help it resolve a lawsuit brought by the U.S. Federal Trade Commission. A trial before the FTC’s in-house judge is set to begin Aug. 2. The European Union’s decision, meanwhile, is due May 22.

Activision lashed out, portraying the watchdog’s decision as a bad signal to international investors in the United Kingdom at a time when the British economy faces severe challenges.

The game maker said it would “work aggressively” with Microsoft to appeal, asserting that the move “contradicts the ambitions of the U.K.” to be an attractive place for tech companies.

“We will reassess our growth plans for the U.K. Global innovators large and small will take note that — despite all its rhetoric — the U.K. is clearly closed for business,” Activision said.

Redmond, Washington-based Microsoft also signaled it wasn’t ready to give up.

“We remain fully committed to this acquisition and will appeal,” President Brad Smith said in a statement. The decision “rejects a pragmatic path to address competition concerns” and discourages tech innovation and investment in Britain, he said.

“We’re especially disappointed that after lengthy deliberations, this decision appears to reflect a flawed understanding of this market and the way the relevant cloud technology actually works,” Smith said.

It’s not the first time British regulators have flexed their antitrust muscles on a Big Tech deal. They previously blocked Facebook parent Meta’s purchase of Giphy over fears it would limit innovation and competition. The social media giant appealed the decision to a tribunal but lost and was forced to sell off the GIF sharing platform.

When it comes to gaming, Microsoft already has a strong position in the cloud computing market, and regulators concluded that if the deal went through, it would reinforce the company’s advantage by giving it control of key game titles.

In an attempt to ease concerns, Microsoft struck deals with Nintendo and some cloud gaming providers to license Activision titles like Call of Duty for 10 years — offering the same to Sony.

The watchdog said it reviewed Microsoft’s remedies “in considerable depth” but found they would require its oversight, whereas preventing the merger would allow cloud gaming to develop without intervention.

Twitter Changes Stoke Russian, Chinese Propaganda Surge

Twitter accounts operated by authoritarian governments in Russia, China and Iran are benefiting from recent changes at the social media company, researchers said Monday, making it easier for them to attract new followers and broadcast propaganda and disinformation to a larger audience. 

The platform is no longer labeling state-controlled media and propaganda agencies, and will no longer prohibit their content from being automatically promoted or recommended to users. Together, the two changes, both made in recent weeks, have supercharged the Kremlin’s ability to use the U.S.-based platform to spread lies and misleading claims about its invasion of Ukraine, U.S. politics and other topics. 

Russian state media accounts are now earning 33% more views than they were just weeks ago, before the change was made, according to findings released Monday by Reset, a London-based non-profit that tracks authoritarian governments’ use of social media to spread propaganda. Reset’s findings were first reported by The Associated Press. 

The increase works out to more than 125,000 additional views per post. Those posts included ones suggesting that the CIA had something to do with the September 11, 2001, attacks on the U.S., that Ukraine’s leaders are embezzling foreign aid to their country, and that Russia’s invasion of Ukraine was justified because the U.S. was running clandestine biowarfare labs in the country. 

State media agencies operated by Iran and China have seen similar increases in engagement since Twitter quietly made the changes. 

The about-face from the platform is the latest development since billionaire Elon Musk purchased Twitter last year. Since then, he has ushered in a confusing new verification system, laid off much of the company’s staff (including those dedicated to fighting misinformation), allowed back neo-Nazis and others formerly suspended from the site, and ended the site’s policy prohibiting dangerous COVID-19 misinformation. Hate speech and disinformation have thrived. 

Before the most recent change, Twitter affixed labels reading “Russia state-affiliated media” to let users know the origin of the content. It also throttled back the Kremlin’s online engagement by making the accounts ineligible for automatic promotion or recommendation—something it regularly does for ordinary accounts as a way to help them reach bigger audiences. 

The labels quietly disappeared after National Public Radio and other outlets protested Musk’s plans to label their outlets as state-affiliated media, too. NPR then announced it would no longer use Twitter, saying the label was misleading, given NPR’s editorial independence, and would damage its credibility. 

Reset’s conclusions were confirmed by the Atlantic Council’s Digital Forensic Research Lab (DFRL), where researchers determined the changes were likely made by Twitter late last month. Many of the dozens of previously labeled accounts had been steadily losing followers since Twitter began using the labels. But after the change, many accounts saw big jumps in followers. 

RT Arabic, one of Russia’s most popular propaganda accounts on Twitter, had fallen to less than 5,230,000 followers on January 1, but rebounded after the change was implemented, the DFRL found. It now has more than 5,240,000 followers. 

Before the change, users interested in seeking out Kremlin propaganda had to search specifically for the account or its content. Now, it can be recommended or promoted like any other content. 

“Twitter users no longer must actively seek out state-sponsored content in order to see it on the platform; it can just be served to them,” the DFRL concluded. 

Twitter did not respond to questions about the change or the reasons behind it. Musk has made past comments suggesting he sees little difference between state-funded propaganda agencies operated by authoritarian strongmen and independent news outlets in the west.

“All news sources are partially propaganda,” he tweeted last year, “some more than others.”

Writer, Adviser, Poet, Bot: How ChatGPT Could Transform Politics

The AI bot ChatGPT has passed exams, written poetry, and been deployed in newsrooms, and now politicians are seeking it out — but experts are warning against rapid uptake of a tool also famous for fabricating “facts.”

The chatbot, released last November by U.S. firm OpenAI, has quickly moved center stage in politics — particularly as a way of scoring points.

Japanese Prime Minister Fumio Kishida recently took a direct hit from the bot when he answered some innocuous questions about health care reform from an opposition MP.

Unbeknownst to the PM, his adversary had generated the questions with ChatGPT. The MP also generated answers that he claimed were “more sincere” than Kishida’s.

The PM hit back that his own answers had been “more specific.”

French trade union boss Sophie Binet was on-trend when she drily assessed a recent speech by President Emmanuel Macron as one that “could have been done by ChatGPT.”

But the bot has also been used to write speeches and even help draft laws. 

“It’s useful to think of ChatGPT and generative AI in general as a cliche generator,” David Karpf of George Washington University in the U.S. said during a recent online panel. 

“Most of what we do in politics is also cliche generation.”

‘Limited added value’

Nowhere has the enthusiasm for grandstanding with ChatGPT been keener than in the United States.

Last month, Congresswoman Nancy Mace gave a five-minute speech at a Senate committee enumerating potential uses and harms of AI — before delivering the punchline that “every single word” had been generated by ChatGPT.

Local U.S. politician Barry Finegold had already gone further, though, pronouncing in January that his team had used ChatGPT to draft a bill for the Massachusetts Senate.

The bot reportedly introduced original ideas to the bill, which is intended to rein in the power of chatbots and AI.

Anne Meuwese from Leiden University in the Netherlands wrote in a column for Dutch law journal RegelMaat last week that she had carried out a similar experiment with ChatGPT and also found that the bot introduced original ideas.

But while ChatGPT was to some extent capable of generating legal texts, she wrote that lawmakers should not fall over each other to use the tool.

“Not only is much still unclear about important issues such as environmental impact, bias and the ethics at OpenAI … the added value also seems limited for now,” she wrote.

Agitprop bots

The added value might be more obvious lower down the political food chain, though, where staffers on the campaign trail face a treadmill of repetitive tasks.

Karpf suggested AI could be useful for generating emails asking for donations — necessary messages that were not intended to be masterpieces.

This raises an issue of whether the bots can be trained to represent a political point of view.

ChatGPT has already provoked a storm of controversy over its apparent liberal bias — the bot initially refused to write a poem praising Donald Trump but happily churned out couplets for his successor as U.S. president, Joe Biden.

Billionaire magnate Elon Musk has spied an opportunity. Despite warning that AI systems could destroy civilization, he recently promised to develop TruthGPT, an AI text tool stripped of the perceived liberal bias.

Perhaps he needn’t have bothered. New Zealand researcher David Rozado already ran an experiment retooling ChatGPT as RightWingGPT — a bot on board with family values, liberal economics and other right-wing rallying cries.

“Critically, the computational cost of trialling, training and testing the system was less than $300,” he wrote on his Substack blog in February.

Not to be outdone, the left has its own “Marxist AI.”

The bot was created by the founder of Belgian satirical website Nordpresse, who goes by the pseudonym Vincent Flibustier.

He told AFP his bot just sends queries to ChatGPT with the command to answer as if it were an “angry trade unionist.”
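
A bot like that is essentially a thin wrapper around ChatGPT: every user question is forwarded with a fixed persona instruction attached. Below is a minimal sketch of the pattern, assuming OpenAI's chat completions API and an illustrative model name and prompt, not Nordpresse's actual code:

```typescript
// Sketch of the "persona wrapper" pattern described above, using OpenAI's chat API.
// The model name, prompt wording and client setup are illustrative assumptions.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function askAsAngryUnionist(question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      // Fixed persona instruction prepended to every forwarded query
      { role: "system", content: "Answer every question as if you were an angry trade unionist." },
      { role: "user", content: question },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```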

The malleability of chatbots is central to their appeal but it goes hand-in-hand with the tendency to generate untruths, making AI text generators potentially hazardous allies for the political class.

“You don’t want to become famous as the political consultant or the political campaign that blew it because you decided that you could have a generative AI do [something] for you,” said Karpf. 

US Invests in Alternative Solar Tech, More Solar for Renters

The Biden administration announced more than $80 million in funding Thursday in a push to produce more solar panels in the U.S., make solar energy available to more people, and pursue superior alternatives to the ubiquitous sparkly panels made with silicon.

Part of the initiative, spearheaded by the U.S. Department of Energy (DOE), is known as community solar, which encompasses a variety of arrangements where renters and people who don’t control their rooftops can still get their electricity from solar power. Two weeks ago, Vice President Kamala Harris announced what the administration said was the largest community solar effort ever in the United States.

Now it is set to spend $52 million on 19 solar projects across a dozen states, including $10 million from the infrastructure law, as well as $30 million on technologies that will help integrate solar electricity into the grid.

The DOE also selected 25 teams to participate in a $10 million competition designed to fast-track the efforts of solar developers working on community solar projects.

The Inflation Reduction Act already offers incentives to build large solar generation projects, such as renewable energy tax credits. But Ali Zaidi, White House national climate adviser, said the new money focuses on meeting the nation’s climate goals in a way that benefits more communities.

“It’s lifting up our workers and our communities. And that’s, I think, what really excites us about this work,” Zaidi said. “It’s a chance not just to tackle the climate crisis, but to bring economic opportunity to every zip code of America.”

The investments will help people save on their electricity bills and make the electricity grid more reliable, secure, and resilient in the face of a changing climate, said Becca Jones-Albertus, director of the energy department’s Solar Energy Technologies Office.

Jones-Albertus said she’s particularly excited about the support for community solar projects, since half of Americans don’t live in a situation where they can buy their own solar panels and put them on the roof.

Michael Jung, executive director of the ICF Climate Center, agreed. “Community solar can help address equity concerns, as most current rooftop solar panels benefit owners of single-family homes,” he said.

In typical community solar projects, households can invest in or subscribe to part of a larger solar array offsite. “What we’re doing here is trying to unlock the community solar market,” Jones-Albertus said.

The U.S. has 5.3 gigawatts of installed community solar capacity currently, according to the latest estimates. The goal is that by 2025, five million households will have access to it — about three times as many as today — saving $1 billion on their electricity bills, according to Jones-Albertus.

The new funding also highlights investment in a next generation of solar technologies, intended to wring more electricity out of the same amount of solar panels. Currently only about 20% of the sun’s energy is converted to electricity in crystalline silicon solar cells, which is what most solar panels are made of. There has long been hope for higher efficiency, and today’s announcement puts some money towards developing two alternatives: perovskite and cadmium telluride (CdTe) solar cells. Zaidi said this will allow the U.S. to be “the innovation engine that tackles the climate crisis.”

Joshua Rhodes, a scientist at the University of Texas at Austin, said the investment in perovskites is good news. They can be produced more cheaply than silicon and are far more tolerant of defects, he said. They can also be built into textured and curved surfaces, which opens up more applications for their use than traditional rigid panels. Most silicon is produced in China and Russia, Rhodes pointed out.

Cadmium telluride solar can be made quickly and at a low cost, but further research is needed to improve how efficient the material is at converting sunlight to electrons.

Cadmium is also toxic and people shouldn’t be exposed to it. Jones-Albertus said that in cadmium telluride solar technology, the compound is encapsulated in glass and additional protective layers.

The new funds will also help recycle solar panels and reuse rare earth elements and materials. “One of the most important ways we can make sure CdTe remains in a safe compound form is ensuring that all solar panels made in the U.S. can be reused or recycled at the end of their life cycle,” Jones-Albertus explained.

Recycling solar panels also reduces the need for mining, which damages landscapes and uses a lot of energy, in part to operate the heavy machinery. Eight of the projects in Thursday’s announcement focus on improving solar panel recycling, for a total of about $10 million.

Clean energy is a fit for every state in the country, the administration said. One solar project in Shungnak, Alaska, was able to eliminate the need to keep making electricity by burning diesel fuel, a method sometimes used in remote communities that is not healthy for people and contributes to climate change.

“Alaska is not a place that folks often think of when they think about solar, but this energy can be an economic and affordable resource in all parts of the country,” said Jones-Albertus.

Did the AI-Generated Drake Song Breach Copyright?

A viral AI-generated song imitating Drake and The Weeknd was pulled from streaming services this week, but did it breach copyright as claimed by record label Universal?

Created by someone called @ghostwriter, Heart On My Sleeve racked up millions of listens before Universal Music Group asked for its removal from Spotify, Apple Music and other platforms.

However, Andres Guadamuz, who teaches intellectual property law at Britain’s University of Sussex, is not convinced that the song breached copyright.

As similar cases look set to multiply — with an uncanny AI replication of Liam Gallagher from Oasis causing buzz — he spoke to AFP about some of the issues being raised.

Did the song breach copyright?

The underlying music on Heart On My Sleeve was new, only the sound of the voice was familiar, “and you can’t copyright the sound of someone’s voice,” Guadamuz said.

Perhaps the furor around AI impersonators may lead to copyright being expanded to include voice, rather than just melody, lyrics and other created elements, “but that would be problematic,” Guadamuz added.

“What you’re protecting with copyright is the expression of an idea, and voice isn’t really that,” he said. 

He said Universal probably claimed copyright infringement because it is the simplest route to removing content, with established procedures in place with streaming platforms.

Were other rights breached?

An AI-generated impersonator may be breaching other laws.

If an artist has a distinctive voice or image, this is potentially protected under “publicity rights” in the United States or similar image rights in other countries.

Bette Midler won a case against Ford in 1988 for using an impersonator of her in an ad. Tom Waits won a similar case in 1993 against the potato chip company Frito-Lay.

The problem, said Guadamuz, is that enforcement of these rights is “very hit and miss” and taken much more seriously in some countries than others.

And streaming platforms currently lack straightforward mechanisms for removing content seen as breaching image rights.

What comes next?

The big upcoming legal fight is over how AI programs are trained.

It can be argued that inputting existing Drake and The Weeknd songs to train an AI program is a breach of copyright, but Guadamuz said this issue was far from settled.

“You need to copy the music in order to train the AI and so that unauthorized copying could potentially be copyright infringement,” he said.

“But defendants will say it’s fair use. They are using it to train a machine, teaching it to listen to music, and then removing the copies,” he said. “Ultimately, we will have to wait and see for the case law to be decided.”

But it is almost certainly too late to stem the flood.

“Bands are going to have to decide whether they want to pursue this in court, and copyright cases are expensive,” said Guadamuz.

“Some artists may lean into the technology and start using it themselves, especially if they start losing their voice.” 

US-China Competition in Tech Expands to AI Regulations

Competition between the U.S. and China in artificial intelligence has expanded into a race to design and implement comprehensive AI regulations.

The efforts to come up with rules to ensure AI’s trustworthiness, safety and transparency come at a time when governments around the world are exploring the impact of the technology on national security and education.

ChatGPT, a chatbot that mimics human conversation, has received massive attention since its debut in November. Its ability to give sophisticated answers to complex questions with a language fluency comparable to that of humans has caught the world by surprise. Yet its many flaws, including its ostensibly coherent responses laden with misleading information and apparent bias, have prompted tech leaders in the U.S. to sound the alarm.

“What happens when something vastly smarter than the smartest person comes along in silicon form? It’s very difficult to predict what will happen in that circumstance,” said Tesla Chief Executive Officer Elon Musk in an interview with Fox News. He warned that artificial intelligence could lead to “civilization destruction” without regulations in place.

Google CEO Sundar Pichai echoed that sentiment. “Over time there has to be regulation. There have to be consequences for creating deep fake videos which cause harm to society,” Pichai said in an interview with CBS’s “60 Minutes” program.

Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, told VOA Mandarin, “Business leaders understand that regulators will be watching this space closely, and they have an interest in shaping the approaches regulators will take.”

US grapples with regulations

AI regulation is still nascent in the U.S. Last year, the White House released voluntary guidance through a Blueprint for an AI Bill of Rights to help ensure users’ rights are protected as technology companies design and develop AI systems.

At a meeting of the President’s Council of Advisors on Science and Technology this month, President Joe Biden expressed concern about the potential dangers associated with AI and underscored that companies had a responsibility to ensure their products were safe before making them public.

On April 11, the National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, began to seek comment and public input with the aim of crafting a report on AI accountability.

The U.S. government is trying to find the right balance to regulate the industry without stifling innovation “in part because the U.S. having innovative leadership globally is a selling point for the United States’ hard and soft power,” said Johanna Costigan, a junior fellow at the Asia Society Policy Institute’s Center for China Analysis.

Brandt, with Brookings, said, “The challenge for liberal democracies is to ensure that AI is developed and deployed responsibly, while also supporting a vibrant innovation ecosystem that can attract talent and investment.”

Meanwhile, other Western countries have also started to work on regulating the emerging technology.

The U.K. government published its AI regulatory framework in March. Also last month, Italy temporarily blocked ChatGPT in the wake of a data breach, and the German commissioner for data protection said his country could follow suit.

The European Union stated it’s pushing for an AI strategy aimed at making Europe a world-class hub for AI that ensures AI is human-centric and trustworthy, and it hopes to lead the world in AI standards.

Cyber regulations in China

In contrast to the U.S., the Chinese government has already implemented regulations aimed at tech sectors related to AI. In the past few years, Beijing has introduced several major data protection laws to limit the power of tech companies and to protect consumers.

The Cybersecurity Law enacted in 2017 requires that data must be stored within China and operators must submit to government-conducted security checks. The Data Security Law enacted in 2021 sets a comprehensive legal framework for processing personal information when doing business in China. The Personal Information Protection Law established in the same year gives Chinese consumers the right to access, correct and delete their personal data gathered by businesses. Costigan, with the Asia Society, said these laws have laid the groundwork for future tech regulations.

In March 2022, China began to implement a regulation that governs the way technology companies can use recommendation algorithms. The Cyberspace Administration of China (CAC) now supervises the process of using big data to analyze user preferences and companies’ ability to push information to users.

On April 11, the CAC unveiled a draft for managing generative artificial intelligence services similar to ChatGPT, in an effort to mitigate the dangers of the new technology.

Costigan said the goal of the proposed generative AI regulation could be seen in Article 4 of the draft, which states that content generated by future AI products must reflect the country’s “core socialist values” and not encourage subversion of state power.

“Maintaining social stability is a key consideration,” she said. “The new draft regulation does some good and is unambiguously in line with [President] Xi Jinping’s desire to ensure that individuals, companies or organizations cannot use emerging AI applications to challenge his rule.”

Michael Caster, the Asia digital program manager at Article 19, a London-based rights organization, told VOA, “The language, especially at Article 4, is clearly about maintaining the state’s power of censorship and surveillance.

“All global policymakers should be clearly aware that while China may be attempting to set standards on emerging technology, their approach to legislation and regulation has always been to preserve the power of the party.”

The future of cyber regulations

As strategies for cyber and AI regulations evolve, how they develop may largely depend on each country’s way of governance and reasons for creating standards. Analysts say there will also be intrinsic hurdles linked to coming up with consensus.

“Ethical principles can be hard to implement consistently, since context matters and there are countless potential scenarios at play,” Brandt told VOA. “They can be hard to enforce, too. Who would take on that role? How? And of course, before you can implement or enforce a set of principles, you need broad agreement on what they are.”

Observers said the international community would face challenges as it creates standards aimed at making AI technology ethical and safe.

US Targeting China, Artificial Intelligence Threats 

U.S. homeland security officials are launching what they describe as two urgent initiatives to combat growing threats from China and expanding dangers from ever more capable, and potentially malicious, artificial intelligence.

Homeland Security Secretary Alejandro Mayorkas announced Friday that his department was starting a “90-day sprint” to confront more frequent and intense efforts by China to hurt the United States, while separately establishing an artificial intelligence task force.

“Beijing has the capability and the intent to undermine our interests at home and abroad and is leveraging every instrument of its national power to do so,” Mayorkas warned, addressing the threat from China during a speech at the Council on Foreign Relations in Washington.

The 90-day sprint will “assess how the threats posed by the PRC [People’s Republic of China] will evolve and how we can be best positioned to guard against future manifestations of this threat,” he said.

“One critical area we will assess, for example, involves the defense of our critical infrastructure against PRC or PRC-sponsored attacks designed to disrupt or degrade provision of national critical functions, sow discord and panic, and prevent mobilization of U.S. military capabilities,” Mayorkas added.

Other areas of focus for the sprint will include addressing ways to stop Chinese government exploitation of U.S. immigration and travel systems to spy on the U.S. government and private entities and to silence critics, and looking at ways to disrupt the global fentanyl supply chain.

 

AI dangers

Mayorkas also said the magnitude of the threat from artificial intelligence, appearing in a growing number of tools from major tech companies, was no less critical.

“We must address the many ways in which artificial intelligence will drastically alter the threat landscape and augment the arsenal of tools we possess to succeed in the face of these threats,” he said.

Mayorkas promised that the Department of Homeland Security “will lead in the responsible use of AI to secure the homeland and in defending against the malicious use of this transformational technology.”

 

The new task force is set to seek ways to use AI to protect U.S. supply chains and critical infrastructure, counter the flow of fentanyl, and help find and rescue victims of online child sexual exploitation.

The unveiling of the two initiatives came days after lawmakers grilled Mayorkas about what some described as a lackluster and derelict effort under his leadership to secure the U.S. border with Mexico.

“You have not secured our borders, Mr. Secretary, and I believe you’ve done so intentionally,” the chair of the House Homeland Security Committee, Republican Mark Green, told Mayorkas on Wednesday.

Another lawmaker, Republican Marjorie Taylor Greene, went as far as to accuse Mayorkas of lying, though her words were quickly removed from the record.

Mayorkas on Friday said it might be possible to use AI to help with border security, though how exactly it could be deployed for the task was not yet clear.

“We’re at a nascent stage of really deploying AI,” he said. “I think we’re now at the dawn of a new age.”

But Mayorkas cautioned that technologies like AI would do little to slow the number of migrants willing to embark on dangerous journeys to reach U.S. soil.

“Desperation is the greatest catalyst for the migration we are seeing,” he said.

FBI warning

The announcement of Homeland Security’s 90-day sprint to confront growing threats from Beijing followed a warning earlier this week from the FBI about the willingness of China to target dissidents and critics in the U.S., and the arrests of two New York City residents for their involvement in a secret Chinese police station.

China has denied any wrongdoing.

“The Chinese government strictly abides by international law, and fully respects the law enforcement sovereignty of other countries,” Liu Pengyu, the spokesman for the Chinese Embassy in Washington, told VOA in an email earlier this week, accusing the U.S. of seeking “to smear China’s image.”

Top U.S. officials have said they are opening two investigations daily into Chinese economic espionage in the U.S.

“The Chinese government has stolen more of Americans’ personal and corporate data than that of every nation, big or small, combined,” FBI Director Christopher Wray told an audience late last year.

More recently, Wray warned of China’s advances in AI, saying he was “deeply concerned.”

Mayorkas voiced a similar sentiment, pointing to China’s use of investments and technology to establish footholds around the world.

“We are deeply concerned about PRC-owned and -operated infrastructure, elements of infrastructure, and what that control can mean, given that the operator and owner has adverse interests,” Mayorkas said Friday.

“Whether it’s investment in our ports, whether it is investment in partner nations, telecommunications channels and the like, it’s a myriad of threats,” he said.

Twitter Drops Government-Funded Media Labels

Twitter has removed labels describing global media organizations as government-funded or state-affiliated, a move that comes after the Elon Musk-owned platform started stripping blue verification checkmarks from accounts that don’t pay a monthly fee.

Among those no longer labeled was National Public Radio in the U.S., which announced last week that it would stop using Twitter after its main account was designated state-affiliated media, a term also used to identify media outlets controlled or heavily influenced by authoritarian governments, such as Russia and China.

Twitter later changed the label to “government-funded media,” but NPR — which relies on the government for a tiny fraction of its funding — said it was still misleading.

Canadian Broadcasting Corp. and Swedish public radio made similar decisions to quit tweeting. CBC’s government-funded label vanished Friday, along with the state-affiliated tags on media accounts including Sputnik and RT in Russia and Xinhua in China.

Many of Twitter’s high-profile users on Thursday lost the blue checks that helped verify their identity and distinguish them from impostors.

Twitter had about 300,000 verified users under the original blue-check system — many of them journalists, athletes and public figures. The checks used to mean the account was verified by Twitter to be who it says it is.

High-profile users who lost their blue checks Thursday included Beyoncé, Pope Francis, Oprah Winfrey and former President Donald Trump.

The costs of keeping the marks range from $8 a month for individual web users to a starting price of $1,000 monthly to verify an organization, plus $50 monthly for each affiliate or employee account. Twitter does not verify the individual accounts, as was the case with the previous blue check doled out during the platform’s pre-Musk administration.

Celebrity users, from basketball star LeBron James to author Stephen King and Star Trek’s William Shatner, have balked at joining — although on Thursday, all three had blue checks indicating that the account paid for verification.

King, for one, said he hadn’t paid.

“My Twitter account says I’ve subscribed to Twitter Blue. I haven’t. My Twitter account says I’ve given a phone number. I haven’t,” King tweeted Thursday. “Just so you know.”

In a reply to King’s tweet, Musk said “You’re welcome namaste” and in another tweet he said he’s “paying for a few personally.” He later tweeted he was just paying for King, Shatner and James.

Singer Dionne Warwick tweeted earlier in the week that the site’s verification system “is an absolute mess.”

“The way Twitter is going anyone could be me now,” Warwick said. She had earlier vowed not to pay for Twitter Blue, saying the monthly fee “could (and will) be going toward my extra hot lattes.”

On Thursday, Warwick lost her blue check (which is actually a white check mark in a blue background).

For users who still had a blue check Thursday, a popup message indicated that the account “is verified because they are subscribed to Twitter Blue and verified their phone number.” Verifying a phone number simply means that the person has a phone number and they verified that they have access to it — it does not confirm the person’s identity.

It wasn’t just celebrities and journalists who lost their blue checks Thursday. Many government agencies, nonprofits and public-service accounts around the world found themselves no longer verified, raising concerns that Twitter could lose its status as a platform for getting accurate, up-to-date information from authentic sources, including in emergencies.

While Twitter offers gold checks for “verified organizations” and gray checks for government organizations and their affiliates, it’s not clear how the platform doles these out.

The official Twitter account of the New York City government, which earlier had a blue check, tweeted on Thursday that “This is an authentic Twitter account representing the New York City Government. This is the only account for @NYCGov run by New York City government” in an attempt to clear up confusion.

A newly created spoof account with 36 followers (also without a blue check), disagreed: “No, you’re not. THIS account is the only authentic Twitter account representing and run by the New York City Government.”

Soon, another spoof account — purporting to be Pope Francis — weighed in too: “By the authority vested in me, Pope Francis, I declare @NYC_GOVERNMENT the official New York City Government. Peace be with you.”

Fewer than 5% of legacy verified accounts appear to have paid to join Twitter Blue as of Thursday, according to an analysis by Travis Brown, a Berlin-based developer of software for tracking social media.

Musk’s move has riled up some high-profile users and pleased some right-wing figures and Musk fans who thought the marks were unfair. But it is not an obvious money-maker for the social media platform that has long relied on advertising for most of its revenue.

Digital intelligence platform Similarweb analyzed how many people signed up for Twitter Blue on their desktop computers and only detected 116,000 confirmed sign-ups last month, which at $8 or $11 per month does not represent a major revenue stream. The analysis did not count accounts bought via mobile apps.

After buying San Francisco-based Twitter for $44 billion in October, Musk has been trying to boost the struggling platform’s revenue by pushing more people to pay for a premium subscription. But his move also reflects his assertion that the blue verification marks have become an undeserved or “corrupt” status symbol for elite personalities, news reporters and others granted verification for free by Twitter’s previous leadership.

Twitter began tagging profiles with a blue check mark starting about 14 years ago. Along with shielding celebrities from impersonators, one of the main reasons was to provide an extra tool to curb misinformation coming from accounts impersonating people. Most “legacy blue checks,” including the accounts of politicians, activists and people who suddenly find themselves in the news, as well as little-known journalists at small publications around the globe, are not household names.

One of Musk’s first product moves after taking over Twitter was to launch a service granting blue checks to anyone willing to pay $8 a month. But it was quickly inundated by impostor accounts, including those impersonating Nintendo, pharmaceutical company Eli Lilly and Musk’s businesses Tesla and SpaceX, so Twitter had to temporarily suspend the service days after its launch.

The relaunched service costs $8 a month for web users and $11 a month for users of its iPhone or Android apps. Subscribers are supposed to see fewer ads, be able to post longer videos and have their tweets featured more prominently.

TikTok CEO Tries to Ease Critics’ Security Concerns

The CEO of TikTok tried to calm critics’ fears about the security of his company’s app during an appearance Thursday.

Shou Chew was asked at a TED2023 Possibility conference if he could guarantee Beijing would not use the TikTok app, owned by the Chinese tech company ByteDance, to interfere in future U.S. elections.

“I can say that we are building all the tools to prevent any of these actions from happening,” Chew said. “And I’m very confident that with an unprecedented amount of transparency that we’re giving on the platform, we can, how we can reduce this risk to as low as zero as possible.”

Chew made the comments in Vancouver at the TED organization’s annual convention, where artificial intelligence and safeguards were discussed.

U.S. lawmakers and officials are ratcheting up threats to ban TikTok, saying the Chinese-owned video-sharing app used by millions of Americans poses a threat to privacy and U.S. national security.

U.S. lawmakers have grilled Chew over concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.

U.S. House Speaker Kevin McCarthy tweeted in March, “It’s very concerning that the CEO of TikTok can’t be honest and admit what we already know to be true — China has access to TikTok user data.”

U.S. Representative Michael McCaul, chairman of the U.S. House of Representatives Foreign Affairs Committee, was even more blunt in February, telling the committee, “Make no mistake, TikTok is a national security threat. … It’s a spy balloon in your phone.”

He was referencing a Chinese surveillance balloon that drifted across the United States in early February before being shot down off the southeastern U.S. coast.

Several governments, including Canada and the U.S., have banned the TikTok app from government-issued smartphones, citing concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.

Chew says TikTok has never stored data from Americans on servers in China.

“All new U.S. data is already stored in the Oracle Cloud infrastructure. So it’s in this protected U.S. environment that we talked about in the United States,” he said. “We still have some legacy data to delete in our own servers in Virginia and in Singapore. Our data has never been stored in China.”

“It’s going to take us a while to delete them, but I expect it to be done this year,” he said.

Chew also emphasized TikTok’s efforts to moderate content. When asked how many people are reviewing content posted to the platform, Chew said the numbers and cost are huge.

“The group is based in Ireland and it’s a lot of people. It’s tens of thousands of people,” Chew said. “It’s one of the most important cost items. And I think it’s completely worth it.”

Speaking to a TED conference dominated by discussions of artificial intelligence, Chew said a lot of moderation on TikTok is done by machines.

“The machines are good, they’re quite good, but they’re not as good as you know, they’re not perfect at this point. So you have to complement it with a lot of human beings today,” he said.

VOA’s Masood Farivar contributed to this report. 

Good, Bad of Artificial Intelligence Discussed at TED Conference  

While artificial intelligence, or AI, is not new, the speed at which the technology is developing and its implications for societies are, for many, a cause for wonder and alarm.

ChatGPT recently garnered headlines for doing things like writing term papers for university students.

Tom Graham and his company, Metaphysic.ai, have received attention for creating fake videos of actor Tom Cruise and re-creating Elvis Presley singing on an American talent show. Metaphysic was started to utilize artificial intelligence and create high-quality avatars of stars like Cruise or people from one’s own family or social circle.

Graham, who appeared at this year’s TED Conference in Vancouver, which began Monday and runs through Friday, said talking with an artificially created younger self or departed loved one can have tremendous benefits for therapy.

He added that the technology would allow actors to appear in movies without having to show up on set, or in ads with AI-generated sports stars.

“So, the idea of them being able to create ads without having to turn up is – it’s a match made in heaven,” Graham said. “The advertisers get more content. The sports people never have to turn up because they don’t want to turn up. And everyone just gets paid the same.”

Sal Khan, founder of Khan Academy, a nonprofit organization that provides free teaching materials, sees AI as beneficial to education and a kind of one-on-one instruction: student and AI.

His organization is using artificial intelligence to supplement traditional instruction and make it more interactive.

“But now, they can talk to literary characters,” he said. “They can talk to fictional cats. They can talk to historic characters, potentially even talk to inanimate objects, like, we were talking about the Mississippi River. Or talk to the Empire State Building. Or talk to … you know, talk to Mount Everest. This is all possible.”

For Chris Anderson, who is in charge of TED – a nonpartisan, nonprofit organization whose mission is to spread ideas, usually in the form of short speeches – conversations about artificial intelligence are the most important ones we can have at the moment. He said the organization’s role this year is to bring different parts of this rapidly emerging technology together.

“And the conversation can’t just be had by technologists,” he said. “And it can’t just be heard by politicians. And it can’t just be held by creatives. Everyone’s future is being affected. And so, we need to bring people together.”

For all of AI’s promise, there are growing calls for safeguards against misuse of the technology.

Computer scientist Yejin Choi at the University of Washington said policies and regulations are lagging because AI is moving so fast.

“And then there’s this question of whose guardrails are you going to install into AI,” she said. “So there’s a lot of these open questions right now. And ideally, we should be able to customize the guardrails for different cultures or different use cases.”

Another TED speaker this year, Eliezer Yudkowsky, has been studying AI for 20 years and is currently a senior research fellow at the Machine Intelligence Research Institute in California. He has a more pessimistic view of artificial intelligence and any type of safeguards.

“This eventually gets to the point where there is stuff smarter than us,” he said. “I think we are presently not on track to be able to handle that remotely gracefully. I think we all end up dead.”

Ready or not, societies are confronting the need to adapt to AI’s emergence.

SpaceX Giant Rocket Explodes Minutes After Launch from Texas

SpaceX’s giant new rocket blasted off on its first test flight Thursday but exploded minutes after rising from the launch pad and crashed into the Gulf of Mexico.

Elon Musk’s company was aiming to send the nearly 400-foot (120-meter) Starship rocket on a round-the-world trip from the southern tip of Texas, near the Mexican border. It carried no people or satellites.

The plan called for the booster to peel away from the spacecraft minutes after liftoff, but that didn’t happen. The rocket began to tumble and then exploded four minutes into the flight, plummeting into the gulf. After separating, the spacecraft was supposed to continue east and attempt to circle the world, before crashing into the Pacific near Hawaii.

Throngs of spectators watched from South Padre Island, several miles away from the Boca Chica Beach launch site, which was off limits. As it lifted off, the crowd screamed: “Go, baby, go!”

The company plans to use Starship to send people and cargo to the moon and, eventually, Mars. NASA has reserved a Starship for its next moonwalking team, and rich tourists are already booking lunar flybys.

It was the second launch attempt. Monday’s try was scrapped by a frozen booster valve.

At 394 feet and nearly 17 million pounds of thrust, Starship easily surpasses NASA’s moon rockets — past, present and future. The stainless steel rocket is designed to be fully reusable with fast turnaround, dramatically lowering costs, similar to what SpaceX’s smaller Falcon rockets have done soaring from Cape Canaveral, Florida. Nothing was to be saved from the test flight.

The futuristic spacecraft flew several miles into the air during testing a few years ago, landing successfully only once. But this was to be the inaugural launch of the first-stage booster with 33 methane-fueled engines.

LogOn: Artificial Intelligence Creates Voices for Films, Ads

A growing number of startups are using artificial intelligence to replicate human voices. One such company is creating synthetic voices for organizations to use for advertising, marketing and training. Phil Dierking reports.

SpaceX Postpones Debut Flight of Starship Rocket System

Elon Musk’s SpaceX on Monday called off a highly anticipated launch of its powerful new Starship rocket, delaying the first uncrewed test flight of the vehicle into space.

The two-stage rocketship, standing taller than the Statue of Liberty at 394 feet (120 m) high, originally was scheduled for blast-off from the SpaceX facility at Boca Chica, Texas, during a two-hour launch window that began at 8 a.m. EDT (1200 GMT).

But the California-based space company announced in a live webcast during the final minutes of the countdown that it was scrubbing the flight attempt for at least 48 hours, citing a pressurization issue in the lower-stage rocket booster.

Musk, the company’s billionaire founder and chief executive, told a private Twitter audience on Sunday night that the mission stood a better chance of being scrubbed than proceeding to launch on Monday.

Getting the vehicle to space for the first time would represent a key milestone in SpaceX’s ambition of sending humans back to the moon and ultimately to Mars – at least initially as part of NASA’s newly inaugurated human spaceflight program, Artemis.

A successful debut flight would also instantly rank the Starship system as the most powerful launch vehicle on Earth.

Both the lower-stage Super Heavy booster rocket and the upper-stage Starship cruise vessel it will carry to space are designed as reusable components, capable of flying back to Earth for soft landings – a maneuver that has become routine for SpaceX’s smaller Falcon 9 rocket.

But neither stage would be recovered for the expendable first test flight to space, expected to last no more than 90 minutes.

Prototypes of the Starship cruise vessel have made five sub-space flights up to 6 miles (10 km) above Earth in recent years, but the Super Heavy booster has never left the ground.

In February, SpaceX did a test-firing of the booster, igniting 31 of its 33 Raptor engines for roughly 10 seconds with the rocket bolted in place vertically atop a platform.

The Federal Aviation Administration just last Friday granted a license for what would be the first test flight of the fully stacked rocket system, clearing a final regulatory hurdle for the long-awaited launch.

If all goes as planned for the next launch bid, all 33 Raptor engines will ignite simultaneously to loft the Starship on a flight that nearly completes a full orbit of the Earth before it re-enters the atmosphere and free-falls into the Pacific at supersonic speed about 60 miles (97 km) off the coast of the northern Hawaiian islands.

After separating from the Starship, the Super Heavy booster is expected to execute the beginnings of a controlled return flight before plunging into the Gulf of Mexico.

As designed, the Starship rocket is nearly twice as powerful as NASA’s own Space Launch System (SLS), which made its debut uncrewed flight to orbit in November, sending a NASA cruise vessel called Orion on a 25-day voyage around the moon and back.

Japan’s Sega to Buy Finnish Angry Birds Maker Rovio

Japanese video games group Sega has offered to buy Angry Birds maker Rovio, valuing the Finnish company at over $770 million, the companies said Monday.   

“Combining the strengths of Rovio and Sega presents an incredibly exciting future,” Alexandre Pelletier-Normand, CEO of Rovio, said in a statement, which added that Rovio was recommending that shareholders accept the offer.

The offer, which represents a 19% premium over Rovio’s closing share price on Friday, is part of the Sonic the Hedgehog maker’s “long-term goal” of expanding into the mobile gaming market, Sega CEO Haruki Satomi said.   

“Among the rapidly growing global gaming market, the mobile gaming market has especially high potential,” he added.   

In 2022, Rovio, which employs over 500 people, reported revenue of $350 million and an adjusted net profit of $34.5 million.

Rovio launched the bird slingshot game in 2009 and it soared rapidly to become one of the most popular games on Apple’s App Store.   

In 2016, the “Angry Birds” movie, produced by Sony Entertainment, was a huge success and grossed $350 million worldwide.   

Rovio also manages Angry Birds theme parks in several countries and oversees the publication of children’s books about the famous birds in a dozen languages.   

Following the global success of Angry Birds, Rovio has remained heavily reliant on its flagship game, struggling to develop another similar hit.   

After years of success tied to its Angry Birds mobile games, Rovio hit a rough patch in 2015 and laid off a third of its staff.   

Sega is aiming to open the offer period in early May, hoping to complete the deal in the third quarter, the company said. 

‘Big Sponge’: New CO2 Tech Taps Oceans to Tackle Global Warming

Floating in the port of Los Angeles, a strange-looking barge covered with pipes and tanks contains a concept that scientists hope will make waves: a new way to use the ocean as a vast carbon dioxide sponge to tackle global warming.

Scientists from the University of California, Los Angeles (UCLA) have been working for two years on SeaChange — an ambitious project that could one day boost the amount of CO2, a major greenhouse gas, that can be absorbed by our seas.

Their goal is “to use the ocean as a big sponge,” according to Gaurav Sant, director of the university’s Institute for Carbon Management (ICM).

The oceans, covering most of the Earth, are already the planet’s main carbon sinks, acting as a critical buffer in the climate crisis.

They absorb a quarter of all CO2 emissions, as well as 90% of the warming that has occurred in recent decades due to increasing greenhouse gases.

But they are feeling the strain. The ocean is acidifying, and rising temperatures are reducing its absorption capacity.

The UCLA team wants to increase that capacity by using an electrochemical process to remove vast quantities of CO2 already in seawater — rather like wringing out a sponge to help recover its absorptive power.

“If you can take out the carbon dioxide that is in the oceans, you’re essentially renewing their capacity to take additional carbon dioxide from the atmosphere,” Sant told AFP.

Engineers built a floating mini-factory on a 30-meter-long boat that pumps in seawater and subjects it to an electrical charge.

Chemical reactions triggered by electrolysis convert CO2 dissolved in the seawater into a fine white powder containing calcium carbonate — the compound found in chalk, limestone and oyster or mussel shells.

This powder can be discarded back into the ocean, where it remains in solid form, thereby storing CO2 “very durably… over tens of thousands of years,” explained Sant.

Meanwhile, the pumped water returns to the sea, ready to absorb more carbon dioxide from the atmosphere.
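
In simplified terms, and as an illustrative sketch rather than the team’s exact published chemistry, the electric charge splits water at the cathode and leaves the surrounding seawater more alkaline; that added alkalinity pushes the dissolved carbon to precipitate with the calcium naturally present in seawater:

2 H2O + 2 e- → H2 + 2 OH-
Ca2+ + CO2 + 2 OH- → CaCO3 + H2O

The hydrogen released in the first step is the byproduct described later in this story.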

Sant and his team are confident the process will not damage the marine environment, although this will require further testing to confirm.

A potential additional benefit of the technology is that it creates hydrogen as a byproduct. As the so-called “green revolution” progresses, the gas could be widely used to power clean cars, trucks and planes in the future.

Of course, the priority in curbing global warming is for humans to drastically reduce current CO2 emissions — something nations are struggling to achieve.

But in parallel, most scientists say carbon dioxide capture and storage techniques can play an important role in keeping the planet livable.

Carbon dioxide removal (CDR) could help to achieve carbon neutrality by 2050 as it offsets emissions from industries which are particularly difficult to decarbonize, such as aviation, and cement and steel production.

It could help to tackle the stocks of CO2 that have been accumulating in the atmosphere for decades.

Keeping global warming under control will require the removal of between 450 billion and 1.1 trillion tons of CO2 from the atmosphere by 2100, according to the first global report dedicated to the topic, released in January.

That would require the CDR sector “to grow at a rate of about 30 percent per year over the next 30 years, much like what happened with wind and solar,” said one of its authors, Gregory Nemet.
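
For a rough sense of scale, and as simple compounding rather than a figure from the report: 30 percent annual growth sustained over 30 years works out to about 1.3^30, or roughly a 2,600-fold increase in yearly removal capacity.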

UCLA’s SeaChange technology “fits into a category of a promising solution that could be large enough to be climate-relevant,” said Nemet, a professor at the University of Wisconsin-Madison.

By sequestering CO2 in mineral form within the ocean, it differs markedly from existing “direct air capture” (DAC) methods, which involve pumping and storing gas underground through a highly complex and expensive process.

A start-up company, Equatic, plans to scale up the UCLA technology and prove its commercial viability by selling carbon credits to manufacturers wanting to offset their emissions.

In addition to the Los Angeles barge, a similar boat is currently being tested in Singapore.

Sant hopes data from both sites will quickly lead to the construction of far larger plants that are capable of removing “thousands of tons of carbon” each year.

“We expect to start operating these new plants in 18 to 24 months,” he said.

Europe’s Most Powerful Nuclear Reactor Kicks Off in Finland 

HELSINKI (AP) — Finland’s much-delayed and costly new nuclear reactor, Europe’s most powerful by production capacity, has completed a test phase lasting over a year and started regular output, boosting the Nordic country’s electricity self-sufficiency significantly. 

The Olkiluoto 3 reactor, which has a 1,600-megawatt capacity, was connected to the Finnish national power grid in March 2022 and kicked off regular production Sunday. Operator Teollisuuden Voima, or TVO, tweeted that “Olkiluoto 3 is now ready” after a delay of 14 years from the original plan. 

It will help Finland to achieve its carbon neutrality targets and increase energy security at a time when European countries have cut oil, gas and other power supplies from Russia, Finland’s neighbor. 

“The production of Olkiluoto 3 stabilizes the price of electricity and plays an important role in the Finnish green transition,” said TVO President and CEO Jarmo Tanhua in a statement. The company added that “the electricity production volume of Europe’s largest nuclear power plant unit is a significant addition to clean, domestic production.” 

Construction of Olkiluoto 3 began in 2005 and was to be completed four years later. However, the project was plagued by several technological problems that led to lawsuits. The last time a new nuclear reactor was commissioned in Finland was over 40 years ago. 

Olkiluoto 3 is western Europe’s first new reactor in more than 15 years. It is the first new-generation EPR, or European Pressurized Reactor, plant to have gone online in Europe. It was developed in a joint venture between France’s Areva and Germany’s Siemens. 

Primarily due to safety concerns, nuclear power remains a controversial issue in Europe. The launch of the Finnish reactor coincides with Germany’s move to shut down its last remaining three nuclear plants Saturday. 

Experts have put Olkiluoto 3’s final price tag at some 11 billion euros ($12 billion) — almost three times what was initially estimated. Finland now has five nuclear reactors in two power plants located on the shores of the Baltic Sea. Combined, they cover more than 40% of the nation’s electricity demand. 

The conservative National Coalition Party, or NCP, which won Finland’s April 2 general election, wants to further increase the share of energy that the country of 5.5 million gets from nuclear power. 

NCP leader Petteri Orpo, Finland’s likely new prime minister, said during the election campaign that the new Cabinet should make nuclear power “the cornerstone of the government’s energy policy.” 

First Test Flight of SpaceX’s Big Starship

Elon Musk’s SpaceX is about to take its most daring leap yet with a round-the-world test flight of its mammoth Starship.

It’s the biggest and mightiest rocket ever built, with the lofty goal of ferrying people to the moon and Mars.

Jutting almost 120 meters into the South Texas sky, Starship could blast off as early as Monday, with no one aboard. Musk’s company got the OK from the U.S. Federal Aviation Administration on Friday.

It will be the first launch with Starship’s two sections together. Early versions of the sci-fi-looking upper stage rocketed several miles into the stratosphere a few years back, crashing four times before finally landing upright in 2021. The towering first-stage rocket booster, dubbed Super Heavy, will soar for the first time.

For this demo, SpaceX won’t attempt any landings of the rocket or the spacecraft. Everything will fall into the sea.

“I’m not saying it will get to orbit, but I am guaranteeing excitement. It won’t be boring,” Musk promised at a Morgan Stanley conference last month. “I think it’s got, I don’t know, hopefully about a 50% chance of reaching orbit.”

Here’s the rundown on Starship’s debut:

Supersize rocket

The stainless-steel Starship has 33 main engines and 7.6 million kilograms of thrust. All but two of the methane-fueled, first-stage engines ignited during a launch pad test in February — good enough to reach orbit, Musk noted. Given its muscle, Starship could lift as much as 250 tons and accommodate 100 people on a trip to Mars. The six-engine spacecraft accounts for 50 meters of its height. Musk anticipates using Starship to launch satellites into low-Earth orbit, including his own Starlinks for internet service, before strapping anyone in. Starship easily eclipses NASA’s moon rockets — the Saturn V from the bygone Apollo era and the Space Launch System from the Artemis program that logged its first lunar trip late last year. It also outflanks the former Soviet Union’s N1 moon rocket, which never made it past a minute into flight, exploding with no one aboard.

Game plan

The test flight will last 1 ½ hours and fall short of a full orbit of Earth. If Starship reaches the three-minute mark after launch, the booster will be commanded to separate and fall into the Gulf of Mexico. The spacecraft would continue eastward, passing over the Atlantic, Indian and Pacific oceans before ditching near Hawaii. Starship is designed to be fully reusable, but nothing will be saved from the test flight. Harvard astrophysicist and spacecraft tracker Jonathan McDowell will be more excited whenever Starship lands and returns intact from orbit. It will be “a profound development in spaceflight if and when Starship is debugged and operational,” he said.

Launch pad

Starship will take off from a remote site on the southernmost tip of Texas near Boca Chica Beach. It’s just below South Padre Island, and about 32 kilometers from Brownsville. Down the road from the launch pad is the complex where SpaceX has been developing and building Starship prototypes for the past several years. The complex, called Starbase, has more than 1,800 employees, who live in Brownsville or elsewhere in the Rio Grande Valley. The Texas launch pad is equipped with giant robotic arms — called chopsticks — to eventually grab a returning booster as it lands. SpaceX is retooling one of its two Florida launch pads to accommodate Starships down the road. Florida is where SpaceX’s Falcon rockets blast off with crew, space station cargo and satellites for NASA and other customers.

The odds

As usual, Musk is remarkably blunt about his chances, giving even odds, at best, that Starship will reach orbit on its first flight. But with a fleet of Starships under construction at Starbase, he estimates an 80% chance that one of them will attain orbit by year’s end. He expects it will take a couple of years to achieve full and rapid reusability.

Customers

With Starship, the California-based SpaceX is focusing on the moon for now, with a $3 billion NASA contract to land astronauts on the lunar surface as early as 2025, using the upper stage spacecraft. It will be the first moon landing by astronauts in more than 50 years. The moonwalkers will leave Earth via NASA’s Orion capsule and Space Launch System rocket, and then transfer to Starship in lunar orbit for the descent to the surface, and then back to Orion. To reach the moon and beyond, Starship will first need to refuel in low-Earth orbit. SpaceX envisions an orbiting depot with window-less Starships as tankers. But Starship isn’t just for NASA. A private crew will be the first to fly Starship, orbiting Earth. Two private flights to the moon would follow — no landings, just fly-arounds.