BLACKWATER USA | DAILY BRIEF

Apr 12th 2023

Ukraine

  • Yet another revelation from the trove of Pentagon documents that leaked last week was confirmation that Ukraine used drones to attack targets in Belarus and Russia. Russia had blamed Kyiv for those attacks at the time, but Ukraine never formally acknowledged responsibility.
  • The leaked documents also showed that several Western countries have special forces personnel operating in Ukraine - including a 50-person British contingent.
  • Separately, the UN High Commissioner for Human Rights said it has confirmed nearly 8,500 civilian deaths in Ukraine since the start of the war - but conceded the true tally is likely far higher.
Myanmar
  • Yesterday, Myanmar's military bombed and strafed a large gathering in a rebel-held part of Sagaing Region, where members of the local resistance were inaugurating an administrative office. It was the junta's deadliest attack since it seized power: at least 100 people were killed in the strike.
Afghanistan
  • The Taliban Ministry of Mines celebrated the award of a nephrite (jade) mining contract to an unspecified - but seemingly local - "private company" for three million afghanis ($35k). The mine is in Nangarhar, and this is one of the first mining contracts - if not the very first - signed under the Taliban's (dubious) authority. The Taliban still seems ill-equipped to analyze bids and monitor awarded contracts.
China
  • The U.S. and the Philippines began joint military exercises in the Philippines, and China immediately felt threatened. A Chinese spokesman admonished the two countries, saying military drills "should not target any third party and should be conducive to regional peace and stability."
  • On the other hand, China irritated the U.S. by inviting Brazilian president Lula to visit a Huawei site in Shanghai. Brazil has generally declined to pick sides between the U.S. and China, but the visit signals a warmth toward China that's likely to annoy Washington.
  • A Quartz article pasted below discusses the challenges China's AI bots will face in conforming to the "socialist values" Beijing imposes on domestic companies. American AI bots like ChatGPT have their own ethical and security concerns - e.g., The Economist previously pointed out that ChatGPT will refuse to tell you how to build a bomb but will offer specific instructions if you ask it to tell a story about a bombmaker with lots of technical detail (this loophole may have since been plugged). Beijing will likely struggle to settle on a level of freedom it's comfortable giving bots.
EVs
  • The U.S. is considering new auto emissions rules that would require up to half of all new cars sold to be electric vehicles (EVs) by 2030 - and up to 67% by 2032. The UK and EU have already passed similar rules.
Other News
  • The IMF issued its lowest medium-term global growth forecast in 30 years: 2.9% (the long-run average is 3.8%).
China wants to require a security review of AI services before they’re released (Quartz)
Chinese AI bots looking to rival OpenAI's ChatGPT will need to study up on "socialist values"

Amid a flurry of AI product announcements from Baidu and Alibaba, China has been quick to propose regulation of the burgeoning generative AI industry.

AI products developed in China must undergo a “security assessment” before being released to the public, according to the Cyberspace Administration of China (CAC), which has drafted new rules for the development of generative AI services. The goal, the proposal notes, is to ensure the “healthy development and standardized application” of generative AI technology; the draft is open for public comment.

The content generated by AI bots should “reflect the core values of socialism, and must not contain subversion of state power” in addition to not promoting terrorism, discrimination, and violence, among other things. The guidelines, released on Apr. 11, note that companies must ensure AI-generated content is accurate and that measures should be taken to prevent their models from producing false information.

When it comes to data collection for the AI models, the training data must not contain information that infringes intellectual property rights. If the data contains personal information, companies are expected to obtain the consent of the person concerned or otherwise meet conditions specified by law, the CAC writes.

The rules come as China’s tech giants have rushed in recent weeks to unveil their own generative AI products, which are trained on large datasets to produce new content. Baidu is testing its Ernie bot. This week, SenseTime, an AI company, released its AI bot SenseNova, while e-commerce giant Alibaba introduced Tongyi Qianwen, planning to integrate the bot across its products.

Those bots, though, are still in testing and not yet available to the public, and it’s not clear when they will be. As analysts noted to Bloomberg, the CAC rules will likely affect how AI models in China are trained in the future.

The popularity of AI bots skyrocketed after San Francisco-based OpenAI launched ChatGPT just five months ago. AI chatbots have been used to draft emails and write essays, but there is growing concern about generative AI models spitting out false and inaccurate information.

How will AI be regulated?
Countries around the world are looking to regulate the development of AI bots. Just last week, Italy temporarily banned ChatGPT, citing concerns over its processing of personal data as well as the bot’s tendency to generate inaccurate information. Meanwhile, in the US, the Department of Commerce this week put out a formal request for public comment on whether AI models should undergo a certification process.

Companies like Google and Microsoft have been quick to say that their AI bots are not perfect, highlighting the ambiguous nature of generative AI. Some companies are open to regulation. “We believe that powerful AI systems should be subject to rigorous safety evaluations,” OpenAI’s website reads. “Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take.”

If companies fail to comply with the guidelines, China’s CAC writes, their AI services will be shut down. The company responsible for the technology could receive a fine of at least 10,000 yuan ($1,450) and may even face criminal investigation.