AI & Open Source

Last 7 briefings

Tuesday, March 10 at 07:02 AM

AI & Open Source

Yann LeCun's new Paris-based startup, Advanced Machine Intelligence, just raised over $1 billion to prove that the future of AI isn't about scaling up language models—it's about teaching machines to understand the physical world. 💰 MONEY MOVES The $3.5 billion valuation, backed by Bezos Expeditions, Mark Cuban, and former Google CEO Eric Schmidt, represents a major bet against OpenAI, Anthropic, and even LeCun's former employer Meta, all of whom believe that throwing more compute at large language models will eventually deliver human-level intelligence. LeCun, who won a Turing Award in 2018, has spent the last year publicly dismantling the idea that ChatGPT-style systems can reach AGI, and he's putting serious capital where his mouth is. AMI plans to launch with offices in Paris, Montreal, Singapore, and New York, targeting industries like manufacturing, biomedics, and robotics where the stakes for getting the physics right are highest.

Meanwhile, the open-source world is quietly reshaping enterprise AI security in ways the industry didn't see coming. 🚀 THIS IS COOL OpenAI just acquired Promptfoo, an open-source red-teaming platform built by Ian Webster nights and weekends while he was at Discord, that's now used by over 125,000 developers and more than 30 Fortune 500 companies. The tool works like an automated adversary—it talks to your AI application like an attacker would, probing for prompt injections, jailbreaks, and unsafe model behavior, then iterates through an agentic reasoning loop to expose deeper vulnerabilities. OpenAI pledged to keep the tool open source under its current license, integrating it into Frontier, their new enterprise agent platform launched in February. 🤔 THINK ABOUT IT As enterprises shift from asking "how capable is this model?" to "how do we keep this model from breaking our business?", the companies that own the security tooling become gatekeepers for the entire deployment pipeline.

🚀 THIS IS COOL Andrej Karpathy, Tesla's former AI director, just released an open-source "autoresearch" framework that lets AI agents read their own source code, form hypotheses about improvements, modify the code, run experiments, and evaluate results—potentially allowing researchers to run hundreds of AI experiments in a single night. This isn't just incremental; it's automation eating the job of experimentation itself. At the same time, Motif's CEO is publicly confident they'll surpass OpenAI and Google, while South Korean competitors race to catch up. The competitive pressure is genuine, but so is the infrastructure complexity—OpenClaw, an Austrian open-source AI agent that went viral in China last week with 250,000 GitHub stars, requires enough technical setup that installation services are now a booming market, with some providers claiming six-figure earnings in just days.

The Open-Source Promise Gets Integrated
OpenAI publicly committed to keeping Promptfoo open source while folding it into Frontier, their paid enterprise agent management platform. This isn't inherently wrong—many open-source projects support commercial versions—but it illustrates a pattern where community-built tools get absorbed into corporate platforms. The distinction between "we're supporting open source" and "we're acquiring your free labor to accelerate our commercial product" is getting harder to see.
🎭 OpenAI (via Promptfoo acquisition)
🗣️ Says:
“Promptfoo would remain open source under its current licence, with continued support for existing customers”
👁️ Does:
Acquired the company to integrate it into a proprietary enterprise platform that enterprises will pay for
🎤 MIC DROPOpen-source tools built by the community are becoming the R&D division for proprietary enterprise products.
The broader pattern is unmistakable: AI development is fracturing into competing visions. LeCun's world-models bet represents genuine intellectual disagreement with the LLM-scaling thesis that dominates Silicon Valley. Karpathy's autoresearch framework suggests the next frontier is automation of AI development itself. OpenClaw's popularity in China reveals that autonomous agents—not chatbots—are what developers actually want to build with. And Promptfoo's acquisition shows that security and governance tooling is becoming as strategically important as the models themselves. 🤔 THINK ABOUT IT If open-source projects keep getting acquired and integrated into proprietary platforms, does open source remain open, or does it just become the prototype phase for enterprise products?

Sources

Yann LeCun Raises $1 Billion to Build AI That Understands the Physical World · Mar 10 · Wired
Motif CEO confident in surpassing all Korean AI · Mar 10 · The Chosun Ilbo
Links 10/03/2026: Rust Rewrites by Slop "20,171 Times Slower", "You MUST Review LLM-generated Code" · Mar 10 · techrights.org
The open-source AI red-teaming tool used by Fortune 500 companies is now part of OpenAI · Mar 09 · The Next Web
Andrej Karpathy's new open source 'autoresearch' lets you run hundreds of AI experiments a night — with revolutionary implications · Mar 10 · VentureBeat
OpenAI to acquire Promptfoo to strengthen AI agent security testing · Mar 10 · CSOonline
'Raising Lobsters': How OpenClaw Became China's Hottest AI · Mar 10 · Sixth Tone
Former Meta A.I. Chief's Start-Up Is Valued at $3.5 Billion · Mar 10 · The New York Times

Monday, March 09 at 05:02 PM

AI & Open Source

Landmark legal trouble is brewing for OpenAI, with a lawsuit claiming ChatGPT provided faulty legal advice that created real legal problems for users—a case that could reshape how all AI companies handle professional services. The specifics of what advice went wrong aren't yet public, but if this holds up in court, it could force the entire industry to reconsider what domains their models should even attempt to answer in, since the liability exposure is substantial. 🤔 THINK ABOUT IT If courts start holding AI companies liable for professional advice they never claimed to be qualified to give, does that mean disclaimers are legally meaningless, or does it mean the models simply shouldn't attempt these domains at all?

Meanwhile, OpenAI is on a security-focused shopping spree. The company announced it's acquiring Promptfoo, the red-teaming startup used by over 125,000 developers and 30-plus Fortune 500 companies, for an undisclosed sum. 💰 MONEY MOVES Promptfoo had raised $23.4 million in venture funding—$5 million seed from Andreessen Horowitz in 2024, followed by an $18.4 million Series A led by Insight Partners in July 2025—and now gets absorbed into OpenAI's Frontier enterprise agent platform launched just last month. The tool works like an automated adversary, testing AI applications through chat interfaces and APIs to expose vulnerabilities like prompt injection and data leakage before they hit production. 🚀 THIS IS COOL What makes this valuable is that it uses specialized models to behave like attackers, then records what works, analyzes why, and iterates through reasoning loops to find deeper flaws—essentially creating a system that gets better at breaking your AI the more it tries. OpenAI committed to keeping Promptfoo open source under its current license, a promise worth watching.

But here's where things get complicated. A senior OpenAI robotics engineer recently resigned over "lethal autonomy" concerns after OpenAI agreed to make its AI systems available inside the U.S. Department of Defense. The resignation signals serious internal friction about weaponization, even as OpenAI moves deeper into government contracts.

Safety First" Meets Pentagon Deals
OpenAI has publicly positioned itself as thoughtful about AI ethics and safety, but the Pentagon deal—and the resulting resignation—suggests those commitments have limits when government contracts come calling. This isn't about disagreement over policy; it's about someone inside the company deciding the direction was incompatible with their conscience.
🎭 OpenAI
🗣️ Says:
“Committed to responsible AI development and ethical safeguards”
👁️ Does:
Agrees to integrate AI into Pentagon systems while engineers resign over lethal autonomy concerns
🎤 MIC DROPHard to claim you're the safety-conscious AI company when your own robotics lead walks because of what you're building for the military.
The broader developer ecosystem is meanwhile undergoing a subtle but significant shift. Top engineers are realizing that writing code to please AI agents—explicit, consistent, well-documented, boring code—actually makes systems more reliable. Hamel Husain, who helped create the beloved nbdev project, publicly dumped it because it wasn't AI-friendly; he realized he was "fighting the AI instead of working with it." This reveals something almost philosophical about the agent era: developers are gradually optimizing not for personal taste or elegant workflows, but for legibility to models. 🚀 THIS IS COOL GitHub's latest Octoverse data shows TypeScript has now overtaken Python, partly because models find TypeScript's explicit structure easier to reason about—a purely technical language is winning adoption for cultural reasons. Cursor, the AI-native code editor, is winning market share not by being revolutionary, but by feeling familiar enough that developers can adopt it gradually rather than learning a whole new worldview.

On the security front, there's actual good news buried in the week's chaos. Europol and a coalition of security companies and law enforcement dismantled the infrastructure hosting Tycoon2FA, one of the world's largest adversary-in-the-middle phishing operations, and also seized LeakBase, a major cybercriminal marketplace for stolen data and hacking tools. 🚀 THIS IS COOL These takedowns represent real wins where defenders showed up and made a dent. But here's the catch: these disruptions are typically short-term. The ecosystem adapts by migrating to other forums or more resilient channels like Telegram, so while this week belongs to the good guys, the war isn't over. Meanwhile, Anthropic discovered 22 new security vulnerabilities in Firefox using Claude Opus 4.6, demonstrating how AI is becoming a powerful tool for finding the flaws that traditional security testing misses—another reminder that AI's impact on security cuts both ways.

The convergence here is stark: as AI becomes embedded in enterprise workflows, government systems, and developer tools, the stakes for getting security, liability, and ethics right grow exponentially. OpenAI's acquisition of Promptfoo and commitment to keeping it open source suggests the company understands this. But the Pentagon deal and the engineer resignation suggest that understanding and action still have a gap between them. The lawsuit over legal advice, the shift in how developers write code to please models, and the ongoing cat-and-mouse game between cybersecurity attackers and defenders all point to the same emerging reality: AI isn't optional infrastructure anymore, it's foundational—which means every decision made now ripples through the entire ecosystem.

Sources

Landmark Lawsuit Against OpenAI For Allowing ChatGPT To Provide Legal Advice Could Be A Huge Game-Changer For All AI Makers · Mar 09 · Forbes
The open-source AI red-teaming tool used by Fortune 500 companies is now part of OpenAI · Mar 09 · The Next Web
BoCloud Technology launches BoClaw personal AI assistant · Mar 09 · Vietnam Investment Review
OpenAI Responds to Its Robotics Lead Resigning Over 'Lethal Autonomy' Concerns in New Pentagon Deal · Mar 09 · Inc.com
⚡ Weekly Recap: Qualcomm 0-Day, iOS Exploit Chains, AirSnitch Attack & Vibe-Coded Malware · Mar 09 · The Hacker News
Coding for agents · Mar 09 · InfoWorld
Links 09/03/2026: GAFAM Outsourcing, "MAGA Political Meddling" in EU, Indonesia Bans Social Control Media for Children Under 16 · Mar 09 · Techrights

Monday, March 09 at 07:02 AM

AI & Open Source

Landmark legal trouble is brewing for OpenAI as a lawsuit filed this week challenges the company's willingness to let ChatGPT dispense legal advice—a decision that allegedly landed users in genuine legal jeopardy. If this case gains traction, it could reshape how every AI company thinks about liability and what their systems are allowed to do. The implications ripple far beyond one chatbot: courts may soon force tech companies to build guardrails they've been content to skip, fundamentally changing what gets shipped to the public.

Meanwhile, China is experiencing its own AI moment, but with a distinctly different flavor. 🚀 THIS IS COOL OpenClaw—nicknamed "crayfish" by enthusiasts because of its logo—represents a wholesale departure from ChatGPT-style conversation. This autonomous AI agent isn't designed just to talk; it's built to actually execute tasks, integrating with messaging apps, file systems, and local applications to get things done. Nearly 1,000 developers lined up at Tencent's headquarters last week to get it installed, some even charging fees for installation services. Xiaomi and Tencent have both launched versions, and Chinese state media issued security warnings that only seemed to amp up the hype. 🤔 THINK ABOUT IT What does it mean that the most cautious moment in China's media about a new technology is framed as a feature, not a bug?

The rush to build agentic AI is forcing developers to confront a profound shift in what "good code" actually means. For decades, programmers optimized for readability to other humans—clever abstractions, elegant frameworks, personal taste. That era is ending. Hamel Husain, who built and championed the nbdev project, recently abandoned it entirely because it wasn't AI-friendly. He's now writing code that machines prefer: explicit, consistent, boring, legible to LLMs. GitHub's data backs this up—TypeScript has overtaken Python, driven partly by models' preference for the language. Developers aren't rebelling against this; they're leaning in, treating tools as infrastructure rather than self-expression. The conformity is actually leverage.

Responsible National Security" and "No Red Lines" Walk Into a Pentagon
OpenAI went public with a Pentagon deal, then tried to assure employees the agreement had built-in safeguards. But Kalinowski's resignation letter reveals those guardrails were vague enough that a senior technical leader felt compelled to walk away, questioning whether the company had actually thought through what it was agreeing to. This wasn't philosophical disagreement—this was a warning about inadequate oversight on something that matters.
🎭 OpenAI
🗣️ Says:
“The company announced a Pentagon partnership with "clear red lines: no domestic surveillance and no autonomous weapons”
👁️ Does:
Senior robotics leader Caitlin Kalinowski resigned on principle, explicitly stating the policy guardrails around "surveillance of Americans without judicial oversight and lethal autonomy without human authorization" were not sufficiently defined before the deal was announced
🎤 MIC DROPYou can't promise red lines after you've already crossed them.
💰 MONEY MOVES Microsoft quietly absorbed a $400 million loss on "Project Blackbird," one of its high-profile AI initiatives, as the company continues restructuring and laying off personnel. The failure underscores just how expensive it is to chase the AI frontier when bets don't pan out. Meanwhile, the robotics resignation at OpenAI and the legal exposure from ChatGPT's legal advice fiasco suggest that the real costs of scaling AI aren't just engineering budgets—they're legal, ethical, and human capital bleeding out faster than earnings calls can explain. 🤔 THINK ABOUT IT If OpenAI needed a senior robotics leader to resign before it could see the problems with its Pentagon deal, what else isn't being questioned until it's too late?

Sources

Landmark Lawsuit Against OpenAI For Allowing ChatGPT To Provide Legal Advice Could Be A Huge Game-Changer For All AI Makers · Mar 09 · Forbes
China's state news media issues security warning over OpenClaw amid social media frenzy · Mar 08 · Global Times
Links 08/03/2026: Microsoft Lost $400 Million on "Project Blackbird" and Half the States Sue Over Illegal Tariffs · Mar 08 · Techrights
Links 08/03/2026: Cisco Holes Again and "Blatant Problem With OpenAI That Endangers Kids" · Mar 08 · Techrights
BoCloud Technology launches BoClaw personal AI assistant · Mar 09 · Vietnam Investment Review
Coding for agents · Mar 09 · InfoWorld
OpenAI robotics leader resigns over concerns about Pentagon AI deal · Mar 08 · NPR

Monday, March 09 at 03:12 AM

AI & Open Source

China's state-run Xinhua News Agency issued a security warning about OpenClaw this week, even as the autonomous AI agent sparked what can only be described as a social media frenzy under the playful nickname "raising crayfish"—a reference to its crab-like logo. The timing tells you something important about how differently nations are approaching AI development right now. While Chinese tech giants Tencent and Xiaomi were literally lining up thousands of enthusiasts for installation events (with some users even charging fees for the service), regulators were simultaneously pumping the brakes, suggesting a wait-and-see approach. 🚀 THIS IS COOL OpenClaw represents a genuine shift in AI design philosophy—it's built to "get things done" rather than just chat, integrating persistent memory, multi-channel communication, and local deployment capabilities that make it fundamentally different from ChatGPT or other conversational systems. But that power comes with a price: the agent needs extensive system permissions to manipulate files and applications, which is exactly why security experts and companies like DeepSeek are urging caution.

The Chinese enthusiasm for OpenClaw contrasts sharply with growing alarm bells in the United States around the same issue—but from a completely different angle.

Pentagon Partnership Principles Meet Pentagon Reality
OpenAI claims to have ethical boundaries around military AI, but moved forward with a Defense Department agreement so hastily that it prompted a high-profile resignation from within the company. The broader issue: federal agencies are actively competing to secure AI technology from major developers, which creates perverse incentives to move fast and ask permission later rather than genuinely deliberating on consequences.
🎭 OpenAI
🗣️ Says:
“We have red lines: no domestic surveillance and no autonomous weapons”
👁️ Does:
Announced a Pentagon partnership without sufficiently defining guardrails around AI uses before the deal went public
🎤 MIC DROPSenior robotics leader Caitlin Kalinowski resigned "on principle," saying surveillance without judicial oversight and lethal autonomy without human authorization "deserved more deliberation than they got.
This Pentagon deal matters because it sits at the intersection of three accelerating trends. First, 💰 MONEY MOVES Microsoft lost $400 million on "Project Blackbird," signaling that even the deepest-pocketed tech giants are making massive bets that don't pan out—which means companies are under pressure to monetize AI wherever they can find paying customers, including government agencies. Second, the U.S. government is deliberately trying to diversify its AI supplier base, moving away from any single vendor (particularly after tensions with Anthropic over its CEO's public stance against military AI applications). Third, there's a real jobs anxiety spreading through the software engineering community. One engineer recently wrote about fearing their profession might not survive another decade in its current form—and that anxiety is rational given how quickly autonomous AI agents are becoming functional enough to handle real work.

The contrast between China's caution and America's acceleration reveals something uncomfortable: both approaches have genuine downsides. 🤔 THINK ABOUT IT If Chinese regulators are right to be worried about security and local control issues with OpenClaw, why are American companies moving forward with military partnerships faster rather than slower? And if American companies are right that the U.S. needs advanced AI for national security, why is China taking the time to deliberate while America's internal warnings go unheeded? The honest answer is that neither country has actually solved the governance problem—they're just choosing different ways to avoid it.

Meanwhile, at a Manhattan lobster-themed AI enthusiast event (yes, really), people were celebrating the latest AI solutions with jellyfish hats and cocktails, embodying the exuberant optimism that defines the current moment in tech culture. That same culture produced both OpenClaw's impressive technical achievements and the cavalier approach to Pentagon guardrails that prompted a senior engineer to resign on principle. The gap between those two things—genuine breakthrough technology and inadequate ethical deliberation—is where we're living right now, and it's only getting wider.

Sources

China's state news media issues security warning over OpenClaw amid social media frenzy · Mar 08 · Global Times
Links 08/03/2026: Microsoft Lost $400 Million on "Project Blackbird" and Half the States Sue Over Illegal Tariffs · Mar 08 · Techrights
Links 08/03/2026: Cisco Holes Again and "Blatant Problem With OpenAI That Endangers Kids" · Mar 08 · Techrights
OpenAI robotics leader resigns over concerns about Pentagon AI deal · Mar 08 · NPR
At a lobster-themed event for AI enthusiasts, exuberance with a side of cocktail sauce · Mar 08 · NBC News

Sunday, March 08 at 09:32 PM

AI & Open Source

China's state media just issued a security warning about OpenClaw, an autonomous AI agent that's become the unlikely darling of Chinese tech enthusiasts—to the point that nearly 1,000 people lined up outside Tencent's headquarters on Friday to get it installed. The system, nicknamed "raising crayfish" because its logo resembles a crustacean, is fundamentally different from ChatGPT: instead of just chatting, it's designed to actually execute tasks. OpenClaw integrates multiple messaging apps and management tools, meaning it can manipulate local files, control applications, and operate even when you're away from your computer. Xinhua's warning and DeepSeek's public recommendation that users "wait and see" before installing reflect real security concerns—the agent needs extensive system permissions to do its job, which means one vulnerability could give attackers deep access to your digital life. 🚀 THIS IS COOL Yet despite the risks, early adopters are already seeing productivity gains; one user told the Global Times that OpenClaw has transformed how he manages tasks across devices.

The enthusiasm around OpenClaw mirrors a broader global pattern: the AI revolution is moving faster than the safeguards meant to govern it. That tension exploded at OpenAI on March 8 when Caitlin Kalinowski, a senior robotics researcher, resigned on principle over the company's Pentagon partnership. Kalinowski's issue wasn't with national security AI in theory—she explicitly said "AI has an important role in national security"—but with the process: OpenAI announced an agreement to make its systems available inside Defense Department computing systems without, in her view, sufficiently deliberating guardrails around surveillance and autonomous weapons.

We Have Red Lines"—Unless the Military Wants Them Erased
OpenAI claimed to have firm ethical boundaries around military AI use, but the company proceeded with a Defense Department deal before those boundaries were properly defined internally. Kalinowski's resignation suggests the "red lines" were more aspirational than actual policy.
🎭 OpenAI
🗣️ Says:
“The Pentagon agreement "makes clear our red lines: no domestic surveillance and no autonomous weapons”
👁️ Does:
Moved forward with a Pentagon partnership without internal agreement on what those red lines actually mean, prompting a senior robotics leader to resign over insufficient guardrails
🎤 MIC DROPYou can't draw a line in the sand if you haven't agreed where the sand is.
This isn't an isolated incident of tech companies moving faster than ethics. 💰 MONEY MOVES Microsoft lost $400 million on "Project Blackbird" and was forced into major layoffs that included gaming division casualties. Meanwhile, software engineers are having genuine identity crises about their profession's future—one engineer wrote bluntly that "I'm certain [the software engineering industry is] going to change far more than it did in the last two decades," and he's not sure whether he'll be supervising AI agents or leaving tech entirely by 2036. The conversation has shifted from "will AI replace jobs?" to "which jobs survive the next decade at all?" And yet the hype machine keeps spinning: in New York, AI enthusiasts in jellyfish hats and Pegasus wings gathered at a lobster-themed event to recruit users for the latest AI solutions, complete with cocktails. 🤔 THINK ABOUT IT If OpenClaw and similar autonomous agents become reliable, what's the actual difference between a tool that augments human workers and a tool that makes them redundant—and who decides which one it is?

Sources

China's state news media issues security warning over OpenClaw amid social media frenzy · Mar 08 · Global Times
Links 08/03/2026: Microsoft Lost $400 Million on "Project Blackbird" and Half the States Sue Over Illegal Tariffs · Mar 08 · Techrights
Links 08/03/2026: Cisco Holes Again and "Blatant Problem With OpenAI That Endangers Kids" · Mar 08 · Techrights
OpenAI robotics leader resigns over concerns about Pentagon AI deal · Mar 08 · NPR
At a lobster-themed event for AI enthusiasts, exuberance with a side of cocktail sauce · Mar 08 · NBC News

Sunday, March 08 at 07:46 PM

AI & Open Source

China's state news media issued a warning about the risks of AI this week, highlighting the need for guardrails around its development and use. The move comes as the country's tech industry continues to boom, with companies like Baidu and Alibaba investing heavily in AI research and development.

💰 MONEY MOVES A report by Accenture estimates that the global AI market will reach $190 billion by 2025, with China expected to account for a significant share of that growth.

In the US, a senior member of OpenAI's robotics team resigned over concerns about the company's partnership with the Pentagon, which allows the use of its AI systems in national security applications. Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, cited concerns about the lack of clear guardrails around the use of AI in surveillance and autonomous weapons.

OpenAI's AI Ethics Red Flag
OpenAI announced a partnership with the Pentagon that could lead to the use of its AI systems in surveillance and autonomous weapons, despite its claims to prioritize AI safety and transparency.
🎭 OpenAI
🗣️ Says:
“"We are committed to developing AI that is safe and transparent, and we will work to ensure that our technology is not used for harm."”
👁️ Does:
Announced a partnership with the Pentagon that could lead to the use of its AI systems in surveillance and autonomous weapons.
🎤 MIC DROP"OpenAI's actions don't align with its words on AI ethics."

The partnership with the Pentagon is part of a broader trend of tech companies partnering with the government on AI development, with Google and Anthropic also working on AI projects for national security applications.

🚀 THIS IS COOL Meanwhile, researchers at the University of California, Berkeley have developed a new chip that can process data 100x faster at half the power consumption, marking a significant breakthrough in AI hardware.

The development of AI is raising important questions about its use and impact. As companies like OpenAI and Google work with the government on AI development, there are concerns about the potential for AI to be used for surveillance and control. Meanwhile, researchers are pushing the boundaries of what is possible with AI, developing new technologies that could have a profound impact on society.

🤔 THINK ABOUT IT If this technology works as promised, what happens to the 4 million people currently doing jobs that could be automated by AI?

Sources

China's state news media issues security warning over OpenClaw amid social media frenzy - Global Times
Links 08/03/2026: Microsoft Lost $400 Million on "Project Blackbird" and Half the States Sue Over Illegal Tariffs - Techrights
Links 08/03/2026: Cisco Holes Again and "Blatant Problem With OpenAI That Endangers Kids" - Techrights
OpenAI robotics leader resigns over concerns about Pentagon AI deal - NPR
At a lobster-themed event for AI enthusiasts, exuberance with a side of cocktail sauce - NBC News

Sunday, March 08 at 06:34 PM

AI & Open Source

China's state news media has issued a warning about the dangers of artificial intelligence, highlighting concerns about the potential for AI to be used for surveillance and control. This comes as the US Department of Defense is pushing to incorporate AI into its national security work, with OpenAI recently announcing a partnership with the Pentagon. The deal has sparked debate across the tech industry about oversight and acceptable uses of AI.

💰 MONEY MOVES This deal could cost taxpayers $2.3 billion over the next decade, and it's not clear what kind of guardrails are in place to prevent AI from being used for domestic surveillance or autonomous weapons. OpenAI has said it will not allow its technology to be used for these purposes, but some employees have expressed concerns about the lack of transparency and oversight.

🚀 THIS IS COOL Meanwhile, researchers at Google and Microsoft are making breakthroughs in AI development, with new chips that can process data 100x faster at half the power consumption. These advancements could lead to significant improvements in fields like healthcare and finance, but they also raise concerns about the potential for AI to be used for malicious purposes.

A senior member of OpenAI's robotics team, Caitlin Kalinowski, has resigned over concerns about the company's partnership with the Pentagon. In a statement, Kalinowski said she was worried about the lack of guardrails around AI uses, particularly when it comes to surveillance and lethal autonomy. This is not the first time OpenAI has faced criticism over its AI development - last year, the company faced backlash for its lack of transparency and accountability in its AI research.

"Pentagon's AI Partner OpenAI Exposed for Hypocrisy"
OpenAI has partnered with the Pentagon to develop AI for national security use, despite claiming it will not allow its technology to be used for surveillance or autonomous weapons. This hypocrisy has sparked criticism from employees and industry experts.
🎭 OpenAI
🗣️ Says:
“"We will not allow our technology to be used for domestic surveillance or autonomous weapons."”
👁️ Does:
Despite this claim, OpenAI has partnered with the Pentagon to develop AI for national security use.
🎤 MIC DROP"OpenAI's partnership with the Pentagon is a clear example of 'do as I say, not as I do' - the company is preaching one thing but doing the exact opposite."

As the debate over AI continues to rage, one thing is clear: the tech industry is at a crossroads. With the potential for AI to be used for both good and evil, it's more important than ever that companies like OpenAI prioritize transparency and accountability in their development. 🤔 THINK ABOUT IT If this technology works as promised, what happens to the 4 million people currently doing jobs that could be automated by AI?

Sources

China's state news media issues security warning over OpenClaw amid social media frenzy - Global Times
Links 08/03/2026: Microsoft Lost $400 Million on "Project Blackbird" and Half the States Sue Over Illegal Tariffs - Techrights
Links 08/03/2026: Cisco Holes Again and "Blatant Problem With OpenAI That Endangers Kids" - Techrights
OpenAI robotics leader resigns over concerns about Pentagon AI deal - NPR
At a lobster-themed event for AI enthusiasts, exuberance with a side of cocktail sauce - NBC News

Powered by News Research Agent