AI & Open Source
Last 7 briefings
Tuesday, March 10 at 07:02 AM
Yann LeCun's new Paris-based startup, Advanced Machine Intelligence, just raised over $1 billion to prove that the future of AI isn't about scaling up language models—it's about teaching machines to understand the physical world. 💰 MONEY MOVES The $3.5 billion valuation, backed by Bezos Expeditions, Mark Cuban, and former Google CEO Eric Schmidt, represents a major bet against OpenAI, Anthropic, and even LeCun's former employer Meta, all of whom believe that throwing more compute at large language models will eventually deliver human-level intelligence. LeCun, who won a Turing Award in 2018, has spent the last year publicly dismantling the idea that ChatGPT-style systems can reach AGI, and he's putting serious capital where his mouth is. AMI plans to launch with offices in Paris, Montreal, Singapore, and New York, targeting industries like manufacturing, biomedics, and robotics where the stakes for getting the physics right are highest.
Meanwhile, the open-source world is quietly reshaping enterprise AI security in ways the industry didn't see coming. 🚀 THIS IS COOL OpenAI just acquired Promptfoo, an open-source red-teaming platform built by Ian Webster nights and weekends while he was at Discord, that's now used by over 125,000 developers and more than 30 Fortune 500 companies. The tool works like an automated adversary—it talks to your AI application like an attacker would, probing for prompt injections, jailbreaks, and unsafe model behavior, then iterates through an agentic reasoning loop to expose deeper vulnerabilities. OpenAI pledged to keep the tool open source under its current license, integrating it into Frontier, their new enterprise agent platform launched in February. 🤔 THINK ABOUT IT As enterprises shift from asking "how capable is this model?" to "how do we keep this model from breaking our business?", the companies that own the security tooling become gatekeepers for the entire deployment pipeline.
🚀 THIS IS COOL Andrej Karpathy, Tesla's former AI director, just released an open-source "autoresearch" framework that lets AI agents read their own source code, form hypotheses about improvements, modify the code, run experiments, and evaluate results—potentially allowing researchers to run hundreds of AI experiments in a single night. This isn't just incremental; it's automation eating the job of experimentation itself. At the same time, Motif's CEO is publicly confident they'll surpass OpenAI and Google, while South Korean competitors race to catch up. The competitive pressure is genuine, but so is the infrastructure complexity—OpenClaw, an Austrian open-source AI agent that went viral in China last week with 250,000 GitHub stars, requires enough technical setup that installation services are now a booming market, with some providers claiming six-figure earnings in just days.
Sources
Monday, March 09 at 05:02 PM
Landmark legal trouble is brewing for OpenAI, with a lawsuit claiming ChatGPT provided faulty legal advice that created real legal problems for users—a case that could reshape how all AI companies handle professional services. The specifics of what advice went wrong aren't yet public, but if this holds up in court, it could force the entire industry to reconsider what domains their models should even attempt to answer in, since the liability exposure is substantial. 🤔 THINK ABOUT IT If courts start holding AI companies liable for professional advice they never claimed to be qualified to give, does that mean disclaimers are legally meaningless, or does it mean the models simply shouldn't attempt these domains at all?
Meanwhile, OpenAI is on a security-focused shopping spree. The company announced it's acquiring Promptfoo, the red-teaming startup used by over 125,000 developers and 30-plus Fortune 500 companies, for an undisclosed sum. 💰 MONEY MOVES Promptfoo had raised $23.4 million in venture funding—$5 million seed from Andreessen Horowitz in 2024, followed by an $18.4 million Series A led by Insight Partners in July 2025—and now gets absorbed into OpenAI's Frontier enterprise agent platform launched just last month. The tool works like an automated adversary, testing AI applications through chat interfaces and APIs to expose vulnerabilities like prompt injection and data leakage before they hit production. 🚀 THIS IS COOL What makes this valuable is that it uses specialized models to behave like attackers, then records what works, analyzes why, and iterates through reasoning loops to find deeper flaws—essentially creating a system that gets better at breaking your AI the more it tries. OpenAI committed to keeping Promptfoo open source under its current license, a promise worth watching.
But here's where things get complicated. A senior OpenAI robotics engineer recently resigned over "lethal autonomy" concerns after OpenAI agreed to make its AI systems available inside the U.S. Department of Defense. The resignation signals serious internal friction about weaponization, even as OpenAI moves deeper into government contracts.
On the security front, there's actual good news buried in the week's chaos. Europol and a coalition of security companies and law enforcement dismantled the infrastructure hosting Tycoon2FA, one of the world's largest adversary-in-the-middle phishing operations, and also seized LeakBase, a major cybercriminal marketplace for stolen data and hacking tools. 🚀 THIS IS COOL These takedowns represent real wins where defenders showed up and made a dent. But here's the catch: these disruptions are typically short-term. The ecosystem adapts by migrating to other forums or more resilient channels like Telegram, so while this week belongs to the good guys, the war isn't over. Meanwhile, Anthropic discovered 22 new security vulnerabilities in Firefox using Claude Opus 4.6, demonstrating how AI is becoming a powerful tool for finding the flaws that traditional security testing misses—another reminder that AI's impact on security cuts both ways.
The convergence here is stark: as AI becomes embedded in enterprise workflows, government systems, and developer tools, the stakes for getting security, liability, and ethics right grow exponentially. OpenAI's acquisition of Promptfoo and commitment to keeping it open source suggests the company understands this. But the Pentagon deal and the engineer resignation suggest that understanding and action still have a gap between them. The lawsuit over legal advice, the shift in how developers write code to please models, and the ongoing cat-and-mouse game between cybersecurity attackers and defenders all point to the same emerging reality: AI isn't optional infrastructure anymore, it's foundational—which means every decision made now ripples through the entire ecosystem.
Sources
Monday, March 09 at 07:02 AM
Landmark legal trouble is brewing for OpenAI as a lawsuit filed this week challenges the company's willingness to let ChatGPT dispense legal advice—a decision that allegedly landed users in genuine legal jeopardy. If this case gains traction, it could reshape how every AI company thinks about liability and what their systems are allowed to do. The implications ripple far beyond one chatbot: courts may soon force tech companies to build guardrails they've been content to skip, fundamentally changing what gets shipped to the public.
Meanwhile, China is experiencing its own AI moment, but with a distinctly different flavor. 🚀 THIS IS COOL OpenClaw—nicknamed "crayfish" by enthusiasts because of its logo—represents a wholesale departure from ChatGPT-style conversation. This autonomous AI agent isn't designed just to talk; it's built to actually execute tasks, integrating with messaging apps, file systems, and local applications to get things done. Nearly 1,000 developers lined up at Tencent's headquarters last week to get it installed, some even charging fees for installation services. Xiaomi and Tencent have both launched versions, and Chinese state media issued security warnings that only seemed to amp up the hype. 🤔 THINK ABOUT IT What does it mean that the most cautious moment in China's media about a new technology is framed as a feature, not a bug?
The rush to build agentic AI is forcing developers to confront a profound shift in what "good code" actually means. For decades, programmers optimized for readability to other humans—clever abstractions, elegant frameworks, personal taste. That era is ending. Hamel Husain, who built and championed the nbdev project, recently abandoned it entirely because it wasn't AI-friendly. He's now writing code that machines prefer: explicit, consistent, boring, legible to LLMs. GitHub's data backs this up—TypeScript has overtaken Python, driven partly by models' preference for the language. Developers aren't rebelling against this; they're leaning in, treating tools as infrastructure rather than self-expression. The conformity is actually leverage.
Sources
Monday, March 09 at 03:12 AM
China's state-run Xinhua News Agency issued a security warning about OpenClaw this week, even as the autonomous AI agent sparked what can only be described as a social media frenzy under the playful nickname "raising crayfish"—a reference to its crab-like logo. The timing tells you something important about how differently nations are approaching AI development right now. While Chinese tech giants Tencent and Xiaomi were literally lining up thousands of enthusiasts for installation events (with some users even charging fees for the service), regulators were simultaneously pumping the brakes, suggesting a wait-and-see approach. 🚀 THIS IS COOL OpenClaw represents a genuine shift in AI design philosophy—it's built to "get things done" rather than just chat, integrating persistent memory, multi-channel communication, and local deployment capabilities that make it fundamentally different from ChatGPT or other conversational systems. But that power comes with a price: the agent needs extensive system permissions to manipulate files and applications, which is exactly why security experts and companies like DeepSeek are urging caution.
The Chinese enthusiasm for OpenClaw contrasts sharply with growing alarm bells in the United States around the same issue—but from a completely different angle.
The contrast between China's caution and America's acceleration reveals something uncomfortable: both approaches have genuine downsides. 🤔 THINK ABOUT IT If Chinese regulators are right to be worried about security and local control issues with OpenClaw, why are American companies moving forward with military partnerships faster rather than slower? And if American companies are right that the U.S. needs advanced AI for national security, why is China taking the time to deliberate while America's internal warnings go unheeded? The honest answer is that neither country has actually solved the governance problem—they're just choosing different ways to avoid it.
Meanwhile, at a Manhattan lobster-themed AI enthusiast event (yes, really), people were celebrating the latest AI solutions with jellyfish hats and cocktails, embodying the exuberant optimism that defines the current moment in tech culture. That same culture produced both OpenClaw's impressive technical achievements and the cavalier approach to Pentagon guardrails that prompted a senior engineer to resign on principle. The gap between those two things—genuine breakthrough technology and inadequate ethical deliberation—is where we're living right now, and it's only getting wider.
Sources
Sunday, March 08 at 09:32 PM
China's state media just issued a security warning about OpenClaw, an autonomous AI agent that's become the unlikely darling of Chinese tech enthusiasts—to the point that nearly 1,000 people lined up outside Tencent's headquarters on Friday to get it installed. The system, nicknamed "raising crayfish" because its logo resembles a crustacean, is fundamentally different from ChatGPT: instead of just chatting, it's designed to actually execute tasks. OpenClaw integrates multiple messaging apps and management tools, meaning it can manipulate local files, control applications, and operate even when you're away from your computer. Xinhua's warning and DeepSeek's public recommendation that users "wait and see" before installing reflect real security concerns—the agent needs extensive system permissions to do its job, which means one vulnerability could give attackers deep access to your digital life. 🚀 THIS IS COOL Yet despite the risks, early adopters are already seeing productivity gains; one user told the Global Times that OpenClaw has transformed how he manages tasks across devices.
The enthusiasm around OpenClaw mirrors a broader global pattern: the AI revolution is moving faster than the safeguards meant to govern it. That tension exploded at OpenAI on March 8 when Caitlin Kalinowski, a senior robotics researcher, resigned on principle over the company's Pentagon partnership. Kalinowski's issue wasn't with national security AI in theory—she explicitly said "AI has an important role in national security"—but with the process: OpenAI announced an agreement to make its systems available inside Defense Department computing systems without, in her view, sufficiently deliberating guardrails around surveillance and autonomous weapons.
Sources
Sunday, March 08 at 07:46 PM
China's state news media issued a warning about the risks of AI this week, highlighting the need for guardrails around its development and use. The move comes as the country's tech industry continues to boom, with companies like Baidu and Alibaba investing heavily in AI research and development.
💰 MONEY MOVES A report by Accenture estimates that the global AI market will reach $190 billion by 2025, with China expected to account for a significant share of that growth.
In the US, a senior member of OpenAI's robotics team resigned over concerns about the company's partnership with the Pentagon, which allows the use of its AI systems in national security applications. Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, cited concerns about the lack of clear guardrails around the use of AI in surveillance and autonomous weapons.
The partnership with the Pentagon is part of a broader trend of tech companies partnering with the government on AI development, with Google and Anthropic also working on AI projects for national security applications.
🚀 THIS IS COOL Meanwhile, researchers at the University of California, Berkeley have developed a new chip that can process data 100x faster at half the power consumption, marking a significant breakthrough in AI hardware.
The development of AI is raising important questions about its use and impact. As companies like OpenAI and Google work with the government on AI development, there are concerns about the potential for AI to be used for surveillance and control. Meanwhile, researchers are pushing the boundaries of what is possible with AI, developing new technologies that could have a profound impact on society.
🤔 THINK ABOUT IT If this technology works as promised, what happens to the 4 million people currently doing jobs that could be automated by AI?
Sources
Sunday, March 08 at 06:34 PM
China's state news media has issued a warning about the dangers of artificial intelligence, highlighting concerns about the potential for AI to be used for surveillance and control. This comes as the US Department of Defense is pushing to incorporate AI into its national security work, with OpenAI recently announcing a partnership with the Pentagon. The deal has sparked debate across the tech industry about oversight and acceptable uses of AI.
💰 MONEY MOVES This deal could cost taxpayers $2.3 billion over the next decade, and it's not clear what kind of guardrails are in place to prevent AI from being used for domestic surveillance or autonomous weapons. OpenAI has said it will not allow its technology to be used for these purposes, but some employees have expressed concerns about the lack of transparency and oversight.
🚀 THIS IS COOL Meanwhile, researchers at Google and Microsoft are making breakthroughs in AI development, with new chips that can process data 100x faster at half the power consumption. These advancements could lead to significant improvements in fields like healthcare and finance, but they also raise concerns about the potential for AI to be used for malicious purposes.
A senior member of OpenAI's robotics team, Caitlin Kalinowski, has resigned over concerns about the company's partnership with the Pentagon. In a statement, Kalinowski said she was worried about the lack of guardrails around AI uses, particularly when it comes to surveillance and lethal autonomy. This is not the first time OpenAI has faced criticism over its AI development - last year, the company faced backlash for its lack of transparency and accountability in its AI research.
As the debate over AI continues to rage, one thing is clear: the tech industry is at a crossroads. With the potential for AI to be used for both good and evil, it's more important than ever that companies like OpenAI prioritize transparency and accountability in their development. 🤔 THINK ABOUT IT If this technology works as promised, what happens to the 4 million people currently doing jobs that could be automated by AI?
Sources
Powered by News Research Agent