Thursday, May 14, 2026
OpenAI raises $122 billion at $852 billion valuation, largest private financing ever
AI News & Trends

OpenAI has raised $122 billion at an $852 billion valuation, the largest private funding ever, according to the company's announcement. Major investors include Amazon, Nvidia, and SoftBank, with Microsoft keeping its large stake. The money may go toward building more infrastructure and powerful AI tools, as well as expanding cloud and chip partnerships. Some reports suggest OpenAI's costs could rise very quickly, and it is not clear if future revenue will keep up with this growth. OpenAI has not said when, or if, it might go public.

RadixArk launches with $100M seed round to expand open-source AI tooling
AI News & Trends

RadixArk, a San Francisco startup, launched with a $100 million seed round at a $400 million valuation, which observers describe as a mega-seed. The company plans to use the money to expand its open-source AI tools, such as SGLang, and to build a new managed platform, though details about customer adoption remain unclear. Investors include Accel, Spark Capital, and chipmakers NVIDIA and AMD, which may signal strong interest in AI infrastructure. Some reports say deals of this size reflect a trend toward securing resources early, though it is not certain this funding will be enough for RadixArk's ambitious goals. Future updates, such as customer numbers or partnerships, may show how well the company can deliver on its aim to make AI infrastructure widely available.

OpenAI gates GPT-5.5-Cyber after report flags offensive capability
AI News & Trends

OpenAI has restricted public access to its GPT-5.5-Cyber model after a report from the UK's AI Security Institute (AISI) flagged its strong offensive cyber abilities. Tests showed the model may help defenders, but could also be misused by attackers, and existing safeguards may not reliably block harmful uses. OpenAI now only allows vetted cybersecurity teams to use GPT-5.5-Cyber through a special program, and similar restrictions are being adopted by other companies. Experts suggest that the model's hacking skills appear to be a side effect of its advanced reasoning, raising concerns that future models might become even harder to control.

Enterprises Adopt 4 Controls for AI Agent Governance, Compliance
Business & Ethical AI

Enterprises may need to use four main controls to manage risks and compliance when using cloud-based AI agents. First, before deployment, they should map all data sources accessed by the agent and confirm legal bases for using each type of data. Second, during operations, organizations might set up strict controls like filtering out data without consent, encrypting data, and using automated redaction and strong access controls. Third, continuous monitoring and logging appear to help maintain security and traceability, with periodic permission reviews and audit trails for incident response. Finally, having a clear exit plan, including secure data deletion and export, is suggested to ensure proper closure at the end of a vendor relationship.

Nestlé Recalls 800 Infant Formula Products Across 60 Countries
Institutional Intelligence & Tribal Knowledge

Nestlé recalled over 800 infant formula products in more than 60 countries due to possible contamination, but no confirmed illnesses have been linked to the products. Experts suggest that slow communication may have increased public worry. The recall caused a drop in nutrition sales and high costs for Nestlé, and total losses might reach 1 billion euros. Authorities warned that the contamination could cause vomiting and cramps, but no cases were reported. Trust may take a long time to recover, and clear, transparent information appears to help rebuild confidence.

Latest News

DeepMind AlphaEvolve Makes 23 Scientific Discoveries in Q1 2026
AI News & Trends · 3h ago

DeepMind's AlphaEvolve reported 23 verified scientific discoveries in chemistry, materials science, and mathematics during early 2026. The system appears to work by generating and testing algorithms, with results confirmed by experts. In chemistry, AlphaEvolve may have created quantum circuit layouts that reduced error rates by ten times compared to older methods. In materials science, some experiments suggest training and inference became about four times faster. In mathematics, AlphaEvolve discovered a way to multiply 4x4 complex matrices with fewer operations than previously thought possible, and some new mathematical bounds were also found.
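For context on what "fewer operations" means here: the classic result in this line of work is Strassen's scheme, which multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8. The sketch below shows that construction; AlphaEvolve's 4x4 complex-matrix scheme is analogous in spirit but is not reproduced here.

```python
def strassen_2x2(A, B):
    # Strassen's 7-multiplication scheme for 2x2 matrix product.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    # Schoolbook product: 8 multiplications, the baseline Strassen improves on.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B) == [[19, 22], [43, 50]]
```

Shaving even one multiplication matters because these schemes apply recursively to large matrices, so a lower count at the base case lowers the asymptotic cost of matrix multiplication.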

DeepSeek Seeks $7.3B Funding, Founder May Invest $2.9B
AI News & Trends · 5h ago

DeepSeek, a Chinese AI lab, may raise up to 50 billion yuan (about $7.3 billion) in its first major funding round, with founder Liang Wenfeng possibly contributing around $2.9 billion of his own money. The company is in talks with state-backed funds and large firms such as Tencent, but no major deals have closed yet. If completed, the round could be the largest of its kind in China's AI sector, far bigger than other recent deals. The money may go toward new computing infrastructure and retaining engineers. Talks are still ongoing, so the final numbers might change.

Anthropic Integrates Claude Into Microsoft 365, Starting at $20/Month
AI News & Trends · 5h ago

Claude by Anthropic is now available as secure add-ins for Microsoft 365 apps such as Excel, Word, and PowerPoint, with Outlook support in public beta. Claude drafts and suggests edits in a sidebar, and changes are applied only when the user accepts them. This read-only, approve-before-apply model may suit companies that require human sign-off on edits. Pricing starts at $20 per month, the add-ins install quickly, and individual and team plans are available. Early reaction suggests some users like the reviewable approach, but how widely it is being adopted remains unclear.

Brier's AI Framework Integrates Humans, Agents for Better Alignment
AI Deep Dives & Tutorials · 5h ago

Noah Brier's essay suggests that the main challenge in building with AI tools is team coordination, not just code generation. He proposes a framework with five layers - standards, architecture, specs, plans, and code - to help align humans and AI agents toward the same goals. Brier warns that without clear artifacts and strong standards, AI-generated code may increase technical debt and cause quality issues. Early reports suggest that using AI can speed up routine tasks, but may also introduce security and maintainability risks. Brier's approach aims to keep both humans and AI agents working together smoothly by making rules and processes clear to everyone involved.

OpenAI's o1-preview AI Outperforms Doctors in ER Diagnosis Study
AI News & Trends · 7h ago

A study suggests that OpenAI's o1-preview AI may list the correct or near-correct diagnosis more often than doctors in a Boston emergency room. However, its performance improvement was not seen in all areas, and emergency teams still need to check the AI's uncertainty before using its advice. Patient-safety groups and regulators are watching AI in healthcare closely, recommending strict policies and monitoring for any problems. Newer AI models appear to do even better in some high-stakes cases, but more studies are needed to see if these tools really lead to safer and faster care in real hospitals.

HR adopts checklist for safe AI use with employee data
Business & Ethical AI · 7h ago

HR leaders are rapidly adding AI tools to their work, but this may risk exposing personal employee data. A checklist is suggested to help HR teams safely use AI by first reviewing policies and legal issues, mapping data, limiting data use, informing workers, testing for bias, and controlling vendors. Legal reviews may be required under upcoming laws and existing regulations. The checklist offers steps to create records showing that HR considered risks before using AI, but it does not guarantee full compliance. Careful following of these steps may help build trust and provide proof if questions arise later.
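The "testing for bias" step has at least one well-known concrete form: the four-fifths rule from the US EEOC's Uniform Guidelines, which flags possible adverse impact when one group's selection rate falls below 80% of the highest group's rate. The sketch below applies that rule; the function name and the data are illustrative, not drawn from any specific HR tool.

```python
def adverse_impact(selection_rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return groups whose selection rate is below `threshold` of the best rate."""
    best = max(selection_rates.values())
    return sorted(g for g, r in selection_rates.items() if r < threshold * best)

# Hypothetical screening-tool outcomes: selected / applicants per group.
rates = {"group_a": 30 / 50, "group_b": 18 / 40, "group_c": 12 / 30}
print(adverse_impact(rates))  # group_a's rate (0.60) sets the 0.48 cutoff
```

Passing this check does not establish fairness on its own, but running it (and logging the result) is one way to produce the risk-assessment records the checklist calls for.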

Nvidia commits $40B+ to AI investments, including $30B to OpenAI
AI News & Trends · 7h ago

Nvidia has committed over $40 billion to investments in artificial intelligence for 2026, including a $30 billion pledge to OpenAI that is linked to future purchases of computing power. The company may be investing in many parts of the AI supply chain, like data centers, optical fiber, and photonics, which could help increase demand for its own hardware. Some observers suggest that many of these deals are "circular," because money may end up returning to Nvidia through hardware sales. Supporters say this approach might help Nvidia secure important components and speed up new technology, while critics worry it could blur the lines between customers and partners. The exact amounts and full list of Nvidia's investments may not be fully disclosed yet and could change over time.

OpenAI unveils Enterprise Agent API, Microsoft integrates into Copilot 365
AI News & Trends · 9h ago

OpenAI announced new Enterprise Agent APIs that may let companies use AI agents for complex, multi-day tasks. Microsoft appears to be adding these capabilities to Copilot 365, allowing agents to run longer workflows with more security. At the same time, new rules from the EU are making it necessary for businesses to track and control how they use AI, with key deadlines in 2026. Experts suggest that companies might see less manual work and more automated processes, but warn there could be risks if controls are not in place. The pace of change means firms that act quickly and follow new rules may gain an advantage.

Google: Hackers Use AI Models to Find Security Flaw Before Mass Exploitation
AI News & Trends · 9h ago

Google revealed that hackers used AI models to find a new security flaw and planned a mass attack, but Google's team stopped them before the attack code was released. The specific AI model used was not named, and Google said its own Gemini model does not appear to be involved. Reports suggest that both attackers and defenders increasingly rely on AI, and the number of attacks may be rising because of automation. Some experts believe this could mark a change, as AI may help find new vulnerabilities faster than before. It remains uncertain if current AI safety measures are enough, so companies are using multiple layers of defense and are closely monitoring AI usage.

Forrester: B2B buyers now use AI over vendor sites for discovery
AI News & Trends · 9h ago

Recent studies suggest B2B buyers now use AI tools and platforms like ChatGPT and Reddit more than vendor websites to find information. Surveys indicate that AI chatbots may influence buyer shortlists more than company sites, which now appear to serve mainly as validators later in the process. Influence seems to come from conversational AI answers, review sites, community posts, and machine-readable listings, which brands may not directly control. Experts believe tracking website visits alone might not show true brand influence, as key decisions may happen off-site. Companies may need to check how AI tools describe them and strengthen their visibility in external channels.