AI News & Trends: Cloudflare cuts 20% of staff, pivots to 'AI-first' model
Cloudflare announced it will cut about 20% of its staff, or around 1,100 jobs, as it shifts to an 'AI-first' business model.
Institutional Intelligence & Tribal Knowledge: A report from FMI and NielsenIQ suggests that about 94% of U.S. grocery shoppers may buy groceries both online and in-store, meaning most sales come from these blended shoppers. Most people want the same prices and offers whether they shop online or in person, and mismatched prices or images might make them trust a store less. To meet these expectations, stores may need to keep product details and prices consistent across every channel and update them quickly. Simple technology steps and regular checks might help stores stay in sync and avoid confusing customers. Following these practices may help stores build trust and possibly increase shopper loyalty.
Business & Ethical AI: CFOs are adopting new rules to watch AI costs and keep productivity high. Surveys suggest that most finance leaders see AI as very important, but there is uncertainty about how to control spending and measure returns. Experts recommend clear policies, named owners for each AI system, and regular checks to avoid extra costs from unused models. Many companies now track AI spending closely and may use special profit and loss statements to understand the real costs. Ongoing monitoring and regular reviews appear to help avoid hidden expenses and keep AI projects useful.
Business & Ethical AI: Anthropic, Costco, and Novo Nordisk use formal rules to keep their company missions strong, even when short-term profits might compete with long-term goals. They use tools like special voting shares, benefit corporation charters, foundation ownership, strict rules for changing company purpose, and special board committees. Evidence suggests these methods may help protect mission, but some investors seem unsure, and certain stock indexes might exclude companies with unequal voting rights. Research also suggests that benefit corporation status alone may not fully protect a company's purpose. Experts say it is often easier and cheaper to adopt these rules early, before taking outside investment.
Business & Ethical AI: Eric Ries suggests that founders consider a simple two-page Delaware Public Benefit Corporation (PBC) filing to help startups lock in their mission early. This filing may require company leaders to balance profit, stakeholder impact, and a specific public benefit. Early adoption appears to help founders keep control over the mission, especially before bringing in outside investors. Some investors might worry that this structure could slow down quick exits, but Ries points to examples that may show long-term benefits. Following the PBC process may help startups clearly state their purpose and responsibilities from the start, while still needing good decision-making and oversight as they grow.

Anthropic appears to balance AI safety with fast product development through a unique governance approach and team structure. The company created a trust that may gradually control its board, aiming to prioritize long-term benefits for humanity over profits. This trust, run by independent experts, is supposed to shield Anthropic from investor pressure, though it has not fully filled all its board seats yet. The product team uses quick feedback and strong internal tools, which reportedly allows them to release new features much faster than competitors. However, some observers note the system's success might depend on the trust fully controlling the board before more advanced AI systems are released.

Mentions of AI by S&P 500 companies have reached record highs, but clear productivity gains have not yet appeared. Executives often express hope about doing more work with the same number of staff, but hard data is still limited, so predictions rely mostly on outside studies. Some reports suggest AI might affect millions of jobs, but opinions about whether jobs will be gained or lost are not settled. There is strong demand for AI skills and retraining, while routine jobs may shrink. Whether recent talk about AI will lead to real cost savings is still uncertain and will be clearer after more data comes in.

Microsoft's 2026 Work Trend Index introduces "Owned Intelligence," which means capturing company knowledge in systems that learn from every task. The report suggests that organizations using this approach may see faster productivity growth and higher revenue compared to those running small, separate AI projects. A five-step playbook is recommended for building Owned Intelligence, including digitizing documents, creating templates, and setting up feedback loops. Companies that measure and manage these systems well might have better financial returns. The Index also notes that when managers use these tools and encourage feedback, employees may value AI more and knowledge loss in teams could decrease.

Martin Fjeldbonde suggests that trustworthy AI is mostly about how much authority is given to AI systems, not just their software quality. He describes six main controls, or levers (Sensors, Memory, Effectors, Autonomy, Coordination, and Embedding), that can be adjusted to match an AI's abilities with the right level of oversight. The scope of these levers appears to affect the risk and influence of AI in real-world tasks. Some experts believe that using a shared language for these controls may help organizations audit and manage AI more safely. There may still be challenges as AI tools are adopted quietly in everyday work, raising new questions about control and security.
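As a rough illustration of the idea, the six levers could be recorded as an adjustable profile that an auditor scores and maps to an oversight level. This is a minimal sketch assuming a simple 0-3 scope scale and made-up oversight tiers; none of the field names' scoring or the thresholds come from Fjeldbonde's framework itself.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: each lever gets a scope score from 0 (none) to 3 (broad).
@dataclass
class LeverProfile:
    sensors: int = 0       # what the system can observe
    memory: int = 0        # what it retains across sessions
    effectors: int = 0     # what actions it can take in the world
    autonomy: int = 0      # how long it acts without human review
    coordination: int = 0  # how it interacts with other agents
    embedding: int = 0     # how deeply it sits inside everyday workflows

    def total_authority(self) -> int:
        # Crude aggregate: sums all six lever scores; higher totals
        # suggest the system holds more real-world authority.
        return sum(getattr(self, f.name) for f in fields(self))

    def oversight_tier(self) -> str:
        # Illustrative thresholds mapping authority to an audit cadence.
        score = self.total_authority()
        if score <= 4:
            return "spot checks"
        if score <= 10:
            return "regular audits"
        return "continuous monitoring"

# Example: a chat assistant with some memory and workflow embedding,
# but no real-world effectors.
assistant = LeverProfile(sensors=1, memory=2, effectors=0,
                         autonomy=1, coordination=0, embedding=2)
print(assistant.oversight_tier())  # -> regular audits
```

The point of a shared structure like this is that two teams can compare agents lever by lever instead of arguing about a single vague "risk level."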

Anthropic announced new features for its Managed Agents and a partnership with SpaceX's Colossus supercomputer. These upgrades may help agents run all the time without needing people to watch over them. The Managed Agents now include tools for working in teams, learning from past sessions, and measuring progress. Anthropic also gained access to over 220,000 GPUs from SpaceX to handle more traffic. This suggests Anthropic wants to make it easier for businesses to use and manage AI agents at a large scale.

Katie Parrott's essays suggest that choosing when to work closely with AI and when to let it work alone is important. She writes that some tasks, like bug triage, can be handed off to AI, but others, like writing emails or policies, may need humans to work together with the AI. Parrott says the real skill is learning to pick the right way to work - either side by side or by giving tasks away. Early research suggests that people working with AI may do better work, but many AI projects still fail because of problems with teamwork, not technology. She explains that knowing how to switch between modes might become a key skill for the future.

AlphaFold may reduce the time and cost needed to discover new drug targets, with some protein structures now appearing in seconds instead of months. Case studies suggest that AlphaFold models can speed up drug discovery in about one-third of projects, and target selection may become more effective using large-scale genetic data. Automation and single-cell assays might improve the accuracy and speed of early drug screening. Financial reports and partnerships appear to show growing industry trust in these new AI tools, though experimental checks and regulatory proof are still required. Experts suggest these computational methods could make research and development more efficient, but results can vary by case.

The Pasadena Chamber is adding more webinars, including sessions about a Japan trip, tariff refunds, and using AI in business. The AI webinar may help companies understand both the benefits and limits of automation, with topics such as chatbots and bookkeeping. There is also a special focus on small businesses, as many appear to be interested in using AI tools, though some workers reportedly worry about its impact on company reputation. The Chamber's schedule includes over 30 events in May, both online and in-person. Businesses might be able to get tariff refunds if they meet certain conditions set after a court ruling, but there could be waiting times and taxes on the refunds.

The US government has made agreements with Microsoft, Google, and xAI to test new AI models for national security risks before they are released to the public. This program appears to focus on finding possible problems like hacking, dangerous chemicals, or loss of human control, while companies can still make changes. The testing is voluntary, and the government is not forcing companies to join or change their products. These steps may suggest a shift toward checking AI tools before they cause real-world issues. Experts think this ongoing testing and feedback might help shape future rules, but right now it is described as a partnership, not a requirement.

Pinterest introduced a two-layer security model for its AI agents. The first layer uses OAuth-based tokens at the network edge to check basic permissions quickly, and the second layer checks deeper business logic inside each server, which may include human approval for risky actions. Every server is listed in a central company catalog and must pass a compliance check before going live. In some cases, special certificates may be used for less risky automated tasks, but stronger checks still use OAuth. This model appears to help Pinterest protect its systems without slowing down most requests and supports clear audit trails and human safeguards for sensitive operations.
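The two-layer flow described above can be sketched as a pair of checks: a fast OAuth scope check at the edge, then a deeper business-logic check inside the service. This is a minimal sketch under stated assumptions; the scope names, the risky-action list, and the human-approval flag are hypothetical and not Pinterest's actual implementation.

```python
# Hypothetical two-layer agent authorization flow. Assumes (for illustration)
# that OAuth scopes are named after actions and that risky actions are known
# in advance; Pinterest's real rules and names are not public here.

RISKY_ACTIONS = {"delete_board", "bulk_unpublish"}

def edge_check(token: dict, required_scope: str) -> bool:
    # Layer 1: fast, coarse check at the network edge using OAuth token
    # scopes, so most bad requests are rejected before reaching a server.
    return not token.get("expired", False) and required_scope in token.get("scopes", [])

def service_check(action: str, approved_by_human: bool) -> bool:
    # Layer 2: deeper business-logic check inside the server; risky
    # actions additionally require explicit human approval.
    if action in RISKY_ACTIONS:
        return approved_by_human
    return True

def authorize(token: dict, action: str, approved_by_human: bool = False) -> str:
    if not edge_check(token, required_scope=action):
        return "denied_at_edge"
    if not service_check(action, approved_by_human):
        return "pending_human_approval"
    return "allowed"

token = {"scopes": ["create_pin", "delete_board"], "expired": False}
print(authorize(token, "create_pin"))                           # -> allowed
print(authorize(token, "delete_board"))                         # -> pending_human_approval
print(authorize(token, "delete_board", approved_by_human=True)) # -> allowed
```

Splitting the checks this way keeps the expensive logic out of the hot path: routine requests are settled by the cheap edge check, and only the sensitive minority pay for the deeper review and audit trail.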