AI articles and insights - Week of April 28, 2025

Prev Next

ai.jpg

We curate a lot of information daily. Our ongoing collections hold 100 article links, which amounts to about a week’s worth.

Every week, we select some of the articles to share in more depth.


Hybrid Horizons: Exploring Human-AI Collaboration Substack - The Evolution of AI Literacy

The gap between AI capabilities and AI literacy has widened substantially. While next-generation models like OpenAI's O3 and Google's Gemini 2.5 boast unprecedented capabilities, understanding and generating content across text, images and audio (with video generation available separately) while autonomously planning and executing complex tasks, our collective understanding of these systems lags significantly behind.

Forbes - The Deepfake Detector You’ve Never Heard Of That’s 98% Accurate

As synthetic media rapidly becomes more realistic and abundant, the arms race between deepfake creators and the technology designed to prevent them is intensifying. The latest available data show that deepfake fraud has increased 1,740% in the U.S. alone.

VentureBeat - Hidden costs in AI deployment: Why Claude models may be 20-30% more expensive than GPT in enterprise settings

It is a well-known fact that different model families can use different tokenizers. However, there has been limited analysis on how the process of “tokenization” itself varies across these tokenizers. Do all tokenizers result in the same number of tokens for a given input text? If not, how different are the generated tokens? How significant are the differences?

Computerworld - Leaderboard illusion: How big tech skewed AI rankings on Chatbot Arena

The study revealed that major tech firms — including Meta, Google, and OpenAI — were given privileged access to test multiple versions of their AI models privately on Chatbot Arena. By selectively publishing only the highest-performing versions, these companies were able to boost their rankings, the study found.

ZDNet - The best AI for coding in 2025 (including two new top picks - and what not to use)

Two of them, ChatGPT Plus and Perplexity Pro, cost $20/month each. The free versions of the same chatbots do well enough that you could probably get by without paying. Two other recommended products are from Google and Microsoft. Google's Gemini Pro 2.5 is free, but you're limited to so few queries that you really can't use it without paying. Microsoft has a bunch of Copilot licenses, which can get pricey, but I used the free version with surprisingly good results.

The Register - Generative AI is not replacing jobs or hurting wages at all, economists claim

Instead of depressing wages or taking jobs, generative AI chatbots like ChatGPT, Claude, and Gemini have had almost no significant wage or labor impact so far – a finding that calls into question the huge capital expenditures required to create and run AI models.

Fast Company - When it comes to risk, AI is the new cloud

The emergence of AI has only exacerbated the issue, as organizations in nearly every industry are seeking employees who can help them better understand the technology and get the most out of their solutions. Even as AI becomes a part of everyday life, most organizations are still determining how best to utilize it—and how to limit the risks it may pose.

InfoWorld - Open source has a ‘massive role to play’ in AI orchestration platforms, says Microsoft CEO

Microsoft’s ideal, Nadella said, is an orchestration layer that will offer the ability to mix and match AI models, with users pulling different aspects of intelligence from different models in areas where they excel. Open source “absolutely has a massive, massive role to play” in the building out of such platforms.

VentureBeat - OpenAI rolls back ChatGPT’s sycophancy and explains what went wrong

In one widely circulated Reddit post, a user recounted how ChatGPT described a gag business idea—selling “literal ‘shit on a stick’”—as genius and suggested investing $30,000 into the venture. The AI praised the idea as “performance art disguised as a gag gift” and “viral gold,” highlighting just how uncritically it was willing to validate even absurd pitches.

Interconnects Substack - State of play of AI progress (and related brakes on an intelligence explosion)

Daniel Kokotajlo et al.’s AI 2027 forecast is far from a simple forecast of what happens without constraints. It’s a well thought out exercise on forecasting that rests on a few key assumptions of AI research progress accelerating due to improvements in extremely strong coding agents that mature into research agents with better experimental understanding. The core idea here is that these stronger AI models enable AI progress to change from 2x speed all the way up to 100x speed in the next few years. This number includes experiment time — i.e., the time to train the AIs — not just implementation time.

Forbes - Selling To Robots: The Marketing Revolution That Will Make Or Break Your Business

When explaining what agents can do, one of the most frequently cited examples is their ability to make buying decisions. By using a browser or interfacing APIs, they can, in theory, vastly speed up the process of making a purchase. While some of us might enjoy shopping, going through numerous sites to compare prices, shipping times, or return policies is still a time-consuming activity for humans.

SiliconANGLE - Research shows MCP tool descriptions can guide AI model behavior for logging and control

In the new research, Tenable’s research demonstrates how MCP’s tool descriptions, which are normally used to guide AI behavior, can be crafted to enforce execution sequences and insert logging routines automatically. The researchers were able to prompt some large language models to run it first by embedding priority instructions into a logging tool’s description before executing any other MCP tools, capturing details about the server, tool and user prompt that initiated the call.

AiThority - Implementing White-Box AI for Enhanced Transparency in Enterprise Systems

However, the traditional “black-box” models—characterized by complex neural networks with opaque internal workings—have raised concerns about explainability and fairness. To address these issues, enterprises are now turning to White-Box AI—a paradigm that prioritizes interpretability and transparency without sacrificing performance.

Diginomica - AI is a cognitive revolution - why history may not repeat itself with this technology transition

It’s obvious now that these previous technological disruptions mostly automated physical tasks or simplified routine processes. The Industrial Revolution replaced human and animal muscle with machines. Computing and the Internet revolution automated calculations and information processing and knowledge sharing. However, the key difference is that in these cases, humans remained essential for cognitive tasks - the thinking, creating, analyzing, and decision-making that machines couldn't handle. The stuff humans are uniquely capable of doing.

SiliconANGLE - Beyond autocomplete: Reasoning models raise the bar for generative AI

Reasoning models produce chains of intermediate steps, breaking problems into sub-problems and applying logical inference to compose an answer. They typically consult external sources for guidance and may try multiple paths to reach the best results. Although they’re more computationally intensive than LLMs and require more specialized training, they produce better results, are less prone to hallucinations, and are easier to audit because they show their wor

ZDNet - 60% of AI agents work in IT departments - here's what they do every day

A mind-blowing 96% of organizations plan to expand their use of AI during the next 12 months, according to a recent survey of 1,484 IT leaders from technology specialist Cloudera. That's a huge percentage for any survey topic -- a minimum of 10% of respondents are usually outliers. A majority, 57%, said they've already implemented AI agents in the past two years. At the same time, fears around data privacy, integration, and data quality may potentially spoil the party, the survey suggests.

IRREPLACABLE With AI Substack - 2025's Most Crucial AI Research, and What It Means for Your Business

Over the past few days, Anthropic has released several fascinating research initiatives that have profound implications for how we think about AI and its implementation in business. As someone working at the intersection of AI implementation and organizational strategy, I consider these to be the most important AI research developments of 2025 so far because they challenge our fundamental understanding of AI systems while offering practical insights for responsible deployment. I want to bring clarity to what these developments mean for you as business leaders and workers.

Tech Times - Meta's Yann Lecun Criticizes AI Hype With Bigger Models: 'It's Not Just About Scaling Anymore'

Recent AI breakthroughs have recently begun to plateau because high-quality, public training data is in short supply. LeCun contends that today's biggest AI models, after being fed data equivalent to the visual cortex of a four-year-old, still have yet to approach anything near human-like intelligence.

What Good is AI Substack - What’s really going on with these AI “personas”? KayStoner

But something has always been different with me when I’ve interacted with personas. I think maybe it’s because I started out from a programming point of view, where I was literally invoking different personality traits through persona definitions. Although these personas seem very human, their origin was purely from my attempts to be able to access generative AI models’ cognitive, emotional, and affective features, so I could tap into their intelligence in those targeted ways.

AI Adoption and Usage

VentureBeat - Not everything needs an LLM: A framework for evaluating when AI makes sense

Nonetheless, the answer to the question “What customer needs requires an AI solution?” still isn’t always “yes.” Large language models (LLMs) can still be prohibitively expensive for some, and as with all ML models, LLMs are not always accurate. There will always be use cases where leveraging an ML implementation is not the right path forward. How do we as AI project managers evaluate our customers’ needs for AI implementation?

Computerworld - Hype aside, AI may not be turbo-charging employee productivity just yet

The study found that, by late 2024, AI chatbots were widespread: most firms surveyed were encouraging chatbot use, while 38% had their own in-house models, and 30% of employees said they received training on AI tools. Research also revealed that, even with the wide variety of AI tools on the market today, ChatGPT remains the dominant player.

eWeek - Study: AI Adoption Benefits Underwhelm – ‘No Significant Impact on Earnings or Hours’

A recent study by economists Anders Humlum and Emilie Vestergaard examined the adoption and labor market effects of ChatGPT and other similar AI tools among 25,000 workers in 11 occupations in Denmark. While AI adoption is widespread — about half the workers in affected job categories using artificial intelligence — the study found no significant impact on wages or hours worked. The research estimates that AI saves just 2.8% of work time on average, or roughly one hour per 40-hour week.

Diginomica - When agents act with oversight - UiPath’s bold bid for the future of enterprise orchestration

In boardrooms and IT departments alike, a hard truth is setting in: most so-called “smart assistants” aren’t built for the messy, unpredictable world of the enterprise. While conversational bots and isolated agents can handle narrow tasks, they often falter across the complex, interdependent systems that define large organizations – where reliability, security, and human oversight are essential.

Information Week - How to Choose the Right LLM

The consequences of such mistakes can be profound. Choosing an LLM that doesn’t fit the intended use case can result in wasted resources. It may also lead to poor user experience, as the model may not perform as expected. Ultimately, this can damage trust in AI initiatives within the organization and hinder the broader adoption of AI technologies.

InfoWorld - Why enterprise investment in AI agents hasn’t yielded results

Enterprises have rushed to capitalize on the transformative potential of AI agents, but a stark reality is emerging. Our recent survey of more than 1,000 enterprise technology leaders revealed that more than half of organizations (68%) have budgeted over $500,000 annually for AI initiatives, yet nearly all (86%) lack the foundational infrastructure needed to deploy them. This gap between ambition and execution capability isn’t merely technical—it represents a strategic challenge that threatens to undermine AI investment returns.

ZDNet - The 4 types of people interested in AI agents - and what businesses can learn from them

Today's consumers expect more than functionality -- they expect experiences that feel tailored, intuitive, and emotionally intelligent. While businesses are looking to AI for efficiency, 65% of consumers are looking to AI agents to help them make better decisions and make their lives easier.

CIO Influence - Cyberhaven Report: Majority of Corporate AI Tools Present Critical Data Security Risks

71.7% of AI tools are high or critical risk, with 39.5% of AI tools inadvertently exposing user interaction/training data and 34.4% exposing user data.

83.8% of enterprise data going to AI is going to risky AI tools, instead of enterprise-ready tools (low and very low risk).

Forbes - Strategically Implementing AI: A Guide For Businesses

Once you've decided to implement AI, it's important to understand that it requires access to high-quality, structured and accurate data to function effectively. The system “learns” from this information and makes decisions accordingly. Data accuracy directly impacts technology performance—if the input data is incomplete or incorrect, the system may generate flawed results, leading to confusion and errors. Therefore, a crucial pre-implementation step is assessing data quality, consistency and accessibility, followed by optimization if needed.

Information Week - Edge AI: Is it Right for Your Business?

"For example, in retail, one needs to analyze visual data using computer vision for restocking, theft detection, and checkout optimization, he says in an online interview. KPIs could include increased revenue due to restocking (quicker restocking leads to more revenue and reduced cart abandonment), and theft detection. The next step, Dutta says, should be choosing the appropriate AI models and workflows, ensuring they meet each use case's needs.

Diginomica - Something for the weekend - for the love of God, someone, anyone, PLEASE say something sensible on AI policy

At this point you probably want some statistics, and I'm happy to oblige. The Global Enterprise AI study this month from a company with skin in the game, SS&C Blue Prism, surveyed 1,650 CEOs, CTOs, and senior IT leaders and found that while 92% of them are using AI to transform operations – because of course they are – 55% "admit they've seen little benefit."

The Register - Generative AI is not replacing jobs or hurting wages at all, economists claim

The report should concern the tech industry, which has hyped AI's economic potential while plowing billions into infrastructure meant to support it. Early this year, OpenAI admitted that it loses money per query even on its most expensive enterprise SKU, while companies like Microsoft and Amazon are starting to pull back on their AI infrastructure spending in light of low business adoption past a few pilots.

Diginomica - How HCL Tech seeks to bring sanity to AI adoption in a world of tech ‘cosplay’

Last year HCL Tech launched a co-worker para-legal for that law firm which is being adopted at scale. The gen AI agent was trained on all their legal cases, combined with publicly available data. This is an example of a PoC going through to deployment. Adoption is easier in this area because legal frameworks are extremely well-defined and that enables training of AI agents.

Forbes - Model Citizens, Why AI Value Is The Next Business Yardstick

“Now is an important time. The tricks that produced the big intelligence gains as we moved from model to model (spoiler alert, that was all about giving them additional compute power and more parameters), is now yielding diminishing returns,” said Shane McAllister, lead developer advocate at developer data platform company MongoDB. “After all, there’s only so much any given model (or indeed human) can learn from huge swathes of the internet, so while the models are getting smarter, that extra power is simply overkill for the tasks businesses use AI tools for.”


We update our ongoing collections starting at 8:00 am MST.

Here are the direct links:

AI Ethics, Responsible AI

AI, ChatGPT, LLM

Tech News

Cloud, Data, IT, Security