
Merchants, payment networks, and card issuers are rapidly preparing for an AI agent world, and it’s easy to see why. Bain predicts that by 2027, 40% of online transactions will involve AI agents, and Gartner estimates that by 2036, these agents will be fully autonomous and act on behalf of customers. Meanwhile, bots (agentic AI’s older, more established cousin) continue to wreak havoc by impersonating humans and automating attacks, resulting in an estimated $186 billion in annual losses.
To protect themselves and their customers, businesses must develop the capability to accurately distinguish between malicious and beneficial bots and agents. Otherwise, they risk losing out on critical revenue-generating traffic streams while incurring additional fraud losses as malicious actors cause more damage through automation.
In this article, we will dive into agentic commerce, its use cases, and its risks. Then, we’ll explain how to protect your website or app by accurately distinguishing between harmful and helpful bots and agents.
What is agentic commerce?
Agentic commerce refers to online shopping activities that are initiated, managed, or completed by AI agents on behalf of a human. Unlike today's AI chatbots, which assist customers with tasks such as finding or comparing products, AI agents can autonomously complete entire transactions through text, voice, or a combination of both (i.e., multimodal interactions).
What makes AI agents particularly powerful is their ability to remember and learn from past behaviors. (For example, the AI agent might know your favorite brand of orange juice from your past orders, so you don’t have to specify it next time.)
For customers, this personalized, frictionless approach promises a significantly improved shopping experience — so it’s not surprising that there’s already a surge in adoption of agentic AI features. Seventy percent of customers said they would use AI agents to purchase flights, and over 50% of shoppers have expressed interest in using AI agents to buy retail items, such as clothing, beauty products, and electronics.
Agentic commerce examples
By automating complex tasks, AI agents have the potential to reshape the online shopping experience. Here are some key use cases:
Complete shopping tasks on your behalf: The best-known use case for agentic AI in e-commerce is making purchases end-to-end with minimal user involvement. For example, a customer can instruct the AI agent, “I need a good cordless vacuum under $200,” and the agent will research options, compare features and reviews, select the best match, and complete the purchase. This capability extends beyond physical products — AI agents can also research and book experiences, such as travel or dinner reservations.
Automatically reorder when supplies run out: Through integration with IoT devices and smart home systems, AI agents can monitor household supplies and automatically reorder items before they run out. For instance, a smart printer can detect low ink levels and inform an AI agent to order new cartridges, without requiring user intervention.
Schedule and coordinate multi-item deliveries: AI agents excel at managing the logistics of receiving multiple purchases. For example, to help a user move into a new home, an AI agent could coordinate furniture deliveries by managing drop-offs for various vendors and avoiding scheduling conflicts.
Monitor prices and auto-purchase: An AI agent can track prices for an item and execute a purchase when the price is below a specific threshold. With customer-specified parameters, such as "book flights to Paris under $800 for direct flights in March," the agent will monitor airline pricing, factor in the constraints, and automatically complete the booking when the criteria are met.
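The price-monitoring use case above boils down to a simple poll-and-compare loop. As a minimal sketch (the price feed and purchase function below are hypothetical stand-ins for real airline or retail APIs, injected as callables so the logic is testable):

```python
def monitor(get_price, purchase, threshold, max_checks):
    """Poll prices and buy once the price drops below the threshold.

    get_price: callable returning the current price (stand-in for a pricing API)
    purchase:  callable executing the purchase (stand-in for a booking API)
    Returns the price paid, or None if the threshold was never met.
    """
    for _ in range(max_checks):
        price = get_price()
        if price < threshold:
            purchase(price)
            return price
    return None

# Simulated price feed standing in for a real airline API.
prices = iter([950.0, 870.0, 790.0])
bought = []
result = monitor(lambda: next(prices), bought.append,
                 threshold=800.0, max_checks=10)
print(result, bought)  # 790.0 [790.0]
```

A production agent would poll on a schedule and handle API failures, but the core decision (buy when the customer-specified constraint is met) is exactly this comparison.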
Predictions for the growth of agentic commerce
The growth potential for agentic commerce is significant. McKinsey estimates that AI-driven automation could add $2.6 trillion to $4.4 trillion annually to the global economy by 2030, with e-commerce expected to see some of the largest gains.
This opportunity has prompted major retailers to make significant investments in agentic capabilities. For instance, in June 2025, Walmart launched Sparky, an AI assistant that helps customers compare products and offers personalized recommendations. However, the ultimate vision is to turn Sparky into an AI agent that can book services and make purchases on behalf of customers. Amazon has gone a step further by upgrading its AI assistant, Alexa, to include agentic capabilities. Alexa+ can autonomously handle complex, multi-step tasks, such as finding service providers and arranging home repairs, without user intervention.
Additionally, tech platforms like Shopify and OpenAI are creating infrastructure to enable agentic capabilities for smaller merchants. In May 2025, Shopify launched Shopify Catalog, enabling select merchants to build and deploy AI shopping agents across various AI platforms (like Perplexity). Meanwhile, OpenAI is reportedly integrating payments within ChatGPT and partnering with e-commerce platforms like Shopify to enable customers (and their AI agents) to purchase products directly within ChatGPT.
What are the risks of AI agents?
While the rise of AI agents in agentic commerce is exciting, the reality is more complex. AI agents are still too insecure and unreliable to be fully autonomous today. Here’s an overview of some of the risks in fully rolling out AI agents:
Security and fraud risks: Agents are highly vulnerable to hijacking via prompt injection attacks and jailbreaking. As a result, fraudsters can utilize the AI agent’s powerful capabilities to automate, scale, and hide their fraudulent activities. For instance, fraudsters could jailbreak a legitimate AI agent to conduct card testing faster than merchants or payment networks can detect. Alternatively, they could utilize the AI agent’s advanced capabilities to create a synthetic identity, which can then be used in other fraud schemes, such as applying for personal loans with Buy Now, Pay Later (BNPL) providers with no intention of repaying them.
Hallucinations: Since they are based on LLMs, AI agents may sometimes generate inaccurate or irrelevant answers in response to a customer’s query, which can create additional operational challenges for merchants, card issuers, and payment networks. For example, an AI agent might confirm a hotel booking for a customer even when the property has no available rooms. When customers arrive to find their reservations invalid, merchants face dissatisfied customers and refund requests, while card issuers and payment networks deal with chargebacks and disputed transactions.
Increased risk of returns and chargebacks: The technology for LLM-based reasoning agents is still very new and not yet suited for more complex tasks. As a result, there’s a high likelihood that AI agents could misunderstand user intent and buy items that users don’t want (resulting in high rates of returns), or make transactions that users don’t recognize or remember (resulting in high rates of chargebacks).
However, the technology is rapidly evolving to meet these constraints. Major payment networks are introducing new standards for AI-driven payments, and AI platforms are developing new mechanisms to safeguard LLMs from being jailbroken.
How to protect your platform against malicious bots and agents
As merchants, payment networks, and card issuers prepare for an agentic future, they must also contend with current threats: bots comprise 50% of internet traffic today, with malicious bots generating 30% of all traffic. How can the e-commerce ecosystem prepare for a world where both traditional bots and AI agents operate at scale?
The key is recognizing that not all automated traffic is harmful. Good bots and agents serve essential functions by improving the customer experience (e.g., streamlining the purchase process) and supporting business growth (e.g., search engine indexing). However, malicious actors can use bots and AI agents to scale fraud operations by stealing customer data or making unauthorized purchases.
Successfully navigating this landscape requires developing capabilities that accurately distinguish between legitimate and harmful bots and agents. Here is an overview of the most common methods for making this distinction:
Enhanced bot detection: Bot detection actively determines whether a website's activity comes from human users or bots. Detection typically requires monitoring dozens of signals, such as browser details, mouse movements, scrolling behavior, HTTP headers, and request rates. Machine learning models often evaluate these signals and assign each website visitor a score indicating how likely they are to be a human or a bot. That score can then be used to monitor traffic, challenge suspected bots with human verification, and block malicious bots.
Allowlists and blocklists: If you know which good bots and agents you want to allow access to your website (e.g., search engine crawlers that index content), you can add their user agent header and IP address to an allowlist. A bot or agent outside the allowlist can be blocked or further challenged. Similarly, you can add known malicious bots and agents to a blocklist, preventing them from accessing your website. However, this approach has limitations: malicious actors can easily spoof a bot or AI agent’s user agent string to bypass blocks and appear as legitimate traffic, which undermines the approach’s effectiveness. Additionally, organizations must continually update these lists to stay current with new bots and AI agents.
Challenges and traps: For bots that are harder to identify, you can impose a challenge that only a human could complete (like a reCAPTCHA). However, many bots and AI agents can now solve reCAPTCHAs, often faster than humans. Another approach is to add honeypot traps: fake or decoy elements, such as hidden input fields, that are invisible to human users but detectable by bots. However, sophisticated bots and AI agents are increasingly programmed to recognize and avoid these traps, while legitimate users relying on accessibility tools might inadvertently trigger them, resulting in false positives.
Real-time signals: Since AI agents can operate at machine speed, detection systems must conduct real-time analysis of agent behavior to distinguish between beneficial and harmful agents. This requires examining transactional, device, and behavioral data across the user, agent, and merchant to detect when an AI agent might have become compromised. For example, if an AI agent starts to make purchases using expired credentials or attempts larger, unusual transactions, this could indicate a malicious takeover.
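The signal-based scoring described under "Enhanced bot detection" can be sketched as a weighted combination of observations. This is a deliberately simplified rule-based stand-in for the machine learning models mentioned above; the specific signals and weights are illustrative assumptions, not any vendor's actual model:

```python
from dataclasses import dataclass

@dataclass
class VisitorSignals:
    requests_per_minute: float   # request rate observed from this client
    mouse_events: int            # pointer activity observed on the page
    headless_browser: bool       # e.g., automation-framework fingerprints
    known_datacenter_ip: bool    # IP belongs to a hosting provider

def bot_score(s: VisitorSignals) -> int:
    """Return a score from 0 to 100; higher means more bot-like."""
    score = 0
    if s.requests_per_minute > 60:   # faster than typical human browsing
        score += 35
    if s.mouse_events == 0:          # no pointer activity at all
        score += 25
    if s.headless_browser:
        score += 25
    if s.known_datacenter_ip:
        score += 15
    return min(score, 100)

visitor = VisitorSignals(requests_per_minute=120, mouse_events=0,
                         headless_browser=True, known_datacenter_ip=True)
print(bot_score(visitor))  # 100 -> block or challenge
```

A real detection pipeline would feed far more signals into a trained model rather than hand-set thresholds, but the shape of the decision (many weak signals combined into one actionable score) is the same.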
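The allowlist/blocklist approach can be sketched as a check on the claimed user agent and source IP. The bot names and IP ranges below are illustrative placeholders; real allowlists should come from each crawler's published documentation, and (as noted above) the IP check matters because user agent strings are trivially spoofed:

```python
import ipaddress

# Illustrative entries only; use each crawler's officially published ranges.
ALLOWLIST = {
    "Googlebot": ["66.249.64.0/19"],
    "Bingbot": ["157.55.39.0/24"],
}
BLOCKLIST_UAS = {"BadScraper/1.0"}  # hypothetical known-bad user agent

def classify(user_agent: str, ip: str) -> str:
    if user_agent in BLOCKLIST_UAS:
        return "block"
    for bot, ranges in ALLOWLIST.items():
        if bot in user_agent:
            # Verify the source IP, since the user agent alone proves nothing.
            addr = ipaddress.ip_address(ip)
            if any(addr in ipaddress.ip_network(r) for r in ranges):
                return "allow"
            return "challenge"  # claims to be a known bot from the wrong network
    return "inspect"  # unknown traffic: fall through to other detection methods
```

The "claims to be a known bot from the wrong network" branch is the key defense against spoofing, and it is exactly the part that breaks down if the list is allowed to go stale.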
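The honeypot trap mentioned under "Challenges and traps" has a very small server-side footprint: a decoy form field, hidden with CSS, that humans never fill in but naive bots auto-complete. A minimal sketch (the field name is illustrative):

```python
def is_honeypot_triggered(form_data: dict) -> bool:
    """Return True if the decoy field was filled in, suggesting a bot.

    "website_url" is a hypothetical decoy field hidden from human users
    with CSS; any non-empty value is suspicious.
    """
    return bool(form_data.get("website_url", "").strip())

print(is_honeypot_triggered({"email": "a@b.com", "website_url": ""}))   # False
print(is_honeypot_triggered({"email": "a@b.com", "website_url": "x"}))  # True
```

As the article notes, this should never be the sole signal: sophisticated bots skip hidden fields, and autofill or accessibility tools can trigger false positives, so a triggered honeypot is better treated as one input to a score than as an automatic block.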
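Finally, the real-time compromise checks described above (expired credentials, unusually large transactions) can be sketched as simple per-transaction flags. The thresholds are illustrative assumptions; a production system would compare against richer behavioral baselines:

```python
from datetime import date

def flag_transaction(amount: float, avg_amount: float,
                     card_expiry: date, today: date) -> list:
    """Return flags suggesting an AI agent may have been compromised."""
    flags = []
    if card_expiry < today:
        # Agent attempting to pay with credentials that are no longer valid.
        flags.append("expired_credentials")
    if avg_amount > 0 and amount > 5 * avg_amount:
        # Transaction far above this user's historical average (5x is illustrative).
        flags.append("unusual_amount")
    return flags

print(flag_transaction(1200.0, 80.0, date(2024, 1, 31), date(2025, 6, 1)))
# ['expired_credentials', 'unusual_amount']
```

Because agents transact at machine speed, checks like these have to run inline with the transaction rather than in a nightly batch; a flagged transaction can then be held for step-up verification instead of being silently approved.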
Fingerprint’s approach to detecting bots and agents
As the agentic commerce landscape evolves rapidly, customers, merchants, card issuers, and payment networks need accurate, real-time solutions to distinguish legitimate bots and agents from harmful ones.
Fingerprint’s device intelligence platform addresses this challenge by combining multiple signals into real-time datapoints to accurately identify and classify automated traffic. Our Bot Detection Smart Signal, Virtual Machine Detection Smart Signal, and Residential Proxy Detection Smart Signal work together to help you accurately differentiate between legitimate and malicious bots and agents — enabling you to block harmful automated traffic without disrupting legitimate traffic. Check out this blog post for more information on how these signals work.
Ready to see Fingerprint in action? Sign up for a free trial or reach out to our team for a personalized demo.
FAQ
What is agentic commerce?
Agentic commerce refers to online shopping activities that are initiated, managed, or completed by AI agents on behalf of a human customer.
How quickly is agentic commerce expected to grow?
While still nascent, agentic commerce is expected to gain traction quickly. Bain predicts that by 2027, 40% of online transactions will involve AI agents, and Gartner estimates that by 2036, these agents will be fully autonomous and act on behalf of customers. Additionally, retailers such as Amazon and Walmart have already begun rolling out AI agents that shop on customers’ behalf, while tech platforms like Shopify are introducing AI agents for smaller merchants.
Why are AI agents difficult to detect?
Unlike traditional bots, which often exhibit obvious automated patterns, sophisticated AI agents can replicate genuine human shopping behaviors, including browsing patterns, purchase timing, and decision-making processes. This makes them difficult for detection systems to spot. Additionally, AI agents execute at near-instant speed, whereas current fraud detection systems were designed to analyze transactions at human speed. As a result, these systems can struggle to keep pace with the velocity of AI-driven fraud.