AI Trading Agents

What if AI agents had to argue with each other before making a trade?

2024–present · Active Research

Overview

Most trading algorithms trust a single brain. One model, one opinion, one decision. That's how humans blow up accounts too — unchallenged conviction. I wanted to know: what happens when you force AI agents to disagree before they're allowed to act?

The Challenge

The market doesn't care about your model's confidence score. Every quantitative fund that's ever collapsed shared the same flaw — no structured dissent. The question wasn't whether AI could trade. It was whether AI could doubt.

The Approach

I built a system where multiple AI agents with fundamentally different worldviews must debate before any capital moves. They argue, cross-examine, and poke holes in each other's reasoning. A gating mechanism measures the quality of their disagreement: not just whether they agree, but whether they've stress-tested the thesis. The details of how that works are what make it interesting.
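To make the idea concrete, here is a minimal sketch of what a debate-quality gate could look like. Everything below is illustrative: the names (`Thesis`, `disagreement_quality`, `should_execute`), the self-reported `objections_survived` counter, and the scoring heuristic are assumptions for this sketch, not the actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Thesis:
    agent: str                # e.g. "momentum", "value", "macro"
    direction: int            # +1 long, -1 short, 0 flat
    confidence: float         # 0..1, self-reported
    objections_survived: int  # counter-arguments addressed during debate

def disagreement_quality(theses: list[Thesis]) -> float:
    """Score how thoroughly a thesis was stress-tested.

    Unchallenged agreement (everyone agrees, nobody objected) scores zero;
    agreement reached after surviving objections scores high.
    """
    if len(theses) < 2:
        return 0.0  # no debate is possible with a single brain
    directions = {t.direction for t in theses}
    consensus = 1.0 if len(directions) == 1 else 0.0
    # Reward debate depth, but cap the bonus so objection-count
    # inflation can't buy its way past the gate.
    total_objections = sum(t.objections_survived for t in theses)
    debate_depth = min(total_objections / len(theses), 3.0) / 3.0
    return consensus * debate_depth

def should_execute(theses: list[Thesis], gate: float = 0.5) -> bool:
    """Capital moves only when consensus is earned through debate."""
    return disagreement_quality(theses) >= gate

# Unchallenged agreement: blocked.
lazy = [Thesis("momentum", +1, 0.9, 0), Thesis("value", +1, 0.8, 0)]
print(should_execute(lazy))    # False

# Agreement after real cross-examination: allowed.
earned = [Thesis("momentum", +1, 0.7, 3), Thesis("value", +1, 0.6, 2)]
print(should_execute(earned))  # True
```

The design choice the sketch illustrates is the one described above: agreement alone is worthless to the gate; it only counts multiplied by how hard the agents tried to break the thesis.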

Impact

Backtested across a full year of real market data. The system caught what single-model approaches missed — the trades that feel right but aren't. Research accepted at an international conference. The thesis continues.

Highlights

Agents that are designed to disagree
Execution only through earned consensus
Caught trades that single models missed
Conference paper accepted (2026)
Active PhD research

Tech Stack

Python · LLM APIs · Custom Protocol · Backtesting Framework
