AI Trading: Legitimacy & Legality

Artificial intelligence trading is a legitimate practice investors have employed since the 1990s. Originally known simply as algorithmic trading, these programs are scaled-up versions of a basic technique: correlation. That core simplicity makes AI trading difficult to define for legal purposes, much like calling marketing a mass form of sales. Where do we draw the line?
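
To see just how simple the core technique is, here is a minimal, hypothetical sketch of a correlation check of the kind early trading algorithms scaled up. Everything in it (the toy price series, the lookback window, the function name) is illustrative, not a real trading system:

```python
# Minimal sketch of the "correlation" idea behind early trading algorithms.
# Hypothetical example: measure how asset A's recent momentum has correlated
# with asset B's next-day returns. A real system would only trade the
# relationship if it proved strong and stable out of sample.
import numpy as np
import pandas as pd

def correlation_signal(prices_a: pd.Series, prices_b: pd.Series, lookback: int = 10) -> float:
    """Correlation between A's trailing momentum and B's next-day return."""
    momentum_a = prices_a.pct_change(lookback)      # trailing 10-day % change in A
    next_day_b = prices_b.pct_change().shift(-1)    # B's return on the following day
    aligned = pd.concat([momentum_a, next_day_b], axis=1).dropna()
    return aligned.corr().iloc[0, 1]

# Toy data: two random-walk price series standing in for real market data.
rng = np.random.default_rng(42)
a = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))
b = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

print(f"Observed correlation: {correlation_signal(a, b):.3f}")  # near zero for random data
```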

Data-driven AI trading is legal in the United States and Europe, except when it relies on alternative data sources, such as social media or news, in ways that violate consumer data protection laws like California’s CCPA/CPRA, the Colorado Privacy Act, the Virginia Consumer Data Protection Act, and the European Union’s GDPR.1

The financial regulatory environment generally lags behind new technology. We can expect further developments, notably the EU’s AI Act. In the US, the focus so far has been on fairness and equity with regard to creditworthiness assessments.

Why “Hacking the Market” is a Non-Issue

Many assume AI trading must be illegal because it “hacks” the market, providing an unfair advantage to tech geniuses who get rich at the expense of others. The reality is trading algorithms aren’t that sophisticated.

The financial markets are closed systems with highly unpredictable outcomes. AI can improve performance, but only to a minimal degree. Three AI-enhanced exchange-traded funds (ETFs) have been on the market since as early as 2017, and their performance versus the S&P 500 is a meager +5.91% over five years.

Moreover, retail investors like you and me can legally use AI software and bots to build personal portfolios.

In other words, the legal questions around AI trading have less to do with its use as a competitive edge than with corporate accountability, consumer data, and potential market manipulation.

Concern #1: Fiduciary Duty of Care and Loyalty

Financial advisors, asset managers, and corporate boards have a fiduciary responsibility to put their clients’ interests above their own. They must care for their clients and remain loyal to them. That may sound wishy-washy, but these terms are well tried and well understood.

Today, traders use AI as one of many tools, but it’s not unreasonable to assume it could evolve from tool to independent portfolio manager. Delegation occurs slowly, then all at once. The logic goes as follows.

Care and loyalty are human qualities that require perception and judgment, and AI has neither. An independent AI trader therefore cannot operate in accordance with the law.

A more nuanced concern is oversight. If a program performs well, traders may only check its decisional logic once a month. Left unchecked, that logic could compound into a rapid string of poor decisions before anyone catches it. Simpler still, a bug could creep in and cause bad trades.

Oversight is ostensibly part of a trader’s fiduciary duty of care and loyalty, and lawmakers will likely consider this.
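
In practice, part of that oversight can itself be automated. The sketch below is hypothetical (the function names, tolerance, and portfolio values are made up), but it shows the kind of daily drawdown check that could halt an algorithm long before a monthly review:

```python
# Hypothetical oversight guard: halt an algorithm when its running drawdown
# exceeds a tolerance, rather than waiting for a monthly manual review.
# Names (max_drawdown, run_daily_check) are illustrative, not a real API.

def max_drawdown(equity_curve: list[float]) -> float:
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

def run_daily_check(equity_curve: list[float], tolerance: float = 0.10) -> bool:
    """Return True if trading may continue, False if the bot should be halted."""
    return max_drawdown(equity_curve) <= tolerance

# Example: portfolio values over recent days (toy numbers, roughly a 14% drawdown).
equity = [1_000_000, 1_020_000, 990_000, 930_000, 880_000]
if not run_daily_check(equity):
    # In a real system: cancel open orders and notify the responsible trader.
    print("Drawdown limit breached -- halting algorithm pending human review.")
```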

Concern #2: Personal Privacy & Alternative Data

Financial data is publicly available and provides a level playing field for investors. However, the rise of so-called “alternative” data raises questions of data privacy.

Alternative data consists of non-financial sources such as satellite imagery, vocal and facial analysis of CEOs in shareholder meetings, social media posts, news publications, and more.

Backed by a financial incentive, investors may source data from individuals without their consent or without a reasonable explanation of its use. States like California, Colorado, and Virginia have already implemented data protection laws, and more may need to follow to protect consumer privacy as AI evolves.

Concern #3: Susceptibility to Manipulation

AI uses data inputs to determine buy/sell opportunities. Those inputs come from multiple sources, many of which communicate in real time.

Imagine a scenario in which a savvy hacker sends false signals to an AI trader and causes it to sell in large volumes. That transaction directly impacts supply in the market and, if large enough, could trigger panic selling that leads to a disaster scenario.

This is unlikely: programs have safeguards built in, and no institutional investor would give its AI the opportunity to panic-sell. It simply points to the reality that no machine is perfect. If Microsoft and Facebook can be hacked, why not AI traders?
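
Basic safeguards of that kind are straightforward to build. As a hypothetical illustration (the thresholds and function names are invented), a bot can cross-check each incoming data point against its other sources and cap order size so that no single spoofed signal translates into a panic-sized sell:

```python
# Hypothetical safeguards against manipulated inputs: reject any data point
# that deviates wildly from the recent consensus, and hard-cap order size so
# one bad signal can never trigger a mass sell-off.
from statistics import median

def sanitize_signal(new_value: float, recent_values: list[float], max_jump: float = 0.05) -> float | None:
    """Drop a data point that moves more than max_jump (5%) from the recent median."""
    baseline = median(recent_values)
    if abs(new_value - baseline) / baseline > max_jump:
        return None  # treat as suspect; fall back to other sources or pause trading
    return new_value

def cap_order(shares_requested: int, daily_limit: int = 10_000) -> int:
    """Hard ceiling on order size, regardless of what the model asks for."""
    return min(shares_requested, daily_limit)

# A spoofed feed claims the price just collapsed 40% in one tick:
print(sanitize_signal(60.0, [99.5, 100.2, 100.0, 99.8]))  # None -> ignored
print(cap_order(2_000_000))                               # 10000 -> order size capped
```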

Preventative Regulations – Europe’s AI Act

The Anglo-Saxon approach to regulation is “laissez-faire, then control.” Europe works the other way around and is already drafting a regulation to cover AI products on the market, including those used for trading. It’s called the “AI Act.”

The spirit of the AI Act is to narrowly define AI as software with an “element of autonomy” and to require that its developers meet conformity standards or else pay a penalty of up to €30 million or up to 6% of global annual turnover.

Interestingly, the proposal provides a foolproof way of avoiding penalties through the use of so-called “regulatory sandboxes.” Products developed in these spheres would be exempt from all punishment, even if their AI somehow brought down the financial system.

Conclusion: Legit & Legal

Not only is AI trading legal, it’s growing as a legitimate tool for everyday investors. It’s particularly popular for cryptocurrencies, forex, and commodities because these assets trade primarily on trend-based analysis rather than underlying fundamental value. Robots are much faster at calculating and executing than humans. Why not use one?
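
For the curious, the “trend-based analysis” these bots run is often no more exotic than a moving-average crossover. The sketch below is a toy illustration with made-up prices, not a recommendation or a real bot:

```python
# Illustrative trend-following rule of the kind retail bots apply to crypto,
# forex, or commodities: hold the asset while the short-term moving average
# sits above the long-term one. A sketch only -- real bots must also handle
# fees, slippage, and risk limits.
import pandas as pd

def moving_average_signal(prices: pd.Series, short: int = 20, long: int = 50) -> pd.Series:
    """Return 1 (hold the asset) or 0 (stay in cash) for each period."""
    short_ma = prices.rolling(short).mean()
    long_ma = prices.rolling(long).mean()
    return (short_ma > long_ma).astype(int)

# Toy usage with made-up prices; in practice you would load exchange data here.
prices = pd.Series(range(100, 200), dtype=float)  # a steadily rising "market"
signal = moving_average_signal(prices)
print(signal.tail())  # 1s once both averages exist: the rule stays long the trend
```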

  1. GLI

About the Author

Noah

Noah is the founder & Editor-in-Chief at AnalystAnswers. He is a transatlantic professional and entrepreneur with 5+ years of corporate finance and data analytics experience, as well as 3+ years in consumer financial products and business software. He started AnalystAnswers to provide aspiring professionals with accessible explanations of otherwise dense finance and data concepts. Noah believes everyone can benefit from an analytical mindset in a growing digital world. When he's not busy at work, Noah likes to explore new European cities, exercise, and spend time with friends and family.
