Do-it-yourself trading robot: as usual, a little theory

Recently an acquaintance of mine, an amateur crypto trader, asked me to "write a trading robot." Terminology matters here. In casual conversation everything gets called a robot, but in practice what is often needed is a trading advisor.

An advisor analyzes the market and gives recommendations: where to enter, where to place a stop, and where it is better to do nothing at all.

A robot takes the next step for you and opens/closes positions itself.

And that's why "let's get a robot right away" is usually a bad idea. Until the algorithm is debugged, the risk of losing money is at its highest: a flaw in the analysis, a bug in the logic, a mishandled candle or order - and the robot will calmly do exactly what you would never do by hand.

Therefore, a reasonable strategy is almost always the same: first an advisor (analysis + signals), then automation of execution, and only once you have statistics and risk control in place.

What the article is about: architecture of a trading advisor

This article contains no "million-dollar signals" and no promises of profitability - only theory: what a trading advisor is made of, what its architecture looks like, and why clear boundaries of responsibility matter.

We don't go deep into the analysis algorithms - the purpose of the article is different: to show what components an advisor usually has and how they fit together.

Today, ready-made LLM services are often chosen for the "brain" that interprets data and formulates conclusions: ChatGPT, Gemini, Grok. If desired, they can be fine-tuned or replaced with your own model, but for the architecture this is secondary.

The nuance remains the same: the neural network does not read your thoughts. It needs a prompt - clear, formalized and repeatable.

A prompt is a text instruction (plus context) that you pass to the model: what data it is looking at, what exactly it must do, and in what format to return the response.

And we also need data with which we will work.

The main question here is what exactly to put in the prompt (candles, indicators, order book, trades, risk context) and in what form.

Data and API

We'll get to the prompt later; for now, the data.

The entry level looks simple:

  1. Fetch quote data from the exchange.
  2. Assemble it with the request context.
  3. Pass this to the neural network in a prompt - via an API as well.
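The three steps above can be sketched as a single pipeline. Everything here is a placeholder: `get_candles`, `build_prompt` and `ask_llm` stand in for the exchange module, the prompt builder and the LLM client, and do not refer to any real library.

```python
def get_candles(symbol):
    # Stub: in a real advisor this would call the exchange API.
    return [[1700000000, 67900.0, 68100.0, 67800.0, 68050.0]]

def build_prompt(candles):
    # Stub: in a real advisor this would add indicators and risk context.
    return f"Analyze these candles and suggest entry/stop: {candles}"

def ask_llm(prompt):
    # Stub: in a real advisor this would call an LLM provider's API.
    return "no-trade"

def analyze(symbol: str) -> str:
    candles = get_candles(symbol)    # 1. fetch quote data
    prompt = build_prompt(candles)   # 2. assemble with request context
    return ask_llm(prompt)           # 3. pass to the model via its API
```

The value of this shape is that each step can be swapped out independently later - which is exactly where the plugin approach discussed below comes from.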

Almost all exchanges provide programmatic access to quotes. And yes - also via an API.

Let's digress for a moment and clarify what we mean by an API, since the word has already come up several times and will keep appearing.

API (Application Programming Interface) is an interface for interaction between programs.

In practice, these are the “rules of the game”:

  • what requests can be made (for example: get candles, get order book);
  • in what format to send the data (often JSON);
  • what the response looks like (field structure, errors, limits, authorization).

Simply put: the API allows you to use the capabilities of an exchange (or neural network) as a set of functions, without getting into the implementation.
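To make the "rules of the game" concrete, here is a sketch of parsing a candle (kline) response. The array layout follows a common exchange convention (timestamp, then OHLCV as strings); check your exchange's documentation for the exact field order and types - they vary.

```python
import json

# A sample response body, shaped like a typical exchange klines payload.
sample_response = json.dumps([
    [1700000000000, "67900.0", "68100.0", "67800.0", "68050.0", "12.4"],
])

def parse_klines(raw: str) -> list:
    """Turn the exchange's positional arrays into named fields."""
    candles = []
    for row in json.loads(raw):
        candles.append({
            "open_time": row[0],
            "open": float(row[1]),
            "high": float(row[2]),
            "low": float(row[3]),
            "close": float(row[4]),
            "volume": float(row[5]),
        })
    return candles

candles = parse_klines(sample_response)
```

Note the `float(...)` conversions: many exchanges return prices as strings precisely to avoid floating-point surprises, and the "what the response looks like" part of the API contract is where such details live.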

Custom layer: how to show the result

So, at this stage we already have two basic components:

  • data (taken from the exchange);
  • an analyst (the neural network that we feed data and that returns a conclusion/signal).

But an application that fetches and analyzes data by itself is useless until a human sees the result.

So we need a third component - an interface.

The simplest and most universal option is the web interface: chart, latest signals, explanation of “why this is so,” risk settings.

As an option - messenger notifications (for example, Telegram): a short signal plus a link to details on the web.

Context over price: news

For the basic version this is enough, but if you want to go deeper, you can add another layer of context: news.

The idea is simple: we collect relevant news articles and add them to the prompt. Usually not as raw text, but as short summaries (so as not to clog the context window and burn through tokens).

This is not a mandatory part of the architecture, but in some modes it significantly improves the quality of advice: the model begins to take into account the reasons behind a move, not just the shape of the chart.
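Folding news into the prompt then looks like one more field in the payload. A minimal sketch - the field names and the instruction wording are illustrative, not a recommendation:

```python
import json

def build_prompt(candles: list, news_summaries: list) -> str:
    """Assemble a formalized, repeatable prompt: market data + news."""
    payload = {
        "candles": candles,          # recent OHLCV rows
        "news": news_summaries,      # short summaries, not raw articles
    }
    instruction = (
        "You are a trading advisor. Using the market data and the news "
        "summaries below, return JSON with fields: direction "
        "(long/short/none), entry, stop and a one-sentence rationale."
    )
    return instruction + "\n\n" + json.dumps(payload, indent=2)

prompt = build_prompt(
    candles=[[1700000000, 67900.0, 68100.0, 67800.0, 68050.0]],
    news_summaries=["ETF inflows accelerated this week."],
)
```

Asking the model for a fixed JSON response format is what makes the output machine-checkable - a prerequisite for the automation step mentioned at the start.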

How to tie everything together

Okay, the modules are clear. The question now is practical: how to assemble this into a single application.

In practice, you almost always end up with two modes.

On-demand analysis

The user presses a button (or hits the endpoint) → we pull the latest quotes → assemble a prompt → get the model's response → show a signal.

  • Pros: economical in terms of tokens/resources, easier to debug.
  • Disadvantages: you are guaranteed to miss some events between requests - and the entry points along with them.

Regular analysis on a schedule

The user sets the frequency (for example, once every 5 minutes/hour/day) → the system runs the pipeline itself and pushes the result.

  • Pros: signals arrive regularly, you can keep a finger on the market's pulse, and it is easier to collect statistics.
  • Disadvantages: more expensive (tokens/infrastructure); you need limits and protection against self-inflicted overload (rate limits, retries, signal deduplication), plus monitoring and logging - otherwise you won't understand why everything suddenly went silent at 03:17.
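The protections just listed can be sketched in one loop: a retry limit with backoff and deduplication of identical signals. `run_analysis` and `notify` are placeholders for the pipeline and the delivery channel; the `cycles` parameter exists only so the sketch terminates.

```python
import time

def run_scheduled(run_analysis, notify,
                  interval_s=300, max_retries=3, cycles=1):
    """Scheduled mode: retry on errors, suppress repeated signals."""
    last_signal = None
    for _ in range(cycles):
        for attempt in range(max_retries):
            try:
                signal = run_analysis()
                break
            except Exception:
                time.sleep(2 ** attempt)   # exponential backoff on errors
        else:
            signal = None                  # give up after max_retries
        if signal is not None and signal != last_signal:
            notify(signal)                 # deduplicate repeated signals
            last_signal = signal
        # time.sleep(interval_s)  # in production: wait for the next tick
```

In a real service you would also log every cycle (success, retry, skip) - that log is what answers the "why did it go silent at 03:17" question.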

In any of the modes, the set of basic components is the same:

  • web server (interface and API for the user);
  • exchange data module (quotes, candles, if necessary - order book/transactions);
  • LLM module (prompt assembly → model call → response parsing);
  • storage (user settings, signal history, quote cache, analysis results).

And there needs to be one "glue" that ties it all together: an application-level entity - call it core/service - that manages dependencies, lifecycles and scenarios (on-demand or scheduled).

For the future, it is almost always beneficial to take a plugin approach:

  • plugins for quote sources (add a new exchange without rewriting the rest);
  • plugins for LLM providers (a new model appears or pricing changes - you swap the adapter, not the architecture).
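The plugin approach boils down to small interfaces that the core depends on, with each exchange or model hidden behind an adapter. A sketch with illustrative names; the fake implementations stand in for real adapters:

```python
from abc import ABC, abstractmethod

class QuoteSource(ABC):
    """Plugin interface for quote sources (exchanges)."""
    @abstractmethod
    def get_candles(self, symbol: str, limit: int) -> list: ...

class LLMProvider(ABC):
    """Plugin interface for LLM providers."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class FakeExchange(QuoteSource):
    def get_candles(self, symbol, limit):
        return [[1700000000, 67900.0, 68100.0, 67800.0, 68050.0]] * limit

class EchoLLM(LLMProvider):
    def complete(self, prompt):
        return "none"

class AdvisorCore:
    """The 'glue': works with any source and provider via the interfaces."""
    def __init__(self, source: QuoteSource, llm: LLMProvider):
        self.source = source
        self.llm = llm

    def advise(self, symbol: str) -> str:
        candles = self.source.get_candles(symbol, limit=10)
        return self.llm.complete(f"Candles: {candles}")

core = AdvisorCore(FakeExchange(), EchoLLM())
```

Adding a real exchange or a new LLM then means writing one class against the corresponding interface - `AdvisorCore` does not change.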

Note: an architecture diagram is planned here - it is intentionally omitted from this version.

Then you can get down to implementation: what interfaces the modules have, what data structures they use, and where the boundary between an "advisor" and a "trading robot" lies.

According to itprolab.dev
