Tech corner - 30 March 2026

AI assistant architecture for energy market research


In 2024, we built DIMI — an AI research assistant for Energy Aspects, a leading global provider of real-time energy market intelligence. DIMI helps analysts consume data that was previously scattered across multiple applications: research publications, time-series datasets, and API documentation. Instead of searching three different systems, analysts ask DIMI a question and get a precise, source-backed answer.

This post is a technical breakdown of how we designed the system. If you’re building an AI assistant for a domain-specific enterprise use case, the patterns here are directly applicable.

The core problem: three data types, one interface

Energy Aspects’ analysts work with fundamentally different data types every day: narrative research (long-form publications with analysis and forecasts), structured time-series data (commodity prices, supply/demand figures), and technical API documentation. Each type requires a different retrieval strategy, a different response format, and different quality criteria.

A generic RAG pipeline would collapse these into a single vector store and hope for the best. We took a different approach.

Architecture decision 1: Content-type-specific agentic search

Rather than building one search module, we built a modular agentic search system with specialized handlers for each content type. Each handler understands the specific characteristics of its data:

  1. Research publications require semantic search across long documents, with date-aware relevance scoring (recent analysis matters more) and commodity-specific filtering.
  2. Time-series data requires structured query translation — converting natural language questions like “What was Brent crude in Q3 2024?” into database queries against specific datasets.
  3. API documentation requires code-aware retrieval that understands endpoints, parameters, and response schemas.

Each handler was evaluated independently using an extensive evaluation set compiled in collaboration with Energy Aspects’ domain experts.
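To make the handler-per-content-type pattern concrete, here is a minimal, runnable sketch of the time-series case: translating a natural-language question into a structured query. All names (`StructuredQuery`, `TimeSeriesHandler`, the commodity map) are illustrative, not DIMI's actual implementation, and the quarter parsing is a toy stand-in for what would be an LLM or date parser in production.

```python
from dataclasses import dataclass

@dataclass
class StructuredQuery:
    """Illustrative target of NL -> query translation."""
    dataset: str
    commodity: str
    start: str  # ISO date
    end: str    # ISO date

class TimeSeriesHandler:
    """Translates a natural-language question into a structured query."""

    # Toy commodity lexicon; a real handler would cover the full catalogue.
    COMMODITIES = {"brent": "brent_crude", "wti": "wti_crude"}

    def translate(self, question: str) -> StructuredQuery:
        q = question.lower()
        commodity = next(
            (v for k, v in self.COMMODITIES.items() if k in q), "unknown"
        )
        # Toy quarter parsing for the single example; real systems would
        # resolve arbitrary date expressions.
        if "q3 2024" in q:
            start, end = "2024-07-01", "2024-09-30"
        else:
            start = end = "unknown"
        return StructuredQuery("prices", commodity, start, end)

handler = TimeSeriesHandler()
query = handler.translate("What was Brent crude in Q3 2024?")
print(query)
```

The point of the pattern is that this translation logic lives entirely inside the time-series handler; the research and documentation handlers expose the same search interface but implement retrieval completely differently.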

Architecture decision 2: Smart routing

Before any retrieval happens, every user query passes through a routing module that classifies it and directs it to the appropriate search agent. The routing module was developed and refined based on real user queries — not synthetic test data.

Getting routing right is critical in any multi-agent system. A misrouted query doesn’t just produce a bad answer; it produces a confidently wrong answer in the wrong format, which is worse than no answer at all. We compiled a separate evaluation set specifically for routing accuracy, validated with domain expert feedback.
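The shape of the routing component can be sketched as follows. In production the classifier would be an LLM; a keyword heuristic stands in here so the pattern is runnable. The route labels, eval cases, and keywords are all illustrative assumptions, not DIMI's actual taxonomy. The key idea is the second half: routing gets its own labelled evaluation set, scored as plain accuracy, separate from any downstream agent.

```python
ROUTES = ("research", "timeseries", "api_docs")

def route(query: str) -> str:
    """Classify a query into one of the content-type routes (toy heuristic)."""
    q = query.lower()
    if any(w in q for w in ("endpoint", "api", "schema", "parameter")):
        return "api_docs"
    if any(w in q for w in ("price", "q1", "q2", "q3", "q4", "volume")):
        return "timeseries"
    return "research"  # default: narrative research

# A small, labelled routing eval set; real ones are compiled from user queries.
eval_set = [
    ("What was Brent crude in Q3 2024?", "timeseries"),
    ("Which endpoint returns supply/demand balances?", "api_docs"),
    ("Summarise the latest European gas outlook.", "research"),
]
accuracy = sum(route(q) == label for q, label in eval_set) / len(eval_set)
print(f"routing accuracy: {accuracy:.0%}")
```

Keeping the router's evaluation separate means a regression in classification shows up as a routing-accuracy drop, not as a mysterious quality dip in one of the downstream agents.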

Architecture decision 3: Evaluation-driven quality

Every component in the system — search, routing, response generation — has its own evaluation framework. We do not ship features that pass subjective “looks good” reviews. Quality is measured against curated evaluation sets that cover both common queries and edge cases.
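A per-component evaluation harness can be as simple as the sketch below: run a component over curated cases and aggregate a score. The harness shape is the point; the example component (a trivial text normaliser) and the cases are invented for illustration and bear no relation to the real evaluation sets.

```python
def evaluate(component, cases, score):
    """Run a component over curated cases; return a mean score in [0, 1]."""
    results = [score(component(c["input"]), c["expected"]) for c in cases]
    return sum(results) / len(results)

# Example component: a trivial whitespace/case normaliser.
def normalise(text: str) -> str:
    return " ".join(text.split()).lower()

cases = [
    {"input": "  Brent  CRUDE ", "expected": "brent crude"},
    {"input": "Henry\tHub", "expected": "henry hub"},
]

exact_match = lambda got, want: float(got == want)
print(evaluate(normalise, cases, exact_match))  # 1.0
```

Because search, routing, and response generation each plug into the same harness shape with their own cases and scoring function, quality regressions are localised to a component instead of surfacing as a vague end-to-end failure.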

The results validated this approach: DIMI delivers over 90% positive response ratings from real users.

What we shipped

  1. Unified AI assistant that consolidates scattered data into a single interface
  2. Agentic search with content-type-specific retrieval
  3. Smart routing that classifies queries and directs them to the right agent
  4. Commodity and time-range filters for precise answers
  5. Instant chart generation with direct links to Energy Aspects’ Data Explorer
  6. License-aware access control that only surfaces content the user is permitted to see
  7. Evaluation-driven quality assurance across every component
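The license-aware access control item deserves a note on placement: filtering has to happen on the retrieval side, before candidates ever reach the language model, so unlicensed content cannot leak into a generated answer. A minimal sketch of that filter, with illustrative field names (`required_license` is an assumption, not the actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Doc:
    doc_id: str
    required_license: str  # licence a user must hold to see this document

def visible_docs(candidates, user_licenses):
    """Drop any retrieved document the user's licences do not cover."""
    return [d for d in candidates if d.required_license in user_licenses]

candidates = [
    Doc("pub-1", "oil"),
    Doc("pub-2", "lng"),
    Doc("pub-3", "oil"),
]
allowed = visible_docs(candidates, user_licenses={"oil"})
print([d.doc_id for d in allowed])  # ['pub-1', 'pub-3']
```

Filtering pre-generation also keeps the behaviour testable: access control becomes an assertion on a list of documents, not a property you hope the model respects.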

Patterns worth stealing

If you’re building an AI assistant for a domain-specific enterprise use case, three patterns from this project are broadly applicable:

Don’t build one retrieval pipeline. If your data has fundamentally different structures, build specialized search modules for each type. The overhead is worth it.

Invest in routing as a first-class component. In multi-agent systems, the router is a single point of failure. Build it, evaluate it, and iterate on it separately from the agents themselves.

Build evaluation sets from day one. Evaluation is not a phase that comes after development. It’s a continuous process that shapes every architectural decision. Involve domain experts early — their feedback catches failures that synthetic tests miss.

Energy Aspects’ analysts now consume research, time-series data, and documentation through a single AI interface. The system routes queries intelligently, retrieves from the right source, and delivers answers that users rate positively over 90% of the time.

Hotovo is ISO 42001 certified for AI management systems. If you’re building an AI assistant and want to discuss architecture decisions, reach out at sales@hotovo.com.

Author
Dastin Adamowski

With over 12 years of international product management experience, I engineer critical infrastructure and build AI products for early-stage FinTech companies. Having launched over 33 products valued at 1.4 billion USD, I guide Hotovo partners to eliminate inefficiencies by transitioning teams from outdated processes to robust multi-agent orchestrations and rapid AI-augmented prototyping. Beyond orchestrating swarms of AI agents, I am passionate about mountaineering in the Tatra Mountains and going offline to touch grass in the wilderness. These quiet moments away from technology give me the perfect space to dig deeply into the rabbit holes of life.
