We Taught Our AI Assistants Everything About Omnium OMS

At Novacare, we work with Omnium OMS across multiple client projects. Our developers were spending real time on the same loop: building an integration, needing to know which endpoint to call or how workflow steps chain together, opening the docs, searching around, clicking through a few pages, and eventually piecing it together.

It works. But it's slow. And context-switching between docs and your code editor isn't exactly a flow state.

So we built a structured knowledge package, a "skill", that puts the entire Omnium OMS knowledge base directly into our AI assistants. It works as a custom skill in Claude, but the format is just as useful with Cursor, Windsurf, GitHub Copilot, or any other AI coding assistant that supports custom instructions or context files. And since we figured anyone working with Omnium would have the same problem, we made it open source.

What's a skill, exactly?

A skill is a structured knowledge package that gives an AI assistant deep expertise in a specific domain. Instead of pasting documentation into a chat window and hoping for the best, a skill provides layered, well-organized reference material that the LLM can navigate on its own: it knows where to find endpoint specs, when to pull in conceptual context, and how to combine the two when your question demands it.

The concept originated with Claude's custom skills feature, but the underlying format (markdown files with a routing table and structured reference docs) works well with any LLM that can ingest custom context. Feed it to Cursor as project docs, point Windsurf at it, or drop it into whatever AI-assisted workflow you're already using.

The Omnium OMS skill is open source on GitHub, so you can grab it and start using it right away.

What it actually covers

The skill is structured in three layers that mirror how you'd actually look things up:

Layer 1 is a routing table, a compact overview that helps the LLM decide which files to read based on your question. This layer is always loaded, so the assistant can route quickly without carrying the full reference material in context.

Layer 2 is conceptual documentation pulled from docs.omnium.no. This is the "how does it work" layer: order lifecycles, workflow configuration, cart and checkout flows, inventory logic, pricing rules, customer management for both B2C and B2B, click & collect, delta queries, event handling, and more.

Layer 3 is the raw API reference: every endpoint, every parameter, every request and response schema. When you ask "which endpoint do I call to search orders?" or "what does the cart checkout payload look like?", this is where the assistant goes.

The key insight is that many real questions need both layers. "How do I implement click & collect?" requires understanding the order lifecycle concepts first, and then knowing the specific API calls to make it happen. The skill is designed to handle exactly that kind of compound question.
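Concretely, the three layers might map onto a repo layout like the sketch below. The file and folder names here are an illustration of the structure described above, not the repo's actual contents; check the GitHub repo for the real layout.

```
omnium-oms-skill/
├── SKILL.md        # Layer 1: routing table, always loaded
├── concepts/       # Layer 2: order lifecycle, workflows, checkout, ...
└── api/            # Layer 3: endpoints, parameters, schemas
```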

What you can ask it

Here are some examples of things that become much faster with the skill installed:

  • "How do workflows work in Omnium? Show me the happy flow for an online order."
  • "Which endpoint do I use to add items to a cart and then check out?"
  • "How do I set up delta queries to sync changed orders to our ERP?"
  • "What's the difference between Scroll and Search, and when should I use each?"
  • "How do I configure click & collect with store pickup?"
  • "Show me how to do a bulk update of product prices."
  • "What are the common HTTP error codes and how should I handle rate limiting?"

Instead of hunting through docs, you get an answer that combines the relevant concepts with the actual endpoint details and request formats. In one shot.

Who is this for?

Primarily developers and integrators working with Omnium OMS. If you're building a webshop frontend, connecting a WMS or ERP, setting up payment or shipping connectors, or configuring order workflows, this skill turns your AI assistant into a colleague who has actually read all the documentation.

It's also useful for business analysts and project managers who need to understand what Omnium can do without diving into the API specs themselves.

How to use it

The skill is available at github.com/novacare-as/omnium-oms-skill. How you set it up depends on your tool of choice:

  • Claude: Install it as a custom skill in your project. The skill activates automatically in any conversation that mentions Omnium or OMS.
  • Cursor / Windsurf: Add the skill files to your project's docs or context folder so the assistant picks them up when you're coding.
  • GitHub Copilot: Include the files as workspace context or reference them in your custom instructions.
  • Other LLMs: The skill is just structured markdown. Point your tool at the files however it prefers to ingest custom knowledge.

No API keys, no complex setup. Clone the repo and wire it into your workflow.
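As a rough sketch, wiring the skill into a project could look like the steps below. The target folder name `.ai-context` is a placeholder, not a documented convention, and the `SKILL.md` filename is an assumption about the repo's contents; check your tool's docs for where it actually reads project context, and the repo for its real layout.

```shell
# Normally you would start by fetching the skill (URL from the post):
#   git clone https://github.com/novacare-as/omnium-oms-skill.git
# For this offline sketch, stand in a minimal stand-in for the cloned repo:
mkdir -p omnium-oms-skill
printf '# Omnium OMS skill routing table\n' > omnium-oms-skill/SKILL.md

# Copy the skill files into the folder your AI assistant reads project
# context from (".ai-context" is a placeholder name, not a standard):
mkdir -p .ai-context
cp -r omnium-oms-skill/. .ai-context/
ls .ai-context
```

Because the skill is plain markdown, "installation" really is just putting the files where your tool can see them; the per-tool bullets above only differ in which folder that is.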

Why a skill and not just docs?

The Omnium documentation is good, but it's spread across a lot of pages. An LLM without the skill might give you a plausible-sounding answer based on whatever fragments of Omnium made it into its training data, but it won't know the exact endpoint path, the correct request body, or how Omnium's workflow steps actually behave.

The skill fixes that. It gives consistent, accurate answers because it's grounded in the actual documentation. The LLM reads the right reference files for your question instead of guessing. That's the difference between "I think the endpoint is something like /api/orders" and a concrete, correct example with the right parameters.

Try it out

Grab the skill, wire it into your preferred AI tool, ask it a question about Omnium, and see how it goes. If something is missing or could be better, the repo is open for contributions.

Happy integrating.