Bringing AI to Offri
By Younes
How I built an AI microservice for generating proposals in Offri
Introduction
Offri is an online tool that helps businesses create and share interactive proposals. Instead of sending static PDFs, users can build proposals with blocks like text, media, pricing, and signatures — and then track how their clients interact with them.
When I started my graduation project at Internetbureau Slik, the team had an important question: what if proposals could be drafted with the help of AI? Competitors were beginning to experiment with AI, and Offri needed to keep up. More importantly, writing proposals can take a lot of time, and users were asking for ways to speed up the process.
The goal of my project was to research, design, and build an AI microservice that could generate proposals from a user briefing, uploaded documents, and a set of chosen options. This would not only save users time but also give them inspiration when starting from scratch.
Problem Statement
At the start of my project, Offri had no AI support. Users built proposals manually by adding blocks of text, media, pricing and more. While this approach worked, it was time-consuming and sometimes overwhelming for users who didn't know where to start. On top of that, hardly any companies specialized in AI-generated proposals, which made this a strong market opportunity for Offri.
The challenge was to design and implement a solution that could:
- Generate proposals automatically from a short user input.
- Integrate seamlessly into Offri’s existing architecture and workflows.
- Be scalable and maintainable, so it could evolve as the product grew.
In short: Offri needed an AI service that could create high-quality proposals quickly, while fitting into the platform’s existing structure.
Analysis
Before I could design a solution, I needed to understand the context I was working in. Offri was already a mature web application built with Vue.js on the frontend and NestJS with MySQL on the backend. Services communicated through RabbitMQ, and the platform ran in Docker containers. This meant any new component had to fit smoothly into this setup.
The first key question was architectural: should the AI logic be added directly into the existing backend, or should it run as a separate service?
- Adding it into the backend (monolith style) would make integration easier at first but risked making the codebase harder to maintain.
- A separate microservice would require more setup but offered scalability and independence from the main application.
Another part of my analysis was evaluating large language models (LLMs). I tested Claude, GPT, and Gemini to compare their output quality, speed, and cost. Each model had different strengths, and choosing one meant balancing these trade-offs.
Finally, I reviewed how users might interact with the feature. Generating a proposal isn’t just a single AI call — it requires progress updates, error handling, and the ability to retry a generation. That meant the solution needed real-time communication and a clear user flow in the UI.
Advice
My architectural advice had two parts.
For the AI feature, I recommended building it as a separate microservice. This kept the implementation isolated, allowed the team to experiment with different LLM providers, and made scaling easier without touching the existing backend.
For the platform as a whole, I advised the opposite: eventually migrating Offri from a microservices setup to a monolith. The product had grown rapidly, and while microservices offered flexibility, they also added complexity in deployment and maintenance. A monolith would make development and operations simpler in the long run. The team agreed and decided to move in that direction.
On the model side, I compared Claude, GPT, and Gemini.
- Claude: stable results, strong with long contexts, and the highest output quality in my tests.
- GPT: powerful, but more expensive.
- Gemini: cheap, but inconsistent.
To stay flexible, I recommended making the microservice provider-agnostic, so switching models or adding new ones would be straightforward.
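To give a feel for what provider-agnostic means in practice, here is a minimal TypeScript sketch of such an abstraction. The interface, class, and factory names are illustrative, not Offri's actual code, and the vendor API calls themselves are omitted:

```typescript
// Each LLM vendor implements the same interface, so the rest of the
// service never depends on a specific provider SDK.
interface LlmProvider {
  readonly name: string;
  // Sends a prompt and resolves with the raw model output.
  generate(prompt: string): Promise<string>;
}

class ClaudeProvider implements LlmProvider {
  readonly name = 'claude';
  async generate(prompt: string): Promise<string> {
    // The actual call to the vendor API would go here.
    throw new Error('not implemented in this sketch');
  }
}

// Swapping models becomes a configuration choice instead of a code change.
function createProvider(name: string): LlmProvider {
  switch (name) {
    case 'claude':
      return new ClaudeProvider();
    // case 'gpt': return new GptProvider();
    // case 'gemini': return new GeminiProvider();
    default:
      throw new Error(`Unknown provider: ${name}`);
  }
}
```

With a setup like this, adding or switching a model only touches the factory and one provider class, which is exactly the flexibility I wanted to preserve.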
I also suggested extending the database with tables like prompt_history and feature_flags. This allowed for storing user prompts, reusing them later, and rolling out AI gradually to selected users.
Design
Once the approach was clear, I translated it into a concrete design. The design had two main parts: the system architecture and the user experience.
On the technical side, I designed the AI as a NestJS microservice connected to Offri through RabbitMQ. This allowed the main backend to send prompts to the service and receive responses asynchronously. I also added new database tables, both sketched below:
- prompt_history for storing past requests.
- feature_flags for controlling the rollout of the AI feature.
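To illustrate what those tables might look like, here is a simplified sketch using TypeORM-style entities, a common pairing with NestJS and MySQL. The ORM choice and the exact columns are assumptions for illustration, not the production schema:

```typescript
import { Entity, PrimaryGeneratedColumn, Column, CreateDateColumn } from 'typeorm';

// One row per generation request, so prompts can be reviewed and reused.
@Entity('prompt_history')
export class PromptHistory {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  userId: number;

  @Column('text')
  prompt: string;

  @CreateDateColumn()
  createdAt: Date;
}

// One row per feature, used to roll the AI out to selected users first.
@Entity('feature_flags')
export class FeatureFlag {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ unique: true })
  name: string;

  @Column({ default: false })
  enabled: boolean;
}
```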
For the user interface, I went through four design iterations, each followed by feedback from the Offri team. The final flow consisted of:
- A briefing screen where the user enters context about the proposal.
- A step for choosing a template.
- A screen showing different AI-generation options.
- A live progress indicator that updated in real time as the proposal was being built.
Realisation
With the design in place, I started building the AI microservice in Node.js & NestJS. The service was responsible for handling user prompts, communicating with an LLM provider, converting the output into usable proposal content, and returning updates to the frontend.
To keep the integration smooth, the microservice communicated with Offri's backend through RabbitMQ, so requests and responses could be handled asynchronously. I also used WebSockets to send progress updates directly to the frontend.
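As a rough sketch of how these two channels can fit together in NestJS, here is a simplified consumer and gateway. The message pattern, event name, and payload shape are made up for illustration and are not Offri's actual contract:

```typescript
import { Controller } from '@nestjs/common';
import { MessagePattern, Payload } from '@nestjs/microservices';
import { WebSocketGateway, WebSocketServer } from '@nestjs/websockets';
import { Server } from 'socket.io';

// Pushes generation progress straight to the browser.
@WebSocketGateway()
export class ProgressGateway {
  @WebSocketServer()
  server: Server;

  sendProgress(requestId: string, percent: number) {
    // Clients subscribe to events scoped to their own request.
    this.server.emit(`generation.progress.${requestId}`, { percent });
  }
}

// Consumes generation requests that the main backend publishes to RabbitMQ.
@Controller()
export class GenerationController {
  constructor(private readonly progress: ProgressGateway) {}

  @MessagePattern('generate_proposal')
  async generate(@Payload() data: { requestId: string; briefing: string }) {
    this.progress.sendProgress(data.requestId, 10); // briefing validated
    // ...call the LLM provider and convert its output into proposal blocks...
    this.progress.sendProgress(data.requestId, 100); // proposal complete
    return { status: 'done' };
  }
}
```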
On the frontend side, I added new Vue.js components:
- A briefing screen where users could provide input.
- A template selection step.
- A screen presenting the available AI-generation options.
- A progress screen that updated in real time.
For rollout, I implemented feature flags. This meant the AI functionality could be tested with specific users before making it available to everyone.
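A minimal sketch of how such a check could be enforced as a NestJS guard follows; the flag name, service, and request shape are hypothetical:

```typescript
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';

// Hypothetical service that reads the feature_flags table.
export abstract class FeatureFlagService {
  abstract isEnabledFor(flag: string, userId?: number): Promise<boolean>;
}

// Blocks the AI endpoints unless the flag is enabled for the current user.
@Injectable()
export class AiFeatureGuard implements CanActivate {
  constructor(private readonly flags: FeatureFlagService) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    const request = context.switchToHttp().getRequest();
    return this.flags.isEnabledFor('ai_proposals', request.user?.id);
  }
}
```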
Testing
Once the AI microservice was running, I focused on testing its reliability and behavior. I wrote unit tests with Jest to check the core logic: validating inputs, handling responses from the LLM, and converting text into proposal blocks. This helped ensure that small parts of the system behaved as expected.
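As an example of the kind of test this involved, here is a small Jest sketch around a hypothetical text-to-blocks helper; the function name and block shape are stand-ins for the real conversion logic:

```typescript
// Stand-in for the real conversion step: split model output into blocks.
function textToBlocks(raw: string): { type: string; content: string }[] {
  return raw
    .split('\n\n')
    .map((part) => part.trim())
    .filter((part) => part.length > 0)
    .map((part) => ({ type: 'text', content: part }));
}

describe('textToBlocks', () => {
  it('turns paragraphs into separate text blocks', () => {
    const blocks = textToBlocks('Intro paragraph\n\nPricing details');
    expect(blocks).toHaveLength(2);
    expect(blocks[0]).toEqual({ type: 'text', content: 'Intro paragraph' });
  });

  it('ignores empty segments in the model output', () => {
    expect(textToBlocks('\n\n')).toHaveLength(0);
  });
});
```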
Beyond automated tests, I also carried out exploratory testing in a staging environment. By simulating real user flows, I was able to identify issues that wouldn’t show up in unit tests, such as incomplete outputs, slow response times, or edge cases in prompts.
For tracking and managing bugs, the team used Notion. Each issue was documented, prioritized, and resolved in collaboration with the developers at Slik. This made the testing process transparent and structured.
Results & Reflection
By the end of my project, Offri had a working AI microservice integrated into the platform. Users could now enter a short briefing, select a template, and receive a complete proposal draft generated in real time. The system sent live updates, handled errors gracefully, and stored prompt history for later use.
For Offri, this meant staying competitive with other proposal tools while also offering customers a faster and more inspiring way to create proposals. The feature turned a manual task into an assisted process, saving time without losing flexibility.
For me, this project was more than just coding. I learned how to:
- Research and compare architectural approaches.
- Build and deploy a microservice that works within a larger system.
- Integrate LLMs into a real product.
- Collaborate with a professional team and gather feedback in iterative design rounds.
It was the first time I delivered a feature that touched every layer of a system: frontend, backend, infrastructure, and AI integration. That made it both challenging and rewarding.
Conclusion
Integrating AI into Offri was both a technical challenge and a valuable opportunity. The project showed how large language models can move beyond experiments and become part of a production-ready system. By building a dedicated AI microservice, I was able to design a solution that was flexible, maintainable, and directly useful to Offri’s users.
For Offri, the result was a feature that saves customers time and helps them create better proposals. For me, it was a chance to apply everything I had learned during my studies and internships, and to grow as a software engineer who can take an idea from research all the way to implementation.
This project confirmed my interest in software engineering and system design, and it gave me the confidence to keep building solutions that connect modern technologies with real user needs.