A Practical AI Integration in CountPesa
When ChatGPT first launched, chatbots were all the rage, with many predicting traditional UI would become obsolete. Today, the hype has died down and we're now seeing practical AI applications hitting the products we use.
But chat interfaces aren't necessarily taking center stage; they tend to complement products that otherwise maintain their familiar experiences. Take Notion with Notion AI, for example.
I decided to join the hype train and add some AI functionality to the CountPesa Web App, without necessarily making it the main act.
AI in a Privacy-First App
CountPesa is an expense management app that organizes M-Pesa transactions in a way that makes it easy to identify spending patterns over time. This visual approach provides valuable insights at a glance.

A core aspect of CountPesa is privacy. The app stores all user data locally in the browser using IndexedDB without any server-side storage.
This created an interesting challenge. How could the app use AI without sharing the user's data?
Natural Language Query Translation
While researching, I came across the idea of natural language query translation: rather than having the AI do the data analysis, you use it to translate user questions into queries that retrieve the relevant data. The AI never has to see the user's data.
Here's an overview of how I implemented it:
- I provided AI with some context of my application and how filtering works.
- I gave it examples of questions and the respective filters it should respond with (few-shot prompting).
- I asked it to standardize its output in a way that enables easy parsing.
- Whenever the AI responds, I parse its output into valid filters (if any) and apply them in the application.
I was essentially using AI to help the user choose what data to visualize, then the platform would be responsible for making it easy to understand.
This worked reasonably well, with more examples significantly improving response quality. Using a popular filter pattern (MongoDB-like) also helped, as the model had probably been trained on such logic.
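To make the parsing step concrete, here is a minimal sketch of how the standardized output could be validated against a MongoDB-like filter whitelist. The field names, operators, and the convention of replying inside a ```json``` block are illustrative assumptions, not CountPesa's actual schema:

```typescript
// Parse the model's reply into a filter, or null on graceful failure.
// Assumes the prompt instructs the model to wrap its filter in a
// ```json ... ``` block (a hypothetical convention for this sketch).
type Filter = Record<string, unknown>;

const ALLOWED_FIELDS = new Set(["amount", "category", "counterparty", "date"]);
const ALLOWED_OPERATORS = new Set(["$gt", "$gte", "$lt", "$lte", "$eq", "$in", "$and", "$or"]);

function parseFilterResponse(reply: string): Filter | null {
  const match = reply.match(/```json\s*([\s\S]*?)```/);
  if (!match) return null;

  let parsed: unknown;
  try {
    parsed = JSON.parse(match[1]);
  } catch {
    return null; // invalid JSON: treat as a graceful failure
  }
  if (typeof parsed !== "object" || parsed === null) return null;

  // Reject anything outside the whitelist so a malformed or
  // hallucinated response never reaches the query layer.
  const valid = Object.entries(parsed as Filter).every(([key, value]) => {
    if (ALLOWED_OPERATORS.has(key)) return Array.isArray(value);
    if (!ALLOWED_FIELDS.has(key)) return false;
    if (typeof value === "object" && value !== null) {
      return Object.keys(value).every((op) => ALLOWED_OPERATORS.has(op));
    }
    return true;
  });
  return valid ? (parsed as Filter) : null;
}
```

The whitelist doubles as a safety net: anything the model invents that the app cannot execute is rejected before it touches the data layer.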
User Feedback
Upon release, the feature didn't get as much usage as I had anticipated.

Another challenge was that users didn't know the limitations of what they could ask. I had to prompt the AI to gracefully fail and inform them of its capabilities. This also meant handling invalid responses in code.
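The graceful-failure handling can be sketched as a single branch in the response handler. The error-reply convention and the wording below are hypothetical, not the actual CountPesa prompt:

```typescript
// Assumes the system prompt tells the model: if a question cannot be
// answered with the available filters, reply with
// {"error": "<one sentence about what you can do>"} instead of
// inventing a filter. (Illustrative convention for this sketch.)
function handleReply(reply: string): { filter?: Record<string, unknown>; message?: string } {
  try {
    const parsed = JSON.parse(reply);
    if (parsed && typeof parsed === "object" && typeof parsed.error === "string") {
      return { message: parsed.error }; // surface the model's capability note to the user
    }
    return { filter: parsed };
  } catch {
    // Model ignored the format entirely: fall back to a fixed hint.
    return { message: "Sorry, I couldn't understand that. Try asking a question about your transactions." };
  }
}
```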
This pushed me to extend the feature to handle even more nuanced queries, for example:
"Show me all transactions with Jane that happened on weekends"
"How much did I spend on weekdays during morning rush hour last year?"

Users were impressed, but they weren't adopting it as widely as I'd expected. Upon asking for feedback, they wanted even more capabilities:
"Can it be voice activated?"
"The questions are limited. Can it answer more general questions?"
"I want something like personalized advice?"
"Too much typing."
A Simple Middle-Ground
Some of the features the users wanted required giving AI more direct access to their data.
While I wasn't comfortable doing that, I realized I could share limited, anonymized transaction aggregates for analysis purposes.
This led to two refinements:
- I made the chat more conversational, adding AI analysis after filtered results were applied. This entailed aggregating the data and presenting a summarized version to the model.
- I added a one-click financial assessment. This was simply a dedicated "Analyze" button on the header that would generate an analysis of the data in the current view. The analysis came in two flavours: SERIOUS MODE and ROAST MODE.
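The aggregation step behind both refinements can be sketched as a reduction from raw transactions to anonymized per-category totals; only this summary string ever goes into the prompt. Field names and the KES formatting are illustrative:

```typescript
// Reduce raw transactions to per-category totals before anything is
// sent to the model. No counterparties, dates, or individual amounts
// leave the browser in this sketch.
interface Txn {
  category: string;
  amount: number;
}

function summarize(txns: Txn[]): string {
  const totals = new Map<string, number>();
  for (const t of txns) {
    totals.set(t.category, (totals.get(t.category) ?? 0) + t.amount);
  }
  return [...totals.entries()]
    .map(([category, total]) => `${category}: KES ${total.toFixed(2)}`)
    .join("\n"); // this summary, not the raw data, goes into the prompt
}
```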
While implementing the post-filter analysis feature in the chat, I noticed that using a single AI model instance for both filtering and analysis led to confusion. The solution was separating the two functionalities:
- one model maintains the filtering conversation context
- another model handles financial analysis with access only to the filtered data aggregates.
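The split can be sketched with two independent sessions. The `ChatSession` shape and `send` behaviour below are simplified stand-ins for a real SDK client, just to show that the two histories never mix:

```typescript
// Two isolated sessions: the filter session accumulates the running
// conversation, while the analysis session only ever receives
// aggregated summaries. (Stand-in for a real chat SDK.)
interface ChatSession {
  history: string[];
  send(msg: string): string;
}

function makeSession(systemPrompt: string): ChatSession {
  const history = [systemPrompt];
  return {
    history,
    send(msg) {
      history.push(msg);
      // A real implementation would forward `history` to the model here.
      return `ack:${msg}`;
    },
  };
}

const filterSession = makeSession("Translate questions into filters.");
const analysisSession = makeSession("Analyze aggregated spending summaries.");

filterSession.send("Show transactions with Jane");
analysisSession.send("Food: KES 150.50\nRent: KES 2000.00");
```

Because each session owns its own history, the analysis model never sees the filtering conversation, and the filter model never sees the aggregates.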
The one-click analysis feature was particularly well-received, especially the ROAST MODE, where the AI roasts your financial decisions.
Because of this experiment, I realized the AI financial analysis would be a great addition to the mobile app; reports are now generated automatically at the beginning of every week and month.
The reception has been positive overall so far, with one user saying: "It's a great way to get a quick overview of the data I have and it helps me focus my analysis on some aspects of my spending."
Notes
Whenever the AI failed to give an output, I logged the question that was asked so I could reproduce the issue. This helped reduce failure rates.
The analyze feature was much easier to implement than the natural language translation feature — yet it was more popular.
Complex != Better
Users are lazy. Don't make them click through many steps; get them the result in one click, or none if possible.
I'd initially implemented this in early 2024. Now, in April 2025, the feature works better and faster. I am still using the free Gemini Flash model (it was 1.5 then; now it's 2.0).
The prompting can always be improved.
I'd implemented this web app using Angular last year, but now that I'm all-in on React, I decided to rewrite it in React.
If you're interested in exploring the implementation details, check out the code on GitHub or the detailed docs on DeepWiki.