Vibe coding a food diary bot in one hour
Fri, Aug 15, 2025 · tags: AI, artificial intelligence, programming
I haven't written much about the fancy new AI tools on this blog, so I thought I'd do a short writeup of a sample workflow through ChatGPT, VS Code with Github Copilot, and Claude Code that resulted in a fully working food diary app in about one hour from the idea.
The Problem Statement
I decided keeping a food diary would be a nice way to avoid unhealthy habits. I had experimented with ChatGPT's vision capabilities some time ago for calorie estimates, and that worked pretty well, so I thought I'd combine a simple Google Sheet with occasional ChatGPT use for a 95% working solution, without needing to bother with installing an app that would likely want to overhaul my whole diet, and subscribe me to a $19.99/mo plan while at it.
However, manually firing up the Google Sheet when on the computer (the mobile version is crummy) wasn't ideal, so enter ChatGPT: maybe I could easily do a native Android client with AI calorie estimates? So I asked GPT-5:
I'd like to create a super simple food diary app on my Android phone:
- Add a diary entry either by writing what I ate, taking a picture, or both
- Gemini API used for a calorie estimate from AI
- Diary stored permanently on my server (maybe including the pics?)
Outline for me three different architectures. I have a Raspberry Pi available, and a cloud server. I'd like minimal backend solution for storage which I can also view manually (say JSON or SQLite or even Google Sheet). Native app would be fun, but at least something I can quickly fire up from phone and upload photo.
After a while of thought, GPT-5 kindly gave me three ideas, to summarize:
- Telegram bot + tiny API on your Pi/cloud (fastest to use from phone)
- Google Sheets + Apps Script Web App (no server to run)
- Native Android app (Kotlin) + tiny API on Pi/cloud (most control, best UX)
Now the first one was actually pretty smart. I had recently made a Telegram bot for an IoT project and it was quite easy, and it would readily come with picture sending and chat capabilities. That would be plenty.
Initial version: The Spec
Continuing with the prompt, I gave a sketch of how the interaction could go:
I think the telegram bot sounds nice. I'd like to be able to group messages within a short period to impact the same entry, so the interaction with bot could be like:
> Had a pasta carbonara for lunch
< That's about 700 calories
> [image]
< Based on your description and photo, that's about 900 calories
> 850 cal
< Thanks, adjusted based on your input
And on the AI went. I got these artifacts out:
- FastAPI server (SQLite + media + Gemini)
import os, uuid, re, io, csv, time, datetime
from pathlib import Path
from typing import Optional
from fastapi import FastAPI, UploadFile, Form, File, HTTPException
from fastapi.responses import FileResponse, StreamingResponse
from fastapi.staticfiles import StaticFiles
import sqlite3, json
# --- config ---
DATA_DIR = Path(os.getenv("DATA_DIR", "./data"))
MEDIA_DIR = DATA_DIR / "media"
DB_PATH = DATA_DIR / "entries.db"
AUTH_TOKEN = os.getenv("API_TOKEN", "dev-secret")
GROUP_WINDOW_SECONDS = int(os.getenv("GROUP_WINDOW_SECONDS", "600")) # 10 min
GEMINI_KEY = os.getenv("GEMINI_API_KEY", "")
app = FastAPI()
# And on it goes...
- Telegram bot (groups messages, supports “850 cal”)
import os, re, requests, tempfile
from telegram import Update
from telegram.ext import Application, CommandHandler, MessageHandler, ContextTypes, filters
BOT_TOKEN = os.environ["TELEGRAM_TOKEN"]
API_BASE = os.getenv("API_BASE", "http://localhost:8080")
API_TOKEN = os.getenv("API_TOKEN", "dev-secret")
CAL_RE = re.compile(r"^\s*(\d{2,5})\s*(k?cal)?\s*$", re.I)
async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
await update.message.reply_text("Food diary ready. Send text, photo, or both. Say '850 cal' to override.")
# And so on...
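That CAL_RE pattern is essentially the whole override protocol: if a message is nothing but a number (optionally followed by "cal" or "kcal"), it is treated as a calorie correction instead of a diary note. A quick standalone check of how it behaves (helper name is mine, not necessarily what the generated bot uses):

```python
import re

# Same pattern as in the generated bot: a bare 2-5 digit number,
# optionally followed by "cal" or "kcal", is a calorie override.
CAL_RE = re.compile(r"^\s*(\d{2,5})\s*(k?cal)?\s*$", re.I)

def parse_override(text: str):
    """Return the overridden calorie count, or None for a normal message."""
    m = CAL_RE.match(text)
    return int(m.group(1)) if m else None
```

So "850 cal", "850kcal" and a plain "850" all register as overrides, while ordinary food descriptions fall through to the diary logic.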
Now, running two things on my Raspberry Pi 4 and having them communicate sounded a bit involved. Maybe I could combine them?
Could we combine the bot and backend code to single executable? Let's simplify storage even more:
- Accumulate text and photos, take timestamp on first new message
- Once user inputs "xxx cal" that will "close" the current entry and flush to disk
JSONL could be just {'sent': '2025-08-15 14:40 UTC', 'description': 'All text concatenated here', images: [/* local path(s) */], calories: 123}
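That spec is small enough to sketch directly. Something along these lines, with made-up names (my reconstruction, not the exact generated code):

```python
import json, time
from pathlib import Path

ENTRIES = Path("data/entries.jsonl")

# In-progress entry: text fragments and image paths accumulate here
# until a calorie override "closes" the entry.
current = {"sent": None, "texts": [], "images": []}

def add_message(text=None, image=None):
    """Accumulate messages; the timestamp is taken from the first one."""
    if current["sent"] is None:
        current["sent"] = time.strftime("%Y-%m-%d %H:%M UTC", time.gmtime())
    if text:
        current["texts"].append(text)
    if image:
        current["images"].append(image)

def close_entry(calories: int):
    """Flush the accumulated entry to the JSONL file and reset."""
    entry = {
        "sent": current["sent"],
        "description": " ".join(current["texts"]),
        "images": current["images"],
        "calories": calories,
    }
    ENTRIES.parent.mkdir(parents=True, exist_ok=True)
    with ENTRIES.open("a") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    current.update({"sent": None, "texts": [], "images": []})
    return entry
```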
And basically, with this input I got working single-file Python code for a Telegram bot that I could just run with:
pip install python-telegram-bot==21.*
export TELEGRAM_TOKEN="your-bot-token"
python fooddiary_bot.py
I have skipped the part where I made a new Telegram bot using Telegram's BotFather — ask your local AI for info if you want to know how to do that. And no, I am not generating this text with AI even though there was an em dash right there.
Making a project with uv
Python dependencies are irksome, and creating a separate virtual environment felt a bit heavy. I recalled this thing called uv that is pretty much a combination of venvs, Poetry, and what npm does in Node projects. I even had it installed on my Macbook, so I could skip the one-liner in the docs.
To be frank, I forgot whether there were other alternatives, so this is how I got started with ChatGPT:
What's a modern python package management solution that would allow me to install the telegram bot
And out it spit instructions:
# Install uv (single binary)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create project
uv init fooddiary-bot
cd fooddiary-bot
# Install deps in isolated venv
uv add python-telegram-bot
# Run
uv run python fooddiary_bot.py
Well, almost ready, but I wasn't fully inspired by fooddiary-bot, descriptive as it was. So I turned again to my AI friend, and after not being fully happy with the initial suggestions, prompted it a bit more and settled on an imaginative and, according to Google, not very popular existing project name: mealgram.
Mealgram 1.0: Configuration handling
After replacing the project name, I had it mostly working. However, specifying my Telegram token on every run was suboptimal, so I fired up VS Code and asked the resident Github Copilot model (probably GPT-4.1):
I'd like to store the telegram token and gemini api key in a suitable env file
Basically I got instructions to pip install python-dotenv, which I replaced with uv add python-dotenv, and to add the lines:
from dotenv import load_dotenv
load_dotenv()
Writing these lines into the file seemed like too much work, so I just pasted my token into the .env file, switched from Ask to Edit mode (which frustratingly in VS Code deletes the context) and asked:
I installed python-dotenv and set the TELEGRAM_TOKEN in my .env, can you modify the code
Now I could just uv run main.py and the bot worked. Neat.
Making it a github project
So far I had run this locally, but it would be nice to brag about the vibe coding project, and maybe someone would even be interested in the actual product. So I created a new github repo at https://github.com/jokkebk/mealgram and chose the MIT license (this is not rocket science, so why not give it away for free).
Locally I basically ran (thanks to modern git, which defaults to main and not master, so github won't complain):
$ git init
$ echo .env >> .gitignore
$ echo "data/" >> .gitignore
$ git add .
$ git commit -a -m "Initial commit"
Now since I chose to create the repo with a LICENSE file in there, I had two repos without a common base that I needed to merge. I never remember how to do that. Not a problem, Claude Code to the rescue!
$ claude
# Yes I'm fine with this folder...
> I already create github repo for this at https://github.com/jokkebk/mealgram -- there is already a LICENSE file there, can you add remote origin and push the local stuff there and merge any conflicts?
And Claude Code tried to push, noticed there were separate repository histories, fixed it for me, and voilà: we're live!
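The underlying git move is --allow-unrelated-histories. A self-contained demo of the situation and the fix (all paths and names below are made up for illustration):

```shell
# Demo of merging two repos that share no common base, which is what
# happens when you `git init` locally while github creates the repo
# with a LICENSE on its own. Everything runs in a throwaway directory.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# "github" side: starts with just a LICENSE
git init -q hub && (cd hub && git config user.email me@example.com \
  && git config user.name me && echo MIT > LICENSE \
  && git add . && git commit -qm "Add LICENSE" && git branch -m main)

# local side: starts with just the code
git init -q local && cd local
git config user.email me@example.com && git config user.name me
echo 'print("hi")' > main.py && git add . && git commit -qm "Initial commit"
git branch -m main

# the actual fix: fetch the unrelated history and merge it in
git remote add origin "$tmp/hub"
git fetch -q origin
git merge -q --no-edit --allow-unrelated-histories origin/main

ls  # both LICENSE and main.py now live in one history
```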
Adding pizzazz: Gemini calorie estimates
Now Gemini 2.5 Flash is a pretty cheap and powerful model. I am already using it in my personal budget app to turn web bank screenshots into TSV-formatted statements. So I returned from Claude Code to VS Code, with some hopes for Claude 3.7 Thinking (I didn't have version 4 enabled at the time):
Can you add /cal command that would feed text and image(s) to Google Gemini. Use the python-genai library I have installed. GEMINI_API_KEY variable is set in .env
(yes, I had run uv add python-genai before this)
Now Claude did what most LLMs with knowledge cutoffs do: it wrote me a non-working piece of code using an older Google LLM API.
Fixing it
I tried simply:
Google genai is used wrong, here's a proper new version snippet:
from google import genai
# TODO(developer): Update below line
API_KEY = "YOUR_API_KEY"
client = genai.Client(vertexai=True, api_key=API_KEY)
response = client.models.generate_content(
model="gemini-2.5-flash",
contents="Explain bubble sort to me.",
)
print(response.text)
Well, that went nowhere. This was the most manual part of the exercise, as I needed to actually change the top-of-file import, mark the function that was wrong, and kindly nudge it again there. After that, it worked well.
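For reference, the new google-genai call pattern for mixed text and image input looks roughly like this. This is a sketch under my assumptions: the helper names are mine, the SDK is imported lazily so the helper can be used without it, and only the general shape matches what ended up in the bot:

```python
import mimetypes
import pathlib

def guess_mime(path: str) -> str:
    """Best-effort MIME type for an image file (defaults to JPEG)."""
    return mimetypes.guess_type(path)[0] or "image/jpeg"

def estimate_calories(text: str, image_paths: list[str], api_key: str) -> str:
    """Ask Gemini for a calorie estimate from a description plus photos."""
    # Imported lazily so the rest of the module works without the SDK.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=api_key)
    contents = [text] + [
        types.Part.from_bytes(data=pathlib.Path(p).read_bytes(),
                              mime_type=guess_mime(p))
        for p in image_paths
    ]
    resp = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=contents,
    )
    return resp.text
```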
Lo and behold, restarting the bot gave me a working /cal command that ran fine with both text and images. I don't know if the estimates are any good, but it's AI so it's awesome, right?
Migrating to server
At this point I was ready to git commit and git push my changes, log into my Pi 4 box, and git clone the thing. A few scp commands to copy the .env and the already written data/entries.jsonl over, and it was working.
Now one thing was a bit of a bummer: I already had two days' worth of data in the Google Sheet. Well, not a problem really:
$ claude
> Check out the format of @data/entries.jsonl and prepend this data there:
Päivä Ruoka kcal
13/08/2025 2 aamupalaleipää suolakurkulla 350
...
● I'll examine the format of the entries.jsonl file and prepend the provided data.
● Read(data/entries.jsonl)
⎿ Read 4 lines (ctrl+r to expand)
● Now I'll convert the provided data to match the JSON format and prepend it to the file:
● Update(data/entries.jsonl)
⎿ Updated data/entries.jsonl with 8 additions
1 + {"sent": "2025-08-13 12:00 UTC", "description": "2 aamupalaleipää suolakurkulla", "images": [], "calories": 350}
...
● The data has been successfully prepended to the entries.jsonl file. I converted the tabular data to match the JSON format used in the file, with each
entry containing a timestamp, description, empty images array, and calorie count.
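The conversion itself is simple enough to sketch. Roughly, each tab-separated sheet row became one JSONL entry like this (noon UTC is the placeholder timestamp, since the sheet only stored dates; the helper name is mine):

```python
import json
from datetime import datetime

def sheet_row_to_entry(line: str) -> dict:
    """Convert a sheet row 'DD/MM/YYYY<TAB>description<TAB>kcal'
    into the bot's JSONL entry format. The sheet only has dates,
    so noon UTC stands in as the timestamp."""
    date_str, description, kcal = line.split("\t")
    day = datetime.strptime(date_str, "%d/%m/%Y")
    return {
        "sent": day.strftime("%Y-%m-%d") + " 12:00 UTC",
        "description": description,
        "images": [],
        "calories": int(kcal),
    }
```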
Lovely modern tech.
Final touches: Simple reporting
I could always go eyeball the entries.jsonl if I wanted reporting, or build something nice, but maybe I could have a simple per-day summary in the bot? Back to VS Code:
One more command, /stats should go through the JSONL and print total calories for latest 7 days logged (regardless if they are consecutive or not -- just sum each distinct day and print max 7 latest entries)
After it made the addition (which worked), I decided /report would actually be a nicer command name and edited that manually. I pushed the changes, and now the app was pretty much complete.
Summary
With ChatGPT, VS Code + Github Copilot, and Claude Code, I was able to build a fully working food diary bot with a github repository and some nice basic functionality, plus Google Gemini 2.5 Flash powered AI calorie estimation, in about an hour. Most of the time went to debugging the legacy SDK that Claude 3.7 decided to use for Gemini. Not bad for quick work!
Take a look at the project: https://github.com/jokkebk/mealgram