March 20, 2025·6 min read

Building an AI-Powered Fitness App with React Native

How I built AI-generated workout plans, computer-vision meal scanning, and sleep analytics into a production app serving 10K+ users — and what I learned.

React Native · AI · Expo · OpenAI

When we decided to add AI to our fitness app at Appeneure, the first challenge wasn't the model — it was the UX. Users don't want to feel like they're talking to a chatbot. They want the app to just know what to recommend.

I built the workout plan generator using OpenAI's API with a structured JSON output schema. The key insight was treating it as a data transformation problem, not a chat problem. The user fills out a profile (goals, equipment, schedule), and we pass that as a well-structured system prompt. The model returns a typed workout object — no parsing chaos.
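The profile-to-plan pipeline can be sketched as below. This is a minimal illustration of the "data transformation, not chat" idea; the type names (`Profile`, `WorkoutPlan`), the prompt wording, and the `parsePlan` validator are all illustrative, not the production code.

```typescript
// Illustrative types for the profile the user fills out and the typed
// workout object the model is asked to return.
interface Exercise {
  name: string;
  sets: number;
  reps: number;
}

interface WorkoutPlan {
  daysPerWeek: number;
  sessions: Exercise[][]; // one array of exercises per training day
}

interface Profile {
  goal: string;
  equipment: string[];
  daysPerWeek: number;
}

// The whole profile is serialized into one structured system prompt,
// so the model has everything up front -- no back-and-forth chat turns.
function buildPrompt(profile: Profile): string {
  return [
    "You are a workout plan generator. Respond with JSON only,",
    "matching { daysPerWeek: number, sessions: Exercise[][] }.",
    `Goal: ${profile.goal}`,
    `Equipment: ${profile.equipment.join(", ")}`,
    `Days per week: ${profile.daysPerWeek}`,
  ].join("\n");
}

// Validate the model's JSON before it touches app state, so a
// malformed response fails loudly instead of corrupting the UI.
function parsePlan(raw: string): WorkoutPlan {
  const data = JSON.parse(raw);
  if (typeof data.daysPerWeek !== "number" || !Array.isArray(data.sessions)) {
    throw new Error("Model returned an unexpected shape");
  }
  return data as WorkoutPlan;
}
```

In production you would point `buildPrompt`'s output at the OpenAI API with a JSON output schema enforced server-side; the validator is still worth keeping as a last line of defense.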

Meal scanning was the hardest part. We used Google Vision API to identify food items from a photo, then passed the result to GPT-4o to estimate macros. The tricky bit was latency — Vision + GPT in sequence felt slow. We solved it by running Vision eagerly on image selection while the user confirms, so by the time they tap 'Log', the AI result is ready.
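The eager-scan trick looks roughly like this. `scanMeal` here is a stand-in for the real Vision-then-GPT chain (simulated with a timeout and mock macros), and the function names are hypothetical; the point is that the promise starts at image selection and is merely awaited at log time.

```typescript
type Macros = { calories: number; protein: number; carbs: number; fat: number };

// Stand-in for the real pipeline: Vision API labels the photo, then
// GPT-4o estimates macros from the labels. Here we just simulate the
// latency and return fixed mock values.
async function scanMeal(_imageUri: string): Promise<Macros> {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return { calories: 520, protein: 32, carbs: 48, fat: 21 };
}

let pendingScan: Promise<Macros> | null = null;

// Called the moment the image picker returns, before the user has
// reviewed the photo -- the network round trips overlap with the
// confirmation step instead of following it.
function onImageSelected(imageUri: string): void {
  pendingScan = scanMeal(imageUri);
}

// Called when the user taps 'Log'. By then the promise has usually
// already resolved, so this await is near-instant.
async function onLogTapped(): Promise<Macros> {
  if (!pendingScan) throw new Error("No scan in flight");
  return pendingScan;
}
```

The perceived latency drops to whatever time the user spends confirming the photo, which in practice hides most of the two sequential API calls.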

Sleep analytics was surprisingly the most impactful feature. We integrated with HealthKit/Google Fit to pull raw sleep data, then used a lightweight on-device model to compute recovery scores. Users engaged with this more than any other AI feature — because it was proactive. It told them something they didn't already know.
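The post doesn't specify the on-device model, so as a sketch only, here is the general shape of turning raw HealthKit/Google Fit sleep samples into a 0-100 recovery score. The weighted blend of duration and deep-sleep share is an assumed heuristic for illustration, not the app's actual scoring logic.

```typescript
// Simplified nightly sleep sample, as might be aggregated from
// HealthKit or Google Fit stage data.
interface SleepNight {
  totalMinutes: number;
  deepMinutes: number;
}

// Hypothetical recovery heuristic: blend average sleep duration and
// average deep-sleep share into a single 0-100 score.
function recoveryScore(nights: SleepNight[]): number {
  if (nights.length === 0) return 0;
  const avgTotal =
    nights.reduce((sum, n) => sum + n.totalMinutes, 0) / nights.length;
  const avgDeepShare =
    nights.reduce((sum, n) => sum + n.deepMinutes / n.totalMinutes, 0) /
    nights.length;
  // Duration component: full marks at 8 hours (480 min), linear below.
  const duration = Math.min(avgTotal / 480, 1);
  // Deep-sleep component: full marks at a 20% deep-sleep share or more.
  const deep = Math.min(avgDeepShare / 0.2, 1);
  // Weighted blend, scaled to 0-100.
  return Math.round((0.6 * duration + 0.4 * deep) * 100);
}
```

Whatever the exact formula, keeping the computation on-device means the score updates instantly each morning without a network call, which is what makes the proactive framing work.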

The biggest lesson: AI features that feel native to the app flow get used. AI features that require users to stop and interact with them get ignored. Design the AI around the user's existing behavior, not the other way around.