To build a phone app that works as an AI-powered personal assistant, you want a clear path from empty folder to working app on your device. Here is a full start-to-finish tutorial, written so you can follow it even if you are not yet an expert.

I will assume:

  • You want one codebase that runs on both iOS and Android
  • You are comfortable using JavaScript and React Native
  • You will use a cloud AI API (like OpenAI style APIs) rather than training your own model

You can swap tools or frameworks if you prefer, but the steps and concepts stay the same.


1. Decide what your AI assistant should actually do

Before you touch code, define the assistant’s job. Otherwise you end up with a messy app that “kind of” does everything but nothing well.

Answer these questions and write down your choices:

  1. Who is it for?
    • Example: “Busy professionals who want quick summaries, reminders, and reply suggestions.”
  2. What are the core features for version 1?
    Good starting set:
    • Chat with AI: ask questions and get text responses
    • Voice input: talk instead of typing
    • Voice output: assistant can speak answers
    • Simple memory: store basic user preferences or notes locally
    • Quick actions: buttons for common tasks like “Summarize text” or “Draft email reply”
  3. What platforms?
    • iOS and Android using React Native.
  4. Which AI provider?
    • Any reputable provider with a chat style API endpoint.
    • You need:
      • An API key
      • A chat or completions endpoint
      • A model that handles natural language

Write a one paragraph “product sentence” for yourself, like:
“My assistant will let a user talk or type, send that to an AI model through an API, remember a few user preferences locally, and respond with text and optional speech.”

You will refer back to this when making decisions.


2. Set up your development environment

The steps here reflect specific platform and framework choices. I will use React Native with Expo because it simplifies setup and testing.

  1. Install Node.js
    • Install a current LTS release from the official Node.js site.
  2. Install Expo CLI
    • In your terminal: npm install -g expo-cli
    • Note: recent Expo SDKs deprecate the global CLI; if expo init is unavailable, npx create-expo-app ai-assistant-app replaces both this step and the next.
  3. Create your project
    • In your terminal, run the commands one at a time:

      expo init ai-assistant-app
      cd ai-assistant-app

    • Choose a blank template with JavaScript.
  4. Run the starter app
    • Start the development server: npx expo start
    • Install the Expo Go app on your phone.
    • Scan the QR code that appears in your terminal or browser.
    • You should see a basic “Hello world” style React Native app.

Once this works, your environment is ready.


3. Plan your app structure

Before writing features, plan the screens and core components.

Minimum structure:

  • App.js
    • Handles navigation between screens and global context providers.
  • Screens:
    • ChatScreen: where the user talks to the assistant.
    • SettingsScreen: API key, voice options, and preferences.
  • Components:
    • MessageBubble: displays user or assistant messages.
    • InputBar: text input, microphone button, send button.
    • TypingIndicator: shows when the assistant is “thinking.”
  • Services:
    • apiClient.js: handles calls to the AI provider.
    • storage.js: wraps local storage (using something like AsyncStorage).
    • voice.js: wraps speech recognition and text to speech.

This separation keeps your app from turning into one giant file.
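
As a reference for this structure, here is a minimal App.js navigation sketch. It assumes React Navigation, which is a common choice but not required by anything above (install with: npx expo install @react-navigation/native @react-navigation/native-stack react-native-screens react-native-safe-area-context):

// App.js — navigation shell sketch, assuming React Navigation is installed.
import React from "react";
import { NavigationContainer } from "@react-navigation/native";
import { createNativeStackNavigator } from "@react-navigation/native-stack";
import ChatScreen from "./ChatScreen";
import SettingsScreen from "./SettingsScreen";

const Stack = createNativeStackNavigator();

export default function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator initialRouteName="Chat">
        <Stack.Screen name="Chat" component={ChatScreen} />
        <Stack.Screen name="Settings" component={SettingsScreen} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}

For the first milestones you can skip navigation entirely and return ChatScreen directly from App, as done in section 4; swap in this shell once SettingsScreen exists.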


4. Build a basic chat interface (no AI yet)

First, build the chat UI with mock messages. You want to see the interface before wiring up the AI.

  1. Install dependencies for UI and storage
    • For basic async storage: expo install @react-native-async-storage/async-storage
  2. Create ChatScreen.js
    • Starter structure (real code, ready to run):

      import React, { useState } from "react";
      import {
        View,
        Text,
        FlatList,
        TextInput,
        TouchableOpacity,
        StyleSheet
      } from "react-native";

      export default function ChatScreen() {
        const [messages, setMessages] = useState([
          { id: "1", from: "assistant", text: "Hi, I am your AI assistant. How can I help?" }
        ]);
        const [input, setInput] = useState("");

        function handleSend() {
          if (!input.trim()) return;
          const newMessage = {
            id: Date.now().toString(),
            from: "user",
            text: input.trim()
          };
          // Newest message first, because the FlatList below is inverted.
          setMessages(prev => [newMessage, ...prev]);
          setInput("");
        }

        return (
          <View style={styles.container}>
            <FlatList
              data={messages}
              keyExtractor={item => item.id}
              renderItem={({ item }) => (
                <View
                  style={[
                    styles.bubble,
                    item.from === "user" ? styles.userBubble : styles.assistantBubble
                  ]}
                >
                  <Text style={styles.text}>{item.text}</Text>
                </View>
              )}
              inverted
            />
            <View style={styles.inputRow}>
              <TextInput
                style={styles.input}
                value={input}
                onChangeText={setInput}
                placeholder="Ask me anything..."
              />
              <TouchableOpacity style={styles.sendButton} onPress={handleSend}>
                <Text>Send</Text>
              </TouchableOpacity>
            </View>
          </View>
        );
      }

      const styles = StyleSheet.create({
        container: { flex: 1, padding: 10, paddingBottom: 20, backgroundColor: "#101010" },
        bubble: { marginVertical: 4, padding: 10, borderRadius: 12, maxWidth: "80%" },
        userBubble: { alignSelf: "flex-end", backgroundColor: "#2f80ed" },
        assistantBubble: { alignSelf: "flex-start", backgroundColor: "#333333" },
        text: { color: "#ffffff" },
        inputRow: { flexDirection: "row", alignItems: "center" },
        input: {
          flex: 1,
          backgroundColor: "#222222",
          color: "#ffffff",
          borderRadius: 20,
          paddingHorizontal: 12,
          paddingVertical: 8,
          marginRight: 8
        },
        sendButton: { paddingHorizontal: 16, paddingVertical: 8, backgroundColor: "#2f80ed", borderRadius: 20 }
      });
  3. Wire it into App.js
    • Return this screen from App for now:

      import React from "react";
      import ChatScreen from "./ChatScreen";

      export default function App() {
        return <ChatScreen />;
      }
  4. Test on your device
    • You should be able to type messages and see them appear as chat bubbles.
    • There is no AI integration yet. This is just the interface.

5. Connect to your AI provider

Now you will turn the mock chat into a real AI assistant.

  1. Get an API key
    • Sign up with a provider and generate a secret API key.
    • Never hardcode this key directly into a public repo.
  2. Create an environment configuration
    • With Expo, you can use environment variables via app configuration, or for early development store a local constant in a file that is not committed to version control.
    • For example, create config.example.js:

      export const API_KEY = "YOUR_API_KEY_HERE";
      export const API_URL = "https://api.yourprovider.com/v1/chat";
      export const MODEL = "your-model-name";
    • Copy it to config.js locally, add your real key, and add config.js to .gitignore before committing.
  3. Create apiClient.js
    • This file sends messages to your AI provider.
    • Example structure:

      import { API_KEY, API_URL, MODEL } from "./config";

      // messageHistory: array of { role: "user" | "assistant", content: "text" }
      export async function sendChatRequest(messageHistory) {
        const response = await fetch(API_URL, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${API_KEY}`
          },
          body: JSON.stringify({
            model: MODEL,
            messages: messageHistory
          })
        });

        if (!response.ok) {
          const errorText = await response.text();
          throw new Error(`API error: ${errorText}`);
        }

        const data = await response.json();
        // Adjust to your provider's response format
        const aiMessage = data.choices[0].message.content;
        return aiMessage;
      }
  4. Convert UI messages to API format
    • Inside ChatScreen, you need a function that transforms your local messages into the chat format expected by the API.
    • Example helper:

      // messages is the current state in reverse order (newest first)
      function buildMessageHistory(messages) {
        const reversed = [...messages].reverse();
        return reversed.map(m => ({
          role: m.from === "user" ? "user" : "assistant",
          content: m.text
        }));
      }

6. Wire the chat UI to the AI response

Now tie everything together so pressing send triggers an AI response.

  1. Update handleSend in ChatScreen
    • Steps:
      • Add user message to state
      • Build message history including the new user message
      • Show a “typing” indicator
      • Call sendChatRequest
      • Add assistant response to messages
      • Handle errors gracefully
    • Example:

      import React, { useState } from "react";
      import {
        View,
        Text,
        FlatList,
        TextInput,
        TouchableOpacity,
        StyleSheet,
        ActivityIndicator,
        Alert
      } from "react-native";
      import { sendChatRequest } from "./apiClient";

      export default function ChatScreen() {
        const [messages, setMessages] = useState([
          { id: "1", from: "assistant", text: "Hi, I am your AI assistant. How can I help?" }
        ]);
        const [input, setInput] = useState("");
        const [loading, setLoading] = useState(false);

        function buildMessageHistory(withNewUserText) {
          // Include the new user text, since state updates have not landed yet.
          const baseMessages = withNewUserText
            ? [{ id: Date.now().toString(), from: "user", text: withNewUserText }, ...messages]
            : messages;
          const reversed = [...baseMessages].reverse();
          return reversed.map(m => ({
            role: m.from === "user" ? "user" : "assistant",
            content: m.text
          }));
        }

        async function handleSend() {
          const trimmed = input.trim();
          if (!trimmed || loading) return;

          const userMessage = { id: Date.now().toString(), from: "user", text: trimmed };
          setMessages(prev => [userMessage, ...prev]);
          setInput("");
          setLoading(true);

          try {
            const history = buildMessageHistory(trimmed);
            const reply = await sendChatRequest(history);
            const assistantMessage = {
              id: (Date.now() + 1).toString(),
              from: "assistant",
              text: reply
            };
            setMessages(prev => [assistantMessage, ...prev]);
          } catch (err) {
            Alert.alert("Error", err.message || "Something went wrong with the AI request.");
          } finally {
            setLoading(false);
          }
        }

        return (
          <View style={styles.container}>
            <FlatList
              data={messages}
              keyExtractor={item => item.id}
              renderItem={({ item }) => (
                <View
                  style={[
                    styles.bubble,
                    item.from === "user" ? styles.userBubble : styles.assistantBubble
                  ]}
                >
                  <Text style={styles.text}>{item.text}</Text>
                </View>
              )}
              inverted
            />
            {loading && (
              <View style={styles.typingRow}>
                <ActivityIndicator size="small" />
                <Text style={styles.typingText}>Assistant is thinking...</Text>
              </View>
            )}
            <View style={styles.inputRow}>
              <TextInput
                style={styles.input}
                value={input}
                onChangeText={setInput}
                placeholder="Ask me anything..."
                placeholderTextColor="#888"
              />
              <TouchableOpacity style={styles.sendButton} onPress={handleSend}>
                <Text style={{ color: "#fff" }}>Send</Text>
              </TouchableOpacity>
            </View>
          </View>
        );
      }

      const styles = StyleSheet.create({
        container: { flex: 1, padding: 10, paddingBottom: 20, backgroundColor: "#101010" },
        bubble: { marginVertical: 4, padding: 10, borderRadius: 12, maxWidth: "80%" },
        userBubble: { alignSelf: "flex-end", backgroundColor: "#2f80ed" },
        assistantBubble: { alignSelf: "flex-start", backgroundColor: "#333333" },
        text: { color: "#ffffff" },
        inputRow: { flexDirection: "row", alignItems: "center", marginTop: 4 },
        input: {
          flex: 1,
          backgroundColor: "#222222",
          color: "#ffffff",
          borderRadius: 20,
          paddingHorizontal: 12,
          paddingVertical: 8,
          marginRight: 8
        },
        sendButton: { paddingHorizontal: 16, paddingVertical: 8, backgroundColor: "#2f80ed", borderRadius: 20 },
        typingRow: { flexDirection: "row", alignItems: "center", paddingVertical: 4 },
        typingText: { marginLeft: 8, color: "#ccc" }
      });
  2. Test it
    • Type a question and tap send.
    • You should see your message, then after a short delay, the assistant's reply.

At this point you have a functioning AI chat assistant.


7. Add voice input (speech to text)

Now make the assistant feel more like a real personal assistant by adding voice input.

  1. Choose a voice library
    • With Expo, you can use expo-speech for text to speech and a separate solution for speech recognition, such as integrating native modules or a web based speech service.
    • For simplicity in this tutorial, we focus on text to speech in the app and treat voice input as an optional extension if you are comfortable adding native modules.
  2. Add a microphone button
    • Change your InputRow to include a mic icon button.
    • At first, this can just show an alert saying “voice input not implemented yet” so the layout is ready (a sketch follows at the end of this section).
    • Later you can integrate a speech recognition SDK following that provider’s documentation.
  3. Integrate real speech recognition (optional advanced step)
    • This will vary by provider and platform.
    • The pattern is:
      • Press and hold mic button
      • Start listening
      • Convert speech to text
      • Fill the input field with the recognized text
      • User taps send or auto send after recognition

Focus on the text based assistant first. Voice input can be a second phase.
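
As a placeholder for step 2, here is a minimal mic button sketch that only shows an alert. The component name and emoji label are illustrative, not part of the tutorial code above:

import React from "react";
import { TouchableOpacity, Text, Alert } from "react-native";

// Placeholder mic button: reserves the layout slot in the input row now,
// so you can swap the onPress body for a real recognizer later.
export default function MicButton() {
  function handlePress() {
    Alert.alert("Voice input", "Voice input not implemented yet.");
  }

  return (
    <TouchableOpacity onPress={handlePress} style={{ paddingHorizontal: 12 }}>
      <Text>🎤</Text>
    </TouchableOpacity>
  );
}

Render it inside the inputRow in ChatScreen, next to the TextInput.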


8. Add voice output (assistant speaking)

Text to speech is easier and adds a lot of personality.

  1. Install Expo speech
    • In your project: expo install expo-speech
  2. Create voice.js:

      import * as Speech from "expo-speech";

      export function speak(text, options = {}) {
        Speech.speak(text, {
          language: options.language || "en-US",
          pitch: options.pitch || 1.0,
          rate: options.rate || 1.0
        });
      }

      export function stopSpeaking() {
        Speech.stop();
      }
  3. Call speak after receiving assistant replies
    • In ChatScreen, inside the part where you handle the assistant message:

      import { speak } from "./voice";

      // ...
      const assistantMessage = {
        id: (Date.now() + 1).toString(),
        from: "assistant",
        text: reply
      };
      setMessages(prev => [assistantMessage, ...prev]);
      speak(reply);
  4. Add a toggle in settings
    • Later you can add a setting “Voice responses on or off” and store it in AsyncStorage, then only call speak if that setting is true.
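
Here is a sketch of how that toggle could gate speech. It assumes a voiceEnabled preference saved with the storage.js helpers introduced in the next section; the field name is illustrative:

import { speak } from "./voice";
import { loadPreferences } from "./storage"; // defined in section 9

// Call this instead of speak(reply) after receiving an assistant reply.
async function maybeSpeak(reply) {
  const prefs = await loadPreferences();
  if (prefs.voiceEnabled !== false) {
    // Speak by default; stay silent only if the user turned voice off.
    speak(reply);
  }
}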

9. Add simple memory and personalization

A personal assistant should remember something about the user.

  1. Decide what to remember
    • Examples:
      • User name
      • Preferred tone (formal, casual)
      • Common tasks (like “summarize,” “schedule,” “draft email”)
  2. Store preferences locally
    • Create storage.js:

      import AsyncStorage from "@react-native-async-storage/async-storage";

      const PREFS_KEY = "user_prefs";

      export async function savePreferences(prefs) {
        await AsyncStorage.setItem(PREFS_KEY, JSON.stringify(prefs));
      }

      export async function loadPreferences() {
        const raw = await AsyncStorage.getItem(PREFS_KEY);
        if (!raw) return {};
        try {
          return JSON.parse(raw);
        } catch {
          return {};
        }
      }
  3. Use preferences to shape prompts
    • When constructing your message history, you can prepend a “system” style instruction if your provider supports it.
    • For example, combine:
      • “You are a helpful assistant”
      • “The user’s name is Alex”
      • “The user prefers concise answers”
    • You can do this by adding a special first message in buildMessageHistory based on preferences, as shown in the sketch after this list.
  4. Load preferences on startup
    • Use useEffect in ChatScreen or in a context provider to load preferences from storage and keep them in state.
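
For step 3, here is a sketch of a buildMessageHistory variant that prepends a preference-driven system message. It assumes your provider accepts a "system" role (OpenAI-style chat APIs do); the preference field names are illustrative:

// prefs comes from loadPreferences() in storage.js
function buildMessageHistory(messages, prefs = {}) {
  const instructions = [
    "You are a helpful assistant.",
    prefs.name ? `The user's name is ${prefs.name}.` : "",
    prefs.tone === "concise" ? "The user prefers concise answers." : ""
  ].filter(Boolean).join(" ");

  const reversed = [...messages].reverse(); // state is newest-first
  return [
    { role: "system", content: instructions },
    ...reversed.map(m => ({
      role: m.from === "user" ? "user" : "assistant",
      content: m.text
    }))
  ];
}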

10. Add quick actions for common tasks

To feel like a real personal assistant, provide tappable shortcuts.

Examples:

  • Button row above the keyboard:
    • “Summarize”
    • “Draft email reply”
    • “Plan my day”

How it works:

  • When the user taps “Summarize,” you can prefill the input with a template, or send a more structured prompt behind the scenes.

Example:

function handleQuickAction(action) {
  if (action === "SUMMARIZE") {
    setInput("Summarize the following text in 3 bullet points:\n");
  }
}

You can design a row of buttons at the top or bottom of the screen and wire them to these handlers.


11. Polish the UX

Now you have all the core functionality. Time to make it feel good to use.

Key polish steps:

  1. Loading state feedback
    • Disable the send button while loading.
    • Make the loading state clear with the “Assistant is thinking” indicator.
  2. Error messages
    • Show user friendly alerts if the network fails or the API throws an error.
    • Optionally add a “Retry last message” button.
  3. Message history persistence
    • You can save past messages to AsyncStorage so the conversation is restored when the user reopens the app.
    • Save messages whenever they change, and load them on startup; see the sketch after this list.
  4. Theming
    • Implement light and dark themes.
    • Add a setting in the Settings screen to toggle theme.
  5. Accessibility
    • Make sure text is readable.
    • Use proper touch target sizes.
    • Respect system font scaling when possible.
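
For step 3, here is a minimal persistence sketch inside ChatScreen, assuming the messages state from section 6 and a storage key name chosen for illustration:

import { useEffect, useState } from "react";
import AsyncStorage from "@react-native-async-storage/async-storage";

const CHAT_KEY = "chat_history"; // illustrative key name

// Inside ChatScreen, alongside the existing state:
const [hydrated, setHydrated] = useState(false);

// Load saved messages once on startup.
useEffect(() => {
  AsyncStorage.getItem(CHAT_KEY).then(raw => {
    if (raw) setMessages(JSON.parse(raw));
    setHydrated(true);
  });
}, []);

// Save messages whenever they change, but only after the initial load,
// so the default greeting does not overwrite a stored conversation.
useEffect(() => {
  if (hydrated) {
    AsyncStorage.setItem(CHAT_KEY, JSON.stringify(messages));
  }
}, [hydrated, messages]);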

12. Test on real devices

You already tested using Expo Go, but now you want to push it harder.

Checklist:

  • Try different network conditions
  • Ask varied questions including:
    • Very short questions
    • Long tasks
    • Follow up questions that refer to earlier context
  • Switch between typing and listening to voice responses
  • Close and reopen the app to confirm:
    • Conversation history persists if you implemented it
    • Preferences persist

Fix any crashes or obvious UI glitches before publishing.


13. Prepare for release

When you are ready to share your assistant with others:

  1. Create app icons, screenshots, and description
    • Show your chat interface and unique features.
    • Highlight that it is “AI powered” and mention key tasks.
  2. Configure app name and icon
    • In Expo, update app.json with the following (a sketch follows after this list):
      • name
      • slug
      • icons and splashes
  3. Build apps
    • Follow the Expo build instructions for:
      • Android APK or AAB
      • iOS build for TestFlight
  4. Submit to stores
    • Create Google Play and Apple App Store listings.
    • Fill in privacy policy and data usage details.
    • Clarify that your app calls an external AI API and describe what data is sent.
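
For step 2, here is a minimal app.json sketch; the values are placeholders and the asset paths assume the default Expo project layout:

{
  "expo": {
    "name": "AI Assistant",
    "slug": "ai-assistant-app",
    "icon": "./assets/icon.png",
    "splash": {
      "image": "./assets/splash.png",
      "backgroundColor": "#101010"
    }
  }
}

For step 3, current Expo tooling builds through EAS: install eas-cli, log in, then run eas build -p android or eas build -p ios and follow the prompts.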

14. Iterate and improve the assistant

Once you have version 1, you can begin adding smarter behavior.

Ideas:

  1. Task centric modes
    • “Focus mode” for planning deep work sessions.
    • “Health mode” for summarizing workouts and meals.
    • Each mode can change the prompt instructions you send with each request (see the sketch after this list).
  2. Background reminders
    • Integrate push notifications and local reminders for tasks the assistant helps schedule.
  3. Better memory
    • Store user notes or important facts in local storage or a secure backend.
    • Use them to make responses more personal.
  4. Multi step workflows
    • For example:
      • “Plan my week” triggers a workflow that asks follow up questions and then gives a tailored schedule.
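
As one sketch of idea 1, each mode could map to its own instruction string that you prepend as the system message (mode names and wording are illustrative):

// Hypothetical per-mode instructions; feed the result into the
// system message built in section 9.
const MODE_INSTRUCTIONS = {
  focus: "Help the user plan deep work sessions. Be brief and structured.",
  health: "Summarize workouts and meals. Be concrete and encouraging."
};

function instructionsForMode(mode) {
  return MODE_INSTRUCTIONS[mode] || "You are a helpful assistant.";
}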

Summary

You just saw the complete path from nothing to a working AI powered personal assistant app:

  1. Define the assistant’s purpose and features
  2. Set up React Native and Expo
  3. Build a chat UI with local state
  4. Connect to an AI API through an apiClient
  5. Wire messages to the model with proper history
  6. Add voice output with text to speech
  7. Add simple memory using local storage
  8. Add quick actions for common tasks
  9. Polish UX and handle errors
  10. Test on real devices
  11. Build and publish to app stores
  12. Iterate with smarter features

Once you have settled on your exact platform choice and AI provider, adapt these steps into concrete code tailored for your stack.

