Citizen Devs + React: Building a Restaurant Recommender App with LLM Prompts and Hooks
Recreate Rebecca Yu’s dining app: a step-by-step guide using React hooks, serverless endpoints, and Claude prompts—built for non-dev editors.
Hook: Stop decision fatigue — build a tiny recommender that your friends will actually use
If you’ve ever been stuck in a group chat trying to pick a restaurant, you know the pain: endless options, opinions clashing, and consensus never arriving. For busy teams and citizen developers in 2026, the solution is often a fast, private micro app—not a full product launch. Rebecca Yu’s Where2Eat is a great example of vibe-coding: a one-week, practical app that solves one decision problem for a small group.
In this tutorial you'll recreate that idea using modern tools: React hooks for UI, lightweight serverless functions for safe LLM calls, and prompt engineering with Claude to produce recommendations. The UX will be intentionally simple so non-technical editors can add restaurants via Airtable (or another spreadsheet-style CMS) and start recommending in minutes.
What you'll build — architecture and goals
High level: a React single-page app that queries a serverless endpoint. The endpoint collects restaurants from an editor-friendly source (Airtable), builds a concise prompt, asks Claude for ranked recommendations, and returns structured JSON to the client.
- Frontend: React + TypeScript, a custom useRecommendations hook, and a minimal accessible UI for voters and editors.
- Editor: Airtable (or Google Sheets) used by non-devs to edit restaurants; the serverless layer reads it.
- Serverless: Vercel, Netlify, or AWS Lambda functions host two endpoints: /api/restaurants and /api/recommend.
- LLM: Anthropic’s Claude (2025–2026 models). The server crafts a JSON-output prompt and validates the model output.
Design goals: fast iteration, safe API keys (never exposed to the browser), simple UX for non-developers, and cost control (caching + short prompts).
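One way to pin down the contract between these layers is a pair of TypeScript types. The field names below are illustrative; they mirror the Airtable schema introduced later, not a fixed API.

```typescript
// Illustrative types for the data flowing from serverless endpoints to the client.
// Field names are assumptions matching the editor schema described below.
interface Restaurant {
  id: string;
  name: string;
  cuisine: string;
  tags: string[];
  price_range: 1 | 2 | 3;
  short_desc: string;
}

// What /api/recommend returns per item: a restaurant plus the model's ranking
interface Recommendation extends Restaurant {
  rank: number;    // 1 = best match
  reason: string;  // one-line explanation from the model
}

// A sample record, to show the shape concretely
const sample: Restaurant = {
  id: 'rec123',
  name: 'Salsa',
  cuisine: 'Spanish',
  tags: ['romantic', 'vegan'],
  price_range: 2,
  short_desc: 'Intimate tapas bar'
};
```

Agreeing on these shapes up front keeps the hook, the endpoints, and the prompt output format in sync.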
Why this matters in 2026
In late 2025 and into 2026 micro apps and vibe-coding are mainstream. Mainstream device vendors are embedding strong assistants (the Apple-Google Gemini partnership shifted expectations around on-device and hybrid assistants), and teams expect quick, private tools that solve one problem. Using Claude for small, curated recommendation tasks is both cost-effective and pragmatic. The pattern below fits that reality: small DB, server-side prompt engineering, and a responsive React hook-based frontend.
Prerequisites
- Node 18+ / a modern frontend toolchain (Vite or Next.js)
- Account for your serverless provider (Vercel, Netlify, or AWS)
- Airtable (or Google Sheets) base with restaurant rows, or a JSON file hosted safely
- An Anthropic (Claude) API key — store it in serverless env vars
- Familiarity with React hooks and basic accessibility practices
Step 1 — Schema & Editor for non-devs
Keep the schema tiny. Non-technical editors should have a few fields to tune recommendations:
- name (string)
- cuisine (string)
- tags (array: “romantic”, “outdoor”, “vegan”)
- price_range (1-3)
- vibes (short text: “cozy, buzzy, casual”)
- short_desc and address
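To make the schema concrete, here is a sketch of normalizing a raw Airtable record into that shape. The capitalized field names (Name, Cuisine, Tags, Price, Vibes, Description, Address) are assumptions about how editors will name columns in the base.

```typescript
// Hypothetical Airtable record shape; field names are assumed, not prescribed.
type AirtableRecord = { id: string; fields: Record<string, any> };

function normalizeRecord(rec: AirtableRecord) {
  return {
    id: rec.id,
    name: rec.fields.Name ?? '',
    cuisine: rec.fields.Cuisine ?? '',
    tags: rec.fields.Tags ?? [],
    price_range: rec.fields.Price ?? 2, // default to mid-range if editors leave it blank
    vibes: rec.fields.Vibes ?? '',
    short_desc: rec.fields.Description ?? '',
    address: rec.fields.Address ?? ''
  };
}

// Missing fields fall back to sensible defaults rather than undefined
const normalized = normalizeRecord({
  id: 'rec1',
  fields: { Name: 'Salsa', Cuisine: 'Spanish', Tags: ['romantic'] }
});
```

Defaulting here means a half-filled row from an editor never breaks the prompt builder downstream.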
Use Airtable for its spreadsheet-style UX. Create a base and share edit access with your non-dev editors. The app will read that base via a serverless endpoint. This keeps API keys off the client and lets editors manage data without touching code.
Quick Airtable tips
- Limit the number of records for initial builds (under 200) so you can include them directly in prompts when appropriate.
- Use single-select fields for tags where possible (consistency helps prompt matching).
- Keep a “notes for editorial” field so non-devs can add context for the LLM (weekday specials, closures).
Step 2 — Serverless: fetch restaurants safely
Create a serverless endpoint that reads your Airtable base and returns normalized JSON. Keep the endpoint lightweight and cache results for 5–15 minutes to cut API calls.
// /api/restaurants.ts (Node / Vercel-style; Node 18+ has a global fetch)
export default async function handler(req, res) {
  const AIRTABLE_KEY = process.env.AIRTABLE_API_KEY;
  const BASE_ID = process.env.AIRTABLE_BASE_ID;
  const r = await fetch(`https://api.airtable.com/v0/${BASE_ID}/Restaurants?view=Grid%20view`, {
    headers: { Authorization: `Bearer ${AIRTABLE_KEY}` }
  });
  if (!r.ok) {
    res.status(502).json({ error: 'Airtable request failed' });
    return;
  }
  const data = await r.json();
  const rows = (data.records || []).map(rec => ({
    id: rec.id,
    name: rec.fields.Name,
    cuisine: rec.fields.Cuisine,
    tags: rec.fields.Tags || [],
    price_range: rec.fields.Price || 2,
    short_desc: rec.fields.Description || ''
  }));
  // Cache at the CDN for 5 minutes; serve stale for up to 10 more while revalidating
  res.setHeader('Cache-Control', 's-maxage=300, stale-while-revalidate=600');
  res.json(rows);
}
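If you want caching beyond CDN headers (for instance, to avoid re-fetching Airtable on every invocation of a warm serverless instance), a tiny in-memory TTL cache is enough. This is a sketch: it lives only as long as the instance and is not shared across regions.

```typescript
// Minimal in-memory TTL cache; survives only for the lifetime of a warm instance.
type Entry<T> = { value: T; expires: number };

function createTtlCache<T>(ttlMs: number) {
  const store = new Map<string, Entry<T>>();
  return {
    get(key: string): T | undefined {
      const e = store.get(key);
      if (!e) return undefined;
      if (Date.now() > e.expires) {
        store.delete(key); // expired entries are dropped lazily on read
        return undefined;
      }
      return e.value;
    },
    set(key: string, value: T) {
      store.set(key, { value, expires: Date.now() + ttlMs });
    }
  };
}

// Module-level instance, reused across invocations while the function stays warm
const cache = createTtlCache<string[]>(5 * 60 * 1000); // 5 minutes
cache.set('restaurants', ['rec1', 'rec2']);
```

Inside the handler you would check `cache.get('restaurants')` before calling Airtable and `cache.set` after a successful fetch.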
Step 3 — Prompt engineering on the server
Never call the LLM directly from the browser. Your server will build a context window: a short list of candidate restaurants plus a tightly-constrained prompt that asks Claude to return a JSON array ranked by suitability. Constrain output format strictly to reduce hallucination and simplify parsing.
// /api/recommend.ts (Node 18+; global fetch)
export default async function handler(req, res) {
  const { preferences } = req.body; // { members: [...], vibe: 'date night', price_max: 2 }
  // Fetch restaurants (or read from cache)
  const restaurantsResp = await fetch(`${process.env.BASE_URL}/api/restaurants`);
  const restaurants = await restaurantsResp.json();
  // Pre-filter before prompting to keep token cost down
  const candidates = restaurants.filter(r => r.price_range <= preferences.price_max);
  const candidateText = candidates
    .map(r => `- id:${r.id} name:${r.name} cuisine:${r.cuisine} tags:${r.tags.join(',')}`)
    .join('\n');
  const systemPrompt = `You are a friendly restaurant recommender. Given a small list of restaurants and a group's shared preferences, return a JSON array of up to 5 recommendations sorted best-to-worst. Each item must include {id, name, rank, reason}. Do not invent restaurants or add fields.`;
  const userPrompt = `Restaurants:\n${candidateText}\n\nPreferences:\n${JSON.stringify(preferences)}\n\nReturn strictly JSON.`;
  // Call Anthropic's Messages API — the key stays server-side
  const resp = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': process.env.ANTHROPIC_KEY,
      'anthropic-version': '2023-06-01'
    },
    body: JSON.stringify({
      model: 'claude-3-5-haiku-latest', // pick a current low-cost model
      max_tokens: 500,
      temperature: 0.2,
      system: systemPrompt,
      messages: [{ role: 'user', content: userPrompt }]
    })
  });
  const body = await resp.json();
  const text = body.content?.[0]?.text || '';
  // Basic parse + defensive validation
  let parsed = [];
  try { parsed = JSON.parse(text); } catch (e) { parsed = []; }
  // Enrich with full restaurant metadata; drop any ids the model invented
  const enriched = parsed
    .map(p => {
      const match = candidates.find(c => c.id === p.id);
      return match ? { ...match, rank: p.rank, reason: p.reason } : null;
    })
    .filter(Boolean);
  res.json({ results: enriched });
}
Notes:
- Keep temperature low (0–0.3) to reduce creative hallucination and ensure consistent JSON.
- Provide concrete constraints in the system prompt and include the candidate list to avoid invention.
- Validate the model response server-side and fall back to heuristics if parsing fails.
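The fallback heuristic mentioned above can be as simple as scoring tag overlap plus price fit. This sketch assumes the normalized restaurant shape and a preferences object with `must_have_tags` and `price_max` fields (names are illustrative).

```typescript
// Fallback when the LLM response fails to parse: score = tag matches + price fit.
type Candidate = { id: string; tags: string[]; price_range: number };
type Prefs = { must_have_tags: string[]; price_max: number };

function heuristicRank(candidates: Candidate[], prefs: Prefs) {
  return candidates
    .map(c => {
      const tagScore = c.tags.filter(t => prefs.must_have_tags.includes(t)).length;
      const priceScore = c.price_range <= prefs.price_max ? 1 : 0;
      return { ...c, score: tagScore + priceScore };
    })
    .sort((a, b) => b.score - a.score); // best match first
}

const ranked = heuristicRank(
  [
    { id: 'a', tags: ['outdoor'], price_range: 3 },
    { id: 'b', tags: ['outdoor', 'vegan'], price_range: 2 }
  ],
  { must_have_tags: ['outdoor', 'vegan'], price_max: 2 }
);
// 'b' wins: two tag matches plus a price fit (score 3) vs. one tag match (score 1)
```

The point is not sophistication: any deterministic ordering beats an empty results screen when parsing fails.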
Step 4 — React hook: useRecommendations
Encapsulate fetching, debouncing, caching, and error handling in a hook so components stay simple.
import { useState, useEffect, useRef } from 'react';

export function useRecommendations(preferences) {
  const [results, setResults] = useState(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);
  const abortRef = useRef(null);

  useEffect(() => {
    if (!preferences) return;
    let cancelled = false;
    setLoading(true);
    setError(null);

    // Serve repeat queries from sessionStorage
    const cacheKey = JSON.stringify(preferences);
    const cached = sessionStorage.getItem(cacheKey);
    if (cached) {
      setResults(JSON.parse(cached));
      setLoading(false);
      return;
    }

    const controller = new AbortController();
    abortRef.current = controller;
    const timer = setTimeout(async () => {
      try {
        const r = await fetch('/api/recommend', {
          method: 'POST',
          body: JSON.stringify({ preferences }),
          headers: { 'Content-Type': 'application/json' },
          signal: controller.signal
        });
        if (!r.ok) throw new Error(`Request failed: ${r.status}`);
        const j = await r.json();
        if (!cancelled) {
          setResults(j.results);
          sessionStorage.setItem(cacheKey, JSON.stringify(j.results));
        }
      } catch (e) {
        // An intentional abort is not an error worth surfacing
        if (!cancelled && e.name !== 'AbortError') setError(e);
      } finally {
        if (!cancelled) setLoading(false);
      }
    }, 300); // debounce rapid preference changes

    return () => { cancelled = true; controller.abort(); clearTimeout(timer); };
  }, [preferences]);

  return { results, loading, error, cancel: () => abortRef.current?.abort() };
}
Step 5 — Build the UI and editor-friendly interactions
UX principles for non-dev editors and voters:
- Expose only a few controls: vibe presets ("date night", "outdoor brunch"), max price, and must-have tags.
- Show ranked results with one-line reasons from Claude (short, actionable).
- Provide an "editor mode" (protected by a simple token) that opens Airtable for edits or shows an in-app form that writes to Airtable via a serverless update endpoint.
- Accessibility: labels, keyboard interactions, ARIA roles, contrast checks.
Example component skeleton:
function RecommenderApp() {
  const [prefs, setPrefs] = useState({ vibe: 'casual', price_max: 2, members: [] });
  const { results, loading, error } = useRecommendations(prefs);
  return (
    <div>
      <PreferencesForm value={prefs} onChange={setPrefs} />
      {loading ? <p>Loading…</p> : null}
      {error ? <p role="alert">Error: {String(error)}</p> : null}
      <ResultsList items={results} />
    </div>
  );
}
Prompt templates & few-shot examples (practical)
Good prompts are precise. Use a small few-shot set and a strict JSON schema. Example system prompt (trimmed):
System: You are a concise restaurant recommender. Input: a small list of restaurants with id, name, cuisine, tags. Also input: preferences (members, vibe, price_max). Output: JSON array of objects [ {id, name, rank, reason} ].
Example:
Input Restaurants: ...
Preferences: {"vibe":"date night","members":["A likes spicy","B is vegetarian"]}
Output: [{"id":"rec1","name":"Salsa","rank":1,"reason":"Has intimate seating and good vegetarian tapas."}]
Keep examples short. If your dataset is small, include the exact list in the prompt so the model can only pick from those candidates.
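Because even a well-constrained model occasionally wraps its JSON in code fences or slips in stray items, it pays to run the raw text through a defensive parser before trusting it. This is a sketch of one approach; the shape it validates matches the schema in the prompt above.

```typescript
// Defensively parse model output: strip markdown code fences, then validate each item.
type Rec = { id: string; name: string; rank: number; reason: string };

function parseRecommendations(raw: string, validIds: Set<string>): Rec[] {
  // Models sometimes wrap output in markdown fences despite instructions
  const cleaned = raw.replace(/`{3}(?:json)?/g, '').trim();
  let parsed: unknown;
  try { parsed = JSON.parse(cleaned); } catch { return []; }
  if (!Array.isArray(parsed)) return [];
  return parsed.filter((p: any) =>
    typeof p?.id === 'string' &&
    typeof p?.name === 'string' &&
    typeof p?.rank === 'number' &&
    typeof p?.reason === 'string' &&
    validIds.has(p.id) // drop hallucinated ids outright
  );
}

// Simulate a fenced response (built programmatically to keep this example readable)
const fence = '`'.repeat(3);
const ok = parseRecommendations(
  fence + 'json\n[{"id":"rec1","name":"Salsa","rank":1,"reason":"Cozy."}]\n' + fence,
  new Set(['rec1'])
);
```

Anything that fails validation returns an empty array, which is exactly the signal your fallback heuristic needs.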
Cost, latency, and reliability strategies
- Cache results by query hash for 5–30 minutes depending on how dynamic your data is.
- Pre-filter candidates before the prompt to minimize tokens and cost.
- Use a lower-cost Claude model for ranking and a higher-quality one only when needed for long explanations.
- Graceful degradation: if the LLM fails, fall back to a simple heuristic sorter (score = tag matches + price match).
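Caching "by query hash" only works if equivalent preference objects serialize identically, so sort object keys before hashing. A sketch using Node's crypto module:

```typescript
import { createHash } from 'node:crypto';

// Stable serialization: sort object keys so {a:1,b:2} and {b:2,a:1} hash alike.
function stableStringify(value: any): string {
  if (value === null || typeof value !== 'object') return JSON.stringify(value);
  if (Array.isArray(value)) return '[' + value.map(stableStringify).join(',') + ']';
  return '{' + Object.keys(value).sort()
    .map(k => JSON.stringify(k) + ':' + stableStringify(value[k]))
    .join(',') + '}';
}

// Fixed-length cache key, safe to use as a storage key or filename
function cacheKey(preferences: object): string {
  return createHash('sha256').update(stableStringify(preferences)).digest('hex');
}

const k1 = cacheKey({ vibe: 'date night', price_max: 2 });
const k2 = cacheKey({ price_max: 2, vibe: 'date night' });
// k1 === k2: property order no longer matters
```

Without the stable serialization step, two users with the same preferences in a different order would miss each other's cached results.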
Security and best practices
- Never expose your Anthropic key in client code. Use serverless env vars and rotate keys periodically.
- Limit the editor endpoint with simple auth (signed token or NextAuth) so only invited editors can change restaurants.
- Monitor usage and set quotas; LLM usage can escalate quickly if a public link leaks.
- Log prompts and responses for debugging—but scrub PII before storing logs.
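For the "simple auth" on the editor endpoint, compare tokens with a constant-time check rather than `===`. This sketch assumes the expected token lives in a serverless env var; the header name is illustrative.

```typescript
import { timingSafeEqual } from 'node:crypto';

// Constant-time comparison avoids leaking the token's length or prefix via timing.
function isValidEditorToken(provided: string, expected: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  if (a.length !== b.length) return false; // timingSafeEqual requires equal lengths
  return timingSafeEqual(a, b);
}

// In a handler, you might gate writes like this (hypothetical header name):
// if (!isValidEditorToken(String(req.headers['x-editor-token'] ?? ''), process.env.EDITOR_TOKEN ?? '')) {
//   res.status(401).json({ error: 'unauthorized' });
//   return;
// }
```

For invited-editor lists beyond a shared token, a session-based option like NextAuth (mentioned above) is the next step up.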
Testing and observability
Write unit tests for your hook and serverless endpoints. Mock the LLM with canned JSON to ensure your parsing logic stands up. Add uptime checks and simple synthetic monitoring for latency spikes—recommendations are user-facing and need to be snappy.
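Mocking the LLM is easiest if the model call is injected rather than hard-coded, so tests can swap in canned JSON. This sketch shows the pattern without committing to a test framework; `recommend`, `fakeAsk`, and `brokenAsk` are illustrative names.

```typescript
// Inject the model call so tests never touch the network.
type AskModel = (prompt: string) => Promise<string>;

async function recommend(prompt: string, askModel: AskModel) {
  const text = await askModel(prompt);
  try {
    const parsed = JSON.parse(text);
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return []; // fallback path: caller applies the heuristic sorter
  }
}

// Canned responses stand in for Claude in tests
const fakeAsk: AskModel = async () =>
  '[{"id":"rec1","name":"Salsa","rank":1,"reason":"Cozy."}]';
const brokenAsk: AskModel = async () =>
  'Sorry, I cannot help with that.'; // exercises the parse-failure branch
```

The same injection point makes it trivial to test the fallback path with deliberately malformed output.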
Advanced options and future-proofing (2026 trends)
- Embeddings & similarity search: If your dataset grows (>500 rows), create embeddings for restaurant descriptions (via a provider such as OpenAI or Voyage; Anthropic does not currently offer its own embeddings API) and do a vector similarity pass to select the most relevant 20 candidates before prompting the LLM.
- Hybrid on-device + cloud: 2026 devices increasingly support compact LLMs for instant heuristics. Use an on-device model to produce a quick shortlist and ask Claude to finalize the ranked list for nuance.
- Personalization: store per-user preference vectors (with consent) and add them to the prompt for personalized ordering.
- Multimodal: add photos and let the LLM comment on ambiance; use Vision-capable Claude models carefully and cache outputs.
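The embeddings pass described above boils down to cosine similarity over stored vectors. This sketch assumes you already have an embedding stored alongside each restaurant and a query embedding for the group's preferences (the two-dimensional vectors below are toy values for illustration).

```typescript
// Pick the top-k candidates by cosine similarity to a query embedding.
type Embedded = { id: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1); // guard against zero vectors
}

function shortlist(query: number[], rows: Embedded[], k = 20): Embedded[] {
  return rows
    .map(r => ({ row: r, score: cosine(query, r.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(x => x.row);
}

const top = shortlist([1, 0], [
  { id: 'a', embedding: [0, 1] },    // orthogonal to the query
  { id: 'b', embedding: [0.9, 0.1] } // nearly aligned with the query
], 1);
// top[0].id === 'b'
```

A linear scan like this is fine at a few hundred rows; a vector database only becomes worthwhile well beyond that.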
Example prompt engineering checklist
- Always include the full candidate list or a validated shortlist.
- Define a strict JSON schema in the prompt and ask the model to adhere to it.
- Set temperature low for deterministic outputs.
- Include 1–2 few-shot examples in the system message.
- Post-validate the model response, and have a fallback heuristic.
Actionable takeaways
- Start with Airtable + serverless functions to separate editing from API keys.
- Use a single useRecommendations hook that handles debouncing, caching, and aborts.
- Constrain the LLM with a strict JSON schema and low temperature to avoid hallucinations.
- Cache aggressively for common queries to reduce latency and cost.
- Make the editor experience simple — non-devs should be able to add, tag, and reorder restaurants in under a minute.
"Vibe-coding and micro apps let people solve their own problems quickly. For teams, that means faster wins and less decision friction." — Practitioner insight, 2026
Next steps and where to go from here
Once the MVP works, iterate with metrics: measure time-to-decision, click-throughs on top-ranked suggestions, and editor friction. If you see recurring patterns (e.g., many users prefer "outdoor" spots), add a preset and surface filters for that tag. For larger deployments, add embeddings, personalized preference storage (with explicit consent), and per-user rate limiting.
Call to action
Ready to build your own Where2Eat? Clone a starter repo, wire up an Airtable base, and deploy serverless endpoints — you'll have a usable recommender in a few hours. If you want a curated starter with the hooks, serverless templates, and tested prompt templates for Claude, grab the starter kit linked below and iterate fast.
Build fast. Keep keys server-side. Tune prompts. Ship a small, useful app today.