The AI Use Case Finder walks you through a short decision tree to identify the right AI approach for your task — whether that's RAG, fine-tuning, a vision API, or another solution. Answer a few questions and get a practical recommendation.
How to Use the AI Use Case Finder
Choosing the right AI approach before building saves weeks of wasted effort. The AI Use Case Finder guides you through the key decision points — content type, task, and constraints — to recommend the most practical solution.
Step 1: Select Your Content Type
Start by identifying what kind of data you're working with: text documents, images, audio recordings, video, code, or structured tabular data. Each content type calls for fundamentally different approaches and model families.
Step 2: Define Your Task
Within each content type, there are several distinct tasks. For text, generating new content differs from extracting entities, which differs from semantic search. This second question narrows the field from 30+ potential approaches to a handful of relevant ones.
Step 3: Review Your Recommendation
The result card shows the recommended AI approach with a short description, example models to consider, cost indicator, complexity rating, and a practical first step to get started. Use the "Back" button to explore alternative paths if the first recommendation doesn't fit your constraints.
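The three steps above amount to a two-level lookup: content type, then task, then a recommendation. Here is a minimal sketch of that flow in Python; the categories and recommendations are illustrative placeholders, not the tool's actual data or implementation.

```python
# Illustrative two-level decision tree: content type -> task -> approach.
# These entries are hypothetical examples, not the finder's real dataset.
DECISION_TREE = {
    "text": {
        "generate": "Prompt Engineering",
        "search": "RAG (Retrieval-Augmented Generation)",
        "extract": "Fine-tuning or Prompt Engineering",
    },
    "image": {
        "classify": "Vision API",
        "caption": "Multimodal model (e.g. GPT-4o)",
    },
}

def recommend(content_type: str, task: str) -> str:
    """Return a recommended approach, or a fallback if no path matches."""
    tasks = DECISION_TREE.get(content_type, {})
    return tasks.get(task, "No direct match - explore alternative paths")

print(recommend("text", "search"))  # RAG (Retrieval-Augmented Generation)
```

The "Back" button corresponds to re-running the lookup with a different content type or task.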
Common AI Approaches Explained
RAG (Retrieval-Augmented Generation) — Best for knowledge-base Q&A. Embeds documents, retrieves relevant chunks, and passes them to an LLM. Moderate setup, very flexible.
Fine-tuning — Best for changing model behavior. Trains on your examples. Higher setup cost; best for consistent format/style.
Prompt Engineering — Best for quick wins. Craft system prompts to steer existing models. Zero code, instant start.
Vision API — Best for image classification and OCR. Specialized, fast, accurate for defined tasks.
FAQ
What is RAG (Retrieval-Augmented Generation)?
RAG is an architecture that retrieves relevant documents from a knowledge base, then passes them to an LLM as context. It's ideal for question-answering over private documents, keeping responses grounded in specific data, and reducing hallucination. Key components: an embedding model, a vector database, and a generation model.
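The retrieval step can be sketched in a few lines. Here, bag-of-words counts and cosine similarity stand in for a real embedding model and vector database, and the final prompt string stands in for the call to a generation model; everything in this example (documents, stopword list, query) is made up for illustration.

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "is", "a", "an", "what", "our"}

def embed(text):
    # Toy "embedding": word counts with stopwords removed.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; return the top k chunks.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is open Monday through Friday.",
]
chunks = retrieve("What is the refund policy?", docs)

# The retrieved chunk becomes context for the generation model.
prompt = f"Answer using only this context:\n{chunks[0]}\n\nQ: What is the refund policy?"
```

A production system swaps the toy pieces for a real embedding model and vector database, but the shape of the pipeline (embed, retrieve, generate) is the same.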
When should I fine-tune instead of using RAG?
Fine-tune when you need to change the model's style, tone, or behavior pattern — not just what it knows. For example: enforcing a specific output format, teaching a brand voice, or improving accuracy on a narrow task. RAG is better when you need access to up-to-date or large knowledge bases.
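"Teaching a brand voice" concretely means supplying training examples in the format your fine-tuning provider expects. Several hosted APIs (OpenAI's among them) accept chat-style examples as JSON Lines, one example object per line; the assistant messages demonstrate the tone and format you want the model to learn. The content below is a made-up illustration.

```python
import json

# One hypothetical training example for a brand-voice fine-tune,
# in the chat-style format used by several hosted fine-tuning APIs.
example = {
    "messages": [
        {"role": "system", "content": "You are Acme's support assistant. Be concise and upbeat."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "Great question! You can track it anytime from your account page."},
    ]
}

# Fine-tuning files are JSON Lines: one serialized example per line.
line = json.dumps(example)
```

A real fine-tune needs dozens to thousands of such examples; check your provider's documentation for the exact schema and minimum dataset size.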
What AI approach works best for image analysis?
For object detection and classification, use dedicated vision APIs like Google Vision AI, Azure Computer Vision, or Roboflow. For image understanding with text generation (e.g., captioning, visual QA), multimodal models like GPT-4o Vision or Claude 3.5 Sonnet work well. For generating images, DALL-E 3, Midjourney, or Stable Diffusion are the leading options.
What's the cheapest AI approach for high-volume text tasks?
For high-volume classification or extraction, consider fine-tuning a small open-source model (like Llama 3.1 8B) and self-hosting it — marginal cost per request can approach zero. For moderate volumes, GPT-4o-mini or Claude Haiku offer good value at $0.15–0.30/1M tokens.
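The per-million-token pricing makes cost estimates simple arithmetic. A back-of-the-envelope sketch, using the $0.15/1M figure above and made-up volume numbers:

```python
def monthly_cost(requests_per_day, tokens_per_request, price_per_million):
    """Estimate monthly API cost from daily volume and per-token pricing."""
    tokens = requests_per_day * tokens_per_request * 30  # ~30 days/month
    return tokens / 1_000_000 * price_per_million

# e.g. 10,000 classifications/day at ~500 tokens each, $0.15 per 1M tokens:
cost = monthly_cost(10_000, 500, 0.15)
print(f"${cost:.2f}/month")  # $22.50/month
```

Running the same estimate against a self-hosted model's infrastructure cost tells you where the break-even volume sits.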
Can I use AI for structured data prediction?
Yes, but traditional ML often outperforms LLMs for structured tabular data. For regression and classification on tables, scikit-learn, XGBoost, or AutoML tools (like Google AutoML or AWS SageMaker) typically work better. LLMs excel when the task involves natural language understanding or generation on top of the structured data.
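To make "traditional ML" concrete: here is a toy 1-nearest-neighbour classifier on a tiny made-up tabular dataset, standing in for the scikit-learn or XGBoost baselines mentioned above (which you would use in practice).

```python
import math

# Hypothetical training data. Columns: (age, income in $k);
# label: 1 = purchased, 0 = did not purchase.
train = [
    ((25, 30), 0),
    ((35, 60), 0),
    ((45, 90), 1),
    ((52, 120), 1),
]

def predict(row):
    """Classify a row by the label of its nearest training example."""
    nearest = min(train, key=lambda item: math.dist(item[0], row))
    return nearest[1]

print(predict((50, 100)))  # 1
```

No LLM is involved: for numeric tables like this, a small supervised model is cheaper, faster, and usually more accurate.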
Is this tool free?
Yes, completely free with no signup required. All logic runs in your browser.