sourc.dev
Home LLMs Tools SaaS APIs
Claude 3.5 Sonnet input $3.00/1M ↓ -50%
GPT-4o input $2.50/1M
Gemini 1.5 Pro input $1.25/1M
Mistral Large input $2.00/1M ↓ -33%
DeepSeek V3 input $0.27/1M
synced 2026-04-05

Prompt engineering

Get better results from the same model at the same price

What is prompt engineering

Prompt engineering is the practice of structuring input text to get more accurate, useful, and consistent outputs from a language model. It includes techniques like few-shot examples (showing the model what good output looks like), chain-of-thought prompting (asking the model to reason step by step), and format specification (instructing the model to respond in JSON, bullet points, or a specific structure).
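Two of these techniques can be sketched as plain string construction. Below is a minimal illustration of a few-shot prompt combined with format specification; the sentiment task, the example reviews, and the JSON schema are all hypothetical, not drawn from any provider's guide.

```python
# Sketch: few-shot examples plus a format specification, assembled
# into one prompt string. Task and schema are illustrative.

def build_prompt(examples, new_input):
    """Assemble a few-shot prompt that asks for JSON output."""
    lines = [
        # Format specification: tell the model exactly what shape to return.
        'Classify the sentiment of each review as JSON: {"sentiment": "positive" | "negative"}',
        "",
    ]
    # Few-shot examples: show the model what good output looks like.
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f'Output: {{"sentiment": "{label}"}}')
        lines.append("")
    # The new input, ending where the model should continue.
    lines.append(f"Review: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("Great battery life, buying again.", "positive"),
    ("Broke after two days.", "negative"),
]
prompt = build_prompt(examples, "Shipping was fast and the fit is perfect.")
print(prompt)
```

The resulting string is what you send as the user message; ending the prompt at `Output:` nudges the model to complete the pattern the examples established.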

Prompt engineering is free: it changes the input, not the model, the API, or the pricing.

Why it matters

The same model at the same price can produce dramatically different results depending on how you ask. A well-engineered prompt reduces errors, increases consistency, and often eliminates the need for a more expensive model. Before upgrading from GPT-4o-mini to GPT-4o, try engineering your prompt first.
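Chain-of-thought prompting, the third technique mentioned above, can be as simple as appending a reasoning instruction to the question. The wording below is one common phrasing, not an official template from any vendor.

```python
# Sketch: wrap a question in a chain-of-thought instruction before
# sending it to the model. Phrasing is illustrative.

def with_cot(question: str) -> str:
    """Append a step-by-step reasoning instruction to a question."""
    return (
        f"{question}\n\n"
        "Think through the problem step by step, then give the final "
        "answer on its own line prefixed with 'Answer:'."
    )

question = "A jacket costs $80 after a 20% discount. What was the original price?"
print(with_cot(question))
```

On arithmetic and multi-step questions like this one, the step-by-step instruction often improves accuracy on a small model enough that the larger, pricier model is unnecessary.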

Verified March 2026 · Source: Anthropic prompt engineering guide, OpenAI best practices

Related terms
System prompt · Token · Context window
← All terms
← Grounding Temperature →