Description
Website Section
Home --> Tutorials --> Java with AI
Proposal Details
Summary
A beginner-friendly tutorial showing how to connect Java to a local LLM via Ollama, using only `HttpClient`. No cloud keys, no SDKs, no frameworks.
What the tutorial covers
- Why local: privacy, offline development, zero API cost, fast iteration.
- Prereqs: install Ollama, start the local server, pull a model.
- First call: use Java's `HttpClient` to POST to Ollama's `/api/generate` with `model`, `prompt`, and `stream: false` (clean single JSON response).
- Parsing JSON: print the raw response first; optionally parse the `response` field with Jackson.
- Sanity checks: timeouts, server-not-running errors, model-not-available guidance.
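The first-call step above could be sketched roughly as follows. This is a minimal illustration, not the tutorial's actual code: the class and method names (`OllamaFirstCall`, `buildRequest`) are made up, and the JSON body is built naively with `String.format` rather than a JSON library.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class OllamaFirstCall {

    // Build a POST request to Ollama's /api/generate endpoint.
    // Naive JSON construction for illustration only -- a real tutorial
    // would escape the prompt or use Jackson to serialize the body.
    static HttpRequest buildRequest(String model, String prompt) {
        String body = String.format(
                "{\"model\": \"%s\", \"prompt\": \"%s\", \"stream\": false}",
                model, prompt);
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate")) // Ollama's default port
                .header("Content-Type", "application/json")
                .timeout(Duration.ofSeconds(60)) // first call can be slow while the model loads
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpResponse<String> response = client.send(
                buildRequest("llama3.2:3b", "Say hello in one sentence."),
                HttpResponse.BodyHandlers.ofString());
        // Print the raw JSON first; the generated text lives in the "response" field.
        System.out.println(response.body());
    }
}
```

With `"stream": false`, Ollama returns one JSON object instead of a stream of chunks, which keeps the beginner path free of line-by-line parsing.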
Example Ollama models to use (and pull commands)
Recommended general-purpose starters
- `llama3.2:3b`: `ollama pull llama3.2:3b`
- `llama3.2:1b` (smaller/faster): `ollama pull llama3.2:1b`
Coding-focused
- `qwen2.5-coder:7b`: `ollama pull qwen2.5-coder:7b`
Small + efficient
- `phi3:mini`: `ollama pull phi3:mini`
Optional multimodal (vision)
- `llava`: `ollama pull llava`
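For the model-not-available and server-not-running sanity checks, the tutorial could query Ollama's `/api/tags` endpoint, which lists locally pulled models. A rough sketch, with an invented class name (`OllamaModelCheck`):

```java
import java.net.ConnectException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class OllamaModelCheck {

    // /api/tags returns a JSON object with a "models" array of locally pulled models
    static final URI TAGS = URI.create("http://localhost:11434/api/tags");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(3)) // fail fast if the server is down
                .build();
        HttpRequest request = HttpRequest.newBuilder().uri(TAGS).GET().build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // check here that your model was pulled
        } catch (ConnectException e) {
            // Connection refused means the Ollama server is not running
            System.err.println("Ollama is not running. Start it with: ollama serve");
        }
    }
}
```

If the model named in the request is missing from the list, the fix is a one-line `ollama pull` of one of the models above.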
Tip for the tutorial
- “If you’re unsure, start with `llama3.2:3b`.”
- “If you want coding help, try `qwen2.5-coder:7b`.”
- “If you hit memory limits, use `llama3.2:1b` or `phi3:mini`.”
Author References
- LinkedIn: https://www.linkedin.com/in/ksurendra/
- Personal site/portfolio: https://surenk.com/
- Writing (Substack): https://www.techinpieces.com/ and https://surenk.medium.com/
- Helidon blog: https://medium.com/helidon/anatomy-of-helidon-mcp-ollama-designing-ai-enhanced-java-microservices-a9ddaba1325d
- Oracle blog: https://blogs.oracle.com/emeapartnerweblogic/post/helidon-with-swagger-openapi-by-suren-konathala
- Open source work: https://github.com/thesurenk