nitinog10/README.md



I don't write papers about AI. I ship it.

There's a difference between people who talk about building intelligent systems and people who actually wake up at 3AM to debug a RAG pipeline before a 9AM client demo.

I'm the second kind.

Co-founder @ Bug Biceps — an AI production studio born from one conviction: that models should solve real problems, not sit in Jupyter notebooks. We build systems that remember, reason, coordinate, and see. In production. Not in theory.

Right now I'm deep in LangGraph orchestration, MCP server architecture, and fine-tuning at scale — six active builds, all moving, none abandoned.




Python · JavaScript · TypeScript · HTML/CSS · Shell







◆ NEURAL AI · ML · GenAI


Python · LangChain · LangGraph · RAG · HuggingFace · PyTorch · scikit-learn · MLflow · MCP

◆ CORE Backend · Cloud


FastAPI · Docker · AWS · Django · Node.js · GCP · PostgreSQL · MongoDB

◆ VISION CV · Interface


OpenCV · TensorFlow · React · Next.js · TypeScript · TailwindCSS · Three.js

◆ GRADE Classification · Rules






▰▰▰▰▰▰▰▰▰▰ 100%

Autonomous tool orchestration across isolated servers. Built the architecture they said was too ambitious. Proved them wrong before the meeting ended.
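The idea behind routing tool calls across isolated servers can be sketched in a few lines. This is a toy illustration, not the actual MCP architecture: the server names, tools, and return values below are hypothetical.

```python
# Toy sketch: each ToolServer holds an isolated tool registry;
# the Orchestrator routes a call to whichever server owns the tool.

class ToolServer:
    """An isolated registry of callable tools."""
    def __init__(self, name):
        self.name = name
        self.tools = {}

    def register(self, tool_name, fn):
        self.tools[tool_name] = fn

    def call(self, tool_name, **kwargs):
        return self.tools[tool_name](**kwargs)


class Orchestrator:
    """Maps each tool name to the one server that owns it."""
    def __init__(self):
        self._routes = {}  # tool_name -> ToolServer

    def attach(self, server):
        for tool_name in server.tools:
            self._routes[tool_name] = server

    def dispatch(self, tool_name, **kwargs):
        return self._routes[tool_name].call(tool_name, **kwargs)


# Two isolated servers, each owning different tools (hypothetical).
db = ToolServer("db")
db.register("count_rows", lambda table: {"table": table, "rows": 42})

files = ToolServer("files")
files.register("read_text", lambda path: f"<contents of {path}>")

orch = Orchestrator()
orch.attach(db)
orch.attach(files)

print(orch.dispatch("count_rows", table="users"))
# {'table': 'users', 'rows': 42}
```

In a real MCP setup the servers would be separate processes speaking the protocol over stdio or HTTP; the in-process classes here only show the routing shape.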

▰▰▰▰▰▰▰░░░ 72%

Twenty specialized agents. One orchestrator. Zero chaos. The system coordinates, decides, and executes — all under graph-level control.
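Graph-level control of many agents boils down to a simple loop: each node reads shared state and names the node that runs next. A minimal dependency-free sketch, with made-up node names, of the pattern LangGraph formalizes:

```python
# Toy graph coordinator: nodes are "agents" that update shared state
# and return the name of the next node, until END.

END = "__end__"

class AgentGraph:
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def run(self, entry, state):
        current = entry
        while current != END:
            state, current = self.nodes[current](state)
        return state


def planner(state):
    # Decide the work, then hand off to the worker node.
    state["plan"] = ["research", "write"]
    return state, "worker"

def worker(state):
    # Execute one step; loop on itself until the plan is exhausted.
    step = state["plan"].pop(0)
    state.setdefault("done", []).append(step)
    return state, ("worker" if state["plan"] else END)


graph = AgentGraph()
graph.add_node("planner", planner)
graph.add_node("worker", worker)

final = graph.run("planner", {})
print(final["done"])  # ['research', 'write']
```

Scaling to twenty agents means twenty nodes and a routing function; the control loop stays exactly this small.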


▰▰▰▰▰▰▰▰░░ 83%

Not retrieval. Not keyword matching. Genuine contextual memory at scale. The model doesn't hallucinate a past — it retrieves the real one.
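Similarity-based recall is what separates this from keyword matching: memories are stored as vectors and retrieved by cosine similarity to the query. A minimal sketch with hand-made toy vectors standing in for real embeddings:

```python
# Minimal vector-memory recall: nearest memory by cosine similarity.
# The three memories and their vectors are toy examples, not real data.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

memory = [
    ("client prefers weekly demos", [0.9, 0.1, 0.0]),
    ("API rate limit is 60 req/min", [0.1, 0.9, 0.2]),
    ("deploys happen on Fridays",    [0.2, 0.1, 0.9]),
]

def recall(query_vec, k=1):
    """Return the k memories most similar to the query vector."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query vector near the rate-limit memory retrieves it,
# with no keyword overlap required.
print(recall([0.0, 1.0, 0.1]))  # ['API rate limit is 60 req/min']
```

At scale the sorted list becomes an approximate-nearest-neighbor index, but the recall contract is the same: similar context in, relevant memory out.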


▰▰▰▰▰▰░░░░ 65%

Live inference. Streaming frames. Sub-100ms latency targets. The machine doesn't analyze old footage — it sees now.
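A sub-100ms target means every frame is measured against a hard budget. A dependency-free sketch of that loop, with a stub standing in for the real model call:

```python
# Streaming-inference loop with a per-frame latency budget.
# fake_infer is a hypothetical stand-in for a real model call.

import time

BUDGET_S = 0.100  # sub-100ms target per frame

def fake_infer(frame):
    """Placeholder for real per-frame inference."""
    return {"frame": frame, "label": "ok"}

def run_stream(frames):
    """Process frames, counting how many blow the latency budget."""
    over_budget = 0
    for frame in frames:
        start = time.perf_counter()
        fake_infer(frame)
        elapsed = time.perf_counter() - start
        if elapsed > BUDGET_S:
            over_budget += 1  # real system: drop or degrade, never queue
    return over_budget

print(run_stream(range(30)))
```

With a real camera feed the frames would come from something like OpenCV's `VideoCapture`, and frames over budget get dropped rather than queued, so the stream never falls behind "now".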


▰▰▰▰▰░░░░░ 50%

Done using models. Now building models. Fine-tuning weights to solve problems that off-the-shelf LLMs can't touch. Season II looks different.


▰▰▰▰▰▰▰▰▰░ 92%

Didn't join a startup. Built one. From conviction alone: AI should ship, not sit in a paper. bugbiceps.in — remember the name.









Portfolio   Gmail   LinkedIn   Instagram


BugBiceps




 





Pinned

  1. MCP-server-setup (Public) · Python · 5 stars · 1 fork

     A complete MCP server setup for all databases.

  2. AtmoPredict (Public) · Python · 4 stars

  3. Transformer-model-from-scratch-using-keras-hub-to-perform-language-modeling. (Public) · Jupyter Notebook · 1 star

  4. CortexDev-Neural-intelligence-for-software-development (Public) · Jupyter Notebook · 1 star

  5. Gsap-showroom (Public) · TypeScript · 4 stars

  6. langchain-agent (Public) · Jupyter Notebook · 4 stars