fix: capture thought output from Gemini thinking models (#4647) · #4648

Open
devin-ai-integration[bot] wants to merge 1 commit into main from devin/1772280300-gemini-thinking-config
Conversation

@devin-ai-integration (Contributor)

fix: capture thought output from Gemini thinking models (#4647)

Summary

Gemini thinking models (e.g. gemini-2.5-pro, gemini-2.5-flash) produce "thought" parts alongside text parts in their responses. Previously, these thought parts were silently discarded, and using chunk.text on streaming responses containing non-text parts triggered SDK warnings.

This PR:

  • Adds a thinking_config parameter to GeminiCompletion (accepts ThinkingConfig or dict), passed through to the generation config
  • Rewrites _process_stream_chunk to iterate over candidate parts directly instead of calling chunk.text, which avoids SDK warnings when non-text parts are present
  • Converts _extract_text_from_response from a @staticmethod to an instance method so it can store thought content
  • Captures thought parts in self.previous_thoughts (both streaming and non-streaming paths)
  • Adds 11 unit tests covering initialization, config propagation, thought extraction, and streaming behavior
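
The `thinking_config` handling described above can be sketched roughly as follows. This is an illustrative stand-in, not the PR's actual code: the `GeminiCompletionSketch` class, the `ThinkingConfig` field names, and the dict-shaped return of `_prepare_generation_config` are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Any, Optional, Union


@dataclass
class ThinkingConfig:
    # Stand-in for the SDK's ThinkingConfig; field names are illustrative.
    include_thoughts: bool = False
    thinking_budget: Optional[int] = None


class GeminiCompletionSketch:
    """Sketch: accept thinking_config as a ThinkingConfig or a plain dict."""

    def __init__(self, thinking_config: Union[ThinkingConfig, dict, None] = None):
        if isinstance(thinking_config, dict):
            # Normalize dict input into the config object.
            thinking_config = ThinkingConfig(**thinking_config)
        self.thinking_config = thinking_config
        self.previous_thoughts: list = []  # captured thought parts land here

    def _prepare_generation_config(self) -> dict:
        config: dict = {}
        if self.thinking_config is not None:
            # Pass the thinking config through to the generation config.
            config["thinking_config"] = self.thinking_config
        return config
```

With this shape, `GeminiCompletionSketch(thinking_config={"include_thoughts": True})` and `GeminiCompletionSketch(thinking_config=ThinkingConfig(include_thoughts=True))` behave identically.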

Review & Testing Checklist for Human

  • Verify _process_stream_chunk rewrite doesn't break non-thinking streaming. This is the highest-risk change — the old code used chunk.text then separately looped over parts for function calls. The new code uses a single loop over parts for everything. Test with regular (non-thinking) models with streaming enabled, especially with tool calling.
  • Check the text part guard not part.function_call in the streaming loop (line ~980). Is it possible for a real SDK Part to have both .text and .function_call set? If so, this guard is correct; if not, it's harmless but worth confirming.
  • Confirm previous_thoughts is actually accessible downstream. Thoughts are captured in self.previous_thoughts but there is no consumer shown in this diff that surfaces them via the LLM event system or return values. Verify this is sufficient for the issue reporter's use case, or whether an additional integration point is needed.
  • Verify no callers of _extract_text_from_response use it as a static/class method. It was changed from @staticmethod to an instance method — any GeminiCompletion._extract_text_from_response(response) calls would break.
  • previous_thoughts accumulates indefinitely — it is never cleared between call() invocations. Confirm this is acceptable behavior for multi-turn conversations or whether it should be reset per-call.
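
The single-loop stream processing flagged in the first checklist item can be sketched like this, using mocked parts in the same spirit as the PR's unit tests. `MockPart`, `process_stream_chunk`, and the branch ordering are assumptions for illustration; the real SDK `Part` and the PR's `_process_stream_chunk` may differ in detail.

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class MockPart:
    # Mimics only the Part fields the loop inspects.
    text: Optional[str] = None
    thought: bool = False
    function_call: Optional[Any] = None


def process_stream_chunk(parts, previous_thoughts, tool_calls):
    """One pass over candidate parts: route tool calls, thoughts, then text.

    Avoids chunk.text entirely, so non-text parts never trigger SDK warnings.
    """
    text_pieces = []
    for part in parts:
        if part.function_call is not None:
            tool_calls.append(part.function_call)
        elif part.thought and part.text:
            previous_thoughts.append(part.text)  # captured, not surfaced
        elif part.text and not part.function_call:
            # The `not part.function_call` guard mirrors the one questioned
            # in the checklist; it is redundant inside this elif chain but
            # harmless if a Part could carry both fields.
            text_pieces.append(part.text)
    return "".join(text_pieces)
```

Running this over a chunk containing a thought part, a text part, and a function-call part shows each landing in its own bucket, with only the plain text returned to the caller.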

Notes

  • The one pre-existing test failure (test_gemini_raises_error_when_model_not_supported) is unrelated to these changes
  • All 11 new tests pass locally; they use mocked Part objects rather than real SDK responses

Requested by: João
Link to Devin run: https://app.devin.ai/sessions/5d1a2a24e1e84fb7b3056281f054fc5c

- Add thinking_config parameter to GeminiCompletion.__init__ (accepts dict or ThinkingConfig)
- Include thinking_config in _prepare_generation_config when set
- Rewrite _process_stream_chunk to iterate over parts directly instead of using chunk.text, avoiding warnings when non-text parts (thought, function_call) are present
- Convert _extract_text_from_response from staticmethod to instance method; separate thought parts from text parts and store thoughts in self.previous_thoughts
- Add 11 tests covering thinking config initialization, generation config integration, thought part extraction in streaming and non-streaming paths
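
The staticmethod-to-instance-method change in the last commit bullet can be sketched as below. `CompletionSketch` and `MockPart` are illustrative mocks (again mirroring the PR's mocked-Part test approach), not the actual implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MockPart:
    # Minimal stand-in for a Gemini response part.
    text: Optional[str] = None
    thought: bool = False


class CompletionSketch:
    # An instance method (rather than @staticmethod) is needed so extracted
    # thoughts can be stored on self.
    def __init__(self) -> None:
        self.previous_thoughts: list = []

    def _extract_text_from_response(self, parts) -> str:
        texts = []
        for part in parts:
            if part.thought and part.text:
                # Thought parts are kept instead of discarded. Note they
                # accumulate across calls, as the review checklist flags.
                self.previous_thoughts.append(part.text)
            elif part.text:
                texts.append(part.text)
        return "".join(texts)
```

Calling it twice shows the accumulation behavior the checklist asks the reviewer to confirm: `previous_thoughts` grows across invocations rather than resetting per call.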

Co-Authored-By: João <joao@crewai.com>
@devin-ai-integration (Contributor Author)

Prompt hidden (unlisted session)

@devin-ai-integration (Contributor Author)

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring
