Conversation
Summary of ChangesHello @trett, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a fundamental shift in how AI summaries are delivered to the user, moving from a batch-processing model to a real-time streaming approach. By leveraging Gemini's streaming capabilities and Server-Sent Events, the system can now progressively render summary content, metadata, and fun facts, providing a more dynamic and responsive user interface. This enhancement aims to make the summary generation process feel faster and more interactive for the end-user. Highlights
Changelog
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Code Review
This pull request is a significant improvement, refactoring the summary generation to use a streaming approach with Gemini's API and Server-Sent Events. The changes on the backend, API, and frontend are well-executed and will greatly enhance the user experience by providing incremental updates. I've identified a couple of areas for improvement in the backend service to increase robustness, particularly around error handling and response parsing in the new streaming logic.
server/src/main/scala/ru/trett/rss/server/services/SummarizeService.scala
Show resolved
Hide resolved
server/src/main/scala/ru/trett/rss/server/services/SummarizeService.scala
Show resolved
Hide resolved
server/src/main/scala/ru/trett/rss/server/services/SummarizeService.scala
Show resolved
Hide resolved
server/src/main/scala/ru/trett/rss/server/services/SummarizeService.scala
Show resolved
Hide resolved
server/src/main/scala/ru/trett/rss/server/services/SummarizeService.scala
Show resolved
Hide resolved
|
/gemini review |
There was a problem hiding this comment.
Code Review
This pull request successfully refactors summary generation to a streaming model using Gemini's API and Server-Sent Events, significantly enhancing the user experience by displaying summaries incrementally. However, it introduces security vulnerabilities related to LLM input/output handling. Specifically, the direct concatenation of untrusted RSS feed content into prompts allows for prompt injection, and the unsafe rendering of LLM-generated HTML on the client side leads to Cross-Site Scripting (XSS). These security concerns should be addressed by implementing client-side HTML sanitization and more secure prompt construction techniques. Additionally, there are suggestions to improve error handling and response parsing on the backend.
server/src/main/scala/ru/trett/rss/server/services/SummarizeService.scala
Show resolved
Hide resolved
server/src/main/scala/ru/trett/rss/server/services/SummarizeService.scala
Show resolved
Hide resolved
| .map { geminiResp => | ||
| geminiResp.candidates.headOption | ||
| .flatMap(_.content.parts.flatMap(_.headOption)) | ||
| .map(_.text) | ||
| .map { text => | ||
| if text.startsWith("```html") then | ||
| text.stripPrefix("```html").stripSuffix("```").trim | ||
| else text.trim | ||
| } match | ||
| case Some(html) if html.nonEmpty => SummarySuccess(html) | ||
| case _ => | ||
| SummaryError("Could not extract summary from response.") | ||
| .map(_.text.replace("```html", "").replace("```", "")) | ||
| .getOrElse("") | ||
| } | ||
| .map(SummaryEvent.Content(_)) |
There was a problem hiding this comment.
Using .replace() to strip markdown fences is a bit too broad and could corrupt the generated HTML if the fence characters appear within the content itself. The previous, non-streaming implementation's use of stripPrefix and stripSuffix is safer as it only targets the beginning and end of the string.
Additionally, filtering out empty content chunks would be a good practice to avoid sending unnecessary events to the client.
.map { geminiResp =>
geminiResp.candidates.headOption
.flatMap(_.content.parts.flatMap(_.headOption))
.map(_.text)
.getOrElse("")
}
.map { text =>
if (text.startsWith("```html")) {
text.stripPrefix("```html").stripSuffix("```").trim
} else {
text.trim
}
}
.filter(_.nonEmpty)
.map(SummaryEvent.Content(_))|
Applied the suggested fix for markdown stripping and empty chunk filtering. |
|
I have addressed the unresolved comment regarding markdown stripping and empty chunk filtering. The changes have been pushed to the branch. |
|
Ran scalafix and scalafmt. Refactored SummaryPage.scala to remove mutable state and improve stream lifecycle management. |
…EventSource - Replace onMountCallback with EventStream.merge to guarantee initial request in SummaryPage - Use ListBuffer in NetworkUtils to properly track and close EventSource connections - Update AGENTS.md to enforce scalafix/scalafmt usage
fa9f38f to
13c48e4
Compare
This PR introduces streaming summary generation using Gemini's API.
Key Changes:
This change significantly improves the user experience by displaying the summary as it is generated, reducing the perceived waiting time.