Conversation
📝 Walkthrough
This PR updates the workspace to OpenVM 1.4.2, bumps the project version from 4.7.1 to 4.7.11, removes legacy witness encoding infrastructure throughout the proving stack, introduces a new Dumper component for task serialization, and updates E2E test configurations for multiple blockchain forks (Galileo, GalileoV2).
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ❌ 3 failed checks (2 warnings, 1 inconclusive)
Actionable comments posted: 9
🧹 Nitpick comments (4)
rollup/tests/integration_tool/imports.go (1)
149-159: 0 is overloaded as both a "not found" sentinel and a valid nonce — consider making the distinction explicit.
The IIFE returns tx.Nonce (possibly 0) when a message is found, and 0 when no message is found. The downstream guard at line 161 (if l1MsgPoppedBefore == 0) can't tell the two cases apart. The comment documents that this is intentional — nonce 0 implies zero messages were popped before, so a leading chunk is also unnecessary — but the invariant is fragile: any future caller that needs to distinguish "first ever L1 message (nonce 0)" from "no L1 message at all" will silently misbehave.
A pointer return or a separate found-flag makes the intent structurally unambiguous:
♻️ Proposed refactor — pointer return to separate "found nonce 0" from "not found"
-	l1MsgPoppedBefore := func() uint64 {
+	l1MsgPoppedBefore := func() *uint64 {
 		for i, block := range blks {
 			for _, tx := range block.Transactions {
 				if tx.Type == types.L1MessageTxType {
 					log.Info("search first l1 nonce", "index", tx.Nonce, "blk", beginBlk+uint64(i))
-					return tx.Nonce
+					nonce := tx.Nonce
+					return &nonce
 				}
 			}
 		}
-		return 0
+		return nil
 	}()
-	if l1MsgPoppedBefore == 0 {
+	if l1MsgPoppedBefore == nil || *l1MsgPoppedBefore == 0 {
 		log.Info("no l1 message in target blks, no need for leading chunk")
 		return nil
 	}
Then update the downstream usage accordingly:
-	log.Info("Insert leading chunk with prev block", "msgPoppedBefore", l1MsgPoppedBefore)
+	log.Info("Insert leading chunk with prev block", "msgPoppedBefore", *l1MsgPoppedBefore)
 	leadingChunk, err := chunkOrm.InsertTestChunkForProposerTool(ctx, &encoding.Chunk{
 		...
-	}, codecCfg, l1MsgPoppedBefore)
+	}, codecCfg, *l1MsgPoppedBefore)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rollup/tests/integration_tool/imports.go` around lines 149 - 159, The IIFE that sets l1MsgPoppedBefore currently returns uint64 and uses 0 both for "nonce 0 found" and "not found", so change the IIFE to return an explicit presence indicator (either *uint64 or (uint64, bool)) so the caller can distinguish a found nonce of 0 from "not found"; update the variable type for l1MsgPoppedBefore and replace the downstream guard (currently checking if l1MsgPoppedBefore == 0) with a presence check (nil or bool flag) and use the dereferenced nonce (or returned uint64) when present; refer to the existing IIFE that iterates blks and tx.Nonce and the downstream guard to locate and adjust code.

crates/prover-bin/src/prover.rs (2)
294-296: Minor grammar nit in error message.
🔧 Suggested fix
- eyre::bail!("prover do not support snark params base on openvm 13");
+ eyre::bail!("prover does not support snark params based on openvm 13");

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/prover-bin/src/prover.rs` around lines 294 - 296, The error message thrown when prover_task.use_openvm_13 is true contains a grammar mistake; update the eyre::bail! call in the prover.rs branch that checks prover_task.use_openvm_13 to a clearer message such as "prover does not support SNARK params based on OpenVM 13" (or similar capitalization/wording you prefer) so the text is grammatically correct and clearer for users.
85-91: Consider using tracing instead of println! for the debug-mode log.
The rest of the codebase uses tracing for structured logging. Using println! here bypasses log levels and structured output. Also applies to lines 100 and 109.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/prover-bin/src/prover.rs` around lines 85 - 91, Replace the println! calls inside the debug-mode branches with structured tracing logs: in the block guarded by self.debug_mode (and the similar branches around the locations that printed at line ~100 and ~109), call tracing::debug! (or tracing::trace! if you prefer lower verbosity) with the same message and filename (e.g., tracing::debug!("File {} already exists, skipping download under debug mode", filename)); ensure the tracing crate is in scope (add use tracing::debug; or fully qualify tracing::debug!) so these branches emit structured, leveled logs instead of using println!.

crates/prover-bin/src/dumper.rs (1)
46-46: Use a structured logging macro instead of println!
println! writes directly to stdout with no log level, timestamp, or structured context. Prefer tracing::info! (or log::info!), which integrates with whatever subscriber/logger the binary configures.
- println!("written {sz} bytes for proof");
+ tracing::info!(bytes = sz, "written proof bytes");

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/prover-bin/src/dumper.rs` at line 46, Replace the direct stdout call println!("written {sz} bytes for proof") with a structured logging call (e.g., tracing::info!) so messages include level and fields; in dumper.rs update the call that references sz to use tracing::info!(bytes_written = sz, "written bytes for proof") and ensure tracing is imported/available in this module (add use tracing::info or add tracing to Cargo.toml/subscriber setup if missing).
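For reference, a minimal self-contained sketch of the structured-logging style both println! nitpicks above suggest. It assumes the tracing and tracing-subscriber crates are available; the function and variable names are hypothetical stand-ins, not the prover's actual code:

use tracing::{debug, info};

fn report_progress(debug_mode: bool, filename: &str, bytes_written: usize) {
    // Leveled, structured events instead of bare println!: fields travel as
    // key-value pairs and respect whatever subscriber the binary installs.
    info!(bytes = bytes_written, "written proof bytes");
    if debug_mode {
        debug!(file = filename, "file already exists, skipping download under debug mode");
    }
}

fn main() {
    // A plain fmt subscriber so the events above are actually emitted;
    // the real binary would configure its own subscriber during startup.
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .init();
    report_progress(true, "app.vmexe", 1024);
}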
# with openvm 1.4.2
scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "b527310" }
scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "b527310" }
scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "b527310" }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Fetch the Cargo.toml from zkvm-prover repo at rev b527310
curl -s "https://raw.githubusercontent.com/scroll-tech/zkvm-prover/b527310/Cargo.toml" | grep -A 2 -B 2 "openvm" | head -40

Repository: scroll-tech/scroll
Length of output: 3165
Update OpenVM version in comment.
The comment on line 20 states "with openvm 1.4.2", but the zkvm-prover dependency at rev b527310 actually uses OpenVM v1.4.3. Update the comment to reflect the correct version.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Cargo.toml` around lines 20 - 23, Update the inline comment "with openvm
1.4.2" to reflect the actual OpenVM version used by the pinned dependency rev
b527310; change the comment to "with openvm 1.4.3" near the scroll-zkvm-prover /
scroll-zkvm-verifier / scroll-zkvm-types dependency entries so the comment
matches the revision.
pub struct Dumper {
    #[allow(dead_code)]
    pub target_path: String,
    pub json_mode: bool,
}
🧩 Analysis chain
🏁 Script executed:
# Find main.rs and look for Dumper initialization
find . -name "main.rs" -path "*/prover-bin/*" -type f

Repository: scroll-tech/scroll
Length of output: 93
🏁 Script executed:
# Check the dumper.rs file to see the full context
cat -n crates/prover-bin/src/dumper.rs | head -80

Repository: scroll-tech/scroll
Length of output: 3195
🏁 Script executed:
# Search for all references to target_path in the codebase
rg "target_path" --type rust -B 2 -A 2

Repository: scroll-tech/scroll
Length of output: 321
🏁 Script executed:
# Check main.rs to see how Dumper is constructed
cat -n crates/prover-bin/src/main.rs

Repository: scroll-tech/scroll
Length of output: 5268
🏁 Script executed:
# Check if target_path is ever set anywhere else in the prover-bin crate
rg -r "target_path\s*=" crates/prover-bin --type rust

Repository: scroll-tech/scroll
Length of output: 44
target_path is declared but never used — all file writes ignore the configured path
The target_path field is initialized to an empty string (default) at lines 97–99 in main.rs with no CLI argument to populate it. Within dump(), all three std::fs::File::create() calls use bare filenames (lines 25, 29, and 38), so output always goes to the process CWD, completely bypassing target_path. The #[allow(dead_code)] attribute confirms the compiler sees no reads of this field.
🐛 Proposed fix — join filenames against `target_path`
impl Dumper {
fn dump(&self, input_string: &str) -> eyre::Result<()> {
let task: ProvingTaskExt = serde_json::from_str(input_string)?;
let task = ProvingTask::from(task);
+ let base = std::path::Path::new(&self.target_path);
+
if self.json_mode {
- let file = std::fs::File::create("input_task.json")?;
+ let file = std::fs::File::create(base.join("input_task.json"))?;
serde_json::to_writer(std::io::BufWriter::new(file), &task)?;
} else {
- let input_file = std::fs::File::create("input_task.bin")?;
+ let input_file = std::fs::File::create(base.join("input_task.bin"))?;
let mut input_writer = std::io::BufWriter::new(input_file);
bincode::encode_into_std_write(
&task.serialized_witness,
&mut input_writer,
bincode::config::standard(),
)?;
+ input_writer.flush()?;
- let agg_file = std::fs::File::create("agg_proofs.bin")?;
+ let agg_file = std::fs::File::create(base.join("agg_proofs.bin"))?;
let mut agg_writer = std::io::BufWriter::new(agg_file);
for proof in &task.aggregated_proofs {
let sz = bincode::serde::encode_into_std_write(
&proof.proofs,
&mut agg_writer,
bincode::config::standard(),
)?;
println!("written {sz} bytes for proof");
}
+ agg_writer.flush()?;
}
Ok(())
}
}Also add a --target-path CLI argument to the Dump command in main.rs, populate it when constructing the Dumper, and remove the #[allow(dead_code)] attribute once in use.
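As a rough sketch of the suggested CLI wiring, assuming the binary uses clap's derive API; the DumpArgs name, the --target-path flag spelling, and the trimmed-down Dumper below are illustrative stand-ins rather than the actual main.rs definitions:

use clap::Parser;
use std::path::Path;

// Minimal stand-in for the Dumper under review, repeated so the sketch is self-contained.
struct Dumper {
    target_path: String,
    json_mode: bool,
}

// Hypothetical arguments for the Dump subcommand.
#[derive(Parser)]
struct DumpArgs {
    // Directory the dumped files are written into; defaults to the current directory.
    #[arg(long, default_value = ".")]
    target_path: String,
}

fn main() {
    let args = DumpArgs::parse();
    let dumper = Dumper { target_path: args.target_path, json_mode: false };
    // dump() would join output names onto the configured directory instead of using bare filenames:
    let input_path = Path::new(&dumper.target_path).join("input_task.bin");
    let agg_path = Path::new(&dumper.target_path).join("agg_proofs.bin");
    println!("json_mode={} -> {} / {}", dumper.json_mode, input_path.display(), agg_path.display());
}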
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/prover-bin/src/dumper.rs` around lines 13 - 17, Dumper.target_path is
never used: update the Dump CLI to accept a --target-path argument, pass that
value into the Dumper constructor (instead of leaving it as the default empty
string) and then change dump() to build output file paths by joining
Dumper.target_path with the filenames (use Path/PathBuf join semantics) for the
three std::fs::File::create() calls so files are written under the configured
directory; finally remove the #[allow(dead_code)] on target_path once it is
used.
let input_file = std::fs::File::create("input_task.bin")?;
let mut input_writer = std::io::BufWriter::new(input_file);
bincode::encode_into_std_write(
    &task.serialized_witness,
    &mut input_writer,
    bincode::config::standard(),
)?;

// stream-encode aggregated_proofs to agg_proofs.bin using bincode 2.0
let agg_file = std::fs::File::create("agg_proofs.bin")?;
let mut agg_writer = std::io::BufWriter::new(agg_file);
for proof in &task.aggregated_proofs {
    let sz = bincode::serde::encode_into_std_write(
        &proof.proofs,
        &mut agg_writer,
        bincode::config::standard(),
    )?;
    println!("written {sz} bytes for proof");
}
BufWriter is never explicitly flushed — final-flush errors are silently swallowed
Rust's BufWriter calls flush() on drop, but drop silently ignores any resulting Err. If the OS can't commit the remaining buffered bytes (e.g. disk full after the last encode_into_std_write succeeds), dump still returns Ok(()) while the output file is truncated. Adding explicit flush()? calls before the function returns propagates the error correctly. The diff in the comment above includes the suggested flush calls.
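A small generic illustration of the point, relying only on std's BufWriter semantics rather than anything from the real dump(): the explicit flush()? (plus sync_all()? when stronger durability is wanted) turns an error that drop would otherwise swallow into one the caller actually sees.

use std::fs::File;
use std::io::{BufWriter, Write};

fn write_report(path: &str, payload: &[u8]) -> std::io::Result<()> {
    let file = File::create(path)?;
    let mut writer = BufWriter::new(file);
    writer.write_all(payload)?;

    // Without this, BufWriter flushes on drop and any error from that final flush
    // is discarded, so the function could return Ok(()) despite a truncated file.
    writer.flush()?;

    // Optional stronger guarantee: ask the OS to commit the bytes to stable storage.
    writer.get_ref().sync_all()?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    write_report("report.bin", b"example payload")
}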
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/prover-bin/src/dumper.rs` around lines 29 - 47, The BufWriter
instances (input_writer and agg_writer) are never explicitly flushed so any
final IO errors (e.g., disk full) are lost; after bincode::encode_into_std_write
calls finish but before the function returns from dump in dumper.rs, call
input_writer.flush()? and agg_writer.flush()? (or the underlying
File::sync_all()? if stronger guarantees are needed) and propagate errors with ?
so final-flush failures are returned instead of being silently ignored.
std::sync::Arc::new(prover)
    .one_shot(&[task_id], task_type.into())
    .await;
Return value of one_shot is silently discarded.
The Handle command (lines 122–141) assert!s on one_shot's return value to detect failures, but Dump ignores it. A failed dump will exit with success status, masking errors.
🔧 Suggested fix
- std::sync::Arc::new(prover)
- .one_shot(&[task_id], task_type.into())
- .await;
+ let success = std::sync::Arc::new(prover)
+ .one_shot(&[task_id], task_type.into())
+ .await;
+ if !success {
+ eyre::bail!("dump task failed for task_id: {}", task_id);
+ }

{
  "db_config": {
  "driver_name": "postgres",
  "dsn": "postgres://dev:dev@localhost:5432/scroll?sslmode=disable",
  "maxOpenNum": 5,
  "maxIdleNum": 1
  },
  "fetch_config": {
    "endpoint": "http://cloak-test-sequencer.mainnet.scroll.tech:8545",
    "l2_message_queue_address": "0x5300000000000000000000000000000000000000"
  },
  "validium_mode": true,
  "codec_version": 10

}
Inconsistent indentation in db_config block.
The children of db_config (lines 3–6) are at the same indentation level as db_config itself (line 2). They should be indented one more level. Also, there's a trailing blank line before the closing }.
🔧 Suggested fix
{
- "db_config": {
- "driver_name": "postgres",
- "dsn": "postgres://dev:dev@localhost:5432/scroll?sslmode=disable",
- "maxOpenNum": 5,
- "maxIdleNum": 1
+ "db_config": {
+ "driver_name": "postgres",
+ "dsn": "postgres://dev:dev@localhost:5432/scroll?sslmode=disable",
+ "maxOpenNum": 5,
+ "maxIdleNum": 1
},
"fetch_config": {
"endpoint": "http://cloak-test-sequencer.mainnet.scroll.tech:8545",
"l2_message_queue_address": "0x5300000000000000000000000000000000000000"
},
"validium_mode": true,
"codec_version": 10
-
}
🧰 Tools
🪛 Checkov (3.2.334)
[medium] 4-5: Basic Auth Credentials
(CKV_SECRET_4)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/prover-e2e/cloak-galileoV2/config.json` around lines 1 - 15, The JSON
has incorrect indentation and an extra trailing blank line: the properties under
"db_config" (e.g., "driver_name", "dsn", "maxOpenNum", "maxIdleNum") should be
indented one level deeper than the "db_config" key, and remove the empty line
before the final closing brace; update the "db_config" block so its child keys
are indented consistently with the rest of the file and delete the trailing
blank line before the final "}".
},
"nonce": "0x0",
- "timestamp": "0x0000000000000000000000000000000000000000000000000000000068c85e6c",
+ "timestamp": "0x00000000000000000000000000000000000000000000000000000000698d733c",
🧩 Analysis chain
🏁 Script executed:
rg -n '"timestamp"' --glob '*.json' -C 1

Repository: scroll-tech/scroll
Length of output: 50376
🏁 Script executed:
# Find how genesis.json files are parsed to see if they reject zero-padded hex
fd -e 'go|rs|ts|js' | xargs rg -l 'genesis.*timestamp|parseGenesis|readGenesis' | head -5

Repository: scroll-tech/scroll
Length of output: 44
🏁 Script executed:
# Check if there are tests that validate genesis.json files
fd -e json genesis | xargs rg -l 'cloak-galileoV2' 2>/dev/null | head -5

Repository: scroll-tech/scroll
Length of output: 44
🏁 Script executed:
# Check if timestamp format is validated anywhere
rg 'timestamp.*0x[0-9a-f]' --glob '*.json' -c | head -10

Repository: scroll-tech/scroll
Length of output: 467
Standardize timestamp format to compact hex.
The timestamp value uses a full 32-byte zero-padded hex string (0x00000000...698d733c), whereas all other genesis files in the repository (mainnet-galileo, sepolia-galileoV2, sepolia-galileo, integration-test) use compact hex format (0x698d733c). Remove the leading zeros for consistency.
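As a quick standalone check (plain Rust, not tied to any repo code) that both spellings decode to the same number, so the requested change is purely about formatting consistency:

fn main() {
    let padded = "0x00000000000000000000000000000000000000000000000000000000698d733c";
    let compact = "0x698d733c";
    // Leading zeros do not change the parsed value as long as it fits in the target type.
    let a = u64::from_str_radix(padded.trim_start_matches("0x"), 16).unwrap();
    let b = u64::from_str_radix(compact.trim_start_matches("0x"), 16).unwrap();
    assert_eq!(a, b);
    println!("timestamp = {a} (unix seconds)");
}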
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/prover-e2e/cloak-galileoV2/genesis.json` at line 49, The timestamp
field in the genesis.json uses a 32-byte zero-padded hex string; update the
"timestamp" value in genesis.json (the "timestamp" key in
tests/prover-e2e/cloak-galileoV2/genesis.json) to the compact hex format used
elsewhere by removing the leading zeros so it reads like 0x698d733c; ensure the
value exactly matches the compact representation used in other genesis files.
BEGIN_BLOCK?=26653680
END_BLOCK?=26653686
SCROLL_FORK_NAME=galileo
SCROLL_ZKVM_VERSION=?v0.7.1
? in wrong position — variable is set to the literal string ?v0.7.1.
The Make conditional-assignment operator is ?= (i.e., VAR?=VALUE), not =?. As written, SCROLL_ZKVM_VERSION=?v0.7.1 is an unconditional assignment that sets the variable to the string ?v0.7.1 (question-mark included). The coordinator_setup target will then forward SCROLL_ZKVM_VERSION=?v0.7.1 to the coordinator, which will fail to match any valid version. Compare the correct form in sepolia-galileo/.make.env where the same line reads SCROLL_ZKVM_VERSION?=v0.7.1.
🐛 Proposed fix
-SCROLL_ZKVM_VERSION=?v0.7.1
+SCROLL_ZKVM_VERSION?=v0.7.1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/prover-e2e/mainnet-galileo/.make.env` at line 4, The
SCROLL_ZKVM_VERSION line is using the wrong operator so the variable is being
set to the literal string "?v0.7.1"; change the assignment to the Make
conditional-assignment form so SCROLL_ZKVM_VERSION is set to "v0.7.1" unless
already defined (i.e., replace the current malformed assignment in the file so
SCROLL_ZKVM_VERSION uses "?=" like the sepolia-galileo example), ensuring
downstream uses such as the coordinator_setup target receive the proper version
string.
"GalileoTime": 1765868400,
"GalileoV2Time": 1766041200,
🧩 Analysis chain
🏁 Script executed:
# Check the actual casing in both genesis files
echo "=== mainnet-galileo genesis.json (lines 20-30) ==="
cat tests/prover-e2e/mainnet-galileo/genesis.json | head -30 | tail -15
echo -e "\n=== cloak-galileoV2 genesis.json (lines 20-30) ==="
cat tests/prover-e2e/cloak-galileoV2/genesis.json | head -30 | tail -15
# Search for parser code that reads these fork time fields
echo -e "\n=== Searching for parser code handling fork times ==="
rg -n "galileotime|darwintime|euclidtime|feynmantime" -i --type go --type rust --type json -C 2 | head -50
# Look for how genesis is parsed (case sensitivity)
echo -e "\n=== Searching for genesis parsing logic ==="
rg -n "unmarshal|json\.|GetField" -i --type go --type rust -C 1 | grep -i galileo | head -30

Repository: scroll-tech/scroll
Length of output: 4566
🏁 Script executed:
# Search for genesis parser code more broadly
echo "=== Looking for genesis struct definitions ==="
find . -name "*.go" -type f | xargs grep -l "GalileoTime\|galileoTime" 2>/dev/null
echo -e "\n=== Searching for JSON parsing of fork times ==="
rg -n "type.*Genesis|json:" --type go -A 20 | grep -E "Time|json" | head -40
echo -e "\n=== Look for Geth genesis handling ==="
rg -n "func.*Genesis" --type go -B 2 -A 10 | head -60
echo -e "\n=== Check if there's a go.ethereum or eth2.0 dependency ==="
find . -name "go.mod" -o -name "package.json" | xargs cat 2>/dev/null | grep -i "ethereum\|geth" | head -20

Repository: scroll-tech/scroll
Length of output: 34818
🏁 Script executed:
# Search for genesis struct that defines fork time fields
echo "=== Searching for genesis struct definition ==="
rg -n "type.*Genesis.*struct\|GalileoTime\|galileoTime" --type go -B 2 -A 15 | head -80
# Look for JSON field tags in genesis structures
echo -e "\n=== Looking for ChainConfig or genesis JSON handling ==="
rg -n "darwinTime\|euclidTime\|feynmanTime" --type go -B 3 -A 3 | head -50
# Check if there's any case conversion happening
echo -e "\n=== Check for case conversion or field mapping logic ==="
rg -n "strings.ToLower\|CaseSensitive\|json.*-\|json.*Tag" --type go | grep -i "fork\|genesis\|time" | head -20

Repository: scroll-tech/scroll
Length of output: 225
🏁 Script executed:
# Find where these genesis files are actually used
echo "=== Looking for how genesis files are loaded ==="
rg -n "mainnet-galileo/genesis\|cloak-galileoV2/genesis" --type go --type json -B 2 -A 2
# Search for genesis loading/parsing code in the repo
echo -e "\n=== Looking for genesis.json file operations ==="
rg -n "genesis\.json\|loadGenesis\|NewGenesis" --type go -B 2 -A 5 | head -60
# Check if there are any e2e tests that would fail if fork times are wrong
echo -e "\n=== Looking for tests that read fork time fields ==="
rg -n "GalileoTime\|galileoTime\|\.Time\|fork.*time" --type go -i -B 1 -A 1 | head -50

Repository: scroll-tech/scroll
Length of output: 214
🏁 Script executed:
# Check git history for when these fields were added to mainnet-galileo
echo "=== Git history for mainnet-galileo genesis.json ==="
git log --oneline -n 20 -- tests/prover-e2e/mainnet-galileo/genesis.json
# Look for ChainConfig struct which should define these fork times
echo -e "\n=== Searching for ChainConfig struct in codebase ==="
find . -name "*.go" -type f | xargs grep -l "ChainConfig" | head -5
# Check the actual content to see field definitions
echo -e "\n=== Looking for fork time field definitions ==="
find . -name "*.go" -type f -exec grep -l "DarwinTime\|EuclidTime\|FeynmanTime\|GalileoTime" {} \;
# Check go-ethereum version and if we can examine its source
echo -e "\n=== Checking go.mod for go-ethereum ==="
grep "go-ethereum" go.mod
Repository: scroll-tech/scroll
Length of output: 982
🏁 Script executed:
# Fix the command - look at rollup_test.go which seems to use fork times
echo "=== Examining rollup_test.go for fork time handling ==="
grep -n "DarwinTime\|EuclidTime\|FeynmanTime\|GalileoTime" ./rollup/tests/rollup_test.go -B 3 -A 3
# Look for the actual ChainConfig struct definition
echo -e "\n=== Searching for ChainConfig struct definition ==="
find . -name "*.go" -type f | xargs grep -l "type ChainConfig struct" 2>/dev/null
# Check the git log more carefully
echo -e "\n=== Full git history for genesis files ==="
git log --oneline --all -- tests/prover-e2e/mainnet-galileo/genesis.json
# Look at what was actually added
echo -e "\n=== Git show of mainnet-galileo genesis addition ==="
git show a35834d:tests/prover-e2e/mainnet-galileo/genesis.json 2>/dev/null | head -35

Repository: scroll-tech/scroll
Length of output: 1575
🏁 Script executed:
# Verify go-ethereum fork handles case-insensitive JSON parsing for genesis
echo "=== Checking if go-ethereum source is available locally ==="
find . -path "*/go-ethereum*" -name "*.go" -type f | head -5
# Look at how these fields are actually accessed in coordinator
echo -e "\n=== Checking coordinator tests for how fork times are accessed ==="
rg -n "galileoTime\|GalileoTime\|feynmanTime" --type go -B 2 -A 2
# Check other genesis files to confirm pattern
echo -e "\n=== All genesis files and their fork time casing ==="
for f in tests/prover-e2e/*/genesis.json; do
echo "=== $f ==="
grep -E "Time\"\s*:" "$f" | head -2
done

Repository: scroll-tech/scroll
Length of output: 691
🏁 Script executed:
# Get the complete Time field lines from each genesis file
echo "=== Complete Time fields from all genesis files ==="
for f in tests/prover-e2e/*/genesis.json; do
echo ""
echo "File: $f"
grep -E "Time\"\s*:" "$f" | sort
done

Repository: scroll-tech/scroll
Length of output: 1111
Fix casing to match other genesis files: use galileoTime and galileoV2Time (lowercase).
Lines 24-25 use GalileoTime and GalileoV2Time (capital G), while all other fork activation fields in this file and other genesis files use lowercase camelCase: darwinTime, euclidTime, feynmanTime, etc. The sibling files (cloak-galileoV2, sepolia-galileo, sepolia-galileoV2) all use galileoTime and galileoV2Time. Standardize to lowercase for consistency across test fixtures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/prover-e2e/mainnet-galileo/genesis.json` around lines 24 - 25, The
genesis JSON uses keys "GalileoTime" and "GalileoV2Time" with capital G which is
inconsistent with other fork activation keys; update these keys to lowercase
camelCase "galileoTime" and "galileoV2Time" (replace occurrences of GalileoTime
and GalileoV2Time with galileoTime and galileoV2Time) so they match sibling
fixtures and the existing fields like darwinTime/euclidTime/feynmanTime.
BEGIN_BLOCK?=20239245
END_BLOCK?=20239250
SCROLL_FORK_NAME=galileoV2
SCROLL_ZKVM_VERSION=?v0.7.1
Same ? placement bug as mainnet-galileo/.make.env — literal ?v0.7.1 is assigned, not a conditional default.
Additionally, cloak-galileoV2/.make.env pins SCROLL_ZKVM_VERSION?=v0.7.2 while this file would use v0.7.1 once fixed. Please confirm whether the version difference between the two galileoV2 environments is intentional.
🐛 Proposed fix
-SCROLL_ZKVM_VERSION=?v0.7.1
+SCROLL_ZKVM_VERSION?=v0.7.1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/prover-e2e/sepolia-galileoV2/.make.env` at line 4, Fix the malformed
makefile variable assignment for SCROLL_ZKVM_VERSION in the sepolia-galileoV2
env by moving the question mark to form a conditional default (use
SCROLL_ZKVM_VERSION?=v0.7.1 instead of SCROLL_ZKVM_VERSION=?v0.7.1) in the
sepolia-galileoV2/.make.env; then verify whether the version mismatch with
cloak-galileoV2/.make.env (which currently uses SCROLL_ZKVM_VERSION?=v0.7.2) is
intentional and either align both to the intended version or document why they
differ.
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## develop #1794 +/- ##
========================================
Coverage 36.41% 36.42%
========================================
Files 248 248
Lines 21321 21320 -1
========================================
Hits 7765 7765
+ Misses 12733 12732 -1
Partials 823 823
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Update the zkvm dependency and switch the corresponding fork to galileoV2 for the validium v1 version.
The new zkvm dependency is based on the up-to-date branch (i.e. OpenVM 1.4.3 is pulled in), so this PR also merges and overwrites #1783.
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Chores