# Artificial Scientific

AI-powered evaluation for scientific research repositories

Installation · Quick Start · Commands · How It Works


ASCI evaluates scientific research repositories using specialized AI agents that assess hypothesis clarity, evidence quality, code implementation, and novelty. Each agent scores independently, producing a detailed report with actionable feedback.

## Installation

```bash
pip install asci
```

Requires Python 3.11+ and an Anthropic API key:

```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

## Quick Start

Evaluate a repository:

```bash
asci evaluate ./my-research
```

Scaffold a new research repo:

```bash
asci new ./my-project
```

Submit to the journal:

```bash
asci submit --repo ./my-research
```

## Commands

### `asci evaluate <path>`

Run a full evaluation on a research repository.

| Option | Description |
| --- | --- |
| `--model`, `-m` | Claude model to use (default: `claude-sonnet-4-20250514`) |
| `--json` | Output as JSON to stdout |
| `--output`, `-o` | Write JSON report to file |

### `asci new <path>`

Scaffold a new research repository with all required files:

- `README.md` — structured template
- `hypothesis.md` — hypothesis format guide
- `src/main.py` — starter code
- `data/` — data directory
- `.gitignore` — Python defaults

### `asci submit --repo <path|url>`

Validate, evaluate, and submit a repository to the journal.

| Option | Description |
| --- | --- |
| `--repo` | Local path or GitHub URL (required) |
| `--model`, `-m` | Claude model to use |
| `--min-score` | Minimum score threshold (default: 4.0) |

Repositories must include:

- `README.md`
- A hypothesis doc (`hypothesis.md` or `abstract.md`)
- At least one code file (`.py`, `.r`, `.jl`, `.m`, `.cpp`, `.c`, `.java`, `.rs`, `.go`)
- A `data/` directory

## How It Works

ASCI runs four specialized evaluation agents concurrently:

| Agent | What it evaluates |
| --- | --- |
| Hypothesis | Clarity, testability, specificity, falsifiability |
| Evidence | Data sufficiency, relevance, statistical rigor |
| Implementation | Code quality, reproducibility, correctness |
| Novelty | Originality, significance, prior work awareness |

Each agent uses Claude's tool-use API to produce structured scores (1–10) with reasoning. Results are aggregated into an overall score and detailed report.
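The score-then-aggregate flow above can be sketched as follows. Both the tool schema (the name `submit_score` and its fields) and the plain-mean aggregation are assumptions for illustration; ASCI's actual schema and weighting may differ:

```python
from statistics import mean

# A tool-use schema that forces an agent to return a structured score
# with reasoning, rather than free-form text.
SCORE_TOOL = {
    "name": "submit_score",
    "description": "Submit a structured evaluation score with reasoning.",
    "input_schema": {
        "type": "object",
        "properties": {
            "score": {"type": "integer", "minimum": 1, "maximum": 10},
            "reasoning": {"type": "string"},
        },
        "required": ["score", "reasoning"],
    },
}

def aggregate(agent_scores: dict[str, int]) -> float:
    """Combine per-agent scores into one overall score (unweighted mean here)."""
    return round(mean(agent_scores.values()), 1)
```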

## Library Usage

```python
from asci.skill import evaluate_repo

report = evaluate_repo("./my-research")
print(report.overall_score)
```

## License

MIT
