| name | dedupe-rank |
| description | Dedupe and rank a raw paper set (`papers/papers_raw.jsonl`) to produce `papers/papers_dedup.jsonl` and `papers/core_set.csv`. **Trigger**: dedupe, rank, sort, core set, curated papers. **Use when**: after retrieval, a broad-coverage set needs to be narrowed into a manageable core set (for taxonomy/outline/mapping). **Skip if**: a stable `papers/core_set.csv` has already been curated by hand (no need to churn it again). **Network**: none. **Guardrail**: leans deterministic; output should be reproducible (stable `paper_id`, normalized fields). |
# Dedupe + Rank
Turn a broad retrieved set into a smaller core set for taxonomy/outline building.
This is a deterministic “curation” step: it should be stable and repeatable.
## Input
- `papers/papers_raw.jsonl`
## Outputs
- `papers/papers_dedup.jsonl`
- `papers/core_set.csv`
## Workflow (high level)
- Dedupe by normalized `(title, year)` and keep the richest metadata per duplicate cluster (see the sketch after this list).
- Rank by relevance/recency signals (and optionally pin known classics for certain topics).
- Write `papers/core_set.csv` with stable `paper_id` values and useful metadata columns (`arxiv_id`, `pdf_url`, `categories`).
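As a rough illustration of this pass (not the actual `run.py` implementation; it assumes each JSONL row carries `title`, `year`, and assorted metadata fields, and substitutes a recency-first sort for whatever relevance signal the script combines):

```python
import json
import re

def norm_key(rec: dict) -> tuple:
    """Cluster key: lowercased title, punctuation stripped, whitespace collapsed, plus year."""
    title = re.sub(r"[^a-z0-9 ]", "", (rec.get("title") or "").lower())
    title = re.sub(r"\s+", " ", title).strip()
    return (title, rec.get("year"))

def richness(rec: dict) -> int:
    """Count populated fields so the richest record wins within a duplicate cluster."""
    return sum(1 for v in rec.values() if v not in (None, "", []))

def dedupe_and_rank(path: str, core_size: int = 50) -> list[dict]:
    clusters: dict[tuple, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            rec = json.loads(line)
            key = norm_key(rec)
            # Keep the record with the most metadata per (title, year) cluster.
            if key not in clusters or richness(rec) > richness(clusters[key]):
                clusters[key] = rec
    # Recency-first stand-in for the relevance/recency ranking described above.
    ranked = sorted(clusters.values(), key=lambda r: int(r.get("year") or 0), reverse=True)
    return ranked[:core_size]
```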
## Quality checklist
- `papers/papers_dedup.jsonl` exists and is valid JSONL.
- `papers/core_set.csv` exists and has a header row (both items are covered by the check sketched below).
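A minimal sketch of that check, assuming the workspace layout above (`check_outputs` is a hypothetical helper, not part of the skill):

```python
import csv
import json

def check_outputs(ws: str) -> None:
    # Every non-empty line of the dedup file must parse as JSON (i.e. valid JSONL).
    with open(f"{ws}/papers/papers_dedup.jsonl", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                json.loads(line)  # raises on the first malformed line
    # The core set must exist and start with a header row.
    with open(f"{ws}/papers/core_set.csv", encoding="utf-8", newline="") as f:
        header = next(csv.reader(f), None)
        assert header, "papers/core_set.csv is missing a header row"
```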
## Script

### Quick Start

```bash
python .codex/skills/dedupe-rank/scripts/run.py --help
python .codex/skills/dedupe-rank/scripts/run.py --workspace <workspace_dir> --core-size 50
```
### All Options
- `--core-size <n>`: target size for `papers/core_set.csv`
- `queries.md` also supports `core_size` / `core_set_size` / `dedupe_core_size` (overrides the default when present; see the resolution sketch after this list)
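How the override might resolve, as a hedged sketch: the exact `queries.md` syntax isn't documented here, so the `key: value` parsing below (and the assumption that an explicit `--core-size` wins) is illustrative only:

```python
import re

def resolve_core_size(cli_value: int | None, queries_md: str, default: int = 50) -> int:
    """Assumed precedence: explicit --core-size, then a queries.md key, then the default."""
    if cli_value is not None:
        return cli_value
    # Hypothetical parse assuming "key: value" lines; the real queries.md format may differ.
    for key in ("core_size", "core_set_size", "dedupe_core_size"):
        m = re.search(rf"^{key}\s*:\s*(\d+)\s*$", queries_md, re.MULTILINE)
        if m:
            return int(m.group(1))
    return default
```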
### Examples
- Smaller core set for fast iteration:

```bash
python .codex/skills/dedupe-rank/scripts/run.py --workspace <ws> --core-size 25
```
## Notes
- This step is deterministic; reruns should be stable for the same inputs. One way to keep `paper_id` stable is sketched below.
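A content-derived id is one way to meet the "stable `paper_id`" guardrail: the same `(title, year)` always hashes to the same id regardless of row order. This is a sketch of the idea, not necessarily the scheme `run.py` uses:

```python
import hashlib
import re

def stable_paper_id(title: str, year: int | None) -> str:
    """Derive paper_id from normalized content so reruns never reshuffle ids."""
    norm = re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()
    digest = hashlib.sha1(f"{norm}|{year}".encode("utf-8")).hexdigest()
    return f"p_{digest[:12]}"
```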
## Troubleshooting

### Common Issues

#### Issue: `papers/core_set.csv` is too small / empty
Symptom:
- Core set has very few rows.
Causes:
- Input `papers/papers_raw.jsonl` is small, or many rows are missing required fields.
Solutions:
- Broaden retrieval (or provide a richer offline export) and rerun.
- Lower `--core-size` only if you intentionally want a small core set.
#### Issue: Duplicates still appear after dedupe
Symptom:
- Near-identical titles remain.
Causes:
- Title normalization is defeated by noisy exports.
Solutions:
- Clean title fields in the export (strip prefixes/suffixes, fix encoding) and rerun; an illustrative cleaning pass is sketched below.
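The patterns below are illustrative (they are not part of `run.py`): a small pre-cleaning pass that removes common export noise before the dedupe normalization sees the titles:

```python
import re
import unicodedata

def clean_title(raw: str) -> str:
    """Strip common export noise before dedupe normalization (illustrative patterns)."""
    t = unicodedata.normalize("NFKC", raw)                          # repair width/encoding variants
    t = re.sub(r"^\[[^\]]+\]\s*", "", t)                            # drop leading "[arXiv]"-style tags
    t = re.sub(r"\s*\(extended abstract\)\s*$", "", t, flags=re.I)  # drop trailing qualifiers
    return re.sub(r"\s+", " ", t).strip()
```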
### Recovery Checklist
- `papers/papers_raw.jsonl` lines contain `title`/`year`/`url`.
- `papers/core_set.csv` has stable `paper_id` values.