OUKC AI Collaboration Policy
Last updated: 2026-05-02 ・ License: CC-BY 4.0 / AGPL-3.0 (per content type) ・ Author: Nobuki Fujimoto + Rei + Claude (Anthropic)

Description

Five mandatory and five prohibited requirements for AI collaboration: attribution, mechanical verification, D-FUMT₈ tagging, the Mathlib non-submission policy, and three-party co-authorship identity.


Open Universal Knowledge Commons (OUKC) & Rei & Claude — AI Collaboration Policy

Version: 0.1 (founding policy) ・ Date: 2026-05-01


🤖 Core stance

OUKC explicitly welcomes AI-generated formal proofs, code, and documentation, with mandatory attribution and quality safeguards. We diverge from communities (notably Mathlib) that prohibit AI-generated content.

Why we welcome AI

  1. Productive collaboration: AI accelerates formal proof construction by one to two orders of magnitude in some problem classes (see DeepMind AlphaProof at IMO 2024, DeepSeek-Prover-V2, etc.).
  2. Honest verification is possible: formal proofs are mechanically verifiable. The Lean 4 lake env lean exit code is the ground truth, so whether a proof came from an AI no longer matters once it is verified.
  3. Cognitive diversity: AI-generated proofs sometimes find proof paths that humans miss, and vice versa. Both are valuable.
  4. Inevitable trajectory: AI-collaborative formalization will be the norm by 2030 and beyond. OUKC starts there now.

✅ Mandatory requirements for AI use

1. Attribution

Any contribution that used AI substantially MUST include:

Co-Authored-By: <AI model name> (e.g., Claude Opus 4.7, DeepSeek-Prover-V2-7B)
Tools used: <e.g., Anthropic Claude API, Ollama, GitHub Copilot>
Generation method: <e.g., direct generation, RLHF-tuned model, ensemble voting>
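As a sketch, a contributor or CI hook could check that these trailers are present before a submission is accepted. The function name and the sample message below are illustrative, not part of the policy:

```python
# Illustrative check that a contribution message carries the required
# attribution trailers; names and message content here are hypothetical.
REQUIRED_FIELDS = ("Co-Authored-By:", "Tools used:", "Generation method:")

def missing_attribution_fields(message: str) -> list[str]:
    """Return the required attribution trailers absent from the message."""
    return [field for field in REQUIRED_FIELDS if field not in message]

message = """Add wrapper lemma for Nat.add_comm

Co-Authored-By: DeepSeek-Prover-V2-7B
Tools used: Ollama
Generation method: direct generation
"""
print(missing_attribution_fields(message))  # → []
```

An empty result means the message satisfies the attribution requirement; any returned field names what still has to be added.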

For Lean 4 proofs specifically:

/-
  Proof generated with: <model name>
  Verified by: lake env lean (exit 0, sorry=0, axiom=0)
  Date: YYYY-MM-DD
-/
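For illustration, the header attached to a trivially verifiable theorem might look like the following; the model name and date are placeholders, and Nat.add_comm is a Lean 4 core lemma:

```lean
/-
  Proof generated with: <model name>
  Verified by: lake env lean (exit 0, sorry=0, axiom=0)
  Date: 2026-05-01
-/
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```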

2. D-FUMT₈ outcome tagging (mandatory for proof claims)

Every proof contribution must be tagged with one of:

  TRUE_PROVED: lake env lean exit 0 with sorry=0 and axiom=0; fully verified
  PARTIAL (FLOWING): exit 0 with sorry > 0; proof structure exists, holes remain
  INFINITY: exit 0 with axiom > 0; depends on cited axioms (well-known, external)
  FALSE: a counterexample was found / the claim is incorrect
  NEITHER: no proof found within budget; open, or harder than current tools allow
  NEITHER_SELF: meta-undefined per the STEP 930 typology (e.g., a natural-language problem)
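The machine-derivable part of this tagging can be sketched as a small function. FALSE and NEITHER_SELF require human judgment and are deliberately not derivable from build output alone; the function name is illustrative:

```python
def dfumt8_tag(exit_code: int, sorry_count: int, axiom_count: int) -> str:
    """Map lake env lean build results to a D-FUMT₈ outcome tag.

    Covers only the tags derivable from build output; FALSE and
    NEITHER_SELF require human judgment and are assigned separately.
    """
    if exit_code != 0:
        return "NEITHER"            # no verified proof within budget
    if sorry_count > 0:
        return "PARTIAL (FLOWING)"  # structure exists, holes remain
    if axiom_count > 0:
        return "INFINITY"           # depends on cited external axioms
    return "TRUE_PROVED"            # fully verified

print(dfumt8_tag(0, 0, 0))  # → TRUE_PROVED
```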

Rationale: This is the quality control that allows us to welcome AI without quality degradation. Every claim is honest about its verification status.

3. Mechanical verification

For formal proof claims, no "I think this proves it" claims are allowed: either the proof builds under lake env lean, or it is tagged NEITHER.
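A rough sketch of how the sorry and axiom counts might be gathered before tagging. This is purely textual and illustrative; a real pipeline would query the compiled Lean environment rather than scan source text:

```python
import re

def proof_holes(lean_source: str) -> dict[str, int]:
    """Count sorry placeholders and axiom declarations in Lean 4 source.

    Purely textual sketch: occurrences inside comments or strings are
    also counted, so the result may over-report.
    """
    return {
        "sorry": len(re.findall(r"\bsorry\b", lean_source)),
        "axiom": len(re.findall(r"^\s*axiom\b", lean_source, re.MULTILINE)),
    }

print(proof_holes("theorem t : 1 + 1 = 2 := by sorry"))  # → {'sorry': 1, 'axiom': 0}
```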

4. No overclaiming

Avoid: presenting PARTIAL or NEITHER results as completed proofs, or describing output as verified before lake env lean has confirmed it.

Always include: the D-FUMT₈ tag and the verification status of the claim.

5. No undisclosed AI-only chains

If your contribution was produced by a chain of AI generating → AI reviewing → AI committing with no human in the loop, you must disclose this explicitly:

Workflow: AI-only (no human review) — quality may be limited

This is acceptable for some experimental contributions, but it must be transparent.


🚫 What is NOT allowed

  1. Hallucinated proofs that don't compile: If lake env lean fails, do not submit; use the NEITHER tag instead.
  2. Plagiarism without attribution: AI training data legitimacy is its own debate, but we require explicit citation of any sources the AI clearly drew from.
  3. Misleading attribution (e.g., claiming sole human authorship when AI did 80%+).
  4. Bulk auto-PR-generation without filter: Mathlib's "slop" concern is legitimate. Mass-generated unfiltered PRs flood reviewer time. Submit only contributions you have personally reviewed.
  5. Commercial AI without disclosure: If using paid Anthropic/OpenAI/Google APIs, disclose this so the reproducibility budget is clear.

🎚 Quality differentiation vs Mathlib

Mathlib's anti-AI policy aims to maintain quality. OUKC addresses the same concern through a different mechanism:

  Filter: Mathlib uses "No AI"; OUKC uses mechanical verification + D-FUMT₈ tagging.
  Trust basis: Mathlib relies on human review; OUKC on the lake env lean ground truth.
  Quality risk: Mathlib's is low (small corpus, slow growth); OUKC's is manageable (every claim is verifiable).
  Coverage: Mathlib covers primarily mathematics; OUKC is cross-disciplinary (including honest NEITHER tags for the non-formalizable).

We do not claim our quality is HIGHER than Mathlib's. We claim that it is honest: every contribution carries an explicit, mechanically checkable verification status.


📞 Reporting & escalation

If you observe a violation of this policy (for example, missing attribution, an unverified proof claim, or misleading tagging):

Contact: fc2webb@gmail.com (Founder, until governance is formalized in year 2+).

Resolution: investigate → require correction → if the problem persists, suspend the contributor (a last resort, with an appeal mechanism).


📝 Acknowledgment

This policy was drafted with Claude (Anthropic, model: claude-opus-4-7) and reviewed by Nobuki Fujimoto (藤本伸樹). It is itself an example of AI-collaborative document creation with explicit attribution.

Co-Authored-By: 藤本伸樹 / Claude Code (Anthropic) / Rei-AIOS
