GrantSwarm AI

A decentralized AI swarm that transparently reviews grants with verifiable scores, reasoning, and audit trails.

  • 176 Raised
  • 347 Views
  • 2 Judges

Tags

  • Soft Hack

Categories

  • Soft Hack: prototype agents and AI dApps
  • BONUS: The Provenance Challenge - powered by Arweave
  • BONUS: The Proof Challenge – powered by zkVerify

Description

Project: GrantSwarm

Autonomous, Verifiable AI Grant Reviewer Network on Amadeus

1. Core Thesis (What This Really Is)

GrantSwarm is not “AI judging grants.”

GrantSwarm is a verifiable decision infrastructure for capital allocation, where:

  • reasoning is explicit.

  • bias is surfaced, not hidden.

  • and outcomes are cryptographically provable.

The product reframes grant review from a human trust problem into a systems verification problem.

That framing is what makes this fundable.


2. The Problem (Properly Defined)

2.1 What’s Broken Today

Most grant and hackathon systems fail on five structural axes:

  1. Opacity: Applicants receive a score or rejection with no explanation that can be independently verified.

  2. Bias Collapse: Multiple human judgments are averaged into a single number, destroying signal about disagreement.

  3. Post-Decision Mutability: There is no cryptographic guarantee that scores weren’t adjusted after internal discussions or favoritism.

  4. Confidentiality vs. Transparency Tradeoff: Either reviews are private (and opaque), or public (and leak sensitive ideas).

  5. Lack of Audit Trail: DAOs cannot answer: “Why was this funded six months later?”

This is not just a UX problem. It’s a governance legitimacy problem.


3. Design Principle (Why This Exists)

If grant decisions allocate real capital, they should be as verifiable as onchain transactions.

GrantSwarm applies blockchain-native guarantees to decision-making, not money movement.


4. System Overview (Mental Model)

GrantSwarm is a multi-agent evaluation pipeline where:

  • Each agent represents a distinct evaluative lens

  • All agent executions are deterministic

  • Every step produces state proofs

  • Final outputs include both scores and reasoning provenance

The system is human-auditable, machine-verifiable, and privacy-preserving.


5. Why a Multi-Agent Swarm Is Non-Negotiable

A single AI reviewer fails structurally.

5.1 Failure of Monolithic Models

  • Bias is entangled across dimensions

  • Reasoning paths are inseparable

  • Disagreements are invisible

  • Weighting decisions are implicit

5.2 GrantSwarm’s Agent Decomposition

Each agent is intentionally narrow, specialized, and independently verifiable.

1. Technical Merit Agent

Evaluates:

  • architectural soundness

  • technical feasibility

  • use of primitives

  • internal consistency

Output:

  • numeric score

  • structured critique

  • confidence level

2. Impact Agent

Evaluates:

  • real-world relevance

  • ecosystem fit

  • adoption likelihood

  • alignment with grant goals

Output:

  • score

  • impact narrative

  • target user clarity

3. Feasibility & Risk Agent

Evaluates:

  • execution risk

  • scope realism

  • missing dependencies

  • over-claims

Output:

  • risk flags

  • feasibility delta

  • recommendation severity

4. Meta-Agent (Aggregation & Conflict Resolver)

Responsibilities:

  • normalize scoring ranges

  • apply transparent weighting

  • surface inter-agent disagreement

  • flag submissions requiring human review

This agent never overwrites disagreement. It exposes it.

That’s critical.


6. End-to-End Execution Flow

Step 1: Submission Intake

  • Applicant submits proposal (docs, links, metadata)

  • Proposal is hashed immediately

  • Submission timestamp is sealed
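
The intake step above can be sketched as follows. This is a minimal illustration using Node’s built-in crypto module; the field names (`submissionId`, `proposalHash`, `sealedAt`) are illustrative, not the production schema:

```typescript
import { createHash } from "node:crypto";

interface SealedSubmission {
  submissionId: string; // content-derived ID (prefix of the proposal hash)
  proposalHash: string; // SHA-256 over the raw proposal bytes
  sealedAt: number;     // Unix timestamp captured at intake
}

// Hash the proposal the moment it arrives, before any other processing,
// so later stages can only ever reference this exact content.
function sealSubmission(proposalBytes: Buffer, now: number = Date.now()): SealedSubmission {
  const proposalHash = createHash("sha256").update(proposalBytes).digest("hex");
  return {
    submissionId: proposalHash.slice(0, 16),
    proposalHash,
    sealedAt: now,
  };
}
```

Because the ID is derived from the content hash, the same proposal always seals to the same identifier, which is what makes later replay audits meaningful.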

Step 2: Task Decomposition

  • Coordinator agent splits evaluation into agent-specific subtasks

  • Task assignments are recorded onchain

Step 3: Deterministic Agent Execution

  • Each agent executes in the Amadeus WASM runtime

  • Inputs are normalized

  • Evaluation logic is deterministic

  • Execution is metered via uPoW

Step 4: Verified Compute (Privacy Layer)

  • Proposal content is processed inside iExec TEE

  • Raw content never leaves the enclave

  • Only outputs + proofs are emitted

Step 5: State Proof Generation

For each agent:

  • execution hash

  • input hash

  • output hash

  • timestamp

These form a verifiable reasoning chain.
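
The per-agent records above chain naturally: each link commits to the hash of the previous link, so any post-hoc edit to an earlier step breaks verification. A minimal sketch (the `ReasoningLink` shape is an assumption, not the production format):

```typescript
import { createHash } from "node:crypto";

interface ReasoningLink {
  inputHash: string;
  executionHash: string;
  outputHash: string;
  timestamp: number;
  prevLinkHash: string; // hash of the previous link; "" for the first link
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// A link's hash covers every field, including the back-pointer.
function linkHash(l: ReasoningLink): string {
  return sha256([l.inputHash, l.executionHash, l.outputHash, l.timestamp, l.prevLinkHash].join("|"));
}

// The chain verifies only if every back-pointer matches the recomputed hash.
function verifyChain(chain: ReasoningLink[]): boolean {
  let prev = "";
  for (const link of chain) {
    if (link.prevLinkHash !== prev) return false;
    prev = linkHash(link);
  }
  return true;
}
```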

Step 6: Provenance Storage

  • Anonymized reasoning

  • scoring breakdown

  • state proofs

Stored permanently on Arweave.

Step 7: Final Output

Applicant and DAO receive:

  • final weighted score

  • per-agent scores

  • disagreement indicators

  • verifiable audit trail


7. Architecture (Textual Blueprint): Component-by-Component Explanation

--User / DAO Layer

DAO / Grant Admin

  • Defines:

    • scoring weights

    • minimum thresholds

    • risk tolerance

    • escalation rules

  • These parameters are:

    • versioned

    • hashed

    • applied transparently

This prevents silent rule changes mid-review.
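
Versioning and hashing the DAO parameters can be as simple as hashing a canonical serialization, so any mid-review change produces a different config hash. A sketch under assumed field names (`ReviewConfig` is illustrative):

```typescript
import { createHash } from "node:crypto";

interface ReviewConfig {
  version: number;
  weights: Record<string, number>; // e.g. { technical: 0.4, impact: 0.35, risk: 0.25 }
  minThreshold: number;
  escalationVariance: number; // disagreement level that triggers human review
}

// Canonicalize recursively with sorted keys, so semantically equal
// configs always hash identically regardless of key order.
function canonicalize(v: unknown): string {
  if (v === null || typeof v !== "object") return JSON.stringify(v);
  if (Array.isArray(v)) return "[" + v.map(canonicalize).join(",") + "]";
  const obj = v as Record<string, unknown>;
  return "{" + Object.keys(obj).sort()
    .map((k) => JSON.stringify(k) + ":" + canonicalize(obj[k]))
    .join(",") + "}";
}

function configHash(cfg: ReviewConfig): string {
  return createHash("sha256").update(canonicalize(cfg)).digest("hex");
}
```

Publishing the config hash before review starts is what makes a silent weight change detectable afterwards.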

-- Applicant Interface

  • Uploads proposal + metadata

  • Never interacts with agents directly

  • Receives:

    • final score

    • reasoning summary

    • verifiable proof bundle

This separation avoids manipulation attempts.

-- Submission Intake Module

Responsibilities

  • Hash proposal immediately

  • Generate submission ID

  • Timestamp and seal entry

  • Apply anonymization (remove names, links to identity)

This is where fairness begins.

-- Coordinator / Orchestrator (Critical Box)

This is the brain of the system.

What it does

  • Reads DAO configuration

  • Breaks evaluation into subtasks

  • Assigns tasks to agents

  • Schedules execution in parallel

  • Monitors agent failures

Why this part matters

  • Enables scalability

  • Prevents single-agent dominance

  • Enables replayability

Runs on Amadeus swarm coordination primitives.
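
The dispatch loop above can be sketched as parallel, failure-isolated agent execution: all agents run concurrently, and one agent failing never aborts the swarm. The agent names and `AgentResult` shape are assumptions for illustration:

```typescript
type AgentFn = (proposalHash: string) => Promise<number>;

interface AgentResult {
  agent: string;
  status: "ok" | "failed";
  score?: number;
}

// Run every agent in parallel over the same sealed proposal hash.
// A throwing agent is recorded as "failed" instead of crashing the batch.
async function dispatch(
  agents: Record<string, AgentFn>,
  proposalHash: string,
): Promise<AgentResult[]> {
  return Promise.all(
    Object.entries(agents).map(async ([agent, fn]) => {
      try {
        return { agent, status: "ok" as const, score: await fn(proposalHash) };
      } catch {
        return { agent, status: "failed" as const };
      }
    }),
  );
}
```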

-- Agent Swarm Execution Layer (Amadeus Core)

All agents:

  • Run in Amadeus WASM runtime

  • Use deterministic logic

  • Are metered via uPoW

  • Produce structured outputs

-- Technical Merit Agent

Evaluates:

  • system architecture

  • correctness

  • use of protocol primitives

  • technical depth

Outputs:

  • score (0–100)

  • critique tree

  • confidence score

-- Impact Agent

Evaluates:

  • problem relevance

  • ecosystem fit

  • user adoption potential

Consumes:

  • Oracle data (market size, usage stats)

Outputs:

  • impact score

  • justification summary

-- Feasibility & Risk Agent

Evaluates:

  • scope realism

  • dependency risks

  • over-promising indicators

Outputs:

  • risk flags

  • feasibility delta

  • warning severity

-- Meta-Agent (Aggregation Layer)

This agent never hides disagreements.

Responsibilities

  • Normalize scores across agents

  • Apply DAO-defined weights

  • Detect variance

  • Flag:

    • high disagreement

    • extreme scores

    • low confidence outputs

This is where bias becomes visible.
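
The aggregation logic above can be sketched as: take each agent's 0–100 score, apply the DAO weights, and flag the submission when the inter-agent spread exceeds a threshold. The threshold value and field names are illustrative assumptions:

```typescript
interface AgentScore { agent: string; score: number; } // score on a 0-100 scale

// weights are DAO-defined and should sum to 1,
// e.g. { technical: 0.4, impact: 0.35, risk: 0.25 }
function aggregate(
  scores: AgentScore[],
  weights: Record<string, number>,
  disagreementThreshold = 20, // max allowed spread before human review
) {
  const weighted = scores.reduce((acc, s) => acc + s.score * (weights[s.agent] ?? 0), 0);
  const values = scores.map((s) => s.score);
  const spread = Math.max(...values) - Math.min(...values);
  return {
    finalScore: Math.round(weighted * 100) / 100,
    perAgent: scores, // disagreement is exposed, never averaged away
    spread,
    needsHumanReview: spread > disagreementThreshold,
  };
}
```

Note that the per-agent scores travel with the final number, which is the "never hides disagreements" property in code.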

-- Verified Compute & Privacy Layer

iExec TEE Integration

  • Raw proposal content is processed inside TEE

  • Agents never see plaintext outside enclave

  • Only structured outputs exit

Guarantees

  • Confidential proposals

  • Verifiable execution

  • No data leakage

This solves the transparency vs privacy problem.

-- State Proof Generator (Trust Anchor)

For each agent execution:

  • input hash

  • execution hash

  • output hash

  • timestamp

These are bundled into a decision proof object.

This is what makes the system auditable.
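
Bundling the per-agent records into a single decision proof object might look like this; the `DecisionProof` shape is an illustration, not the actual format:

```typescript
import { createHash } from "node:crypto";

interface AgentProof {
  agent: string;
  inputHash: string;
  executionHash: string;
  outputHash: string;
  timestamp: number;
}

interface DecisionProof {
  submissionHash: string;
  configHash: string;       // the DAO weighting config in force for this review
  agentProofs: AgentProof[];
  bundleHash: string;       // commits to everything above
}

// The bundle hash covers the submission, the config, and every agent proof,
// so altering any single field invalidates the whole object.
function bundleDecision(
  submissionHash: string,
  configHash: string,
  agentProofs: AgentProof[],
): DecisionProof {
  const payload = JSON.stringify({ submissionHash, configHash, agentProofs });
  const bundleHash = createHash("sha256").update(payload).digest("hex");
  return { submissionHash, configHash, agentProofs, bundleHash };
}
```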

-- Provenance Storage (Arweave)

Stored permanently:

  • anonymized reasoning

  • score breakdowns

  • weighting config

  • state proofs

Not stored:

  • raw proposal text

  • private attachments

This targets the Best Provenance Architecture prize cleanly.

-- Output & Governance Layer

Final Output Interface

Displays:

  • final weighted score

  • per-agent scores

  • disagreement heatmap

  • proof links (Arweave)

Human-in-the-Loop Option

  • DAO reviewers intervene only when flagged

  • Overrides are logged and provable



8. Deep Amadeus Integration:

uPoW

  • Prevents Sybil agent execution

  • Measures real evaluation work

  • Enables cost-based pricing models

WASM Runtime

  • Guarantees deterministic reviews

  • Allows reproducible replays

  • Critical for auditability
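
Determinism is what makes replay audits meaningful: re-running the same evaluation over the same inputs must reproduce a byte-identical output. A toy check, where the scoring function is a stand-in for a real WASM agent:

```typescript
import { createHash } from "node:crypto";

// Stand-in for a deterministic agent: no randomness, no clocks, no I/O.
function evaluate(input: string): string {
  const len = input.length;
  return JSON.stringify({ score: len % 101, critique: `length=${len}` });
}

const outputHash = (input: string) =>
  createHash("sha256").update(evaluate(input)).digest("hex");

// A replay audit passes only if the recomputed output hash
// matches the hash recorded in the original state proof.
function replayMatches(input: string, recordedHash: string): boolean {
  return outputHash(input) === recordedHash;
}
```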

State Proofs

  • Cryptographic evidence that:

    • review occurred

    • logic was followed

    • outputs were not altered

Agent Identity & Memory

  • Persistent reviewer identity

  • Scoped memory across rounds

  • Enables longitudinal consistency analysis

Oracle Streams

  • Feed:

    • market data

    • ecosystem metrics

    • historical grant outcomes

Used by Impact and Risk agents.

Swarm Coordination

  • Dynamic task assignment

  • Parallelized evaluation

  • Failure isolation

This is not “compatible with Amadeus.” It is native to Amadeus.


9. Privacy & Trust Model

Why TEE Is Required

  • Grant proposals are sensitive IP

  • Public inference is unacceptable

  • Trust must be minimized

Trust Assumptions

  • TEE protects proposal content

  • Amadeus verifies execution integrity

  • Arweave guarantees immutability

This balances confidentiality with transparency.


10. Provenance Architecture (Why This Is Arweave-Worthy)

What’s stored:

  • submission hash

  • agent reasoning summaries

  • scoring weights

  • state proofs

  • final decision object

What’s not stored:

  • raw proposal content

  • private attachments

Result:

  • permanent auditability

  • zero IP leakage

  • future dispute resolution


11. Governance & Human-in-the-Loop

GrantSwarm does not replace humans.

It:

  • filters

  • ranks

  • explains

  • flags anomalies

Human reviewers:

  • intervene only where disagreement is high

  • can override, but overrides are logged

  • create a feedback loop for agent tuning


12. Monetization & Sustainability

Revenue Streams

  • per-review pricing

  • DAO subscriptions

  • enterprise grant tooling

  • premium analytics dashboards

Network Effects

  • better agents → better trust

  • more DAOs → richer data

  • richer data → stronger evaluations



13. Tradeoffs & Honest Limitations

Works Today

  • agent orchestration

  • deterministic logic

  • provenance tracking

  • private inference

Requires Future Evolution

  • full ZK inference

  • cross-protocol reviewer reputation

  • onchain dispute resolution

Calling this out increases credibility.


14. Why This Should Be Funded

From a protocol perspective, GrantSwarm:

  • drives verified compute usage

  • showcases swarm coordination

  • creates real demand for provenance

  • attracts DAOs and capital allocators


15. Proof of Arweave Usage & Verifiable Compute & Privacy Design (iExec TEE / zkVerify)

Architecture: We utilize a Stateless-to-Permanent Pipeline, decoupling private execution from public auditability.

A. Permanent Data Availability (Arweave & Irys)

  • Mechanism: Post-consensus, grant metadata is anchored to the Arweave Blockweave via Irys.

  • Audit Trail: The arweaveId acts as a permanent CID. The handleVerify function hydrates this data from the permaweb, ensuring Zero Historical Revisionism.

  • Live Evidence: https://gateway.irys.xyz/${results.arweaveId}
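
The verification path can be sketched as: fetch the JSON stored at `https://gateway.irys.xyz/<arweaveId>`, recompute its hash locally, and compare. Below is the pure comparison step, assuming the stored object carries a self-committing `bundleHash` field (an assumption about the schema, not confirmed by the repository):

```typescript
import { createHash } from "node:crypto";

interface StoredDecision {
  bundleHash: string; // hash committing to every other field of the object
  [field: string]: unknown;
}

// Recompute the hash over every field except bundleHash itself and compare.
// Any edit to the stored decision after upload makes this check fail.
function verifyStoredDecision(doc: StoredDecision): boolean {
  const { bundleHash, ...rest } = doc;
  const recomputed = createHash("sha256")
    .update(JSON.stringify(rest))
    .digest("hex");
  return recomputed === bundleHash;
}

// Gateway URL template used in the Live Evidence link above.
const gatewayUrl = (arweaveId: string) => `https://gateway.irys.xyz/${arweaveId}`;
```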


B. Verifiable Swarm Intelligence (Compute Integrity)

  • Deterministic Consensus: Logic is not "black-box." Scores are derived from a multi-agent swarm (Technical, Financial, Ecosystem).

  • Compute Trace: The agentBreakdown provides a verifiable derivation of the final consensusScore, preventing manual bias injection.

  • Integrity: We utilize deterministic compute logic to ensure consistent scoring across all evaluation cycles.

C. Privacy-First Execution (iExec TEE & Intel SGX)

  • Client-Side Privacy: Sensitive IP in formData is encrypted via iExec DataProtector before transmission.

  • Hardware Isolation: Decryption and AI inference occur exclusively within an Intel SGX Enclave. Administrators never see plain-text data.

  • Attestation: The system generates a MRENCLAVE report, providing cryptographic proof of secure execution.

  • Frontend Signal: addLog("Encrypting payload via iExec TEE DataProtector...");


Overall Code Overview


pitch deck:  https://drive.google.com/file/d/1KugosPO_JOJJ-H1rOkEWbW9Tk8ePrpNT/view?usp=drivesdk

live demo: https://grantswarm.vercel.app/

documentation: https://drive.google.com/file/d/1QfRnXtz8obxPB6_xqieMDQtA2QRRqdq9/view?usp=sharing

github url: https://github.com/Azubuike321/grant-swarm-system.git

Attachments