t81-foundation

CanonFS Model Load Example

This is the smallest contributor-facing example of the CanonFS-backed --weights-model lane.

It shows how to:

  1. build the example helpers and the t81 CLI
  2. register a .t81w model blob in CanonFS
  3. run CanonFS-backed model loads under allow and deny policies

Build

From the repo root:

cmake -S . -B build -G Ninja -DT81_BUILD_EXAMPLES=ON
cmake --build build --target \
  t81 \
  t81_make_demo_model \
  t81_make_answer_fixed_demo \
  t81_make_classify_fixed_demo \
  t81_make_route_fixed_demo \
  t81_make_assess_fixed_demo \
  t81_make_guarded_llama_demo \
  t81_make_degraded_llama_demo \
  t81_make_demo_safetensors \
  t81_make_demo_float_safetensors

Files

Healthy AI Probe Path

This is the smallest healthy t81 ai inference run lane in the repo. It does not exercise the bounded logits/decode path, so it stays in a simple ready posture instead of falling into degraded evidence shaping.

tmp_root="$(mktemp -d)"
model_path="$tmp_root/ready-demo.t81w"

build/t81_make_demo_model "$model_path"
build/t81 ai inference run \
  --model ready-demo \
  --model-file "$model_path" \
  --mode strict_deterministic \
  --prompt hello

Expected result:

The same path is also wrapped in:

bash examples/ai-and-inference/model-load-canonfs/run_ready_ai_probe.sh

Assess-Fixed OS-Object Chain

This is the current canonical result-producing AI example in the repo.

The current bounded composition family for this object model is listed in AI OS-Object Chain Catalog.

It shows one narrow governed chain:

  1. assess-fixed runs in the strict deterministic lane
  2. the AI task stores a canonical result artifact plus provenance
  3. a typed downstream host-action record is created and stored
  4. a final bundle object is created and stored

The important object for this chain is the final bundle, not the intermediate task result or downstream record.

Run:

bash examples/ai-and-inference/model-load-canonfs/run_assess_fixed_host_action.sh

What the demo persists:

Canonical object roles:

Expected end state:

If you want the smallest external-consumer-shaped example for that bundle, run:

bash examples/ai-and-inference/model-load-canonfs/run_assess_fixed_bundle_consumer.sh

That script starts from the canonical bundle first, checks its schema, and only then follows record_ref and action_ref as described in AI OS-Object Bundle Consumption Contract.
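The bundle-first order can be sketched against a mocked bundle object. The JSON shape and the schema value here are illustrative assumptions; only the field names record_ref and action_ref come from the contract above.

```shell
# Mocked bundle object; the schema string and overall shape are hypothetical.
bundle='{"schema":"ai.bundle.demo.v1","record_ref":"sha3-512:rec","action_ref":"sha3-512:act"}'
refs="$(printf '%s' "$bundle" | python3 -c '
import sys, json
b = json.load(sys.stdin)
# 1. check the bundle schema first ...
assert b["schema"].startswith("ai.bundle"), "unexpected bundle schema"
# 2. ... and only then follow record_ref and action_ref
print(b["record_ref"])
print(b["action_ref"])
')"
printf '%s\n' "$refs"
```

The point is the ordering: the consumer never dereferences a ref before the schema check passes.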

If you want one concrete integration example for that same bundle, run:

bash examples/ai-and-inference/model-load-canonfs/run_assess_fixed_host_action_executor_integration.sh

That script treats the bundle as the handoff object into a host-side executor: it verifies the bundle schema, reads the typed host-action record, dereferences the canonical action artifact, and then materializes the selected host action under a separate executor output root.

If you already have a bundle_ref and want a small normalized summary projection for the admitted family, run:

bash examples/ai-and-inference/model-load-canonfs/summarize_ai_bundle.sh \
  "<bundle_ref>" \
  "<canonfs_root>"

That helper emits a stable text summary with:

Route-Fixed Path Selection Chain

This is the second bounded composition using the same typed object pipeline.

It shows one narrow route-selection chain:

  1. route-fixed runs in the strict deterministic lane
  2. the AI task stores a canonical route artifact plus provenance
  3. a typed downstream path-selection record is created and stored
  4. a final bundle object is created and stored

As in the assess-fixed chain, the important object is the final bundle rather than the intermediate task result or downstream record.

Run:

bash examples/ai-and-inference/model-load-canonfs/run_route_fixed_path_selection.sh

Expected end state:

If you want the same bundle-first consumer path for this family member, run:

bash examples/ai-and-inference/model-load-canonfs/run_route_fixed_bundle_consumer.sh

That script starts from the canonical bundle first, checks its schema, and only then follows record_ref and action_ref to recover the typed path-selection record.

Classify-Fixed Rule Selection Chain

This is the third bounded composition using the same typed object pipeline.

It shows one narrow rule-selection chain:

  1. classify-fixed runs in the strict deterministic lane
  2. the AI task stores a canonical label artifact plus provenance
  3. a typed downstream rule-selection record is created and stored
  4. a final bundle object is created and stored

As in the assess-fixed and route-fixed chains, the important object is the final bundle rather than the intermediate task result or downstream record.

Run:

bash examples/ai-and-inference/model-load-canonfs/run_classify_fixed_rule_selection.sh

Expected end state:

If you want the same bundle-first consumer path for this family member, run:

bash examples/ai-and-inference/model-load-canonfs/run_classify_fixed_bundle_consumer.sh

That script starts from the canonical bundle first, checks its schema, and only then follows record_ref and action_ref to recover the typed rule-selection record.

Guarded AI Probe Path

This is the checked-in guarded example. It uses the real tiny Hugging Face Llama artifact and lands in the guarded envelope: bounded decode stays weak, but it does not exhaust into degraded mode.

bash examples/ai-and-inference/model-load-canonfs/run_guarded_ai_probe.sh

Expected result:

Forward-State AI Probe Path

This is the smallest checked-in multi-step forward-state example. It runs the same tiny synthetic Llama-shaped model for four decode steps and proves a bounded carried hidden-tensor path, not just row-derived forward-state summaries. Later decode steps reimport a bounded hidden tensor through the compiled tensor-pool path, expose bounded q/k signature state, and carry a combined bounded architecture-state object through the ready envelope. The fourth decode step takes a deeper architecture-state-led control path instead of reusing the mode from step three; when that happens, the top-level readiness/health envelope upgrades to bounded_deep_architecture_state_probe.v1. At the current bounded 4-step ceiling, the run ends with termination_reason: "deep_architecture_state_horizon_reached", and the payload reports bounded_horizon_steps: 4, bounded_horizon_reached: true, bounded_horizon_remaining: 0, bounded_horizon_utilization: 1, architecture_state_summary.deep_feedback_steps: 1, and architecture_state_summary.utilization: 0.5.
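The horizon fields above appear arithmetically related; a minimal sketch, assuming remaining = ceiling - steps and utilization = steps / ceiling (inferred from the reported values, not confirmed by a payload schema):

```shell
# At the bounded 4-step ceiling with 4 steps taken, the assumed relations
# reproduce the reported values.
ceiling=4
steps=4
remaining=$(( ceiling - steps ))
utilization=$(awk -v s="$steps" -v c="$ceiling" 'BEGIN { print s / c }')
echo "bounded_horizon_steps: $steps"
echo "bounded_horizon_remaining: $remaining"
echo "bounded_horizon_utilization: $utilization"
```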

MAX_TOKENS=4 bash examples/ai-and-inference/model-load-canonfs/run_forward_state_ai_probe.sh

Expected result:

Degraded AI Probe Path

This is the smallest one-command degraded rerun path. It uses a checked-in synthetic Llama-shaped helper with an intentionally undersized embedding table, so the decode probe becomes unavailable and the lane drops into the conservative degraded posture.

bash examples/ai-and-inference/model-load-canonfs/run_degraded_ai_probe.sh

Expected result:

Real Weights Import Path

This is the smallest source-format example in the repo. It uses a generated SafeTensors file, converts it to .t81w through the real CLI, and then follows the same CanonFS-backed model load flow as the section below.

tmp_root="$(mktemp -d)"
source_path="$tmp_root/demo-model.safetensors"
model_path="$tmp_root/demo-model.t81w"

build/t81_make_demo_safetensors "$source_path"
build/t81 weights import "$source_path" -o "$model_path"

Expected result:

After that, continue with the CanonFS flow below using model_path="$model_path".

Tiny T3_K GGUF Path

This is the smallest in-repo happy path for the GGUF lane.

tmp_root="$(mktemp -d)"
float_source="$tmp_root/demo-model-f32.safetensors"
gguf_path="$tmp_root/demo-model.t3k.gguf"
model_path="$tmp_root/demo-model-from-gguf.t81w"

build/t81_make_demo_float_safetensors "$float_source"
build/t81 weights quantize "$float_source" --to-gguf "$gguf_path"
build/t81 weights import "$gguf_path" --format gguf -o "$model_path"

Expected result:

This lane avoids the bridge-gated non-T3_K GGUF path by generating a tiny native T3_K GGUF in-repo.

Run

From the repo root:

tmp_root="$(mktemp -d)"
canon_root="$tmp_root/.t81_canonfs"
model_path="$tmp_root/demo-model.t81w"
allow_policy="$tmp_root/allow.apl"
deny_policy="$tmp_root/deny.apl"

mkdir -p "$canon_root"

helper_output="$(build/t81_make_demo_model "$model_path")"
printf '%s\n' "$helper_output"
model_checksum="$(printf '%s\n' "$helper_output" | sed -n 's/^sha3-512=//p')"
model_hash="$(build/t81 canonfs put-file "$model_path" --canonfs-root "$canon_root")"
model_hash="${model_hash//$'\n'/}"
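The two extraction steps above can be exercised on mocked output; the helper-output lines here are hypothetical, the sed pattern and newline-stripping expansion are the same, and the demo_ names avoid clobbering the real variables.

```shell
# Hypothetical helper output standing in for build/t81_make_demo_model.
demo_output='wrote /tmp/ready-demo.t81w
sha3-512=abc123'
demo_checksum="$(printf '%s\n' "$demo_output" | sed -n 's/^sha3-512=//p')"
demo_hash=$'deadbeef\n'
demo_hash="${demo_hash//$'\n'/}"   # strip the trailing newline, as above
echo "$demo_checksum"
echo "$demo_hash"
```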

Use the emitted sha3-512= checksum in the allow policy:

cat > "$allow_policy" <<'EOF'
(policy
  (tier 1)
  (allowed-ternary-model-hashes ["sha3-512:MODEL_CHECKSUM"]))
EOF

perl -0pi -e "s/MODEL_CHECKSUM/$model_checksum/g" "$allow_policy"

cat > "$deny_policy" <<'EOF'
(policy
  (tier 1)
  (allowed-ternary-model-hashes ["sha3-512:cafebabe"]))
EOF

Now run the CanonFS-backed model load path:

export T81_CANONFS_ROOT="$canon_root"

build/t81 code run \
  tests/fixtures/t81lang_std_tensor/03_matmul_weights.t81 \
  --weights-model "$model_hash"

Expected result:

Allow path:

build/t81 code run \
  tests/fixtures/t81lang_std_tensor/03_matmul_weights.t81 \
  --weights-model "$model_hash" \
  --policy "$allow_policy"

Expected result:

Deny path:

build/t81 code run \
  tests/fixtures/t81lang_std_tensor/03_matmul_weights.t81 \
  --weights-model "$model_hash" \
  --policy "$deny_policy"

Expected result:

Real Hugging Face Tiny Model Path

This is the smallest real external model flow currently validated in the repo.

One-command runner:

examples/ai-and-inference/model-load-canonfs/run_real_hf_tiny_model.sh

The script reuses the existing tiny model under models/tiny-random-llama/ if present and only falls back to hf download when the files are missing.
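The reuse-or-fetch guard can be sketched as follows, assuming the script checks for the three files passed to hf download below; with a fresh temp path the guard reports the fetch branch.

```shell
# Fresh temp path, so the files are guaranteed missing in this sketch.
demo_model_dir="$(mktemp -d)/tiny-random-llama"
demo_missing=0
for f in config.json tokenizer.json model.safetensors; do
  [ -f "$demo_model_dir/$f" ] || demo_missing=1
done
if [ "$demo_missing" -eq 1 ]; then
  echo "fetch: run hf download"
else
  echo "reuse: $demo_model_dir"
fi
```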

Prerequisite:

Download the model:

mkdir -p models

hf download \
  hf-internal-testing/tiny-random-LlamaForCausalLM \
  config.json tokenizer.json model.safetensors \
  --local-dir models/tiny-random-llama

Import it into .t81w:

build/t81 weights import \
  models/tiny-random-llama/model.safetensors \
  -o /tmp/tiny-random-llama.t81w

Expected result:

Register it in CanonFS and run a real tensor operation under policy:

tmp_root="$(mktemp -d)"
canon_root="$tmp_root/.t81_canonfs"
program_path="$tmp_root/matmul_real_tensor.t81"
allow_policy="$tmp_root/allow.apl"
deny_policy="$tmp_root/deny.apl"

mkdir -p "$canon_root"

model_hash="$(build/t81 canonfs put-file /tmp/tiny-random-llama.t81w --canonfs-root "$canon_root")"
model_hash="${model_hash//$'\n'/}"
model_checksum="$(build/t81 weights info /tmp/tiny-random-llama.t81w --json | python3 -c 'import sys, json; print(json.load(sys.stdin)["checksum_sha3_512"])')"
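The JSON extraction above can be checked against a mocked weights info --json payload (the real command needs the built CLI and an imported model); only the checksum_sha3_512 field is mocked, with a placeholder value.

```shell
demo_json='{"checksum_sha3_512":"abc123"}'
demo_checksum="$(printf '%s' "$demo_json" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["checksum_sha3_512"])')"
echo "$demo_checksum"
```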

cat > "$program_path" <<'EOF'
fn main() -> i32 {
  let q: i32 = std.tensor.load("model.layers.0.self_attn.q_proj.weight");
  let k: i32 = std.tensor.load("model.layers.0.self_attn.k_proj.weight");
  let out: Tensor = std.tensor.matmul(q, k);
  let _ = out;
  print(q);
  return 0;
}
EOF

cat > "$allow_policy" <<EOF
(policy
  (tier 1)
  (allowed-ternary-model-hashes ["sha3-512:$model_checksum"]))
EOF

cat > "$deny_policy" <<'EOF'
(policy
  (tier 1)
  (allowed-ternary-model-hashes ["sha3-512:cafebabe"]))
EOF
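The two policy heredocs above differ deliberately: the allow policy uses an unquoted EOF so $model_checksum expands inline, while the deny policy quotes 'EOF' and stays literal. A minimal contrast with a placeholder value:

```shell
demo_val="abc123"
expanded="$(cat <<EOF
sha3-512:$demo_val
EOF
)"
literal="$(cat <<'EOF'
sha3-512:$demo_val
EOF
)"
echo "$expanded"   # variable expanded
echo "$literal"    # left literal
```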

export T81_CANONFS_ROOT="$canon_root"

Allow path:

build/t81 code run \
  "$program_path" \
  --weights-model "$model_hash" \
  --policy "$allow_policy"

Expected result:

Deny path:

build/t81 code run \
  "$program_path" \
  --weights-model "$model_hash" \
  --policy "$deny_policy"

Expected result:

Notes