Week at a glance
- Home Journey — reframed around Sally as the user, not the subject
- Onboarding — pushed toward Sheikah aesthetic, usability gaps got clearer
- iOS/Android parity — real structural drift, not just skin
- Opal Focus — rerouted toward screenshot-driven inference
- OpalOS — boots in Android emulator, overlay loops block daily use
- Hermes Sovereign — Python 3.13 inside a SwiftUI app, no server needed
- Hermes Tabs — tab-per-conversation chat client, SSE streaming
01 · Home Journey
Sally stopped being a persona
Home Journey was a linear narrative — 14 days of Sally's life with Opal, told as a story. The problem: you'd read about Sally but never feel like you were her. So we killed the narrative frame and rebuilt it as Sally's actual phone. One suggestion card at the top, a few widgets below it. The hook is sleep — "Set your bedtime tonight" — because that's the one thing every parent actually wants to control first.
02 · Onboarding
Better soul. Clearer cracks.
The onboarding had three options: boring form, chat-based intake, or something weirder. We went weird — Sheikah direction, ancient-tech aesthetic, "I was born underground" as the opening line. It makes Opal feel singular. But when we ran Sally through it, the problem was obvious: 8 screens of mythology before any product value. Users don't have that patience. The aesthetic works. The structure needs to front-load utility and weave myth in alongside it, not before it.
What works
- Chat compresses boring intake into something tolerable
- Full-screen beats for permission asks and the "pact" moment
- Oracular framing makes Opal feel unlike any other app
What doesn't
- Myth is memorable but delays product value — users bail before they get it
- Family / kids device setup is the biggest missing flow
- 17 screens in one version is way too many
03 · iOS vs Android
Two different products, same name
We ran a Figma audit of both platforms side by side. Expected skin-level differences — colors, spacing, tokens. What we found: onboarding is substantially more complete on iOS, Autofocus barely exists on Android, and settings/rules/profile/stats all have structural mismatches that go beyond design into actual product decisions. Can't talk about "the Opal experience" when the platforms diverge this much.
Biggest gaps
- Onboarding and Autofocus significantly more mature on iOS
- Settings, Rules, Apps, Profile, Stats, Timer — structural mismatch
- Some are IA differences, not skin differences
So what
- Hard to talk about "the product" when platforms diverge this much
- Autofocus is the clearest parity watchpoint
04 · Opal Focus v2
The model should see the page
Opal Focus classifies content to decide what to block. The v1 approach was metadata-only — URLs, DOM text, heuristics. It works for obvious cases and fails on everything interesting (memes, branded content, context-dependent stuff). The v2 direction: screenshot the page, send it to a vision model, let it decide. The classifier is grok-4.20-beta via OpenRouter, running on a Cloudflare Worker. The pipeline includes blur preprocessing, meme detection, allowlists, and per-platform rules. Still early, but the direction is right — metadata heuristics will always have a ceiling that vision doesn't.
Running now
- Worker: opal-focus-v2-worker.anton-6cb.workers.dev
- Model: grok-4.20-beta via OpenRouter
- Pipeline: blur, meme detection, allowlist, pre-viewport, per-platform rules
Direction
- Distraction removal stays as the base layer
- LLM-native, screenshot-aware classification
- Vision over metadata heuristics
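A minimal sketch of the kind of request the worker could send to the vision model. The endpoint shape is OpenRouter's standard chat-completions API; the model slug, prompt wording, and JSON response contract here are assumptions for illustration, not the worker's actual code.

```python
import base64

# OpenRouter's chat-completions endpoint (OpenAI-compatible).
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_classification_request(screenshot_png: bytes, platform: str) -> dict:
    """Build a chat-completions payload asking a vision model whether a
    page screenshot is a distraction. Prompt and response contract are
    illustrative, not Opal's."""
    data_url = "data:image/png;base64," + base64.b64encode(screenshot_png).decode()
    return {
        "model": "x-ai/grok-4.20-beta",  # slug prefix is an assumption
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            f"Platform: {platform}. Is this screenshot a "
                            'distraction? Reply with JSON: {"block": '
                            'true|false, "reason": "..."}'
                        ),
                    },
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }
```

Per-platform rules would then just vary the prompt text (or post-process the model's verdict) per `platform`.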
05 · OpalOS Android
Boots. Doesn't hold.
OpalOS is the Android side of the vision — a system-level layer that sits on top of your phone and changes what you see based on a mode. Utility mode strips apps to essentials, Entertainment restores feeds, Focus blocks distractions, Sleep dims everything. It's controlled by an agent you chat with. The prototype runs in the Android emulator: four modes work, agent chat connects to OpenRouter with vision, and there's a gem orb UI. But the overlay keeps re-triggering when you reopen apps, the image picker isn't wired to the backend, and Java 17 was a prerequisite that cost a day to sort out. It proves the concept. It's not something you'd hand to someone yet.
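The mode system above is essentially a mode-to-policy table. A sketch of that idea, with illustrative app lists and policy fields that are assumptions, not OpalOS's real configuration:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    UTILITY = "utility"
    ENTERTAINMENT = "entertainment"
    FOCUS = "focus"
    SLEEP = "sleep"

@dataclass(frozen=True)
class Policy:
    allowed_apps: frozenset[str]  # empty set means "allow everything"
    feeds_enabled: bool
    dimmed: bool

# Illustrative policies, not OpalOS's actual tables.
POLICIES = {
    Mode.UTILITY: Policy(frozenset({"phone", "messages", "maps", "camera"}), False, False),
    Mode.ENTERTAINMENT: Policy(frozenset(), True, False),
    Mode.FOCUS: Policy(frozenset({"phone", "messages", "notes"}), False, False),
    Mode.SLEEP: Policy(frozenset({"phone", "clock"}), False, True),
}

def is_visible(mode: Mode, app: str) -> bool:
    """Would the launcher show this app in the given mode?"""
    policy = POLICIES[mode]
    return not policy.allowed_apps or app in policy.allowed_apps
```

The agent's job then reduces to picking a `Mode`; everything the user sees follows from the policy lookup.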
Working
- Builds, installs, launches in emulator
- Utility / Entertainment / Focus / Sleep modes
- Agent chat → OpenRouter, vision supported
Not working
- Overlay persistence loops
- Image backend exists, frontend picker doesn't
- Not frictionless enough for daily use
06 · Hermes Sovereign
Agent in your pocket, no server
Most AI apps are thin clients — they send your prompt to an API and stream back a response. Hermes Sovereign inverts that. The agent runs locally inside the app, powered by Python 3.13 bundled via the PythonAppleSupport xcframework. SwiftTerm provides the terminal UI. The Hermes agent source ships in the app bundle. A bridge script mediates between Swift and Python. The idea: you don't need internet for the agent to work. You don't need our server. The phone is the server.
Stack
- Python 3.13 via Python.xcframework
- SwiftTerm + custom keyboard bar
- Agent at Resources/EmbeddedPython/app/hermes-agent/
- Bridge: sovereign_hermes_bridge.py
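One plausible shape for the Swift-to-Python bridge is a line-delimited JSON protocol over stdin/stdout. This is a sketch under that assumption; the real protocol lives in sovereign_hermes_bridge.py and may differ, and the `handle` dispatch here is a toy stand-in for the actual agent.

```python
import json
import sys

def handle(request: dict) -> dict:
    """Toy request handler; real dispatch would call into hermes-agent."""
    if request.get("op") == "chat":
        return {"ok": True, "reply": f"echo: {request.get('text', '')}"}
    return {"ok": False, "error": "unknown op"}

def main() -> None:
    # Swift writes one JSON request per line to this process's stdin
    # and reads one JSON response per line back from stdout.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        try:
            response = handle(json.loads(line))
        except json.JSONDecodeError as exc:
            response = {"ok": False, "error": str(exc)}
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()  # flush so Swift sees each reply immediately

if __name__ == "__main__":
    main()
```

Line-delimited JSON keeps the Swift side trivial: one `readLine`, one `JSONDecoder` call per reply, no framing protocol to get wrong.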
SwiftUI · Python 3.13 · SwiftTerm · XcodeGen
This week
- SovereignBootstrap.swift spec for sandboxed workspace seeding
- vendor-hermes-minimal-deps.sh wires openai, anthropic, pydantic, rich, httpx — host validated
- Blockers: pydantic_core, jiter, pyyaml, markupsafe — macOS .so files, can't ship on iOS
- Builds and runs in iOS simulator
Open questions
- site-packages-ios-sim/ and site-packages-ios-device/ are empty placeholder dirs
- No iOS-compiled Python packages yet — host validation only
- Bootstrap wiring not connected to ProjectSovereignApp.swift
07 · Hermes Tabs
Safari, but each tab talks to the agent
Sovereign is the ambitious path. Hermes Tabs is the pragmatic one. A native SwiftUI app where each tab is a separate conversation with Hermes. You tap +, get a new chat. Responses stream in via SSE from the public Hermes proxy at opal-agent.decemberclaw.com. State persists locally in Application Support. It's simple — intentionally so. The tradeoff: it depends on the server being up and the API being available. But it's a real iOS app that talks to the agent today, with none of the Python packaging complexity that Sovereign carries.
Architecture
- SwiftUI, iOS 17+
- POST /v1/chat/completions with SSE
- State: Application Support/HermesTabs/state.json
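The streaming path can be sketched as a plain SSE reader over the chat-completions endpoint. The URL comes from the post; the model name, headers, and exact delta schema are assumptions, mirroring the common OpenAI-style stream format.

```python
import json
import urllib.request

HERMES_URL = "https://opal-agent.decemberclaw.com/v1/chat/completions"

def parse_sse_line(line: str):
    """Return the content delta carried by one SSE line, or None for
    comments, keep-alives, and the [DONE] sentinel."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    delta = json.loads(payload)["choices"][0]["delta"]
    return delta.get("content")

def stream_chat(prompt: str):
    """Yield content chunks from the proxy as they arrive."""
    body = json.dumps({
        "model": "hermes",  # placeholder model name
        "stream": True,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        HERMES_URL,
        data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "text/event-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # iterate the response line by line
            chunk = parse_sse_line(raw.decode("utf-8"))
            if chunk is not None:
                yield chunk
```

On the app side, the SwiftUI equivalent is `URLSession.bytes(for:)` plus the same line parse; the logic is identical.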
SwiftUI · SSE · iOS 17
Pain points
- Draft text bound to @Published triggered a view update on every keystroke — jank. Fix: keep the draft in local @State inside the composer
- Composer too tall — TextEditor → TextField .lineLimit(1...4)
- Tab close button is detached, not Safari-like integrated X
- Tabs are local-only. Need /v1/responses with conversation param
08 · Throughlines
What stuck
01 · Sally is the user, not the persona.
Changed both Home Journey and how we critique the onboarding. When you design for "a user like Sally" you get generic. When you design for Sally, you get specific.
02 · Myth works when utility is already clear.
Sheikah aesthetic is good. It just can't be the only thing people see for 8 screens before any product value.
03 · Platform drift is product drift.
iOS and Android aren't skin-different. They're decision-different. That matters when you're trying to ship one product.
04 · Autofocus should see the page.
Metadata heuristics have a ceiling. Screenshot-aware classification doesn't. The infrastructure to do it at speed is the hard part.
05 · Two iOS paths, both real.
Sovereign = offline agent, full terminal, no server. Tabs = thin client, API-backed. Different tradeoffs, neither is a toy.
06 · OpalOS proves the concept. That's it.
Boots, chats, shows four modes. Overlay loops and missing image wiring mean it stays a prototype for now.