Your two questions
How are articles selected for the Commute Player? The player calls /api/parking-lot-list filtered to Retrieved — articles whose Full Text has been pulled by Cowork's nightly run. Anything still New or Retrieve Tomorrow doesn't appear yet (no body text to read).
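The selection rule can be sketched as a simple filter. The route path comes from the source; the row shape (`{ id, status, fullText }`) is an assumption, not the real schema:

```javascript
// Illustrative: how /api/parking-lot-list?status=Retrieved selects rows.
// Row shape is assumed; only rows whose Full Text has been pulled qualify.
function listQueue(rows) {
  return rows.filter((r) => r.status === "Retrieved" && r.fullText);
}

const parkingLot = [
  { id: 1, status: "New" },
  { id: 2, status: "Retrieve Tomorrow" },
  { id: 3, status: "Retrieved", fullText: "article body…" },
];
// Only row 3 is playable; New / Retrieve Tomorrow rows have no body text yet.
console.log(listQueue(parkingLot).map((r) => r.id)); // → [ 3 ]
```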
Does listening remove the article from the list? Yes — when playback completes, the player calls markListened, which flips the row from Retrieved → Listened. The row stays in the Reading Parking Lot; it just drops out of the Retrieved queue the Commute Player reads. To commit it to the corpus, flip it from Listened → Ingest in Reading Review (or to Skip to drop it).
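The status lifecycle running through this answer can be sketched as a transition table. Status names are from the source; the guard function and the exact set of legal flips are illustrative:

```javascript
// Allowed status transitions in the Reading Parking Lot (per the flow above).
// The table is an assumption assembled from the stages this document names.
const TRANSITIONS = {
  "New": ["Retrieve Tomorrow", "Skip"],
  "Retrieve Tomorrow": ["Retrieved"],          // Cowork nightly retrieval
  "Retrieved": ["Listened", "Ingest", "Skip"], // player or Reading Review
  "Listened": ["Ingest", "Skip"],              // commit to corpus, or drop
  "Ingest": ["Done"],                          // Phase 1 pipeline completes
};

function canFlip(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}

console.log(canFlip("Retrieved", "Listened")); // → true (playback complete)
console.log(canFlip("New", "Retrieved"));      // → false (must be queued first)
```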
SpaceNews emails
Source
First Up, Military Space, China Report, Editor's Choice, Opinions, etc., arriving at spacesecuritycooperation@gmail.com.
Email attachments / forwards
Source
PDFs, DOCX, PPTX, XLSX. Forwarded reports, doctrine, AMTI bulletins.
Manual newsletter URL
Source
A URL pasted into the Manual Newsletter Parking workflow. §8.4a.16 PlanC-A · Up60pCWe.
↓
n8n Step 3 — Email Ingest + Smart Inbox Handler
Workflow
id: BNBjoqXt · §8.4a.7
Gmail trigger (hourly) → MIME router → gpt-4o-mini LLM classifier routes each email to ingest / review / skip / spam. Conservative default on parse failure (§2.18). Idempotency dedupe on email_message_id.
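The routing decision can be sketched as below, assuming the classifier returns a JSON label. Treating review as the conservative default on parse failure is an assumption here (the source only says the default is conservative, §2.18):

```javascript
const ROUTES = new Set(["ingest", "review", "skip", "spam"]);

// Parse the LLM classifier's reply; fall back conservatively on any failure.
// Assumption: "review" is the conservative default (a human looks at it).
function routeEmail(llmReply) {
  try {
    const { label } = JSON.parse(llmReply);
    if (ROUTES.has(label)) return label;
  } catch (_) {
    // malformed JSON falls through to the conservative default
  }
  return "review";
}

console.log(routeEmail('{"label":"ingest"}')); // → ingest
console.log(routeEmail("not json at all"));    // → review
```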
↓ branches by classification
Reading Parking Lot
Notion DB
id: f3a2418b
Newsletter items land here as New with title, URL, Source Newsletter, and a Priority guess.
You triage in the Triage UI (§8.4a.15) or SKR Workspace v2 (§8.4a.16): set Domain, Priority, and flip status to Retrieve Tomorrow for items you want to read.
Direct corpus ingest
SKR + Pinecone
Email attachments classified as ingest by the smart inbox handler skip the parking lot entirely — they go straight into SKR + Pinecone via the Phase 1 ingest pipeline (Y0CCkayn).
This is the path for PDFs, doctrine, and direct article forwards. Never enters the Commute queue.
↓ overnight
Cowork nightly retrieval
Scheduled task
§8.4a.11 W3
Cowork (authenticated browser session) walks each Retrieve Tomorrow row, pulls the article body from spacenews.com, and writes Full Text + Retrieved At back to the row. Solves the paywall.
Row flips to Retrieved. A closure summary lands in a dated child page of PROJECT_LOG.
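The nightly pass amounts to a loop over Retrieve Tomorrow rows. In this sketch `fetchBody()` stands in for Cowork's authenticated browser pull from spacenews.com, and updated rows are returned rather than written back to Notion so the logic is testable in isolation:

```javascript
// Sketch of the nightly retrieval pass (collaborators are injected stand-ins).
function nightlyRetrieve(rows, fetchBody, now = () => new Date().toISOString()) {
  return rows
    .filter((r) => r.status === "Retrieve Tomorrow")
    .map((r) => ({
      ...r,
      fullText: fetchBody(r.url),   // article body pulled behind the paywall
      retrievedAt: now(),           // Retrieved At column
      status: "Retrieved",          // row flips to Retrieved
    }));
}

const updated = nightlyRetrieve(
  [
    { id: 1, status: "Retrieve Tomorrow", url: "https://spacenews.com/a" },
    { id: 2, status: "New", url: "https://spacenews.com/b" },
  ],
  () => "…body…",
  () => "2025-01-01T05:00:00Z"
);
console.log(updated.length, updated[0].status); // → 1 Retrieved
```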
↓ two consumers read Retrieved rows
Commute Player
iPhone PWA
§8.4a.19 v1 LANDED · v2 in W5
commute-api.js calls three Worker routes:
• listQueue → /api/parking-lot-list?status=Retrieved
• getArticle → full body for the current item
• markListened → PATCH Status=Listened on playback complete
v1: Web Speech API TTS · Wake Lock · MediaSession · AirPods lock-screen controls · playlist cache + bookmark.
v2 (W5): ElevenLabs TTS via Worker proxy · dual cache (IndexedDB → R2) · 8 voice presets · 3 themes.
On article finish: auto-advance to next Retrieved row, mark prior as Listened — which drops it from this queue.
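The auto-advance step can be sketched as one function over the cached playlist. Route behavior (`markListened` → PATCH Status=Listened) is from the source; the queue shape and function signature are assumptions:

```javascript
// Sketch of the player's advance step: mark the finished row Listened,
// then hand back the next Retrieved row. markListened() stands in for the
// PATCH the real client sends.
function advance(queue, currentIndex, markListened) {
  const finished = queue[currentIndex];
  markListened(finished.id);        // PATCH Status=Listened
  finished.status = "Listened";     // drops it from the Retrieved queue
  const next = queue
    .slice(currentIndex + 1)
    .find((r) => r.status === "Retrieved");
  return next ?? null;              // null → end of the commute queue
}

const marked = [];
const queue = [
  { id: 1, status: "Retrieved" },
  { id: 2, status: "Retrieved" },
];
const next = advance(queue, 0, (id) => marked.push(id));
console.log(next.id, marked); // → 2 [ 1 ]
```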
Reading Review / SKR Workspace
Desktop UI
§8.4a.14 / §8.4a.16
You open a Retrieved or Listened row in Reading Review. Edit Reframe, Domain, Topics, Source Reliability, Notes inline.
Optional: open the Research Session chat panel (§8.4a.17 W7) for AI-grounded reading — corpus auto-queried for related chunks, four inline capture affordances active (Research Note / Open Question / Linked Source / Insight).
Press Submit & Ingest → row flips to Ingest.
↓ Status=Ingest fires the pipeline
n8n Phase 1 — Ingest Item
Workflow
id: Y0CCkayn
Polls parking-lot rows with Status=Ingest. For each: metadata → create SKR page (with Reframe up top) → chunk → embed (OpenAI) → Pinecone upsert. Every step is logged to the Ingestion Log; failures surface the failing stage in the Ingestion Health dashboard.
Row flips to Done when the SKR page + Pinecone vectors are committed.
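The per-item stage order can be sketched with injected stand-ins. Stage names come from the source; the naive fixed-size chunker and all collaborator signatures are illustrative, not the real implementation:

```javascript
// Naive fixed-size chunker — a stand-in for the real chunking step.
function chunk(text, size = 800) {
  const out = [];
  for (let i = 0; i < text.length; i += size) out.push(text.slice(i, i + size));
  return out;
}

// Per-item ingest: create the SKR page first (Reframe up top), then
// chunk → embed → upsert. Collaborators are injected fakes here.
function ingestItem(row, { createSkrPage, embed, upsert, log }) {
  const pageId = createSkrPage({ reframe: row.reframe, body: row.fullText });
  log("skr-page");
  const vectors = chunk(row.fullText).map((text, i) => ({
    id: `${pageId}#${i}`,
    values: embed(text),
    metadata: { skrPageId: pageId, text }, // provenance: every chunk → its SKR page
  }));
  upsert(vectors);
  log("upsert");
  return { ...row, status: "Done" }; // flips once page + vectors are committed
}

const committed = ingestItem(
  { reframe: "why it matters", fullText: "x".repeat(1000), status: "Ingest" },
  {
    createSkrPage: () => "skr-1",
    embed: (text) => [text.length], // fake embedding
    upsert: () => {},
    log: () => {},
  }
);
console.log(committed.status); // → Done
```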
↓
SKR (Source Knowledge Repository)
Notion DB
The article body, your Reframe, domain/topic tags, source reliability — the canonical record. Linked from the Ingestion Log row that created it.
Pinecone
Vector store
Chunks embedded with OpenAI, queryable by the Phase 2 Query workflow (6g1FBLIe). Provenance retained — each chunk carries its SKR page ID.
↓
Corpus Query (UC2)
Workflow + UI
Natural-language query → embed → Pinecone search → Notion hydrate → ranked chunks with source provenance. The source of every answer is the SKR rows your articles became.
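The query path can be sketched end-to-end with the same injected stand-ins. Stage order is from the source; ranking by raw similarity score and the hit shape are assumptions:

```javascript
// Sketch of the UC2 path: embed the question, search the vector store,
// hydrate each hit from Notion, return ranked chunks with provenance.
function corpusQuery(question, { embed, search, hydrate }) {
  return search(embed(question))          // assumed hit shape: { score, skrPageId }
    .sort((a, b) => b.score - a.score)    // assumption: rank by raw score
    .map((hit) => ({
      ...hydrate(hit.skrPageId),
      score: hit.score,
      source: hit.skrPageId,              // provenance back to the SKR page
    }));
}

const answers = corpusQuery("sample question", {
  embed: () => [0.1, 0.2],
  search: () => [
    { score: 0.71, skrPageId: "skr-2" },
    { score: 0.93, skrPageId: "skr-1" },
  ],
  hydrate: (id) => ({ title: `page ${id}` }),
});
console.log(answers[0].source); // → skr-1
```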
Research Session chat (UC1/UC2/UC4)
CurationChat
Worker /api/chat proxies Anthropic Claude with corpus chunks pre-loaded. Captures during chat → Insight Ledger (Tier 2) / Research Notes (Tier 3) / Open Questions (Tier-3-pending).