wiki content export 2026-03-24

This commit is contained in:
Jennie Robinson Faber 2026-03-24 09:03:06 +00:00
parent 8549cb0252
commit f5a69b8683
97 changed files with 4918 additions and 393 deletions

---
title: '0: Kickoff & Onboarding'
collection: Cooperative Foundations
path: 'Cooperative Foundations/Peer Support Playbook/0: Kickoff & Onboarding'
parentDocument: Peer Support Playbook
outlineId: 3cc018cf-bd88-469e-9723-1c195f1484ea
updatedAt: '2026-03-02T14:11:56.944Z'
createdBy: Jennie R.F.
---
## **What happens in session**
This is the full cohort's orientation to the program. Participants do introductions, learn about the program structure, build initial community agreements, and get the Power Flower homework.
:::info
A theme we want to emphasize (based on feedback from Cohort 5) is: "friction is part of the work." It's to be expected, and is not something to fear or avoid. 
:::
### :eyes: **Your role during session**
* You're introduced and matched with your studio
* Observe your studio during introductions - who talks, who doesn't, what pain points do they talk about?
* Participate in community agreements drafting - you are ***part of the community!***
### **👆Your role after session**
* Connect and chat with your studio in their Slack channel(s)
* Make sure they understand the Power Flower homework (especially that it is a private, individual reflection, and no one else will see it)
* Note any first impressions to share at the PS check-in
### :triangular_flag_on_post: **Red flags to watch for**
* One person from the studio dominates introductions or positions themselves as the main character
* Team members who seem checked out already
### :hammer_and_wrench: **Tools introduced**
* Power Flower (homework, private)
* __[Community agreements](https://publish.obsidian.md/baby-ghosts-corp-docs/Public/Policies/Loving+Justice)__ (Miro, collective)
* __[Loving Justice](https://publish.obsidian.md/baby-ghosts-corp-docs/Public/Policies/Loving+Justice)__ framework

---
title: '1: Coop Principles & Power'
collection: Cooperative Foundations
path: 'Cooperative Foundations/Peer Support Playbook/1: Coop Principles & Power'
parentDocument: Peer Support Playbook
outlineId: 835d4a9f-a45c-4880-a739-e5c74e41e772
updatedAt: '2026-03-02T17:43:21.809Z'
createdBy: Jennie R.F.
---
## **What happens in session**
In this session, we cover cooperative history and lineages, crediting Global South, Indigenous, Black, and women's traditions, not just Rochdale. We also review the 7 ICA Principles.
The theme is moving from principles to personal values.
:::tip
**Homework assigned:** individual journaling, team values map (with PS), and individual prep for The Talk (Session 2).
:::
### :eyes: **Your role during session**
* Observe small group activity (cooperative lineage sharing) - note whose stories are shared
* Listen for how studios talk about values - vague or specific?
## **This week's Studio Support Meeting: Values Mapping**
### **📚 Materials**
* Studio Miro board with Values Mapping template
* 7 Principles reference
### **👆 Before the session**
* Confirm everyone completed their individual journaling (Session 1 homework)
* Ensure the studio Miro board has the template
* Have the 7 Principles visible (on the board or screen-shared)
### **🌊 Session flow**
#### **Check-in (5 min)**
#### **Individual sharing (15-20 min)**
Each person shares 3-5 values from their individual reflection.
Prompts:
* "What values came up when you did the journaling?"
* "You don't need to explain or justify."
As they share: each person adds values to the Miro board (stickies in their colour/section). No discussion - just capture.
Watch for:
* Someone dominating or going first every time.
* Someone staying quiet - invite them in gently.
* Values that sound the same but might mean different things to different people.
#### **Noticing patterns (10-15 min)**
Look at the board together.
Prompts:
* "What do you notice?"
* "Where do you see overlap?"
* "Any surprises?"
* "Are there values that seem similar but might mean different things to different people?"
> Example to offer: "Transparency" - does it mean open documents? Open conversations? Both? Neither? What exactly is meant?
#### **Connecting to the 7 Principles (10 min)**
Look at the ICA principles together.
**Prompts**:
* "Do you see connections between your values and these principles?"
* "Draw lines or group things if it helps."
This can be loose - don't let them fixate on making a beautiful diagram. The point is seeing that their values connect to a larger cooperative tradition.
#### **To bring back to Session 2 (5 min)**
**Prompts**:
* "What's one thing you learned about where your team aligns or diverges?"
  * *You'll share this in Session 2 - doesn't need to be polished.*
* Have someone write it down or capture it on the board.
#### **Community agreements contribution (5 min)**
"Based on this conversation, are there 1-2 values you'd propose adding to the cohort community agreements?" Capture these to bring back to the full group.
### :star: **Tips**
If someone is dominating:
* "Let's hear from someone who hasn't shared yet."
If no one talks and there's awkward silence:
* "Take a minute to look at the board silently. What stands out?"
If tension emerges:
* "Sounds like there are some different perspectives here. That's useful but we don't need to resolve it today."
If they want to debate definitions:
* "It's okay to mean different things. The goal is simply to notice where you might need to clarify later."
If time runs short:
* Prioritize steps 2-3 (sharing and noticing). The principles connection and agreements contribution can be done async if needed.
### **🏁After the session**
* Note any tensions/surprises for your PS check-in
* Remind the team to bring their learnings to Session 2
### **👉 Also this week**
#### **Make sure they're prepping for The Talk**
Session 2 homework includes individual prep on four topics: financial reality, time/availability, skills/contributions, decision-making styles.
:::warning
They need to *write their answers down* before Session 2. Check that they're doing this!
:::
### :triangular_flag_on_post: **Red flags to watch for**
* A studio that can't name any values beyond "we want to make good games" (don't we all!) - too vague to build on
* One person speaking for the group about "our" values
* Values that are all abstract with no grounding in practice

---
title: '3:'
collection: Cooperative Foundations
path: 'Cooperative Foundations/Peer Support Playbook/3:'
parentDocument: Peer Support Playbook
outlineId: 96db6ea8-719d-4b7b-a701-5a653834d309
updatedAt: '2026-03-03T13:37:22.471Z'
createdBy: Jennie R.F.
---

---
title: Applicant Interviews
collection: Cooperative Foundations
path: Cooperative Foundations/Peer Support Playbook/Manual/Applicant Interviews
parentDocument: Manual
outlineId: 23eed4f9-23bf-4ddc-a584-a72a139e21d2
updatedAt: '2026-03-09T15:51:41.188Z'
createdBy: Jennie R.F.
---
Part of your role as a Peer Support is helping us select the studios for each cohort. You'll be involved in second round interviews only - eileen and Jennie handle eligibility screening and first round interviews, including checking that all application materials are accessible and in order.

---
title: Applicant Selection Process
collection: Cooperative Foundations
path: Cooperative Foundations/Applicant Selection Process
parentDocument: null
outlineId: 6150980a-76a9-4d2a-99d1-acab58e3847e
updatedAt: '2026-03-23T22:24:23.433Z'
createdBy: Jennie R.F.
---
* Background on our pipeline, scoring system, rubrics, etc.
* Documentation of rubric used for each stage.
# The Decide Meeting
## What this is
This is the group meeting where we decide together which studios will join the cohort. *Every person in the meeting has a say in this decision.* We use consensus, meaning we don't move forward until everyone can fully support the new cohort.
---
## Who's invited
Program coordinators, the peer support coordinator, and all peer supports. Some peer supports interviewed studios directly. They asked their own questions and submitted their own reviews. Others didn't interview but have full access to all application materials, scores, and notes through the [hub](https://hub.babyghosts.org). No one has more weight in this discussion than anyone else.
---
## How it works
We discuss each studio one at a time, adding our thoughts to a Miro board the coordinators will make in advance of the meeting. We then work through any concerns or ties until we reach a selection everyone supports.
### Your prep
Review all studio materials in the hub: Applications, scores, reviewer notes, and self-assessments (especially for studios you didn't interview!). For each studio, come with a sense of: Do I think they're ready for this program? What excites me? What concerns me? Think about the cohort as a whole, not just individual studios. Which combination would make the strongest group?
You don't need to write anything down or share beforehand! And don't spend all day on this!
### Roles
We assign three roles before the meeting. Think about volunteering to facilitate! *We prefer that a peer support runs it, not a coordinator.*
1. The *facilitator* guides the conversation and ensures everyone speaks. They manage the process, *not content*.
2. The *note-taker* captures important points, concerns raised (and by whom), and the final decision with reasoning/rationale.
3. The *time-keeper* ensures the discussion moves towards the final decision and lets everyone know how much time is remaining in each section.
### Discussion
We go through each studio one at a time. For each one:
1. The two peer supports who interviewed that studio share what they observed: What stood out, what concerned them, how the team interacted. *Coordinators hold back unless asked a direct question.*
2. Anyone can ask factual/clarifying questions.
3. *Everyone* shares a brief reaction on the Miro board. One sentence is fine.
If you interviewed a studio, share what you observed firsthand. If you didn't, come ready to share impressions based on the hub materials - your outside perspective is extra valuable!
Share your actual reaction. There's no need to be diplomatic (the teams won't see your comments), and it's okay to say you're not sure.
### Signal check
After discussing all studios, we use Slack reactions to see where the group stands. A coordinator posts one message per studio and the facilitator asks everyone to react.
Reactions:
* 💚 I support this studio being in the cohort
* ✋ I have concerns I'd like to discuss before agreeing
* 🛑 I can't support this - I believe it would *cause harm* to the studio or the cohort
### Working toward consensus
Studios where everyone reacted with 💚 are in.
Studios where anyone raised a hand get discussed further. The concerned person explains, the group talks it through, and the concerned person decides whether the discussion has addressed their questions enough to change their reaction.
If more studios have support than there are spots, we work through the contested picks together until we land on a selection everyone can support, *even if it's nobody's first choice*.
### What consensus is *not*
Consensus does not mean everyone must be equally enthusiastic about every team. It means no one has a concern serious enough that they believe the decision would cause harm to the studio, the cohort, the program, or our community.
Consensus also does not mean the loudest concerns win. If one person has a strong objection but can't articulate why it rises to the level of "this would cause harm," the group can respectfully note the concern and still move forward.
Consensus doesn't mean this is your dream cohort. It means you can stand behind the selection and support these studios fully once the program starts.
### Closing
The facilitator posts the proposed cohort in Slack and asks everyone to react with a green heart if they can support it. If everyone reacts, the decision is made. If someone can't, we go back to their concern.
After the cohort is selected, peer supports indicate which studios they're most interested in working with (top 3). This doesn't need consensus - it's input for coordinators to use when making assignments.
:::warning
### Confidentiality
The notes from this meeting are confidential to the peer support team and coordinators. Studios should never learn the specific concerns raised about them during the decide session.
:::

---
title: Coordinator Tasks
collection: Cooperative Foundations
path: Cooperative Foundations/Coordinator Tasks
parentDocument: null
outlineId: ca840476-4a64-46e7-85b5-1b9ab88112b6
updatedAt: '2026-03-09T15:44:41.971Z'
createdBy: Jennie R.F.
---
## Session 0: Kickoff + Onboarding
### Pre-session
* Remind folks we'll be meeting in a Huddle in the cohort channel
* Set up Miro boards for each studio
* Prepare Tag Yourself slides
* Share pre-session welcome message with participants (what to expect, access needs)
### Post-session
* Add notes and agreements to Slack channel
* Post reflection questions for asynchronous check-ins:
* How did that session go for you?
* Anything you'd like to ask or clarify before we dive in?
* Share resources:
* Power & Privilege Wheel (Catalyst Project)
* Intro to Co-ops (ICA)
* Baby Ghosts Values & Mission documents
* Coop Journey Map visual (link to Miro or PDF)
---
## Session 1: Coop Principles and Power
### Post-session
* Share the values map and session notes in Slack
* Post follow-up reflection prompts:
* Think about a time your values and actions didn't align under pressure. What made it hard to act according to your values? What support might have helped?
* Share "The Talk" prep questions with clear instructions
* Share resources:
* International Cooperative Alliance Principles: [ICA](https://ica.coop)
* Seeds for Change: Values and Visioning Tools
* *Collective Courage* by Jessica Gordon Nembhard
* "What's In a Value?" by adrienne maree brown
---
## Session 2: Shared Purpose and Alignment
### Post-session
* Post reflection prompts in Slack:
* What's one conversation you now realize you need to have?
* What does "sustainable" mean for your studio?
* Remind teams they will be continuing "The Talk" in Peer Support sessions \[WIKILINK-03: needs link\]
* Check in with Peer Supports about how the in-session activity went — any teams that need extra support?
* Share resources:
* Obvious Agency's original "The Talk" framework (with attribution) \[WIKILINK-04: needs link to Solidarity Economy Bosses V3_The Talk_2023_CS.pdf\]
---
## Session 3: Actionable Values and Impact
### Pre-session
* Add Why/What/How Miro template to studio boards
* Add [Layers of Effect Miro template](https://miro.com/templates/layers-effect-template/) to studio boards
### Post-session
* Post reflection prompts in Slack:
* What's one value your team *says* it has but doesn't consistently practice?
* Where do you notice a gap between your intentions and your effects?
* Check in with Peer Supports:
* How did the teams receive the WWH exercise?
* Share resources:
* Sociocracy 3.0: [Agree on Values](https://patterns.sociocracy30.org/agree-on-values.html)
---
## Session 4: Decision-Making in Practice
### Post-session
* Post reflection prompts in Slack:
* What decision-making patterns did you notice in the facilitation rotation activity?
* Where do decisions actually happen in your studio right now?
* Check in with PSs about which frameworks studios are trying this week
* Share resources:
* Informal Hierarchy Check-In questions (for studios to use)
* Seeds for Change: [Consensus Decision Making](https://www.seedsforchange.org.uk/consensus)
* Sociocracy 3.0: [Consent Decision Making](https://patterns.sociocracy30.org/consent-decision-making.html)
* Meeting agenda template
---
## Session 5: Coop Structures and Governance
### Pre-session
* Create the Gamma Space Community Rule example on communityrule.info
### Post-session
* Share governance model resources to Slack:
* "Handling Proposals and Objections"
* [Seeds for Change meeting facilitation guides](https://www.seedsforchange.org.uk/facilitationmeeting) — UK grassroots co-op guides
* Anti-Racist Facilitation Guide excerpts
* Governance model resources
* [DisCO Manifesto](https://manifesto.disco.coop)
---
## Session 6: Equitable Economics
### Pre-session
* Create sample financial summary template to show
### Post-session
* Share to Slack:
* [Seeds for Change finance resources](https://www.seedsforchange.org.uk/finance)
* Sample financial summary templates
* Reflection prompt: *What would change if you knew exactly what everyone in your workplace earned?*
* Tools mentioned:
* [CoBudget](https://cobudget.com/)
* [OpenCollective](https://opencollective.com/)
---
## Session 7: Conflict Resolution and Collective Care
### Pre-session
* Pick an example of conflict to discuss in the activity section — something related to money, workload, deferring to a single person, etc.
### Post-session
* Share conflict resolution policy template link in Slack
* Post reflection prompts for studios:
* What conflict are you avoiding?
* Is it interpersonal, structural, or both?
* What's one brave/kind/honest/humble step you could take?
* Share the following content with teams (from Session 7 deep-dive material):
#### Shame gets in the way
When someone is told they've caused harm, a common response is shame. It's a physiological response: you go inward, you lose the relational connection needed to actually hear the other person, and shut down. A performance of accountability - "I'm so sorry, I'm the worst, I'll do whatever you want" - is still centred on the person who caused harm, rather than attending to the impact on the other person.
When your body is in a shut-down shame state, you can't really take accountability. This is because it requires you to be grounded enough to *move toward* the person you've hurt: To listen, sit with discomfort, and take agency in changing your behaviour.
Centring someone else changes how you give and receive feedback. If your response to "hey, that thing you did in the meeting hurt me" is to collapse into "I'm a terrible person," you've just made the other person take care of *your* feelings about *their* pain.
A practical tip: Name the shame when you see it (in yourself or others). "I think I'm shame-spiralling right now" is an okay thing to say. It doesn't get you off the hook, but it allows your teammates to give you a beat so that you can actually ground yourself and focus on the conversation.
> Adapted from *Building Accountable Communities*, a video series by Dean Spade, Mariame Kaba, and the Barnard Center for Research on Women (BCRW). [Building Accountable Communities](https://bcrw.barnard.edu/building-accountable-communities/)
#### Reflection before conversation
Before you raise an issue, get clear on:
1. What specific behaviour did I observe? (not feelings or interpretations)
2. What "no"s are coming up for me?
3. What's my part in this?
4. What do I actually need?
* Share post-session reading:
* Window of Transformation model (Kai Cheng Thom)
* AORTA Collective: [Conflict is a Place](https://aorta.coop/conflict-handout), a short zine on navigating conflict in movement organizations
* Dean Spade: [Practicing New Social Relations, Even in Conflict](https://francesslee.medium.com/practicing-new-social-relations-even-in-conflict-dean-spade-54d4a60fcfed)
* [Building Accountable Communities](https://bcrw.barnard.edu/building-accountable-communities/) video series on transformative justice, accountability, and non-punitive responses to harm. Dean Spade, Mariame Kaba, and the Barnard Center for Research on Women.
* Soul Fire Farm's [Real Talk](https://agriculturaljusticeproject.org/toolkit/resources/relations/soulfire-real-talk/) process
---
## Session 8: Self-Evaluation and Pathways
### Post-session
* Share personal assessment form (tell folks to make a copy, not edit the original!)
* Studio assessment is already in your Miro board

---
title: Exercises and Prompts
collection: Cooperative Foundations
path: Cooperative Foundations/Peer Support Playbook/Exercises and Prompts
parentDocument: Peer Support Playbook
outlineId: 35d3ce43-87d7-4b2a-854e-9060640158e9
updatedAt: '2026-03-09T15:48:16.627Z'
createdBy: Jennie R.F.
---

---
title: ICA Values Connections
collection: Cooperative Foundations
path: >-
  Cooperative Foundations/Peer Support Playbook/Exercises and Prompts/ICA Values
  Connections
parentDocument: Exercises and Prompts
outlineId: 91342338-8257-4550-be63-59f78d88dd75
updatedAt: '2026-03-09T15:48:18.565Z'
createdBy: Jennie R.F.
---
These are prompts to get the studio thinking about how the ICA principles connect with their group:

---
title: Manual
collection: Cooperative Foundations
path: Cooperative Foundations/Peer Support Playbook/Manual
parentDocument: Peer Support Playbook
outlineId: 2257548a-2419-407c-89e9-75f419314a1d
updatedAt: '2026-03-09T15:51:41.217Z'
createdBy: Jennie R.F.
---
**Thank you for your interest in becoming a Baby Ghosts Peer Support!** Please take some time to read through this manual, as our peer support program - like our Cooperative Foundations program - is pretty unique.

collection: Cooperative Foundations
path: Cooperative Foundations/Peer Support Playbook
parentDocument: null
outlineId: 706d6291-f74d-46f9-bb10-4c7029c29d84
updatedAt: '2026-03-09T15:41:13.738Z'
createdBy: Jennie R.F.
---
## Overview

---
title: 'Pre-program: Onboarding and Prep'
collection: Cooperative Foundations
path: >-
  Cooperative Foundations/Peer Support Playbook/Session Guides/Pre-program:
  Onboarding and Prep
parentDocument: Session Guides
outlineId: 90324dab-581b-4b87-a5ae-9ee5b70631b6
updatedAt: '2026-03-09T15:49:07.032Z'
createdBy: Jennie R.F.
---
# **Your first Studio Support Meetings**
:::info
Use this list to get a baseline read on your studio. These are things to *notice and gently explore* over your first couple of conversations. No need to interrogate, and you don't need to go through all of them.
:::
### **Relational foundation**
* How long have they known each other? Have they made anything together before?
[…]
* Who's doing most of the talking right now? Who's quiet?
* Is there evidence of trust (or trust-building potential) in the group?
### **Capacity and commitment**
* Is everyone working on this full-time, part-time, or around day jobs?
[…]
* Who has business/admin skills? Financial literacy? Project management?
* Is there openness about strengths *and* limitations?
### **Game related**
* Where is the project at? (Concept, prototype, production, shipped?)
[…]
* What's missing? Are they aware of the gaps?
* Has anyone worked in games professionally before? In what capacity?
#### **What you're doing with this information:**
* Building your own picture of the studio's dynamics, strengths, and risk areas.
* You don't need to resolve anything yet, just notice.
* Bring observations to your PS check-in.
*Credit:* **Effective Practices in Starting Co-ops** *and Christine Clarke of __[Freedom Dreams](https://www.freedomdreamscoop.com/)__ for inspiration/starting points.*

---
title: 'Session 0: Kickoff + Onboarding'
collection: Cooperative Foundations
path: 'Cooperative Foundations/Session Content/Session 0: Kickoff + Onboarding'
parentDocument: Session Content
outlineId: 4473dfe4-b06a-406c-98a6-6bba510cb162
updatedAt: '2026-03-09T15:40:26.477Z'
createdBy: Jennie R.F.
---
> **Peer Supports:** See [PS Guide: Session 0 — Kickoff + Onboarding](/doc/ps-guide-session-0-kickoff-onboarding-HzswkItl8f) for your role during session and this week's studio support meeting.
---
## Welcome
* Tag Yourself activity
---
## Intro - 2 min
Session 0 orients us to the shared work ahead. This opening session grounds participants in the purpose and structure of the program while setting the tone for a peer-driven, care-centred space.
We'll begin building the relational trust and shared accountability that will carry us through the following 8 sessions. We'll reflect on our own privileges and lived experiences. By the end of this session, we'll have a shared understanding of how we'll learn together. This is the beginning of practicing cooperation together.
"The most important thing is if there's **trust** between the people in the group because that's what carries it through." - Russ Christianson
---
## Agenda
### Welcome, land acknowledgement, values - 10 min
* Quick round: name, pronouns, location, why you're here
* Acknowledge land and virtual space, and share our values
* We acknowledge and thank all those who have struggled for workers' rights and racial, economic, and environmental rights and emancipation
* We are recording this session for team members who can't attend
* Please post questions as we go in the chat
* Opportunity to ask more questions during Q&A at end
* If you have any access needs, put them in the chat or DM @jennie or @eileen
### Participant intros - 15 min (3 mins each)
1. Each team says hello - have one person talk for the team and the others chime in the chat with:
* name, pronouns, location
2. Tell us about your game - *briefly*
* can share pictures in the chat if you want
3. Biggest studio pain point *right now*
### Peer Support team intros - 5 min
* Who is paired with whom
* What Peer Support sessions look like
---
### Where you are: The co-op development journey - 10 min
**SLIDE: Coop Journey Map** *(visual showing: pre-formation to formation to operation)*
First, let's look at the statistics:
* Small business startup success rate: \~20% (8 in 10 fail)
* Cooperative startup success rate: \~40% (6 in 10 fail)
* Co-ops significantly outperform conventional startups but it's still not a guarantee
**You're still going against the odds. But it's a worthwhile thing to do, because you learn so much.**
Being a co-op improves your odds, it doesn't eliminate risk.
This program focuses on **pre-formation** - the relational and governance groundwork that determines whether your co-op will thrive or struggle.
Most resources out there focus on the legal and operational stuff: how to incorporate, how to file paperwork, how to structure bylaws. That matters, of course! But it's not where studios fail.
Studios fail because of:
* unspoken assumptions about money, time, and commitment
* wishy-washy and undocumented governance
* conflict avoidance
* unexamined power dynamics
This program exists to build the foundation *before* you incorporate. By the end of this program, you'll have shared values that you know how to put into action. We'll walk you through designing and practicing cooperative governance structures. You'll know how to decide *how to decide*, and we'll test that on low-stakes decisions. And you'll have drafted conflict tools ready for when (NOT IF!) tensions arise.
**You are here:** Pre-formation and building your relational infrastructure
**What comes after:** Incorporation support, ongoing community (Ghost Guild), and continued learning. We'll talk about pathways in Session 8.
---
### Program overview - 10 min
* Program schedule, session themes, and format
* Gamma Space / Slack explanation
* Slack structure: main channel(s), cohort channels, project channels, random and other general channels
* Expectations for engagement (Slack reflections, homework, participation)
* How to participate
* How to book with us
* Review accessibility practices (captions, breakout choices, asynchronous options)
* Tools we'll use (Miro, Slack Canvas, Huddles)
*Note: Much of this info will also live in a Slack Canvas for reference.*
This program will give you tools to notice when informal hierarchy forms, have hard conversations about money, power, and expectations, make decisions collectively, and navigate conflict as valuable data. It will NOT make you hierarchy-free, tell you exactly how to structure your co-op, eliminate disagreement, or do the hard conversations for you. *We're here to support you, but the work is yours.*
---
### Friction is part of the work - 5 min
Before we build our community agreements, we want to chat about something that has come up in every previous cohort.
This program will ask you to have hard conversations - about money, about power, about what you actually want from this collaboration. Some of those conversations will be *uncomfortable*. You might discover that your group is less aligned on values than you assumed. You might have disagreements you've never had before. Someone might go radio silent, and someone might get defensive.
Examples:
* "I've been doing most of the work and I'm starting to resent it."
* "We said we'd share decisions equally, but one person always gets the final word."
* "I thought we agreed on this, but I actually don't think I had a real say."
* "I can only commit 10 hours a week and you're working 40 - how do we make that fair?"
* "I want to leave the studio."
This is normal. This is the work!
We bring these questions up to normalize friction - and because unspoken assumptions are where studios fall apart. The friction you feel now, when the stakes are low and you have support, is infinitely better than discovering it later when you're under deadline pressure or financial strain.
A few things to reframe…
* Discomfort often means something important is coming up.
* Disagreement tells you something isn't clear and gives you an opportunity to include more people.
* If everything feels easy, you might not be going deep enough.
We're here to support you through the hard parts - that's what Peer Supports are for. But we can't do the hard conversations for you.
---
### Commitment and permission - 5 min
Let's talk about what commitment actually means in this program.
**Time** - About 2-3 hours per week (sessions + homework + Studio Support meetings). Some weeks will be heavier. If you can't make a session, let us know - recordings are available, but live participation is really important.
**Openness** - This work asks you to be vulnerable with your collaborators: to say what you actually think, and to hear things you might not want to hear. This will take energy and might be unfamiliar. Give it your best shot.
**Money** - You're receiving a grant as part of this program. That comes with accountability - to yourself, your studio, and the cohort.
**Purpose** - Why does your studio need to be a co-op? Not "why are co-ops good" but what specific problem does working cooperatively solve for you that you couldn't solve another way?
**You have permission to leave early** - If you realize partway through that this isn't the right time, or this isn't the right team, or you need to step back - that's okay. It's better to face that than to go through the motions. We'd rather you make an honest choice for yourself.
Leaving isn't failure. *Sometimes it's the most cooperative thing you can do.*
---
### Code of conduct & community agreements - 15 min
Now let's build some shared agreements for how we'll be together.
We'll start with a few agreements and build from there. We are naming what we *need* to do the hard work together.
**Activity:** Collective drafting via Miro
* We'll start with a few prompts
* Add your own via stickies
* Emoji reactions
* We won't finalize today - we'll revisit and refine
**Starter prompts:**
* What do you need to feel safe raising a concern?
* What helps you stay present when things get uncomfortable?
* How do you want to be supported when you're struggling?
* What makes you feel able to jump into a conversation?
---
### Activity: Power Flower overview - 10 min
Your first piece of individual work is a reflection on your own power and privilege.
**Power Flower** - a tool for mapping the identities and experiences you bring into this space.
This is private and for your own reflection. Baby Ghosts won't see it. Your studio won't see it unless you choose to share. We'll use it as a jumping-off point in Session 1.
* What lived experiences or identities shape how you enter this space?
* What kinds of influence or resources (social, economic, relational) do you carry?
* Where do you need support?
* What hopes or expectations are you bringing into this program?
Complete this in your private Miro board before Session 1.
---
### Closing - 5 min
* Each person shares one intention or hope for the program
* Reminders: next session prep, Slack channels to check, Power Flower homework
---
## Homework
1. **Complete your Power Flower.** Use the template in your private Miro board. Reflect on the identities, experiences, and forms of power you bring into this space. This is just for you - we'll use it as a jumping-off point in Session 1.
---

---
title: 'Session 1: Coop Principles and Power'
collection: Cooperative Foundations
path: 'Cooperative Foundations/Session Content/Session 1: Coop Principles and Power'
parentDocument: Session Content
outlineId: 036d9fc6-27c0-410b-b570-6bf4d5fac80e
updatedAt: '2026-03-15T16:17:12.748Z'
createdBy: Jennie R.F.
---
> **Peer Supports:** See [Session 1: Coop Principles and Power](/doc/c6a0ee07-8c24-41f3-9975-e54955e84b5c) for your role during session and this week's studio support meeting.
## Welcome
* Slide: Tag Yourself activity
* Slide: Anonymous feedback form reminder
---
## Intro - 3 min
Working in an environment that focuses solely on shipping, profit, and growth denies us the opportunity to practice our values collectively. Worse, the outcome of those capitalist values is exploitation and dehumanization of everyone but whoever is at the top of the org chart. How can we connect with our deepest-held values to shape collective practices that challenge this harmful hierarchy?
We have some guidance to start with: The principles adopted in 1995 by the International Cooperative Alliance (ICA), which now form the ethical foundation for cooperative work around the world and are deeply reflected in cooperative history and practice in the Global South. We'll trace a line from these principles to your personal and shared values, and then to what cooperative practice can look like in your context.
Through this work, we can create a culture that stands up to extraction and burnout, and practice something different in its place.
---
## Agenda
Today we'll be talking about:
* How to cooperate (cooperative capacities)
* Coop histories/lineages
* The ICA cooperative principles
* How to move from the principles to values
### Check-in - 5 min
*Thinking back on the Power Flower reflection you did...*
* What's one thing you noticed about yourself that you hadn't named before?
* No need to share details, unless you feel compelled to! Just notice what came up.
---
### Cooperation is a skill, not a trait - 5 min
We've been socially and economically shaped by systems that reward competition, individual achievement, and hierarchy. Most of us were just never taught *how* to cooperate.
"Most human beings have a natural propensity to cooperate." -- Russ Christianson, *Effective Practices in Starting Co-ops*
The capacity exists. We already practice solidarity economics in daily life without calling it that - when we contribute to a GoFundMe, or babysit our neighbour's kids. But these practices get buried under what Black economist Jessica Gordon Nembhard calls "the assumptions of neo-liberal capitalist ideology."
Can cooperation be recovered and practiced until it's reliable?
That's what this program is for. We're not here to convince you cooperation is good. Pretty sure you already know that. We're here to build the muscle and to practice until cooperative decision-making becomes your default.
---
### The skills of cooperation - 7 min
So what does "cooperation is a skill" actually mean? *What* are the skills?
We're going to introduce tools throughout this program, but tools only work if you have the *underlying capacities* to use them. A consensus process doesn't help if no one can sit with discomfort long enough to hear a dissenting view.
Here's what we'll be practicing:
**Active listening**
This means unlearning the tendency to simply wait for your turn to talk. It means actually focusing on the other person and trying to understand what they really mean, especially when you disagree. One practice to support this is reflecting back what you hear. You can also take notes.
**Honest communication**
Without making accusations, say what you actually think, and use "I" statements. The purpose is to open conversation up wider.
**Perspective-taking**
Your collaborators experience situations differently from you, and from each other. Try to put yourself in their position/mindset and hear what they are telling you about what they are feeling.
**Emotional self-regulation**
It can be difficult, without prior practice, to stay present when things get uncomfortable instead of shutting down, lashing out, or agreeing just to make the tension stop. Notice discomfort and choose how to respond rather than just reacting.
**Self-awareness about your patterns in groups**
Do you talk first? Go quiet when you disagree? Say yes to avoid tension? Take over tasks because it's faster than explaining? Notice your ingrained habits!
**Giving and receiving feedback**
This is a tough one for a lot of people. When you have a concern, do you hedge so much it disappears? And when you hear critical feedback, do you get defensive or collapse? Both directions are skills. Look at feedback as a *gift*.
None of these are natural talents, but all of them can be practiced. In fact, you'll be practicing them throughout this program, starting next session!
*Sources: Munro, "United we stand: fostering cohesion in activist groups," Interface 13(1), 2021*
---
### Cooperative lineages and whose knowledge gets credited - 10 min
The foundational principles of cooperatives are rooted in survival. But the Rochdale Pioneers of 1844, often credited as cooperative "founders," didn't invent cooperation - they simply codified practices that had existed for millennia. We'll cover those principles in a minute, but first let's talk about the longer lineages of cooperative history.
* Indigenous communities worldwide practiced mutual aid, collective resource management, and consensus decision-making long before European contact. Many Indigenous governance systems also held space for Two-Spirit people in leadership and decision-making roles.
* Enslaved and formerly enslaved Black communities in the Americas created mutual aid societies, burial societies, and informal credit systems out of necessity and survival
* Women formed cooperative childcare networks, domestic worker collectives, and community support systems -- often invisible and uncredited
* Immigrant communities built cooperative stores, housing, and financial institutions when mainstream systems excluded them
* Queer and trans communities built mutual aid networks, collective housing, and care systems - often out of crisis. Marsha P. Johnson and Sylvia Rivera's STAR (Street Transvestite Action Revolutionaries) in 1970s New York provided communal shelter, food, and support for homeless trans youth of colour, organized entirely on principles of shared responsibility and collective care
* During the AIDS crisis, queer communities created cooperative care networks, buyers' clubs to share medication, and mutual aid funds when governments and institutions abandoned them
*The Combahee River Collective - Black lesbian feminists organizing in the 1970s - articulated what we now call intersectionality. Cooperative movements have always been strongest when they refuse to separate one axis of liberation from another*
![Sylvia Rivera in 1970. By Roseleechs Own work, CC BY-SA 4.0. https://commons.wikimedia.org/w/index.php?curid=119409579](/api/attachments.redirect?id=ed97bdf1-8175-4d0d-a416-e4bffe8273b4 " =960x619")
The Rochdale Pioneers formalized these practices into a movement. But when we credit them as "founders," we invisibilize the communities who developed and sustained cooperative practices for generations under conditions of oppression.
Source: *Locating the Contributions of the African Diaspora in the Canadian Co-operative Sector* \[WIKILINK-01: needs URL\] Additional info: [Indigenous Governance and Tomorrow's Democracy](https://www.colorado.edu/lab/medlab/2025/07/28/indigenous-governance-and-tomorrows-democracy-join-conversation)
This matters for us because you may already hold cooperative knowledge. It could be in your family, your culture, your community.
Consider your own "cooperative lineage":
* Did you grow up with childcare swaps, community gardens, or potlucks?
* How did your family handle resources when money was tight? Who did they turn to?
* What decision-making traditions come from your culture?
* Have you been part of a band or community organization that shared resources or made decisions collectively?
Or:
* Why did you become interested in forming a cooperative?
Most of these practices go unnamed as "cooperative" but they are part of a long, global, grassroots, and informal tradition.
There are many types of cooperatives - coop housing, community land trusts, community financing like credit unions, and worker cooperatives like the one you're trying to build - and the broader solidarity economy also includes barter clubs, fair trade, and solidarity markets.
![Solidarity Economy illustration. By Caroline Woodard, art.coop, 2021.](/api/attachments.redirect?id=75c3e15f-b3fb-4268-b0ca-f17963140f72 " =1600x1158")
Cooperatives are expansive and we can add skills to your toolkit!
Share one cooperative practice from your experience in the chat. *And pay attention to what values are present.*
### Small groups -- mixed studios (3-4 people) - 15 min
* Share your cooperative lineage story
* What values were present in that experience?
* Each group identifies 3-5 values they heard across their stories
* What need brought your studio together? What were you each missing that cooperation addresses?
Brief large group share - 5 min: Each group shares 1-2 values they identified.
---
## The 7 Cooperative Principles - 10 min
The values you just named have been recognized and formalized by cooperative movements worldwide. In 1995, the International Cooperative Alliance adopted these 7 principles that now guide cooperative work globally.
*For each principle, consider: How might your co-op incorporate this principle? What policies or practices would bring it to life?*
### 1. Voluntary and Open Membership
Cooperatives are voluntary organizations, open to anyone able to use their services and willing to accept the responsibilities of membership, without gender, social, racial, political or religious discrimination.
### 2. Democratic Member Control
Cooperatives are democratic organizations controlled by their members, who actively participate in setting their policies and making decisions. Your board of directors is accountable to the membership. Each member has one vote.
* *How will the co-op balance this with the reasonable interests of different classes of members?*
### 3. Member Economic Participation
Members contribute equitably to, and democratically control, the capital of their cooperative.
* *Consider the share values, annual fees, fees-for-services, and other financial commitments that members will have to meet.*
### 4. Autonomy and Independence
Cooperatives are autonomous, self-help organizations controlled by their members. If they enter into agreements with other organizations, including governments, or raise capital from external sources, they do so on terms that ensure democratic control by their members and maintain their cooperative autonomy.
* *What policies are needed around contracts, hiring contractors, accepting donations, or taking investment?*
### 5. Education, Training, and Information
Cooperatives provide education and training for their members, elected representatives, managers, and employees so they can contribute effectively to the development of their cooperatives.
* *What education is needed about the rights and responsibilities of membership? About other topics related to your coop's activities?*
### 6. Cooperation Among Cooperatives
Cooperatives serve their members most effectively and strengthen the cooperative movement by working together through local, national, regional, and international structures.
### 7. Concern for Community
Cooperatives work for the sustainable development of their communities through policies approved by their members.
Summary source: *A People-Centred Path for a Second Cooperative Decade* \[WIKILINK-02: needs URL\] - ICA 2020
Nobody carved the 7 Principles into stone tablets and carried them down the mountain. The ICA has revised the principles three times - in 1937, 1966, and 1995 - because cooperative practice changes. You don't have to follow the rules perfectly to be a coop. But hold on to the core: democratic control, shared ownership, and surplus flowing to workers based on their labour. Everything else can be adapted to your studio's capacity and interests.
### The values beneath the principles
The principles give us structure. The values give us *why*. The International Cooperative Alliance summarizes it this way:
*"Cooperatives are based on the values of self-help, self-responsibility, democracy, equality, equity and solidarity. In the tradition of their founders, cooperative members believe in the ethical values of honesty, openness, social responsibility and caring for others."*
These are commitments to how we treat each other.
---
## From principles to your values - 3 min
What values guide *your* work or collective efforts?
Values are *beliefs* that motivate us to *act* one way or another. They guide our behaviour.
We each adopt values from a combination of our upbringing, the communities we are part of, the dominant culture, and other influences in our lives. Just like an individual's values guide how that person acts, organizational values guide how the *group* acts and makes decisions collectively.
Values also define scope and ethical constraints.
[Sociocracy 3.0: Agree on Values](https://patterns.sociocracy30.org/agree-on-values.html)
---
### How do we collaborate when we mean different things? - 2 min
Words are vague, communication is fraught, and we're all coming in from different backgrounds. The best thing we can do to support the cooperative principles of collaboration is to try to find common ground.
Where do we meet each other? And how do we build from there?
---
## Homework - 10 min
1. **Journal about your values.** What values guide your work or collective efforts? Your values can be discovered through observation. Your task isn't to decide what matters to you, but to notice what already does.
* What holds your attention without effort?
* What do you find yourself doing when no one is watching?
* What topics consistently generate strong emotional responses?
* When have you felt most alive or fulfilled?
2. **Do the team values map with your Peer Supports.** Use your PS session to do the values mapping exercise as a team. Where do you align? Where do you differ?
3. **Prep individually for "The Talk" (Session 2).** Next session, you'll practice having direct conversations about money, time, skills, and decision-making with your collaborators. Reflect on these questions and **write your answers down** before we meet. Try to time-box to about 5 minutes per section.
**Financial reality:**
* How much do you need to make monthly to participate in this studio?
* What's your current financial capacity to contribute?
* How important is immediate income vs. long-term equity?
**Time and availability:**
* What's your actual time availability per week?
* What are your non-negotiable boundaries?
* How do you handle competing priorities?
**Skills and contributions:**
* What do you excel at vs. what drains you?
* Where do you want to grow vs. where you're already expert?
* How do you prefer to contribute when you're overwhelmed?
**Decision-making styles:**
* How do you prefer to make decisions under pressure?
* When do you need more information vs. when do you trust your gut?
* How do you handle disagreement?
And finally: **Does being part of this studio make you feel something? What is that feeling?**
Adapted from Obvious Agency's "The Talk" worksheet.
*These are for **you** first. You'll share with your team in Session 2.*
---
## Closing - 5 min
We've identified values that guide us individually and found connections to cooperative principles. But now comes the hard part: How do we actually *practice* these values together?
It might seem easy and fun to chat about these ideas with your collaborators, but until you are in conflict, or under financial or deadline pressure, you don't really know how everyone will hold on to those values.
Studios built around a shared problem - "we can't afford to make games alone," "we refuse to work in exploitative conditions again" - tend to hold together under that pressure. Studios built around a shared *aesthetic* preference for cooperation sometimes don't. Try to notice which one is yours.
The industry tells us to brute-force our way through these situations, with the boss ultimately "resolving" the issue the way they want, probably guided by "move fast and figure it out later." But cooperative work requires something different: what Indigenous organizer Ruth Łchav'aya K'isen Miller calls "patience for the pace of trust."
Next session, we'll explore what it actually takes to align with collaborators beyond just sharing values on a Miro board. Even the closest friends can discover they have very different expectations about work, money, and decision-making when those conversations inevitably come up.
Use your Peer Support session this week to start talking about your values as a team.
---

--- ---
title: '2: Shared Purpose & Alignment' title: 'Session 2: Shared Purpose and Alignment'
collection: Cooperative Foundations collection: Cooperative Foundations
path: 'Cooperative Foundations/Peer Support Playbook/2: Shared Purpose & Alignment' path: >-
parentDocument: Peer Support Playbook Cooperative Foundations/Peer Support Playbook/Session Guides/Session 2: Shared
outlineId: 1e0d7f35-2bfe-4495-93bd-c05a4cd028d6 Purpose and Alignment
updatedAt: '2026-03-03T13:37:22.848Z' parentDocument: Session Guides
outlineId: 6edd974c-3ef1-4eaa-a7cd-1620716e859a
updatedAt: '2026-03-09T17:21:11.981Z'
createdBy: Jennie R.F. createdBy: Jennie R.F.
--- ---
> **Session content:** See [Session 2: Shared Purpose and Alignment](/doc/session-2-shared-purpose-and-alignment-RfzSikcGy1) for the full curriculum.
## **What happens in session** ## **What happens in session**
This session, we talk about the challenges of aligning on the studio's purpose. This session, we talk about the challenges of *aligning on the studio's purpose*.
We go over common pitfalls - vague goals like "we all just want to make good games" and assuming shared politics means shared work values. We do four rounds of The Talk, asking detailed individual questions about financial reality, time/availability, skills/contributions, and decision-making styles. Studios practice this in their channels with the Peer Support present. We go over common pitfalls vague goals like "we all just want to make good games" and assuming shared politics means shared work values. We do four rounds of The Talk, asking detailed individual questions about financial reality, time/availability, skills/contributions, and decision-making styles. Studios practice this in their channels with the Peer Support present.
### :eyes: **Your role during session** ### :eyes: **Your role during session**
***This is a big one.*** You're facilitating The Talk in your studio's breakout room (aka their project or studio channel). Here are some things to watch for: ***This is a big one.*** You're facilitating The Talk in your studio's breakout room (aka their project or studio channel). Here are some things to watch for:
Financial reality: Financial reality:
* People minimizing their own needs * People minimizing their own needs
* Wide gaps in financial situations not being acknowledged * Wide gaps in financial situations not being acknowledged
* Someone going quiet * Someone going quiet
Time/availability: Time/availability:
* Vague answers * Vague answers
* Someone over committing to match others * Someone over committing to match others
Skills/contributions: Skills/contributions:
* People only naming strengths and not gaps * People only naming strengths and not gaps
* Assumptions about roles based on past * Assumptions about roles based on past
* Someone taking on the hard or tedious stuff by default * Someone taking on the hard or tedious stuff by default
Decision-making: Decision-making:
* Very different styles that could clash (fast decider vs. slow processor) * Very different styles that could clash (fast decider vs. slow processor)
* Someone who goes along to avoid conflict * Someone who goes along to avoid conflict
* Past conflicts referenced passively * Past conflicts referenced passively
### :triangular_ruler: **Format** ### :triangular_ruler: **Format**
Each person answers in turn (1.5-2 min each), use the Miro timer, brief open discussion after everyone answers, then move to next round. Each person answers in turn (2 min each), use the Miro timer, brief open discussion after everyone answers, then move to next round.
The goal isn't to solve everything today, just to get the conversation started! The goal isn't to solve everything today, just to get the conversation started!
@ -57,7 +56,7 @@ The goal isn't to solve everything today, just to get the conversation started!
### :world_map: **Context** ### :world_map: **Context**
In Session 2, studios practiced The Talk - four rounds covering financial reality, time/availability, skills/contributions, and decision-making styles. They started these conversations but didn't finish them (this is the intention). During this Studio Support Meeting, help them go deeper: create space to continue conversations that got cut short or stayed shallow, draw out what went unsaid, help the team notice patterns. In Session 2, studios practiced The Talk four rounds covering financial reality, time/availability, skills/contributions, and decision-making styles. They started these conversations but didn't finish them (this is the intention). During this Studio Support Meeting, help them go deeper: Create space to continue conversations that got cut short or stayed shallow, draw out what went unsaid, help the team notice patterns.
This is an ongoing practice! This is an ongoing practice!
@ -71,7 +70,6 @@ This is an ongoing practice!
"Which round felt most unfinished or brought up the most tension?" Revisit the questions in the round they choose, but this time, push past the first answer. "Which round felt most unfinished or brought up the most tension?" Revisit the questions in the round they choose, but this time, push past the first answer.
***For financial reality:*** ***For financial reality:***
@ -79,15 +77,13 @@ This is an ongoing practice!
2. Are you trying not to seem demanding, and not sharing your true needs? 2. Are you trying not to seem demanding, and not sharing your true needs?
3. Are there differences in monetary needs that create (a sense of) unbalanced power dynamics? 3. Are there differences in monetary needs that create (a sense of) unbalanced power dynamics?
***For time and availability:*** ***For time and availability:***
4. How many hours per week can you reliably, actually commit - a hard number. 4. How many hours per week can you reliably, actually commit a hard number.
5. What's something that would cause you to miss a deadline? How would you want to handle that as a team? 5. What's something that would cause you to miss a deadline? How would you want to handle that as a team?
6. Are you building around one person's availability? Intentionally? 6. Are you building around one person's availability? Intentionally?
***For skills and contributions:*** ***For skills and contributions:***
@ -95,7 +91,6 @@ This is an ongoing practice!
8. When you're overwhelmed, do you want people to check in or give you space? Does the team know that about you? 8. When you're overwhelmed, do you want people to check in or give you space? Does the team know that about you?
9. Is anyone doing work that isn't visible or acknowledged? 9. Is anyone doing work that isn't visible or acknowledged?
***For decision-making:*** ***For decision-making:***
@ -103,7 +98,6 @@ This is an ongoing practice!
11. Has there been a decision in this group where you felt unheard? 11. Has there been a decision in this group where you felt unheard?
12. When you're under pressure, do you speed up or slow down? Do these styles clash between members? 12. When you're under pressure, do you speed up or slow down? Do these styles clash between members?
#### **Draw out the unsaid (15 min)** #### **Draw out the unsaid (15 min)**
@ -118,14 +112,15 @@ Give silence and let it be awkward. *You really need to relish the awkwardness.*
* Is there something you noticed about a teammate's answer that you're still thinking about? * Is there something you noticed about a teammate's answer that you're still thinking about?
* Is there an elephant in the room? * Is there an elephant in the room?
If something big comes up: help them decide - "Is this something you want to keep talking about now, or table for later?" If something big comes up: help them decide "Is this something you want to keep talking about now, or table for later?"
### Close and next steps (5 min)
4. Close and next steps (5 min) - "What's one thing you want to carry forward from this conversation?" "What's one thing you want to carry forward from this conversation?"
Remind them: these conversations don't end here; tension is interesting information, not failure; they can bring things back to future PS sessions. Remind them: these conversations don't end here; tension is interesting information, not failure; they can bring things back to future PS sessions.
**Nudge them on their Session 2 homework:** writing down tension points and unsaid questions. Check that they're doing this - we need this to build on later. **Nudge them on their Session 2 homework:** writing down tension points and unsaid questions. Check that they're doing this we need this to build on later.
## :triangular_flag_on_post:**Red flags** ## :triangular_flag_on_post:**Red flags**
@ -142,35 +137,30 @@ Note these for your PS check-in or message in the channel.
**Duration:** 15-20 minutes (can be folded into the same meeting as Continuing The Talk, or done as a separate short check-in)
**Context:** Session 2 homework asks each person to individually reflect on where they see the studio in 1/3/5 years and what their revenue model might look like. This is just a conversation starter. You're helping them notice where their assumptions about the studio's future align or diverge.
**Before the conversation:**
* Confirm everyone has done some thinking on this (even loosely). If they haven't, give them 5 minutes of quiet writing time before you start.
**How to facilitate:**
Start with a round: each person shares one thing about where they see the studio. Keep it brief – you're listening for gaps, not building a business plan.
Prompts to draw out differences:
* "When you picture the studio in three years, how many people are on the team?"
* "Are you imagining one game, or multiple projects?"
* "What does 'success' look like for you personally – not the studio, *you*?"
* "Is this your full-time thing, or alongside other work?"
Then ground it:
* "Who are your players? Do you know?"
* "What's your revenue model – game sales, services, grants, a mix?"
* "Can that sustain you? For how long?"
* "What happens if the game takes twice as long as you think?"
**What you're listening for:**
* Major mismatches in ambition (one person wants a 20-person studio, another wants a 3-person collective)
* Scale assumptions that don't match the team's actual capacity
* Different definitions of sustainability (covering rent vs. building wealth vs. just making a game)
**What you're not doing:** Judging their plans or telling them what's realistic. You're helping them see whether they're actually talking about the same studio.
:::tip
**If you notice a big gap** – say, one person assumes this is a side project and another has quit their job for it – name it gently. "I'm noticing you might be picturing different scales here. Is that something you've talked about?" This is the kind of divergence that festers if it stays unspoken.
:::
**After the conversation:**
* Note any major alignment gaps for your PS check-in
* They'll keep returning to scale and pace throughout the program – this is just the first pass

---
title: 'Session 3: Actionable Values and Impact'
collection: Cooperative Foundations
path: >-
Cooperative Foundations/Peer Support Playbook/Session Guides/Session 3:
Actionable Values and Impact
parentDocument: Session Guides
outlineId: 6b557fed-1ae0-41cd-a97d-8ceea11b523b
updatedAt: '2026-03-09T17:24:38.258Z'
createdBy: Jennie R.F.
---
> **Session content:** See [Session 3: Actionable Values and Impact](/doc/session-3-actionable-values-and-impact-U1NcQXtrbg) for the full curriculum.
## Pre-session
If you are the presenting PS for this session, prepare a **10-minute** case study from your studio covering:
* How you arrived at your current values (what process did you use? what changed through iteration?)
* One example of values guiding a real decision especially a hard one
* Where you've seen a gap between stated values and actual practice, and what you did about it
Show the messy stuff. Participants need to see that this work is ongoing, not a one-time exercise.
## **What happens in session**
Studios move from identifying values to making them operational. The session introduces two tools: the Why/What/How framework (turning values into concrete practices) and Layers of Effect (mapping ripple effects of decisions). A Peer Support presenter shares a case study from their own studio. Studios work through scenarios using values-first thinking and identify a decision to run through the tools with their PS this week.
### :eyes: **Your role during session**
* If presenting: Deliver your case study. Be honest about what didn't work and what you're still figuring out.
* Observe your studio during the scenario exercise: who applies values first vs. jumping to solutions?
* Note whether studios can connect their Session 1 values to the tools, or if values are still too vague to be actionable.
### 👆 **Your role after session**
* Confirm everyone understood the Why/What/How framework and the Layers of Effect template
* Make sure the Miro templates (Why/What/How and Layers of Effect) are on your studio's board
* Note which decision they chose for the homework activity
## **This week's Studio Support Meeting: Why/What/How + Layers of Effect**
### **📚 Materials**
* Studio Miro board with Why/What/How template
* Studio Miro board with [Layers of Effect template](https://miro.com/templates/layers-effect-template/)
* The studio's values map from Session 1
### **👆 Before the session**
* Confirm the Miro templates are set up and accessible
* Review the studio's values map; pick 1-2 values that seem ripe for the Why/What/How exercise (have a suggestion ready in case the team gets stuck)
* Know which decision they identified at the end of Session 3 for the Layers of Effect exercise
### **🌊 Session flow**
#### **Check-in (5 min)**
"How did the scenario exercise land for you? Was it easy or hard to start with values before jumping to solutions?"
Let each person respond briefly. Listen for whether they found the tools useful or abstract.
#### **Why/What/How deep dive (20-25 min)**
Pick one value from the studio's values map together and work through the full framework.
**Step 1: WHY (5-7 min)**
"Why does this value matter to your studio? What's at stake if you don't practice it?"
Prompts if they get stuck:
* "What would go wrong if you dropped this value tomorrow?"
* "Who is affected if this value isn't practiced?"
**Step 2: WHAT (5-7 min)**
"What does practicing this value actually look like? What are you committing to?"
Push for specificity:
* "If a new member joined next month, how would they know you practice this value?"
* "'We value transparency': what does that mean concretely? Open finances? Open conversations? Open documents?"
**Step 3: HOW (5-7 min)**
"How will you actually do this? What specific activities, rituals, or outputs?"
This is where it gets real:
* "How often? Who's responsible? Where does it live?"
* "What's the minimum viable version you could start this week?"
Capture everything on the Miro board.
#### **Layers of Effect practice (15-20 min)**
Use the decision they identified in Session 3. Walk through the three rings together.
:::tip
**Parallel framework for context:** Neil Postman's "Seven Questions for any new technology" maps closely to Layers of Effect. If a studio is struggling with the concentric rings framing, try Postman's questions as an alternate way in: (1) What problem does this solve? (2) Whose problem is it? (3) What new problems does solving it create? (4) Who is most impacted? (5) What changes in language? (6) What shifts in power? (7) What unintended uses might emerge?
:::
**Primary effects (5 min):** "What are the direct, immediate impacts of this decision?"
* Who gains? Who pays? Who's invisible but affected?
**Secondary effects (5 min):** "What are the known but less obvious impacts?"
* What dependencies or new risks are you introducing?
**Tertiary effects (5 min):** "What unforeseen consequences might emerge over time?"
* What standards could this establish? What shifts over years?
Use yellow stickies for opportunities/benefits and red for risks/costs. These might be connected: a benefit in one layer can create a risk in another.
**Debrief (5 min):**
* "Did mapping this change how you think about the decision?"
* "Did your values hold up, or did you notice a gap between intention and effect?"
#### **Close and next steps (5 min)**
* "How often should you revisit your values and check whether your effects match your intentions?"
* Encourage them to make this a recurring practice, not a one-time exercise
### :star: **Tips**
If the Why/What/How stays vague:
* "Can you make that even more specific? What would someone actually *see* you doing?"
If they rush through Layers of Effect:
* "Slow down at tertiary. The unforeseen stuff is where the most important learning happens."
If they only see positive effects:
* "Every decision has costs. Who bears them? Who's invisible here?"
If one person dominates the values conversation:
* "Let's hear from everyone. Whose experience of this value is different?"
### **🏁 After the session**
* Note whether the studio can translate values into practices or if they're still stuck at the abstract level
* Note any gaps between stated values and emerging practices; these will come up again
* Remind them to discuss as a studio: how often should you revisit values and check your effects?
## :triangular_flag_on_post: **Red flags to watch for**
* Values that are all "why" with no "what" or "how": inspiration without practice
* A studio that can't see any negative effects of their decisions: lack of critical thinking or avoidance
* One person defining "our" values without challenge from the group
* Tools treated as a box-checking exercise rather than genuine reflection
* "We already know our values" without being able to articulate practices

---
title: 'Session 4: Decision-Making in Practice'
collection: Cooperative Foundations
path: >-
Cooperative Foundations/Peer Support Playbook/Session Guides/Session 4:
Decision-Making in Practice
parentDocument: Session Guides
outlineId: 7152ed91-fcea-47b5-9b0a-a2d9c11ce212
updatedAt: '2026-03-09T17:32:56.790Z'
createdBy: Jennie R.F.
---
> **Session content:** See [Session 4: Decision-Making in Practice](/doc/session-4-decision-making-in-practice-aHPbIrtYgR) for the full curriculum.
## **What happens in session**
Studios explore cooperative decision-making frameworks (consensus, consent, majority, delegation, random chance). They practice identifying who gets to raise issues, work through decision-making steps, and discuss handling dissent. The session also covers meetings (roles, facilitation, rotating responsibilities) and the "genius trap." Studios do a facilitation rotation practice in groups of three. The Informal Hierarchy Check-In is introduced as an ongoing tool.
:::tip
**Homework assigned:** practice one decision-making framework on a real decision, map current role distribution, complete the Informal Hierarchy Check-In as a studio, and notice where decisions happen this week.
:::
### :eyes: **Your role during session**
* Observe the facilitation rotation activity; note how your studio members handle facilitating, participating, and observing
* Listen for how they talk about where decisions currently happen (meetings? DMs? default to one person?)
* Note whether anyone identifies informal hierarchy patterns during the journaling activity
### 👆 **Your role after session**
* Make sure your studio understands the Informal Hierarchy Check-In questions and plans to work through them together
* Confirm they've chosen which decision-making framework to practice this week
* Check that they understand the difference between consensus and consent; this trips people up
## **This week's Studio Support Meeting: Decision-Making Practice + Informal Hierarchy Check-In**
### **📚 Materials**
* Informal Hierarchy Check-In questions (from session)
* Decision-making frameworks reference (consensus, consent, majority, delegation)
* The studio's notes from the facilitation rotation activity
### **👆 Before the session**
* Know which decision-making framework the studio chose to practice
* Have a small, real decision ready in case the studio can't think of one (e.g., "What should your next team social activity be?" or "How should you structure your next sprint?")
* Review the 5 Informal Hierarchy Check-In questions so you can facilitate them smoothly
### **🌊 Session flow**
#### **Check-in (5 min)**
"What did you notice in the facilitation rotation? What was harder than expected: facilitating, participating, or observing?"
#### **Practice a decision-making framework (20-25 min)**
Help the studio work through a real decision using their chosen framework.
**Set up (3 min):**
* Name the decision clearly. Write it down where everyone can see it.
* Name the framework you're using: "We're going to try consent on this."
* Clarify: who is affected by this decision? Does everyone here need to be part of it?
**Work through the decision-making steps (15-20 min):**
1. Understand the context: what's happening? What do people feel about it?
2. Identify the underlying need: what are we actually trying to address?
3. Generate options: encourage weird ideas. Notice who contributes.
4. Check alignment with values: how do these options fit with who you want to be?
5. Evaluate consequences: who benefits, who's affected, trade-offs?
6. Decide using the framework: name the method before you begin.
7. Before finalizing: "Does anyone have concerns they haven't voiced? Is anyone agreeing just to move on?"
8. Clarify implementation: who does what? When do you check back?
**Debrief (5 min):**
* "How did that feel compared to how you usually make decisions?"
* "What was different about naming the framework first?"
* "Did anyone notice moments where old patterns kicked in?"
#### **Informal Hierarchy Check-In (15-20 min)**
Work through the five questions together. Go one at a time.
Prompts to keep it exploratory, not accusatory:
* "No guilt here; we're just noticing."
* "These patterns aren't problems yet. But under pressure, they become cracks."
* "What would you want to change? What's actually fine?"
Capture observations; they'll bring these to Session 5.
#### **Close (5 min)**
* "What's one pattern you noticed that you want to keep an eye on?"
* Remind them to notice where decisions happen this week (in meetings? DMs? Slack? who's present?)
### 👉 **Also this week**
#### **Map your current role distribution**
This can be done async or as part of the PS meeting if there's time. The question is simple: for each role/responsibility in the studio, where did it come from? An explicit decision, or an implicit default?
Prompts:
* "Who handles finances? Was that decided or did it just happen?"
* "Who schedules meetings? Who takes notes? Who answers external emails?"
* "Are there roles no one officially has but someone 'just does'?"
This feeds directly into Session 5's governance work.
### :star: **Tips**
If the decision-making practice feels artificial:
* "The point is to *notice* the process. How you decide matters as much as what you decide."
If one person dominates the decision:
* "I noticed \[name\] spoke first and longest. Can we try a round where everyone shares before discussion?"
If no one disagrees:
* "That was quick! Is everyone actually aligned, or is someone going along to keep things moving?" (This is a direct reference to the dissent section from the session.)
If someone gets defensive:
* "It's okay; noticing patterns is the hardest part. You don't need to fix anything today."
### **🏁 After the session**
* Note which patterns came up in the Informal Hierarchy Check-In, especially anything the studio seemed to avoid discussing
* Note how the decision-making practice went: did they actually use the framework or fall back into old patterns?
* Bring observations to your PS check-in
## :triangular_flag_on_post: **Red flags to watch for**
* A studio that "decides" everything by default to one person and calls it delegation
* Someone consistently going along without engaging: "I'm fine with whatever"
* Resistance to the hierarchy check-in: "we don't have hierarchy, we're all equal" (it's insidious)
* Decisions happening outside the room (in DMs between two people) and being presented as done
* The same person always facilitating, taking notes, or scheduling

---
title: 'Session 5: Coop Structures and Governance'
collection: Cooperative Foundations
path: >-
Cooperative Foundations/Peer Support Playbook/Session Guides/Session 5: Coop
Structures and Governance
parentDocument: Session Guides
outlineId: f317905f-1ee1-4412-89c6-6b12e007b7d4
updatedAt: '2026-03-09T17:42:26.074Z'
createdBy: Jennie R.F.
---
> **Session content:** See [Session 5: Coop Structures and Governance](/doc/session-5-coop-structures-and-governance-DxoDQCtL66) for the full curriculum.
## Pre-session
If you are the presenting PS for this session, prep a **15-20 minute** case study from your studio covering:
* How your studio makes decisions now
* What you tried that didn't work
* One example of governance helping resolve a real issue
## **What happens in session**
Studios learn about:
* legal structures (sole prop, partnership, corporation, worker coop)
* governance models (collective governance, advice process, sociocratic circles, board + membership, DisCOs)
* member management (adding, departing, removing members)
A PS presenter shares a 15-20 minute case study on their studio's governance journey. We also introduce Community Rule as a tool for documenting governance in plain language. We focus on *making governance visible*, designing structures from the patterns noticed in Session 4, and distinguishing between governance practice and legal incorporation.
### :eyes: **Your role during session**
* If presenting: deliver your case study
* Observe how your studio responds to the governance models: what resonates? What causes confusion or resistance?
* Listen for whether they connect their Session 4 Informal Hierarchy Check-In observations to governance design choices
* Note how they react to the member removal discussion. It's an uncomfortable topic.
### 👆 **Your role after session**
* Make sure your studio has access to [Community Rule](https://communityrule.info/)
* Confirm they understand the homework: start a Community Rule draft with you, discuss financial sustainability, and do a personal reflection on financial access
* Note which governance model(s) they're gravitating toward
## **This week's Studio Support Meeting: Community Rule Drafting**
### **📚 Materials**
* [Community Rule](https://communityrule.info/) tool
* Studio's Informal Hierarchy Check-In observations from Session 4
* Notes on which governance model(s) interested them
### :world_map: **Context**
This is a working session. You're helping the studio start documenting their governance in plain language using Community Rule. They don't need to finish; the goal is to surface where they already have answers vs. where they need more conversation. This will be a living document.
### **👆 Before the session**
* Familiarize yourself with the Community Rule interface and fields
* Review the studio's Informal Hierarchy Check-In observations
* Have the governance models overview handy (collective governance, advice process, circles, board + membership) in case they need a refresher
### **🌊 Session flow**
#### **Check-in (5 min)**
"What governance model stuck with you from the session? Did anything click, or feel wrong?"
#### **Community Rule walkthrough (10 min)**
Open the tool together. Community Rule works as a modular builder: you assemble your governance from pre-made or custom building blocks.
Start with the basics: Name your studio and write a short summary of its structure.
Then explore the module library together. There are four categories:
* **Culture**: values, norms, purpose, solidarity, diversity
* **Decision**: how decisions get made (lazy consensus, do-ocracy, vote, ranked choice, etc.)
* **Process**: how policies are implemented and evolve (accountability process, delegation, transparency, dissolution, exclusion, etc.)
* **Structure**: roles and internal entities (board, council, membership, ownership, roles, committee, etc.)
Drag in the modules that feel relevant. Each one can be configured with key-value pairs; for example, a "Membership" module might have a configuration like "Eligibility: active worker-owners who have completed a 3-month trial period." You can also create custom modules for anything the library doesn't cover.
Don't try to build everything at once. Start by browsing the categories and noticing which modules the studio can configure easily vs. which ones lead to blank stares.
Prompts:
* "Which of these are you already doing without naming it?"
* "Where is there genuine disagreement or uncertainty?"
* "What's missing from the library that's specific to how you work?"
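To make the module/key-value structure concrete, here is a minimal sketch of what one studio's early draft might capture, written out as plain Python data. Community Rule itself is a web tool; this is just one way to mirror its structure in notes, and the studio name, module choices, and configuration values are all invented for illustration:

```python
# Hypothetical Community Rule draft represented as plain data.
# Each module belongs to one of the four categories and carries
# key-value configuration. All values here are invented examples.

draft = {
    "name": "Example Studio",
    "summary": "A three-person worker co-op making narrative games.",
    "modules": [
        {"category": "Decision",
         "module": "Lazy consensus",
         "config": {"Objection window": "72 hours",
                    "Fallback": "Synchronous discussion at next meeting"}},
        {"category": "Structure",
         "module": "Membership",
         "config": {"Eligibility": "Active worker-owners after a 3-month trial"}},
        {"category": "Process",
         "module": "Transparency",
         "config": {"Finances": "Shared spreadsheet, updated monthly"}},
    ],
}

# A quick review question: which categories have no modules yet?
covered = {m["category"] for m in draft["modules"]}
missing = {"Culture", "Decision", "Process", "Structure"} - covered
print("Categories still to discuss:", sorted(missing))
```

The check at the end (which categories are still empty?) maps directly onto the gaps list you build at the close of the session.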
#### **Draft together (20-25 min)**
Start filling in what you can. Focus on the modules where there's energy or alignment. When you hit a field where there's disagreement, note it and move on. Don't try to resolve everything today.
#### **Close and gaps list (5 min)**
* Make a list of areas that still need discussion
* "What's the most important unresolved question?"
* "Who's going to take a first pass at writing up what we decided today?"
### 👉 **Also this week**
#### **Financial sustainability conversation**
Session 5 homework asks each person to reflect: *What does financial sustainability look like for you personally? What would you need from this project?*
This is prep for Session 6 (Equitable Economics). Check in during the week:
* "Has everyone spent some time thinking about the financial sustainability question?"
* "And the personal reflection: what financial information have you never been allowed to see at work?"
These don't need to be discussed as a studio yet just make sure individuals are reflecting.
### :star: **Tips**
If they want to pick a governance model immediately:
* "You don't have to commit today. Start with collective governance or advice process; you can add complexity as you learn what you actually need."
If Community Rule feels bureaucratic:
* "You're already doing governance; this just helps you name it."
If they skip over membership/removal:
* "This is the part that matters most when things get hard. Even a rough sketch now saves a lot of pain later."
If one person is doing all the talking about governance:
* "Governance designed by one person is just management with extra steps. Everyone needs to shape this."
### **🏁 After the session**
* Note where the studio has clear alignment vs. where they got stuck
* Note any tension around membership/removal; these conversations will deepen
* Remind them about the financial sustainability reflection for Session 6 prep
* Bring the draft status to your PS check-in
## :triangular_flag_on_post: **Red flags to watch for**
* A studio that resists documenting anything: "we just know how we work" (exactly the problem)
* Governance designed around one person's strengths or preferences
* Avoiding the membership/removal conversation entirely
* Confusing governance with incorporation: "we're not a real coop yet so we don't need this"
* A draft that looks perfect on paper but doesn't match how the studio actually operates

---
title: 'Session 6: Equitable Economics'
collection: Cooperative Foundations
path: >-
Cooperative Foundations/Peer Support Playbook/Session Guides/Session 6:
Equitable Economics
parentDocument: Session Guides
outlineId: e90f4170-5f49-4705-ac9f-c42214aaf73b
updatedAt: '2026-03-09T17:44:19.888Z'
createdBy: Jennie R.F.
---
> **Session content:** See [Session 6: Equitable Economics](/doc/session-6-equitable-economics-VhhiZSc9Ej) for the full curriculum.
## **What happens in session**
This is a dense session covering revenue sources, financial transparency, compensation models (equal pay, needs-based, role-based, hybrid), profit-sharing basics, and IP ownership. Studios discuss what financial sustainability means personally, explore open-book practices, and start thinking about what "fair" compensation looks like. The session connects financial decisions to the governance structures from Session 5.
:::tip
**Homework assigned:** discuss financial transparency (what feels vulnerable to share?) and compensation models (what feels fair?). These conversations are prep for the PS meeting this week.
:::
### :eyes: **Your role during session**
* Observe how your studio reacts to the compensation models discussion: where do they light up? Where do they tense up?
* Listen for financial information gaps: who has financial literacy? Who doesn't?
* Note whether anyone avoids the personal financial sustainability question
* Watch the IP ownership discussion; this can surface unexpected disagreements, especially if someone brought existing work into the project
### 👆 **Your role after session**
* Check that everyone understands the homework and is willing to have the financial conversations
* Note any immediate tensions about money that surfaced during the session
* Make sure they know the tools mentioned: [CoBudget](https://cobudget.com/), [OpenCollective](https://opencollective.com/), [coop.love](https://coop.love)
## **This week's Studio Support Meeting: Financial Transparency and Compensation**
### **📚 Materials**
* Compensation models reference (equal pay, needs-based, role-based, hybrid)
* Studio's Community Rule draft from Session 5 (financial decision-making sections)
* Revenue sources overview from the session
### :world_map: **Context**
Money is where values meet reality. This studio support meeting helps the studio have the financial conversations that most groups avoid. Your role is to create enough safety for vulnerability while pushing past surface-level comfort. These conversations don't need to reach decisions today; they need to *happen*.
### **👆 Before the session**
* Check in about whether they've started reflecting on the homework questions
* Review the studio's governance draft: what did they decide about financial decision-making?
* Be prepared for this session to be emotionally charged
### **🌊 Session flow**
#### **Check-in (5 min)**
"The session covered a lot of ground about money. What's sitting with you? Anything surprising or anything you're dreading talking about?"
#### **Financial transparency (15-20 min)**
Start with the personal reflection prompt from Session 5 homework:
"What financial information have you never been allowed to see at work? What might have been different if you had?"
Let each person share. This grounds the conversation in lived experience before it becomes abstract.
Then move to the studio:
**Prompts:**
* "What financial information would feel vulnerable to share with your studio?"
* "What would you need in order to feel safe sharing it?"
* "What's the minimum level of financial transparency you'd want in your coop?"
**Practical questions:**
* "Who currently knows the most about the studio's finances? Is that a choice or a default?"
* "If you were to do open books, what would that actually look like? A shared spreadsheet? Monthly summaries? Full access to accounts?"
* "What's one step you could take this week toward more transparency?"
Don't push anyone to share financial details they're not ready to. The goal is *naming the discomfort*.
#### **Compensation models (15-20 min)**
Review the four models briefly:
* **Equal pay:** same rate regardless of role
* **Needs-based:** adjusted for members' actual financial situations
* **Role-based:** different rates for different roles
* **Hybrid:** base rate plus adjustments
**Discussion prompts:**
* "What feels fair to you? Where do you notice tension between 'fair' and 'comfortable'?"
* "What would you need to know about each other's situations to decide together?"
* "Which model aligns best with your values?"
**Dig deeper:**
* "If you chose equal pay, what happens when one person is working 40 hours and another is working 15?"
* "If you chose needs-based, who decides what counts as a 'need'?"
* "If you chose role-based, who decides which roles are worth more, and doesn't that recreate hierarchy?"
You don't need to reach a decision.
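If the group wants to see how the trade-offs play out numerically, a toy comparison can make the discussion concrete. Every figure and adjustment rule below is invented for illustration; the needs-based and hybrid formulas are just one possible interpretation of those models, not the program's definition of them:

```python
# Toy comparison of the four compensation models for a hypothetical
# three-person studio. All figures and adjustment rules are invented.

members = [
    {"name": "A", "hours": 40, "role_rate": 30, "needs_adj": 0},
    {"name": "B", "hours": 15, "role_rate": 25, "needs_adj": 400},  # e.g. dependents
    {"name": "C", "hours": 30, "role_rate": 25, "needs_adj": 0},
]
pool = 9000  # total monthly compensation budget

# Equal pay: same monthly amount regardless of role or hours
equal = {m["name"]: pool / len(members) for m in members}

# Role-based: hours x role-specific rate (note: not anchored to the pool)
role_based = {m["name"]: m["hours"] * 4 * m["role_rate"] for m in members}

# Needs-based: equal split of what remains after funding needs adjustments
adjustable = pool - sum(m["needs_adj"] for m in members)
needs_based = {m["name"]: adjustable / len(members) + m["needs_adj"] for m in members}

# Hybrid: a flat base plus an hours-proportional share of the remainder
base = 1500
remainder = pool - base * len(members)
total_hours = sum(m["hours"] for m in members)
hybrid = {m["name"]: base + remainder * m["hours"] / total_hours for m in members}

for label, model in [("equal", equal), ("role", role_based),
                     ("needs", needs_based), ("hybrid", hybrid)]:
    print(label, {k: round(v) for k, v in model.items()})
```

Notice that role-based is the only model not anchored to the shared pool, and that the same pool lands very differently per person under the others: that gap is exactly what the "fair vs. comfortable" prompt is probing.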
#### **IP ownership first pass (5-10 min)**
If there's time, and only if the studio is ready:
* "Who owns the game you're making together?"
* "Has anyone brought existing work into the project? What happens to that?"
* "What happens to IP if someone leaves?"
If these questions create tension, name it: "This is the kind of conversation that gets harder the longer you wait. Notice where you're not aligned."
#### **Close (5 min)**
* "What's one financial conversation your team has been avoiding?"
* "What's one concrete step you can take before next session?"
* Remind them: Session 7 is about conflict, and money is often where conflict shows up first
### :star: **Tips**
If someone shuts down:
* "Money stuff can be really personal. You don't have to share anything you're not ready to. But notice what you're protecting and why."
If the group avoids specifics:
* "Saying 'we'll figure it out later' is a way to avoid financial conversations. Try to think of a specific decision to discuss today."
If one person has significantly more financial literacy:
* "Part of transparency is making sure everyone can participate in financial decisions. Can you explain that in plain terms?"
If there's a clear financial power imbalance:
* Don't force anyone to disclose. But you can note: "Financial differences affect power whether you name them or not. The question is whether you address it openly."
If they want to decide compensation now:
* "You can start with a provisional model. Try it for a period, then revisit. Consent-based: is this good enough for now, safe enough to try?"
### **🏁 After the session**
* Note how the financial conversations went: where was there openness vs. avoidance?
* Note any power dynamics around financial literacy or financial resources
* Note any IP ownership disagreements; these need to be resolved before incorporation
* Bring observations to your PS check-in
## :triangular_flag_on_post: **Red flags to watch for**
* One person controlling all financial information or decisions
* Someone minimizing their own financial needs to match the group
* "We don't need to talk about money yet": avoidance that will become a crisis later
* Financial plans that assume best-case scenarios with no contingency
* Major gaps in financial literacy that no one is addressing
* IP ownership assumptions that haven't been discussed, especially if someone brought pre-existing work
* Compensation discussions where one person's opinion is treated as the default

---
title: 'Session 7: Conflict Resolution and Collective Care'
collection: Cooperative Foundations
path: >-
Cooperative Foundations/Peer Support Playbook/Session Guides/Session 7:
Conflict Resolution and Collective Care
parentDocument: Session Guides
outlineId: 146f541d-6ccb-42be-95f7-53ed23d5ed90
updatedAt: '2026-03-09T17:46:30.775Z'
createdBy: Jennie R.F.
---
> **Session content:** See [Session 7: Conflict Resolution and Collective Care](/doc/session-7-conflict-resolution-and-collective-care-fplNXDnOWp) for the full curriculum.
## Pre-session
* Review Baby Ghosts' [Conflict Resolution Policy](https://publish.obsidian.md/baby-ghosts-corp-docs/Public/Policies/Conflict+Resolution+Policy) before the session; this is the template participants will adapt for homework
* Check in with your studio about how their compensation discussions went; any friction that came up is useful for this session
## **What happens in session**
The heaviest session. Studios learn to reframe conflict as data (not failure), distinguish structural from interpersonal conflict, and practice behaviourally-specific feedback. Key tools: the Loving Justice framework (Brave? Kind? Honest? Humble?), the intent/behaviour/impact model ("stay on your side of the net"), and the Window of Transformation (zones of activation). The session covers multi-directional accountability, escalation as care, and the idea that trust comes from repair, not avoidance.
### :eyes: **Your role during session**
* Observe how your studio responds to the conflict reframing: relief, resistance, or discomfort can all be informative
* Watch the activity closely: are they able to use behaviourally-specific feedback or do they slide into judgments?
* Note whether anyone identifies conflicts they've been avoiding
* Pay attention to body language during the accountability discussion: who checks out? Who leans in?
### 👆 **Your role after session**
* Check in with each studio member (even briefly, via Slack) about how the session landed
* Make sure they have the link to Baby Ghosts' [Conflict Resolution Policy](https://publish.obsidian.md/baby-ghosts-corp-docs/Public/Policies/Conflict+Resolution+Policy)
* If any studio member seems activated or upset, reach out directly. This session can surface real pain.
## **This week's Studio Support Meeting: Conflict Policy and Practice**
### **📚 Materials**
* Baby Ghosts' [Conflict Resolution Policy](https://publish.obsidian.md/baby-ghosts-corp-docs/Public/Policies/Conflict+Resolution+Policy) and [Procedures](https://publish.obsidian.md/baby-ghosts-corp-docs/Public/Procedures/Conflict+Resolution+Procedures)
* Loving Justice framework reference
* Window of Transformation zones reference
### :world_map: **Context**
This PS meeting has two parts: (1) helping the studio name an avoided tension, and (2) reviewing the conflict resolution template together. The order matters: naming a real tension first gives the template review practical grounding. But read the room. If the tension-naming conversation goes deep, let it run and abbreviate the template review. The real work is the conversation, not the document.
This may be the most emotionally demanding PS meeting. Be prepared to hold space without trying to fix everything.
### **👆 Before the session**
* Review the Baby Ghosts conflict resolution policy and procedures yourself; know the structure well enough to guide a discussion
* Reflect on what you observed during the session and the compensation discussion last week: is there an unresolved tension you've noticed?
* Check your own readiness. If you're carrying a lot from your own studio or personal life, be honest with yourself about your capacity to hold space today.
### **🌊 Session flow**
#### **Check-in (5 min)**
"How are you feeling after that session? Anything stirred up?"
This isn't a throwaway question. Give it real space. If someone needs to talk, let them.
#### **Name one avoided tension (15-20 min)**
:::warning
***This could be hard.*** Go gently but don't avoid it.
:::
"What conflict or tension has your studio been avoiding? It doesn't have to be big; small avoidances are actually great to examine."
**If no one speaks up immediately**, let the silence sit. Count to 15 in your head before you intervene. Then try:
* "Is there something you've been wanting to bring up but haven't found the right moment?"
* "Think back to the last few weeks. Was there a moment where something felt off but no one said anything?"
* "Are there any patterns from the Informal Hierarchy Check-In (Session 4) that you haven't addressed?"
**If something does come up:**
Help them practice the tools from the session:
1. **Behaviourally-specific feedback:** "What did you actually observe? What's the behaviour you can point to?"
2. **Stay on your side of the net:** "What was the impact on you? Separate that from what you think they intended."
3. **Loving Justice check:** "Is what you want to say brave? Kind? Honest? Humble?"
4. **Window of Transformation:** "Where are you right now? Where do you think the other person is? Is this a good time for this conversation?"
**If something big surfaces:**
Don't try to resolve it in this meeting. Help them decide:
* "Is this something you want to keep working through now, or does it need a dedicated conversation?"
* "Would it help to have a third party present when you continue this?"
* "What would make it safe enough to keep talking?"
#### **Review the conflict resolution template (15-20 min)**
Go through Baby Ghosts' policy together. For each section, ask:
* "Does this make sense for your studio?"
* "What would you change?"
* "What's missing?"
**Key areas to discuss:**
**Who initiates:** "In your studio, who would actually be the one to say 'we need to use the process'? Is it comfortable for everyone to do that, or would some people never initiate?"
**Documentation:** "How much documentation feels right? Too little and things get lost. Too much and it becomes punitive."
**Timelines:** "How quickly should you respond to a raised concern? What's realistic?"
**When resolution isn't reached:** "What happens if you go through the whole process and still can't agree? What's the last resort?"
**Escalation:** "Who's your third party? Another studio member? Your PS? Someone outside the program?"
They don't need to finalize a policy today. The goal is to identify what resonates, what needs adapting, and what gaps exist.
#### **Close (5 min)**
* "What's one thing you want to commit to about how you handle conflict going forward?"
* "Is there anything from today's conversation that needs follow-up before next session?"
* Remind them: Session 8 is the last session. Encourage them to use this week to address anything unresolved.
### :star: **Tips**
If no one wants to name a tension:
* Don't force it. "That's okay. The invitation stays open. Sometimes naming something takes longer. You can always come back to this."
If it gets heated:
* "Let's pause. Where is everyone right now?" (Use the Window of Transformation language.) "Is this a conversation we can have right now, or do we need to step back?"
If someone minimizes:
* "You said 'it's not a big deal' but you brought it up. Can you say more about why it's on your mind?"
If someone deflects to structural issues to avoid interpersonal ones (or vice versa):
* "It can be both. What's the structural part, and what's the interpersonal part? Which one are you more comfortable talking about and which one are you avoiding?"
If the template review feels abstract:
* "Think about the tension we just discussed. Would this process have helped? Where would it break down?"
### **🏁 After the session**
* Note how the tension-naming went: did something real surface, or did the studio stay safe?
* Note how they responded to the conflict resolution template: did they engage or treat it as a formality?
* If any individual seems affected, follow up with them directly
* Bring observations to your PS check-in, especially anything that concerns you about studio dynamics
## :triangular_flag_on_post: **Red flags to watch for**
* A studio that insists they have no conflicts; avoidance is not peace
* Someone who identifies a conflict but then immediately retracts: "never mind, it's fine"
* Conflict always attributed to one person: scapegoating
* Political framing used to avoid naming emotional experience (the emotional-political conflation trap from the session)
* A studio that wants the policy "just in case" but clearly has an active, unnamed conflict
* Someone who seems shut down or dissociated; check in with them privately after
* Performative agreement: "I'm fine with whatever the group decides" when they clearly aren't

---
title: 'Session 8: Self-Evaluation and Pathways'
collection: Cooperative Foundations
path: >-
Cooperative Foundations/Session Content/Session 8: Self-Evaluation and
Pathways
parentDocument: Session Content
outlineId: 8c4f622c-661b-4e40-bb59-446b8b37cf4b
updatedAt: '2026-03-09T15:40:46.528Z'
createdBy: Jennie R.F.
---
> **Peer Supports:** See [PS Guide: Session 8 — Self-Evaluation and Pathways](/doc/ps-guide-session-8-self-evaluation-and-pathways-3hukBAaITz) for your role during session and this week's studio support meeting.
## Welcome
* Slide: Tag Yourself
---
## Intro - 5 min
This is the last session. Wahoo! Where did the time go?!
Last week we talked about conflict. Some of you may have had difficult conversations since then. That's the good stuff, as eileen would say. It's okay if things feel unfinished or messy. You don't have to have it all figured out by now. We hope you feel you have the tools and the trust to keep figuring it out together.
What happens next is going to be harder than the program. You'll ship a game, or you won't. Money will come in, or it won't. Life will get busy. And the governance practices you've built over these weeks can quietly erode if you stop doing them.
You might skip a governance meeting because you're crunching… then another. Someone starts "just handling" the finances because it's easier than showing someone else how to do it. Six months later someone asks "why are we even a co-op?" and no one has a good answer. This is the most common way cooperatives fail.
The post-program supports we're about to talk about exist to keep up your momentum and help you build your collaborative muscles - and remember the "why."
And today we pause to reflect on what you've built and where you're headed. We have two assessments - individual and studio - to help you see how far you've come and clarify your next steps.
And then we'll celebrate as a group!
---
## Check-in - 5 min
*What has shifted for you since Session 0?* *Has your emotional connection to the studio changed over the program?*
---
## Self-assessment overview - 5 min
It's easy to get into a groove and forget to check in with yourself. But clarity of self-reflection makes you a better collaborator. Most of the work is making the time and space to sit with your thoughts before writing them down. That's what prevents decisions made in haste or fear and builds intentional practice instead.
We have two assessments for you today. The first is personal and private, just for you. The second is collective: you'll complete it as a studio, and Baby Ghosts will review it to understand where you're at and how to support you going forward. This is also important feedback for us, so please be honest about what worked and what didn't.
---
## Personal self-assessment - 10 min
**This is private.** Baby Ghosts won't see it. Your studio won't see it unless you choose to share.
This helps you get a clearer sense of your personal and professional baseline. Be on the same page with yourself before you meet with your team. Where have you grown? Where do you still feel uncertain? What do you need from your collaborators that you haven't asked for yet?
\[TODO-06: Link to assessment form when ready. Tracked in Asana.\]
---
## Studio self-assessment - 10 min
**This is collective.** You'll complete it together as a studio, and Baby Ghosts will review it to understand where you're at and how to support you.
The template is on your studio Miro board. You'll rate where your studio is on each of seven areas using this scale:
1. **Considering/Reflecting** -- You've thought about it individually but haven't discussed it as a team yet.
2. **Discussing Collectively** -- You're talking about it together but haven't made decisions.
3. **Brainstorming** -- You're actively generating ideas and exploring options.
4. **Sifting/Sorting** -- You're narrowing down, making choices, working toward alignment.
5. **First Draft of Documentation** -- You have something written down: a policy, a process, a shared agreement.
The seven areas map to the arc of this program:
1. Values, purpose & alignment
2. Governance
3. Decision-making & meetings
4. Equitable economics
5. Conflict & repair
6. Program reflection
7. What's next
Be honest with each other. A "2" in conflict resolution after eight weeks means you know where to focus. This assessment also helps you understand if your studio is ready to continue together, to pause, or to part ways. All of these are valid outcomes.
---
## What's next - 15 min
Two questions to start: *What do you want to focus on as a studio going forward? What's your plan for revisiting your governance and values after the program ends and who's responsible for scheduling it?*
### Stay connected: Ghost Guild
When the program wraps up, your weekly Peer Support sessions end, but your Peer Support isn't going anywhere. They're still part of the community, and many are happy to hear from you as you hit milestones or run into challenges.
Going forward, your home base for support is the Ghost Guild, Baby Ghosts' alumni community. Program alumni are automatically enrolled. Membership includes free access to talks and workshops, community building with solo devs, early access to resources, and opportunities to become a Peer Support or contribute to the knowledge commons.
### Keep practicing
Build in a revisit of your values and governance documents. Quarterly is ideal, twice a year at minimum. Put it on the calendar before you leave today. Ask: are we still practicing what we said we would? Where have we drifted? What needs updating? The studios that stay cooperatives are the ones that keep asking these questions.
Build in a self-accountability practice too. Values drift can happen quietly. To prevent it, make a regular habit of asking yourself: Did my choices today align with who I want to be? This can be as simple as a five-minute reflection at the end of the week, or a quick message to a collaborator: "Hey, I was short with you yesterday. That wasn't who I want to be. Sorry." You've been building this muscle all program. Stay strong! Put it alongside your governance review on the calendar.
### Upcoming workshops
We offer standalone workshops throughout the year on topics we've introduced here and some we haven't had time to cover in the program. These are included with Ghost Guild membership or available for public registration. Past and upcoming workshops include: Legal Structures & By-Laws, Business Planning, Grantwriting & Alt Funding, Social Impact, Advanced Governance, Miro / Tools Workshop, Why We're Here: Telling Your Studio's Story, and Process Development.
### Interested in becoming a Peer Support?
Some of you may be interested in supporting future cohorts as a Peer Support. This is a paid role and a meaningful way to build capacity in the community; you already know firsthand what studios go through, and that experience is exactly what makes a great PS.
Here's what the role involves: you'd attend all program sessions alongside your assigned studios, facilitate weekly peer support meetings with one studio, and participate in PS training before the cohort starts. It's approximately 4-6 hours per week during the 10-week program. If you're someone who found yourself energized by the collaborative work, who notices group dynamics, and who cares about holding space for others, this might be a great fit. Talk to us after the session or reach out anytime.
### Incorporation
If your studio is ready to incorporate as a cooperative, we can point you toward resources and service providers who understand cooperative structures. We don't provide legal advice, but we can help connect you with people who do.
And a reminder: you don't have to incorporate to work cooperatively. Many studios practice cooperative values and governance long before or without ever filing incorporation papers. The practices matter more than the paperwork.
\[TODO-14: Develop resources, service providers, and readiness assessment - tracked in Asana.\]
---
## Collaborative Zine Making - 35 min
*eileen leads this activity.*
---
## Closing - 5 min
You're about to re-enter an industry that defaults to hierarchy. Lawyers will draft conventional corporate structures. Funders will ask for a single point of contact. Publishers will want to know who's in charge. Your own teammates under pressure may reach for the familiar. This is expected. It's how we've learned to operate.
There is no self-made entrepreneur. Everyone is embedded in community; cooperatives just make that explicit. You've spent eight weeks building the muscle to do that together. Keep using it.
*What's something you're proud of from the program?* *What conversation did you have that you wouldn't have had otherwise?*
---
## Homework
1. **Complete your personal assessment** -- Do this before your studio meets. This is private, just for you.
2. **Complete your studio assessment together** -- Meet as a studio and work through the template on your Miro board. This comes back to Baby Ghosts so we can understand where you're at and how to support you going forward.
\[TODO-04: Add due dates for assessments\]
And stay in touch. You're part of this community now. 👻👻👻
---

---
title: Session Content
collection: Cooperative Foundations
path: Cooperative Foundations/Session Content
parentDocument: null
outlineId: 27d69e39-4bc3-4122-8d5f-83ecca10b1ce
updatedAt: '2026-03-15T16:17:56.679Z'
createdBy: Jennie R.F.
---
:::info
This overview should describe how presenters should use the session content.
:::

---
title: Session Guides
collection: Cooperative Foundations
path: Cooperative Foundations/Peer Support Playbook/Session Guides
parentDocument: Peer Support Playbook
outlineId: 62a75910-60e6-4018-9391-b4afbc50b419
updatedAt: '2026-03-09T15:48:41.025Z'
createdBy: Jennie R.F.
---

---
title: Values Mapping
collection: Cooperative Foundations
path: >-
  Cooperative Foundations/Peer Support Playbook/Exercises and Prompts/Values
  Mapping
parentDocument: Exercises and Prompts
outlineId: fcba1d09-2356-4d69-9995-f512847ac552
updatedAt: '2026-03-09T15:48:21.746Z'
createdBy: Jennie R.F.
---
When: Between Session 1 and Session 2

---
title: Why-What-How
collection: Cooperative Foundations
path: >-
  Cooperative Foundations/Peer Support Playbook/Exercises and
  Prompts/Why-What-How
parentDocument: Exercises and Prompts
outlineId: ff5419e1-cfec-48fb-b988-67b8faaad067
updatedAt: '2026-03-09T15:48:23.980Z'
createdBy: Jennie R.F.
---
### The Why/What/How framework

---
title: Admin Guide
collection: Hub User Guide
path: Hub User Guide/Admin Guide
parentDocument: null
outlineId: 9433ff5a-c85e-48e4-883a-0b2fdca726c0
updatedAt: '2026-03-16T10:36:57.289Z'
createdBy: Jennie R.F.
---

View file

@ -0,0 +1,75 @@
---
title: Application Status Reference
collection: Hub User Guide
path: Hub User Guide/Reference/Application Status Reference
parentDocument: Reference
outlineId: 84ce9efa-0f15-4cb0-9036-2ef0757446d7
updatedAt: '2026-03-16T10:44:57.218Z'
createdBy: Jennie R.F.
---
Every application moves through a defined set of statuses as it progresses from submission to final outcome. This page documents each status, what it means, and the valid transitions between them.
## Status Definitions
**`new`** -- Application recently submitted. This is the starting status for every application that comes in through the apply form.
**`waiting_first_round`** -- Waiting to be scheduled for a first-round interview. The application has been advanced from its initial review stage into the first-round interview stage but does not yet have an interview booked.
**`first_round_scheduled`** -- First-round interview has been scheduled. Set automatically when a Cal.com booking webhook matches the applicant's email, or when an admin manually changes the status.
**`needs_first_decision`** -- First round complete, awaiting a decision. Set automatically when an admin locks the first-round stage (all completed reviews must be locked and the minimum reviewer threshold met).
**`waiting_second_round`** -- Waiting to be scheduled for a second-round interview. The committee decided to advance the application past the first round into a second-round stage.
**`second_round_scheduled`** -- Second-round interview has been scheduled. Same mechanism as first round -- triggered by a Cal.com webhook match or manual status change.
**`needs_final_decision`** -- Second round complete, awaiting final decision. Set automatically when the final stage is locked.
**`accepted`** -- Application accepted into the cohort. Terminal status.
**`waitlist`** -- Application waitlisted. Can still transition to `accepted` or `declined`.
**`declined`** -- Application declined. Terminal status.
## Valid Transitions
Each status can only move to specific next statuses. The system enforces these rules and rejects invalid transitions.
* `new``waiting_first_round`, `first_round_scheduled`, `declined`
* `waiting_first_round``first_round_scheduled`, `waiting_second_round`, `declined`
* `first_round_scheduled``needs_first_decision`, `waiting_first_round`, `declined`
* `needs_first_decision``waiting_second_round`, `accepted`, `waitlist`, `declined`
* `waiting_second_round``second_round_scheduled`, `accepted`, `waitlist`, `declined`
* `second_round_scheduled``needs_final_decision`, `waiting_second_round`, `declined`
* `needs_final_decision``accepted`, `waitlist`, `declined`
* `accepted` → (none -- terminal)
* `waitlist``accepted`, `declined`
* `declined` → (none -- terminal)
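The transition rules above amount to a lookup table. As an illustrative sketch only (the names and structure here are assumptions, not the actual Cohort-OS code), the check could look like:

```python
# Transition table from the status reference above. This is a hypothetical
# sketch for illustration, not the real Cohort-OS implementation.
VALID_TRANSITIONS = {
    "new": {"waiting_first_round", "first_round_scheduled", "declined"},
    "waiting_first_round": {"first_round_scheduled", "waiting_second_round", "declined"},
    "first_round_scheduled": {"needs_first_decision", "waiting_first_round", "declined"},
    "needs_first_decision": {"waiting_second_round", "accepted", "waitlist", "declined"},
    "waiting_second_round": {"second_round_scheduled", "accepted", "waitlist", "declined"},
    "second_round_scheduled": {"needs_final_decision", "waiting_second_round", "declined"},
    "needs_final_decision": {"accepted", "waitlist", "declined"},
    "accepted": set(),   # terminal
    "waitlist": {"accepted", "declined"},
    "declined": set(),   # terminal
}

def can_transition(current: str, target: str) -> bool:
    """True if moving from `current` to `target` is an allowed transition."""
    return target in VALID_TRANSITIONS.get(current, set())
```

Anything not in the table is rejected, which is why a manual status change to an invalid target fails.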
## What Triggers Transitions
**Automatic transitions:**
* **Stage locking** sets `needs_first_decision` or `needs_final_decision` depending on whether the locked stage is the final stage or a first-round stage.
* **Advancing to next stage** sets `waiting_first_round` or `waiting_second_round` based on the next stage's type. If an interview is already scheduled for that round, the status skips directly to `first_round_scheduled` or `second_round_scheduled`.
* **Cal.com webhooks** move an application from `waiting_first_round` to `first_round_scheduled` (or the second-round equivalent) when a booking is created.
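The "advancing to next stage" rule above, including the skip when an interview is already booked, can be sketched as follows. This is an assumption-laden illustration; the function and stage-type names are not the real Cohort-OS identifiers:

```python
# Hypothetical sketch of the status chosen when an application advances
# into an interview stage. Names are illustrative only.
def status_after_advance(next_stage_type: str, interview_already_booked: bool) -> str:
    """Pick the status for an application entering an interview stage."""
    waiting = {"first-round": "waiting_first_round",
               "second-round": "waiting_second_round"}
    scheduled = {"first-round": "first_round_scheduled",
                 "second-round": "second_round_scheduled"}
    # If a booking already exists for that round, skip the waiting status.
    if interview_already_booked:
        return scheduled[next_stage_type]
    return waiting[next_stage_type]
```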
**Manual transitions:**
* Admins can change any application's status through the **Applications** tab, provided the transition is valid.
* Bulk status changes are available for processing multiple applications at once.
* The **Decisions** tab records accept, waitlist, or decline decisions and updates the status accordingly.
:::info
An application can be declined from almost any active status. The `waitlist` status is the only non-terminal outcome status -- waitlisted applications can still be accepted or declined later.
:::
## Status Groups
The system groups statuses for filtering:
* **Active:** `new`, `waiting_first_round`, `first_round_scheduled`, `needs_first_decision`, `waiting_second_round`, `second_round_scheduled`, `needs_final_decision`
* **Decision pending:** `needs_first_decision`, `needs_final_decision`
* **Terminal:** `accepted`, `waitlist`, `declined`
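As a quick sketch of how these filter groups relate (names here are assumptions for illustration, not Cohort-OS identifiers), note that the decision-pending group is a subset of the active group:

```python
# Hypothetical sketch of the three filter groups listed above.
ACTIVE = {
    "new", "waiting_first_round", "first_round_scheduled",
    "needs_first_decision", "waiting_second_round",
    "second_round_scheduled", "needs_final_decision",
}
DECISION_PENDING = {"needs_first_decision", "needs_final_decision"}
TERMINAL = {"accepted", "waitlist", "declined"}

def matches_group(status: str, group: str) -> bool:
    """Check a status against one of the three filter groups."""
    groups = {"active": ACTIVE, "decision_pending": DECISION_PENDING, "terminal": TERMINAL}
    return status in groups[group]
```

Decision-pending applications are still active; `waitlist` sits in the terminal group for filtering even though it can still transition.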

---
title: Assigning Reviewers
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Assigning Reviewers
parentDocument: Admin Guide
outlineId: 0626a7d6-5fd6-4476-8050-565fe054f056
updatedAt: '2026-03-16T10:36:59.329Z'
createdBy: Jennie R.F.
---
Reviewer assignment is a two-step process: first add reviewers to the cohort, then assign them to specific applications. All assignment is manual -- there is no auto-assignment.
## Prerequisites
You need cohort admin or higher access.
## Step 1: Add Reviewers to the Cohort
The **Reviewers** tab on the cohort page shows all reviewers currently assigned to this cohort along with their assignment counts.
1. Click **Add Reviewers**.
2. A modal opens with two tabs:
* **Select Existing** -- Choose from users already in your organization. Select one or more users from the dropdown and click **Add Reviewers**.
* **Add New User** -- Create a new user account. Enter their name, email, and role (Reviewer, Cohort Admin, or Org Admin). Click **Create & Add as Reviewer**. They will receive a password setup email.
3. After adding, you are prompted to optionally assign them to applications immediately (Step 2 below). You can skip this and assign later.
## Step 2: Assign Reviewers to Applications
Once reviewers are on the cohort, assign them to specific applications:
### From the Reviewers Tab
Click the assignment count next to a reviewer's name to open the **Manage Assignments** modal. This shows:
* **Current Assignments** -- Applications already assigned to this reviewer, with permission level and scoring status. You can toggle between `review` and `view` permissions, or unassign applications.
* **Add Applications** -- Select additional applications to assign. Choose a permission level and click **Assign**.
### From the Applications Tab (Bulk)
Select applications using the checkboxes on the **Applications** tab, then choose **Assign Reviewers** from the **Bulk Actions** dropdown. Select the reviewers and permission level, then confirm.
## Permission Levels
* `review` -- The reviewer can view the application and submit scores. This is the default.
* `view` -- The reviewer can see the application but cannot score it. Useful for committee members or observers.
Permission can be changed at any time from the **Manage Assignments** modal. Locked reviews cannot have their permission changed.
## Removing Assignments
* **Unassign from application** -- In the **Manage Assignments** modal, click the X button next to an assignment. Locked reviews cannot be unassigned.
* **Remove from cohort** -- On the **Reviewers** tab, click the trash icon next to a reviewer to remove them from the cohort entirely.
:::warning
Removing a reviewer from the cohort does not delete their completed reviews. The reviews remain in the system but the reviewer loses access.
:::
:::info
When a reviewer is assigned to an application, that application appears on their review dashboard at `/reviews`. Unassigning removes it from their dashboard.
:::

---
title: Configuring Review Stages
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Configuring Review Stages
parentDocument: Admin Guide
outlineId: c3442aae-6d96-4b49-a1b9-7556546ab483
updatedAt: '2026-03-16T10:36:58.951Z'
createdBy: Jennie R.F.
---
Review stages define the evaluation pipeline for your cohort. Each stage has its own rubric, reviewer requirements, and settings. You configure stages from **Manage** > **Configure Stages** in the top navigation -- not from within the cohort view.
## Prerequisites
You need cohort admin or higher access. The cohort must already exist with stages created by a system administrator.
## Accessing Stage Configuration
1. Click **Manage** in the top navigation bar.
2. Select **Configure Stages**.
3. Choose a cohort from the dropdown.
The page shows configuration cards for three stage types: **Application Stage**, **First Round Interviews**, and **Second Round Interviews**.
## Stage Types
* **Application Reviews** (`application-reviews`) -- The initial written application review. Reviewers score the submitted application form against a rubric.
* **First Round** (`first-round`) -- First-round interview scoring. Reviewers evaluate interview performance using a separate rubric.
* **Second Round** (`second-round`) -- Final-round interview scoring. Uses its own rubric for the final evaluation.
## Settings Per Stage
Each stage card offers the following configuration:
### Scoring Rubric
Select the Flywheel rubric used for scoring this stage. Application stage rubrics score the written application; interview rubrics score interview performance. Rubrics are managed in Flywheel and appear here as a dropdown.
### Minimum Reviewers
Set the minimum number of completed reviews required before a stage can be locked. The default is 2. This threshold is enforced when locking -- you cannot lock a stage until this many reviewers have submitted scores.
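The threshold check described above is simple but worth stating precisely; as an illustrative sketch (function and argument names are assumptions, not the real Cohort-OS API):

```python
# Hypothetical sketch of the lock gate: a stage can only be locked once
# the completed-review count meets the configured minimum (default 2).
def can_lock_stage(completed_reviews: int, min_reviewers: int = 2) -> bool:
    """True when enough reviewers have submitted scores to allow locking."""
    return completed_reviews >= min_reviewers
```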
### Interview Question Reference (Interview Stages Only)
For first-round and second-round stages, you can enter interview questions or talking points. This content is available to reviewers as a popup reference during scoring. Use `##` for section headers.
## Saving
Each stage has its own **Save Configuration** button. Click it after making changes. Minimum reviewer changes save immediately when you change the value.
## Stage Statuses
Each stage has a status: `draft`, `active`, `completed`, or `archived`. These are managed by the system as the cohort progresses. Stage statuses are separate from the cohort lifecycle status.
:::info
The Application stage also includes survey version management. This controls which version of the Flywheel survey is used for application questions and ensures scoring consistency.
:::

---
title: Customizing the Apply Form
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Customizing the Apply Form
parentDocument: Admin Guide
outlineId: e25f8302-206d-42a0-bf51-33173f5e764f
updatedAt: '2026-03-16T10:36:58.117Z'
createdBy: Jennie R.F.
---
The **Form** tab on the cohort page lets you customize the public application form's appearance and messaging. Changes are reflected on the live form at `/c/{cohort-slug}/apply`.
## Prerequisites
You need cohort admin or higher access. The cohort must have a survey configured in its settings (questions come from Flywheel).
## Live Preview
The top of the **Form** tab shows a live preview of the application form. As you edit fields below, the preview updates immediately. The preview renders actual survey questions from Flywheel, grouped into sections: Eligibility Requirements, Personal Information, Studio Information, Application Questions, and Supporting Materials.
At the bottom of the preview, the confirmation message is shown as it will appear to applicants after submission.
## Content & Messaging
* **Form Title** -- The main heading on the application form.
* **Badge Text** -- A small label displayed above the title (e.g., "Now Accepting Applications").
* **Subtitle** -- A brief description shown below the title.
* **Confirmation Message** -- The message applicants see after submitting their application.
* **Support Email** -- Contact email displayed on the form for applicant questions.
## Color Theming
Click **Customize Colors** in the preview header to open the color palette. You can configure:
* **Primary color** -- The accent color used for buttons, links, and highlights. Choose from 18 color options including black/white, red, blue, indigo, and more.
* **Neutral color** -- The base gray tone used for backgrounds and borders. Options include gray, slate, zinc, neutral, and stone.
* **Theme** -- Choose Light, Dark, or System (follows the visitor's device preference).
The badge color automatically matches the primary accent color.
## Next Steps Messaging
These fields appear on the confirmation page after submission:
* **Review Process** -- Tell applicants how long review takes.
* **Decision Notification** -- When applicants will hear back.
* **Onboarding Information** -- When the program starts if accepted.
## Saving and Previewing
Click **Save Changes** at the bottom of the page to persist your customization. Click **Preview** in the header to open the live public form in a new tab.
:::tip
Save your changes before previewing. The preview button opens the actual public form URL, which uses the last saved customization.
:::

---
title: Email Tasks
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Email Tasks
parentDocument: Admin Guide
outlineId: 17b15055-b5dd-4753-8d75-11d17c87315b
updatedAt: '2026-03-16T10:45:55.367Z'
createdBy: Jennie R.F.
---
The **Email Tasks** tab tracks which applicants need to be notified and provides a record of completed notifications. It appears as a tab on the cohort page, with a badge count showing pending items.
## Prerequisites
You need cohort admin or higher access.
## Notification Types
The system tracks the following notification types based on application status:
* **1st Round Scheduling** -- Applicants in `waiting_first_round` or `first_round_scheduled` who have not been sent a scheduling link.
* **2nd Round Scheduling** -- Applicants in `waiting_second_round` or `second_round_scheduled` who have not been sent a scheduling link.
* **Acceptance** -- Applicants with `accepted` status who have not been notified.
* **Waitlist** -- Applicants with `waitlist` status who have not been notified.
* **Decline** -- Applicants with `declined` status who have not been notified.
## Pending Notifications
The top section shows applications awaiting notification. Each row displays the studio name, contact email, current status, notification type, and a **Mark Done** button.
Use the type filter dropdown in the header to narrow the view to a specific notification type. The filter shows counts for each type.
## Marking Notifications as Done
Notification sending happens outside of Cohort-OS (you compose and send emails manually). When you have sent a notification:
1. Click **Mark Done** on the corresponding row.
2. A modal appears with:
* **Date & time sent** -- Pre-filled with the current time. Adjust if the email was sent earlier.
* **Sent by** -- Pre-filled with your name. Change if someone else sent the notification.
3. Click **Confirm** to record the notification.
The application moves from the pending list to the completed list. The badge count on the **Email Tasks** tab updates accordingly.
## Completed Notifications
The bottom section shows all recorded notifications. Each row displays the studio name, contact email, notification type, date notified, and who marked it as sent.
### Editing a Notification
Click the edit icon on any completed notification to update the sent date or the person who sent it. Click **Save** to confirm changes.
### Deleting a Notification
In the edit modal, click **Delete** to remove a notification record. This moves the application back to the pending list.
## Tab Badge Count
The **Email Tasks** tab shows a badge with the count of pending notifications. This updates in real time as you mark items done or as application statuses change.
:::tip
Use the type filter to work through notifications by category. For example, filter to "Acceptance" after making batch accept decisions, and work through the list in one pass.
:::

---
title: Glossary
collection: Hub User Guide
path: Hub User Guide/Reference/Glossary
parentDocument: Reference
outlineId: 6f6f07f0-86e2-44f6-95f0-60f3273e30c0
updatedAt: '2026-03-16T10:37:04.188Z'
createdBy: Jennie R.F.
---
Key terms used throughout Cohort-OS and this documentation.
---
**Blind review** -- A stage setting that hides reviewer identities from other reviewers when viewing consensus scores. Reviewers appear as "Reviewer 1", "Reviewer 2", etc. Admins always see full names regardless of this setting. Enabled by default on all stages.
**Cohort** -- A group of applicants going through a selection process together. Each cohort has its own application form, review stages, reviewers, and decisions. Cohorts move through lifecycle statuses: `draft`, `review`, `decide`, `onboard`, `active`, `closed`.
**Cohort admin** -- A user with the `cohortadmin` role. Can manage applications, assign reviewers, configure stages, make decisions, and send notifications for cohorts they have access to. Cannot manage organization-level settings or users.
**Consensus** -- The aggregated view of all completed reviews for a single application within a stage. Shows average scores per criterion, overall averages, score spread, and a tally of reviewer recommendations. Used by admins and committees to inform decisions.
**Criterion** -- A single dimension of evaluation within a rubric (e.g., "Mission Alignment", "Governance Readiness"). Each criterion has a name, description, maximum point value, and optional scoring thresholds that describe what each performance level looks like.
**Decision** -- The final outcome recorded for an application: `accepted`, `waitlist`, or `declined`. Decisions include a rationale, rationale code, and optional committee notes. Recorded in the **Decisions** tab.
**Flywheel** -- The external assessment engine that Cohort-OS integrates with. Flywheel provides rubrics (scoring criteria and thresholds), surveys (application form question sets), and reporting capabilities. Rubrics are authored in Flywheel and linked to review stages in Cohort-OS.
**Ghostie** -- The avatar system in Cohort-OS. Each user chooses a ghostie -- a small ghost illustration defined by an expression (sweet, mild, exasperated, wtf, disbelieving, double-take) and a body color (from presets or a custom color picker). Ghosties appear on user profile circles throughout the app.
**Guided scoring** -- The primary review interface where reviewers score applications criterion by criterion. In focused mode, one criterion is shown at a time with the applicant's answers alongside. In "show all" mode, every criterion is visible at once. After scoring all criteria, reviewers proceed to the recommendation step.
**Org** -- Short for organization. The top-level entity that owns cohorts, users, and settings. Users can belong to multiple orgs via org memberships. All data is scoped to an org.
**Orgadmin** -- A user with the `orgadmin` role for an organization. Full management access: can manage users, cohorts, settings, and view the activity feed. Can do everything a cohort admin can, plus organization-level administration.
**Orphaned interview** -- A Cal.com booking that could not be automatically matched to an application. This happens when the email on the booking does not match any applicant's contact email in the cohort. Orphaned interviews appear in a modal accessible from the **Applications** tab, where an admin can manually match them to the correct application or dismiss them.
**Permission (view/review)** -- The access level assigned to a reviewer for a specific application. `review` permission allows the reviewer to score the application and submit a review. `view` permission gives read-only access to the application data without the ability to score. View-only reviews are excluded from consensus calculations.
**Rationale code** -- A short code attached to a decision that categorizes the reasoning (e.g., `manual_accept`). Used alongside the free-text rationale to provide structured data about why a decision was made.
**Recommendation** -- A reviewer's suggested outcome for an application, selected after scoring. Four options: **advance** (move to the next stage), **reject** (decline the application), **hold** (flag for further committee discussion), **undecided** (no recommendation yet). Recommendations inform but do not determine the final decision. Reviewers can change their recommendation after submission by providing a reason.
**Reviewer** -- A user assigned to evaluate applications. Reviewers are first added to a cohort's reviewer pool, then assigned to specific applications with either `view` or `review` permission. Reviewers access their assignments through the review dashboard and score applications using the guided scoring interface.
**Rubric** -- A structured scoring framework defined in Flywheel and linked to a review stage. Contains multiple criteria, each with descriptive thresholds/levels. Rubrics standardize evaluation so all reviewers assess applications against the same dimensions.
**Self-assessment** -- A survey sent to members of an applicant's team (e.g., studio members). Each team member receives a unique token-based link that does not require login. Admins can track completion status, resend emails, and view responses. Self-assessment data is available to second-round reviewers during scoring.
**Stage** -- A phase within a cohort's review process. Common stage types are `application-reviews` (initial paper review), `first-round` (first interview round), and `second-round` (second interview round). Each stage has its own rubric, reviewer settings (minimum/maximum reviewers, blind review, self-review), and status. Stages are ordered sequentially and applications advance through them.

---
title: Interview Scheduling
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Interview Scheduling
parentDocument: Admin Guide
outlineId: 5b616a12-b83b-4172-a3c2-820424f1e927
updatedAt: '2026-03-16T10:36:59.796Z'
createdBy: Jennie R.F.
---
Interview scheduling is handled through Cal.com integration. When applicants book interviews through Cal.com, the booking data flows into Cohort-OS automatically via webhooks.
## Prerequisites
Cal.com must be configured with the correct webhook URL and cohort ID. This is set up by system administrators.
## How It Works
Cal.com sends webhook events to Cohort-OS when bookings are created, rescheduled, or cancelled. The system matches bookings to applications by the applicant's email address.
### Booking Created
When an applicant books an interview:
1. The system looks up the application by matching the booking email to the applicant's contact email.
2. If a match is found, the interview schedule data (date, time, meeting link) is saved to the application.
3. The application status updates automatically -- for example, from `waiting_first_round` to `first_round_scheduled`.
4. The current stage is synced to match the new status.
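The matching and status-update steps above might look like the following sketch. All names are illustrative, and case-insensitive email matching is an assumption; the real webhook handler may differ.

```python
def on_booking_created(applications, booking_email, schedule):
    """Match a Cal.com booking to an application by contact email and
    record the schedule (hypothetical sketch of the steps above)."""
    # Assumption: email matching is case-insensitive.
    app = next(
        (a for a in applications if a["email"].lower() == booking_email.lower()),
        None,
    )
    if app is None:
        return None  # no match: the booking becomes an orphaned interview
    app["schedule"] = schedule
    # Advance the status to its *_scheduled counterpart where applicable.
    transitions = {
        "waiting_first_round": "first_round_scheduled",
        "waiting_second_round": "second_round_scheduled",
    }
    app["status"] = transitions.get(app["status"], app["status"])
    return app
```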
### Booking Rescheduled
When an interview is rescheduled, the schedule data (date, time, meeting link) is updated on the application. The application status is not changed.
### Booking Cancelled
When an interview is cancelled:
* The schedule data is cleared from the application.
* If the application is currently in a `*_scheduled` status, it reverts to the corresponding `waiting_*` status.
* Applications in other statuses keep their current status.
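The status reversion on cancellation can be sketched as a string transformation (illustrative only, not the actual implementation):

```python
def status_after_cancellation(status: str) -> str:
    """Revert a *_scheduled status to its waiting_* counterpart;
    all other statuses are left unchanged (sketch of the rule above)."""
    if status.endswith("_scheduled"):
        return "waiting_" + status[: -len("_scheduled")]
    return status
```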
## Orphaned Interviews
If a Cal.com booking email does not match any application in the cohort, the booking becomes an "orphaned interview." When this happens:
* The booking is saved to an orphaned interviews table.
* Org admins receive an email alert with the booking details.
* An **Orphaned** warning badge appears on the **Applications** tab filter bar with a count of unmatched bookings.
To resolve orphaned interviews, click the **Orphaned** badge to open the matching modal. From there you can manually match the booking to the correct application.
## Interview Status on Applications
The **Pipeline** column on the **Applications** tab shows interview scheduling information:
* Applications in `waiting_first_round` or `waiting_second_round` show they are awaiting scheduling.
* Applications in `first_round_scheduled` or `second_round_scheduled` show the scheduled interview date and time.
* Past interview dates appear with strikethrough text.
:::info
Webhook events are deduplicated using the Cal.com event ID. Processing the same event twice has no effect. Applications in terminal statuses (`accepted`, `waitlist`, `declined`) do not have their status changed by new bookings.
:::
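Deduplication by event ID, as described in the note above, can be sketched with a set of processed IDs. This is a minimal in-memory illustration; a real system would persist the IDs.

```python
processed_event_ids = set()

def handle_webhook(event_id: str, apply_change) -> bool:
    """Apply a webhook event at most once, keyed by its event ID (sketch)."""
    if event_id in processed_event_ids:
        return False  # duplicate delivery: no effect
    processed_event_ids.add(event_id)
    apply_change()
    return True
```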

---
title: Interviews
collection: Hub User Guide
path: Hub User Guide/Reviewer Guide/Interviews
parentDocument: Reviewer Guide
outlineId: 11cf13b6-f25d-4106-9b79-4a62b5be1fdc
updatedAt: '2026-03-16T10:41:14.384Z'
createdBy: Jennie R.F.
---
Interview stages follow the same scoring workflow as application review stages, with additional features for preparing, taking notes, and working with scheduled interview times.
## The Dashboard for Interview Stages
Your **My Reviews** dashboard at `/reviews` handles both application reviews and interview assignments in a single view. Interview stage cards show additional information:
* **Stage label** -- displayed as "1st Interview" or "2nd Interview" instead of the raw stage name
* **Interview date** -- shown next to the stage name if an interview has been scheduled
* **"Unscheduled"** -- shown if the interview has not been booked yet
* **Due date** -- if a due date is set, it appears on cards where the interview has already occurred. Overdue items show the date in red with an "OVERDUE" label.
Interview cards sort by urgency alongside non-interview cards. Overdue interview reviews appear at the very top. Unscheduled interviews appear lower since there is nothing to act on yet.
## Preparing for an Interview
Before the interview takes place, you can open the scoring interface to prepare. Click **Prepare** on the card. Inside, you have access to:
* **Application tab** -- review the applicant's full submission
* **Reviews tab** -- read completed reviews from earlier stages to understand how the applicant has been evaluated so far
* **Script tab** -- view the interview questions configured for this stage. Use these to guide your conversation.
* **Notes tab** -- start writing private notes before, during, or after the interview
You can score criteria and save drafts during preparation, but you cannot submit the review until after the scheduled interview time has passed.
:::tip
Open the scoring interface on a second screen during the interview itself. The **Script** tab gives you your question guide, and the **Notes** tab lets you capture observations in real time.
:::
## Taking Interview Notes
The **Notes** tab provides a private text area tied to the specific stage. Your notes auto-save as you type. Only you can see these notes -- they are not visible to other reviewers or the applicant.
Notes are stored separately from the review itself. Even if you have not started scoring, you can use the notes tab to capture thoughts during or after the interview.
## Scoring After the Interview
Once the interview time has passed, the scoring workflow is identical to a regular application review. Score each criterion, select your recommendation, write your overall notes, and submit.
For second-round interview stages, an additional **Assessments** tab appears in the left column, showing self-assessment results from the applicant's team members. This gives you context on how the team views its own strengths and challenges.
## What Happens After You Submit
After submission:
1. Your review is locked (scores and notes become read-only)
2. You are redirected back to the dashboard
3. The assignment moves to the **completed** section
4. Your recommendation can still be changed if the application remains at this stage
The cohort administrator is notified when enough reviewers have completed their reviews for a stage. They will then review the consensus scores and make a decision on each application.
:::info
Interviews are scheduled through Cal.com by the cohort administrator. You do not book interviews yourself. If your dashboard shows "Unscheduled" for an interview assignment, the booking has not been created yet -- no action is needed from you.
:::

---
title: Locking Reviews & Consensus
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Locking Reviews & Consensus
parentDocument: Admin Guide
outlineId: 4447e500-7cb5-4d0b-8575-d397454af22f
updatedAt: '2026-03-16T10:37:00.182Z'
createdBy: Jennie R.F.
---
Locking is the mechanism that finalizes review scores for a stage and prepares an application for advancement or decision-making. Consensus data gives you a summary of how reviewers scored and what they recommended.
## Prerequisites
You need cohort admin or higher access. Reviews must be completed before locking.
## When to Lock
Lock a stage once the minimum number of reviewers has submitted scores. The minimum is configured per stage (default: 2). You can see review progress in the **Reviews** column on the **Applications** tab -- it shows completed/assigned counts.
The lock icon in the **Pipeline** column indicates readiness:
* **Open lock (dimmed)** -- Not enough reviews completed yet. Hover to see how many more are needed.
* **Open lock (visible)** -- Minimum reviewers met. Ready to lock.
* **Closed lock** -- Stage is locked.
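The three icon states follow a simple readiness rule, which might look like this sketch (state names are illustrative):

```python
def lock_icon(completed_reviews: int, minimum_reviewers: int, locked: bool) -> str:
    """Map review progress to the Pipeline lock icon state (sketch)."""
    if locked:
        return "closed"       # stage is locked
    if completed_reviews >= minimum_reviewers:
        return "open"         # minimum met: ready to lock
    return "open-dimmed"      # not enough reviews completed yet
```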
## How to Lock
### Single Application
Click the lock icon in the **Pipeline** column, or click **Lock** in the **Actions** column. A confirmation modal appears. Click **Confirm** to lock.
### Bulk Locking
Select multiple applications using the checkboxes, then choose **Lock Stages** from the **Bulk Actions** dropdown. Only applications that meet the minimum reviewer threshold will be locked. The dropdown shows how many are eligible.
## What Locking Does
* Prevents reviewers from editing their scores for this stage.
* Records who locked the stage and when.
* Automatically transitions the application status:
* Locking the final stage sets the status to `needs_final_decision`.
* Locking the first-round stage sets the status to `needs_first_decision`.
## Viewing Consensus
Once reviews are completed, you can view consensus data for any application. The **Consensus** column on the **Applications** tab shows recommendation tallies (e.g., "2x Advance, 1x Hold"). Hover over the consensus cell to see a popover with each reviewer's name, recommendation, and score percentage.
For detailed consensus, open an application and view its review consensus page. This shows:
* **Per-reviewer breakdown** -- Each reviewer's total score, percentage, recommendation, and notes. In blind review mode, reviewer identities are hidden from other reviewers (admins always see names).
* **Per-criterion comparison** -- Average, min, max, and spread for each scoring criterion across all reviewers. Criteria with high spread (disagreement) are flagged.
* **Aggregate stats** -- Average score, average percentage, recommendation counts, and whether all reviews are complete and locked.
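The per-criterion comparison could be computed as in the sketch below, assuming "spread" means the difference between the highest and lowest score for a criterion:

```python
def criterion_stats(scores: list[float]) -> dict:
    """Aggregate one criterion's scores across all reviewers.
    Assumption: 'spread' is max minus min."""
    return {
        "avg": sum(scores) / len(scores),
        "min": min(scores),
        "max": max(scores),
        "spread": max(scores) - min(scores),
    }
```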
## Unlocking
If you need to allow a reviewer to make changes after locking, click the closed lock icon on a locked application. A confirmation modal appears. Click **Confirm** to unlock. This reverts the stage to its pre-locked state.
You can also unlock in bulk via the **Bulk Actions** dropdown.
:::warning
Unlocking a stage allows reviewers to edit their scores again. The application status does not automatically revert -- you may need to manage status transitions manually.
:::

---
title: Logging In & Your Profile
collection: Hub User Guide
path: Hub User Guide/Logging In & Your Profile
parentDocument: null
outlineId: 7b8477d7-dd5a-4863-af3e-db9749a9d30a
updatedAt: '2026-03-16T10:36:56.299Z'
createdBy: Jennie R.F.
---
This page covers how to access Cohort-OS for the first time, how to sign in on return visits, and how to manage your profile and security settings.
## First-Time Setup
When an admin adds you to the team, you receive an invite email with a setup link. That link takes you to the **Set Your Password** page, where you choose a password (minimum 6 characters) and confirm it. Submitting the form sets your password and logs you in automatically -- you land on your **My Reviews** dashboard.
:::warning
Setup links expire. If your link no longer works, ask your admin to resend the invite, or use the forgot-password flow described below.
:::
## Signing In
Go to the login page and enter your email and password, then click **Sign in**. After a successful login you are redirected to your **My Reviews** dashboard.
If you enter incorrect credentials, an error message appears at the top of the form.
## Forgot Password
If you cannot remember your password:
1. Click **Forgot your password?** on the login page.
2. Enter your email address and click **Send Reset Link**.
3. Check your inbox for a reset email. Click the link in the email.
4. On the **Set Password** page, enter and confirm your new password (minimum 6 characters), then click **Reset Password**.
5. You are logged in automatically and redirected to your dashboard.
:::info
The reset confirmation always says a link was sent, even if no account exists with that email. This is intentional -- it prevents revealing which email addresses have accounts.
:::
## Your Profile
Open your profile by clicking your name (with your ghostie avatar) in the top-right corner of the navigation bar, or by navigating directly to **Settings**.
### Profile Section
The **Profile** section shows your current information in a compact list:
* **Avatar** -- Your ghostie. Click **\[CHANGE\]** to open the avatar picker (see below).
* **Name** -- Click **\[EDIT\]** to update inline. Press Enter or click **\[SAVE\]** to confirm.
* **Email** -- Click **\[EDIT\]** to update inline. Works the same as name.
* **Role** -- Read-only. Displays your current role (e.g., reviewer, orgadmin).
* **Organization** -- Read-only. Shows your active organization.
* **Member Since** -- Read-only. The month and year your account was created.
### Choosing a Ghostie Avatar
Click **\[CHANGE\]** next to your avatar to open the ghostie picker. You can select from six expressions -- Sweet, Mild, Exasperated, WTF, Disbelieving, and Double Take -- and pick a color from six presets or use the custom color picker. A live preview updates as you make selections. Click **\[SAVE\]** to apply your new avatar, or **\[CANCEL\]** to discard changes.
### Changing Your Password
The **Security** section lets you change your password. If you already have a password set, enter your current password first, then your new password and confirmation. If you do not have a password set yet (e.g., you were added to the system before password setup was available), you can set one directly without entering a current password. Passwords must be at least 6 characters.
### Display Settings
The **Display** section lets you switch between **Light**, **Dark**, and **System** themes.

---
title: Making Decisions
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Making Decisions
parentDocument: Admin Guide
outlineId: 4e21d083-ac98-426f-ad7a-16bd409e0af2
updatedAt: '2026-03-16T10:37:00.639Z'
createdBy: Jennie R.F.
---
The **Decisions** tab provides a ranked view of applicants and tools to record accept, waitlist, or decline decisions. It appears when the cohort reaches the `decide` stage.
## Prerequisites
You need cohort admin or higher access. The cohort must be in `decide`, `onboard`, `active`, or `closed` status.
## The Decision Dashboard
The **Decisions** tab shows:
* **Statistics cards** -- Total applications, accepted count, waitlisted count, and declined count.
* **Ranked Applicants table** -- All applications with review scores, sorted by average score (highest first). Each row shows rank, studio name, contact name, email, average score percentage, review count, and current decision status.
Applications appear here once they have review scores or are already in a terminal status (`accepted`, `waitlist`, `declined`).
## Making a Decision
### Single Application
Click the dropdown menu (three dots) on any row. Choose **Accept**, **Waitlist**, or **Decline**. The decision is recorded immediately and the application status updates.
You can also click **View Reviews** to open the application detail page and review scores before deciding.
### Bulk Decisions
1. Select applications using the checkboxes in the leftmost column. Use the header checkbox to select all.
2. Click **Accept**, **Waitlist**, or **Decline** in the header buttons. The count of selected applications is shown on each button.
3. Decisions are applied to all selected applications. If some applications cannot transition to the chosen status (e.g., already decided), those are skipped and you receive a partial update notification.
## Decision Records
Each decision creates a record in the Decision model with:
* **Decision** -- `accepted`, `waitlist`, or `declined`.
* **Decided at** -- Timestamp of the decision.
* **Rationale** -- Optional text explaining the reasoning.
* **Rationale code** -- Optional code for categorizing the reason.
* **Committee notes** -- Optional notes from committee discussion.
:::info
Decisions enforce valid status transitions. You cannot re-decide a `declined` application to `accepted` without first changing its status. Waitlisted applications can transition to `accepted` or `declined`.
:::
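The transition rule in the note above can be sketched as follows. Only the transitions the documentation mentions are modeled; the real rules may cover more cases.

```python
# Sketch of the decision transition rule described above (illustrative only).
ALLOWED_FROM = {
    "declined": set(),                     # terminal until the status is changed
    "waitlist": {"accepted", "declined"},  # waitlisted apps can move either way
}

def decision_allowed(current_status: str, new_decision: str) -> bool:
    """Check whether a decision is a valid transition from the current status."""
    if current_status in ALLOWED_FROM:
        return new_decision in ALLOWED_FROM[current_status]
    return True  # undecided applications can receive any decision
```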
## Workflow
The typical decision workflow is:
1. Lock all reviews for the relevant stage (see Locking Reviews & Consensus).
2. Review consensus data and scores on the **Decisions** tab.
3. Hold a committee discussion if needed.
4. Record decisions individually or in bulk.
5. Move the cohort to `onboard` status when all decisions are finalized. If all decisions are made through the bulk decision endpoint, the cohort may automatically transition to `onboard`.
## After Decisions
Once decisions are recorded, the **Email Tasks** tab shows which applicants need to be notified. Accepted applicants will appear on the **Onboarding** tab when the cohort reaches `onboard` status.

---
title: Managing Applications
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Managing Applications
parentDocument: Admin Guide
outlineId: 65211144-91e6-4be9-93f7-f2279e789acc
updatedAt: '2026-03-16T10:36:58.550Z'
createdBy: Jennie R.F.
---
The **Applications** tab is the primary workspace for managing submitted applications. It provides search, filtering, grouping, inline actions, and bulk operations.
## Prerequisites
You need cohort admin or higher access. Reviewers are redirected to their interview dashboard instead.
## Viewing Applications
Applications appear in a table with the following columns: **Studio** (applicant name), **Pipeline** (current stage, lock status, interview date, self-assessment status), **Reviews** (completed/assigned count), **Consensus** (reviewer recommendation tallies), **Score** (cumulative percentage), and inline **Actions**.
Click a studio name to open the full application detail page, which shows all answers, review history, and status timeline.
## Searching and Filtering
* **Search** -- Type in the search box to filter by studio name or email.
* **Status** -- Filter by application status (e.g., `new`, `accepted`, `declined`).
* **Stage** -- Filter by current review stage (Application Review, First Round, Second Round).
* **Reviews** -- Filter by review progress: Not Assigned, Not Started, In Progress, or Complete.
* **Province** -- Filter by applicant location. When Ontario is selected, a **GTA only** toggle appears.
* **Hide declined** -- On by default. Toggle to show or hide declined applications.
Active filters appear as removable tags below the filter bar. Click **Clear** to reset all filters. Filter state persists in your browser across sessions.
## Sorting and Grouping
Use the **Sort** dropdown to order applications by name, score, date, pipeline stage, interview date, or consensus (mixed first). Use the **Group by** dropdown to group by status, stage, review progress, or consensus. Groups can be expanded or collapsed individually or all at once.
## Inline Actions
The **Pipeline** column shows lock icons. A ready-to-lock icon appears when the minimum reviewer threshold is met. Click it to open a confirmation modal to lock the stage. Once locked, you can advance or decline the application from the **Actions** column.
## Bulk Operations
Select applications using the checkboxes, then open the **Bulk Actions** dropdown:
* **Change to Declined** -- Decline all selected applications.
* **Reset to New** -- Reset selected applications to `new` status.
* **Lock/Unlock Stages** -- Lock or unlock stages in bulk.
* **Advance to Next Stage** -- Advance locked applications to the next stage.
* **Skip to Stage** -- Jump applications to a specific stage, bypassing intermediate ones.
* **Assign Reviewers** -- Open the reviewer assignment modal for selected applications.
## Orphaned Interviews
If Cal.com bookings could not be matched to an application, an **Orphaned** warning button appears in the filter bar. Click it to open the matching modal and manually associate bookings with applications.
## The Timeline Tab
Org admins and super admins see an additional **Timeline** tab on the cohort page. This shows a per-cohort activity feed including review submissions, status changes, decisions, and other cohort-level events.

---
title: Navigating the App
collection: Hub User Guide
path: Hub User Guide/Navigating the App
parentDocument: null
outlineId: 42cdfea8-3953-4077-9ba3-46ed753326ba
updatedAt: '2026-03-16T10:36:56.737Z'
createdBy: Jennie R.F.
---
This page explains the top navigation bar, what each section contains, and how navigation differs by role.
## The Navigation Bar
The top navigation bar is a black bar across the top of every page. From left to right, it contains:
1. **Organization name** (or org switcher)
2. **My Reviews** link
3. **Cohorts** dropdown (admins only)
4. **Manage** dropdown (admins only)
5. **User area** (your name, avatar, theme toggle, logout)
### Organization Name and Switcher
The far left of the navigation bar shows your current organization name. If you belong to multiple organizations, this becomes a dropdown. Click it to see all your organizations, with a checkmark next to the active one. Select a different organization to switch context -- the page reloads with data from the selected org.
:::info
If you only belong to one organization, the org name displays as static text with no dropdown.
:::
### My Reviews
The **My Reviews** link appears for all roles. It takes you to your reviewer dashboard at `/reviews`, which shows all applications assigned to you for scoring. If you have incomplete reviews, a count appears in parentheses next to the link.
This is also where you land after logging in.
### Cohorts Dropdown
The **Cohorts** dropdown is visible to cohort admins and org admins -- reviewers do not see it. It lists all cohorts in your current organization. Clicking a cohort name takes you to that cohort's management page, where you can access tabs for **Applications**, **Reviewers**, **Assignments**, **Decisions**, **Email Tasks**, **Form**, and more.
If no cohorts exist yet, **Cohorts** appears as a plain link to the cohorts index page.
### Manage Dropdown
The **Manage** dropdown is visible to org admins (and above). It provides access to administrative tools:
* **Team Members** -- Invite users, assign roles, manage access to specific cohorts. Found at `/manage/users`.
* **Configure Stages** -- Set up review stages for your cohorts, including reviewer minimums/maximums and blind review settings. Found at `/admin/stages`.
* **Activity Log** -- A feed of all actions across your organization: logins, review submissions, status changes, decisions, and more. Found at `/manage/activity`.
:::tip
Superadmins see two additional items in the **Manage** dropdown: **Users** (system-wide user management) and **Organizations** (org-level administration). These are system-level tools not covered in this guide.
:::
### User Area
The right side of the navigation bar shows:
* **Your name and ghostie avatar** -- Click to go to **Settings**, where you can edit your profile, change your password, and pick a new avatar.
* **Theme toggle** -- A small button to switch between light and dark mode.
* **\[LOGOUT\]** -- Signs you out and returns you to the login page.
## What Each Role Sees
| Navigation element | Reviewer | Cohort Admin | Org Admin |
|--------------------|----------|--------------|-----------|
| My Reviews | Yes | Yes | Yes |
| Cohorts dropdown | No | Yes | Yes |
| Manage dropdown | No | No | Yes |
| Settings (via user area) | Yes | Yes | Yes |
Cohort admins see the **Cohorts** dropdown but not the **Manage** dropdown. Org admins see both. Reviewers see only **My Reviews** and their user area.

---
title: Onboarding
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Onboarding
parentDocument: Admin Guide
outlineId: 0745a851-aceb-43c1-a22d-92f764e8e709
updatedAt: '2026-03-16T10:45:16.983Z'
createdBy: Jennie R.F.
---
The **Onboarding** tab tracks post-acceptance tasks for accepted participants. It appears when the cohort reaches the `onboard` stage.
## Prerequisites
You need cohort admin or higher access. The cohort must be in `onboard`, `active`, or `closed` status. At least one application must have `accepted` status.
## Viewing Participants
The **Onboarding** tab displays accepted participants in a card layout with two sections:
* **Participants list** (left, two-thirds width) -- Each participant card shows their studio name, contact name, email, and a completion badge showing their progress percentage. A progress bar provides a visual indicator.
* **Onboarding Stats** sidebar (right, one-third width) -- Shows total accepted count, overall progress percentage with a progress bar, and the count of pending tasks across all participants.
If no applications have been accepted yet, the tab shows an empty state message.
## Checklist Items
Each participant has an onboarding checklist. Items are displayed as checkboxes within each participant card. Checking an item records who completed it and when. Completed items show a strikethrough label and the completion date.
To toggle a checklist item, click the checkbox next to it. The change saves automatically.
:::info
Onboarding checklist items are defined per application. The specific items depend on how the onboarding template is configured for the cohort.
:::
## Notes
Each participant card has a notes section at the bottom. Click **Add notes** (or **Edit** if notes exist) to open an inline editor. Type your notes and click **Save**. Notes are free-form text for any onboarding-related information.
## Progress Tracking
Progress is calculated as the percentage of completed checklist items out of total items. The color of the completion badge changes based on progress:
* 100% -- green (complete)
* 75%+ -- blue
* 50%+ -- yellow
* Below 50% -- red
The sidebar shows the overall progress averaged across all participants and the total count of pending (unchecked) tasks.
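The percentage and color bands above can be sketched as a small helper. This is an illustrative sketch, not the actual Cohort-OS source; the function names are hypothetical.

```typescript
// Hypothetical sketch of the progress badge logic described above.
type BadgeColor = "green" | "blue" | "yellow" | "red";

function completionPercent(completed: number, total: number): number {
  // Avoid dividing by zero when a participant has no checklist items.
  return total === 0 ? 0 : Math.round((completed / total) * 100);
}

function badgeColor(percent: number): BadgeColor {
  if (percent === 100) return "green"; // complete
  if (percent >= 75) return "blue";
  if (percent >= 50) return "yellow";
  return "red";
}
```

For example, a participant with 3 of 4 items checked sits at 75% and gets the blue badge.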
:::tip
Use the notes field to record any special requirements or follow-up items for individual participants. This keeps onboarding context in one place.
:::
---
title: Recommendations
collection: Hub User Guide
path: Hub User Guide/Reviewer Guide/Recommendations
parentDocument: Reviewer Guide
outlineId: 612693f3-7a23-4d13-8a58-0850d14c0594
updatedAt: '2026-03-16T10:37:06.235Z'
createdBy: Jennie R.F.
---
After scoring all criteria for an application, you select a recommendation. This is your overall judgment on what should happen next with the applicant.
## The Four Options
You choose one recommendation from a dropdown on the **Final Recommendation** screen:
* **Advance to Next Stage** -- you believe this applicant should move forward in the process. Use this when the application or interview performance meets or exceeds the bar for the current stage.
* **Hold for Further Review** -- you are not ready to recommend advancing or declining. Use this when you see potential but have concerns that need committee discussion, or when the applicant falls in a borderline zone where additional input from other reviewers would help.
* **Decline Application** -- you believe this applicant should not continue. Use this when the application clearly does not meet the criteria for the cohort, or when significant concerns emerged during the review.
* **Undecided** -- you have not formed a recommendation yet. This is the default state. You must change it to one of the other three options before you can submit your review.
:::warning
You cannot submit your review while the recommendation is set to **Undecided**. All criteria must be scored and a recommendation selected before the **Submit Review** button becomes active.
:::
## The Recommendation Screen
When you click **Proceed to Recommendation** after scoring, you see:
* **Your total score** and percentage in the top right
* **Recommendation dropdown** -- select your choice here
* **Score summary** -- a breakdown of each criterion with your score
* **Overall notes** -- a free-text area for your overall assessment of the applicant
The score summary is read-only at this point (go back to scoring to change individual scores). The recommendation and overall notes can be edited freely before submission.
## Changing Your Recommendation After Submission
Once you submit, your scores and notes are locked. However, if the review is for the application's current stage, you can still change your recommendation. On the read-only review screen, click the **Change** button next to your current recommendation. A modal asks you to:
1. Select a new recommendation
2. Provide a reason for the change
The change is recorded with a timestamp and your reason. A full history of recommendation changes is visible below the current recommendation.
:::info
Recommendation changes are only available while the application is still at the stage you reviewed. Once the application advances to a later stage, your earlier recommendation becomes fully locked.
:::
## How Recommendations Factor Into Decisions
Your recommendation is one of potentially several reviewer inputs. Cohort administrators see all reviewer recommendations alongside consensus scores when making decisions on the **Decisions** tab. A pattern of "advance" recommendations supports moving an applicant forward, while mixed recommendations (some "advance," some "hold") typically trigger committee discussion.
The final decision -- accept, waitlist, or decline -- is made by the cohort admin, not determined automatically by reviewer recommendations.
---
title: Reference
collection: Hub User Guide
path: Hub User Guide/Reference
parentDocument: null
outlineId: 2d3cf156-9198-4967-863d-3bc3d2897610
updatedAt: '2026-03-16T10:37:02.769Z'
createdBy: Jennie R.F.
---
---
title: Reviewer Guide
collection: Hub User Guide
path: Hub User Guide/Reviewer Guide
parentDocument: null
outlineId: ab0823e9-9a8f-49cd-9bc8-6e7ec9b219ae
updatedAt: '2026-03-16T10:37:04.562Z'
createdBy: Jennie R.F.
---
---
title: Rubric & Scoring Details
collection: Hub User Guide
path: Hub User Guide/Reference/Rubric & Scoring Details
parentDocument: Reference
outlineId: 2b76b72d-6a6b-4019-8527-629958f1c411
updatedAt: '2026-03-16T10:37:03.752Z'
createdBy: Jennie R.F.
---
This page covers how scoring works end to end in Cohort-OS -- from how rubrics are structured, through per-criterion scoring by individual reviewers, to how consensus scores are calculated across the review panel.
## Rubric Structure
Rubrics are defined in Flywheel (the external assessment engine) and linked to each review stage. A rubric contains:
* **Criteria** -- The individual dimensions being evaluated (e.g., "Mission Alignment", "Governance Readiness"). Each criterion has a name, description, and maximum point value.
* **Thresholds** (or levels) -- Descriptive scoring bands within each criterion. Each threshold has a title, a description of what that performance level looks like, and a point range. Reviewers select the threshold that best matches the applicant.
* **Max points** -- Each criterion defines its own maximum score (commonly 5 points). The rubric's total possible score is the sum of all criteria max points.
Each review stage in a cohort is linked to a specific Flywheel rubric via the stage's configuration. Different stages can use different rubrics.
## How Reviewers Score
Reviewers use the **guided scoring interface** to evaluate applications criterion by criterion. For each criterion:
1. Read the criterion description and the applicant's relevant answers (shown side by side).
2. Select a threshold/level that matches the applicant's response. Selecting a threshold automatically assigns the corresponding point value.
3. Optionally add notes specific to that criterion.
4. All criteria must be scored before proceeding to the recommendation step.
After scoring all criteria, the reviewer sees their total score (sum of all criterion scores) and percentage (total divided by maximum possible), then selects a recommendation and writes overall notes.
:::info
If a rubric does not have thresholds configured in Flywheel, reviewers see simple numeric score buttons (1 through the max points) instead of descriptive threshold options.
:::
## Individual Review Scores
Each completed review stores:
* **Per-criterion scores** -- The point value and max value for each criterion, plus optional notes.
* **Total score** -- Sum of all criterion scores.
* **Percentage** -- Total score divided by total max points, expressed as a percentage.
* **Recommendation** -- One of `advance`, `reject`, `hold`, or `undecided`.
* **Overall notes** -- Free-text assessment from the reviewer.
A review starts as `draft` (editable, can be saved and returned to) and becomes `completed` when submitted. Submitting locks the scores and notes. The recommendation can still be changed after submission with a recorded reason.
## Consensus Calculation
When multiple reviewers have completed their reviews for the same application and stage, the consensus view aggregates their scores. The consensus endpoint calculates:
* **Average score** -- Mean of all reviewers' total scores.
* **Average percentage** -- Mean of all reviewers' percentage scores.
* **Per-criterion averages** -- For each criterion, the mean score across all reviewers.
* **Score spread** -- The difference between the highest and lowest score for each criterion. A criterion is **flagged** when the spread is 2 or more points, or 40% or more of the max points. Flagged criteria indicate significant disagreement between reviewers.
* **Recommendation tally** -- Count of advance, hold, reject, and undecided recommendations.
:::warning
Only completed reviews with `review` permission are included in consensus calculations. View-only assignments and draft reviews are excluded.
:::
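The per-criterion aggregation can be sketched as follows. This is an illustrative reconstruction of the rules stated above (mean, spread, and the 2-point / 40% disagreement flag), not the actual consensus endpoint; all names are assumptions.

```typescript
// Sketch of the per-criterion consensus rules described above.
// Names are illustrative; this is not the real Cohort-OS endpoint code.
interface CriterionScore {
  score: number; // one reviewer's score for this criterion
  max: number;   // the criterion's maximum points
}

function criterionConsensus(scores: CriterionScore[]) {
  const values = scores.map((s) => s.score);
  const max = scores[0].max;
  const average = values.reduce((a, b) => a + b, 0) / values.length;
  const spread = Math.max(...values) - Math.min(...values);
  // Flagged when reviewers disagree by 2+ points or 40%+ of max points.
  const flagged = spread >= 2 || spread >= 0.4 * max;
  return { average, spread, flagged };
}
```

Two reviewers scoring 5 and 3 on a 5-point criterion produce a spread of 2, so that criterion is flagged; scores of 4 and 3 are not.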
## Grade Bands
Cohorts can define grade bands that map percentage ranges to letter grades:
| Band | Range | Label |
|------|-------|-------|
| A | 90-100% | Exceptional |
| B | 80-89% | Strong |
| C | 70-79% | Good |
| D | 60-69% | Acceptable |
| F | 0-59% | Needs Work |
Grade bands are configured per cohort in the Flywheel integration settings. They provide a quick way to categorize application strength during committee discussions.
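Assuming the default bands from the table above, the lookup could be sketched like this. Band values are configurable per cohort, so treat these as example data rather than fixed system behavior.

```typescript
// Example grade-band lookup mirroring the table above.
// Actual bands are configured per cohort in the Flywheel integration settings.
const gradeBands = [
  { band: "A", min: 90, label: "Exceptional" },
  { band: "B", min: 80, label: "Strong" },
  { band: "C", min: 70, label: "Good" },
  { band: "D", min: 60, label: "Acceptable" },
  { band: "F", min: 0, label: "Needs Work" },
];

function gradeFor(percentage: number) {
  // Bands are ordered highest-first, so the first match wins.
  return gradeBands.find((b) => percentage >= b.min)!;
}
```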
## Blind Review
When **blind review** is enabled on a stage (the default), reviewer identities are hidden from other reviewers viewing the consensus. Reviewers appear as "Reviewer 1", "Reviewer 2", etc. Admins always see full reviewer names and emails regardless of the blind review setting.
Blind review affects the consensus view only -- it does not change what reviewers see while scoring. Each reviewer always scores independently without seeing other reviewers' scores until their own review is submitted.
---
title: Scoring an Application
collection: Hub User Guide
path: Hub User Guide/Reviewer Guide/Scoring an Application
parentDocument: Reviewer Guide
outlineId: edea1723-8a83-4298-87d1-e47e62068c3b
updatedAt: '2026-03-16T10:42:44.293Z'
createdBy: Jennie R.F.
---
The guided scoring interface walks you through each rubric criterion one at a time, then asks for your overall recommendation. This is where you do the actual work of evaluating an application.
## Opening the Scoring Interface
Click any active card on your **My Reviews** dashboard (or its **Score Now** / **Continue** button). This opens the guided scoring page at `/review/[cohortSlug]/guided`.
The page has a two-column layout:
* **Left column** -- background information about the application
* **Right column** -- the scoring form
## Left Column: Background Tabs
The left column has several tabs depending on the stage:
* **Application** -- the applicant's submitted answers, attachments, and links. Eligibility questions are filtered out. File attachments can be downloaded; URLs open in a new tab.
* **Reviews** -- completed reviews from earlier stages (hidden on the first stage). If blind review is enabled, reviewer identities are hidden.
* **Assessments** -- self-assessment results from team members (second-round stages only).
* **Script** -- interview questions configured for this stage, rendered from markdown. Use this to guide your interview conversation.
* **Notes** -- a private text area for your own interview notes. These auto-save and are tied to the specific stage. Only you can see them.
## Right Column: Scoring
By default, you score one criterion at a time in **focused mode**. A **Show all** toggle in the upper right switches to a view where every criterion is visible at once.
In focused mode, each criterion shows:
1. **Criterion number and name** (e.g., "Criterion 1/5 -- Mission Alignment")
2. **Description** of what to evaluate
3. **Score options** -- either threshold cards with descriptions (click to select), named levels, or numeric buttons depending on how the rubric is configured
4. **Notes field** -- optional per-criterion notes
Use the **Next Criterion** and **Previous** buttons to navigate. You can also use the **Jump to...** dropdown to skip to any criterion. A criterion must be scored before you can advance to the next one.
### Keyboard Shortcuts
In focused mode, you can score without touching your mouse:
* **Up/Down arrows** -- navigate between threshold or level options
* **Enter** -- select the focused option and advance to the next criterion
* **Left/Right arrows** -- move to the previous or next criterion
* **Number keys (1-9)** -- set a numeric score directly (when simple score buttons are shown)
## Saving and Submitting
Your work auto-saves after you make a change. A "Saving..." / "Saved" indicator appears near the top right. You can also click **Save Draft** manually at any time.
Once all criteria are scored, click **Proceed to Recommendation** to move to the final screen. This shows your score summary, lets you select a recommendation, and provides space for overall notes.
Click **Submit Review** when you are ready. A confirmation dialog explains that submission locks your scores and notes. After confirming with **Submit & Lock**, your review is complete and you are redirected to the dashboard.
:::warning
After submission, your scores and notes are locked. You can still change your recommendation if the review is for the application's current stage, but scores cannot be edited.
:::
:::tip
For interview stages, the **Submit Review** button is disabled until after the interview has occurred. You can still prepare by reviewing the application and scoring criteria beforehand.
:::
---
title: Self-Assessments
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Self-Assessments
parentDocument: Admin Guide
outlineId: ec9b6894-fc9b-4e12-b70c-f067591755b0
updatedAt: '2026-03-16T10:45:44.998Z'
createdBy: Jennie R.F.
---
Self-assessments are confidential surveys sent to individual team members of applicant studios. They are typically used during later interview stages to gather independent perspectives from each team member.
## Prerequisites
You need cohort admin or higher access. The application must be at a stage where self-assessments are relevant (typically `waiting_second_round` or `second_round_scheduled`).
## How Self-Assessments Work
Self-assessments use token-based access. Each team member receives a unique link that does not require a login. The link takes them to a form at `/c/{cohort-slug}/assessment/{token}` where they complete the survey independently.
Responses are confidential -- they are not shared with the applicant's teammates. The form includes sections like About You, Capacity, Your Role, Co-op Readiness, and Looking Ahead. Questions may include conditional logic where follow-up questions appear based on previous answers.
## Sending Self-Assessments
Self-assessment invitations are managed from the application detail page. An admin initiates the process by entering the names and emails of each team member. Each member receives an email with their unique assessment link.
## Tracking Completion
The **Applications** tab shows self-assessment status in the **Pipeline** column for applications in second-round stages:
* **"Self-assessment not sent"** (highlighted) -- No assessments have been initiated for this application.
* **"Self-assessment: X/Y"** -- X of Y team members have completed their assessment.
The `SelfAssessmentStatus` component on the application detail page provides a detailed view:
* A progress bar showing completion percentage.
* Each member listed with their status: **Completed**, **Pending**, or **Cancelled**.
* Completed members can be expanded to view their responses.
## Resending Emails
If a team member has not received or has lost their assessment link, you can resend the email from the application detail page. The resend updates the `lastResentAt` timestamp for that member.
## Cancelling a Member
If a team member should no longer complete the assessment (e.g., they left the studio), you can cancel their invitation. Cancelled members are excluded from the completion count and their status shows as **Cancelled**.
:::info
Self-assessment responses are stored directly on the application document under `selfAssessment.members[].responses`. Each member's token, sent date, completion date, and last access time are tracked for audit purposes.
:::
---
title: Setting Up a Cohort
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Setting Up a Cohort
parentDocument: Admin Guide
outlineId: 8411087d-c82f-41cb-bb1a-8c48df950248
updatedAt: '2026-03-16T10:36:57.703Z'
createdBy: Jennie R.F.
---
This page covers how to configure an existing cohort. Cohort creation is handled by system administrators -- once a cohort exists, you configure it from the cohort page.
## Prerequisites
You need cohort admin, org admin, or super admin access.
## Cohort Lifecycle
Every cohort moves through six stages. The current stage controls which tabs are visible and what actions are available.
* `draft` -- Setup mode. Reviewers do not see this cohort. Configure settings, stages, and the application form here.
* `review` -- Reviewers can score assigned applications. The **Decisions** tab is not yet visible.
* `decide` -- Review is considered complete. The **Decisions** tab appears so you can accept, waitlist, or decline applicants.
* `onboard` -- Decisions are finalized. The **Onboarding** tab appears for accepted participants.
* `active` -- The cohort program is running. All admin tabs remain accessible.
* `closed` -- The cohort is closed. Applications stop being accepted automatically. All data is preserved in read-only mode.
You can move forward, skip stages, or move backward. No data is lost when moving between stages -- earlier-stage tabs and features are simply hidden or shown.
## Changing the Cohort Stage
1. Open the cohort page and click the **Settings** button (gear icon) in the top right.
2. In the **Cohort Stage** section on the left, click the target stage in the pipeline.
3. A confirmation modal shows what will happen. Review the implications and click **Confirm**.
:::warning
Moving to `closed` automatically disables application intake. Moving backward hides tabs associated with later stages but preserves all data.
:::
## Configuring Settings
Click **Settings** to open the full-screen settings modal. From here you can:
* **Cohort Name** -- Edit the display name.
* **Cohort Slug** -- Edit the URL slug. This is locked once the cohort leaves `draft` status.
* **Application Opens / Closes** -- Set the date range for the application window.
* **Notification Emails** -- View which org admins receive new-application alerts. Manage recipients from **Team Settings**.
* **Survey** -- Manage the survey version and scoring profile used for application questions.
* **Organizations** -- Select which organizations can access this cohort.
Click **Save Configuration** when done.
## Accepting Applications
In the settings modal under **Application Intake**, toggle **Accepting Applications** on or off. This controls whether the public apply form accepts new submissions. The toggle is disabled when the cohort is in `closed` status.
## Archiving a Cohort
In the settings modal under **Danger Zone**, click **Archive Cohort**. Archiving hides the cohort from the main list and stops accepting applications. All data is preserved. You can unarchive a cohort from the archived banner that appears at the top of its page.
:::info
Archiving is reversible. Click **Unarchive** on the archived banner to restore the cohort.
:::
---
title: Team & Settings
collection: Hub User Guide
path: Hub User Guide/Admin Guide/Team & Settings
parentDocument: Admin Guide
outlineId: 3ffd4182-c58f-49e8-b90e-e9d4a95aeacb
updatedAt: '2026-03-16T10:45:14.839Z'
createdBy: Jennie R.F.
---
Team management and the activity feed are accessed from the **Manage** dropdown in the top navigation. These pages let you invite users, assign roles, and monitor what is happening across your organization.
## Prerequisites
You need org admin or super admin access.
## Team Members
Navigate to **Manage** > **Team Members** to view and manage your organization's users. The page shows a searchable table with each user's name, email, role, and status.
### User Statuses
* **Active** -- User has accepted their invite and set up their password.
* **Invite pending** -- User has been created but has not set up their password yet.
* **Inactive** -- User has been deactivated and cannot log in.
### Inviting a New User
1. Click **Add Team Member**.
2. Choose **Create New** (default) or toggle to **Add Existing** if the user already exists in another organization.
3. For new users, enter their name, email, and role. Click **Create User**.
4. The user receives an email with a link to set up their password.
### Roles
* **Reviewer** -- Can view and score applications they are explicitly assigned to.
* **Cohort Admin** -- Can manage cohort settings and make decisions.
* **Org Admin** -- Can manage organization settings, team members, and all cohorts.
* **Super Admin** -- Full system access (only assignable by super admins).
### Managing Users
Click the dropdown menu (three dots) on any user row for options:
* **View Profile** -- Open the user's profile page.
* **Edit** -- Change the user's role.
* **Re-invite User** -- Send a new password setup link to the user's email.
* **Deactivate/Activate** -- Toggle the user's active status. Deactivated users cannot log in.
* **Remove from Organization** -- Permanently remove the user from your organization.
:::warning
Removing a user from the organization is permanent. Their completed reviews are preserved, but they lose all access.
:::
## Activity Log
Navigate to **Manage** > **Activity Log** to view the organization-wide activity feed. Events are grouped by day with the most recent events first.
Each event shows:
* **Category badge** -- Color-coded by type (auth, user management, cohort management, reviews, decisions).
* **Event description** -- What happened.
* **User and timestamp** -- Who performed the action and when.
* **Details** -- Additional context such as changed fields, status transitions, or avatar changes.
### Filtering
Use the filter dropdown in the header to narrow events by category:
* All Actions
* Auth (Login/Logout)
* User Management
* Cohort Management
* Reviews
* Decisions
Click **Load more** at the bottom to see older events. The feed loads 50 events at a time.
:::info
The activity log records over 50 distinct action types including logins, user creation, cohort updates, review submissions, decision pushes, stage locks, and more. All actions include the user who performed them and relevant details.
:::
---
title: Understanding Rubrics
collection: Hub User Guide
path: Hub User Guide/Reviewer Guide/Understanding Rubrics
parentDocument: Reviewer Guide
outlineId: 9aa4493a-4596-4e7a-a3fb-0f276656528d
updatedAt: '2026-03-16T10:37:05.843Z'
createdBy: Jennie R.F.
---
Rubrics define the criteria you score applications against. They are created and managed in Flywheel, the external assessment engine, and loaded automatically when you open the scoring interface.
## What a Rubric Contains
Each rubric has a title and a list of **criteria**. A criterion is a single dimension you evaluate -- for example, "Mission Alignment" or "Team Dynamics." Every criterion includes:
* **Name and title** -- what the criterion is called
* **Description** -- what you should be evaluating
* **Maximum points** -- the highest score possible for this criterion
* **Weight** -- how heavily this criterion counts in the overall score (used in consensus calculations)
## Score Levels and Thresholds
Depending on how the rubric is configured in Flywheel, scoring guidance appears in one of three formats:
### Thresholds
Each criterion has named scoring bands with a title, description, and score range. For example:
* **Exceptional** (4-5): "Strong and nuanced understanding of their specific challenges..."
* **Competent** (2-3): "Basic recognition of some of their main challenges..."
* **Developing** (1): "Unaware of or defensive about their challenges..."
When you click a threshold, the score is set to the midpoint of that range.
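As a sketch, the midpoint rule might look like the following. Whether the interface rounds fractional midpoints is not specified here, so this version returns the raw midpoint; the types are illustrative, not Cohort-OS internals.

```typescript
// Illustrative sketch of threshold-based scoring: clicking a threshold
// assigns the midpoint of its score range. Rounding behavior is an
// assumption left open here -- the raw midpoint is returned.
interface Threshold {
  title: string;
  min: number;
  max: number;
}

function thresholdScore(t: Threshold): number {
  return (t.min + t.max) / 2;
}
```

Using the example bands above, selecting **Exceptional** (4-5) yields 4.5, while **Developing** (1) yields 1.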
### Named Levels
Each level has a fixed score value, a name, and a description. You click the level that best matches the applicant. The score is assigned directly from the level definition.
### Numeric Scores
When no thresholds or levels are configured, you see simple numbered buttons from 1 to the maximum. You select the number that best reflects your assessment.
## How Criteria Map to Application Questions
Each review stage has a rubric assigned through its Flywheel configuration. The scoring profile for a cohort defines a **map** that connects rubric criterion keys to application question keys. For example, a criterion called `gov_readiness` might map to the question `Q.GOV_PLAN`, meaning the applicant's answer to that question is what you should consider when scoring that criterion.
In practice, you see this mapping reflected in the scoring interface: the left column shows the applicant's answers, and the right column shows the corresponding criterion to score. The interface is designed so you can read the relevant answer and score it in the same view.
## Per-Stage Rubrics
Different review stages can use different rubrics. An application review stage might focus on written application quality, while an interview stage might use criteria like "Openness to Feedback" or "Cohort Fit." The rubric is loaded automatically based on the stage configuration -- you do not need to select one.
:::info
Rubrics are managed by administrators in Flywheel. If a rubric is missing or misconfigured, you will see a "Rubric Not Available" message when you try to open the scoring interface. Contact your cohort administrator if this happens.
:::
## Score Calculation
Your total score is the sum of all individual criterion scores. The percentage is calculated as total score divided by the maximum possible total. This score, along with your recommendation, feeds into the consensus view that administrators use when making decisions.
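As a worked example (the scores here are made up for illustration):

```typescript
// Example of the total/percentage calculation described above:
// three criteria, each out of 5 points.
const criterionScores = [4, 3, 5];
const total = criterionScores.reduce((a, b) => a + b, 0); // sum of criterion scores
const maxTotal = 5 * criterionScores.length;              // maximum possible total
const percentage = Math.round((total / maxTotal) * 100);  // total as a percentage
```

Here the total is 12 out of a possible 15, giving 80%.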
---
title: Your Dashboard
collection: Hub User Guide
path: Hub User Guide/Reviewer Guide/Your Dashboard
parentDocument: Reviewer Guide
outlineId: 976f1747-e097-4cea-a3e4-ecf804ff6b99
updatedAt: '2026-03-16T10:43:49.075Z'
createdBy: Jennie R.F.
---
The **My Reviews** dashboard is your home base. It shows every application assigned to you, organized by urgency so the most important items surface first.
## Getting There
Click **My Reviews** in the top navigation bar. This takes you to `/reviews`. A badge on the nav link shows how many assignments need your attention.
## Card Grid Layout
Active assignments appear as a card grid -- one, two, or three columns depending on your screen width. If you have assignments across multiple cohorts, cards are grouped under cohort headers. When you only have one cohort, the header is suppressed.
Each card shows:
* **Studio name** -- the applicant's studio or team name
* **Progress bar** -- small block indicators showing how far through the review stages this application has progressed
* **Stage name** -- which stage this assignment belongs to (e.g., "1st Interview", "2nd Interview"), plus the interview date if applicable
* **Status text** -- contextual information like "In progress" for drafts, "View only" for read-only assignments, or pipeline status
* **Action button** -- the primary action available to you
## How Cards Are Sorted
Cards are sorted by urgency, not alphabetically. The order is:
1. **Overdue** -- interview stages past their due date (shown in red)
2. **In progress** -- reviews you have started but not submitted (drafts)
3. **Ready to score** -- non-interview stages waiting for your input
4. **Post-interview ready** -- interview stages where the interview has already occurred
5. **Scheduled** -- upcoming interviews (future date)
6. **Awaiting scheduling** -- interview stages with no date set yet
7. **View only** -- assignments where you have read-only access
8. **Pipeline** -- applications not yet at this stage (shown dimmed)
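One way to express this ordering is a precedence list used as a sort key. This is an illustrative sketch mirroring the list above; the category names are hypothetical, not the dashboard's actual internals.

```typescript
// Illustrative urgency precedence for dashboard cards, mirroring the
// numbered list above. Category names are hypothetical.
const urgencyOrder = [
  "overdue",
  "in_progress",
  "ready_to_score",
  "post_interview_ready",
  "scheduled",
  "awaiting_scheduling",
  "view_only",
  "pipeline",
] as const;

type Urgency = (typeof urgencyOrder)[number];

function sortCards<T extends { urgency: Urgency }>(cards: T[]): T[] {
  // Lower index in urgencyOrder means more urgent, so it sorts first.
  return [...cards].sort(
    (x, y) => urgencyOrder.indexOf(x.urgency) - urgencyOrder.indexOf(y.urgency)
  );
}
```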
## Action Buttons
The button on each card reflects the most relevant action:
* **Score Now** -- the application is ready for you to score
* **Continue** -- you have a draft in progress
* **Prepare** -- an interview stage where you can review materials before the interview
* **View** -- view-only access to the application
Clicking the card or its action button opens the guided scoring interface.
## Search
When you have 10 or more active assignments, a search field appears at the top. Type a studio name to filter the visible cards.
## Completed Reviews
Below the active grid, a **completed** section lists reviews you have already submitted. Click the toggle to expand or collapse it. If you have no active assignments, completed reviews display automatically.
Each completed row shows the studio name, stage, your score percentage, and the date you submitted. Click **View** to revisit a completed review in read-only mode.
:::info
The count shown next to "My Reviews" in the navigation bar reflects actionable assignments only -- it excludes view-only and pipeline items.
:::
# Overview
Cohort-OS is a web application for running cohort programs end-to-end. It collects applications through a public form, coordinates peer review with rubric-based scoring, records accept/waitlist/decline decisions, and tracks onboarding tasks.
## The Four Phases
Every cohort moves through four lifecycle phases, reflected by its status:
1. **Apply** -- The cohort accepts applications through a public form. Applicants fill out questions from a shared question pool, and submissions appear in the **Applications** tab. The cohort status during this phase is `draft` or `active` with application acceptance turned on.
2. **Review** -- Reviewers score applications against a rubric using a guided scoring interface. Admins assign reviewers to specific applications, configure review stages (first round, second round), and monitor progress. The cohort status is `review`.
3. **Decide** -- Admins lock completed reviews, view consensus scores and percentiles, and record decisions (accept, waitlist, or decline) with rationale codes and committee notes. The cohort status is `decide`.
4. **Onboard** -- Accepted applicants move into onboarding, where admins track checklist items and completion. The cohort status is `onboard`.
After onboarding wraps up, a cohort can be moved to `active` (program is running) or `closed` (program is complete). Cohorts can also be archived at any point.
## Roles
Your role determines what you see and what you can do.
- **Cohort admin** (includes org admins) -- Full access to cohort management: applications, reviewer assignments, review stages, decisions, email tasks, onboarding, and team settings. Cohort admins see the **Cohorts** dropdown and the **Manage** dropdown in the top navigation bar.
- **Reviewer** -- Access to assigned applications for scoring and interviews. Reviewers see **My Reviews** in the top navigation bar, which links to a dashboard of their current assignments across all cohorts.
Both roles can access **Settings** to manage their profile and password.
## Key Terms
- **Stage** -- A review round within a cohort (e.g., first round, second round). Each stage has its own reviewer assignments, scoring, and settings.
- **Rubric** -- A set of scoring criteria provided by Flywheel, the external assessment engine. Reviewers score each criterion on a defined scale.
- **Consensus** -- The aggregated view of all reviewer scores for an application in a given stage, including averages and percentiles.
- **Ghostie** -- Your avatar in Cohort-OS. You pick an expression and a color, and it renders as a small ghost illustration next to your name throughout the app.
- **Flywheel** -- The external assessment engine that provides rubrics, scoring logic, surveys, and reporting.

# Logging In & Your Profile
This page covers how to access Cohort-OS for the first time, how to sign in on return visits, and how to manage your profile and security settings.
## First-Time Setup
When an admin adds you to the team, you receive an invite email with a setup link. That link takes you to the **Set Your Password** page, where you choose a password (minimum 6 characters) and confirm it. Submitting the form sets your password and logs you in automatically -- you land on your **My Reviews** dashboard.
:::warning
Setup links expire. If your link no longer works, ask your admin to resend the invite, or use the forgot-password flow described below.
:::
## Signing In
Go to the login page and enter your email and password, then click **Sign in**. After a successful login you are redirected to your **My Reviews** dashboard.
If you enter incorrect credentials, an error message appears at the top of the form.
## Forgot Password
If you cannot remember your password:
1. Click **Forgot your password?** on the login page.
2. Enter your email address and click **Send Reset Link**.
3. Check your inbox for a reset email. Click the link in the email.
4. On the **Set Password** page, enter and confirm your new password (minimum 6 characters), then click **Reset Password**.
5. You are logged in automatically and redirected to your dashboard.
:::info
The reset confirmation always says a link was sent, even if no account exists with that email. This is intentional -- it prevents revealing which email addresses have accounts.
:::
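This anti-enumeration pattern can be sketched roughly as follows (a minimal illustration with hypothetical function names, not the actual implementation):

```python
# Sketch of a uniform password-reset response (names are illustrative).
def request_password_reset(email, accounts, send_reset_email):
    """Return the same confirmation whether or not the account exists."""
    user = accounts.get(email)
    if user is not None:
        send_reset_email(user)  # only send when the account is real
    # Identical message either way, so the response reveals nothing
    # about which email addresses have accounts.
    return "If an account exists for that address, a reset link has been sent."
```

The key point is that the branch affects only the side effect (the email), never the response the requester sees.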
## Your Profile
Open your profile by clicking your name (with your ghostie avatar) in the top-right corner of the navigation bar, or by navigating directly to **Settings**.
### Profile Section
The **Profile** section shows your current information in a compact list:
- **Avatar** -- Your ghostie. Click **[CHANGE]** to open the avatar picker (see below).
- **Name** -- Click **[EDIT]** to update inline. Press Enter or click **[SAVE]** to confirm.
- **Email** -- Click **[EDIT]** to update inline. Works the same as name.
- **Role** -- Read-only. Displays your current role (e.g., reviewer, orgadmin).
- **Organization** -- Read-only. Shows your active organization.
- **Member Since** -- Read-only. The month and year your account was created.
### Choosing a Ghostie Avatar
Click **[CHANGE]** next to your avatar to open the ghostie picker. You can select from six expressions -- Sweet, Mild, Exasperated, WTF, Disbelieving, and Double Take -- and pick a color from six presets or use the custom color picker. A live preview updates as you make selections. Click **[SAVE]** to apply your new avatar, or **[CANCEL]** to discard changes.
### Changing Your Password
The **Security** section lets you change your password. If you already have a password set, enter your current password first, then your new password and confirmation. If you do not have a password set yet (e.g., you were added to the system before password setup was available), you can set one directly without entering a current password. Passwords must be at least 6 characters.
### Display Settings
The **Display** section lets you switch between **Light**, **Dark**, and **System** themes.

# Navigating the App
This page explains the top navigation bar, what each section contains, and how navigation differs by role.
## The Navigation Bar
The top navigation bar is a black bar across the top of every page. From left to right, it contains:
1. **Organization name** (or org switcher)
2. **My Reviews** link
3. **Cohorts** dropdown (admins only)
4. **Manage** dropdown (admins only)
5. **User area** (your name, avatar, theme toggle, logout)
### Organization Name and Switcher
The far-left of the navigation bar shows your current organization name. If you belong to multiple organizations, this becomes a dropdown. Click it to see all your organizations, with a checkmark next to the active one. Select a different organization to switch context -- the page reloads with data from the selected org.
:::info
If you only belong to one organization, the org name displays as static text with no dropdown.
:::
### My Reviews
The **My Reviews** link appears for all roles. It takes you to your reviewer dashboard at `/reviews`, which shows all applications assigned to you for scoring. If you have incomplete reviews, a count appears in parentheses next to the link.
This is also where you land after logging in.
### Cohorts Dropdown
The **Cohorts** dropdown is visible to cohort admins and org admins -- reviewers do not see it. It lists all cohorts in your current organization. Clicking a cohort name takes you to that cohort's management page, where you can access tabs for **Applications**, **Reviewers**, **Assignments**, **Decisions**, **Email Tasks**, **Form**, and more.
If no cohorts exist yet, **Cohorts** appears as a plain link to the cohorts index page.
### Manage Dropdown
The **Manage** dropdown is visible to org admins (and above). It provides access to administrative tools:
- **Team Members** -- Invite users, assign roles, manage access to specific cohorts. Found at `/manage/users`.
- **Configure Stages** -- Set up review stages for your cohorts, including reviewer minimums/maximums and blind review settings. Found at `/admin/stages`.
- **Activity Log** -- A feed of all actions across your organization: logins, review submissions, status changes, decisions, and more. Found at `/manage/activity`.
:::tip
Superadmins see two additional items in the **Manage** dropdown: **Users** (system-wide user management) and **Organizations** (org-level administration). These are system-level tools not covered in this guide.
:::
### User Area
The right side of the navigation bar shows:
- **Your name and ghostie avatar** -- Click to go to **Settings**, where you can edit your profile, change your password, and pick a new avatar.
- **Theme toggle** -- A small button to switch between light and dark mode.
- **[LOGOUT]** -- Signs you out and returns you to the login page.
## What Each Role Sees
| Navigation element | Reviewer | Cohort Admin | Org Admin |
|---|---|---|---|
| My Reviews | Yes | Yes | Yes |
| Cohorts dropdown | No | Yes | Yes |
| Manage dropdown | No | No | Yes |
| Settings (via user area) | Yes | Yes | Yes |
Cohort admins see the **Cohorts** dropdown but not the **Manage** dropdown. Org admins see both. Reviewers see only **My Reviews** and their user area.

# Setting Up a Cohort
This page covers how to configure an existing cohort. Cohort creation is handled by system administrators -- once a cohort exists, you configure it from the cohort page.
## Prerequisites
You need cohort admin, org admin, or super admin access.
## Cohort Lifecycle
Every cohort moves through six stages. The current stage controls which tabs are visible and what actions are available.
- `draft` -- Setup mode. Reviewers do not see this cohort. Configure settings, stages, and the application form here.
- `review` -- Reviewers can score assigned applications. The **Decisions** tab is not yet visible.
- `decide` -- Review is considered complete. The **Decisions** tab appears so you can accept, waitlist, or decline applicants.
- `onboard` -- Decisions are finalized. The **Onboarding** tab appears for accepted participants.
- `active` -- The cohort program is running. All admin tabs remain accessible.
- `closed` -- The cohort is closed. Application intake is disabled automatically, and all data is preserved in read-only mode.
You can move forward, skip stages, or move backward. No data is lost when moving between stages -- earlier-stage tabs and features are simply hidden or shown.
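The stage-dependent visibility described above can be sketched as a simple lookup (tab names and unlock points follow this page; the code itself is illustrative, not the actual implementation):

```python
# Sketch of stage-dependent tab visibility (illustrative names).
STAGE_ORDER = ["draft", "review", "decide", "onboard", "active", "closed"]

# Earliest stage at which each tab becomes visible.
TAB_UNLOCK_STAGE = {
    "Applications": "draft",
    "Decisions": "decide",
    "Onboarding": "onboard",
}

def visible_tabs(stage):
    """Show a tab once the cohort reaches its unlock stage.

    Moving backward simply hides later-stage tabs; no data is deleted.
    """
    idx = STAGE_ORDER.index(stage)
    return [tab for tab, unlock in TAB_UNLOCK_STAGE.items()
            if STAGE_ORDER.index(unlock) <= idx]
```

Because visibility is derived from the current stage rather than stored per tab, moving between stages is always reversible.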
## Changing the Cohort Stage
1. Open the cohort page and click the **Settings** button (gear icon) in the top right.
2. In the **Cohort Stage** section on the left, click the target stage in the pipeline.
3. A confirmation modal shows what will happen. Review the implications and click **Confirm**.
:::warning
Moving to `closed` automatically disables application intake. Moving backward hides tabs associated with later stages but preserves all data.
:::
## Configuring Settings
Click **Settings** to open the full-screen settings modal. From here you can:
- **Cohort Name** -- Edit the display name.
- **Cohort Slug** -- Edit the URL slug. This is locked once the cohort leaves `draft` status.
- **Application Opens / Closes** -- Set the date range for the application window.
- **Notification Emails** -- View which org admins receive new-application alerts. Manage recipients from **Team Settings**.
- **Survey** -- Manage the survey version and scoring profile used for application questions.
- **Organizations** -- Select which organizations can access this cohort.
Click **Save Configuration** when done.
## Accepting Applications
In the settings modal under **Application Intake**, toggle **Accepting Applications** on or off. This controls whether the public apply form accepts new submissions. The toggle is disabled when the cohort is in `closed` status.
## Archiving a Cohort
In the settings modal under **Danger Zone**, click **Archive Cohort**. Archiving hides the cohort from the main list and stops accepting applications. All data is preserved. You can unarchive a cohort from the archived banner that appears at the top of its page.
:::info
Archiving is reversible. Click **Unarchive** on the archived banner to restore the cohort.
:::

# Customizing the Apply Form
The **Form** tab on the cohort page lets you customize the public application form's appearance and messaging. Changes are reflected on the live form at `/c/{cohort-slug}/apply`.
## Prerequisites
You need cohort admin or higher access. The cohort must have a survey configured in its settings (questions come from Flywheel).
## Live Preview
The top of the **Form** tab shows a live preview of the application form. As you edit fields below, the preview updates immediately. The preview renders actual survey questions from Flywheel, grouped into sections: Eligibility Requirements, Personal Information, Studio Information, Application Questions, and Supporting Materials.
At the bottom of the preview, the confirmation message is shown as it will appear to applicants after submission.
## Content & Messaging
- **Form Title** -- The main heading on the application form.
- **Badge Text** -- A small label displayed above the title (e.g., "Now Accepting Applications").
- **Subtitle** -- A brief description shown below the title.
- **Confirmation Message** -- The message applicants see after submitting their application.
- **Support Email** -- Contact email displayed on the form for applicant questions.
## Color Theming
Click **Customize Colors** in the preview header to open the color palette. You can configure:
- **Primary color** -- The accent color used for buttons, links, and highlights. Choose from 18 color options including black/white, red, blue, indigo, and more.
- **Neutral color** -- The base gray tone used for backgrounds and borders. Options include gray, slate, zinc, neutral, and stone.
- **Theme** -- Choose Light, Dark, or System (follows the visitor's device preference).
The badge color automatically matches the primary accent color.
## Next Steps Messaging
These fields appear on the confirmation page after submission:
- **Review Process** -- Tell applicants how long review takes.
- **Decision Notification** -- When applicants will hear back.
- **Onboarding Information** -- When the program starts if accepted.
## Saving and Previewing
Click **Save Changes** at the bottom of the page to persist your customization. Click **Preview** in the header to open the live public form in a new tab.
:::tip
Save your changes before previewing. The preview button opens the actual public form URL, which uses the last saved customization.
:::

# Managing Applications
The **Applications** tab is the primary workspace for managing submitted applications. It provides search, filtering, grouping, inline actions, and bulk operations.
## Prerequisites
You need cohort admin or higher access. Reviewers are redirected to their interview dashboard instead.
## Viewing Applications
Applications appear in a table with the following columns: **Studio** (applicant name), **Pipeline** (current stage, lock status, interview date, self-assessment status), **Reviews** (completed/assigned count), **Consensus** (reviewer recommendation tallies), **Score** (cumulative percentage), and inline **Actions**.
Click a studio name to open the full application detail page, which shows all answers, review history, and status timeline.
## Searching and Filtering
- **Search** -- Type in the search box to filter by studio name or email.
- **Status** -- Filter by application status (e.g., `new`, `accepted`, `declined`).
- **Stage** -- Filter by current review stage (Application Review, First Round, Second Round).
- **Reviews** -- Filter by review progress: Not Assigned, Not Started, In Progress, or Complete.
- **Province** -- Filter by applicant location. When Ontario is selected, a **GTA only** toggle appears.
- **Hide declined** -- On by default. Toggle to show or hide declined applications.
Active filters appear as removable tags below the filter bar. Click **Clear** to reset all filters. Filter state persists in your browser across sessions.
## Sorting and Grouping
Use the **Sort** dropdown to order applications by name, score, date, pipeline stage, interview date, or consensus (mixed first). Use the **Group by** dropdown to group by status, stage, review progress, or consensus. Groups can be expanded or collapsed individually or all at once.
## Inline Actions
The **Pipeline** column shows lock icons. A ready-to-lock icon appears when the minimum reviewer threshold is met. Click it to open a confirmation modal to lock the stage. Once locked, you can advance or decline the application from the **Actions** column.
## Bulk Operations
Select applications using the checkboxes, then open the **Bulk Actions** dropdown:
- **Change to Declined** -- Decline all selected applications.
- **Reset to New** -- Reset selected applications to `new` status.
- **Lock/Unlock Stages** -- Lock or unlock stages in bulk.
- **Advance to Next Stage** -- Advance locked applications to the next stage.
- **Skip to Stage** -- Jump applications to a specific stage, bypassing intermediate ones.
- **Assign Reviewers** -- Open the reviewer assignment modal for selected applications.
## Orphaned Interviews
If Cal.com bookings could not be matched to an application, an **Orphaned** warning button appears in the filter bar. Click it to open the matching modal and manually associate bookings with applications.
## The Timeline Tab
Org admins and super admins see an additional **Timeline** tab on the cohort page. This shows a per-cohort activity feed including review submissions, status changes, decisions, and other cohort-level events.

# Configuring Review Stages
Review stages define the evaluation pipeline for your cohort. Each stage has its own rubric, reviewer requirements, and settings. You configure stages from **Manage** > **Configure Stages** in the top navigation -- not from within the cohort view.
## Prerequisites
You need cohort admin or higher access. The cohort must already exist with stages created by a system administrator.
## Accessing Stage Configuration
1. Click **Manage** in the top navigation bar.
2. Select **Configure Stages**.
3. Choose a cohort from the dropdown.
The page shows configuration cards for three stage types: **Application Stage**, **First Round Interviews**, and **Second Round Interviews**.
## Stage Types
- **Application Reviews** (`application-reviews`) -- The initial written application review. Reviewers score the submitted application form against a rubric.
- **First Round** (`first-round`) -- First-round interview scoring. Reviewers evaluate interview performance using a separate rubric.
- **Second Round** (`second-round`) -- Final-round interview scoring. Uses its own rubric for the final evaluation.
## Settings Per Stage
Each stage card offers the following configuration:
### Scoring Rubric
Select the Flywheel rubric used for scoring this stage. Application stage rubrics score the written application; interview rubrics score interview performance. Rubrics are managed in Flywheel and appear here as a dropdown.
### Minimum Reviewers
Set the minimum number of completed reviews required before a stage can be locked. The default is 2. This threshold is enforced when locking -- you cannot lock a stage until this many reviewers have submitted scores.
### Interview Question Reference (Interview Stages Only)
For first-round and second-round stages, you can enter interview questions or talking points. This content is available to reviewers as a popup reference during scoring. Use `##` for section headers.
## Saving
Each stage has its own **Save Configuration** button; click it after making changes. The one exception is the minimum reviewer count, which saves immediately when you change the value.
## Stage Statuses
Each stage has a status: `draft`, `active`, `completed`, or `archived`. These are managed by the system as the cohort progresses. Stage statuses are separate from the cohort lifecycle status.
:::info
The Application stage also includes survey version management. This controls which version of the Flywheel survey is used for application questions and ensures scoring consistency.
:::

# Assigning Reviewers
Reviewer assignment is a two-step process: first add reviewers to the cohort, then assign them to specific applications. All assignment is manual -- there is no auto-assignment.
## Prerequisites
You need cohort admin or higher access.
## Step 1: Add Reviewers to the Cohort
The **Reviewers** tab on the cohort page shows all reviewers currently assigned to this cohort along with their assignment counts.
1. Click **Add Reviewers**.
2. A modal opens with two tabs:
- **Select Existing** -- Choose from users already in your organization. Select one or more users from the dropdown and click **Add Reviewers**.
- **Add New User** -- Create a new user account. Enter their name, email, and role (Reviewer, Cohort Admin, or Org Admin). Click **Create & Add as Reviewer**. They will receive a password setup email.
3. After adding, you are prompted to optionally assign them to applications immediately (Step 2 below). You can skip this and assign later.
## Step 2: Assign Reviewers to Applications
Once reviewers are on the cohort, assign them to specific applications:
### From the Reviewers Tab
Click the assignment count next to a reviewer's name to open the **Manage Assignments** modal. This shows:
- **Current Assignments** -- Applications already assigned to this reviewer, with permission level and scoring status. You can toggle between `review` and `view` permissions, or unassign applications.
- **Add Applications** -- Select additional applications to assign. Choose a permission level and click **Assign**.
### From the Applications Tab (Bulk)
Select applications using the checkboxes on the **Applications** tab, then choose **Assign Reviewers** from the **Bulk Actions** dropdown. Select the reviewers and permission level, then confirm.
## Permission Levels
- `review` -- The reviewer can view the application and submit scores. This is the default.
- `view` -- The reviewer can see the application but cannot score it. Useful for committee members or observers.
Permission can be changed at any time from the **Manage Assignments** modal. Locked reviews cannot have their permission changed.
## Removing Assignments
- **Unassign from application** -- In the **Manage Assignments** modal, click the X button next to an assignment. Locked reviews cannot be unassigned.
- **Remove from cohort** -- On the **Reviewers** tab, click the trash icon next to a reviewer to remove them from the cohort entirely.
:::warning
Removing a reviewer from the cohort does not delete their completed reviews. The reviews remain in the system but the reviewer loses access.
:::
:::info
When a reviewer is assigned to an application, that application appears on their review dashboard at `/reviews`. Unassigning removes it from their dashboard.
:::

# Interview Scheduling
Interview scheduling is handled through Cal.com integration. When applicants book interviews through Cal.com, the booking data flows into Cohort-OS automatically via webhooks.
## Prerequisites
Cal.com must be configured with the correct webhook URL and cohort ID. This is set up by system administrators.
## How It Works
Cal.com sends webhook events to Cohort-OS when bookings are created, rescheduled, or cancelled. The system matches bookings to applications by the applicant's email address.
### Booking Created
When an applicant books an interview:
1. The system looks up the application by matching the booking email to the applicant's contact email.
2. If a match is found, the interview schedule data (date, time, meeting link) is saved to the application.
3. The application status updates automatically -- for example, from `waiting_first_round` to `first_round_scheduled`.
4. The current stage is synced to match the new status.
### Booking Rescheduled
When an interview is rescheduled, the schedule data (date, time, meeting link) is updated on the application. The application status is not changed.
### Booking Cancelled
When an interview is cancelled:
- The schedule data is cleared from the application.
- If the application is currently in a `*_scheduled` status, it reverts to the corresponding `waiting_*` status.
- Applications in other statuses keep their current status.
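The matching and status logic above can be sketched roughly as follows (function and field names are illustrative, not the actual implementation; deduplication by event ID is described later on this page):

```python
# Sketch of the Cal.com webhook flow (illustrative names only).
def handle_booking_event(event, applications, seen_event_ids):
    # Deduplicate by Cal.com event ID: replays are no-ops.
    if event["id"] in seen_event_ids:
        return None
    seen_event_ids.add(event["id"])

    # Match the booking to an application by the attendee's email.
    app = next((a for a in applications
                if a["email"] == event["attendee_email"]), None)
    if app is None:
        return "orphaned"  # saved for manual matching; admins alerted

    if event["type"] == "created":
        app["interview"] = event["schedule"]
        # e.g. waiting_first_round -> first_round_scheduled;
        # terminal statuses don't start with waiting_ and are untouched.
        if app["status"].startswith("waiting_"):
            app["status"] = app["status"].removeprefix("waiting_") + "_scheduled"
    elif event["type"] == "rescheduled":
        app["interview"] = event["schedule"]  # status unchanged
    elif event["type"] == "cancelled":
        app["interview"] = None
        # Revert *_scheduled back to the corresponding waiting_* status.
        if app["status"].endswith("_scheduled"):
            app["status"] = "waiting_" + app["status"].removesuffix("_scheduled")
    return "matched"
```

Note how the three event types differ only in what they touch: created sets schedule and status, rescheduled sets schedule only, cancelled clears both.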
## Orphaned Interviews
If a Cal.com booking email does not match any application in the cohort, the booking becomes an "orphaned interview." When this happens:
- The booking is saved to an orphaned interviews table.
- Org admins receive an email alert with the booking details.
- An **Orphaned** warning badge appears on the **Applications** tab filter bar with a count of unmatched bookings.
To resolve orphaned interviews, click the **Orphaned** badge to open the matching modal. From there you can manually match the booking to the correct application.
## Interview Status on Applications
The **Pipeline** column on the **Applications** tab shows interview scheduling information:
- Applications in `waiting_first_round` or `waiting_second_round` show they are awaiting scheduling.
- Applications in `first_round_scheduled` or `second_round_scheduled` show the scheduled interview date and time.
- Past interview dates appear with strikethrough text.
:::info
Webhook events are deduplicated using the Cal.com event ID. Processing the same event twice has no effect. Applications in terminal statuses (`accepted`, `waitlist`, `declined`) do not have their status changed by new bookings.
:::

# Locking Reviews & Consensus
Locking is the mechanism that finalizes review scores for a stage and prepares an application for advancement or decision-making. Consensus data gives you a summary of how reviewers scored and what they recommended.
## Prerequisites
You need cohort admin or higher access. Reviews must be completed before locking.
## When to Lock
Lock a stage when the minimum number of reviewers have submitted their scores. The minimum is configured per stage (default: 2). You can see review progress in the **Reviews** column on the **Applications** tab -- it shows completed/assigned counts.
The lock icon in the **Pipeline** column indicates readiness:
- **Open lock (dimmed)** -- Not enough reviews completed yet. Hover to see how many more are needed.
- **Open lock (visible)** -- Minimum reviewers met. Ready to lock.
- **Closed lock** -- Stage is locked.
## How to Lock
### Single Application
Click the lock icon in the **Pipeline** column, or click **Lock** in the **Actions** column. A confirmation modal appears. Click **Confirm** to lock.
### Bulk Locking
Select multiple applications using the checkboxes, then choose **Lock Stages** from the **Bulk Actions** dropdown. Only applications that meet the minimum reviewer threshold will be locked. The dropdown shows how many are eligible.
## What Locking Does
- Prevents reviewers from editing their scores for this stage.
- Records who locked the stage and when.
- Automatically transitions the application status:
- Locking the final stage sets the status to `needs_final_decision`.
- Locking the first-round stage sets the status to `needs_first_decision`.
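The threshold check and the automatic status transition can be sketched together (a minimal illustration with hypothetical names; the real implementation may differ):

```python
# Sketch of the lock check and status transition (illustrative names).
def lock_stage(app, stage, minimum_reviewers=2, final_stage="second-round"):
    completed = app["completed_reviews"].get(stage, 0)
    if completed < minimum_reviewers:
        # Mirrors the dimmed open-lock icon: not enough reviews yet.
        raise ValueError(
            f"{minimum_reviewers - completed} more review(s) needed to lock")
    app["locked_stages"].add(stage)  # reviewers can no longer edit scores
    # Locking moves the application toward a decision.
    app["status"] = ("needs_final_decision" if stage == final_stage
                     else "needs_first_decision")
    return app
```

Bulk locking applies the same check per application, which is why only applications meeting the threshold are eligible.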
## Viewing Consensus
Once reviews are completed, you can view consensus data for any application. The **Consensus** column on the **Applications** tab shows recommendation tallies (e.g., "2x Advance, 1x Hold"). Hover over the consensus cell to see a popover with each reviewer's name, recommendation, and score percentage.
For detailed consensus, open an application and view its review consensus page. This shows:
- **Per-reviewer breakdown** -- Each reviewer's total score, percentage, recommendation, and notes. In blind review mode, reviewer identities are hidden from other reviewers (admins always see names).
- **Per-criterion comparison** -- Average, min, max, and spread for each scoring criterion across all reviewers. Criteria with high spread (disagreement) are flagged.
- **Aggregate stats** -- Average score, average percentage, recommendation counts, and whether all reviews are complete and locked.
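The per-criterion comparison can be sketched as follows (field names and the spread threshold are illustrative, not the actual implementation):

```python
# Sketch of per-criterion consensus stats (illustrative names).
def criterion_consensus(scores_by_criterion, spread_threshold=2):
    """For each criterion, compute avg/min/max/spread across reviewers
    and flag criteria where reviewers disagree strongly."""
    out = {}
    for criterion, scores in scores_by_criterion.items():
        spread = max(scores) - min(scores)
        out[criterion] = {
            "avg": sum(scores) / len(scores),
            "min": min(scores),
            "max": max(scores),
            "spread": spread,
            "disagreement": spread >= spread_threshold,  # flagged in the UI
        }
    return out
```

Spread (max minus min) is a simple disagreement signal: a high spread means at least two reviewers scored the same criterion very differently and it deserves discussion.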
## Unlocking
If you need to allow a reviewer to make changes after locking, click the closed lock icon on a locked application. A confirmation modal appears. Click **Confirm** to unlock. This reverts the stage to its pre-locked state.
You can also unlock in bulk via the **Bulk Actions** dropdown.
:::warning
Unlocking a stage allows reviewers to edit their scores again. The application status does not automatically revert -- you may need to manage status transitions manually.
:::

# Making Decisions
The **Decisions** tab provides a ranked view of applicants and tools to record accept, waitlist, or decline decisions. It appears when the cohort reaches the `decide` stage.
## Prerequisites
You need cohort admin or higher access. The cohort must be in `decide`, `onboard`, `active`, or `closed` status.
## The Decision Dashboard
The **Decisions** tab shows:
- **Statistics cards** -- Total applications, accepted count, waitlisted count, and declined count.
- **Ranked Applicants table** -- All applications with review scores, sorted by average score (highest first). Each row shows rank, studio name, contact name, email, average score percentage, review count, and current decision status.
Applications appear here once they have review scores or are already in a terminal status (`accepted`, `waitlist`, `declined`).
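The ranking described above is a straightforward sort; a minimal sketch (field names are illustrative):

```python
# Sketch of the ranked-applicants ordering (illustrative field names).
def rank_applicants(apps):
    """Sort by average score percentage, highest first, and number the rows."""
    ordered = sorted(apps, key=lambda a: a["avg_score_pct"], reverse=True)
    return [dict(a, rank=i + 1) for i, a in enumerate(ordered)]
```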
## Making a Decision
### Single Application
Click the dropdown menu (three dots) on any row. Choose **Accept**, **Waitlist**, or **Decline**. The decision is recorded immediately and the application status updates.
You can also click **View Reviews** to open the application detail page and review scores before deciding.
### Bulk Decisions
1. Select applications using the checkboxes in the leftmost column. Use the header checkbox to select all.
2. Click **Accept**, **Waitlist**, or **Decline** in the header buttons. The count of selected applications is shown on each button.
3. Decisions are applied to all selected applications. If some applications cannot transition to the chosen status (e.g., already decided), those are skipped and you receive a partial update notification.
## Decision Records
Each decision creates a record in the Decision model with:
- **Decision** -- `accepted`, `waitlist`, or `declined`.
- **Decided at** -- Timestamp of the decision.
- **Rationale** -- Optional text explaining the reasoning.
- **Rationale code** -- Optional code for categorizing the reason.
- **Committee notes** -- Optional notes from committee discussion.
:::info
Decisions enforce valid status transitions. You cannot re-decide a `declined` application to `accepted` without first changing its status. Waitlisted applications can transition to `accepted` or `declined`.
:::
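The transition rules in the note above can be sketched as a small lookup (an illustration only; the actual enforcement may be implemented differently):

```python
# Sketch of allowed decision transitions (illustrative).
ALLOWED_FROM = {
    "waitlist": {"accepted", "declined"},  # waitlisted can still move
    "accepted": set(),                     # terminal
    "declined": set(),                     # terminal unless status is reset
}

def can_decide(current_status, new_decision):
    """Undecided applications accept any decision; decided ones only
    the transitions listed above."""
    if current_status in ALLOWED_FROM:
        return new_decision in ALLOWED_FROM[current_status]
    return True
```

This is why a bulk decision skips already-decided applications and reports a partial update instead of failing outright.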
## Workflow
The typical decision workflow is:
1. Lock all reviews for the relevant stage (see Locking Reviews & Consensus).
2. Review consensus data and scores on the **Decisions** tab.
3. Hold a committee discussion if needed.
4. Record decisions individually or in bulk.
5. Move the cohort to `onboard` status when all decisions are finalized. If all decisions are made through the bulk decision endpoint, the cohort may automatically transition to `onboard`.
## After Decisions
Once decisions are recorded, the **Email Tasks** tab shows which applicants need to be notified. Accepted applicants will appear on the **Onboarding** tab when the cohort reaches `onboard` status.

# Email Tasks
The **Email Tasks** tab tracks which applicants need to be notified and provides a record of completed notifications. It appears as a tab on the cohort page, with a badge count showing pending items.
## Prerequisites
You need cohort admin or higher access.
## Notification Types
The system tracks five notification types based on application status:
- **1st Round Scheduling** -- Applicants in `waiting_first_round` or `first_round_scheduled` who have not been sent a scheduling link.
- **2nd Round Scheduling** -- Applicants in `waiting_second_round` or `second_round_scheduled` who have not been sent a scheduling link.
- **Acceptance** -- Applicants with `accepted` status who have not been notified.
- **Waitlist** -- Applicants with `waitlist` status who have not been notified.
- **Decline** -- Applicants with `declined` status who have not been notified.
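The status-to-type mapping above can be sketched as follows (field names are illustrative, not the actual implementation):

```python
# Sketch of deriving pending notifications from status (illustrative names).
STATUS_TO_NOTIFICATION = {
    "waiting_first_round": "1st Round Scheduling",
    "first_round_scheduled": "1st Round Scheduling",
    "waiting_second_round": "2nd Round Scheduling",
    "second_round_scheduled": "2nd Round Scheduling",
    "accepted": "Acceptance",
    "waitlist": "Waitlist",
    "declined": "Decline",
}

def pending_notifications(apps):
    """An application is pending when its status maps to a notification
    type it has not yet been marked as notified for."""
    return [
        (a["studio"], STATUS_TO_NOTIFICATION[a["status"]])
        for a in apps
        if a["status"] in STATUS_TO_NOTIFICATION
        and STATUS_TO_NOTIFICATION[a["status"]] not in a["notified"]
    ]
```

Marking a notification done adds it to the application's record, which is what removes the row from the pending list and decrements the tab badge.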
## Pending Notifications
The top section shows applications awaiting notification. Each row displays the studio name, contact email, current status, notification type, and a **Mark Done** button.
Use the type filter dropdown in the header to narrow the view to a specific notification type. The filter shows counts for each type.
## Marking Notifications as Done
Notification sending happens outside of Cohort-OS (you compose and send emails manually). When you have sent a notification:
1. Click **Mark Done** on the corresponding row.
2. A modal appears with:
- **Date & time sent** -- Pre-filled with the current time. Adjust if the email was sent earlier.
- **Sent by** -- Pre-filled with your name. Change if someone else sent the notification.
3. Click **Confirm** to record the notification.
The application moves from the pending list to the completed list. The badge count on the **Email Tasks** tab updates accordingly.
## Completed Notifications
The bottom section shows all recorded notifications. Each row displays the studio name, contact email, notification type, date notified, and who marked it as sent.
### Editing a Notification
Click the edit icon on any completed notification to update the sent date or the person who sent it. Click **Save** to confirm changes.
### Deleting a Notification
In the edit modal, click **Delete** to remove a notification record. This moves the application back to the pending list.
## Tab Badge Count
The **Email Tasks** tab shows a badge with the count of pending notifications. This updates in real time as you mark items done or as application statuses change.
:::tip
Use the type filter to work through notifications by category. For example, filter to "Acceptance" after making batch accept decisions, and work through the list in one pass.
:::

# Self-Assessments
Self-assessments are confidential surveys sent to individual team members of applicant studios. They are typically used during later interview stages to gather independent perspectives from each team member.
## Prerequisites
You need cohort admin or higher access. The application must be at a stage where self-assessments are relevant (typically `waiting_second_round` or `second_round_scheduled`).
## How Self-Assessments Work
Self-assessments use token-based access. Each team member receives a unique link that does not require a login. The link takes them to a form at `/c/{cohort-slug}/assessment/{token}` where they complete the survey independently.
Responses are confidential -- they are not shared with the applicant's teammates. The form includes sections like About You, Capacity, Your Role, Co-op Readiness, and Looking Ahead. Questions may include conditional logic where follow-up questions appear based on previous answers.
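The link structure can be sketched like this. The token format is an assumption (the docs only specify that it is unique and does not require login); the function name is hypothetical.

```typescript
// Illustrative sketch of building a token-based assessment link of the
// form /c/{cohort-slug}/assessment/{token}. Token format assumed.
import { randomBytes } from "crypto";

function makeAssessmentLink(baseUrl: string, cohortSlug: string): string {
  // A unique, unguessable token per team member; no login required.
  const token = randomBytes(16).toString("hex");
  return `${baseUrl}/c/${cohortSlug}/assessment/${token}`;
}
```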
## Sending Self-Assessments
Self-assessment invitations are managed from the application detail page. An admin initiates the process by entering the names and emails of each team member. Each member receives an email with their unique assessment link.
## Tracking Completion
The **Applications** tab shows self-assessment status in the **Pipeline** column for applications in second-round stages:
- **"Self-assessment not sent"** (highlighted) -- No assessments have been initiated for this application.
- **"Self-assessment: X/Y"** -- X of Y team members have completed their assessment.
The `SelfAssessmentStatus` component on the application detail page provides a detailed view:
- A progress bar showing completion percentage.
- Each member listed with their status: **Completed**, **Pending**, or **Cancelled**.
- Completed members can be expanded to view their responses.
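The "X/Y" label can be sketched as below. The member shape is assumed; cancelled members are excluded from both counts, as described under "Cancelling a Member."

```typescript
// Sketch of the pipeline-column completion label. Member shape assumed.
interface Member {
  status: "completed" | "pending" | "cancelled";
}

function completionLabel(members: Member[]): string {
  // Cancelled members do not count toward X or Y.
  const active = members.filter(m => m.status !== "cancelled");
  const done = active.filter(m => m.status === "completed").length;
  return `Self-assessment: ${done}/${active.length}`;
}
```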
## Resending Emails
If a team member has not received or has lost their assessment link, you can resend the email from the application detail page. The resend updates the `lastResentAt` timestamp for that member.
## Cancelling a Member
If a team member should no longer complete the assessment (e.g., they left the studio), you can cancel their invitation. Cancelled members are excluded from the completion count and their status shows as **Cancelled**.
:::info
Self-assessment responses are stored directly on the application document under `selfAssessment.members[].responses`. Each member's token, sent date, completion date, and last access time are tracked for audit purposes.
:::

# Onboarding
The **Onboarding** tab tracks post-acceptance tasks for accepted participants. It appears when the cohort reaches the `onboard` stage.
## Prerequisites
You need cohort admin or higher access. The cohort must be in `onboard`, `active`, or `closed` status. At least one application must have `accepted` status.
## Viewing Participants
The **Onboarding** tab displays accepted participants in a card layout with two sections:
- **Participants list** (left, two-thirds width) -- Each participant card shows their studio name, contact name, email, and a completion badge showing their progress percentage. A progress bar provides a visual indicator.
- **Onboarding Stats** sidebar (right, one-third width) -- Shows total accepted count, overall progress percentage with a progress bar, and the count of pending tasks across all participants.
If no applications have been accepted yet, the tab shows an empty state message.
## Checklist Items
Each participant has an onboarding checklist. Items are displayed as checkboxes within each participant card. Checking an item records who completed it and when. Completed items show a strikethrough label and the completion date.
To toggle a checklist item, click the checkbox next to it. The change saves automatically.
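A checklist toggle that records who completed an item and when might look like the sketch below. The item shape and function name are hypothetical, not the actual data model.

```typescript
// Hypothetical sketch: checking an item records the user and timestamp;
// unchecking clears both.
interface ChecklistItem {
  label: string;
  completedBy?: string;
  completedAt?: string; // ISO timestamp
}

function toggleItem(item: ChecklistItem, user: string, now: Date): ChecklistItem {
  if (item.completedBy) {
    return { label: item.label }; // uncheck: drop completion metadata
  }
  return { ...item, completedBy: user, completedAt: now.toISOString() };
}
```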
:::info
Onboarding checklist items are defined per application. The specific items depend on how the onboarding template is configured for the cohort.
:::
## Notes
Each participant card has a notes section at the bottom. Click **Add notes** (or **Edit** if notes exist) to open an inline editor. Type your notes and click **Save**. Notes are free-form text for any onboarding-related information.
## Progress Tracking
Progress is calculated as the percentage of completed checklist items out of total items. The color of the completion badge changes based on progress:
- 100% -- green (complete)
- 75%+ -- blue
- 50%+ -- yellow
- Below 50% -- red
The sidebar shows the overall progress averaged across all participants and the total count of pending (unchecked) tasks.
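The badge color thresholds above can be sketched as a simple lookup (function name hypothetical):

```typescript
// Sketch of the completion-badge color rule described above.
function badgeColor(pct: number): "green" | "blue" | "yellow" | "red" {
  if (pct >= 100) return "green"; // complete
  if (pct >= 75) return "blue";
  if (pct >= 50) return "yellow";
  return "red";
}
```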
:::tip
Use the notes field to record any special requirements or follow-up items for individual participants. This keeps onboarding context in one place.
:::

# Team & Settings
Team management and the activity feed are accessed from the **Manage** dropdown in the top navigation. These pages let you invite users, assign roles, and monitor what is happening across your organization.
## Prerequisites
You need org admin or super admin access.
## Team Members
Navigate to **Manage** > **Team Members** to view and manage your organization's users. The page shows a searchable table with each user's name, email, role, and status.
### User Statuses
- **Active** -- User has accepted their invite and set up their password.
- **Invite pending** -- User has been created but has not set up their password yet.
- **Inactive** -- User has been deactivated and cannot log in.
### Inviting a New User
1. Click **Add Team Member**.
2. Choose **Create New** (default) or toggle to **Add Existing** if the user already exists in another organization.
3. For new users, enter their name, email, and role. Click **Create User**.
4. The user receives an email with a link to set up their password.
### Roles
- **Reviewer** -- Can view and score applications they are explicitly assigned to.
- **Cohort Admin** -- Can manage cohort settings and make decisions.
- **Org Admin** -- Can manage organization settings, team members, and all cohorts.
- **Super Admin** -- Full system access (only assignable by super admins).
### Managing Users
Click the dropdown menu (three dots) on any user row for options:
- **View Profile** -- Open the user's profile page.
- **Edit** -- Change the user's role.
- **Re-invite User** -- Send a new password setup link to the user's email.
- **Deactivate/Activate** -- Toggle the user's active status. Deactivated users cannot log in.
- **Remove from Organization** -- Permanently remove the user from your organization.
:::warning
Removing a user from the organization is permanent. Their completed reviews are preserved, but they lose all access.
:::
## Activity Log
Navigate to **Manage** > **Activity Log** to view the organization-wide activity feed. Events are grouped by day with the most recent events first.
Each event shows:
- **Category badge** -- Color-coded by type (auth, user management, cohort management, reviews, decisions).
- **Event description** -- What happened.
- **User and timestamp** -- Who performed the action and when.
- **Details** -- Additional context such as changed fields, status transitions, or avatar changes.
### Filtering
Use the filter dropdown in the header to narrow events by category:
- All Actions
- Auth (Login/Logout)
- User Management
- Cohort Management
- Reviews
- Decisions
Click **Load more** at the bottom to see older events. The feed loads 50 events at a time.
:::info
The activity log records over 50 distinct action types including logins, user creation, cohort updates, review submissions, decision pushes, stage locks, and more. All actions include the user who performed them and relevant details.
:::

# Application Status Reference
Every application moves through a defined set of statuses as it progresses from submission to final outcome. This page documents each status, what it means, and the valid transitions between them.
## Status Definitions
**`new`** -- Application recently submitted. This is the starting status for every application that comes in through the apply form.
**`waiting_first_round`** -- Waiting to be scheduled for a first-round interview. The application has been advanced from its initial review stage into the first-round interview stage but does not yet have an interview booked.
**`first_round_scheduled`** -- First-round interview has been scheduled. Set automatically when a Cal.com booking webhook matches the applicant's email, or when an admin manually changes the status.
**`needs_first_decision`** -- First round complete, awaiting a decision. Set automatically when an admin locks the first-round stage (all completed reviews must be locked and the minimum reviewer threshold met).
**`waiting_second_round`** -- Waiting to be scheduled for a second-round interview. The committee decided to advance the application past the first round into a second-round stage.
**`second_round_scheduled`** -- Second-round interview has been scheduled. Same mechanism as first round -- triggered by a Cal.com webhook match or manual status change.
**`needs_final_decision`** -- Second round complete, awaiting final decision. Set automatically when the final stage is locked.
**`accepted`** -- Application accepted into the cohort. Terminal status.
**`waitlist`** -- Application waitlisted. Can still transition to `accepted` or `declined`.
**`declined`** -- Application declined. Terminal status.
## Valid Transitions
Each status can only move to specific next statuses. The system enforces these rules and rejects invalid transitions.
- `new``waiting_first_round`, `first_round_scheduled`, `declined`
- `waiting_first_round``first_round_scheduled`, `waiting_second_round`, `declined`
- `first_round_scheduled``needs_first_decision`, `waiting_first_round`, `declined`
- `needs_first_decision``waiting_second_round`, `accepted`, `waitlist`, `declined`
- `waiting_second_round``second_round_scheduled`, `accepted`, `waitlist`, `declined`
- `second_round_scheduled``needs_final_decision`, `waiting_second_round`, `declined`
- `needs_final_decision``accepted`, `waitlist`, `declined`
- `accepted` → (none -- terminal)
- `waitlist``accepted`, `declined`
- `declined` → (none -- terminal)
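The transition table above can be expressed as a lookup, which is presumably close to how the system validates status changes. This is a sketch; the names are illustrative.

```typescript
// Sketch of the valid-transition table. The server likely enforces an
// equivalent check and rejects anything not listed.
const TRANSITIONS: Record<string, string[]> = {
  new: ["waiting_first_round", "first_round_scheduled", "declined"],
  waiting_first_round: ["first_round_scheduled", "waiting_second_round", "declined"],
  first_round_scheduled: ["needs_first_decision", "waiting_first_round", "declined"],
  needs_first_decision: ["waiting_second_round", "accepted", "waitlist", "declined"],
  waiting_second_round: ["second_round_scheduled", "accepted", "waitlist", "declined"],
  second_round_scheduled: ["needs_final_decision", "waiting_second_round", "declined"],
  needs_final_decision: ["accepted", "waitlist", "declined"],
  accepted: [], // terminal
  waitlist: ["accepted", "declined"],
  declined: [], // terminal
};

function canTransition(from: string, to: string): boolean {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```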
## What Triggers Transitions
**Automatic transitions:**
- **Stage locking** sets `needs_first_decision` or `needs_final_decision` depending on whether the locked stage is the final stage or a first-round stage.
- **Advancing to next stage** sets `waiting_first_round` or `waiting_second_round` based on the next stage's type. If an interview is already scheduled for that round, the status skips directly to `first_round_scheduled` or `second_round_scheduled`.
- **Cal.com webhooks** move an application from `waiting_first_round` to `first_round_scheduled` (or the second-round equivalent) when a booking is created.
**Manual transitions:**
- Admins can change any application's status through the **Applications** tab, provided the transition is valid.
- Bulk status changes are available for processing multiple applications at once.
- The **Decisions** tab records accept, waitlist, or decline decisions and updates the status accordingly.
:::info
An application can be declined from almost any active status. The `waitlist` status is the only non-terminal outcome status -- waitlisted applications can still be accepted or declined later.
:::
## Status Groups
The system groups statuses for filtering:
- **Active:** `new`, `waiting_first_round`, `first_round_scheduled`, `needs_first_decision`, `waiting_second_round`, `second_round_scheduled`, `needs_final_decision`
- **Decision pending:** `needs_first_decision`, `needs_final_decision`
- **Terminal:** `accepted`, `waitlist`, `declined`

# Rubric & Scoring Details
This page covers how scoring works end to end in Cohort-OS -- from how rubrics are structured, through per-criterion scoring by individual reviewers, to how consensus scores are calculated across the review panel.
## Rubric Structure
Rubrics are defined in Flywheel (the external assessment engine) and linked to each review stage. A rubric contains:
- **Criteria** -- The individual dimensions being evaluated (e.g., "Mission Alignment", "Governance Readiness"). Each criterion has a name, description, and maximum point value.
- **Thresholds** (or levels) -- Descriptive scoring bands within each criterion. Each threshold has a title, a description of what that performance level looks like, and a point range. Reviewers select the threshold that best matches the applicant.
- **Max points** -- Each criterion defines its own maximum score (commonly 5 points). The rubric's total possible score is the sum of all criteria max points.
Each review stage in a cohort is linked to a specific Flywheel rubric via the stage's configuration. Different stages can use different rubrics.
## How Reviewers Score
Reviewers use the **guided scoring interface** to evaluate applications criterion by criterion. For each criterion:
1. Read the criterion description and the applicant's relevant answers (shown side by side).
2. Select a threshold/level that matches the applicant's response. Selecting a threshold automatically assigns the corresponding point value.
3. Optionally add notes specific to that criterion.
4. All criteria must be scored before proceeding to the recommendation step.
After scoring all criteria, the reviewer sees their total score (sum of all criterion scores) and percentage (total divided by maximum possible), then selects a recommendation and writes overall notes.
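The total and percentage calculation can be sketched as below. The per-criterion shape and the rounding behavior are assumptions.

```typescript
// Sketch: total score is the sum of criterion scores; percentage is
// total over maximum possible. Rounding to a whole percent is assumed.
interface CriterionScore {
  score: number;
  max: number;
}

function reviewTotals(scores: CriterionScore[]) {
  const total = scores.reduce((sum, s) => sum + s.score, 0);
  const maxTotal = scores.reduce((sum, s) => sum + s.max, 0);
  return { total, percentage: Math.round((total / maxTotal) * 100) };
}
```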
:::info
If a rubric does not have thresholds configured in Flywheel, reviewers see simple numeric score buttons (1 through the max points) instead of descriptive threshold options.
:::
## Individual Review Scores
Each completed review stores:
- **Per-criterion scores** -- The point value and max value for each criterion, plus optional notes.
- **Total score** -- Sum of all criterion scores.
- **Percentage** -- Total score divided by total max points, expressed as a percentage.
- **Recommendation** -- One of `advance`, `reject`, `hold`, or `undecided`.
- **Overall notes** -- Free-text assessment from the reviewer.
A review starts as `draft` (editable, can be saved and returned to) and becomes `completed` when submitted. Submitting locks the scores and notes. The recommendation can still be changed after submission with a recorded reason.
## Consensus Calculation
When multiple reviewers have completed their reviews for the same application and stage, the consensus view aggregates their scores. The consensus endpoint calculates:
- **Average score** -- Mean of all reviewers' total scores.
- **Average percentage** -- Mean of all reviewers' percentage scores.
- **Per-criterion averages** -- For each criterion, the mean score across all reviewers.
- **Score spread** -- The difference between the highest and lowest score for each criterion. A criterion is **flagged** when the spread is 2 or more points, or 40% or more of the max points. Flagged criteria indicate significant disagreement between reviewers.
- **Recommendation tally** -- Count of advance, hold, reject, and undecided recommendations.
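The spread-flagging rule above can be sketched per criterion. The data shape and function name are assumptions; only the flagging thresholds (2+ points, or 40%+ of max) come from the docs.

```typescript
// Sketch of per-criterion consensus: average, spread, and the flag rule
// (spread >= 2 points, or >= 40% of the criterion's max points).
interface ReviewerScore {
  score: number;
  max: number;
}

function consensusForCriterion(scores: ReviewerScore[]) {
  const values = scores.map(s => s.score);
  const max = scores[0].max;
  const spread = Math.max(...values) - Math.min(...values);
  const average = values.reduce((a, b) => a + b, 0) / values.length;
  const flagged = spread >= 2 || spread >= 0.4 * max;
  return { average, spread, flagged };
}
```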
:::warning
Only completed reviews with `review` permission are included in consensus calculations. View-only assignments and draft reviews are excluded.
:::
## Grade Bands
Cohorts can define grade bands that map percentage ranges to letter grades:
| Band | Range | Label |
|------|-------|-------|
| A | 90-100% | Exceptional |
| B | 80-89% | Strong |
| C | 70-79% | Good |
| D | 60-69% | Acceptable |
| F | 0-59% | Needs Work |
Grade bands are configured per cohort in the Flywheel integration settings. They provide a quick way to categorize application strength during committee discussions.
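The table above as a lookup sketch. Since bands are configured per cohort, the thresholds here are just the example values from the table:

```typescript
// Sketch of percentage-to-grade-band mapping using the example table.
// Real thresholds come from the cohort's Flywheel integration settings.
function gradeBand(pct: number): "A" | "B" | "C" | "D" | "F" {
  if (pct >= 90) return "A"; // Exceptional
  if (pct >= 80) return "B"; // Strong
  if (pct >= 70) return "C"; // Good
  if (pct >= 60) return "D"; // Acceptable
  return "F";                // Needs Work
}
```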
## Blind Review
When **blind review** is enabled on a stage (the default), reviewer identities are hidden from other reviewers viewing the consensus. Reviewers appear as "Reviewer 1", "Reviewer 2", etc. Admins always see full reviewer names and emails regardless of the blind review setting.
Blind review affects the consensus view only -- it does not change what reviewers see while scoring. Each reviewer always scores independently without seeing other reviewers' scores until their own review is submitted.

# Glossary
Key terms used throughout Cohort-OS and this documentation.
---
**Blind review** -- A stage setting that hides reviewer identities from other reviewers when viewing consensus scores. Reviewers appear as "Reviewer 1", "Reviewer 2", etc. Admins always see full names regardless of this setting. Enabled by default on all stages.
**Cohort** -- A group of applicants going through a selection process together. Each cohort has its own application form, review stages, reviewers, and decisions. Cohorts move through lifecycle statuses: `draft`, `review`, `decide`, `onboard`, `active`, `closed`.
**Cohort admin** -- A user with the `cohortadmin` role. Can manage applications, assign reviewers, configure stages, make decisions, and send notifications for cohorts they have access to. Cannot manage organization-level settings or users.
**Consensus** -- The aggregated view of all completed reviews for a single application within a stage. Shows average scores per criterion, overall averages, score spread, and a tally of reviewer recommendations. Used by admins and committees to inform decisions.
**Criterion** -- A single dimension of evaluation within a rubric (e.g., "Mission Alignment", "Governance Readiness"). Each criterion has a name, description, maximum point value, and optional scoring thresholds that describe what each performance level looks like.
**Decision** -- The final outcome recorded for an application: `accepted`, `waitlist`, or `declined`. Decisions include a rationale, rationale code, and optional committee notes. Recorded in the **Decisions** tab.
**Flywheel** -- The external assessment engine that Cohort-OS integrates with. Flywheel provides rubrics (scoring criteria and thresholds), surveys (application form question sets), and reporting capabilities. Rubrics are authored in Flywheel and linked to review stages in Cohort-OS.
**Ghostie** -- The avatar system in Cohort-OS. Each user chooses a ghostie -- a small ghost illustration defined by an expression (sweet, mild, exasperated, wtf, disbelieving, double-take) and a body color (from presets or a custom color picker). Ghosties appear on user profile circles throughout the app.
**Guided scoring** -- The primary review interface where reviewers score applications criterion by criterion. In focused mode, one criterion is shown at a time with the applicant's answers alongside. In "show all" mode, every criterion is visible at once. After scoring all criteria, reviewers proceed to the recommendation step.
**Org** -- Short for organization. The top-level entity that owns cohorts, users, and settings. Users can belong to multiple orgs via org memberships. All data is scoped to an org.
**Orgadmin** -- A user with the `orgadmin` role for an organization. Full management access: can manage users, cohorts, settings, and view the activity feed. Can do everything a cohort admin can, plus organization-level administration.
**Orphaned interview** -- A Cal.com booking that could not be automatically matched to an application. This happens when the email on the booking does not match any applicant's contact email in the cohort. Orphaned interviews appear in a modal accessible from the **Applications** tab, where an admin can manually match them to the correct application or dismiss them.
**Permission (view/review)** -- The access level assigned to a reviewer for a specific application. `review` permission allows the reviewer to score the application and submit a review. `view` permission gives read-only access to the application data without the ability to score. View-only reviews are excluded from consensus calculations.
**Rationale code** -- A short code attached to a decision that categorizes the reasoning (e.g., `manual_accept`). Used alongside the free-text rationale to provide structured data about why a decision was made.
**Recommendation** -- A reviewer's suggested outcome for an application, selected after scoring. Four options: **advance** (move to the next stage), **reject** (decline the application), **hold** (flag for further committee discussion), **undecided** (no recommendation yet). Recommendations inform but do not determine the final decision. Reviewers can change their recommendation after submission by providing a reason.
**Reviewer** -- A user assigned to evaluate applications. Reviewers are first added to a cohort's reviewer pool, then assigned to specific applications with either `view` or `review` permission. Reviewers access their assignments through the review dashboard and score applications using the guided scoring interface.
**Rubric** -- A structured scoring framework defined in Flywheel and linked to a review stage. Contains multiple criteria, each with descriptive thresholds/levels. Rubrics standardize evaluation so all reviewers assess applications against the same dimensions.
**Self-assessment** -- A survey sent to members of an applicant's team (e.g., studio members). Each team member receives a unique token-based link that does not require login. Admins can track completion status, resend emails, and view responses. Self-assessment data is available to second-round reviewers during scoring.
**Stage** -- A phase within a cohort's review process. Common stage types are `application-reviews` (initial paper review), `first-round` (first interview round), and `second-round` (second interview round). Each stage has its own rubric, reviewer settings (minimum/maximum reviewers, blind review, self-review), and status. Stages are ordered sequentially and applications advance through them.

# Your Dashboard
The **My Reviews** dashboard is your home base. It shows every application assigned to you, organized by urgency so the most important items surface first.
## Getting There
Click **My Reviews** in the top navigation bar. This takes you to `/reviews`. A badge on the nav link shows how many assignments need your attention.
## Card Grid Layout
Active assignments appear as a card grid -- one, two, or three columns depending on your screen width. If you have assignments across multiple cohorts, cards are grouped under cohort headers. When you only have one cohort, the header is suppressed.
Each card shows:
- **Studio name** -- the applicant's studio or team name
- **Progress bar** -- small block indicators showing how far through the review stages this application has progressed
- **Stage name** -- which stage this assignment belongs to (e.g., "1st Interview", "2nd Interview"), plus the interview date if applicable
- **Status text** -- contextual information like "In progress" for drafts, "View only" for read-only assignments, or pipeline status
- **Action button** -- the primary action available to you
## How Cards Are Sorted
Cards are sorted by urgency, not alphabetically. The order is:
1. **Overdue** -- interview stages past their due date (shown in red)
2. **In progress** -- reviews you have started but not submitted (drafts)
3. **Ready to score** -- non-interview stages waiting for your input
4. **Post-interview ready** -- interview stages where the interview has already occurred
5. **Scheduled** -- upcoming interviews (future date)
6. **Awaiting scheduling** -- interview stages with no date set yet
7. **View only** -- assignments where you have read-only access
8. **Pipeline** -- applications not yet at this stage (shown dimmed)
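The urgency ordering above can be sketched as a rank-based sort. The category names and rank values are illustrative, not the actual implementation.

```typescript
// Sketch of urgency-first card sorting. Category names assumed.
type Urgency =
  | "overdue" | "in_progress" | "ready_to_score" | "post_interview"
  | "scheduled" | "awaiting_scheduling" | "view_only" | "pipeline";

const RANK: Record<Urgency, number> = {
  overdue: 0, in_progress: 1, ready_to_score: 2, post_interview: 3,
  scheduled: 4, awaiting_scheduling: 5, view_only: 6, pipeline: 7,
};

function sortCards<T extends { urgency: Urgency }>(cards: T[]): T[] {
  return [...cards].sort((a, b) => RANK[a.urgency] - RANK[b.urgency]);
}
```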
## Action Buttons
The button on each card reflects the most relevant action:
- **Score Now** -- the application is ready for you to score
- **Continue** -- you have a draft in progress
- **Prepare** -- an interview stage where you can review materials before the interview
- **View** -- view-only access to the application
Clicking the card or its action button opens the guided scoring interface.
## Search
When you have 10 or more active assignments, a search field appears at the top. Type a studio name to filter the visible cards.
## Completed Reviews
Below the active grid, a **completed** section lists reviews you have already submitted. Click the toggle to expand or collapse it. If you have no active assignments, completed reviews display automatically.
Each completed row shows the studio name, stage, your score percentage, and the date you submitted. Click **View** to revisit a completed review in read-only mode.
:::info
The count shown next to "My Reviews" in the navigation bar reflects actionable assignments only -- it excludes view-only and pipeline items.
:::

# Scoring an Application
The guided scoring interface walks you through each rubric criterion one at a time, then asks for your overall recommendation. This is where you do the actual work of evaluating an application.
## Opening the Scoring Interface
Click any active card on your **My Reviews** dashboard (or its **Score Now** / **Continue** button). This opens the guided scoring page at `/review/[cohortSlug]/guided`.
The page has a two-column layout:
- **Left column** -- background information about the application
- **Right column** -- the scoring form
## Left Column: Background Tabs
The left column has several tabs depending on the stage:
- **Application** -- the applicant's submitted answers, attachments, and links. Eligibility questions are filtered out. File attachments can be downloaded; URLs open in a new tab.
- **Reviews** -- completed reviews from earlier stages (hidden on the first stage). If blind review is enabled, reviewer identities are hidden.
- **Assessments** -- self-assessment results from team members (second-round stages only).
- **Script** -- interview questions configured for this stage, rendered from markdown. Use this to guide your interview conversation.
- **Notes** -- a private text area for your own interview notes. These auto-save and are tied to the specific stage. Only you can see them.
## Right Column: Scoring
By default, you score one criterion at a time in **focused mode**. A **Show all** toggle in the upper right switches to a view where every criterion is visible at once.
In focused mode, each criterion shows:
1. **Criterion number and name** (e.g., "Criterion 1/5 -- Mission Alignment")
2. **Description** of what to evaluate
3. **Score options** -- either threshold cards with descriptions (click to select), named levels, or numeric buttons depending on how the rubric is configured
4. **Notes field** -- optional per-criterion notes
Use the **Next Criterion** and **Previous** buttons to navigate. You can also use the **Jump to...** dropdown to skip to any criterion. A criterion must be scored before you can advance to the next one.
### Keyboard Shortcuts
In focused mode, you can score without touching your mouse:
- **Up/Down arrows** -- navigate between threshold or level options
- **Enter** -- select the focused option and advance to the next criterion
- **Left/Right arrows** -- move to the previous or next criterion
- **Number keys (1-9)** -- set a numeric score directly (when simple score buttons are shown)
## Saving and Submitting
Your work auto-saves every 1.5 seconds after you make a change. A "Saving..." / "Saved" indicator appears near the top right. You can also click **Save Draft** manually at any time.
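The 1.5-second auto-save is presumably debounced: the timer resets on each change, and the save fires once input pauses. A minimal sketch, assuming a generic debounce (not the actual implementation):

```typescript
// Minimal debounce sketch of the auto-save behavior described above.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer); // each new change resets the countdown
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Hypothetical usage: persist draft scores 1.5 s after the last change.
const autoSave = debounce(() => { /* save draft to server */ }, 1500);
```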
Once all criteria are scored, click **Proceed to Recommendation** to move to the final screen. This shows your score summary, lets you select a recommendation, and provides space for overall notes.
Click **Submit Review** when you are ready. A confirmation dialog explains that submission locks your scores and notes. After confirming with **Submit & Lock**, your review is complete and you are redirected to the dashboard.
:::warning
After submission, your scores and notes are locked. You can still change your recommendation if the review is for the application's current stage, but scores cannot be edited.
:::
:::tip
For interview stages, the **Submit Review** button is disabled until after the interview has occurred. You can still prepare by reviewing the application and scoring criteria beforehand.
:::

# Understanding Rubrics
Rubrics define the criteria you score applications against. They are created and managed in Flywheel, the external assessment engine, and loaded automatically when you open the scoring interface.
## What a Rubric Contains
Each rubric has a title and a list of **criteria**. A criterion is a single dimension you evaluate -- for example, "Mission Alignment" or "Team Dynamics." Every criterion includes:
- **Name and title** -- what the criterion is called
- **Description** -- what you should be evaluating
- **Maximum points** -- the highest score possible for this criterion
- **Weight** -- how heavily this criterion counts in the overall score (used in consensus calculations)
## Score Levels and Thresholds
Depending on how the rubric is configured in Flywheel, scoring guidance appears in one of three formats:
### Thresholds
Each criterion has named scoring bands with a title, description, and score range. For example:
- **Exceptional** (4-5): "Strong and nuanced understanding of their specific challenges..."
- **Competent** (2-3): "Basic recognition of some of their main challenges..."
- **Developing** (1): "Unaware of or defensive about their challenges..."
When you click a threshold, the score is set to the midpoint of that range.
### Named Levels
Each level has a fixed score value, a name, and a description. You click the level that best matches the applicant. The score is assigned directly from the level definition.
### Numeric Scores
When no thresholds or levels are configured, you see simple numbered buttons from 1 to the maximum. You select the number that best reflects your assessment.
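The threshold case above (score set to the midpoint of the band's range) can be sketched as follows. The threshold shape is assumed, and whether the midpoint is rounded is not specified in the docs.

```typescript
// Sketch: clicking a threshold assigns the midpoint of its score range
// (e.g. a 4-5 band yields 4.5). Rounding behavior, if any, is assumed away.
interface Threshold {
  title: string;
  min: number;
  max: number;
}

function thresholdScore(t: Threshold): number {
  return (t.min + t.max) / 2;
}
```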
## How Criteria Map to Application Questions
Each review stage has a rubric assigned through its Flywheel configuration. The scoring profile for a cohort defines a **map** that connects rubric criterion keys to application question keys. For example, a criterion called `gov_readiness` might map to the question `Q.GOV_PLAN`, meaning the applicant's answer to that question is what you should consider when scoring that criterion.
In practice, you see this mapping reflected in the scoring interface: the left column shows the applicant's answers, and the right column shows the corresponding criterion to score. The interface is designed so you can read the relevant answer and score it in the same view.
## Per-Stage Rubrics
Different review stages can use different rubrics. An application review stage might focus on written application quality, while an interview stage might use criteria like "Openness to Feedback" or "Cohort Fit." The rubric is loaded automatically based on the stage configuration -- you do not need to select one.
:::info
Rubrics are managed by administrators in Flywheel. If a rubric is missing or misconfigured, you will see a "Rubric Not Available" message when you try to open the scoring interface. Contact your cohort administrator if this happens.
:::
## Score Calculation
Your total score is the sum of all individual criterion scores. The percentage is calculated as total score divided by the maximum possible total. This score, along with your recommendation, feeds into the consensus view that administrators use when making decisions.

# Recommendations
After scoring all criteria for an application, you select a recommendation. This is your overall judgment on what should happen next with the applicant.
## The Four Options
You choose one recommendation from a dropdown on the **Final Recommendation** screen:
- **Advance to Next Stage** -- you believe this applicant should move forward in the process. Use this when the application or interview performance meets or exceeds the bar for the current stage.
- **Hold for Further Review** -- you are not ready to recommend advancing or declining. Use this when you see potential but have concerns that need committee discussion, or when the applicant falls in a borderline zone where additional input from other reviewers would help.
- **Decline Application** -- you believe this applicant should not continue. Use this when the application clearly does not meet the criteria for the cohort, or when significant concerns emerged during the review.
- **Undecided** -- you have not formed a recommendation yet. This is the default state. You must change it to one of the other three options before you can submit your review.
:::warning
You cannot submit your review while the recommendation is set to **Undecided**. All criteria must be scored and a recommendation selected before the **Submit Review** button becomes active.
:::
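The submit gate described above can be sketched as a simple check (field names are assumptions, not the actual data model):

```python
# Sketch of the submit gate: every criterion scored AND a recommendation
# other than "Undecided" selected. Field names are illustrative assumptions.
def can_submit(review):
    all_scored = all(score is not None for score in review["scores"].values())
    decided = review["recommendation"] != "Undecided"
    return all_scored and decided
```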
## The Recommendation Screen
When you click **Proceed to Recommendation** after scoring, you see:
- **Your total score** and percentage in the top right
- **Recommendation dropdown** -- select your choice here
- **Score summary** -- a breakdown of each criterion with your score
- **Overall notes** -- a free-text area for your overall assessment of the applicant
The score summary is read-only at this point (go back to scoring to change individual scores). The recommendation and overall notes can be edited freely before submission.
## Changing Your Recommendation After Submission
Once you submit, your scores and notes are locked. However, if the review is for the application's current stage, you can still change your recommendation. On the read-only review screen, click the **Change** button next to your current recommendation. A modal asks you to:
1. Select a new recommendation
2. Provide a reason for the change
The change is recorded with a timestamp and your reason. A full history of recommendation changes is visible below the current recommendation.
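A sketch of how such a change record might look (the field names here are assumptions for illustration, not the actual data model):

```python
from datetime import datetime, timezone

# Hypothetical record of a post-submission recommendation change:
# the old and new values, the reason, and a timestamp are appended to history.
def change_recommendation(review, new_recommendation, reason, now=None):
    now = now or datetime.now(timezone.utc)
    review.setdefault("history", []).append({
        "from": review["recommendation"],
        "to": new_recommendation,
        "reason": reason,
        "at": now.isoformat(),
    })
    review["recommendation"] = new_recommendation
```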
:::info
Recommendation changes are only available while the application is still at the stage you reviewed. Once the application advances to a later stage, your earlier recommendation becomes fully locked.
:::
## How Recommendations Factor Into Decisions
Your recommendation is one input among potentially several reviewers. Cohort administrators see all reviewer recommendations alongside consensus scores when making decisions on the **Decisions** tab. A pattern of "advance" recommendations supports moving an applicant forward, while mixed recommendations (some "advance," some "hold") typically trigger committee discussion.
The final decision -- accept, waitlist, or decline -- is made by the cohort admin, not determined automatically by reviewer recommendations.

View file

@ -0,0 +1,56 @@
# Interviews
Interview stages follow the same scoring workflow as application review stages, with additional features for preparing, taking notes, and working with scheduled interview times.
## The Dashboard for Interview Stages
Your **My Reviews** dashboard at `/reviews` handles both application reviews and interview assignments in a single view. Interview stage cards show additional information:
- **Stage label** -- displayed as "1st Interview" or "2nd Interview" instead of the raw stage name
- **Interview date** -- shown next to the stage name if an interview has been scheduled
- **"Unscheduled"** -- shown if the interview has not been booked yet
- **Due date** -- if a due date is set, it appears on cards where the interview has already occurred. Overdue items show the date in red with an "OVERDUE" label.
Interview cards sort by urgency alongside non-interview cards. Overdue interview reviews appear at the very top. Unscheduled interviews appear lower since there is nothing to act on yet.
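The ordering above can be sketched as a sort key (the card fields are assumptions for illustration):

```python
# Sketch of the urgency ordering: overdue first, scheduled next,
# unscheduled last (nothing to act on yet). Card fields are assumptions.
def urgency_key(card):
    if card["interview_date"] is None:
        return 2  # unscheduled: lowest priority
    return 0 if card["overdue"] else 1  # overdue reviews rise to the top

cards = [
    {"name": "A", "interview_date": None,         "overdue": False},
    {"name": "B", "interview_date": "2026-03-20", "overdue": True},
    {"name": "C", "interview_date": "2026-03-28", "overdue": False},
]
ordered = sorted(cards, key=urgency_key)  # B (overdue), C (scheduled), A (unscheduled)
```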
## Preparing for an Interview
Before the interview takes place, you can open the scoring interface to prepare. Click **Prepare** on the card. Inside, you have access to:
- **Application tab** -- review the applicant's full submission
- **Reviews tab** -- read completed reviews from earlier stages to understand how the applicant has been evaluated so far
- **Script tab** -- view the interview questions configured for this stage (rendered from markdown). Use these to guide your conversation.
- **Notes tab** -- start writing private notes before, during, or after the interview
You can score criteria and save drafts during preparation, but you cannot submit the review until after the scheduled interview time has passed.
:::tip
Open the scoring interface on a second screen during the interview itself. The **Script** tab gives you your question guide, and the **Notes** tab lets you capture observations in real time.
:::
## Taking Interview Notes
The **Notes** tab provides a private text area tied to the specific stage. Your notes auto-save as you type (with a short delay). Only you can see these notes -- they are not visible to other reviewers or the applicant.
Notes are stored separately from the review itself. Even if you have not started scoring, you can use the notes tab to capture thoughts during or after the interview.
## Scoring After the Interview
Once the interview time has passed, the scoring workflow is identical to a regular application review. Score each criterion, select your recommendation, write your overall notes, and submit.
For second-round interview stages, an additional **Assessments** tab appears in the left column, showing self-assessment results from the applicant's team members. This gives you context on how the team views its own strengths and challenges.
## What Happens After You Submit
After submission:
1. Your review is locked (scores and notes become read-only)
2. You are redirected back to the dashboard
3. The assignment moves to the **completed** section
4. Your recommendation can still be changed if the application remains at this stage
The cohort administrator is notified when enough reviewers have completed their reviews for a stage. They will then review the consensus scores and make a decision on each application.
:::info
Interviews are scheduled through Cal.com by the cohort administrator. You do not book interviews yourself. If your dashboard shows "Unscheduled" for an interview assignment, the booking has not been created yet -- no action is needed from you.
:::

View file

@ -1,13 +0,0 @@
---
title: ''
collection: Impact
path: Impact/
parentDocument: null
outlineId: a799ff50-a100-4a62-a0ab-8d57647e77c2
updatedAt: '2026-03-01T19:36:38.224Z'
createdBy: Jennie R.F.
---
[Structures for Impact](/doc/f63f29ef-7e41-4eb7-9238-140030db2caa)

View file

@ -0,0 +1,164 @@
---
title: Ontario Funding Landscape
collection: Ontario Hub
path: Ontario Hub/Ontario Funding Landscape
parentDocument: null
outlineId: e8c241c0-7c0f-45c8-919f-e4b3affc8c42
updatedAt: '2026-03-16T16:06:00.507Z'
createdBy: Jennie R.F.
---
- [ ] Explain ideas for experimental R&D
---
OIDMTC details, Ontario Creates funding programs, any provincial co-op incentives, GTA-specific resources. Feeds into Session 6 content.
## OIDMTC 40% labour credit
[Ontario Interactive Digital Media Tax Credit](https://ontariocreates.ca/tax-incentives/oidmtc) is an extremely valuable incentive for ON studios.
(Note: studios can claim retroactively, up to 3 years from fiscal year end!)
It provides:
* a refundable tax credit of 40% on eligible Ontario labour expenditures for studios that develop and self-publish their own games (weirdly called "non-specified products")
* up to $100,000 in marketing and distribution expenditures per product
* Fee-for-service work earns a 35% credit.
*There is no annual cap on eligible labour expenditures*
Administered through Ontario Creates (which issues a certificate of eligibility) and the CRA (which processes the credit on the T2).
Studios must apply within 18 months of the end of the tax year **in which a product was completed**.
The administration fee is 0.15% of eligible expenditures ($1,000-$10,000).
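A back-of-envelope estimate using the 40% rate and the 0.15% admin fee (with its $1,000 floor and $10,000 cap) stated above. The labour figure is invented for illustration; confirm real numbers with a professional:

```python
# Illustrative OIDMTC estimate for a non-specified (self-published) product.
# Labour figure is invented; rates/fee bounds are as stated in this doc.
eligible_labour = 300_000

credit = 0.40 * eligible_labour                               # 120,000.0 refundable
admin_fee = min(max(0.0015 * eligible_labour, 1_000), 10_000)  # 1,000 (floor applies)
```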
There are 4 streams, with different requirements.
1. Non-specified (own IP, self-published)
1. 40%
2. 80/25 rule; product must be completed; revenue stream required.
2. Specified (fee-for-service)
1. 35%
2. 80/25 rule; arm's-length purchaser; product completed
3. Qualifying digital game corporation
1. 35%
2. Min $1M Ontario labour over 36 months; product need not be completed
4. Specialized digital game corporation
1. Min $500K Ontario labour/year; 80%+ payroll or 90%+ revenue from games; annual filing
### The 80/25 rule
* 80% of total dev labour must be performed in Ontario
* 25% must be paid as wages to employees of the claiming corporation (**not contractors**)
* This second requirement is particularly important for co-ops. Worker-members ***must be on payroll*** receiving T4 slips to count toward the 25% employee test. If a co-op treats its members primarily as independent contractors, it will likely fail this threshold.
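The two thresholds are easy to sanity-check against your own books. An illustrative calculation (all figures invented; this is not tax advice):

```python
# Illustrative check of the 80/25 rule; all figures are invented.
ontario_labour = 400_000     # dev labour performed in Ontario
total_dev_labour = 450_000   # all development labour
t4_wages = 150_000           # wages to employees of the claiming corp (T4, not contractors)

passes_80 = ontario_labour / total_dev_labour >= 0.80   # ~88.9% -> True
passes_25 = t4_wages / total_dev_labour >= 0.25         # ~33.3% -> True
eligible = passes_80 and passes_25
```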
### Stacking OIDMTC
#### SR&ED
Studios can claim OIDMTC and SR&ED credits in the same year, but *cannot claim the **same** labour expenditures under both programs.*
Approach ideas:
* Allocate experimental R&D work[^1] to SR&ED and regular game dev work to OIDMTC
SR&ED provides a 35% refundable credit for CCPCs on the first $6 million in qualifying expenditures.
Studios should also claim the Ontario Innovation Tax Credit (OITC), which provides an additional 10% refundable credit on SR&ED expenditures in Ontario
SR&ED now also includes capital expenditures, such as equipment.
SR&ED credits are not considered "government assistance" that would reduce OIDMTC eligible expenditures
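Since the same labour dollars cannot be claimed under both programs, the allocation approach above amounts to a split. An illustrative sketch (all figures invented; rates as stated in this doc):

```python
# Illustrative allocation: experimental R&D labour carved out for SR&ED/OITC,
# the remainder claimed under OIDMTC. All figures invented; not tax advice.
total_labour = 500_000
sred_labour = 100_000                  # experimental R&D portion
oidmtc_labour = total_labour - sred_labour

sred_credit = 0.35 * sred_labour       # 35,000.0 federal refundable (CCPC)
oitc_credit = 0.10 * sred_labour       # 10,000.0 Ontario Innovation Tax Credit
oidmtc_credit = 0.40 * oidmtc_labour   # 160,000.0
```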
#### NRC IRAP
NRC IRAP provides non-repayable grants for innovative R&D *projects* with clear milestones (vs. SR&ED, which covers broader annual activities). You must be able to cover 20% of wage costs and 50% of contractor costs.
You can claim IRAP on a subset of your work, then claim tax credits on everything
**Important:** Requires pre-approval before project start. Cannot be claimed retroactively. Not all work qualifies. It needs to be "innovative" risky R&D. You really need to build trust with your assigned Industrial Technology Advisor.
#### Canada Media Fund
Experimental Stream:
* Development funding: Up to $15,000
* Production funding: Up to $250,000
* Marketing funding: Up to $30,000
Approach:
1. Use CMF for production costs (voice acting, music licensing, marketing, equipment rental)
2. Use OIDMTC/SR&ED for internal labour costs
##### Eligibility
* Canadian ownership, control, and key personnel
* Innovative and experimental
* Cultural or educational value
* Public distribution - requires a digital distributor
* Provide very detailed project plan and budget
#### Provincial grants
##### Ontario Creates
* Interactive Digital Media Fund (IDM Fund): Up to $250,000 for development/production
* Market Development Programs: Travel, marketing, partnership support
##### Ontario Arts Council
#### Toronto Arts Council
#### Youth employment programs
### Eligibility
* For-profit cooperative corporations can likely claim it.
* The legislation requires only "Canadian corporation" status without specifying incorporation type, ***though co-ops must ensure a for-profit structure and employee treatment of worker-members (for the 25% employee test), and should seek professional confirmation, since no explicit co-op guidance exists in the program.***
## Functional cooperative legal framework
* federal patronage dividend deductions
## Dense GTA support ecosystem
* Ontario Creates funding up to $300K
* But EVERYTHING designed with standard corporations in mind
* Coops are not *excluded,* but they are never explicitly addressed. This creates ambiguity that studios need to resolve with program admins and tax/accountants.
---
## What (Ontario) coops should do
* Coops should incorporate as *for-profit* under CCA if they want to take advantage of any of these opportunities.
* Track EVERY expense carefully from the first day of development - start now!
* Put workers on payroll - they must be *employees*, not contractors, to claim credits
* Join the OCA and CWCF for development support and the Tenacity Works loan fund
* Do Futures Forward training to unlock Ontario Creates (IF less than 3 years in IDM)
* Begin claiming OIDMTC from the first eligible tax year
* Carefully separate SR&ED-eligible experimental work
* Get in touch with program admins at OC EARLY to confirm eligibility before applying
[^1]: Work must address technological uncertainty through systematic investigation

View file

@ -1,40 +0,0 @@
---
title: Schedule
collection: Programs
path: Programs/Schedule
parentDocument: null
outlineId: 2cd2b4d9-2027-4cb1-9767-2c7eff97e8b1
updatedAt: '2026-03-01T18:58:57.212Z'
createdBy: Jennie R.F.
---
# Program Schedule
## Stage 1 (Months 1-2)
| Activity | Time | Frequency |
|----------|------|-----------|
| Workshops and discussions | 2 hours | Weekly |
| Peer Support meetings | 1 hour | Weekly |
| Asynchronous check-ins via Slack | 30 min - 1 hour | Weekly |
| Networking and social events | 1 hour | Monthly |
| Studio development work (outside meetings) | 2-3 hours | Weekly |
| Financial support | $5,000 | |
## Self-assessment
| Activity | Time | Frequency |
|----------|------|-----------|
| Evaluate team alignment and program fit | 1 hour | One time |
| Draft plan for Stage 2 | 2 hours | One time |
| Facilitated collective evaluation | 1 hour | One time |
## Stage 2 (Months 3-6)
| Activity | Time | Frequency |
|----------|------|-----------|
| Peer Support meetings | 1 hour | Weekly |
| Asynchronous check-ins via Slack | 30 min | Weekly |
| Peer-led workshops and discussions | 2 hours | Every other week |
| Peer workshop preparation and delivery | 4 hours | One time |
| Studio development work (outside of meetings) | 2-3 hours | Weekly |
| Financial support | $20,000 | |

View file

@ -0,0 +1,149 @@
---
title: Accountability & Responsibility
collection: Vault Archive
path: Vault Archive/Accountability & Responsibility
parentDocument: null
outlineId: 4b28da50-5760-4649-9e7b-a0f220b52059
updatedAt: '2026-03-09T16:30:33.979Z'
createdBy: Jennie R.F.
---
*A cohort discussion facilitated by* @[Henry Faber](mention://e5cf6913-a0e1-4126-9844-ff388fd3d37b/user/0f9c4c8c-a25a-4ec9-b198-c95546c571d5)
## Why We're Talking About This
People use "accountability" and "responsibility" interchangeably. That's interesting and important to unpack - because historically structured studios have confused these words too, and often on purpose, to encourage a type of connection to the company that is unwarranted.
Cohort participants brought up additional reasons to dig into this:
* Corporate versus co-op structures, and how little power in decision-making most people actually have in corporate settings
* Assumptions versus articulated, defined processes
* This is the mechanism by which things happen - where the rubber hits the road
* Concepts like accountability operate differently inside specific structures
## How It Works in Historically Structured Studios
This is actually the root of most of the confusion.
Companies in a capitalist structure require capital to get going. There's either an owner who has the money, investors, or some combination. Small companies and big-hearted people might be doing this because they're passionate about the product, the audience, or collaborating with people. But the minute they go into this structure, power dynamics and money step into play.
Capitalism is completely inequitable. It's really all about burning the oxygen for its own growth and not thinking about anyone else - because that's what companies incorporated in North America are legally defined to do.
In a traditional structure:
**Owner/Investors → CEO → Directors → Management → Workers**
The CEO has one job: steward the company and have its actions be accountable to the owners and investors. Good or ill, whatever happens, it comes down on the CEO's shoulders. If something goes wrong below the CEO, the CEO is accountable for it. (We know that's not entirely true in practice, but that's how it's supposed to work.)
Directors have their areas of specialty or focus. Management sits below them. Workers below that. Any of these folks can have shares, but unless they've put in money themselves, they're always subservient to the people who have - and to those who have put in the most.
Directors, management, and workers have responsibilities in service of the studio or corporation. From getting coffee for a meeting to setting up a giant audience engagement plan with cross-marketing - still just responsibilities. Different levels of intensity, different pay scales, different perks and privileges. But at the end of the day, if these things fail due to an overall bad plan, the CEO is the one accountable to the investors.
People can shirk responsibilities, not be aligned, not have the right tools offered to them, have health issues or systemic barriers. That's why there are policies and performance reviews - those things separate how the job is being done based on expectations set by the job description. But none of that changes what the CEO is ultimately accountable for: serving the best interest of the owner and investors.
This is why a CEO can cut a game entirely. After analysing the costs, it doesn't matter how long the workers worked on it. The CEO's accountability isn't to get the game out there - it's to make sure the investors receive money. They look at the numbers, look at the costs for launching and post-launch support, and say: if we cut this now, along with the workers, our bottom line goes way up and our investors will be happy.
So if you're a worker and you're not really accountable to any of this, you might be working in a way where you think you're accountable for something and you're not. This is where things get obfuscated. "You're accountable to management." "We're accountable to the budget." No - you're responsible for executing the budget. That's part of your job. But you're not actually accountable for it in these traditional structures.
## How It Works in a Co-op
In a worker co-op, the members are driving what's happening. They come together to set the goals and values for the organisation. Directors have legal obligations and responsibilities to make sure the co-op is compliant. Depending on structure, there's management and workers - but unless they're contracted out, they're all considered members.
In a collective where one member gets one vote for decision-making, the accountability lies on everybody. This encourages a flattening of organisational layers and an emphasis on group decision-making.
People still have responsibilities within the co-op. But the difference is that your responsibilities affect how you get your work done in a way that makes you accountable to your other members about how you do that work.
A good example: if an emergency comes up and you can't do a task you're responsible for, and you have the capacity to find someone else to do it or inform people as soon as possible or mitigate how it might affect others - that's understanding you're accountable to your fellow members, even as you pass on the responsibility. In a traditional structure, they throw money at that problem and deploy structural mechanisms that try not to make any one person feel too essential - while also extracting everything they can when that person is available. It's a lot different when you're in a co-op or collective structure. Or at least it should be.
## What Does Being Accountable to Values Mean?
This one matters to us because it's a core piece of how GammaSpace operates: we are accountable to each other for upholding our values. That's a strong thing that requires exhaustive discussion and work-through, even when - especially when - it's extremely hard.
One participant described it as "the buck stops here." If something doesn't stand up to a value, it needs to be re-evaluated and changed until it does. The value is the benchmark the thing is being held up to. When we're accountable to our values, the things we do - our responsibilities, how we treat each other - are measured against those values and used as the watermark of: are we doing it?
Another participant called it a much more relationship-focused approach. And it is. Being accountable to values means people are not treated as disposable - not treated as Human Resources with a capital H, capital R. That's not to say people can't be removed or a position made obsolete, but it means you are treating others and are treated as though being a person has value.
This extends further. When someone says "my time as Henry is very valuable to the community," that's lovely to hear. But the next step is: I feel the same way about everybody. I want them to feel the same way. If people aren't feeling valued - if they feel like their contributions aren't being taken seriously or that it wouldn't matter if they disappeared - what can we do about that? That's about being accountable to our values fully and having processes baked into our structure to support it.
One participant put it well: "When all we have is each other, the relationship between values, capacity, and resources is incredibly important." This gets more complicated the moment the collective is responsible and accountable to each of its members for providing living wage, psychological safety, opportunities for growth, and understanding what the limits of scale are based on everyone's goals.
## The Tricky Part: Not Disposable, But With Consequences
"The balance between making sure people aren't treated as disposable and also making sure going against values has consequences is a tricky thing."
This came up as a key tension. One participant connected it to rehabilitation and prison abolitionist thinking - the idea that instead of immediately booting someone out for saying something the group didn't like, you take the approach that people should have the bandwidth and the opportunity to self-reflect, change, or inform the group that they're changing. Maybe the group can change with them. The key to preventing disposability is letting people reflect on themselves and a little bit on each other to inform those self-reflections.
But then: what happens when introspection doesn't happen? We're all on different levels of our journey. There are situations where introspection is warranted but isn't triggered. What do you do?
The obvious line is if someone outright refuses introspection - that's clear grounds for dismissal. If someone is refusing to engage in good faith and doesn't want to be held accountable, then yes. But it gets complicated when someone is trying to self-reflect and just isn't very good at it yet. It's a skill, and some people are going to be worse at it than others.
There's also a time component. Sometimes someone can't introspect for six months because they're dealing with something else entirely. And then they come back and say, "Actually, I was dealing with this, and now I've realised there's an issue." How do you hold space for that? These are relationship questions - people to people.
This is where your ability to have conversations, recognise when conflict resolution is possible or not, access external supports, and assess whether something is interpersonal or structural all come into play. These aren't questions with tidy answers, but the more we're aware this happens and reflect on it, the more likely we are to have the kinds of conversations that help us through.
## Words Don't Mean the Same Thing
It can't just be a document. It has to be a process - one that is repeated, referred to, and breaks down the language to get a little closer. It'll never be 100%.
For example, at GammaSpace, when we say "challenging systemic norms," a smaller group had a pretty good idea of what that meant. As the membership has grown, we've had to do extensive why, what, and how work on that so we all actually get on the same page.
You can't just say "don't be an asshole" in your code of conduct. What does that mean? This group might have some ideas - say please and thank you, don't interrupt, don't swear in the channel unnecessarily. But that's not enough, because some people might've dealt with a different type of asshole and didn't realise they were being one, or that their response to one was also problematic. That's why it has to be broken down.
And it can't come from the traditional structure where culture is "created" - because if the CEO is accountable to shareholders and it's all about making money, how could the culture not be in service of that? Or at least one-sided - an expectation of everyone below the CEO, but not the CEO themselves?
Companies say "come as you are" but what they mean is: come as someone ready to make us money ASAP. And that has allowed people to be assholes in the culture because they appear to get things done or bully people into producing. "We're a family here" - don't we all know how that goes! My family has some issues. Please don't be like my family.
## Stewardship, Not Leadership
Here's a somewhat trick question: is accountability baked into leadership?
In a collective where we're all accountable to each other, is it actually leadership? At GammaSpace, when someone bottom-lines a project or helps organise something in a lead capacity, we refer to it as stewardship. That word is important because stewardship is about understanding the capacity, expertise, and will to take on a collection of tasks and responsibilities on behalf of the co-op. Anyone could do it, assuming they feel they can handle the requirements - intensity, commitment level, skills required. None of those are inherently accountability things in themselves. The accountability part comes from the values and processes the collective has defined together: transparency, reporting, equitable treatment.
One participant pushed back: accountability on the business end and on the project end are entirely different and disconnected. Worker co-op structures don't automatically have accountability baked into project leadership - because the co-op model doesn't know what you're trying to do in your day-to-day operations. That always needs bespoke conversations and structures.
Henry pushed further: but what does that have to do with accountability in a collective setting? If we're all accountable to our values, it's up to everyone to uphold them. The accountability is about whatever values and processes the group has defined together. The responsibilities within a project are a different category. Splitting these hairs matters - because historically structured studios trick and exploit people by conflating the two.
It comes down to processes and tools. Can we divide this up in a way that recognises our commitment to each other and how that commitment informs our work?
## The Roommate Problem
One participant brought up the roommate problem - something discussed early in the co-op development process. When you live with a roommate, everything they do drives you up the wall. The dishes, the mess, the chaos. But when you live with a partner you love, those same annoyances exist within a context of understanding. You know who they are as a person, and within the relationship you develop steps, processes, and conversations to deal with the tissues on the table.
The difference is that your roommate is just there. The person you love is someone you've built trust and communication with. You keep track of the fact that they're human and not just an output machine. In a co-op, you need different aspects of relationship to keep it human: play, working time, honest communication. And all of that requires honesty - you can't come into a conversation not telling anyone what's up and then expect them to know.
## When You Butt Up Against Capitalism
Even as a co-op, you engage with capitalism. You engage with platform holders, showcases, clients. When you have a transactional relationship with a client, they care about one thing: can the people they hired get this done?
In that situation, you're responsible for doing the job. You can say when you disagree, but the ultimate decision is up to the client. What you're accountable for runs back to your own co-op: are they paying us properly? Are they violating the contract in ways that affect our ability to maintain relationships with our members?
A co-op can amplify its values in these relationships - charge a co-op administration fee, explain what it means, offer to present together about how working with a co-op gets things done. But at the end of the day, if it's not written in the task list, it doesn't change the transactional relationship.
One participant noted the gap between being accountable for doing your job and being accountable for the quality of the end result - and how that absence of linkage is a feature of the studio system, not a bug. Transposing yourself into a traditional studio, you can feel the empathy for what workers deal with daily.
## Accountability Performed Versus Accountability Lived
Another participant raised something sharp: in studios they work with, well-intentioned people will have a nice conversation about a problem, all agree it needs to stop, and then factions form behind the scenes to keep the disagreement going. The appearance of accountability sometimes undermines actual accountability. What demonstrates accountability beyond saying "I feel accountable" and acting it out?
This is especially true when people who are trying to make changes from within a traditional structure take on accountability even when it's beyond their control. That creates an unbearable, untenable, unsustainable weight on someone trying to fix the problem from inside the house. It can't be done in a healthy way. That's why it's important, if at all possible, to release yourself from that and work on a different way of doing things.
## Carrying Capacity
One of the key concepts that has come into vocabulary through co-op development is carrying capacity. The carrying capacity of a community or organisation is defined by its ability and its engagement with the practice of having difficult conversations - conversations that use the tools, words, and processes you've worked on together collectively.
This is fundamental because it's the only way to start separating and getting through the weeds of accountability, responsibility, personal action, structural change, power dynamics, and the only way to start making the change you want to see in the world.
## Practical Tools
[Why-What-How](/doc/ff5419e1-cfec-48fb-b988-67b8faaad067) **framework**: Start with why you're talking about something. Then what would you do to structure or address it. Then how would you do those whats. This creates a roadmap of tools and processes that come from breaking down your own language together.
**Layers of Effect**: How do decisions affect who you're talking about? What are the primary positives? The primary negatives you can imagine? What about those affected one layer out? And one layer out from that? How do you measure that over time?
**Reporting and transparency**: Transparency of reporting is essential, whether that's a co-working channel with value flow and emojis, a monthly meeting about outcomes, a combination - whatever fits. The reporting models behaviour. People may not read every detailed post, but having something to go back to and reference matters. And if someone says "if only I'd known about this sooner" when they were tagged a week ago - the correct answer in a healthy situation is: I'm responsible for catching up, or I'm responsible for coming to you and saying I'm overwhelmed, can you walk me through this?
**Self-evaluation**: The individual acts of understanding your own capacities, supports, and time - and being able to evaluate those on a regular basis - are things each person should be responsible for in reporting to their collective.
**Conflict resolution processes**: Recognising when conflict resolution is possible or not, having external supports available, assessing whether something is interpersonal or structural, and looking at how the collective is being affected by the whole thing.
## Summary
Accountability is for big decisions that affect the actual values and goals of an organisation. Responsibility is for tasks in service of those goals. In a traditional structure, these get conflated on purpose to extract more from workers. In a co-op or collective, the work is to keep them distinct while recognising that your responsibilities do affect how you're accountable to your fellow members. The tools for navigating all of this are relational: processes, language, frameworks, and - above all - the ongoing practice of having difficult conversations together.

View file

@@ -1,10 +1,10 @@
 ---
 title: Actionable Steam Metrics
-collection: Strategy
-path: Strategy/Actionable Steam Metrics
+collection: Vault Archive
+path: Vault Archive/Actionable Steam Metrics
 parentDocument: null
 outlineId: fc5930fd-64b1-4771-9fb7-681828b23be0
-updatedAt: '2026-03-01T18:21:37.664Z'
+updatedAt: '2026-03-09T16:30:49.655Z'
 createdBy: Jennie R.F.
 ---
 # Actionable Steam metrics

View file

@@ -1,10 +1,10 @@
 ---
 title: Actionable Values
-collection: Studio Development
-path: Studio Development/Actionable Values
+collection: Vault Archive
+path: Vault Archive/Actionable Values
 parentDocument: null
 outlineId: c0d1b387-2eb5-431f-a709-8fe40449ec0d
-updatedAt: '2026-03-01T18:21:38.120Z'
+updatedAt: '2026-03-09T16:30:43.741Z'
 createdBy: Jennie R.F.
 ---
 # Actionable Values

View file

@@ -1,10 +1,10 @@
 ---
 title: Business Planning
-collection: Strategy
-path: Strategy/Business Planning
+collection: Vault Archive
+path: Vault Archive/Business Planning
 parentDocument: null
 outlineId: 74c28b8b-1d17-42a7-9201-6edd2c869816
-updatedAt: '2026-03-01T18:21:38.710Z'
+updatedAt: '2026-03-09T16:30:51.272Z'
 createdBy: Jennie R.F.
 ---
 # Business planning for impact

View file

@@ -1,10 +1,10 @@
 ---
 title: Canada Council Funding
-collection: Strategy
-path: Strategy/Canada Council Funding
+collection: Vault Archive
+path: Vault Archive/Canada Council Funding
 parentDocument: null
 outlineId: a26fb944-2e29-4807-b0fb-ce3a129417ab
-updatedAt: '2026-03-01T18:59:18.710Z'
+updatedAt: '2026-03-09T16:30:52.926Z'
 createdBy: Jennie R.F.
 ---
 # Fund your game with Canada Council

View file

@@ -1,10 +1,10 @@
 ---
 title: CMF Quick Tips
-collection: Strategy
-path: Strategy/CMF Quick Tips
+collection: Vault Archive
+path: Vault Archive/CMF Quick Tips
 parentDocument: null
 outlineId: f1746305-b496-44a0-8b21-4ba8b134566b
-updatedAt: '2026-03-01T18:21:39.610Z'
+updatedAt: '2026-03-09T16:30:54.442Z'
 createdBy: Jennie R.F.
 ---
 # CMF Application Tips & Info

View file

@@ -1,10 +1,10 @@
 ---
 title: Co-op Structure
-collection: Studio Development
-path: Studio Development/Co-op Structure
+collection: Vault Archive
+path: Vault Archive/Co-op Structure
 parentDocument: null
 outlineId: 61ee592c-4496-4931-9fae-3c5a146915de
-updatedAt: '2026-03-01T18:21:40.009Z'
+updatedAt: '2026-03-09T16:30:41.720Z'
 createdBy: Jennie R.F.
 ---
 # Co-op Structure and Value Flow

View file

@@ -1,10 +1,10 @@
 ---
 title: Decisions and Conflict
-collection: Studio Development
-path: Studio Development/Decisions and Conflict
+collection: Vault Archive
+path: Vault Archive/Decisions and Conflict
 parentDocument: null
 outlineId: 9e296af5-6a96-4c71-b81c-ad5f22862a91
-updatedAt: '2026-03-01T18:21:40.531Z'
+updatedAt: '2026-03-09T16:30:40.043Z'
 createdBy: Jennie R.F.
 ---
 # Decisions, Conflict, and Prioritization

View file

@@ -1,10 +1,10 @@
 ---
 title: Expo Tips
-collection: Strategy
-path: Strategy/Expo Tips
+collection: Vault Archive
+path: Vault Archive/Expo Tips
 parentDocument: null
 outlineId: 7c188785-66cf-415e-861d-2a267d732890
-updatedAt: '2026-03-01T18:21:41.037Z'
+updatedAt: '2026-03-09T16:30:55.955Z'
 createdBy: Jennie R.F.
 ---
 # Rocket Adrift Tips for In-Person Expositions

View file

@@ -1,10 +1,10 @@
 ---
 title: Financial modelling
-collection: Strategy
-path: Strategy/Financial modelling
+collection: Vault Archive
+path: Vault Archive/Financial modelling
 parentDocument: null
 outlineId: 06990e9a-120e-4944-9899-f76ddbcbe92a
-updatedAt: '2026-03-01T18:55:43.817Z'
+updatedAt: '2026-03-09T16:30:57.425Z'
 createdBy: Jennie R.F.
 ---
 # Navigating Capital

View file

@@ -1,10 +1,10 @@
 ---
 title: Game Discovery Toolkit
-collection: Strategy
-path: Strategy/Game Discovery Toolkit
+collection: Vault Archive
+path: Vault Archive/Game Discovery Toolkit
 parentDocument: null
 outlineId: d677f658-ce68-4384-b988-5454aa7caae7
-updatedAt: '2026-03-01T18:21:42.433Z'
+updatedAt: '2026-03-09T16:30:59.343Z'
 createdBy: Jennie R.F.
 ---
 # Summary: The Complete Game Discovery Toolkit

View file

@@ -1,10 +1,10 @@
 ---
 title: Impact Measurement
-collection: Impact
-path: Impact/Impact Measurement
+collection: Vault Archive
+path: Vault Archive/Impact Measurement
 parentDocument: null
 outlineId: bb38d941-cfac-4255-b279-65fc3c03c7da
-updatedAt: '2026-03-01T18:24:36.274Z'
+updatedAt: '2026-03-09T16:31:40.122Z'
 createdBy: Jennie R.F.
 ---
 # Developing your impact measurement framework

View file

@@ -1,10 +1,10 @@
 ---
 title: Market Analysis
-collection: Strategy
-path: Strategy/Market Analysis
+collection: Vault Archive
+path: Vault Archive/Market Analysis
 parentDocument: null
 outlineId: b920b385-ea72-4ea3-ae97-794a54cb1881
-updatedAt: '2026-03-01T18:21:43.529Z'
+updatedAt: '2026-03-09T16:31:00.995Z'
 createdBy: Jennie R.F.
 ---
 # Market Analysis

View file

@@ -1,10 +1,10 @@
 ---
 title: Pitching to Publishers
-collection: Strategy
-path: Strategy/Pitching to Publishers
+collection: Vault Archive
+path: Vault Archive/Pitching to Publishers
 parentDocument: null
 outlineId: 6ae36b62-0f9c-4363-aff0-f4a5c9756e9f
-updatedAt: '2026-03-01T18:21:43.965Z'
+updatedAt: '2026-03-09T16:31:02.443Z'
 createdBy: Jennie R.F.
 ---
 # Pitching to Publishers

View file

@@ -1,10 +1,10 @@
 ---
 title: Process Development
-collection: Studio Development
-path: Studio Development/Process Development
+collection: Vault Archive
+path: Vault Archive/Process Development
 parentDocument: null
 outlineId: 05d75e6b-a613-4b74-b564-61bbd395916b
-updatedAt: '2026-03-01T18:21:44.439Z'
+updatedAt: '2026-03-09T16:30:38.236Z'
 createdBy: Jennie R.F.
 ---
 # Collaboration, Process Development and Tools

View file

@@ -1,10 +1,10 @@
 ---
 title: Publisher Contract Review
-collection: Strategy
-path: Strategy/Publisher Contract Review
+collection: Vault Archive
+path: Vault Archive/Publisher Contract Review
 parentDocument: null
 outlineId: a30f5d72-cedb-4d5f-a76f-ae477482efda
-updatedAt: '2026-03-01T18:21:44.885Z'
+updatedAt: '2026-03-09T16:31:02.995Z'
 createdBy: Jennie R.F.
 ---
 # Whitethorn Games Contract Review

View file

@@ -1,10 +1,10 @@
 ---
 title: Results Flow
-collection: Impact
-path: Impact/Results Flow
+collection: Vault Archive
+path: Vault Archive/Results Flow
 parentDocument: null
 outlineId: 63517541-9fbe-462a-82b7-2320bb1215e5
-updatedAt: '2026-03-01T18:23:20.054Z'
+updatedAt: '2026-03-09T16:31:37.795Z'
 createdBy: Jennie R.F.
 ---
 ## Creating a results flow for an indie game studio

View file

@@ -1,10 +1,10 @@
 ---
 title: Self-Assessment
-collection: Programs
-path: Programs/Self-Assessment
+collection: Vault Archive
+path: Vault Archive/Self-Assessment
 parentDocument: null
 outlineId: 9fd5d3da-5c29-41bf-9e63-3a119d10a959
-updatedAt: '2026-03-01T18:21:47.182Z'
+updatedAt: '2026-03-09T16:31:28.571Z'
 createdBy: Jennie R.F.
 ---
 [Self-Assessment Document](https://docs.google.com/document/d/15og3YqFdMO3o3zr-fbYgwPHgbQnevbxZ7SVCcbjRhc8/edit?usp=drive_link) - go to the File menu and select *Make a Copy*.

View file

@@ -1,10 +1,10 @@
 ---
 title: Stages of Coop Development
-collection: Programs
-path: Programs/Stages of Coop Development
+collection: Vault Archive
+path: Vault Archive/Stages of Coop Development
 parentDocument: null
 outlineId: 5b97da75-ccb3-4951-89a7-c7649fe03a4b
-updatedAt: '2026-03-01T19:37:17.995Z'
+updatedAt: '2026-03-09T16:31:20.659Z'
 createdBy: Jennie R.F.
 ---
 # Stages of Cooperative Development

View file

@@ -1,10 +1,10 @@
 ---
 title: Structures for Impact
-collection: Strategy
-path: Strategy/Structures for Impact
+collection: Vault Archive
+path: Vault Archive/Structures for Impact
 parentDocument: null
 outlineId: f63f29ef-7e41-4eb7-9238-140030db2caa
-updatedAt: '2026-03-01T18:21:48.136Z'
+updatedAt: '2026-03-09T16:31:04.212Z'
 createdBy: Jennie R.F.
 ---
 # Choosing an impactful business structure

View file

@@ -1,10 +1,10 @@
 ---
 title: Telling Your Story
-collection: Studio Development
-path: Studio Development/Telling Your Story
+collection: Vault Archive
+path: Vault Archive/Telling Your Story
 parentDocument: null
 outlineId: 2e2c01bf-f6b8-4d03-8a44-835893d884bd
-updatedAt: '2026-03-01T18:21:48.695Z'
+updatedAt: '2026-03-09T16:30:35.971Z'
 createdBy: Jennie R.F.
 ---
 # Telling Your Story

View file

@@ -1,10 +1,10 @@
 ---
 title: TikTok Meeting Notes
-collection: Strategy
-path: Strategy/TikTok Meeting Notes
+collection: Vault Archive
+path: Vault Archive/TikTok Meeting Notes
 parentDocument: null
 outlineId: 39fd382a-f253-467e-8174-869afabc2860
-updatedAt: '2026-03-01T18:52:14.746Z'
+updatedAt: '2026-03-09T16:31:05.466Z'
 createdBy: Jennie R.F.
 ---
 # TikTok Marketing Meeting: July 3, 2023