Goals:
- Fill our space with great weird stuff
- Give developers a space to share their work with the world
Meeting - 9/3/2025
<aside>
💡
AGENDA:
- Review the different curation routes into our expo hall, and how things went in 2025
- Build list of what we see as problems with that flow
- Set expectations for what we think we can do differently in 2026, and what we might want to explore in 2027 and beyond
END OF MEETING GOALS:
- Understand how our new goals might shift layout/curation needs for the 2026 cycle (i.e. reducing game count / floor space allocations) - Complete ✅
- Assign points of contact for any other groups we want to get involved in the curation process (e.g. schools, NPOs, etc.) - Mostly ✅
- Agree on which problems / ideas we don't want to commit time to this year, or that require more than 3 months of run-up time - We sure did ✅
Attendees: Matt, Flan, Emily, Travis, Lauren, Socks, Nate (weed-whacking)
</aside>
Current Processes
IA
- General submissions - Initial vet and cleanup by DHs; ranked by a small team (5 peeps); Matt handles ties; lengthy waitlist
- Contractors - Long-term legal contracts: we give them $, they fill a space (and follow up with us YoY to adjust things)
- Guests - Newer way to get games; we work with the Guests dept, who helps with stuff like travel costs and visas; unique each year (generally - right now Robin is a bit of an exception because the wall is good)
MIVS
- General Submissions
- Preflight cleanup by DHs: confirm links work, etc.
- Games are assigned to specific judges; each game is judged by multiple people so that every team gets plays
- Judges score on a simple rubric (show readiness, gameplay, enjoyment)
- Top scorers accepted, bottom cut, middle ground goes to waitlists
- Quickplay
- Open tables on site where attendees can set up their own machines
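The MIVS general-submission flow above boils down to "average the rubric scores, rank, and split the list." A minimal sketch of that idea, with made-up game names and scores (the real rubric weighting and cut lines are not specified in these notes):

```python
# Hypothetical sketch of the scoring flow: several judges score each
# game on the rubric (show readiness, gameplay, enjoyment), the judge
# totals are averaged, and the ranked list is split into
# accept / waitlist / cut. All names and numbers are illustrative.
from statistics import mean

scores = {
    # game -> one (readiness, gameplay, enjoyment) tuple per judge
    "Game A": [(4, 5, 4), (5, 4, 5)],
    "Game B": [(2, 3, 2), (3, 2, 3)],
    "Game C": [(4, 3, 3), (3, 4, 4)],
}

ranked = sorted(
    scores,
    key=lambda g: mean(sum(judge) for judge in scores[g]),
    reverse=True,
)

# Cut lines are arbitrary here; in practice they depend on floor space.
accepted, waitlist, cut = ranked[:1], ranked[1:2], ranked[2:]
```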
Problems:
- Repeat games
- People like seeing how games have grown
- But also we like to have new games
- Both teams currently adjust scores based on consecutive years at mag
- Normalizing judge feedback
- Some judges are just mean
- IA rubric isn't nuanced enough
- Variety of gameplay
- Certain genres just score better
- Certain genres just don’t work well in a mass-judge-format