This is part of a series on the 2022 Mystery Hunt. Spoilers ahead.
One of the things we tried to think about “fixing” with our Hunt was what to do about the scavenger hunt.
Mystery Hunt tends to have one (or more) scavenger hunt-esque puzzles every year – it’s a great way to break up all the puzzle content with something different, and it’s also nice to have as a swap-out in case one of your technical puzzles breaks down.
For Bookspace, we knew we wanted one of the New You City answers to be the very meta COME UP WITH ANOTHER ITEM FOR THIS YEAR’S SCAVENGER HUNT LIST after an earlier attempt to re-think the scavenger hunt became the task on the other side of My Dinner With Big Boi. That meant having at least 26 items on that list (because of the nature of the round), and Jen McTeague, Wil Zambole, and I tried to figure out a way to do the scavenger hunt that would be:
- fun for teams to complete
- fun for our team to evaluate
- not a massive two-hour Zoom where huge teams show us 150 items
- not a huge drain on our own team’s resources to evaluate everyone’s submissions.
We almost succeeded on those. More on that later.
A Taxonomy of Scavenger Hunt Types
Mystery Hunt scavenger hunts tend to come in three flavors:
- bring us things
- do things for us
- bring us things AND do things for us
I tend to dislike “bring us things” scavenger hunts because I don’t want to have to go home and leave our main puzzle HQ during Hunt if I can help it. For example, I checked out of 2018’s “I Wanna Be The Very Best” because it required us to get a LOT of things, and because we had opened it late in the Hunt and almost used a free answer on it out of concern that we wouldn’t be able to get everything in a reasonable amount of time. This was before we knew that round’s puzzles each had an “evolved” version and that a part two was around the corner.
I tend to LOVE “do things for us” scavenger hunts because they’re very silly and you can get away with a lot of 2AM I-haven’t-slept-in-two-days bullshit, because the only people who have slept less than you are the team running things, who are also going to be kind of slap-happy. Part two of the 2018 Hunt’s scavenger hunt, Older and Wiser, is a great example of this, and I have fond memories of setting our team’s general direction on this by grabbing a cardboard box from our HQ’s food room, making a sign for it that said “this is definitely an x-ray machine”, and demonstrating for my team that if I stood in the box and drew an “x-ray”, it fit all the qualities it needed to have.
An hour later, we had constructed everything we needed from paper, tape, Sharpie, and flop sweat. We led the team running things down the hallway of MIT we had taken over, through a cavalcade of items like a “quilt” made of Lightning McQueen fan art and a LaTeX-formatted dissertation, and ended with a bunch of us heading outside to fill up our “clown car”.
I also really loved the 2019 Taskmaster scavenger hunt for this reason – it was really soothing to have someone come into our classrooms, loudly announce “who wants to come assemble a paper chain for 10 minutes”, and go do that. Fun fact: at the time we worked on that I did not realize that Taskmaster was an actual television program the puzzle was clearly aping.
Anyways, back to our scavenger hunt.
Knowing that this was in the self-help round and keeping the principles we wanted in mind, I pitched an initial version of what became book reports:
- tasks would be based on self-help book titles
- 26 potential tasks would be presented, since the list that fed into the meta would be the list of books used in the puzzle
- teams needed to complete at least 3 and no more than 10 tasks, to reduce the scope of what we’d need to evaluate
- tasks would have a 5, 10, and 15 point option
- the score teams needed to reach would scale with their team size
In putting together a test version, we made some changes: the max score needed was capped at 100, since the 10-task limit meant that was the maximum possible, and asking large teams to complete 10 of the harder tasks seemed only fair. We also opened up book suggestions to the team, and got many of the delightful entries that ended up in the final puzzle.
One thing that felt tricky was making sure the 10-point version of a task felt like an adequate increase in effort from the 5-point version. In many cases, we made it an add-on to the 5-point task that hopefully felt like a “well, as long as I’m doing this anyways” sort of deal.
Teams seemed to generally enjoy the book reports, and our team of evaluators (once we had enough of them fully trained between Friday and Saturday) seemed to have a lot of fun going through everyone’s submissions. Mission Accomplished?
In getting things set up to evaluate submissions, we had the task portion of the puzzle open up as part of the intro to Round 2, The Ministry. This was to make sure as many teams as possible got to see it, and also to ensure that a team wasn’t reaching the scavenger hunt late on Saturday and feeling blocked because they didn’t have time to do the tasks. I still like this decision, though I completely missed some of the effects it would have on evaluating scavenger hunt submissions.
Jen and I both assumed that even with this change in where the puzzle deployed, we still wouldn’t be getting submissions until Saturday morning, when we’d be ready to start running that interaction. Instead, the more accessible nature of things (if your 15-person team only needs to complete three 5-point tasks, you’re going to do them) meant that submissions started arriving as early as Friday night, while the interaction with teams was still listed as “TBD Friday afternoon once Jen is on site” in our documentation for the scavenger hunt. Oops! Our tasks were definitely proving fun, but the inbox started filling up with submissions to be evaluated while Jen was still getting to campus.
Luckily, I had built out a spreadsheet template to make evaluation quick and easy. I started processing those for the teams that had submitted, and Jen got thrown into a little bit of the deep end once they got on site and needed to run calls with the first 3-4 teams. We got it worked out, and eventually got multiple people running the interaction, but it created way more work than expected, and I think if I had sat down for literally 15 more minutes I would have caught it.
Wrapping this Up
I think any scavenger hunt puzzle that’s run while the Hunt is still fully online is going to hit issues like this.
Running evaluation over Zoom means a higher volume of submissions than in past years, when it had to be run on site. It takes a LOT of people away from running the Hunt, people who likely also need to be running interactions, answering emails, and providing hints. Even with our reduction in overall scope, each team’s Zoom evaluation took more time than expected, and we were constantly fielding requests from teams wondering when their evaluation would be, an hour or two after their initial submission. As one of the people keeping an eye on things at HQ for a few shifts, I didn’t love that.
It’s hard to know what the 2023 Hunt will look like (and I hope we’ll be on campus), but if we’re all online again, it might be time for a temporary retirement of the scavenger hunt. Some built-in features of scavenger hunts quickly become unscalable at the submission volumes that the growth in Mystery Hunt participation over the last few years has created.