6:15 if it is the emperor, then it's modelled on the original papier-mâché appearance in The Empire Strikes Back from before he was cast, which would be a lovely homage! No idea who else it could be.
I'm guessing the cat from Alien will appear at some point... although it would be amusing (but unlikely) if the cats turned out to be there for no explicable reason. Or maybe they're the kidnappers? 😮
Yes, perhaps they are your CApTurers...
A fairly straightforward puzzle today, definitely the simplest so far but still pretty fun!
I was glad to have one I could do cleanly, since it's a bit embarrassing if I keep getting stuck! :) :) :)
Yay, I'm glad to see a pretty straightforward puzzle. No tricky gimmicks today.
Me too! The more complex the puzzle and therefore video, the longer it takes to record/edit too and I didn't have much time yesterday! :)
I ended up taking a hint because I was dead set on the sabers pointing in the direction of the picture, rather than pointing at the person.
Yes, I think that was a bit confusing. It was lucky I tried the purple first, because the other two were much less instantly clear.
Some riddles are so easy that they become difficult. It can be so frustrating.
I was the same! My wife had to solve it for me. Partly because, as far as I’m concerned, some of those lightsabers are simply _not_ pointing at a character, but miles above their heads.
Very nice to check out your solve once my own is done! This is the third year I'm doing the Exit calendar, and it's really something I look forward to all year! What I've done since the beginning is ranking (1-10) each puzzle with a grade for "Quality/Cleverness/Fun" and another one for "Level of difficulty". The average of the first grade should of course be as high as possible, while I think the second one should land on 8 or just below it. I started with the Golden Book, which ended up with 7.4 (quality/fun) and 6.2 (difficulty). The Silent Storm did not do as well! 6.2 and 4.5! This one has started quite promisingly!
I look forward to it all year too, so it's nice to hear from so many like-minded people! I don't grade the puzzles myself, but your ratings are very interesting!
It was a refreshing change to solve this without using the muscle in my brain 😂... I'm hoping it's not the calm before the storm...
Ha, yes. There are usually a couple of puzzles I need to take hints on, but I'm hoping that this year might be different... although why I am tempting fate I don't know! :)
Today's was a quick one. The trick for us here is not to overcomplicate the riddles 😂😅
This is true! It's very easy to look for complexity that isn't there.
This puzzle was OK, not one of their best.
As Dr Gareth is often replying to these comments maybe I could ask him about a rash on my knee? Doctors absolutely love helping people with random medical problems! Unless he’s one of those “not actually a doctor” type of doctor. Perhaps he’s got a PhD in puzzling! Yes, that must be it. 😂
Yes, I have a Ph.D. in large-vocabulary language models, although I did this long enough ago that it predates the current use of LLMs for generative text. While you could generate plausible text with the models I worked on during my Ph.D., it would just meander randomly without any particular meaning. It's fascinating what is achievable now.
Wow, that is awesome, you were definitely ahead of the curve!
@mikestoner2898 One of the key issues 'back in the day' was the relatively limited amount of training data. Companies today can benefit from vast quantities of data gathered from considerable human interaction with many different devices, apps, sites and so forth, which wasn't available when I was studying. Although there were (what I thought then were) large databases, they were primarily text from newspapers and news sites, so it certainly didn't cover the broad range of topics and styles of text you would need for convincing general text-like generation. Additionally, the devices were slower and storage was more expensive. In any case, the target then was purely to use the models to improve automatic speech recognition - which they did.
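(For anyone curious what "meandering" text from that era of statistical language models looks like, here is a toy sketch - not the actual models discussed above, just an illustrative bigram model of the kind used in pre-neural speech recognition. Trained on a tiny made-up corpus, each word is chosen based only on the previous one, so the output is locally plausible but has no overall meaning.)

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words were observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        model[w1].append(w2)
    return model

def generate(model, start, length=10, seed=0):
    """Sample a word sequence; each step depends only on the previous word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed continuation
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Toy corpus - far smaller than the newspaper databases mentioned above.
corpus = ("the cat sat on the mat the dog sat on the log "
          "the cat saw the dog and the dog saw the cat")
model = train_bigrams(corpus)
print(generate(model, "the"))  # locally fluent, globally aimless text
```

Scaled-up versions of exactly this kind of model (with longer contexts and smoothing) were what made speech recognisers pick "recognise speech" over "wreck a nice beach".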