Sunday, December 30, 2018

7 thoughts on Keyforge

I bought KeyForge decks for several friends this Christmas, and played a couple games yesterday. If you like TCGs, I recommend spending $20 to buy two decks and trying it with a friend! Here are my other thoughts.

*** 1 ***

Let's just call it Keyforge, not KeyForge. Internal caps just... seem destined to make a name feel dated later, and they're annoying to type :) Relatedly, "Æmber"? Really?? I need to be able to type and pronounce this word. I'm pretty sure it's pronounced "amber", because "I gain 3 amber" sounds a lot better than "I gain 3 ember".

*** 2 ***

Components needed to play: I just bought individual decks instead of the starter box set. This is a great feeling -- none of us has a big bulky box lying around; we each just have our decks! I can bring a deck to someone's house trivially, just in case someone wants to play! I was a little worried about not having the tokens from the starter set, but found that during play, we used the following:
  • 3 or 4 yellow dice to track Aember (usually you just need 1 per player)
  • 2 purple dice to track Keys forged (1 per player)
  • A handful of blue dice to track damage on creatures
  • A handful of other markers or dice to track stun on creatures and other misc effects
So, basically, I'd suggest showing up to your first few games with a medium-sized collection of dice and markers, and seeing what you end up using.

*** 3 ***

To me, Keyforge doesn't feel very similar to Magic, Hearthstone, and other Magic-like games -- it feels about as far from them in game-space as something like the Pokemon TCG (which I also recommend trying out if you like TCGs). In order, here are the things that I'm guessing make it feel not necessarily better or worse, but definitely different:
  • No mana system: instead, you choose a House (think color) to play, then play / activate / discard as many cards from that House as you'd like. If you're sitting with a hand of cards and thinking "I have 3 mana this turn, next turn I'll have 4, what can I afford to play and what will I be playing later?", it's going to feel pretty similar to Magic / Hearthstone / etc. Removing mana changes the feeling of decisions (e.g. I'm not thinking about how to use mana efficiently), means that decks don't need to be composed with mana curves in mind, means that "must keep this card in my hand for later" is much less common, that you can often play many cards per turn, etc.
  • Not "reduce opponent's health to 0": at first, I thought "collect Aember to win" would be really different. Then, I noticed that it's structurally similar: imagine that each Aember is 1 damage, and each Key is one "shield"; now you're doing 6 damage to break a shield, and have to break 3 shields to win (see the sketch after this list). Not so different, right? However, collecting Aember still feels different to me. I think this is because a creature's combat power doesn't determine its Aember-gathering rate, because Keys are meaningful chunks that allow Aember gain / loss / stealing to be more dynamic while still ratcheting the game forward, and (most subtly) because the cards treat Aember more as a resource than as a damage-substitute.
  • Not "draw a card each turn": instead, you draw back up to 6. I think this gives a very different game feel, and keeps card advantage from being a major consideration.
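
To make the Aember-as-damage analogy concrete, here's a minimal sketch in Python. It's a toy built from the numbers above (6 Aember forges a Key, 3 Keys win), not an implementation of the real rules -- it ignores Aember stealing, key-cost modifiers, and so on.

    # Toy model of the win condition: Aember as "damage", Keys as 6-point "shields".
    KEY_COST = 6      # Aember needed to forge one Key
    KEYS_TO_WIN = 3

    class Player:
        def __init__(self, name):
            self.name = name
            self.aember = 0
            self.keys = 0

        def gain_aember(self, n):
            self.aember += n

        def forge_if_able(self):
            # You forge at the start of your turn if you can afford a Key.
            if self.aember >= KEY_COST:
                self.aember -= KEY_COST
                self.keys += 1
            return self.keys >= KEYS_TO_WIN  # True once the game is won

    alice = Player("Alice")
    alice.gain_aember(7)
    print(alice.forge_if_able())  # False: one Key forged, 1 Aember left over

The point of writing it out: unlike damage, the "shield" resets cleanly at 6, and the same counter can move up or down via gain / loss / stealing, which is part of what makes the ratchet feel different.
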
I really appreciate that Keyforge feels very un-Magic; I'm frankly kind of bored of games that feel like Magic, and am very happy to see more of the space of card games explored. (Another plug for Pokemon TCG if you enjoy thinking about game design -- I played it just because a friend liked it, and was like "holy wow, there are a ton of cool ideas here, why is this space so unexplored?!")

*** 4 ***

It feels like I'll play unique-deck games (UDGs) a lot more than I've historically played TCGs. The basic proposition is: "Pay $10. Now you have a deck that's all your own, it's at least playable and probably competitive with all your friends' decks, and you can play a game in ~20 minutes." In Magic or Pokemon, I personally have problems (emphasis on "personally", because obviously all of these things that I don't enjoy are super-fun to a huge group of people, and Magic / Pokemon are objectively great games):
  • Constructed: "Buy a bunch of cards (for significantly more than $10) and build a deck. It's probably going to take a lot of time to figure out how to do this; in theory that's fun, but in reality, you don't enjoy it that much, and you're not great at buying cards or building decks. If you want your deck to be unique, expect to take a lot more time and some additional money. Now play against your friends, and hope that they built decks with similar philosophies, otherwise one of your decks is probably going to be much better than the other. If you want a new deck, it's at least $20 + time, worry, and effort. End up with a collection that makes you feel vaguely guilty for not using it."
  • Limited: "Draft with friends. This takes all day, and you'll never play with the decks again. Also, you're not good at drafting; consider getting good at that, but realize that you don't actually care to get good enough to have a great time. Leave the cards with your friend, because they might actually use them."
(In reality, I've ended up playing mostly Wizard's Tower because that's the best version of Magic for me.)

Looking at these problems, maybe you can see why my main way of engaging with Magic is to read about it online and talk to friends about it; when I actually play Magic, the experience typically falls short of what I was hoping for. I'm hoping that UDGs like Keyforge will be more playable for me. 

(I would buy decks of a Pokemon UDG in a heartbeat -- I think it'd be an incredible improvement over the TCG on a ton of dimensions, and much better for young kids!)

*** 5 ***

On the other hand, I'm guessing that I won't spend nearly as much time reading, listening to podcasts, watching videos, etc. about Keyforge compared to Magic. Magic's deck-building core generates an incredible market of ideas; the difficulties that make me dislike playing constructed Magic are food for the distributed Magic Brain that I love to watch and talk to!

*** 6 ***

I really enjoyed Richard Garfield's explanation of what he's hoping Keyforge will be. It's in the instruction manual, but I thought I'd include it at the end of this post. Perversely, it seems like Garfield kind of intends Keyforge to not have a Collective Brain on the level of Magic -- if there's no secondary market for cards, maybe it'll preserve a world where I don't see cards until they're played against me? I'm excited that Keyforge might be a purposefully obfuscated game; it can't be spoiler-proof, but it can remove the incentive for reading spoilers. (I'm personally not planning to go read spoilers of all the cards, and regret seeing some cards online.)

*** 7 ***

I'm hoping that homebrew formats take off, and it seems like there's a lot of potential for this -- e.g., take a pool of decks, mix them together, draft or construct decks, then sleeve up and fight. Each card's back identifies the deck that it's part of, so putting your decks back together should be reasonably easy; the "default state" of a collection is a functional collection of standard decks, instead of a big mess of cards that could be made into decks or draft packs with a bunch of effort.
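
As a toy illustration of that "default state" point, here's a Python sketch: because every card's back names its home deck, a pool of mixed decks can always be mechanically un-mixed. (All deck and card names here are made up.)

    from collections import defaultdict
    import random

    # A card is (card_name, home_deck_name); the deck name is printed on the back.
    def mix(decks):
        """Shuffle several decks together into one homebrew pool."""
        pool = [card for deck in decks for card in deck]
        random.shuffle(pool)
        return pool

    def unmix(pool):
        """Restore the default state: sort every card back into its home deck."""
        decks = defaultdict(list)
        for name, home in pool:
            decks[home].append((name, home))
        return dict(decks)

    deck_a = [("card A1", "Deck A"), ("card A2", "Deck A")]
    deck_b = [("card B1", "Deck B"), ("card B2", "Deck B")]
    print(sorted(unmix(mix([deck_a, deck_b]))))  # ['Deck A', 'Deck B']

Contrast with Magic, where un-mixing a draft pool back into anything playable is a real project.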

***

Conclusion: Keyforge is exciting! Maybe it'll be a fad like Pokemon Go, and maybe it'll have more staying power. In any case, I'd recommend making the minimal investment to try it out.

***

Monday, September 10, 2018

Adversarial epistemology (in politics)

I watched Obama's speech last Friday afternoon. As usual, I really enjoyed it -- it's nice to remember what it was like to have a president, and I like the rambling / wonky style he can indulge now that he doesn't have to be so on-message.

He suggested that some Adversary wants people to believe that they can't affect politics enough to make efforts worthwhile; that's good for the Adversary because it's already in power and this lets it keep power. I hadn't thought of this, and it's a problem for me, because I don't think I can affect politics enough to make efforts beyond informed voting / possibly donation worthwhile. (I think Obama was mostly telling people "Adversary wants to deceive you into thinking voting is not worthwhile", but once you take the Adversarial hypothesis, it's not clear that it should be limited to voting.)

This is a game-theoretic problem I haven't seen much analysis of: when an Adversary can partially manipulate your perception of the world, how do you adjust your epistemics to make a good decision? This looks like an intermediate case between Descartes' Demon and a plane dropping leaflets that say "Surrender!" -- I know more about the Adversary than Descartes did, but less than a soldier who's seen the leaflets drop.
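
Hedging a lot, here's one way to formalize that intermediate case (my notation, nothing standard): let $w$ be the true state of the world, and suppose that instead of your usual evidence $e(w)$ you observe $s = A(e(w))$, where the Adversary picked the distortion $A$ from some constrained class $\mathcal{A}$ (its budget and capabilities). A cautious decision rule is then maximin over the Adversary's options:

$$ a^*(s) = \arg\max_{a} \; \min_{A \in \mathcal{A}} \; \mathbb{E}\big[\, u(a, w) \;\big|\; A(e(w)) = s \,\big] $$

Descartes' Demon is the limit where $\mathcal{A}$ is unconstrained (so $s$ tells you almost nothing); the leaflet drop is the case where $A$ is known exactly. The interesting regime is in between, which is what the thoughts below are poking at.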

A few scattered thoughts on how to deal with this situation:

Cui bono? To identify a hidden actor from some actions / outcomes, you can ask "who benefits?" and guess that the beneficiary may be the hidden actor. In the adversarial-epistemics case, my beliefs and actions are partially determined by the Adversary, so I can look at my own actions and beliefs and ask "who benefits?" (Obama is partially using cui bono to support the adversarial hypothesis in the first place: "Don't think voting will work? Who benefits from that?")

Attack model: We can try to model the Adversary's capabilities and motives, and use this to figure out what our defense options are. For example, how specifically can the Adversary target me? Presumably they are playing against some classes that I'm part of, not directly against me, and presumably they are spending more effort attacking classes that will benefit them more per dollar. Maybe this gives me a way to treat some info as "cleaner" than other info?
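
Very roughly, that "per dollar" intuition can be written as a budget-allocation problem (again my formalization, not anything from the speech):

$$ \max_{\{b_c\}} \; \sum_c v_c \, p_c(b_c) \quad \text{s.t.} \quad \sum_c b_c \le B $$

where $b_c$ is the Adversary's spend on class $c$, $v_c$ is the value to the Adversary of swaying that class, and $p_c$ is an increasing, diminishing-returns chance of success. At the optimum, marginal returns $v_c \, p_c'(b_c)$ are equalized across funded classes -- so, on this model, channels aimed at small or low-stakes classes should carry less manipulation and be somewhat "cleaner".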

Epistemics vs morale: I'd naively guess that it's easier to attack morale than to attack epistemics. If there's an Adversary, I think there's more reason to trust spreadsheet-style calculative models based on historical data ("I should help in way X because its track record / expected value is good") over gut-level models ("I should help because it feels intuitively likely to work").

Distraction: It's plausibly easier to discourage people by directing their attention toward an intractable issue than it is to make a tractable issue look less tractable; e.g., I think I'm particularly powerless over Supreme Court confirmations, so directing my attention to confirmations makes me feel more powerless. This suggests I should look away from the spotlight for better opportunities.

Based on this, I might reconsider donating to political causes; I'm not sure which ones, and they'd probably be justified in terms of Citizenship instead of in terms of Effective Altruism.

Realistically, I doubt I have the time / energy for political action beyond voting :(

Friday, February 2, 2018

The Late Sleeper: a problem for ataraxians

Dan just wrote about Benatar's "Better Never to Have Been," and this neat thought experiment occurred to me when I was talking to Killian about it.

ETA: here's what I should have written. Some people think that so-called "good experiences" are actually just the cessation of suffering, reprieve from craving, etc. That would mean that the best you can do is break even; that's good, but at best it's only as good as it would have been to not have had those cravings / sufferings in the first place. I'm calling this the ataraxian position; see also antifrustrationism and tranquilism.

Does this fit with our experience? Well, it seems like sleeping people are free from suffering / craving, so sleep is an optimal state (along with nonexistence or complete cessation of suffering) under these theories, and it's less confusing to think about than nonexistence. Now let's compare supposedly "positive" experiences to being asleep. If you think "positive" experiences can't be better than being asleep, you're an ataraxian. If you think "positive" experiences can be better than being asleep, you're not an ataraxian. I'm not an ataraxian.
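
In symbols (my own formalization, not standard terminology): write $U(e)$ for how good experience $e$ is, and normalize dreamless sleep to $U(\text{sleep}) = 0$. Then the disagreement is just:

$$ \text{ataraxian:}\;\; \forall e,\; U(e) \le 0 \qquad\qquad \text{non-ataraxian:}\;\; \exists e,\; U(e) > 0 $$

The Late Sleeper story below is simply a request to exhibit such an $e$.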

(Overly complicated original post follows)

This doesn't address Benatar's case directly, but it deals with a related view that I've heard: the view that the "best" outcomes/experiences for people are actually just a minimization of their suffering. For example, eating a tasty meal just amounts to temporarily bringing your suffering from a lack of tasty food down to zero; it's good, but at best it's only as good as it would have been never to have been hungry at all. If you think that the best anyone can do is hit the neutral point (instead of having "positive" experiences), then it's better never to have been born. I'm calling this ataraxian ethics -- I'm sure it has some real name, but I don't know what it is.

Here's the thought experiment that makes me reject ataraxian ethics. In case it gets famous, I'm obligated to give it a name and a little story:
The Late Sleeper
Alice is sleeping on a Saturday morning. When she sleeps, she is truly unconscious, and is not experiencing any kind of suffering. There's some chance that she'll happen to wake up early, and if she does, she'll see a beautiful sunrise and enjoy a cup of coffee with a loved one. Whether she wakes and sees the sunrise or sleeps and doesn't see it won't have any further effects on her life. Would it be good for her to happen to wake up early?
The ataraxian would say that this would not be good; when Alice is asleep, she isn't suffering, and according to the ataraxian it's not possible for her to be in a better state than this. At best, this experience could be as good as remaining asleep, but in order for this to be the case, Alice-at-sunrise would have to be in her literally best possible state, completely free of suffering. If there's any chance of a bad experience during the sunrise (e.g. Alice's coffee is a little too bitter), it would be better for her to remain asleep. In fact, there's no experience at all (!) Alice could have, no matter how "good", that would be better than staying asleep.

This makes me pretty sure I'm not an ataraxian; there are a million things that sound better than being asleep, and I'd be willing to take some suffering (e.g. being sleepy later or risking injury by walking down the stairs) for many of them. I'm pretty sure I'm not just being culturally pressured into saying this, and if I'm fooling myself, I think I've fooled myself into actually enjoying those experiences.

I'm tempted to say that people who might think they're ataraxians should obviously agree with this and stop being ataraxians (probably retreating to saying "OK, there are actual good things beyond cessation of suffering, but the bad outweighs the good in basically every life, so it'd still be better not to exist"), but who knows, people think lots of different things :)

Thursday, January 11, 2018

You can sing along with "The Cross of Coronado"

This will probably be my only contribution to film music studies! I searched around, and didn't find anyone else who'd pointed it out.

This is the "Cross of Coronado" theme from Indiana Jones and the Last Crusade, and it plays every time the cross shows up onscreen:

[embedded clip of the theme]
In addition to being a killer motif, the rhythm sounds like the phrase "The Cross of Coronado"! Listen again and try singing along; you'll never be able to hear the theme again without hearing the words in your head.

Wednesday, January 3, 2018

Whose values should advanced AI have?

When we're talking about getting advanced AI to behave according to "human values", some people respond by saying something like "which human values?", "whose values?", or "there's no such thing as 'human values', don't you know about [cultural diversity / relativism]."

I sometimes worry that people raising this question aren't being sincere -- instead, they're trying to paint people worried about AI safety as socially and politically naive techno-futurists ("nerds", whom we don't like any more), and they're happy to ask the question, show they're superior, and then leave without actually engaging with it.

Putting this aside, I think "what exactly should we do with powerful AI, and how should we decide?" is a natural political question. However, I also think it's not too different from most other political questions.

For some advanced AI systems, we should probably treat them like most other artifacts -- they will be used to pursue the goals of their owners, and so the answer to "which values" for "Private Property AI" will be "some of the values of their owners, or some values that their owners want them to have instrumentally to achieve their owners' ends". Use of Private Property AI should be subject to some laws, which might include what values and capabilities Private Property AI is allowed to have, and we'd be hoping that these laws in combination with economic forces would lead to advanced AI having a positive overall impact on human welfare. (It seems like markets have done OK at this for other technologies.)

The private-ownership solution is unsatisfying if we think that some kinds of advanced AI distributed through markets are likely to be bad for overall human welfare (the way we think that distributing nuclear or biological weapons through markets would be bad for overall human welfare). If advanced AI is powerful enough, it could create super-huge inequality between owners and non-owners, allow owners to defy laws and regulations that are supposed to protect non-owners, or allow owners to control or overcome governments. Owners might also use advanced AI in a way that exposes humanity to unacceptable risks, or governments with advanced AI might use it to dominate their citizens or attack other countries.

In response to this, we'll probably at least restrict ownership of some advanced AI systems. If we want more powerful AI systems to exist, they should be "Public AI", and have their goals chosen with some kind of public interest in mind.

At this point, it seems like the conversation has usually gone to one of two places:
  1. We need to solve all of moral philosophy, and put that in the AI.
  2. We need to solve meta-ethics, so that the AI system can solve moral philosophy itself.
This leads to questions like "Should advanced AI be built to follow total hedonic utilitarianism? Christian values? Of which sect, and interpreted how? Which ice cream flavor is best? What makes a happy marriage? How can we possibly figure all of this out in time?"

I don't think it's true that we actually need to solve moral philosophy to figure out what values to give Public AI. Instead, we could do what we do with laws: agree collectively to give Public AI systems a tiny set of agreed-upon values (e.g. freedoms and rights, physical safety, meta-laws about how the laws are to be updated, etc.), leaving most value questions like "which ice cream flavor is best" or "what God wants" to civil society.
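
As a toy illustration of the "tiny agreed-upon core, neutral about the rest" structure, here's a Python sketch. Every name and rule in it is hypothetical; it's meant to show the shape of the proposal, not an actual alignment mechanism.

    # Toy "constitutional" policy layer for a hypothetical Public AI.
    CORE_CONSTRAINTS = {
        # The few collectively-agreed rules, analogous to criminal law.
        "no physical harm": lambda req: "cause harm" not in req,
        "respect rights":   lambda req: "violate rights" not in req,
    }

    CONTESTED_TOPICS = [
        # Value questions deliberately left to civil society.
        "ice cream flavor", "happy marriage", "what god wants",
    ]

    def evaluate(request):
        req = request.lower()
        if not all(ok(req) for ok in CORE_CONSTRAINTS.values()):
            return "refuse"                       # hard constitutional core
        if any(topic in req for topic in CONTESTED_TOPICS):
            return "no official opinion"          # neutrality / pluralism
        return "proceed"

    print(evaluate("Which ice cream flavor is best?"))  # no official opinion

Note the analogy to the list below: the core plays the role of common-value laws, the contested list is the neutrality / pluralism zone, and amending either would be governed by meta-laws.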

Political theory / political philosophy have spent some time thinking about the same basic question: "we have this powerful thing [government], and we disagree about a lot of values -- what do we do?" Some example concepts that seem like they could be ported over:
  • Neutrality / pluralism except where necessary: don't have governments make decisions about most value-related questions; instead, just have them make decisions about the few they're really needed for (e.g. people can't murder or steal), and have them remain basically neutral about other things.
  • Enumerated powers: "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people." Contrast with letting governments do whatever isn't explicitly prohibited.
  • Rule of law: instead of giving governors full power, make a set of laws that everyone including governors is subject to.
  • Separation of powers: make it hard for any set of governors to take full power.
  • Constitutionalism: explicit codification of things like the above, along with explicit rules about how they can be updated.
  • Democracy: give everyone a voice in the creation and enforcement of law, so that the law can evolve over time in a way that reflects the people.
  • "Common-value laws": I'm not sure if there's a real term for this, but a lot of laws codify values that are widely shared, e.g. that people shouldn't be able to kill each other or take each other's stuff at will.
  • "Value-independent laws": again not sure if there's a real term, but some laws aren't inherently value-related, but are instead meant to make sure that civil processes that generate value for people (like trade) go smoothly.
I think "constitutional democracy" is the right basic way to think about the "whose values" problem for Public AI, and makes the whole thing look a lot less scary.