Many people in and around AI who are concerned about immoral, destructive, or unsafe AIs are running around calling for “value alignment” in such machines. Our sense is that most such calls are vapid, since those issuing them don’t really know what they’re calling for, and since the phrase is essentially nowhere to be found in ethics itself. Herein we carry out formal work that puts some rigorous flesh on the calls in question. This work, quite limited in scope for now, forges a formal and computational link between axiology (the theory of value) and ethical principles (propositions expressing obligations, prohibitions, and the like).
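To fix intuitions, here is a minimal illustrative sketch of what an axiology-to-deontology bridge can look like; it is a standard maximizing-consequentialist principle assumed only for exposition, not the formalism developed in this work. Suppose a value function $v$ maps an agent’s available actions to the reals (the axiological side), and read $O(\alpha)$ as “$\alpha$ is obligatory” and $F(\alpha)$ as “$\alpha$ is forbidden” (the deontic side):

% Illustrative only: a standard maximizing-consequentialist bridge,
% assumed here for exposition; not the formalism developed in this work.
\[
  O(\alpha) \;\leftrightarrow\; \forall \beta\, \big( v(\alpha) \geq v(\beta) \big),
  \qquad
  F(\alpha) \;\leftrightarrow\; \exists \beta\, \big( v(\beta) > v(\alpha) \big),
\]

where $\beta$ ranges over the alternatives available to the agent. On this deliberately crude reading, an action is obligatory just in case no alternative is more valuable, and forbidden just in case some alternative is; links of roughly this shape, made rigorous and computational, are what the work herein supplies.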