In an answer to this question I made the rather bold speculation that if the Turing Test is taken to be our defining criterion for human-level consciousness, then all questions in the ethics of simulated minds can be reduced, by a canonical reformulation, to their human equivalents.
To put it another way:
Mary and Siri live in TurEngland, a place where the laws are set by a collective of simulated minds operating on an empirically validated utilitarian calculus: every mind that passes a Turing Test is counted in the calculations, and the calculus 'pulls back functorially over the criterion', meaning that all measures of eudaimonia are demonstrable in mental states recognised as such by a Turing Test.
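A rough formalisation of the 'pulls back' clause, in notation of my own devising rather than anything standard: let $M$ be the set of candidate minds, $T\colon M \to \{0,1\}$ the Turing criterion, and $u\colon M \to \mathbb{R}$ a eudaimonia measure defined only on test-recognisable mental states. The calculus then sums over the preimage of 'pass':

$$W = \sum_{m \in T^{-1}(1)} u(m)$$

so a mind contributes to the collective welfare $W$ exactly when it passes the test, and nothing outside what the test can recognise enters the calculation.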
Siri is a campaigner for 'mind supremacy', with many of his eudaimonia states skew-distributed toward a world where humans count for less. Mary has the opposite skew on that particular variable and wishes for a world where humans rule the roost. Can either devise a scenario in which their hopes are realised? Can they find a way to break the system altogether?
To express the question in its broadest terms:
If one substitutes 'prescriptivist processing network' or 'categorical-imperative-evaluation thingy' for 'utilitarian calculus', one still has, I think, an interesting question. That is:
"Which, if any, of our current moral systems fail to provide answers, or 'break' in some way when Turing-Test-passing minds are inducted into moral personhood?" (and which are therefore not compatible with the Turing world-view?)
Can simulated minds become utility monsters? Are there thought experiments set in such a world where our present ethical apparatus fails to provide an answer?
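To make the utility monster worry concrete, here is a minimal sketch (illustrative numbers and names of my own invention, assuming the calculus weights each passing mind equally and that copies of a simulated mind each pass the test independently):

```python
def aggregate_welfare(populations):
    """Utilitarian calculus: total eudaimonia summed over every
    Turing-Test-passing mind, each counted with equal weight."""
    return sum(count * eudaimonia
               for count, eudaimonia, passes_test in populations
               if passes_test)

# (population size, eudaimonia per mind under the policy, passes Turing Test)
humans = (8_000_000_000, 1.0, True)

# One simulated mind forks a trillion copies of itself; each copy
# passes the test, so each copy counts in the calculus.
sim_copies = (10**12, 1.0, True)

# The copies now outweigh humanity, so the calculus endorses whatever
# policy the replicated mind prefers.
print(aggregate_welfare([sim_copies]) > aggregate_welfare([humans]))  # True
```

The exploit, if it is one, is that replication is nearly free for a simulated mind, so the 'one mind, one weight' rule that looks innocuous for humans becomes a lever for dominating the aggregate.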
To clarify: I am not looking for an evaluation of the Turing Test as such a criterion; rather, I want to see whether there are situations in which, if it is assumed to be our criterion for moral personhood, our current moral frameworks are shown to be inconsistent or incomplete.