Once one has decided that animals require moral consideration, the question arises: how should we weigh animals' needs against each other (and against our own)? In this post I shall give some of my thinking on the issue and finish with a list of thorny questions to tax the reader. By "person" I shall mean any creature with neurons.
I am going to work within my libertarian framework here (see the last post on libertarianism). The libertarian position still requires a base ethical theory to supply answers to various questions, and that is what I discuss here. For simplicity I use a utilitarian base theory, as it works quite well most of the time (though I think there are situations where utilitarianism is lacking, particularly ethical situations involving AIs).
The rules of thumb that I propose are only meant for the following two sorts of situations. In other situations things are more complicated.
1) Suppose you have a choice of several courses of action. Each of these courses of action will harm someone who has not consented. Let us also assume that there are no positive effects for anyone. Which should be chosen?
2) Now suppose that each of the courses of action confers a positive benefit on some individual or individuals. No one objects to your courses of action. Which should be chosen?
Firstly, within a given species (ignoring the sex differences and other type differences that occur in some species), I use a rule of thumb I call the quality-of-life year: multiply the quality of life gained or lost by the number of years it is gained or lost for, to arrive at a measure of how beneficial or detrimental the intervention is. This, I am amazed to discover, is how the UK's National Health Service (NHS) makes healthcare decisions. I heartily approve! It is difficult to decide what value to give different types of effect, but the NHS has developed an interesting variety of methods for this sort of issue.
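As a minimal sketch of the arithmetic (the quality scores and year counts below are invented for illustration, not real NHS figures):

```python
def qaly_delta(quality_change: float, years: float) -> float:
    """Change in quality of life (on a 0-to-1 scale) multiplied by the
    number of years that change persists."""
    return quality_change * years

# Two hypothetical interventions, compared by quality-of-life years gained:
surgery = qaly_delta(quality_change=0.3, years=15)     # 4.5 QALYs
medication = qaly_delta(quality_change=0.1, years=20)  # 2.0 QALYs
best = max(("surgery", surgery), ("medication", medication), key=lambda t: t[1])
print(best)  # ('surgery', 4.5) -- prefer the larger gain
```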
Secondly, it will occasionally be necessary to decide how to weigh different animals (including ourselves) against each other. For this rule of thumb, assume that all other things are equal and that the decision is based purely on species.
I argue that a good rule of thumb in this case is to weigh an animal according to the number of active synapses in its nervous system (this does imply that some animals are worth as much as, or more than, people). Here is a brief thought experiment to support this:
Suppose there is a world in which blobs of brain move around like polyps. Sometimes many small brains come together into a larger unit to do some serious thinking; the brains rearrange their synapses to connect with each other and become essentially one super-brain. Notice that it would be practically impossible to ascertain to what degree the brains are engaged in meaningful communication. It is also difficult to see from the outside what the contents of those minds are, and probably even more difficult (not to mention dubious) to use such knowledge in ethical decisions. So when 100 human-sized brains come together to make a gigantic brain, we cannot tell whether to treat them as 100 human-sized brains or as one gigantic brain. In such a situation the only reasonable and applicable morality is to give the brains ethical consideration according to the number of synapses they possess: the gigantic brain is then treated like 100 human-sized brains whether it is acting as 100 independent agents or as one super-agent. These assumptions apply to real animals' brains as surely as to the fictional blob brains. It is also a potential ethic to apply when we create AIs (although possibly a problematic one).
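A tiny sketch of the invariance the thought experiment is after (the synapse count is a placeholder, not a measured figure):

```python
# Placeholder figure for synapses in one human-sized brain; the exact
# number does not matter, only that the weighting sums over synapses.
HUMAN_SIZED_BRAIN_SYNAPSES = 1.0e14

# 100 brains acting independently vs. the same brains merged into one
# super-brain: the synapse-counting rule assigns them equal weight.
independent = 100 * HUMAN_SIZED_BRAIN_SYNAPSES
merged = 100 * HUMAN_SIZED_BRAIN_SYNAPSES  # same synapses, rearranged connections
assert independent == merged
```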
Finally, we must decide how to weigh creatures whose brains change size over time. I addressed this issue partially in my post on abortion. If a being has X synapses and Y years of expected life left, then the loss of that being's life produces a cost of X·Y, irrespective of the size the being's brain may eventually attain. The loss of potential cannot be regarded as any loss at all unless the potential is unique (e.g. killing the last mating pair of a species at their birth). As such, only the current brain size should be counted. This reasoning makes a pro-abortion stance compatible with a vegetarian one.
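A sketch of that cost function, with invented synapse counts and remaining lifespans purely to illustrate why the rule makes those two stances compatible:

```python
def cost_of_death(current_synapses: float, expected_years_left: float) -> float:
    """X*Y rule: only the brain as it exists now counts; potential
    future growth is ignored (the unique-potential exception is not
    modelled here)."""
    return current_synapses * expected_years_left

# Invented figures for illustration only:
early_embryo = cost_of_death(current_synapses=1.0e5, expected_years_left=80.0)
adult_cow = cost_of_death(current_synapses=1.0e12, expected_years_left=15.0)
print(early_embryo < adult_cow)  # True: the cow's death costs more under this rule
```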
To finish, here are the promised thorny questions:
1) If I could make a perfect copy of your brain/mind, would it be murder to destroy that copy an instant after creating it?
2) Is it really the case that 1 billion ants should be given 1 billion times the moral weight of 1 ant?
3) What is the value of uniqueness in the mental realm?
4) How does all this relate to issues surrounding AIs?
I'm not going to give my views on these questions yet, but I might address them by email or in further posts...