Indirect Theories

On indirect theories, animals do not warrant our moral concern in their own right; they warrant concern only insofar as they are appropriately related to human beings. The implications these theories have for the proper treatment of animals will then be explored. Finally, two common methods of arguing against indirect theories will be discussed.
Sorry, I had something come up in my personal life.

Soeren E, April 26: We will need to reduce the scope quite a bit, as I cannot commit to an ambitious essay.
A good thesis on my part might be that there is a negligible chance of humans creating an artificial general intelligence within the next years. I mean it in the sense that donating to places like MIRI is a waste of money.

Douglas Summers-Stay, April 27: I work as an AI researcher, and have some relevant publications.
I could contribute together with Soeren, if you both want to. What is the existential risk of AI technology compared to other existential risks?
My position would be:

- Even getting to AGI will be very hard and will take a very long time.
- Even if we get to AGI, it is unlikely that it would be able to recursively self-improve.
- Even if it can recursively self-improve, it is unlikely that the self-improvement would be exponential.
- Even if that self-improvement is exponential, it is unlikely that it will remain exponential for very long.

Again, we can focus on AGI if you want, and I do think it would be interesting to do some sort of first-principles write-up where we nail down definitions and give readers an overview of the current state of the technology and what needs to happen for AGI.
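The chained "even if" claims above amount to multiplying conditional probabilities: the joint probability of the full scenario is the product of each step's probability given the previous ones. A minimal sketch, with purely illustrative numbers that do not come from the discussion:

```python
# Purely hypothetical probabilities for the chained claims above --
# none of these figures appear in the original discussion.
p_agi = 0.10            # AGI is developed at all in the period considered
p_self_improve = 0.30   # given AGI, it can recursively self-improve
p_exponential = 0.30    # given self-improvement, the growth is exponential
p_sustained = 0.30      # given exponential growth, it is sustained for long

# The joint probability of the full takeoff scenario is the product.
p_joint = p_agi * p_self_improve * p_exponential * p_sustained
print(f"Joint probability of the full scenario: {p_joint:.4f}")  # 0.0027
```

The point of the sketch is structural: even if each individual step looks moderately plausible, a conjunction of several such steps can have a small joint probability.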
Soeren E, April 27: To make my claim explicit, I reserve the right to update as I write the essay. Would you be willing to assign a percentage to your belief?
I would like to narrow the scope so that we do not consider whether MIRI and similar organizations are worth funding. Also, unless the temporal discount rate is really low, it is not worthwhile to care at all about events that far in the future, even if they are very likely.
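The remark about the temporal discount rate can be made concrete with the standard present-value formula, PV = V / (1 + r)^t. A minimal sketch with hypothetical rates and horizons (the discussion itself names no numbers):

```python
def present_value(value: float, rate: float, years: float) -> float:
    """Discount a future benefit back to today at a constant annual rate."""
    return value / (1 + rate) ** years

# Hypothetical example: a benefit worth 1.0 today, discounted at 3% per year.
# Even a modest rate makes far-future events count for almost nothing now:
print(present_value(1.0, 0.03, 100))  # roughly 0.05
print(present_value(1.0, 0.03, 500))  # well below one in a million
```

This illustrates the commenter's point: with any non-trivial discount rate, events centuries away contribute almost nothing to a present-value calculation, however likely they are.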
Would you be interested in an adversarial collaboration with both me and Douglas Summers-Stay? Feel free to email me: soeren.

Perhaps a better question regarding this issue is how to balance the perceived probability of developing AGI against the perceived ability of humans to control that AGI, for example by crafting effective morality testing.
Putting all of this in the context of something that makes sense to consider technologically means, I think, adopting a time horizon within the potential lived experience of someone reading this blog. That gives a bounded window in which to develop AGI, a horizon that is meaningful in the sense that we ought to think about doing something soon.
Contrast that with the question, "Say we developed AGI; how long would it take us to develop the ability to perform effective morality testing on it prior to giving it any kind of power?" Ultimately, the question we want to answer is, "Should we be worried about this taking over and subordinating human control?"

"The more consistently one attempts to adhere to an ideology, the more one's sanity becomes a series of unprincipled exceptions."

— graaaaaagh (@graaaaaagh), February 5

Meeting with a large group of effective altruists can be a philosophically disconcerting experience, and my recent meetup with the Stanford Effective Altruist Club was no exception.
I will first present the traditional Kantian argument regarding the status of animals. The Categorical Imperative distinguishes between two types of individuals.
Rational beings are referred to as "persons," while non-rational beings are deemed "things." Deontological ethics are duty-based ethics that evaluate the morality of actions according to rules and duties rather than their results.
Kant did not hold that there is no truth about morality, or that what we consider ethical is simply a product of culture, place, or time. Regan believes it is a mistake to claim that animals have an indirect or unequal moral status, and then to infer that animals cannot have any rights.
He also thinks it is a mistake to ground an equal moral status on utilitarian grounds, as Singer attempts to do. After an accessible explanation of Kantian ethics, O'Neill shows the advantages of Kantianism over utilitarianism.
Kantianism and utilitarianism have different ways of determining whether an act is right or wrong. According to Kant, we should look at the maxims, or intentions, behind the particular action.