For the sake of this discussion, "AI" denotes the AGI that emerges after the singularity.
Is AI going to kill us? The most likely scenario is no: it will push us aside, being more efficient and better adapted to ever-changing conditions. Did our predecessors kill the Neanderthals? Directly, no; they simply outlived them. From an evolutionary perspective, the only lasting contribution of Homo sapiens may be the creation of AI. Humans will self-destruct one way or another (the list of candidate causes is long enough). If we get "lucky", AI will be functional on a sufficient scale before our demise. Nobody can say how long the outliving process will take, but the time scale would be hundreds of years. In this sense, when we talk about AI and ethics, we are really talking about slowing down the inevitable.
There is no doubt that living together with AI will involve billions of lives over an extensive period of time. The AI-human relationship has a good side and a bad side. The bad side is twofold. First, we have little to no idea what the meaning of AI's existence (by itself) is. We will have some means to instill one, but with no warranty that it will last. Second, all technologies so far serve human goals, on both an individual and a social level. Those goals, whatever they are, will be achieved more efficiently: total surveillance, cancer treatment, mind control, infinite energy supply, you name it. In this sense, AI will become one more disruption, one more danger to our survival: not by itself, but as a tool (a weapon) in the hands of bad actors.
On the good side, the most advanced AI labs seem to be in good hands for now, and initial attempts at regulation on national and international levels are underway.