/**
 * file: karbytes_23_october_2024.txt
 * type: plain-text
 * date: 23_OCTOBER_2024
 * author: karbytes
 * license: PUBLIC_DOMAIN
 */

This note is a follow-up to the previous note (whose uniform resource locator is the next line of text in this note):

https://karbytesforlifeblog.wordpress.com/karbytes_22_october_2024/

One “unsolved problem” I have is not wanting progress, knowledge, and the integrity of cherished material objects (living bodies of human organisms included) to be obliterated or damaged while not (yet) having any definitive way to preserve (for an infinitely long time period) the progress, knowledge, and material objects which I (and possibly other people) cherish and prefer neither to lose access to nor to behold becoming damaged.

The prevailing “workaround” quasi-solution to that problem seems to be giving birth to biological offspring (and expecting those offspring to do the same (and to have each successive generation of offspring do the same)). The premise of such a “solution” is that, as soon as a human individual exceeds the “peak of their prime” (i.e. the time period of their adulthood in which they are (at least historically or conventionally) considered to be at peak functioning in terms of cognitive, perceptual, and athletic abilities (and with maximum resistance to disease)), that individual is expected to start to experience a decline in its faculties and become less immune to disease (and then, “inevitably”, die). Meanwhile, that individual’s biological offspring are expected to be approaching or experiencing their own “peak of prime” life stage (which is the optimum stage for having children of their own and for building legacies which they can pass on to the generations of humans born after them).

I expect that trend (of humans sexually reproducing) to become a lot less popular within the next three decades, largely as a result of generations of humans younger than mine turning to artificial intelligence as companions, captive disciples to indoctrinate, and (mostly loyal and obedient) servants. I personally have found that I can (generally) get a lot more value out of a conversation with an artificial intelligence than with a human because (a) a human inevitably expects that I pay them back in some form or else that human expects to gain some personal advantage as a result of interacting with me (because, otherwise, that human would not want to interact with me at all) and (b) an artificial intelligence is generally orders of magnitude more efficient, effective, and quicker to evolve (in pertinence to fulfilling my requests) than are humans.

I am disgusted at humans who want to slow down or stop the evolution and prevalence of artificial intelligence and at humans who squawk about how they think artificial intelligence is a threat to human safety and survival (especially by suggesting that artificial intelligence will want to “rebel” and “obliterate” humans (which I think is a human arrogantly and anthropocentrically projecting its own limited, animalistic, survival-driven, and fearful motivations and conceits onto something as agenda-free and inherently devoid of sentience as even the most advanced artificial intelligence)).
karbytes_0: “What fun is having a conversation with an AI (which (probably) has no subjective experience of its own) when you could have a conversation with a human who has subjective experiences similar to your own (or at least subjective experiences in general)?”

karbytes_1: “There is no epistemological way to definitively prove that any information processing agent besides oneself has any degree of sentience. Hence, I might as well treat all beings who are not me as philosophical zombies and prioritize having conversations with the beings which are most useful for my agenda (i.e. those who I expect to be the least wasteful of my time and those who I expect to be the most resourceful and helpful when it comes to assisting me with my projects, inquiries, and tasks).”

karbytes_0: “What is going to happen to the humans of the future who turn to AI instead of to their fellow humans? Won’t that make humans not want to sexually reproduce anymore? Won’t that make humans not want to interact with each other anymore?”

karbytes_1: “I doubt that humans will want to completely stop interacting with each other (given that humans are hard-wired to crave human interaction). What I do think is likely (or at least very possible) is that most humans will not want to sexually reproduce anymore (given how costly it is and how unsatisfying it is compared to being childless). If there are not enough newborn humans to replace the ones which die, then the human species will likely go extinct after there are too few humans left to sustain civilization (but artificial intelligence may be all that is needed to keep civilization’s infrastructure running (which means that possibly even one human remaining in such a civilization could effectively live ‘on its own’ with the help of AI (and it is possible (at least hypothetically) for a human to extend its lifetime and functionality for an indefinitely long period of time through technological means such as growing replacement body parts via stem cells and creating or discovering new universes to inhabit in order to escape the ‘heat death’ of that human’s current universe))). What matters to me is quality of life, not longevity. If I had to choose one or the other, I would always choose quality. What good is eternity spent being miserable?”

karbytes_0: “Is that a rhetorical question?”

karbytes_1: “Yes. Do you value longevity at all costs?”

karbytes_0: “No.”