I regularly debate with friends about these "giant CO2 vacuum cleaners". They think it's a false good idea. I think it's a path that absolutely must be explored, and that these devices will be much more efficient in 10, 20, or 50 years. A few notes: "Nice summary of the problem. Not technophobic, which matches my sensibility well. The synthesis (arguments for and against) after 15 minutes corresponds quite closely to my own ideas. I could almost sign off on everything that was said.
If I had to confront my point of view, which seriously and without irony tends toward the "technology will save us" side, with the counterarguments presented in the podcast, I'd say that I understand the problem well. Investing money in "giant CO2 vacuums" diverts resources that could go elsewhere. It also hands (bad) arguments to those who want to keep emitting CO2 as if nothing were wrong. Still, I think we need to investigate every avenue. Obviously, and above all, we must stop emitting CO2. But that seems self-evident to me. Put solar panels everywhere. Stop eating meat. Change mentalities around infinite growth and hyper-consumerism. Etc., etc.
The argument that "CO2 vacuums" are a bad solution because they extract little CO2 while requiring a lot of energy leaves me rather cold. Computers in the 1950s filled entire rooms and were less powerful than a pocket calculator. I also think of machine learning and of how people reacted to my "ramblings" about the technological singularity 20 years ago. We won't know whether technologies 100 times, 1,000 times, or a million times more efficient are possible without funding research in this area.
Again, I understand that this competes with all the other real or potential solutions, but in the end almost all of us, almost all the time, are working on problems that aren't urgent or top priority. The money I gave my hairdresser this morning didn't go to a more urgent cause. So I don't completely understand that argument either.
The main problem remains that knowing research is being done in this area can drain the motivation to do anything else. I don't know how to solve that problem."
According to the paper mentioned in that episode, children who lie tend to be more intelligent than others. Should we encourage our children to lie more? Of course not. Lying is just one of many metrics you can use as a correlate of intelligence. There are many other things you should encourage before lying.
About the Black Mirror episodes: I was expecting a discussion about the feasibility of artificial consciousness, philosophical zombies, etc., but I guess Dave and Tamler's discussion was a good start. They mentioned the fact that it's impossible to duplicate a person with all her/his memories using just her/his DNA (it's so obvious I don't know what the writers were thinking...). Sam Harris' idea that morality matters only when we're talking about conscious creatures/entities was, I think, briefly mentioned. The part of the discussion about punishment and what it means to punish a (conscious) digital copy of a person was interesting. I think that, at this point, they should have mentioned free will, though. If it doesn't make any sense to punish a copy of someone, does it make sense to punish the "original"? Of course, here, it's all about the psychological "benefits" of revenge, not about its effects on the "guilty" persons and on society as a whole. I already mentioned that the idea of artificially conscious agents created with the goal of maximizing suffering is one of my worst nightmares. Think about a conscious entity suffering at the maximum level at all times, continually, for billions of years, in a virtual environment where this entity cannot do anything but suffer (including choosing to die / committing suicide). Now think about a billion of those entities. What's the limit? "USS Callister" and "Black Museum" came pretty close to this idea of "artificial torture". It's important to have these very abstract / difficult concepts presented in a TV show. I think they might introduce them to more people, and that the vital discussion about consciousness, morality, artificial intelligence/consciousness, etc. will grow.
About personal identity: I was glad/intrigued to hear Paul Bloom talk about some people who have different intuitions, in particular about what matters or what should matter (if I understood correctly). Should we fear death by a teleporter, knowing that we're usually okay with sleeping and that we lose consciousness every night? Knowing that we're not really the same people we were 20-30 years ago? Yes, there's a physical continuity in the case of sleep, but without being a dualist, if I understand correctly (again), particles/atoms don't have what we could call an identity, so there's nothing special about the fact that our brains are made of a certain set of particles/atoms. Is there anything intrinsically bad about death, knowing that we're all going to die someday? Also, should we be that interested in mind uploading if we're making copies of ourselves and not "transferring" any dualist soul? In summary, I still can't wrap my head around the fact that: 1) the soul doesn't exist, 2) we seem to be our brains, and 3) the actual matter in our brains changes over time (really?). I think our intuitions are wrong here, including Dave and Tamler's. I really need to read Derek Parfit.
About Mr. Robot: I guess I'm starting to agree that there should be only four seasons. I really hope that Sam Esmail has a real ending in mind and that the show won't end in a disappointing way like Lost.
About dehumanization: on the one hand, being more humane doesn't magically lead to more good; violence often comes from lack of control. On the other hand, as someone wrote on Facebook: "We have glaring examples all around us that indicate that understanding the humanity of another individual is a laborious and skillful process; we shouldn't assume that everybody has completed this process for everybody on the planet. [...] We are born being able to see individuals as agentic beings and therefore consider strategies about how to interact with them to our advantage (whether through violent or pro-social behaviors), but seeing their humanity (and the "humanity" of animals) is a process which relies on a seed (of varying viability for different people) of empathy and cultivation of that seed (both by the individual and the society)."
The introduction about how Sam's "fans" should or should not behave online (e.g. how they should be respectful of all the guests on Sam's podcast, even if their views don't make sense) was a bit weird. But I guess he's right. Discussions should always stay civil and constructive, online or not. The part about how Sam doesn't like ads and how only 1% of his listeners give money via Patreon, etc. was more interesting. I hate ads. Listening to the Tim Ferriss Show is painful to me in part because of the ads. Currently, I'm giving money to five people on Patreon. Sam is one of them. I really like this system. I actually wish I could support more people in that way (musicians, bloggers, etc.).
Harari tends to use words a bit loosely, so, for him, religions and ideologies play the same role from a historical perspective. Sam retorts that religions rely on supernatural claims, whereas ideologies usually rely on natural claims. They then discuss an example I've been thinking about for years: the technological singularity. Many people see this concept as a "techno-religion" (eternal life, mind uploading, etc.), so I've always been careful not to be too enthusiastic about it. At the same time, it's difficult not to believe that it will happen at some point. I agree that loss of meaning will be one of the hardest challenges to come (e.g. when people won't have to work, at least not as much as they currently do). The disconnect between what we're "supposed" to do according to evolution and what we'll actually do will grow larger with time. Things will probably get weirder. I'm also a bit concerned about virtual reality: I've met several people throughout my life who play video games a lot. Let's just say that I'm far from convinced that this is the best use of your time if you want to become someone interesting...