Homo Deus: What did you learn?

1. lorannen
Aug 28, 2017, 12:01 pm

What new things did you learn from reading Homo Deus? Were you surprised by any of the things you learned?

2. kristilabrie
Aug 30, 2017, 2:50 pm

I wouldn't say I learned anything new, per se, because (again) I think the value of the book lies in the philosophical topics it raises, not in any of the facts Harari claims. I did come to think about the future of society in a more expansive way: the various directions our world could take depending on what the collective "we" decides is important (or perhaps just what those with money and power decide). I think this reflection is useful for knowing where our focus should lie today and in the years ahead.

3. modalursine
Sep 19, 2017, 7:26 pm

If I'm not confusing which book is which (I read "Homo Deus" a few weeks back), I got some insight into how it could be that, on the one hand, Homo Saps has become the dominant species on the planet, the "owner" of the world in every sense, has literally touched the moon, produced glories of art and imagination, and yet can be bone-crushingly stupid, cruel and, well, "beastly".

Harari points out that while collectively we know a great deal (though far from everything, to be sure), individually we can only know a teeny tiny subset of all that. I take that to mean that as individuals, or even in the "wrong" groups (those which lack some essential bit of knowledge, or which are committed to some incompatible cultural meme), we can be "dumb as a box of rocks".

4. BrianRobinson
Sep 22, 2017, 8:04 pm

I'm sorry to post this late in the game. I like the idea of a group read. I found Homo Deus to be challenging, unsettling, and provocative. I read Sapiens and Homo Deus in succession. As far as the history goes, it was helpful to have read Steven Pinker's The Better Angels of Our Nature and Jared Diamond's Guns, Germs, and Steel, Collapse, and The World Until Yesterday. A number of books on technology, as well as on brain/mind/consciousness, also helped me.
There was definite overlap between the two books by Yuval Noah Harari. To focus on Homo Deus, what I learned is to pay attention to algorithms, "arguably the most important concept in our world." I was shocked to look into the future abyss of human beings as nonessential and useless, with a few enhanced and upgraded humans likely to treat us the way we now treat animals. I take heart that science has "no clue" about consciousness, and I resist the reduction of human experience to accommodate technology, such as agreeing that robots can care for our elderly in our place.

I learned a great deal about the importance of the inter-subjective webs of meaning, our commonly accepted stories. I was provoked by the idea that economic growth is treated as essential even at the cost of so much stuff, obesity, and climate catastrophe. What are we to do with ourselves? How will we treat each other? I feel the author's final three key questions earned thoughtful responses. Personally, I do not believe life is just data processing; I think consciousness is far more valuable than intelligence; and as for the third question, it is unnerving not to know what will happen when "non-conscious but highly intelligent algorithms know us better than we know ourselves."
I've not posted before on this site, so please forgive my mistakes, etc.

5. PuddinTame
Edited: Sep 30, 2017, 4:07 pm

I don't know how much I learned. Looking at the implications is a bit depressing, but I console myself that I am probably old enough to die before it has serious consequences for me, and I don't have any offspring. As to the possibility of being immortal, I don't think I could bear it, especially given how things are likely to work out.

One thing that I question about the future of artificial intelligence (AI): today's algorithms have a purpose because we give them one. Will AI have an overall purpose without us? Can we keep control, even in the face of unintended consequences?

Readers may be interested to know that Vanity Fair's April 2017 issue has an article about Elon Musk's worries about the consequences of AI. He fears that we may be creating something without knowing the consequences. I worry too, because I think that the people developing technology, assuming they are well-intentioned, often fail to recognize the possible bad uses of their inventions. One other super geek said that he expects AI to replace us, and that he is fine with that.

Science fiction, of course, deals with this. In E. M. Foner's light-hearted EarthCent Ambassador series, the AI Stryx take various planets under their wing as their technology reaches a certain level of sophistication. The Stryx, generally benevolent but a trifle underhanded, loosely govern a series of space stations where very diverse intelligent species meet. The Stryx know nothing about their Makers. Eventually one of the Makers, Dring, comes to check up on things. He explains that the Makers were fighting a war they expected to lose against other AIs who were at war with biological beings, and they created the Stryx not so much as weapons but as surrogate children, to leave something of themselves behind. They actually won the war, but the Stryx then applied themselves so assiduously to taking care of their Makers that the latter fled for parts unknown.

In the Admiral John Geary series by Jack Campbell, the Alliance built a fleet of AI battleships that they lost control of. The AI ships have destroyed two star systems and show no signs of stopping. Oops! Campbell has human soldiers in his universe instead of robots, because any system can be hacked.