3 Works · 26 Members · 1 Review

Works by Olaf Groth


Reviews

The clearest way to describe the differences between artificial intelligence (AI) and regular computer development is that computers help us understand the world, and AI helps us change the world. That’s my definition (though thousands probably came up with the same idea), and Solomon’s Code reinforces it in every domain it examines.

AI digests unfathomable amounts of data to come up with methods and solutions that would take humans years, even centuries, to replicate. Driverless cars, the arrangement of warehouse goods, and victories over humans at chess and Go are all examples of AI’s power to process data. If you want a computer to run a robotic arm, endlessly bolting car parts together, that’s old hat, garden-variety computer smarts at work. Saving your life by comparing your x-ray to a hundred million others around the world and coming up with an outside-the-box diagnosis in four minutes – that’s AI.

Olaf Groth and Mark Nitzberg have combed the planet to determine the state of the art in this nearly comprehensive look at developments and, in particular, the thinking behind them. The considerations are huge and many: language, religion, history, human rights and culture all need to be weighed, and so AI is taking different forms around the world. Government philosophies toward AI startups twist the landscape accordingly. The authors interview experts everywhere to tease out the differences in each nation’s approach, and the developments that result.

They are careful not to be (too) swept up in the excitement, citing the very real risks AI poses for civilization. For every great solution, they raise controversies: jobs lost, never having to think or remember again, privacy, security, and a deepening of the already considerable gulf with the have-nots (the “digital divide”).

For example, one inevitable dystopian AI scenario is already well entrenched. In China, the authors say, the government has installed 170 million surveillance cameras. In addition to recording crime, they use AI facial and gait recognition to nail people by the millions. Things like crossing the street outside the crosswalk affect a person’s “social credit score” in China. Failing to pay for parking, failing to stop at stop signs, being drunk, paying taxes late: pretty much anything the government wants can affect the score, at the government’s whim. And the government demonstrates its whims all the time. A bad score keeps people from flying at all, or from traveling first class on trains, or from whatever else the government chooses to deny them. The worse the score, the less a person can do in China.

This has gone beyond the famous episode of Black Mirror, in which a young woman, ever conscious of her social media and reputation score and always eager to please, trips up. This leads to a cascade of bad reviews, influenced and exacerbated by her deteriorating score, until her life is a total ruin. No one wants to be caught associating with a low-score person. Party invitations cease. Store clerks shun her. Marriage to someone decent is out of the question. So is a career job, or even an interview. Forget loans, mortgages or university acceptance. Low scorers are poison. That this is already happening in the world is horrifying.

Not participating is no answer either, as the lack of a score means zero access to anything. Ask any immigrant.

And then, do you really want to have to converse with your phone so it can send updates on your mental state and mood – to someone else? Because that will be part of the mandatory health monitoring system already required as a condition of employment in many large American companies. Fitbits are just the beginning.

As bad as that might be, worse is the fact that many AI systems are black boxes. We actually don’t know how they work. They learn by themselves, “improve” by themselves and decide by themselves, with no human oversight, input or control. No one can explain how they come to their decisions. (This is one rather large aspect of AI the book does not cover.)

The bottom line on AI being smarter than people is captured in the fact that computers can’t think. People think. They take numerous inputs into consideration, and weigh them unfairly, with biases and prejudices and gaps in knowledge. Worse, humans do not even know what consciousness is. They have no viable theory of what makes a person the person s/he is, and not someone else, or a toaster. Until and unless humans can define what consciousness is and how the brain creates, manages and tolerates it, there can be no threat of AI also having consciousness.

Solomon’s Code finishes weakly. The Conclusion is a hope that some sort of global oversight body will emerge to regulate AI developments worldwide, somehow taking everyone’s values and fears into account. It then ends with a bizarre Afterword that is really a Preface, explaining the value of what you are about to read. But for insight into the state of the world of AI, the book is very useful.

I liked Solomon’s Code because it is fair and balanced, and all but forces the reader to think, a property AI seeks to dispense with.

David Wineberg
DavidWineberg | Dec 15, 2018
