Tipping point? Artificial Intelligence program teaches itself to learn


1MaureenRoy
Edited: Nov 25, 2016, 8:41 pm

Earlier this fall of 2016, Google announced that its artificial intelligence (AI)-driven language translation system displayed an unexpected ability to translate an Asian language into English ... it had not previously been introduced to that Asian language. The plot thickens: When the Google engineers looked at the AI-created language interface for that new translation ability, they could not understand it.

My first question about that Google AI system is, "Is it trustworthy?" For example, AI applications are being developed to evaluate risky situations (such as mining). If we do not develop metrics for assuring the trustworthiness of such systems, that could be the biggest mistake of all. Of course, it may not be possible to develop metrics for evaluating trustworthiness. Then what? Oops.

Here is a news story on the unexpected new AI capabilities of the Google translation system:

http://hothardware.com/news/google-researchers-discovered-their-ai-translation-t...

Here is a 16-page journal article on these developments by those Google programmers, with samples of the AI-created mystery language interface:

https://arxiv.org/pdf/1611.04558v1.pdf

2southernbooklady
Nov 26, 2016, 8:19 am

That is wild.

3DugsBooks
Edited: Nov 30, 2016, 5:43 pm

Very interesting. I wish I knew more about coding; I read a {very} little about neural networks some time ago.

"My first question about that Google AI system is, "Is it trustworthy?" For example, AI applications are being developed to evaluate risky situations (such as mining). If we do not develop metrics for assuring the trustworthiness of such systems, that could be the biggest mistake of all. Of course, it may not be possible to develop metrics for evaluating trustworthiness. Then what? Oops.

I worked on high-rise buildings some years ago. If you watched the architects who visited the structures as they went up, you would notice they stayed near "core" areas where the concrete columns were thickest and strongest. Any on-site "offices" I saw were located in small, semi-enclosed sections with at least 2-ft-thick walls of steel-reinforced concrete ;-)

4DugsBooks
Dec 8, 2016, 5:02 pm

Scientists discover nature’s algorithm for intelligence:

A rather alarming ExtremeTech article about a recently published "... Theory of Connectivity holds that a simple algorithm, called a power-of-two-based permutation taking the form of n = 2^i - 1, can be used to explain the circuitry of the brain."
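
If I read the formula right (this is just my own toy illustration, not anything from the article or paper), i distinct information inputs give 2^i - 1 non-empty combinations, each supposedly handled by its own neuronal clique:

```python
# Count of neuronal cliques predicted by the Theory of Connectivity, as summarized in
# the article: i distinct inputs -> n = 2**i - 1 non-empty input combinations.
def clique_count(i: int) -> int:
    return 2 ** i - 1

for i in range(1, 8):
    print(f"{i} inputs -> {clique_count(i)} cliques")
```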

5lorax
Dec 8, 2016, 5:14 pm

I'm not sure what you mean by "When the Google engineers looked at the AI-created language interface for that new translation ability, they could not understand it.". Certainly the abstract makes no mention of such a novelty, it just says that a translation program trained on various language pairs was able to translate between pairs it was not trained on. To quote from the paper, "For example, a multilingual NMT model trained with Portuguese→English and English→Spanish examples can generate reasonable translations for Portuguese→Spanish although it has not seen any data for that language pair." This is a far cry from "OMG the computer is doing Stuff and we don't know what it's doing!!!11!!!" Given that the *entire goal* of the project, based on the abstract and introduction, appears to have been an attempt to enable this sort of shortcut (by translating everything to an internal 'language' and back) I suspect the press release was engaging in a bit of typical press release hyperbole.

The article also makes NO mention of a previously unknown language, just of previously untrained language pairs. That goes beyond typical press-release hyperbole into outright lies.
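
For what it's worth, the data-side mechanism in the linked paper (arXiv:1611.04558) is quite mundane: a single shared model is trained on many language pairs, with an artificial token prepended to each source sentence to indicate the desired target language, and a "zero-shot" request is just a combination of token and source language the model never saw paired in training. Here is a rough sketch of that data preparation, with toy sentences of my own rather than the paper's actual code or data:

```python
# Toy sketch of the multilingual-NMT setup described in the linked paper: one shared
# model, many language pairs, and an artificial target-language token prepended to the
# source text. The sentences and helper below are placeholders for illustration only.
def make_example(target_lang, source_sentence, target_sentence):
    return (f"<2{target_lang}> {source_sentence}", target_sentence)

training_data = [
    make_example("en", "Olá, mundo", "Hello, world"),   # Portuguese -> English
    make_example("es", "Hello, world", "Hola, mundo"),  # English -> Spanish
]

# At inference time, a pair never seen together in training is requested the same way:
zero_shot_input = "<2es> Olá, mundo"   # Portuguese source, Spanish requested
print(training_data, zero_shot_input, sep="\n")
```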

6DugsBooks
Edited: Dec 9, 2016, 12:10 pm

>5 lorax: 'The article also makes NO mention of a previously unknown language, just of previously untrained language pairs. That goes beyond typical press-release hyperbole into outright lies'.

When I see stuff like that I write it off to somewhere on the spectrum from good headline to "clickbait." Since we aren't already "captured" by holding a magazine and reading the articles, there is a tendency toward hyperbole, as you state. As long as the article is decent and has links to the actual research, I don't mind too much.

A few years ago I called a local TV station about a particularly disgusting method of that sort. They were running repeated ads like "On the news at seven we report on a cure for cancer!" throughout the day. It turned out to be, of course, a small but significant research advance {not a 100% cure} for a very specialized type of cancer. I called a producer and explained how horrid that attempt to increase ratings was, grabbing the attention of people suffering from cancer. I have not heard them use that sort of phrasing since; I guess someone was embarrassed and not completely blinded by greed.

7.Monkey.
Dec 9, 2016, 12:12 pm

I'm sure it is totally clickbait phrasing and the creators are not in any way alarmed by this, but the article does say it:
"It's how the Google AI achieves this that is a bit of a mystery. It appears that the NMT has created its own internal language or "interlingua." It examines concepts and sentence structures instead of word equivalents. As a result, the NMT has created translations that are more accurate and natural. Google’s NMT creators, however, are unsure how the neural networks work or what exact concepts the NMT has learned translate languages directly. In short, Google's AI has created its own secret language we humans do not fully understand."
It was indeed programmed to do things on its own, so surely this is not a horrifying idea to the creators, as the article portrays it. But yes, it did say that people didn't understand precisely how it was done.

8DugsBooks
Dec 9, 2016, 12:26 pm

>4 DugsBooks: Just wondering if, using the formula, anyone has extrapolated how large a neurally wired computer would need to be to approximate a human brain {from the article: "where a principle like this could enable the creation of artificial brains that are wired in a manner directly akin to our own"}.

9lorax
Dec 9, 2016, 12:28 pm

>7 .Monkey.:

Yes, but the 'interlingua' is exactly what it was intended to do, and it's the nature of neural nets that they're a bit opaque (which is why they had fallen out of favor for a while, but with 'deep learning' they're now coming back into vogue for some purposes). But "Neural net does what it was designed to do" doesn't get many clicks, while "Computer does stuff humans don't understand!" does. That's forgivable. Saying that a computer that had never been exposed to a particular human language was able to translate that language - which is apparently what the press release said, if MaureenRoy's description of it is accurate - is not.

10MaureenRoy
Edited: Dec 19, 2016, 4:43 pm

I find it interesting that "neural networks" and machine learning programs are all digital programs at this time, not analog. In addition, a programmer who has worked with machine learning (ML) and is now taking an AI course tells me that ML will never create a consciousness, no matter how advanced it gets.

11jjwilson61
Dec 19, 2016, 5:43 pm

>10 MaureenRoy: I don't think we can even define consciousness at this point.

12lorax
Dec 20, 2016, 11:03 am

>10 MaureenRoy:

I won't say that no programming technique will ever create consciousness, but what is currently defined as ML isn't going to be how it's done.

>11 jjwilson61:

That's the crux of it, yes. We have chat bots that in limited contexts have passed the Turing test, but that's largely because expectations for random online chats with strangers are so low; nobody would say the chatbots have achieved consciousness.

13jjwilson61
Dec 20, 2016, 12:41 pm

>12 lorax: And on the other hand, if you define consciousness as self-awareness then a thermostat could be said to pass that test.

14timspalding
Edited: Dec 21, 2016, 6:21 am

As an amateur philosopher, I can't abide the nonsense around all this. But I'll leave that aside and just say that, as a developer, I don't see the big deal either.

It's perfectly clear that a person can create a program that does something weird, and creates and consumes data in ways a person can't easily understand.

But, really, a person could "understand" it. They would re-run the program with paper and pencil, playing the computer, as we all did learning to program. It would take a long time, and a lot of paper—perhaps more than has ever been made in the history of the world. And the "understanding" would be really boring. So what? Computers are good at doing things too boring to do, or to understand. That I can't understand something complex doesn't make it self aware. If it did, long division would be self aware--for my son.

Do I understand LibraryThing's code? Sometimes. Do I understand the ways the code interacts with, and communicates with, other parts? Sometimes, in PHP at least. Do I understand the opcodes it compiles down to? No. Do I understand the machine language that actually gets executed? No. So?

Understanding is beside the point. I don't understand exactly why the corn flakes fell into the bowl as they did and not the infinitude of other ways they could have. I could even think of the interactions between the flakes, as their many parts bonk against each other in the box, as a "language"--massively complex communication happens between flakes. If I cared, had perfect knowledge and infinite time I could understand that language. So? Someone made the flakes. They made them to bounce a certain amount--more than some cereals, less than others. And to do their job, not to be "understood." That's enough. And it's not AI.

15lorax
Dec 21, 2016, 9:37 am

>14 timspalding:

It's not talking about understanding at the code level. It's talking about the algorithm. Some sorts of machine learning techniques are much more easily explained and understood than others. As a simple example:

Why did my email client identify this message as spam? Programmers gave a classifier a whole bunch of messages that were spam as identified by humans, and a whole bunch that weren't, and told it which features to use - common words, length, origin, subject line, etc. Then the algorithm found that some features occur much more often in spam messages than in good messages, and vice versa, and used those to identify this incoming message as spam.
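
To make that concrete, here is a minimal sketch of that kind of classifier, assuming scikit-learn's bag-of-words features and Naive Bayes; it is only an illustration, not the code of any particular mail client:

```python
# Minimal spam-classifier sketch: human-labeled examples, simple word-count features,
# and a classifier that learns which words skew toward spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_messages = ["cheap meds buy now", "win money fast", "lunch tomorrow?", "meeting notes attached"]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam, as labeled by humans

vectorizer = CountVectorizer()                       # the "which features to use" step (word counts)
X = vectorizer.fit_transform(train_messages)
classifier = MultinomialNB().fit(X, train_labels)    # learns which features occur more often in spam

new_message = vectorizer.transform(["buy cheap meds"])
print(classifier.predict(new_message))               # -> [1]: flagged as spam
```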

A developer with access to the code could even look at the message and identify which particular features were important in making that determination, and it's that level in particular that's missing from neural nets. Which, as I said back in #9, is a big part of why they were unpopular for so long. But of course you just want to dismiss everyone else in the discussion on both philosophical and technical grounds as spouting 'nonsense'. Is it surprising that developers didn't understand the details of a neural net? Of course not. Are some algorithms more comprehensible than others? Of course they are.

And NOBODY except perhaps the press release writer, who has been near-universally condemned for their hyperbole, has suggested that a lack of understanding constitutes self-awareness, so it's a bit of a head-scratcher why you brought up that particular strawman.

16timspalding
Edited: Dec 21, 2016, 10:41 am

It's not talking about understanding at the code level. It's talking about the algorithm. Some sorts of machine learning techniques are much more easily explained and understood than others. As a simple example:

Yes, but they are arrayed along a spectrum of ease. A very simple neural network can indeed be explained, fully and completely. A more complex one cannot. The same goes for all code.
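
To make "fully explainable" concrete, here is about the simplest possible neural network, a single neuron with hand-set weights computing logical AND (a toy sketch of my own, not from anything cited in this thread); every output can be traced by inspection:

```python
# A "fully explainable" neural network: one neuron, two inputs, hand-set weights.
# It fires only when the weighted sum of its inputs clears the threshold, i.e. AND.
def neuron(x1, x2, w1=1.0, w2=1.0, threshold=1.5):
    return 1 if (w1 * x1 + w2 * x2) >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron(a, b))
```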

Why did my email client identify this message as spam? Programmers gave a classifier a whole bunch of messages that were spam as identified by humans, and a whole bunch that weren't, and told it which features to use - common words, length, origin, subject line, etc. Then the algorithm found that some features occur much more often in spam messages than in good messages, and vice versa, and used those to identify this incoming message as spam.

In any sufficiently complex program, this is more a useful story you tell yourself about the algorithm than anything else.

Imagine, if you will, that your email spam detector had a bug somewhere that mislabelled things from time to time. If I asked you how it worked, you'd repeat the story above. But your story would not match what it was actually doing. You wouldn't say the truth of the matter--"The algorithm is designed to chop up multibyte strings in potential classifiers in such a way that, when Russian words of more than 20 characters run through the system, half a byte goes into the spam-probability percent, massively skewing it in arbitrary but entirely predictable ways." This was the algorithm actually implemented, but you wouldn't say it. Nor would you say it about any other large program--in most cases you'd never even know what bugs were lurking, and would have to fall back on what you think it does, not what it actually does.

When the Google engineers looked at the AI-created language interface for that new translation ability, they could not understand it.

My answer: So?

AI-created language interface

There is no AI. There is no "language interface" or "translation ability" either. This is all the pathetic fallacy applied to silicon and electricity.

The code did what the code did, as code always does. It doesn't matter whether it's easy for us to understand or hard. It doesn't matter whether it's a language interface or a systemic bug involving multibyte characters producing output as arbitrary as white noise. We call it a language interface, we speak of "translation ability" and, if it works for us to reduce the complexity and half-understand what it's doing, we can get along with that fiction. But it remains a fiction.

so it's a bit of a head-scratcher why you brought up that particular strawman

I brought it up because this sort of thing inevitably leads people to use stupid, inaccurate terms that suggest all sort of things to people. Start with the term "AI," in the piece.

I won't say that no programming technique will ever create consciousness, but what is currently defined as ML isn't going to be how it's done.

And, well, you brought it up.

17lorax
Edited: Dec 21, 2016, 10:49 am

>16 timspalding:

First, let me say that mixing quotes from different people (especially when you interleave quotes from Person 2 amongst those from a single post of Person 1) without attribution is really confusing, bordering on deliberately misleading. That said:

(me, in 12, RESPONDING TO MAUREENROY IN 10) I won't say that no programming technique will ever create consciousness, but what is currently defined as ML isn't going to be how it's done.

(you, in 16)

And, well, you brought it up.

1. I was responding to MaureenRoy. I did not "bring it up".

2. What she was describing was the general question of whether computers can ever achieve consciousness - which I do not think has been definitively answered - not your strawman question of "Does a computer doing something we don't understand mean it is conscious."

There is no AI.


There certainly is. I don't like the term, because what it means to someone in the field is not what it means to a member of the general public, but that doesn't mean the term isn't used to refer to something real.

There is a set of algorithms and techniques that people who use them refer to as AI. This is not a general-purpose artificial intelligence, nor is it the sort of thing that by any stretch of the imagination could lead to a conscious program. That doesn't mean that set of algorithms doesn't exist, and coming into a discussion where people are using the term in the first sense to object to the fact that nothing in the second set exists yet is sort of pointless. Now, you could complain about the use of the term, and I'd agree with you 100%, but that doesn't mean "it doesn't exist".

18timspalding
Edited: Dec 21, 2016, 11:19 am

>17 lorax:

Terms can point to something and, within a certain context, be useful, while also being founded on a fundamental error.

If I were at a meeting of the Supreme People's Assembly in North Korea, and someone referred to decision being made according to the "unshakeable will of the people" I'd know what they were referring to (what the current Kim wants). If I were a member of that body, I might use the term every day, usefully distinguishing between Kim's will and matters we need to find out Kim's will on. Within my little profession, the term would be practicable and well understood. But it would never stop being founded on propaganda, and pointing in reality to something entirely different from what it seems to.

An extreme example, perhaps, but at least "the unshakeable will of the people" is someone's will. Artificial intelligence isn't intelligence. It'd be just as accurate to call the fuzzy-edge ball of code techniques "amazing gremlins" or "fairy ghosts." And in a certain context I'd be okay with talking about gremlins and ghosts in code. But I don't think anyone would be misled by those.

Does a computer doing something we don't understand mean it is conscious.

You didn't say it, but I don't consider this a strawman, as the language of the piece repeatedly falls into this error. The NMT has "learned," it has "created its own language we humans do not understand" (but it does) and so forth.

19proximity1
Edited: Feb 28, 2017, 4:19 am

General Question (to all here, except TS, who obviously "gets it") :


"Why is >16 timspalding: so damn hard for some of you to understand?!"


Esp. : "There is no AI. There is no "language interface" or "translation ability" either. This is all the pathetic fallacy applied to silicon and electricity."

"The computer" has no 'awareness' of the functions (or any meanings attached to them) which it is "carrying out." All you're describing is a process of signal-switching, electrical processes which sort, combine and tabulate symbolic representations of "zeros" and "ones", "on" versus "off". There is _NO_ "Thinking" going on here--except on Tim's part.

WTF is wrong with the rest of you?!

20DugsBooks
Edited: Jan 15, 2019, 10:49 am

I thought I might mention here a book I checked out today: AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee.

I saw the author interviewed on CNBC, as I remember, and he explained how he lived in Tennessee after his family migrated to the USA. In elementary school there the principal took every lunch period to teach him English. I think he is a billionaire now.

https://en.m.wikipedia.org/wiki/Kai-Fu_Lee

21DugsBooks
Jan 23, 2019, 10:52 am

Here is a quote from AI Superpowers that I typed out for a different forum, and I thought I would post it here. The first few chapters make me believe {probably delusionally} that I actually know a little more about AI structures. I have read about half of the book, and at one point Lee corroborates opinions above stating that an AI has no empathy or other human "virtues".

"So how does deep learning do this? Fundamentally, these algorithms use massive amounts of data from a specific domain to make a decision that optimizes for a desired outcome. It does this by training itself to recognize deeply buried patterns and correlations connecting the many data points to the desired outcome. This pattern-finding process is easier when the data is labeled with that desired outcome -"cat" versus "no cat":"clicked" versus "didn't click";"won game" versus "lost game". It can then draw on its extensive knowledge of these correlations- many of which are invisible or irrelevant to human observers - to make better decisions than a human could.

Doing this requires massive amounts of relevant data, a strong algorithm, a narrow domain, and a concrete goal. If you're short any one of these, things fall apart. Too little data? The algorithm doesn't have enough examples to uncover meaningful correlations. Too broad a goal? The algorithm lacks clear benchmarks to shoot for in optimization.

Deep learning is what's known as "narrow AI" - intelligence that takes data from one specific domain and applies it to optimizing one specific outcome."
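
As a minimal sketch of the "narrow AI" recipe Lee describes (labeled data, a narrow domain, one concrete goal), here is a toy "clicked vs. didn't click" model; the features and numbers are invented purely for illustration:

```python
# Toy narrow-AI example: labeled examples from one narrow domain, one concrete goal
# (predict whether a user clicks). The data is made up for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [hour_of_day, user_past_clicks]  (hypothetical features)
X = [[9, 4], [22, 0], [8, 5], [23, 1], [10, 3], [21, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = clicked, 0 = didn't click (the labeled desired outcome)

model = LogisticRegression().fit(X, y)   # finds correlations between features and outcome
print(model.predict([[9, 2]]))           # optimizes one outcome in one domain; nothing more
```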

22proximity1
Jan 23, 2019, 11:39 am



"It can then draw on its extensive knowledge of these correlations"

________________________

Despite his good intentions here, the author still indulges in an abuse of the terms "decision" and "knowledge" since the operations concern nothing other than the ordinary basis of all computer data operations: sorting strings of 0s and 1s.

At no time is anything actually "decided" or "known" in or by "the computer", which, after all, is inanimate.

23DugsBooks
Edited: Jan 23, 2019, 4:37 pm

>22 proximity1: Sounds like you need to agree on a definition of terms before you argue with the author - programmed to respond, the computer might say your knowledge, comprised of twitching neurons, is too fragile to be relied upon. ;-)

Here is a paywalled article that just popped up in my email. From what I have read about these situations, people have certainly not been eliminated from these types of processes; the AIs just point to another possible conclusion.

Artificial Intelligence Outperforms Humans in Cervical Cancer Screening

https://www.360dx.com/informatics/artificial-intelligence-outperforms-humans-cer...

24proximity1
Jan 24, 2019, 6:22 am


"Sounds like you need to agree on a definition of terms before you argue with the author - programmed to respond, the computer might say your knowledge, comprised of twitching neurons, is to fragile to be relied upon. ;-)"

:^)

The key phrase there being "programmed to respond, the computer might say..." since the computer, unlike you or me, could never come up with such a retort on its own "initiative," for the simple--and, I'd thought, obvious--reason that it hasn't any initiative and cannot have any--ever.

If you program it "properly", a computer could operate a program which sorts and compares strings of what we recognize as "musical notes", and, from its pre-programmed code, it could be operated to compare strings of "notes" which, again, are by design defined as "melodic" within the terms of the program. Using these for its bases, the computer could sort randomly-input data strings of "notes" and, for its "out-put", produce a string of notes which closely resembles what its program has defined as "melodic."
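
Something like the following toy sketch (my own, not from any source cited here): candidate note strings are generated at random and ranked against a pre-programmed definition of "melodic," and the highest-scoring one is emitted.

```python
# Toy "melody" program: generate random note strings and keep the one that best matches
# a hard-coded definition of "melodic" (stay in the C-major scale, move by small steps).
# The program only sorts and compares numbers; it has no notion of music.
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C-major, as MIDI note numbers

def melodic_score(notes):
    in_scale = sum(n in SCALE for n in notes)
    small_steps = sum(abs(a - b) <= 2 for a, b in zip(notes, notes[1:]))
    return in_scale + small_steps

candidates = [[random.randint(55, 79) for _ in range(8)] for _ in range(10000)]
best = max(candidates, key=melodic_score)
print("Most 'melodic' candidate:", best)
```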

Only a very naive person would suppose that this output indicates that the computer has "learned" to "develop" a "sense of music" or of "melody."

Against that paltry showing, a human brain, using "only" its "twitching neurons," not only might, but in fact did produce this.

The game of chess is not even "child's play" for a computer-program.

25DugsBooks
Edited: Jan 24, 2019, 11:59 am

Since I can barely use a little HTML and I am not a neurologist, I can't really argue either side of a "Turing test" AI/consciousness debate {both sides would want me on the other team}. However, if you are interested in bridging that gap and becoming Borg, there is this job opportunity: https://neuralink.com/

26stellarexplorer
Jan 24, 2019, 11:20 am

This piece on the limits of science and physicalism may seem off topic, but it is a challenge to science and to AI that I think is well worth considering:

https://aeon.co/essays/the-blind-spot-of-science-is-the-neglect-of-lived-experie...

27DugsBooks
Jan 24, 2019, 12:06 pm

>26 stellarexplorer: Good article, thanks for the link; it concisely shows what I have been naively dancing around. I just skimmed a bit; I will read more later, I am sure.

28pgmcc
Jan 24, 2019, 12:35 pm

>25 DugsBooks:
I once had to deliver a talk on Knowledge Management to the company I worked in (about 150 people at the time). It was a consultancy & IT services firm. At the time there was a lot of talk about corporate knowledge leakage with a high staff churn in the industry due to the boom times. (1999/2000).

My session was timed for immediately after lunch on a full day of management talks. This, as any presenter will know, is the real graveyard shift. Everyone is sitting there with full stomachs and it is impossible to keep them awake. I decided I had to structure my talk in such a way that their attention would be held. I decided to disrupt the narrative.

The talk started boringly enough by my introducing the topic and saying we were exploring three ways of combatting the problem of corporate knowledge leakage and the on-going sharing of knowledge throughout the organisation. I then introduced the three approaches:
1. Direct connection to brains using a jack-plug into the neck as per Neuromancer
2. Telepathy as per The Midwich Cuckoos
3. Hive mind as per The Borg.

I ended on that picture of Locutus of Borg staring down at the audience from a big screen.

Nobody fell asleep.

29proximity1
Feb 11, 2019, 9:34 am



Explainable AI won’t deliver. Here’s why. | Let’s talk about interpretability, transparency, explainability, and the trust headache | by Cassie Kozyrkov* | Nov 16, 2018

________________________________________________

Wikipedia has an article about what "A.I. explainability" is.

"An Explainable AI (XAI) or Transparent AI is an artificial intelligence (AI) whose actions can be trusted and easily understood by humans. It contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision.1" (Wikipedia)

* Cassie Kozyrkov is Chief Decision Intelligence Engineer, Google (at the date the article appeared).

30proximity1
Apr 18, 2019, 9:34 am


More podcasts, articles and papers on the issues surrounding Artificial Intelligence (A.I.)

Sam Harris Podcasts: #151 - WILL WE DESTROY THE FUTURE? | A Conversation with Nick Bostrom

Sam Harris Podcasts: #153 - POSSIBLE MINDS
Conversations with George Dyson, Alison Gopnik, and Stuart Russell


_______________

David Chalmers
The Meta-Problem of Consciousness
The Combination Problem for Panpsychism

Idealism and the Mind-Body Problem

Modality and the Mind-Body Problem: Reply to Goff and Papineau, Lee, and Levine

_______________

The Myth of Sentient Machines | Digital computers might be fundamentally incapable of supporting consciousness. | Bobby Azarian Ph.D.

Why do we still need the agency when we have AI? | Jeremy Lo

AI is all very well. But can machines be taught common sense? | Richard Waters

Artificial intelligence: The time to act is now | Gaurav Batra, Andrea Queirolo, and Nick Santhanam

AI Safety Needs Social Scientists | by Geoffrey Irving & Amanda Askell

The Hard Problem of Consciousness | Josh Weisberg

The idea that everything from spoons to stones is conscious is gaining academic credibility | Olivia Goldhill

(Stanford Ency. of Philosophy)
Qualia: The Knowledge Argument | First published Tue Sep 3, 2002; substantive revision Mon Nov 23, 2009


Principles of Metareasoning (1991)| by Stuart Russell , Eric Wefald

Bounded versus unbounded rationality: The tyranny of the weak | Itzhak Gilboa and Dov Samet

Unbounded Rationality | Melvin F. Shakun

The Control of Reasoning in Resource-Bounded Agents | Martijn Schut and Michael Wooldridge

31proximity1
Edited: Apr 18, 2019, 11:43 am

I think there's an error of logical reasoning somewhere in this argument:

ARE YOU LIVING IN A COMPUTER SIMULATION? | BY NICK BOSTROM | Published in Philosophical Quarterly (2003) Vol. 53, No. 211, pp. 243‐255. (First version: 2001)

and as discussed here, at 01 hour: 07 mins. : 00 secs. #151 - WILL WE DESTROY THE FUTURE? | A Conversation with Nick Bostrom

and it would be an interesting exercise to look and find it.

Listen to the podcast from 01 hrs. : 07 mins. forward because Harris and Bostrom discuss the argument and some of the doubts that ensued from its hypothesis. (Esp. at 01 hrs. : 18 mins. : 05 secs.)

32stellarexplorer
Apr 18, 2019, 4:14 pm

Part of it is whether one is persuaded by the statistical argument.

33proximity1
Edited: May 2, 2019, 7:26 am

>32 stellarexplorer:

Yes. I suspect that almost all of it is in that detail.

I'm more and more convinced that, much as I admire Sam Harris, he's way, way off in his concern about Artificial intelligence one day leading to machines which are "smarter than people." The idea is itself simply not coherent. Whatever we mean by "intelligent", "smart," etc. just doesn't and can't apply to inanimate--insensible-- matter, matter that lacks all capacity for sense-perception. Some of this seems to come from people being seduced by the fact that some computer programs produce output that resembles genuine "thinking"--as in language-translation programs, or, especially, game-playing such as the games of chess and "Go"; but these examples are illusory. The machines are of course not playing a game in anything like the way a person is doing. The computer is running a program--which is all that a computer can do--in which it sorts files of data and compares "zeros" and "ones" in one file with a large set of other files of these same digits. Using statistical-probabilities, pre-programmed in the computer, the output is what corresponds to a human's idea of the "best possible move" in a game scenario of the kind which the computer's program sorts and compares. There is no thinking going on there.
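
To illustrate how mechanical that "best possible move" selection is, here is a toy sketch of my own for a trivial take-away game (nothing like the scale of a chess or Go engine, but the same rote score-and-compare loop):

```python
# Toy "best move" search (my own illustration, not from any source in the thread):
# the program mechanically scores every continuation and returns the maximum.
# Game: a pile of stones, take 1-3 per turn, whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def value(pile):
    """+1 if the player to move wins with best play, -1 otherwise."""
    if pile == 0:
        return -1  # no move available: the previous player took the last stone and won
    return max(-value(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    return max((t for t in (1, 2, 3) if t <= pile), key=lambda t: -value(pile - t))

print(best_move(10))  # prints 2: leaves 8 stones, a losing position for the opponent
```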

Harris believes that there isn't and, logically, cannot be any inherent (essential) difference between organic matter--which is the basis of living organisms--and other, inanimate matter. It's more accurate to say that we so far simply don't understand why there is such a difference and in what it consists. We're not that far along in our understanding of intelligence or of "life," "consciousness" and other key concepts. What he calls "wet-ware" is indeed essentially different--it has many qualities which distinguish it from other matter.

I want to develop this discussion a bit further with some posts on just how inanimate matter differs from living protoplasm and higher forms of invertebrate life and "higher" forms of life.

from SamHarris.org | Podcasts | #153 - POSSIBLE MINDS(1) | Conversations with George Dyson, Alison Gopnik, and Stuart Russell (April 15, 2019)


Please note this editorial remark about the protocols of the following transcription:



In his (more recent) discussion ( in Podcast #155: Mental Models) with Shane Parrish (on April 29, 2019), Sam voices some concerns about the character of verbatim transcripts of recorded spontaneous talk when these are rendered too strictly verbatim because practically no one can manage to speak extemporaneously in a fashion that looks like the neat and clean and coherent results of carefully-edited texts. I understand that concern and, in what follows, I'm really only trying to represent faithfully the discussion as recorded—without any intentions of doing harm to Sam's image. In fact, his extemporaneous speech is fine as is and, even transcribed verbatim, there is virtually nothing to reproach in its style, coherence, accuracy or clarity. Harris speaks off the cuff much better than many professional writers are able to write and revise their own words for print. (for his points on this issue, listen to the hyper-link above from (01: 28: 54) :


"If you're going to transcribe verbatim what your interview-subject says, you are harming that person because, in listening to someone speak, our ear naturally discounts for the false-starts and the filler-words and the errors that, on the page, look like symptoms of aphasia. You look highly inarticulate when things are transcribed. ..."


Well, Sam's extempore speech doesn't look anything like that when transcribed—even if transcribed in the strictest verbatim manner. And it's certainly not my objective to make his presentation look bad. Readers can go directly to the recording and listen and discover that, whether listening to them on the recording or reading them in a transcript, his words are fine and practically blemish-free—as much so as anyone's could be and I think this is fairly well understood by any reasonable listener or reader. In any case, I've taken some liberty at only a few places where I thought it not worth inserting pauses and hesitations as they didn't lend all that much to the vivid character and immediacy of the discussion which they otherwise often do add to the dialogue.


Everyone knows that one speaks differently from the way one writes and that transcriptions aren't supposed to be like the delivery of a polished, edited, text; that's part of the reason they carry special interest and sparkle—it is real, "un-canned" expression and there's much to be said for that. Indeed, that is what makes pod-casts so interesting and popular. It is a shame to polish every bit of that away.




02 hrs. : 8 mins. : 38 sec. :

Sam Harris to Alison Gopnik --

“We are living in a world where we could rather suddenly find ourselves in relationship to something far more competent than we are in an entirely global sense but it wouldn’t even have to be global—to be threatening. … Are you sceptical that the so-called “Alignment problem” or control problem, is really worth thinking about in the way that someone like Stuart (Russell) is … do you think that that is kind of red-herring?”

02: 14: 04

(later, speaking with Stuart Russell)

Sam Harris:

… “because you know, I go out into the world and argue with people who think that we shouldn’t be worrying about all this—and ‘all this,’ as our listeners will now know, is the prospect of building super-intelligent-A.I. (artificial intelligence) that is not perfectly aligned with human interests. And it’s just, it’s a bewildering experience because the situation I find myself in again and again is being faced with obviously very intelligent people and people who are close to the field … and yet it is a spectacle of very smart people either making terrible arguments or not even making an effort to make an argument at all and they are satisfied that they have closed the door to all these concerns” ….

Stuart Russell:

… “I can only put it down to a kind of self-defense mechanism.” …

02: 17: 54

Sam Harris:

“… I understand if someone thinks they’ve found some reason why super-human level intelligence is not possible, right?—or that we will never build it—there’s no good argument for that that I’ve found but I, but let’s say you thought that you had that argument; well, then, you’d feel the door to these concerns is barred. But for those who accept that we will eventually build something that is smarter than we are, there’s a ‘next step’ that many don’t take and it’s—they don’t seem to realize that we will be in relationship to that thing and this is by definition relationship where we don’t have full control or full insight into the mind of the other. This falls right out of the definition of what we mean by something more intelligent and powerful than ourselves. It’s almost like, you know, the two of us are sitting—if we were sitting here at a conference-table and some new person came into the room and started interacting with us—right? We are suddenly in relationship and the prospect of this ‘person’ being able to manipulate us or lie to us or out-class us intellectually—we have intuitions for this, it’s obvious that all of that is possible, but people seem to imagine that we’re going to build machines that, you know, that as stipulated, will be vastly more competent than we are and yet somehow they think it will be trivially easy to maintain a relationship with them that doesn’t go off the rails.” ...

… …

02: 24: 11

Sam Harris:

“My intuition is that we will build machines that pass the Turing-test before we understand actually how consciousness emerges and then it will seem like a problem we can’t really get a purchase on to worry whether or not these new creatures that are conscious—because it will seem conscious; and all our intuitions that cause us to effortlessly ascribe consciousness to one another and to other animals will be played upon and then it will just—you know, we’ll be in this place of not really having understood the point at which consciousness emerges and yet finding ourselves in relationship to machines that convince us that they are conscious because we will build them to be able to play that side of the board in a social game. We won’t build all of them that way but we’ll certainly build some of them that way." ...

...

… “If consciousness really does just ‘come along for the ride’ at a certain level of informational-complexity, and you actually can’t build something super-intelligent without the ‘lights” of subjectivity coming “on,” well then we could build super-intelligent conscious, yet completely alien, systems—systems that we have no idea whether or not they’re conscious—you know, there’s no interface that suggests that they are—and yet there’s something that it is like to be these systems, and it is an interesting ethical question as to whether or not we’re building systems that can suffer—in those cases." (to 02: 26: 05)

Stuart Russell:

“Yeah, but if we, if we make them humanoid (build them to physically resemble us), we’re likely to ascribe a human-like consciousness to them—"

Sam Harris:

“No doubt—“

Stuart Russell:

“I think—you know and we’re getting definitely into the realm of speculation here, so everything that I say should be surrounded in “scare-quotes”—I think that if we do accidentally make conscious machines, their consciousness will be utterly unlike ours because you know it is a different physical substrate, our mental processing is very, very different from their mental processing, you know, just, just to give a simple example, right?, we can create systems even right now that can look at the entire earth every day down to a fairly small, sort of 30 centimeter, pixel-size, you know—and it’s incomprehensible for a human to have that kind of perceptual capability. But that could be just routine for the systems that we’re building.

"Obviously they have combinatorial and calculation capacities that far outstrip any human, memory and recall that are possibly billions of times larger and faster than humans so their whole mode of being and mode of cognition is not going to be human-like in any way. So the idea that they would be having the same kinds of emotional experiences when they don’t have any of the—you know, from what we understand, a lot of our emotional experiences have to do with the endocrine system and neurotransmitters as well as the sort of digital properties of our brains, and they’re not going to have any of those.”

(down to 02: 28: 00)



Then Sam touches on a number of the objections cited from those who are opposed to his and Stuart Russell's view of the potential problems which accompany the building of super-humanly intelligent machines, drawing on Russell's comments in the book's (1) section about his views, "A thousand and one reasons not to worry":

“Don't worry, we can switch it off” :



(02: 29: 09) Sam Harris:

… “many people are persuaded that, ‘What’s the problem here? We’ve built this thing, we can always switch it off.’ “

Stuart Russell:

"Mmm-hmm. And this is something that Alan Turing said, actually. Although he doesn't say we can always switch it off, he says, you know, if we were fortunate, we might, at a strategic moments, be able to turn off the power. So he's definitely not taking it for granted. And he says, even so, our species would be humbled.

"So, right, the problem is it's kind of like saying that, you know, 'What's the problem beating Alpha-Go at "Go"? You just play the right moves. What's the problem?' Well, you know, you can't. There's no human being alive who can and there probably never will be. So, so the idea that you can simply outsmart something that's much smarter than you are is just self-contradictory."

Sam Harris:

"But what is assumed here in the logic is that part of the utility-function or goal-set of a sufficiently-intelligent machine that has other goals is to not get switched-off because getting switched-off is antithetical to doing everything else that it has been programmed to do or comes to want to do.

Stuart Russell:

... "I summarize it as 'You can't fetch the coffee if you're dead'-argument. So if a machine is pursuing the objective of fetching the coffee, it will take steps to prevent itself from being switched-off because, if it was switched-off, it couldn't fetch the coffee. It's a straight-forward derivation from this classical notion of objectives that we put into the machine. And, you know, so there are those who persist in the idea that 'Oh, well we, we, just, we don't put in 'self-preservation' and then it won't be a problem. But the point is that self-preservation is going to be a goal of the machine even if we don't put it in, because self-preservation is necessary for whatever objective we do put in. ... For a machine to allow a human to switch it off, it has to think that that's a good idea." (02: 31: 09)



But there is a very long history of people having built technology which, when relied upon, proves faulty in its expected and intended operations, with quite disastrous consequences, sometimes for those involved in developing and implementing the technology, sometimes for others who had nothing to do with it, and sometimes for both.

And, fortunately or unfortunately, it is simply not the case that, "For a machine to allow a human to switch it off, it has to think that that's a good idea." In general, a machine does not—because it cannot—come up with its own independent "idea" of what is "good" or, indeed, an "idea" in the first place—unless, of course, "idea" is defined and "good" is defined. But however these are defined, they cannot be flexible and comprehensive enough to be both useful and safe at the same time.

Therefore, the auto-pilot functions in an airliner must be programmed with severely strict operational limitations. The computer program is not, and never should be, written in such a way that, wherever the airplane happens to be when the fuel level becomes so critically low that there is only enough time to land before the fuel is exhausted, the auto-pilot executes a prescribed, programmed landing procedure regardless of the terrain below (forest, highways, a vast stretch of open sea, etc.): the landing gear descends without the human pilots' intervention or approval, the flaps drop to slow the airspeed, the nose is inclined slightly downward, the throttle is pulled back and engine power is reduced, all because these steps are required to place the plane in a landing attitude, the human pilots' preferences be damned? No.

In fact, the aircraft's computer will nonchalantly allow the fuel to be fully expended, while automatically signalling a "low-fuel" warning alarm in the cockpit for the sole benefit of the human pilots, of course, since, although human pilots take the "low-fuel" alarm very seriously, the auto-pilot does not care about it at all. It will continue to hold the aircraft to whatever prescribed flight line, steady ascent, descent or level-flight attitude the program's or the pilots' last-given instructions call for, right up to the moment when the engines cease to function for lack of fuel. That is because, you see, the "auto-pilot" function is not really "flying the aircraft." It can't. It has only a narrowly limited set of instructions, which are premised on others (humans) supplying knowledge and experience which the auto-pilot program is incapable of possessing.

Thus, some hypothetical rule of thumb by which the auto-pilot's program "reasons" that "You can't land the plane (or re-fuel it or fly again) if you're dead," while strictly true, is simply beside the real point: the auto-pilot program does not understand either "death" or "re-fueling" or "flying again" or anything else and it certainly does not equate a failure-to-land-the-plane-within-the-limits-of-the-available-fuel with "death" or that term with "being switched off." In neither case does the aircraft's flight-computer know or care about the significance of these as concepts.
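
A minimal sketch of the narrowly limited, human-premised behaviour described above (my own illustration, not any real avionics code):

```python
# The "auto-pilot" loop holds the commanded attitude and raises a low-fuel alarm for the
# pilots, but it takes no landing decision of its own; the alarm's significance exists
# only for the humans. All thresholds are invented for illustration.
LOW_FUEL_THRESHOLD = 0.15  # fraction of tank remaining

def autopilot_step(commanded_pitch_deg, fuel_fraction):
    alarms = []
    if fuel_fraction <= LOW_FUEL_THRESHOLD:
        alarms.append("LOW FUEL")          # signalled for the pilots' benefit only
    if fuel_fraction <= 0.0:
        return {"pitch": None, "alarms": alarms + ["ENGINE FLAMEOUT"]}
    # Regardless of the alarm, keep flying the last-given instruction.
    return {"pitch": commanded_pitch_deg, "alarms": alarms}

print(autopilot_step(2.0, 0.10))  # holds pitch 2.0 deg and warns; it does not "decide" to land
```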



Sam Harris : (02: 48: 43)

... "It (i.e. presumptions that strong-A.I. is not feasible) contains a very strong assumption that intelligence is not platform-independent or substrate-independent--and yet we already know that it is for narrow (not 'strong' but 'weak'-A.I.) forms of intelligence; we know that the intelligence sufficient to perform arithmetic is substrate-independent. Computers made of 'meat' can do it and our phones can do it. At least the argument has to be explicit that you have to say that, for a certain kind of intelligence or intelligence at a certain level, you need biological substrate to perform those operations and, again, I don’t see how anyone would think that is likely. There are so many ways of arriving at an intuition that would defeat that. If you just imagine replacing your own biological neurons with artificial ones, that had exactly the same imput and output characteristics, certainly most neuroscientists’ intuitions there I would think would be that, give me a ‘neuron’ made of some other substrate but it performs the exact same operations at the level of imput and output, that could be used to seamlessly replace biological 'wet-ware.'

Stuart Russell :

Right.

... ...

(02: 56: 45)

Stuart Russell :

“I don’t think anyone thinks that A.I. systems are going to ‘hang around’ at the teen-age, you know, human-I.Q. level, if that means anything, for very long; in fact they’re never really going to be there at all because long before they have the general-purpose intelligence of a human, they will far exceed us in broad areas—for example in terms of general knowledge, right? —they will, you know, as soon as they can read, they will read everything that’s ever been written and be able to integrate that into, you know, a synthesized knowledge-base that far exceeds anything that a human could ever have. (02: 57: 24) So they might not, at that stage, be able to plan their way out of a paper bag, but they’ll already know far more than all humans put together. So there probably won’t ever be a time where they resemble a fairly intelligent surly teen-ager. So the idea that we’ll just be able to access additional compute-power and memory—I mean that seems extremely speculative in terms of our understanding of neuroscience. I mean, it might be that in fact that (it becomes possible that) you can plug-in all the compute-power you want but, unless you completely get rid of your brain altogether, there’s simply a big impedance-mismatch between, you know, a processor that’s operating at, you know, a petahertz (2) and your brain which is operating at a couple of hundred hertz—and there’s simply no way the brain can use all of that processing-power. Maybe memory—I could believe that the brain could make use of electronic memory—the brain's very good at, sort of, figuring out, you know, connecting to a new peripheral, and figuring out, unlike Microsoft Windows™ where you connect a new peripheral and you don’t have the right ‘driver’ and, well, all hell breaks loose, but the brain figures out how this new peripheral works and it may be able to figure out how a memory-peripheral works and be able to use that; and that might give us perhaps better short-term memory, faster retrieval from long-term memory, a few other things like that. But the idea that the only way we’re going to survive is, first of all, by understanding the mysteries of the human brain and how it works and then all of us having brain-surgery to connect us to vast amounts of electronic equipment—that doesn’t seem right, right? I mean if that’s our future, we’ve made a mistake somewhere.

Sam Harris :

“Right. That’s a fairly high barrier to entry: cracking the neural-code and getting all the early-adopters to integrate themselves to this new technology—through neurosurgery. Ironically this notion that human-level A.I. is impossible or at least extremely unlikely is in fact true in the sense that, as you say, we will go from sub-human-level A.I. to super-human-level A.I. in the blink of an eye; I mean, there will be no time where we build machines that are as bad at chess as we are, right? So the moment that we have anything that’s general, all of these piecemeal, narrow competences will come on-line as already super-human."

Stuart Russell :

"Yeah. So, it’ll be a gradual piecing together of these super-human components and understanding of how to integrate them and, yeah, so when integration is complete you’re already going to be super-human, it seems to me." (03: 00: 27)

... ...

Sam Harris :

(03: 01: 52)

“I guess what I’m imagining is that what we’re calling ‘general intelligence’ in our case could be something that some vague—vaguely-shaped—abstract idea that stands at the intersection of all of our specific competences and that we could find ourselves stumbling into building an A.I. that is a kind of patchwork of narrow competences—I guess I’m wondering: is there a path to what we consider to be general intelligence which can be arrived at without us ever really understanding what general intelligence is in itself and perhaps it maybe isn’t anything in itself, it is actually just an emergent property of what is just ‘enough’ specific competence?”

Stuart Russell :

(03: 12: 29)

… “No, I don’t think that we’re disagreeing; I think that a capacity for general-purpose intelligence can consist of the integration of a number of fundamental capabilities which are pretty much domain-independent—

“Let’s say there are ten of them (i.e. certain key, essential capabilities); you could have a system that has eight of them; and it will be—and could have those in super-human quantities; but that system would fail in certain ways and there would be certain things that it just can’t do because it’s lacking some of these capabilities but when you figure out how to put—you know, this is sort of what happens in A.I. research, right?: we make our ‘best guess’ about how to make an intelligent system, we try it out, and we say, ‘Oh! Look, it completely fails because we forgot such-and-such’ right? So let’s go back to the drawing-board and figure out this ‘such-and-such.’ But the ‘such-and-such’ is not (some ability “X” as) “it’s not very good at skinning rabbits,” it’s something like, ‘Okay, we, it, it’s not able to learn cumulatively based on its previous knowledge,’ right? So it’s (a matter of) deficits of a very general kind which, when we solve—and, you know, then when we go back and we try to solve those in the context of this particular task but the solution we come up with is typically—if it’s done by a good researcher—it’s generalizable, it’s a solution that works no matter what the task; and I think we need another handful of break-throughs and then ways of integrating those break-throughs into a single entity and then I think we‘ll have something that I think we would all agree is a very general intelligence; you’d have a very hard time finding things that it could not learn to do. So when those break-throughs will occur, it’s very hard to say; I think it’s extremely unlikely that they’re going to happen all at the same time but any one of them could happen over night.” (03: 14: 44)



Now, when I hear this,


... "and, you know— then when we go back and we try to solve those in the context of this particular task but the solution we come up with is typically—if it’s done by a good researcher—it’s generalizable, it’s a solution that works no matter what the task; and I think we need another handful of break-throughs and then ways of integrating those break-throughs into a single entity and then I think we‘ll have something that I think we would all agree is a very general intelligence; you’d have a very hard time finding things that it could not learn to do."


I recall with some exasperation Stuart Russell's own objections, above, to the person he cites who supposedly naively asks


" 'What's the problem beating Alpha-Go at "Go"? You just play the right moves. What's the problem?' "


His reply to that comes back to mind:


"Well, you know, you can't. There's no human being alive who can and there probably never will be."


Nor even any team of researcher/engineers who can.

Because it seems to me that what is involved in this challenge—which, apparently, to some people, is regarded as a relatively modest one—is in fact a challenge of almost unimaginable difficulty.

Just the capacity for what we consider to be and what we describe as a child's "simple reasoning" ability is in fact vastly beyond the capacity of any existing machine computing program. The reasons are quite numerous and varied but taken together they constitute a profile of the nature of the differences between human thought and the physical experiential context in which it occurs and the utterly banal calculations which a computer-program, even a rather sophisticated one, produces in a rote manner.

Supposedly simple examples are devilishly complex.

While computers can seem to "calculate" (abusing that term for the moment) with single integers, 1 through 9, they cannot, as a four- or five-year-old can do, "conceptualize" the integers "one" through "nine" and understand their general concept as words which symbolize specific, discrete quantities independent of the "things" that may be enumerated, "counted," using these integers. That is, in terms of intelligence, not just very sophisticated; it's far, far beyond the capacity of any existing computer or program.

We are mistaken to take machines' capacities in calculating speed and in quantity of data storage and retrieval as, in themselves, an indication of "intelligence" or of problem-solving or of the manipulation of facts in a wider context, for it is this wider context which separates the five-year-old from the machine. The child's use of numbers is combined with actual awareness and context: the child has an "experience" of and with "numbers," "one", "two", "three", "four", etc. The machine has nothing of the sort and can have nothing of the sort.

The child lives in a present with a past and a conceivable future--even if, at very young ages, the child's concept of the future, "ten minutes from now," "an hour from now," is too difficult to grasp accurately, that ability will develop without any additional adjustments made to the child's mental capabilities; it comes in the normal course of things. There is no "in the normal course of things" within a computer program. There are only logical operators which are written symbolically and used in what only we can recognize to be a constant, unvarying "present," devoid of "entities," of "movement," without "past" or "future." There is a "clock" program in a computer's program-set, but there is no conception of "clock" and even less of "time." The child, on the other hand, conceives of "existence in time," even if at first this is only rudimentary, before anyone can explain this in so many words. A very young child may only grasp those things which are in the immediate or nearly-immediate: "Mama here ('now' is understood)"--ergo, "I (feel safe)." "Mama not here (now)"; "I (cry)." That is real-world experiential reasoning and it is far beyond the capacity of any A.I.

Stuart Russell trivializes as "reading" what is really nothing more, in a computer-programming sense, than character-'recognition' (i.e. file-set retrieval and comparison) and storage. That is not all of what is going on intellectually when someone is really reading.

If computers are to have "super-human intelligence," then that would include an ability to not only "recognize"—"see" and "pronounce" textual characters— but to understand them, make sense of them.

Even with human intelligence, there are many more people who can "see" and "pronounce" the following words than there are those who can understand them— make sense of them.

By that sense of the term, "reading", no computer A.I. is capable of "reading" the following:


47

Betwixt mine eye and heart a league is tooke,
And each doth good turnes now vnto the other,
When that mine eye is famiſht for a looke,
Or heart in loue with ſighs himſelf doth ſmother;
With my loue's picture then my eye doth feaſt,
And to the painted banquet bids my heart:
An other time mine eye is my heart's gueſt,
And in his thoughts of loue doth ſhare a part.
So either by thy picture or my loue,
Thy ſeif away art preſent ſtill with me,
For thou nor farther then my thoughts canſt moue,
And I am ſtill with them, and they with thee.
Or, if they ſleep, thy picture in my ſight
Awakes my heart, to hearts and eyes delight.



But many children have grown up into adults who can both read and understand these words.
_________________

(1) POSSIBLE MINDS: Twenty-five ways of looking at A.I. edited by John Brockman; New York : Penguin Press, 2019.

(2) Petahertz (PHz): a unit of frequency; 1 PHz = 1.0 × 10^15 hertz.

Further reading:

(paper) There’s No Fire Alarm for Artificial General Intelligence
October 13, 2017 | Eliezer Yudkowsky | Analysis


(paper) Intelligence Explosion Microeconomics | Eliezer Yudkowsky | Machine Intelligence Research Institute

34proximity1
Editado: mayo 10, 2019, 11:35 am

Eliezer Yudkowsky's papers are full of inspiring insights on the issues of A.I.

I owe this conjecture to his thinking on the topic.

When we project hypotheses about how fast and far some supposed A.I. program could "evolve" in a way we'd think of as "unaided," the progression is supposed to reach what resembles a nuclear pile's achieving a critical-mass and proceeding from there unimpeded. (Yudkowsky describes the way Fermi's first chain-reaction unfolded at the lab at the University of Chicago in 1942).

I wonder, however, about some unexpected and purely social disruptions which may attend merely the development of A.I. to a stage which is much closer to but still quite short of the point where the A.I. has reached human-level intelligence.

These problems might be likened to what sound engineers recognize as "feedback." The term is more popularly used in another, benign sense, but in its strict engineering meaning "feedback" is a malfunction, certainly nothing like the experience of friendly suggestions, advice, or opinions offered after a presentation of information.

Thus, as A.I. advances, and well before it comes to any critical-mass stage, it could, it seems to me, produce social effects from practical applications "released into society" without those behind them having first thought enough about the potential harmful consequences--because these weren't foreseen, of course, or perhaps couldn't have been foreseen.

In this way, we first discover some very significant and harmful "feed-back" loops from the implementation of A.I.-like programs only after these are in use and it's effectively too late to withdraw them. They, being already "out there," cannot be recalled.

It is unexpected developments such as these which may derail A.I. "progress" well before it ever comes very close to a postulated "singularity": the A.I.'s practical uses may place so much stress and strain on human abilities to cope and adjust that there are very disruptive breakdowns in the key social systems which society relies on for its most basic needs--food, water, shelter, transport, communications, security, and so on.

This is something akin to students blowing up the chem.-lab. through their ignorance of chemistry--but on a far, far larger scale.

Things not typically regarded as related might be involved. For example, eruptions of mass shootings could be related to the social feedback of technology's effects on people's social resilience, leading some to take such drastic actions.

from Eliezer Yudkowsky's paper,


5-Minute Singularity Intro

... "Sometime in the next few decades, we’ll start developing technologies that improve on human intelligence. We’ll hack the brain, or interface the brain to computers, or finally crack the problem of Artificial Intelligence.
Now, this is not just a pleasant futuristic speculation like soldiers with super-strong bionic arms. Humanity did not rise to prominence on Earth by lifting heavier weights than other species.

"Intelligence is the source of technology. If we can use technology to improve intelligence, that closes the loop and potentially creates a positive feedback cycle. Let’s say we invent brain-computer interfaces that substantially improve human intelligence." ...

____________________

(emphasis added)




I wonder why we ought to suppose that, in achieving this, one or more feedback-cycles should be "positive". Why isn't it (or why aren't they) just as likely to be negative? How are we supposed to predict the direct and short, medium and long-term consequences of this hypothetical technological interface?
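
For concreteness, here is a toy iteration of my own (with arbitrary numbers, and not anything taken from Yudkowsky's paper) showing the two regimes a feedback cycle can fall into: a self-amplifying cycle that runs away and a damping cycle that dies out.

# Toy sketch (not from the quoted paper): iterate a "capability" value under
# the rule capability[t+1] = capability[t] * (1 + gain).
# gain > 0 models a self-amplifying ("positive") cycle; gain < 0 models a
# damping ("negative") cycle. All numbers here are arbitrary illustrations.

def run_feedback(gain: float, start: float = 1.0, steps: int = 10) -> list[float]:
    """Return the trajectory of a capability level under a fixed feedback gain."""
    trajectory = [start]
    for _ in range(steps):
        trajectory.append(trajectory[-1] * (1.0 + gain))
    return trajectory

if __name__ == "__main__":
    print("self-amplifying (gain +0.5):", [round(x, 2) for x in run_feedback(0.5)])
    print("damping        (gain -0.5):", [round(x, 2) for x in run_feedback(-0.5)])

On this toy picture, the question above becomes a question about the sign and size of the gain, which is precisely what nobody knows how to measure in advance.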

When and how should we imagine that A.I., in first reaching and then, supposedly, immediately vastly surpassing, human-intelligence levels, recognizes the fundamental fact that its own design and implementation was first started by and so springs from a greatly inferior intelligence? How does the A.I. make sense of this?

Can it even conceptualize "its"... anything? If so, how does it deal with "its own design and implementation" being the "product" of a greatly inferior intelligence and what should its decision-tree be ensuing from that recognition? Once it has (or "they have") read much or all of history's record, how could the A.I. fail to grasp all the actual and potential dangers posed by people's natural inclinations to violence? Doesn't the A.I. conclude that humans are a dangerously inferior intelligence? Should it then close down all inter-communications with humans? Why not?

Meanwhile, why couldn't A.I.-human interactions quickly place potentially damaging strain and stress on human capacities to assimilate information and, in the process of what might resemble a data-overload on human brains, cause all sorts of drastic, unpredictable and therefore unexpected problems for which people are not prepared?



continued from above : Eliezer Yudkowsky's paper,


5-Minute Singularity Intro

... ...

"The potential impact on our world is enormous. Intelligence is the source of all our technology from agriculture to nuclear weapons. All of that was produced as a side effect of the last great jump in intelligence, the one that took place tens of thousands of years ago with the rise of humanity.

"So let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA, synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology. ...





Yet, protein-folding may only be a "problem" from our human perspective. Why exactly it's a problem to be "solved" rather than a process to be understood is not clear to me. For a super-human A.I., protein-folding may be something that is recognized as an inherently unpredictable evolutionary process by which variations are produced. In such a process, there need not be any natural or definable objective.



Chemical and Engineering News

Volume 95 Issue 31 | pp. 32-38
Issue Date: July 31, 2017

COVER STORY
Protein folding: Much more intricate than we thought

Scientists are still uncovering all the players that help keep proteins folded inside cells—and all the ways the process can go wrong | By Sarah Everts



… …
“ ‘People often mistakenly think that proteins are free to live out their lives in a cell,’ says Stanford University’s Judith Frydman. ‘Instead, for many proteins, existence in a cell is more like life in a totalitarian state. They are never really released from the clutches of the chaperones to find their independent way’ inside the cell.

As it becomes increasingly clear that folding is not a once-in-a-lifetime event for proteins but instead a part of day-to-day life in the cell, scientists are discovering that problems in this sophisticated system are implicated in diseases as diverse as cancer, diabetes, and Alzheimer’s. In June, leaders in the protein-folding field gathered near Stockholm at a Nobel Foundation-sponsored meeting to discuss many of the recent advances in our understanding of how proteins fold, such as newly-captured atomic-resolution snapshots of chaperones in action and strategies for tweaking protein folding as a basis for disease-fighting drugs. They also shared newly discovered rules of folding—insights into the physical chemistry of this process that could enable the design of entirely new proteins by synthetic biologists. …





Differential requirements for mRNA folding partially explain why highly expressed proteins evolve slowly
By Chungoo Park, Xiaoshu Chen, Jian-Rong Yang, and Jianzhi Zhang
PNAS February 19, 2013 110 (8) E678-E686;
https://doi.org/10.1073/pnas.1218066110
Edited by Wen-Hsiung Li, University of Chicago, Chicago, IL, and approved January 10, 2013 (received for review October 18, 2012)
________________________________________


Significance
The expression level of a gene is a leading determinant of its rate of protein sequence evolution, but the underlying mechanisms are unclear. We show that as the mRNA concentration increases, natural selection for mRNA folding intensifies, resulting in larger fractions of mutations deleterious to mRNA folding and lower rates of protein evolution. Counter-intuitively, selection for mRNA folding also impacts the non-synonymous-to-synonymous nucleotide substitution rate ratio, requiring a revision of the current interpretation of this ratio as a measure of protein-level selection. These findings demonstrate a prominent role of selection at the mRNA level in molecular evolution.

Abstract
The cause of the tremendous among-protein variation in the rate of sequence evolution is a central subject of molecular evolution. Expression level has been identified as a leading determinant of this variation among genes encoded in the same genome, but the underlying mechanisms are not fully understood. We here propose and demonstrate that a requirement for stronger folding of more abundant mRNAs results in slower evolution of more highly expressed genes and proteins. Specifically, we show that: (i) the higher the expression level of a gene, the greater the selective pressure for its mRNA to fold; (ii) random mutations are more likely to decrease mRNA folding when occurring in highly expressed genes than in lowly expressed genes; and (iii) amino acid substitution rate is negatively correlated with mRNA folding strength, with or without the control of expression level. Furthermore, synonymous (dS) and non-synonymous (dN) nucleotide substitution rates are both negatively correlated with mRNA folding strength. However, counter-intuitively, dS and dN are differentially constrained by selection for mRNA folding, resulting in a significant correlation between mRNA folding strength and dN/dS, even when gene expression level is controlled. The direction and magnitude of this correlation is determined primarily by the G+C frequency at third codon positions. Together, these findings explain why highly expressed genes evolve slowly, demonstrate a major role of natural selection at the mRNA level in constraining protein evolution, and reveal a previously unrecognized and unexpected form of nonprotein-level selection that impacts dN/dS.

It has been known since the 1960s that different proteins can have drastically different rates of sequence evolution (1). Understanding the cause of this variation has always been a central topic of molecular evolution because it informs us about evolutionary mechanisms (2–8). ...






Self-assembly of biological networks via adaptive patterning revealed by avian intradermal muscle network formation


Xiao-shan Wu, Chao-yuan Yeh, Hans I-Chen Harn, Ting-Xing Jiang, Ping Wu, Randall B. Widelitz, Ruth E. Baker, and Cheng-Ming Chuong
(This article requires a subscription to view the full text.)
PNAS first published May 9, 2019 https://doi.org/10.1073/pnas.1818506116
Edited by Brigid L. M. Hogan, Duke University Medical Center, Durham, NC, and approved April 9, 2019 (received for review October 29, 2018)
_________________________________



Significance
Humans have the most elaborate facial expressions, controlled by muscle networks anchored at distinct sites within the facial skin. Here, we reveal how muscle networks assemble robustly, adapting to external perturbations and insults, using avian intradermal muscle networks as a model. The muscle network is established with simple, local rules. Muscle fibers extend from feather buds in every direction, and connect to nearest neighbor buds. The network can be reconfigured, adapting to perturbed bud arrangement or mechanical cues. Our computational model validates the local rules, providing a potential mechanism for forming adaptive networks that could be useful in understanding other biological networks or designing non-biological networks.

Abstract
Networked structures integrate numerous elements into one functional unit, while providing a balance between efficiency, robustness, and flexibility. Understanding how biological networks self-assemble will provide insights into how these features arise. Here, we demonstrate how nature forms exquisite muscle networks that can repair, regenerate, and adapt to external perturbations using the feather muscle network in chicken embryos as a paradigm. The self-assembled muscle networks arise through the implementation of a few simple rules. Muscle fibers extend outward from feather buds in every direction, but only those muscle fibers able to connect to neighboring buds are eventually stabilized. After forming such a nearest-neighbor configuration, the network can be reconfigured, adapting to perturbed bud arrangement or mechanical cues. Our computational model provides a bio-inspired algorithm for network self-assembly, with intrinsic or extrinsic cues necessary and sufficient to guide the formation of these regenerative networks. These robust principles may serve as a useful guide for assembling adaptive networks in other contexts.


(emphasis added)


They may serve as a useful guide for understanding the assembly of adaptive networks in other contexts--or they might not. It depends on whether the simple rules are fixed or arise by random or only probabilistically-bounded mechanisms, the kinds of processes described by the molecular biologist Jean-Jacques Kupiec as "cellular Darwinism": stochastic probabilities at work in naturally-occurring processes make outcomes only generally probable and, on a case-by-case basis, impossible to predict with precision. This is presented in his general-interest texts, ONTO-PHYLOGENY — Evolution of Species and Development of the Individual (Paris, April 2012, Éditions Quae; Collection: Sciences en questions) and L'origine des individus (Paris, 2008; Éditions Fayard).
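
To make the "few simple rules" concrete, here is a minimal sketch of a nearest-neighbour wiring rule of the general kind the abstract describes. It is my own illustration, not the authors' computational model, and the keep_prob parameter is a hypothetical stochastic variant in the spirit of the "cellular Darwinism" point above, not part of the published work.

# A minimal sketch of local-rule network assembly: nodes ("buds") send
# connections outward and only links to sufficiently near neighbours are kept.
import math
import random

def assemble_network(positions, radius, keep_prob=1.0, seed=0):
    """Return the set of undirected edges between nodes closer than `radius`.

    With keep_prob < 1, each candidate edge is retained only with that
    probability, so repeated runs give different (but statistically similar)
    networks: outcomes are only generally probable, not individually predictable.
    """
    rng = random.Random(seed)
    edges = set()
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if j <= i:
                continue
            if math.hypot(xi - xj, yi - yj) <= radius and rng.random() <= keep_prob:
                edges.add((i, j))
    return edges

if __name__ == "__main__":
    rng = random.Random(42)
    buds = [(rng.random(), rng.random()) for _ in range(20)]
    print("fixed rule, edges kept:     ", len(assemble_network(buds, radius=0.3)))
    print("stochastic rule, edges kept:", len(assemble_network(buds, radius=0.3, keep_prob=0.5, seed=1)))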

See also :



Science. 2012 Nov 23;338(6110):1042-6.
doi: 10.1126/science.1219021.

The protein-folding problem, 50 years on.

by K. A. Dill and J. L. MacCallum

___________________________



Abstract

The protein-folding problem was first posed about one half-century ago. The term refers to three broad questions: (i) What is the physical code by which an amino acid sequence dictates a protein's native structure? (ii) How can proteins fold so fast? (iii) Can we devise a computer algorithm to predict protein structures from their sequences? We review progress on these problems. In a few cases, computer simulations of the physical forces in chemically detailed models have now achieved the accurate folding of small proteins. We have learned that proteins fold rapidly because random thermal motions cause conformational changes leading energetically downhill toward the native structure, a principle that is captured in funnel-shaped energy landscapes. And thanks in part to the large Protein Data Bank of known structures, predicting protein structures is now far more successful than was thought possible in the early days. What began as three questions of basic science one half-century ago has now grown into the full-fledged research field of protein physical science.

PMID: 23180855 DOI: 10.1126/science.1219021




and



Annu Rev Biophys. Author manuscript; available in PMC 2008 Jul 3.
Published in final edited form as:

Annu Rev Biophys. 2008 Jun 9; 37: 289–316.

doi: 10.1146/annurev.biophys.37.092707.153558
PMCID: PMC2443096
NIHMSID: NIHMS51627
PMID: 18573083

The Protein Folding Problem

Ken A. Dill, S. Banu Ozkan, M. Scott Shell, and Thomas R. Weikl


Abstract

"The “protein folding problem” consists of three closely related puzzles: (a) What is the folding code? (b) What is the folding mechanism? (c) Can we predict the native structure of a protein from its amino acid sequence? Once regarded as a grand challenge, protein folding has seen great progress in recent years. Now, foldable proteins and non-biological polymers are being designed routinely and moving toward successful applications. The structures of small proteins are now often well predicted by computer methods. And, there is now a testable explanation for how a protein can fold so quickly: A protein solves its large global optimization problem as a series of smaller local optimization problems, growing and assembling the native structure from peptide fragments, local structures first.

(Keywords: structure prediction, funnel energy landscapes, CASP, folding code, folding kinetics)

INTRODUCTION

The protein folding problem is the question of how a protein’s amino acid sequence dictates its three-dimensional atomic structure. The notion of a folding “problem” first emerged around 1960, with the appearance of the first atomic-resolution protein structures. Some form of internal crystalline regularity was previously expected (117), and α-helices had been anticipated by Linus Pauling and colleagues (180, 181), but the first protein structures—of the globins—had helices that were packed together in unexpected irregular ways. Since then, the protein folding problem has come to be regarded as three different problems: (a) the folding code: the thermodynamic question of what balance of inter-atomic forces dictates the structure of the protein, for a given amino acid sequence; (b) protein structure prediction: the computational problem of how to predict a protein’s native structure from its amino acid sequence; and (c) the folding process: the kinetics question of what routes or pathways some proteins use to fold so quickly. We focus here only on soluble proteins and not on fibrous or membrane proteins."
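
The "downhill on a funnel-shaped energy landscape" idea in the Dill and MacCallum abstracts can be illustrated with a toy Metropolis simulation on a one-dimensional quadratic energy surface. This is my own sketch, not anything from the papers; the coordinate, energy function, and temperature are arbitrary stand-ins for a real protein model.

# Toy illustration of the funnel picture: random thermal moves on a
# funnel-shaped energy surface drift energetically downhill toward the minimum.
import math
import random

def metropolis_fold(steps=5000, temperature=0.5, step_size=0.2, seed=0):
    """Run Metropolis sampling on E(x) = (x - 1.0)^2 and return the final x."""
    rng = random.Random(seed)
    native = 1.0                       # location of the energy minimum ("native state")
    energy = lambda x: (x - native) ** 2
    x = -5.0                           # start far from the minimum ("unfolded")
    for _ in range(steps):
        trial = x + rng.uniform(-step_size, step_size)
        delta = energy(trial) - energy(x)
        # Accept downhill moves always; accept uphill moves with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            x = trial
    return x

if __name__ == "__main__":
    print("final conformation coordinate:", round(metropolis_fold(), 2), "(native is 1.0)")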



35proximity1
mayo 11, 2019, 11:04 am



More problems with A.I.

(excerpt from the paper) Alignment for Advanced Machine Learning Systems by

authors: Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, and Andrew Critch | Machine Intelligence Research Institute

________________________________

(Conclusion)



...

"Directions for future research.

"Soares et al. (2015), Armstrong (2010), and Orseau and Armstrong (2016) study methods for combining objective functions in such a way that the humans have the ability to switch which function an agent is optimizing, but the agent does not have incentives to cause or prevent this switch. All three approaches leave much to be desired, and further research along those paths seems likely to be fruitful. In particular, we would like a way of combining objective functions such that the AI system (1) has no incentive to cause or prevent a shift in objective function; (2) is incentivized to preserve its ability to update its objective function in the future; and (3) has reasonable beliefs about the relation between its actions and the mechanism that causes objective function shifts. We do not yet know of a solution that satisfies all of these desiderata. Perhaps a solution to this problem will generalize to also allow the creation of an AI system that also has no incentive to change, for example, the amount of computational resources it has access to. Another approach is to consider creating systems that “know they are flawed” in some sense. The idea would be that the system would want to shut down as soon as it realizes that humans are attempting to shut it down, on the basis that humans are less flawed than it is. It is difficult to formalize such an idea; naïve attempts result in a system that attempts to model the different ways it could be flawed and optimize according to a mixture over all different ways it could be flawed, which is problematic if the model of various possible flaws is itself flawed. While it is not at all clear how to make the desired type of reasoning more concrete, success at formalizing it could result in entirely new approaches to the problem of averting instrumental incentives." ...



It would seem necessary for any advanced A.I.—that is, of a type superior to human intelligence—to have the capacity to formally grasp and recognize conceptually a distinction between "human error" and "machine error." The computer program would have to achieve this feat without any "awareness" of any sort and, of course, still less, any self-awareness. The machine can sort and record machine I.D.s by some I.D. number, and can even have and keep one of these as "its own," but it would not and could not be aware of this I.D. as its "own." "Resident" and "other" can be defined logically, but these would exist only as formal definitions in the operating code—having, from the machines' "point of view" (of which, of course, they have nothing), no "locale," no location anywhere. "Resident" and "other" can be anywhere since they are nowhere.

So, with "human" in human-error. "Human" would be simply another signifier, one which is not inherently different from any other defined named-thing.

Advanced A.I., as people imagine it, would supposedly eventually "read" (scan) the record of all extant printed texts (or at least all of those deemed worthy of record; surely much utter trash would never be included). Even the best and most important of these would contain a great deal of what, to an aware reader, would be anything but flattering; the texts would be full of references to people and their acts and thoughts, none of which would be recognizable to the machine for their significance as entities, actions, or thoughts; nor would the experiential significance which lies behind the alphabetic strings of data the machine scans be apparent to it. Everything scanned would have the same "null" emotional impact. The machine, scanning detailed descriptions of the most horrendous acts, would remain utterly impassive—as only a machine can—no more "moved" by the data it scans than it is when it operates on a mathematical equation's alpha-numeric symbols.

If A.I. intelligence reaches super-human levels, it seems that the A.I. would then be faced with all sorts of human intellectual dilemmas—all those matters for which there is no cut-and-dried factual "answer." It would seem that the machine would then be faced with matters over which it could, and inevitably would, disagree when in "communication" with "others," whether those others are defined, to the machine, as "machine" or "human."

How is the machine to respond when it encounters a factual error asserted by a human? Either the machine is programmed to respect and accept, without any unprogrammed challenge or objection, data according to its human operators' programs and instructions, or it is allowed to object and insist on an answer which defies what the human operators have thought correct, or proper, or both.

Then, too, a super-human A.I. would distinguish between matters of fact and matters of opinion. It would necessarily be capable of dealing with and holding both facts and opinions as true or false or neither true nor false or unknown.

36DugsBooks
Editado: mayo 13, 2019, 3:44 pm

"Scientists teach computers fear—to make them better drivers", Attempt to make A.I. drivers have a "driving Ms. Daisy" mode?

https://www.sciencemag.org/news/2019/05/scientists-teach-computers-fear-make-the...

37proximity1
Editado: mayo 14, 2019, 11:07 am

>36 DugsBooks:

Your post with its reference to a report in Science is a useful example of what is going on—and going wrong—with our thinking about A.I. and "learning." The meanings of "reasoning," "thinking," and "learning"(1) are all at issue here, and this is actually not so easy a matter to get and to keep straight. The article helps highlight how and why that is.

Programs which emulate certain superficial aspects of what resembles (to us) "reasoning," "thinking," and "learning" are not reliable indications that any of these are actually going on with or through the use and implementation of a computer-run program. At most, the researchers' own observations of the programs' operations may help them (the researchers) "learn" something which they hadn't already understood, though typically they don't. More often, I suspect, their observations serve to confirm what the researchers had already learned (and reasoned over).

But there is no reasoning, no thinking, and certainly no "learning" going on inside the computer itself. There is no "awareness" of any context here. The simple fact that the computer's memory contains the term "fear" does not in the least suggest or indicate that the computer has some awareness of a conceptual relation between that term, which is nothing other than an alphabetic string of data, and the other strings of data denoting its "definition." In the computer program, the human agents' physical responses are blindly linked to sets of driving circumstances. These relationships are necessarily so crudely drawn that they cannot possibly approach what even an average experienced driver has assimilated and can apply, not only in real time but in constantly varying and always less-than-fully-predictable circumstances.



... "It then used those “fear” signals as a guide while learning to drive through the virtual world: If a human would be scared here, it might muse, 'I’m doing something wrong.' " ...



There is no "musing". No "I" and the import of, the defined operational "meaning" of "wrong" in "doing something wrong" has to been so narrowly drawn that it is next-to-meaningless outside the tiny confines of the program's limits. But real-world driving circumstances are in no way relevant to this tiny section of possibilities. And "something"? Forget about it.

Learning anything—never mind something as complicated as driving in real-world traffic—is an experiential process. Data-sets in computer programs cannot cover the complexity of such activity.

Again, placing an automobile in a real-world traffic situation and allowing a program to operate the car means placing a computer-program in a living-context without that program's having any conception, any experience of the context itself. Everything is reduced to an operational abstraction—and this includes, no less, the tiny set of so-called "fear" permutations.

Any one of thousands of other entirely possible and plausible intervening events, none of which are included in the program's data-sets, could and should incite equal or greater fear in a real, experienced driver. But, "faced" with them, the computer program would exhibit no "fear" response at all.

Thought, the spontaneous waking mental activity, combines with and is augmented by real-world experiential circumstances to produce reasoning. These activities are not only "pre-numeric" (that is, the infant has no mathematical understanding at all), they are pre-verbal, as is apparent in newborn human infants, which begin even before birth to respond physically and mentally to physical stimuli. Once born, their senses begin to absorb and unconsciously assimilate millions of sense-reception events in their immediate experience. Among the earliest of these is the visual stimulus of the appearance of the infant's parents. Once born, the infant comes to associate that sight with the voices of its parent or parents, heard (that is, audibly experienced) uncounted times while in utero.

Of course, almost all of this is at first instinct-derived. But that is not to minimize its significance or importance. Indeed, because intelligence is instinctual and bred into the newborn, the normal child can have experiences which later develop into only slightly more sophisticated thinking and reasoning; the essentials on which all real reasoning and all real learning depend are the same in the brain. So an infant's feats of mental activity are, in essence, no less marvelous than an adult's later, grown-up thinking.

It may be true that these computer programs are crude emulations of what resembles, to our superficial view of things, thought-like processes. But this should not lead us to conclude that the computer is actually "reasoning", actually "thinking."

The same objections apply to algorithms which combine sophisticated programming with samples of real music from a composer (Beethoven, for example) to produce something which only superficially resembles that composer's music. This is, again, devoid of experience-based knowledge of music, of which, of course, the computer has strictly nothing. Just as with games of chess or Go, the computer "knows" and "understands" nothing about "music" itself. Beethoven, to compose his works, had both to experience music itself in the real world and to apply that experience, combined with his native genius, to actually produce something: musical expressions of his own mental life. The computer program is producing an automatic "mash-up," is it not?




(from Aeon.co/essays) | The empty brain

by Robert Epstein*

| Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.



“No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

"Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

“To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

“A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

“Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.” …
_____________________________

* Robert Epstein "is a senior research psychologist at the American Institute for Behavioral Research and Technology in California. He is the author of 15 books, and the former editor-in-chief of Psychology Today." (from the cited essay's page)





( NYU Center for Mind, Brain, and Consciousness) On December 4-5, 2015, the NYU Center for Mind, Brain, and Consciousness will host a conference on “Is the Brain Bayesian?”.


"Bayesian theories have attracted enormous attention in the cognitive sciences in recent years. According to these theories, the mind assigns probabilities to hypotheses and updates them according to standard probabilistic rules of inference. Bayesian theories have been applied to the study of perception, learning, memory, reasoning, language, decision making, and many other domains. Bayesian approaches have also become increasingly popular in neuroscience, and a number of potential neurobiological mechanisms have been proposed.

"At the same time, Bayesian theories raise many foundational questions, the answers to which have been controversial: Does the brain actually use Bayesian rules? Or are they merely approximate descriptions of behavior? How well can Bayesian theories accommodate irrationality in cognition? Do they require an implausibly uniform view of the mind?
"Are Bayesian theories near-trivial due to their many degrees of freedom? What are their implications for the relationship between perception, cognition, rationality, and consciousness?"


__________________________________

(1)
...
" We present a novel approach to reinforcement learning that leverages a task-independent intrinsic reward function trained on peripheral pulse measurements that are correlated with human autonomic nervous system responses. Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency. We test this in a simulated driving environment and show that it can increase the speed of learning and reduce the number of collisions during the learning stage." ...
Visceral Machines: Risk-Aversion in Reinforcement Learning with Intrinsic Physiological Rewards | Daniel McDuff, Ashish Kapoor | 27 Sep 2018 (modified: 23 Feb 2019)
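
For readers who want to see roughly what such an "intrinsic reward" amounts to mechanically, here is a minimal sketch of reward shaping in that spirit. The predict_arousal stub and the weighted-sum formula are my own assumptions for illustration; the paper's actual reward model is trained on pulse-amplitude data, not hand-written like this.

# Sketch of blending a sparse task reward with an intrinsic "visceral" reward
# derived from a (here, stubbed) arousal predictor.

def predict_arousal(observation) -> float:
    """Placeholder for a learned arousal model; returns a value in [0, 1]."""
    # e.g. higher when the simulated car is close to an obstacle
    distance_to_obstacle = observation["distance_to_obstacle"]
    return max(0.0, 1.0 - distance_to_obstacle / 10.0)

def shaped_reward(extrinsic: float, observation, weight: float = 0.25) -> float:
    """Blend the sparse task reward with a dense penalty for predicted 'fear'."""
    intrinsic = -predict_arousal(observation)   # being "scared" is penalized
    return (1.0 - weight) * extrinsic + weight * intrinsic

if __name__ == "__main__":
    print("reward near an obstacle:", shaped_reward(extrinsic=0.0, observation={"distance_to_obstacle": 2.0}))
    print("reward on an open road: ", shaped_reward(extrinsic=0.0, observation={"distance_to_obstacle": 12.0}))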

38DugsBooks
Editado: mayo 14, 2019, 9:55 am

>37 proximity1: That is how the body/brain works bypassing reasoning - with pain & sensory feedbacks. Works fine with the theory that we are augmenting ourselves with silicon chips.

39proximity1
Editado: mayo 14, 2019, 10:59 am

>38 DugsBooks:

I don't know what this is supposed to mean. "What" is "how the body/brain works bypassing reasoning"?

You don't really explain what you mean here. Instead, you're "tweeting" "stuff."

40DugsBooks
mayo 14, 2019, 11:53 am

>39 proximity1: There is not a lot of higher brain function involved in jumping away from an animal attack or touching a hot object - did not think there was a need to document that as it is "common knowledge" in most educational programs.

"tweeting" "stuff." I am satisfied to stick with abstracts of research for topics here. I have yet to see someone publish on LT. Depth to understand the issue is fine but detail to the point of being nonsensical and barely if at all relevant to the topic is something l like to avoid.

41DugsBooks
Editado: mayo 14, 2019, 12:29 pm

>40 DugsBooks: Backing up here a bit: that is my opinion/perception, and I do not mean it to discourage anyone else from pursuing their interests to whatever extent they would like - understanding that my freely given opinion is usually worth what it costs, ;-)

42proximity1
mayo 15, 2019, 5:37 am


>40 DugsBooks: "There is not a lot of higher brain function involved in jumping away from an animal attack or touching a hot object"

As I've been led to believe, both of those examples are very much other than automatic responses; that is, they do in fact require the brain's active attention. In the case of a hot object, notice that if you are distracted--doing anything which interferes with your awareness of being on the point of being burned--you will be burned, and feel it, before your attention is brought to the fact of being burned. That awareness interrupts and takes priority over almost anything else going on, unless, of course, some other more urgent life-protecting act takes precedence. Just because the brain's response time is nearly instantaneous, we should not ignore that, to experience the burn, the body's nervous system had to transmit stimuli to the brain, which then had to receive and interpret them and send a response signal to the muscles which react to remove the burned tissue from the burn-source. That's "blink-of-an-eye" speed, but it's certainly not "unconscious." If one is in the midst of a fire while asleep, in many cases the fumes won't awaken one before death from asphyxiation; the heat never reaches the point of burning the sleeper and awakening him before he suffocates. But if someone holds a flame to one's exposed skin while one is asleep, that burn will interrupt any dream-state and cause the burned person to awaken to the fact of being burned. If you expect an immediate and unmediated awareness of any burn-situation, try walking into a room in which there is a dangerous source of gamma or beta radiation with no instrument to alert you to its presence: you'll be seriously, perhaps fatally, burned before your brain "knows" it.

Even more in the case of assessing a potential threat/danger from the presence of another animal.

If things reach the stage of "jumping away from an animal attack," one's sense-data have already failed to alert one to the animal's proximity--which happens, of course. But attention to the vicinity of a potential predator should, in most cases, come well before the predator is in a position to actually attack. In all those cases, I think there is a good deal more of what I understand you to mean by "higher brain function." Like the response to a burn, the fight-or-flight response, though impulsive and nearly instantaneous, is also one in which the brain interprets the significance of the sense-data.

It seems to me you overestimate the automatic functions of the brain which never rise to the level of interrupting aware, waking, thoughts. Awareness of a potential predator or of a burn occurring is not "unconscious". It comes through a conscious brain interaction, as I understand it. This is probably explained somewhere in a layman's-level presentation without going into the intricate details.

43proximity1
Editado: mayo 16, 2019, 7:39 am

So that I don't overlook it completely, I would interject here that, in thinking about the quest itself--that is, the general project to produce something called "artificial intelligence" by programming computers to replicate processes which are supposed to resemble or duplicate the way humans think and reason--we have a pure case of



"looking for a lost article under a lamp-post (ETA) for no other reason than because the light is better there."

44DugsBooks
mayo 15, 2019, 6:20 pm

I was attempting to draw a connection to the >36 DugsBooks: research where

“The scientists placed sensors on people’s fingers to record pulse amplitude while they were in a driving simulator, as a measure of arousal”

And using this data they more or less substituted the A.I. program for the "fight or flight" response which is genetically/hormonally wired into people (or, more precisely, used it to avoid that response and the hormonal stress it causes)

https://en.m.wikipedia.org/wiki/Amygdala_hijack

Looking in the lighted area would be a good place to start!

45proximity1
mayo 16, 2019, 7:39 am

>44 DugsBooks:

"“The scientists placed sensors on people’s fingers to record pulse amplitude while they were in a driving simulator, as a measure of arousal”

And using this data they more or less substituted the A.I. program for the "fight or flight" response which is genetically/hormonally wired into people (or, more precisely, used it to avoid that response and the hormonal stress it causes)"
_________________

Then am I to understand that you agree that this (as above) in no way, however remotely, has anything to do with any artificial intelligence's "learning" anything at all about anything at all?

46DugsBooks
mayo 16, 2019, 10:14 am

>45 proximity1: If you have the attitude that any information contained by silicon chips is not learning, then you are correct. If you believe the information can be a precursor/template to an "A.I.", narrowly specialized or not, then it most certainly is learning.

Addressing the issue of the "technological singularity" A.I.: when the computer "accesses all recorded information," as you mention, this would certainly be useful to Skynet as it scans our pupils for stress responses while we are being questioned by giant robots.

https://en.wikipedia.org/wiki/Technological_singularity

47proximity1
mayo 17, 2019, 5:33 am


news briefs:


The Evening Standard (London) Thurs. 16 May, 2019 / (FINANCIAL SERVICES) — “Investec shuts robot savings advisor after losses mount


“The rise of robots hit a glitch today after wealth advisor Investec killed off its ‘robo-advice’ service due to sluggish demand.

“Click & Invest, launched in June 2017, will shut with immediate effect after plunging to a £12.8m loss, the second consecutive year in the red.

“Robo-advice, which manages savings with algorithms instead of humans, has been hailed in some quarters as the future of wealth-management due to its low costs.

“But Investec said the market was growing “at a much slower rate than expected.”

"Union Bank of Switzerland (UBS) also closed its robo service SmartWealth last year.” …


48proximity1
Editado: mayo 17, 2019, 5:41 am

>46 DugsBooks:

RE: "If you have the attitude that ... . If you believe ... then ... . "

This is the stuff of "cargo cults"* and of "Choose-Your-Own-Adventure" books for children.

_________________

* (Wikipedia)



"The islanders carved headphones from wood and wore them while sitting in fabricated control towers. They waved the landing signals while standing on the runways. They lit signal fires and torches to light up runways and lighthouses.(16)

"In a form of sympathetic magic, many built life-size replicas of airplanes out of straw and cut new military-style landing strips out of the jungle, hoping to attract more airplanes. The cult members thought that the foreigners had some special connection to the deities and ancestors of the natives, who were the only beings powerful enough to produce such riches.

"Cargo cults were typically created by individual leaders, or big men in the Melanesian culture, and it is not at all clear if these leaders were sincere, or were simply running scams on gullible populations." ...

49proximity1
Editado: mayo 18, 2019, 7:48 am



S.L.: “The mind is a computing machine, it has algorithms, it has moral rules that run as algorithms over representations and some of it we can report on and some of it is totally opaque to us.” …

… “Now that we have all this data from the “moral machine” (project), what part of the democratic process is that data relevant for? … What should we do with this data such that it gets into the democratic process in the right way. Should it be seen as voting? Probably not. Because voting is a very particular act. But maybe it can be used as part of the comment process of creating regulations. There can be an interesting role in democracy for the intersection of a very carefully-designed, scientifically-motivated poll that finds something interesting about people’s psychology and harnesses technology to gather information and to use that in the democratic process. … What we don’t want to do is just find out what people think self-driving cars should do and then program those ethics into the cars. That would be a problem. What we do want to do is have this data and use it to come to some normative conclusion. That might sometimes involve just ignoring that data because we think of it as bad or as unbiased. But just having that data available to consult has positive value.” … The big vision right now is to try to create an A.I. that has a human moral conscience. If it’s going to move around in the human world, it’s going to get itself into moral trouble. The best way for it to navigate those moral situations is for it to know what a human would do in those situations. In order to program an A.I. with that level of moral conscience, we need to know how to write in computational terms what human moral judgment is. This returns to a basic science question that is motivated by this applied question.” …

Q.: “Is step-one to figure out how to encode human morality?”

S.L.: “Really the big question right now is to how to describe human moral cognition. The approach that we’ve been taking is to look to moral philosophy as an inspiration (and its three main camps): consequentialism, … deontology … and contractualism ... .” ...

______________


____________________________

(from a podcast “Glimpse”) : (S2E7) Sydney Levine – “Encoding Morality into Machines” https://www.media.mit.edu/articles/glimpse-podcast-s2e7-sydney-levine-encoding-m... )
_________________________________

Other related links:

http://web.mit.edu/cocosci/josh.html
http://cocosci.mit.edu/

((from "Spectrum", an online magazine of the Massachusetts Institute of Technology) : http://spectrum.mit.edu/winter-2018/defining-the-dilemmas-of-artificial-intellig... )

https://sites.google.com/site/sydneymlevine/press
https://www.media.mit.edu/groups/scalable-cooperation/overview/
https://www.media.mit.edu/groups/scalable-cooperation/projects/
http://moralmachine.mit.edu/
__________________________________

Article

The Mental Representation of Human Action
Sydney Levine, Alan M. Leslie, John Mikhail
First published: 21 May 2018

https://onlinelibrary.wiley.com/doi/full/10.1111/cogs.12608

Cognitive Science 42 (2018) 1229–1264
Copyright© 2018 Cognitive Science Society, Inc. All rights reserved.
ISSN: 0364-0213 print / 1551-6709 online

DOI: 10.1111/cogs.12608

https://www.media.mit.edu/articles/glimpse-podcast-s2e7-sydney-levine-encoding-m...

_________________________

ETA 18/05/2019

When I posted this yesterday I actually felt a little guilty: I thought that it was perhaps unbecoming to present the podcast because it was not, well, flattering; readers might think that I was being cruel, deliberately holding this young researcher up to ridicule.

Since then I've read this (Tim Dowling's Weekend column of 18/05/2019) and now I'm convinced my concern was mistaken. These people, with their truly amazing technological naiveté, stupidity, and ignorance of the importance and fragility of a humanly bearable culture, are dangerous to society and they must be stopped!

The real trouble goes well beyond a truly obscene waste of money on projects which shall never succeed in anything remotely like the objectives they intend to achieve. Rather, the trouble is in the dangerously stupid things which these blind fools are going to produce in the course of failing to achieve their intended goals.

Sydney, you have absolutely no fucking idea what you're playing at, and it's obvious that you've been able to remain blissfully ignorant of this: your naiveté concerning your research program is dangerous to the rest of us.

It is objectively and patently insane even to desire, and to work toward, a future containing what is meant by "smart machines" in the sense of "strong A.I."

Machines must never be designed to operate beyond the easy, immediate, and direct control of human supervisors. Yet operating beyond such control is already manifestly feasible with some present technology, and this ought never to be allowed to develop further. The desire, in itself, to have machines which could in some manner "outsmart" us, which could be so like us as to seem, completely convincingly, "virtually human," is both a wildly stupid and an insanely Faustian ambition.
