Categories
biology brain cognition cognitive style communication fear introversion law enforcement mental flexibility mental health personality physical environment psychology risk analysis speech stress

Introversion’s potential risks – temporary language blindness

During the past two weeks, I’ve enjoyed rich conversations with some creative, insightful friends—introverts all. As an introvert with many interests, I can occupy myself with research and other projects for weeks on end without feeling the need to engage directly with others beyond my wife. A few years ago I became more aware of research finding that introverts, to be optimally healthy, need to deliberately cultivate regular social interaction with others. We can do this without violating our other needs. Introversion entails both health boosters and detractors. On the downside, according to Laurie Helgoe, Ph.D., introverts

  • may experience more stress in social situations, or even when thinking about them, and avoiding social opportunities may erode their health
  • may be more realistic about negative realities or may fixate on them, presenting more opportunities for negative moods or depression
  • may be less emotionally adaptable to open or crowded living or working environments (introverts tend to prefer living in less populated areas where they can be outdoors without being crowded, as in many mountainous areas)
  • may not benefit as much from fitness and other activities that are organized to emphasize socialization (think CrossFit or many other popular fitness programs)
  • may have less effective immune systems, though the effect is small
  • may require more time and effort to think through decision scenarios (possibly due to the denser gray matter in their brains)
  • may be more easily aroused by sensory stimuli, which can lead them to seek situations with less stimulation
  • may avoid risk-taking, which can have positive and negative effects (they’re unlikely to become gambling addicts but are also more likely to miss significant opportunities that require them to take chances)
  • may ignore negative health indicators and delay speaking with health care providers about potential health issues
  • may experience slower situational comprehension and response times in loud environments or in situations with intensified sounds or urgency signals, such as alarms, vehicle horns, or people yelling commands (think of the spate of recent episodes of police excessive force against people they claim were not obeying screamed orders)

Regarding the last point above, an introvert friend worries that she won’t be able to comprehend the screamed commands of a threatening police officer and will be arrested, injured, or even killed as a result. There is probably a clinical or technical name for such a temporary inability to process language. I’m unaware of any law enforcement training that specifically addresses this issue. If you know more about it, please post a comment.

Categories
artificial intelligence cognition complexity

Book review – Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark

Max Tegmark’s new book, Life 3.0: Being Human in the Age of Artificial Intelligence, introduces a framework for defining types of life based on the degree of design control that sensing, self-replicating entities have over their own ‘hardware’ (physical forms) and ‘software’ (“all the algorithms and knowledge that you use to process the information from your senses and decide what to do”).

It’s a relatively non-academic read and well worth the effort for anyone interested in the potential to design the next major forms of ‘Life’ to transcend many of the physical and cognitive constraints that now have us on the brink of self-destruction. Tegmark’s forecast is optimistic.

Categories
artificial intelligence brain cognition computing metaphors

Computer metaphor not accurate for brain’s embodied cognition

It’s common for brain functions to be described in terms of digital computing, but this metaphor does not hold up in brain research. Unlike computers, in which hardware and software are separate, organic brains’ structures embody memories and brain functions. Form and function are entangled.

Rather than finding brains to work like computers, we are beginning to design computers–artificial intelligence systems–to work more like brains.

https://www.wired.com/story/tech-metaphors-are-holding-back-brain-research/

Categories
artificial intelligence brain cognition communication complexity computing engineering interaction design interface metaphors semantics speech speech synthesis

Should AI agents’ voice interactions be more like our own? What effects should we anticipate?

An article at Wired.com considers the pros and cons of making the voice interactions of AI assistants more humanlike.

The assumption that more humanlike speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient in using computers. When designing the interfaces between humans and machines, should we minimize the demands placed on users to learn more about the system they’re interacting with? That seems to have been Alan Kay’s assumption when he designed the first desktop interface in the early 1970s.

Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. In a personal example, someone dear to me grew up helping her mother, an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear. “Oh, it’s just like in the real world!” (Chalk one up for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one’s Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. To make matters worse, she clicked on a string of underlined text in a document and was forcibly and instantly transported to a strange destination. Cries for help punctuated my hours. Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however, was another matter altogether: an innovation that multiplied the value I found in computing.

On the other end of the complexity spectrum would be machine-level code. There would be no general computing today if we all had to speak to computers in their own fundamental language of ones and zeros. That hasn’t stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:

“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”

Interaction designers operate on the metaphor end of the spectrum by necessity. The human brain organizes concepts by semantic association. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.

The article referenced at the beginning of this post mentions that humans are manually coding “speech synthesis markup tags” to make the synthesized voices of AI systems sound more natural. (Note that this creates the appearance that the AI understands the user’s intent and emotional state, though this more natural-seeming intelligence is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human-machine interactions to the sort of vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
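For readers unfamiliar with these tags, here is a minimal sketch of what such markup can look like. I’m assuming the W3C’s Speech Synthesis Markup Language (SSML), which most commercial text-to-speech engines accept; the pause length, prosody values, and sentence content below are my own illustration, not taken from the article.

  <speak>
    Hmm, <break time="300ms"/> let me check your calendar.
    <prosody rate="slow" pitch="+2st">Good news! You have no meetings tomorrow.</prosody>
    <emphasis level="strong">Enjoy the quiet day.</emphasis>
  </speak>

The hesitant “Hmm,” the pause, and the upbeat pitch shift are all hand-authored effects; nothing in the markup reflects actual understanding of the user, which is precisely the illusion noted above.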

I suspect a colloquial mode could be more effective in certain kinds of interactions: deceiving a human into thinking she’s speaking with another human; providing virtual talk therapy; translating from one language to another in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion; and so on.

In conclusion, operating systems, applications, and AIs are not humans. To improve our effectiveness in using more complex automation systems, we will have to meet them farther along the complexity continuum–still far from machine code, but at points of complexity that require much more of us as users.

Categories
brain brain imaging

Mathematical field of topology reveals importance of ‘holes in brain’

A New Scientist article reports that applying the mathematical field of topology to brain science suggests gaps in densely connected brain regions serve important cognitive functions. Newly discovered densely connected neural groups are characterized by a gap in the center, with one edge of the ring (cycle) being very thin. It’s speculated that this architecture evolved to enable the brain to better time and sequence the integration of information from different functional areas into a coherent pattern.
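For intuition about what a topological ‘hole’ means here (my gloss, not from the article): in a network, the number of independent one-dimensional holes is its first Betti number, which for a graph equals edges − vertices + connected components. A ring of eight neurons wired in a single cycle has 8 − 8 + 1 = 1: one hole in the center, regardless of how weak any single edge of the ring is. Topology counts the gap itself, not the strength of the connections surrounding it.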

Aspects of the findings appear to support Edelman’s and Tononi’s (2000, p. 83) theory of neuronal group selection (TNGS, aka neural Darwinism).


Edelman, G.M. and Tononi, G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. Basic Books.