Feral neurons

#1
Magical Realist
"In a recent Edge interview, Dan Dennett pitches the most fascinating new idea I've read in a long, long time: That our neurons are powerful computational building blocks in part because they've reverted to an older and slightly feral state.


Here's Dennett:

"Realize that every human cell in your body, including your neurons, is a direct descendent of eukaryotic cells that lived and fended for themselves, for about a billion years, as free-swimming, free-living little agents. They had to develop an awful lot of know-how and self-protective talent to do that. But when they joined forces to become multi-cellular creatures, they gave up a lot of that. They became, in effect, domesticated — part of larger, more monolithic organizations.

In general, we don't have to worry about our muscle cells rebelling against us. (When they do, we call it cancer.) But in the brain, I think, some little switch has been thrown in the genetics that, in effect, makes our neurons a little bit feral. It's like what happens when you let sheep or pigs go feral: they recover their wild talents very fast.

Maybe the neurons in our brains are not just capable, but motivated, to be more adventurous, exploratory, or risky in the way they live their lives. They're struggling amongst themselves for influence and for staying alive. As soon as that happens, you have room for cooperation, to create alliances, coalitions, cabals, etc."

Dennett traces this idea — of the "selfish" neuron — to computational neuroscientist Sebastian Seung. According to Seung and Dennett, it's precisely because of neuronal selfishness that the brain is able to "spontaneously reorganize itself in response to trauma or novel experiences." For example:

Mike Merzenich sutured a monkey's fingers together so that it didn't need as much cortex to represent two separate individual digits, and pretty soon the cortical regions that were representing those two digits shrank, making that part of the cortex available to use for other things. When the sutures were removed, the cortical regions soon resumed pretty much their earlier dimensions.

Or if you blindfold yourself for eight weeks, as Alvaro Pascual-Leone does in his experiments, you find that your visual cortex starts getting adapted for Braille, for haptic perception, for touch.

Why should these [idle] neurons be so eager to pitch in? Well, they're out of work. They're unemployed, and if you're unemployed, you're not getting your neuromodulators, so your receptors are going to start disappearing, and pretty soon you're going to be really out of work, and then you're going to die.

In other words, the selfishness of neurons incentivizes them to be useful — to hook up with the right network of their fellow neurons, which is itself hooked up with other networks (both 'up' and 'downstream'), all so they can keep earning their share of life-sustaining energy and raw materials.

Thus there is, in this view, an internal 'economy' in the brain, in which neurons must compete with each other for resources. This design stands in contrast to the standard, Von Neumann computer architecture, whose parts never have to worry about where their energy is coming from. Without resource contention, there's no need for selfishness. And this is, in part, why computers are less flexible and adaptable — less plastic — than brains.
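The 'internal economy' idea can be caricatured in a toy simulation. Everything below is an invented illustration, not a model from Dennett or Seung: units that happen to be "employed" earn a share of a fixed reward budget, every unit pays a metabolic cost each step, and units whose resources fall below a threshold are pruned, freeing up budget for the survivors (a crude stand-in for cortical remapping).

```python
import random

random.seed(0)

# Toy "neuronal economy": units compete for a fixed reward budget.
# Active ("employed") units earn resources; idle units decay and are
# pruned, which raises the per-unit share for those that remain.
units = {i: 1.0 for i in range(10)}   # unit id -> resource level
BUDGET = 5.0                          # total reward paid out per step
DECAY = 0.1                           # per-step metabolic cost
THRESHOLD = 0.2                       # below this, a unit is pruned

for step in range(50):
    # On any step, only about half the units find "work".
    active = random.sample(sorted(units), k=max(1, len(units) // 2))
    share = BUDGET / len(active)
    for i in list(units):
        units[i] -= DECAY             # everyone pays to stay alive
        if i in active:
            units[i] += share         # the employed earn their keep
        units[i] = min(units[i], 3.0) # cap hoarding
        if units[i] < THRESHOLD:
            del units[i]              # chronically idle -> pruned

print(f"{len(units)} of 10 units survive after 50 steps")
```

Since active units always earn more than they spend, at least one unit survives every run; which ones die depends on the random "employment" draws, which is the point of the analogy: selection among units, not a central allocator, decides the final wiring.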

Plasticity, says Dennett, is itself one of the most amazing features of the brain, and if you don't have an architecture that can explain it, your model has a major defect. I think you really have to think of individual neurons as micro-agents, and ask what's in it for them?

Neurons as agents: This could well be the single most important fact about our brains."

Source: http://www.meltingasphalt.com/neurons-gone-wild/
#2
C C
(Apr 12, 2015 05:51 PM)Magical Realist Wrote: "In a recent Edge interview, Dan Dennett pitches the most fascinating new idea I've read in a long, long time: That our neurons are powerful computational building blocks in part because they've reverted to an older and slightly feral state. [...] Plasticity, says Dennett, is itself one of the most amazing features of the brain, and if you don't have an architecture that can explain it, your model has a major defect. I think you really have to think of individual neurons as micro-agents, and ask what's in it for them? Neurons as agents: This could well be the single most important fact about our brains." (Source: http://www.meltingasphalt.com/neurons-gone-wild/)

It seems to help open the door to what's discussed below, though I seriously doubt Dennett would welcome these consequences....

Eric Schwitzgebel: If you're reading this, you probably know who Dan Dennett is. His book Consciousness Explained is probably the best-known contemporary philosophical work on consciousness from a materialist perspective. Today I'll argue that Dennett should, by his own lights, accept the view that the United States is conscious -- that is, the view that the United States has a stream of subjective experience over and above the individual experiences of its residents and citizens, the view that there's "something it's like" to be the United States, as truly as there is something it's like to be you and me. --Why Dennett Should Think That the United States Is Conscious

Eric Schwitzgebel: [...] Second illustration of the likely bizarreness of materialism: The consciousness of the United States. It would be bizarre to suppose that the United States has a stream of conscious experience distinct from the conscious experiences of the people who compose it. I hope you’ll agree. (By “the United States” here, I mean the large, vague-boundaried group of people who are compatriots, sometimes acting in a coordinated manner.) Yet it’s unclear by what materialist standard the United States lacks consciousness. Nations, it would seem, represent and self-represent. They respond (semi-)intelligently and self-protectively, in a coordinated way, to opportunities and threats. They gather, store, and manipulate information. They show skillful attunement to environmental inputs in warring and spying on each other. Their subparts (people and subgroups of people) are massively informationally interconnected and mutually dependent, including in incredibly fancy self-regulating feedback loops. These are the kinds of capacities and structures that materialists typically regard as the heart of mentality.

Nations do all these things via the behavior of their subparts, of course; but on materialist views individual people also do what they do via the behavior of their subparts. A planet-sized alien who squints might see individual Americans as so many buzzing pieces of a somewhat diffuse body consuming bananas and automobiles, invading Iraq, exuding waste. Even if the United States still lacks a little something needed for consciousness, it seems we ought at least hypothetically to be able to change that thing, and so generate a stream of experience. We presumably needn’t go nearly as far as Block does in his famous “Chinese nation” example (1978/1991) – an example in which the country of China implements the exact functional structure of someone’s mind for an hour – unless we suppose, bizarrely, that consciousness is only possible among beings with almost exactly our psychology at the finest level of functional detail.

If we are willing to attribute consciousness to relatively unsophisticated beings (frogs? fish?), well, it seems that the United States can, and does sometimes, act with as much coordination and intelligence, if on a larger scale. Arguably, the United States is vastly more functionally sophisticated than such organisms, or even than individual human beings, considering the layers upon layers of bureaucratic military and civilian organization and the subsumption of individual human intelligences within the goals and outputs of those organizations.

The most plausible materialistic attempt I’ve seen to confine consciousness within the skull while respecting the broadly functionalist spirit of most materialism is Andy Clark’s (2009) and Chris Eliasmith’s (2009) suggestion that consciousness requires the functional achievements possible through high bandwidth neural synchrony. However, it’s hard to see why speed per se should matter. Couldn’t conscious intelligence be slow-paced, especially in large entities? And it’s hard to see why synchrony should matter either, as long as the functional tasks necessary for intelligent responsiveness are successfully executed.

Alternatively, one might insist that specific details of biological implementation are essential to consciousness in any possible being – for example, specific states of a unified cortex with axons and dendrites and ion channels and all that – and that broadly mammal-like or human-like functional sophistication alone won’t do. However, as I argued in the discussion of pain, it seems bizarrely chauvinistic to regard consciousness as only possible in beings with internal physical states very similar to our own, regardless of outwardly measurable behavioral similarity. Or is there some specific type of behavior that all conscious animals exhibit but that the United States, perhaps slightly reconfigured, could not exhibit, and that is a necessary condition of consciousness? It’s hard to see what that behavior could be.

Is the United States simply not enough of an “entity” in the relevant sense? Well, why not? What if we all held hands? In his classic early statement of functionalism, Putnam (1965) simply rules out, on no principled grounds, that a collection of conscious organisms could be conscious. He didn’t want his theory to result in swarms of bees having collective conscious experience, he says. But why not? Maybe bee swarms are dumber and represent less than do individual bees – arguably committees collectively act and collectively represent less than do their members as individuals – but that would seem to be a contingent, empirical question about bees. To rule out swarm consciousness a priori, regardless of swarm behavior and swarm structure, seems mere prejudice against beings of radically different morphology. Shouldn’t a well-developed materialist view eventually jettison unprincipled folk morphological prejudices?

We resist, perhaps, attributing consciousness to noncompact beings and to beings whose internal mechanisms we can see – but most materialist theories appear to imply, and probably part of common sense also implies, that such differences aren’t metaphysically important. The materialist should probably expect that some entities to which it would seem bizarre to attribute conscious experience do in fact have conscious experience. If materialism is true, and if the kinds of broadly functional capacities that most materialists regard as central to conscious mentality are indeed central, it may be difficult to dodge the conclusion that the United States has its own stream of conscious experience, in addition to the experiences of its individual members. That’s the kind of bizarreness I’m talking about.
--The Crazyist Metaphysics of Mind