
Posted by: C C - Nov 24, 2014 03:32 AM - Forum: Biochemistry, Biology & Virology - Replies (1)

http://news.sciencemag.org/biology/2014/...osexuality

EXCERPT: Dean Hamer finally feels vindicated. More than 20 years ago, in a study that triggered both scientific and cultural controversy, the molecular biologist offered the first direct evidence of a “gay gene,” by identifying a stretch on the X chromosome likely associated with homosexuality. But several subsequent studies called his finding into question. Now the largest independent replication effort so far, looking at 409 pairs of gay brothers, fingers the same region on the X. “When you first find something out of the entire genome, you’re always wondering if it was just by chance,” says Hamer, who asserts that new research “clarifies the matter absolutely.”

But not everyone finds the results convincing. And the kind of DNA analysis used, known as a genetic linkage study, has largely been superseded by other techniques. Due to the limitations of this approach, the new work also fails to provide what behavioral geneticists really crave: specific genes that might underlie homosexuality....

Posted by: C C - Nov 24, 2014 03:26 AM - Forum: Geophysics, Geology & Oceanography - No Replies

http://www.scienceworldreport.com/articl...-earth.htm

EXCERPT: A catastrophic landslide that occurred 21 million years ago may be the largest to have ever occurred on Earth's surface. Scientists have discovered the remains of the Markagunt gravity slide, which was the size of three Ohio counties.

Geologists have known about smaller portions of the Markagunt slide for years. Yet recent mapping techniques have finally shown its full extent. Now, researchers have announced that this landslide was far, far larger than previously expected.

The landslide itself occurred in an area between what is now Bryce Canyon National Park and the town of Beaver, Utah. It covered a staggering 1,300 square miles, and was probably far larger than the Heart Mountain slide....

Posted by: C C - Nov 24, 2014 03:16 AM - Forum: Meteorology & Climatology - Replies (1)

http://www.scienceworldreport.com/articl...-field.htm

EXCERPT: The sun may actually be influencing lightning strikes on Earth. How? The sun is temporarily "bending" Earth's magnetic field and allowing a shower of energetic particles to enter the upper atmosphere. In fact, it could be causing as much as 50 percent more lightning strikes in the UK...

Posted by: C C - Nov 24, 2014 03:10 AM - Forum: Logic, Metaphysics & Philosophy - Replies (2)

http://chronicle.com/article/Neuroscienc...he/150141/

We have shifted our focus from the meaning of ideas to the means by which they’re produced. When professors began using critical theory to teach literature, they were, in effect, committing suicide by theory....

EXCERPT: When, in 1942, Lionel Trilling remarked, "What gods were to the ancients at war, ideas are to us," he suggested a great deal in a dozen words. Ideas were not only higher forms of existence, they, like the gods, could be invoked and brandished in one’s cause. And, like the gods, they could mess with us. In the last century, Marxism, Freudianism, alienation, symbolism, modernism, existentialism, nihilism, deconstruction, and postcolonialism enflamed the very air that bookish people breathed. To one degree or another, they lit up, as Trilling put it, "the dark and bloody crossroads where literature and politics meet."

Trilling belonged to a culture dominated by New York Intellectuals, French writers, and British critics and philosophers, most of whom had been marked by the Second World War and the charged political atmosphere of the burgeoning Cold War. Nothing seemed more crucial than weighing the importance of individual freedom against the importance of the collective good, or of deciding which books best reflected the social consciousness of an age when intellectual choices could mean life or death. And because of this overarching concern, the interpretation of poetry, fiction, history, and philosophy wasn’t just an exercise in analysis but testified to one’s moral view of the world.

"It was as if we didn’t know where we ended and books began," Anatole Broyard wrote about living in Greenwich Village around midcentury. "Books were our weather, our environment, our clothing. We didn’t simply read books; we became them." Although Broyard doesn’t specify which books, it’s a good bet that he was referring mainly to novels, for in those days to read a novel by Eliot, Tolstoy, Dostoevsky, Conrad, Lawrence, Mann, Kafka, Gide, Orwell, or Camus was to be reminded that ideas ruled both our emotions and our destinies.

Ideas mattered—not because they were interesting but because they had power. Hegel, at Jena, looked at Napoleon at the head of his troops and saw "an idea on horseback"; and just as Hegel mattered to Marx, so Kant had mattered to Coleridge. Indeed, ideas about man, society, and religion suffused the works of many 19th-century writers. Schopenhauer mattered to Tolstoy, and Tolstoy mattered to readers in a way that our best novelists can no longer hope to duplicate. If philosophy, in Goethe’s words, underpinned eras of great cultural accomplishment [...], one has to wonder which philosophical ideas inspire the current crop of artists and writers. Or is that too much to ask? Unless I am very much mistaken, the last philosopher to exert wide-ranging influence was Wittgenstein [...]

[The unabashed mission of] postmodern theorists [...] was to expose Western civilization’s hidden agenda: the doctrinal attitudes and assumptions about art, sex, and race embedded in our linguistic and social codes. For many critics in the 1970s and 80s, the Enlightenment had been responsible for generating ideas about the world that were simply innocent of their own implications. Accordingly, bold new ideas were required that recognized the ideological framework of ideas in general. So Barthes gave us "The Death of the Author," and Foucault concluded that man is nothing more than an Enlightenment invention, while Paul de Man argued that insofar as language is concerned there is "in a very radical sense no such thing as the human."

All of which made for lively, unruly times in the humanities. It also made for the end of ideas as Trilling conceived them. For implicit in the idea that culture embodies physiological and psychological codes is the idea that everything can be reduced to a logocentric perspective, in which case all schools of thought become in the end variant expressions of the mind’s tendencies, and the principles they affirm become less significant than the fact that the mind is constituted to think and signify in particular ways. This may be the reason that there are no more schools of thought in the humanities as we once understood them.

[...] This is not to suggest that the humanities have been completely revamped by the postmodern ethos. There are professors of English who teach literature the old-fashioned way, calling attention to form, imagery, character, metaphor, genre, and the changing relationship between books and society. Some may slant their coursework toward the racial, sexual, and political context of stories and poems; others may differentiate between the purely formal and the more indefinably cultural.

That said, what the postmodernists indirectly accomplished was to open the humanities to the sciences, particularly neuroscience. By exposing the ideological codes in language, by revealing the secret grammar of architectural narrative and poetic symmetries, and by identifying the biases that frame "disinterested" judgment, postmodern theorists provided a blueprint of how we necessarily think and express ourselves. In their own way, they mirrored the latest developments in neurology, psychology, and evolutionary biology. To put it in the most basic terms: Our preferences, behaviors, tropes, and thoughts—the very stuff of consciousness—are byproducts of the brain’s activity. And once we map the electrochemical impulses that shoot between our neurons, we should be able to understand—well, everything. So every discipline becomes implicitly a neurodiscipline, including ethics, aesthetics, musicology, theology, literature, whatever.

[...] All this emphasis on the biological basis of human behavior is not to everyone’s liking. The British philosopher Roger Scruton, for one, takes exception to the notion that neuroscience can explain us to ourselves. He rejects the thought that the structure of the brain also structures the person, since an important distinction exists between an event in the brain and the behavior that follows. And, by the same token, the firing of neurons does not in a strictly causal sense account for identity, since a "person" is not identical to his or her physiological components.

Even more damning are the accusations in Sally Satel and Scott O. Lilienfeld’s Brainwashed: The Seductive Appeal of Mindless Neuroscience, which argues that the insights gathered from neurotechnologies have less to them than meets the eye. The authors seem particularly put out by the real-world applications of neuroscience as doctors, psychologists, and lawyers increasingly rely on its tenuous and unprovable conclusions. Brain scans evidently are "often ambiguous representations of a highly complex system … so seeing one area light up on an MRI in response to a stimulus doesn’t automatically indicate a particular sensation or capture the higher cognitive functions that come from those interactions."

What makes these arguments, as well as those swirling around evolution, different from the ideas that agitated Trilling can be summed up in a single word: perspective. Where once the philosophical, political, and aesthetic nature of ideas was the sole source of their appeal, that appeal now seems to derive from something far more tangible and local. We have shifted our focus from the meaning of ideas to the means by which they’re produced. The same questions that always intrigued us—What is justice? What is the good life? What is morally valid? What is free will?—take a back seat to the biases embedded in our neural circuitry. Instead of grappling with the gods, we seem to be more interested in the topography of Mt. Olympus.

[...] Twenty-five years ago, humanist ideas still had relevance; it seemed important to discuss critical models and weigh ideas about how to read a text. "What are you rebelling against?" a young woman asked Brando in The Wild One. "What d’ya got?" he replied. As if to make up for two and a half centuries of purportedly objective aesthetic and moral judgments, an array of feminists, Marxists, deconstructionists, and semioticians from Yale to Berkeley routinely engaged in bitter skirmishes. Yes, a few traditional men and women of letters continued to defend objective values, but it seemed that practically everyone in the academy was engaged on some antinomian quest.

Nothing remotely similar exists today. Pundits and professors may still kick around ideas about our moral or spiritual confusion, but the feeling of urgency that characterized the novels of Gide, Mann, Murdoch, Bellow, or Sebald seems awfully scarce. Is there a novelist today of whom we can say, as someone said of Dostoevsky, he "felt thought"? To read Dostoevsky, as Michael Dirda pointed out, is to encounter "souls chafed and lacerated by theories." This is not to suggest that you can’t find ideas in Richard Powers or David Foster Wallace; it’s just that the significance attached to their ideas has been dramatically muted by more pressing concerns....

Posted by: C C - Nov 24, 2014 02:39 AM - Forum: Computer Sci., Programming & Intelligence - No Replies

http://edge.org/conversation/the-myth-of-ai

EXCERPT: [...] The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us. ...That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

[...] What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if we talk about the particular technical challenges that AI researchers might be interested in, we end up with something that sounds a little duller and makes a lot more sense.

For instance, we can talk about pattern classification. Can you get programs that recognize faces, that sort of thing? And that's a field where I've been active. I was the chief scientist of the company Google bought that got them into that particular game some time ago. And I love that stuff. It's a wonderful field, and it's been wonderfully useful.

But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous ... when you do all of that, you create a series of negative consequences that undermine engineering practice, and also undermine scientific method, and also undermine the economy.

The problem I see isn't so much with the particular techniques, which I find fascinating and useful, and am very positive about, and should be explored more and developed, but the mythology around them which is destructive....

Posted by: C C - Nov 24, 2014 02:37 AM - Forum: Religions & Spirituality - Replies (2)

http://edge.org/conversation/the-myth-of-ai

EXCERPT: [...] In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. ... That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allows the data schemes to operate, contributing to the fortunes of whoever runs the computers. You're saying, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," and, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion...

[...] For three decades, the AI world was trying to create an ideal, little, crystalline algorithm that could take two dictionaries for two languages and turn out translations between them. Intellectually, this had its origins particularly around MIT and Stanford. Back in the 50s, because of Chomsky's work, there had been a notion of a very compact and elegant core to language. It wasn't a bad hypothesis, it was a legitimate, perfectly reasonable hypothesis to test. But over time, the hypothesis failed because nobody could do it.

Finally, in the 1990s, researchers at IBM and elsewhere figured out that the way to do it was with what we now call big data, where you get a very large example set, which, interestingly, we call a corpus—call it a dead person. That's the term of art for these things. If you have enough examples, you can correlate examples of real translations phrase by phrase with new documents that need to be translated. You mash them all up, and you end up with something that's readable. It's not perfect, it's not artful, it's not necessarily correct, but suddenly it's usable. And you know what? It's fantastic. I love the idea that you can take some memo, and instead of having to find a translator and wait for them to do the work, you can just have something approximate right away, because that's often all you need. That's a benefit to the world. I'm happy it's been done. It's a great thing.
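The phrase-by-phrase correlation Lanier describes can be illustrated with a toy sketch. The mini-corpus and the greedy longest-match strategy below are hypothetical stand-ins for exposition only, not the actual IBM statistical-alignment method, which operates over millions of examples:

```python
# Toy sketch of corpus-based translation: build a phrase table from a
# (hypothetical) parallel corpus, then translate new text phrase by phrase.

# Hypothetical parallel corpus: (English phrase, French phrase) pairs.
parallel_corpus = [
    ("good morning", "bonjour"),
    ("thank you", "merci"),
    ("the cat", "le chat"),
    ("is sleeping", "dort"),
]

# Correlate the examples into a lookup table.
phrase_table = {en: fr for en, fr in parallel_corpus}

def translate(sentence: str) -> str:
    """Greedily match the longest known phrase at each position."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest span first, shrinking until a phrase matches.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i = j
                break
        else:
            out.append(words[i])  # unknown word passes through untranslated
            i += 1
    return " ".join(out)

print(translate("good morning the cat is sleeping"))
# -> bonjour le chat dort
```

As in the real systems, the output quality depends entirely on how much of the input is covered by the example set, which is why the corpora must be continually refreshed.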

The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.

In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up.

This is not one of those. What this is, is behind the curtain, is literally millions of human translators who have to provide the examples. The thing is, they didn't just provide one corpus once way back. Instead, they're providing a new corpus every day, because the world of references, current events, and slang does change every day. We have to go and scrape examples from literally millions of translators, unbeknownst to them, every single day, to help keep those services working.

The problem here should be clear, but just let me state it explicitly: we're not paying the people who are providing the examples to the corpora—which is the plural of corpus—that we need in order to make AI algorithms work. In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.

This, to me, is where it becomes serious. Everything up to now, you can say, "Well, look, if people want to have an algorithm tell them who to date, is that any stupider than how we decided who to sleep with when we were young, before the Internet was working?" Doubtful, because we were pretty stupid back then. I doubt it could have that much negative consequence.

This is all of a sudden a pretty big deal. If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects...

