
Comments by Zeuglodon

Go to: Celebrating Curiosity on Twitter

Jump to comment 41 by Zeuglodon

Comment 31 by ColdThinker

Sadly, one of the things that easily spurs crowds of people on is having a common opponent. It's depressing how easily people begin thinking in terms of "us versus them".

Comment 38 by Alan4discussion

The Chinese solved their exploding population problem, with their radical "One Child" law, and absolute intolerance of meddling religinuts. Drastic - but it can be done!

Not entirely without complications. China still has a monstrous population size, and the policy has resulted in a lot of unfortunate infanticide. The trouble is it's either birth control or mass famine, and we live on a planet where a few deluded souls with political power act like contraceptives are murder weapons.

Comment 37 by Alan4discussion

The short answer is there are no confirmed examples of bacteria with arsenic incorporated in their DNA. Just sloppy science, media hype, and some resistant bacteria living in arsenic contaminated water.

That's news to me. I must have a look at that link.

Comment 40 by holysmokes

I approve, but it is sometimes convenient to call someone an American or a Russian rather than to recall their specific birthplace, and more satisfying to nail their nationality than to be vague.

Tue, 07 Aug 2012 16:46:37 UTC | #950502

Go to: A Baltimore Catechism for the New Atheists

Jump to comment 12 by Zeuglodon

Comment 6 by Quine

Comment 9 by Quine

I am wondering if the invocation of "purpose" was more specifically directed at living beings, in a manner similar to the argument from design. Perhaps it would be worth addressing this: say, that the apparent purposefulness of nature derives from a non-random selection process that favours traits better fitted to whatever else is around, including other traits and copies of itself.

Comment 11 by Alan4discussion

Their "evidence" is inevitably simply a denial of science, speculation on areas of uncertainty, or comparing the different "philosophical, hypothetical towers on their theistic "castles in the air" - which have no physical connections to the material universe or objective observations.

The "evidence" used most often is straightforward circular reasoning. They presume without evidence that a deity exists as part of the explanation, which is the point of contention. They presume a deity is necessarily connected to morality, which is the point of contention. They presume a human being was designed, which is the point of contention. In not one case do they cite real-world evidence that would justify these assumptions. To them, it's just intuitively self-evident.

Tue, 07 Aug 2012 11:56:19 UTC | #950488

Go to: Against All Gods

Jump to comment 113 by Zeuglodon

Comment 111 by susanlatimer

Comment 110 by Schrodinger's Cat

Let's not get confused here. There are three separate issues being conflated regarding this "Why" question, and it would help all of us if we separated them.

The first and most obvious use of "Why" is indeed a straightforward explanation for a given observed phenomenon, as SC points out. If I ask why humans exist, an explanation would indeed be one that made reference to evolution and to the history of the hominids as they evolved from apes; say, that apes evolved larger brains for Machiavellian intelligence or for more refined abilities of planning and problem-solving as part of an extremely omnivorous lifestyle. It need not be restricted to ape-hominid evolution - in principle, every evolutionary split is part of the explanation - but this is the most obvious place to start. In this sense, "How" and "Why" are different ways of phrasing the same question.

The second use of "Why" is an implicit assumption of purpose imposed by a conscious mind. "Why do humans exist?" in this sense might be answered with reference to a deity's plans or goals for us, akin to asking why tools or useful domesticated animals exist. This is the problematic one because it assumes that such a mind exists from the get-go, so a religionist invoking it is making an implicit circular argument.

The third use of "Why" is a comparison between two states: "Why do humans exist as opposed to not existing?" More broadly, it's a subset of the question "Why does anything exist (as opposed to not existing)?" The problem with this one is that "nothing" is a concept mostly used on Earth to mean a vacuum, a negation of something that does exist within space-time (in the sense of dividing table from "not-table", which is practically everything else apart from the table) or a place filled with air. It's hard if not impossible to say what metaphysical nothingness would even be like, or if it could, paradoxically, "exist". As there's no means to even establish that the alternative is possible outside of our imaginations, this question is unanswerable to everybody (scientist, religionist, philosopher), so no one achieves anything by invoking it.

Sun, 05 Aug 2012 14:53:33 UTC | #950411

Go to: Meme Theory, Zahavi's Handicap, and the Baldwin Effect

Jump to comment 38 by Zeuglodon

Comment 32 by phil rimmer

Comment 31 by Zeuglodon

I think it's an error to assign the designation of "copying" too readily

and

it must follow from the genetic need for brains to replicate information.

I fail to see your semantic nicety.

My point is similar to one about group selection - it presupposes that the genes have brought about certain qualities in their host organisms. In the case of memes, memetics presupposes that there is a replication mechanism set up in brains by genes. In the case of group selection, it presupposes that animals have social behaviours. In both cases, I think it would be more interesting and more productive to ask why those conditions have come about in the first place. After all, group selection has been rendered obsolete by such investigations. Memetics may or may not go the same way, but either way those previous mechanisms must be explored first. This is why genes are the topic of my interest.

I qualified "copying" in a very detailed way, aligning it with "RNA World Soup" replication, which we understand to be very poor with huge lateral leakage and little to help us define entities at all.

RNA is not a terrible replicator - it still has the ability to preserve its genes for multiple generations before the first mutation kicks in. Memes cannot be bootstrapped by a weak analogy, because their mutation rate is severe by contrast.

I did suggest mechanical and expressive actions to be candidates for gene-like memes (the video illustrated the potential fidelity of this copying) and that these can constitute a (formal) substrate for more complex transfers (?)

I'm not sure what you mean here, so I'll assume it's a lead-in to your next paragraph.

O'Hooligan details the mechanical nature of ritual, rhyme, rhythm and music and the like. These formal processes, mirror neuron copied with good accuracy, can become the cultural machine that transcribes more complex stuff.

My point, though, is that neurons don't copy with good accuracy. A single fragment of information (say, the original copy of a sheet of music) may endure, but the copies that come from that one are the first generation, and the copies of those become very Xerox-like, in that a copy of a copy quickly degenerates. Try copying out a passage from a book, then copy out that passage from the copy, and do this several times: in every generation, spelling mistakes, minor punctuation errors, and the like will accrue, word substitutions may occur, and people are not above "improving" the text. Now, this feeble Xerox-of-a-Xerox style of copying may have enough juice for the weakest of evolutionary processes, but it does not last multiple generations without change, so how can anything reach fixation in a meme pool?
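
To make the Xerox-of-a-Xerox point concrete, here is a minimal sketch of my own (nothing in the argument depends on the exact figures; the 1% per-character error rate is purely illustrative): copy a short line of text down a chain, always copying the latest copy rather than the original, and watch the divergence accumulate.

    import random
    import string

    random.seed(1)

    def copy_text(text, error_rate=0.01):
        """Copy a string, corrupting each character with probability error_rate."""
        alphabet = string.ascii_lowercase + " "
        return "".join(random.choice(alphabet) if random.random() < error_rate else ch
                       for ch in text)

    def divergence(a, b):
        """Fraction of positions at which two equal-length strings differ."""
        return sum(x != y for x, y in zip(a, b)) / len(a)

    original = "evolution by natural selection needs high-fidelity replication"
    copy = original
    for generation in range(1, 11):
        copy = copy_text(copy)  # each generation copies the previous copy, not the original
        print("generation %2d: %.1f%% of characters changed" % (generation, 100 * divergence(original, copy)))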

Comment 33 by OHooligan

"imperfect" makes it sound as though the replication process could be as awful as I describe, but if the mutation rate is so high, it can barely be called replication from brain to brain

By "imperfect" I meant a non-zero mutation rate. If I'd meant "wildly inaccurate" - as you seem to assume - I'd have said so.

I think you minimize the difference without realizing it. A gene can literally be atom-for-atom exact for hundreds or thousands of generations before the first mutation hits it. If you're going to be so broad with the designation "imperfect", you could call anything an imperfect evolutionary system, because everything has a non-zero mutation rate. Mutation is, when you get down to it, just a change of structure or composition.
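
As a hedged back-of-the-envelope comparison (the probabilities below are illustrative placeholders, not measured figures): if each copying event mutates with probability p, the waiting time to the first mutation is geometrically distributed with mean 1/p, which is the gulf being pointed at here.

    # Toy waiting-time comparison; the probabilities are illustrative placeholders only.
    def mean_generations_to_first_mutation(p_per_copy):
        """Per-copy mutation probability p gives a geometric waiting time with mean 1/p."""
        return 1.0 / p_per_copy

    print(mean_generations_to_first_mutation(0.001))  # gene-like fidelity: ~1,000 faithful generations
    print(mean_generations_to_first_mutation(0.5))    # retold-story fidelity: ~2 generations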

You assert that idea transfer from mind to mind is too error prone to serve as a replication mechanism in the evolutionary sense. I disagree. There's a lot of error-correction going on that maintains the core essentials of a viable meme.

I talk about the correction mechanism further down this post, but to focus on it for a moment: I already pointed out that two brains trying to replicate an idea, or components of an idea, between them are more likely to make it less error-prone if they repeatedly go over the idea, i.e. have a discussion. The trouble is that the change then works both ways, so the original meme is mutated too. My scepticism comes from the fact that, when comparing a meme with a gene, invoking "essentials" is a sign that the replication system is not up to scratch. Nobody says the essentials of a gene are maintained. A gene passes down the generations literally identical at every point, until it mutates, at which point it is no longer the same gene. It becomes a different one, though that different one will still play the evolution game as its precursor did.

This is a long way from what happens to a meme. An idea shared among people is always different in some detail, and that is only the first or second generation. It is changed and altered in so many ways that it can soon diverge and even branch in wave after wave of subcultural change. It is true that ideas have an epidemiology much like a virus's, and fads grow and diminish, change and sometimes return altered.

However, once it's changed, it's not the same idea. Every iteration changes, so every iteration, in a strict sense, is not the same idea. This will come across as pedantic, but it means either that bits of ideas are memes, as opposed to whole ideas (just as genes, rather than whole genomes, are the true replicating entities of interest), or that memes are not evolving entities. You may wish to loosen the term "replicator" to include ideas, but it's the fact that such a loosening has to occur that would make a scientist suspicious of the concept.

Just because an information channel is noisy, doesn't mean that it can't carry a signal. For an example right under your nose: the internet.

I did not say the channel could not carry a signal, so this point of yours is true but addresses a straw man. What I was saying was that the signal could not be part of a replication system because the very existence of noise that can alter the process (even on the first generation) is enough to raise scepticism about meme theory. I am more supportive of the idea that memes would be bits of ideas rather than whole ones, to get around the noise problem. I even said so in my last comment:

It is possible that bits of the message, or the gist, are what endure, and that the whole idea is more like a genome than like an individual gene. In the jargon, it could be a memeplex, with some memes surviving the transition better than others. I think this would be a stronger angle to stress, as the gist of a speech would be like those bits of it that must translate well, and a piece of speech would have higher person-to-person fidelity than a whole speech. Nor do I mean that you'd remember fragments of speech like "and then", "evoluti", "Selection b", or "Evolution by natural selection". I mean that, if you and your interlocutor have a long enough chat about an idea, the odds are greater that your ideas will align with each other and be more faithful copies.

This is similar to your point about the "essence" of an idea getting across, but I think it is much more rigorous because it keeps us with what's physically going on in brains. The imparting of an idea, then, would be like making a copy of a memeplex - bits will fail to be copied, but any one meme could be copied exactly, and copied multiple times. This would be a legitimate way to get around the imperfect copying problem, because then bits of an idea could be exact copies for long enough to enable an evolutionary mechanism to occur.

My main criticism for this counter would be that any attempt to make sense of information coming in must already rely on a built-in ability to reverse-engineer what the other speaker said, and must already rely on some built-in ideas. In other words, discussing an idea with someone is not so much about creating a new pattern in their heads as about activating one already in there.

But then in what sense is this an independent replication system? Those parts of the brain coding for such innate ideas must already have been set up by genes, just as the legs of a stick insect clone must have been set up by the genes. The replicator is the gene or set of genes that install that part of the brain. An independent replicator is not needed to explain the similarities.

I suppose I could make it more explicit if I refer to an example. Think of subcultures of recent decades, like rebels, outlaws, wild ones, bohemians, punks, shock jocks, mau-maus, bad boys, gangstas, sex divas, bitch goddesses, vamps, tramps, and material girls (not my own wording - taken from How The Mind Works, page 502). They all differ in many ways, but an underlying thread connects each one: defiance of mainstream or of authority. Quentin Bell calls this conspicuous outrage, and it possibly follows the logic of the Zahavi-Grafen handicap principle, but applied to seeking allies and social integration. The image says "Look at me, I'm so secure and strong in my position that I can afford not to cooperate with you".

Followers without such confidence naturally might mimic it to get the same response from other people (fear and respect), but doing so dilutes the effectiveness of that subculture because now everybody - weak and strong alike - is doing it. So, a new subculture with new details arises among the socially powerful to make them stand out. It's discussed in more detail in the aforementioned book.

My point is that the common thread underlying the idea or ideas, far from being a clue to a replicator system exclusive to culture, is actually something that's set up in individuals by genes building brains to behave in certain ways. It doesn't itself replicate because it is the phenotype of genes that do the replicating. When a new trend is set off, it's noisy window dressing to the "essence", which is actually not a replicator. The reason we can understand each other in the first place is because the same genes in both our bodies set us up identically, at least in part, to begin with. Even if I've never heard of postmodernism before, the reason I can grasp the idea in the first place is because I have a brain that's like the brain that told me about the idea, not because the idea has copied itself from one to the other.

Chinese whispers show a counter-example: when there's no error-correction, what's transferred isn't a meme, it's just a Confused Noise (to borrow from E.E. Milne once again).

See my point above about this error-correction. In any case, I would have thought that Chinese Whispers would at least make you raise questions about a replication system that doesn't get it right on the first try. DNA has nearly never experienced such a problem in all the millions of years it's been around. Chinese Whispers shows why memes are awful replicators - because the ratio of mutations to number of generations can distort a message beyond recognition of the original.

Yes, but the copying doesn't need to be at all accurate in its internal details, just as long as it results in a recognizable copy at the macro level.

This is a confusion, akin to saying that genes can mutate at a high rate so long as the overall genome is preserved. If anything, the instability of the small details would quickly add up to instability of the big picture, because the catchment area of mutations increases with size. And any unifying thread between the fashions of subcultures, as mentioned above, would most likely be an innate thread owing its existence to genes in the first place, defeating the point of memetics.

Imagine reverse-engineering a product to make an equivalent one, as opposed to getting hold of the original design and knocking out identical replicas. A bike is a bike, even if every one is a bit different, and it's bike-like ideas that make for viable memes. The design, plans, tools in my bike factory may be totally different from those in yours, but both factories make things that people recognize as bikes.

I think this is disingenuous. A bike could have an extra wheel at the rear and be called a tricycle. Giving it a motor makes it a motorbike. Two extra wheels might make a quad bike, and before you know it you've got something similar to a small car. If we invented hovering technology, we'd next get hoverbikes. At which point do we draw the boundary of "bike", and why? In any case, this tells us nothing about bikes or about the ideas behind them getting replicated - only that the ideas endure in some fashion across genetic generations - nor does it suggest that the only possible origin for bikes would be the replication or evolution of ideas. The mistake is to think evolution is the only algorithm by which a good design comes about. There's nothing necessary about this connection at all.

The main unifying thread is that we use a bike as a sort of transport, which is a basic need for a creature that evolved to get about, possibly over long distances, for food or to find new habitat to settle into. We don't follow the basic mechanics of bikes because there are memes for bike-ness, but because anybody designing a transport device has to follow the laws of physics, such as those involving gears and wheels, and some work better than others.

It is tempting to point and say that this falls into line with memetics, but does it really? Any system that zeroed in on a better thing than others could be recruited for memetics in this way. Memetics requires some ideas to be better than others for some purposes (e.g. for making an efficient transport), but the fact that some ideas serve some purposes better than others does not by itself suggest that ideas are memes. At most, you've got a decision-making or problem-solving system, which is what brains are.

Most modifications of a basic idea can be and usually are made to the prototype. For instance, if something is wrong with the bike's steering, we don't get rid of the bike or artificially select generations of better bike designs. We go to a mechanic to install new parts or to fix the old ones, to give it an upgrade, or we buy a different bike (whereupon the old one may be sold to someone else who needs it for another purpose).

You may wish to go back to the drawing board and look at the ideas behind bikes through the ages and claim that those are the replicating entities, but most of these ideas come from artifacts that can outlast any one biological generation. We preserve them, and copy them only as a last resort when these artifacts start to decay - say, an old photo begins to wear and tear. Each copy is like a Xerox of a Xerox, which is why original copies of manuscripts and of ancient tools are so valuable in the first place. Ideas preserved in this way don't replicate. They endure.

Error correction is thus a basic requirement for an idea to be a meme - in a circular, self-serving definition, memes are the ideas that have what it takes to survive in their environment.

But the error-correction you describe, just as with the replicating mechanism, had to come from genetic evolution in the first place. In any case, see my point above about how the error-correction could actually be a problem for the meme idea.

Comment 34 by Schrodinger's Cat

I think the problem with identifying memetic replication arises solely because one is trying to identify it at too high and broad a level. One needs a closer analogy to genes...

Yes, I agree completely, which is why the macroscopic approach strikes me as being flawed.

At the most basic level, one can find an extremely accurate meme in the letters of the alphabet. The alphabet is a structure that daily gets copied with pretty much 100% accuracy. This is of course analogous to the 4 letters of DNA code.

Then one has words. These are long lasting memes.....which do in fact 'evolve' over time. Its interesting to note how many new words are alterations to existing words, either in form or context...similar to genes.

To take the analogy further, sentences or short pieces of prose are the chromosomes. In that context, consider the huge number of short 'sayings' or quotable quotes that exist within culture. In fact its interesting to note how most political or religious organisations have short 'slogans' and religions have short stories....parables. The brevity itself makes the slogan easier to remember......less likely to 'mutate'.

While I would probably question the exact one-to-one parallels between the two hierarchies, I won't deny you have a point. If memes exist at all, they cannot be gross structures such as sentences or general ideas. They would have to be much smaller entities.

And so on. So the fact that someone might not have the exact brain state as Marx on reading about dialectic materialism is sort of missing the point. The level of complexity plays a huge factor in ability to 'copy'....and clearly there is a simpler level at which memes most certainly do get copied with a high degree of accuracy.

I agree, up to a point, but you have mischaracterised my position with that comment about "exact brain state". I mean that, if a unit called a meme is to be copied at all, it must be a physical copy in the brain - whether in neural nets of hundreds or thousands of neurons - that matches. In other words, something somewhere must be replicating with total accuracy.

My problem is that this steers away from the Charybdis of gross mutation rates and towards the Scylla of genes doing all the work. Any similar structure between our brains at the neural-net level is very likely to have been set up by genes in the first place. We would all have an ear for phonemes even if we came to identify different phonemes in our particular languages. But in that case, the longevity of any particular phoneme - say, the short "a" - is explained by genes setting up the mechanism for recognizing one when we hear one, defeating the purpose of invoking an independent replicator in the first place. I go into more depth above.

I appreciate this is the weakest point of my counterargument to memetics, largely because it depends on the parameters both for the generation/mutation ratio and for how much this structure owes its set-up to genes. But even if this point were refuted, it would not change the fact that, at the macro level, ideas are unlikely to be replicating, much less evolving, entities.

Comment 35 by jimblake

You assert that replication is the key mechanism and that information is not the thing to focus on.

I'm pretty much paraphrasing what Dawkins has said. The whole of the life sciences is rooted in there being a replicator, both to kick-start and to maintain it. Dawkins explains it well enough in The Selfish Gene. No replicator, no life, so no life sciences.

I dispute that. As you know, a DNA molecule with no meaningful information is not a gene, but the replication of the molecule will not produce evolution because the molecule itself will have no effect on the number of copies produced.

I think you've confused yourself. A DNA molecule, and by necessity a gene, is a replicator whether it affects the number of copies produced or not. A gene is a piece of DNA, so how could it be otherwise? The point you're contesting is actually about phenotypes, which are the other things that genes produce. They do this simply by specifying sequences of amino acids, which form polypeptides, the basis of proteins. Information is not a magic quality added to make this possible. It's the inevitable outcome of such a replication-phenotype system, because a gene that codes for a protein that, via a very long chain of causation, results in more blubber under the skin could be a gene carrying information about the cold environment in which a whale lives, if a geneticist were given the corpse and told to glean as much about it as possible. Replication is necessary for the information to arise in the first place.
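
To illustrate the point that selection rides on replication plus a phenotypic effect on copy number, here is a toy sketch of my own (the fitness figures are invented for illustration, not empirical): two variants replicate identically as molecules, but one's phenotype yields slightly more copies per generation, and the frequency change falls out of the replication arithmetic alone.

    # Toy selection model: variant A's phenotype yields 5% more copies per
    # generation than variant B's (the figures are illustrative, not empirical).
    def run_selection(generations=50, fitness_a=1.05, fitness_b=1.00, start=100.0):
        count_a, count_b = start, start
        for _ in range(generations):
            count_a *= fitness_a
            count_b *= fitness_b
        return count_a / (count_a + count_b)  # frequency of A after selection

    print("frequency of variant A after 50 generations: %.2f" % run_selection())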

By contrast, brains carry masses of information, but only in a few species, and most dramatically in humans, has there ever been a candidate for memetics. This must be because some structure or structures exist in the brain which make replication possible.

However, a DNA molecule with genetic information written into its structure can produce evolution as it replicates because the meaning of that information may have an effect on the number of copies. That is why I say that the gene is the information and DNA is the medium.

It's certainly true that a DNA molecule can be transcribed to an RNA molecule and create the same "gene", and probably has done in biological history, but the value of such an information transfer would be down to physical properties of both, and one of those properties would be replication. Sooner or later, you simply can't escape replication's role in the process.

I would say that replication is not the key mechanism and that information IS the thing to focus on. If something in the information in the gene or the meme has an effect on the number of copies produced, then mutation of that information can allow evolution to take place.

Again, I think you're confusing phenotypes with information. The phenotypes have an effect on the genes, it is true, but it is as an outcome of the replication effect that information enters the picture. After all, if all DNA did was manufacture proteins, nothing special would happen. You wouldn't even get bodies.

The genes for the hot-climate adaptations that make up the phenotypes of a kangaroo or a camel came from a random mutation generator, and the losing variants failed to survive only because they did not fit the environment as well as their allelic rivals did. The reason any useful gene can spread is replication. Even neutral drift requires a replicator.

Comment 36 by phil rimmer

Most anglophones of a certain age knew you meant A.A.Milne. How did we know? Multiple noisy channels. Execute the chinese whisper routine down enough (not necessarily contemporaneous) channels and we will be able to extract useful information.

And suppose neither of us had checked the original primary source and had gone on and described "E.E.Milne"? I wouldn't even have known it was the same person until phil rimmer mentioned it, by which point I could have passed it on to someone else, and they could have passed it on to someone else etc. To speak metaphorically, the mistake could be halfway around the world before the truth got its boots on, and yet never compete with A.A.Milne because people could easily think they were two different people.

The main reason you can verify the spelling in the first place is because the mass-produced books (one generation, remember) written by him still have his name on. The other main reason is that spelling is generally digital, so it's harder to get a name wrong than to get a biography about the man wrong. Nobody is suggesting that the name "A.A.Milne" arose when it competed with alleles for the slot of "name" among human brains and reached fixation by following selection pressures. Instead, the name was decided upon by a single brain (say, the mother or the father). The structure representing it in the brain did not replicate against its alleles until it reached fixation. "A.A.Milne" did not crowd out "B.B.Milne" or produce better "A.A.Milne" copies by comparison.

Again, all the language machinery and mental mechanisms were set up by genes beforehand. Most of the quibbling over a name is like quibbling over a tool to use for a specific purpose rather than like genes noisily crowding their rivals out of existence (his parents are unlikely to forget the alternative names they thought up for A.A.Milne any more than an evolution-supporter is likely to forget what creationism is). This is one reason why I think memetics is more likely to be a problematic rather than a helpful analogy in the long run.

Brains are wired and memories encoded using the associative Hebbian mechanism, cells that fire together wire together, thus, combined with a simple Bayesian predictor, have the structure to become pattern maximising discriminators extracting the most likely information from those channels.

My point is that there are alternative mechanisms to account for cultural changes that don't involve an "evolution by natural selection" analogy.

memes are the ideas that have what it takes to survive in their environment

This is the basis of a nice functional definition.

There is a mountain of problems with this definition. What is an "idea", and how would you distinguish it from a non-idea? What is it physically? What do you mean by "survive", and what is the "environment" in which ideas must live or die? How is this different from straightforward change, from merely having a beginning, middle, and end? Does calling such a thing a "meme" lead to confusion, akin to calling groupiness "group selection"? And where do replicators appear in the definition?

This is one reason I stick to my group selection analogy: it is all too easy to get sloppy with the idea, precisely because there are bits of it that make sense on their own - the spread of ideas, for instance. You have to ask yourself how a scientist would go about the problem.

I like this very much. You are describing what I have liked to describe in the past as "cultural machines", the processes of which are wired into brains in the very earliest years. These machines (cultural processes) are concerned with the precise packaging of information and its formal organising, permitting, amongst other things, noisier channels.

Aphoristic knowledge is certainly what we aspire to. It is highly portable and robust.

It may be worth considering that aesthetics could be key to memetic robustness.

See my reply to SC above.

I'll stop there for now, as I'm already pushing my luck writing a reply so long. Hopefully, it will contribute a little more to the discussion.

Fri, 27 Jul 2012 13:45:45 UTC | #950159

Go to: Scapegoat for Catholic evils?

Jump to comment 4 by Zeuglodon

The trouble is that the only argument for making the Catholic Church itself the target of liquidation, as opposed to the individuals who were directly involved, would be evidence that the crime was perpetrated or aided at higher levels, if not at all levels, of the organisation. You could also refer back to the history of the church and the atrocities it has committed.

The reason this is difficult is that they do "charity" work, and that the crimes are claimed to go against the church's official ethos, which means you have to check its written documents. In some people's minds, this "negates" many of the wrongdoings - a double standard, since a business that acted like this would be condemned and dismantled in a heartbeat. This appeal to the good the church does is often heard from Catholics who otherwise condemn the actual practices.

You're not going to get the church liquidated until you remove this "religions are special" glamour and convince people that religion is a cost without benefit - or, in more subtle cases, a cheater and an exploiter that has you in a psychological trap. By this, I mean you have to convince people that the church's raison d'etre is either hocus-pocus or nothing exclusive to it. Not to mention that people fear (or at least act prudently around) something that wields more influence than they do. If the Church were reduced to a cult like Scientology, there'd suddenly be a lot less political pussyfooting.

The solution? Keep beating religion on intellectual grounds, and promote better alternatives: secularism, humanism (or personism, if you insist), science, reason, and a genuine consideration of ethics as opposed to religious doctrine that merely claims to be about morality. Remove the justification for religious privilege and get this removal recognized by politicians and the judiciary - through the usual avenues of lobbying, protesting, and making your stance clear when you have a chance to vote for something. Only when the glamour is removed will it be easier to tackle religions as you would any other organization.

Wed, 25 Jul 2012 11:52:53 UTC | #950040

Go to: Religious Olympics

Jump to comment 50 by Zeuglodon

Comment 42 by Quine

Which Witch? -- Contestants are each presented with a panel of people, all claiming to be innocent, and must find the witch, and using only a large pile of sticks, get a fire going and get that witch burnt in record time.

They get bonus points if the panel of suspects doesn't actually include a witch.

Wed, 25 Jul 2012 11:23:10 UTC | #950033

Go to: Do we need objective morals?

Jump to comment 16 by Zeuglodon

Comment 11 by ThoughtfulTheist

That being said, I think this article misses the point behind the moral argument for God. The argument is not so much that we cannot come to know objective morals without God, but rather that the existence of objective moral values points to the existence of a God. As far as I can tell this article didn't try to make a case that objective morals don't exist, but rather rightly discredited the claim that we need holy books to know these morals.

This is actually a poor argument because it presupposes what it concludes - that morality or ethics implies a deity is involved, so if morality exists, a deity most likely exists. The truth is you have to justify that claim without presupposing it, because frankly a deity could exist without reference to morality at all.

I think our capacity to have morals and be moral is certainly a product of evolution, I just think its too reductionistic to explain objective morals in this way

Being "reductionistic" has never been a valid criticism of anything. It is literally the case that, if you take out kin selection and other evolutionary explanations, moral emotions that follow their logic will not exist. Moreover, the selfish gene theory is a reductionistic view of life, and it is the only one that makes the most sense of the most amount of data gathered.

In fact, its reductionistic nature has been the basis of its success, precisely because of the comprehensiveness it allows. Compare that with a holistic explanation like group selection, which is pretty much an invalidated fringe theory by now. This is why the evolutionary angle is the strongest explanation for our moral feelings found so far. All that's really left to do is examine how the body's physiology achieves them.

Comment 12 by CleverUsername

What JosGibbons said, but with the addendum that group selection would actually be a terrible explanation for morality. If anything, it would vindicate fascism and throwing out the weakest members of the group (a group that got rid of useless surplus would thrive against those that wasted resources on members who couldn't fight), and those are stellar examples of immoral behaviour. It also doesn't agree with the observation that the most warlike tribes tend to be those who are well-to-do already.

Comment 13 by Quine

I'm not impressed by the ought/is distinction Hume raised, though his intention in raising it was to lead into a more thorough analysis of the basis of morality. A brain is a decision-making organ, and something or some process in it must be the basis for telling which option is preferable to another, however complex or bland this commonality is. A tiger would not hunt deer if it could feel the prey's pain at the moment of killing, if at all. It's not just that there are better or worse ways to achieve certain goals - it's that there are better or worse ways to reconcile incompatible goals, within one brain or across multiple brains. People's ability to ask why they "ought" to follow that process is tantamount to saying that the rule doesn't apply to them - except that it does. The question of "ought" in this case is a category error.

I agree that the subjective/objective division isn't helpful, but I identify the main counterargument to the distinction as coming from neuroscience. If what we call a "thought" corresponds with a bit of brain so well that the two are interchangeable, then for all practical purposes it is that objective bit of brain. Our confusion is the result of our attempt to see the back of our own heads, to speak metaphorically, when we try to consider ourselves objectively. By definition, it can't be done. The best you could do would be to ask someone to videotape your head while the neurosurgeon puts you under and takes a look. Our confusion also comes from the fact that, for nearly all of our evolutionary history, we never evolved the means to see the insides of other people's heads directly and had to rely on proxy rules instead, which is probably why dualism is so attractive.

If anything, we might gain from dropping the dichotomy between objective/subjective. A subjective opinion is basically a fact I (however tentatively) think is true for whatever reason. A subjective view is a fact or set of facts about the person giving it, however implicitly it's conveyed.

Comment 14 by Jos Gibbons

The Euthyphro Dilemma does more than that, though I agree it has never been answered without invoking bad logic. It points out that we have to be aware of the reasoning behind any moral rule given, because following a rule without knowing why clashes so strongly with how ethical behaviour actually plays out. This is why the "just following orders" justification is lamented as a failure of ethics.

This is mostly why I don't find deontics very convincing as a standalone position: its specifications (of what makes a given behavioural rule good) seemingly come out of nowhere, without themselves being based on any non-assumptive reasoning. It suggests people follow arbitrary laws just because they're told what to do. Yet morality is not a set of commandments. If anything, it has more in common with a decision-making system that has goals, weights for judging the value of a decision, and reference to the consequences of an action (costs as well as benefits). It must have a consequentialist component. It would probably be more harmonious to recognize consequentialism and deontics as two sides of the same coin - a coin forged by evolutionary systems.

I'm fully behind "objective" in the sense of a. I think the idea behind b has been rendered archaic by scientific progress, and c is simply something we're doomed to fall into on account of a in any case.

Comment 15 by Pete H

The economics parallel for why "perfection" clearly isn't realistic is a good one. Perfection is simply wishful thinking, made apparent by its strong dissociation from how the world really works - the perfect-solution fallacy. Long-term improvement is far more realistic. If anything, I consider the concept of a cost/benefit analysis and something like ethical hedonism and utilitarianism (if not actual utilitarianism and ethical hedonism) to be a key component of explaining and refining a scientific ethics, just as the cost/benefit analysis of an organism's resources is helpful when considering evolutionary logic and actual economics. I also think ethics must be a complicated subject even if it is based on something simple about brains, just as biology is complicated even though the process that makes it possible - the replicator - is ludicrously simple.

Wed, 25 Jul 2012 10:37:47 UTC | #950031

Go to: Meme Theory, Zahavi's Handicap, and the Baldwin Effect

Jump to comment 31 by Zeuglodon

I know this is a long post, so don't feel you have to read it all now. Do it in bits, if you prefer, and come back later. I mention this only so that I don't have to repeat what I've already typed simply because someone commented without appreciating my full argument.

Comment 22 by Quine

That's OK. I agree with the mods: both sides were in a stalemate and getting nowhere, so it was probably best to just admit as much and call it a good day. I'm just pleased the thread's turned out to be my longest yet. :-D

Comment 25 by OHooligan

I don't mind the belligerence,

So I was coming across as belligerent? :-( In that case, I must apologize for my tone. No disrespect was meant. If anything, I value having someone of a different position from mine to discuss my ideas with. I'll try to rein in my language.

it's the haystack of strawmen, all the stuff about stars and bits of brains and stick insects with missing twigs.

I'll gladly expand on my reasoning for any of those analogies, though I don't know where the missing twigs came from. I was talking about missing legs. But first I'll respond to that charge of straw manning, as it is a serious one.

I don't think it is a straw man of memetics, and my general response to your post should make it clearer why. If anything, it's the main way to validate the idea. I think you overemphasize the role of information transfer, to the point where you come dangerously close to saying that replication is secondary. This is the wrong way round. Information transfer by replication, however much you insist the process is secondary, still relies exclusively on physical replicators. Even if a chain of transmission went from "X" in one medium (say, a brain) to "Y" in another and then back to "X" when it encountered the first kind of medium again (another brain), you must still concede that the X's in either brain would have to match. How could it be otherwise?

It is indubitably based on physical copies, which is why Dawkins came up with the meme in the first place - there would be a physical copy in my brain of the information in your brain. Give this a moment's thought, and you recognize it must be true, or else we would be free to call any causation or cycle replication (like the sun distorting nearby matter, say by exploding, which creates more suns).

Granted, the mechanism as I describe it is still a copying one. I do not dispute that brain configuration X being translated through the air (say, as my speech) could legitimately result in a copy of brain configuration X being made in your head. That would be a legitimate replication system, even if it requires extra machinery to bring it about. The fact that that machinery was set up by genes could be dismissed as irrelevant.

There are two problems I have with the approach you take after that, though. The first is that you still fail to appreciate that DNA's replication is not a vindication of memetics. DNA does not hire other molecules to scan it and make a copy of itself, though it does recruit RNA to transfer the DNA code into protein molecules (the phenotype). The DNA does not delegate its replication, though. It merely recruits the cell's molecules to prise its double helix open and separate the strands.

When this happens, each strand of DNA, all by itself, physically pairs with any free nucleotides present, in a precise order, the result being that each new DNA molecule is a flawless copy of the original. It is a direct process, which is why a strand of DNA (a gene) can last for generations literally identical down to the atom before it changes. This stability is vital to calling it the unit of selection, because selection requires many generations for a gene to reach fixation.
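
As a toy illustration of the template copying described here (a sketch only - real replication involves polymerases and proofreading that this comment doesn't go into, and strand directionality is ignored for simplicity): each separated strand specifies its complement base for base, so copying the copy recovers the original exactly.

    # Toy template-directed copying: each base pairs with its complement, so a separated
    # strand fully determines the new strand (strand directionality ignored for simplicity).
    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def replicate(strand):
        """Return the complementary strand, read in the same left-to-right order."""
        return "".join(COMPLEMENT[base] for base in strand)

    template = "ATGCGTAC"
    new_strand = replicate(template)
    print(template, "->", new_strand)           # ATGCGTAC -> TACGCATG
    print(replicate(new_strand) == template)    # copying the copy recovers the original: True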

What I had hoped to get across with the arrow diagram was that ideas don't translate directly at all. It is all done by proxy, and this leaves the replication system vulnerable to a higher-than-desired level of mutation. Granted, we recognize that a large amount of signal gets through, otherwise we'd be completely incapable of understanding each other. But repeat the process and it cannot be called replication, because the generation/mutation ratio is essentially like that in Chinese whispers. We get the gist, of course, but we rarely remember people's words verbatim even when we are free of distractions, and even the gist can distort - indeed, most likely will distort - if I try to explain it to someone else.

It is possible that bits of the message, or the gist, are what endure, and that the whole idea is more like a genome than like an individual gene. In the jargon, it could be a memeplex, with some memes surviving the transition better than others. I think this would be a stronger angle to stress, as the gist of a speech would be like those bits of it that must translate well, and a piece of speech would have higher person-to-person fidelity than a whole speech. Nor do I mean that you'd remember fragments of speech like "and then", "evoluti", "Selection b", or "Evolution by natural selection". I mean that, if you and your interlocutor have a long enough chat about an idea, the odds are greater that your ideas will align with each other and be more faithful copies.

The second problem is that, while you rightly point out that self-replication and replication-by-being-replicated-by-something-else in practice amount to replication however you look at it, I think you dismiss the difference too readily. A molecule, say, that replicates itself is limited only by access to the necessary ingredients. DNA is less potent than RNA in many respects, but once they are both in reach of the needed nucleotides, they both replicate spontaneously and accurately. Neither uses a proxy for the replication process: a DNA molecule is always capable of replication. Other molecules can move it about and manipulate it, but once it's in position the DNA does all the replicating work.

Ideas, though, or configurations of neurons in my brain that will be copied into yours, don't have this spontaneity. They really do just sit there. The machinery that enables replication was given by the rest of the brain, which takes an active effort to communicate the technical specs to others and then to reverse-engineer a signal they receive. This arrangement only lasts as long as the genes building the brain want it to. Knock the machinery out of commission, and the idea ceases to be a replicator. Blindness, mental impairments, muteness, and so on quickly destroy the mechanism, at least in part. The idea itself can be perfectly intact in the brain, but it won't have replicator power. DNA's and RNA's cellular helpers could be knocked out of commission, but a DNA strand and an RNA strand will still be replicating molecules because they still replicate when given the right ingredients. Far from being an inevitability, the replicator status of the idea hangs on a knife edge. This is probably why the copying mechanism is so rare in brained species in the first place - because ideas do not replicate and have to be replicated.

The most obvious difference, though, is that the brain mechanisms of interpretation and language coordination have to struggle actively to get this imperfect replication system working at all. Compared to a gene's simple binding at the atomic level, it is such a ludicrously expensive balancing act, performed on behalf of a limp bit of information, that culture's rarity among living species could be ascribed to this fact alone. It seems effortless to us, but then genes would not build bodies that strained all the time to do it, and the metabolic costs are still larger than they would be if we had smaller brains. Hence the imperfection of the copying mechanism is a significant consequence of the fact that the information is not self-replicating.

This is why I dispute your wording here:

Evolution is inevitable in any system that supports repeated imperfect replication of packages of information with competition between packages for finite resources necessary for replication.

Firstly, because the "imperfect" makes it sound as though the replication process could be as awful as I describe, but if the mutation rate is so high, it can barely be called replication from brain to brain. It would just be straightforward change.

Secondly, because I think switching over to information distracts from the fact that replication is the key mechanism. Yes, there is information, but information is not the thing to focus on. There's a lot of information in the brains of living creatures and in computers, but very little of it is trying to replicate at all. Information simply follows instructions.

The unit of persistence is the Information Package, not the temporary structures or organisms that participate in the competition for resources and the replication process.

I think you underappreciate the physical side of things. Information is not some ghostly thing that hops from matter to matter. What you describe is the long-term pattern of a system whose basic steps are as I describe them: replication of some form or other. Genes are still our topic of interest in biology for this reason even if we could encode them on a computer in binary form.

This means that other systems, not just our familiar "carbon based lifeforms", can also evolve. Granted, we know of no mechanism other than RNA for getting this started.

I have no problem with this conclusion. Replication does not restrict itself to RNA or DNA, and I agree with RD's arguments on universal Darwinism.

You place "self-replication" center stage, and therefore exclude any system that doesn't have this property.

I've dealt with what follows this sentence already, so I'll just add that I don't exclude non-self-replication by fiat. If a replication system works via an agent other than the thing being copied, what's the difference? My point is that such a replication system will be vulnerable to - indeed, has fallen foul of - the shortcomings of relying on a proxy.

Without requiring genetic (DNA) mutations, memes can evolve much faster.

Human technology has developed from next to nothing in an eyeblink on the genetic evolutionary timescale. The human brain today is hardly any different from the brains of the people who built the pyramids. The "database" for our technology lies beyond our DNA, and is updated much more frequently.

I think this is the rub. Cultures change at a rate faster than genetic evolution, so it is sensible to ascribe it to an alternative process, so long as the debt to genes is acknowledged.

This alternative process does not necessarily have to be an evolutionary one, though. My point about bringing in the Baldwin Effect, extended phenotypes, and so on was to show that, even if genes haven't been doing anything for the last ten thousand years, they can still be ample explanation for this rapid change by setting up bodies that follow flexible rules that change the environment - remember, the environment includes other bodies as well as inanimate matter.

I suppose a more helpful way of looking at it would be to consider chaos theory - or, more specifically (and to make sure I don't look like I'm invoking it in vain), the fact that small initial changes in a complex system can result in vastly different outcomes. Once genes set up brains that could make a stab at scanning ideas (for transmission) and reconstructing them by proxy, the sheer number of ideas could allow for huge divergences in cultures much later. Those divergences are limited by the huge number of biological factors that shape them - all cultures have marriage systems and funerals, but the specifications vary tremendously - but they do not in themselves constitute an evolutionary process. I'll gladly expand on this point if it is not clear, but the essential message is that nothing logically requires us to diminish genetic roles in culture as though the two were fully separate.

Comment 26 by jimblake

Zeuglodon, I agree with OHooligan. You are over-analyzing this issue; reducing it down to unnecessary levels.

On the contrary, I only lament that I can't give it a more systematic analysis. If we are to take the meme idea seriously, then we need to isolate the physical object that is being copied, however remotely the actual copying mechanism works. But this will have to wait for neuroscience to begin isolating the neural nets involved, and that won't happen for a while yet. In the meantime, I confess I remain baffled by the charge of overanalysis. It genuinely does not matter what is replicating or on what scale the replication system works - just identify the unit for me, and we can then measure on that scale whether it replicates or not by a simple list of criteria.

I think you are mistaken when you say that a gene is self-replicating. A gene is information. The medium for this information is DNA. There is nothing in the gene that tells it to copy itself. The information in the gene is copied by the DNA into another DNA molecule. The analogy with meme theory is that the 'meme' is information, the brain is the medium, and the brain copies the information into another brain.

See above, but to address the point directly for a moment: this only shows your ignorance of the basis of evolutionary theory in biology. Genes do not need something in them to tell them to copy themselves. They are bits of DNA and RNA, and will spontaneously copy themselves whenever the molecule does, simply because they follow physical laws. That is the basis of their power.

I think that meme theory is just a possible explanation for cultural evolution that is on a different level than biological evolution.

I agree it's a possible explanation, but then so's group selection. The issue at stake is whether it's the best explanation. My current verdict is that it is not, for reasons outlined above.

Comment 28 by Schrodinger's Cat

That sort of begs the question of who gets to decide that a meme has actually been copied. There's no such problem for genes, because a gene is a physical object, and thus replication means physical copy, but what's the criteria for a meme being passed on ?

I think it is a mistake to act like memes are abstract even though genes aren't, because that's a step towards gapology - moving things away from a person trying to analyse them. If memes are to have any credit as replicators, there must be a physical replicator involved at some point.

Comment 27 by phil rimmer

That's not to say that copying doesn't go on. It clearly does.

I think it's an error to assign the designation of "copying" too readily, which is probably why memetics is still maintained. A Xerox of a Xerox is a "copy", but the change from one to the other would mean that such a quickly-degenerating replication system cannot evolve. The confusion is that elements of one do look a lot like those found in the original - but if we're going to assign the label of replicator so loosely, we might as well call ourselves copies of our parents. If you're going to call something a replication system, your standards have to be tighter than allowing a mutation every second generation or so.

If nothing else, the basic point I want to get across is that there is a mechanism here worth looking at, but it isn't the idea itself. The genes have set up brains with expensive machinery that goes to the trouble of sending out signals and of capturing incoming signals, identifying them as speech or as writing, and then reverse-engineering the signal to try and reconstruct the idea or ideas that went into making the signal. The genes want brains that are a lot like each other, that go some way to reconstructing what it is like in another brain's configuration. I want to identify why that is, and I maintain that memetics may be distracting too much from the question of why there is such a system in the first place - because memetics by its nature presupposes such a system, just as group selection presupposes group-forming behaviour in individuals.

After all, even if memetics were true, it must follow from the genetic need for brains to replicate information, and those reasons may be enough to explain the phenomenon of cultural differences if we identify them. They may render memetics irrelevant, just as kin selection, reciprocal altruism, and the proxy rules that fulfil the logic of both render group selection irrelevant. I'm not trying to get across that I already consider memetics to be incorrect, like group selection, but that it might turn out to be if we actually look at the idea and don't simply run with it.

Wed, 25 Jul 2012 09:26:03 UTC | #950027

Go to: Meme Theory, Zahavi's Handicap, and the Baldwin Effect

Zeuglodon's Avatar Jump to comment 21 by Zeuglodon

Comment 19 by OHooligan

It's nothing like discussing atoms with a car mechanic. I'm trying to check whether an idea ticks all the boxes needed to qualify for replicator. That means asking:

  1. What is the structure, the unit of replication?

  2. Does it make discrete copies of itself?

  3. Is the number of generations large before it hits its first mutation?

To answer one, I zeroed in on the structure in one brain that could be representing "christianity" or something of the sort. Say we represent it by X.

To answer two, you look at two brains. X is in one brain, but not in the other. The replication process occurs. Now, X is in one brain and an X is in the other.

To answer three, you look at more brains in a chain. If it goes only one brain more before X becomes a Y in the next brain, that's hardly enough time for any kind of meaningful spread to occur. Even if one person from generation one told twenty people from generation two, that would still be only one generation. If, on the other hand, it goes a hundred or a thousand brains more before X becomes a Y, that's plenty of time for a meaningful spread to occur. We'd be able to call it evolution, though not automatically natural selection.
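To put rough numbers on point three (a toy calculation of my own; the per-copy mutation probabilities are purely illustrative, not measured figures): if each copying event has a probability p of introducing a mutation, you expect only about 1/p faithful generations before the first change. A gene-like copier gets hundreds of millions of them; an idea retold with even a coin-flip chance of distortion per retelling gets about two.

```python
# Toy comparison of copying fidelity, assuming a fixed per-copy
# mutation probability p (illustrative values, not measured data).
def expected_faithful_generations(p):
    """Expected number of copies made before the first mutation (~1/p)."""
    return 1.0 / p

for label, p in [("gene-like copier", 1e-8),
                 ("careful scribe", 1e-3),
                 ("idea retold by word of mouth", 0.5)]:
    print(f"{label:30s} p={p:<8g} ~{expected_faithful_generations(p):,.0f} faithful generations")
```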

What you call me using the microscope is actually me looking at the right scale - by analogy, watching the moment when an RNA strand makes a copy of itself. And when you look, you notice two things:

  1. I would therefore have to look at nervous system nets such as those in the brain to show one. I have to identify the unit, and that would be a nerve net that represents the idea in one head. To take a broad view is bad policy here because the unit isn't found by assuming it exists and jumping straight to the natural selection metaphor.

  2. I question whether it makes discrete copies of itself on two grounds: the first is the high mutation rate between generation zero (my brain) and generation one (your brain); the second is the stick insect argument.

Genes are crisp, digital, separable from the genome, and isolatable. A gene is not arbitrarily picked, as you misinterpret it, and it doesn't mutate after only a handful of generations. It's described, in RD's case, with reference to an allele. Any change is not a gradual thing, but an either/or state. A mutation changes a genome in the blink of evolutionary time, and then the resulting genes competing with their alleles follow the same algorithm as blindly as before. Genes are not terms of "convenience". You misunderstood, and subsequently exaggerated, the observation that a gene is not one of a string of beads along the genome, because for all practical purposes it is.

Ideas aren't beads even for practical purposes. When I tell you about christianity, the idea in your head is guaranteed to be a mutation of mine, especially if my explanation is complicated. And if you tell somebody else, another mutation is added. Bits will be passed on unscathed, true, but the mutation rate will be so high that it'd be a game of Chinese whispers. The idea would morph beyond recognition within far too few generations. And weirder still, it's recursive: generation one can be fed by generation ten a completely new idea that is, ironically, the result of the idea it passed on.

Any common ground between people - for instance, their understanding of alphabets and language instincts - is the product of genetic phenotypes, just like a stick insect's leg and brain. Damage to these will not be passed on to the next generation. I'd understand what an alphabet is even if my ancestor had received a head blow that destroyed his or her language circuits. Those bits of the idea that everyone automatically gets are the bits that didn't replicate, so memetics is as unnecessary as clone selection here.

You also fail to appreciate the significance of the virus RNA. All a virus RNA cares about is meeting nucleotides that, when it is exposed to them, automatically arrange themselves into a copy. The fact that it can exploit the goldmine of stuff in cells follows evolutionary logic: why waste time looking for them elsewhere when you can specialize in parasitism on a nearby and available bounty? In the very early days of replication, simply drifting about bumping into material like plankton do would have to be part of the process. RNA is not a helpless little thing that has to ask organelles to read it and build a copy without it. Give this a moment's thought and it should be apparent that a replicator needed to get life started could never be such a thing. It had to be something that replicated under its own steam. I told you that modern organelles are refinements of that process, with the RNA still hanging around and the DNA acting as a kind of database for it to refer to. RNA has to physically touch the matter needed to make RNA before it can replicate. An idea in your head behaves nothing like this because it is helpless. The machines have to do the dirty work for it. Moreover, they're not doing it for it. The genes have made large-scale copiers for their own purposes.

And this is putting it mildly. Far from the copying mechanism being spontaneous, it is so mutation-prone and so much closer to straightforward causation that occasions where an idea lasts more than a few generations without changing at all are rare. Too rare for evolution, never mind for any kind of selection process, to occur.

I maintain you're running with a half-baked idea. That is the last thing you should be doing.

The alien analogy you provide doesn't work. The transmitters don't make more transmitters. They are already set up by the genes long ago. The information "sent" is simply another form of causation, but a transmitter never makes a transmitter any more than a mouth makes a mouth. That's like saying a bit of brain makes another bit of brain just like it. It doesn't - the transmitter and interpreter bit is already set up in the other guy's brain, and the info actually being sent isn't sent: specifications on how to build it have to be translated into code before being reverse-transcribed into info. This is assuming your mechanism doesn't lose anything in transmission.

It doesn't go like this:

Idea -> Idea -> Idea

Or like this:

Idea (moved around by machinery until it physically touches material) -> Idea (moved around by machinery until it physically touches material) -> Idea etc.

Genes do work like this:

Gene (moved around by machinery until it physically touches material) -> Gene (moved around by machinery until it physically touches material) -> Gene etc.

It goes more like this:

Idea -> Transmitter -> Light/Sound wave -> Sensor -> Reconstructor -> Idea

And even then, it's just as likely to do this:

Idea 1 -> Transmitter -> Light/Sound wave -> Sensor -> Reconstructor -> Idea 2 (mutation of 1)

It's easy enough to say that we can treat it like it's evolving, but that's to confuse any kind of change in general with evolution, which is specific and technical. Star cycles aren't evolution, they're straightforward change. So too is the constant transmission and reconstruction of ideas.
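A minimal sketch of the chain I mean, with every stage name and the noise model invented purely for illustration: the 'idea' never copies itself - separate machinery handles each step, and the reconstruction stage is where mutations creep in.

```python
import random

# Each stage is separate machinery already built by the genes; the idea
# itself does nothing. Names and the noise model are illustrative only.
def transmit(idea):            # Idea -> sound/light wave
    return list(idea)          # encode the idea as a signal

def sense(wave):               # wave -> nerve impulses
    return wave

def reconstruct(signal, error_rate=0.1):   # impulses -> a *new* idea
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(c if random.random() > error_rate else random.choice(alphabet)
                   for c in signal)

idea_1 = "original doctrine in the first brain"
idea_2 = reconstruct(sense(transmit(idea_1)))   # usually a mutation of idea_1
print(idea_1)
print(idea_2)
```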

Comment 20 by OHooligan

The trouble is, this doesn't get you out of the mix. A physically identical molecule physically comes away from the original replicating molecule. You could swap them and not tell the difference, and the beauty is that this isn't a coincidence, but will happen over and over again until a mutation, gross or small, occurs. You can't duck it by saying information is passed on via an intermediary or that it's not material. To even declare it's passed on requires a physical replicator in the first place, or else you might as well say that an idea gets passed on when anything happens to anything else - say, a star's explosion disturbs a dust cloud that collapses and forms another star. That's not replication, but under your logic, it would be because stars lead to other stars.

And you commit an awful mistake here:

Evolution is an innate property of Information, independent of the specifics of the processes that encode, replicate, mutate and select it.

Evolution is not an innate property of information. Every non-meme brain is proof of that, because the information in brains doesn't get replicated any more than the bodies of asexual stick insects do. If damage or a change occurs to a stick insect - say she loses a limb - this isn't inherited by her offspring, and the same applies whether it's a change to her leg or to her brain. If she loses an eye or gets damage to a part of her ganglia, this won't be inherited by her children.

Replication is the first step to getting any evolutionary process started. Replication is what we should focus on, because only if something replicates, and does so faithfully enough that a mutation comes only every 100th generation or so, is there a basis for differential survival of replicators. But the first step is to prove replication is happening. Information about the previous star could be held in the new star caused by the old one's explosion, in the form of its molecular content, but again that information does not amount to an evolutionary process.

My apologies if I come across as a little belligerent, but it's too easy to assume change is evolution, and when it's not I think it's important to get to the heart of the issue and point out these differences. I think genes enable an independent causal chain to operate in parallel, but in a similar sense to how a tool might be passed on outside of genetic generations. The distinction between an active and a passive copying mechanism (self-propelled and set up by something else) is key, and I really want to get that across.

Mon, 23 Jul 2012 15:23:35 UTC | #949897

Go to: Religious Olympics

Zeuglodon's Avatar Jump to comment 2 by Zeuglodon

Swimming enthusiasts might like to try out for the deepity dive-athon. Contestants will have one hour to sink to the depths of abstract thought and come up with a suitably deep-sounding phrase that they can intone in a solemn voice. Bonus points will be added for each fashionable scientific term thrown in, but you'll be penalized if you actually use them correctly or characterize the concepts accurately! A panel of judges will award points to the deepity that sounds the most impressive and the least coherent.

So, as the wise man says, may the super-auto-oppressive heteronomous energy vibration with oscillatory pro-anti-feminist consciousness transubstantiate your scientistic psychosophical neo-postchemical Kalam-ity! Hail sports fans, amen, and good night to the question mark!

Mon, 23 Jul 2012 14:02:16 UTC | #949894

Go to: Do we need objective morals?

Zeuglodon's Avatar Jump to comment 2 by Zeuglodon

Comment 1 by Jos Gibbons

I agree with you, and would add that ethics works best when it is done on a case-by-case basis rather than by trying to invent a grand unified theory of good and bad. It also has parallels with cost/benefit analysis, originates in evolutionary logic, and reaches the individual as our impulses of compassion, empathic concern, and fellow-feeling. I would also argue that, since these depend on real world facts and both cause and are caused by our connections with the rest of the world, part of ethics is science and learning. Trying to build an ethical system without reference to real world facts is to lose sight of what ethics is about in the first place.

Mon, 23 Jul 2012 13:52:16 UTC | #949893

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 286 by Zeuglodon

I honestly have to throw in the towel. I've lost any sense of what SC is supposed to be promoting, or why he's so confident in it.

All I've seen is a very unscientific idea he supports by a combination of introspection, doublethink (claiming new physics without providing any suggestion of how to tackle it scientifically), faulty thought experiments like Searle's, word juggling, ignoring counters and jumping to another argument, and presumptive claims of irreducibility and self-evidence that no one can verify. Plus, I'm not enjoying his unpleasant manner of communicating one bit, especially his consistent line of saying "You've failed to grasp the point", which he uses so often I wonder whether it actually means anything. I've argued with religionists who have shown more politeness and interest, and less smugness and condescension, than he has.

I can't penetrate those philosophy articles because the terms - psychology theory, mental state, internal constitution, function, role, identity, causal, behaviour - just don't seem to attach to anything I've learned so far about brains. They're used as if it's self-evident what they mean. There's no mention of theories of perception or of how perception actively works. There's no sign of a real world data set or any attempt to measure anything in either article. They seem to come from a time before neuroscience. And, irrelevant though this is, they're dull to boot. I almost agree with raytoman about philosophy. My apologies if that doesn't satisfy you, SC, but after facing religious arguments that seem to exist entirely in the arguer's head, I've long since lost confidence in assumptive thought experiments and arguments that don't involve studying or engaging with anything tangible.

To provide an example of why I've lost interest:

For (an avowedly simplistic) example, a functionalist theory might characterize pain as a state that tends to be caused by bodily injury, to produce the belief that something is wrong with the body and the desire to be out of that state

What? I don't think a deer with claws in its flank believes that something is wrong. The brain detects it, certainly, but the deer feels the pain and gets the urge to buck like crazy. This seems to confuse belief in the sense of holding information about the world that doesn't match a goal (all while the deer is unconscious of this process but aware of something because of it) with actual declarable belief, as in a human saying "I believe people are in pain when nociceptors are activated". The word belief just confuses matters because it's an everyday word used in an unfamiliar way, and I have no idea what the author was thinking.

Ant brains can perform complex mathematics to find their way about, but I don't say the ant knows calculus or trigonometry. That's the fallacy of saying a snail knows logarithmic scales because its shell follows the pattern, or that a kin member consciously goes through Hamilton's kin selection equation before deciding who to love. The ant does, however, have a sense that home is that way or this way, so it is aware of something. I don't know anything about the advanced mathematics of the parabola of the ball coming towards me, but I can make a safe guess whereabouts it will land and catch it because my brain can do the calculations.
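The sort of calculation the brain does implicitly is, very roughly, the projectile arithmetic below - an idealized sketch (no air resistance, flat ground, made-up numbers), not a claim about what neurons literally compute.

```python
import math

# Where will the ball land? Idealized projectile motion, ignoring drag.
g = 9.81                   # gravitational acceleration, m/s^2
speed = 15.0               # launch speed, m/s (made-up)
angle = math.radians(40)   # launch angle (made-up)

flight_time = 2 * speed * math.sin(angle) / g
landing_distance = speed * math.cos(angle) * flight_time
print(f"lands about {landing_distance:.1f} m away after {flight_time:.1f} s")
```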

I've been talking to you, SC, for 4 days, and I'm no nearer understanding why you're so excited about a "new physics" which, by all rights, kills any attempt to discuss it just as creationism tells future scientists "Don't bother." On the plus side, you've persuaded me not to keep the functionalist moniker, but more because I've decided it's better to actually look at scientific stuff rather than ally beforehand with any particular "camp".

Now I'm going to do some research into neuroscience, where I can actually come away having learned something about the complex of matter between my ears.

Comment 285 by Tyler Durden

I won't ask. I'll just stop chasing the bait, now.

Mon, 23 Jul 2012 12:46:44 UTC | #949886

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 280 by Zeuglodon

Comment 279 by Schrodinger's Cat

Software is very much physical stuff, though not in the sense of being a material in its own right.

Erm.....why do I have to keep scrolling down half a million pages of alleged disagreement in your posts...only to see you make the exact point I've made that you are supposedly disagreeing with ?

Nonsense. Compare this with your Comment 273 by Schrodinger's Cat:

Well...I've come to realise that the true physicalist in all this is me. I'm the one arguing for an actual physical, physics, explanation of awareness......whilst others try to hide behind the supernatural realm of 'software'.

So what is software? Is it a supernatural cop-out, or a consistent physical thing (but not a material in its own right)? How can you accuse us of hiding behind "software" in one comment and then effectively claim that software is a viable explanation in the next?

And you're still avoiding the same charges I raise:

  1. Explain why my analysis of Searle's thought experiment from Comment 190 is invalid.

  2. Provide some example of where and how a scientist would look for consciousness

  3. Justify your use of the terms "experience", "qualia", etc. so that your argument is distinguishable from dualistic claims.

If you don't make any attempt to address these three points, I'm defaulting to the null hypothesis - that you're all talk and no show. However much you complain that I don't grasp your argument (something you're more fond of pointing out than of helpfully correcting), a point I haven't understood is indistinguishable from outright nonsense.

Mon, 23 Jul 2012 02:15:26 UTC | #949875

Go to: Anti-Dawkins legislation

Zeuglodon's Avatar Jump to comment 23 by Zeuglodon

Comment 20 by Neodarwinian

"In God We Trust", "One Nation Under God" and "National Day of Prayer"

No, that's sneaky!

Yes, very sneaky to put it on the dollar bills you use every day. "Big Brother is watching you... when you spend your money on ten items or less!" That said, all our paper money in Blighty has the queen's portrait on it, so by my own logic we're guilty of institutionalized royalism.

Mind you, given how many religious advocates have called for religious prayer and worship to be made mandatory parts of our education, plus how much the religious organisations influence political decisions, I think we've got many reasons to worry.

Mon, 23 Jul 2012 00:49:52 UTC | #949872

Go to: Meme Theory, Zahavi's Handicap, and the Baldwin Effect

Zeuglodon's Avatar Jump to comment 18 by Zeuglodon

Comment 17 by QuestioningKat

Ideas are dependent upon functions of the brain, but the idea cannot be manipulated separately by matter like a chisel to a rock.

It's not that difficult to pin down, trust me. Let's say an idea is a bit of my brain, just to keep it rough. Now the spread of the idea would be the idea appearing in more brains. It could be said to be a population of ideas. Now, the population of ideas itself could replicate, each generation being a new population of ideas budding off the old one like clouds fragmenting and then each one growing. You'd have a kind of evolution of populations if this went on for enough buddings and the idea occasionally changed in detail. Is that what you're getting at?

Mon, 23 Jul 2012 00:18:00 UTC | #949869

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 276 by Zeuglodon

Comment 273 by Schrodinger's Cat

Well...I've come to realise that the true physicalist in all this is me. I'm the one arguing for an actual physical, physics, explanation of awareness......whilst others try to hide behind the supernatural realm of 'qualia', 'experience of redness' ,'irreducibility', 'acausal things which don't have causality yet which I can talk about', 'unrealistic and unsound thought experiments', 'things I can't point to', and '100% certainty in being aware'.

Software is not supernatural because any idiot with a computer can point to an example. We can download some, upload some, run some, program some, install some, capture some on a CD, buy some in packages, dismantle a computer with some on it, create some, improve some, come up with a new and exacting computing language from which to construct some, etc. This would be a bit tricky if it didn't exist, wouldn't you say?

Mon, 23 Jul 2012 00:00:13 UTC | #949868

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 275 by Zeuglodon

Comment 268 by Schrodinger's Cat

Steve already gave his analogy: the mind and the brain are like software and hardware.

No....

SC, you can't be this blind. Steve himself said that in Comment 261:

Perhaps the best analogy I can come up with is that awareness is equivalent to software. Software is patterns of activity in computers. It is caused and it has causal effects. It's functional. No-one insists that because there is software that there must be some extra physical presence called 'softwareness' that is non-functional and follows software around wherever it goes.

Now, next bit from you:

Steve's epistemological argument

Just argument will do.

is that qualia

I haven't seen him use the word once. He never once invoked it. You did.

CAUSE ( that means physical ) events.

Yeah, I think we've established that SZ thinks awareness, consciousness, whatever you want to call it, is a physical phenomenon.

Are you saying that computer software doesn't have causal powers? So what you're saying is that a computer program doesn't affect machinery?

A computer program is ( as Searle argued in the article you obviously didn't read )

Given how flawed his thought experiment is turning out to be, I'm not encouraged to read it. I don't recall you asking me to read it, in any case. Also, see Comment 274 by phil rimmer about software and programs.

purely an abstract symbolic representation of the physics going on.

abstract (adjective):

  1. having no reference to material objects or specific examples: not concrete

No, I don't think that's what software is at all. If that were true, you'd be saying that Microsoft Word has an independent life from the bits of data travelling through the PC. We end up with computer dualism.

Besides, your point about "abstract symbolic representation" makes it sound arbitrary, when the reality is it is anything but. Software and programming require strict adherence to the rules of the stuff you work with, and it is temperamental stuff, highly dependent on the compatibility of hardware and software. Why do you think we hire specialist programmers rather than people to "think up" symbols to impose on the physical matter without once touching or typing anything? In any case, I've already discussed the problem with the "symbols" part in my critique of Searle. The one you keep ignoring.

representation of the physics going on.

Which physics? The physics in the computer circuitry, or the physics of the outside world?

No....programs themselves do not cause anything.

So when I ask Microsoft Word to print, it's not the program that causes my printer to print?

The physical causality comes...amazingly enough....from physics !

This contradicts the software/hardware metaphor how, exactly? Software is very much physical stuff, though not in the sense of being a material in its own right. It's physical in the same way an evolutionary algorithm like natural selection is: it describes a process of physical matter, but has no substance or ghostly presence of its own. It is physical matter doing something - following the algorithm. You'd have to convince me that evolution was an abstract symbolic representation of the real world process and not the process itself. In other words, your word trickery would be revealed for what it is.
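To illustrate what I mean by matter following the algorithm (a toy of my own devising, not any standard definition of software): the "software" below is nothing but a list of numbers, yet once machinery steps through it, it has perfectly physical effects - characters on your screen here, motor commands in a robot.

```python
# "Software" here is nothing but an arrangement of numbers sitting in memory.
# Stepped through by machinery (the interpreter below standing in for hardware),
# that arrangement causes perfectly physical effects - here, marks on a screen.
PROGRAM = [1, 72, 1, 105, 1, 33, 0]   # opcode 1 = emit next value as a character, 0 = halt

def run(program):
    pc = 0                            # program counter
    while program[pc] != 0:           # 0 = halt
        if program[pc] == 1:          # 1 = emit
            print(chr(program[pc + 1]), end="")
            pc += 2
    print()                           # newline once halted

run(PROGRAM)                          # prints "Hi!" - caused by an arrangement of matter
```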

Since you're arguing it does something, you're supporting functionalism. It doesn't matter whether it's an arbitrarily designated first step (I'm saying this to distinguish it from an uncaused causer, FYI) or not, so long as it's part of a causal chain, being caused and causing.

You simply don't grasp the point being made.

Are you surprised? You're making minimal effort to aid me. Your arguments are all over the place. One minute, you claim mental activity is caused but has no effect. The next minute, you invoke qualia, experience of redness, abstract, symbol etc. You've made a lot of noise, and yet refused to answer a simple scientific attempt to get at the problem. You haven't even provided evidence for your claims. Of course I don't grasp your point. You're barely making one.

If qualia

Such as? Can you point out an example?

are causing physical events, that means that qualia themselves are physical causal agents.

So you will have no trouble pointing out whereabouts in the brain one could scientifically investigate them. Yet you shy away from this and retreat to philosophical arguments with lots of assumptions in them. It's not even as though I'm asking for a specific, microscopically detailed place to pick. A general area of the brain, together with a means of probing it, would do.

The experience of the colour red is thus a physical thing.

That's what we keep saying! It's a neurological activity akin to computation via built-in gadgetry that genes installed and programmed to represent and interpret external aspects of the world like light waves. It's not irreducible "qualia", whatever that means.

I'm really not sure how much more clearly I can put it.

See above.

Comment 270 by Schrodinger's Cat

Incidentally...you can't simply argue that the physical components of qualia are what have the causality...as that is effectively the philosophical zombie position.

How does this make sense? Reduce awareness to its components, and bit 1 causes bit 2, say. The zombie argument is that everything is physically the same, but something is not present. How are these two related in the slightest?

Now answer these points:

  1. Explain why my analysis of Searle's thought experiment from Comment 190 is invalid.

  2. Provide some example of where and how a scientist would look for consciousness

  3. Justify your use of the terms "experience", "qualia", etc. so that your argument is distinguishable from dualistic claims.

Sun, 22 Jul 2012 23:45:26 UTC | #949867

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 267 by Zeuglodon

Actually, I'm not quite happy with how Comment 266 came out. Let's try again:

You cannot simply argue only that awareness ' is the result of ' this or that, because your own epistemology holds that awareness is physically causal.

Consciousness is caused. Consciousness causes. Consciousness can be dissected into smaller steps of causality because the brain functions that enable access to information can be broken down hierarchically from brain and lobes to individual neurons. That's functionalism.

The whole point of your causal chain from qualia to report of qualia is that the qualia is the first step in a chain of causality.

Yes, if by qualia you mean awareness, but a) it needs to be caused in turn i.e. it's part of the causal web, and b) this is functionalism. It does and is done to, and is made up of smaller stuff doing things that add up and so forth. Consciousness is reducible in this regard.

If you are arguing that causal chains require consciousness to cause something, then join the club - the functionalists say exactly that!

(My original response)

Who said anything about a "first step"? I think it's perfectly clear that Steve Zara means that you yourself are actually a functionalist without realizing it, because you agree with him that being aware of red is a function in the brain (i.e. it does something). In fact, I wonder if you've been a functionalist without realizing it.

Since you're arguing it does something, you're supporting functionalism. It doesn't matter whether it's an arbitrarily designated first step (I'm saying this to distinguish it from an uncaused causer, FYI) or not, so long as it's part of a causal chain, being caused and causing.

Thus to argue simply that ' qualia are caused by neural activity '

Well, again, if by qualia you mean awareness, because nobody else is using the word.

sort of misses the point that your own epistemology has qualia being a causal agent themselves

What's the contradiction? Being a causal agent does not mean one cannot be caused. If anything, a while ago, you were saying they are caused but did not have an effect, and were "along for the ride".

........and where, then, does that causality come from ?

What do you mean "where does it come from"?

(My original response)

Steve already gave his analogy: the mind and the brain are like software and hardware. You're supposed to be the computer scientist. You know software is hardware's internal arrangements and coding for performing specific tasks.

This was a bit off for me, hence the correction in this comment.

(SC)

You can't get round this simply by dismissing awareness as 'software', because that software is simply an abstract expression of whatever is going on physically.

Are you saying that it is or is like a mathematical relationship or pattern, then, of matter? That arrangement matters? Well, so do we. That position is part of functionalism. The other part is when you have the electronic signals coming through (i.e. when the computer or brain is switched on).

It is not an "expression", as though it were a Platonic realm floating over this one. A mathematical and logical relationship has to do with the physical. The arrangement of matter matters.

Sun, 22 Jul 2012 19:43:01 UTC | #949858

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 266 by Zeuglodon

Comment 262 by Schrodinger's Cat

Comment 253 by Zeuglodon

With all due respect, I think that in trying to cover the entire philosophy and science of consciousness in every response, you are not really answering the specific points being made.

All right. You've made a serious accusation against my method of replying. Give me an example of where I've failed to address a point you've made, because at this point I can't tell if you're serious or if you're being a troll.

It's perfectly simple:

  1. Explain why my analysis of Searle's thought experiment from Comment 190 is invalid.

  2. Provide some example of where and how a scientist would look for consciousness

  3. Justify your use of the terms "experience", "qualia", etc. so that your argument is distinguishable from dualistic claims.

You cannot simply argue only that awareness ' is the result of ' this or that, because your own epistemology holds that awareness is physically causal. The whole point of your causal chain from qualia to report of qualia is that the qualia is the first step in a chain of causality.

Who said anything about a "first step"? I think it's perfectly clear that Steve Zara means that you yourself are actually a functionalist without realizing it, because you agree with him that being aware of red is a function in the brain (i.e. it does something). In fact, I wonder if you've been a functionalist without realizing it.

Thus to argue simply that ' qualia are caused by neural activity ' sort of misses the point that your own epistemology has qualia being a causal agent themselves........and where, then, does that causality come from ?

Steve already gave his analogy: the mind and the brain are like software and hardware. You're supposed to be the computer scientist. You know software is hardware's internal arrangements and coding for performing specific tasks. Your response:

You can't get round this simply by dismissing awareness as 'software', because that software is simply an abstract expression of whatever is going on physically.

Are you saying that it is or is like a mathematical relationship or pattern, then, of matter? That arrangement matters? Well, so do we. That position is part of functionalism. The other part is when you have the electronic signals coming through (i.e. when the computer or brain is switched on).

Unless you can explain how purely abstract 'software' gets to have causal powers, then I have every reason to believe in a physical 'extra'.

Are you saying that computer software doesn't have causal powers? So what you're saying is that a computer program doesn't affect machinery?

How do you think an onboard computer works for a robot, then, if not because software matters and has causal effects? It has a strong mathematical and logical component, yes, but that does not mean it's non-functional - if anything, it means functions work because of how the system is arranged into patterns, which is what functionalism is about. This is still functionalist territory because it means the system and its software are doing stuff.

And I've been stating that my view of consciousness was physical, right here on this forum, for well over a year now!

Again, so you'll have no problem identifying a likely avenue of scientific investigation?

Sun, 22 Jul 2012 18:49:00 UTC | #949856

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 258 by Zeuglodon

So my conscious experience is not something I go around 'believing' in, in the first place. It is more direct and in one's face than anything else. It requires no belief whatever. I do not 'believe' I am conscious....I know I am conscious.

Actually, you just are conscious. Certainty is an emotional response to information that can be miscalibrated, hence overconfidence and underconfidence. You don't know you're conscious if your prefrontal cortex has just been wiped out. I wouldn't even know you are if your language cortices were wiped out, because I'd have no means of hearing you say it.

A predator doesn't believe or know he's a predator. He is a predator. The thing about consciousness is that it seems recursive, because we can be aware that we are aware, and so on, but you actually hit your limit pretty quickly - like a person who claims to be able to think of infinity but actually just gives up after the number ten, jumps to a hundred, then a thousand, and then stops thinking about it and uses the shortcut "that's infinity". Awareness has limits too, and you phase out the more you try to be aware of being aware (i.e. the more you introspect), because the more time you spend introspecting, the less you spend monitoring your environment and the more easily surprised you are. This suggests awareness is a finite resource, a function with limits.

Sun, 22 Jul 2012 16:24:19 UTC | #949847

Go to: Meme Theory, Zahavi's Handicap, and the Baldwin Effect

Zeuglodon's Avatar Jump to comment 16 by Zeuglodon

Comment 15 by QuestioningKat

Does it interact with matter...at all?

If an idea doesn't have anything to do with matter, I think anyone trying to prove it would be in trouble, don't you? Even mathematics and logic are about matter sooner or later.

Ideas do not make copies spontaneously, human interaction is necessary.

That's my point. If a thing cannot make copies of itself, it doesn't fulfil the criteria needed for a replicator, though it can mimic one superficially just as a leaf insect can mimic a leaf. The copying mechanism, however, might be a benefit to genes if their success depended on relatives or helpful partners holding common ideas. On another thread of mine (which I think you've seen), I suggested some reasons why people sharing identical ideas could be adaptive: for instance, it makes it easier to cooperate or to bond (by reducing mutual misunderstandings - after all, you see the world roughly as they do), or it feeds a social contest in which one advertises one's mental prowess through feats of memory for trivia and cultural mores.

If an idea does not have any tangible, literal substance and exists purely in an abstract state, it can never literally copy itself. Any transfer from one person to another is still in this abstract form which is communicated from one person to another. It can be communicated through any of our senses, intellect, emotions.... We can show a physical object and people can literally copy this, yet the underlying concept of the object is what is being examined. This process is not literal and searching for direct physical evidence of one idea replicating seems impossible. We can examine the results of physical manifestation, but not the actual action/idea.

But this dualism of ideas and physical matter isn't really a dichotomy. My knowledge of football, for instance, doesn't sit apart from the physical technicalities of the sport, the actual games I've witnessed, and the workings of my own mind. I won't go into depth on it here, but in short, appealing to an abstract Platonic realm and claiming it cannot be tested against matter is a cop-out, because memetics neither requires such a realm nor suggested it. It's moving the goalposts after I've done my critique.

I do agree that lineages of cultural ideas do look convincing, but I explained above how I think this lineage actually works, and it doesn't need replication.

Sun, 22 Jul 2012 16:03:45 UTC | #949845

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 257 by Zeuglodon

Comment 255 by Steve Zara

Excellent! Thank you for the confirmation. I didn't want to misrepresent your views, but it's good to know that I haven't.

Another way to look at the situation is to consider that any argument for believing in non-functionalism hits a singularity just like a hidden division by zero in mathematics. You end up with a point where a proof becomes undecidable. What you are trying to prove may be true, but your argument can't reach the proof.

A singularity? That seems quite close to my description of the "event horizon" counterargument I put up for SC's point about a mirror causality in Comment 196 (I reproduced it above in Comment 253): what goes beyond the event horizon can't come out again.

The question is whether or not it is reasonable to believe that something is real when it is impossible to come up with evidence for it even in terms of your own thoughts.

I think not.

Agreed, which is why I hope my sidenote for you in Comment 253 helps out a little. You mentioned you needed a formal proof, and I noticed a little overlap, if it gives you food for thought. Here, I'll save you the bother of fishing my old comment and copy it here:

I think it has something to do with connection by particles, matter, energy. For instance, the divide between the observable and unobservable universe is defined by the fact that objects on the border but crossing into the observable (like a 13.7 billion year old galaxy) emit photons that have had plenty of time to reach the eye and the brain 13.7 billion years later. The connection between object and observer allows any knowledge to be made of it. Of course, we can infer the existence of galaxies outside the bubble based on what's in the bubble, but as the philosophical problem of induction points out, it ceases any pretence of knowledge. Anything that surprises us is regular proof of that, because we had no connection and were unaware of it.

Even genes, which connect our brains to the physical laws of the universe via the process of natural selection (genes being forced to fit the environment), can only connect us to those relevant to us, and their own limitations mean the built-in assumptions we use to navigate the world are assumptions only for those bits of the world the genome has information about. This is why induction produces more information than goes in: because it has information already waiting within the brain. Induction and thermodynamics are linked in some way, as the brain and the genes that built it are open systems.

Sun, 22 Jul 2012 15:53:07 UTC | #949844

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 256 by Zeuglodon

Comment 249 by Schrodinger's Cat

Seeing is not an effortless act of passively absorbing light waves. The brain has to make an active effort to analyze and interpret the 2D projection on the retina (which is translated into 1D pulses along the optic nerve) into a 3D reconstruction. To do that, assumptions about the world need to be built in. To put it in Pinker's delightful way: any perceptual system attempting inverse optics, or any other otherwise-impossible piece of reverse engineering, has to use a cheat sheet. The cheat sheet is provided by genes that have homed in on the correct cheat sheets over evolutionary time.
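Here is a sketch of why the inverse problem needs a cheat sheet (pinhole-camera idealization, toy numbers): infinitely many 3D points project to the same retinal position, so any reconstruction has to be picked using a built-in assumption - here, the crude prior of a typical viewing distance.

```python
# Pinhole projection: a 3D point (x, y, z) lands on the image plane at
# (f*x/z, f*y/z). Depth z is lost, so inverting the projection is under-determined.
f = 1.0   # focal length (arbitrary units)

def project(x, y, z):
    return (f * x / z, f * y / z)

# Very different scene points, identical retinal image:
print(project(1.0, 0.5, 2.0))   # (0.5, 0.25)
print(project(2.0, 1.0, 4.0))   # (0.5, 0.25)

# The "cheat sheet": pick the reconstruction consistent with a built-in
# assumption, e.g. that the object sits at a typical viewing distance.
def reconstruct(u, v, assumed_depth=3.0):
    return (u * assumed_depth / f, v * assumed_depth / f, assumed_depth)

print(reconstruct(0.5, 0.25))   # one guess among infinitely many
```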

The apparently effortless way you see red requires you to eat food, breathe atmospheric gases, and avoid being deactivated permanently by killers - and to do this regularly - to keep this machinery going, or else it ends up not seeing red. The reason it appears effortless is that it doesn't cause dissonance because you usually don't try to strain your perception to the limits of endurance. Try watching a blurry film at a cinema and your brain tries to adjust the perception so that the blur vanishes. When it doesn't, the system activates dissonance mechanisms, which is one component of frustration.

Sun, 22 Jul 2012 15:46:23 UTC | #949843

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 253 by Zeuglodon

Comment 245 by Schrodinger's Cat

Comment 238 by Zeuglodon

As I've already asked you: prove your claims scientifically.

This is what I call the Don Quixote method of scientific arguing.

You can give it all the clever literary allusions you wish. Science demands evidence for claims and logical reasoning that follows that evidence.

At the very least, you can provide an estimate of where you think consciousness would be.

Respond at such great length and verbiage and conflation with all manner of other things that have been thrown in

You've had multiple chances to ask about anything I've introduced, say in Comment 190 about how I describe awareness as compared with non-awareness, or about the weaknesses identified in Searle's Chinese Room experiment. Don't blame me for your own lack of engagement. I've even explained, as well as I can, why my comments are long: because I don't want to fall into the trap of making unjustified assertions, and I'd prefer to expose my reasoning for people to assess and discuss. You are free to ask about or to question specific bits, or even to give me the courtesy of explaining why a digression of mine doesn't address the point.

, that you are bound to hit a windmill if you charge about enough!

SC, are these the words of a reasonable person or of a person so emotionally invested in a debate that he feels the need to ridicule rather than engage with other disputants?

That's how I end up being a 'dualist' despite not having uttered a single word in defense of dualism

We've already explained that your invocation of "more", while you remain unable to pin down a physical basis for it, is precisely what makes your position dualist. Neuroscientists have already isolated the parts of the brain responsible for several functions, such as short-term memory, emotional mechanisms, and the interpretation of incoming data. Again, Pinker describes some of these mechanisms in depth in How The Mind Works.

and how I end up being a non-physicalist despite having repeatedly stated categorically that consciousness is physical

Yet, when invited to isolate this physical location, you decline to do so. Please explain where you think, or would guess, the mechanism for consciousness could be found and analyzed by a scientist.

. It's also how inferences drawn from other people's arguments get turned into 'claims'

I've explained as well as I can why I think there are problems in your position. The trouble is that you don't seem to address them. You still haven't explained why my analysis of Searle's thought experiment is incorrect, for instance.

Or maybe someone on here can give you a few pointers about computing.....

They did so way back at college in 1975.

I should think computing has moved on exponentially since then, SC.

That's how I ended up being a Computer scientist :)

Even assuming you are not lying or distorting the facts with a half-truth (say, you took a course but never finished it) - and I do wonder which specific field within computer science you studied - this doesn't save your arguments from being poor one jot. In any case, it tells me nothing about your knowledge of neuroscience, robotics, evolutionary psychology, or even, strictly, about certain details of computing. Besides, I'm fully aware brains and computers have many differences, but that cannot be exaggerated into saying they have nothing in common.

Comment 247 by Schrodinger's Cat

No it isn't! You claimed it was something physical. That means you should have no trouble pointing out where it is or how to scientifically investigate it.

Erm.....

You're still being as patronizing as ever, I see. SC, you honestly would lose nothing by dropping this manner of communicating. However much we disagree, we're not enemies trying to win debating points, and I apologize if I have come across as too aggressive at times and made you feel pressured.

yet Steve's very own epistemological loop demands that consciousness is physical, a point to which I still have not had a response,

Because Steve would agree with you that consciousness or awareness is physical, but that consciousness has to be functional and cannot be a non-functional thing. You've misinterpreted him as defining it out of existence, when what he's defined out of existence (or more specifically subjected to a reductio ad absurdum) is non-functional consciousness. As far as I can tell, he's pointed out that certain beliefs about consciousness (specifically the non-functional ones) cause problems. See below.

but I don't see you harranguing Steve with demands that he show where it is, etc.

Steve, as far as I can tell, would agree with me that consciousness in general is a function of a simulation machine, related to its access to information, its ability to analyze that information as a computation (the inverse-optics problem is one such analysis, solved by referring to built-in rules of operation), and the match between the goals the machine is designed to achieve and the motor actions it produces to fulfil them.

In animal bodies (I mean animals as in including humans), this would be measured and refined by natural selection - a brain whose models of the world were a worse fit compared with someone else's would be outcompeted by brains with better models, which feed back into genetic success. The gene's goals are biological survival and reproduction, which have representations in their host brains as survival mechanisms and mechanisms to encourage reproductive behaviours, say, but these also entail subgoals like perception, information gathering about the environment, and regulation of the body systems when faced with certain conditions. These in turn beget subgoals tailored for a particular environment such as underwater, in a bright desert, a thick jungle with a cacophony of sounds, or an open savannah filled with lions.

His own argument which you refer to seems to be this one, from Comment 151:

The problem of the closed loop of epistemology - knowledge to brain, brain to speech, speech back to awareness, is a logical defeater of any kind of dualism: dualism destroys the ability to know justified truths about the mind, and this applies to non-functionalism too, not just dualism. If there is no functional nature of consciousness, then all our statements about it have no truth value.

I don't see why inserting experience does anything to this statement. Experience has to be doing something just as much as knowledge, brain, speech. In fact, you just assume that experience escapes the same problem. I even explained in Comment 196 why your mirror image analogy doesn't resolve the problem:

Your analogy is still within the paradigm: the mirror causes an effect that goes back into the rest of the system, but any effect that goes out and never comes in has been lost to an event horizon, so nothing can "see" or be influenced by it because it is lost for good, which was the point I'm making.

None of your subsequent comments met this criticism. If anything, you insist again that consciousness is a physical process (we already agree; we were never arguing against that in the first place) and that we think it is an illusion (we don't - we think the version that is caused but merely "along for the ride" is self-contradictory). I'll justify those parentheses in a moment, but first, an aside for Zara:

I'm close to a formal proof of this, if I can figure out how to express it! It's related to my ongoing efforts on the ontology and epistemology of supernaturalism.

On a sidenote for Zara:

I think it has something to do with connection by particles, matter, energy. For instance, the divide between the observable and unobservable universe is defined by the fact that objects on the border but crossing into the observable (like a 13.7 billion year old galaxy) emit photons that have had plenty of time to reach the eye and the brain 13.7 billion years later. The connection between object and observer allows any knowledge to be made of it. Of course, we can infer the existence of galaxies outside the bubble based on what's in the bubble, but as the philosophical problem of induction points out, it ceases any pretence of knowledge. Anything that surprises us is regular proof of that, because we had no connection and were unaware of it.

Even genes, which connect our brains to the physical laws of the universe via the process of natural selection (genes being forced to fit the environment), can only connect us to those relevant to us, and their own limitations mean the built-in assumptions we use to navigate the world are assumptions only for those bits of the world the genome has information about. This is why induction produces more information than goes in: because it has information already waiting within the brain. Induction and thermodynamics are linked in some way, as the brain and the genes that built it are open systems.

Back to SC:

See Steve's own account in Comment 208:

No, you have just argued against yourself here. Chalmers does not say that consciousness has causal powers. He says the opposite - consciousness is non-interacting. This is why he came up with the idea of philosophical zombies - he says that there could be a version of a person which is physically identical and yet has no consciousness. A zombie is physically identical, and would say the same things as you. Therefore you could not see any additional knowledge in the zombie's words than in yours. Therefore consciousness can not add epistemic content to your words. Therefore what you say about consciousness cannot be a consequence of consciousness.

In other words, Chalmers' own stance on what consciousness is leads to a contradiction. This is the reductio ad absurdum, which, despite the name, is not actually about showing an argument to be absurd but about showing it to be self-contradictory. This is against Chalmers' particular idea of what consciousness is, not against any and all forms of consciousness, as his next point shows:

However, if you now see the need for consciousness to be a physical thing that has causal powers so as to produce epistemic content in thoughts, then you are a supporter of physical interactionist dualism. In which case you now have to explain how the physical thing that is consciousness has its causal effect.

In other words, if you support a particular idea of what consciousness is, you have to address the contradiction.

As for what an alternative view would look like, only two comments before, he clarified the issue you raised:

As I have pointed out what seems like hundreds of times, I don't assume consciousness is an illusion, I assume that beliefs about it are mistaken. Raising Dennett is a red herring.

What I need is an explanation for how the epistemic content of your words arises because of the existence of irreducible non-functional consciousness. How can a non-functional aspect of reality provide epistemic content?

Note non-functional. Steve Zara's point is that (though I hope he can clarify this), if you think consciousness is non-functional, you say that it doesn't do anything. However, if it doesn't do anything, it can't cause anything because to do something is to cause something, by whichever means. It can't cause you to talk about it any more than a deistic god can cause you to have revelations while remaining outside the universe. You can only make a guess. And your guess is unjustified by the evidence provided within the universe. You couldn't talk about non-functional consciousness because you haven't identified a link between it existing and you talking about it.

Example: if I want to talk about the colour red, I need a brain that picks up long-wavelength light, sends a signal back, and, depending on the source of the signal (whether it came from a long-wavelength cone or a short-wavelength one), sends it on to a specialist neural net that codes for red as opposed to coding for blue. Many parallel links will be involved, such as language nets for the "r", "e", "d" phonemes, motor control for those sounds, and a means of switching them on and off to get the phonemes in the right order.

There can be hierarchical functions involved that regulate these smaller functions. The different lobes of the brain have been identified for functions such as visual and linguistic analysis, and they operate at speeds faster than a hundredth of a second.

A scientist could knock out those functions, and I would be unable to discuss or recognize redness - by a similar principle to how Gage was unable to suppress or override his short-term behaviours (and became literally a different person) after his prefrontal cortex was damaged, and to how a person with malformed rods and/or cones can't identify colour until a surgeon fixes them.

In any case, there are links, and all those links are neural connections from input to processor to output. That's the minimum needed for awareness as access to information. Of course, my red may well be your green, but this could be because the gadget in my head for red is, in your head, shaped and functions as a gadget for green. If a surgeon replaced it or corrected it so that a red gadget was there, the connection with the rest of the system, such as subsequent perception gadgets in the cortex, would receive new information, compare it with memory systems, and lead to the motor cortices operating the mouth so that the words "Hey, I can see a different colour" emerge. Everything does something and is connected up all the way.
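Purely as an illustration of "everything does something and is connected up all the way" (the stages, thresholds, and names below are invented placeholders, not neuroscience): each link is a function that is caused by the previous one and causes the next, which is all the functionalist claim amounts to here.

```python
# Input -> processor -> output, every link doing something.
# Wavelength bands and stage names are invented for illustration.
def cone_response(wavelength_nm):
    if wavelength_nm > 600:
        return "long-wave signal"
    if wavelength_nm > 490:
        return "medium-wave signal"
    return "short-wave signal"

def colour_net(signal):
    return {"long-wave signal": "red",
            "medium-wave signal": "green",
            "short-wave signal": "blue"}[signal]

def language_net(colour):
    return list(colour)            # a crude phoneme-like sequence

def motor_output(phonemes):
    print("says:", "".join(phonemes))

motor_output(language_net(colour_net(cone_response(650))))   # says: red
```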

..........wanders off mumbling something about whether double standards demonstrate free will..

Well, now you can judge for yourself whether it is a double standard or a simple misunderstanding. I am perfectly willing to expand on anything here, if you still don't agree with certain details. Of course, I'll need Zara to confirm that my interpretation of his argument is correct, but it seems to make sense given the above.

Sun, 22 Jul 2012 15:32:34 UTC | #949840

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 243 by Zeuglodon

Comment 241 by Schrodinger's Cat

Comment 238 by Zeuglodon

If you think it's a physical property, show us where it is and how it works.

That's completely missing the point....

No it isn't! You claimed it was something physical. That means you should have no trouble pointing out where it is or how to scientifically investigate it.

which you keep on doing at great length.

It's easier to accuse others, isn't it? I'm sorry if my comments are long, but that's no excuse to dismiss them as though you've already won the argument.

I don't need to 'show' anything

Oh, yes you do. Don't give me that excuse. You've argued yourself into making a claim about reality, now you can justify it with some real world evidence.

There. That's short and to the point. Try it sometime.

Now, you're starting to sound like a broken record. Being short and to the point does not save you from being wrong.

Justification for your physical-yet-not-dualistic stance, if you please?

Sun, 22 Jul 2012 13:27:33 UTC | #949829

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 242 by Zeuglodon

Comment 235 by nick keighley

There are resolutions to the Schrodinger's cat problem that don't involve giving the observer mystical powers. For instance, the idea that a quantum event branches off to allow all possibilities to occur in multiple worlds, so that in this universe, the cat is alive, but in another one, it's dead. Debate continues.

Comment 236 by Schrodinger's Cat

Anyone with sufficient astuteness will observe how I turned Steve's 'epistemological loop' right back on him.

I'd think, given your position, you'd be a little more cautious about judging other people's astuteness. The link between the real world and the simulation machine's model is doomed never to be certain. That's why people can be deluded in the first place. The reason the match is so good is that natural selection penalized genes that built obviously mismatched simulation machines, allowing genes that produced excellent simulation machines to dominate the gene pool.

Sun, 22 Jul 2012 13:24:04 UTC | #949828

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 240 by Zeuglodon

Comment 229 by susanlatimer

I think it's simply that there are differences between the brain's responses to a book on the subject of red light and to red light itself, just as there are differences in the brain (and in sense organs like the eyes) between a guy who can distinguish red from green and a guy who can't, and that ignorance of this neuroscience leads some people to conclude that there isn't any physical difference.

The other thing is that it seems to be a lack of imagination: they can't believe that a network of neurons is what is collectively doing the seeing without a homunculus or observer involved, so they assume it can't be. Switch "consciousness" with "soul" or "spirit" when such people speak, and you'll be surprised how well it fits.

Sun, 22 Jul 2012 13:17:29 UTC | #949826

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 238 by Zeuglodon

Comment 201 by Schrodinger's Cat

Getting on nerves: irritation, a mild form of anger indirectly measurable by the level of adrenaline and noradrenaline (yes, I'm British) in the subject's bloodstream. In the brain it results from processing feedback in computationally symbolized terms: built-in state-of-the-world data which the system uses to represent goals is compared against actual incoming data, translated through several social and epistemic calculation programs and behavioural programs, before signals are sent to the limbic system to modulate hormone production and the sympathetic nervous system, readying the body for action. Other factors include setting up behaviours designed, given built-in assumptions about the environment (including other organisms), to mimic or actually be behaviours that show a readiness to fight. These may be modulated by social regulation systems in the prefrontal cortex to direct the behaviours according to memory of, and updates on, the current context - for instance, how to operate a PC. The system may engage language software to communicate via complex symbols to other organisms. All these features have been installed and programmed by selected genes and are designed to be self-correcting, within parameters, given new data from sensors, which is why they match very well with the environments the system evolved in; their applicability elsewhere is usually the result either of a general rule that is widely applicable anyway (because of how the world is) or of a fortuitous coincidence.

My apologies if that is not concise enough for you, but conciseness is not actually a mark of validity. This captures everything a scientist needs to define experience and behaviour, roughly speaking, as well as a non-specialist like me can put it. The brain has shortcut programs that let it treat other brains differently to background things or to artefacts or animals - for instance, with reference to desires and to goals - as you would learn if you read a book called How The Mind Works (and possibly some others AAMeme pointed out). It even makes evolutionary sense, as an angry neural organism is an organism asserting its social dominance and status. To go into molecular detail would probably require a series of volumes in any case. Do you really want me to get out a neuroscience study book? Perhaps I can give you a crash course in evolutionary biology? Or maybe someone on here can give you a few pointers about computing and nervous system designs and how, even though a silicon chip does not have everything in common with a neuron, the two show features useful for computation.
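
To make the feedback idea above concrete, here is a deliberately crude Python sketch of my own. Every name, number, and threshold in it is invented for illustration; it's a cartoon of a goal-versus-input comparison, not a model of the limbic system.

  # Crude sketch of the feedback idea: a goal state is compared with
  # incoming data, and the size of the mismatch is what gets passed on
  # to the systems that ready the body for action. All values invented.

  def irritation_level(goal, observed):
      # Mismatch between what the system "wants" the world to look like
      # and what the sensors actually report.
      return sum(abs(goal[k] - observed[k]) for k in goal)

  def limbic_response(mismatch, threshold=2):
      # Above a threshold, crank up arousal (standing in for adrenaline
      # and noradrenaline); below it, stay calm.
      return "raise arousal, ready for action" if mismatch > threshold else "stay calm"

  goal = {"conversation_going_my_way": 1, "being_listened_to": 1}
  observed = {"conversation_going_my_way": 0, "being_listened_to": -2}

  mismatch = irritation_level(goal, observed)
  print(mismatch, "->", limbic_response(mismatch))  # 4 -> raise arousal, ready for action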

Schrodinger's Cat in general

If you think it's a physical property, show us where it is and how it works. Your arguments haven't shown a single link to real-world facts except through wordplay on what others bring to the table, and they have a great deal in common with theological arguments. You're using words like "experience" and "qualia" without defining them or making it clear what you mean, claiming something is self-evident or irreducibly complex without justification, ignoring facts and counterarguments presented to you (as you've done with me repeatedly while discussing Searle's thought experiment), going after red herrings (Zara pointed out some of your diversionary tactics, like attempts at tu quoque, in Comment 209), making clever-sounding accusations about your opponents with obvious glee, presuming something exists (like qualia) without elaborating, and (deliberately or not) confusing Zara's claims of reducible explicability with claims of nonexistence. Couple this with your overconfidence and general spirit of one-upmanship, and you end up as an example of how, even sans religion, people can still be excessively irrational. This is ironic, given your own lamentation of the irrationality of people without religion.

As I've already asked you: prove your claims scientifically. How would you go about proving, for instance, whether I'm a zombie or a so-called conscious being? I could claim I was conscious, and behave as though I was full-bloodedly conscious, and a neurological scan would confirm that I am as aware and as normally structured in every physical sense as anybody else. But maybe I'm set up as if I were conscious of something or aware of something when I really am not, despite the fact that the scientific explanation and definition of being aware of something (or of not being aware of something, which I've defined as well as I can in Comment 190) has been covered.

But there's your problem right there. "As if". "Really". Once you start using words like that, you've gone beyond being a trustworthy source of information, thrown Ockham's razor aside, and joined the ranks of woo quacks and religious pedlars and conspiracy theorists and personal revelationists and introspecting pseudoscientists and "other ways of knowing" con artists, because you're claiming knowledge that goes beyond the limits of what anyone can verify even though you simply can't prove it. How do you think that looks to anybody else?

You're claiming things that nobody can verify, not even yourself, and you're doing so in a way that suggests you haven't given the distinction between "being conscious" and "being not conscious" or between "being aware" and "being unaware" much thought. What if the world merely works "as if" science were correct, but it turned out to "really" be a chaos that happened to look ordered, or a vast and superclever simulation of virtual reality? What if everything acts "as if" it didn't need a god, but it "really" only worked by the super-elaborate mechanisms of a deity? Come to that, what if either of those revelations were themselves only another pair of "as if" truths and there was a second "real" world under that one? Where do you draw the line, and why? "As if" and "really" can be invoked to cast an apparent seed of doubt on any claim, even - ironically - on the "really" that the sceptic is invoking to justify the "as if". It's an unjustified complication that goes beyond the data, results, and conclusion.

Sun, 22 Jul 2012 12:58:38 UTC | #949823

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 198 by Zeuglodon

Comment 197 by All About Meme

As far as I can tell, his argument is this:

  1. Current scientific theories don't account for X. There's no evidence or suggestion that X even exists.

  2. But I know, I just know X exists because it's self-evident. I said so, but in a "subtle" way.

  3. Therefore, there must be another theory that explains it.

I wonder how he could justify number 2, especially when 1 by all rights disproves 2, because it's exactly the line a god believer takes. Come to that, I wonder why he doesn't think that X is actually accounted for by current theories and that he simply hasn't recognized, or refuses to recognize, it as such. His best counter has been Searle, and he still hasn't met my critiques of the thought experiment from Comment 190.

Sat, 21 Jul 2012 20:13:27 UTC | #949771

Go to: The raw deal of determinism and reductionism

Zeuglodon's Avatar Jump to comment 196 by Zeuglodon

Comment 194 by Schrodinger's Cat

SC, the experience of red is my eyes looking at a pool of blood reflecting red light, the light waves reaching my eyes and transmitting signals to the specialized gadget in my brain that switches on (or, more accurately, sends more rapid signals along its network) when light of the red wavelength is detected by the eyes. The instant I look away, the light waves don't reach my eyes, so they don't transmit signals, so the specialized gadget doesn't turn on. Even when I'm imagining a red thing, that same gadget will be turned on, but to a lesser extent. Other functions in the brain, like other gadgets, will be interfering with the signal, but those other gadgets are not fundamentally different. That is what you will find, that is what any scientist will tell you, and that is all anybody can prove.

The knowledge of red, or rather the learning that gives me that knowledge, is me looking at a book's pages about red light, the eyes detecting the light waves, pattern-recognition gadgets in my head diverting the signals to language gadgets, and possibly connecting at some point with the gadget that turns on when red light is detected, but again to a lesser extent, since the red is only imagined. That's all the extra it needs, all you will find, all any scientist will tell you or prove.
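
Here's another toy Python sketch of my own to show what I mean by "the same gadget, but to a lesser extent". The activation numbers and the names of the sources are made up purely for illustration; the only point is that seeing red and reading about red both involve the same physical gadget, differing in how, and how strongly, it gets driven.

  # Toy illustration: direct red light drives the "red gadget" hard;
  # reading about red only nudges it via the language/imagery route.
  # The activation values are invented for the sake of the example.

  def red_gadget_activation(source):
      if source == "red light at the eyes":
          return 1.0   # full-strength, bottom-up activation
      if source == "reading about red":
          return 0.2   # weak, top-down activation via language and imagery
      return 0.0       # unrelated input barely touches the gadget

  for source in ("red light at the eyes", "reading about red", "page of text about tax law"):
      print(source, "->", red_gadget_activation(source))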

These things - light detection, processing, gadgetry wired to represent it, etc. - aren't what we experience. I can only tell you about them in the first place because scientists investigated them scientifically. Instead, they're as scientific an explanation as we've got of what experience is, in the brain between your ears as well as the one between mine. According to your logic, I would trust introspection because I would be "aware" of all this, but that's not what anyone is saying and it's blatantly incorrect. Introspection has actually failed, and failed badly, to prove anything about the brain, because you can't open a box with the crowbar that's inside it and you can't look at the back of your own head. Even the knowledge I have now of my own brain functions is stuff I infer and assume, as the actual experiments were never done on my brain. I'm not even strictly aware of it. I'm making a guess that the lines of evidence suggest is the correct one. That is induction, and that's the best anyone - me, you, the guy on the bus next to me - is ever going to get. No new physics. No special physico-chemical properties of the brain. No word-salad juggling of "experience" and "awareness". No quantum brain. No irreducible complexity. No magic ingredient, however much you complain your suggestion isn't one. It is crucial you get this point right now, before any more confusion comes of it.

You've missed the subtlety of my argument.

The arrogance in your posts is getting on my nerves. I am asking you politely; will you please stop acting so cocky and self-righteous and address the points I've raised?

For the only way you can argue that awareness is reducable to something in physics is to argue that it is an experience of something in physics

What's this supposed to mean?

awareness is reducable to something in physics

awareness is an experience of something in physics

Therefore:

an experience of something in physics is reducable to something in physics

I'm staring at the words, but I'm getting nothing out of them no matter how much I squint. That means either your point is subtle and you've mangled it, or you don't actually have one and are labouring under a serious misunderstanding. Give me at least one good reason why I should take this theologically inspired wordplay seriously.

...which would in turn mean 'new physics' as experience is not part of known physics.

Again, meaning please?

I thus win the 'new physics' argument either way!

Don't pat yourself on the back yet. My hackles are well and truly raised now, and I'm not buying your argument without further clarification.

If red is reducable to some aspect of 'known physics', that would have to actually be 'new physics' as nothing in known physics accounts for red.

Why do you say that? You yourself admitted that a bottom-up approach accounts for it scientifically.

Likewise if red is not reducable to known physics then the explanation would ipso facto be new physics.

Actually, it wouldn't be an explanation at all. It would be a dead end, because without an ability to reduce it or dissect it, there's nothing to analyze. You've just killed any discussion on a presumption. That's all I'm getting at the moment from this comment of yours.

I am losing my patience with you, SC, because you are dancing around the issue and you've also shown no sign of paying any attention to the details I've posted for the current argument. For the last time, give me an example of the experience of red, so that we know what you mean and can investigate the claim. If I were to give you a model of the brain, all the detecting devices and investigative tools science has provided, and any team of trained scientists using any rigorous scientific methodology you care to name, where would you direct their attention for discovery of this "more" without looking like a ghost in the machine advocate?

Comment 195 by Schrodinger's Cat

Your point about epiphenomenalism suggests that mental processes are caused, but then cut off from the rest of the system so that they have no effect. I don't think this is a good argument, because a thing that is caused but which has no effect could never induce a body to acknowledge its existence. I could not talk about consciousness if it had no effect on my motor actions, because without an effect feeding in as input, how could those motor actions be caused? It's no argument to point to the cause heading towards the mental part, the cause that branches off, because that presupposes the cause has its effect on the rest of the system just before it vanishes into the separate mental realm, so you end up back where you started.

I'm not clear why a 'cutting off' is necessary. I prefer to think of a mirror image instead.

Except that doesn't address my point, because the caused thing doesn't have any effect on the system. Your analogy is still within the paradigm: the mirror causes an effect that goes back into the rest of the system, but any effect that goes out and never comes back in has been lost to an event horizon, so nothing can "see" or be influenced by it, because it is lost for good - which is exactly the point I'm making. It doesn't escape the trap. It falls right into it.
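
To put the "no effect means no report" point one more way, here's a small Python sketch of my own. The graph, the node names, and the edges are all invented; the only point is that a node with incoming causal links but no outgoing ones can never influence anything downstream, including the speech system that would have to report it.

  # Small causal-graph sketch of the epiphenomenalism objection: a node
  # that is caused but causes nothing in turn cannot make the speech
  # system acknowledge its existence.

  causal_graph = {
      "light": ["visual cortex"],
      "visual cortex": ["epiphenomenal mind", "motor cortex"],
      "epiphenomenal mind": [],   # caused, but causes nothing in turn
      "motor cortex": ["speech"],
      "speech": [],
  }

  def can_influence(graph, source, target):
      # Simple depth-first reachability: can a change at `source`
      # propagate along causal edges to `target`?
      seen, stack = set(), [source]
      while stack:
          node = stack.pop()
          if node == target:
              return True
          if node not in seen:
              seen.add(node)
              stack.extend(graph.get(node, []))
      return False

  print(can_influence(causal_graph, "visual cortex", "speech"))       # True
  print(can_influence(causal_graph, "epiphenomenal mind", "speech"))  # False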

And I don't think it's a "just there for the ride" system any more than I think it is a homunculus system. The simplest explanation is that my brain or my nervous system is synonymous with me. In case you haven't noticed, you're the one trying to invoke new physics that aren't evolutionarily explained.

Sat, 21 Jul 2012 19:56:08 UTC | #949768