gfish ([personal profile] gfish) wrote 2016-12-16 03:41 pm

A proof against epiphenomenal consciousness

For the last couple years, I have found myself thinking more and more about the nature of consciousness. It's just weird that we don't have any theoretical understanding of the single most evident fact available to us -- that we exist, and that we are experiencing things.

At one point I realized I didn't understand it to such a degree that I couldn't even rule out rocks having some form of awareness! I think that shows pretty clearly just how hard it is to get a solid grip on the subject. (And I'm not even the only one to end up there -- the philosophical term for this is "panpsychism".) I've managed to work my way back from THAT a bit, at least. I still haven't actually convinced myself what consciousness is, or how it even begins to work, but I did recently make some important progress: I was able to rule out one of the common paths people take.

The core problem of consciousness is that it makes absolutely no sense if you look at it from a purely materialist perspective. We have a decent understanding of how cognition works, sure. How neurons and neural nets work, how data can be processed, how an ever-increasing subset of "intelligent" behavior comes about. We have a long way to go, of course, maybe centuries of work remain, but the foundations are pretty solid. In contrast, consciousness? Who the fuck knows. There is absolutely no basis for even beginning to see how a first person, subjective experience can come out of data processing.

The classic solution is dualism. Consciousness is a metaphysical property, like a soul, which somehow gets stuffed into a body and drives it around. A very popular option, even today, but it has lots of problems. How does the interface work? By definition, it would have to be violating the laws of physics as we understand them, making atoms do things they wouldn't otherwise do. This has never been observed, and obviously there is a lot of resistance to the very idea.

(I'm ignoring subtleties like monism here, but that doesn't change anything at this level.)

One tempting compromise which I found myself coming back to is epiphenomenalism. Basically, what if consciousness existed as some special metaphysical property, but it didn't actually do anything? (AKA, it's causally isolated.) So some quality of neurons or feedback loops or massive computation or whatever causes a little bubble of subjective experience to spin up, conscious of what is going on. We already know that the actual thinking is going on in the brain, and now we have a way to understand how we're experiencing it so vividly. Great!

It really makes sense in a lot of ways. Even ignoring the interface issue, how could consciousness have an output? What would that mean? Imagine you're looking at something that is red. (All philosophy of consciousness papers have to talk about red, it's the law.) So your brain is getting signals of "red" from your eyes. Your consciousness intercepts that, and you experience redness. Does your consciousness then send a new signal saying "I'm experiencing redness" to the rest of your brain? From your cold, unfeeling cognition's point of view, there would be no difference between that and the original signal. Data is data, and it already knows your eyes are receiving red light. Why would it even pay attention to this weird ghost signal? So much simpler to just wall consciousness off and not get into this!

So epiphenomenalism is great, except for one big problem: I'm physically typing sentences that contain ideas about consciousness right now! How can consciousness be epiphenomenal if consciousness is directly affecting the physical world by making me yammer on and on about it? There is only one possible way to get around this that I see: I'm not actually thinking about consciousness. While I am experiencing things, including my thoughts on the subject, those thoughts are not actually about that experience. They're about whatever internal model my cognitive processes keep about themselves, which naturally would look a lot like consciousness to them. (If you're some kind of zombie* lacking consciousness, then the concept of 'consciousness' just means 'data about yourself' to you.) My cognition has just kind of happened to invent an explanation for something it could never know was going on in the first place.

While this is all kinds of twisty, it does nicely explain why consciousness is so hard to think about -- because we're not actually thinking about it! Our cognitive tools have no access to conscious experiences, just raw data, so of course they have trouble explaining them properly.

Cute answer, but no. This is self-contradictory. We started this whole exploration by thinking about the unique spark of consciousness, how weird it is that subjective experiences can emerge from the ruthlessly materialist universe we see around us. Following that line of thought has now led us to the point of deciding that those thoughts were never about consciousness in the first place. Those thoughts had never experienced consciousness, didn't know it actually existed, and could never do so. Which means that, as far as our cognition is concerned, materialism is a perfectly fine answer, since it can provably generate a situation that convincingly feels like experience to a purely materialist cognitive system.

Whew!

Is this the end of my search? No, because it only rules out epiphenomenal explanations. There are many other options available, some of which I think even have testable hypotheses. But it's the first bit of real progress I've made, and I'm pretty happy about that.

* Yes, zombie is the philosophical term of art.

[identity profile] tylik.livejournal.com 2016-12-17 12:02 am (UTC)(link)
So, how much actual neuroscience do you have, as opposed to the sort that assumed that neural nets are actually a thing?

I think a lot of poor analogies get made on the assumption that brains are a lot more digital and a lot more like the computers that we think we understand (though a lot of that tends to be a hand approximation, AFAICT, too, but I know brains better at this point) than the messy sloppy weird slow but parallel and did I say weird analog thingies that they are.

(Part of what I want to talk about is about how neurons work, and part is about things like how learning works, which is way more about the limbic system than data processing in a computational sense, really... Unless your computational sense is really broad.)

[identity profile] gfish.livejournal.com 2016-12-17 01:22 am (UTC)(link)
I guess I'd say a lot more than most people, but far less than you. :) Enough to know that biological neural systems are messy and slow and massively parallel, yes. I personally come from a perspective that assumes they could still be simulated to whatever level of precision is desired in a Turing Machine given enough time and a big enough tape, of course.

I don't think it really matters, though. The way cognition is implemented doesn't matter for these purposes. Analog, digital, serialized, parallel, whatever. They're all material systems with no even halfway convincing explanation of how one gets from that to subjective experience.

[identity profile] tylik.livejournal.com 2016-12-17 01:32 am (UTC)(link)
I guess "subjective experience" is still this big undefined thing for me, so I'd had to have a better idea what you find surprising.

For me, it's not, anymore. And a lot of that came from pulling apart the emotional basis of learning... but I don't know that we're coming to it with similar questions, and it's also a very personal process of accretion that has developed over a number of years. Be fun to talk about, though.

BTW, how do you feel about the piracy of textbooks? (I am generally anti-book piracy, but moderately pro-textbook piracy due to specifics of that market.)

[identity profile] gfish.livejournal.com 2016-12-17 01:42 am (UTC)(link)
I'm generally in favor of keeping short (~5 year) copyrights, which weirdly feels like an extreme position now. With technology having made the legal fiction of copyright utterly ridiculous and late-stage capitalism simultaneously pushing for absolute, unending protections, I have no idea where we end up. Normally I just try to deal with it on the level of tackiness -- I can afford to buy media, so I do so if it is made available in my market. I don't actually see much evidence that piracy is hurting creative output (note: not the same thing as people getting paid to be creative), but I'd just as soon not be tacky. But, yeah, textbooks manipulate the market in such ugly, exploitative ways, it's particularly hard to get too worked up over piracy there.

[identity profile] tylik.livejournal.com 2016-12-17 02:25 am (UTC)(link)
I was just about to send you email, and then noticed that I have no saved email address, and am not sure which listed ones are preferred.

Also, apropos of nothing - a young woman I've been corresponding with (friend of a fictive family member of my sister's*) is getting interested in programming and robotics. Seattle area. She's an animator by trade, so she's pretty solid on modeling and animation. She's working a lot on her math and programming at home, and on the verge of branching out into arduino stuff. I have no idea where to send her to start picking up soldering and other skills, not to mention community stuff now that Metrix is closed to public use. I'm also poking her to start volunteering with one of the UW labs, but while I can send her to the right part of the web, the labs I actually know personally are the wrong ones.

Do you know any good resources? I know there are some public groups, but I don't know much about them. Oh, and I sent her up to Ada books. (I'm also super out of date on the other hackerspaces, except for a couple that last I checked in would be, um, poor cultural fits. It's like I don't live there anymore.)

* There needs to be better language there. Like, I'm totally willing to accept her as family, but she's adopted via my sister?

[identity profile] gfish.livejournal.com 2016-12-17 03:32 am (UTC)(link)
(About to run out the door, but I'll come up with a proper answer to this. And gfish chez the net of cyphertext will always work for me.)

[identity profile] tylik.livejournal.com 2016-12-30 04:57 pm (UTC)(link)
ping?

[identity profile] mosinging1986.livejournal.com 2016-12-17 02:08 am (UTC)(link)
(Here via the Home Page)

The core problem of consciousness is that it makes absolutely no sense if you look at it from a purely materialist perspective.

Well, obviously! That's why purely materialistic worldviews make no sense and are demonstrably false.

[identity profile] gfish.livejournal.com 2016-12-17 03:39 am (UTC)(link)
I certainly can't rule it out, though. Coming from a background in the sciences, I'd personally find expanding our concept of physics slightly a lot more comfortable than grafting on metaphysical frippery. I can handle an empty, meaningless universe a lot more easily than one with occult properties that can't be objectively tested! But I'll just have to follow this rabbit hole wherever it takes me. :)

[identity profile] mosinging1986.livejournal.com 2016-12-20 03:30 am (UTC)(link)
I can handle an empty, meaningless universe a lot more easily than one with occult properties that can't be objectively tested!

I would suggest those aren't the only two options. Anyway, all the best to you on your search.

[identity profile] randomdreams.livejournal.com 2016-12-17 02:52 am (UTC)(link)
I'm not entirely sure that consciousness exists. I think our brains have a big chunk of equipment that makes snap judgments on a neural processing basis, and then a second, more complicated set of equipment that matches the patterns and decisions coming from the first chunk against past decisions and results, and then rationalizes the choices we make based on the snap judgments, forming yet more patterns and results and establishing a system that incorporates all the past choices/decisions/patterns. That process, in humans, and likely in other animals, is snazzy enough to start analyzing the analysis process, and that's what's going on when we're thinking about how we think.

[identity profile] randomdreams.livejournal.com 2016-12-17 02:53 am (UTC)(link)
(which is to say that if we have free will I suspect we virtually never actually use it: we're just choosing based on pattern-matching. But at some point, that just asks what we mean by 'free will'.)

[identity profile] tylik.livejournal.com 2016-12-17 03:19 am (UTC)(link)
Oo, so a) you read at least some of the literature and

b) some aspect of the amorphous consciousness thing you're concerned with is "free will". Let's carve that chunk off, put it on a meat tray under plastic, and put it on the table.

[identity profile] gfish.livejournal.com 2016-12-17 03:30 am (UTC)(link)
I'm definitely not talking about "free will". I think that's an empty term that can't be used in any serious way, like "omnipotence". It's just self-defeating and pointless.

[identity profile] randomdreams.livejournal.com 2016-12-17 04:08 am (UTC)(link)
My introduction was Malcolm Gladwell's Blink, which was largely unsatisfying, but it did lead me to a bunch of neurobiologists writing about how they didn't believe in either the concept or the reality of free will and discussing how intelligent behavior could come out of iterative pattern-matching systems that are sufficiently complex to start looking for patterns in the pattern-matching equipment.

[identity profile] tylik.livejournal.com 2016-12-17 04:14 am (UTC)(link)
You know, and I'm writing this specifically because I'm in the process of unpacking my own thought process (I'm also *still* getting over this damn virus, so a little fuzzy), my initial reaction is that I'm not really convinced that free will is a scientifically interesting question to me. I mean, it's kind of hanging out with angels dancing on the head of a pin.

I'm not certain if turning it into a testable hypothesis will fix that? Maybe? (Likely, I suppose, though some testable hypotheses are boring, and while figuring out how to design the experiments is often interesting, actually doing them is frequently tedious beyond belief. Hence, minions.)

[identity profile] randomdreams.livejournal.com 2016-12-17 04:26 am (UTC)(link)
Oh I agree. I'm willing to toss the term and focus instead on what we mean by intelligent decision-making, and consciousness, and whether those two are distinct.

I read something interesting over on g+, lemme see if I can find it, talking about a metabolic pathway in the brain that provided most of the hardware for producing identical, entangled molecules, separating them, and doing subsequent stereochemistry that could indicate spin state, meaning it is at least possible we have quantum computing capability. The inference was that maybe that was part of how we make decisions.

[identity profile] randomdreams.livejournal.com 2016-12-17 04:36 am (UTC)(link)
Ooof, finally. I post too much stuff. https://www.theatlantic.com/science/archive/2016/11/quantum-brain/506768/

[identity profile] eub.livejournal.com 2016-12-17 09:15 am (UTC)(link)
I don't know that identifying macroscopic quantum properties would even help with any of the philosophical questions, though. It can offer you observable randomness instead of determinism, but that doesn't get you free will.

And it doesn't resolve the basic problem of consciousness in this post, that it seems it can be either a causal part of physics, or causally disconnected, or a causal source or sink, and we're not happy with any of those. Well, some people would always be happy to go panpsychist, and that's fine, but it's not specially licensed by QM; consciousness is not in the math.

[identity profile] eub.livejournal.com 2016-12-17 08:58 am (UTC)(link)
" Following that line of thoughts has now lead us to the point of deciding that those thoughts were never about consciousness in the first place."

We may need to go back and talk about "about". Those thoughts can't be causally downstream of consciousness (as epiphenomenon taken to be causally downstream of physics), but does that mean they can't be about consciousness? I believe a professional epiphenomenalist would play it that way.

[identity profile] gfish.livejournal.com 2016-12-17 09:53 pm (UTC)(link)
They could accidentally be about consciousness, sure. But the input that they were trying to explain wasn't consciousness; there just happened to also be a completely invisible phenomenon a lot like the thoughts. Which there would be no way to know or prove, so it all seems pretty pointless to me. If I write about a fictional alien species and happen to get a lot of it right, am I actually writing about them? No one would claim my fiction could be used to draw conclusions about the real aliens.

[identity profile] peteralway.livejournal.com 2016-12-17 10:44 pm (UTC)(link)
"Those thoughts can't be causally downstream of consciousness (as epiphenomenon taken to be causally downstream of physics), but does that mean they can't be about consciousness?"

Wait--if our thoughts about consciousness are upstream of the actual perception of consciousness, at least for the most part, would that explain why thinking about consciousness seems to always lead to dead ends? That the part of our mental process that perceives consciousness can't think effectively about it because it just plain doesn't think at all? And the part of our mental process that experiences consciousness can only feed a vague signal to the part of our mental process that thinks? So the thinking part of our brain is getting bad or incomplete data on the experience of consciousness, and the thoughts end up completely unsatisfying?

[identity profile] eub.livejournal.com 2016-12-18 07:35 am (UTC)(link)
"They could accidentally be about consciousness, sure."

That's your argument, indeed, that it would be accidental. But an actual epiphenomenalist will come equipped with a colorable argument otherwise. (It sounds like you probably have some familiarity with the philosophical literature on "intentionality" which is their term for "aboutness"?)

But I'm not into this "causal cul de sac" epiphenomenalism -- I just don't see how it's of any use except as a defensive position -- so that's all I'll speak for them.
Edited 2016-12-18 07:35 (UTC)

[identity profile] sistawendy.livejournal.com 2016-12-17 10:08 pm (UTC)(link)
This reminds me of something an AI prof in grad school once said. It was fashionable for decades to ask, "Can machines be intelligent?" His take on that, which I agree with, was, "Who cares as long as they do what you want them to do and not what you don't?" There's an analogy here with the question, "What, if anything, is consciousness?" I say that the answer doesn't matter if it doesn't impact human survival, happiness, or whatever other goal you want to optimize life for.

And even if you do still care, I think the answer may be disappointing. [livejournal.com profile] randomdreams mentioned animals. There's a whole range of complexity from great apes to the organisms [livejournal.com profile] tylik studies; there may even be a few organisms whose behavior people can realistically simulate by now. The question of whether a given organism is conscious or not is going to look like the one for whether it's living or not; basic, mechanistic answers have won out for the latter.

[identity profile] eub.livejournal.com 2016-12-18 07:44 am (UTC)(link)
Well but hold on a minute, isn't that like coming in to a discussion of Jewish halacha, and saying the answer doesn't really matter? Doesn't matter to whom? It seems like something that interests [livejournal.com profile] gfish, and that's okay.

[identity profile] plantae.livejournal.com 2017-01-13 05:03 am (UTC)(link)
That was fun. Thank you.

[identity profile] gfish.livejournal.com 2017-01-13 05:27 pm (UTC)(link)
I suppose that's more than most wannabe philosophers get, so, you're welcome! :)