Are you slipping secret messages past me in the mail?

me: “…ideas about predictive models needing to contain as many data elements as the thing they’re predicting in order to be accurate…”

Mr. Cawley: “…Sorry, an unsound idea, or one misapplied. Predictive models do not need to contain as many data elements as the thing they are predicting in order to be accurate. There are all kinds of simplifying substitutions and shortcuts in formal and real behaviors. Even for every single detail. I have a really accurate description of the future value of every single cell of rule 0 after the initial condition for every initial condition regardless of size or number of steps, using just one element. When the behavior is simple it can be fully predicted without ‘one to one and onto’ modeling…”

I’m going to respond to this, four years later, because I’m poking around this forum again and, frankly, I get a little annoyed when members of the inner NKS clique take a superior tone with me. JC says I “miss basic points at the outset of the whole subject”…he’s “plowing the sea” by participating in this thread with me…he’ll “give it a try…and see if any of it sticks”. Sorry you seem to have such a low opinion of me, JC; I admire your philosophical and analytical perspectives on this site and others. PJL took a similar tone with me in person at the D.C. NKS conference, and Cawley does it with me in this thread. You all should be aware, as people who clearly want to promote an NKS slant in the world, that when you approach outsiders like me with that kind of tone, it’s a turn-off to your whole group. That said, I am clearly very interested in thinking about these ideas and participating with you within the context of this forum, so I’ll move on to the content of my rebuttal to part of what JC writes above:

I may not have been as clear as I should have been, in 2006, in my reference to the idea I’m talking about, which I heard via — I don’t know — some popularized Hawking book. The idea concerns predicting an irreducible system (of the type most often discussed in this domain), one for which there is no shortcut-style, reductive description (unlike there usually is in math and physics…math and physics *are*, essentially, reductive descriptions). As you build a simulation of such a complex system, you end up needing to make your simulation more and more complex, using more and more “elements” (physical elements, conceptual elements). And a dynamic starts to illustrate itself: if you’re creating a simulation of what’s going to happen next in a complex universe, then the more accurately you want to do that, in cases where there is no reductive description of the history or unfolded dynamics of the world, the more you approach a situation wherein it’s less and less like a simulation that you can run beforehand, and more and more like an exact copy of the thing you’re trying to simulate in the first place…which, when time is part of the universe, means that you get less and less of the benefit of being able to predict events with your simulation, since the simulation takes as long to run as the universe itself.

In the part of your response that I quoted, you’re talking about simple systems, clearly, systems that can be reductively described. In my proposition about classifying one’s own complexity, or classifying a system that you cannot predict, clearly I am not talking about that kind of system.
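(Incidentally, JC’s rule 0 point is easy to make concrete. Here’s a minimal sketch in Python — mine, not his: for rule 0, every neighborhood maps to 0, so a single element predicts any cell at any step, with no simulation at all.)

    import random

    # Elementary cellular automaton rule 0: every 3-cell neighborhood maps to 0,
    # so after one step the whole row is 0, regardless of the initial condition.

    def simulate_rule0(cells, steps):
        """Brute-force simulation: apply rule 0 'steps' times."""
        for _ in range(steps):
            cells = [0] * len(cells)  # rule 0 sends every neighborhood to 0
        return cells

    def predict_rule0(step):
        """The one-element shortcut: any cell, any step >= 1, any initial condition."""
        return 0

    init = [random.randint(0, 1) for _ in range(50)]
    assert simulate_rule0(init, 10) == [predict_rule0(10)] * 50

That is exactly the kind of shortcut I mean by a “reductive description”, and exactly what is unavailable, by definition, for the irreducible systems I’m asking about.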

I wasn’t as rigorous as I should have been in my original post, perhaps. What I was trying to get at (I’ll make a weaker and more articulated assertion here) is that when one wants to figure out exactly how complex an observed system is, there are limits inherent in that: if you “cannot predict” the system, such that you have no exact reductive description of its unfolded dynamics, then there are elements in the unfolded history that, since you can’t predict them, you don’t understand well enough to eliminate the possibility that they contain complex behavior. If you can’t predict a system completely, if you can’t reduce it completely, then setting an upper bound for its complexity seems to me to be at best a dicey matter! (A functionally-capping upper bound, I mean…an upper bound that is lower than the highest upper bound in your classification scheme: Class IV, in the case of NKS.)

If I’m a teacher and I give you a test, and I have a model that always lets me guess right, before you take the test, about what you will answer on it, then I can classify your test-taking behavior in a wholly more secure way than if I can’t predict what you will answer…because in the former case, since your behavior doesn’t deviate from my reductive description, it would be significantly harder to say that there’s anything in your behavior that’s eluding me than if your behavior deviates from my best reductive description (prediction). If you’re doing something I don’t understand, something I can’t predict, then you may very well be doing something that is highly complex, sensible, meaningful, etc., that, if I understood it, or could recognize it, or describe it, might affect my classification of your complexity (upward). I might be filling in ovals on a multiple-choice test to spell out “this class is boring” in a compressed binary format, completely ignoring the questions being asked of me. That’s an example of a system whose output (my answers on the test) looks Class III to you, but is really Class IV. So while I obviously recognize that there is a taxonomic difference between Class III and Class IV systems, the example I just gave should be sufficient reason to doubt the general claim that behavior that looks random cannot contain complex, intelligent, or universal behavior.
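To make the test example concrete, here’s a toy sketch (my own illustration, with arbitrary encoding choices): pack the message’s bits into A/B/C/D answers, two bits per question. To a grader, the answer sheet looks like noise; decoded, it says exactly what I meant it to say.

    # Toy illustration: hide a message in multiple-choice answers, two bits per
    # question (A=00, B=01, C=10, D=11). The answers ignore the questions
    # entirely; to a grader they only look like arbitrary guessing.

    CHOICES = "ABCD"

    def encode(message):
        bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
        return "".join(CHOICES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

    def decode(answers):
        bits = "".join(f"{CHOICES.index(a):02b}" for a in answers)
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("ascii")

    answers = encode("this class is boring")
    print(answers)  # looks like noise on the answer sheet
    assert decode(answers) == "this class is boring"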

Distinct from that question, in my mind, is this one: if I know the rules of the system and its initial state, and I see every part of the output of the system from step 0, can an intelligent, non-random system produce behavior that looks random (Class III) from the very beginning? My example of the student differs from this in that I wasn’t observing the student from step 0, didn’t see the initial state, etc. In that example, perhaps obviously, only part of the output of the system is random. Is there a Class III-looking CA, or some other simple system, that looks random from step 0 but actually contains nonrandom, meaningful behavior? I certainly don’t know, or else I would post the damn thing here. It is hard for me to imagine something like an ECA that could do this…organize itself through time, having instantly assumed a random-looking output. It seems to me that there would usually be some initialization period during which the thing had to decide to, for example, write compressed, binary-encoded messages in multiple-choice answers on a test. (To be more demanding of the test example: there would have to be something like the lookup-table part of a compressed message encoded somewhere in my test…the decision to be cryptic would have to be somewhere, right? In the rule, or in the system output(?)…and then, could that decision or nature itself be so cryptic that it looked random to me(?)…that, frankly, is hard for me to imagine.) But, myself, I do not see reason enough to cast out the possibility that this could happen, that this kind of system could exist.

For one, and this is quite general, but I think relevant here: the way we’re viewing CA output is part of why it seems to have form, or to be random, to us. Even the 2d grid, widely regarded as simple, probably one of the least presumptive output-visualization mechanisms our species can think of, contains assumptions and mappings that inform our ability to see the behavior of the system. It may be that different visualization or perception mechanisms for CAs (and other systems, obviously) would force, say, the 256 ECAs into different Class I-Class IV categories. Maybe rule 110, when viewed through my network-unrolling methodology, looks like a different class than it does in the 2d-grid perception mechanism.
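As a toy illustration of what I mean by a perception mechanism (this is not the network-unrolling methodology, just a crude stand-in of my own): the same rule 30 run, perceived two different ways.

    # The same ECA run (rule 30), perceived two different ways: the usual
    # 2d grid, and a crude alternative (the center column as a bit stream).

    def step(cells, rule):
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1
                 | cells[(i + 1) % n])) & 1 for i in range(n)]

    def run(rule, width=63, steps=30):
        cells = [0] * width
        cells[width // 2] = 1
        rows = [cells]
        for _ in range(steps):
            cells = step(cells, rule)
            rows.append(cells)
        return rows

    rows = run(30)

    for row in rows:  # perception 1: the 2d grid
        print("".join("#" if c else " " for c in row))

    print("".join(str(row[len(row) // 2]) for row in rows))  # perception 2

Neither view is privileged; each is a set of assumptions about which structure in the output counts.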

For another, years ago I saw, and posted here, systems very much like ECAs except with a denser connectivity, if you will: the “water” systems, which are like ECAs except with two rows of memory. While they don’t fulfill the requirements I’ve given above (a system that appears random from step 0 while actually containing highly complex, nonrandom order), they look a whole lot more like TV snow, on the whole, than any of the ECAs, while clearly not being purely random in their behavior. That doesn’t, of course, mean that there are systems with no detectable initialization period that look completely random and yet contain decidedly nonrandom and meaningful behavior, but to me it’s one reason to wonder whether there might be such systems.
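The original “water” systems are in my old posts and code; here is a minimal sketch of the general idea as I’d reconstruct it, assuming (my assumption, for illustration) that a new cell depends on the 3-cell neighborhoods of the two previous rows — a 6-bit input, hence a 64-bit rule number.

    import random

    # Sketch of an ECA-like system with two rows of memory: each new cell
    # depends on the 3-cell neighborhoods of the two previous rows (a 6-bit
    # input), so the rule is a 64-bit number. My reconstruction of the general
    # idea; the posted "water" systems differ in their details.

    def step2(prev2, prev1, rule):
        n = len(prev1)
        return [(rule >> (prev2[(i - 1) % n] << 5 | prev2[i] << 4
                 | prev2[(i + 1) % n] << 3 | prev1[(i - 1) % n] << 2
                 | prev1[i] << 1 | prev1[(i + 1) % n])) & 1 for i in range(n)]

    random.seed(0)
    rule = random.getrandbits(64)  # an arbitrary rule from the 2^64 possibilities
    a, b = [0] * 79, [0] * 79
    b[39] = 1  # single seed cell
    for _ in range(40):
        a, b = b, step2(a, b, rule)
        print("".join("#" if c else " " for c in b))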

I suppose, in a way, that some classic PRNGs are non-CA examples of systems whose output, from step 0, even with visibility into the system rule, demonstrates no visible initialization period in which the system organizes itself into a state where it can slip secret messages past me in the mail…and yet those systems demonstrate decidedly nonrandom (cyclic) behavior, even while most people’s way of perceiving the system makes it look completely random, through and through. It’s not intelligent behavior, as far as I know, so I don’t find that example very satisfying, myself.
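For instance (my example, with textbook-style parameters): a linear congruential generator looks noisy from its very first output, with no visible warm-up period, and yet its behavior is strictly cyclic.

    # A classic PRNG (linear congruential generator): noisy-looking from its
    # very first output, no visible warm-up, yet strictly cyclic.

    def lcg(seed, a=1103515245, c=12345, m=2**31):
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    g = lcg(42)
    print([next(g) % 10 for _ in range(20)])  # looks noisy from step 0

    def cycle_length(seed, a=21, c=1, m=64):
        """Walk a small LCG until a state repeats."""
        seen, x = {}, seed
        for step in range(m + 1):
            if x in seen:
                return step - seen[x]
            seen[x] = step
            x = (a * x + c) % m

    print(cycle_length(5))  # full period (64) for these small parameters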

Is there a system whose rule I can know, whose initial state and output I can see from step 0, that looks random from step 0 (when using the 2d-grid visualization, by which we’d say it’s Class III), yet is meaningfully nonrandom when viewed in a different way? I don’t know. I’ve looked through quite a lot of CA-like systems, programmatically searching for such an example, without finding one.
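For what it’s worth, that programmatic search can be as crude as this sketch (a screening heuristic of my own, not a randomness proof): run each ECA rule, compress the grid output, and flag the runs that resist compression, since those are the candidates worth re-viewing through other perception mechanisms.

    import zlib

    # Screening heuristic: run every ECA rule from a single seed, compress the
    # space-time output, and rank rules by how much they resist compression.
    # Incompressibility is only a heuristic for "looks random", not a proof.

    def run_rule(rule, width=101, steps=100):
        cells = [0] * width
        cells[width // 2] = 1
        out = []
        for _ in range(steps):
            out.append(bytes(cells))
            cells = [(rule >> (cells[(i - 1) % width] << 2 | cells[i] << 1
                      | cells[(i + 1) % width])) & 1 for i in range(width)]
        return b"".join(out)

    def ratio(rule):
        data = run_rule(rule)
        return len(zlib.compress(data, 9)) / len(data)

    # Chaotic rules (30, 45, and their mirror/complement equivalents) rank high.
    print(sorted(range(256), key=ratio, reverse=True)[:10])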

You’re right, Mr. Cawley, you can classify anything you please. =) (I hope you keep doing so.) And I like the way you all classify things, Class I-IV and such. There’s still a nagging question, though, in my brain, about whether I can be sure that every Class III system is not, in fact, a universal system that is just hard for me to see. Short of a satisfying example, however, I certainly defer to you: what looks unintelligently random is exactly that.


NKS and physics and GUTs (or :: the religiosity of physics)

“Now, of course, the single greatest modeling challenge for NKS is fundamental physics.” (from David Brown)

Mr. Brown, I’m going to pick on you a little bit. But it’s not personal. The sentiment you express in that sentence I quoted is thematic in a large enough segment of the NKS discussion I’ve heard that I want to say a bit of what I think about it.

———

With respect, I understand that the statement you made above represents a view held by many who come to NKS from physics, and while I certainly recognize that an NKS theory of our “physical” world would be a historic breakthrough of the first order, I don’t think that the above statement will be true for many people for more than about 15 years (from now)…if it’s even true now.

It would be an amazing and profound thing if we, organisms within our “physical” world, modeled the physics of our world with NKS, or modeled them completely with any methodology. It would be the ultimate look in the mirror, perhaps.

But thinking isn’t going to stop if that happens; science isn’t going to stop if that happens. Even though our world, from our point of view, has a shit-ton of atoms, modeling the particular world we’re in, while profound (from our point of view), is hardly the greatest challenge—or even the greatest modeling challenge—for NKS, or any other discipline.

Given the limit on running a simulation of a particular system from within that system…the limitation that you run out of building blocks to use for the simulation as your simulation approaches, in accuracy, in completeness, in size, the universe you’re simulating…having a GUT, while amazing and useful and profound, won’t be the end of modeling.

Modeling the behavior of a corporation that is modeling your behavior, for competitive purposes, for example, will be more of an engineering and theoretical challenge, I think. Modeling the behavior of the simplest organism, or a culture of organisms, will be a greater challenge than modeling the physical universe…and before you say it, I think that is true even though the worlds of the corporation, the simple organism, and the culture I am talking about modeling are of course, in actuality, built on top of the substrate of our physical universe.

While true, that doesn’t matter, practically, for the majority of simulations people do now, or are going to do.

Simulating the universe based on an accurate model of physics is of course highly useful…for understanding and observing in high detail small parts of what goes on in our world…like the first parts of XYZ-type-of-explosion, etc. And of course whoever creates such a model will be the next Newton, in terms of human history books. And that matters to people’s egos, in addition to the accomplishment having real value.

But to say it’s the greatest modeling challenge for NKS is just wrong. It might be the greatest in some philosophical sense, it might be the greatest in some sort of metaphysical sense, but in an engineering sense, in a theoretical sense, it is not the greatest challenge for NKS.

If we were living inside an Amiga (if we were complex emergent beings running on an Amiga), then our coming up with a model that matched the output of whatever processor is in an Amiga would be profound. It would mean a lot to us. (As an aside, it wouldn’t even mean that we understood the workings of the Amiga’s processor, and it wouldn’t give us a clue as to what it might mean, from some other organism’s point of view, that “we were running on an Amiga”. But that’s not the point I want to assert here. What I want to assert here is this:) Once we modeled the output of the Amiga’s processor such that we completely understood the instructions that figured into the running of the universe we were running on, there would still be lots for us to do…and, I think, greater things for us to do, in terms of engineering and theory.

That’s because it doesn’t matter, in many ways, that we’re running on an Amiga. There are probably already many systems in our world (systems running on top of the physics of our world) that I suspect will be harder to model than the physics of our world…greater challenges of modeling (in an engineering sense, in a theoretical sense) than the modeling of our particular universe. (Assuming you don’t have access to the rules of the system…which in *most* cases you won’t.) And frankly (I know that some physicists aren’t going to like this): coming up with a physics GUT doesn’t mean you understand everything that is built with physics. It doesn’t even mean you can practically simulate any particular thing that happens as a result of physics.

Even theoretically, there are limits on physical simulation of the universe from within the universe—correct me if I’m wrong, please, physicists—but even if you could control a huge portion of the atoms in the universe while simulating the universe with those atoms, is it not a snake eating its tail? Is there not a simple, practical limit there on the completeness of a simulation of a thing that is running within the [limited] resources of the thing itself? You approach a situation where your simulation *is* the thing: it becomes completely accurate, yet fails to maintain the characteristic of a simulation wherein you can figure out some useful information about a future event *before* it happens. (I’ve been told before that I misunderstand, or misapply, this idea of Hawking’s…but I believe he very clearly says exactly this.)

The physics GUT gets a lot of air-time; it’s profound, it’s elusive, it will add someone’s name to the books of human history…but it’s not, in many ways, the end-all, be-all that it is sometimes touted as. Whether it’s with NKS or whatever theory, when someone comes up with the first widely-accepted GUT, this is what is going to happen: it’ll be on the front page of all the papers, no one will understand it, someone will get their name next to Newton’s, and then everyone (regardless of their education) will be like: so now what? And then we’ll use the GUT in select simulation projects where it will be exceedingly useful, and the majority of simulation will continue, unaffected and largely uninformed by the particular GUT. Then, some years later, someone is going to come up with another GUT, based on different theory, and the two will work equally well, and we’ll work on translating the theory of the one into the theory of the other…

Our universe is special to us because it’s ours. But systems running essentially in emulation mode on our particular hardware can be more profound to us, as engineers and theoreticians, than the physical universe. (And the fact that one is running on the other does not even mean that the substrate system has a meaningful relationship with the system it supports…knowing everything about physics does not necessarily translate into knowing anything meaningful about a particular system physics supports. This should be clear if you think about it via the Amiga analogy: Linus Torvalds’ knowledge of the Linux kernel gets him 0% of the way down the road of understanding many of the programs people run on Linux…maybe I’m running an old Apple OS 9 emulator and programming emergent, sentient beings in C on top of that. There’s no meaningful relationship, there, between the thoughts of the emergent beings and the Linux kernel. A theory of the Linux kernel will not essentially be useful, or even needed, in order to do the more profound (greater?) simulation and modeling of the thoughts of the emergent beings that some of us would want to do if we were other beings within that particular universe…)

I understand, I think, the weight that is put on modeling our particular universe. I agree that it is profound from a philosophical point of view, and from what I can really best call a metaphysical point of view…or a religious point of view. But beyond the specific religion, essentially, of our particular universe, there are all kinds of other universes to model…and because there are necessarily many of those emulated universes, whereas our universe is one…the one…our *uni*verse…that one verse is absolutely not the greatest one we will encounter, even though it has a special place in our understanding.

crosspost on forum.wolframscience.com


Cells with Perspective

What if cells had perspectives on their neighbors, just as we see individuate-able agents in our world as having perspectives? Part of what makes me me is that I have ideas on given subjects that are distinct from others’ ideas on those subjects.

I was thinking about this this morning while thinking about the singularity…for while some people think we can create singular beings that will have our interests in mind and share our values, I think that, by definition, a super-human being will have perspectives distinct from ours.

These are some cellular automata whose cells have perspectives on their neighbors. The new cell values depend on remembered cell values and cell perspectives. The perspectives are morphic in that they change over time based on cell values.

This program could be slightly modified such that perspectives were also based on other perspectives, and of course many simple variations in terms of the shape of cell neighborhoods used for input, etc., are right around the corner.

This CA library is seeming kind of dated and inflexible to me these days, but here is source code containing the details of the systems pictured here.
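The gist, though, is simple. Here’s a minimal sketch in Python, with my own guesses standing in for the library’s actual update scheme: each cell keeps a “perspective” weight on each of its two neighbors; its new value depends on neighbor values seen through those perspectives, and the perspectives themselves drift based on the values they observe.

    import random

    # Sketch of a "cells with perspective" CA (my guesses at an update scheme,
    # not the library's actual one): each cell holds a perspective weight on
    # each of its two neighbors; its new value is a threshold over neighbor
    # values seen *through* those perspectives, and the perspectives drift
    # toward neighbors that agree with the cell's own remembered value.

    N, STEPS = 79, 40
    random.seed(1)
    values = [random.randint(0, 1) for _ in range(N)]
    persp = [[1.0, 1.0] for _ in range(N)]  # [weight on left, weight on right]

    for _ in range(STEPS):
        new_values = []
        for i in range(N):
            left, right = values[(i - 1) % N], values[(i + 1) % N]
            seen = persp[i][0] * left + persp[i][1] * right  # neighbors, as seen
            new_values.append(1 if seen > 1.0 else 0)
            # Morphic perspectives: change over time based on cell values.
            persp[i][0] = min(2.0, max(0.0, persp[i][0] + (0.1 if left == values[i] else -0.1)))
            persp[i][1] = min(2.0, max(0.0, persp[i][1] + (0.1 if right == values[i] else -0.1)))
        values = new_values
        print("".join("#" if v else " " for v in values))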

crosspost on forum.wolframscience.com


Let no one know what you read, what you watch.

Always consume information alone.  To prevent tracking of your mind.  And what is there to explain?  Chaos.  In all its forms.  That is what there is to explain.  Networks.  Languages that describe problems (that which we don’t understand).  And dynamics that arise from those languages.  Situations.  Descriptions.  And dynamics that arise from the descriptions. (from 2004)


inhesion :: Complex Systems

inhesion is a C library for complex systems. It’s in development now.  It will have genetic algorithms, cellular automata, cultural evolutionary search, and other such systems.  The initial release, available now, has a multi-variable optimization system called cor3.  You can download a demo.


straight talk about Pragmatic Solutions and how we (now they) missed the boat on embedded assessment

discussion of cellular automata and embedded assessment work with L. Eloe, one of the people who explained calculus to me:

My interest in CAs started as a hobby, sprung from reading Stephen Wolfram’s work: A New Kind of Science and his earlier papers on cellular automata…. I had heard a little about John Conway’s and Alan Turing’s work before that, but it was Wolfram’s writing that piqued my interest. So I just read what I could find, and after a few years of experimenting with and thinking about his work and others’ work in that domain, I had some ideas about how to extend what Wolfram calls the elementary cellular automata (http://j.mp/50WLzV). My initial work there was extending CAs with several new features (described here if you’d like more detail: http://j.mp/d1jtbz). But what I’ve become interested in over the last few years is using CAs and genetic algorithms together to create systems where cultural evolution finds solutions to classification problems. I made a system like this for a company in California (video intro here: http://j.mp/4XaeuB) that used GAs and CAs and Bayesian probability (and a bunch of other stuff!) to, ultimately, find programmatic classification functions that were used in educational and training domains…to determine, based on the behavior of a [student] playing an [educational] game (such as Harvard’s River City game, the Army game, or some other training or educational environment)…to determine, based on a person’s behavior, something about them that would normally be determined by an interview or test. Unfortunately, that company shot themselves in the foot with mismanagement, so all that software I built for them isn’t doing anyone any good. But if I can get my wits together, I may try to re-approach that problem by myself or with some other folks. That’s what I’m doing with CAs…
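(The guts of that system aren’t public, but the general shape of “GAs evolving CA rules to classify” is easy to sketch. Here’s a toy version, entirely my own construction and not the system described above, that evolves an ECA rule number toward the classic majority-classification task.)

    import random

    # Toy sketch of the GA-over-CA shape (entirely my construction, not the
    # system described above): evolve an 8-bit ECA rule number so that running
    # the CA classifies bit strings by majority value (density classification).

    random.seed(2)
    WIDTH, STEPS, POP, GENS = 25, 50, 20, 15

    def run_ca(rule, cells):
        for _ in range(STEPS):
            cells = [(rule >> (cells[(i - 1) % WIDTH] << 2 | cells[i] << 1
                      | cells[(i + 1) % WIDTH])) & 1 for i in range(WIDTH)]
        return cells

    def fitness(rule, trials=15):
        correct = 0
        for _ in range(trials):
            cells = [random.randint(0, 1) for _ in range(WIDTH)]
            majority = 1 if sum(cells) > WIDTH // 2 else 0
            verdict = 1 if sum(run_ca(rule, cells)) > WIDTH // 2 else 0
            correct += verdict == majority
        return correct / trials

    pop = [random.getrandbits(8) for _ in range(POP)]
    for _ in range(GENS):
        survivors = sorted(pop, key=fitness, reverse=True)[:POP // 2]
        # Refill the pool by mutating survivors (flip one bit of the rule).
        pop = survivors + [s ^ (1 << random.randrange(8)) for s in survivors]

    best = max(pop, key=fitness)
    print(f"best rule: {best}, accuracy: {fitness(best):.2f}")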

I guess I’m done saying “f— you” to my last job. I will say this, as a little post-mortem, however: the owner of that company had a vision about improving the educational process. I was inspired by that vision in my interview with him. I have the skills and knowledge required to build the guts of that vision. Much of what happened at that company, unfortunately, precluded us from getting that work done, from changing the world. Our business was, or should have been, changing the world. Instead we squandered it, for a variety of reasons. And they made themselves so intolerable to me that, four months ago, I quit. Because as long as Pragmatic Solutions is managed the way it is, all that’s going to go on there is people dropping like flies because they don’t like playing Emperor’s New Clothes with Josh English, their supposed Chief Software Architect.

All ego aside, straight talk only: now that I’ve quit that company, they don’t have the technical muscle to actually build embedded assessment systems, or to use the ones I already built for them. In terms of intelligence and software ability, they had M. Bakkenson, with whom I collaborated. He was muscle. Of their remaining people, they have the formidable J. Underwood, of ETS/NASA fame, for math expertise and guiding overall knowledge, and they have the management triad, which possesses an intriguing vision (and who are all very cool, from the gentle and understated smarts to the eclectic veganism to the psychologically-whole and beautiful Wu-Tang-loving integrator of ideas). But, and I’ll blame myself here too, we squandered it. They don’t have a complete technical team to accomplish their goals, or even the ability to hire or create one, and that’s even if they get additional loans or funding (because strong technical people, in general, won’t put up with mid-level jackassery). And their key technical person (formerly theirs) is too weary of PR-SOL’s systemic sicknesses to ever be willing to work with them, or anyone who has worked with them, ever again. That’s a shame.

To people who know me and think that I’m overreacting to a project falling apart: you are right, in some ways, that I overreact. But the project I’m lamenting here isn’t creating a research tool or an insurance database, or making the next “Bejeweled”, or designing the next cool pair of Nikes. It’s the potential to significantly enhance assessment techniques for education and training environments. That, for a moment, there was a team of about five folks who could have made a dent in that problem, and that they have dispersed due to our common jackassery, is truly a shame. But even though it won’t be Pragmatic Solutions who solves that problem, and even though it won’t be me, someone will solve it, so it’s not that great a tragedy.
