When I think about writing, I feel good.

When I think about letters, symbols, images, languages, I feel good.  I do not feel good when I think about riff-raff economics, cultural paperwork, road signs, or the false lines of municipalities.  I need to arrange my life so that as little time as possible is spent dealing with filling out forms for governments, and as much time as possible is spent creating.

When I think about writing, I feel good.

"’Normal’ behavior, the nearly universal means by which individuals in society solve given problems"

“In popular usage, eccentricity refers to unusual or odd behavior on the part of an individual. This behavior would typically be perceived as unusual or unnecessary, without being demonstrably maladaptive. Eccentricity is contrasted with ‘normal’ behavior, the nearly universal means by which individuals in society solve given problems and pursue certain priorities in everyday life. People who consistently display benignly eccentric behavior are labeled as ‘eccentrics’.” (Wikipedia)

This definition of normal behavior jumps out at me.  From an algorithmic point of view, from an evolutionary programming point of view, from an impersonal point of view, near-universal convergence in terms of how individuals solve problems is usually a bad thing if you consider the population as a search for answers, a search for effective ways to solve problems.
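
A toy sketch in C of that search intuition (my own illustration, not anything from the original post; the fitness landscape, step size, and population size are all arbitrary choices): a hill-climbing population whose members all start from the same 'normal' answer settles on the nearby peak, while a population spread across the space is likelier to find the better one.

/* Toy illustration (not from the original post): a population where every
 * individual searches the same way, from the same place, explores less of
 * the space than a population whose members differ from one another.
 * The landscape, step size, and population size are arbitrary choices. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define POP   50
#define STEPS 2000

/* Two peaks: a mediocre one near x = 2, a better one near x = 8. */
static double fitness(double x)
{
    double local  = 1.0 * exp(-(x - 2.0) * (x - 2.0));
    double global = 2.0 * exp(-(x - 8.0) * (x - 8.0));
    return local > global ? local : global;
}

/* Simple hill climbing with small random steps. */
static double best_found(double *pop)
{
    double best = -1.0;
    for (int s = 0; s < STEPS; s++) {
        for (int i = 0; i < POP; i++) {
            double step = ((double)rand() / RAND_MAX - 0.5) * 0.1;
            double candidate = pop[i] + step;
            if (fitness(candidate) > fitness(pop[i]))
                pop[i] = candidate;
            if (fitness(pop[i]) > best)
                best = fitness(pop[i]);
        }
    }
    return best;
}

int main(void)
{
    double converged[POP], diverse[POP];
    srand(1);
    for (int i = 0; i < POP; i++) {
        converged[i] = 2.0;                  /* everyone starts at the same answer */
        diverse[i]   = 10.0 * i / (POP - 1); /* spread across [0, 10] */
    }
    printf("converged population, best fitness: %f\n", best_found(converged));
    printf("diverse population,   best fitness: %f\n", best_found(diverse));
    return 0;
}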

"’Normal’ behavior, the nearly universal means by which individuals in society solve given problems"

Programming and writing

I’m struggling with what to do.  I don’t know if I can find a job doing software, even if I wanted to.  There are some jobs, fewer than 10 years ago, but there are some starting to show up on the websites.  If I can’t find a job doing software I’m not sure I can afford an independent life…minimum wage jobs are not possible to actually live on.  If I lived in a shelter or some subsidized housing, then it could be possible.  The idea of going to school sometimes comes into my mind…learn a lot more about some field(s), because I can, because I like to learn.

I wonder what technology will be like in 10 years, in terms of programming.  I want to make a world, a game (sort of), I want to make smart computer things.  I want to write books.  But I think I need to look at depth of endeavor.  And, if, say, I’m going to write, I need to look at creating a sustainable life that doesn’t require writing, so that I have a place to live and enough spiritual energy to write.  Which means not working a software engineering job…it takes too much from me…or I give too much to it.  I think, though, even if I just do it to my boss’s satisfaction (a lower satisfaction than my own, in every case encountered so far) that that type of work environment and work subject matter takes too much of my attention to be able to focus on a book at the same time.  Writing is a whole-life meditation, I think, for me.

I love doing work in C, I love writing English.  I love both.  I think, for my personality, it would be ideal if I didn’t have to interface with any more people than absolutely necessary, to do my work, to make my livelihood.  If I could have popularly-selling books, that would do it.  I could write, and talk to my agent, and do readings, and that might be about it.  But I cannot guarantee that I will ever get a book published.  Even quality of writing, based on authonomy comments, doesn’t mean I can push my way into the publishing industry.  Mom says it’s just a matter of time.  I hope so.

I think writing novels is a great domain for me.  It fits my work style, it fits my path-of-truth values, it produces a product that is directly consumable by human beings.  If I make a software world, I want it to be the same, I think…to make a game world, or other product that interfaces with humans, instead of making libraries for other programmers.  I get scared, sometimes, to think about setting aside programming forever, or for a while, to write, because I’m afraid someone else will do what I want to do before me.  But, waiting, with technology projects, does mean that more interesting hardware will be available when I do that kind of work.

I want to do something with my life that makes sense, given my understanding of our place in history.  I believe some sort of technological singularity will be a reality, is a reality.  I like the idea of writing books because, as the technological pill eclipses its own pharmacist, books retain their relevance to people.  Reading books is a human activity.  I do think singular technologies will revolutionize book writing and reading, and I have some ideas about how I think I could contribute to that.  What I want to make sure, at this point, is that I do go deep enough into something, that I can feel satisfied, in life, with the depth I reach in my project(s).  I am more afraid of not reaching that depth than of giving up breadth—only slightly, but slightly is enough.

I also worry about politics, about nations, about where I live.  And wonder if I should live somewhere else.  Like the UK or Australia.  I love this place and will always be an American, but I need a change of scenery, I want one.  I don’t want to live this life poor, I don’t.  I want to be able to travel someday.  I want to be able to live in quiet, beautiful places far away from the majority chaos.

It is hard to think about not doing the exploration of depths of either some major logical exploration, using code, or some major dramatic and psychological and stylistic exploration, using English.  I want to do both, and maybe I can, but I feel like to go much further in life, what I need now is for one of them to go somewhere.  I don’t, frankly, count having a middle-class programmer/slave job as going somewhere.  That’s not success to me, it’s disaster.  I can write a book in anywhere from six weeks to a couple of years.  What can I do, on some kind of programming product, in that time?  I could do a lot, but you know, with programming, I mainly want to be able to play.  I don’t want any constraints on my programming project other than that I like it.  I want the same thing with writing, almost :: for with writing, I want other people to read it, I want them to access it.  With my game/world/intelligent stuff, I almost want that, but not quite.  I do think that will happen, as I make stuff.  And I have creativity that will transcend some passage of time…I will still have ideas, or have new ideas, in 10 years, that are creative and useful and cool, in whatever domain.

Is it foolish to say that for [now] I will write, and that later I will program?  I said I was going to do that, write this year and then on Jan 1, 2011, start my programming work, and then after that who knows.  That was with the assumption that TSID would be published, and that my next book will be published.  And they may be.  I just don’t know.  And I don’t want to take advantage of Mom.  And it’s hard to live with such uncertainty.  But not saving any energy for the way back is a good thing.  I am starting to understand that I will not figure everything out at once.  I know, while writing this, that I won’t figure this out tonight, that I’ll just make some progress.

I think I can find satisfaction in taking one thing and doing it to the extreme, doing it the most, the best that I can, finding—if not an edge, finding—some singular distinction at something.  If I program, I want to make it emotional…I want the product to be emotional, because that is the thing most often missing from programming projects.  If I write, I want it to be structural, algorithmic, because that is the thing I would most miss about programming.  And I know I can do both.  But I want to do one thing, for now, and take that to the max.  I think it should be writing.  I’m just having trouble setting aside programming for a while, because I have so many ideas about this world I want to program, this simulation, this chaos, that I think I can make in that domain.  I think I can do a lot in either, or both, domains.  Maybe I can do one every other decade, or every other year.  ?  I do think if I can fully commit to one, that I will get what I want in terms of depth satisfaction.

Also, I want to get away from my family.  I need to take a break from them.  Maybe I can do this :: stick with the original plan I had a while back, and write this year, and stay with Mom, and then no matter what, on Jan 1, 2011, move away, even if it means just living on the street.

I kind of want to stop communicating with everyone.  Except Suzanne, Mom, Amy.  No email except query letters…and not even them, actually, because I don’t know who else to send them to on TSID.  No facebook, no twitter, maybe no TV.  Just getting into my own head and allowing myself to rely on Mom.  And then, after that, whether I get a book published or not, getting even more alone time in the next year…either working as a dishwasher in NYC or somewhere, where no one knows where I am, or living in an apartment somewhere and writing more or programming, on my book money.

That’s what I’m going to do: write this year, listen to music, don’t do anything else.  Start my silence with the world.  Write one amazing book.  Make it something I can feel satisfied with, that I think people will love to read.  And give up on everything else, except for running, and a sustenance job.  Then, after that, move somewhere and either be poor or be rich, but be out of here, away from everyone I know.

Programming and writing

I love having a couple of monitors and a couple of keyboards going.

I love being outside, and it’s great to get a break from computers, but I grew up like this.  The screens were smaller, the processors were vastly slower, but this is what I do.  I’ve been programming since I was 7, writing since I was 8, it just feels natural to be tapping away, with a C compiler in one window and a blog in the other, writing something, coding something, this is what I do.

I love having a couple of monitors and a couple of keyboards going.

Are you slipping secret messages past me in the mail?

me: “…ideas about predictive models needing to contain as many data elements as the thing they’re predicting in order to be accurate…”

Mr. Cawley: “…Sorry, an unsound idea, or one misapplied. Predictive models do not need to contain as many data elements as the thing they are predicting in order to be accurate. There are all kinds of simplifying substitutions and shortcuts in formal and real behaviors. Even for every single detail. I have a really accurate description of the future value of every single cell of rule 0 after the initial condition for every initial condition regardless of size or number of steps, using just one element. When the behavior is simple it can be fully predicted without ‘one to one and onto’ modeling…”

I’m going to respond to this, four years later, because I’m poking around this forum again and, frankly, I get a little annoyed when members of the inner NKS clique take a superior tone with me (JC says I “miss basic points at the outset of the whole subject”…he’s “plowing the sea” by participating in this thread with me…he’ll “give it a try…and see if any of it sticks”…sorry you seem to have such a low opinion of me, JC; I admire your philosophical and analytical perspectives on this site and others). PJL took a similar tone with me in person at the D.C. NKS conference and Cawley does it with me in this thread. You all should be aware, as people who clearly want to promote an NKS slant in the world, that when you approach outsiders like me with that kind of tone, it’s a turn-off to your whole group. That said, I am clearly very interested in thinking about these ideas and participating with you within the context of this forum, so I’ll move on to the content of my rebuttal to part of what JC writes above:

I may not have been as clear in my reference, in 2006, as I should have been, to the idea I’m talking about, which I heard via — I don’t know — some popularized Hawking book. The idea is that to predict an irreducible system (of the type most often discussed in this domain), since there’s no shortcut-style, reductive description of the system (unlike there usually is in math and physics—math and physics *are*, essentially, reductive descriptions), as you build a simulation of this complex system you end up needing to make your simulation more and more complex (using more and more “elements”—physical elements, conceptual elements)…and there’s a dynamic that starts to illustrate itself, wherein if you’re creating a simulation of what’s going to happen next in a complex universe, then the more accurately you want to do that—in cases where there is no reductive description of the history or unfolded dynamics of the world—the closer you approach a situation wherein it’s less and less like a simulation that you can run beforehand, and more and more like an exact copy of the thing you’re trying to simulate in the first place…which, when time is part of the universe…means that, less and less, you get the benefit of being able to predict events with your simulation…since the simulation takes as long to run as the universe itself.

In the part of your response that I quoted, you’re talking about simple systems, clearly, systems that can be reductively described. In my proposition about classifying one’s own complexity, or classifying a system that you cannot predict, clearly I am not talking about that kind of system.

I wasn’t as rigorous as I should have been in my original post, perhaps. What I was trying to get at was that—I’ll make a weaker and more articulated assertion here—when one wants to figure out exactly how complex an observed system is, there are limits inherent in that: if you “cannot predict” the system such that you have no exact reductive description of its unfolded dynamics, then there are elements in the unfolded history that, since you can’t predict them, you don’t understand well enough to eliminate the possibility that they contain complex behavior. If you can’t predict a system completely, if you can’t reduce it completely, then setting an upper bound for its complexity seems to me to be at best a dicey matter! (a functionally-capping upper bound…an upper bound that is lower than the highest upper bound in your classification scheme, Class IV in the case of NKS)

If I’m a teacher and I give you a test, and I have a model that allows me to always guess right before you take the test, about what you will answer on the test, then I can claim to classify your test-taking behavior in a wholly-more-secure way than if I can’t predict what you will answer on the test…because in the former case, since your behavior doesn’t deviate from my reductive description, it would be significantly harder to say that there’s anything in your behavior that’s eluding me than if your behavior deviates from my best reductive description (prediction). If you’re doing something I don’t understand, something I can’t predict, then you may very well be doing something that is highly complex, sensible, meaningful, etc., that, if I understood it, or could recognize it, or describe it, might affect my classification of your complexity (upward). I might be filling in ovals on a multiple-choice test to spell out “this class is boring” in a compressed binary format, completely ignoring the questions being asked of me. That’s an example of a system whose output (my answers on the test) looks Class III to you, but is really Class IV. So while I obviously recognize that there is a taxonomic difference between Class III and Class IV systems, the example I just gave should be sufficient reason to doubt the claim that, in general, behavior that looks random cannot contain complex, intelligent, or universal behavior.
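
To make the test example concrete, here is a minimal sketch in C (mine, not anything from the thread) of packing a message into multiple-choice answers, two bits per answer, A through D. It skips the compression step, so it is plain binary rather than a compressed format; the point is only that a grader scoring the sheet against a key sees noise, while someone who knows the scheme decodes it right back to the sentence.

/* A minimal sketch of the test-taking example: pack an ASCII message into
 * multiple-choice answers, two bits per answer (A=00, B=01, C=10, D=11).
 * The message and the 2-bit mapping are arbitrary choices for this sketch. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *message = "this class is boring";
    size_t len = strlen(message);
    char answers[4 * 64];            /* four answers per byte; message is < 64 bytes */

    /* Encode: each byte becomes four answers, most significant bits first. */
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        for (int shift = 6; shift >= 0; shift -= 2)
            answers[n++] = (char)('A' + (((unsigned char)message[i] >> shift) & 3));

    printf("answer sheet: %.*s\n", (int)n, answers);

    /* Decode: pack each run of four answers back into one byte, to show the round trip. */
    printf("decoded:      ");
    for (size_t i = 0; i < n; i += 4) {
        unsigned char byte = 0;
        for (int j = 0; j < 4; j++)
            byte = (unsigned char)((byte << 2) | (unsigned char)(answers[i + j] - 'A'));
        putchar(byte);
    }
    putchar('\n');
    return 0;
}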

Distinct from that question, in my mind, is the question: if I know the rules of the system and its initial state and I see every part of the output of the system from step 0, can an intelligent, non-random system produce behavior that looks random (Class III) from the very beginning? My example of the student differs from this in that I wasn’t observing the student from step 0, didn’t see its initial state, etc. In that example, perhaps obviously, only part of the output of the system is random. Is there a Class III-looking CA, or some other simple system, that looks random from step 0, but that actually contains nonrandom, meaningful, behavior? I certainly don’t know, or else I would post the damn thing here. It is hard for me to imagine something like an ECA that could do this…organize itself through time, having instantly assumed a random-looking output. It seems to me that there might usually be some initialization period during which the thing had to decide to, for example, write compressed, binary-encoded messages in multiple-choice answers on a test. (To be more demanding of the test example, it would have to move in the direction of there being the lookup-table part of a compressed message encoded somewhere in my test…the decision to be cryptic would have to be somewhere, right?, in the rule or in the system output(?)…and then, would it be possible for that decision or nature to itself be so cryptic that it looked random to me…(?)…that, frankly, is hard for me to imagine.) But, myself, I do not see reason enough to cast out the possibility that this could happen, that this kind of system could exist.

For one, and this is quite general, but I think relevant here: the way we’re viewing CA output is part of why it seems to have form, or to be random, to us. Even the 2d grid, widely-regarded as simple, probably one of the least-presumptive output visualization mechanisms our species can think of, contains assumptions and mappings that inform our ability to see the behavior of the system. It may be that different visualization or perception mechanisms for CAs (and other systems, obviously), when used, would force, say, the 256 ECAs into different Class I-Class IV categories. Maybe rule 110, when viewed through my network-unrolling methodology, looks like a different class than it does in the 2d-grid perception mechanism.

For another, I happen to have seen, and have posted here, years ago, systems very much like ECAs except with a denser connectivity, if you will—the “water” systems, which are like ECAs except with two rows of memory. While they don’t fulfill the requirements I’ve given above, of a system that appears random from step 0 while actually containing highly complex, nonrandom order, they look a whole lot more like TV snow, on the whole, than any of the ECAs, while clearly not being purely random in their behavior. That doesn’t, of course, mean that there are systems with no detectable initialization period that look completely random and yet contain decidedly nonrandom and meaningful behavior, but to me it’s one reason to wonder if perhaps there might be such systems.
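
The original “water” rule isn’t reproduced in this post, so what follows is only a guess at the general shape of such a system: an ECA-like rule in which a cell’s next value depends on its three-cell neighborhood in the current row plus its own value one row back (four input bits, hence a 16-entry rule table). The particular rule, width, and seed below are arbitrary choices, not the author’s.

/* An ECA-like system with two rows of memory: a guess at the general shape
 * of such a rule, not the actual "water" rule from the earlier posts. */
#include <stdio.h>
#include <string.h>

#define WIDTH 79
#define STEPS 40

int main(void)
{
    /* 16-bit rule table: this choice encodes "rule 30 XORed with the cell's
     * value one row back", one arbitrary pick of the 65536 possible rules. */
    unsigned rule = 0xE11E;
    unsigned char prev[WIDTH] = {0};   /* row t-1 (the extra row of memory) */
    unsigned char curr[WIDTH] = {0};   /* row t                             */
    unsigned char next[WIDTH];

    curr[WIDTH / 2] = 1;               /* single seed cell */

    for (int t = 0; t < STEPS; t++) {
        for (int i = 0; i < WIDTH; i++)
            putchar(curr[i] ? '#' : ' ');
        putchar('\n');

        for (int i = 0; i < WIDTH; i++) {
            int left   = curr[(i + WIDTH - 1) % WIDTH];
            int center = curr[i];
            int right  = curr[(i + 1) % WIDTH];
            int memory = prev[i];
            int index  = (memory << 3) | (left << 2) | (center << 1) | right;
            next[i] = (unsigned char)((rule >> index) & 1u);
        }
        memcpy(prev, curr, sizeof curr);
        memcpy(curr, next, sizeof next);
    }
    return 0;
}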

I suppose, in a way, that some classic PRNGs are non-CA examples of systems whose output, from step 0, even with visibility into the system rule, does not demonstrate a visible initialization period in which the system is organizing itself into a state where it can slip secret messages past me in the mail, and yet, those systems demonstrate decidedly nonrandom (cyclic) behavior, even while most people’s way of perceiving the system makes the system look completely random, through and through. It’s not intelligent behavior, as far as I know, so I don’t find that example very satisfying, myself.
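
For what it’s worth, here is the kind of classic PRNG I mean, sketched in C: a small linear congruential generator with constants picked only so that the cycle is short enough to measure directly. Its outputs look noisy from the first step, with no visible settling-in period, and yet the loop at the end shows the behavior is exactly cyclic.

/* A small linear congruential generator: output that looks noisy from the
 * very first step, with no visible initialization period, yet is exactly
 * cyclic. The constants are small so the cycle is easy to measure. */
#include <stdio.h>

int main(void)
{
    const unsigned a = 75, c = 74, m = 65537;
    unsigned seed = 1, x = seed;

    printf("first outputs:");
    for (int i = 0; i < 10; i++) {
        x = (a * x + c) % m;
        printf(" %u", x);
    }
    putchar('\n');

    /* Walk until the state returns to the seed: the length of the cycle. */
    unsigned long period = 0;
    x = seed;
    do {
        x = (a * x + c) % m;
        period++;
    } while (x != seed);

    printf("period: %lu (state space of size %u)\n", period, m);
    return 0;
}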

Is there a system where I can know the rule, see initial state and output from step 0, that looks random from step 0 (when using the 2d grid visualization, by which we’d say it’s Class III), yet is meaningfully nonrandom when viewed in a different way? I don’t know. I’ve looked through quite a lot of CA-like systems, programmatically searching for such an example, without finding one.

You’re right, Mr. Cawley, you can classify anything you please. =) (I hope you keep doing so.) And I like the way you all classify things, Class I-IV and such. There’s still a nagging question, though, in my brain, about whether I can be sure that every Class III system is, in fact, not possibly a universal system that is just hard for me to see. Absent a satisfying example, however, I certainly defer to you that what looks unintelligently random is exactly that.

Are you slipping secret messages past me in the mail?

NKS and physics and GUTs (or :: the religiosity of physics)

“Now, of course, the single greatest modeling challenge for NKS is fundamental physics.” (from David Brown)

Mr. Brown, I’m going to pick on you a little bit. But it’s not personal. The sentiment you express in the sentence I quoted runs through enough of the NKS discussion I’ve heard that I want to say a bit of what I think about it.

———

With respect, I understand that the statement you made above represents a view held by many who come to NKS from physics, and while I certainly recognize that an NKS theory of our “physical” world would be a historic breakthrough of the first order, I don’t think that the above statement will be true for many people for more than about 15 years (from now)…if it’s even true now.

It would be an amazing and profound thing if we, organisms within our “physical” world, modeled the physics of our world with NKS, or modeled them completely with any methodology. It would be the ultimate look in the mirror, perhaps.

But thinking isn’t going to stop if that happens; science isn’t going to stop if that happens. Even though our world, from our point of view, has a shit-ton of atoms, modeling the particular world we’re in, while profound (from our point of view), is hardly the greatest challenge—or even the greatest modeling challenge—for NKS, or any other discipline.

Given the limit of running a simulation of a particular system from within that system…the limitation of running out of building blocks to use for the simulation of the thing as your simulation, in accuracy, in completeness, in size, approaches those of the universe you’re simulating…having a GUT, while amazing and useful and profound, won’t be the end of modeling…

Modeling the behavior of a corporation that is modeling your behavior, for competitive purposes, for example, will be more of an engineering and a theoretical challenge, I think. Modeling the behavior of the simplest organism or culture of organisms will be a greater challenge than modeling the physical universe…and before you say it, I think that is true even though the corporation, simple organism, and culture I am talking about modeling are of course in actuality built on top of the substrate of our physical universe.

While true, that doesn’t matter, practically, for the majority of simulations people do now, or are going to do.

Simulating the universe based on an accurate model of physics is of course highly useful…for understanding and observing in high detail small little parts of what goes on in our world…like the first parts of XYZ-type-of-explosion, etc. And of course whoever creates such a model will be the next Newton, in terms of human history books. And that matters to people’s egos, in addition to the accomplishment having real value.

But to say it’s the greatest modeling challenge for NKS is just wrong. It might be the greatest in some philosophical sense, it might be the greatest in some sort of metaphysical sense, but in an engineering sense, in a theoretical sense, it is not the greatest challenge for NKS.

If we were living inside an Amiga (if we were complex emergent beings running on an Amiga), then our coming up with a model that matched the output of whatever processor is in an Amiga would be profound. It would mean a lot to us. (As an aside, it wouldn’t even mean that we understood the workings of the Amiga’s processor, and it wouldn’t give us a clue as to what it might mean from some other organism’s point of view that “we were running on an Amiga”. But that’s not the point I want to assert here. What I want to assert here is that:) Once we modeled the output of the Amiga’s processor such that we completely understood the instructions that figured into the running of the universe that we were running on, there would still be lots for us to do…and I think: greater things for us to do, in terms of engineering and theory.

That’s because it doesn’t matter, in many ways, that we’re running on an Amiga. There are probably already many systems in our world (running on top of the physics of our world…systems) that I suspect will be harder to model than the physics of our world…greater challenges of modeling (in an engineering sense, in a theoretical sense) than the modeling of our particular universe. (Assuming you don’t have access to the rules of the system…which in *most* cases you won’t.)

And frankly (and I know that some physicists aren’t going to like this): coming up with a physics GUT doesn’t mean you understand everything that is built with physics. It doesn’t even mean you can practically simulate any particular thing that happens as a result of physics. Even theoretically, there are theoretical limits on physical simulation of the universe, from within the universe—correct me if I’m wrong, please, physicists…but even if you could control a huge portion of the atoms in the universe while simulating the universe with those atoms, is it not a snake eating its tail?…is there not a simple, practical limit there on the completeness of a simulation of a thing that is running within the [limited] resources of the thing itself (such that you approach a situation where your simulation *is* the thing, and it becomes completely accurate yet fails to maintain the characteristic of a simulation wherein you can figure out some useful information about a future event *before* it happens…I’ve been told before that I (misunderstand? misapply?) this idea of Hawking’s…but I believe he very clearly says exactly this)?

The physics GUT gets a lot of air-time, it’s profound, it’s elusive, it will add someone’s name to the books of human history, but…it’s not, in many ways, the end-all be-all that it is sometimes touted as. Whether it’s with NKS, or whatever theory, when someone comes up with the first widely-accepted GUT, this is what is going to happen: it’ll be on the front page of all the papers, no one will understand it, someone will get their name next to Newton’s, and then everyone (regardless of their education) will be like: so now what? And then we’ll use the GUT in select simulation projects where it will be exceedingly useful, and the majority of simulation will continue, unaffected and largely uninformed by the particular GUT. Then, some years later, someone is going to come up with another GUT, based on different theory, and they’ll work equally well and we’ll work on translating the theory of the one into the theory of the other…

Our universe is special to us because it’s ours. But systems running essentially in emulation mode on our particular hardware can be more profound to us, as engineers and theoreticians, than the physical universe. (And the fact that one is running on the other does not even mean that the substrative system has a meaningful relationship with the system it supports…knowing everything about physics does not necessarily translate into knowing anything meaningful about a particular system physics supports. This should be clear if you think about it via the Amiga analogy: Linus Torvalds’ knowledge of the Linux kernel gets him 0% of the way down the road of understanding many of the programs people run on Linux…maybe I’m running an old Apple OS 9 emulator and programming emergent, sentient beings in C on top of that. There’s no meaningful relationship, there, between the thoughts of the emergent beings and the Linux kernel. A theory of the Linux kernel will not essentially be useful or even needed in order to do the more profound (greater?) simulation and modeling of the thoughts of the emergent beings that some of us would want to do if we were other beings within that particular universe…)

I understand, I think, the weight that is put on modeling our particular universe. I agree that is profound from a philosophical point of view and from what I can really best call a metaphysical point of view—or a religious point of view…but beyond the specific religion, essentially, of our particular universe, there are all kinds of other universes to model…and because there are necessarily more than one of those emulated universes, whereas our universe is only one…the one…our *uni*verse…that verse is absolutely not the greatest one we will encounter, even though it has a special place in our understanding.

crosspost on forum.wolframscience.com

NKS and physics and GUTs (or :: the religiosity of physics)

Cells with Perspective

What if cells had perspectives on their neighbors, just as we see individuate-able agents in our world as having perspectives? Part of what makes me me is that I have ideas on given subjects that are distinct from others’ ideas on those subjects.

I was thinking about this this morning while thinking about the singularity…for while some people think we can create singular beings that will have our interests in mind and that will share our values, I think that by definition a super-human being will have perspectives distinct from ours.

These are some cellular automata whose cells have perspectives on their neighbors. The new cell values depend on remembered cell values and cell perspectives. The perspectives are morphic in that they change over time based on cell values.

This program could be slightly modified such that perspectives were also based on other perspectives, and of course many simple variations in terms of the shape of cell neighborhoods used for input, etc., are right around the corner.
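
The actual source lives in the CA library mentioned below; purely as a stand-in, here is a rough sketch in C of the mechanism as described (not the author’s code; every formula, rate, and size is an assumption made for illustration): each cell keeps a numeric perspective on each of its two neighbors, the next cell value blends the neighbors’ previous values as seen through those perspectives, and the perspectives themselves drift toward the neighbor values they observe.

/* A small sketch of cells with perspectives, as described above: each cell
 * holds a numeric "perspective" on each of its two neighbors; the next cell
 * value mixes the neighbors' previous values weighted by those perspectives;
 * and the perspectives drift toward the values they observe, so they are
 * morphic. All specific formulas, rates, and sizes are illustrative guesses. */
#include <stdio.h>

#define WIDTH 72
#define STEPS 48
#define DRIFT 0.15       /* how quickly a perspective bends toward a neighbor */

int main(void)
{
    double value[WIDTH] = {0.0}, next[WIDTH];
    double persp[WIDTH][2];          /* [i][0]: view of left neighbor, [i][1]: of right */

    for (int i = 0; i < WIDTH; i++) {
        persp[i][0] = 0.25;          /* every cell starts with one view of its left... */
        persp[i][1] = 0.75;          /* ...and a different view of its right */
    }
    value[WIDTH / 2] = 0.7;          /* single bright seed cell */

    for (int t = 0; t < STEPS; t++) {
        for (int i = 0; i < WIDTH; i++)
            putchar(value[i] > 0.5 ? '#' : (value[i] > 0.1 ? '.' : ' '));
        putchar('\n');

        for (int i = 0; i < WIDTH; i++) {
            int l = (i + WIDTH - 1) % WIDTH;
            int r = (i + 1) % WIDTH;

            /* New value: the cell's own value plus each neighbor's value as
             * seen through this cell's perspective on that neighbor. */
            double seen_left  = persp[i][0] * value[l];
            double seen_right = persp[i][1] * value[r];
            double mixed = value[i] + seen_left + seen_right;
            next[i] = mixed - (int)mixed;          /* wrap to keep values in [0,1) */

            /* Morphic perspectives: each view drifts toward the neighbor's value. */
            persp[i][0] += DRIFT * (value[l] - persp[i][0]);
            persp[i][1] += DRIFT * (value[r] - persp[i][1]);
        }
        for (int i = 0; i < WIDTH; i++)
            value[i] = next[i];
    }
    return 0;
}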

This CA library seems kind of dated and inflexible to me these days, but here is the source code containing the details of the systems pictured here.

crosspost on forum.wolframscience.com

Cells with Perspective