>
> Unfortunately, most models are not built atop fixed instruction sets.
> Most models are built atop a very complicated stack of abstraction
> layers, which means the effective number of "instructions" and terminals
> (data types as well as values) is infinite. If we were building our
> models in, say, assembly, I might agree with you.
>
Actually, assembly is the most versatile, because it is ultimately what
all computer languages have to target. Only when one has abstraction
layers is there the _possibility_ of artificially limiting things (like
saying that functions can't define functions).
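
For instance, in any language with first-class functions this is
perfectly ordinary (Python here purely as an illustration); a layer that
forbade it would be imposing a restriction the machine itself doesn't
have:

    def make_adder(n):
        # a function defining (and returning) another function
        def add(x):
            return x + n
        return add

    add_three = make_adder(3)
    print(add_three(4))   # prints 7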
> > General
> > and effective methods for global search can in fact be exactly the same
> > for numbers and rules: 0) create a set of starting candidates 1)
> > evaluate them, 2) tweak the good 3) destroy the bad, 4) go to 1.
>
> You're playing language games. Yes, the methods _can_ be the same in
> the extreme case you lay out. But, in fact they are NOT the same in
> most cases.
>
Simulated annealing and genetic algorithms offer global optimization,
whereas common quasi-Newton methods do not: the latter are merely
efficient local optimizers and only handle simple landscapes. In any
case, the distinction doesn't really matter, because both an operand and
an operator can be encoded as a number (and are on real machines -- that
encoding is what gives us all the software we use). Thus `conventional'
optimizers can in principle be applied to operators as well; it's just
that you probably wouldn't get anywhere, because conventional optimizers
can't handle the non-linearity of ad-hoc functions, hard edges, etc. The
remaining issues with searching over sequences of operators are just
practical considerations, e.g. for machine code one needs to trap
exceptions and put time limits on execution.
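
To make that concrete, here is a rough sketch in Python of the
create/evaluate/tweak/destroy loop over integer-encoded operator
sequences, with the exception trap and execution limit mentioned above.
The opcodes, the toy stack machine and the fitness target are all made
up for illustration; a real system would execute machine code and score
it against the actual problem.

    import random

    # Toy instruction set: a program is a list of integers (opcodes).
    # 0 = PUSH 1, 1 = ADD, 2 = MUL, 3 = SUB -- invented for this sketch.
    OPCODES = (0, 1, 2, 3)
    LENGTH, TARGET = 15, 10

    def run(program, max_steps=100):
        # Interpret the program on a tiny stack machine. Malformed
        # programs (e.g. popping an empty stack) are trapped, just as one
        # would trap faults in candidate machine code, and max_steps
        # plays the role of a time limit on execution.
        stack = []
        try:
            for step, op in enumerate(program):
                if step >= max_steps:
                    break
                if op == 0:
                    stack.append(1)
                elif op == 1:
                    stack.append(stack.pop() + stack.pop())
                elif op == 2:
                    stack.append(stack.pop() * stack.pop())
                else:
                    stack.append(stack.pop() - stack.pop())
            return stack[-1] if stack else 0
        except IndexError:
            return 0

    def fitness(program):
        return -abs(run(program) - TARGET)  # closer to target is better

    # 0) create a set of starting candidates
    population = [[random.choice(OPCODES) for _ in range(LENGTH)]
                  for _ in range(50)]

    for generation in range(200):
        # 1) evaluate them; 3) destroy the bad
        population.sort(key=fitness, reverse=True)
        survivors = population[:25]
        # 2) tweak the good (point mutation of a single opcode)
        children = []
        for parent in survivors:
            child = list(parent)
            child[random.randrange(LENGTH)] = random.choice(OPCODES)
            children.append(child)
        # 4) go to 1
        population = survivors + children

    best = max(population, key=fitness)
    print(best, run(best))

The same loop works whether the numbers being tweaked are read as
operands or as operators; the difficulty lives in the fitness landscape,
not in the loop.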
> The source code is not the model, it's merely one of the
> many generators for the model.
But I bet the midi-chlorians know where to find the whole thing.
Marcus