How is a vector space like an evolutionary function?


How is a vector space like an evolutionary function?

jon zingale
At first glance, the commonality is one of contingency. Vector spaces
are contingent on underlying fields, just as evolutionary functions are
contingent on underlying goals. Before jumping to the conclusion that I
believe that evolutionary functions are vector spaces, let me mention
that in place of vector spaces I could have said monoid, algebra, module,
or a whole host of other higher-order structures. What is important
here is not the particular category, but the way that these higher-order
structures are freely constructed and the way that they relate to their
associated underlying structures[⁛].

While some mathematicians will argue that these structures exist a priori,
one can just as easily interpret the goal of such a construction to be
the design of new structures. In a sense, a vector space is designed for
the needs of a mathematician and founded upon the existence of a field.

Consider the field of integers modulo 5, here named 𝔽5. This object can
be thought of as a machine that can take an expression (3×7 + 2/3),
give an interpretation (3⊗2 ⊕ 2⊗2, since 7 ≡ 2 and 1/3 ≡ 2 mod 5), and
evaluate the expression (3⊗2 ⊕ 2⊗2 ≡ 1 ⊕ 4 ≡ 0) relative to the
interpretation. Now 𝔽5 is an algebraic object and so doesn't really have
a notion of distance, much less richer geometric notions like origin or
dimension[ℽ]. This object can do little more than act as a calculator
that consumes expressions and returns values. However, through the magic
of a free construction, we can consider the elements {0,1,2,3,4} of 𝔽5
as tokenized values, freed from their relations to one another. Where
previously they could be combined with one another (added, multiplied,
and so on), now they are simply names, independent of and incomparable
to one another. For clarity here, I will write them differently as
{⓪,⓵,⓶,⓷,⓸} to distinguish them from the non-tokenized field values.
"What does this buy us?" you may ask. Now, when we consider mixed
expressions like 5*⓵ + 7*⓶ + 12*⓷ + 2*⓶, we can agree to sort like
things (5*⓵ + 9*⓶ + 12*⓷) and otherwise let the expression remain
irreducible. The irreducibility here buys us a notion of dimension[↑],
and we quickly find that many of the nice properties we would like of a
space are suddenly available to us. Crucially, these properties were
nowhere to be found in the original underlying field. That is to say,
these properties arise as a kind of epiphenomena with respect to the
underlying field.
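
For readers who like to see code, here is a minimal Haskell sketch of
this tokenization. The names F5, Tok, Expr, and collect are illustrative
inventions of my own, and the only library assumed is the standard
containers package:

import qualified Data.Map.Strict as Map

-- The field: integers mod 5, usable only as a calculator.
newtype F5 = F5 Int deriving (Eq, Show)

mkF5 :: Int -> F5
mkF5 n = F5 (n `mod` 5)

add5, mul5 :: F5 -> F5 -> F5
add5 (F5 a) (F5 b) = mkF5 (a + b)
mul5 (F5 a) (F5 b) = mkF5 (a * b)

-- The tokenized values: the same five names, now incomparable generators.
data Tok = T0 | T1 | T2 | T3 | T4 deriving (Eq, Ord, Show)

-- A formal, irreducible expression: a list of (coefficient, token) terms.
type Expr = [(Int, Tok)]

-- The only simplification available is sorting and collecting like terms,
-- e.g. [(5,T1),(7,T2),(12,T3),(2,T2)] collects to [(5,T1),(9,T2),(12,T3)].
collect :: Expr -> Expr
collect e =
  [ (c, t) | (t, c) <- Map.toList (Map.fromListWith (+) [ (t, c) | (c, t) <- e ]) ]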

The properties now granted to us via the inclusion of tokenized values
as generators are one half of the story. Dual to the inclusion is another
structural map named evaluation. This map, like a gen-phen (genotype to
phenotype) map, founds all of the higher-order operations by giving them
a direct interpretation below in the underlying field. Taken together,
the inclusion map and the evaluation map do a bit more. They ensure a
surprising correspondence between the ways one can linearly transform
spaces and the ways one can map the tokenized values of one space into
another. This fact is often stated as "a linear transformation is
determined by its action on a basis".

Structures arising from constructions like the one above are ubiquitous
in mathematics and demonstrate a way that epiphenomena (vector, inner
product, tensor, distance, origin, dimension, theorems about bases) can
arise from the design of higher-order structures while relating to the
lower-order structures they are founded upon. My hope is that drawing
this analogy will be found useful and produce a spark for those who know
evolutionary theory better than I[†].

Jon

[⁛] See the description of the free vector space construction from the
introduction to chapter 4 of 'Categories for the Working Mathematician'.

[ℽ] Some here probably wish to exclaim, "but wait, I can define a metric
on 𝔽5!" I wish to deflect this by asserting that the idea of a metric is
a geometric notion and that philosophically it may be cleaner to consider
the metric as being defined not on 𝔽5, but on 𝔽5 construed as a space,
Met(𝔽5) say.

[↑] The tokens ⓵, ⓶, and ⓷ in the expression above play the role of
independent vectors. An expression like 4*⓶ + 2*⓷ can now be interpreted
as moving 4 steps in the ⓶ direction, followed by moving 2 steps in the
⓷ direction.

[†] Just about everyone.


Re: How is a vector space like an evolutionary function?

Frank Wimberly-2
Jon,

I'll think about that more. An initial reaction is that I'm surprised that you call monoids, rings, etc. "higher structures". They have less structure than a vector space, don't they? Is it because they're more general?

Frank

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM


Re: How is a vector space like an evolutionary function?

gepr
In reply to this post by jon zingale
Hm. I'm not sure epiphenomena is the right word. When you say "these [e.g. sorted token] properties were nowhere to be found in the underlying field", I'm not sure that's quite true. There's a sense in which each token is grouped by addition already 4+1=4+1+4+1 ... All you (seem to) have done is remove the reduction rule. E.g. 4+1↛0. I.e. the structure of the groupings seems, at least somewhat, causally related to the underlying field.

Hopping, then, up to the premature registration of and reliance upon some structure like a vector space or seed spreading, I think it might be important to relax your claim that the higher-order structures are *epi*phenomenal. I.e. allow that there *might* be some causal relation to the underlying mechanism(s) and small-scoped goals to the function, but the project is to find out *if* that's the case and, if so, what that causal relationship is.

To try to be a little clearer, it may be important to start out with the falsifiable claim that they're purely epi, then try to constructively demonstrate particulars of the forward map (from generator structure to phenomenal structure).


Re: How is a vector space like an evolutionary function?

jon zingale
In reply to this post by Frank Wimberly-2
I probably should have added an abstract :) What I am getting at here is
that *free* object constructions build higher-order structures relative
to those they are built from and relative to the target of their
associated right adjoint (often the category of sets, via a forgetful
functor). Because these higher-order structures then support notions
that may not exist in any direct way in the structures they are built
from, one can view these newly supported notions as a kind of
*epiphenomena* relative to the underlying structure.

While it is not meaningful to speak about the length of a set, we can
support a higher-order monoidal structure where length is a meaningful
notion. For a monoid-relevant example[𝜆], consider the following
recursive definition of the length of a list:

len :: [a] -> Int
len []     = 0                 -- the empty list has length zero
len [t]    = 1                 -- a singleton list has length one
len (t:ts) = len [t] + len ts  -- a longer list splits as head plus tail

What I seek to show is that this definition follows directly from an
adjunction that constructs *free* monoids from sets.

A functor F :: Set -> Mon builds, from the elements of a set X, the set
of all possible words on X, and equips this set with a multiplication
(++) and an identity element ([]). This functor has a right adjoint, a
'forgetful' functor G :: Mon -> Set, which forgets the monoidal
structure and, applied to F(X), returns the set of all possible words on
X, written X*. We can then look at morphisms from this object
(X*, ++, []) into a natural number monoid (ℕ, +, 0), taking
concatenation to addition and the empty word to the additive identity
(the first and third rules in the definition above). Now, the second
rule (len [t] = 1) is a little arbitrary and finds its meaning in the
interaction between the underlying category of sets and the higher-order
monoidal category. There, in Set, we find a composition that ultimately
gives len its character. The composition of functors (G∘F) gives a
natural map, η, which includes X into X* as an inclusion of generators.
Looking at η :: X -> X* and the map const1 :: X -> ℕ (where ℕ is given
by the same forgetful functor G and every element of X is mapped to the
number 1), we find that the unique monoid homomorphism len, whose
underlying function G(len) :: X* -> ℕ makes the triangle commute, gives
us, via the monoidal structure, the notion of length we seek. The
structural functors (G and F) can be seen as *founding* a category of
monoids upon a category of sets, and dually *structuring* the category
of sets by the category of monoids.
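
A small Haskell rendering of this universal property, as a sketch only:
lists stand in for the free monoid on a set, extend plays the role of
the unique homomorphism determined by a map of generators (it is the
Prelude's foldMap in different clothing), and len' recovers the
definition above from the generator map const 1.

import Data.Monoid (Sum (..))

-- Any map of generators f :: a -> m extends uniquely to a monoid
-- homomorphism from the free monoid [a] into m.
extend :: Monoid m => (a -> m) -> [a] -> m
extend _ []     = mempty              -- the empty word goes to the identity
extend f (x:xs) = f x <> extend f xs  -- concatenation goes to multiplication

-- Length is the extension of const 1 into (ℕ, +, 0), modelled here
-- by Int under addition via the Sum wrapper.
len' :: [a] -> Int
len' = getSum . extend (const (Sum 1))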

[𝜆] This example is from Benjamin Pierce's 'Basic Category Theory for
Computer Scientists'.




Re: How is a vector space like an evolutionary function?

gepr
Now I'm even more worried that epiphenomena is not the right concept, even in its (I think) less common Pyrrhonian form. To the extent that the phenomenal layer can be treated as (at least somewhat) independent of its generative layer or, further, the extent to which the outer layer might "structure" the inner layer, I think they've graduated to primary phenomena.

On 7/27/20 9:56 AM, jon zingale wrote:

> That
> these higher-order structures then support notions that may not exist in a
> direct way relative to the structures they are built from, one can view
> these newly supported notions as a kind of *epiphenomena* relative to the
> underlying structure.
>
> [...] The structural
> functors (G and F) can be seen as *founding* a category of monoids upon a
> category of sets, and dually *structuring* the category of sets by the
> category of monoids.



Re: How is a vector space like an evolutionary function?

jon zingale
To the extent that I understand Nick's idea, a satisfactory theory of
evolutionary function must admit a means for describing epiphenomena,
and, crucially, these epiphenomena must be non-mysterious. The theory may
be considered useful if it distinguishes goals from designs and presents
testable relations between the two[⇅]. Further, Nick offers a model, by
way of Elliott Sober, of a child's toy that demonstrates a special case
of epiphenomena called a spandrel[Ϡ]. I argue here that Sober's model
can be characterized within a class of algorithms whose analysis yields a
way to reason about epiphenomena and whose structural relations (given
by a free construction) are foundational to the class.

In Sober's model[†], we are given a bucket with circular, triangular,
and square-shaped holes in the lid. Additionally, there are matching
circular, triangular, and square-shaped blocks. Each block type has an
associated color: red, yellow, and blue. By some form of magic, as the
blocks enter the bucket through their associated hole in the lid, the
blocks become sorted in the bucket. Circular blocks find their way to
the bottom, triangular blocks to the middle, and the square ones to the
top. Epiphenomenologically, the blocks are found to be sorted by color.

The key shuffle is a shuffling algorithm that produces a shuffled list from
a given one. Given an ordered list of things, we write on each thing a
random number. Next, we sort the list of things by the random numbers,
placing a smaller number to the left of any larger. Lastly, we erase the
numbers from the things and we are left with our things in a shuffled
order. Epiphenomenologically, the things are found to be shuffled.
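
A minimal Haskell sketch of the algorithm just described; to keep it
self-contained, the random keys are passed in as an argument rather than
generated, and it need not match the keyShuffle in the repository linked
later in the thread:

import Data.List (sortOn)

-- Write a key on each thing, sort by the keys, then erase the keys.
-- e.g. keyShuffle [0.42, 0.07, 0.83] "abc" == "bac"
keyShuffle :: Ord r => [r] -> [a] -> [a]
keyShuffle keys things = map snd (sortOn fst (zip keys things))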

Common to both is the exploitation of structural relations between a list
of things and a list of pairs of things. In the key shuffle case, we
imagine the shuffle to 'live' in a category of lists:

keyShuffle :: [a] -> [a],

but the algorithm itself relies on a category of lists of pairs that is
not apparent from the type signature alone.

intermediary :: [(r, a)] -> [(r, a)]

Through this additional structure, the shuffle manifests via sorting: the
sorting of the random numbers causes a shuffling of the given things.
In the case of Sober's algorithm, the pairing of color and shape is
decidedly more direct. The sorting of shapes gives rise to a sorting of
colors, but again, it is effected through a structural map (ideally a
monomorphism) identifying shape with color. The parallel can be drawn
more explicitly:

Sober: Maps from colors to shapes are monomorphisms,
sorting on shapes gives distinctly sorted colors.

Shuffle: Maps from place-values to random numbers are monomorphisms,
sorting on random numbers gives distinctly shuffled place-values.
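
The toy admits the same one-line treatment. A sketch (purely
illustrative; the soberSort in the linked repository may be organized
differently): the blocks are pairs, the sort sees only the shape, and
the colors come out sorted as a side effect.

import Data.List (sortOn)

data Shape = Circle | Triangle | Square deriving (Eq, Ord, Show)
data Color = Red | Yellow | Blue deriving (Eq, Show)

-- Sorting acts on shapes alone; colors ride along via the pairing.
soberSort :: [(Shape, Color)] -> [(Shape, Color)]
soberSort = sortOn fst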

Of course, the monomorphism condition is something of a red herring.
To mix things up a bit, consider what happens when, instead of color, we
assign to each shape-type a prime number and write on each block a number
that is divisible by the prime associated with its shape-type. Again,
say, all the circles end up at the bottom and the numbers on these blocks
are all divisible by 2, the triangles carry numbers divisible by 3, and
the squares numbers divisible by 5. Now things are more confusing,
because we could have written the number 10 on a square block and found
that, though it was divisible by 2, the block found its way to the top.

Composites: Maps from numbers to shapes are non-monomorphisms[⋔],
sorting on shapes gives non-distinctly sorted numbers.
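
To make the non-injectivity concrete, here is a small extension of the
sketch above (shapePrime, validLabel, and compositeSort are my own
names): a block's number need only be divisible by its shape's prime, so
a number such as 10 can legitimately label either a circle or a square.

-- Reusing Shape and sortOn from the sketch above.
shapePrime :: Shape -> Int
shapePrime Circle   = 2
shapePrime Triangle = 3
shapePrime Square   = 5

-- A labelling is valid when the number is divisible by the shape's prime;
-- validLabel Circle 10 == True and validLabel Square 10 == True, so the
-- map from numbers back to shapes offers no unique choice.
validLabel :: Shape -> Int -> Bool
validLabel s n = n `mod` shapePrime s == 0

-- Sorting on shape still works, but the numbers are only loosely grouped.
compositeSort :: [(Shape, Int)] -> [(Shape, Int)]
compositeSort = sortOn fst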

More interestingly, and closer to the real-life examples discussed by
Eric and Nick in vFriam, for a large number of blocks with numbers
randomly assigned to shapes (respecting the divisibility constraint), we
find that all of the circles carry numbers divisible by two, while fewer
squares carry the same. Still, there will be a good deal of mixture. On
the one hand, mixture; on the other, meaningful and quantifiable
distinctions.

To my mind, it is the structural relationship between a category of
lists of things and a category of lists of pairs of things that ties these
examples together[⋌]. Here, the *design* of an algorithm that shuffles
things and the *design* of a child's toy that *epiphenomenologically*
'sorts for color' are possible via the exploitation of intermediary types,
and the structural relations between them. I wish that I could have
written about this more definitively, but I feared not writing anything
at all. My apologies, quite a bit more could and should have been said.

[Ϡ] From Wikipedia: https://en.wikipedia.org/wiki/Spandrel_(biology)

[⇅] Computer science is flush with examples of algorithms where the
instruction set given to a computer gives rise to surprising behavior
exploited by an algorithmancer to satisfy a particular task. The creative
act of designing algorithms 'lives' between two worlds, one of pointers
and operations and another of pleasing effects. See the reference, in
the above post, to the styrofoam herding robot and the algorithms in this
post for examples of *design*. That these algorithms exploit epiphenomena
to perform tasks and that they are amenable to analysis makes them
exemplary for the design and construction of a general theory.

[†] Taken on faith here as I have never read Sober's actual account.
Instead, the model is recounted from my conversations with Nick.

[⋔] Not only is the map not a monomorphism, but there are many possible
choices of representative. That a given composite could be mapped
to one or another shape is what gives this example its peculiarity.

[⋌] Hiding in the background are additional structural maps,
𝛿 :: X -> X∏X and 𝜀 :: (A∏B, A∏B) -> (A,B), which serve to found the
free construction relied on here and are similar to the maps in the
earlier posts. Again, note that while the adjoints *found* the
relationship between the higher-order structure and the lower,
determinations between the related categories can often be fairly loose.
Consider the functorial relation between a vector space and its dual:
the relationship exists whether or not we specify a basis. For a
different and novel
example, see Example 2.2.1 on page 55 of Emily Riehl's book:
http://www.math.jhu.edu/~eriehl/context.pdf




Re: How is a vector space like an evolutionary function?

jon zingale
For those algorithmically inclined readers, I coded up the examples in
Haskell. While the soberSort, keyShuffle, and compositeSort could each
be written in a few lines, I took the time to build out a KeySortable
type class and wrote the examples relative to it. There are 4 files
involved: Spandel.hs, MixedSpandrel.hs, Shape.hs, and Sortable.hs.

The first two simply implement the examples. Shape.hs defines a notion
of ordered shapes and provides a generator for producing random lists
of shapes. Sortable.hs has the class implementation details. There you
will find that I have defined a _Pair_ datatype and then extend the Ord
class to it (ordering based on the first in the pair, the 'primary
phenomena') and to the Bifunctor class. Lastly, I define a KeySortable
class, contingent on Bifunctor, and extend it to include _Pair_. Aside
from the class constraint, creating an instance of KeySortable needs
the usual product maps (η, 𝜀), but then gets _sort_ and _shuffle_ for
free. For interested readers, the code can be found here:

https://github.com/jonzingale/Haskell/tree/master/epiphenomena
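
For orientation, a hedged guess at the shape such a class might take;
the authoritative definitions are the ones in the repository above,
which may well differ in names and details:

import Data.Bifunctor (Bifunctor (..))
import Data.List (sortBy)
import Data.Ord (comparing)

-- A pair whose first component is the 'primary phenomenon'.
newtype Pair r a = Pair (r, a)

instance Bifunctor Pair where
  bimap f g (Pair (r, a)) = Pair (f r, g a)

-- A key-sortable container: supply the product maps and sorting on keys
-- (and hence shuffling of the carried values) comes for free.
class Bifunctor p => KeySortable p where
  pairUp :: r -> a -> p r a   -- roughly the unit η
  unpair :: p r a -> (r, a)   -- roughly the counit 𝜀

  keySort :: Ord r => [p r a] -> [a]
  keySort = map (snd . unpair) . sortBy (comparing (fst . unpair))

instance KeySortable Pair where
  pairUp r a       = Pair (r, a)
  unpair (Pair ra) = ra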



