Model of induction


Re: Model of induction

Nick Thompson

Oh c__p, Roger. Even I should have seen that coming. 

 

Yes, Nick, what ever do you MEAN by a GENERATED RANDOM number? 

 

Seems like an oxymoron, doesn’t it?

 

Ok.  Can’t I just ask that we stipulate that the stream of numbers on the screen of the computer is random and let it go at that? 

 

Nick

 

PS  Roger, I hear that the high temp in Boston will be 19 degrees?  How is it in the bubble? 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Eric Charles
Sent: Tuesday, December 13, 2016 6:50 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Model of induction

 

Roger, this seems to get to the heart of the matter! I think we must wonder whether your final sentence is not begging the question: "This was discovered because the random numbers were used in simulations which failed to simulate the random processes they were designed to simulate."

 

I'm not saying that it is begging the question, I'm just saying it seems to me like we are peering deep into the rabbit hole. Presumably, we must have rather extreme confidence that the process we are trying to simulate is, in fact, "truly random", AND rather extreme confidence that our simulation is not simply having a "bad run", as one would expect any random system to have every so often.  Maybe our simulation is doing great, but the process we are trying to simulate is not random in several subtle ways we have not anticipated. How would we know?

 

(P.S. In hindsight, this is either right at the heart of the matter, or a complete tangent, and I'm not as confident which it is as I was when I started replying.)

 

 



-----------
Eric P. Charles, Ph.D.
Supervisory Survey Statistician

U.S. Marine Corps

 

On Tue, Dec 13, 2016 at 8:24 AM, Roger Critchlow <[hidden email]> wrote:

You have left the model for the untainted computers unspecified, but let's say that they are producing uniform pseudo-random numbers over some interval, like 0 .. 1.  Then your question becomes how do we distinguish the tainted computers, which are only simulating a uniform distribution?

 

This problem encapsulates the history of pseudo-random number generation algorithms.  A researcher named George Marsaglia spent a good part of his career developing algorithms which detected flaws in pseudo-random number generators.  The battery of tests is described here, https://en.wikipedia.org/wiki/Diehard_tests, so I won't go over them, but it's a good list.

 

But, as Marsaglia reported in http://www.ics.uci.edu/~fowlkes/class/cs177/marsaglia.pdf, we don't even know all the ways a pseudo-random number generator can go wrong, we discover the catalog of faults as we go merrily assuming that the algorithm is producing numbers with the properties of our ideal distribution.  This was discovered because the random numbers were used in simulations which failed to simulate the random processes they were designed to simulate.

 

-- rec --

 

 

On Mon, Dec 12, 2016 at 4:45 PM, Nick Thompson <[hidden email]> wrote:

Everybody,

 

As usual, when we “citizens” ask mathematical questions, we throw in WAY too much surplus meaning. 

 

Thanks for all your fine-tuned efforts to straighten me out. 

 

Let’s take out all the colorful stuff and try again.  Imagine a thousand computers, each generating a list of random numbers.  Now imagine that for some small quantity of these computers, the numbers generated are in a normal (Poisson?) distribution with mean mu and standard deviation s.  Now, the problem is how to detect these non-random computers and estimate the values of mu and s. 
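
As a rough sketch of that setup (assuming, as Roger does elsewhere in the thread, that the untainted computers emit uniform values on [0, 1]; the stream length, the tainted indices, mu, s, and the cutoff below are all invented for illustration), one way to screen the streams and estimate mu and s might look like this in Python:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_machines, n_draws = 1000, 500

# Most machines emit U(0,1); a handful of "tainted" ones emit Normal(mu, s).
streams = rng.uniform(0.0, 1.0, size=(n_machines, n_draws))
tainted, mu, s = [17, 203, 911], 0.5, 0.1
for i in tainted:
    streams[i] = rng.normal(mu, s, size=n_draws)

# Screen each stream against the uniform null with a Kolmogorov-Smirnov test.
suspects = []
for i, x in enumerate(streams):
    stat, p = stats.kstest(x, 'uniform')      # null hypothesis: x ~ U(0,1)
    if p < 0.01 / n_machines:                 # crude Bonferroni correction for 1000 tests
        suspects.append(i)

# Estimate mu and s for each flagged stream from its sample mean and standard deviation.
for i in suspects:
    print(i, streams[i].mean(), streams[i].std(ddof=1))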

 

Let’s leave aside for the moment what kind of –duction that is.  I shouldn’t have thrown that in.  And  besides, I’ve had enough humiliation for one day. 

 

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Frank Wimberly
Sent: Monday, December 12, 2016 12:06 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Model of induction

 

Mathematical induction is a method for proving theorems.  "Scientific induction" is a method for accumulating evidence to support one hypothesis or another; no proof involved, or possible.

 

Frank

Frank Wimberly
Phone (505) 670-9918

 

On Dec 12, 2016 11:44 AM, "Owen Densmore" <[hidden email]> wrote:

What's the difference between mathematical induction and scientific?

 

   -- Owen

 

On Mon, Dec 12, 2016 at 10:44 AM, Robert J. Cordingley <[hidden email]> wrote:

Based on https://plato.stanford.edu/entries/peirce/#dia - it looks like abduction (AAA-2) to me - i.e., developing an educated guess as to which might be the winning wheel. Enough funds should find it with some degree of certainty, but that may be a different question and should use different statistics, because the 'longest run' is a poor metric compared to, say, net winnings or average rate of winning. A long run is itself a data point, and the premise in red (below) is false.

Waiting for wisdom to kick in. R

PS FWIW the article does not contain the phrase 'scientific induction' R

 

On 12/12/16 12:31 AM, Nick Thompson wrote:

Dear Wise Persons,

 

Would the following work? 

 

Imagine you enter a casino that has a thousand roulette tables.  The rumor circulates around the casino that one of the wheels is loaded.  So, you call up a thousand of your friends and you all work together to find the loaded wheel.  Why, because if you use your knowledge to play that wheel you will make a LOT of money.  Now the problem you all face, of course, is that a run of successes is not an infallible sign of a loaded wheel.  In fact, given randomness, it is assured that with a thousand players playing a thousand wheels as fast as they can, there will be random long runs of successes.  But the longer a run of success continues, the greater is the probability that the wheel that produces those successes is biased.  So, your team of players would be paid, on this account, for beginning to focus its play on those wheels with the longest runs.
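
A toy simulation of this setup (all numbers invented: 1000 wheels, 2000 spins each, one wheel paying off at twice the fair rate) illustrates both points - the single longest run is a noisy clue, while the accumulated win rate tends to settle on the loaded wheel:

import numpy as np

rng = np.random.default_rng(1)
n_wheels, n_spins = 1000, 2000
p_fair, p_loaded = 1/38, 2/38            # invented payoff probabilities
loaded = 137                             # which wheel is loaded (unknown to the players)

p = np.full(n_wheels, p_fair)
p[loaded] = p_loaded
wins = rng.random((n_wheels, n_spins)) < p[:, None]

def longest_run(row):
    best = cur = 0
    for w in row:
        cur = cur + 1 if w else 0
        best = max(best, cur)
    return best

runs = np.array([longest_run(row) for row in wins])
print("longest run on the loaded wheel:", runs[loaded])
print("longest run anywhere:           ", runs.max(), "on wheel", runs.argmax())

# The longest run alone is noisy; the cumulative win rate is a steadier guide.
rates = wins.mean(axis=1)
print("wheel with the highest win rate:", rates.argmax(), "(the loaded wheel is", loaded, ")")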

 

FWIW, this, I think, is Peirce’s model of scientific induction. 

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

 


 

-- 
Cirrillian 
Web Design & Development
Santa Fe, NM
http://cirrillian.com
<a href="tel:(281)%20989-6272" target="_blank">281-989-6272 (cell)
Member Design Corps of Santa Fe



Re: Model of induction

Owen Densmore
Administrator
Domain Specific Random Number Generators? Kinda interesting idea.

   -- Owen



Re: Model of induction

Marcus G. Daniels

If you can write down a Hamiltonian for your domain-specific problem, the D-Wave could sample from that Boltzmann distribution.
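
(Nothing D-Wave-specific here, but as a flavor of what "sampling from the Boltzmann distribution of a Hamiltonian" means, a classical Metropolis sketch for a tiny Ising chain - the coupling, temperature, chain length and step counts are all arbitrary choices:)

import numpy as np

rng = np.random.default_rng(2)
n, J, T = 16, 1.0, 1.0                    # spins, coupling, temperature (arbitrary)

def H(s):
    # Nearest-neighbour Ising chain energy: -J * sum_i s_i * s_{i+1}
    return -J * np.sum(s[:-1] * s[1:])

s = rng.choice([-1, 1], size=n)
samples = []
for step in range(20000):
    i = rng.integers(n)                   # propose flipping one spin
    proposal = s.copy()
    proposal[i] = -proposal[i]
    dE = H(proposal) - H(s)
    if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance rule
        s = proposal
    if step % 10 == 0:
        samples.append(s.copy())

samples = np.array(samples)
print("mean energy:", np.mean([H(x) for x in samples]))
print("mean nearest-neighbour correlation:", (samples[:, :-1] * samples[:, 1:]).mean())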

 

From: Friam [mailto:[hidden email]] On Behalf Of Owen Densmore
Sent: Tuesday, December 13, 2016 11:18 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Model of induction

 

Domain Specific Random Number Generators? Kinda interesting idea.

 

   -- Owen

 



Re: probability vs. statistics (was Re: Model of induction)

Grant Holland
In reply to this post by gepr
Glen,

This topic was well-developed in the last century. The probabilists
argued the issues thoroughly. But I find what the philosophers of
science have to say about the subject a little more pertinent to what
you are asking, since your discussion seems to be somewhat ontological.
In particular I'm thinking of Peirce, Popper and especially Mario Bunge.
The latter two had to account for quantum theory, so are a little more
pertinent - and interesting. I can give you more specific references if
you are interested.

Take care,

Grant


On 12/12/16 4:47 PM, glen ☣ wrote:

> I have a large stash of nonsense I could write that might be on topic.  But the topic coincides with an argument I had about 2 weeks ago.  My opponent said something generalizing about the use of statistics and I made a comment (I thought was funny, but apparently not) that I don't really know what statistics _is_.  I also made the mistake of claiming that I _do_ know what probability theory is. [sigh]  Fast forward through lots of nonsense to the gist:
>
> My opponent claims that time (the experience of, the passage of, etc.) is required by probability theory.  He seemed to hinge his entire argument on the vernacular concept of an "event".  My argument was that, akin to the idea that we discover (rather than invent) math theorems, probability theory was all about counting -- or measurement.  So, it's all already there, including things like power sets.  There's no need for time to pass in order to measure the size of any given subset of the possibility space.
>
> In any case, I'm a bit of a jerk, obviously.  So, I just assumed I was right and didn't look anything up.  But after this conversation here, I decided to spend lunch doing so.  And ran across the idea that probability is the forward map (given the generator, what phenomena will emerge?) and statistics is the inverse map (given the phenomena you see, what's the generator?).  And although neither of these really require time, per se, there is a definite role for [ir]reversibility or at least asymmetry.
>
> So, does anyone here have an opinion on the ontological status of one or both probability and/or statistics?  Am I demonstrating my ignorance by suggesting the "events" we study in probability are not (identical to) the events we experience in space & time?
>
>
> On 12/11/2016 11:31 PM, Nick Thompson wrote:
>> Would the following work?
>>
>> Imagine you enter a casino that has a thousand roulette tables.  The rumor circulates around the casino that one of the wheels is loaded.  So, you call up a thousand of your friends and you all work together to find the loaded wheel.  Why, because if you use your knowledge to play that wheel you will make a LOT of money.  Now the problem you all face, of course, is that a run of successes is not an infallible sign of a loaded wheel.  In fact, given randomness, it is assured that with a thousand players playing a thousand wheels as fast as they can, there will be random long runs of successes.  But the longer a run of success continues, the greater is the probability that the wheel that produces those successes is biased.  So, your team of players would be paid, on this account, for beginning to focus its play on those wheels with the longest runs.
>>
>>  
>>
>> FWIW, this, I think, is Peirce’s model of scientific induction.



Re: probability vs. statistics (was Re: Model of induction)

gepr

Yes, definitely.  I intend to bring up deterministic stochasticity >8^D the next time I see him.  So a discussion of it in the context of QM would be helpful.

On 12/13/2016 10:54 AM, Grant Holland wrote:
> This topic was well-developed in the last century. The probabilists argued the issues thoroughly. But I find what the philosophers of science have to say about the subject a little more pertinent to what you are asking, since your discussion seems to be somewhat ontological. In particular I'm thinking of Peirce, Popper and especially Mario Bunge. The latter two had to account for quantum theory, so are a little more pertinent - and interesting. I can give you more specific references if you are interested.

--
☣ glen

uǝʃƃ ⊥ glen

Re: probability vs. statistics (was Re: Model of induction)

Grant Holland

Glen,

On closer reading of the issue you are interested in, and upon re-consulting the sources I was thinking of (Bunge and Popper), I can see that neither of those sources directly addresses the question of whether time must be involved in order for probability theory to come into play. Nevertheless, I think you may be interested in these two sources anyway.

The works that I've been reading from these two folks are: Causality and Modern Science by Mario Bunge and The Logic of Scientific Discovery by Karl Popper. Bunge takes (positive) probability to essentially be the complement of causation. Thus his book ends up being very much about probability. Popper has an eighty page section on probability and is well worth reading from a philosophy of science perspective. I recommend both of these sources.

While I'm at it, let me add my two cents' worth to the question concerning the difference between probability and statistics. In my view, Probability Theory should be defined as "the study of probability spaces". It's not often defined that way - usually something about "random variables" appears in the definition. But the subject of probability spaces is more inclusive, so I prefer it.

Secondly, it's reasonable to say that a probability space defines "events" (at least in the finite case) as essentially a set of combinations of the sample space (with a few more specifications). Nothing is said in this definition that requires that "the event must occur in the future". But it seems that many people (students) insist that it has to - or else they can't seem to wrap their minds around it. I usually just let them believe that "the event has to be in the future" and let it go at that. But there is nothing in the definition of an event in a probability space that requires anything about time.

I regard the discipline of statistics (of the Fisher/Neyman type) as the study of a particular class of problems pertaining to probability distributions and joint distributions: for example, test of hypotheses, analysis of variance, and other problems. Statistics makes some very specific assumptions that probability theory does not always make: such as that there is an underlying theoretical distribution that exhibits "parameters" against which are compared "sample distributions" that exhibit corresponding "statistics". Moreover, the sweet spot of statistics, as I see it, is the moment and central moment functionals that, essentially, measure chance variation of random variables.

I admit that some folks would say that probability theory is no more inclusive than I described statistics as being. But I think that it is. Admittedly, what I have just said is more along the lines of "what it is to me" - a statement of preference, rather than an ontic argument that "this is what it is".

As long as we're all having a good time...

Grant

On 12/13/16 12:03 PM, glen ☣ wrote:
Yes, definitely.  I intend to bring up deterministic stochasticity >8^D the next time I see him.  So a discussion of it in the context QM would be helpful.

On 12/13/2016 10:54 AM, Grant Holland wrote:
This topic was well-developed in the last century. The probabilists argued the issues thoroughly. But I find what the philosophers of science have to say about the subject a little more pertinent to what you are asking, since your discussion seems to be somewhat ontological. In particular I'm thinking of Peirce, Popper and especially Mario Bunge. The latter two had to account for quantum theory, so are a little more pertinent - and interesting. I can give you more specific references if you are interested.

    



Re: probability vs. statistics (was Re: Model of induction)

Robert Wall
Hi Glen, 

I feel a bit like Nick says he feels when immersed in the stream of such erudite responses to each of your seemingly related, but thread-separated questions.  As always, though, when reading the posted responses in this forum, I learn a lot from the various and remarkable ways questions can be interpreted based on individual experiences.  Perhaps this props up the idea of social constructivism more than Platonism.  So, if you can bear with me, my response here is more of a summary of my takeaways from the variety of responses to your two respective questions, with my own interpretations thrown in and based on my own experiences. 

Taking each question separately ...

Imagine a thousand computers, each generating a list of random numbers.  Now imagine that for some small quantity of these computers, the numbers generated are in a normal (Poisson?) distribution with mean mu and standard deviation s.  Now, the problem is how to detect these non-random computers and estimate the values of mu and s.

Nick's question seems to be about how to determine non-random event generators from independent streams of reportedly random processes.  This is not really difficult to do and doesn't require any assumptions about underlying probability distributions other than that each number in the stream is equally likely as any other number in the stream [i.e., uniformly distributed in probability space] and that the cumulative probability over all possible outcomes sums to unity: the very definition of a random variable ... a non-deterministic event--an observation--mapped to a number line or a categorical bin.  A random variable has both mathematical and philosophical properties, as we have heard in this thread.

For Nick's question, I think that Roger has provided the most practical answer with Marsaglia's Die Hard battery of tests for randomness.  In my professional life, I used these tests to prepare, for example, a QC procedure for ensuring our hashing algorithms remained random allocators after each new build of our software suite.  For example, a simple test called the "poker test" using the Chi-squared distribution could be used to satisfy Nick's question, with the power of the test (i.e., the probability of correctly rejecting the null hypothesis of randomness when it is in fact false, and thus of catching the non-random processes) increasing with larger sample sizes ... longer runs.
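
For concreteness, a much simpler cousin of the poker test - a plain chi-squared frequency test on binned values - might look like the sketch below; the bin count, the sample size, and the use of NumPy's own generator as the stream under test are arbitrary choices:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.random(100_000)                    # the stream under test, assumed to lie in [0, 1)

k = 50                                     # number of equal-width bins
observed, _ = np.histogram(x, bins=k, range=(0.0, 1.0))
expected = np.full(k, len(x) / k)

stat, p = stats.chisquare(observed, expected)
print("chi-squared =", stat, "p-value =", p)   # a small p-value rejects the hypothesis of uniformity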

So, does anyone here have an opinion on the ontological status of one or both probability and/or statistics?  Am I demonstrating my ignorance by suggesting the "events" we study in probability are not (identical to) the events we experience in space & time?

At the risk of exposing my own ignorance, I'll also say your question has to do with the ontological status of any random "event" when treated in any estimation experiments or likelihood computation; that is, are proposed probability events or measured statistical events real? 

For example--examples are always good to help clarify the question--is the likelihood of a lung cancer event given a history of smoking pointing to some reality that will actually occur with a certain amount of uncertainty? In a population of smokers, yes.  For an individual smoker, no. In the language of probability and statistics, we say that in a population of smokers we expect this reality to be observed with a certain amount of certainty (probability). To be sure, these tests would likely involve several levels of contingencies to tame troublesome confounding variables (e.g., age, length of time, smoking rate). Don't want to get into multi-variate statistics, though. 

Obviously, time is involved here but doesn't have to be (e.g., the probability of drawing four aces from a trial of five random draws). An event is an observation in, say, a nonparametric Fisher exact test of significance against the null hypothesis of, say, a person that smokes will contract lung cancer, which we can make contingent on, say, the number of years of smoking. Epidemiological studies can be very complex, so maybe not the best of examples ...

So, since probability and statistics both deal with the idea of an event--as your "opponent" insists--events are just observations that the event of interest [e.g., four of a kind] occurred; I would say epistemologically they are real experiences with a potential (probability) based on either controlled randomized experiments or observational experience.  But is a potential ontologically real?  🤔

Asking if those events come with ontologically real probabilistic properties is another, perhaps, different question?  This gets into worldview notions of determinism and randomness. We tend to say that if a human cannot predict the event in advance, it is random ... enough. If it can be predicted based, say, on known initial conditions, then using probability theory here is misplaced. Still, there are chaotic non-random events that are not practically predictable ... they seem random ... enough.  Santa Fe science writer and book author George Johnson gets into this in his book Fire in the Mind.

I would just close with another comment, this time regarding Roger's recounting of Marsaglia's report on the issues with pseudo-random number generators.  RANDU was used on mainframes for years but was subsequently found to be seriously flawed. If I remember correctly, the rand() function used in C applications was also found to be deficient.  Likely, this is why we need a battery of randomness tests to be sure. But there has been a great deal of research in this area and things have improved dramatically.
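
(RANDU's defect is easy to exhibit: its multiplier is 65539 = 2^16 + 3, which forces x[k+2] - 6*x[k+1] + 9*x[k] to be 0 mod 2^31 for every consecutive triple, so triples of outputs fall on a handful of planes even though each value on its own looks fine. A quick check:)

# RANDU: x[k+1] = 65539 * x[k] mod 2**31
M = 2**31
x = [1]
for _ in range(10_000):
    x.append((65539 * x[-1]) % M)

# Because 65539 = 2**16 + 3, squaring the multiplier gives
# x[k+2] - 6*x[k+1] + 9*x[k] == 0 (mod 2**31) for every consecutive triple.
violations = sum((x[k + 2] - 6 * x[k + 1] + 9 * x[k]) % M != 0 for k in range(len(x) - 2))
print("triples violating the relation:", violations)   # prints 0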

There are even so-called true random number generators that "tap" into off-computer and decidedly random-event sources like atmospheric noise [or even quantum-level events].  But even here, some folks whose worldview sees the universe as deterministic would say that these generators are not truly random either. Chaotic, yes.  But, not random. I say, likely random enough.

Finally, I would say that we can use number generators that are random enough for our own purposes. In fact, for running simulation models, say, to compare competing alternatives for decision support, we need to use pseudo-random number generators in order to be able to gain a sizable reduction in the (random) variance of the results. This would tend to sharpen up our test of significance in comparing the resulting output statistics as well. 

Kind of a fun topic.  Hope this adds a little of its own sharpness to the discussion and doesn't just add variance. 🤔 If y' all deem not, I will expect some change from my $0.02. 🤐

Cheers,

Robert W.


Re: Model of induction

Russell Standish-2
In reply to this post by Nick Thompson
On Mon, Dec 12, 2016 at 02:45:11PM -0700, Nick Thompson wrote:
>  
>
> Let’s take out all the colorful stuff and try again.  Imagine a thousand computers, each generating a list of random numbers.  Now imagine that for some small quantity of these computers, the numbers generated are in n a normal (Poisson?) distribution with mean mu and standard deviation s.  Now, the problem is how to detect these non-random computers and estimate the values of mu and s.  
>

Your question comes down to: given a set of statistical distributions
(ie models), which model best fits a given data source. In your case,
presumably you have two models - a uniform distribution and a normal
(or Poisson - they're two different distributions resulting from
additive versus multiplicative processes respectively) distribution.

The paper to read on this topic is

@Article{Clauset-etal07,
  author =       {Aaron Clauset and Cosma R. Shalizi and Mark E. J. Newman},
  title =        {Power-law Distributions in Empirical Data},
  journal =      {SIAM Review},
  volume = 51,
  pages = {661-703},
  year =         2009,
  note =         {arXiv:0706.1062}
}

Almost everyone doing work in Complex Systems theory with power laws
has been doing it wrong! The way it should be done is to compare a
metric called "likelihood" calculated over the data and a model, for
the different models in question.
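
In Nick's two-model version of the problem the comparison is only a few lines; this sketch fits each candidate by maximum likelihood and compares total log-likelihoods (the data here are fabricated for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.normal(0.5, 0.1, size=500)       # pretend this stream came from one of the computers

# Model 1: uniform on [min, max] (the MLE for a uniform's endpoints is the sample range).
lo, hi = data.min(), data.max()
loglik_uniform = stats.uniform.logpdf(data, loc=lo, scale=hi - lo).sum()

# Model 2: normal with MLE parameters (sample mean and standard deviation).
mu, s = data.mean(), data.std()
loglik_normal = stats.norm.logpdf(data, loc=mu, scale=s).sum()

print("log-likelihood, uniform:", loglik_uniform)
print("log-likelihood, normal: ", loglik_normal)
print("better-fitting model:", "normal" if loglik_normal > loglik_uniform else "uniform")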

I was scheduled to give a talk "Perils of Power Laws" at a local
Complex Systems conference in 2007. Originally, when I proposed the
topic, I planned to synthesise and collect some of my war stories
relating to power law problems - but a couple of months before the
conference, someone showed me Clauset's paper. I was so impressed by
it, not only superseding anything I could do on the timescale, but
also I felt was so important for my colleagues to know about that I
took the unprecedented step of presenting someone else's paper at the
conference. With full attribution, of course. I still feel it was the
most important paper in my field of 2007, and one of the most
important papers of this century. Even though it didn't officially get
published until 2009 :).

Nick's question is unrelated to the question of how to detect whether
a source is random or not. A non-uniform random source is one that can
be transformed into a uniform random source by a computable
transformation, so uniformity is not really a test of randomness.
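
Nick's later example of 100 identical digits at a time makes the point concrete: the value
histogram can be perfectly flat while the ordering is anything but random, so a marginal
test passes and any order-sensitive test fails. A small sketch:

import numpy as np
from scipy import stats

# 100 copies of 0, then 100 copies of 1, ..., then 100 copies of 9:
# the marginal distribution over digit values is exactly uniform.
x = np.repeat(np.arange(10), 100)

counts = np.bincount(x, minlength=10)
print("chi-squared against uniform digits:", stats.chisquare(counts))   # p-value = 1.0

# But the ordering is wildly non-random.
r = np.corrcoef(x[:-1], x[1:])[0, 1]
print("lag-1 serial correlation:", r)        # close to 1 for this blocky sequence

runs = 1 + np.count_nonzero(np.diff(x))
print("number of runs:", runs, "(a random shuffle of the same digits would average about 900)")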

Detecting whether a source is random or not is not a computationally
feasible task. All one can do is prove that a given source is
non-random (by providing an effective generator of the data), but you
can never prove a source is truly random, except by exhaustive testing
of all Turing machines less than the data's complexity, which suffers
from combinatoric computational complexity.

Cheers

--

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        [hidden email]
Economics, Kingston University         http://www.hpcoders.com.au
----------------------------------------------------------------------------


Re: Model of induction

Nick Thompson
Hi, Russell S.,

It's a long time since the old days of the Three Russells, isn't it?  Where have all the Russells gone?  Good to hear from you.

This has been a humbling experience.  My brother was a mathematician and he used to frown every time I asked him what I thought was a simple mathematical question.

So ... with my heart in my hands ... please tell me why a string of 100 ones, followed by a string of 100 twos, ..., followed by a string of 100 zeros, wouldn’t be regarded as random.  There must be something more than uniform distribution, eh?

Is there a halting problem lurking here?  

Nick

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/


Re: Model of induction

Owen Densmore
Administrator
All three (Aaron Clauset and Cosma R. Shalizi and Mark E. J. Newman) have given great courses at the SFI summer school.


Re: probability vs. statistics (was Re: Model of induction)

gepr
In reply to this post by Robert Wall

Thanks!  Everything you say seems to land squarely in my opponent's camp, with the focus on the concept of an action or event, requiring some sort of partially ordered index (like time).  But you included the clause "but doesn't have to be".  I'd like to hear more about what you conceive probability theory to be without events, actions, time, etc.

For the sake of this argument, anyway, my concept is affine to Grant's: "the study of probability spaces".  Probability, to me, is just the study of the sizes of sets where all the sizes are normalized to the [0,1] interval.  We talk of "selecting" or "choosing" subsets or elements from larger sets.  But such "selection" isn't an action in time.  Such "selection" is an already extant property of that organization of sets.  Likewise, the "events" of probability are merely analogous to the events we experience in subjective time.  Those "events" are (various) properties or predicates that hold over whatever set of sets is under consideration.  Those "events" don't _happen_.  They simply _are_.
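
In that spirit, a throwaway illustration (two dice, nothing more) of probability as nothing but normalized set sizes: the "event" sum-equals-seven is just a subset that is already there, and no selecting happens in time:

from itertools import product
from fractions import Fraction

omega = list(product(range(1, 7), repeat=2))    # all 36 ordered outcomes of two dice
event = [w for w in omega if sum(w) == 7]       # the "event" is simply a subset of omega

p = Fraction(len(event), len(omega))            # its probability is a normalized size
print(event)    # [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)]
print(p)        # 1/6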

Since your language seems to depend on the idea that those predicates must _happen_ (i.e. at one point, they are potential or imaginary, and the next they are actual or factual), yet you say they don't have to, I'd like to hear you explain how "they don't have to".  What are these "events" absent time (or another such partially ordered index)?

p.s. FWIW, I have the same problem with the concept of "function" and asymmetric transformations.  I accept the idea of a non-invertible function.  But by accepting that, am I forced to admit something like time?  Or, asked another way: As all the no-go theorem provers keep telling us (Tarski, Gödel, Wolpert, Arrow, ...), are we doomed to a "turtles all the way down" perspective?


On 12/13/2016 05:03 PM, Robert Wall wrote:
> At the risk of exposing my own ignorance, I'll also say your question has to do with the ontological status of any random "event" when treated in any estimation experiments or likelihood computation; that is, are proposed probability events or measured statistical events real?
>
> For example--examples are always good to help clarify the question--is the likelihood of a lung cancer event given a history of smoking pointing to some reality that will actually occur with a certain amount of uncertainty? In a population of smokers, yes.  For an individual smoker, no. In the language of probability and statistics, we say that in a population of smokers we /expect /this reality to be observed with a certain amount of certainty (probability). To be sure, these tests would likely involve several levels of contingencies to tame troublesome confounding variables (e.g., age, length of time, smoking rate). Don't want to get into multi-variate statistics, though.
>
> Obviously, time is involved here but doesn't have to be (e.g., the probability of drawing four aces from a trial of five random draws). An event is an observation in, say, a nonparametric Fisher exact test of significance against the null hypothesis of, say, a person that smokes will contract lung cancer, which we can make contingent on, say, the number of years of smoking. Epidemiological studies can be very complex, so maybe not the best of examples ...
>
> So, since probability and statistics both deal with the idea of an event--as your "opponent" insists--events are just observations that the event of interest [e.g., four of a kind] occurred; so I would say epistemologically they are real experiences with a potential (probability) based on either controlled randomized experiments of observational experience.  But is a potential ontologically real?  🤔
>
> Asking if those events come with ontologically real probabilistic properties is another, perhaps, different question?  This gets into worldview notions of determinism and randomness. We tend to say that if a human cannot predict the event in advance, it is random ... enough. If it can be predicted based, say, on known initial conditions, then using probability theory here is misplaced. Still, there are chaotic non-random events that are not practically predictable ... they seem random ... enough.  Santa Fe science writer and book author George Johnson gets into this in his book /Fire in the Mind/.

--
☣ glen

uǝʃƃ ⊥ glen

Re: probability vs. statistics (was Re: Model of induction)

Eric Charles-2
Ack! Well... I guess now we're in the muck of what the heck probability and statistics are for mathematicians vs. scientists. Of note, my understanding is that statistics was a field for at least a few decades before it was specified in a formal enough way to be invited into the hallows of mathematics departments, and that it is still frequently viewed with suspicion there.

Glen states: We talk of "selecting" or "choosing" subsets or elements from larger sets.  But such "selection" isn't an action in time.  Such "selection" is an already extant property of that organization of sets.

I find such talk quite baffling. When I talk about selecting or choosing or assigning, I am talking about an action in time. Often I'm talking about an action that I personally performed. "You are in condition A. You are in condition B. You are in condition A." etc. Maybe I flip a coin when you walk into my lab room, maybe I pre-generated some random numbers, maybe I look at the second hand of my watch as soon as you walk in, maybe I write down a number "arbitrarily", etc. At any rate, you are not in a condition before I put you in one, and whatever it is I want to measure about you hasn't happened yet.

I fully admit that we can model the system without reference to time, if we want to. Such efforts might yield keen insights. If Glen had said that we can usefully model what we are interested in as an organized set with such-and-such properties, and time nowhere to be found, that might seem pretty reasonable. But that would be a formal model produced for specific purposes, not the actual phenomenon of interest. Everything interesting that we want to describe as "probable" and all the conclusions we want to come to "statistically" are, for the lab scientist, time-dependent phenomena. (I assert.)



-----------
Eric P. Charles, Ph.D.
Supervisory Survey Statistician
U.S. Marine Corps


Re: probability vs. statistics (was Re: Model of induction)

gepr

Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) of Platonic math ... and how weird mathematicians sound (to me) when they say we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" element somehow distinct from a "chosen" element?  Does the act of choosing change the element in some way I'm unaware of?  Does choosability require an agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:
> Ack! Well... I guess now we're in the muck of what the heck probability and statistics are for mathematicians vs. scientists. Of note, my understanding is that statistics was a field for at least a few decades before it was specified in a formal enough way to be invited into the hallows of mathematics departments, and that it is still frequently viewed with suspicion there.
>
> Glen states: /We talk of "selecting" or "choosing" subsets or elements from larger sets.  But such "selection" isn't an action in time.  Such "selection" is an already extant property of that organization of sets./
>
> I find such talk quite baffling. When I talk about selecting or choosing or assigning, I am talking about an action in time. Often I'm talking about an action that I personally performed. "You are in condition A. You are in condition B. You are in condition A." etc. Maybe I flip a coin when you walk into my lab room, maybe I pre-generated some random numbers, maybe I look at the second hand of my watch as soon as you walk in, maybe I write down a number "arbitrarily", etc. At any rate, you are not in a condition before I put you in one, and whatever it is I want to measure about you hasn't happened yet.
>
> I fully admit that we can model the system without reference to time, if we want to. Such efforts might yield keen insights. If Glen had said that we can usefully model what we are interested in as an organized set with such-and-such properties, and time no where to be found, that might seem pretty reasonable. But that would be a formal model produced for specific purposes, not the actual phenomenon of interest. Everything interesting that we want to describe as "probable" and all the conclusions we want to come to "statistically" are, for the lab scientist, time dependent phenomena. (I assert.)

--
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
uǝʃƃ ⊥ glen
Reply | Threaded
Open this post in threaded view
|

Re: probability vs. statistics (was Re: Model of induction)

Grant Holland

And I completely agree with Eric. But we can language it real simply and intuitively by just looking at what a probability space is. For further simplicity let's keep it to a finite probability space. (Neither a finite nor an infinite one says anything about "time".)

A finite probability space has 3 elements: 1) a set of sample points called "the sample space", 2) a set of events, and 3) a set of probabilities for the events. (An infinite probability space is strongly similar.)

But what is this "set of events"? That's the question that is being discussed on this thread. It turns out that the event set for a finite space is nothing more than the set of all possible combinations of the sample points. (Formally the event set is something called a "sigma algebra", but no matter.) So, the events can be thought of as simply all combinations of the sample points.

Notice that it is the events that have probabilities - not the sample points. Of course it turns out that each of the sample points happens to be a (trivial) one-element combination of the sample points - therefore it has a probability too!

So, the events already have probabilities by virtue of just being in a probability space. They don't have to be "selected", "chosen" or any such thing. They "just sit there" and have probabilities - all of them. The notion of time is never mentioned or required.

Admittedly, this formal (mathematical) definition of "event" is not equivalent to the one that you will find in everyday usage. The everyday one does involve time. So you could say that everyday usage of "event" is "an application" of the formal "event" used in probability theory. This confusion between the everyday "event" and the formal "event" may be the root of the issue.
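
To make that concrete, here is a minimal sketch in Python (the fair six-sided die is just an assumed example, not anything from the thread):

from itertools import combinations
from fractions import Fraction

# Sample space for one roll of a (hypothetical) fair six-sided die.
sample_space = [1, 2, 3, 4, 5, 6]
p_point = Fraction(1, 6)   # each sample point is equally likely

# The event set for a finite space: every combination (subset) of sample points.
events = [frozenset(c) for r in range(len(sample_space) + 1)
          for c in combinations(sample_space, r)]

# Every event already "has" a probability just by sitting in the space.
prob = {e: p_point * len(e) for e in events}

print(len(events))                    # 64 events, including the empty one
print(prob[frozenset({2, 4, 6})])     # "roll an even number" -> 1/2
print(prob[frozenset(sample_space)])  # the whole sample space -> 1

Nothing in there mentions time; the events and their probabilities are all just laid out at once.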

Jus' sayin'.

Grant


On 12/14/16 11:36 AM, glen ☣ wrote:
Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) of Platonic math ... and how weird mathematicians sound (to me) when they say we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" element somehow distinct from a "chosen" element?  Does the act of choosing change the element in some way I'm unaware of?  Does choosability require an agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:
Ack! Well... I guess now we're in the muck of what the heck probability and statistics are for mathematicians vs. scientists. Of note, my understanding is that statistics was a field for at least a few decades before it was specified in a formal enough way to be invited into the hallows of mathematics departments, and that it is still frequently viewed with suspicion there.

Glen states: /We talk of "selecting" or "choosing" subsets or elements from larger sets.  But such "selection" isn't an action in time.  Such "selection" is an already extant property of that organization of sets./

I find such talk quite baffling. When I talk about selecting or choosing or assigning, I am talking about an action in time. Often I'm talking about an action that I personally performed. "You are in condition A. You are in condition B. You are in condition A." etc. Maybe I flip a coin when you walk into my lab room, maybe I pre-generated some random numbers, maybe I look at the second hand of my watch as soon as you walk in, maybe I write down a number "arbitrarily", etc. At any rate, you are not in a condition before I put you in one, and whatever it is I want to measure about you hasn't happened yet.

I fully admit that we can model the system without reference to time, if we want to. Such efforts might yield keen insights. If Glen had said that we can usefully model what we are interested in as an organized set with such-and-such properties, and time no where to be found, that might seem pretty reasonable. But that would be a formal model produced for specific purposes, not the actual phenomenon of interest. Everything interesting that we want to describe as "probable" and all the conclusions we want to come to "statistically" are, for the lab scientist, time dependent phenomena. (I assert.)

    


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: probability vs. statistics (was Re: Model of induction)

Frank Wimberly-2
In reply to this post by gepr
Don't think about choosing.  The axiom of choice says that there is a function from each set (subset) to an element of itself, as I recall.

Frank


Frank C. Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505

[hidden email]     [hidden email]
Phone:  (505) 995-8715      Cell:  (505) 670-9918

-----Original Message-----
From: Friam [mailto:[hidden email]] On Behalf Of glen ?
Sent: Wednesday, December 14, 2016 11:36 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] probability vs. statistics (was Re: Model of induction)


Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) of Platonic math ... and how weird mathematicians sound (to me) when they say we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" element somehow distinct from a "chosen" element?  Does the act of choosing change the element in some way I'm unaware of?  Does choosability require an agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:

> Ack! Well... I guess now we're in the muck of what the heck probability and statistics are for mathematicians vs. scientists. Of note, my understanding is that statistics was a field for at least a few decades before it was specified in a formal enough way to be invited into the hallows of mathematics departments, and that it is still frequently viewed with suspicion there.
>
> Glen states: /We talk of "selecting" or "choosing" subsets or elements
> from larger sets.  But such "selection" isn't an action in time.  Such
> "selection" is an already extant property of that organization of
> sets./
>
> I find such talk quite baffling. When I talk about selecting or choosing or assigning, I am talking about an action in time. Often I'm talking about an action that I personally performed. "You are in condition A. You are in condition B. You are in condition A." etc. Maybe I flip a coin when you walk into my lab room, maybe I pre-generated some random numbers, maybe I look at the second hand of my watch as soon as you walk in, maybe I write down a number "arbitrarily", etc. At any rate, you are not in a condition before I put you in one, and whatever it is I want to measure about you hasn't happened yet.
>
> I fully admit that we can model the system without reference to time,
> if we want to. Such efforts might yield keen insights. If Glen had
> said that we can usefully model what we are interested in as an
> organized set with such-and-such properties, and time no where to be
> found, that might seem pretty reasonable. But that would be a formal
> model produced for specific purposes, not the actual phenomenon of
> interest. Everything interesting that we want to describe as
> "probable" and all the conclusions we want to come to "statistically"
> are, for the lab scientist, time dependent phenomena. (I assert.)

--
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: probability vs. statistics (was Re: Model of induction)

Robert Wall
In reply to this post by Grant Holland
Hi Glen, et al,

Thanks for cashing my $0.02 check. :-)

When I wrote that "but it doesn't have to be" I wasn't asserting that probability theory is devoid of events.  Events are fundamental to probability theory.  They are the outcomes to which probability is assigned.  In a nutshell, the practice of probability theory is the mapping of the events--outcomes-- from random processes to numbers, thus making the practice purposefully mathematical.  And in this regard, we speak of a mathematical entity dubbed a random variable in order to carry out the calculus of probability and statistics. 

A random variable is like any other variable in mathematics, but with specific properties concerning the values it can take on.  A random variable is considered "discrete" if it can take on only countable, distinct, or separate values [e.g., the sum of the values of a roll of seven dice].  Otherwise, a random variable can be considered "continuous" if it can take on any value in an interval [e.g., the mass of an animal].  But, at bottom, a random variable is a real-valued function--a mapping from the outcomes of a random process to the number line.

This is arguably a roundabout way of explaining [muck?!] why I said "but it doesn't have to be ... time."  The random variable does not have to be distributed in time, though it often is, as in reliability theory--for example, the probability that a device will survive for at least T cycles or months.
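
For instance, here is a tiny sketch in Python, with a made-up exponential lifetime model--the rate and the query times are purely illustrative, not from any real device:

import math

lam = 1.0 / 50.0   # assumed failure rate: mean lifetime of 50 months

def survival(T):
    # P(device survives at least T months) under the exponential model
    return math.exp(-lam * T)

print(survival(12))   # ~0.79: chance of lasting at least a year
print(survival(100))  # ~0.14: chance of lasting at least 100 months

Here time shows up only as the value the random variable takes, not as anything the probability calculus itself needs.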

Yes to your and Grant's notion that thinking in terms of probability spaces is a good way of thinking of probability and statistics and this mapping, as mathematically we are doing convolutions of distributions [spaces?] when modeling independent, usually identically distributed random trials [activities]. But, let's not confuse the mathematical modeling with the selection process of, say, picking four of a kind from a deck of 52 cards. All we are interested in doing is mapping the outcomes--events--to possibilities over which the probabilities all sum or integrate to unity. The activity gets mapped in the treatment of the random variable [e.g., the number of trials]. So, for example, rolling 6s six times in a row is not a function of time, but of six discrete, independent and identically distributed trials. For the computed probability, in this case, it doesn't matter how long it took to roll the dice six times.
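
A one-liner makes the point (Python; the fair die is assumed):

from fractions import Fraction

# Six independent, identically distributed rolls of a fair die.
p_run = Fraction(1, 6) ** 6
print(p_run, float(p_run))   # 1/46656, about 2.1e-05 -- no clock anywhere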

I am thinking that this is the way your "opponent" is thinking about the problem and suspect that he has been formally trained to see it this way.  Not the only way but a classical way.

As for the historic difference Eric talks about between scientists, mathematicians, and statisticians practicing probability theory and statistics, those differences quickly disappeared when the idea of uncertainty bubbled up into the models found in the fields of physics, economics, measurement theory, decision theory, etc.  No longer could the world be completely described by the classical system dynamic models.  Maybe even before Gauss (the late 1700s)--a polymath to be sure--error terms were starting to be added to equations and had to be estimated.

As to my language of "when" an event occurs with some calculated likelihood, it can be a description or a prediction. The researcher may be asking, like Nick is [kind of?] asking in the other thread, what is the likelihood of getting this many 1s in a row if the process is supposedly generating discrete random numbers between, say, one and five? In this case, a psychologically unexpected event has happened. Or, in planning his experiment in advance, he may just want to set a halting threshold so that any machine that gives him the same N consecutive numbers in a row is deemed suspect. In that case, the event hasn't happened but has a finite potential for happening and we want to detect it if it happens ... too much.
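
A rough sketch of that kind of halting threshold (Python; the generator over one-to-five and the candidate cutoffs are my own assumptions for illustration):

# Probability that the next n draws from a fair generator over {1..5}
# are all one particular value (say, all 1s), at a single starting point.
p = 1 / 5

def p_run_of(n):
    return p ** n

for n in (5, 10, 15):
    print(n, p_run_of(n))
# n = 10 gives about 1e-07, so N = 10 would be a conservative flag -- though
# over a long stream the chance of seeing *some* such run grows, so a real
# threshold should account for the number of opportunities.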

Those "events" don't _happen_.  They simply _are_

This bit seems more philosophical than something a statistician would likely [no pun intended] worry about. Admittedly, my choice of words--throughout my post--could have been more precise, but I would not have said that "events simply are."  When discussing the nature of time in a "block universe," maybe that could be said, but I would have been in Henri Bergson's corner [to my peril, of course] in the 1922 debate between Bergson and Albert Einstein on the subject of time. :-) Curiously, Bergson's idea of time is coming back--see Time Reborn (2013) by Lee Smolin.  But this is likely not what you meant. However, you are an out-of-the-closet Platonist by your own admission. No worries; I have friends who are Platonists, most of them mathematicians or philosophers or people who believe the brain to be a computer, but not typically computational scientists and certainly not cognitive scientists. :-) No such thing as computational philosophy ... yet. Hmmm.

BTW, a random variable--continuous or discrete--does not have to be uniformly distributed, but you do want uniformity in the stream of equally-likely numbers fed into a Monte Carlo simulation, or when inverting probability distributions [not all are invertible, as you say] to feed a Monte Carlo simulation. I am pretty sure you have done this, from past discussions.
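
For example, inverse-transform sampling is the usual trick (Python sketch; the exponential target is chosen only because its CDF inverts in closed form):

import math
import random

def sample_exponential(lam, n, seed=42):
    # Push uniform(0,1) draws through the inverse CDF: F^-1(u) = -ln(1-u)/lam.
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

draws = sample_exponential(lam=2.0, n=100_000)
print(sum(draws) / len(draws))   # should land near the true mean 1/lam = 0.5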

In response to Grant, I would say that we are way beyond the times when we could easily distinguish between mathematicians, statisticians, and scientists. We have computational biology, computational physics, computational economics, computational finance, etc., all of which have elements of computational statistics.  Computational statistics--a subset of the field in which I practiced--is a rising and inclusive field.  Take a look at the curriculum at George Mason University, for example. I mean, do we even need to distinguish between mathematicians and statisticians anymore? To be a statistician these days you need to be able to derive maximum likelihood estimators, for example.  To be a mathematician, ... well this is from the University of Oxford:

All over the world, human beings create an immense and ever-increasing volume of data, with new kinds of data regularly emerging from science and industry. A new understanding of the value of these data to society has emerged, and, with it, a new and leading role for Statistics. In order to produce sensible theories and draw accurate conclusions from data, cutting-edge statistical methods are needed. These methods use advanced mathematical ideas combined with modern computational techniques, which require expert knowledge and experience to apply. A degree in Mathematics and Statistics equips you with the skills required for developing and implementing these methods, and provides a fascinating combination of deep and mathematically well-grounded method-building and wide-ranging applied work with data.

Finally and relatedly, I have been trying to follow Nick's evolving query to the forum, but it seems--to me--like he is looking for a way to prove that a generator of numbers is not random.  As someone else has already mentioned, one cannot really do this, that is, prove that a sequence is not random ... almost like trying to prove that God does not exist.  When you think of it, a series of 100 rolls of a die that is all fives, say, is just as likely as ANY other specific sequence of rolls, namely (1/6)^100. So you can't derive anything about randomness by just looking at the numbers.  Humans are not good at differentiating between randomness and just chaos. Essentially, anything is possible ... but "how likely?" is the right question.

A good--and simple [the die-hard test is not simple]--way to sense a lack of randomness in a stream of numbers is to compare the observed results with the theoretical distribution expected under randomness, using the Chi-squared distribution.  The Poker Test fits this criterion of simple but effective. If the number generator is dealing hands, say--like ones with four 8s--in a proportion that is not at all likely, then one should be suspicious.  But you could not say that it could never happen. The Chi-squared distribution is positively skewed with a tail that goes to infinity, but the thickness of that tail decreases with more trials [hands] and so-called degrees of freedom. It's a pretty cool way to do this and is easily accomplished computationally.
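
Here is a bare-bones Poker Test in Python (the hand size of 5 digits, the 20,000 hands, and using Python's own generator as the thing under test are all just assumptions for illustration):

import random
from collections import Counter

# Theoretical hand-pattern probabilities for 5 digits drawn uniformly from 0-9.
EXPECTED = {
    "all different": 0.3024, "one pair": 0.5040, "two pairs": 0.1080,
    "three of a kind": 0.0720, "full house": 0.0090,
    "four of a kind": 0.0045, "five of a kind": 0.0001,
}
PATTERN = {  # sorted digit multiplicities -> hand name
    (1, 1, 1, 1, 1): "all different", (1, 1, 1, 2): "one pair",
    (1, 2, 2): "two pairs", (1, 1, 3): "three of a kind",
    (2, 3): "full house", (1, 4): "four of a kind", (5,): "five of a kind",
}

def classify(hand):
    return PATTERN[tuple(sorted(Counter(hand).values()))]

rng = random.Random(0)   # stand-in for the generator under test
hands = 20_000
observed = Counter(classify([rng.randrange(10) for _ in range(5)])
                   for _ in range(hands))

chi2 = sum((observed[k] - hands * p) ** 2 / (hands * p)
           for k, p in EXPECTED.items())
print(chi2)   # compare with 12.59, the 5% critical value for 6 degrees of freedom

If the statistic lands way out in that positive tail, be suspicious--but, again, never certain.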

Hope this clarifies a few things at least. Sorry for the long explanation.  I guess I cannot help myself ... :-( 

Cheers,

Robert W.

On Wed, Dec 14, 2016 at 1:40 PM, Grant Holland <[hidden email]> wrote:

And I completely agree with Eric. But we can language it real simply and intuitively by just looking at what a probability space is. For further simplicity lets keep it to a finite probability space. (Neither a finite nor an infinite one says anything about "time".)

A finite probability space has 3 elements: 1) a set of sample points called "the sample space", 2) a set of events, and 3) a set of probabilities for the events. (An infinite probability space is strongly similar.)

But what is this "set of events"? That's the question that is being discussed on this thread. It turns out that the event set for a finite space is nothing more than the set of all possible combinations of the sample points. (Formally the event set is something called a "sigma algebra", but no matter.) So, the events can be thought of as simply all combinations of the sample points.

Notice that it is the events that have probabilities - not the sample points. Of course it turns out that each of the sample points happens to be a (trivial) one-element combination of the sample points - therefore it has a probability too!

So, the events already have probabilities by virtue of just being in a probability space. They don't have to be "selected", "chosen" or any such thing. They "just sit there" and have probabilities - all of them. The notion of time is never mentioned or required.

Admittedly, this formal (mathematical) definition of "event" is not equivalent to the one that you will find in everyday usage. The everyday one does involve time. So you could say that everyday usage of "event" is "an application" of the formal "event" used in probability theory. This confusion between the everyday "event" and the formal "event" may be the root of the issue.

Jus' sayin'.

Grant


On 12/14/16 11:36 AM, glen ☣ wrote:
Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) of Platonic math ... and how weird mathematicians sound (to me) when they say we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" element somehow distinct from a "chosen" element?  Does the act of choosing change the element in some way I'm unaware of?  Does choosability require an agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:
Ack! Well... I guess now we're in the muck of what the heck probability and statistics are for mathematicians vs. scientists. Of note, my understanding is that statistics was a field for at least a few decades before it was specified in a formal enough way to be invited into the hallows of mathematics departments, and that it is still frequently viewed with suspicion there.

Glen states: /We talk of "selecting" or "choosing" subsets or elements from larger sets.  But such "selection" isn't an action in time.  Such "selection" is an already extant property of that organization of sets./

I find such talk quite baffling. When I talk about selecting or choosing or assigning, I am talking about an action in time. Often I'm talking about an action that I personally performed. "You are in condition A. You are in condition B. You are in condition A." etc. Maybe I flip a coin when you walk into my lab room, maybe I pre-generated some random numbers, maybe I look at the second hand of my watch as soon as you walk in, maybe I write down a number "arbitrarily", etc. At any rate, you are not in a condition before I put you in one, and whatever it is I want to measure about you hasn't happened yet.

I fully admit that we can model the system without reference to time, if we want to. Such efforts might yield keen insights. If Glen had said that we can usefully model what we are interested in as an organized set with such-and-such properties, and time no where to be found, that might seem pretty reasonable. But that would be a formal model produced for specific purposes, not the actual phenomenon of interest. Everything interesting that we want to describe as "probable" and all the conclusions we want to come to "statistically" are, for the lab scientist, time dependent phenomena. (I assert.)

    


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


Reply | Threaded
Open this post in threaded view
|

Re: Model of induction

Russell Standish-2
In reply to this post by Nick Thompson
On Tue, Dec 13, 2016 at 08:41:12PM -0700, Nick Thompson wrote:
> Hi, Russell S.,
>
> It's a long time since the old days of the Three Russell's, isn't it?  Where have all the Russell's gone?  Good to hear from you.
>
> This has been a humbling experience.  My brother was a mathematician and he used to frown every time I asked him what I thought was a simple mathematical question.
>
> So ... with my heart in my hands ... please tell me, why a string of 100 1's, followed by a string of 100 2's, ..., followed by a string of 100 0's wouldn't be regarded as random.  There must be something more than uniform distribution, eh?
>

Yes - the modern notion of a random string is that it is
incompressible: it cannot be generated by a program (Turing machine)
shorter than itself.

Obviously, you can exploit nonuniformity to provide a compression - e.g.
the way that 'e' and 't' are represented in Morse code by a single . and -
respectively provides a compression of English language phrases. Hence
why uniformity is one test of randomness.

That is why non-uniform random, whilst a thing, must be defined by an
algorithmic transformation to a uniform random thing (the
algorithmically incompressible things mentioned above).
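
A crude but practical illustration of that in Python (zlib is only a stand-in for a real compressor, and Kolmogorov complexity itself is uncomputable):

import random
import zlib

# Nick's structured string: 100 1's, 100 2's, ..., 100 0's.
structured = "".join(str(d) * 100 for d in list(range(1, 10)) + [0])

# Same symbols, same (uniform) frequencies, but in shuffled order.
symbols = list(structured)
random.Random(1).shuffle(symbols)
shuffled = "".join(symbols)

for name, s in [("structured", structured), ("shuffled", shuffled)]:
    print(name, len(zlib.compress(s.encode())) / len(s))

Both strings are perfectly uniform in their digit frequencies, but the structured one compresses to a small fraction of its length while the shuffled one stays much closer to its original size -- which is exactly why uniformity alone doesn't make a string random.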

> Is there a halting problem lurking here?  
>

Absolutely.

--

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        [hidden email]
Economics, Kingston University         http://www.hpcoders.com.au
----------------------------------------------------------------------------

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: probability vs. statistics (was Re: Model of induction)

Robert Wall
In reply to this post by Frank Wimberly-2
Glen,

Okay, given some of the later postings against the original question, I am thinking that your question may have morphed or that I have completely misunderstood what you are asking. Not sure. For example, somehow we have gone from probability theory and its ontological status to the Banach-Tarski Theorem and the Axiom of Choice.  This seems like a non sequitur, but I'm not sure.  First off, a theory is inductive, whereas a theorem is deductive; so that is my first disconnect. So I don't understand how we got here ... but this often happens to me.  :-(

Then we go to what I think is a refinement of the original question. Yes?  (I am just trying to navigate the thinking to get to the core issue, that I seem to be missing): 

But what is this "set of events"? That's the question that is being discussed on this thread. It turns out that the event set for a finite space is nothing more than the set of all possible combinations of the sample points. (Formally the event set is something called a "sigma algebra", but no matter.) So, the events can be thought of as simply all combinations of the sample points.

and then to:

So, the events already have probabilities by virtue of just being in a probability space. They don't have to be "selected", "chosen" or any such thing. They "just sit there" and have probabilities - all of them. The notion of time is never mentioned or required.

An event is not all the combinations of the sample points.  As Grant has said, an event [outcome] has a probability depending on how it is arbitrarily configured from the event space by the researcher.  Moreover, there is an important distinction to be made between the distribution of values [e.g., the numbers on each side of a die being equally likely] and the sampling distribution, which depends on how the event is composed in a trial sequence.  The sampling distribution is the mathematical result of the convolution of probabilities when choosing N independent, usually identically-distributed random picks from the parent distribution.

Another example might be helpful: I think you are trying to define the sample space like with an urn of 10 balls with three red and seven white.  An event, in that case, would be something like picking three balls, all red.  We could easily compute the probability of this event by using hypergeometric arithmetic; this is because the sample space changes if you do not replace the balls after each pick. But there is a finite number of other possible events in this scenario of picking three things from a bin of ten things. To be sure, though, this statistical problem does not relate at all to the paradoxical Axiom of Choice ... unless I am still missing something.  We are not interested in slicing and dicing [no pun intended] a probability space of a certain size in a way that comes up with, say, two identical but mutually exclusive probability spaces of the same size. This would make no sense, IMHO.
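
The arithmetic, just to be concrete (Python):

from math import comb
from fractions import Fraction

# Urn with 3 red and 7 white balls; draw 3 without replacement.
p_all_red = Fraction(comb(3, 3) * comb(7, 0), comb(10, 3))
print(p_all_red)   # 1/120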

Events are just the outcome(s) one is interested in computing the probability for.  They don't exist--as selections, in the way that I think you mean--until they are formulated by the researcher ... not trying to conjure up anything spooky here between the observer and the experiment as at the quantum level. :-) Nor are these events--not being mathematical entities of any type--something to be discovered in some platonic math sense [I mistakenly called you a Platonist, but on rereading the thread, I think you are not. Sorry. But the world wouldn't be as interesting without Platonists. :-) ].  

For example, there is the possible event of being dealt four aces in one hand of five cards and for which I can assign a probability given the conceptual structure of the probability space: a deck of cards. This is nothing more than laying out the number of possible hands [combinations--so order doesn't matter] (the sample space) and determining how many ways I could be dealt four aces [one way to choose the four aces times the 48 possibilities for the fifth card] ... then dividing the latter by the former.  This is an example of a categorical probability space, where the events are all the various ways [combinations] one can be dealt five cards from a deck of 52. We could go on to group these into categories like two of a kind, three of a kind, and so forth. Each of those events can then be assigned a probability.
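
Again, just the counting (Python):

from math import comb
from fractions import Fraction

hands = comb(52, 5)                   # 2,598,960 possible five-card hands
four_aces = comb(4, 4) * comb(48, 1)  # the four aces plus any one of the other 48 cards
print(Fraction(four_aces, hands))     # 48/2598960 = 1/54145, about 1.8e-05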

and then:

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" element somehow distinct from a "chosen" element?  Does the act of choosing change the element in some way I'm unaware of?  Does choosability require an agent exist and (eventually) _do_ the choosing?

The Axiom of Choice is paradoxical in that it seems to get into trouble with set cardinality when it comes to infinite sets.  To me it is nothing more than a mathematical curiosity that has no impact on the practical world. So I don't think this is helpful to your cause. But I would be more than curious to see how you think it might be. I am more an applied mathematician|statistician than anything like a theoretical mathematician; though I have happily worked with many of the latter ... and hopefully the reverse was true. :-)

Okay, back to your observation: the fact that it is possible to choose a particular event from the set of all possible events in the event space is a trivial requirement.  I cannot, for example, pick a black ball--an impossible event--from the previous urn of only red and white balls.  So being able to choose three red balls from that urn makes the event "choosable."  Is that event then distinct from that same event that has been "chosen?"  At the classical level--as opposed to the quantum level--I cannot see any meaningful distinction EXCEPT to say that the former event is a possibility and the second event is a realization ... and that is the way such events get discussed in practical probability and statistics. There is no spooky agent that needs to get factored into the calculus, IMHO.

Somehow, I still feel I am missing something. Maybe you can figure it out, but it may not be all that important, and your question may have already been addressed satisfactorily by the other responses posted to the thread. 

Cheers

On Wed, Dec 14, 2016 at 2:41 PM, Frank Wimberly <[hidden email]> wrote:
Don't think about choosing.  The axiom of choice says that there is a function from each set (subset) to an element of itself, as I recall.

Frank


Frank C. Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505

[hidden email]     [hidden email]
Phone:  (505) 995-8715      Cell:  (505) 670-9918

-----Original Message-----
From: Friam [mailto:[hidden email]] On Behalf Of glen ?
Sent: Wednesday, December 14, 2016 11:36 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] probability vs. statistics (was Re: Model of induction)


Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) of Platonic math ... and how weird mathematicians sound (to me) when they say we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" element somehow distinct from a "chosen" element?  Does the act of choosing change the element in some way I'm unaware of?  Does choosability require an agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:
> Ack! Well... I guess now we're in the muck of what the heck probability and statistics are for mathematicians vs. scientists. Of note, my understanding is that statistics was a field for at least a few decades before it was specified in a formal enough way to be invited into the hallows of mathematics departments, and that it is still frequently viewed with suspicion there.
>
> Glen states: /We talk of "selecting" or "choosing" subsets or elements
> from larger sets.  But such "selection" isn't an action in time.  Such
> "selection" is an already extant property of that organization of
> sets./
>
> I find such talk quite baffling. When I talk about selecting or choosing or assigning, I am talking about an action in time. Often I'm talking about an action that I personally performed. "You are in condition A. You are in condition B. You are in condition A." etc. Maybe I flip a coin when you walk into my lab room, maybe I pre-generated some random numbers, maybe I look at the second hand of my watch as soon as you walk in, maybe I write down a number "arbitrarily", etc. At any rate, you are not in a condition before I put you in one, and whatever it is I want to measure about you hasn't happened yet.
>
> I fully admit that we can model the system without reference to time,
> if we want to. Such efforts might yield keen insights. If Glen had
> said that we can usefully model what we are interested in as an
> organized set with such-and-such properties, and time no where to be
> found, that might seem pretty reasonable. But that would be a formal
> model produced for specific purposes, not the actual phenomenon of
> interest. Everything interesting that we want to describe as
> "probable" and all the conclusions we want to come to "statistically"
> are, for the lab scientist, time dependent phenomena. (I assert.)

--
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Reply | Threaded
Open this post in threaded view
|

Re: probability vs. statistics (was Re: Model of induction)

gepr
In reply to this post by Frank Wimberly-2

Well, sure.  But the point is that the axiom of choice asserts, merely, the existence of the ability to choose a subset.  They call them "choice functions", as if there exists some "chooser".  But there's no sense of time (before the choice function is applied versus after it's applied).  The name "choice" is a misleading misnomer.

And that's my point.  Probability theory is a special case of measure theory.  Calling the set measures "probabilities" is an antiquated, misleading, and unfortunate name.
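
In the standard packaging (nothing beyond the usual textbook definition), a probability space is just a measure space whose total measure is one:

\[
(\Omega, \mathcal{F}, P), \qquad P(\Omega) = 1, \qquad
P\Big(\bigcup_{i} A_i\Big) = \sum_{i} P(A_i) \ \text{ for disjoint } A_i \in \mathcal{F},
\]

where \Omega is the sample space, \mathcal{F} is the sigma-algebra of events, and P is the measure.  No "chooser" anywhere in it.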

On 12/14/2016 01:41 PM, Frank Wimberly wrote:
> Don't think about choosing.  The axiom of choice says that there is a function from each set (subset) to an element of itself, as I recall.

--
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
uǝʃƃ ⊥ glen
Reply | Threaded
Open this post in threaded view
|

Re: probability vs. statistics (was Re: Model of induction)

gepr
In reply to this post by Robert Wall

Well, my question hasn't been addressed satisfactorily.  But I sincerely appreciate all the different ways everyone has tried to talk about it.  My question is about language, not math or statistics.  I'm adept enough at those.  What I'm having trouble with in the argument (the guy's name is Steve, btw) is my inability to communicate the measure theory conception of probability theory in plain English.  (He's not a mathematician, either.)

I'm especially appreciative of what you, Eric, and Grant have laid out from the practical "just get 'er done" perspective.  The reason my initial (failed) joke--about not understanding what statistics _is_ while claiming to understand what probability theory _is_--was a joke is that both are so heavily applied and so lightly ontological.  Were I able to tell the joke so that Steve saw the Platonic vs. constructivist, noun vs. verb, (false) dichotomy implied, then I wouldn't find myself having to explain it.  I would have avoided the need to make the Platonic view explicit ... which would have been good because I'm not a Platonist.

On 12/14/2016 05:05 PM, Robert Wall wrote:
> Somehow, I still feel I am missing something. Maybe you can figure it out, but it may not be all that important, and your question may have already been addressed satisfactorily by the other responses posted to the thread.


--
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
uǝʃƃ ⊥ glen
123