[Blindmath] (no subject)

Jonathan Godfrey a.j.godfrey at massey.ac.nz
Tue Sep 15 03:34:44 UTC 2009


Hi,

This can be done via a likelihood ratio test if you must.

If you've fitted the models using regression, though, it gets 
considerably simpler. If you have two models, Mod1 and Mod2, then the 
additional parameter in Mod2 can be tested using a simple F-test. 
This is often simplified even further, as the relevant test is 
achieved by looking at the p-value for the extra term in the larger model.
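
A rough sketch of what that comparison could look like in Python with 
statsmodels (the data file name and the variable names y, x1 to x4 are 
placeholders invented for illustration, not anything from this thread):

import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data; the file name and column names are illustrative only.
data = pd.read_csv("mydata.csv")

# Mod1: the smaller model; Mod2: the same model plus the extra term x4.
mod1 = smf.ols("y ~ x1 + x2 + x3", data=data).fit()
mod2 = smf.ols("y ~ x1 + x2 + x3 + x4", data=data).fit()

# Partial F-test for the extra parameter in the larger model.
f_stat, p_value, df_diff = mod2.compare_f_test(mod1)
print("F =", f_stat, "p =", p_value)

# The same conclusion can usually be read straight off the t-test
# p-value for x4 in the larger model's summary table.
print(mod2.summary())

# And the likelihood ratio test, if you must.
lr_stat, lr_p, lr_df = mod2.compare_lr_test(mod1)
print("LR =", lr_stat, "p =", lr_p)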

Time to take this off list now I think.

Jonathan

At 12:18 p.m. 15/09/2009, you wrote:
>Hi,
>  First, thank you sincerely for the information. A slight 
> clarification, and maybe this will make my reasoning for using this 
> analysis a little better, or potentially a lot worse. I am open to 
> any arguments either way as I am always eager to learn.
>
>  What I am actually attempting to do is to see if removing one 
> parameter from a 4-parameter model will yield a null result when 
> compared to the full 4-parameter model. Variance testing (e.g., ANOVA 
> results) with the data suggests that holding one particular 
> parameter constant does not change the results. Thus, I am planning 
> to use the statistic to essentially demonstrate that allowing the 
> additional parameter to vary yields no changes in the data.
>Many thanks,
>Chris
>
>
>Christine M. Szostak
>Graduate Student
>Language Perception Laboratory
>Department of Psychology, Cognitive Area
>The Ohio State University
>Columbus, Ohio
>szostak.1 at osu.edu
>----- Original Message ----- From: "Jonathan Godfrey" 
><a.j.godfrey at massey.ac.nz>
>To: "Blind Math list for those interested in mathematics" 
><blindmath at nfbnet.org>
>Sent: Monday, September 14, 2009 4:56 PM
>Subject: Re: [Blindmath] (no subject)
>
>
>>Hi Christine et al,
>>
>>I have several comments and concerns and then the details below.
>>
>>First, your question about accessibility of information is well 
>>directed in terms of sending it to this list. My problem is that 
>>the reason for asking the question is, in my opinion, misguided on 
>>statistical grounds.
>>
>>A quote often attributed to George Box goes something like "All 
>>models are wrong, but some are useful." It's usually misquoted 
>>(including here) and was actually first published in a NASA report 
>>before it appeared in a Box article.
>>
>>A chi-square test can tell you if a model is useful - or, more 
>>exactly, it will tell you if it is not useful. This is a direct 
>>consequence of the way we do hypothesis tests.
>>
>>Having decided that two or more models are useful, though, the 
>>chi-square test becomes irrelevant for deciding which model is the 
>>one to use. You must have other reasons for even considering 
>>various models and their usefulness to your situation. To go back 
>>to the Box quote, you might find out that two models are actually 
>>valid and useful. You then need to ask if they yield differing 
>>results in terms of what you want to do with them. If they don't 
>>differ then it probably doesn't matter which one you use. If they 
>>do differ then you have a problem with choosing the one to apply on 
>>the grounds of the assumptions made behind each model (and there 
>>are always assumptions).
>>Ultimately we do not want our opinion to depend on the assumptions we make.
>>
>>Now to remind you of the basics of chi-square testing.
>>
>>1. Under any model, you should know how many observed values fall 
>>into well-defined classes. In situations where you are counting 
>>things, this is easy. If the response is continuous, you must be 
>>careful when determining the cutoffs between the classes.
>>2. You must now consider how many observations you would expect to 
>>fall into each of the classes defined in step 1.
>>3. If any of the classes have expected values less than 5, you will 
>>need to merge classes. This normally occurs at the extremes. Merge 
>>until all expected values are >=5. Of course you will need to merge 
>>the observed counts as well to match.
>>4. The chi-square value is the sum of (O-E)^2/E where O is observed 
>>(step 1) and E is expected (step 2).
>>5. This should follow a chi-square distribution with n-p-1 degrees 
>>of freedom. n is the number of classes after merging, p is the 
>>number of parameters you estimated under each of your models.
>>6. Either
>>6a. Using some software (Excel will do), find the p-value for the 
>>chi-square test statistic found in step 4, and compare this to the 
>>pre-determined level of significance you are happy with - normally 
>>0.05 is applied. If your p-value is less than 0.05, then your model 
>>is not useful.
>>or
>>6b. Determine the critical value for the chi-square distribution 
>>with the right degrees of freedom found in step 5. If your test 
>>statistic is >= the critical value, then your model is not useful. 
>>A short worked sketch of steps 4 to 6 is given below.
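>>
>>As a rough sketch of the arithmetic in steps 4 to 6, in Python (the 
>>counts and the number of estimated parameters below are made up 
>>purely for illustration):
>>
>>import numpy as np
>>from scipy.stats import chi2
>>
>># Steps 1-3: observed and expected counts per class, already merged so
>># that every expected count is at least 5 (these numbers are invented).
>>observed = np.array([12, 18, 25, 30, 15])
>>expected = np.array([10, 20, 27, 28, 15])
>>
>># Step 4: the chi-square statistic, sum of (O-E)^2/E.
>>chi_sq = np.sum((observed - expected) ** 2 / expected)
>>
>># Step 5: degrees of freedom = n classes - p estimated parameters - 1.
>>p_estimated = 2                  # however many parameters your model used
>>df = len(observed) - p_estimated - 1
>>
>># Step 6a: p-value; the model is judged not useful if this is below 0.05.
>>p_value = chi2.sf(chi_sq, df)
>>
>># Step 6b: critical value at the 0.05 level of significance.
>>critical = chi2.ppf(0.95, df)
>>
>>print(chi_sq, df, p_value, critical)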
>>
>>Jonathan
>>
>>
>>
>>
>>
>>
>>At 04:29 a.m. 11/09/2009, you wrote:
>>>Hi All,
>>>   Do any of you happen to know where I might be able to obtain 
>>> speech-software friendly information on running a Chi Square 
>>> Analysis (e.g., the actual statistics involved). It has been a 
>>> long time since I have done such an analysis and I need to do so 
>>> to compare which of a few mathematical models provides a best-fit 
>>> to some behavioral data I have collected in Cognitive Psychology.
>>>Many thanks,
>>>Christine
>>>Christine M. Szostak
>>>Graduate Student
>>>Language Perception Laboratory
>>>Department of Psychology, Cognitive Area
>>>The Ohio State University
>>>Columbus, Ohio
>>>szostak.1 at osu.edu
>>>_______________________________________________
>>>Blindmath mailing list
>>>Blindmath at nfbnet.org
>>>http://www.nfbnet.org/mailman/listinfo/blindmath_nfbnet.org
>>>To unsubscribe, change your list options or get your account info 
>>>for Blindmath:
>>>http://www.nfbnet.org/mailman/options/blindmath_nfbnet.org/a.j.godfrey%40massey.ac.nz
>>>
>>
>>_____
>>Dr A. Jonathan R. Godfrey
>>Lecturer in Statistics
>>Institute of Fundamental Sciences
>>Massey University
>>Palmerston North
>>
>>Room: AH2.82
>>Phone: +64-6-356 9099 ext 7705
>>Mobile: +64-29-538-9814
>>Home Address: 22 Bond St, Palm. Nth.
>>Home Phone: +64-6-353 2224 (Just think FLEABAG)
>>
>
>
>

_____
Dr A. Jonathan R. Godfrey
Lecturer in Statistics
Institute of Fundamental Sciences
Massey University
Palmerston North
Phone: +64-6-356 9099 ext 7705
Mobile: +64-29-538-9814
Room: AH2.82

Home Address: 22 Bond St, Palm. Nth.
Home Phone: +64-6-353 2224 (or FleaBag if you prefer to remember it that way)


