The number of patterns you could find in a three-item test is 2^3 = 8 (relatively low), and in a 10-item test it is 2^10 = 1024, and this is only with dichotomous items. I just wanna help emphasize the complexity and explosion of patterns in these types of models. Hope it helps :)
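The pattern count above is just the number of ways to assign right/wrong across items, which you can sketch in a couple of lines of Python (the function name here is made up for illustration):

```python
from itertools import product

def response_patterns(n_items):
    """All possible right/wrong patterns for n dichotomous items."""
    return list(product([0, 1], repeat=n_items))

print(len(response_patterns(3)))   # 8
print(len(response_patterns(10)))  # 1024
```

With polytomous items the base grows too (e.g. 5^10 patterns for ten 5-category items), which is why the explosion is even worse there.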
1:09 I haven't laughed that much about Likert scales before
Thank you so much, dear, now I totally understand IRT and appreciate your effort.
Thanks!
Hey, this is a great video and resource! Thanks so much for the clear explanation! I had a few questions for you.
Q1). At 33:38 you mention the transformation constant (D) usually being 1.7. I was wondering if you could explain what the transformation constant is actually doing, and why people select 1.7 as the conventional value? What would happen if you selected 2, for instance, as the transformation constant?
Q2). I'm interested in running a polytomous IRT with a Generalized Partial Credit Model to reduce the number of items on a large battery of questions, using some existing data. Are there any specific resources (e.g. textbooks or websites) you would recommend that may help? I've been searching the web for resources, and I'm finding it hard to find more accessible materials.
Q1) Honestly, I don't remember. Here's an article that appears to explain it: file.scirp.org/Html/3-1240938_79609.htm
Q2) Not sure here either - most of the stuff I'm seeing when I search is more math-y than scale development focused.
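For anyone else wondering about Q1: the standard explanation (a known result, not something stated in the video) is that D = 1.7 rescales the logistic curve so it closely approximates the normal-ogive (cumulative normal) model; the two curves then differ by less than about 0.01 everywhere. A value like D = 2 would steepen the curve past that match. A quick sketch to see this:

```python
import math

def logistic(x, D=1.7):
    """Logistic item curve with scaling constant D."""
    return 1 / (1 + math.exp(-D * x))

def normal_ogive(x):
    """Standard normal CDF (the normal-ogive model)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# With D = 1.7 the logistic stays within ~0.01 of the normal ogive.
for x in (-2, -1, 0, 0.7, 1, 2):
    print(x, round(logistic(x) - normal_ogive(x), 4))
```

So the choice of 1.7 is purely about making logistic-model parameters comparable to normal-ogive parameters; any other D just changes that correspondence.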
Really appreciated this, thank you.
This is excellent, thank you. Is there a way to use Item Factor Analysis for polytomous items? If so, how do I access this through R?
You would need to use the different model types discussed for polytomous items, like a graded response model, etc. There are a few videos on the channel, including: ruclips.net/video/VtWsyUCGfhg/видео.html and ruclips.net/video/SbuQi2xi0os/видео.html
28:00 Figure 2: you said the blue line measures people who perform higher better, and the red line measures people who perform lower better. But shouldn't it be that the blue line measures both high and low performers better, since low performers tend to get it wrong while high performers get it correct? The red line basically shows that you don't need to be good to get this item correct. Or are you saying there should be another item whose 50% point is right at 0 that really sets apart the items that capture more about either high or low performers?
And what is b exactly on the graph? If it's the x-value corresponding to a 50% probability of getting the item correct, then a lower x-value would imply an easier question.
Maybe the word "discriminate" would be better here: the blue line discriminates better at a higher theta (ability), while the red line discriminates better at a lower theta. b is the point on the ability scale where the sigmoid curve is at 50/50. The slide should also say that a more negative b indicates an easier question.
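The reply above can be checked numerically. This is a minimal sketch of a 2PL item curve (the function and parameter names are mine, not from the video), showing that the probability is exactly 0.5 when theta equals b, and that a more negative b makes the item easier at any given ability:

```python
import math

def p_correct(theta, a, b, D=1.7):
    """2PL item characteristic curve: P(correct | ability theta)."""
    return 1 / (1 + math.exp(-D * a * (theta - b)))

# At theta == b the probability is exactly 0.5, regardless of a:
print(p_correct(theta=1.0, a=2.0, b=1.0))

# A more negative b shifts the curve left, so at the same theta the
# easier item (b = -1) has a higher probability than the harder (b = 1):
print(p_correct(theta=0.0, a=1.0, b=-1.0) > p_correct(theta=0.0, a=1.0, b=1.0))
```

The discrimination parameter a only controls how steep the curve is around b, which is why "discriminates better at higher/lower theta" is the right wording for the blue vs. red lines.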
Thanks for that video, it helps a lot!
Thanks!
Does anybody know of a Python implementation of it?
Maybe the catsim package: pythonhosted.org/catsim/introduction.html?
Isn't it the case that a larger b = harder questions? 27:00
In theory, yes: if your questions have right answers, items with a larger b could be considered "harder".