David Coffaro: great bargain wines March 5, 2009
Posted by cnchapman in Blogroll, Wine. Tags: Wine
I haven’t written about wine yet on this blog, but it’s a big hobby of mine, both enjoying it and growing my own grapes for winemaking. One of my favorite wineries is a small, little-known one in Sonoma County: the David Coffaro winery. They make fabulous wines for people who love tasty, fruit-forward, big red wines. On top of that, Coffaro is a critic of high-priced wines: he prices his to make a “fair” return, no more. While many Dry Creek Valley Zinfandels are $40 or more, for instance, his are $28, or under $20 if purchased on futures.
In his New California Wine, Matt Kramer of Wine Spectator wrote that Coffaro “never makes a bad wine … I have friends who routinely order six or eight cases … they’re all lovely: intense, free of intrusive oakiness, and purely made.”
If you’re visiting Sonoma, they do a great tasting that includes released wines and often a barrel tasting. If not, check out the online ordering and give them a chance. I recommend picking 4 different blends or Zins, a great investment of about $100. My personal favorites are the Zinfandels, the Block 4 field blend, and Escuro, a dark, rich blend. David Coffaro Winery home page. Cheers! (And no, I have no connection to them; I’m just a fan who is tired of overpriced wine!)
Assessing persona prevalence empirically February 21, 2009
Posted by cnchapman in Blogroll, Market research, Technology, User research.
I just obtained permission to post our latest paper on personas. We argued previously that the personas method should not be considered scientific, and that a complete persona almost certainly describes few people or no one at all. In the new paper, we present a complete formal model and evaluate the prevalence of “persona-like descriptions” with both analytical methods and empirical data. Full paper on persona prevalence.
There are two key implications: (1) if you want to claim that a persona describes real people, you need strong multivariate evidence; (2) without such evidence, we provide a formula that gives a better estimate than simply assuming something. We show that the formula agrees better than chance with 60,000 randomly generated persona-like descriptions in real datasets of up to 10,000 respondents.
None of this says that personas are not inspiring or useful. It just says that they cannot be assumed to have verifiable information content unless that is demonstrated empirically. For alternative ways to answer key design and business questions with empirical data, check out our paper on quantitative methods for product definition.
Personas February 8, 2009
Posted by cnchapman in Blogroll, Market research, Technology, User research.
One of my papers from 2 years ago is still causing discussion: “The Personas’ New Clothes: Methodological and Practical Arguments against a Popular Method” by me and Russ Milham. Email from researchers I didn’t know led me to look up citations, and the article appears to be commonly cited when people present criticism of the personas method. Google search. The paper itself is here.
There are a few misunderstandings of our position out there. Our basic argument is simple. Persona authors often make two claims: (1) personas present real information about users; and (2) using personas leads to better products. In a nutshell, we argue that neither claim has been supported by empirical evidence; rather, the claims for personas’ utility are based on anecdotes, generally from their own authors or other interested parties (such as consultants selling them).
This does not mean that personas are bad, but they cannot be taken at face value. As researchers, we suggest that persona authors should either provide better evidence (and we suggest how) or make weaker claims.
Some persona users don’t make claims about their personas’ usefulness or correspondence to reality; they simply say that personas might be helpful for inspiration for some people or teams. We take no issue with that, as long as they don’t forget those caveats and reify the persona. Unfortunately it is probably very difficult for people to read a persona and not think that it describes a user group.
We’ve recently published empirical work on (quasi-)persona prevalence using several large datasets, demonstrating that once a description has more than a few attributes it describes few if any actual people. I’ll put that paper up as soon as I get reprint permission. (If you have access to HFES archives, it is “Quantitative Evaluation of Personas as Information”, Christopher N. Chapman, Edwin Love, Russell P. Milham, Paul ElRif, James L. Alford, from HFES conference 2008, New York.)
What should one do instead of personas? I advocate stronger empirical methods that have more demonstrable validity.
New papers on user research February 7, 2009
Posted by cnchapman in Market research, Technology, User research.
Just uploaded 2 new papers on user research. First is work on a multi-factorial product interest scale, designed to be easily administered in survey format and applicable to consumer products. See the abstract on my “papers” page, or get the file directly: wip337-chapman.pdf
Second is an overview of quantitative methods that are helpful in early evaluation of product needs and strategy. The abstract is on my “papers” page, or the complete file is chapman-love-alford-quantitative-early-phase-ur-reprint.pdf
I’ll be uploading more papers soon.
Printing in landscape from Mac to HP LaserJet 1200 on Linux print server November 27, 2008
Posted by cnchapman in Technology.
Yes, this is a very specific post but having seen many questions about this online, I wanted to post my solution.
First the setup: an HP LaserJet 1200 connected to a Linux machine running the CUPS print server, which shares the printer out to my home network. Client machines include a Mac OS 10.3 notebook, a Win XP desktop, and a Win Vista notebook. All are set to use the HP 1200 PostScript and/or PCL driver that came with the OS.
The problem: printing from the Mac to the HP 1200 in landscape mode (from any app: Word, Excel, iCal, etc.) prints in portrait mode instead, with the edges of the page truncated. I could not find a driver update, and deleting and reinstalling the printer did not fix it.
Solution that worked for me:
(1) Go to the system’s printer settings and “add new printer”. Add it as a “Windows printer”, browsing to the workgroup and picking it. The printer should be detected and show up. (If you’re not using a Linux CUPS server, this step will differ; browse to the printer in whatever way fits your setup.)
(2) Give it a name you’ll remember, such as “HP1200v2”. Now the key part: for printer model/driver, do NOT use the LaserJet 1200 driver. Instead, use the “HP LaserJet 6 gimp-beta” driver. This should be available by default in Mac OS.
(3) Click OK, etc., to finish. Test it. Go back and delete your older HP1200 printer setup, and make the new one the default.
My new (old) book available on Freud’s critique of religion December 4, 2007
Posted by cnchapman in Philosophy, Psychology.
In 1988-89, while I was studying at Harvard University, I wrote a book-length text on Freud’s theory and criticism of religion. A brief version of the thesis was published in 1997 (Psychoanalysis and Contemporary Thought, 20:1), but I never found a chance to update the entire work sufficiently to publish it with a typical academic publisher.
The main argument, which is still insufficiently recognized in discussions of Freud’s ideas on religion, is that Freud’s criticism continued to rely on early psychoanalytic ideas that he later amended. If one views religion through his later psychoanalytic theory of anxiety, not all religious behavior would have to be viewed negatively. I demonstrate this through consideration of the modern theology of Paul Tillich.
Rather than let the manuscript languish, my author friend Greg Spira convinced me to make it available through the publisher Lulu. I’m also posting the PDF here for free. I recommend getting the printed version, since I think it’s easier to read. Either way, I hope it provides something of interest until I’m finally able to revise and bring it up to date.
Print version:
http://www.lulu.com/content/1443759
Thoughts on IQ testing for young children and school applications March 10, 2007
Posted by cnchapman in Psychology.
After having gone through an extensive process of school applications with my family, I’d like to address the question of IQ testing for school admissions. IQ tests are used by some public gifted programs as well as some private schools. This is a controversial issue, so I’ll simply issue a blanket apology in advance!
I can address this from two kinds of experience: (1) I was a clinical psychologist before moving into technology research and had extensive training and experience in both adult and child psychological assessment; (2) my own child recently took a standard assessment instrument to apply to school. This is a long post, but there are many important issues here and I’d like to try to be very clear. Note that I’m not an active psychologist any more (it’s been 7 years since I moved to a research position), so take my opinions as educated but not as professional advice.
BTW, in the main part of the post, I leave aside most questions about the validity of IQ assessment. People’s opinions are strong and vary for good reason. If interested, see the “Appendix” at the very end of this post for my position on those issues. Also the comments here are specifically directed at IQ testing for admissions purposes; they do not necessarily apply to IQ assessment used for diagnostic or counseling purposes.
First, there is the question of when to do an IQ assessment. There are assessments for kids as young as 3, but there are a few things to consider. Perhaps the most important is this: cognitive abilities develop quite differently both within and between children. It is not until around age 10 that one can assume all cognitive abilities have caught up with one another and more or less stabilized into the relative strengths and weaknesses a child will retain later.
What does that mean? Think of physical growth as an analogy. Just as kids go through growing spurts, cognitive abilities likewise can develop in bursts (the “switch” of speech turning on at age 1-3 is one good example). Just because a kid is “behind” at age 4 doesn’t mean that he or she will be at age 8. The same is true of different abilities compared to one another: for some kids, vocabulary may develop faster, while for others it may take longer, even if they ultimately arrive at the same place. In short, an earlier assessment is informative, but more variance can be expected the younger the child is.
Another implication is this: unless a child shows clear deficits within an assessment battery, the main information one gets is that things are “OK” with reference to some norm. It is much more difficult to read significance into the differences between scores (e.g., verbal vs. math) when a child is very young. So the information gained from testing is limited, apart from diagnostics. This is why so much assessment at very young ages is focused on identifying problems and how to address them. That is very different from, say, an assessment of a 12-year-old, where the relative strengths and weaknesses of skills can be assessed more clearly and reliably.
The second thing to consider is what action one would take on the basis of an assessment. Suppose a school has a particular cutoff (I’ll say more about that below). Given the variance factor, if you’re within 5-10 IQ points or so above or below that, it can be difficult to know what to expect from another test a year later. So the expected outcome can be difficult to plan ahead of time.
My suggestion for any assessment battery is to discuss what you’re looking for with the psychologist who administers the test. In particular, if the main purpose is school admission, I would suggest careful consideration of which assessment to administer. For instance, if you want to predict a Wechsler score (WPPSI, WISC), it would be most predictive to give a Wechsler test – but this has to be weighed against the fact that the schools will know about the repeated assessment, and that you’d learn less overall than if you gave different assessments (e.g., Wechsler and Stanford-Binet).
I would offer one strong suggestion about an assessment: do not use the words “IQ” or “test” with a child. This process should be viewed as fun and enjoyable, not as something to stress about. If they feel that they’re being tested, it may affect the results negatively. I would describe it instead as something like, “you’ll get to do a lot of different activities, such as looking at pictures and playing with blocks.” The psychologist can take it from there.
Third is the question of how these assessment issues relate to schools and choosing among them. I don’t have a good answer there, except that I think it is a matter of the fit between the child’s temperament, the family’s goals, and the school’s emphases. Family goals differ a lot in education: some families view academic rigor as the sine qua non of schools and want a school to emphasize reading, math, and other such performance measures. Other families want to encourage artistic, physical, and emotional development along with academics.
Likewise, there is the question of the individual needs of the child. Some gifted children have special emotional needs and benefit from being in homogeneous groups with other gifted kids, while others may do better in a more mixed environment where they interact with a diversity of others. Although kids can no doubt learn academic material “faster” in an accelerated environment, that doesn’t mean that every gifted child should do so. There is plenty of time in life (high school, college, grad school) to learn academic material; it just depends on the goals and temperament of the child.
In visiting a number of schools, I’ve seen significant variation on all of those dimensions. Some emphasize academics, while others emphasize a “balanced” curriculum. Some focus on building a close-knit environment where elementary students spend most of their time with one teacher, while others have kids changing activities and teachers multiple times a day (sort of like high school).
Finally, it’s unsolicited, but I would share one concern I have as a former psychologist: I think it is questionable whether IQ tests should be used to enforce a cut-off criterion for admissions. IQ tests were developed primarily to assess individual strengths and weaknesses – not to serve as selection criteria. This is markedly different from, say, the SAT, which has been explicitly designed for selection purposes. Suppose a school has a cutoff of, say, the 96th percentile on an IQ test. How did they determine that the 96th percentile is the correct cutoff? How do they know that the test is reliable? Individual scores can easily vary by 5-10 points from one administration to another.
Also, how do they know that the test used for selection has been administered appropriately? Simply using a licensed psychologist is not enough. Selection tests such as the SAT are kept secure and secret until administered in order that everyone taking them is on a level playing field. IQ tests, however, are not secret – it is easy to take the same test multiple times, or with a bit of research even to find the items and their answers (the list of items does not change from person to person or time to time).
My take on it is that some schools like to use the IQ cutoff for two reasons: (1) it makes admissions easier for themselves (esp. in public school gifted programs, where they don’t want to argue endlessly about decisions), and (2) for marketing (it makes people feel good to be in the top X%). That doesn’t mean a school is wrong to use them for those purposes, nor that the school is “bad” for using them in an arguably questionable way. Rather it’s just that one should be clear about what’s happening.
A better approach, IMHO, is when IQ assessment is used not for cutoff purposes but rather as a way to get a better picture of individual strengths & weaknesses. For instance, one school explained that they don’t want classrooms full of kids who are all stronger on one dimension (e.g., language) vs. another (e.g., math), and they use the results both to understand more about each child’s needs and to select balanced classes. Given that young children’s performance varies so much over the course of development, that usage is also suspect in practice, but it makes much better sense than an arbitrary cutoff. As always, I’d certainly suggest discussing those concerns with both one’s psychologist and the school.
Appendix: My $0.02 on IQ “validity”
As a former psychologist, I’d summarize my take on IQ like this: IQ is scientifically “real” in the sense that it can be measured repeatedly and reliably. IQ is “not real” if one takes “real” to mean that it completely defines one’s intelligence or potential. IQ is at least partially determined by both genetics and environment, but researchers differ over how much of a contribution each makes (not to mention that the two are highly confounded). Studies of identical twins (including ones separated at birth) suggest that IQ is somewhere around 40-70% determined by genetics – which means 30-60% determined by environment. IQ can be modestly enhanced through better environments, but when there is a stable family structure with a decent environment, trying to do much more is unlikely to change it.
Some researchers and others have argued that IQ tests demonstrate “cultural bias” (e.g., to the advantage of traditional & suburban families, European heritage, etc.), but the research evidence for that is not completely clear. Top-notch assessment developers (e.g., Wechsler, Stanford-Binet, Kaufman) are very sensitive to those issues in the latest versions of their tests and have attempted to address them. (I don’t want to get too deeply into the cultural issues here – but it’s something to discuss with a psychologist, if that is of concern.) Again, it’s important to realize that IQ doesn’t measure everything – it’s just one particular subset of cognitive skills (which vary somewhat from one test to another).
IQ is one of the best predictors — but not perfect, of course — of later success in strongly academic subjects such as math, vocabulary, etc. However, those are only modestly predictive of other kinds of success in life because there are many other ways to succeed or fail. IQ is modestly affected by “practice effects” of taking the tests repeatedly, but psychologists are on the lookout for that and there are some ways to control for it (e.g., by using a different test, or reporting that it has been taken before).
Once one achieves a “high enough” IQ (top 15% or so), then virtually all potential careers are open in terms of intellectual ability – the differentiating factors are likely to be personality, opportunity, motivation, etc. For instance, the average IQ of attorneys and physicians is said to be the same as that of professors at Ivy League colleges (it’s supposedly between 120-126). In other words, once that level is reached, the differentiators lie elsewhere. There are a few exceptions (e.g., physicists and philosophers are likely to be very high), but even then it is not IQ that is the ultimate predictor of either interest or success.
Even within the areas that IQ tests assess, some skills are more important than others for various professions. For instance, it would be difficult for someone with relatively lower verbal performance to be an attorney or writer, but that wouldn’t necessarily stop them from being, say, an engineer or an accountant. It’s also important to remember that the vast majority of people are within 1 standard deviation of “average” (85-115) and do quite well in life (or, rather, whatever problems they have lie elsewhere).
Finally, there are many other kinds of “intelligence”. That is not to deny that IQ is “real” in some abstract sense, but there are also other abilities that can be just as important: “emotional intelligence”, artistic and musical ability, physical/mechanical aptitude, and so forth. Our society tends to focus strangely on “intellectual horsepower”: society mocks it on the one hand, yet regards it as supreme in some other ways (e.g., in some kinds of academic selection). It is feared and worried about — perhaps because it is rather observable and has the property of being important for some kinds of success yet is largely fixed early in life, which goes against the American egalitarian view.
In short, don’t take IQ for more than it is, or even worry about it until you have a reason to do so. At the same time, that doesn’t imply that one must deny whatever it does mean.
Update on the Toshiba IK-WB 15a August 14, 2006
Posted by cnchapman in Technology.
As promised, I wanted to share some of my experience with the new IKWB 15a. In a nutshell, after using it for 3 days, it seems to be everything that the 11a camera should have been. The 15a is very stable (no reboots yet) and responsive. Images load and are streamed rapidly. Image quality is very good. Low light performance is especially good — with auto B&W on, it switches from color to B&W images in low light. These yield crisp, satisfactory photos outside in the middle of the night with street lights on.
Note that I use the camera exclusively with Cat 5 cabling, so performance comments are about 10/100 Ethernet access, not Wi-Fi.
There are a number of changes to the firmware, mostly to add a few functions and make the HTML pages easier to navigate. One change that I especially appreciated: the FTP recording option can now use a fixed file name. This option has the camera initiate an FTP upload of images either on a schedule or in case of an alarm such as motion detection. In the 11a camera, the uploaded files were named with a time & date scheme, such as “LV-NWCAM1-20060815-010544.jpg”. In the 15a, that is still an option, but it is also possible to give a fixed name to the file, which is overwritten on each upload. For instance, something like “webcam.jpg”. That is helpful if you want to post the image directly to a web server.
In my case, this FTP recording feature would make it possible to feed the image directly into a directory where “motion” can examine it. In other words, it makes the most essential feature of my self-written recording program (see previous posts) unnecessary! On the other hand, you’d have to set up and run an FTP server, which I prefer not to do (even on my LAN) because of security holes. Might be a future possibility to put a standalone FTP server in my network DMZ, though.
One change on imaging is that there is no longer an option for 800×600 images. Maximum is the same at 1280×960, but the next lower option is 640×480. That is OK with me, since 800×600 doesn’t work well with “motion” anyway. The faster responsiveness of the 15a makes the 1280×960 images stream very nicely, either as mjpgs or static jpgs.
There is also additional attention to detail on the hardware side. The size and appearance of the camera are unchanged, but I noticed two nice details. First, the 15a camera comes with an extension power cord on the DC side. This is handy for exterior mounting in particular: you can run the extension cord through a wall, and then if the AC/DC brick ever dies, you won’t have to pull the cord out of the wall to replace the transformer. Second, there is a small wrapping strip that fits inside the power/Ethernet connector area on the camera and wraps around the power cord. This helps hold it in place, so the power connector won’t come loose.
Overall, I’m very satisfied with it and recommend it highly. The combination of great pictures, interior/exterior mounting, improved firmware, and a nice lineup of features makes it a great deal at its price point (I found one, now gone, at $479, but anywhere up to $550-600 would be reasonable).
Problem with Delphi Indy 10 components August 13, 2006
Posted by cnchapman in Technology.
I added a new camera to my setup this weekend, and got one of the new Toshiba IK-WB 15a cameras — that’s the new model that replaces the IKWB 11 line. So far, it seems like a very good camera. It’s more responsive and seems more stable than the IKWB 11a series.
However, when I pointed my self-written camera recording program at it, I couldn’t get a picture. I guessed that it might be due to a new authentication scheme in the camera, and that perhaps my older Indy HTTP components (used in my program) were not compatible. So I upgraded from Indy 10.0.76 to the latest 10.1.5. (http://www.indyproject.org/download/Files/Indy10.html)
When I did that and ran Delphi again, I got this error: “Cannot load package ‘IndySystem50’. It contains unit ‘filectrl’ which is also contained in package ‘Vclx50’”. Searching the Delphi 5 help boards turned up lots of people with the same problem, but no one had posted a solution. As it turns out, all I needed to do was reinstall Indy 10.0.76. There appears to be a bug in the 10.1.5 compiled packages, at least for Delphi 5. Use 10.0.76 instead.
As it turns out, the authentication problem with the new 15a camera was simply that it requires a user name and password to grab images (which 11a did not). In my program, I just added a user name and password to the “user” and “password” properties of the Indy HTTP component. Works great now!
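In case it helps anyone with the same camera, here is a rough sketch of that change. It is illustrative only, not my full program: the camera address and credentials are placeholders, and the exact property names may differ slightly between Indy versions.

uses
  Classes, SysUtils, IdHTTP;

// Sketch only: fetch one frame from the IK-WB 15a with HTTP basic
// authentication and save it to disk. Address and login are placeholders.
procedure GrabAuthenticatedFrame;
var
  Http: TIdHTTP;
  Img: TMemoryStream;
begin
  Http := TIdHTTP.Create(nil);
  Img := TMemoryStream.Create;
  try
    Http.Request.BasicAuthentication := True;    // the 15a requires a login
    Http.Request.Username := 'camera-user';      // placeholder user name
    Http.Request.Password := 'camera-password';  // placeholder password
    Http.Get('http://192.168.1.50/__live.jpg?&&&', Img);
    Img.SaveToFile('current.jpg');
  finally
    Img.Free;
    Http.Free;
  end;
end;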
More on the new Toshiba IKWB 15a camera soon …
Creating a home security camera system, part 3 February 10, 2006
Posted by cnchapman in Technology.
Getting the picture. In this part, I talk about how to get the images from the Toshiba camera so they can be used by other programs for archiving and motion detection. This was trickier than I anticipated.
As mentioned earlier, I selected the Toshiba IK-WB11A camera for the visual part of the network. The camera worked well right out of the box. There are various options that can be set on it, and they are self-explanatory.
My first requirement for this system was to store pictures continually, regardless of whether motion was detected in the images. That way, whether the motion detection algorithm works or fails, I will always have a real-time backup (backed up in two places). To do that, I needed to access the images in real time.
This posed a problem: how to access the Toshiba’s photos without using a web browser or their software. The documentation does not describe how to capture the current image as a simple JPG file. Eventually I found the answer from another user online: access the current image at http://your.ip.address/__live.jpg?&&&
With that working, I wrote a Windows program using Delphi that would retrieve the images every second (or at any specified interval) and save them to the network locations. This uses the Delphi Indy socket components for the http download, http://www.indyproject.org/. I won’t post my code, as there are good examples online already, such as http://www.swissdelphicenter.ch/torry/showcode.php?id=2391.
Unfortunately, if you want to archive the images indefinitely, you will need to invest in a program to read the image files and save them. There are a lot of ways to solve that, ranging from writing your own (as I did) to buying one online. The Toshiba camera offers the onboard ability to detect motion and send email. However, I did not want to bog it down sending mail and the like — I wanted a fast, responsive camera, not a camera trying to be a server.
Now the second problem arose: I wanted to do motion detection on the live images. If there was motion detected from one frame to the next, I wanted the frame with motion to be stored on an external machine (not in my house) in real time and also backed up to various local locations. I set up “motion” on my linux server to monitor the images. It likes to look at a live stream, not at sequentially saved files with different names. So I pointed it to the camera URL as above — but it didn’t work. It seemed that it could not authenticate successfully against the camera, and thus could not get the image.
I solved this with a two-step approach. First, I modified my file-saving program to save the latest image for a camera to a specific, unchanging file name (in addition to the incrementing file names for the archived images). Second, I installed the Abyss web server http://www.aprelium.com/ (free for personal use) and configured it to serve up the current camera image files on my network. Now Motion could look at the file for a given camera and run motion detection perfectly.
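For the curious, here is a stripped-down sketch of that saving step, using the same Indy HTTP component mentioned above. It is illustrative only, not my actual program: the camera address, paths, and names are placeholders, and a TTimer would call it at whatever interval you want.

uses
  Classes, SysUtils, IdHTTP;

// Illustrative sketch: grab the current frame, archive it under a
// time-stamped name in two places, and overwrite a fixed-name copy
// that the Abyss web server exposes for "motion" to watch.
procedure SaveCurrentFrame;
var
  Http: TIdHTTP;
  Img: TMemoryStream;
  Stamp: string;
begin
  Http := TIdHTTP.Create(nil);
  Img := TMemoryStream.Create;
  try
    Http.Get('http://192.168.1.50/__live.jpg?&&&', Img);
    Stamp := FormatDateTime('yyyymmdd-hhnnss', Now);
    Img.SaveToFile('D:\cam-archive\cam1-' + Stamp + '.jpg');   // local archive
    Img.SaveToFile('\\backup\cams\cam1-' + Stamp + '.jpg');    // second archive location
    Img.SaveToFile('C:\webroot\cam1-latest.jpg');              // fixed name served to "motion"
  finally
    Img.Free;
    Http.Free;
  end;
end;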
Here’s a screenshot from my application, showing how it archives the images to multiple places and to the web server location. (Note that the picture quality is low because the hard drive box is very close to the lens.)
This also solved one of my other requirements: being able to monitor the cameras. I can simply point a web browser at the appropriate file on the web server. Of course that only runs locally, since I don’t want to serve up images on the Internet. (That would be easy to configure with Abyss, just not necessary for me, since any images with detected motion are stored externally anyway.)