I selected web analytics and content analysis as the two analysis methods for this assignment.
Web analytics, a subdivision of which is referred to as search-log analysis (Morville & Rosenfeld, 2007, p. 248), has pros and cons. On the pro side, it provides useful data about how users actually use the search function. From a design perspective, it yields useful information such as which browser was used, which pages were visited and when, and which key search terms were used (Lynch & Horton, 2008, p. 58). On the con side, web analytics (and its variations) does not yield the kind of information that could be derived from direct interaction with users to learn their needs firsthand (Morville & Rosenfeld, 2007, p. 38). It also requires significant preparation and training in order to use sophisticated tools and processes. Data collected and analyzed may include the most common search queries and reconstructions of elements of a website session.
Content analysis, in this case of surveys, also has pros and cons. Pros include low cost, straightforward analysis, and minimal preparation (McNeely & Kolah, 2012). Cons include the unscientific nature of the data received, the inability of the survey instrument to elicit user opinions about hypothetical changes or improvements, and the failure to capture the gap between users' reported behavior and their reported opinions and perceptions (McNeely & Kolah, 2012). McNeely and Kolah also refer to social desirability bias, the tendency of an interview subject to try to please the interviewer, which could be a factor in survey results as well. Content analysis of this type does not entail significant preparation before designing the method. Data collected might include user-experience aspects and existing behavior patterns and trends.
Considering the web design process outlined in the slides, web analytics might be most useful and appropriate in the analysis phase, for learning about users and conducting task analysis. Content analysis might be most useful in the design and administrative (test and refine) stages.
Jasmine: I didn’t think about it as I was reading the articles and chapters, but somehow your explanation reminded me of the “balanced scorecard” approach to measuring program performance in government during the Bush II years. It was a sort of benchmarking exercise that was less competitive across agencies and more competitive across business units within an agency (or at least that’s how it was where I worked). A red light on your scorecard meant your unit fell short of the goal and you had to stay late every night to try to get to yellow. A yellow light meant progress, but no cigar. And a green light meant performance pay for your boss, and at the worker level, promotions and cool next assignments.
I like your explanation of focus groups, Kirsten. It makes me reconsider one of the articles (McNeely & Kolah) that made a reference to social desirability bias with respect to expert interviews, which might also be applicable to focus groups. Of course, any degree of homogeneity among a group could give rise to biases, and groupthink, though not mentioned, could also play a part in skewing collective responses.
“A term first coined by Irving Janis, groupthink is the condition caused by social forces that causes contributors to focus on homogenous ideas and even unconsciously agree with faulty thinking. In order to collaborate effectively in a group setting we need to develop techniques to avoid this issue. If we look to high-performing co-located teams for inspiration, their environment facilitates collaboration when appropriate rather than at some artificially selected time.”