[John's Entries]

29 October 2003

A Couple of Items from APS

A couple of issues have come up around the APS interface and reports:

  • Where do you display cues? How should cues be formatted? Should example cues be displayed differently than meta cues?
  • What styles are applied to the section headers within findings reports?

Thanks, --JB

23 October 2003

Adventures in TWiki Land

Earlier this week, Colin and I had a personal growth opportunity: posting some changes to documentation on TWiki. Of course, when we went to perform this feat, none of our partners in crime were around -- you know, the usual suspects: Chris, Jessica, and Laura. After a couple of false starts, we were successful. I thought I should describe our steps, so that if I forget in the future, I can come back here and refresh my memory. If any of you have an easier way of posting, please comment. Although I like TWiki, its organization and processes seem a little topsy-turvy. So here's what we did (eventually) to post an updated document on TWiki for APS:

  1. Go to TWiki at http://collaborator.
  2. In the left navigation menu, click the eSolutions link.
  3. On the eSolutions Welcome page, click the eSolutions Groups link in the right column under Contents.
  4. On the eSolutions Group Pages page, click the User Research and Interaction Design link in the Business Groups and Links section at the top of the page.
  5. You might want to bookmark the UserResearchAndInteractionDesign page.
  6. On the UserResearchAndInteractionDesign page, click the Attach icon and link in the page header.
  7. On the Existing Attachments page, click the Action link for the document that you want to update.
  8. At the bottom of the screen is a Local File field. Click the Browse button to locate the updated document that you want to upload to TWiki from your computer's hard drive.
  9. After you have located the file (and the Local File field is filled in), click the Upload button at the top of the page.
  10. Poof!

Wasn't that easy?

17 October 2003

Book List

Publication | Author | Whose Cube?

Requirements
Writing Effective Use Cases | Cockburn | JS
Exploring Requirements: Quality before Design | Gause and Weinberg | JS

Information Architecture
Information Architecture | Garrett | JS
Practical Information Architecture | Reiss | JS
Information Architecture for the WWW | Rosenfeld and Morville | JS

Design
About Face 2.0: The Essentials of Interaction Design | Cooper and Reimann | JB, JS
Inmates Are Running the Asylum, The | Cooper | JB, JS
Don't Make Me Think | Krug | JB, JS
Designing Web Usability | Nielsen | JS
Paper Prototyping | Snyder | JB, JS
Design of Sites, The | Van Duyne | JB, JS

Web Standards
Web Design on a Shoestring | Bickner | JB
Web Bloopers | Kauffman | JS
Web Design WOW! Book, The | Merritt | JS
Click Here | Pirouz | JS
Designing with Web Standards | Zeldman | JB

CSS and HTML
Cascading Style Sheets: Separating Content from Presentation | Briggs, et al. | JB
HTML 4 for the World Wide Web, Fourth Edition | Castro | JB
Cascading Style Sheets: Designing for the Web, 2nd Edition | Lie and Bos | JB (not my book...)
Web Style Guide | Lynch and Horton | JS
Cascading Style Sheets: The Definitive Guide | Meyer | JB
CSS Pocket Reference | Meyer | JB
Eric Meyer on CSS | Meyer | JS
HTML: The Definitive Guide | Musciano and Kennedy | JB

Scripting
Beginning PHP4 | Choi et al. | JB
JavaScript: The Definitive Guide | Flanagan | JB
PHP Essentials | Meloni | JB

MySQL
MySQL | DuBois | JB
Build Your Own Database Driven Web Site Using PHP and MySQL, Second Edition | Yank | JB

But Wait... There's More!
Persuasive Technology: Using Computers to Change What We Think and Do | Fogg | JB

15 October 2003

Books and Book Lists

I've just started reading Jeffrey Zeldman's book, Designing with Web Standards. It's looking to be a good read (yeah, I got no life). I think Chris has this book, too. I also purchased Eric Meyer's Cascading Style Sheets: The Definitive Guide and its pocket reference. If you want to borrow these books, just let me know.

I know at one time we were going to inventory the books we had among our various desks. I also have quite a few books at home that others might want to borrow (and even read). I think it would be useful to maintain a book list here on URID.org. I'll even volunteer to do it, if you all think it would be worthwhile.

2 October 2003

Some Experiments with HTML/CSS Prototypes

A couple of weeks ago, I had a discussion with Erika and Bill about the shortcomings of wireframes used in evaluations. I put in my two bits that we should really test with HTML prototypes. Bill pointed out that HTML prototypes had a couple of drawbacks in our work here: they require additional resources to create, and people who see them tend to think that they are the "real thing" -- decision makers can be seduced by the prototype and believe that the design is already baked.

So I think the requirements for HTML prototyping come down to this: we have to build the HTML prototypes very quickly and on the cheap, and they can't be mistaken for the final design.

Friday or Monday, Chris sent me some "cool" links that address these issues head-on. Chris, I don't know where you find these websites, but they are gems. The upshot is that I could quickly create an interface, including form elements. Being the geek that I am, I wanted to know how effectively this could be applied to a template and across a web site. My personal website (which you should never visit at work) is template driven. In about 15 minutes I designed a new layout template and applied it to my web site. The results, while aesthetically challenged, showed me that an HTML prototyping approach is feasible.

So I guess I'd better list my caveats. I acknowledge that I have a fair amount of experience with HTML and CSS. If you want to build HTML prototypes quickly with these tools, you need a working understanding of CSS and HTML.

I suggest the books Cascading Style Sheets: Separating Content from Presentation (Briggs et al., Glasshaus, 2002) and Cascading Style Sheets: Designing for the Web, 2nd ed. (Lie and Bos, Addison-Wesley, 1999). For HTML, I always suggest Werbach's Barebones.txt.
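Since I can't paste the tools themselves into this entry, here's a minimal sketch of the kind of page they generate (the ids, class names, and form fields are all made up for illustration). The deliberately gray, dashed-border styling helps keep anyone from mistaking the prototype for a finished design:

```html
<!-- Hypothetical bare-bones prototype page: one stylesheet block
     drives the whole layout, so swapping the CSS re-skins every
     page built on the template. -->
<html>
<head>
<title>Prototype -- not a final design</title>
<style type="text/css">
  body  { font-family: sans-serif; background: #eee; }
  #nav  { float: left; width: 20%; border: 1px dashed #999; }
  #main { margin-left: 22%; padding: 1em; border: 1px dashed #999; }
  .stub { color: #666; }  /* placeholder text */
</style>
</head>
<body>
  <div id="nav">
    <p class="stub">[navigation goes here]</p>
  </div>
  <div id="main">
    <h1>Loan Search (prototype)</h1>
    <form action="#">
      <p>Loan number: <input type="text" name="loan" /></p>
      <p><input type="submit" value="Search" /></p>
    </form>
  </div>
</body>
</html>
```

Because the layout lives entirely in the stylesheet, changing the CSS block restyles every page built on the template, which is what made my 15-minute redesign possible.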

And now, Chris' super-cool links that changed my life:

All of these tools generate HTML code and the underlying CSS. These really are better than sliced bread!

29 September 2003

Flavors of Remotely Evaluating Applications

We've now had two experiences with remote users looking at and responding to wireframes. While it may have felt a little bumpy, we should declare victory, document our process, and move forward in the bright, brave light of few resources for user research.

As Erika pointed out in a discussion that she and I had after Friday's session, what she was doing was market research, not user research. Because of the constraints of the evaluation situation that we were in, we were collecting user responses, not user behavior. That is not to say, however, that we don't gain valuable insight from the technique that she employed.

In the sessions that we've had so far, we have to remember that no matter how low-fidelity the evaluation, or constrained the process, we gain value from communicating directly with users. We also gain experience and confidence in our ability to facilitate.

Some Evaluation Techniques

I'm proposing a couple of dimensions for evaluation that I think our team should consider:

  • Facilitation dimension
  • Stimulus dimension
Facilitation Dimension

The facilitation dimension extends from a facilitated phone conversation in which the facilitator cannot see the user or the user interaction to a full-blown usability test in which the facilitator is looking at the user and observing the user's interaction.

Phone Facilitation. The facilitator talks with the user over a phone line. The facilitator cannot see the user or the user's behavior. In the APS example, Jessica and Erika talked the user through the wireframes, asking the user questions, and documenting the user's responses. While it's evident that phone facilitation does identify issues, the facilitator is flying blind. Here are some issues:

  • The facilitator cannot validate user behavior.
  • The facilitator may have to coach the user to elicit a response.
  • User responses tend to be more about the application functions and less about the user experience.
  • Evaluation results are not valid and may not be actionable.
  • The evaluation is relatively inexpensive.
  • We need to have a discussion about the kinds of questions that are appropriate for this kind of situation.

NetRaker Facilitation. Using NetRaker, the facilitator can see the user's interaction with the stimulus, but may miss other important behavioral cues. NetRaker should make it possible to validate user behavior. Its clickstream capabilities will also provide valuable information. Issues:

  • While not flying blind, the facilitator still does not have a complete picture of the user experience.
  • Because the evaluation can be conducted in the user's workplace, the evaluation may have an added realism that a usability test doesn't have.
  • The evaluation protocol can be modeled on a usability test protocol.
  • The results may be valid and actionable.
  • The evaluation requires resources, but can probably be absorbed without it becoming a separate project expense.

Usability Test. Certainly, our team has the most experience with formal usability testing. AIR facilitators know what they are doing. Tests are well documented with notes, tapes, and reports. While this kind of evaluation has enormous constraints, it also provides authoritative results:

  • The testing situation is artificial.
  • Fannie Mae doesn't have lab facilities.
  • Because we farm the testing out, AIR provides an expert seal of approval--it's not just us saying something.
  • The results are valid and actionable.
  • Usability tests are difficult to set up and expensive to run. We can't hide these in project expenses.
Stimulus Dimension

The stimulus dimension extends from wireframes through a full-blown application, or non-interactive to interactive.

Wireframes. Wireframes document the application interface, but don't provide any interaction:

  • The lack of user interaction with the prototypes is a major drawback.
  • Because wireframes are produced in the course of our work, they are always available for evaluation.
  • We discovered in the APS evaluations that the "flatness" of the wireframes and the constraints of their HTML rendering made it difficult for users to see options, links, and buttons.
  • Wireframes are amenable to remote evaluation, but don't have high enough fidelity to be used in a usability test.
  • Users could not see the contents of drop down lists.

Paper Prototype. Paper prototypes have traction during the early design process. We should consider adding them to our evaluation tool suite.

  • Paper prototyping may be too expensive to test with end users because of recruiting and facilities requirements.
  • Paper prototyping requires face-to-face facilitation (although we might experiment with remote evaluation via videoconferencing).
  • Paper prototypes are useful for discovering usability issues with "surrogate" users (business owners and subject matter experts).
  • Paper prototypes are inexpensive to devise, and infinitely malleable during the design/evaluation process.
  • Paper prototypes are not easily adapted to remote evaluation techniques.

HTML Prototype. HTML prototypes provide interaction and navigation, creating opportunities for validation that aren't possible with wireframes.

  • HTML prototypes require extra resources that our team doesn't have.
  • Developing an HTML prototype template would help mitigate the resource issue.
  • Prototype evaluation can validate user behavior.
  • Prototypes can lull observers into believing that application development is much further along than it actually is.
  • HTML prototypes are amenable to the full range of evaluation techniques.

Application Prototype. Application prototypes can be used to evaluate limited application functionality.

  • If a development team uses iterative development, an application prototype provides an opportunity for validating the user's experience with the prototype's functionality.
  • Results can be folded into a subsequent iteration.
  • URID resources would be used only to evaluate the prototype, not to develop it.
  • Technical difficulties are the order of the day when working with application prototypes. They tend to be buggy, and setting up an evaluation requires working through firewall, environment, user id, and test data issues.
  • Application prototypes are amenable to the full range of evaluation techniques.

Working Application. Evaluating a working application provides the most complete coverage of functionality.

  • URID resources would be used only to evaluate the application.
  • Any evaluation results would have to be folded into the next (or subsequent) release of the product.
  • There will be some technical issues, but probably not on the scale of testing an application prototype that's in a testing environment.
  • Applications are amenable to the full range of evaluation techniques.

Analysis

Use of resources moves from phone facilitation -> usability test and from wireframes -> HTML prototyping -> working application. Validity goes in the same direction. However, actionability doesn't. Usability findings are more actionable early in the development process. The evaluation models we employ should provide:

  • no or little additional resource (at least for URID) commitment
  • facilitation that captures significant user behavior
  • findings that are readily actionable by the development team.

For the most part, NetRaker provides a facilitation model that captures important user behavior without the expense of a full-fledged usability test. An augmented wireframe or HTML prototype provides some degree of user interaction (I vote for the HTML prototype). Wireframe and prototype development also occur at the point in the development process where the results from a usability evaluation will be the most actionable.

26 September 2003

Loan Origination Overview

I posted a PowerPoint slide on TWiki that is a start at explaining the origination process. Some of the elements on the slide are conjecture; most are not. This is a 30,000 ft (9,144 m) view. Please comment.

Training Course: Introduction to the Real Estate Finance Industry

I just got back from the intro mortgage banking class. It’s a good class (pretty intense) with some good instructors from the industry. While parts of the class were pretty esoteric (present valuation models of servicing portfolios, for example), the class also provided an overview of the players in the industry, their (love/hate) relationships, and some of their general processes. I recommend that you take the course soon, because I think it gives us an excellent background for our observational research with industry professionals.

The class is intense, with lots of reading material. It’s mostly lecture format, so be prepared to sit for long periods of time. The interchange between instructors and students was open and encouraged. Sometimes this got in the way of the class message, because some FM people like to hear themselves talk. On the other hand, it was great to hear the mortgage banker perspective, then the FM response. In the process, you learn a lot about FM’s business, too, which can’t hurt.

If you want more information about the class, I have the notes and the textbook at my desk. I’d be happy to talk with you about it. My advice: enroll and attend. Some last words: the food was good. The instructors were engaging. The shuttle out to Herndon was a little piece of heaven.