Testing websites for accessibility with automated tools provides much useful information, especially with respect to conformance to technical standards. Unfortunately, websites can conform to standards and still prove highly unusable for people with disabilities. This session proposes that, to be fully effective, accessibility evaluations should include contributions from three types of examination: (1) use of automated tools where appropriate, (2) human inspection of the underlying code of the site, and (3) hands-on user testing of key portions of the site by examiners with disabilities, following a use-case-oriented protocol. User testing by people with disabilities not only quickly identifies technical barriers but also surfaces significant usability issues arising from errors of omission and points of ambiguity, issues that are often missed both by automated testing and by human testing performed by people who do not face the challenges that users with disabilities must contend with. This session will discuss the critical importance of including skilled user testing when evaluating websites and web-based applications for accessibility.