
R4: Efficacy Studies

Status

 

Last Updated on: October 11, 2005

Project start date: October 1, 2003.

Anticipated Outputs and Outcomes

The anticipated findings of the collective efficacy studies include evaluative conclusions about the quality and value (merit and worth) of the products transferred by the T2RERC, from the consumer perspective. Product efficacy is a major indicator of process effectiveness; thus, the findings of the efficacy studies imply a validation (or not) of the T2RERC transfer process.

Through iterative testing and refinement of protocols and instruments over the course of the project, we expect to develop a resource guide for product evaluators. The guide will contain a set of indicators of product quality and product value, along with the corresponding assessment protocols and instruments. The indicators should be grounded both in end-user needs and in designer insight, and should be device-independent and generic enough to cover product diversity. The accompanying guidelines should show how to apply them to specific product cases through branching criteria. A first draft of the guide was assembled in Year One.

Phase I of each case study will develop the indicators and the corresponding assessment protocols and instruments. Phase II (laboratory trials) will produce evidence about the T2RERC product's performance (effectiveness and efficiency) and its usability for the consumer in comparison with the product's most equivalent marketplace competitor, when available. Phase III (home trials) will obtain similar evidence in comparison with the product's critical competitors, i.e., the other alternatives available in the user's home. It will also obtain further evidence of the product's value by assessing its abandonment (or acceptance) and actual use by the consumer, including the product's versatility across the user's different life roles and its affordability. The cumulative findings will provide evidence of the impact of the transfer process on the quality of life of consumers with disabilities.

Project Update:

Year One:

We chose the Lids-Off Automatic Jar Opener by Black & Decker as the first case for the efficacy study. This device is an innovative mainstream product designed to accommodate persons with limited hand function, including the elderly, and was brought to market through Black & Decker during our last funding cycle. Its transfer path very closely followed the idealized technology transfer process, making it an ideal candidate for drawing valid evaluative conclusions about the effectiveness of the technology transfer model. Additionally, in light of the product's sales success in the mainstream market, it would be interesting to see how well it is valued by our specific target population, i.e., consumers with disabilities. We selected the "Open-Up" commercial jar opener as the comparison product to the Lids-Off in the laboratory trials; as per the proposed criteria, this was the most equivalent commercial product on the market at the time the T2RERC product was transferred. Consumers used their current alternative jar-opening strategies for their home-trial comparisons.

After completing preparatory activities for project operations and fulfilling the University's IRB requirements concerning human subject involvement, we developed a set of indicators, as well as generic protocols and instruments, for the study of product efficacy, using the Lids-Off as our frame of reference (pilot case).

Indicator development: We developed a list of product efficacy indicators (measures of quality and value) centered on device performance, device usability, user satisfaction, user acceptance, and device use. We drew our core list from empirical observations of consumers performing the task of opening jars. We then refined and enhanced these core findings based on a literature review and an analysis of records and tools from our two previous funding cycles, mainly the studies related to the Consumer Ideal Product (CIP) project.

Our empirical data gathering followed a task-analysis approach, in which a team of design experts and the study's research team independently observed the task of jar opening, analyzed the user-device-environment interactions, and proposed relevant indicators. We arrived at the final listing by triangulating the individual listings. This ensured that both designer and user perspectives went into defining the indicators of quality and value.

The research team interviewed consumers and video-recorded these scenarios. Five consumers with disabilities involving limited hand function were recruited. The research team visited each participant at their residence, interviewed them about their current method of jar opening and the benefits and challenges it presented, and asked them to demonstrate the jar-opening task while explaining in detail why they were doing what they were doing. This gave us direct access to user insights. The research team then reviewed the video recordings and derived a set of preliminary indicators, first individually and then by triangulation; these indicators primarily reflected the user perspective. In a later session, three design experts were invited to observe the video recordings; they systematically worked through the five cases and arrived at their own sets of indicators. By triangulating among these, the researchers drew a unified set of indicators that specifically captured the designer insight.

We combined the indicators resulting from the task analyses with those drawn from the human factors literature, as well as previous work on product evaluation and universal design, into a single consolidated listing. We further classified them under the eight consumer need categories used by the T2RERC. We thus produced a two-dimensional classification, with the eight consumer categories on one dimension of the matrix and the seven Universal Design (UD) principles on the other, and housed the indicators in the appropriate cells of this matrix.
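As an informal illustration of this structure (not part of the study's instruments), the matrix can be thought of as a lookup table keyed by a consumer need category and a UD principle, with each cell holding the indicators that belong to that pairing. In the Python sketch below, the seven UD principles are the standard ones published by the Center for Universal Design; only the three consumer categories named in this report (operability, comfort, safety) are listed explicitly, with placeholders standing in for the remaining T2RERC categories, and the sample indicator is hypothetical.

```python
# Illustrative sketch of the two-dimensional indicator matrix.
UD_PRINCIPLES = [
    "Equitable Use",
    "Flexibility in Use",
    "Simple and Intuitive Use",
    "Perceptible Information",
    "Tolerance for Error",
    "Low Physical Effort",
    "Size and Space for Approach and Use",
]

CONSUMER_CATEGORIES = [
    "Operability", "Comfort", "Safety",        # named in this report
    "Category 4", "Category 5", "Category 6",  # placeholders for the
    "Category 7", "Category 8",                # remaining T2RERC categories
]

# Each cell holds the indicators filed under that category/principle pair.
indicator_matrix = {
    (cat, prin): [] for cat in CONSUMER_CATEGORIES for prin in UD_PRINCIPLES
}

# Hypothetical indicator drawn from the jar-opening task analysis.
indicator_matrix[("Operability", "Low Physical Effort")].append(
    "Grip force required to start and operate the device"
)
```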

Instrument and protocol development: The distribution of indicators in the two-dimensional matrix gave us an initial basis for separating and distributing the indicators between the laboratory and home settings. Not surprisingly, indicators in some of the consumer categories (e.g., operability, comfort, safety) loaded on all seven UD principles, while the others did so to a lesser extent. This suggested that the former set is oriented toward measuring usability and is feasible to observe in the laboratory setting, whereas the latter set is oriented toward measuring use and is therefore more appropriate for guiding home-trial observations. We then used the relevant indicators to generate questions and develop instruments for each of the two settings. The set of instruments consisted of: (1) Consumer, Observer, and Interview questionnaires for the laboratory setting; and (2) a Weekly Log sheet, an End-of-Trial questionnaire, a Mid-Trial Interview, and an End-of-Trial Interview for the home trials. Protocols for the actual conduct of the laboratory and home trials were developed and documented in parallel.
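To make that routing logic concrete, the following is a minimal sketch (an assumed reconstruction, not the project's documented procedure) of how indicators could be partitioned between laboratory and home-trial instrument pools according to how many UD principles their category loads on.

```python
# Assumed logic: categories loading on all seven UD principles are treated as
# usability-oriented (laboratory); the rest as use-oriented (home trials).
from collections import defaultdict

def split_by_setting(indicator_matrix, n_principles=7):
    """indicator_matrix maps (category, UD principle) -> list of indicators."""
    loadings = defaultdict(int)  # number of principles each category loads on
    for (cat, _prin), indicators in indicator_matrix.items():
        if indicators:
            loadings[cat] += 1

    lab, home = [], []
    for (cat, _prin), indicators in indicator_matrix.items():
        (lab if loadings[cat] == n_principles else home).extend(indicators)
    return lab, home
```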

The instruments and protocols were ready for expert review, followed by a pilot run in early Year Two. This also marked the initial step in the development of a Product Evaluation Resource Guide to be completed over the project's funding period.

Year Two (October 2004 to September 2005):

As described above, we developed lists of efficacy indicators centered on device performance, device usability, user satisfaction, user acceptance, and device use, using our first selected test product (the Lids-Off automatic jar opener transferred through Black & Decker) as a frame of reference. We also developed the corresponding first drafts of the instruments for the laboratory and home trials. These were our Phase I outputs.

Early in Year Two, we completed the expert review of the indicators and instruments (October 2004). Final drafts were prepared based on the expert feedback.

The actual efficacy trials began in November 2004 and are scheduled to be completed in July 2005. Fifty participants with limited hand function were recruited based on specific inclusion and exclusion criteria. The study period for each participant is six months, beginning with a laboratory trial at the WNY Independent Living Center, set up for individual tryout of the Lids-Off jar opener as well as the Open-Up jar opener (the competing product). Individual trials were videotaped and also observed by a research team member. Participants responded to a questionnaire on aspects of each device's performance on the five jars they operated. Each participant was also interviewed for a comparative judgment of the two devices' performance. This was followed by a home trial of the Lids-Off, with each participant giving us weekly questionnaire feedback for the first seven weeks. These data included judgments in comparison to the other alternatives they used to open jars.

At the end of the first two months, we assessed the value of the device to each participant by proposing that they buy the device as part of the financial compensation due to them. The final telephone interviews will be conducted in July, when the last of the participants complete the study period, to verify use or abandonment of the device during the last four months. At that time we will repeat the offer and compensate participants according to their decision.

All participants were tracked for their varied and continuous input. All but two completed the study, and thirty-seven (37) of the 48 who did valued the device enough to buy it as part of their compensation. Laboratory trial questionnaires and weekly home-trial questionnaires have been received from all 48 participants. We are now recording and analyzing the data collected from the questionnaires, interviews, and phone calls. Qualitative data have been recorded in Excel spreadsheets and are being triangulated with quantitative data processed in SPSS using repeated-measures ANOVA.
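The analysis itself is being run in SPSS; purely as an illustration, the sketch below shows an equivalent repeated-measures ANOVA in Python using statsmodels, on hypothetical long-format data with one rating per participant per device. The column names and scores are invented for the example.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one usability rating per participant per device.
data = pd.DataFrame({
    "participant": list(range(1, 49)) * 2,
    "device":      ["Lids-Off"] * 48 + ["Open-Up"] * 48,
    "rating":      [4, 5, 3, 4] * 12 + [3, 3, 2, 4] * 12,  # placeholder scores
})

# Within-subject factor: device; dependent variable: rating.
result = AnovaRM(data, depvar="rating", subject="participant",
                 within=["device"]).fit()
print(result.anova_table)
```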

We expect to finalize and report results early in Year Three. We also plan to present the preliminary findings and methodology of the first efficacy study at the Annual Conference of the American Evaluation Association, to be held October 26-30 in Toronto, Canada.
