
Assistive Technology Transfer Update


Title: Strategic goal 3: Evaluation and Research
Author: Vathsala I Stone
Published: 2000
Publication: Assistive Technology Transfer Update: Vol. 2 Issue 1 (Spring) Annual Report, 1998-1999

Abstract

The third goal of the Rehabilitation Engineering Research Center on Technology Transfer (T2RERC) is Evaluation and Research. This project evaluates each step taken to transfer technology and to meet the goals set for the T2RERC. This article reports on how efficiently the Stakeholder Forums were carried out and summarizes general feedback from participants, along with comprehensive charts and graphs.


Full Text

Objective:

Evaluate internal and external models, methods and outcomes of technology transfer for continuous program improvement. Include self-study and external reviews by stakeholders. Project Leader: Dr. Vathsala Stone.

Process:

Specific protocols are set up to generate and utilize information for a dual purpose:

  • Evaluate the program's value - guide continuous program improvement through formative evaluation, and describe the program's overall efficiency and effectiveness through summative evaluation; this includes assessing program merit (the quality of inputs, processes and outcomes) and program worth (effects/impacts on its internal and external environments);
  • Evaluate the program's models - assess the technology transfer (Demand Pull and Supply Push) models under implementation for feasibility, effectiveness and validity, and conduct research to benchmark their performance and to identify factors critical to the technology transfer process.

For additional information on the process, please see:
http://cosmos.buffalo.edu/t2rerc/programs/research-evaluation/

Progress:

Performance Benchmarking Study: Table 5 in the attachment summarizes our analyses of the personnel time-tracking data gathered throughout the year. It is the basis for the Performance Benchmarking Study and serves, in part, for model evaluation. The table reports both cumulative results and a quarterly breakdown of personnel time. The first column lists the action plans (subprojects). The next four columns show the person-hours each subproject spent during the four quarters, respectively. The fifth data column presents each subproject's total person-hours for the year, and the sixth gives the same information as percentages.

Reading down columns 5 and 6 gives an idea of what share of personnel time ("person-hours") each action plan (subproject) used. Appropriately, Supply Push and Demand Pull, our two major subprojects, consumed the most personnel time, as these columns show. Next comes Project Support, which includes personnel training as well as logistical support (initial start-up activities and equipment/workstation set-ups). Dissemination and Evaluation/Research follow Project Support in personnel time consumed. The project spent the least personnel time on the activities that build strategic partnerships and provide external technical assistance. The pattern of personnel time utilization across subprojects therefore closely reflects the expected distribution of resources, respecting the priorities set among the six strategic project goals. Overall, the findings confirm that the project allocated its resources as intended.
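As a rough illustration, the computation behind the table's total and percentage columns can be sketched as follows. The subproject names come from the text above, but every number here is a made-up placeholder, not the project's actual tracking data.

```python
# Illustrative sketch: aggregating quarterly person-hour data by subproject,
# as in Table 5. All figures are hypothetical placeholders.

# person-hours per quarter (Q1..Q4) for each action plan / subproject
hours = {
    "Supply Push":             [520, 480, 510, 500],
    "Demand Pull":             [490, 470, 455, 480],
    "Project Support":         [300, 250, 240, 230],
    "Dissemination":           [120, 140, 130, 150],
    "Evaluation/Research":     [110, 120, 125, 115],
    "Partnerships/Assistance": [40,  50,  45,  55],
}

totals = {name: sum(q) for name, q in hours.items()}        # yearly totals (column 5)
grand_total = sum(totals.values())
shares = {name: 100 * t / grand_total for name, t in totals.items()}  # percentages (column 6)

for name in sorted(shares, key=shares.get, reverse=True):
    print(f"{name:25s} {totals[name]:5d} h  {shares[name]:5.1f} %")
```

With placeholder numbers chosen to mirror the ordering described in the text, the two transfer models rank first, Project Support third, and partnership/assistance activities last.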

Model evaluation and improvement: So far, we have described progress on summative, or overall, model evaluation. What about evaluating and improving our model processes? Our main data for this purpose come from evaluative feedback from industry, researchers and end-users (our participating stakeholders), who play a dual role: they provide input to our processes and conduct ongoing evaluations of our outcomes. The sections under strategic goals 1 and 2 report progress in this regard. Here, we summarize findings from the evaluation of the Stakeholder Forum on Wheeled Mobility Technology held this year.

Mixed stakeholder groups - end-users with disabilities, manufacturers, technology producers and resource providers - participated in each of the four sessions that ran simultaneously each day. A team of two evaluators directly observed the interactions, circulating from session to session and recording each session's unique features: the styles the moderators used, how they used the audio-visual aids, and how effectively each moderating team coordinated its three roles of monitoring discussion, monitoring technical content and summarizing notes. The evaluators carried their observations between the live sessions, reinforcing strengths and correcting process errors as they went. In addition, all participants evaluated the quality of each individual session on a survey form, against their own expectations. We analyzed their responses and comments at the end of each day and fed them back to the moderating teams immediately, which enabled the teams to modify their second-day sessions according to the needs perceived on the first day.

Table 1 and Table 2 summarize the participants' survey evaluations of the Forum. In all, 46 participants consistently filled out all the evaluation forms, expressing their satisfaction levels and making additional comments. They evaluated each session on content (whether the topics were relevant and the discussions went deep enough), purpose (how well it was achieved) and personal satisfaction (feeling comfortable and able to contribute). Overall, perceptions of all sessions were very positive, although not uniformly high, given the participant mix and differences in moderator styles. The evaluations ranged from reasonably satisfactory (3.5 points on a 5-point scale) to highly satisfactory (4.8 points). Except for one group, each second session scored higher than the first, indicating improved session performance and validating the value of on-site feedback to moderators.
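The kind of day-over-day comparison described above can be sketched in a few lines: average each session's 5-point ratings and check whether the second session improved on the first. The ratings below are hypothetical stand-ins, not the Forum's survey data.

```python
# Minimal sketch of the session-survey aggregation: mean satisfaction per
# session on the 5-point scale, plus a first- vs second-session comparison.
# All ratings here are hypothetical, not the Forum's actual responses.
from statistics import mean

# group -> (first-session ratings, second-session ratings), each on a 1-5 scale
ratings = {
    "Group A": ([4, 3, 4, 4, 3], [5, 4, 4, 5, 4]),
    "Group B": ([3, 4, 3, 4, 4], [4, 4, 5, 4, 4]),
}

for group, (first, second) in ratings.items():
    m1, m2 = mean(first), mean(second)
    trend = "improved" if m2 > m1 else "no gain"
    print(f"{group}: session 1 = {m1:.1f}, session 2 = {m2:.1f} ({trend})")
```

Feeding the first-session means back to moderators overnight, as the evaluators did, is what makes the second-session comparison meaningful.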

Different stakeholder groups valued different aspects of the Forum as its strength. The opportunity to network with, and learn about, the other stakeholder groups was cited as the Forum's strength by all groups (91% indicated this as a key Forum benefit). Consumers appreciated learning about the technologies underlying their products and "needed more of these forums in the future." Technology Producers appreciated the opportunity to "meet a diverse group of people with common mobility interest," describing the Forum as "unifying personnel of different fields under one roof which will and should help getting a proper network." The Manufacturer group, in addition, appreciated the exposure to the consumer group, being particularly impressed with the "mix of manufacturers, researchers, clinicians, and the most important group, the users." A Resource Provider, appreciating the networking opportunity, acknowledged getting "good insight to issues dealing with wheel chairs".

Two other benefits or impacts of the Forum that the total group of participants acknowledged were: exposure to new or innovative technologies (63%), and being able to identify the needs for new products or technology (57%). Six out of eight manufacturers felt they were able to identify new business opportunities and ten out of eighteen consumers felt they helped shape the direction for new product development.
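The subgroup counts above translate into percentages as follows; the counts come from the text, while the small helper is purely illustrative.

```python
# Converting the reported subgroup counts into rounded percentages.
# The counts (6 of 8, 10 of 18) are from the text; the helper is illustrative.

def pct(part: int, whole: int) -> int:
    """Percentage of `part` out of `whole`, rounded to the nearest point."""
    return round(100 * part / whole)

print(pct(6, 8))    # manufacturers identifying new business opportunities -> 75
print(pct(10, 18))  # consumers who felt they shaped new product direction -> 56
```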

Continuous Project Review and Process Improvement: At quarterly meetings, we review our summary tables on personnel time distribution and outcomes for internal quality control and systematic program improvement. As follow-up, we either consolidate or change our day-to-day practices and methods, as necessary. We also disseminate the findings externally to relevant stakeholders. Thus, we use the findings during the process for quality assurance, and after the process to verify and document program effectiveness.

This feedback has resulted in process revisions such as:

  • We developed a system to track project personnel time by subproject and, in the case of the Supply Push model, by each individual device processed. The data recorded separately for each device (time, outcome and critical factors) provide the basis for device case studies.
  • We have formally incorporated in-service training as a necessary tool for quality assurance.
  • We adjusted our personnel loading charts for year two, and re-allocated available resources, based on the actual results in year one.
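The per-device tracking described in the first item above could be modeled with a record like the following. The field names and example values are assumptions for illustration, not the project's actual schema.

```python
# Sketch of a per-device tracking record for Supply Push case studies,
# capturing the three recorded dimensions: time, outcome, critical factors.
# Field names and the example values are assumptions, not the real schema.
from dataclasses import dataclass, field

@dataclass
class DeviceCase:
    device: str
    person_hours: float
    outcome: str                      # e.g. "transferred", "shelved"
    critical_factors: list[str] = field(default_factory=list)

# hypothetical example record
case = DeviceCase(
    device="example mobility device",
    person_hours=42.5,
    outcome="transferred",
    critical_factors=["manufacturer interest", "clear user need"],
)
```

Keeping one such record per device is what makes the device case studies mentioned above possible: each record already contains the time, outcome and critical-factor data a case study needs.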

The Evaluation and Research program also provides research internships for persons with disabilities. The expected benefits are three-fold: the project benefits from the intern's expertise as an experienced end-user; the intern's professional skills are improved; and the intern's new skills will strengthen the parent institution's evaluation and research capabilities.
