
Title: Evaluating Technology Transfer: A Case Study in Technology for Persons with Disabilities
Author: Vathsala I Stone
Published: 2002
Publication: Annual Meeting of the American Evaluation Association: Washington, DC

This paper summarizes the current status of a study that is validating an innovative approach to technology transfer at the University at Buffalo's Center for Assistive Technology. The study is part of the Rehabilitation Engineering Research Center on Technology Transfer (T2RERC), a five-year program funded by the National Institute on Disability and Rehabilitation Research (NIDRR) for transferring needed technologies and products into the marketplace for persons with disabilities. The comprehensive and systematic technology transfer approach, the object of pilot testing in this study, responds to an urgent practitioner need for fully developed and systematically tested models. Stakeholder involvement is an essential element of the proposed transfer process, and the program assumes a brokering role, providing the needed technical and market research for each transfer case and establishing the needed strategic partnerships. Evaluation closely monitors and informs each transfer case, working to ensure and verify the success of the transfer. Data are both qualitative and quantitative, and include both product and process evaluation. Individual case findings lead to case studies. Additionally, data integration across successive model iterations generates a validated and refined model version.


Introduction

The focus of this paper is the research and development program currently being implemented by the Rehabilitation Engineering Research Center on Technology Transfer (T2RERC) of the University at Buffalo's Center for Assistive Technology. The program seeks to improve the quality of life for persons with disabilities by transferring needed technologies and products into the Assistive Technology (A/T) market. An assistive device, as defined by the Technology Related Assistance for Individuals with Disabilities Act of 1988, is "any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified or customized, that is used to increase, maintain or improve functional capabilities of individuals with disabilities." The program is funded by the National Institute on Disability and Rehabilitation Research (NIDRR) and has just completed four years of its five-year funding cycle.

In order to accomplish transfers of technologies and products, and in an ongoing effort to establish technology transfer as a systematic business strategy, the program has proposed and has been implementing an innovative approach to technology transfer (Lane, 1999). This approach offers a systematized and structured framework for the practice of technology transfer, as a first step toward the systematic development and consolidation of a model of technology transfer that practitioners currently need. Thus the program conducts research on the proposed model at the same time as it implements the model to accomplish transfer goals. This research seeks to validate the model in the context of Assistive Technology (A/T) and to study its feasibility, effectiveness, efficiency and context relevance. The present paper summarizes and discusses the methodology and the status of findings from our four years' experience. Although process details of the model are part of the study, description of the transfer experience and lessons around the process is omitted here, as these are the focus of a sequel paper currently under preparation by the program.


Purpose of the Study

The purpose of the study is an empirical pilot test of the proposed framework, based on its application to the transfer of technologies and products to the A/T marketplace. As will be described later, the model operates in the program through both the Demand-Pull (DP) and Supply-Push (SP) processes, the two driving forces of the overall transfer process. The purpose, then, is to conduct a systematic evaluation (Stufflebeam & Shinkfield, 1985) of the proposed model through empirical observation of both these processes.


The T2RERC Program: Context, Goals, and the Operating Model

Technology transfer is in its infancy as a discipline. Lane (1999) notes that although technology transfer is a popular concept and technology transfer processes have been implemented in the field of A/T, the concept is far from being useful to the field because there is a lack of consensus and of conceptual models in the supporting literature. The author further notes that, in the absence of a solid foundation in the literature, "technology transfer" has become "synonymous with a wide range of activities", with both "technology" and "transfer" defined differently by different authors. Related theories such as the diffusion of innovations (Rogers, 1995), or analyses of models in related fields such as that presented by Baskerville et al. (1997, 2001) on information technology diffusion, could offer some guidance to the practitioner regarding a theoretical framework.

These models include in some way the various essential elements that play a key role in technology transfer. However, no one model covers all key elements integrated into a fully developed, systematized framework that could guide a practitioner's deliberate effort to transfer technology effectively. Practitioners have tended to isolate selected elements from these models to focus on in their own individual work. A working model useful for effective technology transfer in practice is urgently needed. The comprehensive framework proposed by Lane (1999), as mentioned earlier, attempts to fill this need by providing an initial step toward a systematized model. We present below the characteristics of this model, which includes the two initiating forces of technology transfer – the DP and SP processes, the focus of our study.

Characteristics of the T2RERC Model

Lane (1999) conceptualizes technology transfer as a single process that basically involves transformation of a technology into a product. It begins with an idea for a new product (supply push – SP), or a need to improve the functions of an existing product (demand pull – DP). It ends with a new or improved product available in the marketplace. Although SP and DP have different origins, they span a wide range of activities of the same process. These activities link three key events – idea, prototype and product – and fall under four categories: technology applications, technology R&D, product R&D and product commercialization. The dynamics of these activities involve five stakeholder groups – technology producers, technology consumers, product producers, product consumers and resource providers. Thus, technology transfer is characterized by one process brought on by two initiating forces that link three key events through four sets of activities in which five stakeholder groups are involved. (For details, see Figs. 1-6 in Lane, 1999.) The model is comprehensive and attempts to systematize earlier research findings through a structured process. However, it is necessarily a deliberate simplification of a complex process, intended to communicate the core information in the context of multiple definitions and to initiate analysis that validates the basic elements involved. It includes all significant stakeholder groups in the process as a necessary condition for effective transfer, and recognizes the need for data collected from the actual conduct of technology transfer activity as the basis for its consolidation.
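For readers who find a schematic helpful, the sketch below encodes the model's one-two-three-four-five structure as Python types. It is purely illustrative: the class and member names are ours, not part of Lane's (1999) formulation.

```python
from dataclasses import dataclass
from enum import Enum


class Force(Enum):
    """Two initiating forces of the single transfer process."""
    SUPPLY_PUSH = "idea for a new product"
    DEMAND_PULL = "need to improve an existing product"


class KeyEvent(Enum):
    """Three key events linked by the transfer activities."""
    IDEA = 1
    PROTOTYPE = 2
    PRODUCT = 3


class ActivityCategory(Enum):
    """Four categories under which the activities fall."""
    TECHNOLOGY_APPLICATIONS = "technology applications"
    TECHNOLOGY_RND = "technology R&D"
    PRODUCT_RND = "product R&D"
    PRODUCT_COMMERCIALIZATION = "product commercialization"


class Stakeholder(Enum):
    """Five stakeholder groups involved in the dynamics."""
    TECHNOLOGY_PRODUCER = "technology producer"
    TECHNOLOGY_CONSUMER = "technology consumer"
    PRODUCT_PRODUCER = "product producer"
    PRODUCT_CONSUMER = "product consumer"
    RESOURCE_PROVIDER = "resource provider"


@dataclass
class TransferProcess:
    """One process: an initiating force moving an idea toward a product."""
    initiating_force: Force
    current_event: KeyEvent = KeyEvent.IDEA
    stakeholders: frozenset = frozenset(Stakeholder)
```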

Program Operations

The goals of transferring technologies and devices are achieved, respectively, through the two major projects of the program – the Demand-Pull (DP) and the Supply-Push (SP) projects. The former systematically accomplishes the "pulling" of needed technologies into the A/T market, starting with research into the (consumer and manufacturer) needs for technology in a significant industry segment. Expert technology developers and researchers, in interaction with appropriate business and industry representatives as well as end-users, then identify and validate concrete and pertinent technology needs. These are developed into problem statements, for which technology solutions are solicited and evaluated, and the selected solutions with high relevance and market potential are licensed to appropriate industry leaders through negotiation. So far the project has focused its efforts and effected transfers in three significant technology areas – Wheeled Mobility, Hearing Enhancement and Communication Enhancement. A fourth area, technology for visual impairment, is currently underway.

The SP project, on the other hand, starts with an invention with potential as an A/T device and takes it systematically through several stages of modification before it is licensed to appropriate manufacturers. First, an improved and relevant product design is obtained that meets criteria raised by findings from technical evaluation, market research and end-user needs identification. The resulting prototype information, including all pertinent market and technical research input, is compiled into formal commercialization packages that are used to negotiate with pertinent companies to license the product.

The DP and SP operating protocols carefully reflect the above steps, and are formally organized in terms of Carriers or potential facilitators of the transfer process. Essentially, each carrier is an intermediate output in the process that contains the findings from the preceding evaluative activities, such that the subsequent activities can build upon them. Thus, carriers are critical steps in the process and are deliberately obtained to "carry" the transfer process forward.
Table I summarizes the carriers defined by the DP and SP processes.

Table I: Supply Push and Demand Pull Carriers

Demand Pull

  Activity → Carrier
  1. Select industry segment → Industry Profile
  2. Identify technology needs (market research) → White Paper
  3. Validate technology needs → Problem Statements
  4. Locate technology solutions → Dissemination Vehicle
  5. Transfer technology solutions → Commercialization/Marketing Plan
  6. Expand commercialization → Post-transfer support mechanism

Supply Push

  Activity → Carrier
  1. Solicit, screen and select device from inventors → Device Intake Package + accept/reject letter
  2. Evaluate device in lieu of agent agreement to license device → Agent Agreement
  3. Primary and secondary market research + technical evaluation of device → Design needs and priorities established
  4. Organize commercialization information for product producers → Commercialization Package
  5. Marketing plan → Licensing Agreement

The operating model allows for and expects to encounter Barriers or impediments to the process and proposes to overcome them through identifying and devising potentially effective practices. Such practices, through iteration and testing over several different transfer situations, can then lead to Best Practices.
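The Carrier-and-Barrier logic described above can be pictured as a simple pipeline. The following minimal Python sketch, using hypothetical names of our own choosing, illustrates how a transfer case advances Carrier by Carrier while Barriers and candidate practices are logged against the stage where they arise.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Barrier:
    """An impediment observed while working toward a Carrier."""
    description: str
    practice_devised: Optional[str] = None  # candidate Best Practice


@dataclass
class Carrier:
    """An intermediate output that "carries" the transfer forward."""
    name: str
    completed: bool = False
    barriers: list = field(default_factory=list)


@dataclass
class TransferCase:
    """A single device or technology moving through the DP or SP protocol."""
    device: str
    carriers: list  # ordered as in Table I

    def advance(self) -> Optional[Carrier]:
        """Obtain the next pending Carrier; None when the case is complete."""
        for carrier in self.carriers:
            if not carrier.completed:
                carrier.completed = True
                return carrier
        return None

    @property
    def transferred(self) -> bool:
        """Final outcome: every Carrier in the sequence has been obtained."""
        return all(carrier.completed for carrier in self.carriers)


# The six DP Carriers from Table I, in protocol order.
case = TransferCase("example technology", [
    Carrier("Industry Profile"), Carrier("White Paper"),
    Carrier("Problem Statements"), Carrier("Dissemination Vehicle"),
    Carrier("Commercialization/Marketing Plan"),
    Carrier("Post-transfer support mechanism"),
])
case.carriers[0].barriers.append(Barrier(
    "key demographic information hard to obtain",
    practice_devised="cross multiple sources of statistical information"))
```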

The Research and Evaluation (R&E) project thus becomes a third major component of the T2RERC Program. A major goal is to optimize the model. First, the project identifies best practices and incorporates them into the model in an ongoing refinement effort. Best Practices reflect either methodology changes resulting from overcoming specific barriers or a general modification of the model itself, such as elimination or substitution of a Carrier. Second, it "benchmarks" (OED, 2002) the model process to verify its performance. By observing how the model operates vis-à-vis the proposed framework and describing the model's overall efficiency in terms of how resources distribute across the model Carriers, various benchmarks can be established over several iterations. In the current cycle, the program is focused on internal benchmarking (McNair and Liebfried, 1992; Camp, 1989, 1995) – an initial "stock-taking" and consolidation of the model itself through process quality control – as opposed to competitive benchmarking, which would allow for perfecting the model by establishing Best Practices benchmarked against external, exemplary models. The lack of an equivalent technology transfer model offering a process comparable in its entirety, as pointed out earlier, is one reason for this approach.

A multi-disciplinary team characterizes the Program staff, and strategic partnership building is an essential feature. Different RERC programs in the relevant technology area are our DP partners each year. Partnerships are also built as we involve all five stakeholder groups (researchers, technology developers, clinicians, industry and business representatives, and end-users with disabilities) throughout both processes. These groups both give input to our Carriers and evaluate them as appropriate. Primary market research, including consumer need identification, is fundamental to both processes. Methods include expert interviews, forums, (alpha and beta) focus groups, panels and surveys, with purposive samples carefully structured to maximize stakeholder expertise input.


The Problem and Evaluative Questions

The basic problem, as pointed out earlier, is one that concerns the nature of technology transfer – should it be treated as serendipity that can only be structured in retrospect or can it be captured into a deliberate model that is goal directed, controlled and monitored? If the latter is possible, is a comprehensive model integrating all key elements into a systematic and structured process the answer to effective technology transfer in practice?

The basic research hypothesis is that a deliberate and structured process with characteristics such as proposed in the model will have an effect on transfer of technologies and products. The question therefore is one of the effectiveness of the proposed model in the context of its application (A/T). The related program goal of optimizing the model implies an in-depth description of the model processes.

The following are the corresponding Evaluative Questions raised in this study:

  • How effective is the proposed model?
  • How efficient are the model processes in terms of resource optimization?
  • What factors explain the transfer process and how do the model mechanisms operate in accomplishing the transfer?
  • How relevant is the model to technology transfer practice (i.e., to its stakeholders in the context)?


Evaluation Approach and Methodology

The T2RERC program simultaneously accomplishes and evaluates technology transfer. Its targets are twofold: success in accomplishing the transfers of technologies and products, and a validated and refined model process that is "benchmarked for best practices". An evaluation strand threads through the program, committed both to the quality control of the technologies and products in transfer and to model improvement through observation and analysis of the operating factors.

Evaluation, thus, has a three-level involvement. At the basic level, it closely monitors and informs each transfer case (i.e., a technology or a device in focus), working to ensure the success of the transfer as well as to verify it. As stakeholder involvement is an essential element of the proposed transfer process, and as the program assumes a brokering role establishing the needed strategic partnerships between stakeholders, evaluation facilitates this mediation by providing the needed technical and market research for each transfer case. Evaluation focuses on formative and summative evaluation of the outcomes, and oversees the technical as well as the primary and secondary market research of the SP and DP processes. At the next level, evaluation also independently studies the operating model processes, tracking relevant variables and integrating data across various transfers (reiterations). At the overall program level, similar tracking from the support systems of training, dissemination and partnership building creates the necessary synergy in information utilization and helps keep resources appropriately apportioned to the project goals (Stone, 2000).

In effect, each time a transfer is accomplished (or not accomplished), it constitutes a case study of the proposed model. Collectively, the case iterations lead to model development.

Study Design

In the foregoing sense, then, the present evaluation uses a "collective case studies" approach (Stake, 2000). The choice of research design for each individual case study should respond to the evaluative questions and lead us to sound (valid and reliable) judgments of the merit and worth of the proposed model (Joint Committee, 1994). Judgment requires standards of comparison, which can be absolute or relative (Stake, in Worthen & Sanders, 1973). For example, the effectiveness of a program can be judged relative to an alternative program, or against (absolute) standards such as intentions and pre-established expectations for the program. Scriven (1967) and Cronbach (1963) have put forth contrasting views in this regard, as advocates of the former and the latter respectively. Stufflebeam and Shinkfield (1985) clarify Stake's position about the need for both types of standards, depending on the purpose of evaluation: the need for comparative summative evaluations is greater when deciding between alternative programs, and less when designing and developing a new program. Given the formative stage of technology transfer as a discipline, an absolute evaluation approach is more valuable for purposes of this study. First, there is no systematically developed, fully described alternative model currently available for meaningful process comparisons. Second, considering the novelty of the proposed model, comparative evaluations will make better sense after we have more fully described its processes and outcomes based on empirical evidence. The urgent need, then, is for pilot testing, revising and maximizing the model (Worthen et al., 1997) so that later impact studies may be more meaningful. This is compatible with the Program's position that empirical validation of individual models is a necessary initial step to establish a framework for further model development through documented research. This will facilitate valid comparisons between models in the future, when appropriate, and is necessary for the development of technology transfer as a discipline.

Our methodological choice, then, is an absolute evaluation approach with an underlying "single-case, post-only" research design (Cook and Campbell, 1979) for each case study. In interpreting the model's effectiveness and efficiency, actual process performance is compared with how the process is supposed to work as per the proposed model. This involves descriptive analyses and includes resource optimization vis-à-vis the Carriers and Best Practices. Additionally, each single-case study of technology transfer focuses on an in-depth, qualitative analysis of the model processes to document how the model mechanisms operate in accomplishing the transfer. This effort is also descriptive and seeks to understand the factors that facilitate the transfer process, the factors that impede it, and the practices that promote the former over the latter. As pointed out earlier, it helps identify context variables that act as "Barriers" to the transfer process, interfering with the model variables (strategies and procedures) devised as "Carriers". It enables us to devise alternative practices to make a specific Carrier effective. It points to modifications in a Carrier, or to an alternate Carrier to be devised, as Carrier composition may vary from case to case, needing adjustments and choices in practice. Such changes, verified by cross-case learning over case iterations, lead to changes in the hypothesized practices and mechanisms, thereby establishing Best Practices.

Thus optimization of the model through such an iterative process defines our model development design. While the main outcome of each case of technology transfer is a transfer (or no transfer), each study generates a new and improved version of the model as an additional outcome. This version then feeds into the next iteration or application as its overall input, generating further improvement, and so on.

Conceptualizing program operations in terms of sequential Carriers allows us to partially address the validity of an otherwise weak design in terms of the classical set-up for a true experiment (Campbell and Stanley, 1963). The strategy establishes explicit links between the beginning of the process and the final outcome (transfer or no transfer) and builds causal reasoning into the design, such that an observed final outcome can be traced back to the beginning of the process. It lends some basis for a modus operandi analysis (Scriven, 1976) using the Carriers as "signatures" through which to infer a cause-and-effect relationship qualitatively (Mohr, 1995, 1999). It also somewhat improves external validity because "… the more thoroughly we understand the causal mechanism by which a treatment has affected an outcome in one case, the better the position we are in to know when a similar outcome will result in another case" (Mohr, 1995, p. 271).

Operational Definitions

Our indicator of effectiveness or success of each case under study is the "transfer" (dependent variable), or movement of the specific technology or product into the A/T marketplace. This is the final outcome for each case. It is defined as a dichotomous variable, in terms of whether or not a technology or a product has moved from the industry of origin to the A/T industry/market. It is measured by tangible evidence such as "product licensed to market" or equivalent events. The quality of the final outcome is the quality of the transferred technology or product. We ensure quality in our final outcomes by assuring the quality of the intermediate outcomes through criteria described later.

The single-level (no control group or comparison alternative) independent variable in the study is intervention through the proposed technology transfer model. The model's processes consist of strategies and procedures hypothesized to bring about the desired transfer, and are adjusted and controlled by the research team. They include Carriers, mechanisms deliberately devised to move the case past Barriers in the process to the final outcome.

Barriers are impediments to transfer. They either obstruct a Carrier or affect its quality, negatively affecting the transfer outcome. These are observed and documented by key project staff in their logs.

Best Practices are developed as procedures proven to effectively and efficiently overcome the Barriers through appropriate Carriers. They reflect improvement in procedures for effectively using the Carriers. Additionally, since Carriers may apply differently in different cases or need alternates, the optimal Carriers also become part of the overall Best Practice. Best Practices are also observed and documented by key project staff in their logs.

Our process efficiency indicators are: (a) the amount of effort measured in terms of personnel time consumed by a transfer activity; (b) the time to failure or success measured by the total time expended for the transfer.

We note that the specific Barriers (in number and kind) that are overcome are also key to understanding the duration of a transfer. As qualitative variables, they play an explanatory and enlightening role in clarifying our level of efficiency, illustrating points of success and failure of the individual cases in transfer.
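As an illustration of how the outcome and efficiency variables fit together, the sketch below records a transfer case and computes the two indicators. The field names and all numbers are invented for the example; they are not program data.

```python
from dataclasses import dataclass


@dataclass
class CaseRecord:
    """Tracked quantities for one transfer case (field names are ours)."""
    device: str
    transferred: bool        # dependent variable: dichotomous outcome
    person_hours: dict       # effort logged per transfer activity
    start_quarter: int
    end_quarter: int         # quarter of transfer, or of abandonment

    def effort(self) -> float:
        """Indicator (a): total personnel time consumed, in person-hours."""
        return sum(self.person_hours.values())

    def time_to_outcome(self) -> int:
        """Indicator (b): time to failure or success, in quarters elapsed."""
        return self.end_quarter - self.start_quarter


# Invented numbers, purely to show the computation.
case = CaseRecord("hypothetical device", True,
                  {"intake": 40, "market research": 120, "licensing": 80},
                  start_quarter=1, end_quarter=10)
print(case.effort(), case.time_to_outcome())  # 240 person-hours, 9 quarters
```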

Data Collection and Analysis Procedures

Data are collected for a two-fold purpose. First, they serve to evaluate and improve each technology/product at the various stages of its transfer (formative and summative product evaluation). These data come from the stakeholders and related market and technical research. For SP, they provide the specific criteria and technical standards (utility, safety, operability, consumer acceptance, cost, etc.) for judging and shaping the incoming products into useful and acceptable product designs and prototypes. End-user needs data are obtained through focus groups and surveys. For DP, technology needs data are obtained from multiple stakeholders – end-users as well as experts in the specific technology area, including researchers, clinicians, manufacturers and technology developers. Stakeholders then validate these needs in a structured Forum, leading to specific problem statements. Other stakeholders, technology experts, respond with concrete technology solutions to the problems. Useful and acceptable proposals are selected for negotiation and licensing. In addition to needs data, stakeholders give evaluative feedback at all significant stages of both SP and DP processes. Details can be seen on the Program's website in the Annual Reports and Demand-Pull Forum Proceedings.

Second, data are obtained for analytically describing the model process in operation (model evaluation). Here the objectives of the data collection and analysis are to:

  • Benchmark process observations from DP and SP against the proposed model steps, as a way of validating the model's performance, and describe its efficiency.
  • Describe and document the interaction of critical factors – the "Carriers" [facilitators], the "Barriers" [impeders] and the Best Practices. Describe also the corresponding model refinements, including changes in the proposed Carriers and proposed strategies.
  • Verify the relevance of the critical components of our process to the appropriate stakeholders.

All observations are tracked, documented and referenced to Carriers, because these involve our intermediate outcomes and establish milestones achieved (or not achieved). The SP and DP projects describe their sequential protocols and the corresponding activities in the form of Action Plans. As these activities cluster together to develop Carriers in the desired sequence, the Action Plans provide a relevant basis on which to collect all data, both qualitative and quantitative, and record them against the respective Carriers as indicators of progress and success. Correspondingly, there are two sets of analyses: 1) Benchmarking and 2) Critical Factors.

Benchmarking concentrates on the efficiency variables. First, tracking events with reference to timelines keeps score of the duration or time lapse associated with a transfer (time to failure or success). Second, person-hours are tracked (quantitative observations) through customized weekly time sheets completed by all project personnel. Their reports refer to the Action Plan protocols. These are periodically (quarterly) integrated across staff, organized by protocol activities and then by Carriers. Accumulated longitudinally over the transfer period, the personnel time percentages inform us of the effort (time invested) per Carrier and of the total effort (cumulative and elapsed time) to transfer (or non-transfer). These percentages also inform us of the relative effort between Carriers, which guides our personnel time utilization. Further, by appropriate graphical superposition of these observations on the proposed model framework (see Fig. 1), the overall process can be described for each prototype device targeted for transfer under SP, and for each new technology under DP. A sketch of this aggregation appears below.
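This is a minimal sketch of that aggregation, assuming a simplified time-sheet row of (staff, quarter, Carrier, hours) rather than the program's actual forms; names and figures are invented.

```python
from collections import defaultdict

# Each simplified time-sheet row: (staff, quarter, Carrier, hours).
# The mapping from logged activities to Carriers follows the Action Plans.
timesheets = [
    ("engineer A", 1, "Industry Profile", 12.0),
    ("evaluator",  1, "Industry Profile", 4.0),
    ("engineer A", 2, "White Paper", 20.0),
    ("engineer B", 5, "Commercialization/Marketing Plan", 30.0),
]


def effort_by_carrier(rows):
    """Integrate person-hours across staff and quarters, per Carrier."""
    totals = defaultdict(float)
    for _staff, _quarter, carrier, hours in rows:
        totals[carrier] += hours
    grand_total = sum(totals.values())
    # Percentages express relative effort between Carriers (the basis
    # of the distribution plotted in Fig. 1).
    return {c: 100.0 * h / grand_total for c, h in totals.items()}


print(effort_by_carrier(timesheets))
```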

The Critical Factors analysis focuses on Carriers, Barriers and Best Practices. The sources of our qualitative data are the key DP and SP personnel, who routinely record their qualitative observations of Barriers and related Best Practices in their logs. The Project Evaluator periodically interviews the DP and SP managers to compile this information, organizing it by the respective Carriers, which embody and deliver our intermediate outcomes in the transfer process. This information is triangulated with data from other sources, such as project documents and the evaluator's independent participant observations (Guba and Lincoln, 1985; Patton, 1990). It then serves two purposes. First, it complements the benchmarking analyses and provides the necessary "why" explanations of the results for each individual case (device or technology). It lends an understanding of the workings of the process, alerting us with lessons for the future. Second, an independent content analysis of the Barriers and Best Practices might inform us of the patterns into which they mould themselves – knowledge important to understanding the context in which technology transfer operates. The key personnel also keep the Carriers under constant observation, and note the alternatives or modifications used as needed in specific cases (technologies and products).

The integration of findings from each study under the SP and DP processes lends a basis for developing individual and comparative transfer stories, which should help clarify the transfer path and point to both possibilities and challenges.


Findings: Current Status

The program has so far brought three DP projects to conclusive stages, addressing technologies for Wheeled Mobility, Hearing Enhancement and Communication Enhancement. The transfer process for Visual Impairment is currently in the initial stages. About 165 devices have passed through our SP evaluative process, some screened out at various stages, and others making it to transfer. The following section highlights our findings to date, with a few illustrations. For reasons of space, only selected cases are presented from both SP and DP projects.

Effectiveness

Tables II and III present the current status of outcomes achieved by the DP and SP transfer processes respectively. They describe the numbers of transfers accomplished and in conclusive stages. Other significant outputs (not included in the tables) are, in the case of DP, 4 industry profiles, 12 White Papers, 29 Problem Statements and 3 Forum proceedings.

Table II: Outcome Status - Demand-Pull Project

Technology Area | Proposals received in response to identified problems | Active proposals | Transfers
Wheeled Mobility Technology | 38 | 10 | 3
Hearing Impairment and Assistive Listening | 19 | 7 | 1
Communication Impairment Technology | 7 | 7 | 1
Technology for the Visually Impaired | * | * | *

* project started this year

Table III: Outcome Status - Supply-Push Project

Outcome/Output | Projected (for 4 years) | Actual (at 4 years)
Devices Screened | 120 | 165
Evaluation Reports | 80 | 100
Market Research Reports | 12 | 16
Commercialization Packages | 8 | 10
Transfers | 12 | 13
Devices Active in transfer process | -- | 10

It bears repeating that the quality of the final outcomes is assured by systematically judging each intermediate outcome and the corresponding Carrier according to pre-established quality criteria. These criteria are built into the SP and DP protocols and are specific to each individual product/technology under transfer. They guide decisions about the necessary reiterations for refinement and cumulatively assure the quality of the final outcome (device/technology).

Benchmarking and Process Efficiency

Recall that our quantitative data are the time expended on DP and SP activities. Recall also that Carriers (presented earlier in Table I) are our reference points for analyzing both the quantitative (benchmarking) and qualitative (critical factors) data.

Effort (person-time) required for transfer activities: An example of our quantitative data analyses for model efficiency for DP is presented in Figure 1. It shows the distribution of cumulative effort (time in this case) over four years, across the Carriers of the transfer process for wheeled mobility technology.

Fig.1 Benchmarking the Demand-Pull Process: The Wheeled Mobility Technology Project

The points A, B, C, D, E and F on the model curve represent the points at which the DP intermediate outcomes are obtained in the model process. The respective Carriers are: A. Industry Profile (generated by procedures to identify and portray the relevant industry segment); B. White Paper (stakeholder-generated technology needs); C. Problem Statements (stakeholder-validated technology needs); D. Dissemination Vehicle (effort to locate technology solutions); E. Commercialization/Marketing Plan (effort to transfer solutions); and, finally, F. Post-transfer support (expanded commercialization).

Relatively speaking, Carrier E (transferring technology solutions) has demanded the most effort, followed by C (validating technology needs) and B (identifying technology needs), in that order. The least effort was expended on selecting the industry segment (A), disseminating problem statements (D) and post-transfer activities (F), in that order. The final activities, however, are still ongoing and might affect the current effort pattern.

Analyses (not shown here) repeated for the Hearing Enhancement and Communication Enhancement transfer processes show similar patterns of time consumption.

A similar analysis is also performed for all SP products, transferred and not transferred. One finding from this analysis was that the standard SP Carriers (shown earlier in Table I) worked for some products but not for others. The Kinetic Seating System (KSS) is an example (graph not shown) that followed the standard Carriers. However, other devices required different or additional Carriers because they followed alternative paths to reach the marketplace; the products licensed to the distributor Dynamic Living (see Leahy, 2002 for details) are one example.

Cost: Data on time expended per Carrier also provide a basis for estimating the cost per transfer. Our overall figures for the average cost of a transfer are about $100K for DP and under $50K for SP. In the study, the time for a DP transfer ranged from 6 to 15 months; for an SP transfer the time was more variable, the range being 6 to 36 months. The time data also permit general comparisons with other commercialization programs. Comparing costs with similar processes would be ideal but is not possible, as there are none in operation. The T2RERC program appears to be an order of magnitude more efficient than the most similar process, the federal SBIR program (Bauer, 2001).
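As a back-of-the-envelope illustration of how a cost per transfer can be derived from tracked effort, the sketch below multiplies person-hours by a loaded hourly rate. The rate and hours are assumptions made for the example, not figures reported by the program.

```python
# The loaded hourly rate below is an assumption for illustration only;
# it is not a parameter reported by the program.
LOADED_RATE = 45.0  # dollars per person-hour (assumed)


def cost_of_transfer(hours_per_carrier: dict) -> float:
    """Estimate transfer cost as tracked person-hours times a loaded rate."""
    return LOADED_RATE * sum(hours_per_carrier.values())


# Roughly 1,000 tracked person-hours would come to about $45K, within
# the range reported above for an SP transfer.
print(cost_of_transfer({"intake": 150, "evaluation": 350, "licensing": 500}))
```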

Time to success/failure: Figures 2 and 3 exemplify our analyses of the time to success or failure of the devices in transfer, i.e., the length of time that a device/technology takes to go through the transfer process. They are taken from SP, and refer to two sets of devices – one representing successful transfers and the other representing unsuccessful transfer efforts.

Fig.2 Relative Effort and Time to Success: Three Supply Push Devices

Fig.3 Relative Effort and Time to Failure: Supply Push Devices

Figure 2 compares a selected set of three (transferred) devices: the previously mentioned KSS; a device set transferred to one distributor (Dynamic Living – DL), also previously mentioned; and Little Fingers (LF), licensed to a manufacturer. The graph plots the amount of effort put into the Carriers (vertical axis) against the length of time the device took for the entire process (horizontal axis). Interestingly, LF took relatively less effort to push through from Carrier to Carrier, but took almost as long as KSS (almost 10 quarters) to reach the final transfer stage. A closer look at its specific graph line shows a "break" between the second and fourth quarters, the lack of activity reflecting Barriers that were beyond the control of the process.

Note the similar breaks in the case of the devices shown in Figure 3, which presents the second set of SP devices, those that failed. The Epileptic Seizure Monitor hit Barriers between the second and sixth quarters, and the Omni chair between the fourth and ninth quarters, whereas the Physiowalker hit no Barriers but still failed.

The Barriers and Best Practices corresponding to the "breaks" shown above are further factors critical to understanding the SP process, and they enrich the specific stories. Findings related to these are currently in the final stages of integration.
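For readers who wish to reproduce this kind of effort-versus-time display, the sketch below plots cumulative effort against elapsed quarters for two hypothetical devices. The trajectories are invented, with a flat segment standing in for a Barrier-induced "break".

```python
import matplotlib.pyplot as plt

# Invented trajectories: cumulative effort reached as each Carrier is
# completed, against the quarter of completion. A flat segment stands
# in for a Barrier-induced "break" in activity.
devices = {
    "device X": ([1, 3, 5, 8, 10], [50, 180, 300, 420, 500]),
    "device Y": ([1, 2, 4, 7, 10], [30, 60, 60, 150, 220]),  # break: Q2-Q4
}

fig, ax = plt.subplots()
for name, (quarters, effort) in devices.items():
    ax.plot(quarters, effort, marker="o", label=name)
ax.set_xlabel("Elapsed time (quarters)")
ax.set_ylabel("Cumulative effort (person-hours)")
ax.set_title("Relative effort and time to success (illustrative)")
ax.legend()
plt.show()
```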

Model Refinement

Critical information so far observed and documented relates to: (a) how Carriers operate and what appropriate modifications or alternates needed to be devised; and (b) what Barriers impeded our operations and what refinements were made in the corresponding practices, or what new practices were devised. All the foregoing information is being accumulated as part of the final model profile.

Changes in Carriers: Carriers eliminated, replaced or modified constitute one form of Best Practice introduced into the model. Some modifications and changes have occurred in the DP and SP Carriers. The "Agent Agreement" proved ineffective as an SP Carrier and was dropped. Another example, as pointed out before, is the standard set of Carriers not applying in full to Dynamic Living (DL): we devised and used e-commerce as an alternate strategy in lieu of the "Commercialization Package", our original Carrier, thus creating an alternative path to market (Leahy, 2002). In the case of DP, refinements to Carriers have occurred and caused a steady protocol evolution over three years of learning from the three technology areas (Wheeled Mobility, Hearing Enhancement and Communication Enhancement) that we worked with.

Changes in methods: Even within an effective Carrier, modification of strategies and practices in response to Barriers constitutes a Best Practice. An initial listing of such Best Practices and the corresponding Barriers was obtained separately under SP and DP, organized by specific Carriers. It is now in the final stages of content analysis, which looks for their association with Carriers and for overall emerging patterns. For reasons of space, we avoid a complete listing of the findings, but present in Tables IV and V one example each from the ongoing work under SP and DP, respectively. They describe the Barriers and Best Practices corresponding to the specific SP/DP Carrier presented at the head of the respective table.

Table IV: SP Carriers, Barriers, and Best Practices: an example
Carrier: Device Intake Package [to identify promising inventions]

Barriers
  • High volume and low quality of submissions
  • Obtaining contact information [confidentiality issues]
  • Establishing credibility and getting the inventor interested
  • Qualifying the applicant – difficulties and delays due to organizational protocols and timetables, intellectual property issues, no prototype...

Best Practices
  • Be selective; work with few, but high-quality, submissions
  • Aggressive seeking [USPTO, invention competitions...]
  • Quick secondary market research
  • Build contacts early
  • Point out our not-for-profit nature, experience [website], government funding...
  • Persistence – follow-up phone calls...


Table V: DP Carriers, Barriers, and Best Practices: an example
Carrier: Industry Profile
[Describes industry segment: products, manufacturers, market share, R&D capabilities, production capabilities, market profile, customer needs, reimbursement, distribution; also conferences, trade shows, advocacy organizations, reference resources...]

Barriers
  • Difficulty obtaining key demographic information (market segments, market size, market penetration, yearly sales, etc.)
  • High cost of accessing market data

Best Practices
  • Extract needed information by crossing multiple sources of statistical information – U.S. Govt. statistics with other sources (academic research, research by advocacy groups, manufacturers or industry associations, market studies)
  • RTTC center for disability statistics; disability statistics from non-U.S. information sources

Among the many examples of our evolving Best Practices are: guidelines for customizing sampling procedures within a rationale of purposive sampling; focus group moderation; panel formats and strategies for "full inclusion" of persons with disabilities in mixed groups; business plan development with companies to which we transfer technology; and alternative surveying procedures for primary market research.

Model Impact and Relevance to Stakeholders

The T2RERC program is still completing its funding cycle, and it is premature to expect full impact, whether on stakeholders or on the overall transfer context. Impact on end-users' quality of life, although the ultimate purpose of transferring technology into the A/T field, is a long-term goal beyond the scope of the program's current activities. So is the impact on the academic-scientific community in the technology transfer field, which involves future peer reaction to our dissemination efforts and the direction of future research. For now, it must be pointed out that the State of the Science Conference we hosted in November 2001 received contributions from 15 participating scholars and practitioners in the field, and the program's papers were well received. The relevance of the Program to practice is reflected in its visibility and credibility as useful research: recently REHABDATA added the Program's papers presented at RESNA to their category of "Research Utilization".
Regarding our short-term goals, our primary stakeholders in the process from industry and business have expressed, both formally and informally, high levels of satisfaction and interest in relation to our process components. For example, stakeholders are satisfied with our SP commercialization packages. At our DP Stakeholder Forums, stakeholders with expertise in one area have consistently expressed appreciation of the opportunity to interact with and learn from the expertise of other stakeholders. In particular, end-users and manufacturers value the knowledge of each other's concerns, needs and commitment in relation to the technology in which they have a shared interest. Additionally, stakeholders from industry and business have consistently valued the networking opportunities with each other.

Recently a leading home-appliance manufacturer recognized the value of our Best Practices in primary market research (beta focus groups) and involved us in taking their product successfully to the A/T and general marketplace. A second Fortune 200 company has sought our partnership in a similar venture.
Tracking of users of our web site that disseminates our DP Forum findings (proceedings with problem statements and related information) shows industry and business as the most interested category.


Discussion

As the T2RERC program completes its final year of operation, it has demonstrated that both the DP and SP models are feasible – they work. Their systematic processes are manageable and repeatable. They have led to multiple transfers, and have accomplished transfers with a high degree of effectiveness. The DP transfers involve motor technology products, a lever-drive manual wheelchair, power management, an assistive listening system, and software providing improved computer access for persons with "mobility" impairments. The SP project has licensed 13 diverse A/T products and is negotiating the ones in process.

To make up for the infeasibility of a counterfactual (a comparable situation without intervention), the program design has attempted to connect the effect (transfers) to the cause (the model) by creating "signatures" through Carriers and tracking the model's modus operandi. Tracking the progression of products/technologies through the model processes and linking them, for example, to White Papers for the Stakeholder Forum (DP) and to the Commercialization Package (SP) enables the program to present evidence that success was attributable to these mechanisms. Ideally, the modus operandi of rival causes, with their own signatures, should be identified and ruled out. However, including signatures explicitly in the design and tracking them proactively (rather than troubleshooting retroactively) makes the causal reasoning more straightforward and more convincing than it would otherwise have been.

The processes are efficient in the sense that they (a) produced or exceeded the targeted number of transfers, (b) adhered to quality standards derived from market and technical research, and (c) kept within the budgeted resources. Besides the overall costs, which averaged under $100K for DP and under $50K for SP, the cost-per-Carrier data should permit future comparisons with other models, especially for functional benchmarking (Camp, 1995).

The processes have been refined, their profile defined in terms of effective Carriers and Best Practices, and their workings described in terms of the interaction of Carriers and Barriers, shedding light on the details of the transfer path. The Barriers and Best Practices, as well as the lessons learnt, are being compiled into case studies that illustrate the models in practice.

It may be too early to detect impacts, but there are some indications of the Program's value to its stakeholders. Consistent stakeholder satisfaction with the Program's outcomes provides an initial assessment of program value and relevance. Impact on manufacturers is reflected in some taking the initiative to bring products to the A/T market. Their requests to collaborate with the Program reflect the Program's credibility. It is becoming recognized by research practice as "useful research".

Future indicators of impact remain to be verified: the diffusion of innovations introduced to the A/T field through the Program's models, the production quantities and sales volumes of products containing these innovations, the contributions they make to the quality of life of persons with disabilities, and, in general, acknowledgement from academic peers of the Program's contribution to the technology transfer knowledge base.

Summing up, the findings accumulated so far strongly suggest both the feasibility and the success of the program. The proposed model has been studied and described both in terms of the critical factors in operation and in terms of the model's efficiency in managing them. Stakeholder acceptance of the program is evident in their continuous feedback. The program seems to have earned credibility with stakeholders from business and industry. Initial indications of program impact and of the program's value in the context are becoming visible.

Where do we go from here? An immediate useful step would seem to be taking the model to the next level of evaluation – comparison with external models. There are several reasons why a comparative evaluation would be desirable at this stage. One is, as pointed out earlier, establishing a counterfactual as support for the intervention as the causal variable. A second reason is to provide an external standard against which to judge the program, to determine whether it is a better alternative than rival programs on some variable of value, such as cost. Yet another reason, especially in the business world, is to do process benchmarking (Bogan and English, 1994) in order to study the best practices of excellent programs and incorporate those practices into the program's own processes for continuous quality control and excellent performance. In the present context of developing the technology transfer model as a business strategy, the second and third perspectives above can be combined in future comparative studies. Considering the current lack of an equivalent, well-developed program comparable in all respects, a viable alternative may be functional benchmarking (Camp, 1989, 1995), in which components of the process are compared for efficiency against similar components in other processes, including programs both within and, if need be, outside the technology transfer industry. Both cost and Best Practices can be compared, and practices can be adapted into the model. In determining trade-off options between resources and benefits, both "el cheapo" and "el magnifico" entries (Scriven, 1993, p. 58) are worth considering in the list of critical competitors.

At this point the evaluator's unique role in the present evaluation context must be pointed out. The program accomplishes transfer of technology by implementing the proposed model at the same time as it evaluates the model itself. Thus an evaluative strand runs through the program's efforts. This places the program staff – engineers, business administrators, and specialists in assistive technology fields (such as speech pathology) – in a unique position vis-à-vis the evaluator, as members of a multidisciplinary team. We very soon learnt to be led by each other's expertise at the same time as we led others. Although the author has exclusive responsibility for leading the research and evaluation efforts of the Program, she is just as involved in guiding others, as collaborating evaluators, through the program's R&D efforts. The responsibilities are thus for conducting internal evaluation, in order to guide project monitoring and performance, as well as to provide leadership for, and give scientific orientation to, the various projects. Evaluation is valued and appreciated, with ensuing evaluation process use (Patton, 1997). The program goal of model development, its design of operations and the emphasis on stakeholder involvement smoothly and naturally embrace an evaluation approach that is both developmental (Patton, 1994) and utilization-focused (Patton, 1997) at the same time as it is responsive in its orientation (Stake, 1967; Guba and Lincoln, 1985). More notably, inasmuch as the program's efforts represent pioneering work toward building a model for technology transfer, the evaluator is also involved in the program at the theory-building level. Several views in the evaluation literature in this respect have included program theory driving the evaluation approach (Chen 1990, 1994; Smith, 1994) or lending a basis to it (Fitz-Gibbon, 1996; Weiss, 1996, 1997). However, Cook (2000) has warned against the possibility of evaluators using program theory-based evaluation as an excuse to shy away from the often-difficult experimentation approach. Pointing out the "false choice" between the two approaches, he recommends using the two in combination. Stufflebeam (2001) cautions against evaluators attempting to construct a theory for a program where there is none, although he favors the use of the theory-based approach where a well-developed theory already exists in the program. It is pertinent to point out, for the present evaluation context, that the evaluator encountered a program not only with a clear goal of developing a model to fill a gap in the field of technology transfer, but also with a fairly well conceptualized program logic, consistent with that goal, going into operation. The evaluator's participation involved reconfiguring the original concepts and logic into an evaluative framework and adapting the terminology while preserving the operating rationale. To the extent that T2RERC's effort represents a technology transfer theory in development, the role of program evaluation in the present context can be characterized as program theory building.


Acknowledgement

This is a publication of the Rehabilitation Engineering Research Center on Technology Transfer, which is funded by the National Institute on Disability and Rehabilitation Research of the Department of Education under grant number H133E980024. The opinions contained in this publication are those of the grantee and do not necessarily reflect those of the Department of Education. In preparing this paper, the author has drawn heavily upon the support and input of colleagues Joseph Lane, the Project Director, and Dr. Stephen Bauer and James Leahy, who lead the Demand-Pull and Supply-Push efforts respectively. Special acknowledgement goes to each of them.


References

  1. Baskerville, R. and Pries-Heje, J. Diversity in Modeling Diffusion of Information Technology.
  2. Bauer, S.M., Lane, J.P., Stone, V.I. and Buczak, J. (2002). A Systematic Process for Successful Technology Transfer. Proceedings of the RESNA 2002 Annual Conference. Arlington: RESNA Press. (pp. 309-311)
  3. Bauer, S.M. (2001). The Demand Pull Technology Transfer Project. Paper presented at the State of the Science Conference. Washington, DC: NIDRR/T2RERC/Technology Transfer Society, Nov. 2001.
  4. Bogan, C.E. and English, M.J. (1994). Benchmarking for Best Practices: Winning Through Innovative Adaptation. New York: McGraw-Hill.
  5. Camp, R.C. (1989). Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance. Milwaukee, WI: Quality Press.
  6. Camp, R.C. (1995). Business Process Benchmarking. Milwaukee, WI: ASQC Quality Press.
  7. Campbell, D.T. and Stanley, J.C. (1963). Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
  8. Chen, H.T. (1990). Theory-driven Evaluations. Newbury Park, CA: Sage.
  9. Chen, H.T. (1994). Theory-driven Evaluations: Needs, Difficulties and Options. Evaluation Practice, Vol. 15, No. 1. (pp. 79-82)
  10. Chen, H.T. (1994). Current Trends and Future Directions in Program Evaluation. Evaluation Practice, Vol. 15, No. 3. (pp. 229-238)
  11. Cook, T.D. and Campbell, D.T. (1979). Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally College Publishing Company.
  12. Cook, T.D. (2000). The False Choice between Theory-Based Evaluation and Experimentation. New Directions for Evaluation, No. 87. San Francisco: Jossey-Bass. (pp. 27-34)
  13. Cronbach, L.J. (1963). Course Improvement through Evaluation. Teachers College Record, 64, 672-683.
  14. Fitz-Gibbon, C.T. and Morris, L.L. (1996). Theory-Based Evaluation. Evaluation Practice, Vol. 17, No. 2. (pp. 177-184)
  15. Guba, E.G. and Lincoln, Y.S. (1985). Effective Evaluation. San Francisco: Jossey-Bass.
  16. Joint Committee on Standards for Educational Evaluation (1994). The Program Evaluation Standards (2nd ed.). Thousand Oaks, CA: Sage.
  17. Lane, J.P. (1999). Understanding Technology Transfer. Assistive Technology, 11(1), 5-19. Arlington: RESNA Press.
  18. Lane, J.P. (2000). Applications for a Technology Transfer Model. Proceedings of the RESNA 2000 Annual Conference. Arlington: RESNA Press. (pp. 282-284)
  19. Leahy, J.A. and Lane, J.L. (2002). Paths to Market. Proceedings of the RESNA 2002 Annual Conference. Arlington: RESNA Press. (pp. 204-206)
  20. McNair, C.J. and Liebfried, K.H.J. (1992). Benchmarking: A Tool for Continuous Improvement. New York: HarperCollins.
  21. Mohr, L.B. (1995). Impact Analysis for Program Evaluation. Thousand Oaks, CA: Sage.
  22. Mohr, L.B. (1999). The Qualitative Method of Impact Analysis. American Journal of Evaluation, Vol. 20, No. 1. Stamford: JAI Press. (pp. 69-84)
  23. Oxford English Dictionary (1989). (2nd ed.) Bench-mark. Oxford University Press. Online: http://dictionary.oed.com (2002)
  24. Patton, M.Q. (1994). Developmental Evaluation. Evaluation Practice, 15(3), 311-320.
  25. Patton, M.Q. (1997). Utilization-focused Evaluation: The New Century Text (3rd ed.). Thousand Oaks, CA: Sage.
  26. Rogers, E.M. (1995). Diffusion of Innovations (4th ed.). New York: Simon & Schuster.
  27. Scriven, M. (1973). The Methodology of Evaluation. In Worthen, B.R. and Sanders, J.R., Educational Evaluation: Theory and Practice. Belmont, CA: Wadsworth.
  28. Scriven, M. (1976). Maximizing the Power of Causal Investigations: The Modus Operandi Method. In G.V. Glass (Ed.), Evaluation Studies Review Annual (Vol. 1, pp. 343-355). Beverly Hills, CA: Sage.
  29. Scriven, M. (1993). Hard-Won Lessons in Program Evaluation. New Directions for Evaluation, No. 58. San Francisco: Jossey-Bass.
  30. Smith, N.L. (1994). Clarifying and Expanding the Application of Program Theory-driven Evaluations. Evaluation Practice, Vol. 15, No. 1. (pp. 83-87)
  31. Stake, R.E. (1973). The Countenance of Educational Evaluation. In Worthen, B.R. and Sanders, J.R., Educational Evaluation: Theory and Practice. Belmont, CA: Wadsworth.
  32. Stake, R.E. (2000). Case Studies. In Denzin, N.K. and Lincoln, Y.S. (Eds.), Handbook of Qualitative Research (2nd ed., pp. 435-454). Thousand Oaks, CA: Sage.
  33. Stone, V.I. (2000). Establishing Best Practices for Technology Transfer – Preliminary Findings. Proceedings of the RESNA 2000 Annual Conference. Arlington: RESNA Press. (pp. 288-290)
  34. Stone, V.I. and Lane, J.P. (2002). Critical Success Factors in Technology Transfer. Proceedings of the RESNA 2002 Annual Conference. Arlington: RESNA Press. (pp. 207-209)
  35. Stone, V.I. (In press). Systematic Technology Transfer: A Case Study in Assistive Technology. (Paper presented for NIDRR/T2RERC at the State of the Science Conference in Washington, DC, Nov. 2001.) Accepted for publication in the Spring 2003 issue of the Journal of Technology Transfer.
  36. Stufflebeam, D.L. and Shinkfield, A.J. (1985). Systematic Evaluation: A Self-Instructional Guide to Theory and Practice. Boston: Kluwer-Nijhoff.
  37. Stufflebeam, D.L. (2001). Evaluation Models. New Directions for Evaluation, No. 89. San Francisco: Jossey-Bass. (pp. 37-39)
  38. T2RERC website: http://cosmos.buffalo.edu/t2rerc/
  39. Weiss, C.H. (1996). Excerpts from Evaluation Research: Methods of Assessing Program Effectiveness. Evaluation Practice, Vol. 17, No. 2. (pp. 173-175)
  40. Weiss, C.H. (1997). Theory-Based Evaluation: Past, Present, and Future. New Directions for Evaluation, No. 76. San Francisco: Jossey-Bass. (pp. 41-55)
  41. Worthen, B.R. and Sanders, J.R. (1973). Educational Evaluation: Theory and Practice. Belmont, CA: Wadsworth.
  42. Worthen, B.R., Sanders, J.R. and Fitzpatrick, J.L. (1997). Program Evaluation: Alternative Approaches and Practical Guidelines. White Plains, NY: Longman. (pp. 14-24, 76-80, 97-106)
