FAQ for Faculty
Using the IDEA student feedback system to obtain useful information for faculty development.
Generally, the objectives you choose on the FIF should be the subset of the course goals that you wish to evaluate that semester. See the questions below for details.
Should I make the objectives match the course goals in the CCG?
Generally they should be a subset of the course goals, since you are not likely trying to achieve objectives beyond the scope of the course. Because the IDEA feedback system is designed to help faculty measure the effects of their teaching choices on the course objectives, the objectives you select should be those you wish to measure that semester. This might be only some of the course goals, as the IDEA system provides the most accurate information when the number of objectives is small. See “How many objectives should I choose and at what level?” below for information on how many to select.
Do I set the objectives or does my department?
This varies. Some departments have discussed the objectives and selected a set together. Please ask your department chair or director for this information. Note that even with common departmental objectives, individual faculty may wish to select a smaller subset or adjust the “Important” or “Essential” rating in order to evaluate progress on a specific objective in a semester. See “Should I make the objectives match the course goals in the CCG?” above for more information.
How many objectives should I choose and at what level?
Except in unusual circumstances, you should not pick more than three objectives as “Important” or “Essential.” Based on past results with IDEA, the “Progress on Relevant Objectives” score decreases with each additional objective (see IDEA's website). The “Progress on Relevant Objectives” score is a weighted average of student responses to the objective questions, with each “Essential” objective counting twice as much as each “Important” objective. Student responses on the remaining objectives are ignored.
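The weighting described above can be illustrated with a small sketch. This is only an illustration of the “Essential counts twice as much as Important” rule; the sample ratings are hypothetical, and IDEA's actual computation also involves adjustments not shown here.

```python
# Hypothetical average student ratings (on a 1-5 scale) for each
# selected objective, paired with its FIF rating. Objectives marked
# "Minor/No Importance" are ignored entirely.
ratings = [
    (4.2, "Essential"),
    (3.8, "Important"),
    (4.0, "Important"),
]

# "Essential" objectives count twice as much as "Important" ones.
WEIGHTS = {"Essential": 2, "Important": 1}

weighted_sum = sum(avg * WEIGHTS[level] for avg, level in ratings)
total_weight = sum(WEIGHTS[level] for _, level in ratings)

progress_on_relevant_objectives = weighted_sum / total_weight
print(round(progress_on_relevant_objectives, 2))  # → 4.05
```

Note how the “Essential” objective pulls the average toward its rating more strongly than either “Important” objective does, which is why adding weakly rated objectives lowers the overall score.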
What do the objectives mean?
To better understand each objective, you may want to read the following documents on IDEA's website. “Some Thoughts on Selecting Objectives” describes each objective in depth.
What do students actually answer?
The students answer a set of three questions for each objective, plus additional information about themselves that is used to adjust the scores. This means they answer questions even about objectives that you marked as not important; this is a technology issue that cannot currently be changed.
Do the "Contextual Questions" matter?
The options labeled “Contextual Questions” are not part of the rating system. These are part of IDEA’s internal research. How you answer them does not affect your results.
Does my choice of department matter?
The department code that you select determines which department your scores are compared against nationally in the “Discipline” section of the report.
Do my choices affect the adjusted scores?
None of the choices on the FIF form are used in calculating the adjusted score at this time. The adjustments are based on information the students self report and some information about the class (not on the FIF).
How do I improve the response rates?
Response rates are highest when students understand the importance of the forms, when faculty and departments are organized in their handling of setup and communication, and when faculty and students understand the technology involved. See the questions below for tips on each of these aspects. Also see the questions on filling out the FIF.
How do I convince students that completing the surveys is important?
Response rates have been highest in classes in which students understand how feedback forms are used and believe the results will be heeded. The following are some options for helping your students understand the use and importance of this feedback system.
What ways can I communicate the survey dates to the students?
The more frequently students are reminded, the more likely they are to respond. Consider the following options for communicating the dates and methods of filling out the forms.
What can I do to help them understand how to fill out the surveys (the technology)?
When students can easily find and begin the surveys they are more likely to fill them out. Consider the following options to minimize the technology barrier for students.
How can I use the IDEA results?
If you have a specific goal, then you can fill out the FIF to match your goal, collect data over multiple semesters, and use the student surveys as part of the evidence that you have achieved that goal. IDEA surveys can be indicators of change in context. They are not good indicators of static concepts of quality.
Student survey results can be used as evidence of effective change in a class. If students' responses to the “Progress on Relevant Objectives” and to the individual objectives improve after making a change in a course, you have some evidence that the change improved student perception of objective achievement.
Example: An instructor adds a guided tour of library resources (provided by the Consortium Library faculty) to a course in which research is expected. If, after doing this for the first time, the responses to “Learning how to find and use resources for answering questions or solving problems” increase noticeably, the instructor has some evidence that the change might have been successful.
Student survey results can be used as evidence of consistency. If, over a number of semesters, the student responses on the objectives remain similar (remain in the same bands on page one: Much Higher, Higher, Similar, Lower, Much Lower), then student perception is constant over time, demonstrating consistency in your work.
Student survey results from the diagnostic form (page 3) can be used for faculty development. The students' answers to individual questions can, in conjunction with other information, guide a faculty member in changing how they achieve the course objectives.
Example: A faculty member notices over multiple semesters that students rate the course highly for “Introduced stimulating ideas about the subject” but consistently rate “Demonstrated the importance and significance of the subject matter” lower. If both aspects are important for a given course, the instructor might include more applications where appropriate, explain to students in which courses they will learn to apply the theory from this class, or take some other action consistent with the goals of the course. If in following semesters that response increases, the faculty member also has some evidence of successful development.
What are the limitations of the IDEA results?
The results cannot measure whether a good or bad job was done in a class. The results indicate student perception which may not match reality. Also, failure to meet some objectives may not be bad if the objective missed is not required in the course.
Example: A faculty member decided to use groups in class to improve student engagement. The instructor adds the “Acquiring skills in working with others as a member of a team” objective on the FIF and then instructs the students on how to work in groups throughout the semester. The students may perceive that their instruction on how to work in groups was insufficient and provide low ratings for this objective. The instructor’s “Progress on Relevant Objectives” will now be lower. However, if the groups were not a goal of the course as defined in the CCG and by the department, then this faculty member has not done a bad job. They may choose either to improve their group instructions or cease using groups.
The results often cannot be used to compare faculty members. If “reliability” is low, then comparison to other faculty members, or use of the discipline or institution fields, is statistically invalid and inappropriate. Note that this does not mean the results are not useful as a measure of effectiveness in that class, which does not require comparison to others.
How do I …
Check quickly if I am meeting the objectives I recorded for this course?
The “Progress on Relevant Objectives” entries on the first page answer this question. The adjusted score on the left (a number between 1 and 5) is the students' perception of meeting the objectives on a five-point scale, adjusted for known effects outside the instructor's control. See “Adjusted” below for more information. Higher scores represent students perceiving better achievement of the objectives.
You can also check where your adjusted score falls in the five bands on the right (“Much Lower,” “Lower,” “Similar,” “Higher,” “Much Higher”). The words refer to students' perception of meeting objectives in your class in comparison to other classes. For example, if your adjusted score for “Progress on Relevant Objectives” is in the “Similar” band, then students reported the same perceived level of success in your class as students reported in all classes reporting to IDEA.
For additional comparisons, look at the boxes below. These provide comparisons to your discipline as you reported it (all classes reporting that discipline to IDEA) and to UAA (eventually). If your adjusted score in the discipline is 47, you can see in the boxes above that 45-55 falls in the “Similar” band. Thus students in your class reported a perception that you met the objectives in your class as well as students reported in all classes in your discipline across all schools using IDEA. Note that these comparisons to discipline and institution will consistently be above or below the main comparison; this again reflects student biases. For example, departments that teach more students in general education courses than in elective courses will find that their comparison on meeting objectives is higher in the “Discipline” category than in the main comparison (“All Classes in the IDEA database”).
Note, since the students may not fully understand the material of the course, they may not be able to accurately judge whether objectives were met. Additional measures of success must be checked.
Check if I met a specific objective I recorded for this course?
The same information provided as a summary on the first page is provided per objective on the second page. Note that there will be no information for objectives that you did not select.
What do these mean?
“Reliable” This is a technical concept of statistics. Think ‘stability.’ In brief, reliability/stability concerns whether the results from those students who completed IDEA are likely to be relatively stable rather than fluctuating or oscillating widely with additional respondents. “Unreliable” results are reported when there are relatively few respondents (as in low-enrollment courses) or a small percentage of students respond; in these cases, the addition of a few more respondents can have a profound impact on the results. “Reliable” results are reported when a sufficient number and sufficient percentage of students respond, suggesting that the results are not likely to fluctuate widely with additional respondents.
Example: A faculty member teaching a small course incorporates service learning into the class. (Note: sufficiently small classes are always unreliable.) If the results are listed as representative and the students gave a higher rating for “Learning to analyze and critically evaluate ideas, arguments, and points of view,” the instructor can be confident that the service learning did encourage broader perspectives. They cannot claim to have done so better than someone else, however.
“Representative” This is a technical concept of statistics. In brief, it means that the average results represent the perceptions of all students in the class, whether or not all filled out the survey. If results are continually not representative, an instructor cannot make claims about helping all students solely on the basis of the student surveys; other evidence will be needed. However, the instructor can still use the results to indicate quality of work and to indicate change.
Example: A faculty member consistently has a 50% response rate. The response to “Gaining factual knowledge” is consistently high. The instructor does have evidence that the type of student who responds to a survey perceives that they are learning. Other evidence will be needed to address those who do not respond to the survey.
Example: A faculty member consistently has a 50% response rate. The instructor incorporates writing assignments in the course to help students improve their ability to communicate their knowledge. If the responses to “Developing skill in expressing myself orally or in writing” increase after adding these assignments, then the instructor has evidence that the assignments are effective. The instructor does not know whether the non-responding students have improved in their work, but that reflects on those students rather than on the assignments.
“Adjusted” These scores are modified to reflect effects on student responses that are outside the instructor's control. The adjustment is based on information provided by the university and reported by the students. For a complete description, see the IDEA web site. The most commonly noticed adjustment is based on whether the class was required (e.g., a general education requirement) or optional (e.g., an upper-division elective in the major). Scores are adjusted upward for required courses and downward for elective courses to account for a known student bias based on their desire to take a course. The “raw” scores reflect student responses as reported, but they should not be used for comparison purposes. The “adjusted” scores are better for broad comparisons.