Filling out the Faculty Information Form (FIF)
How do I choose objectives?
The objectives you choose on the FIF should be a subset of the course
goals that you wish to evaluate that semester. Please consider the
questions below for details.
Should I make the objectives match the course goals in the CCG?
Generally they should be a subset of the course goals, since
you are not likely trying to achieve objectives beyond the scope of the
course. Because the IDEA feedback system is
designed to help faculty measure the effects of their teaching choices
on the course objectives, the objectives you select should be the
ones that you wish to measure that semester. This might be only
some of the course goals, as the IDEA system provides the most accurate
information when the number of objectives is small. See “How many objectives should I choose
and at what level?” below for information on how many to select.
Do I set the objectives or does my department?
This varies. Some departments have discussed the
objectives and selected a set together. Please ask your department
chair or director for this information. Note that even with common
departmental objectives, individual faculty may wish to select a
smaller subset or adjust the “Important” or “Essential” rating in order
to evaluate progress on a specific objective in a semester. See “Should I make the objectives match the
course goals in the CCG?” above for more information.
How many objectives should I choose and at what level?
Except in unusual circumstances, you should not pick more than three objectives
as “Important” or “Essential.” Based on past results with IDEA, the
“Progress on Relevant Objectives” score decreases with each additional
objective (see IDEA's website). The “Progress on Relevant Objectives”
score is a weighted average of student responses to the objective questions, with
each “Essential” objective weighted twice as heavily as each “Important” objective.
The student responses on the rest of the objectives are ignored.
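For illustration, here is a minimal sketch of that weighted average in Python. The 2:1 weighting of “Essential” over “Important” objectives is taken from the description above; the function name and data layout are hypothetical, not IDEA's actual implementation.

```python
# Sketch of the "Progress on Relevant Objectives" weighting described above.
# "Minor" (unselected) objectives get weight 0, i.e., they are ignored.

WEIGHTS = {"Essential": 2, "Important": 1, "Minor": 0}

def progress_on_relevant_objectives(ratings):
    """ratings: (importance, mean_student_rating) pairs on the 5-point scale."""
    weighted_sum = sum(WEIGHTS[imp] * score for imp, score in ratings)
    total_weight = sum(WEIGHTS[imp] for imp, _ in ratings)
    return weighted_sum / total_weight

# Two "Essential" objectives and one "Important" objective:
example = [("Essential", 4.2), ("Essential", 3.8), ("Important", 4.5)]
print(progress_on_relevant_objectives(example))
# (2*4.2 + 2*3.8 + 1*4.5) / 5 = 4.1
```

Note how each additional objective dilutes the weight of the others, which is one way to see why the score tends to decrease as more objectives are selected.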
What do the objectives mean?
To better understand each objective, you may want to read “Some
Thoughts on Selecting Objectives” on IDEA's website, which describes each objective in depth.
Do my choices affect the adjusted scores?
None of the choices on the FIF are used in calculating the adjusted score at this time. The adjustments are based on information the students self-report and some information about the class (not on the FIF).
What do students actually answer?
The students answer a set of three questions for each objective plus
additional information about themselves that is used to adjust the
scores. This means they answer questions about objectives that you list
as not important. This is a technology issue that cannot currently be
changed.
Do the "Contextual Questions" matter?
The options labeled “Contextual Questions” are not part of the rating
system. These are part of IDEA’s internal research. How you answer them
does not affect your results.
Does my choice of department matter?
The department code that you select determines which department your scores
are compared against nationally in the “Discipline” section of the
report.
Can I complete the FIF after the survey becomes accessible to students?
Yes. The objectives on the FIF can be changed and the FIF can be completed. However, additional questions cannot be changed or added after the survey becomes accessible to students.
Improving Response Rates
How do I improve the response rates?
The highest student response rates involve students who understand the
importance of the forms, faculty and departments who are organized in
their handling of setup and communication, and faculty and students who
understand the technology involved. See the questions below for tips on
each of these aspects. Also see the questions on filling out the FIF.
How do I convince
students that completing the surveys is important?
The highest response rates have included classes in which students
understand how feedback forms are used and in which they believe the
results will be heeded. Following are some options for helping your
students understand the use and importance of this feedback system.
- Throughout the semester, inform the students how your course has changed in
response to student feedback. When anything in class is the result of
student feedback, mention it. They will then believe in the importance
of their feedback. Multiple reminders will also help them remember its
importance when the surveys are available.
- Take time in class before the surveys are available to
explain how you and the university use the forms. Students
are usually not aware that their feedback, if provided, is used in
retention, tenure, and promotion. Let them know that this mechanism is
their chance to be heard. Also remind them of the changes that you
made. State the changes in wording that is similar to the questions
they answer, so that they make the connection.
- Prepare what to say to students about IDEA at the beginning of the course. One of the most important things faculty can say to students on the first day of the course is to share with them what they have learned from student feedback in previous courses. They might say something like, “Based on what I learned from my IDEA Reports (or Student Ratings of Instruction Reports), I have changed something this semester (and tell what it is that has been changed),” or “Based on what students said on their IDEA Surveys, I have confidence that this course design will help you as you work to achieve the goals of the course.” Finally, as faculty review the syllabus with their students, they might want to point out how the course objectives relate to the IDEA Learning Objectives.
What ways can I communicate the survey dates to the students?
The more frequently students are reminded, the more likely they are to
respond. Consider the following options for communicating the dates and
methods of filling out the forms.
- Include a note about the
IDEA surveys in the syllabus with a link to Blackboard and the dates
the survey will be available for your class.
- Add an
announcement on Blackboard before surveys are available, including a
link to the surveys in Blackboard and the dates they will be open for
your class.
- Email your students multiple times while the surveys are available. Include
a link to the surveys in Blackboard and the dates they will be
open for your class. This email can be sent through Blackboard.
- Discuss the importance of the surveys in class before they are available.
- Remind the students every class period during the time in
which they are available for your class.
What can I do to help
them understand how to fill out the surveys (the technology)?
If students can easily find and begin the surveys, they are more likely to
fill them out. Consider the following options to minimize the
technology barrier for students.
- Require students to access
your course in Blackboard multiple times before the survey is available
in your class. This may include accessing the syllabus, assignment
lists and descriptions, and viewing their grades.
- Include links to the survey in multiple locations on
Blackboard. Request help from ITS help desk to learn how to add links.
- Demonstrate getting to the survey in class. Note that you won't have the link on
your account, but you can show the page. A student might also do
the demonstration.
Using IDEA Results
Preface: Teaching is a complex
picture that involves multi-faceted talents, including – among other
things – interpersonal dynamics between instructors and students,
crafting of assignments, clarity of lectures, speed and quality of
grading, inspiring students to learn outside the classroom, etc.
There is, consequently, no single way to assess how well someone
manages all the complexities of teaching. Instead, a complex
picture requires multiple methods of assessment, including – among
other things – peer observations of classroom teaching, peer review of
assignments, and student ratings. Students, of course, are a
valuable source of information about teaching because they see the
class from a point of view that instructors don’t see. However,
students’ perceptions/ratings are only part of the picture: instructors
could be highly effective but get modest or poor evaluations from
students (e.g., perhaps because the course material is very difficult);
or instructors could get strong evaluations (e.g., perhaps because of a
dynamic personality or easy grading) from students but not be very
effective in teaching the material. IDEA is not designed to
provide a complete picture of an instructor’s teaching; it cannot, for
instance, reveal how effectively an instructor is imparting the
material. Rather, IDEA focuses on only one piece of the picture –
students’ perceptions. And, unlike UAA’s previous SDIS, IDEA:
- Gives faculty the flexibility to customize the questions that are asked of students. For
instance, an instructor can easily add questions to get students’
feedback about a new approach that the instructor is implementing in
the course.
- Gives faculty the opportunity to customize the evaluation to
those aspects of teaching that are most relevant for the course, rather
than being evaluated on across-the-board objectives that might not be
relevant. For instance, instructors can specify whether their course
should be evaluated more on its ability to encourage the search for
personal values, on its ability to teach a series of steps in some
complex problem-solving task, or on some other course-specific objective.
- Allows faculty to see how their courses compare nationally to other courses in their discipline or subdiscipline.
- Provides statistical adjustments for factors that are known to affect students’ evaluations (e.g., class size).
- Advertises its weaknesses, calling attention, for instance, to low response rates.
How can I use the IDEA results?
If you have a specific goal, then you can fill out the FIF to match your
goal, collect data over multiple semesters, and use the student surveys
as part of the evidence that you have achieved that goal. IDEA surveys
can be indicators of change in context. They are not good indicators of
static concepts of quality.
Student survey results can be used as
evidence of effective change in a class. If students’ responses to the
“Progress on Relevant Objectives” and the individual responses to
objectives improve after making a change in a course, you have some
evidence that the change improved student perception of progress on the
objectives.
Example: An instructor adds a guided tour of library
resources (provided by the Consortium Library faculty) to a course in
which research is expected. If after doing this for the first time the
responses to “Learning how to find and use resources for answering
questions or solving problems” increase noticeably, the instructor has
some evidence that the change might have been successful.
Student survey results can be used as evidence of consistency. If over a number
of semesters the student responses on the objectives remain similar
(remain in the same bands on page one—Much Higher, Higher, Similar,
Lower, Much Lower), then student perception is constant over time,
demonstrating consistency in your work.
Student survey results from
the diagnostic form (page 3) can be used for faculty development. The
students’ answers to individual questions can – in conjunction with
other information – guide a faculty member in changing how they
achieve the course objectives.
Example: A faculty member notices
over multiple semesters that students rate the course highly for
“Introduced stimulating ideas about the subject” but consistently
rate “Demonstrated the importance and significance of the subject
matter” lower. If both aspects are important for a given course,
the instructor might then choose to include more applications, if that
is appropriate, or explain to the students in which courses they will
learn to apply the theory being learned in this class, or take some other
action consistent with the goals of that course. If in following
semesters that response increases, the faculty member also has some
evidence of successful development.
What are the limitations of the IDEA results?
The results cannot measure whether a good or bad job was done in a class.
The results indicate student perception, which may not match reality.
Also, failure to meet some objectives may not be bad if the objective
missed is not required in the course.
Example: A faculty member
decided to use groups in class to improve student engagement. The
instructor adds the “Acquiring skills in working with others as a
member of a team” objective on the FIF and then instructs the students
on how to work in groups throughout the semester. The students may
perceive that their instruction on how to work in groups was
insufficient and provide low ratings for this objective. The
instructor’s “Progress on Relevant Objectives” will now be lower.
However, if the groups were not a goal of the course as defined in the
CCG and by the department, then this faculty member has not done a bad
job. They may choose either to improve their group instructions or
cease using groups.
The results often cannot be used to compare
faculty members. If “reliability” is low, then comparison to other
faculty members, or use of the discipline or institution fields, is
statistically invalid and inappropriate. Note that this does not mean the
results are not useful as a measure of effectiveness in that class,
which does not require comparison to others.
How do I …
Check quickly if I am meeting the objectives I recorded for this course?
The “Progress on Relevant Objectives” entries on the first page answer this
question. The adjusted score on the left (a number between 1 and
5) is the students’ perception of meeting the objectives on a five-point
scale, adjusted for known effects outside instructor control. See
“Adjusted” below for more information. Higher scores represent students
perceiving better achievement of the objectives.
You can also
check where your adjusted score falls in the five bands on the right
(“Much Lower,” “Lower,” “Similar,” “Higher,” “Much Higher”). The
words refer to students’ perception of meeting objectives in your class
in comparison to other classes. For example, if your adjusted score for
“Progress on Relevant Objectives” is in the “Similar” band, then
students reported the same perceived level of success in your class as
students reported in all classes reporting to IDEA.
For discipline and institutional comparisons, look at the boxes below. These provide the comparisons to
your discipline as you reported it (for all classes reporting that
discipline to IDEA) and to UAA (eventually). If your adjusted score in
the discipline is 47, you can see in the boxes above that 45-55 is in
the “Similar” band. Thus students in your class reported a perception
that you met the objectives in your class as well as students reported
in all classes in your discipline across all schools using IDEA. Note that
these comparisons to discipline and institution may consistently sit
above or below the main comparison. This again reflects student biases.
For example, departments that teach more students in general education
courses than in elective courses will find that their
comparison on meeting objectives is higher in the “Discipline” category
than in the main comparison (“All Classes in the IDEA database”). Also,
since the students may not fully understand the material of the course,
they may not be able to accurately judge whether objectives were met.
Additional measures of success must be checked.
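For illustration, the sketch below shows how a converted score is read against the five bands. Only the 45-55 “Similar” range is taken from the example above; the other cutoffs are placeholders, not IDEA's published boundaries.

```python
# Reading a converted comparison score against the report's five bands.
BANDS = [
    (0, 37, "Much Lower"),     # placeholder cutoff
    (38, 44, "Lower"),         # placeholder cutoff
    (45, 55, "Similar"),       # range cited in the example above
    (56, 62, "Higher"),        # placeholder cutoff
    (63, 100, "Much Higher"),  # placeholder cutoff
]

def band_for(score):
    for low, high, label in BANDS:
        if low <= score <= high:
            return label
    raise ValueError("score outside expected range")

print(band_for(47))  # "Similar", matching the example above
```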
Check if I met a specific objective I recorded for this course?
The same information provided as a summary on the first page is provided
per objective on the second page. Note that there will be no
information for objectives that you did not select.
What do “Reliable,” “Representative,” and “Adjusted” mean?
Reliable: This is a technical concept of statistics. Think ‘stability.’ In
brief, reliability/stability focuses on whether the results from those
students who completed IDEA are likely to be relatively stable and not
fluctuate or oscillate widely with additional respondents.
“Unreliable” results are reported when there are relatively few
respondents (even in low-enrollment courses) or when a small percentage of
students respond; in these cases the addition of a few more respondents
can have a profound impact on the results. “Reliable” results are
reported when a sufficient number and a sufficient percentage of students
respond, suggesting that the results are not likely to fluctuate widely
with additional respondents.
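The sketch below illustrates the intuition: results look stable only when enough students, and a large enough share of the class, respond. The thresholds here are hypothetical, not IDEA's actual criteria.

```python
# Toy reliability check: enough respondents, and a large enough share of
# the class, that a few more responses would not swing the averages much.
def looks_reliable(respondents, enrolled, min_respondents=10, min_rate=0.65):
    rate = respondents / enrolled
    return respondents >= min_respondents and rate >= min_rate

print(looks_reliable(respondents=4, enrolled=8))    # False: too few respondents
print(looks_reliable(respondents=30, enrolled=40))  # True: 75% of a larger class
```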
Example: A faculty member teaching a
small course incorporates service learning into the class. (Note:
sufficiently small classes are always unreliable.) If the
results are listed as representative and the students gave a higher
rating for “Learning to analyze and critically evaluate ideas,
arguments, and points of view,” the instructor can be confident that the
service learning did encourage broader perspectives. They cannot claim
to have done so better than someone else, however.
Representative: This is a technical concept of statistics. In brief, it means that the
average results represent the perceptions of all students in the class,
whether or not all filled out the survey. If results are continually
not representative, an instructor cannot make claims about helping all
students solely on the basis of the student surveys. Other evidence
will be needed. However, the instructor can use the results to indicate
quality of work and to indicate change.
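The toy example below, using made-up numbers, shows how the average from responders alone can drift from the class-wide average when the sample is not representative.

```python
# With a 50% response rate, the average of those who respond can differ
# from the average of the whole class. All numbers here are made up.
whole_class = [5, 5, 4, 4, 3, 3, 2, 2]  # hypothetical ratings from everyone
responders  = [5, 5, 4, 4]              # only the more positive half responded

print(sum(whole_class) / len(whole_class))  # 3.5: true class-wide perception
print(sum(responders) / len(responders))    # 4.5: what the report would show
```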
Example: A faculty
member consistently has a 50% response rate. The response to “Gaining
factual knowledge” is consistently high. The instructor does have
evidence that the type of student who responds to a survey perceives
that they are learning. Other evidence will be needed to address those
who do not respond to the survey.
Example: A faculty member
consistently has a 50% response rate. The instructor incorporates
writing assignments in the course to help students improve their
ability to communicate their knowledge. If the responses to “Developing
skill in expressing myself orally or in writing” increase after adding
these assignments, then the instructor has evidence that the
assignments are effective. The instructor does not know if the students
not responding have improved in their work, but that speaks to those
students rather than to the assignments.
Adjusted: These scores are modified to reflect effects on student responses that
are outside the instructor’s control. The adjustment is based on
information provided by the university and reported by the students.
For a complete description, see the IDEA web site. The most commonly
noticed adjustment is based on whether the class was required (e.g., a
general ed requirement) or optional (e.g., an upper division elective in
the major). Scores are adjusted upward for required courses and
downward for elective courses to account for a known student bias based
on their desire to take a course. The “raw” scores reflect student
responses as reported, but they may not be used for comparison
purposes. The “adjusted” scores are better suited for broad comparisons.
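Purely for illustration, the sketch below shows the kind of adjustment described above: raw scores nudged up for required courses and down for electives. The offsets and the additive form are placeholders, not IDEA's actual method, which is described on its web site.

```python
# Hypothetical adjustment for the required-vs-elective bias described above.
ADJUSTMENT = {"required": +0.2, "elective": -0.2}  # placeholder offsets

def adjusted_score(raw, course_type):
    """raw: mean student response on the 5-point scale."""
    return raw + ADJUSTMENT[course_type]

print(adjusted_score(3.9, "required"))  # about 4.1: raised for a required course
print(adjusted_score(3.9, "elective"))  # about 3.7: lowered for an elective
```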