FAQ For Faculty

  •  FAQs Regarding IDEA with the CampusLabs Platform

    Is there any way to copy the extra questions that I used last semester over to this semester's IDEA surveys?

    Unfortunately, since we are moving to a new platform this semester, there is no way to copy from a previous semester. In future semesters, you will be able to copy custom questions from one semester to the same course in another semester.

    In trying to select learning objectives, there are several that do not apply to writing courses but cannot be excluded. How can I remove these inapplicable learning objectives?

    Unfortunately, the questions on the default OSF are standard, and individual instructors cannot remove them.

    Is it possible to change the start date for my surveys?

    Yes, this can be done. However, there is a complication under the new platform: the existing administration will have to be deleted and a new one created. If you have already completed the OSFs, they will be deleted as well, and you will need to complete them again.

    Is it possible to change the end date for my surveys?

    Yes, email uaa.idea@alaska.edu to request the end date be changed. If you decide to close your surveys earlier, you will need to give your students plenty of notice that the dates in the default emails are not correct for your classes, and give them time to complete the surveys before they close.

    One of the courses I'm teaching this semester only has 2 students enrolled. Can I still do a survey?

    Under the new platform, only courses with enrollments of 3 or more receive a survey, so a course with only 2 students will not.

    How do students access the survey? Is it the menu link on Blackboard that says "Course Evaluation"?

    Students will receive an email with a link to the CampusLabs site to access their surveys. They will log in with their UA credentials (username and password) and be able to see the survey. There is one URL that goes to a website where students log in to see their surveys, so you can write the URL on the board or put it in your syllabus.

    About how much time do the surveys take to complete?

    Surveys shouldn't take students long to complete, and they can now be done on smartphones, iPads, tablets, laptops, etc. We encourage faculty to allow time in class to complete the survey to increase the response rate.

    How will I know when the survey has been completed and how many students have completed it?

    With the new platform, you should be able to log in as soon as the survey is available to students (November 25th for fall 2019) to see how many responses your survey has received. Once surveys close (December 18th for fall 2019), you should be able to review the results in 3-5 days.

    Is it possible to incentivize students to complete the survey? For example, offer a couple of extra credit points if they send me confirmation that they have completed it?

    You can certainly offer extra credit points to encourage students to complete the survey. They will receive an email confirmation when they complete the survey, and can provide that to you as proof.

    Is it possible to select a CIP/discipline code? In the past, this was part of the process.

    No, it is not possible for individual faculty to select discipline codes on this new platform. Programmers are working to figure out if this is something we can import from Banner in the future.

    For CampusLabs technical support, please call (716) 270-0000 or email support@campuslabs.com. For username/password issues, contact UAA IT Services at (907) 786-4646, option 1, or uaa.techsupport@alaska.edu.

  • Filling out the Faculty Information Form

    How do I choose objectives?

    Generally, the objectives you choose on the OSF should be a subset of the course goals that you wish to evaluate that semester. See the questions below for details.

    Should I make the objectives match the course goals in the CCG?

    Generally they should be a subset of the course goals, since you are not likely trying to achieve objectives beyond the scope of the course. Because the IDEA feedback system is designed to help faculty measure the effects of their teaching choices on the course, objectives you select should be the objectives that you wish to measure that semester. This might be only some of the course goals, as the IDEA system provides the most accurate information when the number of objectives is small. See “How many objectives should I choose and at what level?” below for information on how many to select.

    Do I set the objectives or does my department?

    This varies. Some departments have discussed the objectives and selected a set together. Please ask your department chair or director for this information. Note that even with common departmental objectives, individual faculty may wish to select a smaller subset or adjust the “Important” or “Essential” rating in order to evaluate progress on a specific objective in a semester. See “Should I make the objectives match the course goals in the CCG?” above for more information.

    How many objectives should I choose and at what level?

    Except in unusual circumstances, you should not pick more than three objectives as “Important” or “Essential.” Based on past results with IDEA, the “Progress on Relevant Objectives” score decreases with each additional objective (see IDEA's website). The “Progress on Relevant Objectives” score is a weighted average of student responses to the objective questions, with each “Essential” objective weighted twice as heavily as each “Important” objective. Student responses on the remaining objectives are ignored.
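
    To illustrate the weighting just described, here is a minimal sketch of how such a 2:1 weighted average could be computed. The ratings below are made up for illustration, and this is not IDEA's actual scoring code; any further adjustments IDEA applies are not shown.

    ```python
    # Hypothetical sketch of the 2:1 weighting described above.
    # All ratings are invented class averages on the 1-5 scale.
    essential_ratings = [4.2, 3.8]  # objectives rated "Essential" (weight 2)
    important_ratings = [3.5]       # objectives rated "Important" (weight 1)
    # Objectives not rated "Important" or "Essential" are ignored.

    weighted_sum = 2 * sum(essential_ratings) + 1 * sum(important_ratings)
    total_weight = 2 * len(essential_ratings) + 1 * len(important_ratings)

    progress_on_relevant_objectives = weighted_sum / total_weight
    print(round(progress_on_relevant_objectives, 2))  # 3.9 in this made-up example
    ```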

    What do the objectives mean?

    To better understand each objective, you may want to read “Some Thoughts on Selecting Objectives” on IDEA's website, which describes each objective in depth.

    What do students actually answer?

    The students answer a set of three questions for each objective, plus additional information about themselves that is used to adjust the scores. This means they also answer questions about objectives that you did not mark as important. This is a technology issue that cannot currently be changed.

    Do the "Contextual Questions" matter?

    The options labeled “Contextual Questions” are not part of the rating system. These are part of IDEA’s internal research. How you answer them does not affect your results.

    Does my choice of department matter?

    The department code that you select determines which department your scores are compared against nationally in the “Discipline” section of the report.

    Do my choices affect the adjusted scores?

    None of the choices on the OSF are used in calculating the adjusted score at this time. The adjustments are based on information the students self-report and on some information about the class (not on the OSF).

    Can I complete the OSF after the survey becomes accessible to students?

    Yes. The objectives on the OSF can be changed and the OSF can be completed. However, additional questions cannot be changed or added after the survey becomes accessible to students.

  • Improving Response Rates

    How do I improve the response rates?

    The highest response rates occur when students understand the importance of the forms, when faculty and departments are organized in their handling of setup and communication, and when faculty and students understand the technology involved. See the questions below for tips on each of these aspects. Also see the questions on filling out the OSF.

    How do I convince students that completing the surveys is important?

    The highest response rates have come from classes in which students understand how feedback forms are used and believe the results will be heeded. The following are some options for helping your students understand the use and importance of this feedback system.

    • Throughout the semester, inform students how your course has changed in response to student feedback. Whenever anything in class is the result of student feedback, mention it. Students will then believe in the importance of their feedback, and multiple reminders will also help them remember that importance when the surveys are available.

    • Take time in class before the surveys are available to explain how you and the university use the forms. 

    • Allow time in class for students to complete the survey. The new platform allows students to access surveys via smartphone, tablet, laptop or desktop computer by logging on to a single URL. Remind students ahead of time to come prepared to class with either their cell phone, tablet, or laptop in order to complete the survey in class.

    • Students are usually not aware that their feedback, if provided, is used in retention, tenure, and promotion. Let them know that this mechanism is their chance to be heard. Also remind them of the changes that you made. State the changes in wording that is similar to the questions they answer, so that they make the connection.

    • Prepare what to say to students about IDEA at the beginning of the course. One of the most important things faculty can say to students on the first day of the course is to share with them what they have learned from student feedback in previous courses. They might say something like, “Based on what I learned from my IDEA Reports (or Student Ratings of Instruction Reports), I have changed something this semester (and tell what it is that has been changed),” or “Based on what students said on their IDEA Surveys, I have confidence that this course design will help you as you work to achieve the goals of the course.” Finally, as faculty review the syllabus with their students, they might want to point out how the course objectives relate to the IDEA Learning Objectives.

    What ways can I communicate the survey dates to the students?

    The more frequently students are reminded, the more likely they are to respond. Consider the following options for communicating the dates and methods of filling out the forms.

    • Include a note in the syllabus about the IDEA surveys and the dates they will be available for your class.

    • Post an announcement on Blackboard before the surveys are available, noting the dates they will be open for your class.

    • Email your students multiple times when the surveys are available. The email can be sent through Blackboard.

    • Discuss the importance of the surveys in class before they are available.

    • Remind the students every class period during the time in which they are available for your class.

    What can I do to help them understand how to fill out the surveys (the technology)?

    When students can easily find and begin the surveys they are more likely to fill them out. Consider the following options to minimize the technology barrier for students.

    • Require students to access your course in Blackboard multiple times before the survey is available in your class. This may include accessing the syllabus, assignment lists and descriptions, and viewing their grades.

    • Include links to the survey in multiple locations on Blackboard. Request help from the ITS help desk to learn how to add links.

    • Demonstrate getting to the survey in class. Note that you won't have the link on your own account, but you can show the page. A student might also do the demonstration.

  • Using IDEA Results

    Preface: 

    Teaching is a complex picture that involves multi-faceted talents, including – among other things – interpersonal dynamics between instructors and students, crafting of assignments, clarity of lectures, speed and quality of grading, inspiring students to learn outside the classroom, etc. There is, consequently, no single way to assess how well someone manages all the complexities of teaching. Instead, a complex picture requires multiple methods of assessment, including – among other things – peer observations of classroom teaching, peer review of assignments, and student ratings. Students, of course, are a valuable source of information about teaching because they see the class from a point of view that instructors don’t see. However, students’ perceptions/ratings are only part of the picture: instructors could be highly effective but get modest or poor evaluations from students (e.g., perhaps because the course material is very difficult); or instructors could get strong evaluations from students (e.g., perhaps because of a dynamic personality or easy grading) but not be very effective in teaching the material. IDEA is not designed to provide a complete picture of an instructor’s teaching; it cannot, for instance, reveal how effectively an instructor is imparting the material. Rather, IDEA focuses on only one piece of the picture – students’ perceptions. Unlike UAA’s previous SDIS system, IDEA

    1. gives faculty the flexibility to customize the questions that are asked of students.
      For instance, an instructor can easily add questions to get students’ feedback about a new approach that the instructor is implementing in the course.
    2. gives faculty the opportunity to be evaluated on those aspects of teaching that are most relevant for the course, rather than on across-the-board objectives that might not be relevant.
      For instance, instructors can specify whether their course should be evaluated more on its ability to encourage the search for personal values, or on its ability to teach a series of steps in some complex problem-solving tasks, or on some other course-specific learning objective.
    3. allows faculty to see how their courses compare nationally to other courses in their discipline or subdiscipline.
    4. provides statistical adjustments for factors that are known to affect students’ evaluations (e.g., class size).
    5. advertises its weaknesses, calling attention, for instance, to low response rates.

    How can I use the IDEA results?

    If you have a specific goal, then you can fill out the OSF to match your goal, collect data over multiple semesters, and use the student surveys as part of the evidence that you have achieved that goal. IDEA surveys can be indicators of change in context. They are not good indicators of static concepts of quality.

    Student survey results can be used as evidence of effective change in a class. If students’ responses to “Progress on Relevant Objectives” and to the individual objectives improve after you make a change in a course, you have some evidence that the change improved student perception of objective achievement.
    Example: An instructor adds a guided tour of library resources (provided by the Consortium Library faculty) to a course in which research is expected. If, after doing this for the first time, the responses to “Learning how to find and use resources for answering questions or solving problems” increase noticeably, the instructor has some evidence that the change might have been successful.

    Student survey results can be used as evidence of consistency. If, over a number of semesters, the student responses on the objectives remain similar (remain in the same bands on page one: Much Higher, Higher, Similar, Lower, Much Lower), then student perception is constant over time, demonstrating consistency in your work.
    Student survey results from the diagnostic form (page 3) can be used for faculty development. The students’ answers to individual questions can, in conjunction with other information, guide a faculty member in changing how they achieve the course objectives.
    Example: A faculty member notices over multiple semesters that students rate the course highly for “Introduced stimulating ideas about the subject” but consistently rate “Demonstrated the importance and significance of the subject matter” lower. If both aspects are important for a given course, the instructor might choose to include more applications where appropriate, explain to students in which later courses they will apply the theory learned in this class, or take some other action consistent with the goals of that course. If in following semesters that response increases, the faculty member also has some evidence of successful development.

    What are the limitations of the IDEA results? 

    The results cannot measure whether a good or bad job was done in a class. The results indicate student perception, which may not match reality. Also, failure to meet some objectives may not be bad if the objective missed is not required in the course.


    Example: A faculty member decides to use groups in class to improve student engagement. The instructor adds the “Acquiring skills in working with others as a member of a team” objective on the OSF and then instructs the students on how to work in groups throughout the semester. The students may perceive that their instruction on how to work in groups was insufficient and provide low ratings for this objective. The instructor’s “Progress on Relevant Objectives” will now be lower. However, if group work was not a goal of the course as defined in the CCG and by the department, then this faculty member has not done a bad job. They may choose either to improve their group instruction or to stop using groups.

    The results often cannot be used to compare faculty members. If “reliability” is low, then comparison to other faculty members, or use of the discipline or institution fields, is statistically invalid and inappropriate. Note that this does not make the results useless as a measure of effectiveness in that class, which does not require comparison to others.

    How do I …

    Check quickly if I am meeting the objectives I recorded for this course?

    The “Progress on Relevant Objectives” entries on the first page answer this question. The adjusted score on the left (a number between 1 and 5) is the students' perception of meeting the objectives on a five-point scale, adjusted for known effects outside the instructor's control. See “Adjusted” below for more information. Higher scores represent students perceiving better achievement of the objectives.

    You can also check where your adjusted score falls within the five bands on the right (“Much Lower,” “Lower,” “Similar,” “Higher,” “Much Higher”). The band labels refer to students' perception of meeting objectives in your class in comparison to other classes. For example, if your adjusted score for “Progress on Relevant Objectives” is in the “Similar” band, then students reported the same perceived level of success in your class as students reported in all classes reporting to IDEA. For instance, if your adjusted score in the discipline comparison is 47, you can see in the boxes above that 45-55 is in the “Similar” band. Thus students in your class reported a perception that you met the objectives in your class as well as students reported in all classes in your discipline across all schools using IDEA. Note that these comparisons to discipline and institution will be consistently above or below the main comparison. This again reflects student biases. For example, departments that teach more students in general education courses than in elective courses will find that their comparison on meeting objectives is higher in the “Discipline” category than in the main comparison (“All Classes in the IDEA database”).
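
    As a rough sketch of how a converted score maps onto these bands, the example below classifies the score of 47 mentioned above. Only the 45-55 “Similar” range comes from that example; the other cutoffs are assumed placeholders, not IDEA's published boundaries.

    ```python
    # Rough sketch of mapping a converted score to a band on the report.
    # Only the 45-55 "Similar" range comes from the example above; the other
    # cutoffs are assumed placeholders, not IDEA's published boundaries.
    def band(converted_score: float) -> str:
        if converted_score < 38:       # assumed cutoff
            return "Much Lower"
        elif converted_score < 45:     # assumed cutoff
            return "Lower"
        elif converted_score <= 55:    # 45-55, per the example above
            return "Similar"
        elif converted_score <= 62:    # assumed cutoff
            return "Higher"
        else:
            return "Much Higher"

    print(band(47))  # "Similar", matching the discipline example above
    ```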

    Note, since the students may not fully understand the material of the course, they may not be able to accurately judge whether objectives were met. Additional measures of success must be checked. 

    Check if I met a specific objective I recorded for this course? 

    The same information provided as a summary on the first page is provided per objective on the second page. Note that there will be no information for objectives that you did not select.

    What do these mean?

    “Reliable”

    This is a technical concept of statistics. Think ‘stability.’ In brief, reliability/stability focuses on whether the results from those students who completed IDEA are likely to be relatively stable and not fluctuate or oscillate widely with additional respondents. “Unreliable” results are reported when there are relatively few respondents (even in low-enrollment courses) or when a small percentage of students respond; in these cases the addition of a few more respondents can have a profound impact on the results. “Reliable” results are reported when a sufficient number and a sufficient percentage of students respond, suggesting that the results are not likely to fluctuate widely with additional respondents.
    Example: A faculty member teaching a small course incorporates service learning into the class. Note that sufficiently small classes are always unreliable. If the results are listed as representative and the students gave a higher rating for “Learning to analyze and critically evaluate ideas, arguments, and points of view,” the instructor can be confident that the service learning did encourage broader perspectives. They cannot claim to have done so better than someone else, however.


    “Representative” 

    This is a technical concept of statistics. In brief it means that the average results represent the perceptions of all students in the class whether or not all filled out the survey. If results are continually not representative, an instructor cannot make claims about helping all students solely on the basis of the student surveys. Other evidence will be needed. However, the instructor can use the results to indicate quality of work and to indicate change.
    Example:  A faculty member consistently has a 50% response rate. The response to “Gaining factual knowledge” is consistently high. The instructor does have evidence that the type of student who responds to a survey perceives that they are learning. Other evidence will be needed to address those who do not respond to the survey.
    Example: A faculty member consistently has a 50% response rate. The instructor incorporates writing assignments in the course to help students improve their ability to communicate their knowledge. If the responses to “Developing skill in expressing myself orally or in writing” increase after adding these assignments, then the instructor has evidence that the assignments are effective. The instructor does not know whether the students who did not respond have improved in their work, but that speaks to those students rather than to the assignment.


    “Adjusted”   

    These scores are modified to reflect effects on student responses that are outside the instructor’s control. The adjustment is based on information provided by the university and reported by the students. For a complete description, see the IDEA website. The most commonly noticed adjustment is based on whether the class was required (e.g., a general requirement) or optional (e.g., an upper-division elective in the major). Scores are adjusted upward for required courses and downward for elective courses to account for a known student bias based on their desire to take a course. The “raw” scores reflect student responses as reported, but they should not be used for comparison purposes. The “adjusted” scores are better for broad comparison purposes.