Student Assessment in Teaching and Learning

Much scholarship has focused on the importance of student assessment in teaching and learning in higher education. Whether teaching at the undergraduate or graduate level, instructors need to strategically evaluate the effectiveness of their teaching by measuring the extent to which students in the classroom are learning the course material.

This teaching guide 1) defines student assessment and explains why it is important, 2) identifies the forms and purposes of student assessment in the teaching and learning process, 3) discusses methods of student assessment, and 4) makes an important distinction between assessment and grading.

What Is Student Assessment and Why Is It Important?

In their handbook for course-based review and assessment, Martha L. A. Stassen et al. define assessment as “the systematic collection and analysis of information to improve student learning.” (Stassen et al., 2001, pg. 5) This definition captures the essential task of student assessment in the teaching and learning process. Student assessment enables instructors to measure the effectiveness of their teaching by linking student performance to specific learning objectives. As a result, teachers are able to institutionalize effective teaching choices and revise ineffective ones in their pedagogy.

The measurement of student learning through assessment is important because it provides useful feedback to both instructors and students about the extent to which students are successfully meeting course learning objectives. In their book Understanding by Design, Grant Wiggins and Jay McTighe offer a framework for classroom instruction—what they call “Backward Design”—that emphasizes the critical role of assessment. For Wiggins and McTighe, assessment enables instructors to determine the metrics of measurement for student understanding of and proficiency in course learning objectives. They argue that assessment provides the evidence needed to document and validate that meaningful learning has occurred in the classroom. Assessment is so vital in their pedagogical design that their approach “encourages teachers and curriculum planners to first ‘think like an assessor’ before designing specific units and lessons, and thus to consider up front how they will determine if students have attained the desired understandings.” (Wiggins and McTighe, 2005, pg. 18)

For more on Wiggins and McTighe’s “Backward Design” model, see our Understanding by Design teaching guide.

Student assessment also buttresses critically reflective teaching. Stephen Brookfield, in Becoming a Critically Reflective Teacher, contends that critical reflection on one’s teaching is an essential part of developing as an educator and enhancing the learning experience of students. Critical reflection on one’s teaching has a multitude of benefits for instructors, including the development of a rationale for teaching practices. According to Brookfield, “A critically reflective teacher is much better placed to communicate to colleagues and students (as well as to herself) the rationale behind her practice. She works from a position of informed commitment.” (Brookfield, 1995, pg. 17) Student assessment, then, not only enables teachers to measure the effectiveness of their teaching, but is also useful in developing the rationale for pedagogical choices in the classroom.

Forms and Purposes of Student Assessment

There are generally two forms of student assessment that are most frequently discussed in the scholarship of teaching and learning. The first, summative assessment, is assessment that is implemented at the end of the course of study. Its primary purpose is to produce a measure that “sums up” student learning. Summative assessment is comprehensive in nature and is fundamentally concerned with learning outcomes. While summative assessment is often useful for providing information about patterns of student achievement, it does so without providing the opportunity for students to reflect on and demonstrate growth in identified areas for improvement, and it does not provide an avenue for the instructor to modify teaching strategy during the teaching and learning process. (Maki, 2002) Examples of summative assessment include comprehensive final exams or papers.

The second form, formative assessment, involves the evaluation of student learning over the course of time. Its fundamental purpose is to estimate students’ level of achievement in order to enhance student learning during the learning process. By interpreting students’ performance through formative assessment and sharing the results with them, instructors help students to “understand their strengths and weaknesses and to reflect on how they need to improve over the course of their remaining studies.” (Maki, 2002, pg. 11) Pat Hutchings refers to this form of assessment as assessment behind outcomes. She states, “the promise of assessment—mandated or otherwise—is improved student learning, and improvement requires attention not only to final results but also to how results occur. Assessment behind outcomes means looking more carefully at the process and conditions that lead to the learning we care about…” (Hutchings, 1992, pg. 6, original emphasis). Formative assessment includes coursework—where students receive feedback that identifies strengths, weaknesses, and other things to keep in mind for future assignments—discussions between instructors and students, and end-of-unit examinations that give students an opportunity to identify important areas for growth and development. (Brown and Knight, 1994)

It is important to recognize that both summative and formative assessment indicate the purpose of assessment, not the method. Different methods of assessment (discussed in the next section) can be either summative or formative in orientation depending on how the instructor implements them. Sally Brown and Peter Knight, in their book Assessing Learners in Higher Education, caution against conflating the purpose of assessment with its method: “Often the mistake is made of assuming that it is the method which is summative or formative, and not the purpose. This, we suggest, is a serious mistake because it turns the assessor’s attention away from the crucial issue of feedback.” (Brown and Knight, 1994, pg. 17) If an instructor believes that a particular method is formative, he or she may fall into the trap of using the method without taking the requisite time to review the implications of the feedback with students. In such cases, the method in question effectively functions as a form of summative assessment despite the instructor’s intentions. (Brown and Knight, 1994) Indeed, feedback and discussion are the critical factors that distinguish formative from summative assessment.

Methods in Student Assessment

Below are a few common methods of assessment identified by Brown and Knight that can be implemented in the classroom. [1] It should be noted that these methods work best when learning objectives have been identified, shared, and clearly articulated to students.

Self-Assessment

The goal of implementing self-assessment in a course is to enable students to develop their own judgement. In self-assessment, students are expected to assess both the process and the product of their learning. While assessing the product is often the task of the instructor, implementing self-assessment in the classroom encourages students to evaluate their own work as well as the process that led them to the final outcome. Moreover, self-assessment facilitates a sense of ownership of one’s learning and can lead to greater investment by the student. It also enables students to develop transferable skills in other areas of learning that involve group projects and teamwork, critical thinking and problem-solving, and leadership roles in the teaching and learning process.

Things to Keep in Mind about Self-Assessment

  • Self-assessment is different from self-grading. According to Brown and Knight, “Self-assessment involves the use of evaluative processes in which judgement is involved, where self-grading is the marking of one’s own work against a set of criteria and potential outcomes provided by a third person, usually the [instructor].” (pg. 52)
  • Students may initially resist attempts to involve them in the assessment process. This is usually due to insecurities or lack of confidence in their ability to objectively evaluate their own work. Brown and Knight note, however, that when students are asked to evaluate their work, frequently student-determined outcomes are very similar to those of instructors, particularly when the criteria and expectations have been made explicit in advance.
  • Methods of self-assessment vary widely and can be as eclectic as the instructor. Common forms of self-assessment include the portfolio, reflection logs, instructor-student interviews, learner diaries and dialog journals, and the like.

Peer Assessment

Peer assessment is a type of collaborative learning technique in which students evaluate the work of their peers and have their own work evaluated in turn. This dimension of assessment is significantly grounded in theoretical approaches to active learning and adult learning. Like self-assessment, peer assessment gives learners ownership of learning and focuses on the process of learning, as students are able to “share with one another the experiences that they have undertaken.” (Brown and Knight, 1994, pg. 52)

Things to Keep in Mind about Peer Assessment

  • Students can use peer assessment as a tactic of antagonism or conflict with other students by giving unmerited low evaluations. Conversely, students can also provide overly favorable evaluations of their friends.
  • Students can occasionally apply unsophisticated judgements to their peers. For example, students who are boisterous and loquacious may receive higher grades than those who are quieter, reserved, and shy.
  • Instructors should implement systems of evaluation in order to ensure that peer assessment is valid and based on evidence and identifiable criteria.

Essays

According to Euan S. Henderson, essays make two important contributions to learning and assessment: the development of skills and the cultivation of a learning style. (Henderson, 1980) Essays are a common form of writing assignment in courses and can be either a summative or formative form of assessment depending on how the instructor utilizes them in the classroom.

Things to Keep in Mind about Essays

  • A common challenge with essays is that students can use them simply to regurgitate information rather than analyze and synthesize it to make arguments.
  • Instructors commonly assume that students know how to write essays and can encounter disappointment or frustration when they discover that this is not the case for some students. For this reason, it is important for instructors to make their expectations clear and be prepared to assist or expose students to resources that will enhance their writing skills.

Exams and Time-Constrained, Individual Assessment

Examinations have traditionally been viewed as a gold standard of assessment in education, particularly in university settings. Like essays, they can be summative or formative forms of assessment.

Things to Keep in Mind about Exams

  • Exams can make significant demands on students’ factual knowledge and can have the side effect of encouraging cramming and surface learning. On the other hand, they can also facilitate student demonstration of deep learning if essay questions or topics are appropriately selected. Different formats include in-class tests, open-book exams, take-home exams, and the like.
  • In the process of designing an exam, instructors should consider the following questions. What are the learning objectives that the exam seeks to evaluate? Have students been adequately prepared to meet exam expectations? What are the skills and abilities that students need to do well? How will this exam be utilized to enhance the student learning process?

As Brown and Knight assert, utilizing multiple methods of assessment, including more than one assessor, improves the reliability of data. However, a primary challenge of the multiple-methods approach is how to weight the scores produced by the different methods. When particular methods produce a higher range of marks than others, instructors can potentially misinterpret their assessment of overall student performance. When multiple methods produce different messages about the same student, instructors should be mindful that the methods are likely assessing different forms of achievement. (Brown and Knight, 1994)
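To make the weighting issue concrete, here is a minimal Python sketch of one possible approach (an illustration only, not a procedure prescribed by Brown and Knight): marks from each method are rescaled to a common 0-100 range before being combined with chosen weights, so a method that spreads marks widely does not dominate the overall result. The marks, ranges, and weights below are invented for the example.

```python
def rescale(marks, low, high):
    """Map raw marks from their expected range onto a common 0-100 scale."""
    return [100 * (m - low) / (high - low) for m in marks]

# Hypothetical raw marks: essays cluster in a narrow band, exams spread widely.
essay_marks = [62, 65, 68, 70]   # typical essay range assumed to be 55-75
exam_marks = [35, 55, 72, 90]    # typical exam range assumed to be 30-95

essays_scaled = rescale(essay_marks, 55, 75)
exams_scaled = rescale(exam_marks, 30, 95)

# Combine the two methods with explicit weights (e.g., 60% essay, 40% exam)
# only after rescaling, so neither method's spread distorts the overall mark.
weights = (0.6, 0.4)
overall = [round(weights[0] * e + weights[1] * x, 1)
           for e, x in zip(essays_scaled, exams_scaled)]
print(overall)
```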

For additional methods of assessment not listed here, see “Assessment on the Page” and “Assessment Off the Page” in Assessing Learners in Higher Education.

In addition to the various methods of assessment listed above, classroom assessment techniques also provide a useful way to evaluate student understanding of course material in the teaching and learning process. For more on these, see our Classroom Assessment Techniques teaching guide.

Assessment is More than Grading

Instructors often conflate assessment with grading, but student assessment is more than just grading. Assessment links student performance to specific learning objectives in order to provide useful information to instructors and students about student achievement. Traditional grading, on the other hand, according to Stassen et al., does not provide the level of detailed and specific information essential to link student performance with improvement. “Because grades don’t tell you about student performance on individual (or specific) learning goals or outcomes, they provide little information on the overall success of your course in helping students to attain the specific and distinct learning objectives of interest.” (Stassen et al., 2001, pg. 6) Instructors, therefore, must remember that grading is an aspect of student assessment but does not constitute its totality.

Teaching Guides Related to Student Assessment

The following CFT teaching guides supplement this one:

  • Active Learning
  • An Introduction to Lecturing
  • Beyond the Essay: Making Student Thinking Visible in the Humanities
  • Bloom’s Taxonomy
  • How People Learn
  • Syllabus Construction

References and Additional Resources

This teaching guide draws upon a number of resources listed below. These sources should prove useful for instructors seeking to enhance their pedagogy and effectiveness as teachers.

Angelo, Thomas A., and K. Patricia Cross. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd edition. San Francisco: Jossey-Bass, 1993. Print.

Brookfield, Stephen D. Becoming a Critically Reflective Teacher . San Francisco: Jossey-Bass, 1995. Print.

Brown, Sally, and Peter Knight. Assessing Learners in Higher Education. 1st edition. London; Philadelphia: Routledge, 1998. Print.

Cameron, Jeanne, et al. “Assessment as Critical Praxis: A Community College Experience.” Teaching Sociology 30.4 (2002): 414–429. Web.

Gibbs, Graham, and Claire Simpson. “Conditions under Which Assessment Supports Student Learning.” Learning and Teaching in Higher Education 1 (2004): 3-31.

Henderson, Euan S. “The Essay in Continuous Assessment.” Studies in Higher Education 5.2 (1980): 197–203. Web.

Maki, Peggy L. “Developing an Assessment Plan to Learn about Student Learning.” The Journal of Academic Librarianship 28.1 (2002): 8–13. Web.

Sharkey, Stephen, and William S. Johnson. Assessing Undergraduate Learning in Sociology . ASA Teaching Resource Center, 1992. Print.

Wiggins, Grant, and Jay McTighe. Understanding by Design. 2nd expanded edition. Alexandria, VA: Association for Supervision and Curriculum Development, 2005. Print.

[1] Brown and Knight discuss the first two in their chapter entitled “Dimensions of Assessment.” However, because this chapter begins the second part of the book that outlines assessment methods, I have collapsed the two under the category of methods for the purposes of continuity.

Course Assessment

“Assessment” refers to a variety of processes for gathering, analyzing, and using information about student learning to support instructional decision-making, with the goal of improving student learning. Most instructors already engage in assessment processes all the time, ranging from informal (“hmm, there are many confused faces right now- I should stop for questions”) to formal (“nearly half the class got this quiz question wrong- I should revisit this concept”).

When approached in a formalized way, course-level assessment is a process of systematically examining and refining the fit between the course activities and what students should know at the end of the course. Conducting a course-level assessment involves considering whether all aspects of the course align with each other and whether they guide students to achieve the desired learning outcomes. Course-level assessment can be a practical process, embedded within course design and teaching, that provides substantial benefits to instructors and students.

[Figure: the course assessment cycle]

Over time, as the process is followed iteratively over several semesters, it can help instructors find a variety of pathways to designing more equitable courses in which more learners develop greater expertise in the skills and knowledge of greatest importance to the discipline or topic of the course.

Differentiating Grading from Assessment

“Assessment” is sometimes used colloquially to mean “grading,” but there are distinctions between the two. Grading is a process of evaluating individual student learning for the purposes of characterizing that student’s level of success at a particular task (or the entire course). The grade of an assignment may provide feedback to students on which concepts or skills they have mastered, which can guide them to revise their study approach, but may not be used by the instructor to decide how subsequent class sessions will be spent. Similarly, a student’s grade in a course might convey to other instructors in the curriculum or prospective employers the level of mastery that the student has demonstrated during that semester, but need not suggest changes to the design of the course as a whole for future iterations.

In contrast to grading, assessment practices focus on determining how many students achieved which course learning outcomes, and to what level of mastery, for the purpose of helping the instructor revise subsequent lessons or the course as a whole for subsequent terms. Since final course grades may include participation points and aggregate student mastery of all course learning objectives into a single measure, they rarely clarify which elements of the course have been most or least successful in achieving the instructor’s goals. Differentiating assessment from grading allows instructors to plot a clear course toward making the changes that will have the greatest impact in the areas they define as most important, based on the results of the assessment.
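As a rough sketch of that distinction (the outcome names, scores, and mastery threshold below are invented for illustration, not taken from any cited source), a single course grade averages everything together, while an assessment view reports, per outcome, how much of the class reached a chosen level of mastery:

```python
# Hypothetical per-student scores (0-4 rubric levels) on three course outcomes.
scores = {
    "alice": {"analyze_data": 4, "communicate": 3, "model_systems": 1},
    "ben":   {"analyze_data": 3, "communicate": 4, "model_systems": 2},
    "cara":  {"analyze_data": 2, "communicate": 4, "model_systems": 1},
}

MASTERY = 3  # assumed threshold for "achieved the outcome"

# Grading view: one aggregate number per student, which hides outcome detail.
grades = {name: sum(s.values()) / len(s) for name, s in scores.items()}

# Assessment view: for each outcome, the share of the class reaching mastery.
outcomes = next(iter(scores.values())).keys()
attainment = {
    out: sum(s[out] >= MASTERY for s in scores.values()) / len(scores)
    for out in outcomes
}

print(grades)      # similar averages can mask very different learning profiles
print(attainment)  # e.g. model_systems at 0.0 flags where to revise the course
```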

Defining Course Learning Outcomes

Course learning outcomes are measurable statements that describe what students should be able to do by the end of a course. Let’s parse this statement into its three component parts: student-centered, measurable, and course-level.

Student-Centered

First, learning outcomes should focus on what students will be able to do, not what the course will do. For example:

  • “Introduces the fundamental ideas of computing and the principles of programming” says what a course is intended to accomplish. This is perfectly appropriate for a course description but is not a learning outcome.
  • A related student learning outcome might read, “ Explain the fundamental ideas of computing and identify the principles of programming.”

Measurable

Second, learning outcomes are measurable, which means that you can observe the student performing the skill or task and determine the degree to which they have done so. This does not need to be measured in quantitative terms—student learning can be observed in the characteristics of presentations, essays, projects, and many other student products created in a course (discussed more in the section on rubrics below).

To be measurable, learning outcomes should not include words like understand, learn, and appreciate, because these qualities occur within the student’s mind and are not observable. Rather, ask yourself, “What would a student be doing if they understand, have learned, or appreciate?” For example:

  • “Learners should understand US political ideologies regarding social and environmental issues,” is not observable.
  • “Learners should be able to compare and contrast U.S. political ideologies regarding social and environmental issues,” is observable.

Course-Level

Finally, learning outcomes for course-level assessment focus on the knowledge and skills that learners will take away from a course as a whole. Though the final project, essay, or other assessment that will be used to measure student learning may match the outcome well, the learning outcome should articulate the overarching takeaway from the course, rather than describing the assignment. For example:

  • “Identify learning principles and theories in real-world situations” is a learning outcome that describes skills learners will use beyond the course.
  • “Develop a case study in which you document a learner in a real-world setting” describes a course assignment aligned with that outcome but is not a learning outcome itself.

Identify and Prioritize Your Higher-Order End Goals

Course-level learning outcomes articulate the big-picture takeaways of the course, providing context and purpose for day-to-day learning. To keep the workload of course assessment manageable, focus on no more than 5-10 learning outcomes per course (McCourt, 2007). This limit is helpful because each of these course-level learning objectives will be carefully assessed at the end of the term and used to guide iterative revision of the course in future semesters.

This is not meant to suggest that students will only learn 5-10 skills or concepts during the term. Multiple shorter-term and lower-level learning objectives are very helpful to guide student learning at the unit, week, or even class session scale (Felder & Brent, 2016). These shorter-term objectives build toward or serve as components of the course-level objectives.

Bloom’s Taxonomy of Educational Objectives (Anderson & Krathwohl, 2001) is a helpful tool for deciding which of your objectives are course-level, which may be unit- to class-level objectives, and how they fit together. This taxonomy organizes action verbs by complexity of thinking, resulting in the following categories:

[Figure: Bloom’s taxonomy categories, from remembering and understanding through applying, analyzing, evaluating, and creating]

Download a list of sample learning outcomes from a variety of disciplines.

Typically, objectives at the higher end of the spectrum (“analyzing,” “evaluating,” or “creating”) are ideal course-level learning outcomes, while those at the lower end of the spectrum (“remembering,” “understanding,” or “applying”) are component parts and day, week, or unit-level outcomes. Lower-level outcomes that do not contribute substantially to students’ ability to achieve the higher-level objectives may fit better in a different course in the curriculum.

[Figure: course learning outcomes spectrum]

Consider Involving Your Learners

Depending on the course and the flexibility of the course structure and/or progression, some educators spend the first day of the course working with learners to craft or edit learning outcomes together. This practice of giving learners an informed voice may lead to increased motivation and ownership of learning.

Alignment

Alignment, where all components work together to bolster specific student learning outcomes, occurs at multiple levels. At the course level, assignments or activities within the course are aligned with the daily or unit-level learning outcomes, which in turn are aligned with the course-level objectives. At the next level, the learning outcomes of each course in a curriculum contribute directly and strategically to programmatic learning outcomes.

Alignment Within the Course

Since learning outcomes are statements about key learning takeaways, they can be used to focus the assignments, activities, and content of the course (Wiggins & McTighe, 2005). Biggs & Tang (2011) note that, “In a constructively aligned system, all components… support each other, so the learner is enveloped within a supportive learning system.”

[Figure: alignment within the course]

For example, for the learning outcome, “learners should be able to collaborate effectively on a team to create a marketing campaign for a product,” the course should: (1) intentionally teach learners effective ways to collaborate on a team and how to create a marketing campaign; (2) include activities that allow learners to practice and progress in their skillsets for collaboration and creation of marketing campaigns; and (3) have assessments that provide feedback to the learners on the extent that they are meeting these learning outcomes.

Alignment With Program

When developing your course learning outcomes, consider how the course contributes to your program’s mission and goals (especially if such decisions have not already been made at the programmatic level). If course learning outcomes are set at the programmatic level, familiarize yourself with possible program sequences to understand the knowledge and skills learners are bringing into your course and the level and type of mastery they may need for future courses and experiences. Explicitly sharing your understanding of this alignment with learners may help motivate them and provide more context, significance, and/or impact for their learning (Cuevas, Matveev, & Miller, 2010).

If relevant, you will also want to ensure that a course with NUpath attributes addresses the associated outcomes. Similarly, for undergraduate or graduate courses that meet requirements set by external evaluators specific to the discipline or field, reviewing and assessing these outcomes is often a requirement for continuing accreditation.

See our program-level assessment guide for more information.

Transparency

Sharing course learning outcomes with learners makes the benchmarks for learning explicit and helps learners make connections across different elements within the course (Cuevas, Matveev, & Miller, 2010). Consider including course learning outcomes in your syllabus, so learners know what is expected of them by the end of a course and can refer to the outcomes as the term progresses. When educators refer to learning outcomes during the course before introducing new concepts or assignments, learners receive the message that the outcomes are important and are more likely to see the connections between the outcomes and course activities.

Formative Assessment

Formative assessment practices are brief, often low-stakes (minimal grade value) assignments administered during the semester to give the instructor insight into student progress toward one or more course-level learning objectives (or the day- to unit-level objectives that stair-step toward the course objectives). Common formative assessment techniques include classroom discussions, just-in-time quizzes or polls, concept maps, and informal writing techniques like minute papers or “muddiest points,” among many others (Angelo & Cross, 1993).

Refining Alignment During the Semester

While it requires a bit of flexibility built into the syllabus, student-centered courses often use the results of formative assessments in real time to revise upcoming learning activities. If students are struggling with a particular outcome, extra time might be devoted to related practice. Alternatively, if students demonstrate accomplishment of a particular outcome early in the related unit, the instructor might choose to skip activities planned to teach that outcome and jump ahead to activities related to an outcome that builds upon the first one.

Supporting Student Motivation and Engagement

Formative assessment and subsequent refinements to alignment that support student learning can be transformative for student motivation and engagement in the course, with the greatest benefits likely for novices and students worried about their ability to successfully accomplish the course outcomes, such as those impacted by stereotype threat (Steele, 2010). Take the example below, in which an instructor who sees that students are struggling decides to dedicate more time and learning activities to that outcome. If that instructor were to instead move on to instruction and activities that built upon the prior learning objective, students who did not reach the prior objective would become increasingly lost, likely recognize that their efforts at learning the new content or skill were not helping them succeed, and potentially disengage from the course as a whole.

[Figure: the formative assessment cycle]

Artifacts for Summative Assessment

To determine the degree to which students have accomplished the course learning outcomes, instructors often assign some form of project, essay, presentation, portfolio, renewable assignment, or other cumulative final. The final product of these activities can serve as the “artifact” that is assessed. In this context, alignment is particularly critical—if this assignment does not adequately guide students to demonstrate their achievement of the learning outcomes, the instructor will not have concrete information to guide course design for future semesters. To keep assessment manageable, aim to design a single final assignment that creates space for students to demonstrate their performance on multiple (if not all) course learning outcomes.

Since not all courses are designed with a final assignment that allows students to demonstrate their highest level of achievement of all course learning outcomes, the assessment process can instead use the course assignment that represents the highest level of achievement students have had an opportunity to demonstrate during the term. However, some learning objectives that do not come into play during the final may be better categorized as unit-level, rather than course-level, objectives.

Direct vs. Indirect Measures of Student Learning

Some instructors also use surveys, interviews, or other methods that ask learners whether and how they believe they have achieved the learning outcomes. This type of “indirect evidence” can provide valuable information about how learners understand their progress but does not directly measure students’ learning. In fact, novices commonly have difficulty accurately evaluating their own learning (Ambrose et al., 2010). For this reason, indirect evidence of student learning (on its own) is not considered sufficient for summative assessment.

Together, direct and indirect evidence of student learning can help an instructor determine whether to bolster student practice in certain areas or whether to simply focus on increasing transparency about when students are working toward which learning outcome.

Creating and Assessing Student Work with Analytic Rubrics

One tool for assessing student work is the analytic rubric: a matrix of characteristics and descriptions of what it might look like for student products to demonstrate those characteristics at different levels of mastery. Analytic rubrics are commonly recommended for assessment purposes, since they provide more detailed feedback to guide course design in more meaningful ways than holistic rubrics. Pre-existing analytic rubrics such as the AAC&U VALUE Rubrics can be tailored to fit your course or program, or you can develop an outcome-specific rubric yourself (Moskal, 2000, is a useful reference, or contact CATLR for a one-on-one consultation). The process of refining a rubric often involves multiple iterations of applying the rubric to student work and identifying the ways in which it captures or does not capture the characteristics representing the outcome.
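As a minimal illustration of the matrix idea (the criteria, descriptors, and levels below are invented for the example, not drawn from the VALUE Rubrics or any cited source), an analytic rubric can be represented as a small criteria-by-levels structure, which also makes per-student scoring straightforward:

```python
# A hypothetical analytic rubric: each criterion maps to short descriptors
# for three levels of mastery (0 = beginning, 1 = developing, 2 = proficient).
rubric = {
    "argument": ["restates facts", "states a thesis", "defends a thesis with evidence"],
    "sources":  ["no citations", "some citations", "well-chosen, correctly cited sources"],
    "clarity":  ["hard to follow", "mostly clear", "clear and well organized"],
}

def score(ratings, rubric):
    """Average the level awarded on each criterion and report it as a 0-100 mark."""
    max_level = len(next(iter(rubric.values()))) - 1
    return 100 * sum(ratings[criterion] for criterion in rubric) / (max_level * len(rubric))

# One student's ratings, one per criterion.
ratings = {"argument": 2, "sources": 1, "clarity": 2}
print(round(score(ratings, rubric), 1))  # 83.3
```

Keeping the per-criterion ratings, rather than only the single mark, is what allows the per-outcome aggregation described in the summative assessment discussion above.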

Summative assessment results can inform changes to any of the course components for subsequent terms. If students have underperformed on a particular course learning objective, the instructor might choose to revise the related assignments or provide additional practice opportunities related to that objective, and formative assessments might be revised or implemented to test whether those new learning activities are producing better results. If the final assessment does not provide sufficient information about student performance on a certain outcome, the instructor might revise the assessment guidelines or even implement a different assessment that is more aligned to the outcome. Finally, if an instructor notices during the assessment process that an important outcome has not been articulated, or would be more clearly stated a different way, that instructor might revise the objectives themselves.

For assistance at any stage of the course assessment cycle, contact CATLR for a one-on-one or group consultation.

References

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: John Wiley & Sons.

Anderson, L. W., & Krathwohl, D. R. (2001).  A taxonomy for learning, teaching and assessing: A revision of Bloom’s Taxonomy of Educational Objectives . New York, NY: Longman.

Bembenutty, H. (2011). Self-regulation of learning in postsecondary education.  New Directions for Teaching and Learning ,  126 , 3-8. doi: 10.1002/tl.439

Biggs, J., & Tang, C. (2011).  Teaching for Quality Learning at University . Maidenhead, England: Society for Research into Higher Education & Open University Press.

Cauley, K. M., & McMillan, J. H. (2010). Formative assessment techniques to support student motivation and achievement.  The Clearing House: A Journal of Educational Strategies, Issues and Ideas ,  83 (1), 1-6. doi: 10.1080/00098650903267784

Cuevas, N. M., Matveev, A. G., & Miller, K. O. (2010). Mapping general education outcomes in the major: Intentionality and transparency.  Peer Review ,  12 (1), 10-15.

Felder, R. M., & Brent, R. (2016).  Teaching and learning STEM: A practical guide . San Francisco, CA: John Wiley & Sons.

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview.  Theory into practice ,  41 (4), 212-218. doi:  10.1207/s15430421tip4104_2

McCourt, Millis, B. J. (2007). Writing and Assessing Course-Level Student Learning Outcomes. Office of Planning and Assessment, Texas Tech University.

Moskal, B. M. (2000). Scoring rubrics: What, when and how?  Practical Assessment, Research & Evaluation ,  7 (3).

Setting Learning Outcomes . (2012). Center for Teaching Excellence at Cornell University. Retrieved from  https://teaching.cornell.edu/teaching-resources/designing-your-course/setting-learning-outcomes .

Steele, C. M. (2010).  Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do . New York, NY: WW Norton & Company, Inc.

Wiggins, G., & McTighe, J. (2005).  Understanding by Design (Expanded) . Alexandria, US: Association for Supervision & Curriculum Development (ASCD).

Designing Assessments

Determining how and when students have reached course learning outcomes.

The Importance of Assessment

Assessments in education measure student achievement. They may take the form of traditional assessments such as exams or quizzes, but they may also be part of learning activities such as group projects or presentations.

While assessments may take many forms, they are also used for a variety of purposes. They may:

  • Guide instruction 
  • Determine if reteaching, remediating or enriching is needed
  • Identify strengths and weaknesses
  • Determine gaps in content knowledge or understanding
  • Confirm students’ understanding of content
  • Promote self-regulating strategies 
  • Determine if learning outcomes have been achieved
  • Collect data to record and analyze
  • Evaluate course and teaching effectiveness

While all aspects of course design are important, your choice of assessment influences what your students will primarily focus on.

For example, if you assign students to watch videos but do not assess understanding or knowledge of the videos, students may be more likely to skip the task. If your exams only focus on memorizing content and not thinking critically, you will find that students are only memorizing material instead of spending time contemplating the meaning of the subject matter, regardless of whether you attempt to motivate them to think about the subject.

Overall, your choice of assessment will tell students what you value in your course. Assessment focuses students on what they need to achieve to succeed in the class, and if you want students to achieve the learning outcomes you have created, then your assessments need to align with them.

The Assessment Cycle

Assessment does not occur only at the end of units or courses. To adjust teaching and learning, assessment should occur regularly throughout the course. The following diagram is an example of how assessment might occur at several levels.

[Figure: the assessment cycle]

This cycle might occur:

  • During a single lesson when students tell an instructor that they are having difficulty with a topic.
  • At the unit level where a quiz or exam might inform whether additional material needs to be included in the next unit.
  • At the course level where a final exam might indicate which units will need more instructional time the next time the course is taught.

In many of the above instances, learning outcomes may not change; instead, assessment results will directly influence further instruction. For example, during a lecture, a quick formative assessment such as a poll may make it clear that the instruction was unclear and that further examples are needed.

Assessment Considerations

There are several types of assessment to consider in your course which fit within the assessment cycle. The two main assessments used during a course are formative and summative assessment. It is easier to understand each by comparing them.

|            | Formative (Assessment for Learning)   | Summative (Assessment of Learning) |
|------------|----------------------------------------|-------------------------------------|
| Purpose    | Improve learning                       | Measure attainment                  |
| When       | While learning is in progress          | End of learning                     |
| Focused on | Learning process and learning progress | Products of learning                |
| Who        | Collaborative                          | Instructor-directed                 |
| Use        | Provide feedback and adjust lessons    | Final evaluation                    |

An often-used quote that helps illustrate the difference between these purposes is:

“When the cook tastes the soup, that’s formative. When the guests taste the soup, that’s summative.” – Robert E. Stake

It is important to note, however, that assessments may often serve both purposes. For example, a low-stakes quiz may be used to inform students of their current progress, and an instructor may alter instruction to spend more time on a topic if student scores warrant it. Additionally, activities like research papers or presentations graded on a rubric contain both the learning activity as well as the assessment. If students complete sections or drafts of the paper and receive grades or feedback along the way, this activity also serves as a formative assessment for learning while serving as a summative assessment upon completion.

Diagnostic (pre-)assessment: Used to determine student understanding and misconceptions before a course or unit begins, establishing background knowledge on upcoming topics.

Formative assessment: Used to determine whether students understood course content, as well as what instruction or active learning worked well. It can also surface misconceptions and questions students may still have. Formative assessments are used to inform further instruction.

Summative assessment: Used to determine student learning at the end of the learning process. These assessments usually result in a weighted grade.

Rubric: A set of criteria used to evaluate student work that improves efficiency, accuracy and objectivity while simultaneously providing feedback.

Best Practices

Considerations

For assessments to accurately measure outcomes and to provide optimal feedback to students, the following should influence assessment choice and design:

Learning outcomes

  • Cognitive complexity
  • Options for expression
  • Course and Class

Assessment and grading

  • Weight of assessment
  • Time for grading and feedback
  • Delivery modes

Course level

  • Prerequisites and post learning
  • Time and length of course
  • Practice opportunities
  • Accessibility and accommodations

Provide Ongoing and Varied Methods

Because learning outcomes are unique, the types of knowledge and skills that demonstrate achievement of these outcomes will differ. Therefore, assessments will need to vary to capture this achievement. Consider using:

  • Evaluation of participation and engagement
  • Opportunities for feedback
  • Demonstrable learner progress
  • Opportunities to test and apply their knowledge
  • Different types of evidence: The following resource summarizes the different types of evidence to determine progress and how this evidence can be collected.

Question Types

There are several types of questions that you can use to assess student achievement. The following links explain question types and how to design high-quality multiple-choice questions.

Overview of the different types of questions you can use for assessments.

Overview of how to access, add and use all the question types available in Brightspace to design an assessment.

Overview of how to construct high-quality multiple-choice test questions.

Aligning Assessments to Learning Outcomes

When choosing assessments for your course, start by reviewing the learning outcomes and then matching assessments to them. Assessments should align with the cognitive complexity (see Bloom’s Taxonomy) or type of learning (see Fink’s Taxonomy) of course learning outcomes.

For example, if your course outcomes expect students to be able to memorize or understand course content, exams with multiple choice questions may accurately assess these outcomes. If your outcome expects students to be able to create an original product, then a multiple choice question would not measure an innovative creation.

Instead, a project, graded with a rubric, may best assess this.

[Figure: Bloom’s pyramid showing multiple-choice questions as appropriate for the Remembering and Understanding levels, but not for the Creating level]

If an assessment does not map onto an outcome, you should ask whether you are missing a course learning outcome you care about and, if not, whether your assessment is necessary. Further, you may need to adapt the assessment itself or even your choice in assessment to align with the learning outcome.

The accompanying chart is helpful for choosing and reviewing your assessments as you create them. You may find, as you go, that a course outcome changes once you determine how you will be able to assess it, or that the scope of a learning outcome is too large for the time available for the assessment.

If you find most outcomes are assessed using quizzes and exams, consider alternative methods of assessment.

A colorized wheel illustrating how various assessments (60+) map onto the levels of learning outlined by Bloom.

Guide to developing formative assessment questions in alignment with Bloom’s Taxonomy.

Applying Assessments to Your Course

  • On your course design template, fill in the assessment column.
  • Consider a variety of assessments (e.g., formative and summative assessments).
  • Ensure assessments align to your course’s learning outcomes.

Now that you have chosen assessments to measure learning outcomes, the next step is to consider methods of teaching.

If you would like to begin building some of your assessments, see:

Additional resources

The first step is deciding what you want your students to be able to do by the end of your course.

Examples for aligning assessments to the cognitive complexity of learning outcomes.

How does the assessment(s) align with your learning outcomes?

Rethinking educational assessments: the matrimony of exams and coursework

Standardised tests have become cemented in education systems across the globe, but whether they assess students’ ability better than coursework does still divides opinion.

Proponents of exam assessments argue that despite being stressful, exams are beneficial for many reasons, such as:

  • They provide motivation to study;
  • Results are a good measure of the student’s own work and understanding (and not anyone else’s); and
  • They are a fair way of assessing students’ knowledge of a topic, since everyone answers the same questions under the same conditions.

But the latter may not be entirely true. A  Stanford study says question format can impact how boys and girls score on standardised tests. Researchers found that girls perform better on standardised tests that have more open-ended questions, while boys score higher when the tests include more multiple-choice questions.

Meanwhile, The Hechinger Report notes that assessments, when designed properly, can support, not just measure, student learning, building their skills and granting them the feedback they need.

“Assessments create feedback for teachers and students alike, and the high value of feedback – particularly timely feedback – is well-documented by learning scientists. It’s useful to know you’re doing something wrong right after you do it,” it said.

Conversely, critics of exams say the obsession with test scores comes at the expense of learning – students memorise facts, while some syllabi lack emphasis on knowledge application and do little to develop students’ critical thinking skills.

Meanwhile, teachers have argued that report card grades aren’t the best way to measure a student’s academic achievement, adding that they measure effort more than achievement.

Coursework, on the other hand, assesses a wider range of skills – it can consist of a range of activities such as quizzes, class participation, assignments and presentations. These steady assessments over an academic year arguably give a fairer representation of students’ educational attainment while also catering to different learning styles.

Quizzes can be useful as they keep students on their toes and encourage them to study consistently, while giving educators a yardstick of how well students are faring. Group work, however, can open up a can of worms when lazy students latch on to hard-working peers to pull up their grades, or when work is unevenly distributed among teammates.

It becomes clear that exams and coursework test different ‘muscles’, but do they supplement and support students’ learning outcomes and develop students as a whole?

The shifting tides

News reports suggest that some countries are gradually moving away from an exam-oriented education system; these include selected schools in the US and Asian countries.

Last year, Malaysia’s Education Minister, Dr Maszlee Malik, said students from Year One to Three would no longer sit for exams from 2019, enabling the ministry to implement Classroom-Based Assessment (PBD), which focuses on a pupil’s learning development.

Meanwhile, Singapore is cutting down on the number of exams for selected primary and secondary school levels, while Georgia’s school graduate exams will be abolished from 2020. Finland is a country known for not having standardised tests, with the exception of one exam at the end of students’ secondary school year.

Drawing from my experience, I found that a less exam-oriented system greatly benefitted me.

Going through 11 years of the Malaysian national education system made it clear that I did not perform well in an exam-oriented environment. I was often ‘first from the bottom’ in class, which did little to boost my confidence in school.

For university, I set out to select a programme that was less exam-oriented and eventually chose the American Degree Programme (ADP), while many of my schoolmates went with the popular A-Levels before progressing to their degree.

With the ADP, the bulk of student assessments (about 70 percent, depending on your institution) came from assignments, quizzes, class participation, presentations and the like, while the remaining 30 percent was via exams. Under this system, I found myself flourishing for the first time in an academic setting – my grades improved, I was more motivated to attend my classes and learned that I wasn’t as stupid as I was often made out to be during my school days.

This system of continuous assessments worked more in my favour than the stress of sitting for one major exam. In the former, my success or failure in an educational setting was not entirely dependent on how well I could pass standardised tests that required me to regurgitate facts through essays and open-ended or multiple choice questions.

Instead, I had more time to grasp new and alien concepts, and through activities that promoted continuous development, was able to digest and understand better.

Additionally, shy students such as myself are left with little choice: contribute to class discussions or get a zero for class participation, and take part in group and solo presentations or risk getting zero for oral presentations.

One benefit to this system is that it gives you the chance to play to your strengths and work hard towards securing top marks in areas you care about. If you preferred the examination or assignments portion, for example, you could knock it out of the park in those areas to pull up your grades.

Some students may be all-rounders who perform well in both exam-oriented and coursework assessments, but not all students are. However, the availability of mixed assessments in schools and universities can be beneficial for developing well-rounded individuals.

Under this system, students who perform poorly in exams will still have to go through them anyway, while students who excel in exam-oriented conditions are also forced to undergo other forms of assessment and develop their skill sets, including creativity, collaboration, oral communication and critical thinking skills.

Students who argue that their grades will fall under mixed-assessments should rethink the purpose of their education – in most instances, degrees aim to prepare people for employment.

But can exams really prepare students for employment where they’ll be working with people with different skills, requiring them to apply critical thinking and communication skills over a period of time to ensure work is completed within stipulated deadlines, despite hiccups that can happen between the start and finishing line of a project?

It’ll help if parents, educators and policymakers are on the bandwagon, too, instead of merely pushing children and students to chase a string of As.

Grades hold so much power over students’ futures – from the ability to get an academic scholarship to gaining entry to prestigious institutions – and this means it can be difficult to get students who prefer one mode of assessment to convert to one that may potentially negatively affect their grades.

Ideally, education shouldn’t be about pitting one student against the other; it should be based on attaining knowledge and developing skills that will help students in their future careers and make positive contributions to the world.

Exams are still a crucial part of education as some careers depend on a student’s academic attainment (i.e. doctors, etc.). But rather than having one form of assessment over the other, matrimony between the two may help develop holistic students and better prepare them for the world they’ll soon be walking into.

Open Research Online - ORO

Coursework versus examinations in end-of-module assessment: a literature review.

Richardson, John T. E. (2015). Coursework versus examinations in end-of-module assessment: a literature review. Assessment & Evaluation in Higher Education, 40(3), pp. 439–455.

DOI: https://doi.org/10.1080/02602938.2014.919628

In the UK and other countries, the use of end-of-module assessment by coursework in higher education has increased over the last 40 years. This has been justified by various pedagogical arguments. In addition, students themselves prefer to be assessed either by coursework alone or by a mixture of coursework and examinations than by examinations alone. Assessment by coursework alone or by a mixture of coursework and examinations tends to yield higher marks than assessment by examinations alone. The increased adoption of assessment by coursework has contributed to an increase over time in the marks on individual modules and in the proportion of good degrees across entire programmes. Assessment by coursework appears to attenuate the negative effect of class size on student attainment. The difference between coursework marks and examination marks tends to be greater in some disciplines than others, but it appears to be similar in men and women and in students from different ethnic groups. Collusion, plagiarism and personation (especially ‘contract cheating’ through the use of bespoke essays) are potential problems with coursework assessment. Nevertheless, the increased use of assessment by coursework has generally been seen as uncontentious, with only isolated voices expressing concerns regarding possible risks to academic standards.


Duke Learning Innovation and Lifetime Education

Design and Grade Course Work

This resource provides a brief introduction to designing your course assessment strategy based on your course learning objectives and introduces best practices in assessment design. It also addresses important issues in grading, including strategies to curb cheating and grading methods that reduce implicit bias and provide actionable feedback for students.

In this document, assessments refer to all the ways students’ learning can be measured. This includes summative assessments such as tests and papers, but also formative assessments such as a survey to gauge understanding of course concepts.

Table of Contents:

  • Crafting effective assessments
  • Encouraging academic integrity
  • Grading fairly
  • Resources & further reading

Tie assessments to the course learning objectives. To determine what kinds of assessments to use in your course, consider what you want the students to learn to do and how that can be measured. When designing an overall plan, it is important to begin with the end in mind.

Consider what type of assessments best fit your learning objectives. For example, a case study is appropriate for measuring students’ ability to apply skills to a new situation, while a multiple choice exam is better for testing their understanding of concepts. This table of assessment choices from Carnegie Mellon University can help you think about the alignment of learning objectives and types of assessments.

Rethink traditional assessment to enhance the learning experience. At the end of a learning unit or module, summative assessments are frequently employed to measure students’ learning. These assessments are usually graded, cumulative in design and take the form of a midterm exam, research paper or final project. Consider replacing a traditional assessment with an authentic assessment situated in a meaningful, real-world context or modifying existing assessments to “do” the subject instead of recalling information. Here are some high-level questions to get you started:

  • Does this assessment replicate or simulate the contexts in which adults are “tested” in the workplace, civic life or personal life?
  • Does this assessment challenge students to use what they’ve learned in solving or analyzing new problems?
  • Does this assessment provide direct evidence of learning?  
  • Is this assessment realistic? Have students been able to practice along the way?
  • Does this assessment truly demonstrate success and mastery of a skill students should have at the end of your course?

Further considerations for authentic assessment design are available in this guide from University of Illinois.

In practice, authentic assessments look different by discipline and level of the course. A good starting point is to research common examples of alternative assessments, but consider researching approaches in your discipline. There are also ways to improve traditional assessments such as quizzes to be a measure of true learning instead of memorization.

Our page on Alternative Strategies for Assessment and Grading outlines some options for creating assessment activities and policies which are learning-focused, while also being equitable and compassionate. The suggestions are loosely grouped by expected faculty time commitment.

Tailor learning by assessing previous knowledge. At the beginning of a learning unit or module, use a diagnostic assessment to gain insight into students’ existing understanding and skills prior to beginning a new concept. Examples of diagnostic assessments include: discussion, informal quiz, survey or a quick write paper (see this list for more ideas).

Use frequent informal assessments to monitor progress. Formative assessments are any assessments implemented to evaluate progress during the learning experience. When possible, provide several low-stakes opportunities for students to demonstrate progress throughout the course. Formative assessments provide five major benefits: (1)

  • Students can identify their strengths and weaknesses with a particular concept and request additional support during the learning unit.
  • Instructors can target areas where students are struggling that should be addressed either individually or in whole class activities before a more high-stakes assessment.
  • Formative assessments can be reviewed and evaluated by peers which provides additional opportunities to learn, both for the reviewer and the student being reviewed.
  • Informal, low-stakes assessments reduce student anxiety.
  • A more frequent, immediate feedback loop can make some assessments (like graded quizzes) less necessary.

Examples include quick assessments like polls which can make large classes feel smaller or more informal, or end-of-class reflection questions on the day’s content. This longer list of low-stakes, formative assessments can help you find methods that work with your content and goals.

Use rubrics when possible. Students are likely to perform better on assessments when the grading criteria are clear. Research suggests that assessments designed with a corresponding rubric lead to an increased attention to detail and fewer misunderstandings in submitted work. (2) If you are interested in creating rubrics, Arizona State University has a detailed guide to get started.



Rubrics can:

  • Improve student performance by clearly showing students how their work will be evaluated and what is expected.
  • Encourage the instructor to clarify their criteria in specific terms.
  • Help students become better judges of the quality of their own work.
  • Provide objectivity and consistency in grading student work.
  • Provide students with more informative feedback about their strengths and areas that need improvement.
  • Provide useful feedback to the instructor regarding the effectiveness of instruction.

Break up larger assessments into smaller parts. Scaffolding major or long-term work into smaller assignments with different deadlines gives students natural structure, helps with time and project management skills and provides multiple opportunities for students to receive constructive feedback. Students also benefit from scaffolding when:

  • Rubrics are provided to assess discrete skills and evaluate student practice via smaller pre-assignments. 
  • The stakes are lowered for preliminary work.
  • Opportunities are offered for rewrite or rework based on feedback.

Use practices that promote inclusive assessment design. Take inventory of the explicit and implicit norms and biases of your course assessments. For example, are your assessment questions phrased in a way that all students (including non-native English speakers) can be successful? Do your course assessments meet basic accessibility standards, including being appropriate for students with visual or hearing needs?

The Duke Community Standard embraces the principle that “intellectual and academic honesty are at the heart of the academic life of any university. It is the responsibility of all students to understand and abide by Duke’s expectations regarding academic work.” (3) Learning the rules of legitimacy in academic work is part of college education, so the topic of cheating and plagiarism should be embraced as part of ongoing discussion among students, and faculty instructors should remind students of this obligation throughout their courses.

Include a statement about cheating and plagiarism in your syllabus. Remind students that they must uphold the standards of student conduct as an obligation of participating in our learning community. This can be reinforced before important assessments as well. Studies have shown that when students have to manually agree to the Honor Pledge prior to submitting an assignment (either online or in person), they are less likely to cheat. (4)

Specify where training is available. Because of their cultural or academic experiences, some students may not be familiar with what constitutes plagiarism in your course. Students can use library resources to learn more about plagiarism and take the university’s plagiarism tutorial.

Include specific guidelines for collaboration, citation and the use of electronic sources for every assessment. For example, it may be necessary to define what kinds of online sources are considered cheating for your discipline (for example, online translators in language courses) or help students understand how to cite correctly.

Provide ongoing feedback to reduce the temptation to cheat. Students are more likely to seek short cuts when they don’t know how to approach a task. Requiring students to turn in smaller parts of a paper or project for feedback and a grade before the final deadline can lessen the risk of cheating. Having multiple milestones on larger assessments reduces the stress of finishing a paper at the last minute or cramming for a final exam.

Ask questions that have no single right answer. The most direct approach to reduce cheating is to design open-ended assessment items. When writing test or quiz questions ask yourself: could this answer be easily discovered online? If so, rewrite your question to elicit more critical thinking from your students.

Open-ended assessments can take the form of case studies, projects, essays, podcasts, interviews or “explain your work” problem sets. Students can provide examples of course concepts in a novel way. They can record themselves explaining the idea to someone else or make a mind map of related events or ideas. They can present their solutions to real-world scenarios as a poster or a podcast. If you choose to conduct an exam, designing questions that ask students to decide which concepts or equations to apply in a scenario, rather than testing recall, may make the most sense for many courses. You could include an oral exam component where students explain their work for a particular problem.

Minimize opportunities for cheating in tests and quizzes online. If you offer quizzes or tests through Sakai, there are several steps that you can take to reduce cheating, plagiarism or other violations:

  • Sakai tests include a pledge not to violate the Duke Community Standard. You could also have this printed at the top of a physical test.
  • Limit time. Set a time limit that gives students enough time to properly progress through the activity but not so much that unprepared students can research every question.
  • Randomize question or answer order. When you randomize (or shuffle) your test or quiz questions, all students will still receive the same questions but not necessarily in the same order. This strategy is particularly useful when you have a large question pool and choose to show a few questions at a time. When you randomize the answers to a question, all students will still receive the same answers but not necessarily in the same order.
  • Use large question pools. Pools allow you to use the same question across multiple assessments or create a large number of questions from which to pull a random subset. For example, you could develop (or repurpose) 30 questions in a pool and have Sakai randomly choose 15 of those questions for each student’s assessment (a minimal sketch of this pool-and-shuffle approach follows this list).
  • Hide correct answers and scores until the test or quiz is closed. This can prevent students from sharing questions and answers with peers during the assessment period.
  • Require an explanation of the student’s answer. Ask for a rationale either as a short text response or perhaps a voice recording.
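
The pool-and-shuffle strategy above is simple to reason about. The sketch below is a minimal, generic illustration in Python of drawing a per-student subset from a shared question pool and shuffling answer order; it is not how Sakai itself works, and the Question class and build_student_quiz helper are hypothetical names used only for this example.

```python
import random
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    options: list[str]  # all answer choices, in authored order
    answer: str         # the correct choice, tracked separately so shuffling is safe

def build_student_quiz(pool: list[Question], n_questions: int, student_id: str) -> list[Question]:
    """Draw a per-student subset from a shared pool and shuffle each question's options,
    so every student sees an equivalent but not identical quiz (illustration only)."""
    rng = random.Random(student_id)         # seeding on the student ID keeps the draw reproducible
    chosen = rng.sample(pool, n_questions)  # e.g. 15 questions drawn from a 30-question pool
    quiz = []
    for q in chosen:
        shuffled = q.options[:]
        rng.shuffle(shuffled)               # same choices, different order for each student
        quiz.append(Question(q.prompt, shuffled, q.answer))
    return quiz
```

Because the draw is seeded per student, reopening or regrading the assessment reproduces the same quiz for that student while the questions and option order still vary across the class.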

Duke has chosen not to implement a proctoring technology. When thinking about proctoring, keep in mind how implementing such policies and technologies might affect our ability to create equitable student-centered learning experiences. Several issues of student well-being and technological constraints you might want to keep in mind include:

  • Student privacy : In an online setting, proctoring services essentially bring strangers into students’ homes or dorm rooms — places students may not be comfortable exposing. Additionally, often these services record and store actions of students on non-Duke servers and infrastructure. This makes proctoring services problematic for the in-class setting as well. These violations of privacy perpetuate inequity through the use of surveillance technologies. 
  • Technology access : If testing is online all students may not have the same access to technology (e.g., external webcams) for proctoring.
  • Accessibility : Proctoring software can create more barriers for students who need accommodations.
  • Unease: Proctoring reinforces a surveillance aspect to learning, which impacts student performance.

Grading Fairly

Start with clear instructions, a direct assignment prompt and transparent grading criteria. Explicit instructions reduce confusion and the number of emails that you may receive from your students requesting clarification on an assignment. Your assignment instructions should detail:

  • Length requirements
  • Formatting requirements
  • Expectations of style, voice and tone
  • Acceptable structure for reference citations
  • Due date(s)
  • Technology requirements needed for the assignment
  • Description of the measures used to evaluate success

Offer meaningful feedback and a timely response when grading. There are many ways to provide feedback to students on submitted work. Regardless of the grading strategy and tool that you choose, there are a few best practices to consider when providing student feedback:

  • Feedback should be prompt . Send feedback as soon as possible after the assignment to give students an adequate amount of time to reflect before moving on to the next assignment.
  • Feedback should be equitable . Rubrics can help ensure that students are receiving consistent feedback for similar work. 
  • Feedback should be formative . Meaningful feedback focuses on students’ strengths and shares constructive areas to further develop their skills. It is not necessary to correct all errors if patterns can be pointed out.

We recommend avoiding curves for both individual assignments and final course grades. There are several downsides to curves that will negatively impact your pedagogy:

  • Curves lower motivation to learn and incentivize cheating
  • Curves create barriers to an inclusive learning environment
  • Curves also “often result in grades unrelated to content mastery” (Jeffrey Schinske and Kimberly Tanner)

Rather than using curves, you can introduce feedback strategies that allow students to improve their performance on future assessments by revising submitted work or reflecting on study habits.

Create customized rubrics to grade assignments consistently. Rubrics can reduce the grading burden over the long-term for instructors and increase the quality of the work students create. A well-designed rubric: 

  • Provides clear criteria for success that help students produce better work and help instructors grade consistently.
  • Points out specific areas for students to address in future assignments.
  • Allows for consistency in grading and more meaningful feedback.

Grade students anonymously. Blind grading removes any potential positive or negative bias when reviewing an individual’s work. The main assessment tools at Duke, Sakai and Gradescope, have easy controls for implementing anonymous grading. 

Use a grade book that is visible to students. Students should have online access to their grades throughout the semester. It is not necessary to post their cumulative course grade at all times, but seeing the individual items is important. Knowing how they are doing reduces student stress before big assessments. An open and up-to-date grade book provides opportunities for students and instructors to address issues in a timely manner. It allows students to correct any omissions by the instructor and instructors have an immediate sense of which students are struggling as well.

Assessments

Best Practices for Inclusive Assessment (Duke University)

What are inclusive assessment practices? (Tufts University)

Sequencing and Scaffolding Assignments (University of Michigan)

Blind Grading (Yale University)

Using Rubrics (Cornell University)

How to Give Your Students Better Feedback with Technology (Chronicle of Higher Education)

  1. The Many Faces of Formative Assessment (International Journal of Teaching and Higher Education)
  2. A Review of Rubric Use in Higher Education (Reddy, Y., et al., Assessment & Evaluation in Higher Education)
  3. Duke Community Standard
  4. The Impact of Honor Codes and Perceptions of Cheating on Academic Cheating Behaviors, Especially for MBA Bound Undergraduates (O’Neill H., Pfeiffer C.)

Think Student

Coursework vs Exams: What’s Easier? (Pros and Cons)

In A-Level, GCSE, General by Think Student Editor, September 12, 2023

Coursework and exams are two different techniques used to assess students on certain subjects. Both of these methods can seem like a drag when trying to get a good grade, as they both take so many hours of work! However, is it true that one of these assessment techniques is easier than the other? Some students pick subjects specifically because they are only assessed via coursework or only assessed via exams, depending on what they find easiest. However, could there be a definite answer to what is the easiest?

If you want to discover whether coursework or exams are easier and the pros and cons of these methods, check out the rest of this article!

Disclaimer: This article is solely based on one student’s opinion. Every student has different perspectives on whether coursework or exams are easier. Therefore, the views expressed in this article may not align with your own.


Coursework vs exams: what’s easier?

The truth is that whether you find coursework or exams easier depends on you and how you like to work. Different students learn best in different ways and as a result, will have differing views on these two assessment methods.

Coursework requires students to complete assignments and essays throughout the year which are carefully graded and moderated. This work makes up a student’s coursework and contributes to their final grade.

In comparison, exams often only take place at the end of the year. Therefore, students are only assessed at one point in the year instead of throughout. All of a student’s work then leads up to them answering a number of exams which make up their grade.

There are pros and cons for both of these methods, depending on how you learn and are assessed best. Therefore, whether you find coursework or exams easier or not depends on each individual.

Is coursework easier than exams?

Some students believe that coursework is easier than exams. This is because it requires students to work on it all throughout the year, whilst having plenty of resources available to them.

As a result, there is less pressure on students at the end of the year, as they have gradually been able to work hard on their coursework, which then determines their grade. If you do coursework at GCSE or A-Level, you will generally have to complete an extended essay or project.

Some students find this easier than exams because they have lots of time to research and edit their essays, allowing the highest quality of work to be produced. You can discover more about coursework and tips for how to make it stand out if you check out this article from Oxford Royale.

However, some students actually find coursework harder because of the amount of time it takes and all of the research involved. Consequently, whether you prefer coursework or not depends on how you enjoy learning.

What are the cons of coursework?

As already hinted at, the main con of coursework is the amount of time it takes. In my experience, coursework was always such a drag because it took up so much of my time!

When you hear that you have to do a long essay, roughly 2000-3000 words, it sounds easily achievable. However, the amount of research you have to do is immense, and then editing and reviewing your work takes even more time.

Coursework should not be over and done with in a week. It requires constant revisits and rephrasing as you make it as professional-sounding and high-quality as possible. Teachers are also unable to give lots of help to students doing coursework. This is because it is supposed to be an independent project.

Teachers are able to give some advice, however not too much support. This can be difficult for students who are used to being given lots of help.

You also have to be very careful with what you actually write. If you plagiarise anything that you have written, your coursework could be disqualified. Therefore, it is very important that you pay attention to everything you write and make sure that you don’t copy directly from other websites. This can make coursework a risky assessment method.

You are allowed to use websites for research; however, you must reference them correctly. This too can be a difficult skill for some students to learn!

What are the pros of coursework?

Some of the cons of coursework already discussed can actually be seen as pros by some students! Due to coursework being completed throughout the year, this places less pressure on students, as they don’t have to worry about final exams completely determining their grade.

Some subjects require students to sit exams and complete some coursework. However, if a student already knows that they have completed some high-quality coursework when it comes to exam season, they are less likely to place pressure on themselves. They know that their coursework could save their grade even if they don’t do very well on the exam.

A lot of coursework also requires students to decide what they want to research or investigate. This allows students to be more creative, as they decide what to research, depending on the subject. This can make school more enjoyable and also give them more ideas about what they want to do in the future.

If you are about to sit your GCSEs and are thinking that coursework is the way to go, check out this article from Think Student to discover which GCSE subjects require students to complete coursework.

What are the cons of exams?

Personally, I hated exams! Most students share this opinion. After all, so much pressure is put on students to complete a set of exams at the end of the school year. Therefore, the main con of sitting exams is the amount of pressure that students are put under.

Unlike coursework, students are unable to go back and revisit the answers to their exams over many weeks. Instead, after those 2 (ish) hours are up, you have to leave the exam hall and that’s it! Your grade will be determined from your exams.

This can be seen as not the best method, as it doesn’t take a student’s performance throughout the rest of the year into account. Consequently, if a student is just having a bad day and messes up one of their exams, nothing can be done about it!

If you are struggling with exam stress at the moment, check out this article from Think Student to discover ways of dealing with it.

Exams also require an immense amount of revision which takes up time and can be difficult for students to complete. If you want to discover some revision tips, check out this article from Think Student.

What are the pros of exams?

Exams can, however, be considered easier because they are over with quickly. Unlike coursework, all students have to do is stay in an exam hall for a couple of hours and it’s done! If you want to discover how long GCSE exams generally last, check out this article from Think Student.

Alternatively, you can find out how long A-Level exams are in this article from Think Student. There is no need to work on one exam paper for weeks – apart from revising of course!

Revising for exams does take a while, however revising can also be beneficial because it increases a student’s knowledge. Going over information again and again means that the student is more likely to remember it and use it in real life. This differs greatly from coursework.

Finally, the main advantage of exams is that it is much harder to cheat in any way. Firstly, this includes outright cheating – there have been issues in the past with students getting other people to write their coursework essays.

However, it also includes the help you get. Some students may have an unfair advantage if their teachers offer more help and guidance with coursework than at other schools. In an exam, it is purely the student’s work.

While this doesn’t necessarily make exams easier than coursework, it does make them fairer, and is the reason why very few GCSEs now include coursework.

If you want to discover more pros and cons of exams, check out this article from AplusTopper.

What type of student is coursework and exams suited to?

You have probably already gathered from this article whether exams or coursework are easier. This is because it all depends on you. Hopefully, the pros and cons outlined have helped you to decide whether exams or coursework is the best assessment method for you.

If you work well under pressure and prefer getting assessed all at once instead of gradually throughout the year, then exams will probably be easier for you. This is also true if you are the kind of person that leaves schoolwork till the last minute! Coursework will definitely be seen as difficult for you if you are known for doing this!

However, if, like me, you buckle under pressure and prefer having lots of time to research and write a perfect essay, then you may find coursework easier. Despite this, most GCSE subjects are assessed via exams. Therefore, you won’t be able to escape all exams!

As a result, it can be useful to find strategies that will help you work through them. This article from Think Student details a range of skills and techniques which could be useful to use when you are in an exam situation.

Exams and coursework are both difficult in their own ways – after all, they are used to thoroughly assess you! Depending on how you work best, it is up to you to decide whether one is easier than the other and, if so, which one that is.


Best Practices and Sample Questions for Course Evaluation Surveys

Meaningful input from students is essential for improving courses. One of the most common indirect course assessment methods is the course evaluation survey. In addition to providing useful information for improving courses, course evaluations provide an opportunity for students to reflect and provide feedback on their own learning. Review an example of a digital course evaluation survey in HelioCampus Assessment and Credentialing (formerly AEFIS) that was created by Testing and Evaluation Services.

Best Practices

The following best practices are intended to guide departments and programs in creating and revising course evaluation questions, and achieving high response rates.


Achieving High Response Rates

  • Give students time (10-15 minutes) to complete the digital evaluation during class (just as they do with printed, paper evaluations).
  • Encourage students to complete the evaluation by discussing its purpose and importance in the weeks leading up to it. If students know that you will read their feedback and seriously consider changes based on their feedback, they will be more likely to complete the evaluation.
  • Share how you have incorporated past feedback into your courses.
  • Examples include making the evaluation an assignment with points attached or giving students a bonus point. One way to do this is to set a target response rate for the class – say 90% – and provide everyone a bonus point if the class reaches the target.
  • Ask students to provide feedback about their own learning relative to the course’s learning outcomes.

Creating and Revising Survey Questions - Strategies to Obtain More Effective Feedback

  • Meaningful input from students is essential for improving courses.
  • Obtaining student feedback on their learning is important to you.
  • Guide students to the specific type of feedback you are looking for.
  • Students, like anyone answering questions, tend to provide better feedback to more specific questions. Asking about a specific type of activity, or asking students to share the most important point they learned during the semester, may provide more useful feedback.
  • Example: instead of asking “How useful were the instructional materials and activities for this course?”, focus on a specific material or activity.
  • Yes/no questions can often be leading questions. Instead of asking “Did you learn a great amount from this course?”, a better question would be “To what extent do you feel you mastered the content in this course?”
  • Asking open-ended questions can help you gain insight you may not otherwise receive. Research by the University of California – Merced is finding that coaching from peers or near-peers can help students provide more effective feedback to open-ended questions. The research includes short videos and a rubric you can share with your students prior to completing evaluations.
  • Students are hesitant to complete course evaluations if they feel they may be identified by their responses. For example, responding to “level” or “year” when they are the only graduate student or undergraduate senior in a course.

Sample Questions

Instructor-Specific: Delivery - Teaching Methods, Strategies, Practices and Clarity

  • The instructor was well prepared for class.
  • Individual class meetings were well prepared.
  • The instructor used class time effectively.
  • The instructor was organized, well prepared, and used class time efficiently.
  • The instructor communicated clearly and was easy to understand.
  • The instructor encouraged student participation in class.
  • The instructor presented course material in a clear manner that facilitated understanding.
  • The instructor effectively organized and facilitated well-run learning activities.
  • The instructor’s teaching methods were effective.
  • The instructor’s teaching methods aided my learning.
  • The instructor stimulated my interest in the subject matter.
  • The instructor provided helpful feedback.
  • The instructor provided feedback in a timely manner.
  • The instructor returned assignments and exams in a timely manner.
  • The online course platform was updated and accurate.

Instructor-Specific: Personal / Connection - Clarity and Encouragement

  • The instructor effectively explained and illustrated course concepts.
  • The instructor’s feedback to me was helpful and improved my understanding of the material.
  • I was able to access the instructor outside of scheduled class time for additional help.
  • The instructor was available to students.
  • I could get help if I needed it.
  • The instructor cared about the students, their progress, and successful course completion.
  • The instructor created a welcoming and inclusive learning environment.
  • The instructor treated students with respect.

Course Materials

  • The lectures, readings, and assignments complemented each other.
  • The instructional materials (i.e., books, readings, handouts, study guides, lab manuals, multimedia, software) increased my knowledge and skills in the subject matter.
  • The text and assigned readings were valuable.
  • The workload consisted of less than two hours outside of the classroom for each hour in class.
  • The course workload and requirements were appropriate for the course level.
  • The course was organized in a manner that helped me understand underlying concepts.
  • The course assignments (readings, assigned problems, laboratory experiments, videos, etc.) facilitated my learning.
  • The assigned readings helped me understand the course material.
  • Graded assignments helped me understand the course material.
  • The tests/assessments accurately assess what I have learned in this course.
  • Exams and assignments were reflective of the course content.
  • The course was well organized.
  • The course followed the syllabus.
  • The instructor grades consistently with the evaluation criteria.
  • The course environment felt like a welcoming place to express my ideas.

Student Engagement and Involvement

  • I attend class regularly.
  • I consistently prepared for class.
  • I have put a great deal of effort into advancing my learning in this course.
  • In this course, I have been challenged to learn more than I expected.
  • I was well-prepared for class/discussion sections.

Course Structure

  • This class has increased my interest in this field of study.
  • This course gave me confidence to do more advanced work in the subject.
  • I believe that what I am being asked to learn in this course is important.
  • The readings were appropriate to the goals of the course.
  • The written assignments contributed to my knowledge of the course material and understanding of the subject.
  • Expectations for student learning were clearly defined.
  • Student learning was fairly assessed (e.g., through quizzes, exams, projects, and other graded work).
  • Exams/assignments were a fair assessment of my knowledge of the course material.
  • The grading practices were clearly defined.
  • The grading practices were fair.
  • The examinations/projects measured my knowledge of the course material.
  • This course was challenging.
  • This course made me think.
  • What grade do you expect to earn in this course? Options for this question: A-AB, B-BC,  C,  Below C, Unsure

Student Learning and Course Learning Outcomes

Text in “{}” should be changed to match the specific course learning outcomes (CLO). 

  • This course helped me { develop intellectual and critical thinking skills }.
  • This course improved my ability to { evaluate arguments }.
  • This course helped me { argue effectively }.
  • My ability to { identify, formulate, and solve problems } has increased.
  • My understanding of { basic chemical transformations, reactivity, and properties } has increased.
  • My ability to { recognize the relationship between structure, bonding, and the properties of molecules and materials } has increased.
  • I am capable of { locating, evaluating, and using information in the literature }.
  • I am confident in my ability to { communicate chemical knowledge effectively }.
  • I understand { professional and ethical responsibility related to data storage }.
  • This course helped me analyze { relations among individual, civil society, political institution, and countries }.
  • The course helped me { further develop my writing ability }.
  • The course improved my { verbal communication skills }.
  • The course increased my ability to { collaborate and work in teams }.
  • The course increased my { intercultural knowledge and awareness to help me become a global citizen }.

UW Essential Learning Outcomes

  • This course enhanced my knowledge of the world (e.g., human cultures, society, sciences, etc.).
  • This course helped me develop intellectual skills (e.g., critical or creative thinking, quantitative reasoning, problem solving, etc.).
  • This course helped me develop professional skills (e.g., written or oral communication, computer literacy, teamwork, etc.).
  • This course enhanced my sense of social responsibility.

General / Overall Rating

  • I would highly recommend this instructor to other students.
  • I would recommend this instructor to others.
  • Overall, this instructor met my expectations for the quality of a UW-Madison teacher.
  • I would highly recommend this course to other students.
  • I would recommend this course to others.
  • Overall, this course met my expectations for the quality of a UW-Madison course.
  • This course had high educational impact.
  • This course was useful in progress toward my degree.

Qualitative, Open-Ended Response

  • Do you have any specific recommendations for improving this course?
  • What are one to three specific things about the course or instructor that especially helped to support student learning?
  • What are the strengths of this course?
  • What parts of the course aided your learning the most?
  • What are one to three specific things about the course that could be improved to better support student learning?
  • What parts of the class were obstacles to your learning?
  • What changes might improve your learning?

TA-Specific

  • Assignments and tests handled by the TA were returned with useful feedback.
  • The TA was willing to explain grading and evaluation of my work.
  • The TA knew and was confident in the material related to this course.
  • The TA was adequately prepared for discussion sections.
  • The TA was clear in presenting subject matter.
  • The TA presented the material in an interesting and engaging way.
  • The TA fostered intellectual communication among my peers.
  • The TA was able to adequately prepare students for assignments (examination, book reviews, research papers, etc.).
  • The TA stimulated thought and discussion.
  • I felt comfortable asking my TA questions.
  • The TA was willing to answer questions.
  • The TA was able to answer questions clearly and completely.
  • The TA effectively utilizes electronic communication (e.g., Learn@UW, Canvas, email, etc.).
  • The TA is well-prepared for each meeting.
  • The TA is flexible and adapts the learning environment when things do not go according to plan.
  • The TA was available during office hours or by appointment.
  • The TA arrives to class on time.
  • The TA was committed to teaching and aiding students.
  • The TA is an effective teacher.
  • If given the opportunity, I would enroll in a section led by my TA again.
  • Overall, the TA performed well.

Qualitative, open-ended response

  • Are there distinctive qualities about the TA that you would like to highlight?
  • What are one to three specific things that your TA does particularly well to support student learning?
  • What might your TA do to improve his/her teaching?
  • What are one to three specific things that you would like to see your TA improve to better support student learning?

Training & Resources

  • Getting Started - Resources for HelioCampus AC Administrators
  • Instructor FAQs
  • Student FAQs

Contact us at [email protected]

Do My Coursework

What is a Coursework Assessment?

There are several types of Coursework Assessment (also called CA); however, they all have in common the wide scope of their application and the tools utilized to evaluate a student’s composition coursework. As long as the student can document their coursework, pass the assessment and meet admission requirements, there is no requirement that they disclose their true course status. In other words, an individual can go through his entire academic career without ever knowing that he has been assessed for competency! However, it’s important to recognize that most colleges and universities require students to complete an assessment before accepting them into a program or granting them a degree.

There are many types of course assessment tests available to students. The most common types include multiple-choice, essay, reading, writing, math and personality tests. These assessments are administered either by the college or the university, or by independent companies who specialize in administering assessments. Students can choose to take a variety of these tests or a specific one for their course of study.

What is a coursework assessment? A course profile is a written examination prepared by an instructor for a student to work through over a semester or academic year. This type of academic document is considered a formal class assignment, but students are not required to submit it until after they have earned their degree or have satisfactorily passed their coursework. In essence, the document serves as a prerequisite for earning a degree or passing out of a course.

Course assessment tests are written and evaluate aspects of a student’s entire course of study. They assess the topics you cover throughout your course. For example, if you are a student who wishes to learn about the American government, your assessment might cover how you learned about Congress, the executive branch, and government policy. It might assess your understanding of constitutional drafting, debates and discussions on various social issues, and even economic policies. An assessment might even look at how well you organized your notes and mastered assignments.

What is a coursework assessment? They are used by colleges and universities to determine which courses a student needs to take. They help students prepare for their first course with an eye to helping them succeed.

Every student is different, which is why the process of evaluating a course is so important. The assessment works by examining each student’s strengths and weaknesses. For example, if a student takes a course and performs well in the majority of the classes, the school may consider this to be a positive sign. If that same student struggles with assignments and tests, however, the school might consider him to be a poor performer and give him a lower grade. After all, the school cannot use the grades earned by a student as a yardstick to compare him to other students.

The types of coursework assessments vary, but they all have one thing in common: the purpose of the tests is to help the school determine whether or not a student’s coursework and performance warrant a higher or lower educational level. In essence, a coursework assessment is the tool used to help the school figure out whether or not a certain student is up to the task of taking a certain class. When a student does not pass an assessment, he still has a chance to improve his grade!


Assessment by coursework

Advice and tips on submitting coursework:

  • Avoid a last minute rush: at the start of your course, check all submission deadlines in your course handbook and plan ahead.
  • Our Online support site explains how to submit coursework online. This must be used where required for a course.
  • Check well before the submission deadline that you can access the online submission site from the computer that you will be using.
  • Back up and maintain a copy of your work in case of technical problems.
  • Never attempt to submit assignments directly to your tutors or to the Course Director.
  • Ensure that you submit the correct file version, together with any images and appendices.
  • Online submission is a two-step process, and work uploaded but left in ‘draft’ is not counted as submitted (your assignment will be deemed to be late if it is still in ‘draft’ after the deadline passes).
  • Ensure that files submitted online meet requirements on file size, type, name, etc. (see How to submit an assignment).
  • Seek online ‘self-help’, or assistance via e-mail and telephone from TALL IT Help.
  • You may be liable to pay fees for late entry for examinations, late change of options, and for re-assessment—see Other charges.

Word count limits and referencing

  • Refer to your course handbook or VLE (course portal) for information about word count limits, including what material (such as indices, etc.) should be counted or not.
  • If you have any doubts or questions about referencing, check with your tutor/Course Director and refer to the University guidance on Plagiarism.

Late submission

If you submit work after the deadline, it will normally be subject to an academic penalty, as outlined in your course conventions (see your Course Handbook).

In exceptional circumstances, if you are not able to submit your work by the deadline, you may request permission to submit late—see our Late Submission Policy .

Withdrawal and resubmission of work

Before the deadline:

It is your responsibility to submit the correct document/file. However, as outlined on the Submissions page, you may withdraw and resubmit work on one occasion before the submission deadline without needing to seek permission. In these circumstances, you should contact your Course Administrator without delay.

After the deadline:

You have up to 30 minutes after the deadline to review the work that you have submitted. If you have made a substantive error (e.g. wrong file, earlier draft, missing bibliography) you can send a replacement to your course administrator. This process should not be used to correct incidental errors e.g. typos, a missing reference, formatting etc.

Files received more than 30 minutes after the deadline will not be accepted under any circumstances.

Corrupt files

In cases where it is discovered that the submitted file is corrupt or cannot be accessed, your course administrator will contact you to request the file be emailed to them, possibly in a new format. The emailed file must be received by the course administrator within 7 days.

Examinations

Arrangements

If you are required to sit an examination as part of your course, further details will be provided in your Course Handbook. For some courses, examinations will be held as “open book” (online) examinations; for other courses they will be in person. Most examinations last for two or three hours.

Preparing for handwritten examinations

For in-person examinations you are expected to handwrite your answers, unless you have a medical condition that prevents you from doing so. We recommend that you practice writing for a suitable period, making sure that your handwriting remains legible. (If the examiners deem a script to be illegible, then a transcription will be required. Transcriptions take place under examination conditions, usually within a week or so of the examination itself, and any costs are charged to the student.) See Sitting your examinations  for more information.

Alternative arrangements for examinations

If you have any special requirements for your examination, medical or otherwise, you should inform your Course Administrator (matriculated students should inform their college). Meeting such requests requires formal University approval, which can take some time, so it is important to submit your request as early as possible. See Alternative examination arrangements  for more information.

Past examination papers

Past examination papers are available through either the  Rewley House Continuing Education Library  or, for undergraduates, on your course VLE (e.g. Moodle or Canvas). Check with your Course Director which papers are most relevant for revision purposes.

Marking and moderation

You will receive marks and feedback on your assignments as you progress throughout the course. All marks are provisional until they are formally agreed by the Board of Examiners (which is normally convened at the end of the academic year).

Assessment is moderated, which means that someone other than the marker of your work will review the spread of all marks awarded within a class or cohort, as well as a sample of the marked assignments, to ensure that the marking is both consistent and fair. Work that is awarded a failing grade will be scrutinised by the External Examiner. You should not expect to be told if a particular piece of your work has been selected for moderation, but students will usually have at least one piece of work moderated during the year. Occasionally a mark will be changed (either increased or decreased) during the moderation process, and if this happens after you have received your work back you will be notified.

Marking feedback

Feedback from assessors (markers) will normally indicate what was good about your work, as well as where it was weaker, and how you can improve.

If you have questions about or are unhappy with the marking

You will probably receive a number of marks during the year, and these may vary as you learn new skills, or reach an element of the course in which you are more or less confident. This is normal: many students experience variations in marks within a body of assessed work.

If you have questions or concerns, or are dissatisfied with the marking process, you can raise this with the Director of Undergraduate Study or Director of Postgraduate Taught Study  (as relevant) in the first instance. You should not approach the marker or Course Director, because students are not permitted to discuss marks with Examiners.

If, after this, you remain dissatisfied with the conduct of the process, you may consider appealing under the Academic appeals procedure; however, please be aware that you are not able to appeal on the grounds of academic judgement.

Failed assessment and resubmission

If you fail an item of assessment, you will be informed of the reassessment opportunities. These will vary according to the requirements of the course, and are detailed in your course conventions (available in your Course Handbook).

You can normally only resubmit a failed piece of work once, and failure at a second attempt will usually mean that you fail the course outright. Such occurrences are rare, but should this happen, you can ask your Course Director for advice about possible options.


Curriculum analytics: Exploring assessment objectives, types, and grades in a study program

  • Published: 06 September 2024


  • Jelena Jovanović (ORCID: orcid.org/0000-0002-1904-0446) 1,
  • Andrew Zamecnik 2,
  • Abhinava Barthakur 2 &
  • Shane Dawson 2


Higher education institutions are increasingly seeking ways to leverage the available educational data to make program and course quality improvements. The development of automated curriculum analytics can play a substantial role in this effort by bringing novel and timely insights into course and program quality. However, the adoption of curriculum analytics for program quality assurance has been impeded by a lack of accessible and scalable data-informed methods that can be employed to evaluate assessment practices and ensure their alignment with the curriculum objectives. Presently, this work remains a manual and resource intensive endeavour. In response to this challenge, we present an exploratory curriculum analytics approach that allows for scalable, semi-automated examination of the alignment between assessments and learning objectives at the program level. The method employs a comprehensive representation of assessment objectives (i.e., learning objectives associated with assessments), to encode the domain specific and general knowledge, as well as the specific skills the implemented assessments are designed to measure. The proposed method uses this representation for clustering assessment objectives within a study program, and proceeds with an exploratory analysis of the resulting clusters of objectives in relation to the corresponding assessment types and student assessment grades. We demonstrate and discuss the capacity of the proposed method to offer an initial insight into alignment of assessment objectives and practice, using the assessment-related data from an undergraduate study program in information systems.
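
To make the clustering step concrete, the following is a minimal sketch of how assessment-objective statements might be grouped by textual similarity, assuming scikit-learn is available. It uses TF-IDF vectors and k-means purely as an illustration; the authors' actual representation and clustering procedure may differ, and the example objectives are invented.

```python
# Illustrative only: cluster assessment-objective statements by textual similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

assessment_objectives = [
    "Apply systems analysis techniques to a business case",         # hypothetical objectives
    "Communicate design decisions to non-technical stakeholders",
    "Evaluate database designs against normalisation principles",
    "Work effectively in a project team",
]

# Represent each objective as a TF-IDF vector over its words.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(assessment_objectives)

# Group similar objectives; in practice the number of clusters would be chosen
# with a cluster-validation index rather than fixed in advance.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

for objective, label in zip(assessment_objectives, labels):
    print(label, objective)
```

In a full program-level analysis, the resulting clusters would then be examined against assessment types and grade distributions, as the abstract describes.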


Data availability

The data that support the findings of this study are available from University of South Australia but restrictions apply to the availability of these data. While the data access was granted to the current study, the data are not publicly available. Data could be available from the authors upon reasonable request and with permission of University of South Australia.

The full list of GQs (Graduate Qualities) is available at https://www.unisa.edu.au/student-life/teaching-and-learning/graduate-qualities/

Mdn stands for median; IQR stands for Interquartile range (the difference between the 3rd and the 1st quartile), as a measure of variability in the data.
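
As a quick worked example of these two statistics (a sketch using NumPy, with invented grade values):

```python
import numpy as np

grades = np.array([52, 61, 65, 70, 74, 78, 81, 85, 90])  # hypothetical assessment grades
mdn = np.median(grades)                    # middle value: 74.0
q1, q3 = np.percentile(grades, [25, 75])   # 1st quartile = 65.0, 3rd quartile = 81.0
iqr = q3 - q1                              # IQR = 16.0, the spread of the middle 50% of grades
print(mdn, iqr)
```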

We repeated the method described in Sect. 4.5 with all grades (N = 35,818) included and obtained almost exactly the same results as those reported in Sect. 5.3. Interested readers can find these analyses in Supplementary file 2.

The verb lists we relied on are those presented in Meda and Swart (2018), as well as lists publicly available from https://bit.ly/45Js8hn and https://bit.ly/3zpV81w.
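As an illustration of how such verb lists can be used, the sketch below tags an objective with Bloom's taxonomy levels by matching its verbs against abbreviated example lists. The verb sets and the helper function are simplified assumptions for the example, not the lists or the exact procedure used in the study.

    # Minimal sketch: map an assessment objective to Bloom's taxonomy levels
    # by matching its words against (abbreviated) illustrative-verb lists.
    BLOOM_VERBS = {
        "remember":   {"define", "list", "recall", "identify"},
        "understand": {"explain", "describe", "summarise", "classify"},
        "apply":      {"apply", "implement", "use", "demonstrate"},
        "analyse":    {"analyse", "compare", "differentiate", "examine"},
        "evaluate":   {"evaluate", "critique", "justify", "assess"},
        "create":     {"design", "develop", "construct", "formulate"},
    }

    def bloom_levels(objective: str) -> set:
        """Return the Bloom levels whose verbs appear in the objective text."""
        words = {w.strip(".,;:").lower() for w in objective.split()}
        return {level for level, verbs in BLOOM_VERBS.items() if verbs & words}

    print(bloom_levels("Design and implement a relational database for a case study"))
    # Expected output: {'apply', 'create'} (order may vary)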

Abelha, M., Fernandes, S., Mesquita, D., Seabra, F., & Ferreira-Oliveira, A. T. (2020). Graduate employability and competence development in higher education—A systematic literature review using PRISMA. Sustainability, 12(15). https://doi.org/10.3390/su12155900

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.

Armatas, C., & Spratt, C. F. (2019). Applying learning analytics to program curriculum review. The International Journal of Information and Learning Technology , 36 (3), 243–253. https://doi.org/10.1108/IJILT-11-2018-0133


Armatas, C., Kwong, T., Chun, C., Spratt, C., Chan, D., & Kwan, J. (2022). Learning Analytics for Programme Review: Evidence, analysis, and action to Improve Student Learning outcomes. Technology Knowledge and Learning , 27 (2), 461–478. https://doi.org/10.1007/s10758-021-09559-6

Balduf, M. (2009). Underachievement among College Students. Journal of Advanced Academics , 20 (2), 274–294. https://doi.org/10.1177/1932202X0902000204

Barthakur, A., Joksimovic, S., Kovanovic, V., Richey, M., & Pardo, A. (2022). Aligning objectives with assessment in online courses: Integrating learning analytics and measurement theory. Computers & Education , 190 , 104603. https://doi.org/10.1016/j.compedu.2022.104603

Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting Linear mixed-effects models using lme4. Journal of Statistical Software , 67 (1). https://doi.org/10.18637/jss.v067.i01

Beheshitha, S. S., Hatala, M., Gašević, D., & Joksimović, S. (2016). The role of achievement goal orientations when studying effect of learning analytics visualizations. Proceedings of the Sixth International Conference on Learning Analytics & Knowledge , 54–63 . https://doi.org/10.1145/2883851.2883904

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education , 32 (3), 347–364. https://doi.org/10.1007/BF00138871

Biggs, J. (2003). Aligning teaching and assessing to course objectives. Teaching and Learning in Higher Education: New Trends and Innovations , 2 (4), 13–17.


Brock, G., Pihur, V., Datta, S., & Datta, S. (2008). clValid: An R Package for Cluster Validation. Journal of Statistical Software , 25 (4), 1–22. https://doi.org/10.18637/jss.v025.i04

Calleia, A., & Howard, S. (2014). Assessing what students know: Effects of assessment type on spelling performance and relation to working memory. Journal of Student Engagement: Education Matters , 4 , 14–24.

Choudhury, B., & Freemont, A. (2017). Assessment of anatomical knowledge: Approaches taken by higher education institutions. Clinical Anatomy , 30 (3), 290–299. https://doi.org/10.1002/ca.22835

Crompton, H., Burke, D., & Lin, Y. C. (2019). Mobile learning and student cognition: A systematic review of PK-12 research using Bloom’s taxonomy. British Journal of Educational Technology , 50 (2), 684–701. https://doi.org/10.1111/bjet.12674

Dawson, S., & Hubball, H. (2014). Curriculum Analytics: Application of Social Network Analysis for Improving Strategic Curriculum Decision-Making in a Research-Intensive University. Teaching & Learning Inquiry: The ISSOTL Journal , 2(2), 59–74. https://doi.org/10.2979/teachlearninqu.2.2.59 .

Dawson, S., Pardo, A., Salehian Kia, F., & Panadero, E. (2023). An Integrated Model of Feedback and Assessment: From fine grained to holistic programmatic review. In LAK23: 13th International Learning Analytics and Knowledge Conference (pp. 579–584).

Day, I. N. Z., van Blankenstein, F. M., Westenberg, P. M., & Admiraal, W. F. (2018). Explaining individual student success using continuous assessment types and student characteristics. Higher Education Research & Development , 37 (5), 937–951. https://doi.org/10.1080/07294360.2018.1466868

Divjak, B., Svetec, B., Horvat, D., & Kadoić, N. (2023). Assessment validity and learning analytics as prerequisites for ensuring student-centred learning design. British Journal of Educational Technology , 54 , 313–334. https://doi.org/10.1111/bjet.13290

Egger, R., & Yu, J. (2022). A topic modeling comparison between LDA, NMF, Top2Vec, and BERTopic to Demystify Twitter posts. Frontiers in Sociology , 7 , 886498. https://doi.org/10.3389/fsoc.2022.886498

Fan, Y., Jovanović, J., Saint, J., Jiang, Y., Wang, Q., & Gašević, D. (2022). Revealing the regulation of learning strategies of mooc retakers: A learning analytic study. Computers & Education , 178 , 104404. https://doi.org/10.1016/j.compedu.2021.104404

Gao, T., Fisch, A., & Chen, D. (2021). Making Pre-trained Language Models Better Few-shot Learners (arXiv:2012.15723). arXiv. https://doi.org/10.48550/arXiv.2012.15723

Garg, R., Han, J., Cheng, Y., Fang, Z., & Swiecki, Z. (2024). Automated discourse analysis via generative artificial intelligence. Proceedings of the 14th Learning Analytics and Knowledge Conference, 814–820. https://doi.org/10.1145/3636555.3636879

Gibson, A., Kitto, K., & Willis, J. (2014). A cognitive processing framework for learning analytics. Proceedings of the 4th International Conference on Learning Analytics and Knowledge , 212–216. https://doi.org/10.1145/2567574.2567610

Gottipati, S., & Shankararaman, V. (2014). Learning analytics applied to curriculum analysis. Proceedings of the 2014 AIS SIGED: IAIM International Conference on Information Systems Education and Research. https://aisel.aisnet.org/siged2014/2

Gottipati, S., & Shankararaman, V. (2018). Competency analytics tool: Analyzing curriculum using course competencies. Education and Information Technologies , 23 (1), 41–60. https://doi.org/10.1007/s10639-017-9584-3

Grootendorst, M. (2022). BERTopic: Neural topic modeling with a class-based TF-IDF procedure (arXiv:2203.05794). arXiv. https://doi.org/10.48550/arXiv.2203.05794

Hammer, S., Ayriss, P., & McCubbin, A. (2021). Style or substance: How Australian universities contextualise their graduate attributes for the curriculum quality space. Higher Education Research & Development , 40 (3), 508–523. https://doi.org/10.1080/07294360.2020.1761304

Hassan, T., Edmison, B., Stelter, T., & McCrickard, S. D. (2021). Learning to trust: Understanding editorial authority and trust in recommender systems for education. Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, 24–32. https://doi.org/10.1145/3450613.3456811

Heckman, J. J., & Kautz, T. (2012). Hard evidence on soft skills. Labour Economics , 19 (4), 451–464. https://doi.org/10.1016/j.labeco.2012.05.014

Hennig, C. (2023). fpc: Flexible Procedures for Clustering (2.2–10). https://CRAN.R-project.org/package=fpc

Hilliger, I., Aguirre, C., Miranda, C., Celis, S., & Pérez-Sanagustín, M. (2020). Design of a curriculum analytics tool to support continuous improvement processes in higher education. Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, 181–186. https://doi.org/10.1145/3375462.3375489

Hilliger, I., Aguirre, C., Miranda, C., Celis, S., & Pérez-Sanagustín, M. (2022). Lessons learned from designing a curriculum analytics tool for improving student learning and program quality. Journal of Computing in Higher Education , 34 (3), 633–657. https://doi.org/10.1007/s12528-022-09315-4

Holmes, D. W., Sheehan, M., Birks, M., & Smithson, J. (2018). Development of a competency mapping tool for undergraduate professional degree programmes, using mechanical engineering as a case study. European Journal of Engineering Education , 43 (1), 126–143. https://doi.org/10.1080/03043797.2017.1324404

Hou, C., Zhu, G., Zheng, J., Zhang, L., Huang, X., Zhong, T., Li, S., Du, H., & Ker, C. L. (2024). Prompt-based and fine-tuned GPT models for context-dependent and -independent deductive coding in social annotation. Proceedings of the 14th Learning Analytics and Knowledge Conference, 518–528. https://doi.org/10.1145/3636555.3636910

Hristova, G., & Netov, N. (2022). Media Coverage and Public Perception of Distance Learning During the COVID-19 Pandemic: A Topic Modeling Approach Based on BERTopic. 2022 IEEE International Conference on Big Data (Big Data), 2259–2264. https://doi.org/10.1109/BigData55660.2022.10020466

Iqbal, S., Rakovic, M., Chen, G., Li, T., Ferreira-Mello, R., Fan, Y., Fiorentino, G., Radi Aljohani, N., & Gasevic, D. (2023). Towards automated analysis of rhetorical categories in students’ essay writings using Bloom’s taxonomy. LAK23: 13th International Learning Analytics and Knowledge Conference, 418–429. https://doi.org/10.1145/3576050.3576112

Irvine, J. (2021). Taxonomies in Education: Overview, comparison, and future directions. Journal of Education and Development , 5 (2). https://doi.org/10.20849/jed.v5i2.898

Jiang, W., & Pardos, Z. A. (2020). Evaluating sources of course information and models of representation on a variety of institutional prediction tasks. International Educational Data Mining Society. https://eric.ed.gov/?id=ED607904

Jiang, H., Fei, X., Liu, H., Roeder, K., Lafferty, J., Wasserman, L., Li, X., & Zhao, T. (2020). huge: High-Dimensional Undirected Graph Estimation (1.3.4.1). https://CRAN.R-project.org/package=huge

Jovanović, J., Saqr, M., Joksimović, S., & Gašević, D. (2021). Students matter the most in learning analytics: The effects of internal and instructional conditions in predicting academic success. Computers & Education , 172 , 104251. https://doi.org/10.1016/j.compedu.2021.104251

Kaliisa, R., Jivet, I., & Prinsloo, P. (2023). A checklist to guide the planning, designing, implementation, and evaluation of learning analytics dashboards. International Journal of Educational Technology in Higher Education , 20 (1), 28. https://doi.org/10.1186/s41239-023-00394-6

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice , 41 (4), 212–218. https://doi.org/10.1207/s15430421tip4104_2

Li, Y., Rakovic, M., Poh, B. X., Gaševic, D., & Chen, G. (2022). Automatic classification of learning objectives based on Bloom’s taxonomy. International Educational Data Mining Society. https://eric.ed.gov/?id=ED624058

Liu, H., Lafferty, J., & Wasserman, L. (2009). The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs. Journal of Machine Learning Research , 10 (80), 2295–2328. http://jmlr.org/papers/v10/liu09a.html


Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics with Learning Design. American Behavioral Scientist , 57 (10), 1439–1459. https://doi.org/10.1177/0002764213479367

Lüdecke, D., Ben-Shachar, M. S., Patil, I., Waggoner, P., & Makowski, D. (2021). Performance: An R Package for Assessment, comparison and testing of statistical models. Journal of Open Source Software , 6 (60), 3139. https://doi.org/10.21105/joss.03139

Maltese, V. (2018). Digital Transformation Challenges for Universities: Ensuring Information Consistency Across Digital Services. Cataloging & Classification Quarterly , 56(7), 592–606. https://doi.org/10.1080/01639374.2018.1504847 .

Meda, L., & Swart, A. J. (2018). Analysing learning outcomes in an Electrical Engineering curriculum using illustrative verbs derived from Bloom’s taxonomy. European Journal of Engineering Education , 43 (3), 399–412. https://doi.org/10.1080/03043797.2017.1378169

Méndez, G., Ochoa, X., & Chiluiza, K. (2014). Techniques for data-driven curriculum analysis. Proceedings of the Fourth International Conference on Learning Analytics and Knowledge , 148-157 . https://doi.org/10.1145/2567574.2567591

Pardos, Z. A., & Nam, A. J. H. (2020). A university map of course knowledge. PLOS ONE , 15 (9), e0233207. https://doi.org/10.1371/journal.pone.0233207

Pieper, S. L., Fulcher, K. H., Sundre, D. L., & Erwin, T. D. (2008). What Do I Do with the Data Now? Analyzing Assessment Information for Accountability and Improvement. Research & Practice in Assessment, 3, 4–10. https://eric.ed.gov/?id=EJ1062741.

Rogaten, J., Clow, D., Edwards, C., Gaved, M., & Rienties, B. (2020). Are Assessment Practices Well Aligned Over Time? A Big Data Exploration. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining University Assessment in a Digital World (pp. 147–164). Springer International Publishing. https://doi.org/10.1007/978-3-030-41956-1_11

Saqr, M., & López-Pernas, S. (2023). The temporal dynamics of online problem-based learning: Why and when sequence matters. International Journal of Computer-Supported Collaborative Learning , 18 (1), 11–37. https://doi.org/10.1007/s11412-023-09385-1

Schwendimann, B. A., Rodríguez-Triana, M. J., Vozniuk, A., Prieto, L. P., Boroujeni, M. S., Holzer, A., Gillet, D., & Dillenbourg, P. (2017). Perceiving learning at a glance: A systematic literature review of learning Dashboard Research. IEEE Transactions on Learning Technologies , 10 (1), 30–41. https://doi.org/10.1109/TLT.2016.2599522

Sclater, N. (2018). Curriculum analytics: Report from Jisc LA Cymru workshop – Data analytics. Retrieved from: https://analytics.jiscinvolve.org/wp/2018/12/17/curriculum-analytics-report-from-jisc-la-cymru-workshop/ Accessed January 30, 2023.


Song, R., Liu, Z., Chen, X., An, H., Zhang, Z., Wang, X., & Xu, H. (2023). Label prompt for multi-label text classification. Applied Intelligence , 53 (8), 8761–8775. https://doi.org/10.1007/s10489-022-03896-4

Suleman, F. (2018). The employability skills of higher education graduates: Insights into conceptual frameworks and methodological options. Higher Education , 76 (2), 263–278. https://doi.org/10.1007/s10734-017-0207-0

Tight, M. (2023). The curriculum in higher education research: A review of the research literature. Innovations in Education and Teaching International , 0 (0), 1–14. https://doi.org/10.1080/14703297.2023.2166560

Varouchas, E., Sicilia, M. A., & Sánchez-Alonso, S. (2018). Towards an integrated learning analytics framework for quality perceptions in higher education: A 3-tier content, process, engagement model for key performance indicators. Behaviour & Information Technology , 37 (10–11), 1129–1141. https://doi.org/10.1080/0144929X.2018.1495765


No funds, grants, or other support was received for conducting this study and preparing the manuscript.

Author information

Authors and affiliations.

Faculty of Organizational Sciences, University of Belgrade, Jove Ilica 154, Belgrade, 11000, Serbia

Jelena Jovanović

Centre for Change and Complexity in Learning, University of South Australia, Adelaide, Australia

Andrew Zamecnik, Abhinava Barthakur & Shane Dawson


Contributions

All authors contributed to the study conception and method. Material preparation, data collection, and analysis were performed by Jelena Jovanović and Andrew Zamecnik. The first draft of the manuscript was written by Jelena Jovanović, and all authors commented on all versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jelena Jovanović .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Jovanović, J., Zamecnik, A., Barthakur, A. et al. Curriculum analytics: Exploring assessment objectives, types, and grades in a study program. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-13015-0

Download citation

Received : 31 January 2024

Accepted : 22 August 2024

Published : 06 September 2024

DOI : https://doi.org/10.1007/s10639-024-13015-0


  • Curriculum analytics
  • Educational data analysis
  • Assessment-to-learning-objectives alignment
  • Multidimensional learning objectives representation
  • Assessment clustering
  • Higher education
  • Scalable quality assurance

Planning Tank

What is Coursework? | Definition, Meaning & keypoints!

What is coursework?

Coursework is practical work or study done by a student in partial fulfilment of a degree or training. Projects, fieldwork, design studies, long essays, and similar tasks constitute coursework. The nature of the work to be carried out depends on the course. It is largely part of the learning exercise and a step towards preparing you to handle the required work and tasks effectively and efficiently.

Written or practical work done by a student during a course of study, usually assessed in order to count towards a final mark or grade.

Sections covered in the original article include:

  • Who assigns coursework and why
  • Major types of coursework and how to go about them
  • Coursework for academic topics which require writing
  • What makes good and effective content
  • Coursework requiring you to make something, such as a model, sculpture, or artwork
  • Key points to keep in mind while working on coursework

Doctorates are the highest degrees conferred by universities. An online or on-campus doctorate can lead to a high-level position in a number of different fields, from business administration to health care to quality control. The lengthy road to earning a doctorate can be shortened by at least several months through online study.

Admission to doctoral programs requires completion of an undergraduate degree program and typically, but not always, a master’s degree program. Students earning a doctorate must take a specified number of advanced graduate-level courses, requiring at least two or three years of study beyond the master’s degree. Upon passing written or oral examinations, or a combination of both, doctoral students are granted the status of doctoral candidates. They must then research and write a dissertation on an original topic and satisfactorily defend it before a committee of professors in the field.


COMMENTS

  1. Coursework versus examinations in end-of-module assessment: a

    Assessment by coursework appears to attenuate the negative effect of class size on student attainment. The difference between coursework marks and examination marks tends to be greater in some disciplines than others, but it appears to be similar in men and women and in students from different ethnic groups. Collusion, plagiarism and ...

  2. Assessing Student Learning

    Integrating assessment with other course elements. Then the remainder of the course design process can be completed. In both integrated (Fink 2013) and backward course design models (Wiggins & McTighe 2005), the primary assessment methods, once chosen, become the basis for other smaller reading and skill-building assignments as well as daily ...

  3. Course Assessment

    The course assessment cycle, illustrated above, helps instructors identify areas in which students excel in the current course design, and others in which they may struggle. This allows the instructor to reallocate time from easier skills or topics to more challenging ones, and to design activities that guide and support students' learning ...

  4. Designing Assessments

    Evaluate course and teaching effectiveness. While all aspects of course design are important, your choice of assessment influences what your students will primarily focus on. For example, if you assign students to watch videos but do not assess understanding or knowledge of the videos, students may be more likely to skip the task.

  5. Rethinking assessments: Combining exams and coursework

    Rethinking educational assessments: the matrimony of exams and coursework. It's 2019, and high time we rethink educational assessments. Standardised tests have been cemented in education systems across the globe, but whether or not they are a better assessment of students' ability compared to coursework still divides ...

  6. Design and Grade Course Work

    This resource provides a brief introduction to designing your course assessment strategy based on your course learning objectives and introduces best practices in assessment design. It also addresses important issues in grading, including strategies to curb cheating and grading methods that reduce implicit bias and ...

  7. Quick Guide: Approaches to Evaluating Student Coursework for

    This quick guide was prepared by the WSU Office of Assessment for Curricular Effectiveness (ACE) and is intended to help WSU programs and faculty consider approaches to evaluating student coursework as part of program assessment. ACE is also available to collaborate with WSU undergraduate degree programs on utilizing student coursework to ...

  8. Learning Through Coursework (Arts and English)

    How to present assessment criteria meaningfully to learners and involve them in the self- and peer-assessment processes. Teachers mark coursework; Cambridge moderates teachers' judgements to ensure consistency of standards. Using sentences or phrases from assessment criteria as the focus for learning in a staged, developmental way.

  9. Different types of assessments and their effect on students

    Coursework, online quizzes and online exams were used to assess students' learning in this academic year. A student workload model was used to predict the effect of assessments and ensure students ...

  10. Course Assessment Practices and Student Learning Strategies in Online

    To begin with, the results of this study allow a picture to be drawn of typical assessment practices in online courses at Colorado community colleges. In brief, a typical course would consist of 29 assignments and use five different assessment methods. Assignments would be due in at least 10 of the 15 weeks.

  11. Coursework vs Exams: What's Easier? (Pros and Cons)

    This work makes up a student's coursework and contributes to their final grade. In comparison, exams often only take place at the end of the year. Therefore, students are only assessed at one point in the year instead of throughout. All of a student's work then leads up to them answering a number of exams which make up their grade.

  12. Course Evaluations and End-term Student Feedback

    The end-term student feedback survey, often referred to as the "course evaluations", opens in the last week of instruction each quarter for two weeks. Course evaluations are anonymous and run online. Results are delivered to instructors after final grades are posted. The minimum course enrollment for evaluations is three students.

  13. Best Practices and Sample Questions for Course Evaluation Surveys

    One of the most common course assessment methods is the course evaluation survey. The following best practices are intended to guide departments and programs in creating or revising course evaluation questions. Clearly state the purpose at the top of the course evaluation. Meaningful input from students is essential for improving courses. Obtaining student feedback on…

  14. What is a Coursework Assessment?

    Course assessment tests are written and evaluate aspects of a student's entire course of study. They assess the topics you cover throughout your course. For example, if you are a student who wishes to learn about the American government, your assessment might cover how you learned about Congress, the executive branch, and government policy.

  15. Coursework and examinations

    Assessment by coursework. Advice and tips on submitting coursework: avoid a last-minute rush; at the start of your course, check all submission deadlines in your course handbook and plan ahead. Our Online support site explains how to submit coursework online. This must be used where required for a course.

  16. Coursework Guidelines Booklet

    It is recommended that assessment takes place at least three times during the course so that records of candidates' progress are available. This allows for unforeseen circumstances, such as candidate ill health, which could prevent a final assessment taking place.

  17. Coursework

    Coursework was removed from UK GCSE courses and replaced by "Controlled Assessment", much of which must be completed under exam conditions, without teacher assistance and with access to resources tightly controlled in order to reduce the possibility of cheating. However, this too has been largely removed and replaced by mainly exam-based assessment as part of a general GCSE reform.