Artemis offers three different modes of exercise assessment:
Manual: Reviewers must manually grade the submissions of students.
Automatic: Artemis automatically grades the submissions of students.
Semi-Automatic: Artemis provides an automatic starting point for the reviewers to manually improve the grading afterward.
Manual assessment refers to evaluating or grading student assignments, typically performed by a reviewer rather than automatically. It is available for, and tailored to, all exercise types except quiz exercises.
Manual assessment in Artemis involves the following steps:
Submission: Students submit their assignments through Artemis.
Review: Reviewers access the submitted work and put a lock on the submission, preventing the inconsistency and ambiguity that concurrent evaluations by other reviewers could cause. They then review it carefully, considering the objectives, requirements, and criteria established for the assessment.
Evaluation: Based on their assessment, reviewers assign scores, provide feedback, or grade the student’s work, keeping the grading criteria in mind to ensure consistency and fairness. In the end, reviewers either submit or cancel their evaluation; both actions release the lock so the submission becomes available to other reviewers.
Student feedback: Students rate the quality of the feedback they received, motivating reviewers to provide high-quality feedback that improves understanding and prevents misconceptions.
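The locking step above can be sketched as a small model. This is an illustrative simplification with invented names (Submission, AssessmentService), not Artemis’s actual implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Submission:
    student: str                       # pseudonymized in a double-blind setting
    locked_by: Optional[str] = None    # reviewer currently holding the lock
    feedback: list = field(default_factory=list)
    score: Optional[float] = None

class AssessmentService:
    def lock(self, submission: Submission, reviewer: str) -> bool:
        # A reviewer may only assess an unlocked submission; the lock
        # prevents conflicting evaluations by other reviewers.
        if submission.locked_by is not None:
            return False
        submission.locked_by = reviewer
        return True

    def submit_assessment(self, submission: Submission, reviewer: str,
                          feedback: list, score: float) -> None:
        assert submission.locked_by == reviewer, "reviewer does not hold the lock"
        submission.feedback = feedback
        submission.score = score
        submission.locked_by = None    # submitting releases the lock

    def cancel_assessment(self, submission: Submission, reviewer: str) -> None:
        # Cancelling frees the lock so another reviewer can take over.
        if submission.locked_by == reviewer:
            submission.locked_by = None
```

Both submitting and cancelling release the lock, matching the evaluation step described above.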
To keep track of manual assessments, Artemis offers an assessment dashboard. It represents the assessment progress of each exercise by showing the state of the exercise, the total number of submissions, the number of submissions that have been assessed, and the number of complaints and feedback requests. It also shows the average rating the students have given each exercise.
Each exercise also has its own assessment dashboard that shows all of this information for a single exercise.
To ensure consistency, fairness, and transparency, as well as to simplify the grading process, Artemis provides structured grading instructions (comparable to grading rubrics) that can be dragged and dropped, making it easier and faster to provide feedback. They include predefined feedback and points so that different reviewers can follow the same criteria when assessing student work. Additionally, they provide transparency to students, allowing them to understand how reviewers evaluate their submissions.
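A structured grading instruction can be thought of as a rubric entry with predefined feedback and points. The following sketch uses invented field names purely to illustrate the idea; it is not how Artemis stores grading instructions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GradingInstruction:
    criterion: str     # which grading criterion this entry belongs to
    feedback: str      # predefined feedback text the reviewer drags onto a submission
    points: float      # may be negative for deductions

def apply_instructions(applied):
    # Total score contribution from the dragged-and-dropped instructions.
    return sum(gi.points for gi in applied)
```

Because every reviewer draws from the same predefined entries, the same mistake earns the same feedback and the same points regardless of who assesses it.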
Instructors can use the assessment training process to make grading more consistent. An integrated training process for reviewers, based on example submissions and example assessments, ensures that reviewers have enough knowledge to assess submissions and provide feedback properly. Instructors define a series of example submissions and assessments that the reviewers must first work through.
The integrated training process is as follows:
The instructor creates an example submission for the exercise and selects the desired assessment mode, which defines how a reviewer has to confirm that the example was understood. Depending on the assessment mode, the reviewer can either read and confirm (“Read and Confirm”) or must assess the example submission correctly (“Assess Correctly”).
The instructor adds an example assessment to the aforementioned example submission. It is also possible to import actual student submissions as example submissions to make the training more realistic and to reduce the effort of coming up with new example submissions.
The reviewer sees the status of an exercise during the whole training process.
The reviewer reads the grading instructions (problem statement, grading criteria, and example solution) and confirms that they have understood them.
As soon as the reviewer starts participating in the exercise, they can start reading the example submissions and assessments provided by the instructor if the assessment mode is “Read and Confirm”. If the assessment mode is “Assess Correctly”, Artemis compares the reviewer’s assessment with the one provided by the instructor; if it does not match, Artemis gives feedback on why the assessment should be different.
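The comparison in the “Assess Correctly” mode can be sketched as matching the reviewer’s points per criterion against the instructor’s reference. Function and parameter names here are assumptions for illustration, not Artemis internals:

```python
def check_example_assessment(reviewer_points, reference_points, tolerance=0.0):
    # Both arguments map a grading criterion to the points awarded.
    # Returns a list of (criterion, reviewer_value, expected_value) mismatches;
    # an empty list means the training example was assessed correctly.
    mismatches = []
    for criterion, expected in reference_points.items():
        got = reviewer_points.get(criterion)
        if got is None or abs(got - expected) > tolerance:
            mismatches.append((criterion, got, expected))
    return mismatches
```

Each mismatch pairs the reviewer’s value with the instructor’s expected value, which is the information needed to explain why the assessment should be different.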
Manual assessment begins after the due date for an exercise has passed for all students and is double-blind: reviewers do not know the names of the students they assess, and students do not know the identity of the reviewers. Double-blind grading aims to minimize bias and increase the objectivity of the assessment by ensuring that neither party’s expectations or biases can influence the results.
After receiving a grade, students can complain about an exercise assessment if the instructor has enabled this option, the complaint deadline has not yet passed, and the students think the evaluation needs to be revised. The instructor can set a maximum number of allowed complaints per course; each complaint consumes one of these so-called tokens. The token is returned to the student if the reviewer accepts the complaint, so a student can submit as many complaints as they want, as long as they are accepted.
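The token accounting works like a refundable budget. The following is a minimal sketch of that idea, assuming a per-course maximum set by the instructor; names are illustrative, not Artemis code:

```python
class ComplaintTokens:
    def __init__(self, max_complaints: int):
        # Instructor-defined maximum number of complaints per course.
        self.tokens = max_complaints

    def file_complaint(self) -> bool:
        # Filing a complaint consumes one token; no tokens, no complaint.
        if self.tokens == 0:
            return False
        self.tokens -= 1
        return True

    def resolve(self, accepted: bool) -> None:
        # An accepted complaint returns the token, so a student can keep
        # complaining indefinitely as long as each complaint is accepted.
        if accepted:
            self.tokens += 1
```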
The complaint process is as follows:
The student opens the related exercise, interacts with the “Complain” button below the exercise instructions, and writes additional text before submitting a complaint to justify the reevaluation.
The reviewer interacts with the “Assessment Dashboard” button of the desired course, which displays the table for all the course exercises.
By interacting with the respective “Exercise Dashboard” button, the reviewer opens the exercise-specific dashboard and assesses students’ submissions. Upon starting an evaluation, the reviewer puts a lock on the submission that expires automatically after 24 hours and can also be released manually.
The reviewer decides on the student’s complaint for each submission. In case of a justification, the reviewer adds feedback blocks and interacts with the “Accept complaint” button. Feedback points can be both negative and positive. Otherwise, the reviewer explains why the complaint was rejected and interacts with the “Reject complaint” button. If the reviewer cannot decide between accepting and rejecting, it is possible to remove the lock so that another reviewer can evaluate the complaint.
The student can rate the quality of the feedback.
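The 24-hour lock used during complaint evaluation can be modeled as a lease with a time-to-live. This is an illustrative sketch under assumed names, not the platform’s real locking code:

```python
import time
from typing import Optional

LOCK_TTL_SECONDS = 24 * 60 * 60  # lock expires automatically after 24 hours

class ComplaintLock:
    def __init__(self):
        self.holder: Optional[str] = None
        self.acquired_at = 0.0

    def acquire(self, reviewer: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        expired = (self.holder is not None
                   and now - self.acquired_at >= LOCK_TTL_SECONDS)
        if self.holder is None or expired:
            self.holder, self.acquired_at = reviewer, now
            return True
        return False

    def release(self, reviewer: str) -> None:
        # Manual unlock, e.g. when a reviewer cannot decide and hands
        # the complaint over to another reviewer.
        if self.holder == reviewer:
            self.holder = None
```

The automatic expiry guarantees that an abandoned evaluation never blocks a submission for longer than a day.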
Another possibility after receiving an assessment is the More Feedback Request. Unlike a complaint, it does not cost a token, but the reviewer cannot change the score after a feedback request.
For the reviewers, the process is identical to the complaint process.
Sending a More Feedback Request removes the option to complain about the assessment entirely. The score cannot be changed even if the reviewer made a mistake during the first assessment and acknowledges this during the More Feedback Request.
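The mutual exclusion described above — requesting more feedback disables complaints and freezes the score — can be sketched as a small state model. This is an illustration with invented names, not Artemis code:

```python
class AssessmentResult:
    def __init__(self, score: float):
        self.score = score
        self.feedback_requested = False

    def request_more_feedback(self) -> None:
        # Costs no token, but is irreversible.
        self.feedback_requested = True

    def can_complain(self) -> bool:
        # A More Feedback Request removes the option to complain entirely.
        return not self.feedback_requested

    def update_score(self, new_score: float) -> None:
        # The score is frozen once more feedback has been requested,
        # even if the reviewer acknowledges a mistake.
        if self.feedback_requested:
            raise ValueError("score is frozen after a More Feedback Request")
        self.score = new_score
```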
Artemis also offers a way for instructors to monitor the reviewers’ assessments based on the students’ feedback on reviewer evaluation. The first part of this is the grading leaderboard, which is visible to all reviewers.
The leaderboard shows the number of assessments each reviewer has done and the number of feedback requests and accepted complaints regarding their assessments. It also shows the average score the reviewer has given and the average rating they received for their assessments. This helps track and display the performance and rankings of the reviewers who assess and provide feedback on student submissions. Additionally, Artemis automatically flags “Issues with reviewer performance” when a reviewer deviates significantly from the average.
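One way such leaderboard rows and a deviation flag could be computed is sketched below. The 15-point threshold and all names are invented for illustration; the source does not specify how Artemis detects deviations:

```python
from statistics import mean

def leaderboard_row(name, scores_given, ratings_received,
                    accepted_complaints, feedback_requests):
    # One row of the grading leaderboard for a single reviewer.
    return {
        "reviewer": name,
        "assessments": len(scores_given),
        "accepted_complaints": accepted_complaints,
        "feedback_requests": feedback_requests,
        "avg_score_given": mean(scores_given),
        "avg_rating": mean(ratings_received) if ratings_received else None,
    }

def flag_outliers(rows, threshold=15.0):
    # Flag reviewers whose average given score deviates strongly from
    # the overall average across all reviewers.
    overall = mean(r["avg_score_given"] for r in rows)
    return [r["reviewer"] for r in rows
            if abs(r["avg_score_given"] - overall) > threshold]
```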
Automatic assessment is available for programming and quiz exercises. For quiz exercises, this is the only mode of assessment available. Artemis automatically grades students’ submissions after the quiz due date has passed. See the section about Quiz exercise for more information about this.
For programming exercises, this is done via instructor-written test cases that are run for each submission either during or after the due date. See the section about Programming Exercise for detailed information about this. Instructors can enable complaints for automatically graded programming exercises.
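The core idea of test-case-based grading is that each instructor test carries a weight, and the score is the weighted fraction of passing tests. The sketch below illustrates that idea with assumed names; Artemis’s actual grading configuration is more elaborate:

```python
def grade(submission_results, test_weights):
    # submission_results: {test_name: passed?} from one test run
    # test_weights: instructor-defined {test_name: weight}
    total = sum(test_weights.values())
    earned = sum(weight for name, weight in test_weights.items()
                 if submission_results.get(name, False))
    # Score as a percentage of the total achievable weight.
    return round(100 * earned / total, 1)
```

A test that is missing from the results (e.g. because the build failed before it ran) counts as not passed.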