Teacher Manual

OpenSubmit was invented to make assignments more fun for the students and less work for the teachers. Before you dive into the details, we recommend getting familiar with the general principles we follow.

Student tutors, course owners and administrators (see also Permissions) all operate in the teacher backend, which is reachable by a link at the bottom of the student dashboard page, or directly via <your OpenSubmit url>/teacher.

Managing study programs

Location: Teacher backend - System - Actions - Manage study programs

This function is only available to users with the appropriate permissions.

Students register in OpenSubmit by themselves, simply by using one of the configured authentication methods (see Authentication methods). After the first login, they are asked to complete their user details (see also Setting your user details).

One part of the user details dialogue is the choice of the study program, e.g. computer science (Bachelor) or Greek philosophy (Master). When more than one study program is configured in OpenSubmit, this choice becomes mandatory. If only a single study program or none is configured, the students are not forced to make that choice.

The study program is shown in the User management and the Grading table. It has no further impact on the operation of OpenSubmit, but can help with the grading of mixed courses.

Managing courses

Location: Teacher backend - System - Actions - Manage courses

This function is only available to users with the appropriate permissions.

Assignments for students belong to a course. The registered students can choose (see also Choosing your courses) which course they participate in. This is different from many other learning management systems, which offer dedicated course permission systems (see also The Zen of OpenSubmit).

Course creation

Location: Teacher backend - System - Actions - Manage courses - Add course

The following settings must be configured for a new course:

Title
The title of the course.
Course owner
A user who automatically gets course owner permissions for this course. Their email address is used as the sender in student notifications.
Tutors
A set of users that get student tutor permissions for this course.
Course description link
A URL for the course home page. Used in the student dashboard.
Active
This flag decides whether any assignments from this course are shown to the students, regardless of their deadlines. It allows you to put courses into an ‘archive’ mode after the term is over.
LTI key / LTI passphrase

OpenSubmit supports the LTI protocol, so that you can integrate it into other learning management systems (LMSs) such as Moodle.

The LMS needs a consumer key and a shared secret (passphrase) that you configure separately for each OpenSubmit course. This ensures that the system automatically knows which course the external LMS user is interested in. Such users don’t need to perform any authentication; OpenSubmit blindly trusts the identity information forwarded by the LMS. If a user with the same email address already exists, the LMS identity is added to their social login credentials.

Using LTI authentication can lead to duplicate accounts. You can merge users to fix that.

The machine-readable configuration for LTI, which can be parsed by some LMS systems, is available under the relative URL /lti/config/.

Grading scheme creation

Location: Teacher backend - System - Actions - Manage grading schemes

Before you can create assignments for students, you must think about the grading scheme. A grading scheme is an arbitrary collection of gradings, where each grading means either ‘pass’ or ‘fail’.

Grading schemes can later be used in the creation of assignments.

Assignment creation

Location: Teacher backend - Course - Actions - Manage assignments - Add assignment

With an existing course and an appropriate grading scheme, you can now create a new assignment:

Title (mandatory)
The title of the assignment.
Course (mandatory)
The course this assignment belongs to.
Grading scheme (optional)
The grading scheme for this assignment. If you don’t choose a grading scheme, then this assignment is treated as ungraded, which is also indicated on the student dashboard. Ungraded assignments are still validated.
Max authors (mandatory)
For single-user submissions, set this to one. When you choose a larger value, the students get the possibility to define their co-authors when submitting a solution.
Student file upload (mandatory)
If students should upload a single file as solution, enable this flag. Otherwise, they can only enter submission notes. Students typically submit archives (ZIP / TGZ) or PDF files, but the system puts no restrictions on this.
Description (mandatory)
The assignment description is linked on the student dashboard. It can either be configured as a link, e.g. when you host it yourself, or be uploaded to OpenSubmit.
Publish at (mandatory)
The point in time where the assignment becomes visible for students. Users with teacher backend access rights always see the assignment in their student dashboard, so that they can test the validation before the official start.
Soft deadline (optional)

The deadline shown to the students. After this point in time, submission is still possible, although the remaining time counter on the student dashboard shows zero.

If you leave that value empty, then the hard deadline becomes the soft deadline, too.

The separation between hard and soft deadline is intended for the typical latecomers who try to submit their solution shortly after the deadline. Broken internet, time zone difficulties, dogs eating the homework … we all know the excuses.

Hard deadline (optional)

The deadline after which submissions for this assignment are no longer possible.

If you leave that value empty, then submissions are possible as long as the course is active.

Validation test (optional)
The uploaded validation test is executed automatically for each student submission and can lead to different subsequent states for the submission. Students are informed about this state change by email. The test is executed before the hard deadline. It is intended to help the students write a valid solution.
Download of validation test (optional)
This flag defines whether the students get a link to the validation test. This makes programming much easier for the students, since they can check locally whether their uploaded code would pass the validation checks.
Full test (optional)
The uploaded full test is executed automatically for each student submission and can lead to different subsequent states for the submission. Students are not informed about this test. The test is executed after the hard deadline. It is intended to support the teachers in their grading with additional information.
Support files (optional)
A set of files that you want to have in the same directory when the validation test or the full test is running.
Test machines (mandatory in some cases)
When you configure a validation test or full test, you need to specify the test machines that run it. When choosing multiple machines, the testing load is distributed among them.

Managing submissions

A submission is a single (archive) file + notes handed in by a student. Every submission belongs to a particular assignment and its according course in OpenSubmit.

A student submission can be in different states. Each of these states is represented in a different way in the student frontend and the teacher backend:

    # State description in teacher backend
    STATES = (

        # The submission is currently uploaded,
        # some internal processing still takes place.
        (RECEIVED, 'Received'),

        # The submission was withdrawn by the student
        # before the deadline. No further automated action
        # will take place with this submission.
        (WITHDRAWN, 'Withdrawn'),

        # The submission is completely uploaded.
        # If code validation is configured, the state will
        # directly change to TEST_VALIDITY_PENDING.
        (SUBMITTED, 'Submitted'),

        # The submission is waiting to be validated with the
        # validation script on one of the test machines.
        # The submission remains in this state until some
        # validation result was sent from the test machines.
        (TEST_VALIDITY_PENDING, 'Validity test pending'),

        # The validation of the student sources on the
        # test machine failed. No further automated action will
        # take place with this submission.
        # The students get informed by email.
        (TEST_VALIDITY_FAILED, 'Validity test failed'),

        # The submission is waiting to be checked with the
        # full test script on one of the test machines.
        # The submission remains in this state until
        # some result was sent from the test machines.
        (TEST_FULL_PENDING, 'Full test pending'),

        # The (compilation and) validation of the student
        # sources on the test machine worked, only the full test
        # failed. No further automated action will take place with
        # this submission.
        (TEST_FULL_FAILED, 'All but full test passed, grading pending'),

        # The compilation (if configured) and the validation and
        #  the full test (if configured) of the submission were
        # successful. No further automated action will take
        # place with this submission.
        (SUBMITTED_TESTED, 'All tests passed, grading pending'),

        # Some grading took place in the teacher backend,
        # and the submission was explicitly marked with
        # 'grading not finished'. This allows correctors to have
        # multiple runs over the submissions and see which
        # of the submissions were already investigated.
        (GRADING_IN_PROGRESS, 'Grading not finished'),

        # Some grading took place in the teacher backend,
        # and the submission was explicitly marked with
        # 'grading finished'. Submissions in this state
        # can be closed in a bulk action, which triggers
        # the student notification.
        (GRADED, 'Grading finished'),

        # The submission is closed, meaning that in the
        # teacher backend, the submission was marked
        # as closed to trigger the student notification
        # for their final assignment grades.
        # Students are notified by email.
        (CLOSED, 'Closed, student notified'),

        # The submission is closed, but marked for
        # another full test run.
        # This is typically used to have some post-assignment
        # analysis of student submissions
        # by the help of full test scripts.
        # Students never get any notification about this state.
        (CLOSED_TEST_FULL_PENDING, 'Closed, full test pending')
    )

    # State description in student dashboard
    STUDENT_STATES = (
        (RECEIVED, 'Received'),
        (WITHDRAWN, 'Withdrawn'),
        (SUBMITTED, 'Waiting for grading'),
        (TEST_VALIDITY_PENDING, 'Waiting for validation test'),
        (TEST_VALIDITY_FAILED, 'Validation failed'),
        (TEST_FULL_PENDING, 'Waiting for grading'),
        (TEST_FULL_FAILED, 'Waiting for grading'),
        (SUBMITTED_TESTED, 'Waiting for grading'),
        (GRADING_IN_PROGRESS, 'Waiting for grading'),
        (GRADED, 'Waiting for grading'),
        (CLOSED, 'Done'),
        (CLOSED_TEST_FULL_PENDING, 'Done')
    )

Submission grading

Location: Teacher backend - Course - Manage submissions

Location: Teacher backend - Course - Manage assignments - Show submissions

The grading of student submissions always follows the same workflow, regardless of whether you are using the automated testing facilities or not.

Short version:

  • For every submission:
    • Open the submission in the teacher backend.
    • Use the preview function for inspecting uploaded student archives.
    • Check the output from validation test and full test.
    • Optional: Add grading notes and a grading file for the student as feedback.
    • Decide on a grading, based on the provided information.
    • Mark the submission as grading finished if you are done with it.
  • Close and notify all finished submissions as bulk action.

Long version:

On the right side of the submissions overview page, different filtering options are available.

[Screenshot: submissions overview in the teacher backend (_images/ui_backend_submissions.png)]

The most important aspect is the distinction between non-graded, graded and closed submissions:

Non-graded submissions are the ones that were submitted (and successfully validated) before the hard deadline. Your task is to go through these submissions and decide on a particular grading. Once this is done, the grading is marked as completed for this particular submission. This moves it into the graded state.

When all gradings are done, the submissions can be closed. This is the point in time when the status changes for the students; before that, no notification is sent. The idea here is to first finish the grading - maybe with multiple people involved - before notifying all students about their results. Only submissions in the graded status can be closed. This is a safeguard against forgetting to finish some grading procedure.

The submission details dialogue shows different information:

[Screenshot: submission details in the teacher backend (_images/ui_backend_submission.png)]

The assignment may allow the students to define co-authors for their submission. You can edit this list manually, for example when students made a mistake during the submission. The corresponding section is hidden by default; click on the Authors tab to see it.

The original submitter of the solution is stated separately. Submitters automatically become authors.

Students can always add notes to their submission. If file upload is disabled for the assignment, this is the only gradable information.

The file upload of the students is available for direct download, simply by clicking on the file name. This is especially relevant when the solution attachment is a text or PDF document. The Preview link opens a separate web page with a preview of the file or, in the case of an archive, of its content.

When testing is activated for this assignment, the corresponding result output is shown in the submission details.

The choice of a grading is offered according to the grading scheme being configured for the particular assignment. The grading notes are shown in the student frontend, together with the grade, when the submission is closed.

The grading file is also offered after closing, and may - for example - contain some explanatory template solution or a manually annotated version of the student file upload.

The radio buttons at the bottom of the page allow you to mark the submission as non-graded or graded.

When all submissions are finally graded, it is time to release the information to the students. To do this, mark all finished submissions on the overview page. This can easily be done by using the filters on the right side and the ‘mark all’ checkbox in the upper left corner. Then choose the action ‘Close graded submissions + send notification’.

Grading table

Location: Teacher backend - Course - Show grading table

If you want to have a course-level overview of all student results so far, use the grading table overview. It is available as action in the Courses section of the teacher backend.

Duplicate report

Location: Teacher backend - Course - Manage assignments - Show duplicates

A common task in assignment correction is the detection of cheating. In OpenSubmit terms, this leads to the question whether different students have submitted identical, or at least very similar, solutions for an assignment.

Checking arbitrary code for similarities is a complex topic by itself and is closely related to the type and amount of code being checked. OpenSubmit follows its general principles here by not restricting the possible submission types just to enable perfect duplicate detection. Instead, we encourage users with specific demands to use such similarity-checking services in their testing scripts.

OpenSubmit provides basic duplicate checking for submitted files, based on weak hashing of the student archives’ content. This method works independently of the kind of data and can, at least, detect the laziest attempts at re-using other people’s work.
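
The following sketch is not OpenSubmit’s actual implementation; it only illustrates the general idea of such weak content hashing. Every file inside a student archive is hashed, and archives that end up with the same set of hashes are grouped as potential duplicates. All names in the sketch are made up for the illustration.

import hashlib
import zipfile

def content_fingerprint(archive_path):
    """Return the set of content hashes, one per file inside a student ZIP archive."""
    hashes = set()
    with zipfile.ZipFile(archive_path) as archive:
        for name in archive.namelist():
            hashes.add(hashlib.md5(archive.read(name)).hexdigest())
    return frozenset(hashes)

def group_duplicates(archive_paths):
    """Group archives whose file contents hash to the same set of values."""
    groups = {}
    for path in archive_paths:
        groups.setdefault(content_fingerprint(path), []).append(path)
    return [paths for paths in groups.values() if len(paths) > 1]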

Based on the hashing results, the duplicate report shows groups of students that may have submitted the same result. This list must be treated as a basic indication for further manual inspection. The report works independently of the course and the status of the submissions. Withdrawn solutions are skipped in the report.

Automated testing of submissions

The automated testing of submissions is performed by a Python 3 script that you, the assignment creator, have to write. This script is executed by OpenSubmit on the configured test machines. You are completely free in what you do in this script - in the end, OpenSubmit just needs an indication of the result. Common tasks, such as code compilation and execution, are supported by helper functions you can use in this script.

You can upload such a script in two ways:

  • As single Python file named validator.py.
  • As ZIP / TGZ archive with an arbitrary name, which must contain a file named validator.py.

The second option allows you to deploy additional files (e.g. profiling tools, libraries, code not written by students) to the test machine. OpenSubmit ensures that all these files are stored in the same directory as the student code and the Python script.
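
As a small, hedged sketch of why this is useful: assume the validator archive ships a hypothetical reference file expected_output.txt next to validator.py, and assume the script runs inside that shared working directory (the helper functions used here are explained in the following sections). The validator could then compare the student program’s output against the bundled file:

def validate(job):
    # Build and run the student submission.
    job.run_make(mandatory=True)
    exit_code, output = job.run_program('./hello')

    # 'expected_output.txt' is a hypothetical support file shipped inside
    # the validator archive, so it sits next to the student files.
    with open('expected_output.txt') as reference:
        expected = reference.read().strip()

    if output.strip() == expected:
        job.send_pass_result("Your output matches the reference output.")
    else:
        job.send_fail_result("Your output differs from the reference output.")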

How to write a test script

Test scripts are written in Python 3.4 and will be directly called by the OpenSubmit daemon running on test machines.

You can install this daemon, which is also called executor, on your own computer easily. This gives you an offline development environment for test scripts while you are working on the assignment description.

Similar to the installation of test machines, the following procedure (for Debian / Ubuntu systems) gives you a testing environment:

  • Install Python 3: sudo apt-get install python3 python3-pip

To keep your Python installation clean, we recommend using Virtualenv:

  • Install the Virtualenv tool: sudo pip3 install virtualenv
  • Create a new virtual environment, e.g. in ~/my_env: python3 -m virtualenv ~/my_env
  • Activate it with source ~/my_env/bin/activate
  • Install the OpenSubmit validator library / executor inside: pip3 install opensubmit-exec
  • Develop the validator.py for your assignment.

Examples for test scripts can be found online.

We illustrate the idea with the following walk-through example:

Students get the assignment to create a C program that prints ‘Hello World’ on the terminal. The assignment description demands that they submit the C-file and a Makefile that creates a program called hello. The assignment description also explains that the students have to submit a ZIP archive containing both files.

Your job, as the assignment creator, is now to develop the validator.py file that checks an arbitrary student submission. Create a fresh directory that only contains an example student upload and the validator file:

1  def validate(job):
2      job.run_make(mandatory=True)
3      exit_code, output = job.run_program('./hello')
4      if output.strip() == "Hello World":
5          job.send_pass_result("The world greets you! Everything worked fine!")
6      else:
7          job.send_fail_result("Wrong output: " + output)

The validator.py file must contain a function validate(job) that is called by OpenSubmit when a student submission should be validated. In the example above, this function performs the following steps for testing:

  • Line 1: The validator function is called when all student files (and all files from the validator archive) are unpacked in a temporary working directory on the test machine. In case of name conflicts, the validator files always overwrite the student files.
  • Line 2: The make tool is executed in the working directory with run_make(). This step is declared to be mandatory, so the method will throw an exception if make fails.
  • Line 3: A binary called hello is executed in the working directory with the helper function run_program(). The result is the exit code and the output of the running program.
  • Line 4: The generated output of the student program is checked for some expected text.
  • Line 5: A positive validation result is sent back to the OpenSubmit web application with send_pass_result(). The text is shown to students in their dashboard.
  • Line 6-7: A negative validation result is sent back to the OpenSubmit web application with send_fail_result(). The text is shown to students in their dashboard.

Test scripts are ordinary Python code, so besides the functionality provided by the job object, you can use any Python functionality. The example shows that in Line 4.

If any part of the code leads to an exception that is not caught inside validate(job), this is automatically interpreted as a negative validation result. The OpenSubmit executor code forwards the exception as generic information to the student. If you want to customize the error reporting, catch all potential exceptions and use your own call of send_fail_result() instead.
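
A minimal sketch of such customized error reporting, based on the walk-through example above (the exact wording of the messages is up to you and an assumption here):

def validate(job):
    try:
        job.run_make(mandatory=True)
    except Exception:
        # Because the exception is caught here, the executor does not send
        # its generic error text; we report a tailored hint instead.
        job.send_fail_result("Your Makefile failed. Does it build a program called 'hello'?")
        return
    exit_code, output = job.run_program('./hello')
    if output.strip() == "Hello World":
        job.send_pass_result("The world greets you! Everything worked fine!")
    else:
        job.send_fail_result("Wrong output: " + output)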

To check if the validator is working correctly, you can run the command opensubmit-exec test <directory> in your Virtualenv. It assumes that the given directory contains the validator script or archive and the student submission file or archive. The command simulates a complete validation run on a test machine and prints exhaustive debugging information. The last line contains the feedback sent to the web application after finalization.

Test script examples

The following example shows a validator for a program in C that prints the sum of two integer values. The values are given as command line arguments. If the wrong number of arguments is given, the student code is expected to print “Wrong number of arguments!”. The student only has to submit the C file.

 1  from opensubmitexec.compiler import GCC
 2
 3  test_cases = [
 4      [['1', '2'], '3'],
 5      [['-1', '-2'], '-3'],
 6      [['-2', '2'], '0'],
 7      [['4', '-10'], '-6'],
 8      [['4'], 'Wrong number of arguments!'],
 9      [['1', '1', '1'], 'Wrong number of arguments!']
10  ]
11
12  def validate(job):
13      job.run_compiler(compiler=GCC, inputs=['sum.c'], output='sum')
14      for arguments, expected_output in test_cases:
15          exit_code, output = job.run_program('./sum', arguments)
16          if output.strip() != expected_output:
17              job.send_fail_result("Oops! That went wrong! Input: " + str(arguments) + ", Output: " + output, "Student needs support.")
18              return
19      job.send_pass_result("Good job! Your program worked as expected!", "Student seems to be capable.")

  • Line 1: The GCC tuple constant is predefined by the OpenSubmit library and refers to the well-known GNU C compiler. You can also define your own set of command-line arguments for another compiler.
  • Line 3-10: The variable test_cases consists of the lists of inputs and the corresponding expected outputs.
  • Line 13: The C file can be compiled directly by using run_compiler(). You can specify the used compiler as well as the names of the input and output files.
  • Line 14: The for-loop is used for traversing the test_cases-list. It consists of tuples which are composed of the arguments and the expected output.
  • Line 15: The arguments can be handed over to the program through the second parameter of the run_program() method. The method returns the exit code as well as the output of the program.
  • Line 16: It is checked if the created output equals the expected output.
  • Line 17: If this is not the case, an appropriate negative result is sent to the student and teacher with send_fail_result().
  • Line 18: After a negative result is sent, there is no need to traverse the remaining test cases, so the validate(job) function returns early.
  • Line 19: After the traversal of all test cases, the student and teacher are informed with send_pass_result() that everything went well.

The following example shows a validator for a C program that reads a positive integer from standard input and prints the corresponding binary number.

 1  from opensubmitexec.exceptions import TerminationException
 2
 3  test_cases = [
 4      ['0', '0'],
 5      ['1', '1'],
 6      ['8', '1000'],
 7      ['9', '1001'],
 8      ['15', '1111']
 9  ]
10
11  def validate(job):
12      job.run_build(inputs=['dec_to_bin.c'], output='dec_to_bin')
13      for std_input, expected_output in test_cases:
14          running = job.spawn_program('./dec_to_bin')
15          running.sendline(std_input)
16          try:
17              running.expect(expected_output, timeout=1)
18          except TerminationException:
19              job.send_fail_result("Arrgh, a problem: We expected {0} as output for the input {1}.".format(expected_output, std_input), "wrong output")
20              return
21          else:
22              running.expect_end()
23      job.send_pass_result("Everything worked fine!", "Student seems to be capable.")

  • Line 1: A TerminationException is thrown when a program terminates before delivering the expected output. The exception is needed for checking whether the student program produces the expected output.
  • Line 3-9: In this case the test cases consist of the input strings and the corresponding output strings.
  • Line 12: The method run_build() is a combined call of configure, make and the compiler. The configure and make steps are optional; the default value for the compiler is GCC.
  • Line 13: The test cases are traversed like in the previous example.
  • Line 14: This time a program is spawned with spawn_program(). This allows the interaction with the running program.
  • Line 15: Standard input (i.e. simulated keyboard input) can be provided through the sendline() method of the object returned in line 14.
  • Line 17-20: The validator waits for the expected output with expect(). If the program terminates without producing this output, a TerminationException is thrown.
  • Line 22: After the program has successfully produced the output, it is expected to terminate. The test script waits for this with expect_end().
  • Line 23: When the loop finishes, a positive result is sent to the student and teacher with send_pass_result().

Warning

When using expect(), it is important to explicitly catch a TerminationException and send your own fail report in your validation script. Otherwise, the student is only informed about an unexpected termination without further explanation.

The following example shows a validator for a C program that reads a string from standard input and prints it reversed. The students have to use for-loops for solving the task. Only the C file has to be submitted.

 1  from opensubmitexec.exceptions import TimeoutException
 2  from opensubmitexec.exceptions import TerminationException
 3
 4  test_cases = [
 5      ['hallo', 'ollah'],
 6      ['1', '1'],
 7      ['1234', '4321']
 8  ]
 9
10  def validate(job):
11      file_names = job.grep(r'.*for\s*\(.*;.*;.*\).*')
12      if len(file_names) < 1:
13          job.send_fail_result("You probably did not use a for-loop.", "Student is not able to use a for-loop.")
14          return
15
16      job.run_build(inputs=['reverse.c'], output='reverse')
17      for std_input, expected_output in test_cases:
18          running = job.spawn_program('./reverse')
19          running.sendline(std_input)
20          try:
21              running.expect(expected_output, timeout=1)
22          except TimeoutException:
23              job.send_fail_result("Your output took too long!", "timeout")
24              return
25          except TerminationException:
26              job.send_fail_result("The string was not reversed correctly for the following input: " + std_input, "The student does not seem to be capable.")
27              return
28          else:
29              running.expect_end()
30      job.send_pass_result("Everything worked fine!", "Student seems to be capable.")

  • Line 1: A TimeoutException is thrown when a program does not respond in the given time. The exception is needed for checking if the student program calculates fast enough.
  • Line 2: A TerminationException is thrown when a program terminates before delivering the expected output.
  • Line 4-8: The test cases consist of the input strings and the corresponding reversed output strings.
  • Line 11: The grep() method searches the student files for the given pattern (e.g. a for-loop) and returns a list of the files containing it.
  • Line 12-14: If there are not enough elements in the list, a negative result is sent with send_fail_result() and the validation is ended.
  • Line 16-24: For every test case a new program is spawned with spawn_program(). The test script provides the necessary input with sendline() and waits for the expected output with expect(). If the program calculates for too long, a negative result is sent with send_fail_result().
  • Line 25: If the program terminates without producing the expected output, a TerminationException is raised and caught here.
  • Line 26-27: The corresponding negative result for a different output is sent with send_fail_result() and the validation is cancelled.
  • Line 28-29: If the program produced the expected output, the validator waits with expect_end() until the spawned program ends.
  • Line 30: If every test case was solved correctly, a positive result is sent with send_pass_result().

Developer reference

The Job class summarizes all information about the submission to be validated by the test script. It also offers a set of helper functions that can be directly used by the test script implementation.

Test scripts can interact with a running student program, to send some simulated keyboard input and check the resulting output for expected text patterns.
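
As a quick, hedged reference, the following skeleton combines the helper functions that appear throughout this manual (grep(), run_build(), run_program(), spawn_program(), sendline(), expect(), expect_end(), send_pass_result(), send_fail_result()). File names, patterns and messages are made up for the illustration:

from opensubmitexec.exceptions import TerminationException
from opensubmitexec.exceptions import TimeoutException

def validate(job):
    # Static check on the submitted sources ('main' is a made-up pattern).
    if len(job.grep(r'main')) < 1:
        job.send_fail_result("No main() function found in your submission.", "missing main()")
        return

    # Build the student code and run it once without interaction.
    job.run_build(inputs=['solution.c'], output='solution')
    exit_code, output = job.run_program('./solution')

    # Interact with a second, long-running invocation of the program.
    running = job.spawn_program('./solution')
    running.sendline('42')
    try:
        running.expect('42', timeout=1)
        running.expect_end()
    except (TerminationException, TimeoutException):
        job.send_fail_result("Your program did not echo the given input.", "interaction failed")
        return

    job.send_pass_result("All checks passed.", "Student seems to be capable.")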