Updating Canvas Notification Preferences

This is a technical post. Read on for code and commentary.


In moving online, we've tried to streamline all of our communication through Canvas. The goal is to cut down on disconnected email threads and to encourage students to use submission comments, keeping questions and feedback in context.

The Problem

Many students had already turned off email notifications for most communications in Canvas, preferring not to receive any notices, which meant teacher prompts and revision requests often went unseen. Notifications are a user-level setting, and the Canvas admin panel doesn't provide a way to define a default set of notification levels for users. The API does, however: running as an admin, we were able to write a Python program that uses the as_user_id query param to set notification preferences on each user's behalf.

API Endpoints

  • GET user communication channel IDs: /api/v1/users/:user_id/communication_channels
  • PUT channel preferences for user: /api/v1/users/self/communication_channels/{channel_id}/notification_preferences/{msg_type}

Params

  • Int user_id
  • Int channel_id
  • String frequency

Get User IDs

There is no easy way to programmatically get user IDs at the account or subaccount level without looping through each course and pulling enrollments. Instead, we opted to pull a CSV of all enrollments using the Provisioning report in the Admin panel, configuring separate files with the current term as the filter. The CSV includes teacher, student, and observer roles; the script limits the notification updates to student enrollments.
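The script below only cares about two of those columns: the Canvas user ID in the third position (row[2]) and the role in the fifth (row[4]). A hypothetical pair of rows from that export (your report's column order may differ, so check before running):

canvas_course_id,course_id,canvas_user_id,user_id,role
10242,2020-ENG-101,54321,stu54321,student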

Script Details

The full program is available in a GitHub gist. Here is an annotated look at the core functions.

main handles the overall process in a multi-threaded context. We explicitly define the number of workers in the thread pool because the script would hang without a defined number. Five workers seemed to work consistently and ran 1,500 records (a single subaccount) in about 7 minutes.

The CSV includes every enrollment for each student ID, so we use a set to isolate a unique list of student account IDs (the re.search filter in main below).

To track progress, we wrapped the set in tqdm. This prints a status bar in the terminal while the process runs, showing the number of processed records out of the total. tqdm is not part of the standard library, so it needs to be installed from PyPI (pip install tqdm) before you can import it.

import concurrent.futures
import csv
import re

import requests  # used by the API helpers below
from tqdm import tqdm  # progress bar; install from PyPI first

# Auth header for every request. Generate the token from an admin
# account that is allowed to masquerade.
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}


def main():
    """
    Update Canvas user notification preferences as an admin.
    """
    unique = set()
    with open('your.csv', 'r') as inp:
        for row in csv.reader(inp):
            # Keep only student enrollments, de-duplicated by Canvas user ID
            if re.search("student", row[4]):
                unique.add(int(row[2]))

    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        with tqdm(total=len(unique)) as progress:
            futures = []
            for student in unique:
                future = executor.submit(process_student_id, student)
                # Tick the progress bar as each future finishes
                future.add_done_callback(lambda p: progress.update())
                futures.append(future)
            results = [future.result() for future in futures]

process_student_id is submitted to the executor for each student ID in the set. Canvas breaks communication methods into "channels": email, push, Twitter, etc. Each channel has a unique ID for each user, so we need to look up each user's communication channels and then pass the email channel's ID to a setter function.

def process_student_id(student):
    # Get their communication channel prefs
    channel_id = get_channel_id(student)

    try:
        # Update the channel prefs and return
        update = update_prefs(student, channel_id)
        return update
    except Exception as e:
        print(e)

GET communication_channels

def get_channel_id(student_id):
    url = f"https://yourURL.instructure.com/api/v1/users/{student_id}/communication_channels"
    resp = requests.get(url, headers=headers)

    for channel in resp.json():
        # find the ID of the email channel
        if channel['type'] == 'email':
            return channel['id']
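For context, the communication_channels response is a list of channel objects. Abridged, it looks something like this (a sketch based on the Canvas docs; extra fields omitted):

# A GET to /communication_channels returns a list like this; the script
# only needs the id of the entry whose type is "email".
[
    {"id": 16, "user_id": 54321, "type": "email", "address": "student@example.com", "workflow_state": "active"},
    {"id": 17, "user_id": 54321, "type": "push", "workflow_state": "active"},
]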

PUT communication_channels/:channel_id/notification_preferences/:message_type[frequency]

The communication channel can receive several types of communications. We wanted to set the student notifications to "immediately" for new announcements, submission comments, and conversation messages. You can define other types and frequencies by modifying the types list and frequency value in update_prefs below.

The communication types are not well documented, so we used our own channel preferences to find the notification strings: GET /users/self/communication_channels/:channel_id/notification_preferences.
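If you want to dump those strings for your own account, a minimal sketch (list_notification_types is a hypothetical helper, reusing the headers from above; the Canvas response nests the list under a notification_preferences key):

def list_notification_types(channel_id):
    # Print each notification type and its current frequency for one of
    # your own channels to find the strings worth targeting.
    url = f"https://yourURL.instructure.com/api/v1/users/self/communication_channels/{channel_id}/notification_preferences"
    resp = requests.get(url, headers=headers)
    for pref in resp.json()["notification_preferences"]:
        print(pref["notification"], pref["frequency"])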

The crux of this step is making the request with the as_user_id masquerading query param, which lets an admin act on the student's behalf. Make sure the account that generated the API key can masquerade, or else the script will return an unauthorized error.

def update_prefs(student_id, channel_id):
    # The message types to update and the frequency to set for each
    types = ["new_announcement", "submission_comment", "conversation_message"]
    frequency = "immediately"  # 'immediately', 'daily', 'weekly', 'never'
    responses = []

    for msg_type in types:
        url = f"https://yourURL.instructure.com/api/v1/users/self/communication_channels/{channel_id}/notification_preferences/{msg_type}"
        params = {
            "as_user_id": student_id,  # masquerade as the student
            "notification_preferences[frequency]": frequency,
        }
        resp = requests.put(url, headers=headers, params=params)
        responses.append(resp)

    return responses

Final Thoughts

Updating a user's personal preferences isn't something I was thrilled about doing, but given our current circumstances, it was preferable to continuing to struggle to help students move forward in their coursework. One improvement would be to process each CSV in the file system automatically, cutting down on the time someone has to spend logging in and running the script. Hopefully, this only needs to be done once and does not become a recurring task.

Second, there is an endpoint in the API to update multiple communication preferences at once, but it isn't well documented and I wasn't able to get it working reliably. For just one channel and three specific types of messages, the performance improvements probably would have been negligible (at least that's what I'm telling myself).

Running arbtt on Mac OS

Tom Woodward has a semi-regular weekly roundup blog post with links to interesting things he finds on the Internet. A couple of weeks back, he posted a link to something called arbtt (pronounced ar-bit? that's what I say, anyways), an acronym for "automatic rule-based time tracker." In short, it runs in the background, samples all the open programs on your computer once per minute, and writes that information to a text log.

It's super, super geeky. Like, seriously. I've used todo-txt for almost two years now, and I recently started tracking how long I work on a given task so I can keep better track of where my time goes. But the catch is, I have to remember to turn on the tracker. arbtt just runs in the background. The data is standardized, so you can write different queries and poll for information in very, very granular ways.

It was a real pain to get set up. After two days of fussing on and off, I have it running well on my computer. The documentation for Mac OS is really lacking, so here's what I did for my machine.

(This is fairly technical, so jump to the bottom if you want to see what it does once it's running.)

Install

Installation wasn't too bad for arbtt on Mac. The install instructions on the project page worked fine, particularly because I already had Homebrew installed and set up. I'm not going to rehash those steps here. Go read them there.

Binaries

Getting the thing to run was a different story. arbtt installs itself at the User level on Mac OS in a hidden .arbtt directory. This holds the configuration file and the actual capture.log file.

The actual executables are in the .cabal directory (also under /Users/yourname) because they're managed by the package manager. The documentation says to go into .cabal/bin and run arbtt-capture in order to start capturing data with the daemon.

Well, that didn't work.

The files in .cabal/bin are symlinked to the actual executables, and from what I can gather, Mac OS doesn't like that. At all. So, to run the scripts, you have to call the absolute path to the actual binaries, which are in .cabal/store/ghc-8.4.4/arbtt-0.10.1-*/bin. I don't know enough about package managers to say why, but those binaries are buried. I ended up creating aliases in my terminal so I can use a one-line invocation, as shown below.
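In my shell profile, the aliases look something like this (assuming bash or zsh; the * globs the hashed package directory name, so adjust it to whatever actually exists on your machine):

alias arbtt-capture="$HOME/.cabal/store/ghc-8.4.4/arbtt-0.10.1-*/bin/arbtt-capture"
alias arbtt-stats="$HOME/.cabal/store/ghc-8.4.4/arbtt-0.10.1-*/bin/arbtt-stats"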

categorize.cfg

Because the collection of information is arbitrary, you can collect data without knowing yet what you want to know, which is pretty cool. The syntax for the query rules is Haskell-like, which I don't know, so mine are a little hacky right now. In my playing, there are two main selectors:

  1. Target the current window's program with $program.
  2. Target the current window's title with $title.

You can use standard boolean operators to combine queries to get specific information. Each query pushes to a tag object that contains a Category and identifier in a Category:identifier syntax. A query I'm using to watch my Twitter use is:

current window $program == "Vivaldi" && current window $title =~ [m!TweetDeck!, m!Twitter!] ==> tag Social:twitter

So, it checks for both the browser (I use one called Vivaldi) and the window title before pushing (==>) to the Social:twitter tag. Mine are all really rudimentary right now, but you can sort and filter by date, titles, even directory locations if you're working on your filesystem. Since the underlying data never changes, you can just write new rules that will parse with arbtt-stats (next).
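The same pattern works for anything with a window. For example, a hypothetical rule to bucket terminal time would be:

current window $program == "Terminal" ==> tag Work:terminal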

arbtt-stats

The main capture daemon just runs in the background and collects data. The workhorse is arbtt-stats which parses all of the raw data. You can run arbtt-stats in the command line and get a full report of all matched rules or you can query by specific tag with the -c flag. So, executing arbtt-stats -c Social:twitter would return only the time recorded where Twitter was the focused window.

arbtt-graph (addon)

This all happens in the command line, which isn't super helpful, especially if you have a lot of results like this:

[Image: arbtt-stats raw output]

Filtering down by tag with -c is helpful, but it would also be nice to turn this into something graphical. That's where arbtt-graph comes in. It's a Python-based parser that generates a website based on arbtt stats for each day as well as a four-month aggregate.

[Image: arbtt-graph output]

The biggest problem I had with arbtt-graph was that Python isn't super happy with relative file locations. I had to edit the scripts with absolute paths to read and write all of the necessary documents. It's a fun little helper on my computer, and if I were insane, I might investigate putting it online somewhere with a cron job, but that's for another day.