Tracking Missing IDs with a Google Sheet

Like many schools, our students are asked to wear their student IDs when they come into the building each day. During the day, they don’t need to have them on…just in the morning so we can make sure people coming in are part of our student body. If a student doesn’t have their ID, we issue a temporary one and track how many times they come to school without it. This process used to be done by hand each morning. A queue would form and a teacher would write down ID numbers (they all have them memorized) and names. This caused several problems:

  1. If a student gave a fake ID number, office staff wouldn’t know until much later in the day.

  2. The paper copy of the ID list was given to a secretary to transcribe into a spreadsheet.

  3. Transcribing meant looking up the ID in our SIS and then manually entering the number of times the student didn’t have their ID.

  4. When benchmarks were hit (5, 8, 11), disciplinary action was issued and followed up on by an assistant principal.

I spoke with the head secretary and we worked out a custom Google Sheet to do all of this automatically. Now, the duty station is equipped with a Chromebook so the teacher issuing IDs can quickly check whether a given ID is real and let the sheet handle all of the tally work. This (mega) post outlines the sheet structure and custom code used to do the work.

The template and source are linked at the bottom of the post.

The Sheet

The Google Sheet is split into four tabs:

  1. Raw: Blank sheet with Timestamp, ID, and Name.

  2. Lookup: database of student ID numbers and the associated name/grade.

  3. Aggregate: ID, Name, Grade, Count, Cons 1, Cons 2, Cons 3.

  4. Daily: Filter, ID, Name, Grade.

Raw Input

This is used by the duty station. The date and student ID numbers are entered as students are given temporary bands. The Name column is populated by an ID lookup using =index(lookup!B:B, match(B2,lookup!A:A)). Both VLOOKUP and MATCH can quietly return the wrong row when the match-type argument is omitted, because they then assume the data is sorted in ascending order. That’s why the lookup tab is sorted by ID number and not alphabetically.

Aggregate

The same student can have multiple dates of entry, and this sheet aggregates by student ID. Using UNIQUE, it pulls each ID once into a single column. Then, I used another INDEX/MATCH pair to populate the name and grade. To tally the missing-ID instances, a COUNTIF function counts the number of times the ID number shows up in the raw sheet.
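Assuming the tab layouts above, the first few Aggregate columns boil down to formulas along these lines:

A2: =UNIQUE(raw!B2:B) for the list of IDs

B2: =INDEX(lookup!B:B, MATCH(A2, lookup!A:A, 0)) for the name (grade works the same way from lookup!C:C)

D2: =COUNTIF(raw!B:B, A2) for the count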

Scripting

At this point, much of the work of the sheet was moved over to Apps Script. We needed some dynamic results, and using a script to sort through the volume of information was much faster and more reliable than using regular Sheets functions. I’m breaking the code into chunks to better explain the purpose. In the actual sheet, all functions are in the same file.

Function 1: Globals and menu item

I use a couple of global variables (not super efficiently, though). I also create a custom menu to run the sheet functions. The snippet below takes care of creating those items. I have mine at the top of the script, but they can go anywhere.
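Here’s the general shape; the menu label and function names are placeholders rather than the exact ones in the template:

// Global references to the sheets used throughout the script.
var ss = SpreadsheetApp.getActiveSpreadsheet();
var RAW = ss.getSheetByName('Raw');
var AGGREGATE = ss.getSheetByName('Aggregate');

// Runs automatically when the sheet opens and builds the custom menu.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('ID Tracking')
    .addItem('Add date notes', 'addDateNotes')
    .addItem('Pull daily list', 'pullDailyList')
    .addToUi();
}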

Function 2: Listing dates a student was missing the ID

For reporting, it was helpful to know which dates a student was missing an ID. Rather than adding n columns to fill with dates, a script was used to look up the student ID number and then add a note to the cell with the dates.
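A minimal sketch of the idea, assuming the tab layouts above (the template has the real version):

// For each student on the Aggregate tab, gather the dates their ID shows up
// on the Raw tab and attach the list as a note on the ID cell.
function addDateNotes() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var raw = ss.getSheetByName('Raw').getDataRange().getValues();
  var aggSheet = ss.getSheetByName('Aggregate');
  var agg = aggSheet.getDataRange().getValues();
  var tz = ss.getSpreadsheetTimeZone();

  for (var i = 1; i < agg.length; i++) {
    var dates = [];
    for (var j = 1; j < raw.length; j++) {
      if (raw[j][1] === agg[i][0]) { // column B on Raw holds the ID
        dates.push(Utilities.formatDate(new Date(raw[j][0]), tz, 'MM/dd/yyyy'));
      }
    }
    aggSheet.getRange(i + 1, 1).setNote(dates.join('\n'));
  }
}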

Function 3: Populating a daily list of students

Each day, the secretary checks for students who have hit a benchmark: 5, 8, or 11 missing IDs. Searching through the aggregate list isn’t feasible, so a script does the search and then returns the results dynamically.

This checks two conditions: A) The number of missing IDs is equal to or greater than the target and B) there is no consequence filled in the appropriate column. If the consequence has been assigned, it’s in the SIS and doesn’t need to be entered by the secretary.
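In sketch form, with column positions assuming the Aggregate layout above:

// Return rows for students at or past the target with no consequence recorded.
function findStudentsAtBenchmark(target) {
  // Cons 1, Cons 2, and Cons 3 sit in columns E-G (zero-indexed 4-6).
  var consCol = { 5: 4, 8: 5, 11: 6 }[target];
  var agg = SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName('Aggregate').getDataRange().getValues();
  var matches = [];

  for (var i = 1; i < agg.length; i++) {
    // A) the count meets the target, and B) the consequence cell is empty.
    if (agg[i][3] >= target && agg[i][consCol] === '') {
      matches.push([target, agg[i][0], agg[i][1], agg[i][2]]); // Filter, ID, Name, Grade
    }
  }
  return matches;
}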

Function 4: Assigning consequences

When the office staff pull the daily list, they go into our SIS and update the disciplinary action or other notes. They would still have to go back and document that consequence in the aggregate sheet. Rather than making them scroll through the list, the appropriate consequence for the target is now filled in automatically when the list is generated. The daily list isn’t cleared until the script is run again with a new target.

If, by chance, there are no students to assign a consequence to, a popup is shown to let the user know that no students meet the criteria.
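Sketched out, building on the filter above (the consequence text is a placeholder):

// Write the daily list, mark the consequence on the Aggregate tab, and
// alert the user if nothing matched.
function pullDailyList() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var ui = SpreadsheetApp.getUi();
  var target = Number(ui.prompt('Which benchmark? (5, 8, or 11)').getResponseText());
  var daily = ss.getSheetByName('Daily');
  var aggSheet = ss.getSheetByName('Aggregate');
  var matches = findStudentsAtBenchmark(target);

  if (matches.length === 0) {
    ui.alert('No students meet the criteria for that benchmark.');
    return;
  }

  // Clear the old list before writing the new one.
  daily.getRange(2, 1, daily.getMaxRows() - 1, 4).clearContent();
  daily.getRange(2, 1, matches.length, 4).setValues(matches);

  // Fill in the consequence column (E, F, or G) for each matched student.
  var consCol = { 5: 5, 8: 6, 11: 7 }[target];
  var ids = aggSheet.getRange(2, 1, aggSheet.getLastRow() - 1, 1).getValues();
  matches.forEach(function(row) {
    for (var i = 0; i < ids.length; i++) {
      if (ids[i][0] === row[1]) { // row[1] holds the student ID
        aggSheet.getRange(i + 2, consCol).setValue('Assigned');
      }
    }
  });
}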

The Result

In the end, we’ve removed two steps from the administrative process, but they were the most time-intensive steps. Rather than looking each student up and then marking up a spreadsheet, the staff member simply needs to pull the list of students for that day.

Computers are great at repetitive tasks, so let’s use them to do those tasks. There’s definitely some optimization that can be done, especially in the last two functions as they pass those arrays around like a cold. If you make updates, please comment and share back. The code is hosted on GitHub, so you can fork and update as much as you’d like.

You can look at a copy of the template or just make a copy for yourself and start poking away.

The entire source (not broken up) is hosted on GitHub Gists.

Turning an Old Laptop into a Video Kiosk

My father-in-law came to me with an interesting idea. He wanted to create a video kiosk for our church which would play videos on the different mission organizations we’re involved with. The wall previously had photos and text about each missionary or organization, but he wanted to revamp it.

We initially tried to use PowerPoint and a custom keyboard to jump to different slides, but maintaining and updating that system wasn’t going to be very elegant or user friendly. So, about a year later, I had an “oh, duh” moment and realized we could do it as a static, locally-hosted website. It would be easy to build and even easier to maintain, so that’s what we did.

In this post, I’ll talk about the hardware and software we used to get it finished. There are still some things to hammer out, but the bones of the project are done and tested successfully, so it seems like a good time to document it.

The Hardware

Our initial idea was to use a Raspberry Pi 3 to do everything because of its low price point and small size. Unfortunately, the RPi, in general, doesn’t handle web video too well. I looked into using the kweb minimal web browser, which hooks into the native OMX video player on the Pi, but you can’t customize the look, and playing full-screen video left lots of menus on screen. In the end, it was turning into a huge job to make it look polished, so we moved away from the Pi.

My brother-in-law had an old HP laptop that had died when he tried to update it to Windows 8 (insert snarky Microsoft joke here). So, he donated it to the cause.

I wiped the hard drive and installed Ubuntu Linux 16.04 LTS. It’s pretty lightweight and gets consistent updates. It’s also really user-friendly, so if there’s a problem with the laptop and I can’t make it in, one of the office assistants can troubleshoot. I also chose to stick with Linux because I can use SSH to log in via my Chromebook on Sunday mornings and run updates remotely if I need to.


You could definitely argue that running a full desktop environment for a simple web kiosk slows the machine and introduces a bunch of variables that could cause things to go wrong, which is 100% accurate. OMG! Ubuntu! has a good article on how to either convert a full machine to a dedicated kiosk or how to build one from scratch, but since I didn’t find the article until we were almost done, I decided not to go back and rework everything.

For user interaction, we grabbed an Intuos Art Small tablet from Wacom for $100. It’s seated in a wall mount to lock it in place and hide the wires. Essentially, it’s a giant touchpad.


Finally, we bought a 55″ wall mounted TV. The laptop had an HDMI port, so that took care of high-definition video and audio.

The Software

I built the page with plain HTML and JavaScript. It’s currently hosted locally on the machine to ensure smooth video with no buffering. I’m planning on testing the broadband rates via ethernet the next time I’m at church because we ran into issues over wifi. If I can get a good download rate, I’ll switch the site over to GitHub Pages so I can update it remotely.

Source

The HTML and CSS is pretty standard. I didn’t want a ton of bloat, so I coded everything from scratch. You can take a look at the markup on GitHub. There’s also a live example so you can see how it’s rendered.

First, this is a hallway display. There will probably be times when no one is watching videos, in which case I want to avoid burning an image into the screen of the TV. I added a small jQuery function to bring up a prompt if no one touches the trackpad for 30 seconds. This also turned out to be handy because a lot of people walked up to the TV and tried touching it directly rather than using the trackpad input.
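Something along these lines, with the element ID as a placeholder:

// Show a "come touch the pad" prompt after 30 seconds with no input.
var idleTimer;

function resetIdleTimer() {
  clearTimeout(idleTimer);
  $('#prompt').hide();
  idleTimer = setTimeout(function() {
    $('#prompt').fadeIn();
  }, 30000);
}

// Any trackpad movement or click restarts the countdown.
$(document).on('mousemove click', resetIdleTimer);
resetIdleTimer();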

To play the videos, I wanted each tile to reference a hidden video element. I use jQuery to handle the element selection and plain JavaScript to track the play state. When a user clicks a tile, a fullscreen video starts playing. There is no keyboard for them to quit out of the video, so I don’t worry about keypress events. If they jump out of fullscreen using the playback controls, the video holds its place.
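In sketch form (the tile class and data attribute are placeholders; the real markup is on GitHub):

// Each tile stores the ID of its hidden <video> element in a data attribute.
$('.tile').on('click', function() {
  var video = document.getElementById($(this).data('video'));
  // Chromium still used the prefixed fullscreen call at the time.
  (video.requestFullscreen || video.webkitRequestFullscreen).call(video);
  video.play();
});

// Leaving fullscreen via the playback controls pauses the video where it is,
// so the playhead stays put for the next viewer.
$(document).on('fullscreenchange webkitfullscreenchange', function() {
  if (!document.fullscreenElement && !document.webkitFullscreenElement) {
    $('video').each(function() { this.pause(); });
  }
});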

Ubuntu tweaks

There were also some software tweaks I needed to make on the machine itself.

I wanted a standard user to log in automatically. So, I created a generic user on the system and dropped the source files onto the desktop (more on that in a minute). Theoretically, the user will never get out of Chrome because there’s no keyboard available. When the computer boots, it logs into the generic user right away.

Then, I edited the Startup Applications option. Chrome can be launched from the Terminal, and Startup Applications lets you specify the command to run. Using:

chromium-browser --kiosk [URL]

launches Chrome in full-screen kiosk mode and displays the website immediately after login.

The laptop is mounted on the wall behind the TV, and Ubuntu wasn’t recognizing the monitor when the lid was closed. There is a flag in /etc/systemd/logind.conf that handles a dock, but we weren’t using one. So, I set HandleLidSwitch=ignore in that file so the machine ignores the lid being closed (thanks to this answer).

Finally, because the laptop is mounted behind the TV on tracks with a padlock, it’s a pain to take it out to turn it on and off. I was able to automate the daily shutdown pretty easily by specifying a time using crontab -e (you have to be root to shut down). Startup was harder.

After some research, I found that most computers have something called a Real Time Clock (RTC) synced with UTC. It can be used to set an alarm to wake the computer. You can test this by clearing any existing alarm with:

echo 0 | sudo tee /sys/class/rtc/rtc0/wakealarm

and then setting it to fire in ten minutes with:

echo `date '+%s' -d '+ 10 minutes'` | sudo tee /sys/class/rtc/rtc0/wakealarm

Now that I knew the computer would turn itself back on, I could create a simple bash script to run with cron that would handle startup and shutdown daily:
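#!/bin/bash
# shutwake: a sketch of the idea. The wake time (7:00 AM here) is an example;
# set it to whatever fits your schedule. cron runs this as root, so the
# direct redirects work without sudo.

# Clear any pending alarm, set one for tomorrow morning, then power off.
echo 0 > /sys/class/rtc/rtc0/wakealarm
echo $(date '+%s' -d 'tomorrow 07:00') > /sys/class/rtc/rtc0/wakealarm
shutdown -h now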

I stored the file in /usr/bin and made it executable with chmod +x.

Then, I edited crontab -e to run the script daily. Note that the cron entry specifies the shutdown time: at 8 PM every day, the computer shuts down, and the shutwake script resets the RTC alarm:

0 20 * * * /usr/bin/shutwake

cron can be picky, so if you need more help, this article helped a lot.

The last thing we needed to work out was muting the audio during sermons so someone didn’t crank a video up in the middle of church. The video will still play with captions (accessibility FTW), and muting the audio turned out to be not too bad. I can toggle the PulseAudio driver in Ubuntu with a pair of cron jobs that run on Sundays at 9:00 and 12:00 to turn the audio off and back on:

0 9 * * 0 amixer -q -D pulse sset Master toggle
0 12 * * 0 amixer -q -D pulse sset Master toggle


In the end, I’m really happy with how it turned out. Remote management and simple setup led to a really effective display for the wall.

If you want more specifics about software or construction, leave a comment.

Hacking Together an Auto-Tweeting Spreadsheet

A while back, I had looked at automating tweets from a Google spreadsheet to reduce the insane number of clicks it takes in TweetDeck and HootSuite (5 clicks? Really?). I hit some roadblocks and let it slide because, in the long run, it wasn’t really important to me. More of a fun experiment.

I jumped back into it a week or so back to try and solve the last little problems. [I was able to create a script](https://docs.google.com/spreadsheets/d/1kbFIfYGm2sGQJ5TistkaxMzqUqQ8HbgUvJUM8MohOb4/edit?usp=sharing) which loops through a spreadsheet and checks the current date and whether or not the tweet has been sent. If those conditions are met (`TODAY` and `NOT SENT`), it will automatically post the tweet.

The sheet, like all the other Twitter sheets I’ve used, is run with [Martin Hawksey’s](http://twitter.com/mhawksey) fantastic TwtrService library. It allows you to authenticate and tweet right from Google Apps Script and saves a _ton_ of time.
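The core loop amounts to something like this. The column order (date, tweet text, status) and the status strings are my own placeholders; TwtrService handles the actual posting:

// Loop the sheet once and post any tweet dated today that hasn't gone out.
function sendScheduledTweets() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var rows = sheet.getDataRange().getValues();
  var today = new Date().toDateString();

  for (var i = 1; i < rows.length; i++) {
    var tweetDate = new Date(rows[i][0]).toDateString();
    // The two conditions: scheduled for TODAY and still marked NOT SENT.
    if (tweetDate === today && rows[i][2] !== 'SENT') {
      TwtrService.post('statuses/update', { status: rows[i][1] });
      sheet.getRange(i + 1, 3).setValue('SENT');
    }
  }
}

As written, the loop keeps going after a match, which is exactly the problem I describe next.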

I ran into a problem that is [as-yet unsolved](https://stackoverflow.com/questions/37308911/pause-a-loop-to-wait-for-rest-api-response): I can’t get the sheet to stop after posting one tweet. So, if you have multiple tweets on a given day, it will send _all_ of them at once. That’s not good, especially if you’re promoting an event over a period of time. I’ve tried a number of solutions, but I can’t seem to find one that works. I’d love to hear if you’re able to take the source and tweak it to work.

In the meantime, Martin also took a (much more elegant) pass at the task. [His sheet is also available](https://docs.google.com/spreadsheets/d/10U7Rrr7lfbRS2A8QYRUWL8enlJfat75-QuGm7slKXRE/edit#gid=0) and works really well. The goal is the same, but his mechanics and implementation are much more refined and effective.

It’s a good example of multiple ways to skin a cat. I’m a novice coder (I tell people I know enough to break something) and he’s an expert doing all kinds of things. The great thing is, all of this code is open and available. I can make a copy of Martin’s page and dig into his solution. I learned a few tricks about checking for multiple conditions, which is what I was struggling with. I became better at scripting through my failure and his success.

Graduate School Final Project

I promised Stacy Lovdahl I’d post this the other day and then promptly forgot. Sorry, Stacy.

I took a graduate course on curriculum development and implementation this semester through Ball State. Not many of the assignments were open-ended enough to make for interesting blog fodder, so I didn’t post much. For my final project, I chose to redesign the course of study for my school based on an inquiry model. There are two components: the redesign document and the theory backing it up. They’re both available as Google docs with comments opened.

Many, many thanks to Michelle Baldwin and Kelly Tenkely from Anastasis Academy for late-night questions about inquiry mapping, standards reporting, and pretty much anything I couldn’t wrap my head around. Check out the work they’re doing…it’s amazing.

Updating the getText Extension

In October, I wrote about a bookmark application to get the text of a website for analysis in the Lexile analyzer tool. It’s a pain in the keister to copy text from a website, open a document, paste the text, save it as a plain text file, then upload it to the Lexile website. The bookmark tool does the hard part for you (the clicks…all the clicks…) so you just download the formatted file and upload it to the Lexile site.

Depending on what you try and analyze, you might get an angry message like this one:

Seriously, who uses ASCII?

Nerdspeak, engage!

The only reason you’d use ASCII is if you A) want to support legacy browsers (Netscape Navigator 1.0, anyone?) or B) need to speed up query time on a string or a database. Modern web browsers handle text much more efficiently now, and most sites use something called UTF-8. I’m guessing ASCII is needed for the Lexile analyzer so it can give you the score faster than it could with modern encodings.

Anyways, I pushed a fix to the applet tonight. It works by taking the text you highlight and encoding it to ASCII before downloading. So, still no conversions. And no angry red messages. And no more worrying about encodings.
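The gist of the approach looks something like this; the function name and file name are placeholders, and stripping non-ASCII characters is one way to handle the conversion:

// Grab the highlighted text, strip anything outside the ASCII range,
// and hand it back as a plain-text download.
function downloadSelectionAsAscii() {
  var text = window.getSelection().toString();
  var ascii = text.replace(/[^\x00-\x7F]/g, '');

  var link = document.createElement('a');
  link.href = URL.createObjectURL(new Blob([ascii], { type: 'text/plain' }));
  link.download = 'lexile.txt';
  link.click();
}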

You can grab the extension here or take a look at the source for yourself and tinker around.

More Scripts to Make Life Easier

It’s been an intense week of teaching, church band practice, and Google Apps scripting. I’m really focusing this year on using the computer to do what it does well so I can concentrate on doing my job better. In particular, I’m using the desktop to handle repetitive, marginal jobs as efficiently as possible.

This week, I’ve got two new tools in my belt to help out.

docTranslate

I happen to have a higher number of ESL students this year, some of whom are brand new to the country. Besides feeling more and more awkward about only speaking one language myself, I needed to find a way to help them with the language barrier.

After speaking with our ESL specialist, she gave the okay for me to print Spanish on the back of my English notes pages. (I was concerned about creating crutches, but she assured me that it would be more helpful than harmful in the long run. I need to learn Spanish.) Taking my notes, one by one, and putting them through Google Translate would have taken way too much time. So, I turned to a script.

doctranslate.gs

// This function converts a document from English to Spanish quickly.
// The post from http://stackoverflow.com/questions/25509159/how-i-can-get-the-textwrap-image-in-google-doc/25509591#25509591
// was helpful for creating the logic to check for images in the document.

function translate() {
  var doc = DocumentApp.getActiveDocument();
  var body = doc.getBody();

  // Add a page break for the translated material.
  body.appendPageBreak();

  // Get the number of elements in the document
  var elements = body.getNumChildren();

  // Use the number to loop through each element in the document.
  // (Assumes every top-level element is a paragraph; see the notes below.)
  for (var i = 0; i < elements; i++) {
    var para = body.getChild(i).asParagraph();

    // Copy inline images straight over; translate everything else.
    if (para.getNumChildren() > 0 &&
        para.getChild(0).getType() === DocumentApp.ElementType.INLINE_IMAGE) {
      body.appendImage(para.getChild(0).asInlineImage().getBlob());
    } else {
      body.appendParagraph(LanguageApp.translate(para.getText(), 'en', 'es'));
    }
  }
}

This script is still a little incomplete, but it does the trick. You can read through the code to see exactly what each section does. I’m probably going to turn this into a Doc Add-On in the future, but that’s fuzzy right now because I can’t imagine when I’ll have the time. Some things to pay attention to:

  1. All images and drawings have to be inline for it to work. Google Apps Script can’t see other types of images yet. It’s faster to make the copy and then reformat how you’d like.
  2. Formatting isn’t always carried perfectly. Again, it’s about the minimum-viable-product right now. Spot check the translation for format errors if that matters to your doc.
  3. This is document-specific…at the moment. You’ll need to recopy the script each time you want to use it.

I had a fluent Spanish reader check the grammar (Google Translate can be notorious for some weird translations at times) and he gave it a thumbs up, so take that how you will.

driveBox

Whiteboarding is a big part of this year in class. I want students investigating, collecting information, manipulating it, and building an argument. A lot of times, class ends before they have a chance to get clean work on paper. I needed a way for them to send photos of their work in at the close of class.

Of course, email is out. I guess that makes me old now.

We’re barely scratching the surface of Google Apps for Education at school. Teachers are starting training this semester and students haven’t had their accounts opened up yet, so sharing back and forth isn’t really possible.

Some Googling turned up a great alternative, still using Google Apps, to create a public dropbox with scripts. In 20 minutes of finagling, I had a working dropbox page which allowed students to submit things straight to my Drive with three taps. I’ve modified mine slightly from the original post, which is linked in the code comments below. Unlike the translate function, this one requires two files.

form.html

<!-- A minimal version of the form; the field names (myName, classPer, myFile)
     have to match what server.gs expects. -->
<form id="myForm">
  <h2>Student Work Submission</h2>
  <input type="text" name="myName" placeholder="Your name">
  <input type="text" name="classPer" placeholder="Class period">
  <input type="file" name="myFile">
  <input type="submit" value="Submit"
    onclick="this.value='Uploading...';
      google.script.run.withSuccessHandler(fileUploaded).uploadFiles(this.parentNode);
      return false;">
</form>
<div id="output"></div>
<script>
  // Swap the form out for the server's success or error message.
  function fileUploaded(status) {
    document.getElementById('myForm').style.display = 'none';
    document.getElementById('output').innerHTML = status;
  }
</script>

server.gs

/*
    This is based on the template shared by Amit Agarwal (@labnol) on
    the blog, Digital Inspiration. The original post with instructions:
    http://www.labnol.org/internet/receive-files-in-google-drive/19697/
*/

// Find the form that is collecting the information to upload.
function doGet(e) {
  return HtmlService.createHtmlOutputFromFile('form.html');
}

function uploadFiles(form) {

  // Check for a folder called "Student Files" in Drive. If it's not there,
  // create one.
  try {

    var dropbox = "Student Files";
    var folder, folders = DriveApp.getFoldersByName(dropbox);

    if (folders.hasNext()) {
      folder = folders.next();
    } else {
      folder = DriveApp.createFolder(dropbox);
    }

    // Once the folder is found, create the new file
    // The file is named by attaching the Period Number to the Student Name.
    var blob = form.myFile;    
    var file = folder.createFile(blob);
    file.setName(form.classPer + form.myName);

    // Display a success message to the user.    
    return "File uploaded successfully. You can now close this window.";

    // If it fails, display the error message.
  } catch (error) {

    return error.toString();
  }

}

Those blocks of code turn into a simple submission page. Pretty easy to figure out what to do.

If you’re interested in using either of these scripts, let me know on Twitter (@bennettscience) and I can help you get them set up. Like I already mentioned, I think the translate function would work well as an Add-On, and if I get there, I’ll write another post with instructions on how to get it.

More on the scripting I’ve been doing lately…

Updated Student Response Template

A while back, I wrote about a new Google Sheets response system I created for use in class. I suggest you go back and read that first if you haven’t read it yet.

First, I uploaded the template into the gallery. That’s the best place to download and use the spreadsheet because everything copies over correctly.

The biggest update is in the setup process. Rather than posting a URL on your board, it will now create a QR code automatically for your students to scan.


The button adds the QR code to a new sheet, so you’ll see a tab at the bottom that wasn’t there before. You only need to do this once because the other functions (outlined in the first post) clear out responses in the template with each question.
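Under the hood, the step amounts to something like this sketch; the sheet name is a placeholder, and it leans on the Google Charts QR endpoint rather than whatever the template actually calls:

// Build a QR image pointing at the response URL and drop it into a new sheet.
function createQRCode() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var qrSheet = ss.insertSheet('QR');
  var qrUrl = 'https://chart.googleapis.com/chart?cht=qr&chs=300x300&chl='
      + encodeURIComponent(ss.getUrl());
  qrSheet.insertImage(qrUrl, 1, 1);
}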

Cleaning up PHP functions

Last year, I took some time to write a small blog theme for Anchor CMS using some PHP and a lot of CSS. Too much CSS, if you ask me. If I had the time, I’d go back and clean a lot of it up. Maybe on a rainy day.

I haven’t logged into GitHub in a while. Teaching has been all-consuming lately and I haven’t really had time to mess around with the coding projects I started so long ago. I hopped on tonight to grab some code for something else I’ve been working on and found, alas, a problem with my theme.

Someone, somewhere thought my theme was good enough to use, which is pretty cool. But, some of the functions weren’t working correctly, and some were just plain missing (oops). I’d made a mistake somewhere in my cloning and rebasing, and I hadn’t taken the time to make sure I didn’t flub something up along the way. Anyways, long story short, he grabbed some code from my live demo site and got it working. But still. Customer support is our number one priority.

Here’s what I started with:

broke

function twitter_account() {
    return Config::get('meta.twitter');
}

function twitter_url() {
    return 'http://www.twitter.com/' . twitter_account();
}

The idea was to have someone enter some metadata (twitter_account) once and then have it populate a link to their Twitter page as well as be the attribution info on any of the tweet links on posts. Easy enough, right? Wrong. The URL function wasn’t returning the correct address because I done messed up my PHP call. It should have been this:

fixed

function twitter_account() {
    return Config::get('meta.twitter');
}

function twitter_url() {
    return 'http://www.twitter.com/' . site_meta('twitter_account');
}

I’d forgotten that the twitter_account meta field only stores the data object. I have to tell the second function that there is some site metadata stored under twitter_account rather than calling the function itself. Oops.

All’s well now and it’s working fine, from what I can tell. Iteration is the game. Back to the workshop…

Science, UDL, and Lesson Design (Maker Experiment 2)

This post is a revision of the original experiment I posted two weeks ago. The main purpose of this is to add more elements of Universal Design for Learning and to elaborate more on the process used to help students build their own understanding of speed based on experimentation.


Additions

This activity will have a larger scope than the immediate physics relationship. Students will work with their biology (and health?) teachers to study human physiological reactions to activity. Things like heart rate, muscle fatigue, and breathing patterns can all be studied. Students will be asked to take factors like exercise patterns, sleep habits, and nutrition and evaluate their effect on physical tasks. The bicycle can then be used after a period of experimentation to take new data and draw conclusions.

To address the process of encoding and decoding graphs, I’ll be adding an activity from David Wees, a math teacher who often experiments with web tools for teaching through inquiry and games. Not long before I wrote the original experiment, David shared an interactive graphing game that I referenced but didn’t pay much attention to. The player is asked to move a stickman in such a way that a real-time graph matches a pre-determined line. The graph is labeled and clearly shows the effect of any action in the game. Students can use this to form explanations of the components of graphs and how they relate to one another.

This leads into the bicycle hooked to the Raspberry Pi. The parameters are similar (distance over time) but we’re adding the physical act of pedaling as well as the physics component (speed) as outlined.


Reflection

I have to admit, this re-write is challenging. The components of UDL all seem to focus on choice, multiple means of acquisition and sharing, and multiple opportunities for learning. Rewriting an activity to include more components of UDL by adding parameters seems to be counterproductive.

That being said, my original plan did not do a whole lot to support the task of reading and creating graphs, and I think the addition of David’s stickman game will address that problem. I also think this was more an exercise in writing clearly than it was about incorporating principles of UDL. My original intent was to have simple prompts with multiple points for experimentation, assessment and revision, and I think that has been maintained (for the most part) in this update. Perhaps the wider picture is something I envision frequently, but communicate rarely.

My teaching has always focused on openness…BYOD, open Internet assessments, open-ended assignments…I think all of these things are supported by the UDL framework and are not things I articulate in new lessons. Science is a story…exploration and experimentation help us navigate that narrative. This entire activity is designed to have students do something they’re familiar with and apply it to a new idea.

Standards and goals for activities are good guides for learning, but too much of a focus on how to get students down that path robs them of authentic opportunities to experiment and defend their ideas. Rather than approaching UDL as a checklist for lesson design, we need to look beyond the components and find ways to promote the ideas they represent. Do we need a specific line in a plan that says, “Students will create an online resource for [fill in the blank]?” Or, should we allow them to come to us with the ideas for sharing and support them in that goal?

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’

Isaac Asimov’s words hit home (thanks, David Grossman, for sharing!). I would argue that we replace “science” with “learning.” It doesn’t happen by having a section in a lesson plan for “provoking sustaining effort or persistence,” and that mindset takes a serious mental shift for the teacher (and student) to achieve.

All this to say: we need to focus on providing the means to support multiple opportunities for students to learn in their own way. I don’t want to worry about what each student “prefers.” I’d rather be open enough so that each can go his or her own way and be successful.


Resources

Graph Game. [Digital]. Retrieved from http://davidwees.com/graphgame/

Astronomy Project – Day 10

I made some quick additions this afternoon to my GitHub repo for this project, which I’ve renamed PySky. I had to do some research on the PyEphem license before I packaged it with my code. It turns out it’s available to distribute freely with other software as long as I provide my source, so now it’s packaged with my simple Python script.

Update 9:08 PM 10/16/13

Looks like I wrote the post too soon. I spent some time working on the script tonight after my wife went to bed and got both of my original problems solved. The updated script is posted on GitHub.

I was able to find a nifty little piece of code to help me manage responses to prompts so my code is a little cleaner. I don’t have to have as many conditionals (if, elif, else) in my functions anymore, which makes everything look a little bit nicer.

As far as my coding goes, I’m working on a couple of things:

  1. Users will be able to set their location, rather than having it hardcoded to South Bend (which isn’t useful at all).
  2. The program needs to be able to save the location data for all lookups. I think a global variable is a good way to do this, but I need to learn more.

Things have been crazy lately, so I haven’t touched this in a while. I’m still also working on getting the hardware I need to get my Raspberry Pi up and running. The only video-out it has is an HDMI port. I don’t have a monitor that can take HDMI, and I’m having a hard time tracking an HDMI-to-VGA adapter down. I might need to turn to Amazon for this one.

BUT, I did get another crucial piece of hardware for this little jaunt that I’m very excited about.

Of course, it happened to come on the cloudiest day of the month so far. I’ll post more pictures of the scope in a later post.