Tag: Open Web (Page 1 of 6)

My Tables are Awesome

This past spring, I was having a conversation with Mia Zamora, Alan Levine, and Keegan Long-Wheeler about the NetNarr course. Alan was putting together a table for a website that looked really slick, and when I asked what tool he was using, he said Awesome Tables. Four months later, I’m obsessed.

I am particularly susceptible to the charm of Awesome Tables because I subscribe to the Tom Woodward school of using Google Sheets for everything. Awesome Tables adds a second sheet to a Google spreadsheet. This second sheet has cells containing HTML, CSS, and JS code, all of which format your data into an interactive table. Here’s the table that my Projects page is running:

Here’s the Google spreadsheet driving it. You can see the data on the first sheet and the code on the second.
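
The mechanics are similar in spirit to this minimal sketch (a generic illustration in plain JavaScript, not Awesome Tables’ actual code): a template is filled in once per data row to build the table markup.

```javascript
// Generic illustration (not Awesome Tables' internals): render spreadsheet-style
// rows into HTML by substituting each row's values into a template string.
function renderTable(rows, template) {
  return rows
    .map(row =>
      // Replace each {{key}} placeholder with the row's value for that key.
      template.replace(/\{\{(\w+)\}\}/g, (_, key) => row[key] ?? "")
    )
    .join("\n");
}
```

For example, `renderTable([{title: "NetNarr", url: "https://netnarr.arganee.world"}], '<li><a href="{{url}}">{{title}}</a></li>')` would produce one list item per row.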

I’m excited about this stuff for a couple of reasons:

  1. You can use the second sheet to work through basic website programming with real data and see the results by refreshing the table. I could see using this in a class to teach some basic web coding.
  2. There are about a dozen pretty nice templates built in, so it’s easy to quickly turn a spreadsheet into a decent-looking database.
  3. Google Sheets is powerful because you can use Google Apps Script to collect data. You can use HTTP GET calls to pull in data and standard JavaScript to parse the XML or JSON responses into the rows of the table. You can also POST to Google Sheets from other web apps, or use third-party services like Zapier or IFTTT to link it with other apps that have APIs.
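
As a minimal sketch of that third point: in Apps Script you would fetch the payload with `UrlFetchApp.fetch(url)` and write rows with `sheet.appendRow(...)`; the parsing step in between is plain JavaScript. The field names below are assumptions for illustration.

```javascript
// Hypothetical sketch: turn a fetched JSON payload into spreadsheet rows.
// In Google Apps Script, the payload would come from UrlFetchApp.fetch(url)
// and each row would then be written with sheet.appendRow(row).
function jsonToRows(payload) {
  const items = JSON.parse(payload);
  // One row per item; title/url/date are assumed field names.
  return items.map(item => [item.title, item.url, item.date]);
}
```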

There are other ways to build similar tables with Bootstrap or even raw HTML and CSS, but Awesome Tables is easy to use and embed, and the connection between the data and the output is intuitive and easy to manipulate.

As an example of what you can do quickly and easily, here are a couple of Awesome Tables that I’ve been working on in the last couple of weeks:

I am Open (and so can you!)

Image: the cover of Stephen Colbert’s book I Am America (And So Can You!)

For the next few weeks, I will be taking a history course, something I thought I would never do again after I finished my dissertation. Shawn Graham is teaching an online digital history course at Carleton University and has opened it up for non-matriculating students.

The entire design of the course is fantastic for open learners. Rather than just allowing us to watch from the rafters, Shawn has set up a Slack team and is active in the channels. Shawn asks that students blog for the course and use GitHub to keep a record of their progress in the coding exercises. The course readings are all openly accessible, and Shawn has asked that the students use Hypothes.is so that their reading notes are open (you can find them in the Hypothes.is stream with the tag hist3814o).

For the first week, Shawn has assigned a set of readings on the concept of open notes research within history and the humanities more broadly. Open notes research is more commonly practiced in the sciences. Jean-Claude Bradley was advocating for the concept in chemistry as early as 2006. Open notes science improves the reproducibility and verifiability of results, and it also opens up the vast array of information that is gained during research but never published.

Open notes research has been slow to catch on, and it doesn’t take long to brainstorm possible objections. Publications are the coin of the realm in academia, and sharing your research notes could allow someone to scoop your ideas. However, I agree with the readings that Shawn has curated, and with his push during this week of the course, that open notes research is the best practice. As someone writing this blog, I clearly feel that sharing my thought processes helps me to clarify my thoughts and develop my understanding of my research. Sharing our reading notes with Hypothes.is and collecting our code in GitHub take this a step further. We’re thinking, reading, and experimenting (coding) out loud.

I am taking Shawn’s course because I want to see how he uses Slack and because I think the course design is fantastic. I’m already a convert for open notes research and have blogged about it here, here, here, and in many other posts. This site as a whole is an argument for open notes research and the related ideas of getting rid of disposable assignments and empowering students to contribute to a broader knowledge community. However, I did enjoy the week’s readings, especially Caleb McDaniel’s post that turned me on to open notes research a few years ago. I look forward to joining in for the exercises next week (this week everyone was learning Markdown and Git). Most of all, I want to encourage any readers in the ed tech world who have not taken a digital history or digital humanities course to at least poke around Shawn’s course to see the wonderful overlap between that knowledge domain and the current ed tech focus on digital literacy and citizenship.

Visualizing Domains Projects

One of the big challenges in running a Domains project, and part of my feeling of being adrift at sea, is highlighting particularly good work from individual users while intelligibly visualizing the broader activity of all users.

At OU, we have a couple of sites for this purpose. The Activity site shows the most recent blog posts from all sites that FeedWordPress can read. You can get a preview of each blog post and then click through to that website. Each week during the school year, we put out our own blog post featuring the top 5 or 6 posts from that week’s Activity feed.

A second project, called Sites, is a filtered list of all of the web apps currently running on all four of the OU servers. This is really nice for finding the links to all sites running Vanilla Forums or DokuWiki, or even more commonly used apps like Drupal and Omeka. However, if you filter for WordPress, you still get a flat list of the 3,400 or so sites using WordPress.

At Domains17, Marie Selvanadin, Tom Woodward, and Yianna Vovides talked about the solution they are currently developing for Georgetown. Their design starts with a search bar that searches across all blogs. A second piece is called Themes. Another option uses the TimelineJS library to visualize posts by a given user or in a given theme along a timeline.

Currently, Tom is running the first iteration of this visualization of the Georgetown domains off of a Google Spreadsheet, as he is wont to do. He then generates a front page with the basic metadata and screenshots of the sites. For each of these sites, there is a dynamically generated page with the text of the various posts, word counts, and a chart visualization of the word counts. You can also see the TimelineJS by category for each site.
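
A per-post word count of this kind is straightforward to compute. Here is a minimal sketch (my own illustration, not Tom’s actual code; the post fields are assumptions):

```javascript
// Hypothetical sketch: compute [title, word count] pairs for a set of posts,
// the shape of data a chart visualization of word counts might consume.
// The title/text field names are assumptions for illustration.
function wordCounts(posts) {
  return posts.map(post => [
    post.title,
    // Split on runs of whitespace; filter(Boolean) drops empty strings
    // so an empty post counts as zero words.
    post.text.trim().split(/\s+/).filter(Boolean).length
  ]);
}
```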

The next phase of research is to dig into the visualization of community sites. How do we include closer analysis of Drupal, Omeka, and other apps, along with plain HTML sites, rather than just WordPress sites? What types of questions can/should we ask? What are the ethical questions around mining this data?

