Tag: Digital Humanities

Fake News and Fact Checking

Keegan is wrapping up the first week of his Information Literacy Faculty Learning Community as I type.

The FLC, especially for weeks 1 & 2, draws heavily on Mike Caulfield’s work on media and information literacy, particularly his recent work on what he calls the ‘Four Moves’ of fact-checking. Mike has built out a challenge bank to test your fact-checking at Four Moves, and you can delve deeper into his work in his OER textbook Web Literacy for Student Fact-Checkers.

Within the FLC, Keegan is encouraging us to reflect on the material for each week by answering three prompts:

What should we be teaching our students about this topic?

I think there is an overlapping set of skills that can variously be called digital literacy, information literacy, media literacy, civic literacy, etc. The Venn diagram of these skill groups overlaps considerably, but each set also has identifiable pieces and disciplinary histories that help to define and separate it from the others.

At OU we’ve had an initiative called ‘Writing Across the Curriculum’ in place for several years that tries to get students to write essays in classes across campus, not just English classes. Similarly, I would like to see a ‘Literacy Across the Curriculum’ initiative that emphasizes whichever sets of literacies are most applicable for each course (media literacy for journalism classes or information literacy for library classes, etc.). These skills are naturally part of many classes already, but a concerted effort to emphasize these skills in all (or as close to all as is practical) classes would both introduce and reinforce these real-world, necessary skills for students.

In my role within the Office of Digital Learning, I advocate for and help instructors integrate digital literacy lessons into their classes. Finding information online and vetting that information is a key real-world skill. In my history of science classes, I teach how scientists fought for authority and respectability and the rhetorical strategies they used to argue for their scientific theories. I want students to understand how to evaluate the trustworthiness of scientific rhetoric, and I think that’s an obvious place to extend the lesson into evaluating the trustworthiness of all rhetoric.

What’s a small change you can make in your course for the benefit of your students?

I really like Mike’s activity bank. I’ve used activity banks in my classes for a while now. I usually set these up as an array of different activities that reinforce the material from class. Students can choose those activities that reinforce the skills and/or content that most interest them in order to exercise those skills and deepen that knowledge. I think I can integrate Mike’s activity bank into my own to encourage students to practice their fact checking and broader digital literacy skills.

Feel free to share any other thoughts or comments you have on this topic:

I’m hopeful that participants in the FLC will integrate some of Mike’s work into their own teaching and courses. When I look around the ecosystem of Digital Literacy education in higher education, Mike’s work stands out as being incredibly timely, important, and practical.

I really like how Keegan has curated the material for this FLC. I know that we’re going to talk about Chris Gilliard’s work on Digital Redlining in the coming weeks. Amy Collier and her team’s Digital Detox project is another very accessible and adaptable entry into the field and served as a model for Keegan’s work.

I’m currently participating in the #engageMOOC, and next week I’m leading a graduate student workshop on Digital Identity. My field, educational technology, is deeply engaged in this space right now, and I think that reflects a broader cultural awakening to the threats of Fake News, digital manipulation, and the erosion of truth and trust in society. I’m hopeful that Keegan’s work and the work of all of the participants in his FLC can do at least some good in addressing these issues on our campus. I’d encourage you to participate in the FLC online and to think about how to address these issues in your own work.

Game Design as Project Based Learning

2016 still sounds more like a made-up year in the distant future than that time “a couple of years ago.” Nonetheless, a couple of years ago, Scott Wurdinger came out with a book called The Power of Project-Based Learning.

There is a great deal of debate over how to define project-based learning (PBL). Wurdinger recounts how John Dewey had a falling out with his student William Kilpatrick when Kilpatrick (1918) said that a project could be just about anything as long as it was initiated by the student, including just sitting and listening to music (p. 14). Dewey insisted that a teacher needed to be involved to guide learning.

Kilpatrick eventually acquiesced, but we are still left with a broad definition of PBL as projects initiated by students and guided by teachers to achieve desired learning outcomes. Projects can be more or less narrowly defined to fit the subject content of a particular class or a desired final product. A math teacher might ask students to use protractors and planes to build a birdhouse, instructing students along the way to identify the various angles of the walls and their combinations. The staging of a play can be used to discuss the history of food, clothing, politics, and gender roles. Such projects bear a family resemblance: active students engage for a prolonged period in something that is hopefully memorable, meaningful, and authentic in how it contextualizes skills and knowledge. Additionally, project-based learning challenges students to deploy a variety of real-world skills like project management, teamwork, research, design, goal-completion, and on and on.

In this blog and more generally in my work with Keegan Long-Wheeler, I have talked a good bit about using games in the classroom to bolster learning. As with PBL, I think games offer memorable experiences that help to contextualize knowledge.

However, game play is in some ways closer to traditional lecture or text-based instruction than to PBL, in that game play relies on the consumption of media produced by others. My ‘reading’ of a game, my particular experience of it, is of course grounded in my own experiences and can’t be separated from them. The active playing, especially in terms of the social elements, creates a unique or at least specific experience, but so too can reading when accompanied by a good discussion.

Instead of game-based learning (or even gamification) as a parallel to project-based learning, we are working through the concept of game design as project-based learning. Rather than having students play games designed by the teacher or third-party games like Civilization, Minecraft, or Reacting to the Past, what happens when students design new games?

Project-based learning and experiential learning more generally rely on feedback loops. The various schema used to describe these loops are all derivative of Dewey’s “pattern of inquiry”: 1) identify a problem; 2) pose a solution; 3) test the solution against reality; 4) reflect. David Kolb adapted this model into his schema for “experiential learning”:

[Image: David Kolb’s four-part experiential learning cycle]

Similar wheel-type schemas have been designed for project-based learning:

This PBL diagram from the Buck Institute for Education suggests a more prescriptive approach than William Kilpatrick would have wanted. In the modern age of narrowly defined grade-level standards in K-12 schools and integrated learning objectives in higher education curricula, it can be difficult to give up class time and control. Nonetheless, the prompts for projects, the problems being addressed, can provide direction for both the subject matter and the skills that the project will develop.

Game Design as PBL

There are several models for game design, but many are variations of an iterative/looping cycle:

As with project-based learning, game design starts with a problem or prompt for the students to address. In an English course this past fall, Prof. Honorée Jeffers challenged her students to design a choice-based story (game) that retold a classic children’s story. Students brainstormed alternate plot lines and endings. Then, Keegan coached them on how to build out these games in the text-based game software Twine. The students then built a minimum playable game. Play-testing and modification fed an iterative design loop until the project was finished.

After submitting their games, students reflected with Prof. Jeffers on the structures of their narratives, identifying the inflection points in their alternate plots and the choices that authors make as they write.

Prof. Jeffers’ original writing assignment was already a project-based learning approach to understanding literature. Rather than just reading and dissecting a classic children’s story, students produced their own modifications as a way to practice the skills they were studying.

The added dimension of game-design helped to further highlight the choices the students were making in their stories. Rather than distracting from the focal content and skills of the English class, the game-design project foregrounded that material. In addition to highlighting character choices in a reading or even creatively writing new choices, the game-design project asked them to map out these choices and figure out why they would be interesting and fun for a game player. Game-design thus reinforced the learning objectives and also introduced students to further skills like project management, multimedia asset (images, audio, & video) sourcing, and some coding.

Multiserver DoOO Data Management

This post can only possibly appeal to about 12 people and only when they’re really in the mood for weedsy code stuff. However, that’s about my normal readership, so here we go…

For the OU Create project, we now have 5 servers that are managed by Reclaim Hosting. We’ve got more than 4000 users, and, collectively, they have more than 5000 websites. Keeping track of all of the usernames, domains, and apps in use is difficult.

One of the ways that we study the digital ecology of these servers is by looking at the data.db files created by each server’s instance of Installatron. These database files keep track of all of the web apps installed using Installatron. Thus we have a record of every installation of WordPress, Drupal, Omeka, MediaWiki, or any of the other 140 or so apps with Installatron one-click installation packages. I can find the user’s name, the date, and the domain, subdomain, or subdirectory used for each installation. However, within the data.db file, there are five tables for all of this data, and it’s a SQLite file, so it’s not exactly a quick or easy read. Further complicating everything, each server has its own data.db file, and each one is buried deep in the directory structure amongst the tens of thousands of files on the server.
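
Since data.db is just a SQLite file, a quick way to get oriented is to list its tables and their columns before exporting anything. Here is a minimal sketch, assuming a copy of the file has already been downloaded locally as data1.db (the i_installs table is the one described below; its exact column layout depends on the Installatron version, so the sketch just asks the database):

import sqlite3

# Inspect a downloaded copy of data.db (assumed saved locally as data1.db)
connection = sqlite3.connect("data1.db")
c = connection.cursor()

# SQLite stores its schema in the built-in sqlite_master table,
# so we can list the five tables without knowing them in advance
c.execute("SELECT name FROM sqlite_master WHERE type='table'")
print([row[0] for row in c.fetchall()])

# PRAGMA table_info lists the columns of the i_installs table,
# which holds the record of every app installation
c.execute("PRAGMA table_info(i_installs)")
print([column[1] for column in c.fetchall()])

connection.close()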

Here at OU, we have a couple of websites that grew out of studying the data.db files. The first was community.oucreate.com/activity.

This site is built on FeedWordPress. We feed it the URLs for each WordPress site in OU Create, and it aggregates the blog activity into one feed averaging 300+ posts a week.

The other site is community.oucreate.com/sites.

[Screenshot of community.oucreate.com/sites, 2017-12-06]

Sites provides a filterable set of cards displaying screen captures, links, and app metadata for all of the sites in OU Create. You can see what sites are running Omeka or Vanilla Forums or whatever other app you’d like.

To maintain these sites, I would normally go into each server, navigate up one level from the default to the server root, then go into the var directory, then the installatron directory, and download the data.db file. Once I’ve downloaded it, I use a SQLite client to open the file and export the i_installs table to a csv. Then I find the installations since my last update of the system, copy the URLs, and paste them into the Activity site or run a script against them for the Sites site. I repeat this for each server. This process is terrible and I hate doing it, so it doesn’t get done as often as it should.
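
For a single downloaded copy of data.db, that manual export step boils down to a few lines of Python. This is just a sketch of the equivalent query, assuming the file sits in the working folder:

import csv
import sqlite3

# A sketch of the manual export step: dump the i_installs table
# from one downloaded data.db into a csv for review
with sqlite3.connect("data.db") as connection:
    c = connection.cursor()
    c.execute("SELECT * FROM i_installs")
    with open("i_installs.csv", "w") as f:
        csv.writer(f).writerows(c.fetchall())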

This week, I wrote some code to do these updates for me. At the most basic level, the code uses secure shell (SSH) to log in to a server and download a desired file. My version of the code loops (repeats) over each of my five servers, downloading the data.db file and storing them all in one folder. Here is the code, and below I’ll explain how I got here and why:

import os
import paramiko
import sqlite3
import csv

# The keypasses list holds the passwords for the keys to each of the servers
keypasses = ["xxxxxxxxxx", "xxxxxxxxxx", "xxxxxxxxxx", "xxxxxxxxxx", "xxxxxxxxxx"]
counter = 1
csvWriter = csv.writer(open("output.csv", "w"))

# Loop through the keypass list, downloading each server's data.db file
for keypass in keypasses:
    db = "data%s.db" % counter
    servername = "oklahoma%s.reclaimhosting.com" % counter
    keyloc = "/Users/johnstewart/Desktop/CreateUserLogs/id_oklahoma%s" % counter

    # Unlock this server's private key and open an SSH connection as root
    k = paramiko.RSAKey.from_private_key_file(keyloc, password=keypass)
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    print("oklahoma%s connecting" % counter)
    ssh.connect(servername, username="root", pkey=k)
    print("connected")

    # Download the server's data.db over SFTP to the local project folder
    sftp = ssh.open_sftp()
    localpath = '/Users/johnstewart/Desktop/CreateUserLogs/data%s.db' % counter
    remotepath = '../var/installatron/data.db'
    sftp.get(remotepath, localpath)
    sftp.close()
    print("data%s.db downloaded" % counter)
    ssh.close()

    # Export the desired table from the database to the aggregated csv
    with sqlite3.connect(db) as connection:
        c = connection.cursor()
        c.execute("SELECT * FROM i_users")
        rows = c.fetchall()
        csvWriter.writerows(rows)
        print("%s i_users exported to csv" % db)

    counter = counter + 1

On Monday, I wrote the first draft of this in bash and then rewrote it in a similar language called expect. With expect, I could ssh into a server and then respond to the various login prompts with the relevant usernames and passwords. However, this exposed the passwords to man-in-the-middle attacks. If someone were listening to the traffic between my computer and the server, they would be able to intercept the username and password from within the file. This is obviously not the best way to do things.

The solution was to use an SSH key. These keys are saved to your local computer and provide an encrypted credential for logging in to the server. You in turn use a password to activate the key on your own computer, so there’s no ‘man in the middle.’ Unfortunately, I had no idea how to do this. Luckily for me, Tim Owens is a fantastical web host and has a video explaining how to set up keys on Reclaim accounts:

I set up keys for each of the servers and saved them into my project folder. This also eliminated the need for the ‘expect’ script because I no longer needed to enter a password for each server.

I turned back to a bash shell script but couldn’t figure out what to do with my .db files once I had downloaded them all. This morning I turned from bash to Python, which is very good at handling data files. Python also has the paramiko library, which simplifies the process of logging into and downloading files from servers. You can see in the loop part of the code above where I call several paramiko functions to establish a connection to the server and then use sftp to get the file I want.

Our servers are labeled numerically oklahoma1 through oklahoma5, and I had saved my keys for each server as id_oklahoma1 through id_oklahoma5, so it was easy to repeat the basic procedure for each server by writing a loop that repeats five times. Each pass through the loop just increments the number associated with the server and key. The loop also saves the data.db files locally as data1, data2, etc.

The last step was to use Python to compile the desired data from each of these data.db files. The sqlite3 library provided the needed methods for handling the database files. I could connect to each database after I downloaded it, query the table that I wanted, and “fetch” all of the rows from that table. From there, I could use the csv library to write those rows to a csv (a comma-separated values table, which Excel can open). This whole process was part of the larger programmatic loop, so each time I pulled a database from a server, I was adding its table rows to the collective csv.

For me, this process will make it easy to pull a daily update of all of the domains in OU Create and then upload those to my two community websites. As we follow Tom Woodward and Marie Selvanadin’s work on the Georgetown Community site, these up-to-date lists of sites will make it easier to build sites that pull together information from the APIs of the many OU Create websites. The process could also be generalized to pull or post any files from a set of similar servers, allowing for quicker maintenance and analysis projects. Hopefully Lauren, Jim, and Tim will find fun applications as they continue their Reclaim Hosting server tech trainings.
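
As a sketch of where that could go, the aggregated csv could feed a script that polls each site’s WordPress REST API for recent posts. This is hypothetical: the assumptions that the site URL sits in the first column of output.csv and that the requests library is installed are mine, not something the script above guarantees:

import csv
import requests  # assumption: the requests library is installed

# Hypothetical follow-on step: read the aggregated output.csv and
# poll each WordPress site's REST API for its most recent posts
with open("output.csv") as f:
    for row in csv.reader(f):
        if not row:
            continue
        url = row[0]  # assumption: the site URL is the first column
        try:
            r = requests.get("https://%s/wp-json/wp/v2/posts" % url, timeout=10)
            if r.ok and r.headers.get("Content-Type", "").startswith("application/json"):
                print("%s: %d recent posts" % (url, len(r.json())))
        except requests.RequestException:
            pass  # skip sites that are offline or not running WordPress

The /wp-json/wp/v2/posts route is the standard WordPress REST API endpoint for posts, which is why it works as a quick probe for active WordPress sites.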
