Martin talked about data surveillance, covering everything from Twitter metadata to facial recognition.
Hawksey mentioned Francis Galton in his history of data collection and facial analysis / phrenology. Galton, a proud cousin of Charles Darwin, was one of the originators of survey instruments for collecting demographic data and of statistical demography.
At one point Hawksey called 9 volunteers up onto the stage, took a selfie, and then had facial recognition tech find the faces, analyze the sentiment (smiling or sad), and add a visual overlay marker using Space Invaders characters. This demo showed that the Google image analysis API already has facial recognition and sentiment analysis baked in. Minority Report is looking less like science fiction and more like a dystopian critique of current technology.
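For concreteness, the "sentiment" in a demo like this is just a likelihood field in the face-detection response. Here is a minimal Python sketch, assuming the Google Cloud Vision API's documented `faceAnnotations` response shape (fields like `joyLikelihood` and `sorrowLikelihood`); the sample response below is invented for illustration, not real detection output.

```python
# Coarse smiling/sad labeling from a Vision-API-style face detection
# response. Field names follow the documented faceAnnotations shape;
# the sample_response dict is a made-up illustration.

SMILING = {"LIKELY", "VERY_LIKELY"}  # likelihood values treated as positive

def label_face(face: dict) -> str:
    """Map a single faceAnnotation to a coarse sentiment label."""
    if face.get("joyLikelihood") in SMILING:
        return "smiling"
    if face.get("sorrowLikelihood") in SMILING:
        return "sad"
    return "neutral"

# Hypothetical response for two detected faces.
sample_response = {
    "faceAnnotations": [
        {"joyLikelihood": "VERY_LIKELY", "sorrowLikelihood": "VERY_UNLIKELY"},
        {"joyLikelihood": "UNLIKELY", "sorrowLikelihood": "LIKELY"},
    ]
}

labels = [label_face(f) for f in sample_response["faceAnnotations"]]
print(labels)  # ['smiling', 'sad']
```

The point of the sketch is how little glue code sits between an off-the-shelf API call and a sentiment overlay: the hard parts (detection, likelihood scoring) are already done for the developer.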
Hawksey moved into talking about the face race, part of the broader AI race between China and the US. China’s national, gamified credit system combines not just financial information but also data based on your movements (jaywalking), the reports of your neighbors, and analysis of social media. While good scores can get you discounts and other perks, bad scores can block you from buying train or plane tickets or getting a loan. London is testing facial recognition to identify pedestrians on various police lists. Martin didn’t mention it, but CNN was running a story as I was coming to the conference on facial recognition being used for crowd control in the Miami airport.
Martin also talked a little bit about the failures of this technology. Companies are already starting to trust facial recognition, either unaware of or uncaring about its failure rates. Those of us in the room could feel good about facial recognition failing often, but the same complacency cuts the other way: when safety systems on automated vehicles fail to identify threats to humans, our extreme trust in the goodness of technology creates real problems.
Martin’s talk was particularly interesting to me, because it shows how much of this tech is already available, how much can be combined and remixed off the shelf by techies, and how easily we can tech-wash both the challenges and concerns behind these new tools. Every day we see the “tech (or algorithms) can’t be biased” argument, at the same time that these obviously dystopian technologies are developing.
We are already implementing attention tracking and facial recognition technologies in our classrooms, despite these technologies’ known failures in recognizing people of color. If your attendance grade is determined by these technologies, and many classes have built-in failure for missing a given percentage of class sessions, we are setting people up to fail because of failed tech.
In the Q&A, sava saheli singh asked how we balance the “cool factor” of exciting new tech with our critical concerns. I think that’s a great way of identifying the challenge that we face, both in ed tech and in technology spheres more broadly. As Martin said, many of us are tech magpies, and we enjoy playing with these new technologies. However, we have an ethical duty to think critically about them, and to teach others to do the same.