Dissertation isolation: Say it ain’t so…

Three years ago, if you’d told me I would be writing a dissertation having anything to do with social media, I’d have laughed at you. Three years ago, I had just gotten a Twitter account and had used it…oh…maybe five times. Social media was a fun distraction, sure, but not much more.

Flickr photo shared by Marc_Smith under a Creative Commons (BY) license

But for the past few days, I have been intently focused on finally getting my proverbial s*** together and finishing a draft of my dissertation which deals, in large part, with social media and digital identity. But I don’t always have the best attention span. I get distracted by many things – organizing my books, vacuuming, obsessing over how many steps my Fitbit has recorded today, and, of course, social media. Some might even say that social media, and the Internet in general, gets in the way of my productivity. And sure, sometimes it does. Did I really need to re-read that hilarious blog post about why procrastinators procrastinate for the twentieth time? Probably not (but if you haven’t read it, you really should…). Did I have to look through the trending hashtags on Twitter to learn that the odd one I couldn’t parse was, inevitably, about more One Direction drama? (I kid you not – every single time.) Well, no.

But.

And that’s a big but (no pun intended).

But.

Social media is also a goldmine of incredible information. The vast majority of the citations in my third comprehensive exam paper, which was about digital identity, came from Twitter – well, more specifically, from what I dug up by searching for my Twitter handle + #identity in order to access the scores of articles on the subject that I had carefully curated from others’ sharing over time. And social media is the gift that keeps on giving. Today, I was writing about why it is so critical that all of us, but especially educators, speak out for social justice in online spaces, even though it is potentially risky (and, as in my case, can lead to being trolled in a not-so-nice way). And on one of my social media breaks, I came across this fantastic post by Bonnie Stewart about the way that social media shapes our world. To quote Bonnie:

“Facebook – and more broadly, social media in general…but Facebook remains for the moment the space of the widest participation across demographics even while targeting ads designed to keep people IN their existing demographics – is the stage upon which the battle over dominant cultural narratives is played out.

Social media is where we are deciding who we are, not just as individual digital identities but AS A PEOPLE, A SOCIETY.”

Thanks for the dissertation material, Bonnie!

So writing my dissertation has been incredibly hard, but perhaps not for the reasons you might think. When I get into my groove, I am a prolific and rapid writer. But these days, I write mostly blog posts, and I find that my ability to write academically has been overtaken, in some ways, by my ability to blog. If I could blog my dissertation, I would. I’m a bit lost without the ability to hyperlink to other blogs or articles or people, and I feel that my writing suffers because of it.

Because really, that’s the magic of social media, social writing, and Web 2.0: writing, publishing, literacy in general – it truly is now all about participation and collaboration. A good blog post is a good blog post because it links into a much wider web of knowledge, and it does so in a highly transparent and accessible way. Sure, we cite others in academic papers, but to access a cited work we would usually have to search for it in an academic database or – gasp – go to the library (I have helpfully linked to the Wikipedia page about libraries here in case you’ve forgotten what they are). The way we think about knowledge is changing, at least when it comes to the digital sphere: as David Weinberger said, “The smartest person in the room is the room.”

I even watched this shift play out in my research. What began as an ethnographic study/discourse analysis rapidly changed into something much more collaborative. Instead of me sitting alone and analyzing my participants’ words, we sat there and picked them apart together – both their words and, at times, mine. We constructed (well, in the case of my research, deconstructed) understandings collaboratively. And the experience was so much richer because of it.

In a particularly depressing moment of Heart of Darkness, Conrad writes, “We live as we dream – alone.” In many ways, academia seems still to embrace this worldview – it might as well read, “I write my dissertation as I dream – alone.” But just as the magic of Google Drive means I will never have to edit documents alone again, the magic of social media means that I no longer have to write, read, think, or be an “expert” in isolation. Maybe it’s time academia embraced this incredible connected culture that we live in just a little bit more and took up a more social form of learning. After all, “We participate, therefore we are.”

And hey, I might even find a way to work this blog post into my dissertation.

 

(Digital) Identity in a World that No Longer Forgets

This post was written jointly with Alec Couros and also appears on his blog.

In recent weeks, the topic of digital identity has been at the forefront of our minds. With election campaigns running in both Canada and the United States, we see candidate after candidate’s social media presence being picked apart, with past transgressions dragged into the spotlight for the purposes of public judgement and shaming. The rise of cybervigilantism has led to a rebirth of mob justice: what began with individual situations like the shaming of Justine Sacco has snowballed into entire sites intended to publicize bad online behaviour with the aim of getting people fired. Meanwhile, as the school year kicks into high gear, we are seeing evidence of the growing focus on digital identity among young people, including requests for our interning pre-service teachers to teach lessons about digital citizenship.

All this focus raises big questions about societal expectations of digital identity (i.e. that it be sanitized and mistake-free) and the strategies typically used to meet those expectations. When talking to young people about digital identity, a typical approach is to discuss the importance of deleting negative artefacts and replacing them with a trail of positive artefacts that will outweigh these seemingly inevitable liabilities. Thus, digital identity has, in effect, become about gaming search results by flooding the Internet with the desired, palatable “self” so that this performance of identity overtakes all of the others.

But our current strategies for dealing with the idea of digital identity are far from ideal. From a purely practical perspective, it is basically impossible to erase all “negatives” from a digital footprint: the Internet has the memory of an elephant, in a sense, with cached pages, offline archives, and non-compliant international service providers. What’s more, anyone with Internet access can contribute (positively or negatively) to the story that is told about someone online (and while Europe has successfully lobbied Google for the “right to be forgotten” and to have certain results hidden in search, that system only scratches the surface of the larger problem and initiates other troubling matters). In most instances, our digital footprints remain in the control of our greater society, and particularly large corporations, to be (re)interpreted, (re)appropriated, and potentially misused by any personal or public interest.

And beyond the practical, there are ethical and philosophical concerns as well. For one thing, if we feel the need to perform a “perfect” identity, we risk silencing non-dominant ideas. A pre-service teacher might be hesitant to discuss “touchy” subjects like racism online, fearing future repercussions from principals or parents. A depressed teenager might fear that discussing her mental health will make her seem weak or “crazy” to potential friends or teachers or employers and thus not get the support she needs. If we become mired in the collapsed context of the Internet and worry that our every digital act might someday be scrutinized by someone, somewhere, the scope of what we can “safely” discuss online is incredibly narrow and limited to the mainstream and inoffensive.

And this view of digital identity also has implications for who is able to say what online. If mistakes are potentially so costly, we must consider who has the power and privilege to take the risk of speaking out against the status quo, and how this might contribute to the further marginalization and silencing of non-dominant groups.

Our current strategy for dealing with digital identity isn’t working. And while we might in the future have new laws addressing some of these digital complexities (for instance, new laws are currently being proposed around issues of digital legacy), such solutions will never be perfect, and legislative changes are slow. Perhaps, instead, we might accept that the Internet has changed our world in fundamental ways and recognize that our societal mindset around digital missteps must be adjusted in light of this new reality: perhaps, in a world where forgetting is no longer possible, we might instead work towards greater empathy and forgiveness, emphasizing the need for informed judgment rather than snap decisions.

So what might that look like? The transition to a more forgiving (digital) world will no doubt be a slow one, but one important step is making an effort to critically examine digital artefacts before rendering judgment. Below, we list some key points to consider when evaluating problematic posts or other content.

Context/audience matters: We often use the “Grandma rule” as a test for appropriateness, but given the collapsed context of the online world, it may not be possible to participate fully in digital spaces if we adhere to this test. We should ask: What is the (digital) context and intended audience for which the artefact has been shared? For instance, was it originally posted on a work-related platform? Dating site? Forum? News article? Social network? Was the communication appropriate for the platform on which it was originally posted?

Intent matters: We should be cognizant of the replicability of digital artefacts, but we should also be sure to consider intent. We should ask: Was the artefact originally shared privately or anonymously? Was the artefact intended for sharing in the first place? How did the artefact come to be shared widely? Was the artefact made public through illegal or unethical means?

History matters: In face-to-face settings we typically don’t unfriend somebody based on one off-colour remark; rather, we judge character based on a lifetime of interactions. We should apply the same rules when assessing a digital footprint: Does the artefact appear to be a one-time thing, or is it part of a longer pattern of problematic content/behaviour? Has there been a sincere apology, and is there evidence that the person has learned from the incident? How would we react to the incident in person? Would we forever shame the person, or would we resolve the matter through dialogue?

Authorship matters: Generations of children and teenagers have had the luxury of having their childhoods captured only by the occasional photograph, and legal systems are generally set up to expunge most juvenile records. Even this Teenage Bill of Rights from 1945 includes the “right to make mistakes” and the “right to let childhood be forgotten.” We should ask: When was the artefact posted? Are we digging up posts that were made by a child or teenager, or is this a recent event? What level of maturity and professionalism should we have expected from the author at the time of posting?

Empathy matters: Finally, we should remember to exercise empathy and understanding when dealing with digital missteps. We should ask: Does our reaction to the artefact pass the hypocrite test? Have we made similar or equally serious mistakes ourselves but been lucky enough to have them vanish into the (offline) ether? How would we wish our sons, daughters, relatives, or friends to be treated if they made the same mistake? Are the potential consequences of our (collective) reaction reasonable given the size and scope of the incident?

This type of critical examination of online artefacts, taking into consideration intent, context, and circumstance, should certainly be taught and practiced in schools, but it should also be a foundational element of active, critical citizenship as we choose candidates, hire employees, and enter into relationships. As digital worlds signal an end to forgetting, we must decide as a society how we will grapple with digital identities that are formed throughout the lifelong process of maturation and becoming. If we can no longer simply “forgive and forget,” how might we collectively develop a greater sense of digital empathy and understanding?

So what do you think? What key questions might you add to our list? What challenges might this emerging framework provide for digital citizenship in schools and in our greater society? We’d love to hear your thoughts.

2015: A Year to Share and to Connect

A few days ago, my friend and colleague Alec Couros asked, via Twitter, about people’s personal and professional goals for 2015. My personal goals are still a bit muddled (2014 was a challenging year in many ways, both for me and, to be honest, for the world in general), but professionally, I have a pretty good idea of where I’m headed:

[Screenshot of my tweeted goals for 2015]

So I’m ending this year by getting started on those first two goals with a blog post (second in a week – must be a record or something) that will hopefully hold me accountable in some strange, don’t-let-the-Internet-down kind of way.

I’ve got two plans so far – the first is to take part in the Photo-a-day Challenge, which I’ll be doing on my newly minted Flickr account (yeah, I know, it’s empty so far), and the second is to get much more involved in Twitter chats and other Twitter conversations – I share resources a lot, but I could be so much better at engaging with others. I just need to work a bit harder on believing that others care what I have to say.

Thinking about all of this sharing, I’m reminded of a pretty cool quote by Eric Raymond about gift economies:

“In gift cultures, social status is determined not by what you control but by what you give away.”

This idea seems to align really well with what happens on Twitter – the people I respect the most in that space are the ones who comment, who reshare, and who engage with others: who give away knowledge, insight, sometimes even just a bit of humour or support.

So…in 2015, what will you give away?

Social Networks and the Globalization of Happiness and Grief

This past summer, I wrote about my mother’s battle with a terminal brain illness, which has left her blind and with dementia. After publishing the post, I sent the link to a few family members, but I also shared it on Twitter. Then I headed out for a run to clear my head.

The first person to respond to my tweet was a former student, who thanked me for sharing my story. I remember so clearly seeing the notification, mid-run. My immediate reaction was one of confusion – some part of me had not considered that by sharing my post on Twitter, definitely the most public and professional of social networks for me, everyone would see it; my social contexts, so carefully separated in real life, were collapsed online. I felt vulnerable, but I also felt a sense of comfort and relief.

When I asked my family if it was okay to share what I’d written about my mother, my sister asked me why I wanted to post it online. I wasn’t sure then, and I’m not entirely sure now. Sharing online is an odd business, really, one that I’m still trying to wrap my head around. So much has been written about the ways in which social media has changed the way we relate to one another, from the digital dualists who argue that we need to privilege our face-to-face connections by unplugging, to those, like Nathan Jurgenson, who argue that Facebook is real life – that social media has merely shifted and augmented our relationships. Certainly, social networks have made it possible for us to share wonderful moments with a wide audience (like the recent video of a son who paid off his parents’ mortgage). But they have also shifted the nature of mourning, from private and localized suffering to a new, globalized grief.

Of course, this is not always a positive: our public/publicized mourning has led to hoaxes where people take advantage of human generosity and kindness, and it also backfired recently for Facebook, where the automatically generated “Year in Review” feature brought back painful memories for some users. It has led to the strange phenomenon of grief porn. And it has made it difficult at times for us to move on from traumatic events, as we are constantly reminded of them.

But the sharing of pain and trauma is also (for me, at least), on some level, deeply comforting. Research suggests that social networking sites are so satisfying (and at times addictive) because of the endorphins that are released when we post or receive feedback from others in the form of likes, favourites, or comments, so perhaps this plays a role in why we share. And certainly there is something wonderful about receiving words of encouragement and sympathy from complete strangers, who reach out online out of simple human empathy. It is, in a sense, a reassurance of the deep-down, fundamental kindness of people: a reminder that grief is in many ways the great equalizer, a feeling to which we can all relate.

As I write, I watch my terminally ill mother sleeping on the couch beside me. I am struck by the rapid decline in her functioning even since this summer, when I first shared her story. The moments of clarity come rarely now for her, and this Christmas has been tough for my family. And once again, I am drawn to share this, not only to write about how I am feeling, but to put it out into the world – to feel connected and, perhaps, simply to feel human in the midst of difficulty and pain.

 

Tragedy, Politics, and Grief in an Age of Immediacy and Networks

I’ve been working on a blog post for a while now about digital identity, in an effort to get my ePortfolio and blog up and running before the craziness of a new semester sets in. I thought that soon, maybe today, I’d finally click publish on that post.

But then, just as I was gathering the courage to post, this Sunday my social media feeds exploded with sadness and injustice and hate, and I felt that I needed to write something different. I watched as news broke about the killing of an unarmed black teen in Missouri by police. Through the tweets of a few people in my Twitter feed, I followed the emerging stories of racism and violence and grief.

Yesterday, those stories continued. Last night, I saw tweets and news stories about the protests in Ferguson and about the rubber bullets and tear gas being used on protesters. And amidst all this, I learned of Robin Williams’ apparent suicide, which sparked an outpouring of grief, of sympathy, and of support for other sufferers of depression across both my Twitter and Facebook feeds.

I’ve seen it before, but perhaps the confluence of these very different tragedies made it more apparent. Networks, the Internet, our digital existence — all of this has changed the way we grieve and experience sadness and loss. Tragedies no longer break in the mainstream media — they break on Twitter, through the voices of the many, not the few.

In a networked world, tragedy and grief are quick to appear and then remain ever-present. There are constant reminders of those we’ve lost and of horrific events; it is hard to escape tragedy when it is everywhere in our news feeds. Of course, networks can bring support, comfort, and a feeling of solidarity, especially when we are far, physically, from those we love. But the immediacy and ubiquity of the news of tragedies also seems to bring quick politicization. In some ways, this is a positive — the coverage on social media will (hopefully) bring necessary attention to the events in Ferguson, Missouri. The outpouring of sadness in the wake of Robin Williams’ death may generate much-needed awareness about mental illness.

But the immediate politicization of tragedy is also problematic. In the wake of the Ferguson shooting, I saw an argument arise on Twitter between two activists, Suey Park and Tim Wise, about the idea of privilege and the appropriate ways of being an ally in discussions of race. After Robin Williams’ death, debates over suicide and mental illness sprang up on social media. This is not new: after the Sandy Hook school shooting in 2012, there were immediate calls for gun control, for arming teachers, for tighter screening of gun owners.

These conversations are important, no doubt. We need to talk about the entrenched racism that surrounds the events in Ferguson. We need to discuss mental illness, and guns, and privilege, and all of the other hard issues that tragedies bring to the surface.

But perhaps, first, we need a little time to mourn, alone or together, individually or through our networks. Perhaps, as outsiders looking in on tragic situations, we can just let grief be grief, for a little while at least.