Rethinking online privacy, safety, and student autonomy
As we’ve grown accustomed to sharing information online, technological advances have made it easy and profitable for third parties to collect, aggregate, and share this information. Digital privacy policies in many school districts have failed to keep up with the changing nature of people’s online presence and the easy accessibility of their information. This results in a significant increase in risks to student safety, well-being, and autonomy.
These privacy policies and practices are grounded in two outdated sets of implicit assumptions about data, privacy, and safety. The first is the 1974 Family Educational Rights and Privacy Act (FERPA). The second is conventional wisdom based on “stranger danger” concerns from the early days of the internet. While these provide a baseline to protect students, neither addresses issues that have emerged in the last 50 years.
FERPA
| Outdated assumption | Modern reality |
|---|---|
| Directory information is distributed directly to a specific, relevant audience in print form. | Student images, names, and other information are posted on web pages accessible to the general public and corporations. |
| The initial collection of data requires significant effort and expense, which naturally limits how much is collected and could fall into the wrong hands. | Ease of collection and the push to be “data-driven” incentivizes collecting as much information as possible, which puts more data at eventual risk. |
| Specific information can only be located by manually reading and navigating these print resources. | Information can be searched in bulk using traditional search queries, natural language questions posed to AI chat bots, and image recognition algorithms. |
| When the information is no longer relevant, it is disposed of or forgotten on a shelf in someone's home. | Online information remains posted indefinitely unless the district takes proactive steps to remove it. If the information has been archived by a third party, there is no way to remove it. |
| Reproducing, sharing, extracting, or consolidating the information is a costly, time-consuming, manual process that requires physical access to the original print source. | Automated bots can crawl the internet to scrape and save text and images for immediate or future consolidation, processing, and reproduction. |
| This information is discrete, limited in scope, and generally can’t provide insight beyond its literal contents. e.g. a directory of home addresses and phone numbers doesn’t provide any insight beyond where a person lives and how to call them at home. | Multiple data sources can be manually or programmatically cross-referenced to infer and predict additional information. e.g. a photo of a student playing soccer can be cross-referenced with a schedule of practices and games to deduce where that student is likely to be after school. |
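The cross-referencing risk in the last row can be made concrete in a few lines of code. This is a hedged sketch using invented students, activities, and schedules; no real dataset, scraper, or API is implied.

```python
# Illustration of the cross-referencing risk described above.
# All names, activities, and schedules are invented for this sketch.
captioned_photos = [
    {"student": "A. Rivera", "activity": "soccer"},  # e.g. from a photo caption
    {"student": "B. Chen", "activity": "chess club"},
]
public_schedules = [
    {"activity": "soccer", "location": "North Field", "time": "Tue 3:30pm"},
]

def infer_after_school_locations(photos, schedules):
    """Join two innocuous public datasets to predict where a student will be."""
    by_activity = {s["activity"]: s for s in schedules}
    return [
        {"student": p["student"], **by_activity[p["activity"]]}
        for p in photos
        if p["activity"] in by_activity
    ]

print(infer_after_school_locations(captioned_photos, public_schedules))
```

Each dataset is harmless on its own; the join is what produces the sensitive inference, which is why policies that evaluate each post in isolation miss this risk.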
Stranger danger
| Outdated assumption | Modern reality |
|---|---|
| The primary online threat students face is a stranger who intends to harm them physically. | The internet can be used by multiple categories of people to harm students, including strangers, peers, and people who already know them, such as estranged or abusive family members. |
| The primary way someone could harm a student using online information is by manually browsing websites and looking for information to identify them. | Advances in search tools, AI chat bots, and image recognition algorithms provide automated means for malicious individuals to quickly and efficiently traverse many sources to retrieve and surface information about students. |
| Any incomplete or non-specific piece of information posted about a student is useless to a stranger who is trying to identify them. e.g. it’s safe to post an image of a first name and last initial as long as it’s not associated with other identifying information. | Individuals who already know students and have harmful intentions can supplement their existing knowledge with limited online data. e.g. a first name and last initial could help an estranged, abusive parent confirm their child likely attends a particular school. |
In addition, current policies are often optimized to protect districts from legal liability, not necessarily to protect students from current threats. These outdated assumptions leave us underprepared to respond to increasingly common situations that arise as technology and society evolve.
Updating Our Policies
Districts need to proactively update their privacy policies to reflect these new realities. Proactive planning allows us to create a cohesive vision and guiding principles that inform our approach to student protection. If an unanticipated situation or new technology arises, a well-thought-out policy will still provide a framework for our response.
Waiting until an issue arises is not an option. Responding reactively means fixing damage, navigating publicity issues, and rewriting policy all at once, which produces hasty, unsustainable policies that are perpetually a step behind emerging dangers. In the meantime, the backlog of posted pictures and information keeps growing and will take longer to sort through when a situation requires urgent removal. And in some situations, like those involving harassment, there is no warning or time to react before the damage is done.
The following scenarios highlight some of the issues we must be prepared to address.
Scenarios
Takedown Request
The mother of a 10th grade student alerts the school that her daughter’s estranged, abusive father is moving to the area. She requests all pictures of her daughter be removed from district-related web pages to protect her identity and location. She is especially worried about pictures from when her daughter was younger, as they would be more recognizable to the father. She doesn’t want him to be able to locate her family or risk him showing up to school events.
Complications
- Do we have a way to locate every image of the student posted across 3 school websites, social media platforms, S’mores newsletters, etc. over the last 10 years by individual teachers, coaches, administrators, and members of the PTO?
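One way to make such a request tractable is to record every public post of a student photo in a log at publication time, so a takedown becomes a lookup rather than a ten-year manual search. A minimal sketch, assuming a hypothetical CSV log; the field names, IDs, and URLs are all invented for illustration.

```python
import csv
import io

# Hypothetical posting log, appended to whenever a student photo is published.
# Field names, student IDs, and URLs are invented for this sketch.
log_csv = """\
student_id,url,platform,posted_on
S1042,https://hs.example/news/homecoming,website,2016-10-02
S1042,https://instagram.example/example_hs/p/abc,instagram,2019-05-14
S2077,https://smore.example/newsletter-12,smore,2021-01-20
"""

def posts_featuring(log_text: str, student_id: str) -> list[dict]:
    """Return every logged post that includes the given student."""
    return [
        row
        for row in csv.DictReader(io.StringIO(log_text))
        if row["student_id"] == student_id
    ]

for post in posts_featuring(log_csv, "S1042"):
    print(post["platform"], post["url"])
```

The design point is that the log must be populated at posting time by every teacher, coach, and PTO member with posting access; it cannot be reconstructed after the fact.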
Group Chat Drama
Four years ago, a staff member posted a photo to the district Instagram page. The photo shows the back of Diego, a first-grader, holding his mom’s hand as they walk into his school for an event.
This week, Adam, a 5th grader, finds the picture and sends it to a group chat with Diego and other students in their grade. He adds the caption “Mama’s boy”. While his face is not visible in the photo, Diego is clearly recognizable to people who know him. In retaliation, Diego takes a picture of Adam from behind as he walks home holding his little sister’s hand and sends it back to the group chat. This back-and-forth escalates until it requires school-based intervention.
Complications
- When families are asked to provide consent for images of their children to be posted, is there a distinction between photos that do and don’t show the student’s face?
- In the initial Instagram post, were Diego and his family aware that a staff member took their photo or intended to post it publicly?
- Do we want elementary-aged students to get used to being photographed by adults they don’t know?
- Do we uphold consistent expectations between staff and students for taking and posting photos of others?
- Do students have the option to remove photos from district platforms as they get older and more self-conscious?
Digital Footprint Cleanup
Jordan, a class of 2023 alum, wants to cull his digital footprint before he sends out internship applications. He comes across a silly track team photo posted on the team’s Instagram account. Jordan’s career counselor suggests the photo may make it harder for Jordan to present himself as a professional and advises him to take it down. Jordan contacts the high school and asks for it to be removed.
Complications
- If the Instagram account was run by a former employee who has since left the district, would current employees even be able to remove the image? What if they passed along the login credentials, but the two-factor authentication was tied to their personal cell phone number?
- What if a younger student in the photo is trying to earn an athletic scholarship and relies on school social media platforms to establish their portfolio? How do we balance competing interests between people in the photo?
List of Names
During a visit to an elementary school, a district staff member is impressed with a display celebrating the school community. They post a picture of the display to the district Instagram account, inadvertently sharing a legible list of the names of almost every student in the building. One of the names is “Zaya A.”, a third grader whose family has a restraining order against a relative in the aftermath of a custody battle. While the list only includes each student’s first name and last initial, Zaya is a fairly uncommon name. Given the age and general location, the relative deduces that this is probably the Zaya they are looking for and shows up at the school during dismissal.
Further complications
- What if Zaya’s family had not signed a media release for her? Would that apply to this scenario? If not, what avenues do they have to ensure her name is not published in this way?
- Are district-level employees in the practice of checking media releases and other opt-outs before publicly posting school-based photos?
Viral Gone Wrong
Erin, a middle school student, goes viral in a video on TikTok. The comment section quickly turns negative. One commenter takes a screenshot of Erin’s face in the video and uploads it to FaceOnLive, which returns a list of web pages containing other images of Erin’s face. The list includes the homepage of Erin’s school, which has a photo of her in the cast of the spring musical. The commenter posts the name, website, and phone number of Erin’s school. Soon, other commenters use this information to harass Erin, calling and emailing the school with threats of violence while impersonating her.
Complications
- FaceOnLive, like most other face search platforms, saves every “faceprint” it encounters, so images remain searchable even after they are removed from the originating webpage. Even if the picture is taken down from the middle school website to limit the fallout, a search for Erin’s face will still return the previously scraped match indefinitely.
School Outrage
One of our schools gets negative news coverage due to public outrage over a recent decision. An online influencer with a large following publishes a rant against the school. An enraged viewer goes to the school’s website and finds a staff list that includes a photo of each staff member. They use FaceCheck.ID’s API to automatically and efficiently search for each staff member’s face, publishing a directory of staff names, images, and links to webpages where the staff member’s face was found. Other viewers comb through the resulting social media profiles, local news articles, and other images of the staff members and start showing up to intimidate and question them at locations they frequent like coffee shops, yoga classes, and places of worship.
Complications
- Because FaceCheck.ID searches by face rather than by text, staff members who have taken precautions like obscuring their names in social media profiles would still be surfaced in this type of attack.
- Even if staff members don’t have their own social media profiles or have them set to private, images their friends or family have posted will also be surfaced
Reworking Our Policies
Our new policies need to balance:
- Student safety and autonomy
- Celebration and recognition of student success
- Community building
We need policies shaped by questions like:
- How do adults model consent and give students a voice when taking and sharing photos of them?
- What happens to images after we post them? Do they remain online indefinitely? What risks does this pose? How can we mitigate them?
- Can guardians and students revoke consent for their images to be shared? If so, what does this look like and how is it enforced? If not, how do we ensure they understand they are giving consent in perpetuity?
- Do guardians and students need to provide a reason to request images be removed? If so, who decides whether a reason is valid? Does this change if images feature multiple people? How is this communicated when people decide whether to consent to sharing?
- How do we ensure the district retains administrative control over social media accounts created by individual students or employees who graduate or leave the district?
- How could the information and pictures we post be used to harm students’ physical or mental wellbeing by people they know? What safeguards can we include to reduce harm?
- How could the information and pictures we post be ingested, consolidated, and searched en masse to harm students, families and staff? How do we account for technologies that don’t yet exist (or aren’t publicly known) that are already scraping the web and archiving this data prior to public release?
There are no clear-cut answers to these questions. But if we make time to ensure our policies are robust enough to reflect our values, we will be in the best position to protect our students, staff, and community.
The following section provides some considerations to help guide discussions about digital privacy policies.
Policy Considerations
How long do public photos remain posted by default?
| Solution | Strengths | Weaknesses |
|---|---|---|
| Indefinitely | | |
| For a set period of time (e.g. all photos are removed three semesters after they are posted) | | |
| Until the subjects of the photo leave their current school | | |
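If a fixed retention period is chosen, enforcement can be automated rather than left to memory. A minimal sketch, assuming “three semesters” is approximated as 18 months; the helper name and window are illustrative, not a prescribed implementation.

```python
from datetime import date, timedelta

# Assumption: "three semesters" approximated as 18 months (~548 days).
# The actual window would be set by policy.
RETENTION = timedelta(days=548)

def removal_due(posted_on: date, today: date) -> bool:
    """True once a photo posted on `posted_on` has aged past the retention window."""
    return today >= posted_on + RETENTION

print(removal_due(date(2023, 9, 1), date(2025, 6, 1)))  # True
print(removal_due(date(2024, 9, 1), date(2025, 6, 1)))  # False
```

A check like this is only useful alongside a log of what was posted where; the date math is trivial, and the hard part is knowing which posts exist.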
How can we share celebrations of students and our community?
| Solution | Strengths | Weaknesses |
|---|---|---|
| Post on public channels (district website, social media accounts, etc.) | | |
| Post on password-protected web pages | | |
| Direct message platforms (e.g. messaging app, text messages, emails) | | |
| Restricted online communities (e.g. private Facebook group) | | |
| Paper-based communications | | |