<?xml version="1.0" encoding="utf-8"?><rss version="2.0">
  <channel>
    <title>Posts from Zak Kolar</title>
    <link>https://zkolar.xyz/posts</link>
    <lastBuildDate>Wed, 30 Apr 2025 18:50:10 -0400</lastBuildDate>
            <item>
      <title>23andMe (andMyGeneticData)</title>
      <link>https://zkolar.xyz/posts/23andme-and-my-genetic-data</link>
      <guid>https://zkolar.xyz/posts/23andme-and-my-genetic-data</guid>
      <pubDate>Wed, 30 Apr 2025 00:00:00 -0400</pubDate>
      <description><![CDATA[<p>After a <a href="https://www.theguardian.com/technology/2023/dec/05/23andme-hack-data-breach">massive data breach impacting almost 7 million users</a> in October 2023, the DNA testing company 23andMe has been struggling to recover. Last month, they filed for bankruptcy. This means the company (and all its data) is up for sale.</p>
<p>Even before the breach, 23andMe struggled to become profitable. Now that their reputation has taken a hit, the company’s most valuable (and arguably only valuable) asset is the genetic information they’ve collected from millions of users.</p>
<p>A new owner would inherit the current privacy policy and terms. But they would be free to change these terms going forward. These changes are typically buried in a boring-sounding “we’ve updated our privacy policy” email - the kind many of us have been conditioned to immediately delete without reading.</p>
<p>Genetic information would be attractive to a range of entities. Here are a few hypothetical uses:</p>
<ul>
<li>
<p>Retail giants like Amazon could use this for “personalized” upcharges. Users predisposed to certain health conditions or diseases may see an increase in prices on products related to management or treatment. (We implicitly assume everyone is shown the same price because that’s the practice in brick-and-mortar stores; most people wouldn’t know they’re being charged differently unless they manually compared the same product listings with other users.)</p>
</li>
<li>
<p>Data brokers could consolidate genetic information with other personal data they’ve already collected, allowing anyone willing to pay (including government agencies, journalists, militant advocacy groups, etc.) to purchase lists like "home addresses of people with ancestry in [country]".</p>
</li>
<li>
<p>So-called “grandparent scams”, in which an attacker contacts a grandparent pretending to be a grandchild in distress and urgently in need of money, are already <a href="https://www.fcc.gov/consumers/scam-alert/grandparent-scams-get-more-sophisticated">widespread</a>. The ability to purchase a database of genetic information/familial connections would supercharge this type of scam.</p>
</li>
</ul>
<p>Is any of this legal? For the most part, yes. (Apart from the scams, but scammers aren’t famous for strict adherence to the law.) In the United States, the Genetic Information Nondiscrimination Act of 2008 prevents health insurers from denying coverage or setting premiums based on genetic information, and prevents employers from making employment decisions based on it. But uses outside of these contexts are fair game. And as consumer protection agencies in the U.S. get de-fanged and dismantled, there’s no guarantee the law will be enforced.</p>
<h2>So what do we do?</h2>
<p>If you’ve used the service and want to limit how your information is shared, NPR has an <a href="https://www.npr.org/2025/03/25/nx-s1-5339695/how-delete-23andme-data-bankruptcy">easy-to-follow guide</a> to deleting your 23andMe data. If you plan to delete your data, do so sooner rather than later - once the company is acquired, the new owner may decide to remove this option.</p>
<p>If you have blood relatives who have used 23andMe (even if you haven’t), you may want to consult with them as well. Because of the nature of DNA, your relatives’ data could still come back to haunt you.</p>]]></description>
    </item>
        <item>
      <title>Wants, shoulds, and have-tos</title>
      <link>https://zkolar.xyz/posts/wants-shoulds-and-have-tos</link>
      <guid>https://zkolar.xyz/posts/wants-shoulds-and-have-tos</guid>
      <pubDate>Sun, 23 Mar 2025 00:00:00 -0400</pubDate>
      <description><![CDATA[<p>When I have a chunk of unscheduled time, my mental list of options is divided into "want tos" and "should dos". If I have items in both categories, I feel guilty for not prioritizing the "shoulds" over the "wants". Instead of picking from one or the other (or maybe even... <em>one of each</em>), I freeze up and find a way to procrastinate that doesn't involve accomplishing items from either list.</p>
<p>I'm pretty good at recognizing "have tos". If there's something that actually needs to get done (usually motivated by some external deadline), I will work on it even if it isn't high on my want list.</p>
<p>But when there's something I feel I should do, but don't have to do, it becomes a barrier to accomplishing anything. Doomscrolling to avoid explicitly picking a "want" over a "should" doesn't actually get the "should" done - it just guarantees I do <em>nothing</em>.</p>
<p>When I'm confronted by the paralysis of a "should", I need to decide whether it's a "have to". This can go both directions - if I consider something a <em>priority</em>, I need to <em>prioritize</em> it. And if it's something I can't justify prioritizing, then I need to be OK moving on. Either way, I need to spend less time choosing and more time doing.</p>
    </item>
        <item>
      <title>Rethinking online privacy, safety, and student autonomy</title>
      <link>https://zkolar.xyz/posts/rethinking-privacy-safety-and-student-autonomy</link>
      <guid>https://zkolar.xyz/posts/rethinking-privacy-safety-and-student-autonomy</guid>
      <pubDate>Thu, 27 Feb 2025 00:00:00 -0500</pubDate>
      <description><![CDATA[<p>As we’ve grown accustomed to sharing information online, technological advances have made it easy and profitable for third parties to collect, aggregate, and share this information. Digital privacy policies in many school districts have failed to keep up with the changing nature of people’s online presence and the easy accessibility of their information. This results in a significant increase in risks to student safety, well-being, and autonomy.</p>
<p>These privacy policies and practices are grounded in two outdated sets of implicit assumptions about data, privacy, and safety. The first is the 1974 Family Educational Rights and Privacy Act (FERPA). The second is conventional wisdom based on “stranger danger” concerns from the early days of the internet. While these provide a baseline to protect students, neither addresses issues that have emerged in the last 50 years.</p>
<h3>FERPA</h3>
<table>
<thead>
<tr>
<th>Outdated assumption</th>
<th>Modern reality</th>
</tr>
</thead>
<tbody>
<tr>
<td>Directory information is distributed directly to a specific, relevant audience in print form.</td>
<td>Student images, names, and other information are posted on web pages accessible to the general public and to corporations.</td>
</tr>
<tr>
<td>The initial collection of data requires significant effort and expense, which naturally limits how much is collected and could fall into the wrong hands.</td>
<td>Ease of collection and the push to be “data-driven” incentivizes collecting as much information as possible, which puts more data at eventual risk.</td>
</tr>
<tr>
<td>Specific information can only be located by manually reading and navigating these print resources.</td>
<td>Information can be searched in bulk using traditional search queries, natural language questions posed to AI chat bots, and image recognition algorithms.</td>
</tr>
<tr>
<td>When the information is no longer relevant, it is disposed of or forgotten on a shelf in someone's home.</td>
<td>Online information remains posted indefinitely unless the district takes proactive steps to remove it. If the information has been archived by a third party, there is no way to remove it.</td>
</tr>
<tr>
<td>Reproducing, sharing, extracting, or consolidating the information is a costly, time-consuming, manual process that requires physical access to the original print source.</td>
<td>Automated bots can crawl the internet to scrape and save text and images for immediate or future consolidation, processing, and reproduction.</td>
</tr>
<tr>
<td>This information is discrete, limited in scope, and generally can't provide  insight beyond its literal contents.<br><br>e.g. a directory of home addresses and phone numbers doesn’t provide any insight beyond where a person lives and how to call them at home.</td>
<td>Multiple data sources can be manually or programmatically cross-referenced to infer and predict additional information.<br><br>e.g. a photo of a student playing soccer can be cross-referenced with a schedule of practices and games to deduce where that student is likely to be after school.</td>
</tr>
</tbody>
</table>
<h3>Stranger danger</h3>
<table>
<thead>
<tr>
<th>Outdated assumption</th>
<th>Modern reality</th>
</tr>
</thead>
<tbody>
<tr>
<td>The primary online threats students face are strangers who intend to harm them physically.</td>
<td>The internet can be used by multiple categories of people to harm students, including: <ul><li> Classmates/peers with unwanted interests (bullying, romantic, etc.)</li><li> Family members and other adults in their lives who are looking for their whereabouts for harmful purposes (abuse, circumventing court restrictions, etc.)</li><li> Online “trolls” who inflict fear and intimidation even if they don’t have the means to carry out physical harm</li></ul></td>
</tr>
<tr>
<td>The primary way someone could harm a student using online information is by manually browsing websites and looking for information to identify them.</td>
<td>Advances in search tools, AI chat bots, and image recognition algorithms provide automated means for malicious individuals to quickly and efficiently traverse many sources to retrieve and surface information about students.</td>
</tr>
<tr>
<td>Any incomplete or non-specific piece of information posted about a student is useless to a stranger who is trying to identify them.<br><br>e.g. it’s safe to post an image of a first name and last initial as long as it’s not associated with other identifying information</td>
<td>Individuals who already know students and have harmful intentions can supplement their existing knowledge with limited online data<br><br>e.g. a first name and last initial could help an estranged, abusive parent confirm their child likely attends a particular school</td>
</tr>
</tbody>
</table>
<p>In addition, current policies are often optimized to protect <em>districts</em> from legal liability, not necessarily to protect <em>students</em> from current threats. These outdated assumptions leave us underprepared to respond to increasingly common situations that arise as technology and society evolve. </p>
<h2>Updating Our Policies</h2>
<p>Districts need to proactively update their privacy policies to reflect these new realities. Proactive planning allows us to create a cohesive vision and guiding principles that inform our approach to student protection. If an unanticipated situation or new technology arises, a well thought-out policy would still provide a framework for our response.</p>
<p>Waiting until an issue arises is not an option. Responding reactively means fixing damage, navigating publicity issues, and rewriting policy simultaneously. This leads to hasty, unsustainable policies that are perpetually a step behind emerging dangers. In the meantime, we are building a backlog of pictures and information that will take longer to sort through when a situation requires urgent removal. In some situations, like those involving harassment, there is no warning or time to react before the damage is done.</p>
<p>The following scenarios highlight some of the issues we must be prepared to address.</p>
<h2>Scenarios</h2>
<h3>Takedown Request</h3>
<p>The mother of a 10th grade student alerts the school that her daughter’s estranged, abusive father is moving to the area. She requests all pictures of her daughter be removed from district-related web pages to protect her identity and location. She is especially worried about pictures from when her daughter was younger, as they would be more recognizable to the father. She doesn’t want him to be able to locate her family or risk him showing up to school events.</p>
<h4>Complications</h4>
<ul>
<li>Do we have a way to locate every image of the student posted across 3 school websites, social media platforms, S’mores newsletters, etc. over the last 10 years by individual teachers, coaches, administrators, and members of the PTO?</li>
</ul>
<h3>Group Chat Drama</h3>
<p>Four years ago, a staff member posted a photo to the district Instagram page. The photo shows the back of Diego, a first-grader, holding his mom’s hand as they walk into his school for an event.</p>
<p>This week, Adam, a 5th grader, finds the picture and sends it to a group chat with Diego and other students in their grade. He adds the caption “Mama’s boy”. While his face is not visible in the photo, Diego is clearly recognizable to people who know him. In retaliation, Diego takes a picture of Adam from behind as he walks home holding his little sister’s hand and sends it back to the group chat. This back-and-forth escalates until it requires school-based intervention.</p>
<h4>Complications</h4>
<ul>
<li>When families are asked to provide consent for images of their children to be posted, is there a distinction between photos that do and don’t show the student’s face?</li>
<li>In the initial Instagram post, were Diego and his family aware that a staff member took their photo or intended to post it publicly?</li>
<li>Do we want elementary-aged students to get used to being photographed by adults they don’t know?</li>
<li>Do we uphold consistent expectations between staff and students for taking and posting photos of others?</li>
<li>Do students have the option to remove photos from district platforms as they get older and more self-conscious?</li>
</ul>
<h3>Digital Footprint Cleanup</h3>
<p>Jordan, a class of 2023 alum, wants to cull his digital footprint before he sends out internship applications. He comes across a silly track team photo posted on the team’s Instagram account. Jordan’s career counselor suggests the photo may make it harder for Jordan to present himself as a professional and advises him to take it down. Jordan contacts the high school and asks for it to be removed.</p>
<h4>Complications</h4>
<ul>
<li>If the Instagram account was run by a former employee who has since left the district, would current employees even be able to remove the image? What if they passed along the login credentials, but the two-factor authentication was tied to their personal cell phone number?</li>
<li>What if a younger student in the photo is trying to earn an athletic scholarship and relies on school social media platforms to establish their portfolio? How do we balance competing interests between people in the photo?</li>
</ul>
<h3>List of Names</h3>
<p>During a visit to an elementary school, a district staff member is impressed with a display celebrating the school community. They post a picture of the display to the district Instagram account, inadvertently sharing a legible list of the names of almost every student in the building. One of the names is “Zaya A.”, a third grader whose family has a restraining order against a relative in the aftermath of a custody battle. While the list only includes each student’s first name and last initial, Zaya is a fairly uncommon name. Given the age and general location, the relative deduces that this is probably the Zaya they are looking for and shows up at the school during dismissal.</p>
<h4>Complications</h4>
<ul>
<li>What if Zaya’s family had not signed a media release for her? Would that apply to this scenario? If not, what avenues do they have to ensure her name is not published in this way?</li>
<li>Are district-level employees in the practice of checking media releases and other opt-outs before publicly posting school-based photos?</li>
</ul>
<h3>Viral Gone Wrong</h3>
<p>Erin, a middle school student, goes viral in a video on TikTok. The comment section quickly turns negative. One commenter takes a screenshot of Erin’s face in the video and uploads it to <a href="https://faceonlive.com/">FaceOnLive</a>, which gives the commenter a list of web pages that contain other images of Erin’s face. The list includes the homepage of Erin’s school, which has a photo of her in the cast of the spring musical. The commenter posts the name, website, and phone number of Erin’s school. Soon, other commenters use this information to <a href="https://raptortech.com/resources/blog/understanding-swatting-hoaxes-fake-threats-of-school-violence/">swat</a> Erin by calling and emailing the school with threats of violence pretending to be her.</p>
<h4>Complications</h4>
<ul>
<li>FaceOnLive (like most other face search platforms) saves all “faceprints” it has encountered. Any image FaceOnLive encounters remains searchable even if it is removed from the originating webpage. Even if the picture is removed from the middle school website in an attempt to limit the fallout, a search for Erin’s face will still return previously scraped matches indefinitely.</li>
</ul>
<h3>School Outrage</h3>
<p>One of our schools gets negative news coverage due to public outrage over a recent decision. An online influencer with a large following publishes a rant against the school. An enraged viewer goes to the school’s website and finds a staff list that includes a photo of each staff member. They use <a href="https://facecheck.id/Face-Search/API">FaceCheck.ID’s API</a> to automatically and efficiently search for each staff member’s face, publishing a directory of staff names, images, and links to webpages where the staff member’s face was found. Other viewers comb through the resulting social media profiles, local news articles, and other images of the staff members and start showing up to intimidate and question them at locations they frequent like coffee shops, yoga classes, and places of worship.</p>
<h4>Complications</h4>
<ul>
<li>Because FaceCheck.ID uses face recognition to search rather than text, staff members who have taken precautions like obscuring their name in social media profiles would still be surfaced in this type of attack</li>
<li>Even if staff members don’t have their own social media profiles or have them set to private, images their friends or family have posted will also be surfaced</li>
</ul>
<h2>Reworking Our Policies</h2>
<p>Our new policies need to balance:</p>
<ol>
<li>Student safety and autonomy</li>
<li>Celebration and recognition of student success</li>
<li>Community building</li>
</ol>
<p>We need policies shaped by questions like:</p>
<ul>
<li>How do adults model consent and give students a voice when taking and sharing photos of them?</li>
<li>What happens to images after we post them? Do they remain online indefinitely? What risks does this pose? How can we mitigate them?</li>
<li>Can guardians and students revoke consent for their images to be shared? If so, what does this look like and how is it enforced? If not, how do we ensure they understand they are giving consent in perpetuity?</li>
<li>Do guardians and students need to provide a reason to request images be removed? If so, who decides whether a reason is valid? Does this change if images feature multiple people? How is this communicated when people decide whether to consent to sharing?</li>
<li>How do we ensure the district retains administrative control over social media accounts created by individual students or employees who graduate or leave the district?</li>
<li>How could the information and pictures we post be used to harm students’ physical or mental wellbeing by people they know? What safeguards can we include to reduce harm? </li>
<li>How could the information and pictures we post be ingested, consolidated, and searched en masse to harm students, families and staff? How do we account for technologies that don’t yet exist (or aren’t publicly known) that are already scraping the web and archiving this data prior to public release?</li>
</ul>
<p>There are no clear-cut answers to these questions. But if we make time to ensure our policies are robust enough to reflect our values, we will be in the best position to protect our students, staff, and community.</p>
<p>The following section provides some considerations to help guide discussions about digital privacy policies.</p>
<h2>Policy Considerations</h2>
<h3>How long do public photos remain posted by default?</h3>
<table>
<thead>
<tr>
<th>Solution</th>
<th>Strengths</th>
<th>Weaknesses</th>
</tr>
</thead>
<tbody>
<tr>
<td>Indefinitely</td>
<td><ul><li>No ongoing maintenance</li></ul></td>
<td><ul><li>Creates a large backlog of photos to search if someone requests blanket removals</li><li> Increases the chances of being locked out of forgotten platforms (e.g.  old club or team Instagram accounts)</li><li> Maximizes photos available to cause potential harm</li></ul></td>
</tr>
<tr>
<td>For a set period of time<br>(e.g. all photos are removed three semesters after they are posted)</td>
<td><ul><li>Deletion schedule only relies on the date a photo was posted </li><li> Can be built to ensure we still have content to celebrate recent accomplishments</li></ul></td>
<td><ul><li> Requires a systematic approach to reviewing photos on an ongoing basis</li></ul></td>
</tr>
<tr>
<td>Until the subjects of the photo leave their current school</td>
<td><ul><li> Allows the maximum number of relevant photos to remain posted     </li></ul></td>
<td><ul><li> Requires a systematic approach to reviewing photos on an ongoing basis</li><li> Complicated for photos with students who leave the school at different times</li><li> Leaves photos up for a relatively long time (especially in elementary school)</li></ul></td>
</tr>
</tbody>
</table>
<h3>How can we share celebrations of students and our community?</h3>
<table>
<thead>
<tr>
<th>Solution</th>
<th>Strengths</th>
<th>Weaknesses</th>
</tr>
</thead>
<tbody>
<tr>
<td>Post on public channels<br><br><em>District website, social media accounts, etc.</em></td>
<td><ul><li>Maximum sharing and engagement</li><li> No additional setup, logistics, or communications</li></ul></td>
<td><ul><li> No control over who sees content</li><li> Content can be scraped and saved by third party aggregation services</li><li> Content can be ingested and searched by individuals</li><li> Content can be used to train or create technology we don’t currently know about</li><li> Need to keep track of and maintain control over social media accounts</li><li> Needs a policy for content removal</li></ul></td>
</tr>
<tr>
<td>Post on password-protected web pages</td>
<td><ul><li>Data is unlikely to be scraped by bots even if the password is shared with other individuals</li><li> No need to create and communicate individual login credentials</li><li> No need for community members to create accounts or install apps</li></ul></td>
<td><ul><li> Password must be communicated to community members</li><li> No way to track who the password has been shared with and whether they’re meant to have it</li><li>Needs a policy for changing passwords </li><li> A shared password is more secure than no password but less secure than individual passwords. Without proper communication about the rationale behind this decision, it could create the appearance of poor security practices </li></ul></td>
</tr>
<tr>
<td>Direct message platforms<br>(e.g. messaging app, text messages, emails)</td>
<td><ul><li>Messaging is targeted directly to people who need it</li><li>Information can’t be scraped by bots </li></ul></td>
<td><ul><li> Ongoing costs</li><li> Requires setup, troubleshooting, and maintenance</li><li>Information can be individually shared with others  </li></ul></td>
</tr>
<tr>
<td>Restricted online communities <br>(e.g private Facebook group)</td>
<td><ul><li>Built in two-way engagement</li><li> Maintains a list of who has access to information </li></ul></td>
<td><ul><li>Community members need an account with this service</li><li> Subject to third party rules, data practices, and availability</li><li> Requires ongoing community maintenance (approving members, removing old members, moderation, etc.)</li></ul></td>
</tr>
<tr>
<td>Paper-based communications</td>
<td><ul><li>Requires the most effort to share beyond the original recipient</li></ul></td>
<td><ul><li> Expensive</li><li> Not environmentally friendly</li><li> Less convenient for recipients to keep track of  </li></ul></td>
</tr>
</tbody>
</table>]]></description>
    </item>
        <item>
      <title>Best Books of 2024</title>
      <link>https://zkolar.xyz/posts/best-books-of-2024</link>
      <guid>https://zkolar.xyz/posts/best-books-of-2024</guid>
      <pubDate>Sun, 22 Dec 2024 00:00:00 -0500</pubDate>
      <description><![CDATA[<p>Each year, I keep a "best of" list with the books I've enjoyed and recommend the most. Here's the 2024 list in the order I read them. Lists from previous years are <a href="https://zkolar.xyz/books">here</a>.</p>
<h2>Digital Minimalism: Choosing a Focused Life in a Noisy World</h2>
<p>By Cal Newport</p>
<p>Cal Newport offers strategies to make our relationship with digital devices more intentional. <em>Digital Minimalism</em> directly and indirectly inspired changes to my use of technology that I've kept up pretty consistently throughout the year.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/5c6f6141b2-1734783394/digital-minimalism.png" alt="Book cover of  Digital Minimalism: Choosing a Focused Life in a Noisy World by Cal Newport" /></p>
<p><a href="https://app.thestorygraph.com/books/2e5af995-80a6-444e-b287-03b6414492ef">StoryGraph</a> |  <a href="https://bookshop.org/p/books/digital-minimalism-choosing-a-focused-life-in-a-noisy-world-cal-newport/12081448?ean=9780525536512">Bookshop.org</a></p>
<hr />
<h2>24/6: The Power of Unplugging One Day a Week</h2>
<p>By Tiffany Shlain</p>
<p>Similar to <em>Digital Minimalism</em>, <em>24/6</em> suggests realistic ways to reduce the amount of time we spend with screens and meaningful alternatives for using that time.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/9705eb122f-1734784092/24-6.jpeg" alt="Book cover of 24/6: The Power of Unplugging One Day a Week by Tiffany Shlain" /></p>
<p><a href="https://app.thestorygraph.com/books/a414f960-549c-4ee2-bc6d-803004377899">StoryGraph</a> | <a href="https://bookshop.org/p/books/24-6-giving-up-screens-one-day-a-week-to-get-more-time-creativity-and-connection-tiffany-shlain/6706988?ean=9781982116873">Bookshop.org</a></p>
<hr />
<h2>You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place</h2>
<p>By Janelle Shane</p>
<p>Through funny metaphors and real-life examples, Janelle Shane explains the inner workings and shortcomings of various technologies under the AI umbrella. No technical background necessary to follow along.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/0dfc3b8201-1734783484/you-look-like-a-thing-and-i-love-you.jpeg" alt="Book cover of You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It&#039;s Making the World a Weirder Place by Janelle Shane" /></p>
<p><a href="https://app.thestorygraph.com/books/603304cb-d45d-4ecc-955d-2ed6ee70b3e1">StoryGraph</a> | <a href="https://bookshop.org/p/books/you-look-like-a-thing-and-i-love-you-how-artificial-intelligence-works-and-why-it-s-making-the-world-a-weirder-place-janelle-shane/114149?ean=9780316525220">Bookshop.org</a></p>
<hr />
<h2>Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence</h2>
<p>By Kate Crawford</p>
<p>Kate Crawford breaks down the politics and power dynamics that go into creating various AI systems, including:</p>
<ul>
<li>the environmental and economic impacts of extracting materials (usually from the Global South) to create hardware to run AI-based technology</li>
<li>the sources of data, often from the most vulnerable members of society, used to train AI models</li>
<li>the underpaid, highly surveilled behind-the-scenes human labor required to label data used to train AI models</li>
<li>the power and politics of classification that lead to these labels</li>
</ul>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/6cf9d19001-1734783531/atlas-of-ai.jpeg" alt="Book cover of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford" /></p>
<p><a href="https://app.thestorygraph.com/books/a93a498d-91ab-4b42-91b1-49a3ba3cb132">StoryGraph</a> | <a href="https://bookshop.org/p/books/atlas-of-ai-power-politics-and-the-planetary-costs-of-artificial-intelligence-kate-crawford/17465404?ean=9780300264630">Bookshop.org</a></p>
<hr />
<h2>How You Say It: Why We Judge Others by the Way They Talk--And the Costs of This Hidden Bias</h2>
<p>By Katherine D. Kinzler</p>
<p>Katherine Kinzler demonstrates how our brains are hard-wired to categorize people by their accents (more inherently than biases such as race and gender, which we learn through socialization) and the impacts of this hidden bias with no legal protections.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/06d0ce7cc7-1734783684/how-you-say-it.jpeg" alt="Book cover of How You Say It: Why We Judge Others by the Way They Talk--And the Costs of This Hidden Bias by Katherine D. Kinzler" /></p>
<p><a href="https://app.thestorygraph.com/books/0574130c-d042-42bc-a8e7-b6c2df1a3ac3">StoryGraph</a> | <a href="https://bookshop.org/book/9780358567103">Bookshop.org</a></p>
<hr />
<h2>Means of Control: How the Hidden Alliance of Tech and Government is Creating a New American Surveillance State</h2>
<p>By Byron Tau</p>
<p>Byron Tau explores the evolution of national security, data brokerage, and surveillance for profit over the last few decades, leading to the circumvention of the 4th Amendment by distinguishing "seized" data from "purchased" data.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/d946cf6922-1734783728/means-of-control.jpeg" alt="Book cover of Means of Control: How the Hidden Alliance of Tech and Government is Creating a New American Surveillance State by Byron Tau" /></p>
<p><a href="https://app.thestorygraph.com/books/168523fe-76fd-43af-810f-5ed9f5aea07d">StoryGraph</a> | <a href="https://bookshop.org/book/9780593443224">Bookshop.org</a></p>
<hr />
<h2>Humble Pi: When Math Goes Wrong in the Real World</h2>
<p>By Matt Parker</p>
<p>In <em>Humble Pi</em>, Matt Parker uses humor and case studies to show the ways mathematical mistakes lead to real-world consequences.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/196325a219-1734783863/humble-pi.jpeg" alt="Book cover of Humble Pi: When Math Goes Wrong in the Real World by Matt Parker" /></p>
<p><a href="https://app.thestorygraph.com/books/9a64cef5-4f89-41e9-86c9-7199580ab7c5">StoryGraph</a> | <a href="https://bookshop.org/book/9780593084694">Bookshop.org</a></p>
<hr />
<h2>Unmasking AI: My Mission to Protect What Is Human in a World of Machines</h2>
<p>By Dr. Joy Buolamwini</p>
<p>Dr. Joy Buolamwini tells the story of her journey into the world of AI research to illuminate the ways these technologies reinforce societal biases and power structures when left unchecked.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/be29b062d6-1734784016/unmasking-ai.jpeg" alt="Book cover of Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini" /></p>
<p><a href="https://app.thestorygraph.com/books/1b62ff0d-df83-4554-8da2-29f43398a577">StoryGraph</a> | <a href="https://bookshop.org/book/9780593241844">Bookshop.org</a></p>
<hr />
<h2>Attack from Within: How Disinformation is Sabotaging America</h2>
<p>By Barbara McQuade</p>
<p>Barbara McQuade traces the ways disinformation has been used by past and present authoritarians to alter the perception of reality and seize power.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/1adf457a55-1734783911/attack-from-within.jpeg" alt="Book cover of Attack from Within: How Disinformation is Sabotaging America by Barbara McQuade" /></p>
<p><a href="https://app.thestorygraph.com/books/f01d076e-86cb-4c2f-8b39-dc7f8197d439">StoryGraph</a> | <a href="https://bookshop.org/book/9781644213636">Bookshop.org</a></p>
<hr />
<h2>Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts</h2>
<p>By Annie Duke</p>
<p><em>Thinking in Bets</em> frames decision-making as an exercise in probabilities. We can never be 100% certain of a given outcome, so sometimes the best decisions lead to bad results and vice-versa. Rather than reveling in the clarity of hindsight, we must embrace uncertainty and make the best decisions we can with the information available to us at the time. </p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/2d054edb3b-1734783965/thinking-in-bets.jpeg" alt="Book cover of Thinking in Bets: Making Smarter Decisions When You Don&#039;t Have All the Facts by Annie Duke" /></p>
<p><a href="https://app.thestorygraph.com/books/c7410394-dc00-4249-8ad3-bf98a718e1f4">StoryGraph</a> | <a href="https://bookshop.org/book/9780735216372">Bookshop.org</a></p>
<hr />
<h2>AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference</h2>
<p>By Sayash Kapoor and Arvind Narayanan</p>
<p>This is my new favorite AI book. Sayash Kapoor and Arvind Narayanan break down the realities and limitations of various AI technologies to help readers cut through the hype and understand what is actually possible. No technical background is necessary to follow this book.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/e2055db739-1734979754/ai-snake-oil.jpeg" alt="Book cover of AI Snake Oil: What Artificial Intelligence Can Do, What It Can&#039;t, and How to Tell the Difference by  Sayash Kapoor and Arvind Narayanan" /></p>
<p><a href="https://app.thestorygraph.com/books/f2e8cb20-643a-474e-8312-0c2b43be6910">StoryGraph</a> | <a href="https://bookshop.org/p/books/ai-snake-oil-what-artificial-intelligence-can-do-what-it-can-t-and-how-to-tell-the-difference-arvind-narayanan/21324674?ean=9780691249131">Bookshop.org</a></p>
<hr />
<h2>Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy As We Know It</h2>
<p>By Kashmir Hill</p>
<p>Kashmir Hill documents the creation and rise of Clearview AI, a secretive company whose facial recognition tool matches faces against its database of over 30 billion images scraped from the internet.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/b5034d5d91-1734784180/your-face-belongs-to-us.jpeg" alt="Book cover of Your Face Belongs to Us: A Secretive Startup&#039;s Quest to End Privacy As We Know It by Kashmir Hill" /></p>
<p><a href="https://app.thestorygraph.com/books/5b809184-c34a-432a-b14e-7ff155afc67e">StoryGraph</a> | <a href="https://bookshop.org/book/9780593448571">Bookshop.org</a></p>
<hr />
<h2>Border Hacker: A Tale of Treachery, Trafficking, and Two Friends on the Run</h2>
<p>By Levi Vonk and Axel Kirschner</p>
<p>Anthropologist Levi Vonk joins migrants in caravans and shelters on the dangerous journey from Guatemala to Mexico to the United States. Along the way, Levi befriends Axel Kirschner, who was born in Guatemala, brought to the U.S. as a young child by his mother, and eventually deported back to Guatemala after a minor traffic incident in which he was not at fault. After arriving in Guatemala, he discovers his birth records have been destroyed and no country has a record of his existence. He and Levi attempt to navigate the hostile immigration system to reunite Axel with his wife and young children.</p>
<p><img src="https://zkolar.xyz/media/pages/books/best-of-2024/ff0041e5f0-1734784228/border-hacker.jpeg" alt="Book cover of Border Hacker: A Tale of Treachery, Trafficking, and Two Friends on the Run by Levi Vonk and Axel Kirschner" /></p>
<p><a href="https://app.thestorygraph.com/books/96c78d62-acc9-4b9f-be98-bbeadda558a7">StoryGraph</a> | <a href="https://bookshop.org/book/9781645037064">Bookshop.org</a></p>]]></description>
    </item>
        <item>
      <title>Why can&#8217;t LLMs understand?</title>
      <link>https://zkolar.xyz/posts/why-cant-llms-understand</link>
      <guid>https://zkolar.xyz/posts/why-cant-llms-understand</guid>
      <pubDate>Mon, 25 Nov 2024 00:00:00 -0500</pubDate>
      <description><![CDATA[<p>I have a language made up of the following emojis:</p>
<p>🚓 🚕 🚜<br />
🔺 🔻<br />
🥦 🍋 🍅<br />
🤿 ⛸️ 🏐</p>
<p>Here is a list of "sentences" written in this language. Do you notice any patterns?</p>
<p>🚕🔺🥦<br />
🚜🔻⛸️<br />
🚓🔻🍋<br />
🚜🔺🤿</p>
<details>
  <summary>Feeling stuck? Click here for a few example patterns.</summary>
<ul>
<li>Each sentence is made up of exactly three emojis</li>
<li>Each sentence starts with a vehicle</li>
<li>Each sentence has a triangle in the middle</li>
</ul>
</details>
<p>Click in the blank spaces below to see if you can create your own coherent sentences. Use the "test" button to check. In this context, "coherent" means a native speaker of the emoji language would believe a sentence was written by another native speaker.</p>
<p>If you're unsure where to start, try guessing random sequences. Use the feedback to refine your guesses.</p>
<script type="text/javascript" src="https://llmtraining.techlit.tools/embed/v1.js"></script>
<p><a href="https://llmtraining.techlit.tools" rel="noreferrer" target="_blank">Open sentence tester in a new window</a></p>
<p>If you play with the tester long enough, you'll eventually come up with enough patterns to create a coherent sentence every time.</p>
<p>Now consider this sequence:</p>
<p>🚜🔺⛸️</p>
<p>The tester will tell you this sentence is coherent. But is it <em>true</em>?</p>
<h2>Missing context</h2>
<p>You may have a handle on which emojis are likely to appear in a particular order. But at this point, we can only arrange the emojis relative to each other. There's no context for understanding what the emojis map to outside the language itself.</p>
<p>Maybe each emoji represents a word or concept we're familiar with. Or a pitch in a humpback whale song. Or a frequency in a radio signal originating from somewhere in space.</p>
<p>To construct a coherent sentence, we don't need to know what the emojis represent. We just recreate and extrapolate from the patterns we've observed. But to discern meaning and evaluate truth, we’d need additional context.</p>
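<p>To make this concrete, here is a minimal sketch of a pattern-only coherence checker. It assumes only the positional rules hinted at above (vehiclein front, triangle in the middle, something else at the end) — the actual tester may enforce more rules than this. The point is what the code does <em>not</em> need: any knowledge of what the emojis mean.</p>

```python
# A pattern-only "coherence" checker. The category groupings below are
# assumptions inferred from the example sentences; the checker judges
# sentences without knowing what any emoji stands for.

SUBJECTS = {"🚓", "🚕", "🚜"}                      # vehicles: always first
VERBS = {"🔺", "🔻"}                               # triangles: always second
PREDICATES = {"🥦", "🍋", "🍅", "🤿", "⛸️", "🏐"}  # always third

def is_coherent(sentence):
    """Check a sentence (a list of emojis) against the observed patterns.

    Note what this does NOT do: it says nothing about truth. A sentence
    like "the weather is snowing" passes even on a sunny day.
    """
    if len(sentence) != 3:
        return False
    first, second, third = sentence
    return first in SUBJECTS and second in VERBS and third in PREDICATES

print(is_coherent(["🚕", "🔺", "🥦"]))  # True: matches every pattern
print(is_coherent(["🔺", "🚕", "🥦"]))  # False: wrong word order
```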
<p>Increasing data and processing power isn't the same as providing context. Even if we had hundreds of emojis, thousands of example sentences, and unlimited time to test patterns, the best we could do is identify more patterns and produce more complex sentences. But we wouldn't be any closer to understanding what the patterns represent.</p>
<h2>Back to the emojis</h2>
<p>🚜🔺⛸️ is not true (at least as I’m writing this). We can add this feedback to the list of rules and patterns, avoiding that particular sentence in the future.</p>
<p>But we still don't know <em>why</em> it's untrue. And unless we get a list that enumerates every possible false statement in the language, we'll have to rely on external feedback to compensate for our lack of understanding.</p>
<h2>Translation</h2>
<p>Here are translations from the emoji language into English:</p>
<p>🚓 = The traffic light<br />
🚕 = The banana<br />
🚜 = The weather</p>
<p>🔺 = is<br />
🔻 = is not</p>
<p>🥦 = green<br />
🍋 = yellow<br />
🍅 = red</p>
<p>🤿 = raining<br />
⛸️ = snowing<br />
🏐 = sunny</p>
<p>This means 🚜🔺⛸️ translates to “The weather is snowing”. Even this is not enough context on its own. In Massachusetts in late November, it's false. But in another place or at a different time, the same sentence may be true. </p>
<h2>LLMs</h2>
<p>Large language models work by finding patterns to construct coherent sentences. They lack the context necessary to “understand” what they’re generating. As humans provide feedback about problematic or untrue statements, they "learn" specific patterns to avoid. But this is not the same as generalizable understanding.</p>
<p>Regardless of advances in hardware and training data, these models will always be prone to constructing false sentences (or “hallucinating”). In the disclaimer “ChatGPT may make mistakes”, there is no implicit “for now”.</p>]]></description>
    </item>
        <item>
      <title>My generative AI guidelines</title>
      <link>https://zkolar.xyz/posts/my-generative-ai-guidelines</link>
      <guid>https://zkolar.xyz/posts/my-generative-ai-guidelines</guid>
      <pubDate>Tue, 12 Nov 2024 00:00:00 -0500</pubDate>
      <description><![CDATA[<p>As we grapple with the current generative AI boom, it is important to consider the implications and impacts of such tools. This is my personal policy on utilizing generative AI for instructional materials. It's a living document that I intend to change as I expand my own understanding and new technologies take root. I would love to <a href="/contact">hear feedback</a>.</p>
<p>This list specifically refers to learning materials I create for student use (e.g. graphics, text content, etc.). It does not cover cases where students interact with generative AI platforms and tools.</p>
<h2>Generative AI content cannot be used as a source of information.</h2>
<p>Large language models (e.g. ChatGPT) and text-to-image models (e.g. DALL-E)  are prone to constructing false, biased, and misleading text and images. Even with refinements to limit known blatant falsehoods and biases, technical limitations of these models prevent them from ever fully dropping these habits.</p>
<p>Using generated media as a source of information risks introducing biased and incorrect information. More fundamentally, it risks teaching students to utilize these models as a reliable source.</p>
<h2>The use of generative AI should be disclosed to students.</h2>
<p>I want to be transparent about my uses and rationales of these tools to help students learn to recognize the content “in the wild”. This also models responsible disclosure of the use of AI-generated content.</p>
<h2>Inaccuracies, flaws, and biases that come to light in generated materials should be addressed with students to help them become critical creators and consumers of AI-generated content.</h2>
<p>Part of using these tools responsibly is helping students understand their strengths and limitations. Addressing real-life examples of these limitations will help students differentiate contexts where their use is or isn’t appropriate.</p>
<h2>The use of generative AI tools cannot reduce or replace paid human labor that would be utilized if the tools did not exist.</h2>
<p>I only use AI tools to augment and improve the materials I would otherwise personally create. AI cannot be used as a cost-cutting measure that impacts people’s livelihood.</p>
<h2>Materials created or supplemented with generative AI content cannot generate revenue.</h2>
<p>All AI-generated media is a collective product of training data created by countless individual creators that aren’t able to be named or compensated. As such, any materials I create that utilize such media don’t fully belong to me. If I share them beyond my students, it must be in a way that others can freely use and build upon.</p>
<h2>The use of generative AI should contribute to student learning and engagement.</h2>
<p>Training and generating AI content (especially images and videos) consumes a lot of computing power. At scale, this leaves a large environmental footprint. Uses of these tools should be focused on creating reusable media that will have a tangible, positive impact on student learning. Experiments and excessive revisions should be limited.</p>]]></description>
    </item>
        <item>
      <title>Words without thoughts</title>
      <link>https://zkolar.xyz/posts/words-without-thoughts</link>
      <guid>https://zkolar.xyz/posts/words-without-thoughts</guid>
      <pubDate>Mon, 20 May 2024 00:00:00 -0400</pubDate>
      <description><![CDATA[<p>Language is a proxy for thoughts. We can't send information directly between brains, so we use language to  approximate what we're thinking. When we hear or read what others have said, we don't just think about the words. We attempt to recreate and understand the original thought behind them. </p>
<p>Sometimes, we encounter thoughts without words. An infant can't say "I'm hungry", but they know when they have an unpleasant feeling. They may even know the solution, or at least that they need help.</p>
<p>But prior to the recent explosion of generative AI, we never encountered words without thoughts. Coherent sentences that aren't simplifications of a lifetime of thoughts, feelings, and experiences are foreign to us. Our expectation that words imply thoughts leads us to overestimate the intelligence of these models.</p>
    </item>
        <item>
      <title>When measures meet reality</title>
      <link>https://zkolar.xyz/posts/when-measures-meet-reality</link>
      <guid>https://zkolar.xyz/posts/when-measures-meet-reality</guid>
      <pubDate>Sat, 27 Jan 2024 00:00:00 -0500</pubDate>
      <description><![CDATA[<p>In response to <a href="https://hachyderm.io/@mekkaokereke/111811357308187714">Mekka Okereke</a>:</p>
<blockquote>
<p>The 10 women might answer the "Can people from all backgrounds succeed here" question with an answer of only 50% positive.</p>
<p>The 100 men, after observing the women's experience, usually lower their scores.🤯 They might now answer the question with an average score of 70% positive.</p>
<p>This is a good thing!</p>
<p>The world that exists in the minds of your men employees, and the one that exists in the minds of your women employees, are coming closer into alignment with the way the world actually exists.</p>
</blockquote>
<p>(<a href="https://hachyderm.io/@mekkaokereke/111811238685187340">Full thread</a>)</p>
<p>This reminds me of "A Nation At Risk", the Reagan-era report about the state of American education. It lamented lower average SAT scores, which was true but misleading. At the time, the college-bound population was growing. More students who had generally been excluded (including women, Black people, people from low-income backgrounds) were able to attend. This means the pool of SAT test-takers was bigger and more diverse than in prior years.</p>
<p>The dip reflected not a decline in the overall quality of education, but a <a href="https://www.npr.org/sections/ed/2018/04/29/604986823/what-a-nation-at-risk-got-wrong-and-right-about-u-s-schools">more holistic picture</a> of the landscape at the time. When the scores were separated by subgroup, most groups' averages had actually improved from previous years (<a href="https://en.wikipedia.org/wiki/Simpson&#039;s_paradox">Simpson’s paradox</a>). That said, this exposed long-lived inequities between the opportunities afforded to these groups that needed (and still need) to be fixed.</p>
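<p>The arithmetic behind Simpson’s paradox is easy to see with a toy calculation. The numbers below are made up for illustration (not the actual SAT figures): every subgroup’s average rises year over year, yet the overall average falls, purely because the composition of the test-taking pool shifts.</p>

```python
# Simpson's paradox with hypothetical numbers: each subgroup's mean score
# improves, but the overall mean drops because the pool of test-takers
# grows and its composition changes.

def overall_average(groups):
    """groups: list of (test_taker_count, mean_score) pairs."""
    total = sum(count for count, _ in groups)
    return sum(count * mean for count, mean in groups) / total

year1 = [(900, 520), (100, 420)]    # small, homogeneous pool
year2 = [(1000, 530), (600, 430)]   # both subgroups improve by 10 points

print(overall_average(year1))  # 510.0
print(overall_average(year2))  # 492.5 -- lower overall, despite the gains
```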
<p>With more nuance in reporting and interpretations, the country could have addressed systemic inequities that disproportionately limit opportunities available to Black, Hispanic, and Indigenous children. Instead, the one-size-fits-all approach has perpetuated them through policies like No Child Left Behind, Race to the Top, and the general obsession with high-stakes testing.</p>
<p>View on Mastodon:<br />
<a href="https://mastodon.social/@zakkolar/111819595393574311">Part 1</a> <a href="https://mastodon.social/@zakkolar/111819596369889118">Part 2</a> <a href="https://mastodon.social/@zakkolar/111819597325323405">Part 3</a></p>]]></description>
    </item>
        <item>
      <title>Fixing Autocorrect Woes</title>
      <link>https://zkolar.xyz/posts/fixing-autocorrect-woes</link>
      <guid>https://zkolar.xyz/posts/fixing-autocorrect-woes</guid>
      <pubDate>Fri, 05 Jan 2024 00:00:00 -0500</pubDate>
      <description><![CDATA[<p>Every few months, my iPhone’s keyboard seems to degrade. More mistakes slip past autocorrect, and I have to type words 3-4 times in a row to remove typos. Sometimes, it even adds its own typos when I get a word right.</p>
<p>My theory is I rely on autocorrect too much, and that creates a feedback loop over time. I tend to make the same mistakes, like tapping the space bar too close to the N and getting phrases like “I amngoingnto”. In the beginning, autocorrect assumes this is a mistake and fixes it. But as my sloppy typing turns into muscle memory, the keyboard’s machine learning algorithm treats the frequency as intentional and adjusts accordingly. It starts ignoring my most frequent stray Ns and even replaces spaces with Ns.</p>
<p>Whatever the reason, I’ve found resetting my keyboard dictionary when these issues arise tends to help. To do this on iOS, go to <strong>Settings</strong> &gt; <strong>General</strong> &gt; <strong>Transfer or Reset</strong> &gt; <strong>Reset</strong> &gt; <strong>Reset Keyboard Dictionary</strong>. These taps get progressively scarier - I’m always afraid I’ll accidentally reset the entire phone. But, at least in iOS 17, the <strong>Reset</strong> button triggers another prompt to choose specifically what to reset.</p>
<p>I go through this process every few months. There’s a small adjustment period in the beginning while the keyboard re-learns my most frequently used words, but this is much less annoying than fighting the bad habits it picks up over time.</p>]]></description>
    </item>
        <item>
      <title>Digital Minimalism Book Takeaways</title>
      <link>https://zkolar.xyz/posts/digital-minimalism-book-takeaways</link>
      <guid>https://zkolar.xyz/posts/digital-minimalism-book-takeaways</guid>
      <pubDate>Wed, 03 Jan 2024 00:00:00 -0500</pubDate>
      <description><![CDATA[<p>Finished reading: <a href="https://micro.blog/books/9780525536512">Digital Minimalism</a> by Cal Newport 📚</p>
<p>This is a great complement to <a href="https://micro.blog/books/9780593138526">Stolen Focus</a> by Johann Hari. It has strategies to help reprioritize your time, attention, and relationship to technology. It’s not an all-or-nothing approach - you can pick out which approaches will work for you and skip over the ones that may be less relevant or beneficial.</p>
<p>These are some ideas I’m trying, some directly taken from the book and others indirectly inspired.</p>
<h2>Thinking time</h2>
<p>I’m setting aside 30 minutes each evening to just sit without my phone or any content to consume (TV, books, etc). I’ve decided to allow music as long as it’s instrumental or I can tune out the lyrics. While the goal of this time isn’t explicitly to write, I’ve already filled a few notebook pages with thoughts and ideas from doing this the last few nights.</p>
<h2>Apple Watch</h2>
<p>I’ve stopped wearing my Apple Watch during the day. I’ve already noticed a huge improvement in concentration without getting disrupted by notifications the last few days. Since I’m so used to the watch, I’m not in the habit of checking my phone frequently without being prompted. I want to avoid falling into that when the novelty wears off.</p>
<h2>Screen time limits</h2>
<p>Mostly for privacy purposes, I deleted many social media apps from my phone several years back and switched to the mobile websites when I wanted to check them. I use the Duck Duck Go browser for social media sites. This was another privacy move initially, but an added benefit is that it signs me out whenever I clear my tabs, so I need to sign back into my accounts each time. This adds just enough friction to prevent mindless scrolling. It’s also reduced my impulse Amazon purchases.</p>
<p>All that said, I’ve found other apps to get sucked into that are more privacy-friendly (Readwise Reader, Mona for Mastodon, micro.blog). I’m going back and forth between setting screen time limits and deleting some of those apps as well. </p>
<h2>Conversation Office Hours</h2>
<p>One of my favorite suggestions from the book is to create “conversation office hours”. This is a dedicated recurring time when people can call you (or drop by) to have casual, yet deeper interactions than social media comments or even texting provide. It reduces the anxiety and awkwardness around calling out of the blue because callers know they aren’t interrupting you. At the same time, it’s organic and flexible because you aren’t scheduling specific times with specific people.</p>
<p>One example in the book is a person whose “office hours” are during his evening commute. His friends and family know they can call any weekday at 5:30 and he’ll be available. My commute is (thankfully) short, but I could use the time when I prep/cook dinner. Turning this into a routine might even help avoid the temptation to order out, which is another goal of mine.</p>]]></description>
    </item>
      </channel>
</rss>