A lesson plan: Exploring the manifestation of information biases through engagement with personalized content.
Logistics
The lesson plan proposed below will run 50 minutes, including content, activities, and discussion opportunities.
The lesson will be held online on WebEx, to support accessibility and to accommodate restrictions on meeting in person during the Covid-19 pandemic. WebEx supports a large number of participants, has a chat portal for real-time feedback, and can record the session for later sharing. Students can join the online classroom through a meeting link that will be provided. The content materials will be presented through WebEx's screen-sharing capability.
Focus
I would like to explore the current landscape of information biases by analyzing the technologies behind suggestive content orchestration. Through exploring this topic, I wish to raise individuals' awareness so that they can challenge their information-gathering habits and help mitigate the negative psychological and emotional impacts of advancing technologies moving forward.
Subsequently, I wish to educate fellow designers and software producers to be mindful of both the positive and negative impacts of suggestive technologies and algorithms, so that positive outcomes can be further enhanced.
Motivation
Currently working as a user experience software designer, I am unsure whether providing findable information through personalized algorithms is the best solution for end users' information literacy. How does this interaction affect software consumers' information-gathering process? How do end users determine whether software organizations are making the right suggestions for them? Is this ethical?
Intended audience
Information consumers and software development stakeholders that have a relationship with suggestive content (e.g., Netflix, Instagram, Twitter, Facebook, Amazon, Google)
Goals for Part 1: Understanding the current landscape of suggestive (personalized) algorithm driven digital platforms
Menti: Sentiment analysis activity using word cloud generator
The goal of this activity is for me to get a better sense of the students' preferences and expectations when engaging with digital products. Understanding this will help me direct the scope of the presentation moving forward. A minimal sketch of the word tally behind such a word cloud follows.
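For instructors curious about the mechanics, here is a minimal offline sketch of what a word-cloud generator such as Menti does under the hood: tallying word frequencies across free-text responses so the most common words can be drawn largest. The sample responses and stopword list are invented for illustration, not taken from any real session.

```python
# A minimal sketch of the word tally behind a word cloud: count how often
# each word appears across responses; the counts drive the display sizes.
# Sample responses and the stopword list are illustrative assumptions.

from collections import Counter
import re

def word_frequencies(responses, stopwords=frozenset({"the", "a", "and", "to", "i"})):
    """Return a Counter of lowercase words across all responses,
    excluding common stopwords."""
    words = []
    for response in responses:
        words.extend(w for w in re.findall(r"[a-z']+", response.lower())
                     if w not in stopwords)
    return Counter(words)

if __name__ == "__main__":
    sample = [
        "I expect relevant and fast suggestions",
        "Fast, personalized content",
        "Relevant recommendations save time",
    ]
    # The top counts determine the largest words in the cloud.
    for word, count in word_frequencies(sample).most_common(5):
        print(word, count)
```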
Lecturette: Software built to support your information needs and preferences
This lecturette highlights how digital platforms are designed and deployed, with examples including Netflix, Facebook, and Amazon. All of these tools are designed to help the user find relevant information faster. For instance, Netflix targets different film artworks to suit users' viewing habits: based on the most-viewed movies and shows, artwork is surfaced to match each user's preferred genres and actors. Another interesting example is Amazon, whose justification for the suggestive shopping experience is that 'unpersonalized' content is overwhelming for the user (Amazon). It also benefits the company by creating customer engagement and conversion through targeted marketing and promotions. Lastly, Facebook ranks content using categories including: who the user interacts with; the type of media in the post (video, link, photo); and the popularity of the post (Hootsuite). A toy sketch of such a ranking appears below.
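To make the ranking idea concrete, here is a minimal sketch of a feed ranker built only from the three signals named above. This is not Facebook's actual algorithm; every weight, field name, and the scoring formula are illustrative assumptions for class discussion.

```python
# A toy feed ranker (NOT Facebook's real algorithm) using the three signals
# described above: the viewer's affinity with the author, a weight for the
# post's media type, and the post's popularity. All values are assumptions.

from dataclasses import dataclass

# Assumed relative weights for each media type (illustrative only).
MEDIA_WEIGHTS = {"video": 1.5, "photo": 1.2, "link": 1.0, "text": 0.8}

@dataclass
class Post:
    author: str
    media_type: str   # "video", "photo", "link", or "text"
    likes: int
    comments: int

def rank_feed(posts, affinity):
    """Order posts by a toy score: affinity * media weight * popularity.

    `affinity` maps an author to how often the viewer interacts with them
    (0.0 to 1.0). Popularity is approximated by likes plus comments.
    """
    def score(post):
        interaction = affinity.get(post.author, 0.1)  # default: weak tie
        media = MEDIA_WEIGHTS.get(post.media_type, 1.0)
        popularity = post.likes + 2 * post.comments   # comments weighted higher
        return interaction * media * popularity

    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("close_friend", "photo", likes=12, comments=3),
        Post("brand_page", "video", likes=900, comments=40),
        Post("acquaintance", "link", likes=5, comments=1),
    ]
    # The viewer interacts with close_friend often, brand_page rarely.
    for post in rank_feed(feed, affinity={"close_friend": 0.9, "brand_page": 0.2}):
        print(post.author, post.media_type)
```

A useful discussion point falls out of even this toy version: because the score is a single product, a low-affinity but highly popular post (the brand page's video) can still outrank a close friend's post, which is one way suggestive ranking can skew what users see.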
Goals for Part 2: The process of yielding information biases in the human brain
In this section, different types of biases will be introduced to the students, following Daniel Kahneman's book Thinking, Fast and Slow, to explore how consumed information is processed to create biases. Once the types are introduced, students will become aware of how they consume information and how this shapes their thinking process.
The subsequent activity is designed to challenge how students engage with suggestive content by identifying the pros and cons of the digital products they use. They will be put into groups for a discussion about how their content is organized and whether they find the system useful.
The activity debrief is designed for me to get a better understanding of their discussion, with questions to each group including: how they identified the pros and cons; how, based on their experiences, this affects their thinking process; and how they would determine whether the suggestive content is the right information for them.
Goals for Part 3: Balancing the negative and positive impacts through suggestive (personalized) content
Based on the feedback collected from the previous activity, Part 3 explores ways of balancing the negative and positive impacts of suggestive media. Specifically, students will start challenging any given information, weighing what is provided against other perspectives and sources.
One way to reduce bias creation is for digital organizations to change how we engage with their platforms by surfacing usage and source analytics as a form of information literacy. The diagram below depicts how the reinforcing loop of bias creation (right) can be balanced by organizations surfacing usage and source analytics, which builds information literacy and in turn decreases the time spent on digital platforms (left). A toy simulation of these two loops follows the diagram.
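As a rough illustration of the diagram's dynamics, the sketch below simulates the two loops with invented coefficients, in the spirit of Meadows' stock-and-flow models: a reinforcing loop where time on a platform increases bias, which drives more time on the platform, and a balancing loop where surfaced usage and source analytics build information literacy, which reduces time spent. All rates are assumptions chosen only to show the qualitative behavior, not measured values.

```python
# A toy simulation of the two loops in the diagram (all rates are invented):
# - Reinforcing loop: time on platform -> bias -> more time on platform.
# - Balancing loop: surfaced analytics -> information literacy -> less time.

def simulate(steps=10, analytics_on=True):
    time_on_platform = 1.0   # arbitrary units
    bias = 0.0
    literacy = 0.0
    for step in range(steps):
        # Reinforcing loop: more time creates more bias, which adds time.
        bias += 0.1 * time_on_platform
        growth = 0.2 * bias
        # Balancing loop: analytics build literacy, which dampens time.
        if analytics_on:
            literacy += 0.15 * time_on_platform
        damping = 0.25 * literacy
        time_on_platform = max(0.0, time_on_platform + growth - damping)
        print(f"step {step}: time={time_on_platform:.2f}, "
              f"bias={bias:.2f}, literacy={literacy:.2f}")

if __name__ == "__main__":
    print("Reinforcing loop only (analytics off):")
    simulate(analytics_on=False)
    print("\nWith balancing loop (analytics on):")
    simulate(analytics_on=True)
```

Running it with analytics off shows time on the platform growing without limit; turning analytics on bends the curve back down, which is exactly the balancing effect the diagram proposes.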
The class will conclude with definitions of information literacy, and the purpose and importance of making critical judgements about all information that we consume:
“Information literacy is the ability to think critically and make balanced judgements about any information we find and use. It helps to understand the ethical and legal issues associated with the use of information, including privacy, data protection, freedom of information, open access/open data and intellectual property.” (CILIP, 3)
“IL empowers people in all walks of life to seek, evaluate, use and create information effectively to achieve their personal, social, occupational and educational goals. It is a basic human right in a digital world and promotes social inclusion in all nations.” (CILIP, 8)
References
Margaris, D., Vassilakis, C., & Georgiadis, P. (2018). Query personalization using social network information and collaborative filtering techniques. Retrieved from https://journals-scholarsportal-info.myaccess.library.utoronto.ca/details/0167739x/v78ipart_1/440_qpusniacft.xml
Kahneman, D. (2011). Thinking, fast and slow. Part II. Heuristics and Biases, 109-185.
Nikolov, D., Lalmas, M., Flammini, A., & Menczer, F. (2018). Quantifying biases in online information exposure. Retrieved from https://doi-org.myaccess.library.utoronto.ca/10.1002/asi.24121
Association of College & Research Libraries. (2016). Framework for information literacy for higher education. Retrieved from http://www.ala.org/acrl/standards/ilframework
Yalcinkaya, G. (2017). Netflix targets different film artworks to suit users’ viewing habits. Retrieved from https://www.dezeen.com/2017/12/20/netflix-targets-film-artwork-depending-users-viewing-habits-design-technology/
Meadows, D. H. (2008). Thinking in systems. Chelsea Green Publishing, 1-29.
CILIP Information Literacy Group. (n.d.). Models & frameworks. Retrieved from https://infolit.org.uk/definitions-models/
Amazon. (2020). Amazon Personalize. Retrieved from https://aws.amazon.com/personalize/
Netflix. (n.d.). How Netflix’s Recommendations System Works. Retrieved from https://help.netflix.com/en/node/100639
Amazon. (n.d.). Recommendations: We make recommendations based on your interests. Retrieved from https://www.amazon.ca/gp/help/customer/display.html?nodeId=GE4KRSZ4KAZZB4BV
Cooper, P. (2020). How the Facebook algorithm works in 2020 and how to make it work for you. Retrieved from https://blog.hootsuite.com/facebook-algorithm/