Technology Impact Cycle Tool
Depending on the technology you are assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable for this technology, please select 'Yes.'
Who will have access to this technology and who won’t? Will people or communities who don’t have access to this technology suffer a setback compared to those who do? What will that setback look like? What new differences will there be between the “haves” and “have-nots” of this technology?


Go to the crash course:
Section One - Accessibility

An example: Griefbot

Yes. However, we charge a subscription fee for users. Also, our griefbot can only be based on an active social media and data profile. Our griefbot is based on Western data hubs; Chinese and Russian data hubs are not supported. We do understand that there is a difference between the haves and have-nots, but we do not see this as a reason to change our technology.

(Only the gray shaded text is printed on the Quick Scan canvas)

Do a brainstorm. Can you find a built-in bias in this technology? Maybe because of the way the data was collected, whether through personal bias, historical bias, political bias or a lack of diversity in the people responsible for the design of the technology? How do you know this is not the case? Be critical. Be aware of your own biases. Tip: pretend the opposite of your assumptions about your core user is true - how does that change your product?


Go to the crash course:
Section Three - Racial bias in technology

(This question is part of the Quick Scan)

An example: Griefbot

Yes, of course. The idea of the technology/griefbot is that it is biased. We have only one version of the griefbot for all users. There can only be one subscription, and so there can only be one griefbot of the deceased. This subscription can only be requested by the person who has access to the usernames and passwords and a certificate of death. This subscriber can give more people access by buying additional licenses.
If this technology makes automatic decisions through algorithms or machine learning, can these decisions be verified or explained? If a decision cannot be explained, how do you know the decisions of this technology are not biased and are inclusive? Are there alternative procedures in place?


Go to the crash course:
Section Two - Bias in technology

An example: Griefbot

We think it is important that our technology is biased, personal and does not have to account for its decisions, just like a real person. That is why we designed our technology as a black box. Users will only experience the interaction, without knowing why the griefbot does what it does!
How great is the disruptive impact on current social structures and economic stability? What existing social relations will be disturbed and what economic activities will be reduced? Are new social relations and economic opportunities replacing the old ones, or is only a small group benefitting from the new technology? Did you think about how cultural habits might change how your product is used? Or how your product might change cultural habits?


Go to the crash course:
Section Four - Gender bias in technology

An example: Griefbot

We believe that we are adding a new service to society, so we are not disrupting existing economic models. We do believe social relations can be disturbed, but also enhanced; we think this is the responsibility of the users. Since social media is widespread and our subscription fee is low (around 10 dollars per month), we do not believe that only a small group will benefit.
The design of technology is often a reflection of the people that are designing the technology. A diverse team provides the opportunity to see the technology in diverse ways. How diverse is this team? Do you see a team that is a representation of the target group? Is this team able to build an inclusive technology?


Go to the crash course:
Section Five - Diversity in teams

An example: Griefbot

Yes. Our design & development team consists of many different people. We even made sure that cultural diversity (we have people from four continents), gender and age (we also have older/retired designers) are represented in our team!

(Only the gray shaded text is printed on the Improvement Scan canvas)

Think about accessibility to this technology. Think about built-in biases or automatic decisions that may be biased. Think about who is benefitting from this technology and about the diversity of the team that creates it. If you think about all that, what improvements would you (want to) make? In the technology? In the context? In its use? The answers to questions 1-5 help you gain insight into the potential impact of this technology when it comes to fairness. The goal of these answers is not to provide you with a 'go' or 'no go' decision. The goal is to make you think about HOW you can improve the use of the technology. This can be by making changes to the technology itself, to the context in which the technology is used, or to the way the technology is used.


Go to the crash course:
Section Six - Additional materials

(This question is part of the Improvement Scan)

An example: Griefbot

Yes. Based on this discussion we decided to limit the accessibility of the griefbot to one person: the person who has a death certificate and the usernames and passwords. This user can buy additional licenses. Secondly, we started a discussion with life insurance companies about including a griefbot option, so we can make it even more affordable for everyone.
Are you satisfied with the quality of your answers? The depth? The level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.

Frameworks:

How to mitigate the risks of automatic decision systems
(https://technofilosofie.com/wp-content/uploads/2016/12/FPF-Automated-Decision-Making-Harms-and-Mitigation-Charts.pdf)
Implicit Association Test
(https://implicit.harvard.edu/implicit/netherlands/)
Impact Assessment Human Rights and Algorithms (Dutch)
(https://www.uu.nl/sites/default/files/Rebo-IAMA.pdf)

Videos:

Paradise for Nerds (on the risks of not having a diverse design team)
(https://www.youtube.com/watch?v=pIQuDvHPPuQ&t=7s)