Fontys

Technology Impact Cycle Tool

Depending on the technology you're assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable to this technology, please select 'Yes'.
Who will have access to this technology and who won’t? Will people or communities who don’t have access to this technology suffer a setback compared to those who do? What does that setback look like? What new differences will there be between the “haves” and “have-nots” of this technology?

An example: Griefbot

Yes. However, we charge users a subscription fee. Also, our griefbot can only be based on an active social media and data profile. Our griefbot is built on Western data hubs; Chinese and Russian data hubs are not supported. We do understand that there is a difference between the haves and the have-nots, but we do not see this as a reason to change our technology.

(Only the gray-shaded text is printed on the Quick Scan canvas)

Hold a brainstorm. Can you find a built-in bias in this technology? Perhaps because of the way the data was collected, whether through personal bias, historical bias, political bias, or a lack of diversity in the people responsible for the design of the technology? How do you know this is not the case? Be critical. Be aware of your own biases.

(This question is part of the Quick Scan)

An example: Griefbot

Yes, of course. The idea of the technology/griefbot is that it is biased. We have only one version of the griefbot for all users. There can only be one subscription, and so there can only be one griefbot of the deceased. This subscription can only be requested by the person who has access to the usernames and passwords and a certificate of death. This subscriber can give more people access by buying additional licenses.
If this technology makes automatic decisions through algorithms or machine learning, can those decisions be verified or explained? If not, how do you know the decisions of this technology are unbiased and inclusive? Are there alternative procedures in place?

An example: Griefbot

We think it is important that our technology is biased and personal, and does not have to account for its decisions, just like a real person. That is why we designed our technology as a black box. Users only experience the interaction, without knowing why the griefbot does what it does!
How large is the disruptive impact on current social structures and economic stability? Which existing social relations will be disturbed, and which economic activities will be reduced? Are new social relations and economic opportunities replacing the old ones, or is only a small group benefiting from the new technology?

An example: Griefbot

We believe that we add a new service to society, so we are not disrupting existing economic models. We do believe social relations can be disturbed, but also enhanced; we think this is the responsibility of the users. Since social media is widespread and our subscription fee is low (around 10 dollars per month), we do not believe only a small group will benefit.
The design of a technology is often a reflection of the people who design it. A diverse team provides the opportunity to see the technology in diverse ways. How diverse is this team? Is the team representative of the target group? Is this team able to build an inclusive technology?

An example: Griefbot

Yes. Our design and development team consists of many different people. We even made sure that our team is diverse in cultural background (we have people from four continents), gender, and age (we also have older/retired designers)!

(Only the gray-shaded text is printed on the Improvement Scan canvas)

Think about accessibility to this technology, about built-in biases and automatic decisions that may be biased, about who benefits from this technology, and about the diversity of the team that creates it. With all of that in mind, what improvements would you (want to) make to this technology? The idea of the Technology Impact Cycle Tool is that it stimulates you to think hard about the impact of your technological solution, in this case on the topic of inclusivity (questions 1-5). The answers can lead to improvements in design or implementation, which is a good thing. Please list them here.

(This question is part of the Improvement Scan)

An example: Griefbot

Yes. Based on this discussion, we decided to limit access to the griefbot to one person: the person who has the death certificate and the usernames and passwords. This user can buy additional licenses. Secondly, we started a discussion with life insurance companies about including a griefbot option in their policies, to make it even more affordable for everyone.
Are you satisfied with the quality of your answers? Their depth? Their level of insight? Did you have enough knowledge to give good answers? Give an honest assessment.

Frameworks

How to mitigate the risks of automatic decision systems
(https://technofilosofie.com/wp-content/uploads/2016/12/FPF-Automated-Decision-Making-Harms-and-Mitigation-Charts.pdf)
Implicit Association Test
(https://implicit.harvard.edu/implicit/netherlands/)

Videos:

Paradise for Nerds (on the risks of not having a diverse design team)
(https://www.youtube.com/watch?v=pIQuDvHPPuQ&t=7s)