Technology Impact Cycle Tool

Who will have access to this technology and who won’t? Will people or communities who don’t have access to this technology suffer a setback compared to those who do? What does that setback look like? What new differences will there be between the “haves” and “have-nots” of this technology?

An example: Griefbot

Yes. However, we charge a subscription fee for users. Also, our griefbot can only be based on an active social media and data profile. Our griefbot is based on Western data hubs; Chinese and Russian data hubs are not supported. We do understand that there is a difference between the haves and have-nots, but we do not see this as a reason to change our technology.
Does your technology have a built-in bias because of the way the data was collected, whether through personal bias, historical bias, political bias, or a lack of diversity in the people responsible for the design of the technology? How do you know this is not the case? How do you deal with your own biases?

An example: Griefbot

Yes, of course. The whole idea of the griefbot is that it is biased. We have only one version of the griefbot for all users. There can only be one subscription, and so there can only be one griefbot of the deceased. This subscription can only be requested by the person who has access to the usernames and passwords and a certificate of death. This subscriber can give more people access by buying additional licenses.
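
To make the data-collection question concrete, here is a minimal sketch of how a team could check whether its training data under-represents the groups it claims to serve. This is not part of the tool; the field names, the reference shares, and the ten-percentage-point threshold are all illustrative assumptions.

    from collections import Counter

    # Hypothetical training records; in practice these would be the social
    # media posts the griefbot is trained on. All field names are made up.
    records = [
        {"text": "...", "region": "EU"},
        {"text": "...", "region": "US"},
        {"text": "...", "region": "US"},
        {"text": "...", "region": "US"},
        {"text": "...", "region": "APAC"},
    ]

    # Share of each region actually present in the collected data.
    counts = Counter(r["region"] for r in records)
    total = sum(counts.values())
    observed = {region: n / total for region, n in counts.items()}

    # Hypothetical shares for the population the product claims to serve.
    expected = {"EU": 0.35, "US": 0.35, "APAC": 0.30}

    # Flag any group under-represented by more than ten percentage points.
    for region, share in expected.items():
        gap = share - observed.get(region, 0.0)
        if gap > 0.10:
            print(f"{region} under-represented by {gap:.0%}: review data collection")

A check like this does not prove the absence of bias, but it turns “how do you know?” into something a team can actually answer.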
If your technology makes automatic decisions through algorithms or through machine learning, can you verify or audit them? Can you explain a decision? If not, how do you know your technology is not biased and is inclusive? Or do you have alternate procedures in place?

An example: Griefbot

We think it is important that our technology is biased and personal, and does not account for its decisions, just like a real person. That is why we designed our technology as a black box. Users will only experience the interaction, without knowing why the griefbot does what it does!
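
The griefbot answer deliberately rejects explainability, but the question also asks about alternate procedures. One such procedure is to log every automated decision so it can at least be audited after the fact. Below is a minimal sketch, assuming the model is an arbitrary callable; the function, log format, and file name are illustrative, not part of any real system.

    import json
    import time
    import uuid

    def audited_decision(model, user_input, log_path="decision_log.jsonl"):
        """Run a (possibly black-box) model and keep an audit trail.

        `model` is assumed to be any callable mapping input text to a
        response; the log records what went in and what came out, so
        decisions can be reviewed afterwards even when the model itself
        cannot explain them.
        """
        response = model(user_input)
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "input": user_input,
            "output": response,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return response

    # Demonstration with a stand-in model that just echoes its input.
    print(audited_decision(lambda text: "echo: " + text, "hello"))

Even when a system is kept as a black box on purpose, an audit trail like this is one way to answer “do you have alternate procedures in place?”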
How disruptive is the impact on current social structures and economic stability? Which existing social relations will be disturbed, and which economic activities will be reduced? Are new social relations and economic opportunities replacing the old ones, or is only a small group benefitting from the new technology?

An example: Griefbot

We believe that we add a new service to society, so we are not disrupting existing economic models. We do believe social relations can be disturbed, but also enhanced; we think this is the responsibility of the users. Since social media is widespread and our subscription fee is low (around 10 dollars per month), we do not believe that only a small group will benefit.
The design of a technology is often a reflection of the people designing it. A diverse team provides the opportunity to see the technology in diverse ways. How diverse is your team? Is your team representative of the target group? Is your team able to build an inclusive technology?

An example: Griefbot

Yes. Our design and development team consists of many different people. We even made sure that cultural background (we have people from four continents), gender, and age (we also have older, retired designers) are represented on our team!
The idea of the Technology Impact Cycle Tool is that it stimulates you to think hard about the impact of your technological solution (questions 1-5). The answers can lead to changes in design or implementation, which is a good thing. Please list them here.

An example: Griefbot

Yes. Based on this discussion we decided to limit access to the griefbot to one person: the person who has a death certificate and the usernames and passwords. That user can buy additional licenses. Secondly, we started a discussion with life insurance companies about including a griefbot option, to make it even more affordable for everyone.

Frameworks

How to mitigate the risks of automatic decision systems
(https://technofilosofie.com/wp-content/uploads/2016/12/FPF-Automated-Decision-Making-Harms-and-Mitigation-Charts.pdf)
Implicit Association Test
(https://implicit.harvard.edu/implicit/netherlands/)

Videos

Paradise for Nerds (on the risks of not having a diverse design team)
(https://www.youtube.com/watch?v=pIQuDvHPPuQ&t=7s)