Fontys

Technology Impact Cycle Tool

Depending on the technology you're assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable to this technology, please select 'Yes'.

(Only the gray shaded text is printed on the Quick Scan canvas)

Is it easy for users to find out how your technology works? Can a user understand or find out why your technology behaves in a certain way? Are the goals explained? Is the idea of the technology explained? Is the technology company transparent about the way their business model works?

(This question is part of the Quick Scan)

An example: Griefbot

We do explain - in broad terms - how the technology works. We list the data sources and social media channels we use to feed the AI that creates the chatbot. On our website we explain the idea behind the technology, our mission and the impact we want to have on society. However, we do NOT explain exactly why the griefbot gives certain answers. There are two reasons for that: (one) we do not always know exactly how the AI reaches a certain response, and (two) we believe that the impact of the Griefbot is bigger if you cannot track down how an answer was reached. After all, that is also impossible in human interaction.
This question is only relevant when algorithms or artificial intelligence are used. If so, is the technology transparent about how the decision-making process works? Which data is collected? How conclusions are reached? And, if you use something like machine learning, where conclusions are reached in a 'black box', are you transparent about that?

An example: Griefbot

We do not. The decisions the Griefbot makes are: which answer am I going to give, when am I going to send which app message, and so on. We do not explain these decisions, and we tell people so on our website. The reasons for this are listed under the question above.
Is the (technology) company easy to reach? Are there procedures in place for complaints? Is it easy for users and other parties to ask questions about the technology? Are there people available to answer those questions?

An example: Griefbot

No. We believe that a Griefbot is a very sensitive and personal experience. We want to give you as good an experience as possible, but we cannot have a discussion about the behavior of the griefbot. Users can always choose to terminate the subscription. Of course, we do provide an FAQ.
Is the technology transparent to users or stakeholders about possible negative consequences or shortcomings of the technology? Have you considered that, though the system may be fair in itself, it can still have negative consequences in the real world? For example, Google Maps suggesting detours through a residential area. Do you communicate about that?

An example: Griefbot

We are not. This is something we can improve on. We have listed some improvements under the improvement question (below).

(Only the gray shaded text is printed on the Improvement Scan canvas)

Think about the communication on how the technology works and on the business model. Think about the explanation of the automatic decisions that are made. Think about complaint procedures and about transparency on possible negative effects. Considering all of that (questions 1-4), what would you (want to) improve? The idea of the Technology Impact Cycle Tool is that it stimulates you to think hard about the impact of your technological solution, in this case on the topic of transparency. The answers can lead to improvements in design or implementation, which is a good thing. Please list them here.

(This question is part of the Improvement Scan)

An example: Griefbot

Based on the last question, we created a specific section on our website in which we explicitly - via a questionnaire - help users make the right decision. We tell them about the large social impact of the Griefbot, we share user stories, and we explicitly tell them that the griefbot is an impersonation of the deceased, created by an AI that is, by design, a black box to us. This helps the user make the right decision. We also have plans to add a feedback section, reviews and a community for users.
Are you satisfied with the quality of your answers? The depth? The level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.

Frameworks

An Algorithmic Impact Assessment from Canada.
(https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html)

Further reading

Fair systems still can have negative consequences in the real world
(https://blog.acolyer.org/2020/02/05/protective-optimization-technologies/)