Technology Impact Cycle Tool

Depending on the technology you are assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable for this technology, please select 'Yes.'

(Only the gray shaded text is printed on the Quick Scan canvas)

- Is it easy for users to find out how the technology works?
- Can a user understand or find out why your technology behaves in a certain way?
- Are the goals explained?
- Is the idea of the technology explained?
- Is the technology company transparent about the way their business model works?

(This question is part of the Quick Scan)

An example: Griefbot

We do explain - in broad terms - how the technology works. We list the data sources and social media channels we use to feed the AI that creates the chatbot. On our website we explain the idea behind the technology. We explain our mission and the impact we want to have on society. However, we do NOT explain exactly why the Griefbot gives certain answers. There are two reasons for that: one, we do not always know exactly how the AI reaches a certain response, and two, we believe that the impact of the Griefbot is bigger if you cannot track down how an answer was reached. After all, that is also impossible with human interaction.
This question is only relevant when algorithms or (certain kinds of) artificial intelligence are used. If so:
- Is the technology transparent about how the decision-making process works?
- Which data is collected?
- How are conclusions reached?
- And if you use something like machine learning, where conclusions are reached in a 'black box', are you transparent about that? In what way?

An example: Griefbot

We do not. The decisions the Griefbot makes are: which answer am I going to give, when am I going to send which app message, and so on. We do not explain these decisions, and we say so on our website. The reasons for this are listed under the question above.
- Is the (technology) company easy to reach?
- Are there procedures in place for complaints?
- Is it easy for users and other parties to ask questions about the technology?
- Are there people available to answer those questions?

An example: Griefbot

No. We believe that a Griefbot is a very sensitive and personal experience. We want to give you the best possible experience, but we cannot have a discussion about the behavior of the Griefbot. Users can always choose to terminate their subscription. Of course, we do provide an FAQ.
- Is the technology transparent to users or stakeholders about possible negative consequences or shortcomings of the technology?
- Is it considered that, even though the system may be fair in itself, it can still have negative consequences in the real world? For example, Google Maps suggesting detours through a residential area. Are you aware of these consequences? Do you communicate about them?

An example: Griefbot

We are not. This is something we can improve on. We have listed some improvements under the change question (below).

(Only the gray shaded text is printed on the Improvement Scan canvas)

Think about the communication on the way the technology works and on the business model. Think about the explanation of the automatic decisions that are made. Think about complaint procedures and transparency on possible negative effects. If you think about all that, what would you (want to) improve? In the technology? In the context? In the use? The answers to questions 1-4 help you gain insight into the potential impact of the need for transparency on this technology. The goal of these answers is not to provide you with a 'go' or 'no go' decision. The goal is to make you think about HOW you can improve the use of the technology. This can be by making changes to the technology, to the context in which the technology is used, or to the way the technology is used.

(This question is part of the Improvement Scan)

An example: Griefbot

Based on the last question, we created a specific section on our website in which we explicitly - through a questionnaire - help users to make the right decision. We tell them about the large social impact of the Griefbot, we share user stories, and we explicitly tell them that the Griefbot is an impersonation of the deceased, created by an AI that is by design a black box for us. This helps the user to make the right decision. We also have plans to provide a feedback section, reviews, and a community for users.
Are you satisfied with the quality of your answers? The depth? The level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.

Frameworks

Ethical Toolkit for the Design of AI (Hidalgo et al.)
(https://repository.tudelft.nl/islandora/object/uuid:b5679758-343d-4437-b202-86b3c5cef6aa?collection=education&ref=hackernoon.com)
European Guidelines for responsible AI
(https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai)
Inventory of guidelines by AlgorithmWatch
(https://inventory.algorithmwatch.org/)
Ethical AI in education
(https://fb77c667c4d6e21c1e06.b-cdn.net/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf)

Further reading

Fair systems can still have negative consequences in the real world
(https://blog.acolyer.org/2020/02/05/protective-optimization-technologies/)
AI no longer has a plug. About ethics in the design process (van Belkom et al.)
(https://detoekomstvanai.nl/wp-content/uploads/2020/06/AInolongerhasaplug_vanBelkom.pdf)