Technology Impact Cycle Tool

Is it easy for users to find out how your technology works? Can a user understand or find out why your technology behaves in a certain way? Do you explain the goals and the idea of the technology? Are you transparent about the way your business model works?

An example: Griefbot

We do explain, in broad terms, how the technology works. We list the data sources and social media channels we use to feed the AI that creates the chatbot. On our website we explain the idea behind the technology, our mission, and the impact we want to have on society. However, we do not explain exactly why the Griefbot gives certain answers. There are two reasons for that: first, we do not always know exactly how the AI reaches a certain response, and second, we believe the impact of the Griefbot is bigger if you cannot track down how an answer was reached. After all, that is also impossible with human interaction.

This question is only relevant when you use algorithms or artificial intelligence. If so, do you tell your users how the decision-making process works? Which data is collected? How conclusions are reached? And if you use something like machine learning, where conclusions are reached in a 'black box', are you transparent about that?

An example: Griefbot

We do not. The decisions the Griefbot makes are: which answer to give, when to send which app message, and so on. We do not explain these decisions, and we say so on our website. The reasons for this are listed under the question above.

Are you easy to reach? Do you have procedures in place for complaints? Is it easy for users and other parties to ask questions about your technology? And do you have people to answer those questions?

An example: Griefbot

No. We believe that a Griefbot is a very sensitive and personal experience. We want to give you as good an experience as possible, but we cannot have a discussion about the behavior of the Griefbot. Users can always choose to terminate their subscription. Of course, we do provide an FAQ.

Do you tell your users or stakeholders about possible negative consequences or shortcomings of your technology? Did you consider that, though your system may be fair in itself, it can still have negative consequences in the real world? Do you communicate about that?

An example: Griefbot

We do not. This is something we can improve on. We listed some improvements in the change question (below).

The idea of the Technology Impact Cycle Tool is that it stimulates you to think hard about the impact of your technological solution (questions 1-4). The answers can lead to changes in design or implementation, which is a good thing. Please list them here.

An example: Griefbot

Based on the last question, we created a specific section on our website in which we explicitly help users make the right decision by means of a questionnaire. We tell them about the large social impact of the Griefbot, we share user stories, and we explicitly tell them that the Griefbot is an impersonation of the deceased, created by an AI that is by design a black box for us. This helps the user make an informed decision. We also have plans to provide a feedback section, reviews, and a community for users.

Frameworks

The Algorithmic Impact Assessment from the Government of Canada.
(https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai/algorithmic-impact-assessment.html)

Further reading

Fair systems can still have negative consequences in the real world
(https://blog.acolyer.org/2020/02/05/protective-optimization-technologies/)