Fontys

Technology Impact Cycle Tool

Depending on the technology you are assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable for this technology, please select 'Yes.'

(Only the gray shaded text is printed on the Quick Scan canvas)

To help you answer this question, think about sub-questions like:
- If two friends use your product, how could it enhance or detract from their relationship?
- Does your product create new ways for people to interact?
- Does your product fill or change a role previously filled by a person?
- Can the technology be perceived as stigmatising?
- Does the technology imply or impose a certain belief or world view?
- Does the technology affect users' dignity?
- Is the technology in line with the person the user wants to be perceived as?
- Does the technology empower people? In what way?
- Does the technology change people? In what way?


Go to the crash course:
Section One - Our choice in human values

(This question is part of the Quick Scan)

An example: Griefbot

There are two kinds of users: the user who becomes a Griefbot and the user who has conversations with the Griefbot. We understand that the identity of both users will be affected by the Griefbot. This is a very personal choice. However, we also believe that providing the opportunity to live on digitally can inspire people to live a better life and give loved ones a way to ease their suffering, both of which are very valuable. A person who uses the Griefbot during life will talk to the app on a regular basis. This also provokes the user to reflect on his or her life and life choices. We believe this helps the user become a better person. For the loved ones, we believe that too much suffering is wrong. If we can ease suffering by offering a Griefbot, then that is in line with what people want to be: someone who mourns but is helped in the process. We think that religious people can use the Griefbot, because there is a clear distinction between the soul and digital technology. We are just providing a tool that helps you mourn (like a photo or video). We think the technology matches values that are important to people. People live on when they are remembered. Parents or grandparents get a voice. We think that is very valuable. It is also an opportunity to share your insights and wisdom with your loved ones after you die.
To help you answer this question, think about sub-questions like:
- Does the technology allow users to make their own decisions, or are decisions made for the user?
- Does the technology make users dependent? On the technology? Or on others?
- Is the technology addictive? Is it easy to disconnect?
- Do users need someone else to use the technology?
- Does the technology empower users to make better decisions? Why?
- In what situations might it be inappropriate to use your product? How do you help users avoid that?


Go to the crash course:
Section Two - The attention economy

An example: Griefbot

First, we believe there are a lot of people who do not want to live on as a Griefbot. They should not be pressured to do so by their relatives. Using the app and/or becoming a Griefbot should be a personal decision. We are also very aware that the technology can be addictive. People can get addicted to chatting with the person they miss so much, and start disconnecting from the real world. This could be a serious problem, and it affects the autonomy to make choices. Also, we think people should make autonomous choices. They cannot trust a Griefbot to make choices for them. We think this is an issue to be addressed. People live on, but circumstances change. It is a bad idea to trust a Griefbot to help you make decisions. At the same time, the Griefbot provides people with new choices, and in that way empowers their autonomy: people can decide to live on digitally, and people can decide to mourn with the help of the Griefbot.
To help you answer this question, think about sub-questions like:
- Can the technology be confusing, stressful, manipulative, distracting or frightening?
- Can the technology cause pain or injuries? What are the effects of extreme use?
- Does the technology have a positive impact on health and/or well-being? How?


Go to the crash course:
Section Four - Technology Addiction

An example: Griefbot

If people talk to the app on a regular basis about their life, we believe this helps them improve their choices. It inspires reflection on thoughts, actions and feelings, which is a good thing. Maybe you will sometimes regret the things you said to the app, but we believe that if you use it for a longer period, you will come to terms with who you are. For loved ones, the app helps with the mourning process. However, there is a chance that the user loses connection with real life. We believe that the Griefbot helps with mourning, and we monitor that effect closely. Finally, we really believe it is helpful and inspirational to give grandchildren the opportunity to talk to a digital representation of their deceased grandparents.

(Only the gray shaded text is printed on the Improvement Scan canvas)

Think about the impact of this technology on human values and needs: how it affects the identity of the user, the autonomy of the user (can users make their own decisions?), and the health and well-being of the user. With all that in mind, what improvement would you (want to) make? In the technology? In the context? In the use? The answers to questions 1-3 help you gain insight into the potential impact of this technology on human values. The goal of these answers is not to provide you with a 'go' or 'no go' decision. The goal is to make you think about HOW you can improve the use of the technology. This can be done by making changes to the technology, by making changes to the context in which the technology is used, or by making changes to the way the technology is used.


Go to the crash course:
Section Six - Additional Materials

(This question is part of the Improvement Scan)

An example: Griefbot

For the person who becomes a Griefbot, we will build some controls into the bot. First, we give the opportunity to activate or erase the Griefbot (so it can be connected to a testament). Second, we do not provide the possibility to correct or change data during life (so you do not feel the urge to optimize the data). For the loved ones, we inform the user about extreme use, we give the option to restrict access, and we provide a distress button for when the Griefbot misbehaves.
Are you satisfied with the quality of your answers? The depth? The level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.

Articles / Websites

A path to humane technology (essay by Koert van Mensvoort)
(https://nextnature.net/2020/09/a-path-to-humane-technology)
Six Principles of Humane Technology (by NextNature Net)
(https://nextnature.net/2020/09/six-principles-of-humane-technology)

Frameworks

All human bias in one visual
(https://www.visualcapitalist.com/every-single-cognitive-bias/)
Six Principles of Persuasion by Robert Cialdini
(https://www.influenceatwork.com/principles-of-persuasion/)
Design for Happiness Cards TU Delft
(https://diopd.org/design-for-happiness-deck/)
Take Control framework by the Center for Humane Technology
(https://www.humanetech.com/take-control)
Youth Toolkit from the Center for Humane Technology
(https://www.humanetech.com/youth)