Fontys

Technology Impact Cycle Tool

(Only the gray shaded text is printed on the Quick Scan canvas)

This technology is designed to solve a problem. That is why it is important to exactly define which problem this technology is going to solve. Can you make a clear definition of the problem? What 'pain' does this technology want to ease? Whose pain? The problem definition will help you to determine and discuss if you are solving the right problem.

An example: Griefbot

The purpose of the Griefbot is to reduce suffering for relatives and friends of a deceased person. We believe that - especially with tragic and sudden deaths - relatives and friends experience incredible pain. The Griefbot is an advanced way of looking at photos or listening to that one voicemail. We believe accepting death is easier if you can have a conversation with your deceased loved one. Furthermore, we believe that there are a lot of people who never knew their parents or grandparents. The Griefbot partly solves that by enabling a conversation with a deceased parent or grandparent.

Can you imagine ways in which this technology can or will be used to break the law? Think about invading someone's privacy, spying, hurting people, harassment, fraud, identity theft and so on. Or will people use this technology to avoid facing the consequences of breaking the law (for example, using trackers to evade speed radars or using bitcoins to launder money)?

An example: Griefbot

Yes, under certain circumstances it can be used to break the law. If a young, underage person dies and a friend or family member of the deceased gets access to the Griefbot account, he or she can use it to draw young kids into dangerous situations, as he or she can impersonate a kid. On the other hand, a bad actor can abuse the Griefbot for scamming purposes if he or she can "rewire" the AI behind the bot. Individuals can abuse the trust placed in the deceased to manipulate people into undertaking illegal activities, e.g. scam them out of money or hurt other people. It is even possible to imagine that a Griefbot will be taken hostage and only returned to the original owner after a large ransom is paid.

If this technology registers personal data, you have to be aware of privacy legislation and the concept of privacy. Personal data can be interpreted broadly. Maybe this technology does not collect personal data itself, but it can be used to assemble personal data. If this technology collects special categories of personal data (like health or ethnicity), you should be extra aware.

An example: Griefbot

The General Data Protection Regulation (GDPR) defines personal data as data relating to an identified or identifiable natural person. Natural persons are living persons, so the GDPR in principle does not apply to deceased persons. However, our Griefbot is also filled with data of living persons, especially those with a close relation to the deceased, and the GDPR does apply to that data. The data of the persons that use the Griefbot is also processed in different ways, and some of it falls within the scope of the GDPR. For both of these categories we know that some of the data that feeds the Griefbot contains sensitive categories of data as well. For example, some Griefbots are fed with mail conversations, which means that a lot of very sensitive personal data can be included. A lot of people share very sensitive data with their loved ones, like passwords and bank account numbers, which poses a huge potential risk.
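One way to mitigate the risk of secrets such as passwords and account numbers ending up in the bot is to redact them from the source material before it is processed. The sketch below is a minimal, hypothetical illustration of that idea; the `redact` function, the regular expressions, and the placeholder strings are assumptions for illustration, not part of the actual Griefbot product, and real redaction would need far more robust pattern coverage.

```python
import re

# Illustrative patterns only: an IBAN-style account number and an
# explicit "password: ..." mention. Real source data would need a
# much broader set of detectors.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b")
PASSWORD_RE = re.compile(r"(?i)(password|wachtwoord)\s*[:=]\s*\S+")

def redact(text: str) -> str:
    """Replace likely secrets with placeholders before further processing."""
    text = IBAN_RE.sub("[REDACTED-ACCOUNT]", text)
    text = PASSWORD_RE.sub(r"\1: [REDACTED]", text)
    return text
```

Such a filter would run once over each mail conversation before it is used to build the bot's profile, so that secrets never reach the model at all.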

To answer this question, think about sub-questions like: Can the technology be perceived as stigmatising? Does the technology imply or impose a certain belief or world view? Does the technology affect users' dignity? Is the technology in line with the person the user wants to be perceived as?

An example: Griefbot

There are two kinds of users: the deceased person and the 'real' user. In this case we decided to focus on the 'real' user. However, we do explore the opportunity of some kind of 'donor codicil' in which a person gives permission to live on as a Griefbot. We believe that too much suffering is wrong. If we can ease suffering by offering a Griefbot, then that is in line with what persons want to be: someone who mourns but is helped in the process. We think that people who believe in heaven can still use the Griefbot, because there is a clear distinction between the soul and digital technology. We are just providing a tool that helps you mourn. The technology matches values that are important to people, we think. People live on when they are remembered. Parents or grandparents get a voice. We think that is very valuable.

For the Quick Scan, you only have to list the stakeholders. Can you think of the people that are directly or indirectly affected by this technology? There are a lot of stakeholders that are obvious (like users), but we invite you to also think about the less obvious ones. Missing a stakeholder can have great consequences. Later, it helps to think about further questions, like: Can you write down in a few words how the users or stakeholders will be affected by this technology? You can limit yourself to the main / core effect (you think) this technology will have on the stakeholders. Did you really consult a stakeholder? Did you consult all stakeholders listed, or did you assume the position of some stakeholders? Are you going to take the stakeholders into account? Do you think you should take all stakeholders into account? Are there any conflicting interests between groups of stakeholders? How will you resolve these conflicts?


There are fundamental issues with data. Data is always subjective. Data collections are never complete. Correlation and causation are tricky concepts. Data collections are often biased. Reality is way more complex than a million datapoints. Are you aware of these issues? How does this technology take them into account? We strongly recommend doing crash course four to properly understand the pitfalls and shortcomings of data (and it is fun!).

An example: Griefbot

Yes, depending on the available data, the personality of the Griefbot might be close to or far from the deceased person. The limits are clear to us. The Griefbot can't "cope" with these issues itself, so we would make users aware of its limitations.

Do a brainstorm. Can you find a built-in bias in this technology? Maybe because of the way the data was collected, either by personal bias, historical bias, political bias or a lack of diversity in the people responsible for the design of the technology? How do you know this is not the case? Be critical. Be aware of your own biases.

An example: Griefbot

Yes, of course. The whole idea of the Griefbot is that it is biased. We have only one version of the Griefbot for all users. There can only be one subscription, and so there can only be one Griefbot of the deceased. This subscription can only be requested by the person that has access to usernames and passwords and a certificate of death. This subscriber can give more people access by buying additional licenses.

Is it easy for users to find out how your technology works? Can a user understand or find out why your technology behaves in a certain way? Are the goals explained? Is the idea of the technology explained? Is the technology company transparent about the way their business model works?

An example: Griefbot

We do explain - in broad terms - how the technology works. We list the data sources and social media channels we use to feed the AI that creates the chatbot. On our website we explain the idea behind the technology. We explain our mission and the impact we want to have on society. However, we do NOT explain exactly why the Griefbot gives certain answers. There are two reasons for that: (one) we do not always know exactly how the AI reaches a certain response, and (two) we believe that the impact of the Griefbot is bigger if you cannot track down how an answer was reached. After all, that is also impossible with human interaction.

One of the most prominent impacts on sustainability is energy efficiency. Consider what service you want this technology to provide and how this could be achieved with a minimal use of energy.

An example: Griefbot

We offer cloud services, and these cloud services consume energy. However, we host our servers with suppliers that have high standards for environmentally friendly datacenters. Our product could use more resources on the local client (laptop, tablet or phone) so that there is less traffic and energy consumption in the datacenters.

Discuss this quickly and note your first thoughts here.

An example: Griefbot

The Griefbot can be an important support for people and a normal part of grieving. On the other hand, there is a lot of potential for future abuse. A better Griefbot does not automatically mean a better world.