
Technology Impact Cycle Tool



Can you define exactly what the challenge is? What problem, what 'pain', does this technology want to solve? Whose pain? Is it really a problem, and for whom? Will solving the problem make the world better? Are you sure? The problem definition will help you determine and discuss exactly what problem you are solving and whether it is a problem worth solving.

An example: Griefbot

The purpose of the Griefbot is to reduce suffering for relatives and friends of a deceased person. We believe that, especially with tragic and sudden deaths, relatives and friends experience incredible pain. The Griefbot is an advanced way of looking at photos or listening to that one voicemail. We believe accepting death is easier if you can have a conversation with your deceased loved one. The app will also connect grandchildren to the grandparents they never knew. The Griefbot partly solves that problem by enabling a conversation with a deceased parent or grandparent.


Can you imagine ways that the technology can or will be used to break the law? Think about invading someone's privacy, spying, hurting people, harassment, theft, fraud, identity theft, and so on. Or will people use the technology to avoid facing the consequences of breaking the law (using trackers to evade speed radars or using bitcoins to launder money, for example)?

An example: Griefbot

Yes, under certain circumstances it can be used to break the law. If a young, underage person dies and a friend or family member of the deceased gets access to the Griefbot account, he or she can use it to draw young kids into dangerous situations, as he or she can impersonate a kid. On the other hand, a bad actor can abuse the Griefbot for scamming purposes if he or she can "rewire" the AI behind the bot. Individuals can abuse the trust placed in the deceased to manipulate people into illegal activities, e.g. scam them out of money or hurt other people. It is even conceivable that a Griefbot will be taken hostage and only returned to the original owner after a large ransom is paid.


If this technology registers personal data, you have to be aware of privacy legislation and the concept of privacy. Think hard about this question. Remember: personal data can be interpreted broadly. Maybe this technology does not collect personal data itself, but can be used to assemble personal data. If the technology collects special categories of personal data (like health or ethnicity), you should be extra careful.

An example: Griefbot

The General Data Protection Regulation defines personal data as data relating to an identified or identifiable natural person. Natural persons are living persons, so the GDPR in principle does not apply to deceased persons. However, our Griefbot is also filled with data of living persons, especially those with a close relation to the deceased, and the GDPR does apply to that data. The data of the persons that use the Griefbot is also processed in different ways, and some of it falls within the scope of the GDPR. For both of these categories, we know that some of the data that feeds the Griefbot includes sensitive categories of data as well. For example, some Griefbots are fed with mail conversations, which can contain very sensitive personal data. Many people share very sensitive data with their loved ones, like passwords and bank account details, which poses a huge risk.
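Since mail conversations can carry exactly this kind of sensitive data, one possible mitigation is to redact obvious secrets before anything is fed into the bot. Below is a minimal, hypothetical Python sketch of such a pre-processing step; the SENSITIVE_PATTERNS table and the redact helper are invented for illustration, and a real system would need a far more robust PII detector (for example a dedicated NER model) than a couple of regular expressions.

import re

# Illustrative patterns only (an assumption, not part of the tool):
# a production system would use a dedicated PII detector, not two regexes.
SENSITIVE_PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "PASSWORD": re.compile(r"(?i)\b(?:password|wachtwoord)\s*[:=]\s*\S+"),
}

def redact(text: str) -> str:
    """Replace every match of a sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

mail = "My account is NL91ABNA0417164300 and my password: hunter2"
print(redact(mail))
# -> My account is [REDACTED IBAN] and my [REDACTED PASSWORD]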


To help you answer this question, think about sub-questions like:
- If two friends use your product, how could it enhance or detract from their relationship?
- Does your product create new ways for people to interact?
- Does your product fill or change a role previously filled by a person?
- Can the technology be perceived as stigmatising?
- Does the technology imply or impose a certain belief or world view?
- Does the technology affect users' dignity?
- Is the technology in line with the person the user wants to be perceived as?
- Does the technology empower people? In what way?
- Does the technology change people? In what way?

An example: Griefbot

There are two kinds of users: the user that will become a Griefbot and the user that will have conversations with the Griefbot. We understand that the identity of both users will be affected by the Griefbot. This is a very personal choice. However, we also believe that providing the opportunity to digitally live on can inspire people to lead a better life and gives loved ones a way to ease their suffering, both of which are very valuable. A person that uses the Griefbot during life will talk to the app on a regular basis. This also provokes the user to reflect on his or her life and life choices. We believe this helps the user become a better person. For the loved ones, we believe that too much suffering is wrong. If we can ease suffering by offering a Griefbot, then that is in line with what persons want to be: someone who mourns but is helped in the process. We think that religious people can use the Griefbot, because there is a clear distinction between the soul and digital technology. We are just providing a tool that helps you mourn (like a photo or video). The technology matches values that are important to people, we think. People live on when they are remembered. Parents or grandparents get a voice. We think that is very valuable. It is also an opportunity to share your insights and wisdom with your loved ones after you die.

When thinking about the stakeholders, the most obvious ones are of course the intended users, so start there. Next, list the stakeholders that are directly affected. Listing the users and directly affected stakeholders also gives an impression of the intended context of the technology. Many stakeholders are obvious (like users), but we invite you to also think about the less obvious ones. Missing a stakeholder can have serious consequences. For the quick scan it is not necessary to describe how the stakeholders are affected, but later it helps to think about further questions, like: Can you write down in a few words how the users or stakeholders will be affected by this technology? You can limit yourself to the main or core effect (you think) this technology will have on the stakeholders. Did you really consult a stakeholder? Did you consult all stakeholders listed, or did you assume the position of some stakeholders? Are you going to take the stakeholder into account? Do you think you should take all stakeholders into account? Are there any conflicting interests between groups of stakeholders? How will you resolve these conflicts?



There are fundamental issues with data. For example:
- Data is always subjective;
- Data collections are never complete;
- Correlation and causation are tricky concepts;
- Data collections are often biased;
- Reality is way more complex than a million data points.
Are you aware of these issues? How does the technology take these issues into account? We strongly recommend doing crash course four to properly understand the pitfalls and shortcomings of data (and it is fun!).
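To make the pitfall about correlation and causation concrete, here is a small self-contained Python sketch (with invented, illustrative numbers, not data from the tool): a hidden confounder drives two unrelated variables, which then correlate strongly even though neither causes the other.

import random

random.seed(0)

# A hidden confounder ("summer") drives both variables: ice cream sales and
# drowning incidents end up strongly correlated, yet neither causes the other.
summer = [random.random() for _ in range(1000)]
ice_cream_sales = [s + random.gauss(0, 0.1) for s in summer]
drownings = [s + random.gauss(0, 0.1) for s in summer]

def pearson(xs, ys):
    """Pearson correlation coefficient, written out to stay dependency-free."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
    std_x = (sum((x - mean_x) ** 2 for x in xs) / n) ** 0.5
    std_y = (sum((y - mean_y) ** 2 for y in ys) / n) ** 0.5
    return cov / (std_x * std_y)

# Prints roughly 0.9: a strong correlation that says nothing about causation.
print(pearson(ice_cream_sales, drownings))

A model trained on data like this would happily "learn" that ice cream causes drownings; the same trap applies to any social media or mail data that feeds a chatbot.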

An example: Griefbot

Yes. Depending on the available data, the personality of the Griefbot might be close to or far from that of the deceased person. The limits are clear to us. The Griefbot itself can't "cope" with these issues, so we would make the users aware of its limitations.


Do a brainstorm. Can you find a built-in bias in this technology? Maybe because of the way the data was collected: personal bias, historical bias, political bias, or a lack of diversity in the people responsible for the design of the technology? How do you know this is not the case? Be critical. Be aware of your own biases. Tip: pretend the opposite of your assumptions about your core user is true; how does that change your product?

An example: Griefbot

Yes, of course. The very idea of the Griefbot is that it is biased. We have only one version of the Griefbot for all users. There can be only one subscription, and so there can be only one Griefbot of the deceased. This subscription can only be requested by the person that has access to the usernames and passwords and a certificate of death. This subscriber can give more people access by buying additional licenses.


- Is it easy for users to find out how the technology works?
- Can a user understand or find out why your technology behaves in a certain way?
- Are the goals explained?
- Is the idea of the technology explained?
- Is the technology company transparent about the way their business model works?

An example: Griefbot

We do explain, in broad terms, how the technology works. We list the data sources and social media channels we use to feed the AI that creates the chatbot. On our website we explain the idea behind the technology. We explain our mission and the impact we want to have on society. However, we do NOT explain exactly why the Griefbot gives certain answers. There are two reasons for that: first, we do not always know exactly how the AI reaches a certain response, and second, we believe that the impact of the Griefbot is bigger if you cannot track down how an answer was reached. After all, that is also impossible with human interaction.


One of the most prominent sustainability impacts of a technology is its energy consumption. Consider what service you want this technology to provide and how this could be achieved with minimal use of energy. Are improvements possible?

An example: Griefbot

We offer cloud services, and these cloud services consume energy. However, we host our servers with suppliers that maintain high standards for environmentally friendly data centers. Our product could use more resources on the local client (laptop, tablet, or phone), so that there is less traffic and energy consumption in the data centers.


Discuss this quickly and note your first thoughts here. Think about what happens when 100 million people use your product. How could communities, habits and norms change?

An example: Griefbot

The Griefbot can be an important support for people and a normal part of grieving; on the other hand, there is a lot of potential for future abuse. A better Griefbot does not automatically mean a better world.