Who actually reads privacy policies?
This question has popped into my mind over the last few weeks. In conversations with several entrepreneurs and clients, we see a strong drive for innovative solutions, and sometimes privacy is treated as an afterthought (when it is considered at all!).
Let’s take AI, for example: it is the hot topic and, most importantly, it is here to stay. More and more companies want to leverage the data processed on their servers to create new solutions that add value to their users. But in this rush to train and develop their own AI, B2B and B2C tech companies need to stay focused on the potential risks that arise when collecting data to develop an AI system.
Because it seems that your stakeholders care about data privacy.
And when that time comes, you will need to give the right answers to sign a deal or start an investment. So that’s the primary reason for compliance, protection, and data privacy in the B2B landscape: start and keep selling.
Ignoring the importance of protecting your users’ privacy and their personal data may (in the worst-case scenario) damage your brand reputation and put you out of business. We have seen companies lose huge deals in the B2B space, and B2C startups lose partnerships and, in healthcare in particular, reimbursement agreements.
So, let’s see together how to navigate this topic and look at why Privacy Policies (and Terms of Service) are so important when you are trying to leverage data for new purposes.
And we will do that by taking as an example recent cases that sparked heated debates among privacy experts.
1️⃣ In March, Zoom updated its terms and conditions, declaring that user-generated content could be used to train its AI solutions.
2️⃣ After a significant debate revolving around the lack of transparency on how such data could be used to train the AI and the lack of a proper consent request, Zoom clarified with a second update to the terms and conditions, stating that AI training could only occur with prior consent.
3️⃣ Finally, after taking full responsibility, Zoom updated its terms and conditions again, clarifying that users’ data won’t be used to train AI and deciding against using data for that purpose altogether.
As you can see, tech companies (whether giants or startups) must strike a delicate balance between user data protection and technological advancement. Let’s see why:
➡️ Without user data protection, any company’s reputation will be shot.
➡️ Without tech advancement, no business will attract new customers or keep satisfying the needs of those already using the product or the service.
So, to find the right balance between the two sides, let’s look at some don’ts and dos when it comes to collecting personal data for your AI.
❌ Lack of transparency between the company and its users.
➡️ It was unclear which data were included in the “service-generated data” that Zoom could use to train its AI models. Main takeaway: make sure to specify (and communicate) what data you will use to train your AI.
➡️ The purposes were not specific or clear: we can assume that data subjects were not aware of what would actually happen to their data. Main takeaway: don’t forget to state the purpose for which you are collecting the data you ask for.
❌ Tying GDPR consent to acceptance of the Terms and Conditions.
➡️ Signing up to a platform and agreeing to its Terms and Conditions does not constitute valid GDPR consent (even if your Terms of Service state that you are going to use personal data).
➡️ Consent has to be specific and voluntary. If consent is not given on a voluntary basis, it cannot be considered a valid legal basis to process data.
Main takeaway: let users give consent through a separate, explicit action, and make it just as easy to withdraw.
➡️ There were no specific consent requests and no option to opt out of such data collection, effectively failing to satisfy the right to withdraw consent.
Main takeaway: make sure you provide all the tools to allow your users to revoke their consent in an easy and fast way.
Transparency is also crucial when it comes to AI training, as the AI Act will also require companies to provide clear and detailed information to their users regarding what they are doing with their data (and, in the case of AI, how the algorithm works).
And, probably, some of these points will become stricter as the AI Act comes into force. If you want to know more about it, read our previous article.
As technology - and AI in particular - evolves, fostering an open dialogue about data usage and privacy will be critical to preserving (or, in some cases, rebuilding) trust among users.
Here's a combination of measures that could help you reduce risks and enable you to access data.
🟢 Set up a proper GDPR project that considers both technical and legal implications of the changes you intend to make around how your users' data is processed.
📌 GDPR is the starting point, not the finish line: before starting any activity involving personal data, make sure you have implemented all technical and organisational measures required by GDPR.
📌 You will always need a valid legal basis to process personal data; which basis applies may vary depending on opinions from the Data Protection Authorities.
📌 If you rely on consent for processing data, collect it transparently by implementing a consent management tool to keep track of consent validity and potential withdrawals.
📌 Technically implement an opt-out method for users who don’t want to share their data with you.
📌 But that's not everything you need to do: in some cases, you will also have to take into account specific national legislation, court rulings, or other Regulations.
Need help figuring out where to start? Make sure to check out our free GDPR checklist to get an overview of the tasks you need to do.
🟢 Show that you care about your users and their data.
Have clear and transparent communication with your customers and stakeholders, explaining why you need specific data to train your AI, what the purpose is, and how you will achieve it.
🟢 Show that you mitigate all risks that can arise throughout the process.
Be ready to prove your compliance with all applicable laws and regulations and to show the steps you took to get there.
So, if you are a digital health company interested in learning more about collecting health data for your AI, read this blog article.
Looking at how to deal with GDPR and legal requirements?
Working with experts can reduce time-to-market and technical debt and ensure a clear roadmap you can showcase to partners and investors.
At Chino.io, we have been combining our technological and legal expertise to help hundreds of companies like yours navigate through EU and US regulatory frameworks, enabling successful launches and reimbursement approvals.
We offer tailored solutions to support you in meeting GDPR, AI Act, HIPAA, DVG, or DTAC requirements, as mandated for listing your product as a DTx or DiGA.
Want to know how we can help you? Reach out to us to learn more.