Defining legal priorities in the age of AI
As AI revolutionises the life sciences industry, companies are confronting novel legal issues. What do you need to consider as you adopt, integrate, and invest in AI?
IP and licensing
Clarify who owns the IP rights for AI-generated innovation.
Determine whether AI-powered inventions are patentable — meeting key requirements including novelty and non-obviousness — or whether a trade secrets strategy is better suited to protect your IP.
Negotiate robust licensing agreements for third-party or open-source AI models and datasets, ensuring you have the right to use and commercialise the relevant AI inputs.
Draft contracts to address IP ownership and licensing in collaboration with research institutions and third parties.
M&A
Accurately value the target’s AI assets and assess the potential revenue streams or ongoing royalties.
Conduct thorough due diligence to confirm the target company owns all relevant IP and that it is adequately protected.
Check existing licensing agreements related to AI models and data to ensure terms are transferable or renegotiable post-acquisition.
Confirm the target complies with data protection laws (eg GDPR, HIPAA) and that proper consents are in place for any data used in AI applications.
Identify any existing liabilities, including product defects, IP disputes, or pending litigation that you might inherit.
Plan how to integrate AI technology with existing systems, ensuring a smooth transition and minimising compatibility risks.
Include indemnity clauses in acquisition agreements to protect against IP disputes, data breaches, or other risks arising from the target’s prior activities.
Confirm that the AI technology you’re acquiring complies with relevant regulations and meets ethical standards on transparency, fairness, and bias.
Data privacy
Ensure personal health data is collected, managed, transferred, and deleted in accordance with relevant regulations (eg GDPR, HIPAA).
Secure consent where necessary from data subjects, ensuring transparency about how their data will be processed and used.
Implement robust data anonymisation, pseudonymisation, and minimisation techniques for sensitive patient or health data used in AI models.
Implement data transfer mechanisms where necessary and comply with transfer risk assessment requirements.
Review data-sharing agreements with third parties to ensure data compliance and protection.
Set strict access controls to limit who can view, use, and analyse sensitive data.
Ensure AI models are transparent in their decision-making processes, and regularly audit them for biases that could lead to discrimination or misinformation.
Apply robust cyber security measures to protect data used in AI models from breaches or unauthorised access.
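To make the pseudonymisation point above concrete: a common approach is to replace direct identifiers with a keyed hash, keeping the key separately under strict access controls. The sketch below is illustrative only (the field names and key handling are hypothetical, and it is not a substitute for a full GDPR/HIPAA compliance review).

```python
import hashlib
import hmac

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    Unlike plain hashing, the HMAC key acts as the 'additional
    information' that GDPR expects to be kept separately: without
    the key, the pseudonym cannot easily be linked back to the
    individual, but the controller can still match records.
    """
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical record; in practice the key lives in a separate,
# access-controlled secrets store, never alongside the dataset.
key = b"store-this-key-separately"
record = {"patient_id": "NHS-123-456", "age_band": "60-69", "diagnosis": "T2D"}
record["patient_id"] = pseudonymise(record["patient_id"], key)
```

Because the same identifier always maps to the same pseudonym under a given key, records can still be linked for analysis or AI training without exposing the underlying identity.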
Related articles
Regulation and governance
Stay up to date with emerging AI-specific regulations such as the EU AI Act, and ensure your AI technologies also comply with existing sector frameworks on safety and efficacy.
Perform regular assessments to evaluate how your use of AI technologies impacts regulatory requirements.
Develop internal governance frameworks to oversee AI projects. This could include an ethics review board or committee to evaluate AI projects.
Maintain detailed records of how AI models make decisions, especially in high-risk areas like diagnostics.
Prepare for re-certification as regulations evolve and AI algorithms are improved or modified.
Proactively engage with regulatory bodies during the early stages of AI development to avoid delays during product approval or market entry.
Implement robust AI risk management frameworks that align with regulatory and governance requirements.
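The record-keeping step above — maintaining detailed records of how AI models make decisions — can be supported technically with an append-only decision log. The following is a minimal sketch under assumed field names (model ID, version, an input digest rather than raw patient data, the output, and a rationale); a production system would add tamper-evidence, retention policies, and access controls.

```python
import datetime
import json

def log_ai_decision(model_id: str, model_version: str, inputs_digest: str,
                    output: str, rationale: str,
                    path: str = "ai_decision_log.jsonl") -> dict:
    """Append one audit record per AI decision to a JSON Lines file.

    Logging a digest of the inputs (rather than the inputs themselves)
    keeps sensitive health data out of the audit trail while still
    allowing a decision to be tied back to a specific input set.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs_digest": inputs_digest,
        "output": output,
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage for a diagnostic model:
log_ai_decision("dx-classifier", "1.2.0", "sha256:ab12...",
                "flag-for-review", "risk score above configured threshold")
```

Recording the model version alongside each decision also supports the re-certification point above: when an algorithm is modified, the log shows exactly which version produced which outputs.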