Following the previous Tech News Thursday article on AI Standards and Legal Intervention, a question arose: could the issues surrounding false claims ignited by artificial intelligence become a wider problem for the new US, UK, and EU treaty, given the limited penalties against a nation that does not adhere to the treaty's rules and values, and given the disturbance AI has already caused within the political and musical landscapes?
This article explores what the Law Society has to say about AI concerns and the idea of legal intervention, whether it is still possible to uphold the Rule of Law, and, finally, what legal authority looks like within a digital landscape.
In our previous article, we shed light on the latest agreement between the US, the UK, and Brussels, which emphasises individual rights and democratic values (as welcomed in the Financial Times article) and sets out a legally binding framework covering AI systems and programs. The agreement comes swiftly after the EU's recent assertion of regulatory authority through the European Artificial Intelligence Act [2024] OJ L, 2024/1689; but, as previously mentioned, the lack of deterrents for a nation not adhering to the values of the agreement has already been scrutinised.
Do you think the lack of sanctions for a nation could be an issue?
In October 2023, the Law Society made regulation recommendations ahead of the November AI Safety Summit; the recommendations, made in response to the government's white paper on AI regulation, include:
- The expertise of the legal profession should be recognised and harnessed in the AI regulatory approach;
- Legal professional privilege must be protected in the future regulation of AI;
- The UK Government should take a balanced approach to regulation to safeguard social interests while not impeding technological progression;
- Legislation should establish parameters where the use of AI is unacceptable or where it is inappropriate for AI to make zero-sum decisions;
- The UK Government should set out a definition for ‘meaningful human intervention’ in AI;
- Organisations should appoint an AI officer when needed; and
- Mandatory transparency is needed for the use of AI in government and public services, alongside a due diligence system to boost public trust.
Do you think these recommendations are fair and attainable?
Should they be included in the US, UK, and EU agreement, with sanctions for nations that do not adhere to them?
It is clear that the legal profession has immense influence on the way in which AI will be regulated, and as a result, the Rule of Law needs to be taken into consideration when regulating AI. In The Rule of Law: What is it, and why does it matter? (The Constitution Unit Blog, UCL), Lisa James and Jan Van Zyl Smit state that 'the rule of law is a fundamental principle underpinning the UK constitution. Its core principles include limits on state power, protection for fundamental rights, and judicial independence.' They further argue that responsibility for upholding the Rule of Law rests not only with the public but also with politicians and officials.
Taking that into account, how will we be able to uphold the Rule of Law in a digital landscape?
As discussed in the UK law will let regulators fine Big Tech without court approval article (The Verge), the legal profession is at the forefront of this AI shift, and nations are forming legally binding frameworks to tackle human rights issues within AI. Against that backdrop, the UK could see itself subjecting large tech organisations to fines if they do not comply with new rules promoting competition in digital markets.
Could financial deterrents be a mechanism that the US, UK, and EU agreement adopts for nations that do not comply with it?
See also:
Taylor Swift endorses Kamala Harris following presidential debate (nbcnews.com)
Will Lawyers be Replaced by AI? | LinkedIn
The Law and Technology Regarding Generative AI | LinkedIn
Artificial intelligence (AI) strategy | The Law Society
Upholding the Rule of Law in the Age of Artificial Intelligence by UNESCO
Law, authority, and respect: three waves of technological disruption (tandfonline.com)
Original post: https://www.linkedin.com/pulse/upholding-rule-law-digital-landscape-tyrell-drysdale-rhuqe/