As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
I imagine there will be limits set, through precedent.
For example, if a customer is chatting with an AI bot about a refund for a pair of $89 sneakers, and the bot tells the customer to report to the nearest office to collect $1 million, I can see the courts ruling the plaintiff is not owed the $1 million.
Although, if the plaintiff ended up flying a few states over to try and collect, maybe they'd recover travel costs and lost wages? Who knows.
If a company is marketing fee-for-service legal advice, there might be a higher standard. Say a client was given objectively bad legal advice, the kind attorneys get sanctioned or reprimanded for, and subsequently acted on it. I think it’s likely the courts would take a different approach and find the company bears a good bit of liability for damages.
Those are both just hypothetical generic companies and scenarios I made up to highlight how I can see the question of liability being determined by the courts, unless some superseding laws and regulations are enacted.
Or fuck it, maybe all AI companies have to do is put an arbitration clause in their T&C’s, and then contract out to an AI arbitration firm. And wouldn’t you know it, the arbitration AI model was only trained on cases hand picked by Federalist Society interns.
The one I want to see in court at some point is an AI that can issue refunds or credits, and someone getting it to give them back more than the item cost. Or getting it to create a 100% off promo code.