The EU definition of AI is pointless


The European Union’s recent AI Act defines an “AI system” (emphasis added):

An ‘AI system’ is a machine-based system that is designed to operate with *varying levels of autonomy* and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, *infers*, from the input it receives, *how* to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

If we strip away the examples, which are not essential for classifying something as AI, the definition becomes:

An ‘AI system’ is a machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs.

This definition is so broad that it applies to almost any software written since the first introductory programming course at university.
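To make the breadth concrete, here is a minimal sketch (the function and its name are my own illustration, not anything from the Act): a first-week programming exercise that, read literally, satisfies the stripped-down definition. It is machine-based, runs without supervision (some "level of autonomy"), and derives an output from the input it receives.

```python
# A toy "system" from any introductory programming course.
# Read against the stripped-down definition, it is machine-based
# and infers an output (a recommendation) from its input.

def recommend_umbrella(rain_probability: float) -> str:
    """Infer a recommendation from an input."""
    if rain_probability > 0.5:
        return "take an umbrella"
    return "leave the umbrella at home"

print(recommend_umbrella(0.8))  # -> take an umbrella
```

Nothing about this program involves learning, adaptation, or statistics, yet it produces a "recommendation" that "influences" my behavior in the physical environment.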

To initiate the discussion:

The word “infer” could be read in a statistical sense, but that would be too narrow. More importantly, “infer” is not the same as “statistical inference,” which deals with uncertainty, confidence levels, and so on. Here, “infer” refers to reasoning in general, as defined by Merriam-Webster: “to derive as a conclusion from facts or premises.”

The word “how” is also problematic: most AI systems do not decide how to generate outputs, since algorithms do not typically change while they run.

Lastly, “varying levels of autonomy” is too vague: it sets no minimum threshold of autonomy below which a system stops counting as AI.

And no, “laws must be interpreted by judges” is not a valid argument here. In the EU’s Civil Law system, laws are intended to be as clear and unambiguous as possible, unlike Common Law, where judicial interpretations form the basis of legal precedents. According to Wikipedia, “While civil law is codified in legal codes, common law is derived from uncodified case law developed through judicial decisions.”

9 Likes

As part of my job, I’ve looked into how laws and standards define AI. If you take them at face value, they cover a huge amount of software. And arguably, that is the best scientific definition of AI there is.

It’s trying to make laws about an idea that isn’t even fully formed yet.

7 Likes

Okay, I get it. But the Act says some genuinely absurd things. It is a 459-page document, so I know it is hard to read. But if you do, you will see that it:

- is extremely broad overall
- is vague about what its terms mean
- lists exceptions only by way of example
- doesn’t really weigh the benefits against the harms

The government should not be able to do that.

On page 29, for example, “social control practices” are defined as practices that are “reasonably likely” to change people’s behavior in ways that cause “significant harm.” And in this part, the Act doesn’t even bother to weigh benefits against harm. The only carve-outs are psychological care, physical therapy, and “common and legitimate commercial practices, such as in the field of advertising.” So we are effectively locked into today’s advertising frameworks and algorithms, because anything new or unusual can be banned. That covers everything: movies, TV, social networks, and more. Worst of all, if it isn’t commercial, it can be banned too, so a free website that helps people write fan fiction with AI could also be blocked.

Intent is hard to prove because of “factors that may not be reasonably foreseeable and therefore not possible for the provider or the deployer of the AI system to mitigate.” But then they declare that intent DOESN’T MATTER, because “it is not necessary for the provider or the deployer to have the intention to cause significant harm.”

On page 193 (Amendments to Annex III), they finally acknowledge that purpose matters and that an AI system’s benefits should be weighed against its harms. But that applies only to amending Annex III, which lists specific use cases. In other words, if something that does more good than harm gets banned, the committee could later decide to “amend or modify use-cases.” Until it does, and unless the case is already listed in Annex III, the ban based on harm alone remains in place.

TL;DR: Most media systems, games, and much else besides can be banned as long as they are likely to harm someone, which is almost always arguable. There are three exceptions, but they are very narrow and block new ideas. At most, the committee may add something to the exception list if it does more good than harm.

6 Likes

From what I’ve read, it doesn’t seem like we disagree.

Every law I’ve seen hinges on a definition of AI that is far too broad.

6 Likes

It would simplify everything if we just talked about machine learning. The ethical problems arise only with machine learning, not with other kinds of AI.

3 Likes

That is almost as bad a definition as the one for AI.

A lot of statistics could be considered machine learning if it is done by a computer instead of by hand. Drawing a line through two data points: that’s machine learning.
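That quip can even be made runnable. A hedged sketch in plain Python (all names are illustrative): fitting a line through two points is, formally, “training” a linear model on a two-row dataset and then “predicting” on unseen input.

```python
def fit_line(p1, p2):
    """'Train' a linear model on a dataset of two (x, y) points."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    # The "model" is just the fitted line.
    return lambda x: slope * x + intercept

# "Training" on two observations, then "inference" on unseen input.
model = fit_line((0, 1), (2, 5))
print(model(3))  # -> 7.0
```

Under a purely technical definition, it is hard to say why this is not machine learning while a least-squares fit over a million points is.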

2 Likes

A neural network can be described by a single mathematical function. A definition of AI that addresses only technical details is meaningless and easy to circumvent. You’re right that the term is kept broad so that a judge can decide whether something falls under the law.
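The first claim is easy to demonstrate: a small feed-forward network is literally one nested mathematical expression. A minimal sketch in plain Python, with weights chosen arbitrarily for illustration:

```python
def relu(v):
    """Elementwise max(0, x)."""
    return [max(0.0, x) for x in v]

def affine(W, b, x):
    """Matrix-vector product plus bias: W @ x + b."""
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def network(x):
    """The whole 'neural network' is one composed function:
    f(x) = W2 @ relu(W1 @ x + b1) + b2."""
    W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
    W2, b2 = [[1.0, 1.0]], [0.0]
    return affine(W2, b2, relu(affine(W1, b1, x)))

print(network([2.0, 1.0]))  # -> [2.5]
```

Whether that function was produced by gradient descent or typed in by hand is invisible in the artifact itself, which is exactly why technical definitions are circumventable.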

Just curious: what could that circumvention look like? Would it stem from the fact that ML refers to how a system was made, not to what it is or how it functions? So, in principle, one could obfuscate the system’s origin and pretend it isn’t related to ML?

`if ethnicity == X then deny loan`

That is an AI decision tree, and it is also entirely unethical.
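That one-rule “decision tree” also answers the circumvention question above: a rule learned by an ML pipeline can be shipped as a few lines of ordinary code, and nothing in the shipped artifact reveals its origin. A hedged sketch (the rule mirrors the post’s deliberately unethical example; every name here is illustrative):

```python
# The same learned rule, exported as ordinary code. Nothing in this
# function shows whether it came from an ML pipeline or a programmer.
def loan_decision(applicant: dict) -> str:
    # A one-node "decision tree" -- deliberately discriminatory,
    # to mirror the post's point: trivially "AI" under broad
    # definitions, and entirely unethical either way.
    if applicant.get("ethnicity") == "X":
        return "deny"
    return "approve"
```

This is why regulating the harmful behavior directly (here, discriminatory lending) is more robust than regulating “ML-made” systems.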

“When I remove the examples, this could mean any software!”

Yes. That is exactly where the examples would help. The term “AI” has become so general that it doesn’t mean anything by itself.

So how much do those examples really help?

A microwave ticks the boxes for me. It operated with some degree of autonomy, since I wasn’t in the kitchen watching over it. It took my inputs (power level and duration) and made decisions (switching the magnetron on and off at set intervals) that changed the physical environment in order to reach an implicit objective (heating my food).
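The microwave reading can be spelled out clause by clause. A toy sketch (the duty-cycle model and every name are my own assumptions, not how any real microwave firmware works):

```python
def microwave_plan(power_level: int, seconds: int) -> list:
    """From the user's inputs, derive ('infer'?) how to generate the
    output: a schedule of magnetron on/off pulses that affects the
    physical environment (the food)."""
    cycle = 10                            # seconds per cycle (toy value)
    on_time = cycle * power_level // 10   # power 0-10 sets the duty cycle
    plan = []
    for start in range(0, seconds, cycle):
        plan.append(("on", start, min(start + on_time, seconds)))
    return plan

print(microwave_plan(5, 20))  # -> [('on', 0, 5), ('on', 10, 15)]
```

Inputs in, a “decision” out, physical effects on the environment: under the Act’s wording, it is not obvious which clause this fails.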

You could argue that a microwave was obviously never meant to be covered. But what about the systems that are not such everyday cases?