The European Union's recently adopted AI Act defines an "AI system" as follows (emphasis added):

"'AI system' means a machine-based system that is designed to operate with *varying levels of autonomy* and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, *infers*, from the input it receives, *how* to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
If we strip away the examples and the optional clauses, neither of which is essential for classifying something as AI, the definition becomes:
An "AI system" is a machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs.
This definition is broad enough to cover nearly any software ever written, including the toy programs from an introductory university programming course.
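As a hypothetical illustration (the scenario and names are mine, not the Act's), here is the kind of program a first-semester student might write. Read literally against the stripped-down definition, it is machine-based, runs with some level of autonomy once started, and derives a "decision" from the input it receives:

```python
# A toy program of the kind written in an introductory course.
# Once started it runs without human intervention ("some level of
# autonomy") and derives a decision from its inputs.

def heating_decision(temperature_c: float, target_c: float = 21.0) -> str:
    """Derive a 'decision' output from the input temperature."""
    if temperature_c < target_c:
        return "heat on"   # a "decision" influencing a physical environment
    return "heat off"

for reading in (18.5, 21.2, 23.0):
    print(reading, "->", heating_decision(reading))
```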
To initiate the discussion:
The word "infer" could be read in a statistical sense, but that reading would be too narrow. More importantly, "to infer" is not the same as "statistical inference," which deals with uncertainty, confidence levels, and so on. Here "infer" refers to reasoning in the ordinary sense, as Merriam-Webster defines it: "to derive as a conclusion from facts or premises."
The word "how" is also problematic: most AI systems do not decide how to generate their outputs. The algorithm is fixed by the developer and does not typically change during execution.
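A minimal sketch of the point, assuming a trained model with fixed weights (the values below are stand-ins, not a real model): at run time, even a neural network executes one fixed sequence of operations; nothing about "how" the output is produced is decided by the system itself.

```python
import numpy as np

# Inference in a trained model is a fixed computation: the weights W, b
# and the sequence of operations never change while the program runs.
# The system does not decide "how" to generate its output.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # weights fixed after training (stand-in values)
b = rng.normal(size=2)

def predict(x: np.ndarray) -> np.ndarray:
    """Always the same algorithm: an affine map followed by a softmax."""
    logits = x @ W + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

print(predict(np.array([0.1, -0.4, 0.7])))
```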
Lastly, "varying levels of autonomy" is too vague: it sets no minimum threshold of autonomy below which a system would not count as an AI system.
And no, "laws must be interpreted by judges" is not a valid counterargument here. The EU follows the civil law tradition, in which statutes are meant to be as clear and unambiguous as possible, unlike common law, where judicial interpretation builds a body of precedent. As Wikipedia puts it: "While civil law is codified in legal codes, common law is derived from uncodified case law developed through judicial decisions."