By August 2026, manufacturing companies will be required to fully comply with the European AI Act. Through a practical checklist, we guide you toward a safe transition, turning regulatory obligations into a competitive advantage for the industry of the future, built on trust and worker safety.
The countdown to the European regulatory revolution has entered its most critical phase. As the full implementation of the AI Act approaches in August 2026, artificial intelligence in manufacturing is evolving: it is no longer just an efficiency lever, but an asset that demands operational compliance and rigorous responsibility management.
For a manufacturing company, implementing safe AI today means integrating transparency and robustness requirements directly into the production workflow.
What the AI Act is and what it means for businesses
The AI Act (EU Regulation 2024/1689) is the world’s first comprehensive law on artificial intelligence. Approved by the European Parliament to ensure that AI systems used within the Union are safe, transparent, and respectful of fundamental rights, the regulation introduces a legal framework based on risk classification.
Unlike other technology regulations, the AI Act does not govern the technology “per se,” but regulates its specific uses. This means the same algorithm can be considered low-risk when used to filter spam emails, or high-risk if used to monitor the safety of a hydraulic press on the factory floor.
What does it concretely mean for businesses?
The enforcement of the regulation brings a paradigm shift on three fronts:
- Mandatory classification: Every company must map its software to determine whether it falls into the “Unacceptable” (prohibited), “High,” “Limited,” or “Minimal” risk categories.
- Certification and labeling: For high-risk systems, compliance becomes a legal requirement for sale and use, similar to electrical or mechanical safety (CE marking).
- Data governance: Companies are required to demonstrate how AI has been trained, minimizing the risk of faulty decisions that could cause physical harm or discrimination among workers.
In summary, the AI Act moves artificial intelligence from the “wild west” of experimental development to a regulated environment where accountability is the central pillar.
August 2026: the deadline for manufacturing
Although the AI Act has been phased in gradually, August 2, 2026, marks the date when the regulation becomes fully binding for most systems.
Companies now have less than six months to complete the transition. Ignoring the deadline exposes them to penalties: national authorities are already preparing enforcement in 2026 through preventive audits. Fines can reach €35 million or 7% of global annual turnover, whichever is higher, but the greatest risk is the forced shutdown of non-compliant systems, which can paralyze entire production lines.
Are you a “Provider” or a “Deployer”? The chain of responsibility
One of the most common mistakes in factories is assuming that all responsibility lies with the software provider. The AI Act clearly distinguishes the roles:
- Provider: the entity that develops the AI system. Bears the heaviest obligations: CE marking, technical documentation, and post-market monitoring systems.
- Deployer: the manufacturing company that uses AI on the factory floor. Responsible for using the system according to instructions, ensuring the quality of input data, and—most importantly—providing human oversight.
Warning: If your company substantially modifies a third-party AI system to adapt it to a specific production line, you may be reclassified as a “Provider,” inheriting all the legal obligations of certification.
Identifying high-risk AI
An AI system used in manufacturing is high-risk under two routes: when it acts as the safety component of a product covered by EU harmonisation legislation, such as machinery (Article 6), or when it falls into one of the areas listed in Annex III, which include employment and worker management as well as critical infrastructure. In practice, this captures most AI that impacts workers’ health and safety.
Critical examples in manufacturing:
- Safety and Robotics: Algorithms that control the trajectory of collaborative robots (Cobots) in the presence of humans.
- Critical Predictive Maintenance: Systems that decide when to shut down equipment to prevent explosions or environmental accidents.
- HR and Monitoring: Software that evaluates operator performance or monitors their behavior for workplace safety (note: emotion recognition in the workplace is prohibited).
- Digital Infrastructure: AI systems that manage the factory’s energy networks.
Integration with the Machinery Regulation (EU Regulation 2023/1230)
In 2026, compliance will not happen in isolation. For manufacturers or integrators of machinery, the AI Act overlaps with the new Machinery Regulation. If AI performs a safety function for a machine, the conformity assessment must be integrated. A “safe AI” must therefore be tested not only as software but as a mechatronic component that meets functional safety standards.
The 5 Technical Pillars for Operational Compliance
To reach operational compliance, companies should focus on these five pillars:
- Data Governance: Datasets used for training and fine-tuning must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete (Article 10). On the factory floor, this means tracking the origin of every sensor data point; a minimal provenance sketch follows this list.
- Automated Logging: The system must generate logs that allow every decision made by the algorithm to be reconstructed. Traceability is at the heart of accountability; see the logging sketch after this list.
- Transparency (Explainability): Line operators must understand why the AI flags an anomaly. Clear technical manuals are legally required.
- Robustness and Cybersecurity: AI must withstand attempts at manipulation (data poisoning) that could alter production parameters.
- Human-in-the-Loop: It must always be possible for a human to “pull the plug” or override the AI’s decision.
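To make the data-governance pillar concrete, here is a minimal sketch of one way to attach provenance metadata to factory sensor readings before they enter a training set. All names (SensorReading, calibration_ref, and so on) are hypothetical illustrations, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class SensorReading:
    """One data point with the provenance that data governance
    pushes you to keep: source, time, calibration, transforms."""
    sensor_id: str             # physical source, e.g. "press-03/vibration"
    value: float
    unit: str
    captured_at: str           # ISO 8601 timestamp, UTC
    calibration_ref: str       # last calibration certificate for the sensor
    preprocessing: tuple = ()  # ordered transforms applied before storage

    def fingerprint(self) -> str:
        """Content hash, so a training dataset can later prove
        exactly which raw points it was built from."""
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

reading = SensorReading(
    sensor_id="press-03/vibration",
    value=4.7,
    unit="mm/s",
    captured_at=datetime.now(timezone.utc).isoformat(),
    calibration_ref="CAL-2025-118",
    preprocessing=("outlier_clip", "resample_10hz"),
)
print(reading.fingerprint()[:16])  # stable ID stored alongside the dataset
```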
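The logging and human-in-the-loop pillars can be combined in a single pattern: every algorithmic decision is written to an append-only log together with any operator override. The sketch below uses assumed names (log_decision, ai_decisions.jsonl are illustrative) and is a starting point, not a reference implementation.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only decision log: one JSON line per decision, enough to
# reconstruct what the system saw, what it decided, and who intervened.
logging.basicConfig(filename="ai_decisions.jsonl",
                    level=logging.INFO, format="%(message)s")

def log_decision(system_id: str, inputs: dict, decision: str,
                 confidence: float, override: str | None = None) -> None:
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system_id,
        "inputs": inputs,        # snapshot of the features used
        "decision": decision,
        "confidence": confidence,
        "override": override,    # None unless a human stepped in
    }))

# Human-in-the-loop: the AI proposes, the line operator decides.
proposal = "halt_press_03"
answer = input(f"AI proposes '{proposal}' (confidence 0.91). Apply? [y/n] ")
log_decision("press-03-safety-v2", {"vibration_mm_s": 4.7}, proposal, 0.91,
             override=None if answer == "y" else "operator_rejected")
```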
The Human Factor: AI Literacy
Article 4 of the AI Act introduces the obligation of AI literacy, applicable since February 2025. Companies must ensure that personnel operating these systems have the necessary skills. This is not just technical training, but also awareness of the risks and limitations of the technology. An informed workforce is the first line of defense against non-compliance.
What to do between now and August 2026?
- Inventory Audit: Map every piece of software that uses probabilistic logic (not just those “sold” as AI); a simple register sketch follows this list.
- Gap Analysis: Identify which systems fall into the High-Risk category.
- Vendor Assessment: Request compliance certifications or update plans from your industrial software providers.
- Regulatory Sandboxes: If you are developing AI in-house, consider accessing national “Sandboxes” to test systems in a controlled environment before deployment.
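As a starting point for the inventory audit and gap analysis, a simple register like the one below can be enough to force the right questions about every system. The classification heuristic is deliberately crude and hypothetical; the legally binding classification must follow Article 6 and Annex III, not this shortcut.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: str                # our role for this system: "provider" or "deployer"
    safety_function: bool    # does it control or protect machinery?
    affects_workers: bool    # HR decisions, monitoring, performance scoring?

    def first_pass_tier(self) -> RiskTier:
        # Crude screening heuristic for a first-pass gap analysis only.
        if self.safety_function or self.affects_workers:
            return RiskTier.HIGH
        return RiskTier.MINIMAL

inventory = [
    AISystemRecord("cobot-trajectory-planner", "VendorX", "deployer", True, False),
    AISystemRecord("email-spam-filter", "VendorY", "deployer", False, False),
]
for system in inventory:
    print(f"{system.name}: {system.first_pass_tier().value}")
```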
Compliance as a Competitive Advantage
The AI Act is not a “necessary evil,” but a framework that will build market trust. Companies that achieve compliance by August 2026 will be the only ones able to operate without interruptions and provide their international partners with a transparent and secure supply chain.
At AzzurroDigitale, we guide companies through this journey, turning regulatory constraints into operational excellence.