Regulation in IT is weak compared to other professions. If you want to be a construction engineer or an architect, for example, you must comply with multiple rules, and there is a chamber that can grant and revoke your license. You also have to follow an ethical code put forward by the regulating body. The same is true for doctors and lawyers, and in Slovakia even for barbers.
Why are there no licenses for software engineers, system engineers, or software architects? Probably because there has not yet been a major catastrophe caused by a software issue. Sure, there are known incidents such as a space shuttle blowing up, data being exposed, or money being lost, but nothing as big as Chernobyl, Fukushima, or even a building collapse where many people died and the environment was affected on a large scale (if you know of one, let me know in the comments).
Also, there is quite a lot of self-regulation in IT, where companies agree on common standards (ISO, IEEE, etc.) to follow. Besides, most critical infrastructure follows strict rules even without a law in place (think of NASA's coding policies or code validation for nuclear reactors).
Now, with AI, that self-regulation seems to be going out the window. Everyone is so excited, and the competition is so fierce, that security has become secondary. Will we see the first major catastrophe, one that brings hard regulation to IT instead of soft (internal) rules? We will see.