All,
sooner or later we will face this issue, so we might as well start talking about it now.
I am not planning to start any technical discussion here about models, prompts, or tools. This is strictly about the legal aspects of AI. If you have the urge to discuss tools, please do so in another thread :)
The HA projects currently lack a policy (or policies) for dealing with AI-related code submissions.
It is my understanding that a lack of guidelines implicitly allows people to submit contributions generated by AI.
I started drafting a rather conservative doc here: https://github.com/ClusterLabs/ai-policies
with the goal of having all projects under the ClusterLabs/Corosync/Kronosnet umbrellas adopt a common set of rules from ClusterLabs/ai-policies, and of expanding the policy over time to allow AI usage for safe use cases.
Please submit comments, ideas, etc. directly on GitHub.
I have added only a subset of contributors to ai-policies; please let me know if you would like to be involved. The list of people submitting code/patches to all projects on a regular basis is rather large, and I am sure I am going to miss someone anyway :)
Cheers Fabio