"Fabio M. Di Nitto" <fdinitto(a)redhat.com> writes:
> there is a PR open to merge stable1-proposed that will keep running CI
> on each cherry-picked backport for now; when we feel the time is right
> to cut 1.1, we will just merge that PR at once.
So, this sounds like the point of stable1-proposed is to have CI, but I
can't see why stable1 couldn't be set up the same way. What do I miss?
"Fabio M. Di Nitto" <notifications(a)github.com> also writes:
> On 2/19/2018 10:43 AM, wferi wrote:
>> I don't really understand the master/stable1/stable1-proposed
>> distinction at the moment, because I can't see anything on master
>> which isn't 1.1 material, so we could pretty much release 1.1
>> straight from master.
> you are right, at the moment master and stable1-* haven't diverged yet,
> but they will in the future.
This becomes an interesting question as the 1.1 release approaches.
Basically, why complicate history by pretending 1.1 is being developed
in parallel with something else? Surely, there'll be a time when some
breaking change (not suitable for 1.x) will have already been merged
into master and still the need arises to cut a 1.x+1 release. Then we
could branch off stable1 from master right before the breaking change
and start doing parallel development when we're forced to. Till then,
I'd find it perfectly fine to tag new minor releases on the master
branch. Is that an oversimplification?
We are pleased to announce the general availability of kronosnet v1.1.
kronosnet (or knet for short) is the new underlying network protocol
for Linux HA components (corosync). It features the ability to use
multiple links between nodes, active/active and active/passive link
failover policies, automatic link recovery, FIPS-compliant encryption
(nss and/or openssl), automatic PMTUd, and in general better
performance compared to the old network protocol.
Highlights in this release:
* Fix plugins loader by switching from RPATH to RUNPATH
* Man pages are now automatically generated at build time
* Better error reporting from crypto plugins
* Fix and improve the whole build system
* Add support for some older lz4 versions
* Fix issue with UDP sockets that could cause knet to spin
* Add new API call to run knet unprivileged
Known issues in this release:
* When configuring compression plugins, the compress_level checks are
not always aligned with what the underlying library can really do.
This has been discussed over the devel mailing list and will be fixed
in libknet 1.2. Only compression advanced users might notice this
* Tarballs downloaded directly from github will fail to build
  (https://github.com/kronosnet/kronosnet/issues/133); please use the
  official release tarballs or a git clone.
The source tarballs can be downloaded here:
Upstream resources and contacts:
https://kronosnet.org/
https://github.com/kronosnet/kronosnet/
https://ci.kronosnet.org/
https://trello.com/kronosnet (TODO list and activities tracking)
https://goo.gl/9ZvkLS (google shared drive with presentations and
IRC: #kronosnet on Freenode
The knet developer team
original discussion started here (for reference):
int knet_handle_compress(knet_handle_t knet_h,
struct knet_handle_compress_cfg *knet_handle_compress_cfg);
Where knet_handle_compress_cfg contains int compress_level.
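For context, a caller configures compression roughly like this (a non-compiled sketch; the field names compress_model and compress_threshold follow libknet.h, and the error handling shown is an assumption):

/* Sketch only: fill the compression config and hand it to libknet.
 * compress_level is the value whose validation is under discussion. */
struct knet_handle_compress_cfg compress_cfg;

memset(&compress_cfg, 0, sizeof(compress_cfg));
strncpy(compress_cfg.compress_model, "zlib",
        sizeof(compress_cfg.compress_model) - 1);
compress_cfg.compress_threshold = 100; /* only compress packets above 100 bytes */
compress_cfg.compress_level = 6;

if (knet_handle_compress(knet_h, &compress_cfg) < 0) {
	/* level (or another parameter) was rejected; errno is set */
}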
As you can see from libknet.h and the related code, we try (too hard) to
validate the values for compress_level.
Feri had a legitimate concern about applying incorrect filtering, and I
agree that it is already becoming complex to maintain a matrix of valid
values per plugin, per plugin version, and every combination thereof.
AFAIR all plugins accept an int value for compress_level, so from that
perspective it shouldn't be too much of a problem (aka no API/ABI changes).
Historically, the reason I added the validation was lzo2, which presents
a rather unusual compression configuration compared to all the other
plugins (as you can see).
I think Feri's suggestion to drop the filter completely might be too
much. Instead, after thinking about it a bit, it might be perfectly
reasonable to change the internals of val_level to try to compress a
small buffer to test that the value is good.
val_level is used only at configuration time, so it's not too expensive
to compress one small data chunk for validation, and it would remove the
whole hardcoded filter of different compress_levels.
For special plugins like lzo2, I think it's important that we improve the
logging at configuration time to make sure users are at least warned
about the supported values (as you can see, each value maps to a
specific lzo2 API call for compression).