
Linux Archive (http://www.linux-archive.org/)
-   Gentoo Development (http://www.linux-archive.org/gentoo-development/)
-   -   Multiple ABI support through package appending/partial removal (http://www.linux-archive.org/gentoo-development/706768-multiple-abi-support-through-package-appending-partial-removal.html)

Michał Górny 09-23-2012 10:09 PM

Multiple ABI support through package appending/partial removal
 
Hello,

Since my previous idea of DYNAMIC_SLOTS proved too complex to design
and implement, I would like to offer another idea, based partially
on what Ciaran mentioned. Before I start getting into details, I'd like
to know your opinions, and what possible problems I am missing. To keep
it clean, I will focus on Python ABIs but other languages and multilib
could be handled in a similar manner.


The problem
===========

Right now, building packages for multiple Python ABIs is done using
USE_EXPAND-based useflags. This is a working solution but it requires
rebuilding the package for all ABIs whenever the chosen ABI list
changes.

While it may not be that important for most Python packages, it
becomes significant when it comes to things like boost or -- if we'd
extend that to multilib -- say, llvm. In that case, whenever a
newly-installed package requests a specific ABI, the user has to spend
twice as much time rebuilding the same version.


The general idea
================

While not getting too deep into ebuild syntax, the core part
of the idea is to mark some of the USE_EXPAND variables 'special'.
In this particular example, such a special flag group would be
'PYTHON_TARGETS'.

Now, let's consider a user installing a new package with only
python_targets_python2_7 enabled. The package is built and installed
as usual, but alongside the regular vdb files an additional file
is introduced, listing all the installed files as 'belonging'
to python_targets_python2_7.

If the user enables python_targets_python3_2 on the same package, the
PM doesn't trigger a full rebuild. Instead, it builds the package with
the new flag being the only flag in PYTHON_TARGETS. The new files are
installed over the installed package (and added to CONTENTS in vdb),
and the files in the install image are listed in vdb as 'belonging'
to python_targets_python3_2.

Whenever files from two ABIs collide, the package manager either
replaces the installed files, if the 'new' ABI is considered 'better'
than the old one, or preserves them. This follows the current behavior
when multiple ABIs are built, where later builds overwrite files from
earlier ones.
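The 'better ABI' rule is the part that needs an explicit ordering. A
minimal sketch of such a resolver, assuming a hypothetical preference
list supplied from outside (profile or eclass metadata -- nothing here
is an existing Portage API):

```python
# Hypothetical sketch: deciding which ABI's copy of a colliding file
# stays on disk. ABI_PREFERENCE is an assumed, externally-supplied
# ordering; no such list exists in Portage today.

ABI_PREFERENCE = ["python_targets_python3_2", "python_targets_python2_7"]

def winning_abi(colliding_abis, preference=ABI_PREFERENCE):
    """Return the preferred ABI among those claiming the same file."""
    for abi in preference:
        if abi in colliding_abis:
            return abi
    # Fallback mirrors current behaviour: last one merged wins.
    return colliding_abis[-1]
```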

At this point, the additional file contains something like
(ugly pseudo-syntax):

/usr/lib64/python2.7/foo.py python_targets_python2_7
/usr/lib64/python3.2/foo.py python_targets_python3_2
/usr/share/doc/foo-1.2.3/README.bz2 python_targets_python2_7 python_targets_python3_2

Now, if the user requests disabling python_targets_python2_7
on the package, the package manager need not rebuild it either.
Instead, it removes python_targets_python2_7 from the above list,
and unmerges the files which don't belong to any other ABI.

Sadly, this will not 'downgrade' common files to another ABI,
but I believe that is not really a killer feature.
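A toy model of this bookkeeping (plain Python, not Portage code; it
assumes the one-record-per-line format above, path first, owning flags
after it):

```python
def parse_belongs(text):
    """Parse the 'belongs' file: each line is a path followed by the
    USE_EXPAND flags whose builds installed that path."""
    belongs = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        path, *flags = line.split()
        belongs[path] = set(flags)
    return belongs

def drop_abi(belongs, flag):
    """Disable one ABI: strip its flag everywhere and return the files
    that no longer belong to any ABI (i.e. the ones to unmerge)."""
    orphans = []
    for path in list(belongs):
        belongs[path].discard(flag)
        if not belongs[path]:
            orphans.append(path)
            del belongs[path]
    return orphans
```

Dropping python_targets_python2_7 from the example above would unmerge
/usr/lib64/python2.7/foo.py but keep the shared README, as described.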


Installing new packages and upgrading existing
==============================================

Whenever a new package is to be built and multiple ABIs are requested,
the package manager should split the build process between particular
ABIs. Preferably, it should build all of them one-by-one, recording
the 'belongs' entries from the image and then install them as a single
package.

Whenever a package is to be upgraded, all ABIs have to be rebuilt.
The package manager can handle it as a regular package upgrade, giving
the 'belongs' entries no more consideration than in a fresh package
install.

Whenever a package is removed completely, the 'belongs' entries need
not be considered at all.
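Roughly, the fresh-install flow above -- building the requested ABIs
one-by-one and merging the result as a single package -- looks like
this (a sketch; build_one_abi is an assumed callback returning the
file list from that ABI's install image):

```python
def build_multi_abi(requested_abis, build_one_abi):
    """Build each ABI separately, recording per-ABI 'belongs' entries,
    and return the merged CONTENTS list plus the belongs mapping."""
    belongs = {}   # path -> set of ABI flags that installed it
    contents = []  # merged file list, in first-merged order
    for abi in requested_abis:
        for path in build_one_abi(abi):
            if path not in belongs:
                belongs[path] = set()
                contents.append(path)
            belongs[path].add(abi)
    return contents, belongs
```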


Backwards compatibility
=======================

The solution aims to be fully compatible with package managers
not supporting it. They should see it as a regular package with
selected useflags, and an additional opaque vdb file.

When such a package manager attempts to rebuild or upgrade such a
package, the vdb file should be removed, thus not introducing any
ambiguity for PMs supporting it. Package removal is unaffected
entirely.

--
Best regards,
Michał Górny

Ian Stakenvicius 09-24-2012 02:09 PM

Multiple ABI support through package appending/partial removal
 

On 23/09/12 06:09 PM, Michał Górny wrote:
> [ Snip! ]
>
> Now, let's consider a user installing a new package with only
> python_targets_python2_7 enabled. The package is built and
> installed as usual, but alongside the regular vdb files an
> additional file is introduced, listing all the installed files as
> 'belonging' to python_targets_python2_7.
>
> If the user enables python_targets_python3_2 on the same package,
> the PM doesn't trigger a full rebuild. Instead, it builds the
> package with the new flag being the only flag in PYTHON_TARGETS.
> The new files are installed over the installed package (and added
> to CONTENTS in vdb), and the files in the install image are listed
> in vdb as 'belonging' to python_targets_python3_2.
>
> Whenever files from two ABIs collide, the package manager either
> replaces the installed files, if the 'new' ABI is considered
> 'better' than the old one, or preserves them. This follows the
> current behavior when multiple ABIs are built, where later builds
> overwrite files from earlier ones.
>
> [ Snip! ]


This -could- be done, for testing purposes, entirely within an eclass,
if you'd like. Generate the file lists for each target during the
target-specific src_install phase and install 'em to
/usr/share/${PN}-${PVR}, and then read 'em back at src_prepare if the
package has already been installed. Worth a shot to see if this is
really doable..

For testing purposes (or maybe as an overall solution) src_install
could copy back all the currently-installed files from ${EROOT} into
${D} for the targets that are being kept.. (probably prior to the
'real' src_install functions, so updated files overwrite the old ones)
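Modeled in Python rather than an eclass (a hypothetical helper, not an
existing PM or eclass function), the copy-back step described above
amounts to re-staging the kept targets' files into the image before
the new build's files land:

```python
import os
import shutil

def copy_back(kept_files, eroot, image_d):
    """Re-stage currently-installed files (paths relative to the root)
    from the live filesystem (eroot) into the install image (image_d),
    so the subsequent 'real' install overwrites any stale copies."""
    for rel in kept_files:
        src = os.path.join(eroot, rel)
        dst = os.path.join(image_d, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)  # preserves mtimes/permissions
```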


Brian Harring 09-25-2012 08:12 PM

Multiple ABI support through package appending/partial removal
 
On Mon, Sep 24, 2012 at 12:09:49AM +0200, Michał Górny wrote:
> [ Snip! ]
>
> If the user enables python_targets_python3_2 on the same package,
> the PM doesn't trigger a full rebuild. Instead, it builds the
> package with the new flag being the only flag in PYTHON_TARGETS.
> The new files are installed over the installed package (and added to
> CONTENTS in vdb), and the files in the install image are listed in
> vdb as 'belonging' to python_targets_python3_2.

What you're proposing would litter the ebuild/eclass with has_version
checks; in brain-dead simple cases, you can replace parts of the pkg
as you're proposing there.

However, if it installs scripts, things start getting more complex;
the ebuild needs to vary how it installs if it's overlaying part of
itself.

This proposal also doesn't work in the face of := slot deps either,
not unless you've got a way to ensure, potentially weeks/months after
the first build, that the node locks to the same slotting.


> Whenever files from two ABIs collide, the package manager either
> replaces the installed files, if the 'new' ABI is considered
> 'better' than the old one, or preserves them. This follows the
> current behavior when multiple ABIs are built, where later builds
> overwrite files from earlier ones.

This is handwavey and kind of crack-addled; the PM has no way of
knowing which USE_EXPAND target is considered 'best', so in the case
of multilib (say 64b and 32b subversion) there isn't any way to tell
which svn binary should be there - 64b or 32b. Best I can tell, your
proposal winds up just being "last one to merge wins", which isn't
acceptable.


> At this point, the additional file contains something like
> (ugly pseudo-syntax):
>
> /usr/lib64/python2.7/foo.py python_targets_python2_7
> /usr/lib64/python3.2/foo.py python_targets_python3_2
> /usr/share/doc/foo-1.2.3/README.bz2 python_targets_python2_7 python_targets_python3_2
>
> Now, if the user requests disabling python_targets_python2_7
> on the package, the package manager need not rebuild it either.
> Instead, it removes python_targets_python2_7 from the above list,
> and unmerges the files which don't belong to any other ABI.

If we're going to do sub-packaging... which is what you're attempting
here... the VDB backend for it minimally cannot be a one off
USE_EXPAND hack. That'll just back us into a corner- which the vdb
already does quite heavily.

Any subpackaging content tracking needs to be generic and usable.
Really is that simple.


> Sadly, this will not 'downgrade' common files to another ABI,
> but I believe that is not really a killer feature.
>
>
> Installing new packages and upgrading existing
> ==============================================
>
> Whenever a new package is to be built and multiple ABIs are requested,
> the package manager should split the build process between particular
> ABIs. Preferably, it should build all of them one-by-one, recording
> the 'belongs' entries from the image and then install them as a single
> package.

And how does the package know that it's being targeted at multiple
ABIs?

Your proposal is built on the assumption ebuilds will happily overlay
themselves in differing configurations w/out ever fucking up.

That's frankly not the case, and worse... for the cases where it
doesn't fly, your proposal basically requires the PM to hide the
"we're building/installing your ass multiple times" information from
the ebuild, further compounding the issue.


> Whenever a package is to be upgraded, all ABIs have to be rebuilt.
> The package manager can handle it as a regular package upgrade,
> giving the 'belongs' entries no more consideration than in a fresh
> package install.
>
> Whenever a package is removed completely, the 'belongs' entries need
> not be considered at all.
>
>
> Backwards compatibility
> =======================
>
> The solution aims to be fully compatible with package managers
> not supporting it. They should see it as a regular package with
> selected useflags, and an additional opaque vdb file.
>
> When such a package manager attempts to rebuild or upgrade such a
> package, the vdb file should be removed, thus not introducing any
> ambiguity for PMs supporting it. Package removal is unaffected
> entirely.

*cough* you forgot about the saved environment.

To run the pkg_postinst of an installed pkg, you need to run it within
that saved environment. That's hard law when it comes to ebuilds.

You're proposing generating multiple environments, not addressing
which is used. A rather *fatal* flaw there is the assumption that the
ebuild/eclasses in the tree at the time of ABI #2 are going to be the
same/compatible as what's in the tree at the time of ABI #1. That
potential alone makes env handling fucktons worse.

Just heading it off also, you cannot sanely slap together multiple env
dumps and hope it works; minimally, the USE metadata, vars calculated
from the run, etc, will collide and the last one to merge will be
what's seen; whether that's right or wrong.


Bluntly, this feels like an attempt to duct-tape multilib on.
Literally, I've spent ~5m reading this proposal, then inlining the
faults I see, and there are multiple semi-fatal issues in it.

If you want to do multilib, aim for something that isn't a hack of
existing PM behaviour; whatever we do has to work well, no insane edge
cases, etc.

Keep in mind, were this to land, it's not a one-off feature; it lands,
it stays as a core part of the pm/format from that point forward,
meaning fuckups in it bite us in the ass long term.

~harring

Michał Górny 09-26-2012 06:35 AM

Multiple ABI support through package appending/partial removal
 
On Tue, 25 Sep 2012 13:12:56 -0700
Brian Harring <ferringb@gmail.com> wrote:

> On Mon, Sep 24, 2012 at 12:09:49AM +0200, Michał Górny wrote:
> > [ Snip! ]
>
> What you're proposing would litter the ebuild/eclass with
> has_version checks; in brain-dead simple cases, you can replace
> parts of the pkg as you're proposing there.
>
> However, if it installs scripts, things start getting more complex;
> the ebuild needs to vary how it installs if it's overlaying part of
> itself.

That's the idea. You're given a tool, now thinking twice before using
it.

> This proposal also doesn't work in the face of := slot deps either,
> not unless you've got a way to ensure, potentially weeks/months after
> the first build, that the node locks to the same slotting.

A rebuild then?

> > Whenever files from two ABIs collide, the package manager either
> > replaces the installed files, if the 'new' ABI is considered
> > 'better' than the old one, or preserves them. This follows the
> > current behavior when multiple ABIs are built, where later builds
> > overwrite files from earlier ones.
>
> This is handwavey and kind of crack-addled; the PM has no way of
> knowing which USE_EXPAND target is considered 'best', so in the case
> of multilib (say 64b and 32b subversion) there isn't any way to tell
> which svn binary should be there - 64b or 32b. Best I can tell, your
> proposal winds up just being "last one to merge wins", which isn't
> acceptable.

This is just an early idea. Details like precedence haven't been
converted into any syntax yet.

> > At this point, the additional file contains something like
> > (ugly pseudo-syntax):
> >
> > /usr/lib64/python2.7/foo.py python_targets_python2_7
> > /usr/lib64/python3.2/foo.py python_targets_python3_2
> > /usr/share/doc/foo-1.2.3/README.bz2 python_targets_python2_7 python_targets_python3_2
> >
> > Now, if the user requests disabling python_targets_python2_7
> > on the package, the package manager need not rebuild it either.
> > Instead, it removes python_targets_python2_7 from the above list,
> > and unmerges the files which don't belong to any other ABI.
>
> If we're going to do sub-packaging... which is what you're attempting
> here... the VDB backend for it minimally cannot be a one off
> USE_EXPAND hack. That'll just back us into a corner- which the vdb
> already does quite heavily.
>
> Any subpackaging content tracking needs to be generic and usable.
> Really is that simple.

Give a better explanation if you want me to follow your thoughts.

> > Sadly, this will not 'downgrade' common files to another ABI
> > but I believe that it is not really a killer-feature.
> >
> >
> > Installing new packages and upgrading existing
> > ==============================================
> >
> > Whenever a new package is to be built and multiple ABIs are requested,
> > the package manager should split the build process between particular
> > ABIs. Preferably, it should build all of them one-by-one, recording
> > the 'belongs' entries from the image and then install them as a single
> > package.
>
> And how does the package know that it's being targeted at multiple
> ABIs?

It assumes it always is. Much like python-distutils-ng does now.

> Your proposal is built on the assumption ebuilds will happily
> overlay themselves in differing configurations w/out ever fucking
> up.
>
> That's frankly not the case, and worse... for the cases where it
> doesn't fly, your proposal basically requires the PM to hide the
> "we're building/installing your ass multiple times" information from
> the ebuild, further compounding the issue.

How does it require anything to be hidden? This is an early proposal;
just because it doesn't say anything about variables doesn't mean
there can't be any.

> > Whenever a package is to be upgraded, all ABIs have to be rebuilt.
> > The package manager can handle it as a regular package upgrade,
> > giving the 'belongs' entries no more consideration than in a fresh
> > package install.
> >
> > Whenever a package is removed completely, the 'belongs' entries
> > need not be considered at all.
> >
> >
> > Backwards compatibility
> > =======================
> >
> > The solution aims to be fully compatible with package managers
> > not supporting it. They should see it as a regular package with
> > selected useflags, and an additional opaque vdb file.
> >
> > When such a package manager attempts to rebuild or upgrade such a
> > package, the vdb file should be removed, thus not introducing any
> > ambiguity for PMs supporting it. Package removal is unaffected
> > entirely.
>
> *cough* you forgot about the saved environment.
>
> To run the pkg_postinst of an install pkg, you need to run it within
> that saved environment. That's hard law when it comes to ebuilds.
>
> You're proposing generating multiple environments, not addressing
> which is used. A rather *fatal* flaw there is the assumption that the
> ebuild/eclasses in the tree at the time of ABI #2 are going to be the
> same/compatible as what's in the tree at the time of ABI #1. That
> potential alone makes env handling fucktons worse.
>
> Just heading it off also, you cannot sanely slap together multiple env
> dumps and hope it works; minimally, the USE metadata, vars calculated
> from the run, etc, will collide and the last one to merge will be
> what's seen; whether that's right or wrong.

That's a fair point.

> Bluntly, this feels like an attempt to duct-tape multilib on.
> Literally, I've spent ~5m reading this proposal, then inlining the
> faults I see, and there are multiple semi-fatal issues in it.
>
> If you want to do multilib, aim for something that isn't a hack of
> existing PM behaviour; whatever we do has to work well, no insane
> edge cases, etc.
>
> Keep in mind, were this to land, it's not a one-off feature; it
> lands, it stays as a core part of the pm/format from that point
> forward, meaning fuckups in it bite us in the ass long term.

Also, please watch your language. This is a public mailing list, not
a public toilet.

--
Best regards,
Michał Górny

Brian Harring 09-26-2012 10:57 AM

Multiple ABI support through package appending/partial removal
 
On Wed, Sep 26, 2012 at 08:35:37AM +0200, Michał Górny wrote:
> On Tue, 25 Sep 2012 13:12:56 -0700
> Brian Harring <ferringb@gmail.com> wrote:
>
> > On Mon, Sep 24, 2012 at 12:09:49AM +0200, Michał Górny wrote:
> > > [ Snip! ]
> >
> > What you're proposing would litter the ebuild/eclass with
> > has_version checks; in brain-dead simple cases, you can replace
> > parts of the pkg as you're proposing there.
> >
> > However, if it installs scripts, things start getting more
> > complex; the ebuild needs to vary how it installs if it's
> > overlaying part of itself.
>
> That's the idea. You're given a tool, now thinking twice before using
> it.

This statement doesn't make sense. Clarify it.


> > This proposal also doesn't work in the face of := slot deps
> > either, not unless you've got a way to ensure, potentially
> > weeks/months after the first build, that the node locks to the
> > same slotting.
>
> A rebuild then?

More logic to dump on the PM. Yes, you can hack around it- I pointed
it out because the existence of that corner case says something about
the proposal.


> > > Whenever files from two ABIs collide, the package manager either
> > > replaces the installed files, if the 'new' ABI is considered
> > > 'better' than the old one, or preserves them. This follows the
> > > current behavior when multiple ABIs are built, where later
> > > builds overwrite files from earlier ones.
> >
> > This is handwavey and kind of crack-addled; the PM has no way of
> > knowing which USE_EXPAND target is considered 'best', so in the
> > case of multilib (say 64b and 32b subversion) there isn't any way
> > to tell which svn binary should be there - 64b or 32b. Best I can
> > tell, your proposal winds up just being "last one to merge wins",
> > which isn't acceptable.
>
> This is just an early idea. Details like precedence weren't converted
> into any syntax yet.

Precedence is a core requirement of anything that's more than just a
library; python would've already been required to address this if we
didn't have that damn wrapper in place (when a multislotting proposal
goes through, that wrapper should die in the process).


> > > At this point, the additional file contains something like
> > > (ugly pseudo-syntax):
> > >
> > > /usr/lib64/python2.7/foo.py python_targets_python2_7
> > > /usr/lib64/python3.2/foo.py python_targets_python3_2
> > > /usr/share/doc/foo-1.2.3/README.bz2 python_targets_python2_7 python_targets_python3_2
> > >
> > > Now, if the user requests disabling python_targets_python2_7
> > > on the package, the package manager need not rebuild it either.
> > > Instead, it removes python_targets_python2_7 from the above
> > > list, and unmerges the files which don't belong to any other
> > > ABI.
> >
> > If we're going to do sub-packaging... which is what you're attempting
> > here... the VDB backend for it minimally cannot be a one off
> > USE_EXPAND hack. That'll just back us into a corner- which the vdb
> > already does quite heavily.
> >
> > Any subpackaging content tracking needs to be generic and usable.
> > Really is that simple.
>
> Give a better explanation if you want me to follow your thoughts.

It's sub-packaging. Having packages within packages, with the ability
to add/remove chunks of a package without forcing full rebuilds of the
gestalt.

It is as it sounds.

A subpkging example beyond this would be storing splitdebug symbols in
a separate tbz2, and pulling that down- merging/unmerging on the fly.

Any such proposal as yours needs to recognize subpkging, and minimally
not make that situation worse.


> > > Sadly, this will not 'downgrade' common files to another ABI
> > > but I believe that it is not really a killer-feature.
> > >
> > >
> > > Installing new packages and upgrading existing
> > > ==============================================
> > >
> > > Whenever a new package is to be built and multiple ABIs are requested,
> > > the package manager should split the build process between particular
> > > ABIs. Preferably, it should build all of them one-by-one, recording
> > > the 'belongs' entries from the image and then install them as a single
> > > package.
> >
> > And how does the package know that it's being targeted at multiple
> > ABIs?
>
> It assumes it always is. Much like python-distutils-ng does now.

Installation of binaries/scripts is where that most strongly applies -
python libraries are mostly, by nature, slotted in how they install.
With the exception of docs, snakeoil doesn't conflict across multiple
targeted python versions, for example.

If however we were talking about pkgcore, which *does* install things
into the ${PATH}, that question stands; it's an offshoot of the
precedence bit I poked at earlier.


> > Your proposal is built on the assumption ebuilds will happily
> > overlay themselves in differing configurations w/out ever fucking
> > up.
> >
> > That's frankly not the case, and worse... for the cases where it
> > doesn't fly, your proposal basically requires the PM to hide the
> > "we're building/installing your ass multiple times" information
> > from the ebuild, further compounding the issue.
>
> How does it require anything to be hidden? This is an early
> proposal; just because it doesn't say anything about variables
> doesn't mean there can't be any.

Your counterargument is "nyuh uh; we can add variables to deal
with it". I had to write the critique there based on what you
wrote... not on what's in your head.

Reiterating: any code that is has_version aware is going to have to
be adjusted for this.

Any code that is REPLACING_VERSIONS or REPLACED_BY_VERSIONS aware,
same thing; those signals (and ones like them) are now wanged up due
to the fact that the chunk of code is going to be invoked multiple
times, not knowing that its sibling ABI target already ran -
potentially doing something about it.

Basically, if you try to run the phases all on their own without them
truly knowing what's going on, it's going to make corner cases/issues
for devs pop up.

Now, if you have some vars you'd like to propose to deal with it, do
so; but that core issue is there and must be addressed.


~harring

