09-16-2012, 01:56 PM
Xyne

Moving repos to nymeria

Tom Gundersen wrote:

>> On 16.09.2012 08:34, Jan Steffens wrote:
>>> I want to avoid anything that requires me to upload the DB from my computer.
>
>[...]
>
>>> That would be over 7MB that I would have to download and upload

Why can't the following procedure be used?

1) update the database on the server
2) download it
3) check it and sign it
4) upload the signature
5) check that the signature matches on the server

The database would only need to be locked during step 1. If user B updates it
while user A is in the process of signing it, step 5 will ensure that the
uploaded signature from user A is rejected and that user B's signature is kept,
even if user B manages to upload a signature before user A.

Advantages:
* no complicated locking
* local signing (i.e. no keys on server)
* minimal upload
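
For concreteness, steps 2-5 might look something like the following on the
packager's side. This is only a sketch: the host name and paths are made up,
and "check-db" stands for a hypothetical checking helper, not an existing tool.

# Hypothetical sketch of steps 2-5, seen from the packager's machine.
scp nymeria:/srv/ftp/core/os/x86_64/core.db.tar.gz .      # 2) download
check-db core.db.tar.gz                                   # 3) check it (hypothetical helper)
gpg --detach-sign core.db.tar.gz                          # 3) sign it locally
scp core.db.tar.gz.sig nymeria:/srv/ftp/core/os/x86_64/   # 4) upload only the signature
# 5) on the server (e.g. in a hook): keep the signature only if it
#    matches the current database
ssh nymeria 'gpg --verify /srv/ftp/core/os/x86_64/core.db.tar.gz.sig \
                          /srv/ftp/core/os/x86_64/core.db.tar.gz'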



>Would we really need to sign the full 7MB database? Could we not come
>up with something more minimal to sign that would still be sufficient?
>Alternatively, we need to get Jan a better connection

This is something that came up before when discussing signing of the [haskell]
repo. The problem there is that Magnus builds the packages remotely and just
doesn't have the bandwidth to download the entire repo and sign it. We found no
solution, because the only way to verify the integrity of the file is to check
the entire file. Anything generated on the server (e.g. a list of checksums)
could be compromised if an attacker managed to gain access.

Security costs bandwidth. There does not seem to be any way around it.



>We don't need to lock the database for the duration of the
>download/sign/upload. We could simply:
>
> * check the timestamp of the old database
> * download the database
> * check the old signature
> * update the database and sign the new version
> * upload the database
> * lock the database on the server
> * check if the timestamp has changed
> * if yes, release the lock and start from scratch
> * if no, overwrite it with your new version and release the lock
>
>This means that you might need to retry once or twice if more than one
>person is updating the database, so it does not scale that well.
>However, we are not that many people and we don't update the database
>that often, so the chance of actually getting a conflict is low (and
>the additional cost is not that high either).

The procedure that I outlined above should avoid these conflicts altogether. The
signature that matches the most recent version of the database wins,
regardless of the order of upload. It should work as long as the database is
locked when updated on the server.
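
That server-side lock could be as simple as running the update under flock(1);
the paths below are made up for illustration.

# Only step 1 holds the lock; downloading, signing and uploading the
# signature all happen outside of it.
flock /srv/ftp/core/os/x86_64/.core.lck \
    repo-add /srv/ftp/core/os/x86_64/core.db.tar.gz foo-1.0-1-x86_64.pkg.tar.xz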


Regards,
Xyne
 
09-16-2012, 02:07 PM
Allan McRae

Moving repos to nymeria

On 16/09/12 23:56, Xyne wrote:
> Tom Gundersen wrote:
>
>>> On 16.09.2012 08:34, Jan Steffens wrote:
>>>> I want to avoid anything that requires me to upload the DB from my computer.
>>
>> [...]
>>
>>>> That would be over 7MB that I would have to download and upload
> Why can't the following procedure be used?
>
> 1) update the database on the server
> 2) download it
> 3) check it and sign it
> 4) upload the signature
> 5) check that the signature matches on the server
>
> The database would only need to be locked during step 1. If user B updates it
> while user A is in the process of signing it, step 5 will ensure that the
> uploaded signature from user A is rejected and that user B's signature is kept,
> even if user B manages to upload a signature before user A.
>
> Advantages:
> * no complicated locking
> * local signing (i.e. no keys on server)
> * minimal upload
>

What does "check it and sign it" mean? Diff it to the old and signed
database?

Anyway, I think it would need to be locked throughout. If B updates the
database while A is uploading, that is no different from bad guy C
adjusting the database and leaving it for someone to sign on the next
addition. The only way to maintain what would be a chain of trust -
where we can link each database update to the previous database - is to
have the current db signature checked before adding the new packages and
resigning.

Worst case scenario is that you move stuff from [testing] to [core] and
[extra], so you need to download three databases - probably less than 2MB
in total - and then upload three signatures. I am ignoring signing the
.files databases...
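
As a rough sketch of that worst case (the URL and paths are invented):

for repo in testing core extra; do
    curl -O "https://nymeria/$repo/os/x86_64/$repo.db.tar.gz"      # download
    gpg --detach-sign "$repo.db.tar.gz"                            # sign locally
    scp "$repo.db.tar.gz.sig" "nymeria:/srv/ftp/$repo/os/x86_64/"  # upload only the sig
done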
 
09-16-2012, 04:03 PM
Xyne

Moving repos to nymeria

Allan McRae wrote:

>What does "check it and sign it" mean? Diff it to the old and signed
>database?

By "check it" I mean check that each signature in the database is authentic and
trusted, and that every package in the database is signed. I thought there was
an easy way to verify each signature's authenticity without also verifying the
file's integrity, i.e. confirm that foo.sig was indeed created by user x without
caring if it matches foo (pacman handles that).

Looking at the command-line options for gpg I do not see any way to do this
directly, but that information is contained in the file, e.g.

$ wget foo.sig
$ touch foo
$ gpg --verify foo.sig
gpg: Signature made ... using RSA key ID ...
gpg: BAD signature from ...

The ID and other data can also be dumped using pgpdump (pgpdump-git in AUR).

It should be possible to write a simple tool to extract the key ID from each
signature (e.g. using gpgme or a wrapper shell script). As long as each file in
the database is or appears to be signed by a trusted key, it should be secure.
Pacman will check each signature during installation. Even if the signature
ID was somehow forged, the integrity check should fail. (If valid signatures
can be forged then the whole system is useless anyway.)
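
For example, assuming the signatures have been extracted from the database as
detached *.sig files (the directory layout below is invented), the key IDs
could be pulled out like this and then compared against the trusted packager
keys:

for sig in db/*/*.sig; do
    # gpg --list-packets prints a ":signature packet:" line whose last
    # field is the issuing key ID
    keyid=$(gpg --list-packets "$sig" 2>/dev/null |
            awk '/^:signature packet:/ { print $NF; exit }')
    printf '%s  %s\n' "${keyid:-unknown}" "$sig"
done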

This approach will obviously involve some overhead as the ID of each signature
will need to be extracted and checked, but that should not be significant
compared to the overhead of package building. The advantage that I see in this
approach versus the one below is that you do not need to maintain a chain of
trust. Each database version is verified independently. As mentioned, there is
no locking either.

>Anyway, I think it would need to be locked throughout. If B updates the
>database while A is uploading, that is no different from bad guy C
>adjusting the database and leaving it for someone to sign on the next
>addition. The only way to maintain what would be a chain of trust -
>where we can link each database update to the previous database - is to
>have the current db signature checked before adding the new packages and
>resigning.
 
09-16-2012, 10:43 PM
Gaetan Bisson

Moving repos to nymeria

[2012-09-16 16:03:19 +0000] Xyne:
> By "check it" I mean check that each signature in the database is authentic and
> trusted, and that every package in the database is signed.

Signing the DB serves a completely different purpose to all the
signatures on its packages.

--
Gaetan
 
09-16-2012, 11:33 PM
Xyne

Moving repos to nymeria

Gaetan Bisson wrote:

>[2012-09-16 16:03:19 +0000] Xyne:
>> By "check it" I mean check that each signature in the database is authentic and
>> trusted, and that every package in the database is signed.
>
>Signing the DB serves a completely different purpose to all the
>signatures on its packages.

I see now that what I proposed would not ensure the integrity of package
metadata such as dependencies.

What about individually signing the metadata of each package in the database
when a package is added? The packaging procedure would then be:

1) build and sign package locally
2) generate and sign "depends", "desc", etc. files locally
3) upload package and signatures to server
4) add package and signatures to (locked) database on server
5) download database
6) check metadata signatures
7) sign database and upload signature


Cons:
* redundant generation of metadata files
* more data in database

Pros:
* database integrity can be checked without having to rebuild it locally
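
As a rough example of steps 1 and 2 of the procedure above (the file names are
invented, and it is assumed the desc/depends files are generated locally the
same way repo-add would generate them):

makepkg                                        # 1) build the package
gpg --detach-sign foo-1.0-1-x86_64.pkg.tar.xz  # 1) sign the package
for f in foo-1.0-1/desc foo-1.0-1/depends; do  # 2) sign the metadata files
    gpg --detach-sign "$f"
done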

To clarify, with a chain of trust you need a trusted starting point. That means
that someone has to verify all of the package signatures and then locally
rebuild the database from scratch. If there is ever a doubt that the chain has
been broken (due to malice, carelessness in updates, whatever) then that needs
to be repeated. Signing per-package metadata should avoid that.


The metadata signatures could be kept out of the database if space is an issue,
but each packager would need to download them to check the database in that
case.

If they are kept in the database then signing the database file itself may be
unnecessary. Pacman could verify the integrity of the metadata for each package
when it downloads the database.
 
09-16-2012, 11:51 PM
Xyne

Moving repos to nymeria

Xyne wrote:

>If they are kept in the database then signing the database file itself may be
>unnecessary. Pacman could verify the integrity of the metadata for each package
>when it downloads the database.

Adding to that idea, pacman currently verifies database signatures each time it
is run. If the metadata sigs were included in the database then pacman could do
the following:

1) check for matching valid sig for each database
2) if no valid sig, check metadata sigs in db
3) if all metadata sigs are valid, sign database with local key, else die
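
In shell-like pseudocode (this is not actual pacman code; verify_metadata_sigs
and the key name are hypothetical):

if gpg --verify core.db.sig core.db 2>/dev/null; then
    :                                                       # 1) valid sig for the db itself
elif verify_metadata_sigs core.db; then                     # 2) hypothetical per-package check
    gpg --yes --detach-sign --local-user "$LOCAL_KEY" core.db  # 3) sign with the local key
else
    echo "error: could not verify core.db" >&2
    exit 1                                                  # 3) ... else die
fi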
 
09-17-2012, 12:36 AM
Gaetan Bisson

Moving repos to nymeria

[2012-09-16 23:33:39 +0000] Xyne:
> I see now that what I proposed would not ensure the integrity of package
> metadata such as dependencies.

As the metadata is found within packages (.pkg.tar.xz), package
signatures (.pkg.tar.xz.sig) ensure its integrity and, more
importantly, its authenticity.

The point of signing the DB is to prevent an attacker from distributing
an outdated Arch package (properly signed by one of our packagers) which
has a known vulnerability.

For this, all we really need to sign is a list of unique identifiers for
the most recent version of all packages in each repo. These identifiers
could be the hash of each package, tuples ($pkgname,$pkgver,$pkgrel),
etc. But of course it is more elegant to simply sign the DB. What
matters is that an attacker cannot withhold one package without
withholding all packages (by withholding the DB and its sig).
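
For instance, such a list could be generated from the database and signed on
its own, assuming the usual %NAME%/%VERSION% layout of the desc files in the
sync database (the manifest name is made up):

# Build a "name version" manifest from the sync database and sign just that.
bsdtar -xOf core.db.tar.gz '*/desc' |
    awk '/^%NAME%/    { getline; name = $0 }
         /^%VERSION%/ { getline; print name, $0 }' > core.manifest
gpg --detach-sign core.manifest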

So, when an official packager updates the DB, to prevent an attacker
with access to our servers from sneaking in an old version of some package,
they really need to check that the DB was properly signed by another
official packager before making changes and signing it themselves. That
is the cryptographically secure way.
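
In practice that just means the update script refuses to touch the database
unless the existing signature verifies against the packagers' keyring, e.g.
(file names are illustrative):

gpg --verify core.db.tar.gz.sig core.db.tar.gz || exit 1  # must already be properly signed
repo-add core.db.tar.gz foo-1.0-1-x86_64.pkg.tar.xz       # only then make changes
gpg --detach-sign core.db.tar.gz                          # and re-sign the result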

The other way which has been proposed is based on the assumption that
some "hardened" server cannot be breached; then we push our changes to
this server and rely on it for automatically signing the DB.

--
Gaetan
 
09-17-2012, 08:32 PM
Florian Pritz

Moving repos to nymeria

On 16.09.2012 00:29, Pierre Schmitz wrote:
> * maybe review our group setup

One group per repo or what do you mean?

> * package files and svn files cannot be accessed by these accounts. Use
> some sudo and dedicated user magic here so that only dbscripts can write
> packages and the svn repo can only be accessed via an svn client.

I've looked into that and all I found was that you "should" use ssh
forced commands together with separate keys. AFAIK it is not possible to
tell svn to run a different command than "svnserve -t" when connected
via ssh.

It might be possible to use a simple forced-command wrapper that just
traps svnserve and executes it with sudo. I haven't checked if that
works with interactive shells.
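
A sketch of what such a wrapper might look like; the paths, the key and the
target user are assumptions:

#!/bin/sh
# /usr/local/bin/svn-wrapper: only allow svnserve in tunnel mode, run via sudo.
# Forced from ~/.ssh/authorized_keys with something like:
#   command="/usr/local/bin/svn-wrapper",no-pty,no-port-forwarding ssh-rsa AAAA... packager
case "$SSH_ORIGINAL_COMMAND" in
    "svnserve -t") exec sudo -u svn /usr/bin/svnserve -t ;;
    # an interactive login has no SSH_ORIGINAL_COMMAND and is denied here
    *)             echo "access denied" >&2; exit 1 ;;
esac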

> We can have a more advanced setup later.

Good idea.

--
Florian Pritz
 
