11-12-2009, 07:13 AM
Dan McGee

download: major refactor to address lingering issues

On Thu, Nov 12, 2009 at 2:09 AM, Dan McGee <dan@archlinux.org> wrote:
> Sorry for this being such a huge patch, but I believe it is necessary for
> quite a few reasons which I will attempt to explain herein. I've been
> mulling this over for a while, but wasn't super happy with making the
> download interface more complex. Instead, if we carefully order things in
> the internal download code, we can actually make the interface simpler.
>
> 1. FS#15657 - This involves `name.db.tar.gz.part` files being left around the
> filesystem, and then causing all sorts of issues when someone attempts to
> rerun the operation they canceled. We need to ensure that if we resume a
> download, we are resuming it on exactly the same file; if we cannot be
> almost positive of that, then we need to start over.
>
> 2. http://www.mail-archive.com/pacman-dev@archlinux.org/msg03536.html - Here
> we have a lighttpd bug to ruin the day. If we send both a Range: header and
> If-Modified-Since: header across the wire in a GET request, lighttpd doesn't
> do what we want in several cases. If the file hasn't been modified, it
> returns a '304 Not Modified' instead of a '206 Partial Content'. We need to
> do a stat (i.e. HEAD in HTTP terms) operation here, and then proceed
> accordingly based on the values we get back from it.
>
> 3. The mtime stuff was rather ugly, and relied on the called function to
> write back to a passed in reference, which isn't the greatest. Instead, use
> the power of the filesystem to contain this info. Every file downloaded
> internally is now carefully timestamped with the remote file time. This
> should allow the resume logic to work. In order to guarantee this, we need
> to implement a signal handler that catches interrupts, notifies the running
> code, and causes it to set the mtimes on the file. It then rethrows the
> signal so the pacman signal handler (or any frontend) works as expected.
>
> 4. We did a lot of funky stuff in trying to track the DB last modified time.
> It is a lot easier to just keep the downloaded DB file around and track the
> time on that rather than in a funky dot file. It also kills a lot of code.
>
> 5. For GPG verification of the databases down the road, we are going to need
> the DB file around for at least a short bit of time anyway, so this gets us
> closer to that.
>
> Signed-off-by: Dan McGee <dan@archlinux.org>
> ---
> lib/libalpm/alpm.h     |    8 +--
> lib/libalpm/be_files.c |   96 ++-------------------------
> lib/libalpm/dload.c    |  177 +++++++++++++++++++++++++++++++++---------------
> lib/libalpm/dload.h    |    4 +-
> src/pacman/pacman.c    |   11 +++-
> 5 files changed, 141 insertions(+), 155 deletions(-)

So yes, I know this patch is huge, but I'm really really *really*
hoping someone (or a few people!) would like to review it. Xavier, I
know you asked me about this stuff a while back, and I finally got
back around to it and it is in a much more working form. It is
hopefully well commented too.

I should note this went through about 3 distinct iterations locally
tonight before this final version came about. Function arguments were
dropped, added, and dropped again; calls into libfetch were reordered
and reworked and changed, etc. It took a good amount of time to get it
to work for all of the various partial download cases (both DB *and*
packages, yes, it actually works!) as well as keeping the force flag
working and just downloading in the most straightforward start-to-end
case.

Anyway, it may be a tad easier to look at it here:
http://code.toofishes.net/cgit/dan/pacman.git/commit/?h=download&id=8a15e0cbebaff0c34d2d837b37dd6279cff44e32
or in a side by side diff format:
http://code.toofishes.net/cgit/dan/pacman.git/commit/?h=download&id=8a15e0cbebaff0c34d2d837b37dd6279cff44e32&ss=1

Thanks guys! And don't ask why I am up at 2 AM my time working on
this, it feels like the old days of hacking again.

-Dan
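
The trickiest part of the refactor is the interrupt handling described in
point 3 above. For readers who want the idiom in isolation, here is a minimal,
self-contained sketch of the flag-set-and-rethrow pattern; the names and the
hardcoded filename are illustrative, not taken from the patch:

#include <signal.h>
#include <sys/time.h>   /* utimes() */
#include <time.h>

static volatile sig_atomic_t interrupted = 0;

static void inthandler(int signum)
{
	(void)signum;
	/* only set a flag here; the real work happens outside the handler */
	interrupted = 1;
}

int main(void)
{
	struct sigaction sa_new, sa_old;
	time_t remote_mtime = 0; /* would come from the server's stat info */

	sa_new.sa_handler = inthandler;
	sigemptyset(&sa_new.sa_mask);
	sa_new.sa_flags = 0;
	sigaction(SIGINT, &sa_new, &sa_old);

	/* ... transfer loop goes here, checking 'interrupted' between reads ... */

	if(interrupted && remote_mtime) {
		/* stamp the partial file with the remote mtime so a later run can
		 * verify it is resuming exactly the same file */
		struct timeval tv[2] = { {0, 0}, {0, 0} };
		tv[0].tv_sec = remote_mtime; /* atime */
		tv[1].tv_sec = remote_mtime; /* mtime */
		utimes("download.part", tv);
	}

	/* restore the previous handler, then rethrow so the frontend's own
	 * SIGINT handling still runs as expected */
	sigaction(SIGINT, &sa_old, NULL);
	if(interrupted) {
		raise(SIGINT);
	}
	return 0;
}

The patch itself does the equivalent with futimes() on the still-open file
handle in its cleanup path before re-raising the signal.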
 
11-13-2009, 09:20 AM
Xavier

download: major refactor to address lingering issues

On Thu, Nov 12, 2009 at 9:09 AM, Dan McGee <dan@archlinux.org> wrote:
> + _alpm_log(PM_LOG_DEBUG, "ust.size: %ld local_size: %ld compare: %ld\n",
> + ust.size, local_size, local_size - ust.size);
>
> + } else if(fileurl->offset) {
> + _alpm_log(PM_LOG_DEBUG, "resuming download at position %ld\n", fileurl->offset);
> }

This does not compile here:
dload.c:190: error: format '%ld' expects type 'long int', but argument 3 has type 'off_t'
dload.c:190: error: format '%ld' expects type 'long int', but argument 4 has type 'off_t'
dload.c:190: error: format '%ld' expects type 'long int', but argument 5 has type 'off_t'
dload.c:223: error: format '%ld' expects type 'long int', but argument 3 has type 'off_t'

I am always confused about this stuff, so I looked at one older patch:
http://code.toofishes.net/cgit/dan/pacman.git/commit/?id=0669c9bfac7aead01f1400444e691d542f7645c2&ss=1
But this was confusing too. It looks like the same patch used three
different ways to print off_t:
1 - %jd and cast to intmax_t
2 - %ju and cast to uintmax_t
3 - %lld and cast to long long

After looking at man types.h and man printf, it seems off_t is a signed
integer type, like intmax_t, so 1 seems alright. But maybe 3 is alright too?

Googling a bit leads to a lot of different and confusing answers, but it
seems the most correct and portable way is to use a macro for the printf
format: %" PRId64 "
http://mail-index.netbsd.org/tech-misc/2003/01/20/0002.html
http://stackoverflow.com/questions/586928/how-should-i-print-types-like-offt-and-sizet

It looks like PRId64 is defined in <inttypes.h>, but dload.c does not have
this include and it still works. Maybe it's included by another header? How
do I find out? Should we add #include <inttypes.h> anyway?

diff --git a/lib/libalpm/dload.c b/lib/libalpm/dload.c
index 683f514..05555f2 100644
--- a/lib/libalpm/dload.c
+++ b/lib/libalpm/dload.c
@@ -186,7 +186,7 @@ static int download_internal(const char *url, const char *localpath,

 	_alpm_log(PM_LOG_DEBUG, "ust.mtime: %ld local_time: %ld compare: %ld\n",
 			ust.mtime, local_time, local_time - ust.mtime);
-	_alpm_log(PM_LOG_DEBUG, "ust.size: %ld local_size: %ld compare: %ld\n",
+	_alpm_log(PM_LOG_DEBUG, "ust.size: %"PRId64" local_size: %"PRId64" compare: %"PRId64"\n",
 			ust.size, local_size, local_size - ust.size);
 	if(!force && ust.mtime && ust.mtime == local_time
 			&& ust.size && ust.size == local_size) {
@@ -220,7 +220,7 @@ static int download_internal(const char *url, const char *localpath,
 		fclose(localf);
 		localf = NULL;
 	} else if(fileurl->offset) {
-		_alpm_log(PM_LOG_DEBUG, "resuming download at position %ld\n", fileurl->offset);
+		_alpm_log(PM_LOG_DEBUG, "resuming download at position %"PRId64"\n", fileurl->offset);
 	}
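
For the record, both spellings discussed above can be checked with a few
lines. A minimal sketch, assuming C99 and a POSIX system; note the casts,
since %jd expects an intmax_t and PRId64 expects an int64_t, and off_t is
officially neither:

#include <inttypes.h>   /* PRId64, intmax_t */
#include <stdio.h>
#include <sys/types.h>  /* off_t */

int main(void)
{
	off_t size = (off_t)123456789;

	/* most portable: widen to intmax_t, which any off_t fits in */
	printf("size: %jd\n", (intmax_t)size);

	/* also fine wherever off_t is 64 bits wide (e.g. _FILE_OFFSET_BITS=64) */
	printf("size: %" PRId64 "\n", (int64_t)size);

	return 0;
}

As for why dload.c compiles without the include: some other header is pulling
in <inttypes.h> (or <stdint.h>) transitively. Running the file through the
preprocessor (gcc -E) and searching the output is one way to find the
culprit; relying on that is fragile, so adding the include explicitly seems
like the right call.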
 
11-14-2009, 05:29 PM
Dario Freddi

download: major refactor to address lingering issues

First of all, the patch and the idea behind it look great, even if it's not
BC (backward compatible). The code looks fine to me. I had some doubts about
the force parameter being useful, but if it is supposed to tell the frontend
"hey, wipe away the partially downloaded file if there's one", then it's
completely fine.

I'm all for having this in (but please, increase the .so version)

On Thursday 12 November 2009 09:09:10 Dan McGee wrote:
> Sorry for this being such a huge patch, but I believe it is necessary for
> quite a few reasons which I will attempt to explain herein. I've been
> mulling this over for a while, but wasn't super happy with making the
> download interface more complex. Instead, if we carefully order things in
> the internal download code, we can actually make the interface simpler.
>
> 1. FS#15657 - This involves `name.db.tar.gz.part` files being left around
> the filesystem, and then causing all sorts of issues when someone attempts
> to rerun the operation they canceled. We need to ensure that if we resume
> a download, we are resuming it on exactly the same file; if we cannot be
> almost positive of that, then we need to start over.
>
> 2. http://www.mail-archive.com/pacman-dev@archlinux.org/msg03536.html -
> Here we have a lighttpd bug to ruin the day. If we send both a Range:
> header and If-Modified-Since: header across the wire in a GET request,
> lighttpd doesn't do what we want in several cases. If the file hasn't been
> modified, it returns a '304 Not Modified' instead of a '206 Partial
> Content'. We need to do a stat (i.e. HEAD in HTTP terms) operation here,
> and then proceed accordingly based on the values we get back from it.
>
> 3. The mtime stuff was rather ugly, and relied on the called function to
> write back to a passed in reference, which isn't the greatest. Instead, use
> the power of the filesystem to contain this info. Every file downloaded
> internally is now carefully timestamped with the remote file time. This
> should allow the resume logic to work. In order to guarantee this, we need
> to implement a signal handler that catches interrupts, notifies the running
> code, and causes it to set the mtimes on the file. It then rethrows the
> signal so the pacman signal handler (or any frontend) works as expected.
>
> 4. We did a lot of funky stuff in trying to track the DB last modified
> time. It is a lot easier to just keep the downloaded DB file around and
> track the time on that rather than in a funky dot file. It also kills a
> lot of code.
>
> 5. For GPG verification of the databases down the road, we are going to
> need the DB file around for at least a short bit of time anyway, so this
> gets us closer to that.
>
> Signed-off-by: Dan McGee <dan@archlinux.org>
> ---
> lib/libalpm/alpm.h     |    8 +--
> lib/libalpm/be_files.c |   96 ++-------------------------
> lib/libalpm/dload.c    |  177 +++++++++++++++++++++++++++++++++---------------
> lib/libalpm/dload.h    |    4 +-
> src/pacman/pacman.c    |   11 +++-
> 5 files changed, 141 insertions(+), 155 deletions(-)
>
> diff --git a/lib/libalpm/alpm.h b/lib/libalpm/alpm.h
> index c433b73..05ad82e 100644
> --- a/lib/libalpm/alpm.h
> +++ b/lib/libalpm/alpm.h
> @@ -86,14 +86,12 @@ typedef void (*alpm_cb_totaldl)(off_t total);
> /** A callback for downloading files
> * @param url the URL of the file to be downloaded
> * @param localpath the directory to which the file should be downloaded
> - * @param mtimeold the modification time of the file previously downloaded
> - * @param mtimenew the modification time of the newly downloaded file.
> - * This should be set by the callback.
> - * @return 0 on success, 1 if the modification times are identical, -1 on
> + * @param force whether to force an update, even if the file is the same
> + * @return 0 on success, 1 if the file exists and is identical, -1 on
> * error.
> */
> typedef int (*alpm_cb_fetch)(const char *url, const char *localpath,
> - time_t mtimeold, time_t *mtimenew);
> + int force);
>
> /*
> * Options
> diff --git a/lib/libalpm/be_files.c b/lib/libalpm/be_files.c
> index ffbaa8d..90e97a5 100644
> --- a/lib/libalpm/be_files.c
> +++ b/lib/libalpm/be_files.c
> @@ -47,76 +47,6 @@
> #include "dload.h"
>
>
> -/*
> - * Return the last update time as number of seconds from the epoch.
> - * Returns 0 if the value is unknown or can't be read.
> - */
> -static time_t getlastupdate(pmdb_t *db)
> -{
> - FILE *fp;
> - char *file;
> - const char *dbpath;
> - time_t ret = 0;
> -
> - ALPM_LOG_FUNC;
> -
> - if(db == NULL) {
> - return(ret);
> - }
> -
> - dbpath = _alpm_db_path(db);
> - /* dbpath + '.lastupdate' + NULL */
> - MALLOC(file, strlen(dbpath) + 12, RET_ERR(PM_ERR_MEMORY, ret));
> - sprintf(file, "%s.lastupdate", dbpath);
> -
> - /* get the last update time, if it's there */
> - if((fp = fopen(file, "r")) == NULL) {
> - free(file);
> - return(ret);
> - } else {
> - char line[64];
> - if(fgets(line, sizeof(line), fp)) {
> - ret = atol(line);
> - }
> - }
> - fclose(fp);
> - free(file);
> - return(ret);
> -}
> -
> -/*
> - * writes the dbpath/.lastupdate file with the value in time
> - */
> -static int setlastupdate(pmdb_t *db, time_t time)
> -{
> - FILE *fp;
> - char *file;
> - const char *dbpath;
> - int ret = 0;
> -
> - ALPM_LOG_FUNC;
> -
> - if(db == NULL || time == 0) {
> - return(-1);
> - }
> -
> - dbpath = _alpm_db_path(db);
> - /* dbpath + '.lastupdate' + NULL */
> - MALLOC(file, strlen(dbpath) + 12, RET_ERR(PM_ERR_MEMORY, ret));
> - sprintf(file, "%s.lastupdate", dbpath);
> -
> - if((fp = fopen(file, "w")) == NULL) {
> - free(file);
> - return(-1);
> - }
> - if(fprintf(fp, "%ju", (uintmax_t)time) <= 0) {
> - ret = -1;
> - }
> - fclose(fp);
> - free(file);
> - return(ret);
> -}
> -
> static int checkdbdir(pmdb_t *db)
> {
> struct stat buf;
> @@ -178,7 +108,6 @@ static int checkdbdir(pmdb_t *db)
> int SYMEXPORT alpm_db_update(int force, pmdb_t *db)
> {
> char *dbfile, *dbfilepath;
> - time_t newmtime = 0, lastupdate = 0;
> const char *dbpath;
> size_t len;
>
> @@ -200,27 +129,17 @@ int SYMEXPORT alpm_db_update(int force, pmdb_t *db)
> RET_ERR(PM_ERR_DB_NOT_FOUND, -1);
> }
>
> - if(!force) {
> - /* get the lastupdate time */
> - lastupdate = getlastupdate(db);
> - if(lastupdate == 0) {
> - _alpm_log(PM_LOG_DEBUG, "failed to get lastupdate time for %s
",
> - db->treename);
> - }
> - }
> -
> len = strlen(db->treename) + strlen(DBEXT) + 1;
> MALLOC(dbfile, len, RET_ERR(PM_ERR_MEMORY, -1));
> sprintf(dbfile, "%s" DBEXT, db->treename);
>
> dbpath = alpm_option_get_dbpath();
>
> - ret = _alpm_download_single_file(dbfile, db->servers, dbpath,
> - lastupdate, &newmtime);
> + ret = _alpm_download_single_file(dbfile, db->servers, dbpath, force);
> free(dbfile);
>
> if(ret == 1) {
> - /* mtimes match, do nothing */
> + /* files match, do nothing */
> pm_errno = 0;
> return(1);
> } else if(ret == -1) {
> @@ -233,8 +152,11 @@ int SYMEXPORT alpm_db_update(int force, pmdb_t *db)
> if(_alpm_rmrf(syncdbpath) != 0) {
> _alpm_log(PM_LOG_ERROR, _("could not remove database %s\n"), db->treename);
> RET_ERR(PM_ERR_DB_REMOVE, -1);
> + } else {
> + _alpm_log(PM_LOG_DEBUG, "database dir %s removed\n", _alpm_db_path(db));
> }
>
> +
> /* Cache needs to be rebuilt */
> _alpm_db_free_pkgcache(db);
>
> @@ -250,15 +172,7 @@ int SYMEXPORT alpm_db_update(int force, pmdb_t *db)
> free(dbfilepath);
> RET_ERR(PM_ERR_SYSTEM, -1);
> }
> - unlink(dbfilepath);
> free(dbfilepath);
> -
> - /* if we have a new mtime, set the DB last update value */
> - if(newmtime) {
> - _alpm_log(PM_LOG_DEBUG, "sync: new mtime for %s: %ju
",
> - db->treename, (uintmax_t)newmtime);
> - setlastupdate(db, newmtime);
> - }
> }
>
> return(0);
> diff --git a/lib/libalpm/dload.c b/lib/libalpm/dload.c
> index a4c9f1f..683f514 100644
> --- a/lib/libalpm/dload.c
> +++ b/lib/libalpm/dload.c
> @@ -25,6 +25,9 @@
> #include <errno.h>
> #include <string.h>
> #include <unistd.h>
> +#include <sys/time.h>
> +#include <sys/types.h>
> +#include <sys/stat.h>
> #include <signal.h>
> #include <limits.h>
> /* the following two are needed on BSD for libfetch */
> @@ -85,18 +88,32 @@ static const char *gethost(struct url *fileurl)
> return(host);
> }
>
> +int dload_interrupted;
> +static RETSIGTYPE inthandler(int signum)
> +{
> + dload_interrupted = 1;
> +}
> +
> +#define check_stop() if(dload_interrupted) { ret = -1; goto cleanup; }
> +enum sighandlers { OLD = 0, NEW = 1 };
> +
> static int download_internal(const char *url, const char *localpath,
> - time_t mtimeold, time_t *mtimenew) {
> - fetchIO *dlf = NULL;
> + int force) {
> FILE *localf = NULL;
> - struct url_stat ust;
> struct stat st;
> - int chk_resume = 0, ret = 0;
> + int ret = 0;
> off_t dl_thisfile = 0;
> ssize_t nread = 0;
> char *tempfile, *destfile, *filename;
> - struct sigaction new_action, old_action;
> + struct sigaction sig_pipe[2], sig_int[2];
> +
> + off_t local_size = 0;
> + time_t local_time = 0;
> +
> struct url *fileurl;
> + struct url_stat ust;
> + fetchIO *dlf = NULL;
> +
> char buffer[PM_DLBUF_LEN];
>
> filename = get_filename(url);
> @@ -113,51 +130,80 @@ static int download_internal(const char *url, const char *localpath,
> destfile = get_destfile(localpath, filename);
> tempfile = get_tempfile(localpath, filename);
>
> - if(mtimeold) {
> - fileurl->last_modified = mtimeold;
> - }
> -
> - /* pass the raw filename for passing to the callback function */
> - _alpm_log(PM_LOG_DEBUG, "using '%s' for download progress
", filename);
> -
> if(stat(tempfile, &st) == 0 && st.st_size > 0) {
> - _alpm_log(PM_LOG_DEBUG, "existing file found, using it
");
> - fileurl->offset = (off_t)st.st_size;
> + _alpm_log(PM_LOG_DEBUG, "tempfile found, attempting continuation
");
> + local_time = fileurl->last_modified = st.st_mtime;
> + local_size = fileurl->offset = (off_t)st.st_size;
> dl_thisfile = st.st_size;
> localf = fopen(tempfile, "ab");
> - chk_resume = 1;
> + } else if(!force && stat(destfile, &st) == 0 && st.st_size > 0) {
> + _alpm_log(PM_LOG_DEBUG, "destfile found, using mtime only
");
> + local_time = fileurl->last_modified = st.st_mtime;
> + local_size = /* no fu->off here */ (off_t)st.st_size;
> } else {
> - fileurl->offset = (off_t)0;
> - dl_thisfile = 0;
> + _alpm_log(PM_LOG_DEBUG, "no file found matching criteria, starting from
> scratch
"); }
>
> + /* pass the raw filename for passing to the callback function */
> + _alpm_log(PM_LOG_DEBUG, "using '%s' for download progress
", filename);
> +
> /* print proxy info for debug purposes */
> _alpm_log(PM_LOG_DEBUG, "HTTP_PROXY: %s
", getenv("HTTP_PROXY"));
> _alpm_log(PM_LOG_DEBUG, "http_proxy: %s
", getenv("http_proxy"));
> _alpm_log(PM_LOG_DEBUG, "FTP_PROXY: %s
", getenv("FTP_PROXY"));
> _alpm_log(PM_LOG_DEBUG, "ftp_proxy: %s
", getenv("ftp_proxy"));
>
> - /* libfetch does not reset the error code */
> - fetchLastErrCode = 0;
> -
> /* 10s timeout */
> fetchTimeout = 10;
>
> /* ignore any SIGPIPE signals- these may occur if our FTP socket dies or
> * something along those lines. Store the old signal handler first. */
> - new_action.sa_handler = SIG_IGN;
> - sigemptyset(&new_action.sa_mask);
> - new_action.sa_flags = 0;
> - sigaction(SIGPIPE, NULL, &old_action);
> - sigaction(SIGPIPE, &new_action, NULL);
> -
> - dlf = fetchXGet(fileurl, &ust, "i");
> -
> - if(fetchLastErrCode == FETCH_UNCHANGED) {
> - _alpm_log(PM_LOG_DEBUG, "mtimes are identical, skipping %s
",
> filename); + sig_pipe[NEW].sa_handler = SIG_IGN;
> + sigemptyset(&sig_pipe[NEW].sa_mask);
> + sig_pipe[NEW].sa_flags = 0;
> + sigaction(SIGPIPE, NULL, &sig_pipe[OLD]);
> + sigaction(SIGPIPE, &sig_pipe[NEW], NULL);
> +
> + dload_interrupted = 0;
> + sig_int[NEW].sa_handler = &inthandler;
> + sigemptyset(&sig_int[NEW].sa_mask);
> + sig_int[NEW].sa_flags = 0;
> + sigaction(SIGINT, NULL, &sig_int[OLD]);
> + sigaction(SIGINT, &sig_int[NEW], NULL);
> +
> + /* NOTE: libfetch does not reset the error code, be sure to do it before
> + * calls into the library */
> +
> + /* find out the remote size *and* mtime in one go. there is a lot of
> + * trouble in trying to do both size and "if-modified-since" logic in a
> + * non-stat request, so avoid it. */
> + fetchLastErrCode = 0;
> + if(fetchStat(fileurl, &ust, "") == -1) {
> + ret = -1;
> + goto cleanup;
> + }
> + check_stop();
> +
> + _alpm_log(PM_LOG_DEBUG, "ust.mtime: %ld local_time: %ld compare: %ld
",
> + ust.mtime, local_time, local_time - ust.mtime);
> + _alpm_log(PM_LOG_DEBUG, "ust.size: %ld local_size: %ld compare: %ld
",
> + ust.size, local_size, local_size - ust.size);
> + if(!force && ust.mtime && ust.mtime == local_time
> + && ust.size && ust.size == local_size) {
> + /* the remote time and size values agreed with what we have, so move on
> + * because there is nothing more to do. */
> + _alpm_log(PM_LOG_DEBUG, "files are identical, skipping %s
", filename);
> ret = 1;
> goto cleanup;
> }
> + if(!ust.mtime || ust.mtime != local_time) {
> + _alpm_log(PM_LOG_DEBUG, "mtimes were different or unavailable,
> downloading %s from beginning
", filename); + fileurl->offset = 0;
> + }
> +
> + fetchLastErrCode = 0;
> + dlf = fetchGet(fileurl, "");
> + check_stop();
>
> if(fetchLastErrCode != 0 || dlf == NULL) {
> pm_errno = PM_ERR_LIBFETCH;
> @@ -169,17 +215,14 @@ static int download_internal(const char *url, const char *localpath,
> _alpm_log(PM_LOG_DEBUG, "connected to %s successfully\n", fileurl->host);
> }
>
> - if(ust.mtime && mtimenew) {
> - *mtimenew = ust.mtime;
> + if(localf && fileurl->offset == 0) {
> + _alpm_log(PM_LOG_WARNING, _("resuming download of %s not possible; starting over\n"), filename);
> + fclose(localf);
> + localf = NULL;
> + } else if(fileurl->offset) {
> + _alpm_log(PM_LOG_DEBUG, "resuming download at position %ld\n", fileurl->offset);
> }
>
> - if(chk_resume && fileurl->offset == 0) {
> - _alpm_log(PM_LOG_WARNING, _("cannot resume download, starting over\n"));
> - if(localf != NULL) {
> - fclose(localf);
> - localf = NULL;
> - }
> - }
>
> if(localf == NULL) {
> _alpm_rmrf(tempfile);
> @@ -187,7 +230,8 @@ static int download_internal(const char *url, const char *localpath,
> dl_thisfile = 0;
> localf = fopen(tempfile, "wb");
> if(localf == NULL) { /* still null? */
> - _alpm_log(PM_LOG_ERROR, _("cannot write to file '%s'\n"), tempfile);
> + _alpm_log(PM_LOG_ERROR, _("error writing to file '%s': %s\n"),
> + tempfile, strerror(errno));
> ret = -1;
> goto cleanup;
> }
> @@ -199,11 +243,12 @@ static int download_internal(const char *url, const char *localpath,
> }
>
> while((nread = fetchIO_read(dlf, buffer, PM_DLBUF_LEN)) > 0) {
> + check_stop();
> size_t nwritten = 0;
> nwritten = fwrite(buffer, 1, nread, localf);
> if((nwritten != nread) || ferror(localf)) {
> _alpm_log(PM_LOG_ERROR, _("error writing to file '%s': %s
"),
> - destfile, strerror(errno));
> + tempfile, strerror(errno));
> ret = -1;
> goto cleanup;
> }
> @@ -240,36 +285,60 @@ static int download_internal(const char *url, const char *localpath,
> fetchIO_close(dlf);
> dlf = NULL;
>
> + /* set the times on the file to the same as that of the remote file */
> + if(ust.mtime) {
> + struct timeval tv[2];
> + memset(&tv, 0, sizeof(tv));
> + tv[0].tv_sec = ust.atime;
> + tv[1].tv_sec = ust.mtime;
> + utimes(tempfile, tv);
> + }
> rename(tempfile, destfile);
> ret = 0;
>
> cleanup:
> - /* restore any existing SIGPIPE signal handler */
> - sigaction(SIGPIPE, &old_action, NULL);
> -
> FREE(tempfile);
> FREE(destfile);
> if(localf != NULL) {
> + /* if we still had a local file open, we got interrupted. set the mtimes on
> + * the file accordingly. */
> + fflush(localf);
> + if(ust.mtime) {
> + struct timeval tv[2];
> + memset(&tv, 0, sizeof(tv));
> + tv[0].tv_sec = ust.atime;
> + tv[1].tv_sec = ust.mtime;
> + futimes(fileno(localf), tv);
> + }
> fclose(localf);
> }
> if(dlf != NULL) {
> fetchIO_close(dlf);
> }
> fetchFreeURL(fileurl);
> +
> + /* restore the old signal handlers */
> + sigaction(SIGINT, &sig_int[OLD], NULL);
> + sigaction(SIGPIPE, &sig_pipe[OLD], NULL);
> + /* if we were interrupted, trip the old handler */
> + if(dload_interrupted) {
> + raise(SIGINT);
> + }
> +
> return(ret);
> }
> #endif
>
> static int download(const char *url, const char *localpath,
> - time_t mtimeold, time_t *mtimenew) {
> + int force) {
> if(handle->fetchcb == NULL) {
> #if defined(INTERNAL_DOWNLOAD)
> - return(download_internal(url, localpath, mtimeold, mtimenew));
> + return(download_internal(url, localpath, force));
> #else
> RET_ERR(PM_ERR_EXTERNAL_DOWNLOAD, -1);
> #endif
> } else {
> - int ret = handle->fetchcb(url, localpath, mtimeold, mtimenew);
> + int ret = handle->fetchcb(url, localpath, force);
> if(ret == -1) {
> RET_ERR(PM_ERR_EXTERNAL_DOWNLOAD, -1);
> }
> @@ -279,19 +348,15 @@ static int download(const char *url, const char *localpath,
>
> /*
> * Download a single file
> - * - if mtimeold is non-NULL, then only download the file if it's different
> - * than mtimeold.
> - * - if *mtimenew is non-NULL, it will be filled with the mtime of the remote
> - * file.
> * - servers must be a list of urls WITHOUT trailing slashes.
> *
> * RETURN: 0 for successful download
> - * 1 if the mtimes are identical
> + * 1 if the files are identical
> * -1 on error
> */
> int _alpm_download_single_file(const char *filename,
> alpm_list_t *servers, const char *localpath,
> - time_t mtimeold, time_t *mtimenew)
> + int force)
> {
> alpm_list_t *i;
> int ret = -1;
> @@ -308,7 +373,7 @@ int _alpm_download_single_file(const char *filename,
> CALLOC(fileurl, len, sizeof(char), RET_ERR(PM_ERR_MEMORY, -1));
> snprintf(fileurl, len, "%s/%s", server, filename);
>
> - ret = download(fileurl, localpath, mtimeold, mtimenew);
> + ret = download(fileurl, localpath, force);
> FREE(fileurl);
> if(ret != -1) {
> break;
> @@ -327,7 +392,7 @@ int _alpm_download_files(alpm_list_t *files,
> for(lp = files; lp; lp = lp->next) {
> char *filename = lp->data;
> if(_alpm_download_single_file(filename, servers,
> - localpath, 0, NULL) == -1) {
> + localpath, 0) == -1) {
> ret++;
> }
> }
> @@ -354,7 +419,7 @@ char SYMEXPORT *alpm_fetch_pkgurl(const char *url)
> cachedir = _alpm_filecache_setup();
>
> /* download the file */
> - ret = download(url, cachedir, 0, NULL);
> + ret = download(url, cachedir, 0);
> if(ret == -1) {
> _alpm_log(PM_LOG_WARNING, _("failed to download %s\n"), url);
> return(NULL);
> diff --git a/lib/libalpm/dload.h b/lib/libalpm/dload.h
> index 64b57d2..ee80024 100644
> --- a/lib/libalpm/dload.h
> +++ b/lib/libalpm/dload.h
> @@ -25,11 +25,11 @@
>
> #include <time.h>
>
> -#define PM_DLBUF_LEN (1024 * 10)
> +#define PM_DLBUF_LEN (1024 * 16)
>
> int _alpm_download_single_file(const char *filename,
> alpm_list_t *servers, const char *localpath,
> - time_t mtimeold, time_t *mtimenew);
> + int force);
>
> int _alpm_download_files(alpm_list_t *files,
> alpm_list_t *servers, const char *localpath);
> diff --git a/src/pacman/pacman.c b/src/pacman/pacman.c
> index f4f8044..3fe9093 100644
> --- a/src/pacman/pacman.c
> +++ b/src/pacman/pacman.c
> @@ -34,6 +34,7 @@
> #include <signal.h>
> #include <unistd.h>
> #include <sys/types.h>
> +#include <sys/stat.h>
> #include <sys/utsname.h> /* uname */
> #include <locale.h> /* setlocale */
> #include <time.h> /* time_t */
> @@ -631,10 +632,11 @@ static char *get_tempfile(const char *path, const char *filename) {
>
> /** External fetch callback */
> int download_with_xfercommand(const char *url, const char *localpath,
> - time_t mtimeold, time_t *mtimenew) {
> + int force) {
> int ret = 0;
> int retval;
> int usepart = 0;
> + struct stat st;
> char *parsedcmd,*tempcmd;
> char cwd[PATH_MAX];
> char *destfile, *tempfile, *filename;
> @@ -650,6 +652,13 @@ int download_with_xfercommand(const char *url, const char *localpath,
> destfile = get_destfile(localpath, filename);
> tempfile = get_tempfile(localpath, filename);
>
> + if(force && stat(tempfile, &st) == 0) {
> + unlink(tempfile);
> + }
> + if(force && stat(destfile, &st) == 0) {
> + unlink(destfile);
> + }
> +
> tempcmd = strdup(config->xfercommand);
> /* replace all occurrences of %o with fn.part */
> if(strstr(tempcmd, "%o")) {
>

--
-------------------

Dario Freddi
KDE Developer
GPG Key Signature: 511A9A3B
 
11-14-2009, 05:36 PM
Xavier

download: major refactor to address lingering issues

On Sat, Nov 14, 2009 at 7:29 PM, Dario Freddi <drf54321@gmail.com> wrote:
> First of all, the patch and the idea behind it look great, even if it's not
> BC. The code looks fine to me, I had some doubts about the force parameter
> being useful, but if that is supposed to tell the frontend "hey, wipe away the
> partially downloaded file if there's one", then it's completely fine.
>

Actually force is stronger than that: it forces the file to be
(re)downloaded no matter what, whether there is already a complete or
partial file.
And that's what we need for pacman -Syy.

> I'm all for having this in (but please, increase the .so version)
>

This kind of patch only goes into a major release, and major releases are
always so-bumped. So don't worry.
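
To make that concrete: under the new interface a frontend callback only has
to honor one extra flag. A hypothetical callback (the wget-based command is
invented for illustration) matching the patch's new alpm_cb_fetch signature
might look like this:

#include <stdio.h>
#include <stdlib.h>

/* matches the new typedef from alpm.h in this patch:
 * typedef int (*alpm_cb_fetch)(const char *url, const char *localpath, int force); */
static int my_fetch(const char *url, const char *localpath, int force)
{
	char cmd[4096];

	/* force (pacman -Syy) means: re-download no matter what. A real
	 * callback would first unlink both the .part file and the completed
	 * file, as download_with_xfercommand() does in the patch. */
	snprintf(cmd, sizeof(cmd), "wget %s -P '%s' '%s'",
			force ? "" : "-c" /* resume only when not forced */,
			localpath, url);

	/* 0 = downloaded, 1 = file already up to date, -1 = error */
	return system(cmd) == 0 ? 0 : -1;
}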
 
11-15-2009, 08:02 AM
Dario Freddi

download: major refactor to address lingering issues

On Saturday 14 November 2009 19:36:59 Xavier wrote:
> Actually force is stronger than that: it forces the file to be
> (re)downloaded no matter what, whether there is already a complete or
> partial file.
> And that's what we need for pacman -Syy.

Yeah, my concern was actually that for complete files, the backend itself
(alpm) could have handled the deletion on its own, but I do recognize that it
can't really predict what the partially downloaded files would look like.

>
> > I'm all for having this in (but please, increase the .so version)
>
> This kind of patch only goes into a major release, and major releases are
> always so-bumped. So don't worry.

Yeah

>

--
-------------------

Dario Freddi
KDE Developer
GPG Key Signature: 511A9A3B
 
11-15-2009, 12:23 PM
Xavier

download: major refactor to address lingering issues

On Thu, Nov 12, 2009 at 9:09 AM, Dan McGee <dan@archlinux.org> wrote:
> Sorry for this being such a huge patch, but I believe it is necessary for
> quite a few reasons which I will attempt to explain herein. I've been
> mulling this over for a while, but wasn't super happy with making the
> download interface more complex. Instead, if we carefully order things in
> the internal download code, we can actually make the interface simpler.
>
> 1. FS#15657 - This involves `name.db.tar.gz.part` files being left around the
> filesystem, and then causing all sorts of issues when someone attempts to
> rerun the operation they canceled. We need to ensure that if we resume a
> download, we are resuming it on exactly the same file; if we cannot be
> almost positive of that, then we need to start over.
>

That's quite different from the simple solution we had before: simply
disable resuming for the repo DBs, which are quite small anyway.

> 2. http://www.mail-archive.com/pacman-dev@archlinux.org/msg03536.html - Here
> we have a lighttpd bug to ruin the day. If we send both a Range: header and
> If-Modified-Since: header across the wire in a GET request, lighttpd doesn't
> do what we want in several cases. If the file hasn't been modified, it
> returns a '304 Not Modified' instead of a '206 Partial Content'. We need to
> do a stat (i.e. HEAD in HTTP terms) operation here, and then proceed
> accordingly based on the values we get back from it.
>

These two issues look completely different.
The link you gave was about If-Modified-Since not working correctly,
and lighttpd not returning '304 Not Modified' because of some
time/timezone issues.
Besides, this should have been fixed in 1.4.24, which is in Arch now.
I guess you should report your issue to lighttpd.

From libfetch's point of view, doing a Stat is fine.
What seemed to cause issues for some users is doing a Get right away,
and cancelling when the mtimes matched.
See 6f97842ab22eb50fdc689e8aa2e95688d015fa74
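
Distilled down, the ordering the patch settles on looks like this (a sketch
only, with error handling and the read loop trimmed; the libfetch calls are
the same ones the patch uses):

#include <sys/types.h>
#include <time.h>
#include <fetch.h>   /* libfetch: fetchStat(), fetchGet(), fetchLastErrCode */

/* returns 1 if the local copy is already current, 0 after a successful GET,
 * -1 on error */
static int stat_then_get(struct url *fileurl, time_t local_time, off_t local_size)
{
	struct url_stat ust;
	fetchIO *dlf;

	/* 1. HEAD-equivalent stat: fetch the remote size and mtime without ever
	 *    combining Range and If-Modified-Since in a single request */
	fetchLastErrCode = 0;
	if(fetchStat(fileurl, &ust, "") == -1) {
		return -1;
	}

	/* 2. decide locally: identical mtime and size means nothing to do */
	if(ust.mtime == local_time && ust.size == local_size) {
		return 1;
	}

	/* 3. different or unknown mtime: resuming is unsafe, start at offset 0 */
	if(ust.mtime != local_time) {
		fileurl->offset = 0;
	}

	/* 4. a plain GET; a Range header is sent only if offset is non-zero */
	fetchLastErrCode = 0;
	if((dlf = fetchGet(fileurl, "")) == NULL) {
		return -1;
	}
	/* ... fetchIO_read() loop would go here ... */
	fetchIO_close(dlf);
	return 0;
}

So the conditional logic moves entirely client-side, and the server never has
to interpret the Range/If-Modified-Since combination that trips up lighttpd.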

> 3. The mtime stuff was rather ugly, and relied on the called function to
> write back to a passed in reference, which isn't the greatest. Instead, use
> the power of the filesystem to contain this info. Every file downloaded
> internally is now carefully timestamped with the remote file time. This
> should allow the resume logic to work. In order to guarantee this, we need
> to implement a signal handler that catches interrupts, notifies the running
> code, and causes it to set the mtimes on the file. It then rethrows the
> signal so the pacman signal handler (or any frontend) works as expected.
>
> 4. We did a lot of funky stuff in trying to track the DB last modified time.
> It is a lot easier to just keep the downloaded DB file around and track the
> time on that rather than in a funky dot file. It also kills a lot of code.
>

I always find signal handling quite tricky, but yeah, the idea of
using the file itself to track mtime rather than a dot file sounds
good.
Also this indeed makes resuming much safer.

I was just thinking about backward compatibility (old downloaded file
and new download code), but I think it is fine:
- for already downloaded packages, the download code is not called
- for partially downloaded packages, they will be discarded and the
download will start over. But if you still have partially downloaded
packages when upgrading pacman, they are probably old and outdated.

> 5. For GPG verification of the databases down the road, we are going to need
> the DB file around for at least a short bit of time anyway, so this gets us
> closer to that.
>

Well, I do not think this would have been a problem. We probably want
to do gpg verification right after downloading, and before extracting
(and then removing) the file, don't we?


Anyway, I did not have big issues with the old code, but I do not have
any with the new one either. Code review and testing did not reveal
any issues.
And the new one does have several advantages, so it's all good.
 
11-16-2009, 01:09 AM
Dan McGee

download: major refactor to address lingering issues

On Sun, Nov 15, 2009 at 7:23 AM, Xavier <shiningxc@gmail.com> wrote:
> On Thu, Nov 12, 2009 at 9:09 AM, Dan McGee <dan@archlinux.org> wrote:
>> Sorry for this being such a huge patch, but I believe it is necessary for
>> quite a few reasons which I will attempt to explain herein. I've been
>> mulling this over for a while, but wasn't super happy with making the
>> download interface more complex. Instead, if we carefully order things in
>> the internal download code, we can actually make the interface simpler.
>>
>> 1. FS#15657 - This involves `name.db.tar.gz.part` files being left around the
>> filesystem, and then causing all sorts of issues when someone attempts to
>> rerun the operation they canceled. We need to ensure that if we resume a
>> download, we are resuming it on exactly the same file; if we cannot be
>> almost positive of that, then we need to start over.
>>
>
> That's quite different from the simple solution we had before: simply
> disable resuming for the repo DBs, which are quite small anyway.

Yes. However, the only way to do this was to add even more flags to
the download (and thus callback). I had a "partial_ok" flag going for
a while in one iteration of the patch, but dumped it when I realized
the issue could be solved by just checking headers correctly.

>> 2. http://www.mail-archive.com/pacman-dev@archlinux.org/msg03536.html - Here
>> we have a lighttpd bug to ruin the day. If we send both a Range: header and
>> If-Modified-Since: header across the wire in a GET request, lighttpd doesn't
>> do what we want in several cases. If the file hasn't been modified, it
>> returns a '304 Not Modified' instead of a '206 Partial Content'. We need to
>> do a stat (i.e. HEAD in HTTP terms) operation here, and then proceed
>> accordingly based on the values we get back from it.
>>
>
> These two issues look completely different.
> The link you gave was about If-Modified-Since not working correctly,
> and lighttpd not returning '304 Not Modified' because of some
> time/timezone issues.
> Besides, this should have been fixed in 1.4.24, which is in Arch now.
> I guess you should report your issue to lighttpd.

Yeah, the original message was the same problem, actually. The poster
of that message didn't diagnose it correctly, and it isn't the lighttpd
issue linked there. However, the lighty issue should be solved in their
Duke Nukem Forever 1.5 release.

> From libfetch's point of view, doing a Stat is fine.
> What seemed to cause issues for some users is doing a Get right away,
> and cancelling when the mtimes matched.
> See 6f97842ab22eb50fdc689e8aa2e95688d015fa74

Yep; this should not negate the changes of that fix.

>> 3. The mtime stuff was rather ugly, and relied on the called function to
>> write back to a passed in reference, which isn't the greatest. Instead, use
>> the power of the filesystem to contain this info. Every file downloaded
>> internally is now carefully timestamped with the remote file time. This
>> should allow the resume logic to work. In order to guarantee this, we need
>> to implement a signal handler that catches interrupts, notifies the running
>> code, and causes it to set the mtimes on the file. It then rethrows the
>> signal so the pacman signal handler (or any frontend) works as expected.
>>
>> 4. We did a lot of funky stuff in trying to track the DB last modified time.
>> It is a lot easier to just keep the downloaded DB file around and track the
>> time on that rather than in a funky dot file. It also kills a lot of code.
>>
>
> I always find signal handling quite tricky, but yeah, the idea of
> using the file itself to track mtime rather than a dot file sounds
> good.
> Also this indeed makes resuming much safer.
>
> I was just thinking about backward compatibility (old downloaded file
> and new download code), but I think it is fine:
> - for already downloaded packages, the download code is not called
> - for partially downloaded packages, they will be discarded and the
> download will start over. But if you still have partially downloaded
> packages when upgrading pacman, they are probably old and outdated.

Yep, and even locally things seemed to work out OK. Not a show-stopper
by any means.

>> 5. For GPG verification of the databases down the road, we are going to need
>> the DB file around for at least a short bit of time anyway, so this gets us
>> closer to that.
>>
>
> Well, I do not think this would have been a problem. We probably want
> to do gpg verification right after downloading, and before extracting
> (and then removing) the file, don't we?

But why not allow it to happen any time? Why is the removal of the
file tied in with the download/extraction loop? It is just a big mess
right now IMO.

> Anyway, I did not have big issues with the old code, but I do not have
> any with the new one either. Code review and testing did not reveal
> any issues.
> And the new one does have several advantages, so it's all good.

Thanks for the review, appreciate it.
 
11-16-2009, 07:23 AM
Xavier

download: major refactor to address lingering issues

On Mon, Nov 16, 2009 at 3:09 AM, Dan McGee <dpmcgee@gmail.com> wrote:
>
> Yes. However, the only way to do this was to add even more flags to
> the download (and thus callback). I had a "partial_ok" flag going for
> a while in one iteration of the patch, but dumped it when I realized
> the issue could be solved by just checking headers correctly.
>

You are right, all these flags were getting messy.

>>> 2. http://www.mail-archive.com/pacman-dev@archlinux.org/msg03536.html - Here
>>> we have a lighttpd bug to ruin the day. If we send both a Range: header and
>>> If-Modified-Since: header across the wire in a GET request, lighttpd doesn't
>>> do what we want in several cases. If the file hasn't been modified, it
>>> returns a '304 Not Modified' instead of a '206 Partial Content'. We need to
>>> do a stat (i.e. HEAD in HTTP terms) operation here, and then proceed
>>> accordingly based on the values we get back from it.
>>>
>>
>> These two issues look completely different.
>> The link you gave was about If-Modified-Since not working correctly,
>> and lighttpd not returning '304 Not Modified' because of some
>> time/timezone issues.
>> Besides, this should have been fixed in 1.4.24, which is in Arch now.
>> I guess you should report your issue to lighttpd.
>
> Yeah, the original message was the same problem, actually. The poster
> of that message didn't diagnose it correctly, and it isn't the lighttpd
> issue linked there. However, the lighty issue should be solved in their
> Duke Nukem Forever 1.5 release.
>

Hmm, I am still confused here.

Which original message?
Marc: http://www.mail-archive.com/pacman-dev@archlinux.org/msg03527.html
lighttpd issue: http://redmine.lighttpd.net/issues/2047
Both seem to match perfectly: an If-Modified-Since issue with DST,
fixed in 1.4.24.
Neither of them seems to talk about the Range header.

Anyway, if it's fixed in 1.5, it's better than nothing. At least that
code is available; anyone can get it from svn.
 
