app2 disk space

Fedora Infrastructure mailing list archive
(http://www.linux-archive.org/fedora-infrastructure/163094-app2-disk-space.html)

Mike McGrath 09-20-2008 02:50 AM

app2 disk space
 
Some of you have seen the disk alerts on app2. Looking more closely, it
seems the host was not built with enough disk space (as app1 was). So
after the freeze is over I'll rebuild it.

It does raise a point about storage for Transifex, though. Basically,
each host running Transifex (or Damned Lies, I can't quite remember
which) keeps a local copy of every SCM repository as part of its normal
operation. For performance reasons I don't think that will change, but
it's something we'll want to figure out long term. I haven't done the
research, but my gut feeling is that running something like
git/hg/svn/bzr over NFS will cause problems.

On the other hand, these aren't upstream repos but a local cache, so
I'm also curious what the harm would be: if they get borked, one could
just delete the cache and it would repopulate. Thoughts?
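
For illustration, a minimal sketch of the delete-and-repopulate
recovery described above, assuming a git mirror kept under a
hypothetical cache path (the path and upstream URL are invented, not
the actual Transifex layout):

    #!/bin/bash
    # Recover a borked local SCM cache by throwing it away and
    # re-cloning from upstream. Path and URL are hypothetical.
    CACHE=/var/cache/scm/some-module
    UPSTREAM=git://example.org/some-module.git

    rm -rf "$CACHE"
    git clone --mirror "$UPSTREAM" "$CACHE"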

-Mike

_______________________________________________
Fedora-infrastructure-list mailing list
Fedora-infrastructure-list@redhat.com
https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list

Toshio Kuratomi 09-20-2008 06:06 AM

app2 disk space
 
Mike McGrath wrote:
> Some of you have seen the disk alerts on app2. Looking more closely, it
> seems the host was not built with enough disk space (as app1 was). So
> after the freeze is over I'll rebuild it.
>
> It does raise a point about storage for Transifex, though. Basically,
> each host running Transifex (or Damned Lies, I can't quite remember
> which) keeps a local copy of every SCM repository as part of its normal
> operation. For performance reasons I don't think that will change, but
> it's something we'll want to figure out long term. I haven't done the
> research, but my gut feeling is that running something like
> git/hg/svn/bzr over NFS will cause problems.
>
I think the major thing is how the SCMs do locking. I know that bzr
does its own locking in the pack-0.92 format and beyond, and does not
rely on OS-level locking (pack-0.92 is the default format in our
currently deployed bzr). Transifex should only be using Subversion
working trees, which should be fine (although the documentation
recommends turning off subtree checking). CVS, I'd imagine, would be
similar to Subversion. git and hg are unknown to me.
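
The subtree-checking recommendation presumably refers to the NFS
server's export option; a sketch of what disabling it would look like
in /etc/exports, with hypothetical hosts and paths:

    # /etc/exports -- hypothetical export of a shared SCM cache.
    # no_subtree_check turns off the subtree check; rw and sync are
    # illustrative defaults, not our actual settings.
    /srv/scm-cache  app1.example.com(rw,sync,no_subtree_check)
    /srv/scm-cache  app2.example.com(rw,sync,no_subtree_check)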

> On the other hand, these aren't upstream repos but a local cache, so
> I'm also curious what the harm would be: if they get borked, one could
> just delete the cache and it would repopulate. Thoughts?
>
I'll leave glezos to answer this. I think the problem area would be if
a repository got into some sort of locked state that required us to
manually remove it.
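
If a cached repository did get wedged on a stale lock, the manual
cleanup could be as simple as the following sketch (the path is
hypothetical; bzr break-lock is bzr's documented command for clearing
stale locks):

    # Try to clear a stale lock on a cached bzr branch; if that
    # fails, just delete the cache since it is only a local mirror.
    bzr break-lock /var/cache/scm/some-branch \
        || rm -rf /var/cache/scm/some-branch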

-Toshio

Nigel Jones 09-20-2008 06:45 AM

app2 disk space
 
On Fri, 2008-09-19 at 21:50 -0500, Mike McGrath wrote:
> Some of you have seen the disk alerts on app2. Looking more closely, it
> seems the host was not built with enough disk space (as app1 was). So
> after the freeze is over I'll rebuild it.
>
> It does raise a point about storage for Transifex, though. Basically,
> each host running Transifex (or Damned Lies, I can't quite remember
> which) keeps a local copy of every SCM repository as part of its normal
> operation. For performance reasons I don't think that will change, but
> it's something we'll want to figure out long term. I haven't done the
> research, but my gut feeling is that running something like
> git/hg/svn/bzr over NFS will cause problems.
>
> On the other hand, these aren't upstream repos but a local cache, so
> I'm also curious what the harm would be: if they get borked, one could
> just delete the cache and it would repopulate. Thoughts?
-1

I'd like to propose a different strategy...

Based on your original e-mail, this is Damned Lies at fault, not
Transifex. I remember a similar issue with the initial rebuild of the
app servers: Damned Lies wasn't working because the SQLite database
didn't exist. My problem is that we have at least two copies of the
database, SO...

I've said this a couple of times before, BUT it'd REALLY be 'nice' to
have a machine in PHX dedicated to non-public-facing yet
mission-critical tasks (things that happen in the background). This
would cover Damned Lies checkouts and MirrorManager crawls, to name a
couple.

The database could then be shared over NFS (or rsynced) to the app
servers to keep it up to date. Alternatively, for Damned Lies, it
appears we can use MySQL as the backend instead of SQLite (at least
it's not postgres :))
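
A rough sketch of the rsync variant of that idea, assuming a dedicated
backend host pushing the Damned Lies SQLite file to each app server
from cron (hostnames, paths, and the schedule are all hypothetical):

    # /etc/cron.d/push-damned-lies-db on the hypothetical backend box:
    # every 15 minutes, push the freshly built database out to the
    # app servers.
    */15 * * * * root rsync -a /srv/damned-lies/database.db app1.example.com:/srv/damned-lies/
    */15 * * * * root rsync -a /srv/damned-lies/database.db app2.example.com:/srv/damned-lies/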

This way we can hopefully reduce some of our RAM needs and also some of
our bandwidth needs (one set of regular checkouts instead of two).

- Nigel
--
Nigel Jones <dev@nigelj.com>

"Stephen John Smoogen" 09-20-2008 11:41 PM

app2 disk space
 
On Fri, Sep 19, 2008 at 8:50 PM, Mike McGrath <mmcgrath@redhat.com> wrote:
> Some of you have seen the disk alerts on app2. Looking more closely, it
> seems the host was not built with enough disk space (as app1 was). So
> after the freeze is over I'll rebuild it.
>

Newbie questions from the peanut gallery :)

Do we have standard kickstarts for systems or are they done by hand?
How are systems provisioned to be built?
Do we use cobbler? Would we be interested in doing so?
What is the flight speed of an unladen swallow?

--
Stephen J Smoogen. -- BSD/GNU/Linux
How far that little candle throws his beams! So shines a good deed
in a naughty world. = Shakespeare. "The Merchant of Venice"

Ricky Zhou 09-21-2008 12:18 AM

app2 disk space
 
On 2008-09-20 05:41:22 PM, Stephen John Smoogen wrote:
> Do we have standard kickstarts for systems or are they done by hand?
Yup, we pretty much do a kickstart install, edit /etc/hosts and other
networking configs, then run puppet to build a box. Our SOP for doing
all of this is publicly available at
http://fedoraproject.org/wiki/Infrastructure/SOP/kickstart.

> How are systems provisioned to be built?
I'm not completely sure what this means - I guess we just look for xen
hosts with available resources and build there.
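
For anyone curious about the mechanics, starting a guest build on a
Xen host could look roughly like this virt-install sketch (the guest
name, sizes, and URLs are invented for illustration, not our actual
setup):

    # Kick off a paravirtualized Xen guest install driven by a kickstart.
    virt-install --paravirt --name app2 --ram 4096 \
        --file /var/lib/xen/images/app2.img --file-size 20 \
        --location http://mirror.example.com/fedora/releases/9/Everything/x86_64/os/ \
        --extra-args "ks=http://config.example.com/ks/app.ks" \
        --nographics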

> Do we use cobbler? Would we be interested in doing so?
I don't think we are now, and I have no idea what kind of interest there
is in using it.

> What is the flight speed of an unladen swallow?
Hehe: http://www.style.org/unladenswallow/

Thanks,
Ricky

Mike McGrath 09-21-2008 05:35 AM

app2 disk space
 
On Sat, 20 Sep 2008, Ricky Zhou wrote:

> On 2008-09-20 05:41:22 PM, Stephen John Smoogen wrote:
> > Do we have standard kickstarts for systems or are they done by hand?
> Yup, we pretty much do a kickstart install, edit /etc/hosts and other
> networking configs, then run puppet to build a box. Our SOP for doing
> all of this is publicly available at
> http://fedoraproject.org/wiki/Infrastructure/SOP/kickstart.
>
> > How are systems provisioned to be built?
> I'm not completely sure what this means - I guess we just look for xen
> hosts with available resources and build there.
>

In our case we have hosts all over the place, so it can be different
from host to host, but the SOP/kickstart mostly handles both virtual
guests (the bulk of our kickstarting) and the physical hosts. We've got
a PXE environment in PHX.

> > Do we use cobbler? Would we be interested in doing so?
> I don't think we are now, and I have no idea what kind of interest there
> is in using it.
>

Yeah, we don't use cobbler yet, mostly because of time and need. I'd
like to support that project and use it, no doubt, but in our case we
have very few kickstart files and they're all incredibly basic. As
Ricky said, they do the base install and check whether any special
hosts entries are needed, then yum update, yum install puppet, and
reboot.
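
As a sketch of that flow (not the actual Fedora Infrastructure
kickstart, just the pattern described), the post-install steps boil
down to something like:

    # %post section of a hypothetical minimal kickstart: update the
    # base install, pull in puppet, and enable it so it takes over
    # configuration after the reboot.
    %post
    yum -y update
    yum -y install puppet
    chkconfig puppet on
    %end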

-Mike
