On Sat, 2010-10-02 at 20:49 +0100, Yorvyk wrote:
> On Sat, 02 Oct 2010 19:54:27 +0200
> Gilboa Davara <firstname.lastname@example.org> wrote:
> > On Thu, 2010-09-30 at 09:59 -0700, JD wrote:
> > > I was browsing for info on 12-core CPUs and found
> > > that AMD released, or at least announced, them back in March.
> > > The price is steep, of course.
> > > What I would like to know is the degree of granularity
> > > of the SMP implementation in Linux.
> > > Does anyone have an inside track on that?
> > > Or point to some internal documentation?
> > I'm not sure I understand the question.
> > The Linux kernel itself has no issues supporting hundreds of CPUs
> > (either real or SMT).
> Apparently it does have issues: http://www.conceivablytech.com/3166/science-research/current-operating-systems-may-only-make-sense-up-to-48-cores/
I saw this on /. yesterday, but didn't have time to read the actual
paper (only the story linked above).
Nevertheless, talking about the kernel as a single blob is plain wrong.
Different applications / workloads exercise different code paths in the
kernel; some paths are better equipped to handle hundreds of cores than
others.
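For what it's worth, you can see how far the kernel you're actually
running was built to scale. A minimal sketch, assuming a Linux box with
sysfs mounted; the /boot config file location is distro-dependent, so
that last check is an assumption, not universal:

```shell
# CPUs currently online (coreutils):
nproc

# CPU range the running kernel could possibly bring online:
cat /sys/devices/system/cpu/possible

# Compile-time upper bound (CONFIG_NR_CPUS), if the distro ships the
# kernel config under /boot - not all do, hence the fallback message:
grep CONFIG_NR_CPUS "/boot/config-$(uname -r)" 2>/dev/null \
    || echo "kernel config not available"
```

Distribution kernels are typically built with CONFIG_NR_CPUS well into
the hundreds, which is what I meant by "no issues supporting" that many.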
Claiming that you should "redesign" the kernel in order to handle
huge-SMP configurations (which do exist today, mind you) is meaningless
unless you point at the specific parts of the kernel that should be
redesigned (network? FS? scheduler? etc.) and the target workloads.
(E.g. there's no point in optimizing the kernel for a 32-core desktop
machine - at least not for now...)
Beyond that, at least in my own experience, if you're running an
application that can actually *fully* utilize 32+ cores, that
application is usually memory- or CPU-bound rather than kernel-bound
(so to speak)... But then again, as I said, I've only had time to read
the story linked above and haven't read the actual MIT paper.
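A rough way to tell which side of that line a workload falls on:
compare its user time (cycles spent in the application) against its
system time (cycles spent in the kernel on its behalf). A minimal
sketch using the POSIX shell `times` builtin; the busy loop below is a
hypothetical stand-in for a real workload:

```shell
# Purely user-space work: no syscalls inside the loop, so system time
# should stay near zero while user time grows.
i=0
while [ "$i" -lt 200000 ]; do i=$((i+1)); done

# Print accumulated CPU times for this shell (and its children):
# on each line, the first column is user time, the second system time.
times
```

If system time dwarfs user time, the workload is exercising the kernel
(syscalls, locking, page faults) and kernel scalability matters; if
user time dominates, the bottleneck is the application itself.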