Old 11-05-2011, 04:35 PM
Linux Tyro
 
Default A general history question

Hi,

I'm excited to discover the world of Linux. A general question came to mind regarding the origin of Linux.

Well, Linux is basically a kernel -- essentially the same in the majority of distros, almost all of them. openSUSE also uses the .rpm format, which is the Red Hat Package Manager. So as I understand it, Linux initially had two sides -- 1) Debian and 2) RPM. But I just wanted to know: has openSUSE also been derived from Red Hat, like many other distros have?


And of Debian and .rpm, both seem very old, but I guess Debian is older...? I am only talking about Linux, not UNIX (like FreeBSD).
--
THX

 
Old 11-05-2011, 04:37 PM
Linux Tyro
 
Default A general history question

On Sat, Nov 5, 2011 at 1:35 PM, Linux Tyro <fedora.bkn@gmail.com> wrote:


> [original question snipped]


Well, this question might seem silly to some, but as a beginner these questions often come to mind, and I'd like to learn from the experience of others -- Linux experts, of course. If this question is irrelevant, I am sorry; just ignore it.


--
THX

 
Old 11-05-2011, 04:37 PM
inode0
 
Default A general history question

On Sat, Nov 5, 2011 at 12:35 PM, Linux Tyro <fedora.bkn@gmail.com> wrote:
> I'm excited to discover the world of Linux. A general question came
> to mind regarding the origin of Linux.
>
> [rest of question snipped]

You might find this interesting - be sure to look at the timeline.

http://en.wikipedia.org/wiki/Linux_distribution

John
 
Old 11-05-2011, 06:10 PM
Don Quixote de la Mancha
 
Default A general history question

Richard Stallman and his colleagues at The Free Software Foundation
assert that the proper term is "GNU/Linux", because in reality linux
is just - "just!" - the operating system kernel.

There's not a whole lot you can do with a kernel all by itself. While
the kernel is the first program to run in a complete operating system,
with all the other programs being launched via the facilities of the
kernel, a complete system - what we know as a "distribution" or
"distro" - requires more: user space runtime libraries such as the
Standard C Library; "daemons" that run invisibly in the background;
some way to configure the system - for GNU/Linux, mostly text files
located in /etc; tools that let the user create, delete and edit
various file formats; and, for a computer with a graphical user
interface, a way for the screen "real estate" to be divided up among
programs that want to display graphics, as well as a way to route
keyboard and mouse input to the appropriate GUI program.

Richard Stallman originally set out to write a completely new - and
better - source-code-compatible clone of the very proprietary,
expensive and closed-source operating system UNIX.

Among his reasons for wanting to do so was that the source to UNIX
was at first distributed more or less freely by AT&T. That changed
when AT&T was broken up into many competing telephone companies. The
AT&T breakup also lifted many of the regulatory restrictions once
placed on AT&T as a natural monopoly. Now free of the restrictions
that had prevented it from competing in the computer business, AT&T
realized there was money to be made in selling UNIX.

A UNIX System V binary-executable license was expensive, with the
license price increasing with the number of allowable logged-in users.

A System V source code license was colossally expensive, with the
licensees being bound by restrictive non-disclosure agreements, as
well as the obligation to pay AT&T for any source or binaries that the
licensees passed on to others. Such source licenses were only
affordable to those who found some way to charge a lot of money for
whatever they produced from it.

Stallman - or RMS as he prefers to be called - started out by
developing GNU - which stands for "GNU's Not UNIX", which itself
expands to "(GNU's Not UNIX)'s Not UNIX" and so on - from the outside
in. That is, rather than writing a kernel from scratch and then
adding user space software that ran on top of that kernel, RMS and
his colleagues at The Free Software Foundation developed a great deal
of user space software first.

RMS and I share a common trait for which we are both often criticized:
we are very picky about our work, and don't particularly care if it
takes a long time to get it right.

More or less the first program that RMS wrote for use by GNU was a
text editor called GNU Emacs. The name originally stood for "Editor
Macros", as it began as a set of simple macros, more or less like
Word or Excel macros, for the editor that was built into the LISP
artificial intelligence workstations that RMS used at the MIT AI
Laboratory in Cambridge, Massachusetts.

Writing Emacs in C with an integrated LISP interpreter enabled Emacs
to be portable to other kinds of computers, and made Emacs quite
powerfully extensible by writing Emacs LISP (or elisp) programs for
it.

Emacs is pretty small and quick by today's standards, compared to,
say, Firefox or the Gnome or KDE desktops, but back in the day it was
said to stand not for "Editor Macros" but for "Eight Megs And
Constantly Swapping" - I first used it on a 16 MHz Sun workstation
with but four megabytes of memory! - as well as "Emacs Makes A
Computer Slow".

When Emacs was used on a GUI computer, it was popular to give it a
Kitchen Sink icon!

It could readily be argued that RMS would have by now made a lot more
progress with GNU had he not spent so much time writing Emacs,
improving it and porting it to so many completely alien computing
platforms. However, if you learn to use Emacs really, really well,
you will find that it can do a great deal more than edit text. For
most coders who are Emacs fans, it does everything that today's
Integrated Development Environments such as Visual Studio,
CodeWarrior, Xcode and KDevelop can do. It can actually do a great
deal more, and is readily customizable by writing elisp programs.

Thus Emacs all by itself empowered all of us coders to accomplish a
great deal more with less effort and in less time than we could have
with the programmer's editors of the day, such as the original UNIX
vi. Today's Vim (VI iMproved) works just like the UNIX vi, but it can
do a great deal more than vi could.

I used to really suffer under the SunOS 3 vi; when some consultant
dropped by with an Emacs source tape and showed me how to build and
run it, I thought he was a Heaven-sent prophet!

RMS, despite being notorious for never getting anything done, in
reality is just about the most brilliant and productive engineer in
all of human history. It's just that he doesn't like to do anything
the simple way. If you Google up the GNU coding standards document,
you will find the advice that while GNU programs as well as subroutine
libraries are expected to work much like the UNIX originals they
replace, they are expected to improve on them in just about every way
possible.

The GNU programs all have extensive documentation, the most essential
parts of which can be accessed by giving the "--help" command-line
option to the programs. No such online help existed in the original
UNIX command-line tools, I expect because they were originally written
for such resource-constrained computers that the executable files of
all the software really did have to be as small as possible.
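
To illustrate the convention, here is a minimal sketch in C of how a
command-line tool can honor "--help"; the tool name "frob" and its
usage text are made up for the example:

    #include <stdio.h>
    #include <string.h>

    /* A minimal sketch of the GNU "--help" convention: print a usage
       summary and exit successfully when --help is given. The tool
       name "frob" and its options are hypothetical. */
    int main(int argc, char *argv[])
    {
        for (int i = 1; i < argc; i++) {
            if (strcmp(argv[i], "--help") == 0) {
                printf("Usage: frob [OPTION]... [FILE]...\n");
                printf("Frobnicate each FILE.\n\n");
                printf("      --help     display this help and exit\n");
                return 0;
            }
        }
        /* ... normal processing would go here ... */
        return 0;
    }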

Many of the older UNIX programs, as well as the UNIX kernel itself,
had fixed-size buffers, or memory areas in which they stored their
data. The GNU coding standard explicitly forbids such fixed-length
buffers when it is at all possible to dynamically allocate buffers of
any desired size.

Dynamic allocation has many advantages, but the code that implements
it is a great deal more complex than code that uses fixed-size
buffers. It can be slower and use more memory as well, because there
is some invisible overhead due to the bookkeeping requirements of the
dynamic memory manager.
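
To make the trade-off concrete, here is a small C sketch -
illustrative only, not from any actual UNIX source - contrasting the
two approaches to reading a line of unknown length:

    #include <stdio.h>
    #include <stdlib.h>

    /* Old style: a fixed buffer. Anything past 80 characters is
       truncated - or, in careless code, overflows the buffer. */
    char fixed_line[80];

    /* GNU style: grow the buffer as needed. More bookkeeping and a
       little allocator overhead, but no arbitrary limit. */
    char *read_line_dynamic(FILE *fp)
    {
        size_t cap = 16, len = 0;
        char *buf = malloc(cap);
        int c;

        if (buf == NULL)
            return NULL;
        while ((c = fgetc(fp)) != EOF && c != '\n') {
            if (len + 1 == cap) {        /* full: double the buffer */
                char *tmp = realloc(buf, cap *= 2);
                if (tmp == NULL) {
                    free(buf);
                    return NULL;
                }
                buf = tmp;
            }
            buf[len++] = (char)c;
        }
        buf[len] = '\0';
        return buf;                      /* caller must free() */
    }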

I first got into UNIX in a real way right around the time that work
was commencing to make the kernel fully dynamic. That might not sound
like such a big deal, but operating system kernels of any sort are a
lot harder to write than just about any userspace program. A big
advantage of fixed-size buffers in the kernels is that, to the extent
you didn't try to exceed their capacity, you could count on them to
work right!

The SunOS 3 kernel on my old 3/160 workstations had a file size -
uncompressed - of about 500 kilobytes. The entire Microport
System V/AT UNIX distribution for the 80386 PC clones was distributed
on a half-dozen or so 5 1/4 inch 360 KB floppy disks.

The bane of my existence when I worked phone support for Microport
was some customer ringing me up to shout at me about the message
"Inode Table Overflow!" spewing all over his console. His box didn't
actually crash, but it had become largely broken.

That message occurred when all the processes - running programs - on
the system together tried to open too many files and directories. On
Linux, and the UNIXen of today, the number of open files is limited
only by the available memory installed in the computer, but back
then, each open file was represented by an inode data structure that
was an element of a fixed-size array of such structures.

Opening too many files would never actually overflow that array - that
would have corrupted kernel memory and crashed the machine. Instead
your file open would fail, and you'd get that overflow message on the
console.

For most of my callers, the only way to fix that overflow would be to
send them a configuration kit - we called it the Link Kit - that
contained the object code - but NOT source code! - for the kernel,
except that we did provide source for certain user-configurable items,
such as the inode table array. One would edit a header file to
increase the size of that array, compile what sources we provided,
then link all the objects together to make a new kernel that could
have more files open all at one time than would otherwise be the case.
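
In outline, the mechanism looked something like the following C
sketch. This is hypothetical - names such as NINODE follow the
traditional UNIX tunables, not Microport's actual sources:

    #include <stdio.h>

    #define NINODE 100          /* edit, recompile, relink to change */

    struct inode {
        int i_count;            /* reference count; 0 means free */
        /* ... device, inode number, permissions, and so on ... */
    };

    static struct inode inode_table[NINODE];

    /* Grab a free in-core inode, as a file open would. When every
       slot is busy, the open fails cleanly - with a console message -
       rather than writing past the end of the array and corrupting
       kernel memory. */
    struct inode *iget(void)
    {
        for (int i = 0; i < NINODE; i++) {
            if (inode_table[i].i_count == 0) {
                inode_table[i].i_count = 1;
                return &inode_table[i];
            }
        }
        printf("Inode Table Overflow!\n");
        return NULL;
    }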

But why didn't we at Microport just configure the kernel with a big
inode table to start with? That was because the entire inode table
occupied precious physical memory - kernel memory is never swapped
out as user space programs often are - so an inode table that was
larger than you actually needed not only wasted memory, but could
cause programs that needed a lot of memory to fail!

While adjusting the inode table size was the most common use for our
Link Kit, there were lots of other configurable parameters. Device
drivers were all hardwired in, for example. Dynamically loadable
driver modules as we have today were yet to be implemented. Again,
linking in drivers you didn't really need wasted a lot of memory.

RMS and his colleagues eventually got to the point where most of the
software that was of any real use to developers who wrote code for
Sun workstations in particular, as well as most other UNIX variants,
and to a lesser extent VAX/VMS, Microsoft Windows and even, just a
little bit, the Classic Mac OS, could be downloaded in source code
form from FTP servers all over the Internet.

Getting that stuff to compile could be a chore, so the FSF financed
itself to a large extent by selling tapes - TAPES! not DISKS! - of
ready-to-install precompiled executable programs. They also offered a
custom service of making such a tape for just about any platform of
your heart's desire, but for a hefty price.

While some of that code could be gotten to work on inexpensive 16-bit
MS-DOS and Windows 3.1 computers, a lot of it either had not yet been
ported to the 16-bit x86 architecture, or just could not be ported at
all. To enjoy the full power of the GNU software, one had to drop a
lot of dollars on an expensive UNIX computer, only to replace the
proprietary software - which you had no choice but to pay a lot of
money for - with the GNU software.

RMS by this time was well into the development of HURD, the GNU
operating system kernel, but in his usual fashion he wanted it to be
light-years ahead of existing UNIX kernel technology. It was a long,
long time before HURD could do anything useful at all.

While it was developed with portability in mind, a kernel is about
the least portable kind of software there is, because its source code
depends in all kinds of minute, picky, arbitrary and often downright
disgusting ways on the hardware it runs on. Quite a lot of those
hardware details are only available to developers who sign
non-disclosure agreements. Even when the hardware interface isn't a
trade secret, it happens all the time that the documentation is just
plain wrong, or doesn't even exist!

Eventually a Finnish Computer Science undergraduate student by the
name of Linus Torvalds got really tired of waiting around for HURD to
be usable as well as ported to the 80386 PC clones, so he wrote from
scratch, and more or less all by himself, just about the simplest
possible operating system kernel that would work more or less like
UNIX did, but ONLY on 80386 PCs.

That kernel was called "linux", and it still is.

Without the work of Richard Stallman, his colleagues at The Free
Software Foundation, as well as countless others who contributed to
the GNU source code, the linux kernel all by itself would have been
largely useless.

The original distributions were little more than the Linux kernel and
the GNU user space software. That's why they should properly be known
as GNU/Linux.

The distributions of today come with a great deal more software than
the kernel and the original GNU stuff. There are some who argue that
we should call it "Mozilla Linux" because most of us spend most of our
time using the Firefox web browser, which was developed by the Mozilla
project.

But the essential core of all of our distributions - not just Fedora,
but Ubuntu, Slackware, Debian, all the Live CDs - is the GNU software.

If you don't want to call it "GNU/Linux" instead of just "Linux", it
would really be better not to call it Linux at all. Call it by a name
that gives credit to ALL of its contributors. For us, that is just
"Fedora". On my MacBook Pro, it is Ubuntu.

I'm really beat, so I'm not going to address your question about the
package managers quite yet. Package management is an important part
of creating a usable distribution that end-users or system
administrators can easily install, upgrade and maintain.

In particular the mind-numbingly idiotic way in which all UNIX-like
operating systems spread vitally important programs, code libraries,
configuration files and other data files such as national language
localizations All Over Creation makes it damn near impossible for
anyone but an expert to UNINSTALL a program without the use of a
package manager.

The package managers didn't use to work so well, though. It's not as
bad these days, but it used to happen all the time, to just about
everyone, that an installation, upgrade or uninstallation would
totally break a system.

Debian did a better job with the packaging at first. Red Hat with RPM
- the Red Hat Package Manager - had a well-deserved reputation for
being totally brain-damaged.

But those days are largely behind us. Not completely, though; for no
reason I can fathom, just last night, without my doing anything at
all, the Ubuntu Natty Narwhal installation on my MacBook Pro started
reporting broken dependencies that so far I have been at a total loss
to repair.

I'll Send You My Bill In The Mail.

Sincerely,

A Grey-Bearded Old-Time UNIX Hacker
--
Don Quixote de la Mancha
Dulcinea Technologies Corporation
Software of Elegance and Beauty
http://www.dulcineatech.com
quixote@dulcineatech.com
 
Old 11-05-2011, 09:31 PM
Alan Cox
 
Default A general history question

> openSUSE also uses the .rpm format, which is the Red Hat Package
> Manager. So as I understand it, Linux initially had two sides --
> 1) Debian and 2) RPM. But I just wanted to know: has openSUSE also
> been derived from Red Hat, like many other distros have?

The family tree is a bit older than that, with Jurix involved. There
is a reasonable family tree chart on the internet, but I'm not sure
it's 100% accurate - then again, memory of things that long ago can
be misleading, so it may be me. SuSE is, I think, also older than
Red Hat.

> And of Debian and .rpm, both seem very old, but I guess Debian is
> older...? I am only talking about Linux, not UNIX (like FreeBSD).

Bogus kind of led to both Debian and Red Hat, but I believe Debian is
a bit older. There are several other old distributions still around,
such as Slackware. The oldest "Linux distribution" was probably MCC
0.95+, which migrated its users to Debian as one of its updates and
ceased to exist long ago.

Alan
 
Old 11-05-2011, 10:00 PM
scott
 
Default A general history question

On 11/05/2011 03:10 PM, Don Quixote de la Mancha wrote:
> Richard Stallman and his colleagues at The Free Software Foundation
> assert that the proper term is "GNU/Linux", because in reality linux
> is just - "just!" - the operating system kernel.
>
> [long history write-up snipped]
>
> Sincerely,
>
> A Grey-Bearded Old-Time UNIX Hacker

You just put more grey on my head, and in my beard, than I care to
admit to. Nice write-up.

Scott....another old grey-hair
 
Old 11-06-2011, 11:32 AM
Linux Tyro
 
Default A general history question

On Sat, Nov 5, 2011 at 1:37 PM, inode0 <inode0@gmail.com> wrote:

> http://en.wikipedia.org/wiki/Linux_distribution

I read this.

On Sat, Nov 5, 2011 at 3:10 PM, Don Quixote de la Mancha
<quixote@dulcineatech.com> wrote:

> Richard Stallman and his colleagues at The Free Software Foundation
> assert that the proper term is "GNU/Linux", because in reality linux
> is just - "just!" - the operating system kernel.
>
> [long history write-up snipped]

Excellent.
On Sat, Nov 5, 2011 at 6:31 PM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:

> SuSE is, I think, also older than Red Hat.

But SUSE uses .rpm (the Red Hat Package Manager), so how is that
possible? Could you link the page, please?


On Sat, Nov 5, 2011 at 7:00 PM, scott <redhowlingwolves@nc.rr.com> wrote:

> You just put more grey on my head, and in my beard, than I care to
> admit to. Nice write-up.

It was really nice!

--
THX

 
