Hi folks,
Can I get some people's recommendations on PCI/PNP network cards that run very reliably under Linux 2.4? I hear the Intel 10/100 eepro-based cards are good but are a little more costly. What about D-Link, Realtek, Linksys, etc. for $10-15 cards? I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3Com 905-based cards, which are the cause of my headaches currently with Fedora :-).
Thanks,
Dan
DanG wrote:
Can I get some people's recommendations on PCI/PNP network
cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about
^^^^^ Intel 100s are very good NICs, but work better with the _e100_ driver.
D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
A RealTek RTL-8139C+ clone is cheaper. But it must be the C+; the plain _8139_ parts are shit. The driver is 8139cp.
A very cheap Gb NIC is Netgear GA302T, tg3 driver.
But if you have a problem with a 905, better send a report to bugzilla or to netdev@oss.sgi.com (the Linux network-devices mailing list).
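(To make the driver choice concrete: on a Red Hat/Fedora box with a 2.4 kernel the binding typically lives in /etc/modules.conf. A minimal sketch, assuming the card comes up as eth0 and the driver names mentioned above; this is an illustration of the mechanism, not a tested recipe for any particular card:

    # /etc/modules.conf -- pin the driver for the first NIC explicitly
    alias eth0 e100        # Intel 10/100, preferred here over the older eepro100
    # alias eth0 8139cp    # RealTek RTL-8139C+; plain 8139 chips need 8139too instead

Then unload the old module and let "modprobe eth0" (or a reboot) pick up the new one.)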
Hi list,
I added the updates directory to my /etc/sysconfig/rhn/sources file, ran up2date (it went smoothly), rebooted my computer, and:
- gdm won't work: "Can't recognize image format png"
- evolution won't launch: "GdkPixbuf-WARNING **: Can not open pixbuf loader module file '/etc/gtk-2.0/gdk-pixbuf.loaders': No such file or directory"
- desktop icons and menu icons are all red "X"s
I remember this happened once before, but I don't remember the fix. Why oh why is running up2date a nail-biting experience? How can updating Fedora result in an unusable system so frequently?
As far as I know up2date didn't uninstall anything. The following image libraries are installed: libpng-1.2.2-17, libpng10-devel-1.0.13-9, libpng-devel-1.2.2-17, libpng10-1.0.13-9, gtk2-2.2.4-5.1, pygtk2-2.0.0-1, gtk2-engines-2.2.0-3, gtk2-devel-2.2.4-5.1, libjpeg-6b-29, libjpeg-devel-6b-29, ImageMagick-5.5.6-5.
Any thoughts on how I can un-break fedora?
Btw, kde seems to be working ok though.
help appreciated, -ry
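(For what it's worth, the usual suggestion for this kind of breakage - assuming the loaders cache is just missing or stale rather than the libraries themselves, which is only a guess from the error text - is to regenerate it as root:

    gdk-pixbuf-query-loaders > /etc/gtk-2.0/gdk-pixbuf.loaders
    rpm -V gtk2    # check whether any files from the gtk2 package are actually missing

If rpm -V reports missing files, reinstalling the affected package is the next step.)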
I've never had a problem with any of the recent Realtek chips. They just work. I guess I'm using a really recent chipset!
Bob
Xose Vazquez Perez wrote:
DanG wrote:
Can I get some people's recommendations on PCI/PNP network
cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about
^^^^^ Intel 100 are very good NICs, but better with _e100_ driver.
D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
A RealTek RTL-8139C+ clone is cheaper. But it must be C+, only _8139_ are shit. The driver is 8139cp.
A very cheap Gb NIC is Netgear GA302T, tg3 driver.
But if you have a problem with a 905, better send a report to bugzilla or to netdev@oss.sgi.com(net devices Linux ml)
This thread is a little off-topic here. Sorry.
Robert L Cochran wrote:
Xose Vazquez Perez wrote:
A RealTek RTL-8139C+ clone is cheaper. But it must be C+, only _8139_ are shit. The driver is 8139cp.
I've never had a problem with any of the recent Realtek chips. They just work. I guess I'm using a really recent chipset!
Yes, RealTek NICs work. But the RTL-8139-based ones have a _very_ bad design. The FreeBSD driver has more information:
/*
 * The RealTek 8139 PCI NIC redefines the meaning of 'low end.' This is
 * probably the worst PCI ethernet controller ever made, with the possible
 * exception of the FEAST chip made by SMC. The 8139 supports bus-master
 * DMA, but it has a terrible interface that nullifies any performance
 * gains that bus-master DMA usually offers.
 *
 * For transmission, the chip offers a series of four TX descriptor
 * registers. Each transmit frame must be in a contiguous buffer, aligned
 * on a longword (32-bit) boundary. This means we almost always have to
 * do mbuf copies in order to transmit a frame, except in the unlikely
 * case where a) the packet fits into a single mbuf, and b) the packet
 * is 32-bit aligned within the mbuf's data area. The presence of only
 * four descriptor registers means that we can never have more than four
 * packets queued for transmission at any one time.
 *
 * Reception is not much better. The driver has to allocate a single large
 * buffer area (up to 64K in size) into which the chip will DMA received
 * frames. Because we don't know where within this region received packets
 * will begin or end, we have no choice but to copy data from the buffer
 * area into mbufs in order to pass the packets up to the higher protocol
 * levels.
 *
 * It's impossible given this rotten design to really achieve decent
 * performance at 100Mbps, unless you happen to have a 400Mhz PII or
 * some equally overmuscled CPU to drive it.
 * [...]
 * Fast forward a few years. RealTek now has a new chip called the
 * 8139C+ which at long last implements descriptor-based DMA. Not
 * only that, it supports RX and TX TCP/IP checksum offload, VLAN
 * tagging and insertion, TCP large send and 64-bit addressing.
 * Better still, it allows arbitrary byte alignments for RX and
 * TX buffers, meaning no copying is necessary on any architecture.
 * There are a few limitations however: the RX and TX descriptor
 * rings must be aligned on 256 byte boundaries, they must be in
 * contiguous RAM, and each ring can have a maximum of 64 descriptors.
 * There are two TX descriptor queues: one normal priority and one
 * high. Descriptor ring addresses and DMA buffer addresses are
 * 64 bits wide. The 8139C+ is also backwards compatible with the
 * 8139, so the chip will still function with older drivers: C+
 * mode has to be enabled by setting the appropriate bits in the C+
 * command register. The PHY access mechanism appears to be unchanged.
 * [...]
 */
On Fri, 14 Nov 2003, DanG wrote:
Can I get some people's recommendations on PCI/PNP network cards
that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I use cheapo Dlink 530TX and 530TXS cards and they work flawlessly. $12 at staples. 6MB/s+ transfer rates full duplex. No problems ever.
I also have Intel 100 and 1000 hardware which works great. HTH
On Sat, Nov 15, 2003 at 06:24:15AM -0500, Mike A. Harris wrote:
I use cheapo Dlink 530TX and 530TXS cards and they work flawlessly. $12 at staples. 6MB/s+ transfer rates full duplex. No problems ever.
D-Link DFE-538TX over here. Slightly expensive but they work very well.
Emmanuel
Intel and Tulip chipsets here (intel, netgear, and linksys cards). Never had a single problem whatsoever.
On Sun, 2003-11-16 at 08:57, Emmanuel Seyman wrote:
On Sat, Nov 15, 2003 at 06:24:15AM -0500, Mike A. Harris wrote:
I use cheapo Dlink 530TX and 530TXS cards and they work flawlessly. $12 at staples. 6MB/s+ transfer rates full duplex. No problems ever.
D-Link DFE-538TX over here. Slightly expensive but they work very well.
Emmanuel
Once upon a time at band camp Sat, 15 Nov 2003 2:27 pm, DanG wrote:
Hi folks,
Can I get some people's recommendations on PCI/PNP network
cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I use Realtek 8139-based cards. I have a few different chipsets and have never had any issues with them; I use the 8139too driver. I get them for about AUD$13, which is about US$9.50 or thereabouts.
Basically most of them will work fine, so it comes down to what you want to spend.
Dennis
----- Original Message ----- From: "Dennis Gilmore" dennis@ausil.us To: fedora-test-list@redhat.com Sent: Saturday, November 15, 2003 11:34 AM Subject: Re: Well supported, reliable NICs for Redhat Linux/Fedora?
Once upon a time at band camp Sat, 15 Nov 2003 2:27 pm, DanG wrote:
Hi folks,
Can I get some people's recommendations on PCI/PNP network
cards that run very stable under Linux 2.4. I hear the Intel 10/100
eepro
I've used Netgear ones on all my servers for several years now. They're also pretty cheap - a tenner (£10) last time, if I remember correctly.
On Sat, 2003-11-15 at 05:27, DanG wrote:
Hi folks, Can I get some people's recommendations on PCI/PNP network cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I'm currently using a 3c905 (rev b, combo version) with fedora without any problems. I'm wondering what "headaches" it's causing you.
The Realtek cards are ok, but a bit broken by design (the driver works around that, but don't be surprised when your logs show "hanging transceiver reset" or other vague messages). I've had no problems with Intel and Digital cards.
Klaasjan
Klaasjan Brand wrote:
On Sat, 2003-11-15 at 05:27, DanG wrote:
Hi folks, Can I get some people's recommendations on PCI/PNP network cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I'm currently using a 3c905 (rev b, combo version) with fedora without any problems. I'm wondering what "headaches" it's causing you.
see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=98767
basically the cards lock up when kudzu is run during the boot process, and the machine has to be power-cycled to get them back into working condition. It seems to be limited to the plain 905; the B's and C's seem to work fine, and the 905s themselves seem to work fine with the 2.6 kernel.
I've kind of worked around the problem: my work has just thrown out a bunch of PCs with 905B's in them, and I retrieved the cards from the dustbin.
The Realtek cards are ok, but a bit broken by design (the driver works around that, but don't be surprised when your logs show "hanging transceiver reset" or other vague messages.
AFAICR they're not very high performance cards either.
I've had no problems with Intel and Digital cards.
Klaasjan
Someone on the fedora-list has the same problem. Something to do with kudzu I think. Look in that list instead of this test-list. Sorry I couldn't be of more help.
On 11/16/03 7:40 AM, "Iain Rae" iainr@zathras.org wrote:
Klaasjan Brand wrote:
On Sat, 2003-11-15 at 05:27, DanG wrote:
Hi folks, Can I get some people's recommendations on PCI/PNP network cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I'm currently using a 3c905 (rev b, combo version) with fedora without any problems. I'm wondering what "headaches" it's causing you.
see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=98767
basically the cards lock up when kudzu is run during the boot process and the machine has to be power-cycled to get them back into a working condition. It seems to be limited to the 905, the B's and C's seem to work fine, also the 905's themselves seem to work fine with the 2.6 kernel.
I've kind of worked round the problem, my work has just thrown out a bunch of PC's with 905b's in them and I retrieved the cards from the dustbin,
The Realtek cards are ok, but a bit broken by design (the driver works around that, but don't be surprised when your logs show "hanging transceiver reset" or other vague messages.
AFAICR they're not very high performance cards either.
I've had no problems with Intel and Digital cards.
Klaasjan
I have not had any kind of problem with 3Com, D-Link or Realtek; they all do the same job whether the cost is $15.00 or $300.00, and as long as they are 10/100 they work fine with any Linux. Even the Asustek or Nvidia 3Com works fine for me. The new Intel 10/100/1000 are more expensive, and so far the transmission is 10/100 only. There are a lot of technologies on the market that we do not need; they just complicate our lives more. It is like SATA hard drives: I prefer SCSI hard drives. Some 3Com 905 network cards boot by themselves on some types of motherboards. I have inserted one in a 64-bit PCI slot and it works fine; on an NVidia-chipset motherboard it won't boot by itself, but it will boot on an SiS-chipset motherboard. On some motherboards it cycles, but they are very good cards and there are always drivers for them.
Eric Barnes ebarnes@rationalsystemsupport.com wrote: Someone on the fedora-list has the same problem. Something to do with kudzu I think. Look in that list instead of this test-list. Sorry I couldn't be of more help.
On 11/16/03 7:40 AM, "Iain Rae" wrote:
Klaasjan Brand wrote:
On Sat, 2003-11-15 at 05:27, DanG wrote:
Hi folks, Can I get some people's recommendations on PCI/PNP network cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I'm currently using a 3c905 (rev b, combo version) with fedora without any problems. I'm wondering what "headaches" it's causing you.
see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=98767
basically the cards lock up when kudzu is run during the boot process and the machine has to be power-cycled to get them back into a working condition. It seems to be limited to the 905, the B's and C's seem to work fine, also the 905's themselves seem to work fine with the 2.6 kernel.
I've kind of worked round the problem, my work has just thrown out a bunch of PC's with 905b's in them and I retrieved the cards from the dustbin,
The Realtek cards are ok, but a bit broken by design (the driver works around that, but don't be surprised when your logs show "hanging transceiver reset" or other vague messages.
AFAICR they're not very high performance cards either.
I've had no problems with Intel and Digital cards.
Klaasjan
Yes, the problem I have is with my 3c905-TX card, which is revision A. If kudzu is on at boot time, the card does not initialize (though it worked great from RH 7 through 9). I have quite a few posts in bugzilla about it and the workarounds. The issue has not been resolved in kudzu yet, and I wanted another card so I could use the machine as a NAT/firewall for my network. I bought a new Realtek 8139C-chipset-based card. It has a one-year warranty and works like a charm - not bad for $6 US :-). Thanks for everyone's input.
Dan
From: fedora-test-list-admin@redhat.com [mailto:fedora-test-list-admin@redhat.com] On Behalf Of marcos colome Sent: Sunday, November 16, 2003 10:58 AM To: fedora-test-list@redhat.com Subject: Re: Well supported, reliable NICs for Redhat Linux/Fedora?
I have not had any kind of problems with 3COM, D-Link or Realtek, they all do the same job even if the cost is $15.00 or $300.00 as long as they are 10/100 they work fine with
any Linux, even the Asustek or Nvidia 3Com works fine with me. The new Intel
10/100/1000 are more expensive and the transmission is 10/100 only until now. There
are a lot of technologies in the market that we do not need, they just complicate our
life more, it is like SATA hard drives, I prefer SCSI hard drives. Some 3 com 905 network
card they boot by itself in some types of motherboards, I have inserted it on a 64bit pci
slot and it works fine, on an NVidia chipset motherboard it wont boot by itself, but it
will boot on an SIS chipset motherboard. In some motherboard it makes cycle, but they
are very good and there are always drivers for them.
Eric Barnes ebarnes@rationalsystemsupport.com wrote:
Someone on the fedora-list has the same problem. Something to do with kudzu I think. Look in that list instead of this test-list. Sorry I couldn't be of more help.
On 11/16/03 7:40 AM, "Iain Rae" wrote:
Klaasjan Brand wrote:
On Sat, 2003-11-15 at 05:27, DanG wrote:
Hi folks, Can I get some people's recommendations on PCI/PNP network cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I'm currently using a 3c905 (rev b, combo version) with fedora without any problems. I'm wondering what "headaches" it's causing you.
see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=98767
basically the cards lock up when kudzu is run during the boot process and the machine has to be power-cycled to get them back into a working condition. It seems to be limited to the 905, the B's and C's seem to work fine, also the 905's themselves seem to work fine with the 2.6
kernel.
I've kind of worked round the problem, my work has just thrown out a bunch of PC's with 905b's in them and I retrieved the cards from the dustbin,
The Realtek cards are ok, but a bit broken by design (the driver works around that, but don't be surprised when your logs show "hanging transceiver reset" or other vague messages.
AFAICR they're not very high performance cards either.
I've had no problems with Intel and Digital cards.
Klaasjan
On Sunday 16 November 2003 07:58, marcos colome wrote:
There are a lot of technologies in the market that we do not need, they just complicate our life more, it is like SATA hard drives, I prefer SCSI hard drives.
Heh, I love comments like these.
Show me a 250gig SCSI disk that is truly hot-swappable. Oh wait, that's right, it doesn't exist. Pity. Show me a 3u dual-Xeon server that is capable of 4+ TB of hot-swap SCSI storage (all in the 3u, no external stuff). Oh wait, it doesn't exist. Pity.
-- Jesse Keating RHCE MCSE (http://geek.j2solutions.net) Fedora Legacy Team (http://www.fedora.us/wiki/FedoraLegacy) Mondo DevTeam (www.mondorescue.org) GPG Public Key (http://geek.j2solutions.net/jkeating.j2solutions.pub)
Was I helpful? Let others know: http://svcs.affero.net/rm.php?r=jkeating
We are here to give our opinions, not for intellectual arguments. If you want to use a 250 Gig, that is okay with me; if you want to spend the money on a dual Xeon, that is okay. I get the same enjoyment and do the same job with 20 Gig or 250 Gig, with a single processor or dual processors. I have tried and I own all the things that you are referring to, but I love simplicity, and that is my own personal opinion. I am still driving a 1965 Chevy, and my cousin got killed in a 2003 Porsche. --- Jesse Keating jkeating@j2solutions.net wrote:
On Sunday 16 November 2003 07:58, marcos colome wrote:
There are a lot of technologies in the market that we do not need, they just complicate our life more, it is like SATA hard drives, I prefer SCSI hard drives.
Heh, I love comments like these.
Show me a 250gig SCSI disk this is truly hot-swappable. Oh wait, thats right, it doesn't exist. Pitty. Show me a 3u dual xeon server that is capable of 4+ TB of hot-swap SCSI storage (all in the 3u, no external stuff). Oh wait, it doesn't exist. Pitty.
On Sunday 16 November 2003 10:16, marcos colome wrote:
We are here in order to give our opinions not for intelectuals arguments. If you want to use a 250Gig that is okay with me, if you want to spend the money on a dual Xeon that is okay. I have the same enjoyment and I do the same job with 20Gig or 250 Gig, with single processor or Dual Processors., I have tried and I own all those things that you are referring to, but I love simplicity and that is my own personal opinion, I am still driving a 1965 Chevy and my cousin got killed on a 2003 Porsche
I just don't think you're seeing the big picture. How are you going to service a large company's /home file server with a 20gig drive? 20 gigs gets used up pretty quickly.
A really good idea to keep in mind is that "What works for me doesn't necessarily work for everybody else" and "What I need isn't necessarily what everybody else needs." Try to keep the big picture in mind.
There are things that the Intel cards can do that most of the others can't - channel bonding, VLAN stuff - and that's what gives them a higher price. GigE is also very important for a lot of situations. If our installation network weren't GigE, network installs would take FAR too long. Because it _is_ GigE, and the majority of the systems we install have GigE capabilities, a lot of time is saved, and time == $$.
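(To make the channel-bonding point concrete: on a Red Hat-style system it is all driven from config files. A rough sketch, assuming two ports and the stock bonding module - the interface names and addresses here are invented for illustration, not taken from any real setup:

    # /etc/modules.conf
    alias bond0 bonding
    options bond0 miimon=100 mode=0          # mode 0 = round-robin

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (and the same for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes

VLAN tagging is handled along similar lines with eth0.<vlan-id> sub-interfaces once the 8021q module is loaded.)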
-- Jesse Keating RHCE MCSE (http://geek.j2solutions.net) Fedora Legacy Team (http://www.fedora.us/wiki/FedoraLegacy) Mondo DevTeam (www.mondorescue.org) GPG Public Key (http://geek.j2solutions.net/jkeating.j2solutions.pub)
Was I helpful? Let others know: http://svcs.affero.net/rm.php?r=jkeating
I agree with Jesse. That big picture is what most of us need to think about.
Bob
Jesse Keating wrote:
On Sunday 16 November 2003 10:16, marcos colome wrote:
We are here in order to give our opinions not for intelectuals arguments. If you want to use a 250Gig that is okay with me, if you want to spend the money on a dual Xeon that is okay. I have the same enjoyment and I do the same job with 20Gig or 250 Gig, with single processor or Dual Processors., I have tried and I own all those things that you are referring to, but I love simplicity and that is my own personal opinion, I am still driving a 1965 Chevy and my cousin got killed on a 2003 Porsche
I just don't think you're seeing the big picture. How are you going to service a large companies /home file server with a 20gig drive? 20gigs gets used up pretty quickly.
A really good idea to keep in mind is that "What works for me doesn't necessarily work for everybody else." and "What I need isn't necessarily what everybody else needs.". Try to keep the big picture in mind.
There are things that the Intel cards can do that most the others cant, channel bonding, vlan stuff, thats what gives it a higher price. gigE is also very important for a lot of situations. If our installation network wasn't gigE, network installs would take FAR too long. The fact that it _is_ gigE, and the majority of the systems we install have gigE capabilities, a lot of time is saved, and time == $$.
On Sunday 16 November 2003 12:36 pm, Jesse Keating wrote:
On Sunday 16 November 2003 07:58, marcos colome wrote:
complicate our life more, it is like SATA hard drives, I prefer SCSI hard drives.
Heh, I love comments like these.
Show me a 250gig SCSI disk this is truly hot-swappable. Oh wait, thats right, it doesn't exist. Pitty. Show me a 3u dual xeon server that is capable of 4+ TB of hot-swap SCSI storage (all in the 3u, no external stuff). Oh wait, it doesn't exist. Pitty.
Show me a 15kRPM SATA drive with the capability to truly know that the data has hit the platter (essential for journaling filesystems and ACID-compliant databases). Pity. :-) One will probably be available soon enough, but by that time the 4u 4+TB of hot-swap SCSI will also be a reality.
Incidentally, ATAPI == SCSI over IDE. Crippled SCSI over IDE at that.
Show me a 15kRPM SATA drive with the capability to truly know that the data has hit the platter (essential for journaling filesystems and ACID compliant databases). Pity. :-) One will probably be available soone enough, but by that time the 4u 4+TB of hot swap SCSI will also be a reality.
SATA can support read cache only and multiple outstanding commands, so it's as happy at that as SCSI. SATA (more so SATA2) removes just about any relevant bus-level advantage SCSI has.
On Monday 17 November 2003 04:50 pm, Alan Cox wrote:
Show me a 15kRPM SATA drive with the capability to truly know that the data has hit the platter (essential for journaling filesystems and ACID compliant databases). Pity. :-) One will probably be available soone enough, but by that time the 4u 4+TB of hot swap SCSI will also be a reality.
SATA can support read cache only and multiple outstanding commands so its as happy at that as scsi.
It's not read cache only that is the issue. It's the idea that when the application tells the kernel 'don't return until the data has hit the disk', the kernel can tell the drive 'get back to me when the data has hit the disk', and the drive can then notify the kernel of that fact while still processing other reads and writes without messing up a nice elevator, but keeping the writes in the order given. I (being the PostgreSQL backend, for instance) must be able to be sure that what I have written to the disk is actually written to the disk in the order I specified.
What is needed is the full FUA extensions slated for ATA-7. In the case of PostgreSQL, it is urgent that the WAL gets written before the actual data page on the disk. See http://www.ussg.iu.edu/hypermail/linux/kernel/0304.1/0450.html for the actual text of what the Maxtor guy had to say.
And I _love_ your reply.
SCSI disks can do this. Fibre channel disks can do this. ATA disks can't yet do this, and older ATA disks won't do this. Tagged Command Queuing is part of the picture, but just part. I have to be sure my write ahead log is actually written ahead of the data page, or the advantages of WAL are lost; and, in fact, having a WAL that is out of sync with the data is worse than no WAL at all.
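(To illustrate the application side of that contract - this is only a sketch of the generic POSIX interface, not PostgreSQL's actual WAL code, and the function name and layout are made up:

    #include <fcntl.h>
    #include <unistd.h>

    /* Append a WAL record and do not report success until the kernel
     * says it is on stable storage.  Whether "stable" truly means the
     * platter depends on the drive honouring cache-flush/FUA semantics,
     * which is exactly the ordering guarantee being argued about here. */
    int wal_append(int wal_fd, const void *rec, size_t len)
    {
        if (write(wal_fd, rec, len) != (ssize_t) len)
            return -1;
        if (fsync(wal_fd) != 0)       /* or open the log with O_SYNC */
            return -1;
        return 0;                     /* only now is it safe to write the data page */
    }

If the drive acknowledges writes out of its volatile cache and later reorders them, the fsync returns, the data page goes out, and the WAL-before-data ordering is silently lost - which is the failure mode described above.)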
On Wed, Nov 19, 2003 at 03:51:55PM -0500, Lamar Owen wrote:
It's not read cache only that is the issue. It's the idea that when the application tells the kernel 'don't return until the data has hit the disk' that the kernel can tell the drive 'get back to me when the data has hit the disk' and the drive then can notify the kernel of that fact while still
That's read cache only + TCQ.
processing other reads and writes without messing up a nice elevator, but keeping the writes in the order given. I (being the PostgreSQL backend, for instance) must be able to be sure that what I have written to the disk is actually written to the disk in the order I specified.
TCQ is in ATA6 in some SATA drives.
What is needed is the full FUA extensions slated for ATA-7. In the case of PostgreSQL, it is urgent that the WAL gets written before the actual data page on the disk. See http://www.ussg.iu.edu/hypermail/linux/kernel/0304.1/0450.html for the actual text of what the Maxtor guy had to say.
Yeah. I spent quite a bit of time talking with drive vendors. The drives have metadata, and in essence nowadays a drive is a storage appliance with a file system and the whole works on it. In fact, if you put a modern disk drive in an old PC, it's quite possible the PC is the slower CPU.
To get back on topic
The 2.4 SATA layer over SCSI (e.g. the Promise drivers) does know how to use TCQ and controller-level queueing, so it can make good use of the drives that have it. The parallel IDE layer in 2.4 kernels can't, so that may be a 2.6 feature (FC2), although PATA TCQ is so mindbogglingly screwball it's a very good candidate for "never".
Alan
On Monday 17 November 2003 13:32, Lamar Owen wrote:
Show me a 15kRPM SATA drive with the capability to truly know that the data has hit the platter (essential for journaling filesystems and ACID compliant databases). Pity. :-) One will probably be available soone enough, but by that time the 4u 4+TB of hot swap SCSI will also be a reality.
I do believe the 10K RPM SATA drives from Western Digital are capable of that.
And what size disk do you expect to use to get 1TB per U? How long has ~140 gig been the upper limit on SCSI disks? How long was it at 73 before that? Either way, that's not the argument. The argument is that certain technologies, while not necessary for you, may be necessary for your neighbor, and shouldn't be discounted.
On Monday 17 November 2003 05:31 pm, Jesse Keating wrote:
And what size disk to you expect to use to get 1TB per U? How long has 140~ gig been the upper limit on SCSI disks?
It isn't. Seagate has a 182GB SCSI disk, although it's only 7200RPM (models ST1181677LCV and LWV).
The argument is that certain technologies while not necessary for you may be necessary for your neighboor, and shouldn't be discounted.
I figured you'd get the fact that I was continuing your line of thought on that...
SATA seems like a good inside the box disk option. Cabling is certainly a big plus, although I question the need for Yet Another Power Connector. For outside the box and really big arrays fibre channel is the clear winner in terms of performance and capacity, with arrays being able to be split amongst different towns if need be (given that they're within a few km of each other). But SCSI still has its advantages for many things.
On Wednesday 19 November 2003 10:33, Lamar Owen wrote:
It isn't. Seagate has a 182GB SCSI disk. Although it's only 7200RPM. (models ST1181677LCV and LWV)
Ah, I hadn't seen that model.
The argument is that certain technologies while not necessary for you may be necessary for your neighboor, and shouldn't be discounted.
I figured you'd get the fact that I was continuing your line of thought on that...
Yep, I almost missed that fact (;
SATA seems like a good inside the box disk option. Cabling is certainly a big plus, although I question the need for Yet Another Power Connector. For outside the box and really big arrays fibre channel is the clear winner in terms of performance and capacity, with arrays being able to be split amongst different towns if need be (given that they're within a few km of each other). But SCSI still has its advantages for many things.
I also question the need, but it is a rather moot point when dealing with hotswap backplanes. I tend to agree with your assessment up to a point. Large SATA servers (16 disks in a 3u chassis) are becoming one of our hottest items for sale. They seem to fit the bill for quite a few customers who need big storage but can't afford the big cost of fibre channel. Mostly these are companies who don't have an office in another town (;
I do also agree though, that SCSI has advantages for many things, but isn't the clear winner for everything. No technology that I've seen is perfect for everybody for everything.
On Wednesday November 19, 2003 Jesse Keating wrote:
I do also agree though, that SCSI has advantages for many things, but isn't the clear winner for everything. No technology that I've seen is perfect for everybody for everything.
Jesse,
<rant> AFAIK, SCSI is still the only hard drive interface that supports low-level formatting. I can't tell you the number of IDE/EIDE hard drives I've had to throw away over the years because of power failures during write ops that corrupted sector headers. Some later-model EIDE drives can map those "bad" spots away, but only SCSI allows you to repair that damage by laying down new sectors, headers, gaps, etc.
There are only two types of EIDE server drive users: those who have already had massive data loss, and those waiting for it to happen. Anyone betting their business on big cheap EIDE drives rather than SCSI better have (and religiously use) first class tape backup systems. They're gonna need 'em because the failure rates for EIDE drives used in 24/7 servers are between two to three orders of magnitude higher than SCSI. </rant>
We return you now to your regularly scheduled programming...
--Doc Savage Fairview Heights, IL
On Wednesday 19 November 2003 11:28, dsavage@peaknet.net wrote:
Anyone betting their business on big cheap EIDE drives rather than SCSI better have (and religiously use) first class tape backup systems. They're gonna need 'em because the failure rates for EIDE drives used in 24/7 servers are between two to three orders of magnitude higher than SCSI.
The businesses that purchase from us use hardware raid with their big cheap IDE drives, most often raid 5 or raid 10, most often with hot spares in the chassis as well. Given that SATA is hotswap, disk outages don't result in system outages, taking away yet another advantage that SCSI has.
Given the reliability of these "cheap big EIDE servers" vs the cost of a SCSI system providing the same capacity, it's no wonder people are making the switch. The price difference is somewhere in the 8 orders of magnitude.
On Wednesday November 19, 2003 Jesse Keating wrote:
On Wednesday 19 November 2003 11:28, dsavage@peaknet.net wrote:
Anyone betting their business on big cheap EIDE drives rather than SCSI better have (and religiously use) first class tape backup systems. They're gonna need 'em because the failure rates for EIDE drives used in 24/7 servers are between two to three orders of magnitude higher than SCSI.
Given the reliability of these "cheap big EIDE servers" vs the cost of a SCSI system providing the same capacity, it's no wonder people are making the switch. The price difference is somewhere in the 8 orders of magnitude.
Jesse,
Order of magnitude = integer power of 10
The MTBF for SCSI drives is typically 100-1000 (2-3 OoM) times greater than EIDE. Incidentally, this has very little to do with the I/O interface. SCSI drives are designed to run indefinitely at 100% duty cycles. They're engineered to much tighter tolerances and fabricated from far more durable materials than your typical mass market EIDE drive.
If these new SATA drives are truly intended for 24/7 server use, I would expect them to be as expensive as SCSI. On the other hand, if they're really just EIDE with a new plug, then you'd have to be crazy to build SATA RAID arrays without features like automatic failover sparing. When you surround a drive with that kind of technology, much of its cost advantage would evaporate.
--Doc Savage Fairview Heights, IL
On Wednesday 19 November 2003 12:39, dsavage@peaknet.net wrote:
Order of magnitude = integer power of 10
Right, price me out a server that can provide for ~3.5TB of storage in a 3u space, with hotswap/hotspare capability, hardware raid, and it has to be SCSI, add in redundant powersupply unit for the chassis as well. Does it come in under $13K? I bet it doesn't. My SATA system does.
The MTBF for SCSI drives is typically 100-1000 (2-3 OoM) times greater than EIDE. Incidentally, this has very little to do with the I/O interface. SCSI drives are designed to run indefinitely at 100% duty cycles. They're engineered to much tighter tolerances and fabricated from far more durable materials than your typical mass market EIDE drive.
This is not disputed.
If these new SATA drives are truly intended for 24/7 server use, I would expect them to be as expensive as SCSI.
Yes, there are some disks being built this way, notably the Western Digital Raptor SATA drives: 10Krpm, built like SCSI, but with a SATA controller on them. And yes, prices are close for the disk; the money saved is on the controller. But the Raptor drives are only at 36gig and soon 73gig capacity. Again you lose in the capacity game.
On the other hand, if they're really just EIDE with a new plug, then you'd have to be crazy to build SATA RAID arrays without features like automatic failover sparing. When you surround a drive with that kind of technology, much of its cost advantage would evaporate.
Not even. Hardware SATA RAID cards that support hot-spare failover are far cheaper than the SCSI alternatives. A 3ware 8506-12 (12-port SATA hardware RAID) is somewhere around $700~800. The 8-port version (we use two in our server to get you 16 drives) is around $500~. Far cheaper than the alternatives, considering that each disk gets its own port. Throwing money at SCSI just because it's SCSI isn't exactly a very good business practice when you can save quite a bit of $$ by going with SATA systems, given the proper redundancy built around them.
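(A software-RAID aside, since not everyone has a 3ware card: the hot-spare failover idea can also be sketched with Linux md/mdadm - a different tool than the hardware RAID being discussed, and the device names below are invented:

    # RAID 5 over four SATA disks plus one hot spare; md rebuilds onto
    # the spare automatically when a member fails.
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    mdadm --detail /dev/md0    # shows the spare and, after a failure, the rebuild

The trade-off is host CPU for parity, which a 3ware card would otherwise absorb.)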
On Wednesday 19 November 2003 02:28 pm, dsavage@peaknet.net wrote:
AFAIK, SCSI is still the only hard drive interface that supports low level formatting. I can't tell you the number of IDE/EIDE hard drives I've had to throw away over the years because power failures during write ops that corrupt sector headers. Some later model EIDE drives can map those "bad" spots away, but only SCSI allows you to repair that damage by laying down new sectors, headers, gaps, etc.
Toshiba, Fujitsu, Maxtor, Seagate, Western Digital, et al, provide low level format utilities that will do exactly what you want. I have a Maxtor/Quantum (Maxtor bought out Quantum, and low end Maxtors might be relabeled Quantums) 60GB drive that had that problem. Using the Maxtor low leveller fixed it right up. This drive had been in a drive sled that had power connector issues; it was spinning up and down every few seconds.
As to losing data through power corruptions, if your SCSI drive is in the middle of a write, and power fails in such a way that writing current is still on as the head goes over embedded servo data (as all currently available drives use), then it's going to be unusable just like the IDE drive, because a servowriter must be used to get that data back, regardless of drive interface type.
As to the use of 'is still the only,' ESDI and MFM drives historically did low level formats. IDE is just the MFM/RLL/ESDI controller wrapped up onto the drive electronics board, and many older IDE drives support the same low level command. Zone Bit Recording drives, OTOH, have to have a special program to do the job, or, in the case of Conner drives, a special hardware fixture was used, and Conner drives of that vintage truly cannot be software low levelled. But, then again, just because you issue the format unit command to a SCSI drive does not mean that drive has to honor it.
AFAIK, SCSI is still the only hard drive interface that supports low level formatting. I can't tell you the number of IDE/EIDE hard drives I've had to throw away over the years because power failures during write ops that corrupt sector headers. Some later model EIDE drives can map those "bad" spots away, but only SCSI allows you to repair that damage by laying down new sectors, headers, gaps, etc.
Most SCSI drives can't do that either. The tools needed to do it are just too complex to embed in the drive.
There are only two types of EIDE server drive users: those who have
s/EIDE server//
Disks fail. Reliability/MTBF values vary with the price of the disk, but not because of the interface. A 15000 rpm SATA2 drive with SCSI-grade reliability is going to cost SCSI-like prices, but the controller is going to be a lot cheaper, as are the cables.
On Wed, 19 Nov 2003, Alan Cox wrote:
AFAIK, SCSI is still the only hard drive interface that supports low level ... new sectors, headers, gaps, etc.
Most SCSI drives can't do that either. The tools needed to do it are just too complex to embed in the drive.
I don't think you can do just a sector at a time or anything, but SCSI contains a command to tell the drive to "low-level format itself" that supposedly does defect checking, etc. (and takes a very long time), which I used to use. I never had a drive not support it; I just later found out that it wasn't normally needed.
-- noah silva
On Wed, 19 Nov 2003 dsavage@peaknet.net wrote:
AFAIK, SCSI is still the only hard drive interface that supports low level formatting.
You can often get utils from the drive manufacturers to do same thing with IDE drives. There's not a standardised interface for it though, AIUI.
There are only two types of EIDE server drive users: those who have already had massive data loss, and those waiting for it to happen. Anyone betting their business on big cheap EIDE drives rather than SCSI better have (and religiously use) first class tape backup systems. They're gonna need 'em because the failure rates for EIDE drives used in 24/7 servers are between two to three orders of magnitude higher than SCSI. </rant>
This is waffle. For many drives, the only difference between IDE and SCSI is the electronics.
And anyway, for the price of your SCSI system, I can have an IDE system with more storage, mirrored to a second system, with the tape drive paid for out of the cost difference and with change to spare.
We return you now to your regularly scheduled programming...
--Doc Savage Fairview Heights, IL
regards,
On Fri, 2003-11-21 at 06:17, Paul Jakma wrote:
This is waffle. For many drives, the only difference between IDE and SCSI is the electronics.
And the quality control process.
And any way, for the price of your SCSI system, I can have an IDE system with more storage, mirrored to a second system, with the tape drive paid for out of the cost difference and with change to spare.
Not quite. You haven't seen the real differences. There is a reason why people pay the price premium for SCSI.
We return you now to your regularly scheduled programming...
--Doc Savage Fairview Heights, IL
regards,
On Sun, 2003-11-16 at 13:40, Iain Rae wrote:
see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=98767
basically the cards lock up when kudzu is run during the boot process and the machine has to be power-cycled to get them back into a working condition. It seems to be limited to the 905, the B's and C's seem to work fine, also the 905's themselves seem to work fine with the 2.6 kernel.
I've kind of worked round the problem, my work has just thrown out a bunch of PC's with 905b's in them and I retrieved the cards from the dustbin,
Looks like the problem will be around for a while then, if no one cares to fix it. I don't know how many cards it affects though; it could be that the number of 905a cards is limited.
The Realtek cards are ok, but a bit broken by design (the driver works around that, but don't be surprised when your logs show "hanging transceiver reset" or other vague messages.
AFAICR they're not very high performance cards either.
They generate more processor overhead, but the network performance is about equal to the 3com cards as far as I've seen. For a server with multiple interfaces I'd choose another brand.
Klaasjan
Right now I have a bunch of computers. I started out with 2 machines and 2 Netgear FA310TX cards.
One of the Netgear cards died -- I replaced it with the FA311
Then I went with the CompUSA brand because it is cheap, and also Hawking Technologies, which is cheaper than the CompUSA-branded cards. All have worked well.
When I have to put a NIC in someone else's computer, I get the cheapest available card.
Lately I won't take a computer that doesn't have LAN capability built into the motherboard. I've built 2 computers and both have onboard LAN. That's the way I want to keep it in the future.
I guess I've been lucky!
Bob
DanG wrote:
Hi folks,
Can I get some people's recommendations on PCI/PNP network
cards that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
Thanks, Dan
Can I get some people's recommendations on PCI/PNP network cards
that run very stable under Linux 2.4. I hear the Intel 10/100 eepro based cards are good but are a little more costly. What about D-Link, Realtek, Linksys etc for $10-15 cards. I am only looking for 10/100 Mbit cards. Do not even begin to recommend 3com 905 based cards which is the cause of my headaches currently with Fedora :-).
I've taken to using Realtek stuff when performance is not critical. The 905 problem is something Fedora-related, possibly ACPI-triggered. If you chkconfig kudzu off, your 905 will work fine.
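(Spelled out, that workaround on a stock Fedora init setup is just:

    chkconfig kudzu off       # stop kudzu probing hardware on every boot
    chkconfig --list kudzu    # confirm it is now off in the runlevels you boot into

and then run kudzu by hand only when the hardware actually changes.)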