Looking for an algorithm to determine the speed of a network link

I'm looking for a good way to interpret the standard (well, mostly standard) Ethernet PHY registers to determine the speed at which an Ethernet link is actually running (e.g. 10/100/1000 Mbit/s and full/half duplex).

I suppose I could dig it out of the source of something like Linux, and I may well end up doing just that, but if anyone has a good reference I'd be interested.

What I'm interested in is whether the link actually came up and how it came up, rather than the huge sea of possibilities each end advertises at the start.



2 answers


Thanks for the answer. This was intended as a language- and platform-agnostic question, since almost all MII/GMII Ethernet PHYs have the same base registers. I happen to be on an embedded platform.

But I found a reasonable sequence that was good enough for my limited application by looking through various bits of the Linux driver source. It's basically:



1. Check the link bit in the Basic Status register (0x1).
2. If the link is up, check the auto-negotiation-complete bit in the Basic Status register (0x1).
3. If negotiation is complete, check for 1G in the 1000BASE-T Status register (0xA).
4. If you don't have 1G, then you have 100M. (This is not a general rule, but it applies in this application.)

A rough sketch of this sequence in code follows below.
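For illustration only, here is a minimal C sketch of that sequence. It assumes a hypothetical mdio_read() routine standing in for whatever MDIO/SMI access your MAC or platform provides; the register and bit numbers are the standard MII ones (Basic Status register 0x1, 1000BASE-T Status register 0xA).

    #include <stdint.h>

    /* Hypothetical platform-specific MDIO/SMI accessor: returns the 16-bit
     * value of PHY register 'reg' at PHY address 'phy_addr'. */
    extern uint16_t mdio_read(uint8_t phy_addr, uint8_t reg);

    #define MII_BMSR           0x01        /* Basic Status register */
    #define BMSR_LINK_UP       (1u << 2)   /* link status */
    #define BMSR_ANEG_COMPLETE (1u << 5)   /* auto-negotiation complete */

    #define MII_STAT1000       0x0A        /* 1000BASE-T Status register */
    #define STAT1000_LP_1000HD (1u << 10)  /* partner can do 1000BASE-T half duplex */
    #define STAT1000_LP_1000FD (1u << 11)  /* partner can do 1000BASE-T full duplex */

    /* Returns the resolved speed in Mbit/s, or 0 if the link is down or
     * auto-negotiation has not completed yet. Falling back to 100 when 1G
     * is not indicated matches the limited application above; it is not a
     * general rule. */
    static int phy_link_speed(uint8_t phy_addr)
    {
        uint16_t bmsr = mdio_read(phy_addr, MII_BMSR);

        if (!(bmsr & BMSR_LINK_UP))
            return 0;                      /* step 1: no link */
        if (!(bmsr & BMSR_ANEG_COMPLETE))
            return 0;                      /* step 2: still negotiating */

        uint16_t stat1000 = mdio_read(phy_addr, MII_STAT1000);
        if (stat1000 & (STAT1000_LP_1000HD | STAT1000_LP_1000FD))
            return 1000;                   /* step 3: 1G resolved */

        return 100;                        /* step 4: application-specific assumption */
    }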

Perhaps this was really a hardware question rather than a software one...



It may help to look at how the Linux kernel does it: while each driver can do its own thing, there is a generic implementation that is meant to be used when the chip follows the standard closely enough, the Media Independent Interface (MII) device support code.
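As an aside, if you are on Linux and outside the kernel, many drivers expose those same MII registers to userspace through the SIOCGMIIPHY/SIOCGMIIREG ioctls (this is what mii-tool uses). A rough sketch, assuming the interface is named eth0 and its driver implements those ioctls:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/mii.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ifreq ifr;
        /* The MII data is carried in the ifreq union, mii-tool style. */
        struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

        /* Ask the driver which PHY address it manages. */
        if (ioctl(fd, SIOCGMIIPHY, &ifr) < 0) {
            perror("SIOCGMIIPHY");
            return 1;
        }

        /* Read the Basic Status register (0x1) from that PHY. */
        mii->reg_num = MII_BMSR;
        if (ioctl(fd, SIOCGMIIREG, &ifr) < 0) {
            perror("SIOCGMIIREG");
            return 1;
        }

        printf("BMSR = 0x%04x, link %s\n", mii->val_out,
               (mii->val_out & BMSR_LSTATUS) ? "up" : "down");

        close(fd);
        return 0;
    }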









