- Simplify names (timestamp_source --> timestamp, tx/tx_timestamp CSR/Layouts --> timestamp, etc...)
- Simplify the logic a bit.
- Use consistent names for FIFO between Writer and Reader (cmd_fifo and stat_fifo).
- Avoid stat_fifo on Reader when Timestamp is disabled.
- Use EventSourceLevel() on Reader only when Timestamp is enabled (see the sketch below).
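As a rough sketch of the resulting structure (Migen/LiteX style; only cmd_fifo, stat_fifo and the timestamp argument come from the list above, everything else is illustrative):

    from migen import *
    from litex.soc.interconnect import stream
    from litex.soc.interconnect.csr_eventmanager import (
        EventManager, EventSourcePulse, EventSourceLevel)

    class ReaderSketch(Module):
        def __init__(self, timestamp=None, depth=4):
            slot_bits = 2  # illustrative; enough for up to 4 TX slots
            # Command FIFO (CPU -> Reader) is always present.
            self.submodules.cmd_fifo = stream.SyncFIFO([("slot", slot_bits)], depth)
            if timestamp is not None:
                # Status FIFO (Reader -> CPU) only exists when timestamping is enabled.
                self.submodules.stat_fifo = stream.SyncFIFO(
                    [("slot", slot_bits), ("timestamp", len(timestamp))], depth)
            # Level-based event when timestamping is enabled, pulse otherwise.
            self.submodules.ev = EventManager()
            self.ev.done = EventSourceLevel() if timestamp is not None else EventSourcePulse()
            self.ev.finalize()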
This implements optional packet timestamping based on a hardware
timestamp source for outgoing Ethernet packets, as required by
applications such as IEEE 1588 (Precision Time Protocol).
This changes the liteeth SRAM reader to use a feedback channel
that returns the slot from which a packet has been sent.
The event source is changed from a pulse to a level-based trigger,
so that it remains asserted if a single packet has been
acknowledged but additional packets have been sent.
This infrastructure makes it possible to convey additional
information about transmitted packets, such as timestamps or errors.
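To make the level-based behaviour concrete, here is a minimal, hypothetical Migen sketch (not the actual LiteEth code): a counter of packets that have been sent but not yet acknowledged drives the event level, so the interrupt stays asserted until software has consumed every pending slot.

    from migen import *

    class PendingLevel(Module):
        def __init__(self):
            self.packet_sent  = Signal()  # pulse: a packet left a slot
            self.packet_acked = Signal()  # pulse: software acknowledged one packet
            self.level        = Signal()  # feeds an EventSourceLevel trigger

            pending = Signal(4)  # outstanding (sent but unacknowledged) packets
            self.sync += [
                If(self.packet_sent & ~self.packet_acked,
                    pending.eq(pending + 1)
                ).Elif(self.packet_acked & ~self.packet_sent,
                    pending.eq(pending - 1)
                )
            ]
            self.comb += self.level.eq(pending != 0)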
This implements optional packet timestamping based on a hardware
timestamp source for incoming Ethernet packets, as required by
applications such as IEEE 1588 (Precision Time Protocol).
When a timestamp source is given as an argument, an additional CSR is
generated containing the packet timestamp.
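A minimal sketch of what this could look like, assuming a LiteX CSR-based module and a timestamp signal passed in; the names below are illustrative rather than the exact LiteEth ones:

    from migen import *
    from litex.soc.interconnect.csr import CSRStatus, AutoCSR

    class RXTimestamping(Module, AutoCSR):
        def __init__(self, timestamp=None):
            self.sof = Signal()  # illustrative: pulses at start of an incoming packet
            if timestamp is not None:
                # The CSR is only generated when a timestamp source is provided.
                self.timestamp = CSRStatus(len(timestamp))
                self.sync += If(self.sof, self.timestamp.status.eq(timestamp))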
Background:
When there is a lot of broadcast traffic on the network, the receive
buffers may overflow easily, especially with only 2 of them.
To prevent that, you can enlarge nrxslots, but because
nrxslots + ntxslots must be a power of two, you must also
increase ntxslots. However, there is no need for more than 2 TX
buffers (they work as a ping-pong buffer); the CPU will not use
more than two of them.
So being able to set, for example, nrxslots=8 and ntxslots=2
is quite reasonable.
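For example, inside a LiteX SoC constructor (assuming self.ethphy already exists; keyword names follow the usual LiteEthMAC constructor, but double-check them against your LiteEth version):

    from liteeth.mac import LiteEthMAC

    # More RX slots absorb broadcast bursts; 2 TX slots are enough for the
    # CPU's ping-pong usage.
    self.submodules.ethmac = LiteEthMAC(
        phy        = self.ethphy,
        dw         = 32,
        interface  = "wishbone",
        endianness = "little",
        nrxslots   = 8,
        ntxslots   = 2)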
In some hardware, ref_clk can be an input to both the MAC and the PHY. In this
case, setting refclk_cd to None will make the CRG use ref_clk as the RMII
input reference clock:
Pads:
    # RMII Ethernet
    ("eth_clocks", 0,
        Subsignal("ref_clk", Pins("D17")),
        IOStandard("LVCMOS33"),
    ),
    ("eth", 0,
        Subsignal("rst_n",   Pins("F16")),
        Subsignal("rx_data", Pins("A20 B18")),
        Subsignal("crs_dv",  Pins("C20")),
        Subsignal("tx_en",   Pins("A19")),
        Subsignal("tx_data", Pins("C18 C19")),
        Subsignal("mdc",     Pins("F14")),
        Subsignal("mdio",    Pins("F13")),
        Subsignal("rx_er",   Pins("B20")),
        Subsignal("int_n",   Pins("D21")),
        IOStandard("LVCMOS33")
    ),
PHY:
    self.submodules.ethphy = LiteEthPHYRMII(
        clock_pads = self.platform.request("eth_clocks"),
        pads       = self.platform.request("eth"),
        refclk_cd  = None)
Thanks @mwick83 for reporting the use case and for the initial implementation.
This ensures the full Etherbone packet is available before starting the transmission
and fixes transmission issues with large bursts or slow sys_clk_freq.
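Conceptually, the change buffers a complete packet before the transmission starts. A hedged sketch of that idea in LiteX stream terms (names, layout and depth are illustrative, not the actual Etherbone code):

    from migen import *
    from litex.soc.interconnect import stream

    class FullPacketBuffer(Module):
        """Only start forwarding once at least one complete packet is stored."""
        def __init__(self, dw=32, depth=512):
            layout = [("data", dw)]  # illustrative payload layout
            self.sink   = sink   = stream.Endpoint(layout)
            self.source = source = stream.Endpoint(layout)

            self.submodules.fifo = fifo = stream.SyncFIFO(layout, depth, buffered=True)

            # Count complete packets currently held in the FIFO.
            packets = Signal(8)
            inc = sink.valid   & sink.ready   & sink.last
            dec = source.valid & source.ready & source.last
            self.sync += [
                If(inc & ~dec, packets.eq(packets + 1)
                ).Elif(dec & ~inc, packets.eq(packets - 1))
            ]

            start = Signal()
            self.comb += [
                start.eq(packets != 0),
                sink.connect(fifo.sink),
                # Gate both valid and ready so nothing leaves before a full
                # packet (including its "last" beat) has been received.
                source.valid.eq(fifo.source.valid & start),
                source.last.eq(fifo.source.last),
                source.data.eq(fifo.source.data),
                fifo.source.ready.eq(source.ready & start),
            ]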
For correct IO delays in nextpnr, the DEL_VALUE parameter needs to
be an integer instead of a "DELAY{}" string.
The use of a "DELAY{}" string follows the Lattice primitive
manual, but appears to be incorrect, at least based on the current
nextpnr.
Because we are not making use of dynamic IO delays here, we can
also use the simpler DELAYG block instead of DELAYF.
Fixes #50
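For reference, a hedged sketch of the resulting instantiation style (the mode, tap count and signal names are placeholders, not the exact PHY code):

    from migen import *

    class DelayedInput(Module):
        def __init__(self, pad, taps=25):
            self.o = Signal()
            # DEL_VALUE is now a plain integer tap count (e.g. 25) instead of a
            # "DELAY25"-style string, and the static DELAYG primitive replaces
            # DELAYF since the delay is never adjusted at runtime.
            self.specials += Instance("DELAYG",
                p_DEL_MODE  = "USER_DEFINED",
                p_DEL_VALUE = taps,
                i_A         = pad,
                o_Z         = self.o,
            )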
On ECP5 targets the core struggles to meet timing closure. This
change adds buffers to the CRC module on tx/rx paths.
This results in a 20-30 MHz gain in the maximum clock rate.
This fixes #47
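Conceptually, the buffers are register stages placed around the CRC datapath so its long combinational chains are not merged with the surrounding logic. A hedged sketch using LiteX stream primitives (layout and names are illustrative, not the exact LiteEth change):

    from migen import *
    from litex.soc.interconnect import stream

    phy_layout = [("data", 8), ("last_be", 1), ("error", 1)]  # illustrative

    class BufferedCRC(Module):
        """Register stages before/after a CRC stage to relax ECP5 timing (sketch)."""
        def __init__(self, crc_core):
            # crc_core is assumed to expose sink/source endpoints with phy_layout.
            self.sink   = stream.Endpoint(phy_layout)
            self.source = stream.Endpoint(phy_layout)
            self.submodules.in_buf  = in_buf  = stream.Buffer(phy_layout)
            self.submodules.crc     = crc     = crc_core
            self.submodules.out_buf = out_buf = stream.Buffer(phy_layout)
            self.submodules.pipeline = stream.Pipeline(
                self.sink, in_buf, crc, out_buf, self.source)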