feat: introduce serial link recovery #123
Conversation
alexjoedt
left a comment
Hey, sorry for the late review! Really like the direction here.
I left a few comments, mostly around defaults and some small robustness things. Hope they're helpful, let me know what you think!
// Silent period after successful connection
ConnectDelay time.Duration
// Recovery timeout if the connection is lost
LinkRecoveryTimeout time.Duration
I noticed that neither NewRTUClientHandler nor NewASCIIClientHandler sets a default for LinkRecoveryTimeout. With a zero value, linkRecoveryDeadline is immediately in the past, so the recovery loop will never actually retry; it always hits "link recovery timeout reached" on the first error.
The TCP transporter handles this by checking if mb.LinkRecoveryTimeout == 0 before giving up. Should we either set a sensible default here or add the same zero-check in the serial Send methods?
Better to keep them the same for consistency, IMO.
But if we really went with a default, what would be sensible here? It would depend on parameters like baud rate, voltage stability, etc. Also, if there are multiple requests in a loop on the line (with certain devices turned off), it would delay the entire request-response cycle.
connDeadline := time.Now().Add(mb.Timeout)
linkRecoveryDeadline := time.Now().Add(mb.LinkRecoveryTimeout)

for {
Just thinking out loud: if close() + connect() keeps succeeding but the remote device stays unresponsive, this could spin pretty fast until linkRecoveryDeadline. Would it be worth adding a small backoff delay between reconnect attempts, or capping the number of retries?
True. Speaking of which, wouldn't this apply to TCP too?
Perhaps it can be tackled as a separate PR.
rtu_transport_test.go
Outdated
@@ -0,0 +1,117 @@
//go:build darwin || linux || freebsd || openbsd || netbsd
// +build darwin linux freebsd openbsd netbsd
The build tags include darwin but openPTY() uses syscall.TIOCSPTLCK and syscall.TIOCGPTN, which are Linux-only. This won't compile on macOS. Should we either restrict the tag to linux or use something like github.com/creack/pty for cross-platform PTY support?
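The Linux-only restriction would be a one-line change to the constraint (a sketch; the package name is assumed, and `github.com/creack/pty` remains the cross-platform alternative if macOS support is wanted later):

```go
//go:build linux
// +build linux

package modbus

// openPTY relies on Linux-only ioctls (syscall.TIOCSPTLCK and
// syscall.TIOCGPTN), so this test file is gated to Linux until a
// cross-platform PTY helper such as github.com/creack/pty is adopted.
```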
For now I am dropping macOS support. I don't have any Mac device to test this with anyway. If required, perhaps someone else can take a stab at it later on.
Also, this has the unexpected side effect of writing to currently open terminals (lines 12 to 16 in 79617db). IMO, instead of pts terminals, it would be better to use some tmp file that's automatically trapped and deleted once the tests are run.
Attempt protection against shoddy wires in RTU connection.
Fixes: #122