d7d1cd wrote: There is no criterion for ending the sequence of answers, except the cyclic redundancy check (CRC16) at the end of each response.
I seriously doubt that. There HAS to be a way to know what each message's length is. The actual MODBUS TCP/IP protocol does exactly that. It uses a structure where every message begins with a fixed 7-byte header:
2-byte Transaction ID
2-byte Protocol ID
2-byte Length Field
1-byte Unit ID
The Length Field is the size of the MODBUS data that follows the header (the MODBUS Address and Checksum fields are not transmitted over TCP). So, a receiver would simply read the first 6 bytes, then read the number of bytes specified in the Length Field to finish the message (the length includes the Unit ID).
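The actual reading code is Indy/Delphi, but the framing logic described above is language-agnostic, so here is a minimal sketch of it in Python (the function names are mine, not from any library):

```python
import struct

def read_exact(read, n):
    """Call read(k) repeatedly until exactly n bytes have been received."""
    buf = b""
    while len(buf) < n:
        chunk = read(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        buf += chunk
    return buf

def read_modbus_tcp_message(read):
    """Read one complete MODBUS TCP message from a read(n) callable
    (e.g. a socket's recv). Returns (transaction_id, protocol_id, unit_id, pdu)."""
    # First 6 bytes: Transaction ID, Protocol ID, and the Length field,
    # all big-endian 16-bit values.
    tid, pid, length = struct.unpack(">HHH", read_exact(read, 6))
    # The Length field counts everything that follows, Unit ID included,
    # so read that many bytes to finish the message.
    body = read_exact(read, length)
    unit_id, pdu = body[0], body[1:]
    return tid, pid, unit_id, pdu
```

The key point is that the receiver never guesses: it reads a fixed-size prefix, and that prefix tells it exactly how many more bytes belong to the current message.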
I strongly suspect you need to do something similar in your situation, but I can't be sure, since you have not provided ANY details about the actual protocol you are dealing with. Saying it is "similar to MODBUS" is not the same as "actually being MODBUS". So, what is different in your situation? Do you have ANY documentation about your device's actual transmission protocol?
d7d1cd wrote: The exchange protocol ensures that each request will receive a response of a specific length. However, there may be the following situation. Before sending requests to the device, you must log in to it, that is, send a special access request. If this is not done, the device will answer any other request with the same fixed sequence containing an access-error code. So of course, before sending requests you need to open the communication channel, and then the responses from the device will be as expected.
That does not change anything I have said so far. EVERY message back and forth has a layout to it. You need to write your code to follow that layout so you know where each message begins and ends.
d7d1cd wrote:Of course, knowing the request and observing the order during the exchange (first authorization, then everything else), the length of the answers will be known.
It is not common for a TCP-based protocol to dictate that every request has a specific fixed-length response, like you describe. Most TCP-based protocols are more generalized than that, using a single unified format for every message. So again I ask you: what are the specific details of the ACTUAL PROTOCOL that your device is really using? I can't help you fix your Indy code without that information.
d7d1cd wrote: Although there is one situation where the length of the response will not be known. This is when you create an application that lets the user form queries to the device by simply typing the command in hexadecimal. Here you could say that we need to analyze the request and determine how long the device's reply should be, but what if the application is universal? What if it is designed simply to send requests and read responses?
Even if you allow the user to format their own queries, your code is still responsible for reading the responses before presenting them to the user. So your code is responsible for reading the responses correctly.
d7d1cd wrote:Allow another question about the operation of the ReadStream function.
...
A request is sent to the Write function, to which the device gives 8 bytes of response. If expect = 8, then everything works fine. If expect = -1, then the ReadStream function, as expected, waits 5 seconds, but only 4 bytes are written to bytesStream, not 8. Why is this happening?
The only possibility is that only 4 bytes were actually available after the timeout elapsed.
As I said earlier, using AByteCount=-1 returns whatever bytes are available at that very moment. If there are no bytes available yet, then it waits for the ReadTimeout to elapse and then returns whatever bytes are available.
Using AByteCount>0 instead, it actually waits for all of the requested bytes to arrive, however long they may take (subject to the ReadTimeout, of course).
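Indy is Delphi code, but the distinction between the two modes can be sketched in Python over a raw socket (these helper names are my own, an approximation of the behavior described above, not Indy's API):

```python
import socket

def read_available(sock, timeout):
    """Roughly AByteCount=-1 (my approximation): return whatever bytes
    happen to be available; if nothing arrives before the timeout
    elapses, return an empty buffer."""
    sock.settimeout(timeout)
    try:
        return sock.recv(4096)
    except socket.timeout:
        return b""

def read_exactly(sock, count, timeout):
    """Roughly AByteCount>0: keep reading until exactly count bytes
    have arrived, failing if the timeout expires first."""
    sock.settimeout(timeout)
    buf = b""
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))  # raises socket.timeout on expiry
        if not chunk:
            raise ConnectionError("connection closed")
        buf += chunk
    return buf
```

Note that `read_available` returns as soon as the *first* chunk of data arrives, which is why you can get 4 bytes instead of 8: TCP is a stream, and your 8-byte response can legally arrive as two 4-byte chunks. `read_exactly` loops until the full count is satisfied.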
I strongly suspect that using AByteCount=-1 is the WRONG solution to your situation.
d7d1cd wrote:why do I need to pass a parameter to the TBytesStream constructor (in my tmp code)?
Because that is just the way TBytesStream works. It is a wrapper for a byte array, so you have to give it an initial byte array to work with, even if it is just an empty one. TBytesStream does not have a parameter-less constructor. If you want that, use TMemoryStream instead.
d7d1cd wrote:Why can't you create an empty stream?
You can, if you give it an empty byte array. That is just the way TBytesStream was coded to operate.