Forums › SIMPOL Programming › txfactor
- This topic has 4 replies, 3 voices, and was last updated 10 years, 8 months ago by Jim Locker.
August 12, 2013 at 9:43 pm #291 JD Kromkowski Participant

What exactly is "txfactor" in the simpolserver cfg file? Jean said someplace that 9 was painfully slow and she is trying 3. The sample has:

0 for port 4000
6 for port 4001

I couldn't tell the difference, but I was only testing 3 machines. And why those ports?

So what does it look like if I want to run an appframework application? For example, what would this

src =@ me.opendatasource("sbme1", SDataDir + "/employee.sbm", appw, error=e)
if src =@= .nul
  wxmessagedialog(appw.w, "Error opening the employee.sbm file", sAPPMSGTITLE, "ok", "error")
else
  t =@ appw.opendatatable(src, "EMPLOYEE", error=e)

become if I wanted to run this as a multi-user application? I know "sbme1" gets changed to "ppcstype1", but I'm confused about the string to use for the source. For example, I know my IP address is "10.1.10.XX:4000", but then what?

"10.1.10.XX:4000" + SDataDir + "/employee.sbm", appw, error=e)

JDK
August 14, 2013 at 3:06 pm #2290 Michael Keymaster

On 12/08/2013 22:43, kromkowski wrote:
> What exactly is "txfactor" in the simpolserver cfg file.
>
> Jean said someplace that 9 was painfully slow and she is trying 3.
>
> The sample has
>
> 0 for port 4000
> 6 for port 4001
>
> I couldn't tell the difference but I was just testing 3 machines.

The txfactor exists to artificially slow down the server. The problem is that UDP/IP is a protocol that just spits out everything and hopes that it arrives. We have found that if you are sending large amounts of data (Jean has over 800 fields in one table), the buffer is not being emptied fast enough, and when it is full the OS discards the remainder. By setting a txfactor higher than 0, you can artificially slow down the server so the data arrives slowly enough for the client to consume it. The rise is an exponential curve, so 1 is not much slower, 2 is pretty slow, 6 is a reasonable value for cross-Internet connections on tables with 80 or so fields, etc. On fairly normal-sized tables (100 fields or less), 0 is probably fine on a typical LAN.

> And why those ports?

We looked around for ports that were not heavily contested, and those were available. You can pick any port you like, though values <= 1024 (system-reserved ports) are not a good idea, and the maximum value is 65535. Superbase doesn't support values above 8000 if I remember rightly.

As for examples, the latest AddressBook sample shows both.
Ciao, Neil
> So what does it look like if I want to run an appframework application?
>
> For example, what would this
>
> src =@ me.opendatasource("sbme1", SDataDir + "/employee.sbm", appw,
> error=e)
> if src =@= .nul
> wxmessagedialog(appw.w, "Error opening the employee.sbm file",
> sAPPMSGTITLE, "ok", "error")
> else
> t =@ appw.opendatatable(src, "EMPLOYEE", error=e)
>
> become if I wanted to run this as a multi-user application?
>
> I know "sbme1" gets changed to "ppcstype1",
>
> but I'm confused about the string to use for the source.
>
> For example, I know my IP address is
>
> "10.1.10.XX:4000", but then what?
>
> "10.1.10.XX:4000" + SDataDir + "/employee.sbm", appw, error=e)
>
>
> JDK
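Neil's description of the txfactor pacing above can be sketched in a few lines. This is an illustrative model only, not the real simpolserver implementation (which is native code): it assumes a txfactor of n inserts an exponentially growing pause between UDP sends, with a 1 ms base chosen purely for the sketch.

```python
import socket
import time

def send_paced(sock, addr, packets, txfactor):
    """Send datagrams with an artificial pause between them.

    Illustrative model of the txfactor idea: the delay grows
    exponentially with the factor (0 means no pacing at all).
    The 1 ms base is an assumption for this sketch, not the
    server's actual timing.
    """
    delay = 0.0 if txfactor == 0 else 0.001 * (2 ** txfactor)
    for pkt in packets:
        sock.sendto(pkt, addr)
        time.sleep(delay)
    return delay  # seconds of pause inserted after each packet
```

The point of the exponential curve is that small factors barely slow things down, while each step up roughly doubles the per-packet pause, which is why 9 is painfully slow but 3 may be tolerable.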
August 15, 2013 at 11:20 pm #2291 Jim Locker Member

This doesn't sound right to me. Any modern client ought to be quite adequately fast enough to keep ahead of any stream coming over the internet.
..
Heck, I have a 32 bit Atom processor handling several thousand incoming
packets a second in an environment where we always wind up being
processor-bound based on the fact that we are compressing/decompressing
and routing all the traffic.
..
Even if there is a problem, why not write a dedicated thread that does
nothing but take stuff out of the incoming queue and stuff it into an
internal (and much larger) queue?

August 16, 2013 at 4:56 pm #1444 Michael Keymaster

On 16/08/2013 00:20, Jim wrote:
> This doesn't sound right to me. Any modern client ought to be quite
> adequately fast enough to keep ahead of any stream coming over the
> internet.
> ..

When we wrote the protocol and started dealing with the problems, it was 1997-8, and we still had customers using Windows 95. We selected UDP/IP over TCP/IP because we didn't want to tell people they had to use NT to run a PPCS server, and running a test server on Windows 95 crashed after a reasonably low number of connections. So we picked UDP/IP. Then, after we had implemented it, we had customers trying it who had huge numbers of fields in their tables, and they were timing out trying to load the table definitions and occasionally the records themselves. That was when we stumbled over the problem with the Windows internal buffer. The data is being read in compiled C, but for some reason the speed with which it is coming out of the socket is slower than the speed at which it is being received. This sort of thing happens much less on modern machines, but it can still happen.

> Heck, I have a 32 bit Atom processor handling several thousand incoming
> packets a second in an environment where we always wind up being
> processor-bound based on the fact that we are compressing/decompressing
> and routing all the traffic.
> ..
> Even if there is a problem, why not write a dedicated thread that does
> nothing but take stuff out of the incoming queue and stuff it into an
> internal (and much larger) queue?

As I said above, we are already pulling the data out as fast as we can, but it isn't coming out as fast as it appears to be going in. It may have to do with the overhead of processing the data; of that I am not sure (I didn't write the protocol). The solution of slowing down the server worked fine for us even way back then. It is also directly compatible with the Superbase implementation. If we do a newer, updated version, we will probably select TCP/IP sockets nowadays.

Ciao, Neil
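Jim's suggestion of a dedicated drain thread can be sketched as follows: one thread does nothing but pull datagrams off the socket as fast as possible into a much larger internal queue, so the small OS receive buffer is never the bottleneck. Python is used here purely for illustration; the actual PPCS code is compiled C, and the sentinel-based shutdown is an artifact of the sketch.

```python
import queue
import socket
import threading

def start_drain_thread(sock, backlog=65536):
    """Drain datagrams off a UDP socket into a large internal queue.

    The OS receive buffer is small and fixed; this thread's only job
    is to empty it quickly so the kernel never has to drop packets.
    Slower per-packet processing happens in whoever reads the queue.
    """
    q = queue.Queue(maxsize=backlog)

    def drain():
        while True:
            data, _ = sock.recvfrom(65535)
            if data == b"__stop__":   # shutdown sentinel, sketch only
                break
            q.put(data)               # blocks only if backlog is full

    t = threading.Thread(target=drain, daemon=True)
    t.start()
    return q, t
```

This decouples the rate at which packets arrive from the rate at which they are processed, which is the alternative Jim is proposing to slowing the sender down with txfactor.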
August 16, 2013 at 5:38 pm #2292 Jim Locker Member

Well, I can't offer an opinion on Windows 95 sockets; all I know about
them is that they are limited in that raw sockets are not available.
Also, in those days, network interfaces were commonly found on ISA busses
which had their own set of limitations and which put an additional burden
on the processor.
..
However in any modern environment the hardware handles all of that; on
this Atom Q7 board, if we turn off the compression and routing and just
bridge from one ethernet port to the other, we can ship something like
30000 packets/sec before the hardware maxes out…with the processor
sitting nearly idle the whole time. And, in case you are not familiar
with them, the Atom is a low-power, relatively low-speed x86 processor
intended for embedded applications, and Q7 is an embedded form factor.
These are not high-end, blazing-speed systems like those found on a
modern workstation. We're putting them in satellite modems in order to
have a fully integrated compression router/modem solution, enabling a
one-box satellite earth station. Well, they still need some other RF
components…
..
Now, I know Jean has an older system (XP, I believe), but even in the XP
days the network interfaces were on PCI busses and should be quite
adequately fast. I also realize we are talking Windows here, with all the
lack of transparency and lack of documentation of internals that implies.
If receive buffer overruns are being experienced on systems as recent as
XP systems with PCI busses, then that suggests to me that something needs
to be revisited. It might be as simple as changing the compiler
optimizations.
..
Also, I reiterate: compression/decompression using 7-zip is not only
feasible, it greatly speeds up the data transfer and reduces the
chance of overrunning the receive buffer. By doing this on Superbase, I
achieved greater than an order-of-magnitude increase in transfer speed
across the internet, even after accounting for the overhead of the
compression/decompression.
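The kind of gain Jim describes is easy to demonstrate on record-style data, which is highly repetitive. Here is a quick sketch using Python's lzma module (the same LZMA algorithm 7-zip uses); this is an illustration of the principle, not the actual Superbase mechanism, and the record layout is invented for the example.

```python
import lzma

def compress_records(records):
    """Concatenate record strings and LZMA-compress them.

    Database records with many similar fields compress extremely
    well, so far fewer (and smaller) packets cross the wire.
    """
    raw = b"\n".join(records)
    packed = lzma.compress(raw)
    return raw, packed

# e.g. 1000 near-identical 80-field records (hypothetical layout)
records = [b";".join(b"field%d=value" % i for i in range(80))] * 1000
raw, packed = compress_records(records)
ratio = len(raw) / len(packed)
```

On data like this the compression ratio is large enough that the transfer-time saving dwarfs the CPU cost of compressing, which is consistent with Jim's observation.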
..
It would be useful for Jean to profile her system (at a minimum, use
Task Manager to watch processor load) while her server is receiving all
this traffic, in order to see how much work the processor is doing.