brad.r.hodge

Forum Replies Created

brad.r.hodge
Participant

Yes, in my case I was receiving the file from a remote server.

The first idea to consider is to improve dnstrace performance by splitting its operations over two threads, e.g. one thread to read packets from the driver and a second thread to re-inject them back.

Are you sure this will help much? Because in that case I guess we have to keep a separate linked list for the packets that need to be re-injected, and the reader thread has to insert every received packet into it, so I don't see why this would improve performance much. It seems mostly the same as having it all in one thread?
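For what it's worth, the split would look roughly like this. A minimal sketch only, assuming hypothetical read_packet_from_driver()/reinject_packet() stand-ins (stubbed below, not the project's real API):

    // Two-thread split: a reader thread queues packets, a second thread
    // drains the queue and re-injects them. Note the hand-off itself
    // costs a lock and a move per packet.
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    using Packet = std::vector<unsigned char>;

    // Hypothetical stand-ins for the real driver calls, stubbed so the
    // sketch compiles:
    Packet read_packet_from_driver() { return Packet(64, 0); }
    void reinject_packet(const Packet&) {}

    std::queue<Packet> pending;        // packets awaiting re-injection
    std::mutex queue_lock;
    std::condition_variable queue_ready;

    void reader_loop() {
        for (;;) {
            Packet p = read_packet_from_driver();  // blocks for next packet
            {
                std::lock_guard<std::mutex> guard(queue_lock);
                pending.push(std::move(p));
            }
            queue_ready.notify_one();
        }
    }

    void reinject_loop() {
        for (;;) {
            std::unique_lock<std::mutex> guard(queue_lock);
            queue_ready.wait(guard, [] { return !pending.empty(); });
            Packet p = std::move(pending.front());
            pending.pop();
            guard.unlock();                        // re-inject outside the lock
            reinject_packet(p);
        }
    }

    int main() {
        std::thread reader(reader_loop);           // real code needs shutdown logic
        std::thread reinjector(reinject_loop);
        reader.join();
        reinjector.join();
    }

The only gain is overlapping the read with the re-inject; if the per-packet kernel transitions dominate, the queue hand-off may eat most of that gain, which is exactly my doubt.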

I think we need to focus on SMB to solve this issue. I found an OSR thread here:

https://community.osr.com/discussion/290695/wfp-callout-driver-layer2-filtering

It has an important part in it:

However, I am encapsulating packets and needed the ability to be able to create NBL chains in order to improve performance when dealing with large file transfers and the like (i.e. typically for every 1 packet during an SMB file transfer one needs to generate at least 2 packets per 1 original packet because of MTU issues)
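For context, the chaining itself is just linking NET_BUFFER_LISTs through their Next pointers (the NET_BUFFER_LIST_NEXT_NBL macro) so that one send/inject call submits the whole batch instead of one call per packet. A rough kernel-side sketch, where PACKET and make_nbl_for_packet() are hypothetical placeholders, not a real API:

    #include <ndis.h>

    // Hypothetical packet record and helper that wraps one packet in a
    // standalone NET_BUFFER_LIST; neither is a real API.
    typedef struct _PACKET PACKET;
    NET_BUFFER_LIST* make_nbl_for_packet(PACKET* pkt);

    // Batch N packets into a single NBL chain.
    NET_BUFFER_LIST* build_nbl_chain(PACKET** packets, ULONG count)
    {
        NET_BUFFER_LIST* head = NULL;
        NET_BUFFER_LIST* tail = NULL;
        for (ULONG i = 0; i < count; ++i) {
            NET_BUFFER_LIST* nbl = make_nbl_for_packet(packets[i]);
            if (nbl == NULL)
                continue;
            if (head == NULL)
                head = nbl;                           // first NBL starts the chain
            else
                NET_BUFFER_LIST_NEXT_NBL(tail) = nbl; // link onto the tail
            tail = nbl;
        }
        return head;  // hand the entire chain to one send/inject call
    }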

Thoughts?

brad.r.hodge
Participant

This is the result on an i7-2600 @ 3.4GHz, Win10 x64:

100MB/s -> 40MB/s

CPU: 52%
Memory: 30%
Disk: 10%

So I don't know which one is the bottleneck, but when I turn off dnstrace it goes back to 100MB/s and the CPU usage drops to 40%, but that's it.

Everything is the latest version; I even used the 64-bit binaries from the website rather than compiling them myself, to make sure nothing was wrong.

brad.r.hodge
Participant

Also, I would be grateful if you could test this yourself on different systems as well, especially with low-end and average CPUs, and check the results.

brad.r.hodge
Participant

So I tried it on multiple systems with different CPUs, bare metal and VM, and the results are the same.

Running the SNI inspector/dnstrace x64 versions on Windows 10 x64 reduces file transfer speed through shares by around 50%, for example 70MB/s -> 30MB/s.

Is there any fix available for this, or is there no solution?

Although I should mention that on some high-end CPUs such as the i7-7700K the reduction was only around 10-15%; most average customers don't have those, so we have to assume the worst-case scenario where they have an average or low-end CPU.

brad.r.hodge
Participant

I tested on real hardware over a real 1 Gbps wired connection.

Are you sure you are copying from the share through your network connection? The picture you posted above shows the file copying at a rate of 300MB/s, which is not possible on a network with a 1 Gb/s cable (note the byte vs. bit).

To get that 300MB/s you would need a 10 Gb/s network, which is not that common for simple networks.
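For reference, the arithmetic: 1 Gb/s divided by 8 bits per byte gives a theoretical ceiling of 125 MB/s (a bit less in practice after protocol overhead), while 300 MB/s x 8 = 2.4 Gb/s, several times what a gigabit link can carry.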

brad.r.hodge
Participant

Also, if you would like to test with the fast I/O option you can take sni_inspector.

Isn't fast I/O implemented in dnstrace as well? I saw some fast I/O code in it. Also, is fast I/O only available if we compile it as 64-bit?

It is easy to verify, just start Task Manager (or Resource Monitor) when you copy the file and check the CPU load with and without dnstrace running. If your CPU peaks even without dnstrace then it is no wonder you get throughput degradation when you add extra work…

Yep, it seems the bottleneck is CPU-intensive work. The CPU I'm testing with is a Core i7-5500; even though it's not high-end, it's what most ordinary users have, and obviously we can't tell our clients to just upgrade to a high-end CPU to fix this issue if we purchase this project. Is there any way to fix this? Does dnstrace fully implement fast I/O?

What CPU are you testing with?

brad.r.hodge
Participant

Can you also try the code that was shared in the blog post you mentioned, to see whether you still get no performance drop?

brad.r.hodge
Participant

I guess one reason could be how powerful the underlying CPU is. I tried dnstrace and the rate dropped from 80MB/s to 40MB/s. But it is weird that it doesn't drop at all in your case; are you sure you are copying from a shared folder on a network? Mine is a freshly installed Windows 10 (VM).

I want to measure how much overhead this project adds if we use it to send every packet to user mode for a check (block or not), and then re-inject those that are OK based on the user-mode decision. This dnstrace seems to do exactly that, right? Based on reading its code, it receives packets from the kernel and sends those that are OK (which in this case is all of them) back to the kernel, right?
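In other words, the pattern I want to measure is roughly the following. A sketch only; read_packet_from_driver(), user_verdict() and reinject_packet() are hypothetical stand-ins, not the project's actual API:

    #include <vector>

    using Packet = std::vector<unsigned char>;

    // Hypothetical stand-ins, stubbed so the sketch compiles:
    Packet read_packet_from_driver() { return Packet(64, 0); }
    bool user_verdict(const Packet&) { return true; }  // dnstrace: allow all
    void reinject_packet(const Packet&) {}

    void filter_loop() {
        for (;;) {
            Packet p = read_packet_from_driver();  // kernel -> user copy
            if (user_verdict(p))                   // user-mode decision
                reinject_packet(p);                // user -> kernel copy
            // blocked packets are simply dropped
        }
    }

So even when every verdict is "allow", each packet still pays two kernel/user transitions plus the copies, which is the overhead I'm trying to put a number on.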

brad.r.hodge
Participant

I actually read that post too, and compiled the exact code, and the result is still the same: SMB file transfer speed drops by 70-80%.

The reason I tried the newest version was that I was hoping direct I/O would fix this issue, since they didn't implement direct I/O in that post, but it seems it doesn't.

Although I'm not sure how that post claims to reach 90MB/s when I can't reach 30-40. Maybe they didn't try a file transfer through a share?

Are you getting the same result as me?

brad.r.hodge
Participant

We removed the file-writing part, but the problem still persists. So right now it just gets the packet from the driver and then passes it back to the kernel.

Although I should mention that our connection speed is 1Gb/s, and running the capture, even after removing the file writes, still reduces it to 200Mb/s when moving files from shares.

Can you give it a try as well, removing the file writing, and see how much it reduces the speed?

brad.r.hodge
Participant

Also, shouldn't the fast I/O that was implemented recently fix this?

We compiled the capture as 64-bit (so not WOW64), and the tested Windows was Windows 10 20H1.

What is causing this problem?
