Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
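
    A relevant ext3 detail: a directory inode grows as entries are added but is never shrunk when files are deleted, and without the dir_index feature every name lookup scans the entries linearly. One way to check whether the directory itself is the bottleneck is to reproduce the write/delete cycle and time each batch. A minimal sketch in TypeScript/Node (the directory name and counts are made up, and this measures your own filesystem, not the asker's):

        // dir-bench.ts: time repeated create/delete cycles in one directory to
        // see whether creates slow down as the directory inode grows.
        import { mkdirSync, writeFileSync, unlinkSync } from "node:fs";
        import { join } from "node:path";

        const dir = "barcodes-bench"; // hypothetical test directory
        const N = 5000;
        mkdirSync(dir, { recursive: true });

        for (let cycle = 0; cycle < 10; cycle++) {
          const start = process.hrtime.bigint();
          for (let i = 0; i < N; i++) writeFileSync(join(dir, `file-${i}`), "x");
          const createMs = Number(process.hrtime.bigint() - start) / 1e6;
          for (let i = 0; i < N; i++) unlinkSync(join(dir, `file-${i}`));
          console.log(`cycle ${cycle}: ${N} creates in ${createMs.toFixed(0)} ms`);
        }

    If the create time climbs from cycle to cycle even though the file count stays constant, the grown directory is implicated.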

    Read the article

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200 GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 array (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the I/O speed is about 200 MB/s, stable for the first 18 GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100 MB/s, but just for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?

    Read the article

  • Insert data into SQL server with best performance

    - by Incognito
    I have an application which uses the DB (SQL Server) intensively. As it must have high performance, I would like to know the fastest way, in terms of execution time, to insert records into the DB. What should I use? As far as I know, the fastest way is to create a stored procedure and call it from code (ADO.NET). Please let me know if there is a better way, or if there are other practices to increase performance.
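
    For context, the biggest wins for bulk inserts usually come from cutting round-trips: batching many rows per statement, or using a bulk-load API such as ADO.NET's SqlBulkCopy. Below is a minimal sketch of the multi-row batching pattern, in TypeScript for illustration; execute() is a hypothetical stand-in for whatever driver call your environment provides, not a real API:

        // Batch many rows into one parameterized INSERT to cut round-trips.
        // execute(sql, params) is a hypothetical placeholder for a DB driver call.
        declare function execute(sql: string, params: unknown[]): Promise<void>;

        type Row = { id: number; name: string };

        async function insertBatched(rows: Row[], batchSize = 500): Promise<void> {
          for (let i = 0; i < rows.length; i += batchSize) {
            const batch = rows.slice(i, i + batchSize);
            // One "(@p0, @p1)" pair per row; 500 rows x 2 params = 1000
            // parameters, safely under SQL Server's 2100-parameter limit.
            const values = batch
              .map((_, r) => `(@p${r * 2}, @p${r * 2 + 1})`)
              .join(", ");
            const params = batch.flatMap((row) => [row.id, row.name]);
            await execute(`INSERT INTO items (id, name) VALUES ${values}`, params);
          }
        }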

    Read the article

  • Will a larger hard drive affect performance?

    - by user273010
    My laptop came with a 500 GB hard drive. I use my laptop for storing my digital photographs and only have about 14 GB of file storage left on the original hard drive. I have a 750 GB external hard drive, but I am leery of relying on it for primary storage: I tend to knock things over, and it has already crashed once, losing a lot of my files. I am looking at a 1 TB internal hard drive, but I am concerned that storing so much data will affect the computer's performance. Should I also increase RAM from 4 to 8 GB (the limit for my 64-bit, Windows 7, Asus A54C laptop)?

    Read the article

  • Recycle application pool, warm-up scripts: performance tuning in a SharePoint WCM site

    - by joel14141
    I was trying to tune a public-facing WCM site we have in SharePoint, and I have the following doubts. By default, application pools are set to recycle themselves at 2 AM each night, and because of that we need warm-up scripts. But as I was googling the topic I found mixed reactions: some MVPs say it's not advisable to recycle the application pool daily, and some say otherwise, so I am confused. If I don't recycle the application pool, I don't have to use warm-up scripts. But since my site is public-facing and used around the globe, is it advisable to recycle it daily, given that it will affect the performance of my site? Even if I run warm-up scripts after each recycle, I don't think performance would be as good as it should be. Any advice on that?
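
    On the warm-up side, the script itself can be very simple: request the key pages right after the recycle so the first real visitor doesn't pay the JIT and cache-priming cost. A minimal sketch (TypeScript on Node 18+ for its built-in fetch; the URLs and schedule are assumptions, and you'd run it from Task Scheduler shortly after the recycle):

        // warmup.ts: hit key pages after an app-pool recycle so caches and
        // compiled pages are primed before real visitors arrive.
        const urls = [
          "https://www.example.com/",          // placeholder site URLs
          "https://www.example.com/products",
          "https://www.example.com/contact",
        ];

        async function warmUp(): Promise<void> {
          for (const url of urls) {
            const started = Date.now();
            try {
              const res = await fetch(url, { redirect: "follow" });
              await res.arrayBuffer(); // drain the body so the page fully renders
              console.log(`${url} -> ${res.status} in ${Date.now() - started} ms`);
            } catch (err) {
              console.error(`${url} failed:`, err);
            }
          }
        }

        warmUp();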

    Read the article

  • Intel VTune Performance Analyser 9.1 not working on Win 7 64

    - by ian
    I got the 32-bit and 64-bit versions of the Intel VTune Performance Analyser. I installed the 64-bit version (I think the installer was the "EMT" one), and when I go to create a new project and click the button to select an executable to profile, no file dialog shows up. I have an old laptop where I installed the 32-bit version on Windows XP, and it works fine. For the 64-bit version, I did try changing the compatibility mode to XP SP3, but it still didn't work. Does anyone know how to fix this?

    Read the article

  • Control Reference Static Method Performance

    - by dotnetguts
    I have just asked which is better, static or non-static: http://stackoverflow.com/questions/3016717/static-vs-non-static-method-performance-c I would like to take this discussion one step further. Consider this: if I pass a reference to a Panel control as a parameter to a public static method, does the static method still win on performance?

    Read the article

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock, and I use a glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% CPU time on a 3 GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printf's ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if (WSAStartup(0x0202, &wsaData)) {
                printf("WSAStartup failed!\n");
                WSACleanup();
                exit(0);
            }

            sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock == INVALID_SOCKET) {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock, (struct sockaddr*)&target, sizeof(struct sockaddr_in)) == SOCKET_ERROR) {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
            return true; /* keep the handler connected */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether or not I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.

    Read the article

  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third-party closed-source CMS (Sitefinity), the setup didn't create all the tables and procedures necessary to run it. The software lacks a logging system itself, and it made me wonder: could I trace and monitor failing SQL statements from MySQL? This serves more than just the purpose of solving my issue with Sitefinity: more often I wonder what's sent to the MySQL server without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts, or logon attempts. Does anyone know a profiler, tracer, or monitoring tool, commercial or free, that can show me this information?
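
    One built-in option that covers the "what's sent" part is MySQL's general query log, which can be switched on at runtime and directed to a table. A minimal sketch (TypeScript with the mysql2 package as an assumed client; credentials are placeholders, and setting global variables needs the SUPER privilege):

        // Enable the general query log (MySQL 5.1+) to capture every statement
        // the CMS sends; inspect it with: SELECT * FROM mysql.general_log;
        import mysql from "mysql2/promise";

        async function enableGeneralLog(): Promise<void> {
          const conn = await mysql.createConnection({
            host: "localhost",   // placeholder credentials
            user: "root",
            password: "secret",
          });
          await conn.query("SET GLOBAL log_output = 'TABLE'");
          await conn.query("SET GLOBAL general_log = 'ON'");
          await conn.end();
        }

        enableGeneralLog().catch(console.error);

    The general log records what is sent, not whether it succeeded, so failures and timeouts still have to be correlated on the client side or in the error log.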

    Read the article

  • Using Google Webmaster & Analytics, what data to look at to improve website performance?

    - by Rob
    Using data from Google Analytics and Webmaster Tools, what data should I be looking at to improve my website's performance? I want to improve the SEO, usability, and general performance of my website. EDIT: It's a portfolio website that we've done the initial SEO for; we've also optimised all the images etc. and made the site as fast as possible. What kind of things should I be looking out for in the Analytics and Webmaster data to improve performance, for both the SEO and each individual page?

    Read the article

  • Most efficient method of detecting/monitoring DOM changes?

    - by Graza
    I need an efficient mechanism for detecting changes to the DOM. Preferably cross-browser, but if there's an efficient means that is not cross-browser, I can implement it with a fail-safe cross-browser fallback. In particular, I need to detect changes that would affect the text on a page, so any new, removed, or modified elements, or changes to inner text (innerHTML), would need to be caught. I don't have control over the changes being made (they could be due to third-party javascript includes, etc.), so it can't be approached from that angle: I need to "monitor" for changes somehow. Currently I've implemented a "quick'n'dirty" method which checks body.innerHTML.length at intervals. This won't, of course, detect changes which result in the same length being returned, but in this case it is "good enough": the chances of this happening are extremely slim, and in this project failing to detect a change won't result in lost data. The problem with body.innerHTML.length is that it's expensive: it can take between 1 and 5 milliseconds in a fast browser, and this can bog things down a lot. I'm also dealing with a largish number of iframes, and it all adds up. I'm pretty sure the expense comes from the fact that the innerHTML text is not stored statically by browsers and needs to be built from the DOM every time it is read. The types of answers I am looking for are anything from the "precise" (for example, an event) to the "good enough", perhaps something as "quick'n'dirty" as the innerHTML.length method, but that executes faster.
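
    For the "precise" end of that spectrum: modern browsers expose MutationObserver, which delivers batched change notifications instead of requiring polling (it may postdate this question, where DOM mutation events were the closest equivalent). A minimal sketch:

        // Watch the whole document for structural and text changes.
        const observer = new MutationObserver((mutations) => {
          for (const m of mutations) {
            if (m.type === "characterData") {
              console.log("text changed:", m.target.textContent);
            } else if (m.type === "childList") {
              console.log("nodes added:", m.addedNodes.length,
                          "removed:", m.removedNodes.length);
            }
          }
        });

        observer.observe(document.body, {
          subtree: true,        // the entire tree, not just direct children
          childList: true,      // element insertions and removals
          characterData: true,  // edits to existing text nodes
        });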

    Read the article

  • Improve performance on Lync desktop sharing

    - by Trikks
    I'm using a Lync 2010 server to handle client communication and screen sharing. The biggest issue is the performance of screen sharing: the quality is rather high, but the frame rate is very poor. I have been reading and searching a lot on the subject, and 95% of all topics are about bandwidth; we have a 200/200 Mbit Internet connection solely for this application, and my test machines run on an internal gigabit LAN. The speeds between all boxes are hysterically fast. The next step was to ensure that there were profiles for different bandwidths, so I registered some:

        New-CsNetworkBandwidthPolicyProfile -Identity 50Mb_Link -Description "BW profile for 50Mb links" -AudioBWLimit 20000 -AudioBWSessionLimit 200 -VideoBWLimit 14000 -VideoBWSessionLimit 700
        New-CsNetworkBandwidthPolicyProfile -Identity 100Mb_Link -Description "BW profile for 100Mb links" -AudioBWLimit 30000 -AudioBWSessionLimit 300 -VideoBWLimit 25000 -VideoBWSessionLimit 1500

    Nothing fancy happened here either. None of the test boxes have anything from Norton installed, and they don't have any firewalls running (nor does the Lync server); all fences are down in this environment, just for the testing. Is there anything I may have missed to improve the quality of this? Thanks

    Read the article

  • xinetd vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java-based web server on port 80. The options are:

    - a web proxy (Apache, nginx, etc.)
    - xinetd
    - iptables
    - setuid

    The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives, so new connections are made for every proxied request. xinetd is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.

    Read the article

  • Increase the compression performance of VPN

    - by Martin
    I am currently switching from a system of HPN-SSH tunnels with compression enabled to something VPN-based. I have tried tinc and n2n so far; hamachi requires a library I do not have. In my primitive benchmarks I am not satisfied with the achievable bandwidth compared to the SSH tunnels. In tinc, the low LZO setting performed best, but compression is only available in UDP mode. Ideally I would like a TCP-based VPN with multi-threaded compression. Can you suggest some ways to increase performance? Would it be possible to somehow put a compression filter in front of the tun interface? Or are there any VPN implementations that might be better suited to my needs (fast compression, TCP-based, switch mode, doesn't have to be super-secure)? I would consider tunnelling Ethernet over SSH, but according to some articles it is not advisable.

    Read the article

  • ASA Slow IPSec Performance

    - by Brent
    I have an IPSec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the VPN. CPU on the device is 13%, memory at 408 MB, and there are 2 active VPN sessions, so the load on the device is particularly low. A Wireshark capture of a file transfer between the two hosts over the VPN shows a large number of header checksum failures, which is alarming, but I am not sure what to check next. iperf shows around 4-5 Mbit/s with differing TCP window sizes.

    Show run on the ASA: http://pastebin.com/uKM4Jh76
    Show crypto accelerator stats: http://pastebin.com/xQahnqK3

    Read the article

  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND memory. I've observed that after ~3 months of continuous running, boot time increased significantly: it takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of the available storage, so I removed unnecessary files (got down to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty and unnecessary dirs. This didn't help either. After the system starts, I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU. Now my question: is there a way to "optimize" a JFFS2 filesystem for better performance, i.e. faster booting (my system has a limited time to boot up)? It would be great if this optimization could be done remotely, as I have no physical access to the boards.

    Read the article

  • Which JavaScript graphics library has the best performance?

    - by DNS
    I'm doing some research for a JavaScript project where the performance of drawing simple primitives (i.e. lines) is by far the top priority. The answers to this question provide a great list of JS graphics libraries. While I realize that the choice of browser has a greater impact than the library, I'd like to know whether there are any differences between them, before choosing one. Has anyone done a performance comparison between any of these?
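
    Libraries aside, it's easy to establish a baseline with the raw canvas 2D API and then repeat the same loop through each candidate library's line-drawing calls to compare them on equal terms. A rough sketch (the counts and canvas size are arbitrary):

        // Baseline: how fast does the browser itself stroke random lines?
        const canvas = document.createElement("canvas");
        canvas.width = 800;
        canvas.height = 600;
        document.body.appendChild(canvas);
        const ctx = canvas.getContext("2d")!;

        const LINES = 10000;
        const t0 = performance.now();
        ctx.beginPath();
        for (let i = 0; i < LINES; i++) {
          ctx.moveTo(Math.random() * 800, Math.random() * 600);
          ctx.lineTo(Math.random() * 800, Math.random() * 600);
        }
        ctx.stroke();
        const t1 = performance.now();
        console.log(`${LINES} lines in ${(t1 - t0).toFixed(1)} ms`);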

    Read the article

  • Server performance metrics report and practicality

    - by Anjesh
    I need to prepare a web server (Apache/PHP) performance report containing important metrics like CPU usage, disk I/O, and memory usage on a per-user basis. A couple of domains are hosted on the same server, and they run as separate users using fcgi. The reason is that sometimes certain hosted applications use a lot of CPU, making the server slow for the other applications (running as separate users). I am planning to develop scripts for this, as I can't seem to find any simple utilities for the purpose. The script will take snapshots of the per-user metrics at defined intervals, say every 15 minutes, and record them; any abnormalities will be reported via email. How practical is that? It would also be interesting to know what else needs to be recorded.
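
    The sampler itself can be small. A sketch of the idea (TypeScript/Node on Linux; the interval and alert threshold are placeholders, and a real version would append to a log or send mail instead of printing):

        // Every 15 minutes, sum CPU% and resident memory per Unix user from ps.
        import { execSync } from "node:child_process";

        function sampleByUser(): Map<string, { cpu: number; rssKb: number }> {
          const out = execSync("ps -eo user:32,pcpu,rss --no-headers").toString();
          const totals = new Map<string, { cpu: number; rssKb: number }>();
          for (const line of out.trim().split("\n")) {
            const [user, pcpu, rss] = line.trim().split(/\s+/);
            const t = totals.get(user) ?? { cpu: 0, rssKb: 0 };
            t.cpu += parseFloat(pcpu);
            t.rssKb += parseInt(rss, 10);
            totals.set(user, t);
          }
          return totals;
        }

        setInterval(() => {
          for (const [user, t] of sampleByUser()) {
            console.log(`${user} cpu=${t.cpu.toFixed(1)}% rss=${t.rssKb}KB`);
            if (t.cpu > 80) console.error(`ALERT: ${user} at ${t.cpu.toFixed(1)}% CPU`);
          }
        }, 15 * 60 * 1000);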

    Read the article

  • Perl: Asynchronous file monitoring

    - by Hussain
    I am wondering whether it is possible, and if so how, to create a Perl script that constantly monitors a file or DB and then calls a subroutine to perform text processing when the file changes. I'm pretty sure this would be possible using sockets, but this needs to be used for a webchat application on a site running on a shared host, and I'm not so sure sockets would be allowed there. The basic idea is:

    - create a listener for a chat file/database
    - when the file is updated with a new message, call a subroutine
    - the called subroutine sends the new message back to the browser to be displayed

    Thanks in advance.
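
    For the listener, stat-polling is the safe choice on a shared host (no sockets or inotify modules needed). Here is a sketch of the shape in TypeScript/Node rather than Perl, just to illustrate the approach; the path and interval are placeholders:

        // Poll the chat file's size; when it grows, read only the new bytes
        // and hand them to the message handler.
        import { watchFile, createReadStream, statSync } from "node:fs";

        const chatFile = "/home/site/chat.log"; // hypothetical chat file
        let lastSize = 0;
        try { lastSize = statSync(chatFile).size; } catch { /* starts empty */ }

        watchFile(chatFile, { interval: 1000 }, (curr) => {
          if (curr.size > lastSize) {
            const stream = createReadStream(chatFile, {
              start: lastSize,
              end: curr.size - 1, // end is inclusive
            });
            stream.on("data", (chunk) => handleNewMessage(chunk.toString()));
            lastSize = curr.size;
          }
        });

        function handleNewMessage(text: string): void {
          console.log("new message:", text.trim()); // push to the browser here
        }

    In Perl the equivalent would be a loop that stats the file, sleeps, and seeks to the previously recorded size whenever the file grows.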

    Read the article

  • How do you demonstrate performance in paired-programming environments?

    - by NT3RP
    Performance reviews have come up recently at my work, and I was put in an interesting position. Our team does a lot of pair programming, which has a tendency of averaging out the skill differences between team members (especially considering we rotate pairs). Generally, when doing performance reviews, you look back at the work you've done, and demonstrate what you've accomplished, and how you've exceeded expectations to try to negotiate a raise or other benefits. How do you demonstrate (or even measure) individual performance in an environment like this?

    Read the article

  • Monitoring an audio line

    - by Stefan Liebenberg
    I need to monitor my audio line-in in Linux and, in the event that audio is played, record the sound and save it to a file, similar to how motion monitors a video feed. Is it possible to do this with bash? Something along the lines of:

        #!/bin/bash

        # audio device
        device=/dev/audio-line-in

        # below this threshold, audio will not be recorded
        noise_threshold=10

        # folder where recordings are stored
        storage_folder=~/recordings

        # run indefinitely, until Ctrl-C is pressed
        while true; do
            # noise_level stands for a (hypothetical) function that reads
            # the current noise level from the device
            if [ "$(noise_level "$device")" -gt "$noise_threshold" ]; then
                # stream from the device to a file; can be encoded to MP3 later
                cat "$device" > "$storage_folder/$(date +%F_%T).raw"
            fi
        done

    Read the article

  • Wireless performance on Ubuntu 9.10

    - by Brian
    Is there something I should do to my networking configuration in Ubuntu to improve the performance of my wireless connection? I'm on a netbook dual-booting Windows 7 and Ubuntu 9.10. I pick up a much stronger wifi signal in Windows than in Ubuntu. As soon as I boot Ubuntu, it connects to the network with a stronger signal, then loses the signal very quickly. After it dies, I can't reconnect. I've tested this on a couple of different networks with the same outcome.

    Read the article
