Search Results

Search found 11479 results on 460 pages for 'resource usage'.

Page 122/460 | < Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >

  • Setup IPv6 over IPv4 tunnel in VPN

    - by bfmeb
    Let me explain my scenario: I have a Linux server A that is reachable over a VPN, so when I am connected to the VPN over the Internet I can successfully ping A. Server A is connected to a router B. Router B has a local IPv6 address, and there are resources (each with a local IPv6 address) connected to it. Once I am connected to the VPN, I can use SSH to get onto A, and from there I can use ping6 to reach router B or any of its connected resources. This works fine. The ping fails, however, if I try to ping router B directly from my computer. Overview: My Computer -- VPN -- Server A (IPv4) -- Router B (IPv6) -- Resource A (IPv6). Resource A runs, for example, an HTTP server. My question is: how can I access Resource A (for example over HTTP) from my VPN-connected computer? Is it possible? Should I set up a tunnel device? Sorry for the inexpert explanation, I am new to networking!
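
    For illustration, one low-effort way (a sketch, with made-up addresses) is to let SSH carry the traffic: forward a local port on your computer, through server A, to the IPv6 address of the resource:

        # on your computer, once the VPN is up
        ssh -L 8080:[2001:db8::10]:80 user@serverA
        # then browse http://localhost:8080/ to reach the HTTP server on Resource A

    A real IPv6-over-IPv4 tunnel (e.g. a sit device on A plus routes on both ends) is the more general answer, but for a single HTTP resource the port forward is often enough.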

    Read the article

  • nagiosgraph new services not showing

    - by Eleven-Two
    I am using Nagios Core with Nagiosgraph and for a while had graphing enabled only for CPU usage. This worked fine, but now I want to add some more services (memory usage, for example). The new services are not working (no RRD data is generated). The Nagiosgraph site only says "no data available" and I get no errors in the Apache log, nagiosgraph.log or nagiosgraph-cgi.log. The new services are standard services (nsclient++ MEMUSE, for example) and of course they are included in the map file. If I execute the checks manually, they also print perfdata. I added the services by enabling the "graphed-service" use. Did I miss something?
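
    For illustration, one way to confirm that perfdata is really coming back for the new services is to run the plugin by hand from the Nagios host; the plugin path, host address, port and password below are assumptions:

        /usr/local/nagios/libexec/check_nt -H 192.168.1.20 -p 12489 -s password -v MEMUSE
        # everything after the | is the perfdata that nagiosgraph's map file has to match, roughly:
        # Memory usage: total:4095.00 MB - used: 2076.00 MB (51%) ... |'Memory usage'=2076.00MB;...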

    Read the article

  • How to limit router bandwidth?

    - by David
    Hello, is there a way to configure my router (a D-Link DIR-615) to throttle the allowed bandwidth after a certain amount of data has been used? For instance, I want the router to operate normally up to 20 GB; after 20 GB I want it to limit bandwidth to a fraction of the normal speed (perhaps 1/5th). I live in Canada, so in about a month everyone is going to be billed based on the amount they use (usage-based billing). Instead of the unlimited bandwidth I am enjoying now, most people will be capped at 25 GB and will have to fork out $2 for every GB over that. Thank you in advance for the help.
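
    For illustration, the stock DIR-615 firmware does not appear to offer a usage-triggered throttle, but if the traffic can be routed through a Linux box the throttling half can be sketched with tc once you decide the cap has been reached; the interface name and rate are assumptions:

        tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms   # cap egress at ~1 Mbit/s
        tc qdisc del dev eth0 root                                             # remove the cap again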

    Read the article

  • Configuring IIS site to use HTTPS

    - by James
    I am working on a REST API which I have currently deployed on a Windows XP Professional SP2 development machine running IIS 5.1. The site is currently hosted on port 81 and accessed via HTTP. I would now like to configure the site to stop using HTTP and use HTTPS only. I have generated a self-signed certificate using the SelfSSL.exe tool from the IIS 6.0 Resource Kit Tools and set the Common Name to the IP of my server (as it's a local development machine it has no domain name). I have also already configured the site to use SSL, using the "How To Set Up an HTTPS Service in IIS" tutorial as my guide. However, whenever I try to access a resource in the API via HTTPS I get a 404. Any ideas?
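
    For illustration, it is worth double-checking that the certificate was bound to the right site and SSL port; a SelfSSL invocation typically looks something like this (the site ID, key length and validity period are assumptions):

        selfssl /N:CN=192.168.1.10 /K:1024 /V:365 /S:1 /P:443 /T
        REM /S is the IIS site ID, /P the SSL port, /T trusts the certificate locally;
        REM after that, the API has to be requested as https://192.168.1.10/... (the SSL port), not on port 81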

    Read the article

  • Make file Linking issue Undefined symbols for architecture x86_64

    - by user1035839
    I am working on getting a few files to link together using my makefile and C++, and I get the following error when running make:

        g++ -bind_at_load `pkg-config --cflags opencv` -c -o compute_gist.o compute_gist.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o gist.o gist.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o standalone_image.o standalone_image.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o IplImageConverter.o IplImageConverter.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` -c -o GistCalculator.o GistCalculator.cpp
        g++ -bind_at_load `pkg-config --cflags opencv` `pkg-config --libs opencv` compute_gist.o gist.o standalone_image.o IplImageConverter.o GistCalculator.o -o rungist
        Undefined symbols for architecture x86_64:
          "color_gist_scaletab(color_image_t*, int, int, int const*)", referenced from:
              _main in compute_gist.o
        ld: symbol(s) not found for architecture x86_64
        collect2: ld returned 1 exit status
        make: *** [rungist] Error 1

    My makefile is as follows (note: I don't need the OpenCV bindings yet, but will be coding against OpenCV later):

        CXX = g++
        CXXFLAGS = -bind_at_load `pkg-config --cflags opencv`
        LFLAGS = `pkg-config --libs opencv`
        SRC = \
            compute_gist.cpp \
            gist.cpp \
            standalone_image.cpp \
            IplImageConverter.cpp \
            GistCalculator.cpp
        OBJS = $(SRC:.cpp=.o)

        rungist: $(OBJS)
            $(CXX) $(CXXFLAGS) $(LFLAGS) $(OBJS) -o $@

        all: rungist

        clean:
            rm -rf $(OBJS) rungist

    The method is declared in gist.h:

        float *color_gist_scaletab(color_image_t *src, int nblocks, int n_scale, const int *n_orientations);

    And defined in gist.cpp:

        float *color_gist_scaletab(color_image_t *src, int w, int n_scale, const int *n_orientation) {

    And finally compute_gist.cpp (the main file):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include "gist.h"

        static color_image_t *load_ppm(const char *fname) {
            FILE *f=fopen(fname,"r");
            if(!f) { perror("could not open infile"); exit(1); }
            int width,height,maxval;
            if(fscanf(f,"P6 %d %d %d",&width,&height,&maxval)!=3 || maxval!=255) {
                fprintf(stderr,"Error: input not a raw PPM with maxval 255\n");
                exit(1);
            }
            fgetc(f); /* eat the newline */
            color_image_t *im=color_image_new(width,height);
            int i;
            for(i=0;i<width*height;i++) {
                im->c1[i]=fgetc(f);
                im->c2[i]=fgetc(f);
                im->c3[i]=fgetc(f);
            }
            fclose(f);
            return im;
        }

        static void usage(void) {
            fprintf(stderr,"compute_gist options... [infilename]\n"
                    "infile is a PPM raw file\n"
                    "options:\n"
                    "[-nblocks nb] use a grid of nb*nb cells (default 4)\n"
                    "[-orientationsPerScale o_1,..,o_n] use n scales and compute o_i orientations for scale i\n");
            exit(1);
        }

        int main(int argc,char **args) {
            const char *infilename="/dev/stdin";
            int nblocks=4;
            int n_scale=3;
            int orientations_per_scale[50]={8,8,4};
            while(*++args) {
                const char *a=*args;
                if(!strcmp(a,"-h")) usage();
                else if(!strcmp(a,"-nblocks")) {
                    if(!sscanf(*++args,"%d",&nblocks)) { fprintf(stderr,"could not parse %s argument",a); usage(); }
                } else if(!strcmp(a,"-orientationsPerScale")) {
                    char *c;
                    n_scale=0;
                    for(c=strtok(*++args,",");c;c=strtok(NULL,",")) {
                        if(!sscanf(c,"%d",&orientations_per_scale[n_scale++])) { fprintf(stderr,"could not parse %s argument",a); usage(); }
                    }
                } else {
                    infilename=a;
                }
            }
            color_image_t *im=load_ppm(infilename);
            // Here's the method call -> :(
            float *desc=color_gist_scaletab(im,nblocks,n_scale,orientations_per_scale);
            int i;
            int descsize=0;
            // compute descriptor size
            for(i=0;i<n_scale;i++) descsize+=nblocks*nblocks*orientations_per_scale[i];
            descsize*=3; // color
            // print descriptor
            for(i=0;i<descsize;i++) printf("%.4f ",desc[i]);
            printf("\n");
            free(desc);
            color_image_delete(im);
            return 0;
        }

    Any help would be greatly appreciated. I hope this is enough info. Let me know if I need to add more.

    Read the article

  • What is the formula for HughesNet FAP calculation?

    - by JohnFx
    I am somewhat frustrated with the only FAP monitor I have found on the net, because it relies on a running count of bandwidth usage, which (1) requires a service running in the background and (2) tends to become inaccurate over time. Given that there is a diagnostics page in the modem's firmware that reports the exact usage per hour, I was planning to write a more accurate version with a better UI. However, it appears that HughesNet keeps the exact formula for calculating whether you are in FAP a secret, and I have no idea why they aren't more forthcoming with this information. I'm wondering if anyone out in SU-land has done some trial-and-error testing to reverse engineer the formula, or has some inside knowledge to share.
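
    For illustration, since the modem already reports hourly usage, a first step that avoids keeping your own counter is simply to archive that diagnostics page on a schedule; the address and path below are placeholders, as they vary by model and firmware:

        mkdir -p ~/fap-log
        curl -s http://192.168.0.1/usage.html -o ~/fap-log/$(date +%Y%m%d-%H).html   # run hourly from cron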

    Read the article

  • How to Google a symbol keyword like "$?"

    - by ZhengZhiren
    I saw a trick in a book: in a Linux shell, we can use $? to get the return value of a command. For example, we run a command, and if it exits normally the return value is 0; if we then type echo $?, we see 0 on the screen. I want to Google this kind of usage, so I have to type the two-character symbol $? into the search box, but the search engine just returns nothing to me. I have looked at the Google help page but still can't find a solution. So my question is: how can I search for this kind of keyword? Or, if you can give me some advice on the usage of $? or that sort of thing, that would also be appreciated.
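
    For illustration, a couple of lines showing what $? actually holds (the exit codes shown are the conventional ones):

        grep root /etc/passwd > /dev/null
        echo $?        # 0 -> the previous command succeeded
        ls /no/such/dir 2> /dev/null
        echo $?        # non-zero -> the previous command failed

    As for the search itself: most engines drop bare punctuation, so spelling the concept out (for example, bash exit status variable) usually finds the right pages where the literal characters $? will not.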

    Read the article

  • Outbound Traffic Logging on ASA 5520 possible?

    - by j2k4j
    Taking a look at the ASDM (6.4) for my ASA 5520, I get a nice summary of the traffic status, with items like "interface traffic usage" and "connections per second". This works well, but it only shows data for the last 5-6 minutes or so. Recently I've been asked whether it's possible to pull up this same type of traffic data for a particular time in the past (such as: find the traffic usage for a 3-minute period on date xx:xx:xx @ time xx:xx:xx). I've noticed that my ASA 5520 is logging the warnings, errors, etc. that it is processing, but as far as I can tell from searching through the ASA, traffic data amounts are not logged. Is logging the traffic data amounts (as described above) actually possible? Is there any way to find past values for traffic and similar counters? Thanks!
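
    For illustration, the usual approach is to send the data off the box rather than keep it there: the ASA can syslog connection build/teardown messages (which include byte counts) to a collector, or export NetFlow (NSEL). A sketch of that configuration, where the collector address and interface name are assumptions:

        logging enable
        logging timestamp
        logging trap informational
        logging host inside 10.0.0.50
        ! optionally, NetFlow export to a collector listening on UDP 2055
        flow-export destination inside 10.0.0.50 2055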

    Read the article

  • Intermittent HTTP 401 errors

    - by forthrin
    I am using an intranet solution which requires HTTP Basic authentication. However, there is an intermittent error which requires me to log in again, and then the server says "Forbidden" whether I give the correct login information or not. To add insult to injury, Safari (and Chrome) seem to show the login dialog for every included resource in the HTML, and it's impossible to cancel this modal dialog sequence, so the whole browser is blocked until I've pressed Esc some 30-odd times. After an hour, I may gain access again without really having done anything. My questions: What could cause intermittent 401 errors? Why do the browsers show the login dialog 30 times per page load (presumably once for every included resource in the HTML from the same domain)?
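
    For illustration, testing the credentials outside the browser helps separate a server-side problem from browser behaviour; a sketch, with a made-up URL and account:

        curl -u alice:secret -I https://intranet.example.com/protected/page        # -I fetches headers only; note 200 vs 401 vs 403
        curl -u alice:secret -I https://intranet.example.com/protected/style.css   # repeat against one of the sub-resources that re-prompts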

    Read the article

  • Choosing parts for a high-spec custom PC - feedback required [closed]

    - by James
    I'm looking to build a high-spec PC costing under ~£800 (bearing in mind I can get the CPU half price). This is my first time doing this so I have plenty of questions! I have been doing lots of research and this is what I have come up with: http://pcpartpicker.com/uk/p/j4lE Usage: I will be using it for Adobe CS6, rendering in 3DS Max, particle simulations in Realflow and for playing games like GTA IV (and V when it comes out), Crysis 1/2, Saints Row The Third, Deus Ex HR, etc. Questions: Can you see any obvious problem areas with the current setup? Will it be sufficient for the above usage? I won't be doing any overclocking initially. Is it worth buying the H60 liquid cooler, or will the fan that comes with the CPU be sufficient? Is water cooling generally quieter? Is the chosen motherboard good for the current components? And is it future-proof? I read that the HDD is often the bottleneck when it comes to gaming. I presume this is true to other high-end applications? If so, is my selection good? I keep changing my mind about the GPU; first the 560, now the 660. Can anyone shed some light on how to choose? I read mixed opinions about matching the GPU to the CPU. Will the 560 or the 660 be sufficient for my required usage? Atm I'm basing my choice on the PassMark benchmarks and how much they cost. The specs on the GeForce website state that the 560 and the 660 both require 450W. Is this a good figure to base the wattage of my PSU on? If so, how do you decide? Do I really need 750W? The latest GTX 690 requires 650W. Is it a good idea to buy a 750W PSU now to future-proof myself?

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM, and SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query: SELECT TextData, ApplicationName, Reads FROM [TraceWednesday] WHERE TextData IS NOT NULL AND EventClass = 12 GROUP BY TextData, ApplicationName, Reads ORDER BY Reads DESC. As I expected, some values are very high. Top Reads, in pages: 2504188, 1965910, 1445636, 1252433, 1239108, 1210153, 1088580, 1072725. Am I correct in thinking that the top one (2,504,188 pages) is 20,033,504 KB, which is roughly 20,000 MB, i.e. about 20 GB? These queries are executed often and can take quite some time to run. Eventually RAM is used up because the cache keeps growing, and timeouts occur once SQL Server can no longer keep as many pages in the buffer pool. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indexes. Obviously cutting down the I/O would make SQL Server use less RAM; or maybe it would just slow down the process of chewing up the whole RAM. If far fewer pages are read, maybe it will all run much better even when usage is high (less time swapping, etc.)? Currently our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. I'm asking before I start digging out all of the bad T-SQL and putting indexes here and there: is there something else I can do? Any advice beyond what I already know (not much yet) is much appreciated. Leo.
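
    For illustration, the conversion being done in the question (SQL Server pages are 8 KB each), checked in a shell:

        echo $((2504188 * 8))          # 20033504 KB
        echo $((2504188 * 8 / 1024))   # 19563 MB, i.e. roughly 19 GB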

    Read the article

  • Workaround for API limits [closed]

    - by blunders
    Problem: Planning on building out a client services company that requires access to APIs. Most APIs are limited based on user, IP, etc. - and even though the API calls would be on a per client basis, there's no way to get usage not tied to IPs. (Theoretical) Solution: Have each client install on their network a proxy/VPN that would allow my systems to connect and use their assigned usage. So, it's possible there's a better solution than the one I've thought of, but it's the only one I've been able to come up with.
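
    For illustration, the proxy idea can be prototyped with nothing more than an SSH SOCKS tunnel into each client's network, so that API calls originate from the client's own IP; the host names and port below are made up:

        # open a SOCKS proxy on localhost:1080 that exits via the client's gateway
        ssh -N -D 1080 apiuser@gateway.client-a.example.com &
        # route an API call through it
        curl --socks5-hostname localhost:1080 https://api.example.com/v1/resource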

    Read the article

  • Applications start very slowly from a network path

    - by Snowfox
    Hi, we have a Windows 2008 server which hosts the network share \\srvcompany\lib. This share contains several applications needed for daily business, and every client/user (all Windows XP) has desktop shortcuts to these apps. The problem is that on several (but not all) clients the apps start very slowly. If I copy an application's program files to a local folder, it starts quickly. When I watch memory usage in Task Manager on a "slow" machine while an application starts, I notice that memory usage grows much more slowly than when I start the app on a "fast" machine. Yet when I copy files from this share with Windows Explorer, the speed is nearly the same on both. I've also checked the network driver; both tested clients have the same network card with the same driver version. Does anyone have an idea what I should check next to solve this problem? Thanks for any answers.

    Read the article

  • Learn Linux Command Line for Web Server Management [closed]

    - by Jonathan
    I've searched high and low for a good resource for learning the Linux command line. I've found a handful of separate resources, but none that can really assist with web server management. I'm currently learning through trial and error with 'man' pages, along with Google. I was just wondering if anyone had a solid resource that they used to learn and would be willing to share it with me. Thanks so much for your time, I really appreciate it! EDIT: I have a few CentOS servers at present, and I know the basics; I'm just trying to get to a more advanced level.

    Read the article

  • Bash Shell Scripting Errors: ./myDemo: 56: Syntax error: Unterminated quoted string [EDITED]

    - by ???
    Could someone take a look at this code and tell me what's wrong with it?

        #!/bin/sh
        while :
        do
          echo " Select one of the following options:"
          echo " d or D) Display today's date and time"
          echo " l or L) List the contents of the present working directory"
          echo " w or W) See who is logged in"
          echo " p or P) Print the present working directory"
          echo " a or A) List the contents of a specified directory"
          echo " b or B) Create a backup copy of an ordinary file"
          echo " q or Q) Quit this program"
          echo " Enter your option and hit <Enter>: \c"
          read option
          case "$option" in
            d|D) date ;;
            l|L) ls $PWD ;;
            w|w) who ;;
            p|P) pwd ;;
            a|A) echo "Please specify the directory and hit <Enter>: \c"
                 read directory
                 if [ "$directory = "q" -o "Q" ]
                 then
                   exit 0
                 fi
                 while [ ! -d "$directory" ]
                 do
                   echo "Usage: "$directory" must be a directory."
                   echo "Re-enter the directory and hit <Enter>: \c"
                   read directory
                   if [ "$directory" = "q" -o "Q" ]
                   then
                     exit 0
                   fi
                 done
                 printf ls "$directory"
                 ;;
            b|B) echo "Please specify the ordinary file for backup and hit <Enter>: \c"
                 read file
                 if [ "$file" = "q" -o "Q" ]
                 then
                   exit 0
                 fi
                 while [ ! -f "$file" ]
                 do
                   echo "Usage: \"$file\" must be an ordinary file."
                   echo "Re-enter the ordinary file for backup and hit <Enter>: \c"
                   read file
                   if [ "$file" = "q" -o "Q" ]
                   then
                     exit 0
                   fi
                 done
                 cp "$file" "$file.bkup"
                 ;;
            q|Q) exit 0 ;;
          esac
          echo
        done
        exit 0

    There are some syntax errors that I can't figure out. I should note that on this Unix system echo -e doesn't work (don't ask me why, I don't know, and I don't have any sort of permission to change it anyway). The error is:

        ./myDemo: line 62: syntax error near unexpected token `done'
        ./myDemo: line 62:

    [Edited] EDIT: I fixed the while statement error, but now when I run the script some things still aren't working correctly. In the b|B) case, cp $file $file.bkup doesn't actually copy the file to file.bkup. In the a|A) case, ls "$directory" doesn't print the directory listing for the user to see. The current version:

        #!/bin/bash
        while $TRUE
        do
          echo " Select one of the following options:"
          echo " d or D) Display today's date and time"
          echo " l or L) List the contents of the present working directory"
          echo " w or W) See who is logged in"
          echo " p or P) Print the present working directory"
          echo " a or A) List the contents of a specified directory"
          echo " b or B) Create a backup copy of an ordinary file"
          echo " q or Q) Quit this program"
          echo " Enter your option and hit <Enter>: \c"
          read option
          case "$option" in
            d|D) date ;;
            l|L) ls pwd ;;
            w|w) who ;;
            p|P) pwd ;;
            a|A) echo "Please specify the directory and hit <Enter>: \c"
                 read directory
                 if [ ! -d "$directory" ]
                 then
                   while [ ! -d "$directory" ]
                   do
                     echo "Usage: "$directory" must be a directory."
                     echo "Specify the directory and hit <Enter>: \c"
                     read directory
                     if [ "$directory" = "q" -o "Q" ]
                     then
                       exit 0
                     elif [ -d "$directory" ]
                     then
                       ls "$directory"
                     else
                       continue
                     fi
                   done
                 fi
                 ;;
            b|B) echo "Specify the ordinary file for backup and hit <Enter>: \c"
                 read file
                 if [ ! -f "$file" ]
                 then
                   while [ ! -f "$file" ]
                   do
                     echo "Usage: "$file" must be an ordinary file."
                     echo "Specify the ordinary file for backup and hit <Enter>: \c"
                     read file
                     if [ "$file" = "q" -o "Q" ]
                     then
                       exit 0
                     elif [ -f "$file" ]
                     then
                       cp $file $file.bkup
                     fi
                   done
                 fi
                 ;;
            q|Q) exit 0 ;;
          esac
          echo
        done
        exit 0

    Another thing: is there an editor I can use that parses/checks shell code as I type, something similar to NetBeans?
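
    For illustration, the usual culprits in a script like this are the compound test and the unquoted variables; a sketch of the corrected patterns (not a full rewrite):

        # the original "Unterminated quoted string" comes from the misplaced quote in
        #   if [ "$directory = "q" -o "Q" ]
        # and [ ... -o "Q" ] is always true anyway, because the bare string "Q" is non-empty.
        # Compare against each value explicitly:
        if [ "$directory" = "q" ] || [ "$directory" = "Q" ]; then
            exit 0
        fi
        # the case pattern w|w) never matches an upper-case W; it should be w|W)
        # "ls pwd" tries to list a file literally named pwd; use ls "$PWD" (or just ls)
        # "printf ls "$directory"" prints the word ls; to show the listing just run:
        ls "$directory"
        # quote the cp arguments so filenames with spaces survive:
        cp "$file" "$file.bkup"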

    Read the article

  • Using rspec to check creation of template

    - by Brian
    I am trying to use rspec with puppet to check the generation of a configuration file from an .erb file. However, I get the error:

        1) customizations should generate valid logstash.conf
           Failure/Error: content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
           ArgumentError:
             wrong number of arguments (0 for 1)
           # ./spec/classes/logstash_spec.rb:29:in `catalogue'
           # ./spec/classes/logstash_spec.rb:29

    And the logstash_spec.rb:

        describe "customizations" do
          let(:params) { {:template => "profiles/logstash/output_broker.erb", :options => {'opt_a' => 'value_a' } } }

          it 'should generate valid logstash.conf' do
            content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
            content.should match('logstash')
          end
        end

    Read the article

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions; it can limit the number of processes per user, or absolute CPU time, but it cannot restrict the maximum number of active threads of a single process. I've found a couple of user-level tools, like CPUlimit, but not a system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise Linux, if it matters)? If such a limit were imposed, how would a user identify it?
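
    For illustration, CPU affinity is the mechanism that matches this description; a sketch using taskset (the core numbers and PID are examples), which is also what the cpuset cgroup enforces as system-wide policy:

        taskset -c 0,1 ./heavy_job                    # start a process restricted to two cores
        taskset -cp 0,1 12345                         # or restrict an already-running PID
        grep Cpus_allowed_list /proc/12345/status     # how a user can see the restriction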

    Read the article

  • Tool to track bandwidth by domain name?

    - by Grant Limberg
    I'm running an Ubuntu 10.04 server that hosts several domain names. All domains point to the same IP address and use the same network interface. I'm really only concerned with the main domain names, such as my-domain1.com and my-domain2.com; subdomains such as www.my-domain1.com should be included in the totals for my-domain1.com. Is there a tool out there that can be configured to track bandwidth usage on a per-domain-name basis? Edit: I'm not looking for only web usage, I'm looking for all traffic.
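
    For illustration, the HTTP share of this can at least be measured from the web server's own logs if each vhost records the Host header and the bytes sent; a sketch, where the log path and two-column format are assumptions and non-HTTP traffic is obviously not covered:

        # assumes entries like:  www.my-domain1.com 123456
        awk '{ sub(/^www\./, "", $1); bytes[$1] += $2 }
             END { for (d in bytes) printf "%s\t%.1f MB\n", d, bytes[d]/1048576 }' /var/log/apache2/vhost-bytes.log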

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with a gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- Presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck. System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)
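
    For illustration, regarding question (c), the kernel-level settings that most often cap incoming connection rates on a stock Ubuntu box are the listen backlog, the ephemeral port range and the open-file limit; a sketch of values to inspect and raise for a test run (the numbers are examples, not recommendations):

        sysctl net.core.somaxconn net.ipv4.ip_local_port_range   # current values
        ulimit -n                                                 # per-process file-descriptor limit
        sudo sysctl -w net.core.somaxconn=4096
        sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        # nginx must also request the larger backlog, e.g.  listen 80 backlog=4096;  in its config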

    Read the article

  • How to setup Apache 2.2 (prefork) with mod_fcgid to test a C++ application?

    - by skyeagle
    I have written my first FastCGI application (C/C++), and I need to test it to ensure that it is behaving the way I expect it to. I have searched for examples of setting up Apache 2.2 with mod_fcgid, but all of the tutorials etc. I have seen relate to PHP, Python, Perl and so on. Is anyone aware of a resource that shows how I may set up Apache to use mod_fcgid (NOT mod_fastcgi) to test my binary? If no online resource is available (I'd be surprised), could someone please point out the steps required to do the testing?
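
    For illustration, a minimal test setup on a Debian/Ubuntu-style Apache 2.2 layout might look like the following; the paths, the config file name and the assumption that the binary is installed as /var/www/fcgi-bin/myapp.fcgi are all mine, not from the question:

        sudo a2enmod fcgid                # Debian/Ubuntu helper; otherwise add the LoadModule line by hand
        sudo tee /etc/apache2/conf.d/fcgid-test.conf >/dev/null <<'EOF'
        Alias /fcgi-bin/ /var/www/fcgi-bin/
        <Directory /var/www/fcgi-bin/>
            SetHandler fcgid-script
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>
        EOF
        sudo apachectl configtest && sudo apachectl graceful
        curl -v http://localhost/fcgi-bin/myapp.fcgi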

    Read the article

  • How do I optimize a high-traffic WordPress website?

    - by mha
    Hello, I am running a WordPress-based site which is hosted on (mt) under a DV-Extreme package with 2 GB + 256 MB add-on RAM. It is a multi-author site where people write posts, leave comments, update statuses, etc. According to Google Analytics, this month's traffic is: Visitors = 45,764, Pageviews = 1,051,186, Visits = 141,447. I have put the site behind a CDN, compressed the CSS, and used the W3 Total Cache plugin to optimize it. Since last month I have been getting several down notices from Pingdom; right now I am seeing more down alerts than before and have to restart the site several times to bring it back up. Are my hosting resources not enough? Do I need more resources, or what could be the solution? Helpful suggestions will be appreciated. Thanks.
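
    For illustration, before adding more RAM it is worth finding out what actually falls over when the alerts fire (PHP, MySQL, or simply too many uncached requests); a quick sketch of things to look at during a slow period (the commands are generic, not specific to (mt), and the URL is a placeholder):

        top -b -n 1 | head -20                        # what is eating CPU/RAM right now
        mysqladmin -u root -p status                  # uptime, threads, slow-query count
        ab -n 500 -c 25 http://www.example.com/       # rough load test against a (hopefully cached) page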

    Read the article

  • AWS EC2: how to compute the cost

    - by EsseTi
    I'm new to AWS; I'm using the free tier right now and it's terrific. In a year, though, the free tier expires. I went to the pricing page at http://aws.amazon.com/ec2/pricing/ but I didn't really understand how to compute the cost. The prices are in $ per hour, but I don't think that means that if I need my application running 24h/365d I have to multiply by 8,760 — or do I? They write about usage, but how do I compute that value? If I have one website where people spend something like 10 minutes a month in total, and another where people spend 750 hours a month, do I pay the same? I can't believe it is the same price. PS: if I have a scheduled task, does it affect the usage?
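
    For illustration, an EC2 on-demand instance is billed for every hour it is running, regardless of how many visitors it serves, so an always-on instance really is hours-in-the-year times the hourly rate; a sketch with a made-up rate of $0.085/hour:

        echo $((24 * 365))         # 8760 instance-hours per year
        echo "8760 * 0.085" | bc   # 744.600 -> roughly $745/year at that hypothetical rate

    Broadly speaking, visitor traffic shows up in the separate data-transfer (per-GB) charge rather than in instance-hours, and a scheduled task only changes the bill if it keeps an instance running that would otherwise be stopped.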

    Read the article

  • What are your most useful TextExpander (or similar) snippets?

    - by P.Bjorklund
    TextExpander is a program that aims to save you time by auto-replacing snippets of text with the content of your choice, or, to quote their web site: "Save yourself time and effort by typing short abbreviations for frequently-used text and images." So, for instance, when you type ,h1 it will change it to <h1></h1> with the cursor placed between the tags. After some searching I have yet to find a resource/forum thread/whatnot that discusses the uses of this marvelous program. I am therefore looking for your best snippets, or a link to a resource where I can find them. Oh, and one thing I can think of right away is sigw and sigp for my work/personal email signatures.

    Read the article
