Search Results

Search found 10215 results on 409 pages for 'ram usage'.


  • What is the most likely cause to a Blue Screen of Death when the user presses Ctrl-Alt-Delete?

    - by Jay
    My wife is experiencing this with her work laptop: she presses Ctrl-Alt-Delete to lock and she gets the BSOD. Her help desk's first troubleshooting step is usually to "re-image", and the laptop is locked down. So with this question, I am asking whether the behavior is unique enough that someone in the stack-universe knows exactly what this is (something I can tell her to tell her help desk support). Update: help desk said to order more RAM. Alt-tabbing caused the same behavior today. And... she learned that multiple users are affected. I'm not sure I'll be able to glean any additional info that will help with troubleshooting. I'll leave the question here for a bit, and if an answer ends up being the actual solution, I'll accept it. If not, I think I should probably remove the question (I'll check meta). Update #2.5: The cause appears to be a Ctrl-Alt-Delete keystroke while Sales Team Configurator is open. This can either be to lock the screen (there are workarounds in answers already present) or to unlock the screen (no workaround for that).

    Read the article

  • Rebooting access point via SSH with pexpect... hangs. Any ideas?

    - by MiniQuark
    When I want to reboot my D-Link DWL-3200-AP access point from my bash shell, I connect to the AP using ssh and I just type reboot in the CLI interface. After about 30 seconds, the AP is rebooted:

        # ssh [email protected]
        [email protected]'s password: ********
        Welcome to Wireless SSH Console!! ['help' or '?' to see commands]
        Wireless Driver Rev 4.0.0.167
        D-Link Access Point
        wlan1 -> reboot

    Sounds great? Well, unfortunately the ssh client process never exits, for some reason (maybe the AP kills the ssh server a bit too fast, I don't know). My ssh client process is completely blocked (even if I wait for several minutes, nothing happens). I always have to wait for the AP to reboot, then open another shell, find the ssh client process ID (using ps aux | grep ssh), then kill the ssh process using kill <pid>. That's quite annoying. So I decided to write a Python script to reboot the AP. The script connects to the AP's CLI interface via ssh, using python-pexpect, and it tries to launch the "reboot" command. Here's what the script looks like:

        #!/usr/bin/python
        # usage: python reboot_ap.py {host} {user} {password}
        import pexpect
        import sys
        import time

        command = "ssh %(user)s@%(host)s" % {"user": sys.argv[2], "host": sys.argv[1]}
        session = pexpect.spawn(command, timeout=30)  # start ssh process
        response = session.expect(r"password:")       # wait for password prompt
        session.sendline(sys.argv[3])                 # send password
        session.expect(" -> ")                        # wait for D-Link CLI prompt
        session.sendline("reboot")                    # send the reboot command
        time.sleep(60)             # make sure the reboot has time to actually take place
        session.close(force=True)  # kill the ssh process

    The script connects properly to the AP (I tried running some commands other than reboot, and they work fine), it sends the reboot command, waits for one minute, then kills the ssh process. The problem is: this time, the AP never reboots! I have no idea why. Any solution, anyone?
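
    A variant worth trying, as a minimal sketch (the EOF/timeout handling is an assumption about how the AP drops the connection, not something verified on a DWL-3200-AP): instead of sleeping a fixed 60 seconds and force-killing the child, wait for the session to reach end-of-file after sending the command. That gives the CLI time to act on the command and still bounds how long the stale ssh client can hang around:

        #!/usr/bin/python
        # reboot_ap_eof.py -- hypothetical variant of the script above
        # usage: python reboot_ap_eof.py {host} {user} {password}
        import sys
        import pexpect

        command = "ssh %(user)s@%(host)s" % {"user": sys.argv[2], "host": sys.argv[1]}
        session = pexpect.spawn(command, timeout=30)
        session.expect(r"password:")
        session.sendline(sys.argv[3])
        session.expect(" -> ")           # D-Link CLI prompt
        session.sendline("reboot")
        try:
            # Wait up to 90s for the AP to drop the connection on its own.
            session.expect(pexpect.EOF, timeout=90)
        except pexpect.TIMEOUT:
            pass                         # AP kept the session open; kill it below
        session.close(force=True)

    If the AP still never reboots, printing session.before after the sendline would at least show whether the CLI echoed the command back before the connection went quiet.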

    Read the article

  • Send mail on event log error trigger - safe check frequency

    - by Zeb Rawnsley
    I want to use PowerShell to alert me when an error occurs in the event viewer on my new Win2k12 Standard server. I was thinking I could have the script execute every 10 minutes, but I don't want to put any strain on the server just for event log checking. Here is the PowerShell script I want to use:

        $SystemErrors = Get-EventLog System | Where-Object { $_.EntryType -eq "Error" }
        If ($SystemErrors.Length -gt 0) {
            Send-MailMessage -To "[email protected]" `
                -From ($env:COMPUTERNAME + "@company.co.nz") `
                -Subject ($env:COMPUTERNAME + " System Errors") `
                -SmtpServer "smtp.company.co.nz" -Priority High
        }

    What is a safe frequency I can run this script at without hurting my server? Hardware:

        Intel Xeon E5410 @ 2.33GHz x2
        32GB RAM
        3x 7200RPM S-ATA 1TB (2x RAID1)

    Edit: With the help of Mathias R. Jessen's answer, I ended up attaching an event-triggered task to the Application & System logs with the following script:

        Param(
            [string]$LogName
        )
        $ComputerName = $env:COMPUTERNAME;
        $To = "[email protected]";
        $From = $ComputerName + "@company.co.nz";
        $Subject = $ComputerName + " " + $LogName + " Error";
        $SmtpServer = "smtp.company.co.nz";
        $AppErrorEvent = Get-EventLog $LogName -Newest 1 | Where-Object { $_.EntryType -eq "Error" };
        If ($AppErrorEvent.Length -eq 1) {
            $AppErrorEventString = $AppErrorEvent | Format-List | Out-String;
            Send-MailMessage -To $To -From $From -Subject $Subject -Body $AppErrorEventString -SmtpServer $SmtpServer -Priority High;
        };

    Read the article

  • Users using Perl script to bypass Squid Proxy

    - by mk22
    The users on our network have been using a Perl script to bypass our Squid proxy restrictions. Is there any way we can block this script from working??

        #!/usr/bin/perl
        ########################################################################
        # (c) 2008 Indika Bandara Udagedara
        # [email protected]
        # http://indikabandara19.blogspot.com
        #
        # ----------
        # LICENCE
        # ----------
        # This work is protected under GNU GPL
        # It simply says
        # " you are hereby granted to do whatever you want with this
        #   except claiming you wrote this."
        #
        # ----------
        # README
        # ----------
        # A simple tool to download via http proxies which enforce a download
        # size limit. Requires curl.
        # This is NOT a hack. This uses the absolutely legal HTTP/1.1 spec
        # Tested only for squid-2.6. Only squids will work with this(i think)
        # Please read the verbose README provided kindly by Rahadian Pratama
        # if u r on cygwin and think this documentation is not enough :)
        #
        # The newest version of pget is available at
        #   http://indikabandara.no-ip.com/~indika/pget
        #
        # ----------
        # USAGE
        # ----------
        # + Edit below configurations(mainly proxy)
        # + First run with -i <file> giving a sample file of same type that
        #   you are going to download. Doing this once is enough.
        #   eg. to download '.tar' files first run with
        #       pget -i my.tar   ('my.tar' should be a real file)
        # + Run with
        #       pget -g <URL>
        ########################################################################

        ########################################################################
        # CONFIGURATIONS - CHANGE THESE FREELY
        ########################################################################
        # *magic* file
        # pls set absolute path if in cygwin
        my $_extFile = "./pget.ext";

        # download in chunks of below size
        my $_chunkSize = 1024*1024;       # in Bytes

        # the proxy that troubles you
        my $_proxy = "192.168.0.2:3128";  # proxy URL:port
        my $_proxy_auth = "user:pass";    # proxy user:pass

        # whereis curl
        # pls set absolute path if in cygwin
        my $_curl = "/usr/bin/curl";

        ########################################################################
        # EDIT BELOW ONLY IF YOU KNOW WHAT YOU ARE DOING
        ########################################################################
        use warnings;

        my $_version = "0.1.0";

        PrintBanner();

        if (@ARGV == 0) {
            PrintHelp();
            exit;
        }

        PrimaryValidations();

        my $val;
        while (scalar(@ARGV)) {
            my $arg = shift(@ARGV);
            if ($arg eq '-h') {
                PrintHelp();
            } elsif ($arg eq '-i') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-i option requires a filename\n");
                    exit;
                }
                Init($val);
            } elsif ($arg eq '-g') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-g option requires a URL\n");
                    exit;
                }
                GetURL($val);
            } elsif ($arg eq '-c') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-c option requires a URL\n");
                    exit;
                }
                ContinueURL($val);
            } else {
                printf("Unknown option %s\n", $arg);
                PrintHelp();
            }
        }

        sub GetURL {
            my ($URL) = @_;
            chomp($URL);
            my $fileName = GetFileName($URL);
            my %mapExt;
            my $first;
            my $readLen;
            my $ext = GetExt($fileName);
            ReadMap($_extFile, \%mapExt);
            if (exists($mapExt{$ext})) {
                $first = $mapExt{$ext};
                GetFile($URL, $first, $fileName, 0);
            } else {
                die "Unknown ext in $fileName.\nRerun with -i <fileName>";
            }
        }

        sub ContinueURL {
            my ($URL) = @_;
            chomp($URL);
            my $fileName = GetFileName($URL);
            my $fileSize = 0;
            $fileSize = -s $fileName;
            printf("Size = %d\n", $fileSize);
            my $first = -1;
            if ($fileSize > 0) {
                $fileSize -= 1;
                GetFile($URL, $first, $fileName, $fileSize);
            } else {
                GetURL($URL);
            }
        }

        sub Init {
            my ($fileName) = @_;
            my ($key, $value);
            my %mapExt;
            my $ext = GetExt($fileName);
            if ($ext eq "") {
                die "Cannot get ext of \'$fileName\'";
            }
            ReadMap($_extFile, \%mapExt);
            my $b = GetFirst($fileName);
            $mapExt{$ext} = $b;
            WriteMap($_extFile, \%mapExt);
            print "I handle\n";
            while (($key, $value) = each(%mapExt)) {
                print "\t$key -> $value\n";
            }
        }

        sub GetExt {
            my ($name) = @_;
            my @x = split(/\./, $name);
            my $ext = "";
            if (@x != 1) {
                $ext = pop @x;
            }
            return $ext;
        }

        sub ReadMap {
            my ($fileName, $mapRef) = @_;
            my $f;
            my @arr;
            open($f, '<', $fileName) or die "Couldn't open $fileName";
            my %map = %{$mapRef};
            while (<$f>) {
                my $line = $_;
                chomp($line);
                @arr = split(/[ \t]+/, $line, 2);
                $mapRef->{$arr[0]} = $arr[1];
            }
            printf("known ext\n");
            while (($key, $value) = each(%$mapRef)) {
                print("$key, $value\n");
            }
            close($f);
        }

        sub WriteMap {
            my ($fileName, $mapRef) = @_;
            my $f;
            my @arr;
            open($f, '>', $fileName) or die "Couldn't open $fileName";
            my ($k, $v);
            while (($k, $v) = each(%{$mapRef})) {
                print $f "$k" . "\t$v\n";
            }
            close($f);
        }

        sub PrintHelp {
            print "usage:
            -h             Print this help
            -i <filename>  Initialize for this filetype
            -g <URL>       Get this URL
            -c <URL>       Continue this URL\n"
        }

        sub GetFirst {
            my ($fileName) = @_;
            my $f;
            open($f, "<$fileName") or die "Couldn't open $fileName";
            my $buffer = "";
            my $first = -1;
            binmode($f);
            sysread($f, $buffer, 1, 0);
            close($f);
            $first = ord($buffer);
            return $first;
        }

        sub GetFirstFromMap {
        }

        sub GetFileName {
            my ($URL) = @_;
            my @x = split(/\//, $URL);
            my $fileName = pop @x;
            return $fileName;
        }

        sub GetChunk {
            my ($URL, $file, $offset, $readLen) = @_;
            my $end = $offset + $_chunkSize - 1;
            my $curlCmd = "$_curl -x $_proxy -u $_proxy_auth -r $offset-$end -# \"$URL\"";
            print "$curlCmd\n";
            my $buff = `$curlCmd`;
            ${$readLen} = syswrite($file, $buff, length($buff));
        }

        sub GetFile {
            my ($URL, $first, $outFile, $fileSize) = @_;
            my $readLen = 0;
            my $start = $fileSize + 1;
            my $file;
            open($file, "+>>$outFile") or die "Couldn't open $outFile to write";
            if ($fileSize <= 0) {
                my $uc = pack("C", $first);
                syswrite($file, $uc, 1);
            }
            do {
                GetChunk($URL, $file, $start, \$readLen);
                $start = $start + $_chunkSize;
                $fileSize += $readLen;
            } while ($readLen == $_chunkSize);
            printf("Downloaded %s(%d bytes).\n", $outFile, $fileSize);
            close($file);
        }

        sub PrintBanner {
            printf("pget version %s\n", $_version);
            printf("There is absolutely NO WARRANTY for pget.\n");
            printf("Use at your own risk. You have been warned.\n\n");
        }

        sub PrimaryValidations {
            unless (-e "$_curl") {
                printf("ERROR:curl is not at %s. Pls install or provide correct path.\n", $_curl);
                exit;
            }
            unless (-e "$_extFile") {
                printf("extFile is not at %s. Creating one\n", $_extFile);
                `touch $_extFile`;
            }
            if ($_chunkSize <= 0) {
                printf("Invalid chunk size. Using 1Mb as default.\n");
                $_chunkSize = 1024*1024;
            }
        }
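
    A side note on spotting this from the proxy side, as a sketch (it assumes Squid's native access.log format, where the fourth field is the result code, e.g. TCP_MISS/206): the script works by issuing many small HTTP Range requests through curl, so each chunk comes back as a 206 Partial Content response. Counting 206s per client is a cheap way to see who is running it:

        #!/usr/bin/python
        # count_206.py -- tally HTTP 206 (Partial Content) responses per client
        # in a Squid access.log; heavy hitters are likely chunked downloaders.
        import sys
        from collections import defaultdict

        counts = defaultdict(int)
        with open(sys.argv[1]) as log:
            for line in log:
                fields = line.split()
                # Native Squid log: time elapsed client code/status bytes ...
                if len(fields) > 3 and fields[3].endswith("/206"):
                    counts[fields[2]] += 1   # field 3 is the client address

        for ip, n in sorted(counts.items(), key=lambda kv: -kv[1]):
            print("%-15s %d" % (ip, n))

    Actually blocking the technique is a Squid policy question (for example, denying or limiting Range requests), which the eventual answers would cover better than a log script.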

    Read the article

  • I can't download or stream for more than 3 sec, and then the connection activity just dies

    - by JMHein
    Just got a new internet connection installed at my sister's place, but it randomly just stops working. At first it was only affecting Flash videos: they would randomly just stop buffering. I did a lot of research on this and found that there can be many things that cause this exact trouble. I then tried IE and some Flash would stream fine, but still random deaths. So I told my brother-in-law to reset the router and modem, and that fixed the problem for them but not for my laptop. I then started trying to fix the Flash problem, only to find that downloads of any kind were affected. Now it is so bad that 50% of page loads never finish, because the connection activity drops to 0% within a split second. I can't get Flash reinstalled because the installer is trying to download, but the download dies at 8%. I tried uploading a large file by FTP to a web server with no trouble. Yet any activity on my end that takes longer than about 1 second to finish just never finishes. I can watch the network log in the task manager and it spikes for roughly one second, then drops back to zero; when I go back to the web page it says it is still loading, and no matter how long I let it sit it never does anything more until I reload, at which point it will again create a very short spike of activity on the connection and then drop to zero. Also, if I start a download and it does drop off, I can restart the download where it left off and get up to 100KB/s for around the same one second, then it drops to around 14KB/s, then zero a second later... I am running Win 7 Home Premium x64 with FF11 and IE8. I have simply tried everything I can, short of calling up the ISP, which very likely will get me nowhere fast. Any advice on what steps to take to figure this out would be nice. I am not even sure it is not just an ISP problem. (At least I should be able to get Flash reinstalled once I get back home.)

    Read the article

  • django, mod_wsgi, MySQL High CPU - Problems

    - by Red Rover
    Good evening, and thank you for reading this post. I am having a problem with Django after migrating the DB from SQLite to MySQL. Initially, for the first 48 hours, all ran well, but now we are experiencing high CPU about every 30 minutes. This is a production ESX4i VM host with 2 x 2.8 GHz CPUs and 12 GB RAM; I have allocated 4 CPUs and 4 GB memory to this VM. Any insight into this configuration and help with the spikes in CPU would be appreciated. It is configured to use the prefork MPM. Outlined are the configs for the different services:

        MySQL Server version: 5.1.61 Source distribution
        Django 1.3
        mod_wsgi
        Apache/2.2.15

        httpd.conf:
            Timeout 120
            KeepAlive Off
            MaxKeepAliveRequests 400
            KeepAliveTimeout 3

        prefork MPM:
            StartServers 8
            MinSpareServers 8
            MaxSpareServers 16
            ServerLimit 40
            MaxClients 40
            MaxRequestsPerChild 0

        worker MPM:
            StartServers 16
            MaxClients 1024
            MinSpareThreads 64
            MaxSpareThreads 256
            ThreadsPerChild 64
            MaxRequestsPerChild 10240

        MySQL my.cnf:
            [mysqld]
            datadir=/var/lib/mysql
            socket=/var/lib/mysql/mysql.sock
            user=mysql
            symbolic-links=0
            [mysqld_safe]
            log-error=/var/log/mysqld.log
            pid-file=/var/run/mysqld/mysqld.pid

        wsgi.conf (/etc/httpd/conf.d/wsgi.conf):
            LoadModule wsgi_module modules/mod_wsgi.so
            WSGISocketPrefix /var/run/wsgi
            WSGIPythonEggs /var/tmp
            WSGIDaemonProcess SITE maximum-requests=10000
            WSGIProcessGroup SITE

    Read the article

  • What can I do to lower bandwidth cost on a bandwidth heavy site?

    - by acidzombie24
    The easiest answer is CDN, but I'd like to ask anyway. A friend of mine has a server that is used for mirror downloads. He says he is doing about 10TB of bandwidth a month, which shocked me (I wonder if he is lying). I've seen his site and he has no ads. I suspect he might close his website once he gets the bill. Anyway, I was wondering, since his CPU/RAM is not being used and his HD usage is around 15GB, what he can do to lower cost if he continues this site. I said put up ads, but I don't know if ads would cover it. I found one CDN which offers $0.070/GB; 10240GB (10TB) * 0.07 = $717 a month. That seems a little steep, but he is using lots of traffic due to it being a mirror site. Also, using a CDN doesn't really make sense, as he doesn't need multiple servers hosting the files in different areas (which is one reason he isn't using one now); he just needs a big upload pipe. Is there something he can do? At the moment he is paying $200 a month for a dedicated server, and he is using WAY more bandwidth than he should be. Side question: can gz-ing already-compressed files (zip, rar, etc.) help?
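
    For a quick sanity check on the numbers quoted above (a sketch; the $0.07/GB and $200/month figures are the ones from the post), the break-even CDN rate is easy to compute:

        # Bandwidth cost arithmetic for the figures quoted in the question.
        monthly_gb = 10 * 1024            # 10 TB/month
        cdn_rate = 0.07                   # quoted CDN price, $/GB
        dedicated = 200.0                 # current dedicated server, $/month

        print("CDN cost:   $%.2f/month" % (monthly_gb * cdn_rate))   # ~$716.80
        print("Break-even: $%.4f/GB" % (dedicated / monthly_gb))     # ~$0.0195

    In other words, a CDN only wins on price here if the per-GB rate drops below about two cents.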

    Read the article

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient with compression. I did a few tests on different file types and sizes, comparing the 7-Zip and WinRAR default settings at their normal compression and their best compression; in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

    1. Is there a way to make 7-Zip speed up? I'd like it to at least be on par with WinRAR's speed.
    2. Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command line thing.
    3. I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz)/(2.8 GHz) with 4 GB DDR3 RAM and the 64-bit version of 7-Zip, and dual-boot Debian x64 5.0.4 and Windows 7 Home.
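
    On question 1, the command-line 7z tool exposes the speed/ratio trade-off directly; a sketch (the archive and folder names are placeholders, but -mx and -mmt are standard 7z switches: -mx=1 favors speed, -mx=9 favors ratio, -mmt=on enables multithreading):

        import subprocess

        # Fast archive: low compression level, multithreading on.
        subprocess.check_call(
            ["7z", "a", "-mx=1", "-mmt=on", "backup.7z", "data/"]
        )

    The equivalent settings exist in the GUI ("Compression level" and CPU thread count), so comparing WinRAR's default against 7-Zip at -mx=1 or -mx=3 is the fairer speed test.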

    Read the article

  • Installing the MySQL Ruby gem on 64-bit CentOS

    - by Jacek
    Hi, I have a problem installing the mysql ruby gem on a 64-bit CentOS machine:

        [jacekb@vitaidealn ~]$ uname -a
        Linux vitaidealn.local 2.6.18-92.el5 #1 SMP Tue Jun 10 18:51:06 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

    The mysql and mysql-devel packages are installed. mysql_config provides the following paths:

        Usage: /usr/lib64/mysql/mysql_config [OPTIONS]
        Options:
          --cflags         [-I/usr/include/mysql -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv]
          --include        [-I/usr/include/mysql]
          --libs           [-L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto]
          --libs_r         [-L/usr/lib64/mysql -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -L/usr/lib64 -lssl -lcrypto]
          --socket         [/var/lib/mysql/mysql.sock]
          --port           [3306]
          --version        [5.0.45]
          --libmysqld-libs [-L/usr/lib64/mysql -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -L/usr/lib64 -lssl -lcrypto]

    Trying to install:

        [jacekb@vitaidealn ~]$ gem install mysql -- --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        ...
        ERROR: Error installing mysql:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

    I would appreciate any help. Thanks for reading :).

    Read the article

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it goes and fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am trying to change the settings on the application pool to make sure that IIS does not recycle the application. So far, I've changed the following:

    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.

    Will this be enough? And I have specific questions about the items I changed:

    1. What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    2. What exactly does "recycled" mean? Is the application completely torn down and started up again?
    3. What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for Idle Time-out under Process Model talks about shutting down the worker process, while the docs for Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two. I thought w3wp.exe is the worker process which runs the application pool. Can someone explain the difference, as far as the application is concerned, between the two?

    The reason for having both IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same between the versions.

    Read the article

  • Compaq R4000 laptop randomly locking up

    - by Josh
    I have a Compaq R4000 laptop with 2GB of RAM, running Ubuntu Linux 9.10. It is randomly locking up on me, approximately once every two days. I have a second partition with Windows XP Home installed, and I have had the system lock up in XP as well, meaning I believe this is a hardware issue. I have run two passes of Memtest86+ with no errors. The system has a fan that has died, so I initially suspected overheating. However the system just locked up on me while I was in the middle of typing a script to warn me / shut down if the temperature was too high. When the lockup happened the temperature was 88°F, so I am now starting to believe that may not be the issue. When the system locks up, I cannot SSH in nor ping it. Nothing shows in syslog when I reboot. I have configured it to send syslog messages to a local server as well and no messages appear on that server when the lockup happens. I am open to any and all advice!

    Read the article

  • I love Google Chrome, but some non-static pages like Piwik render it unresponsive

    - by gogowitsch
    The web-stat software Piwik stops reacting to mouse clicks after 1-2 seconds. The same is true for Google Maps and Producteev (but GMail and most other pages work like a charm). These rely heavily on JS, and work without Flash. I can click for a very short time period, and then the mouse cursor doesn't feel the UI anymore: it doesn't turn into an I-beam over input fields, though it still moves (if the freeze occurred while the pointer was over an input field, the cursor keeps being an I-beam), and all clicks on the DOM are ignored by Chrome. No message appears, neither obvious nor in the Console (F12). There is no obstructing div or the like in the DOM (F12). Since I couldn't find any hints on the source of my problems, I suspected my plugins and extensions. Unfortunately, neither deactivating all plugins nor all extensions solved the problem. Additional observations:

    - for the problematic pages, it always happens
    - no Dropbox running
    - several GB of free RAM
    - the task manager doesn't show any high CPU or memory utilization (the offending tab uses 30 MB and 0-1% CPU)
    - all problematic pages work in other browsers (Chrome, Firefox, IE)
    - the rest of the computer is very responsive
    - the computers use different security suites (Kaspersky and Avira)

    The effect exists between several (synchronized) Chrome instances on different machines, all running Windows 7. Both the OS and Chrome are updated automatically. Other tabs and the Chrome chrome (tabs, menus, toolbar buttons of the browser itself) still work. I really don't like switching between browsers. Any ideas?

    Read the article

  • keyboard intermittently stops working even after reinstalling windows 7; possibly a Chrome issue?

    - by neverskipbreakfast
    My keyboard intermittently stops working. Sometimes a couple of keys will work, but usually none. Sometimes if I mash the Ctrl+Alt+Windows keys randomly for a bit, the keyboard will let me type one more letter before stopping again. Sometimes the keys will open a program menu, but usually not. I have even completely wiped my machine and reinstalled Windows 7; the problem continues. Specs: Intel iMac (early 2006, 2.0GHz, 2GB RAM, 240GB HD) running ONLY Windows 7 Professional, 32-bit (NOT through Boot Camp) and using a USB keyboard (Saitek Eclipse II).

    - Unplugging & reconnecting the keyboard does NOT fix it.
    - Connecting a different keyboard does NOT fix it. That one won't work, either.
    - Drivers are up to date. Removing and reinstalling drivers does NOT fix it.
    - Restarting the computer does NOT fix it. In fact, when the Windows logon screen appears the keyboard won't work, and neither will the icon to pull up the on-screen keyboard. My mouse can click around just fine. I can only log onto a non-password-protected account.
    - Generally, logging in as a different Windows user fixes it. I can then log back on to my main user account and continue work for a few hours until it happens again.
    - Clearing my Chrome browsing data stopped the problem from recurring for a week or so.
    - I have already REINSTALLED Windows 7 (not just a restore). The problem returned after 2 days of use.
    - I use Avira free antivirus software, and repeated scans turn up nothing fishy.

    I suspect it is related to something in Google Chrome, because I used my Google account to reload all my previous Chrome extensions, saved data, etc. (Chrome extensions installed: AdBlock, Better Google Tasks, DropBox, FB Photo Zoom, Google Mail Checker, StayFocusd.) Any ideas? Any at all?

    Read the article

  • Server 2008 R2 DNS Lockup / Stops Resolving Internet Names

    - by Richard Maynard
    We've deployed our first 2008 R2 server on a client site, replacing their existing 2003 DC. This server provides DNS resolution services to all client machines on that site for general internet usage. Since using the 2008 R2 DNS services, we have noticed that every couple of days the DNS server starts timing out when requests to certain sites are made (Google is the only example I can provide at this time, although it seems to be larger sites that have problems rather than small ones; a CDN compatibility issue?). When you restart the DNS Server service, resolution returns to normal... but only for a day or so. Is anybody aware of any significant changes to the DNS server architecture or configuration out of the box in R2 that may explain this intermittent behaviour? I have already tried the fix listed here to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx

    The following PS command prompt info illustrates the issue:

        PS C:\Users\Administrator.UK> nslookup
        Default Server: s8209001.uk.kingdomfaith.com
        Address: 10.1.3.4

        > www.google.com
        Server: s8209001.uk.kingdomfaith.com
        Address: 10.1.3.4

        Non-authoritative answer:
        Name:      www.l.google.com
        Addresses: 66.102.9.99, 66.102.9.104, 66.102.9.105, 66.102.9.103, 66.102.9.147
        Aliases:   www.google.com

        > www.google.co.uk
        Server: s8209001.uk.kingdomfaith.com
        Address: 10.1.3.4

        *** s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed

    Read the article

  • Is it possible to rate limit based on host headers? i.e. not just on ip address

    - by Blankman
    I have a web service endpoint that I am building where people will post an XML file, and it will really get pounded, with over 1K requests per second. They send these XML files via HTTP POST, but a good majority of them will be rate limited. The problem is, the rate limiting is done by the web application by looking up the source_id in the XML, and if that source is over x requests per minute, the file is not processed further. I was wondering if I could do the rate-limit check earlier in the processing somehow, and thus save the 50K file from going through the pipeline to my web servers and eating up resources. Could a load balancer make a call out to verify rate usage somehow? If this is possible, I could maybe put the source_id in a host header so the XML file doesn't even have to be parsed and loaded into memory. Is it possible to just look at host headers and not load the entire 50K XML file into memory? I really appreciate your insights, as this takes more knowledge of the entire TCP/IP stack etc. than I have.
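
    The header idea can be pictured with a sketch (the X-Source-Id header name and the fixed-window limits are made-up placeholders, not anything the poster's stack defines): a thin front-end layer reads only the request headers, checks a per-source counter, and answers 429 without ever touching the request body, so the 50K XML is never parsed or buffered by the application. A minimal WSGI middleware version of that check:

        import time

        class HeaderRateLimiter(object):
            """Reject requests whose X-Source-Id header (hypothetical) is over
            its per-minute budget, without reading the request body."""

            def __init__(self, app, limit_per_minute=60):
                self.app = app
                self.limit = limit_per_minute
                self.windows = {}  # source_id -> (window_start, count)

            def __call__(self, environ, start_response):
                source = environ.get('HTTP_X_SOURCE_ID')
                if source is not None:
                    now = time.time()
                    start, count = self.windows.get(source, (now, 0))
                    if now - start >= 60:          # roll to a new 1-minute window
                        start, count = now, 0
                    count += 1
                    self.windows[source] = (start, count)
                    if count > self.limit:
                        # 429 decided from headers alone; environ['wsgi.input']
                        # (the 50K body) is never read or parsed.
                        start_response('429 Too Many Requests',
                                       [('Content-Type', 'text/plain')])
                        return [b'rate limit exceeded\n']
                return self.app(environ, start_response)

    Many load balancers and reverse proxies can do the same check natively on a header (for example, nginx's limit_req with a zone keyed on a header variable), which keeps the rejected traffic off the web servers entirely.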

    Read the article

  • UPS with a HP Proliant server

    - by Groo
    We placed an Eaton Ellipse MAX 1500 (900W) as the UPS for our HP ProLiant ML350 G6. Upon the first power failure (actually, we only moved the UPS's input plug to a different socket), the server immediately turned off, and the Health LED turned red and started blinking. The UPS had been in operation for about a week before that, with the battery fully charged to 100%. Since our server's hot-plug supply is 460W, we are pretty sure we haven't overloaded it; the server was completely idle at that time (no web or Windows apps running except Windows Server core services). We then tried the same with a different, no-name older PC (Core 2 Duo, 2GB RAM) with a generic power supply (not sure what its rating is), and it continued working when we pulled the plug out. UPS load was less than 15% (measured in the provided Eaton utility). We measured the UPS's output voltage using a smart oscilloscope, and the THD of the UPS output waveform turned out to be 40%. Did you have similar experiences? Could this be a faulty UPS? Or a faulty power supply? Or some HP sensors configured to trigger too strictly? I wouldn't like to replace this UPS with the same brand only to get the same results. [Edit] I also tried this while the server was turned off. While the UPS is running on battery, the server will not start: as soon as I press the power button, the Health LED starts blinking red.

    Read the article

  • Can't Get Mac Mini to turn on - no screen, no beep, only the fan, power light, and optical drive noise

    - by pibyers
    I have an Intel-based Mac mini, mid-2007 (Model A1176). This is the computer my kids use, so I don't use it regularly. The computer had been working fine until one day my kids told me that it no longer works. The computer will not boot up. When I turn it on, the fan turns, the white power light in the front turns on, and there is a sound that appears to be from the optical drive (rather than the hard drive). I don't get anything on the monitor, nor do I get any dings or other startup sounds from the computer. Here is what I've tried thus far, to no avail:

    1) Swapped out the monitor early on, since I figured that was my weak link - no change.
    2) Reset the PMU - no change.
    3) Tried to boot up from the system disk - the mini loaded the DVD into the drive, but nothing else (I can't eject the disk so I can put it back).
    4) Started up the computer in target mode connected to another Mac - I tried this too, but I never got a chime, nor did the disk show up on the other Mac.

    I'm about out of ideas, apart from scrapping the computer. Does anyone have any ideas I can try? Again, nothing has been done to the computer in at least 6 months, when I upgraded the RAM. I'm also still on Leopard. Thanks.

    Read the article

  • Setting up a Pagefile and Partition in Server 2008

    - by Brett Powell
    I am setting up 18 new machines for our company, and I have instructions from my new boss on setting up a pagefile and partitions. I have looked at their existing machines to base the new setups on, but there is no consistency between any 2 machines, which has left me extremely frustrated, to say the least. My instructions are:

    1) Set a static pagefile (use the recommended value as max/min); put it on the SSD if an SSD is available.
    2) Make 3 partitions:
       C: is used for OS and install files.
       D: is used for backups on machines with an SSD. On machines without an SSD, create a D: partition for the pagefile (2*installed RAM for partition size).
       E: must be the partition hosting user files.

    I have never messed with pagefiles before, and looking at their existing machines is offering no help. My questions are:

    1) As the machines I am setting up have no SSD (just 2 SATA drives), does it sound like the pagefile should be set up on the C: (primary) drive or the D:? The instructions are vague, so I have no idea.
    2) As C: and D: are both physical drives, does it sound like C: should be partitioned out to create the E: drive, or D:?

    Thanks for any help I can get. I am extremely stressed out under a massive workload right now, and these vague instructions are quite infuriating.

    Read the article

  • Can I edit the snapshots of websites on most visited sites page in Chrome?

    - by arik
    I saw the previous chain of discussion but did not find a way to continue the thread. I have found the "Preferences" file (I have Windows 7), but in it I did not find clearly what to modify. I did find a section called "URLs pinned" or something like this, but it did NOT fully match the ones I have. I have activated profile sync for Chrome - I don't know if it has any effect. Can you manually edit the snapshots of the most visited sites on the new tab page in Chrome? When I open a new tab in Chrome, I get the "new page" tab, where I selected "most popular", so I have 8 icons with the 8 most popular sites I visited; I can also pin any one of those I see on screen, so that they remain permanently there. We are missing one which would point to, say, Hotmail, so I was looking for a way to add Hotmail as one of the 8. (And, by the way, we ticked the 'X' on one of those 8, and now it remains grey / shows nothing in it.) So, my double question:

    1. How can I add a URL of my choice into one of those 8 spaces?
    2. How can I restore usage of the last one?

    Read the article

  • How to compare old CPU to new CPU?

    - by Lasse V. Karlsen
    I hope this question doesn't get closed at once :) I have an old laptop, a Compaq NC4200, which is going its final laps around the track these days. The battery is dead, and everything kind of runs slow. It also has only 1GB of memory, and even though I don't know if it can take more, I probably wouldn't be able to get hold of any that matches without having to special-order it. The size, however, has been ideal for my usage pattern, so I'm looking to replace it with a similarly sized laptop, at least in the same size category. However, it's been a while since I tried keeping track of CPUs, so I have a question. The old laptop has an Intel Pentium M 760 1.86GHz processor. One laptop I found online has an Intel Pentium SU4100 1.3GHz dual-core. This type of processor seems to be quite common in the price and size range I've been looking at. What kind of relative performance boost could I expect from the old one to the new one? I am not expecting "about 7.45x speed", but some indication would be nice. For instance, dual-core tells me it might be akin to 2.6GHz, but I assume I can't simply compare 1.86GHz to 2.6GHz and expect the new one to run about 1.4x as fast; I expect more these days. Or is that unrealistic for this kind of processor? Do I need to up my price range and go for a 2+ GHz processor?
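
    The naive arithmetic can at least be written down explicitly (a sketch only: the per-clock gain and the parallel fraction below are illustrative assumptions, not benchmark numbers):

        # Rough scaling estimate: Pentium M 760 (1.86GHz) vs. Pentium SU4100
        # (2 x 1.3GHz). ASSUMED: the newer core does ~1.4x the work per clock;
        # half of a typical workload can use the second core (Amdahl's law).
        old_ghz, new_ghz, cores = 1.86, 1.30, 2
        ipc_gain = 1.4            # assumed per-clock improvement
        parallel = 0.5            # assumed parallelizable fraction

        single = new_ghz * ipc_gain / old_ghz           # single-thread ratio
        overall = single / ((1 - parallel) + parallel / cores)
        print("single-thread: ~%.2fx" % single)         # ~0.98x
        print("mixed workload: ~%.2fx" % overall)       # ~1.30x

    The point of the sketch is the shape of the answer rather than the numbers: per-clock gains are what help single-threaded work, while the second core only pays off in proportion to how parallel the workload is.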

    Read the article

  • Intermittent unavailability of an instance in a failover cluster while a standby node is offline in

    - by Emil Fridriksson
    Hi everyone. I've got a small failover cluster that I run for the websites my company has. During a RAM upgrade of the standby server, our websites started to show errors about not being able to access the database server. I verified that the instance was indeed up and the server accessible via remote desktop. I also tried a SQL connection to it and it worked, but that might have been after it became available again. This happened on and off until we were able to roll back the hardware changes that were in progress on the standby server and bring it back up. There was nothing of interest in the SQL Server log, but there is a continuous log for the whole duration of the problem, so there was no restart of the SQL Server service. The event viewer is of more interest, since it shows events relating to the heartbeat network card, but I don't know how that would affect the availability of the server, since the standby node is offline. I'd appreciate any help you can provide; it's not very redundant if the setup depends on the standby server being up. :) Here are the event logs from the time of the problem. I include all of them, since I can't seem to see what could possibly be the cause: http://hlekkir.com:800/htmltable.htm

    Read the article

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.

        Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
        CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C (Conroe, 65nm)
        RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
        Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
        Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
        Hard Drive: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
        Optical Drive: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
        Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so. Then the data will just stop flowing. Windows says I am still connected, and the systray icon still shows five bars, but Comodo Firewall stops showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually or unplug and re-plug the USB adapter, at which point the connection will work properly for another 15 minutes. I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem, starting about a week ago; for the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?

    Read the article

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive with Windows 7 to a traditional hard drive, the average speed is between 25 and 30 MB/s. When doing the reverse, copying to the USB drive, the speed is 5MB/s average. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same on both FAT32 and exFAT file systems on the USB drive, NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has a lot higher performance than a spinning hard drive in both terms of reading and writing. For both memory types, reading should be faster than writing too. Now I wonder, how can it be that copying files from a fast read memory to a faster write memory is actually slower than copying files from a fast read memory to a slow write memory? I think that the files are stored in RAM before being copied over too, and there's caching as well, but I don't see how even that could tip the balance. It can only be in the advantage of writing to the USB drive, since it is "closer" to the SATA system than the USB port and it will receive data from the internal SATA HDD faster. Perhaps my way of thinking is all wrong or it just depends on the manufacturer of the USB pen. But I am curious.
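
    One way to take the guesswork out, as a measurement sketch (the drive-letter paths are placeholders for the internal disk and the USB stick), is to time a sequential write of the same data to each target, forcing a flush so Windows' write cache doesn't flatter the result:

        import os
        import time

        def write_speed(path, size_mb=256, chunk_mb=4):
            """Sequentially write size_mb of random data to path; return MB/s."""
            chunk = os.urandom(chunk_mb * 1024 * 1024)
            start = time.time()
            with open(path, 'wb') as f:
                for _ in range(size_mb // chunk_mb):
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())   # force data out of the OS write cache
            elapsed = time.time() - start
            os.remove(path)
            return size_mb / elapsed

        print("internal HDD: %.1f MB/s" % write_speed(r"C:\speedtest.bin"))
        print("USB stick:    %.1f MB/s" % write_speed(r"E:\speedtest.bin"))

    If the stick really does write at ~5 MB/s here, the bottleneck is the flash itself rather than the copy direction: inexpensive USB sticks commonly read far faster than they write.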

    Read the article

  • New SSD freezing on older motherboard (intel G31)

    - by DJM
    I have an ECS G31T-M motherboard running a Core 2 Quad processor, 4GB RAM, Windows 7 32-bit, and a GeForce 9600GT. I bought a SanDisk Extreme III 120GB (SDSSDX-120G-G25) and installed it today. I'm outputting from the 9600GT with the included TV-out adapter to component video (to my HDTV). This motherboard is SATA 2, and from what I can tell the SSD runs on the IDE controller, with nothing fancy to set up for the advanced features of SSDs. I've noticed on other forums (but have not verified with ECS) that this board does not support AHCI. I have two versions of Windows 7 installed on two drives, the SSD and an old 500GB disk drive. When booted from my older 500GB HDD, video plays fine on the HDTV. When booting the SSD Windows 7 install, I am freezing constantly, as in: video plays OK for a minute, then the picture freezes for 1-3 minutes (sometimes while audio continues playing) and returns for 20-30 seconds before doing the same thing again. Other tasks, such as basic maneuvering through file folders, seem to be no problem. Please help!! Do I need a new system for this thing to work, or could there be other fixes? I updated the firmware to R201 to no avail. DJM

    Read the article

  • Laptop recommendation - Portable Gaming

    - by ivan
    So, I'm looking for a new laptop (http://superuser.com/questions/116869/toshiba-satellite-u500-totally-damaged-lcd). My requirements for a new laptop are:

    - a good keyboard (illuminated) and touchpad (multimedia keys included; should be better than the Toshiba U500's)
    - a good graphics card, with a system rating of 6.3 and up for gaming graphics (my Toshiba U500 has 6.3). I used to run some heavy games on my Toshiba U500 with an ATI Mobility Radeon 4570 with 512 MB VRAM, but the framerates are not that nice on high settings.
    - a decent CPU, but I think all new Core i3, i5, and i7 chips can run most recent resource-intensive games (my Toshiba U500 has a Core 2 Duo T6500, 2.13 GHz)

    I'm also looking for long-term reliability, good sound quality, lots of fast RAM (4GB DDR3-1066 and up, of course), and a clear-looking LED screen with a decent resolution. (I can accommodate a laptop with a screen size of 13 inches up to 15.6 inches, and I don't want it to be heavy because I might be taking it outdoors.) I'm actually impressed by the HP Pavilion DV6t, but the screen resolution seems a little too low for 15.6 inches. The Pavilion DV3 machines are also good, but I want to know if there are other options. Looking for some opinions... Thanks. :D

    Read the article
