Search Results

Search found 20869 results on 835 pages for 'things i hate'.


  • Strange ASP.NET Queue Performance Counters Behavior?

    - by LemurTech
    We have an ASP.NET 2.0 site running in classic mode. I am seeing very strange behavior in the performance counter values. Perhaps these are bugs (I've been all over Google trying to verify this, without much luck), or perhaps it is just my inexperience with monitoring these things. This PerfMon graph (http://imgur.com/Jv5io5J) represents a load test where I add up to 350 virtual users to the site, at a rate of about 1/sec, performing relatively simple page browsing. At the end of the test, I gradually taper off the number of users. This is a 4 CPU server. Machine.config settings are at the defaults. The solid blue line is ASP.NET Apps v2.x\Requests Executing for the application in question. The profile makes perfect sense, with a quick ramp-up to 32 executing requests (minWorkerThreads x 4 CPUs), followed by a slower ramp-up to 48 ((maxWorkerThreads - minWorkerThreads) x 4 CPUs). The solid yellow line is ASP.NET v2.x\Requests Queued. Again, this makes sense: after the initial 32 request threads are activated, the queue begins to build as new thread initialization can't keep pace with incoming requests. But as executing requests reaches its highest possible value of 48, the counter for ASP.NET Apps v2.x\Requests Queued (green solid line) suddenly springs to life and keeps step with the yellow counter. As far as I can tell, and with no other apps running on the server, these two counters should have had the same values from the start. One other odd thing: the counter for ASP.NET v2.x\Request Wait Time (dotted yellow line) also does not spring to life until executing requests reaches 48. Shouldn't I be seeing values here from the moment ASP.NET v2.x\Requests Queued begins to build? And likewise, why would ASP.NET Apps v2.x\Request Execution Time (dotted blue) increase significantly only after that peak of 48 is reached? Shouldn't it ramp up gradually along with queued requests?

    Read the article

  • Resource Monitor (resmon) in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of having them vary based on the data? It seems to me that the utility of a graph is to give a quick overview of the values it is showing. So if I look at the CPU graph and the line is up near the top, I know immediately that something is using all my CPU and can go investigate what. I don't really care if the CPU is jumping between 0.01% and 2%. Or if the network usage monitor is up near the top, I will know that all my bandwidth is being used up, and go figure out what. But the way things are now, the graphs are meaningless because the scales constantly shift. If you look at the network usage graph, one second it might have a scale topping out at 100 Kbps, and the next second a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums? (I mean the graphs on the right-hand side of the Resource Monitor window.)

    Read the article

  • HP Officejet 4500 G510n-z Not Showing up in Remote Desktop (Terminal Services)

    - by Greg_the_Ant
    I installed this printer on a Windows XP machine, first using the wireless option and later using USB. In both cases, when I connect to my other computer (also Windows XP) via Terminal Services and check printers in the Local Resources tab, it does not show up in the remote session. I used to have a Samsung connected to my local computer over USB and that worked fine over Terminal Services. Things I tried so far: I did read this page and installed the software fix on both computers: (Printers that use ports that do not begin with...) I installed the minimum HP software install on the remote computer and that didn't help either. I also tried running the Add New Printer wizard on the remote computer: I selected "local printer attached to this computer" and did not check the "automatically.." option. On the next page of the wizard I can select an option for "use the following port". I see options for TS001 through TS009 there. I'm assuming those are coming from the local machine. I tried clicking each one and then checking "have disk" and pointing it to C:\3be8dc611b11322e8ddf8a67\i386\msxpsdrv.inf [1], but for every single TS00.. port it says "The specified location does not contain information about your hardware." Any help would be greatly appreciated. I'm pretty stuck at this point.
    [1] C:\3be8dc611b11322e8ddf8a67 is the folder I extracted the HP driver software to after I downloaded it.

    Read the article

  • Correcting owner/permissions on damaged directory tree in linux

    - by mcs130
    I inadvertently made a backup copy of a directory recursively and forgot the -a (--preserve) switch when doing so. This damaged my backup directory (which contains data we need to access). The directory and all of its child folders and files comprise an installation of an application, including Postgres DB and Solr files. The original copy was used for a failed re-config attempt. Now I need to use the backup copy to start over, only the ownership of the backup copy is now root across everything and it is no longer usable (processes won't run due to the ownership problems I created when I forgot the -a on the cp -r). I've re-installed a clean copy of the application into a third location now (which has the correct owner/perms) and need to copy the owner/perms from this good directory over onto the damaged directory. What is the best way (if even possible) to do this? (I've Googled and seen things from Perl scripting to setfacl/getfacl, but am unfortunately still confused.) Apologies if this seems a dumb question. Thanks.
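
    Since both trees share the same relative layout, one low-tech option is to walk the clean install and mirror its owner, group, and mode bits onto the matching paths in the damaged copy. The sketch below is a minimal illustration of that idea in Python, not a tested tool: the two paths are placeholders, it must run as root, and it simply skips anything that exists only in the reference tree. The getfacl -R / setfacl --restore route that turns up in searches does essentially the same job from the shell.

```python
#!/usr/bin/env python
# Minimal sketch: copy owner/group/mode from a known-good tree onto a damaged
# copy that has the same layout. Paths are placeholders; run as root.
import os
import sys

GOOD = "/opt/app-clean"      # fresh install with correct ownership/permissions
DAMAGED = "/opt/app-backup"  # tree whose ownership was clobbered by cp -r

def mirror(good_root, damaged_root):
    for dirpath, dirnames, filenames in os.walk(good_root):
        rel = os.path.relpath(dirpath, good_root)
        pairs = [(dirpath, os.path.join(damaged_root, rel))]
        for name in filenames:
            pairs.append((os.path.join(dirpath, name),
                          os.path.join(damaged_root, rel, name)))
        for src, dst in pairs:
            if not os.path.lexists(dst):
                continue                      # exists only in the reference tree
            st = os.lstat(src)
            os.lchown(dst, st.st_uid, st.st_gid)
            if not os.path.islink(dst):
                os.chmod(dst, st.st_mode & 0o7777)

if __name__ == "__main__":
    if os.geteuid() != 0:
        sys.exit("Run as root so chown is allowed.")
    mirror(GOOD, DAMAGED)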

    Read the article

  • Lose internet connection, yet online games continue

    - by Mike
    For the past week or so, my internet connection has been anything but stable. Restarting my modem/router always fixes the problems, but since it has occurred so often, I'm noticing confusing patterns which I was hoping someone could help answer. My internet connection kicks out about 4-5 times a day. The sure-fire way to fix it is to restart my all-in-one modem/router. Sometimes I can diagnose the problem on my laptop which resets my wireless network adapter and fixes the problem, but not always. If that doesn't fix the problem, it usually reports that the connection between the modem and internet is the problem which requires a restart of the router. The odd thing which baffles me is that my connection is supposedly lost such that no browsers can connect to sites, yet things like online games still continue to play without issue. How is this possible? I thought maybe the game was running locally on my PC but that couldn't be the answer because I was still getting messages from other players. So my real question is: How can my internet browsers (firefox, chrome, even IE) lose connection to the internet, but other applications like online games not? Am I actually losing connection or am I mistaken? Edit: I'd also like to add that netflix on my PS3 which is directly connected to the same access point will also lose connection. So internet browsers and netflix lose their internet connection while online games continue without an issue.

    Read the article

  • How can I debug user mode driver failures in Windows 8?

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally. Metro apps won't work. The system may or may not log in. Desktop apps may or may not be able to do things. When I remove the card and restart, all is fine. As soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have. Unfortunately I only have one 32 GB card, so I can't test with others. From examining the system event log I've determined this is happening due to a user mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere? Details:
      System
        Provider Name: Microsoft-Windows-DriverFrameworks-UserMode
        Provider Guid: {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
        EventID: 10110
        Version: 1
        Level: 1
        Task: 64
        Opcode: 0
        Keywords: 0x2000000000000000
        TimeCreated SystemTime: 2012-10-29T00:51:57.532718300Z
        EventRecordID: 40417
        Correlation
        Execution ProcessID: 1056, ThreadID: 3796
        Channel: System
        Computer: thebrain
        Security UserID: S-1-5-18
      UserData
        UMDFHostProblem lifetime: {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}
        Problem code: 3, detectedBy: 2
        ExitCode: 3
        Operation code: 259
        Message: 72448
        Status: 4294967295
    Edit 1: I tried using DebugView from Sysinternals (you can get it here: http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx). The output it gave was not especially helpful. Then I tried connecting WinDbg to WUDFHost.exe (the process that seems to host user mode drivers) to see if it could catch the error. Get it here: http://msdn.microsoft.com/en-US/windows/hardware/hh852363 Instructions: http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx That didn't help much. It didn't catch any exceptions as I'd hoped (which would at least point me to the cause of the crash). Here's the stack of one of the threads:

    Read the article

  • Headless VirtualBox VM NAT Network

    - by dirt
    I have a remote Linux server accessible through SSH only. My goal is to host multiple virtual machines on this host server using VirtualBox. The host server has one IP address, so NAT will be used to route to the VMs; for example, 10022 will forward to server1:22 and 20022 will forward to server2:22. I have installed VirtualBox and copied a pre-configured CentOS VM to the host server. I start the VM, but cannot establish a connection to the server; for example, ssh -p 10022 127.0.0.1 times out. I've tried many things:
    Method 1: Copied existing .vdi, attached to new VM
    Method 2: Imported .ova VM (thought it would help any MAC re-init issues?)
    NAT network type, tried natnet1 192.168/16 and 10.0/16: VBoxManage modifyvm "hermes.awoms.com" --natnet1 "192.168/16"
    Port forwarding with and without specifying the VM IP in the modifyvm --natpf1 command:
      VBoxManage modifyvm "hermes" --natpf1 "guestssh,tcp,,10022,,,22"
      VBoxManage modifyvm "hermes" --natpf1 "guestssh,tcp,,10022,192.168.0.15,22"
    I can't see if the VM is even booting (VBoxHeadless "hermes" --start & runs with no errors), and I can't tell if the VM is getting an IP address. Is there anything else I can do to get more information from VirtualBox or from the VM starting up when the only access I have is SSH?

    Read the article

  • Python coding with VLC player (quite a basic query I expect)

    - by Todd
    I'm fairly new to the whole coding realm so my knowledge is fairly limited, and I can't seem to find any basic tutorials on how to use scripts with VLC player. More specifically, the reason I'm asking here is because I stumbled across a post on this site about playing random clips from random videos on VLC player automatically. This is the forum post: Playback random section from multiple videos changing every 5 minutes My situation is similar to this lovely gentleman's was, though he clearly knows a lot more about coding than I do. In short, I'd like to copy this coding into a file of some sort and apply it to VLC player myself. Only I'm not sure what file type I'd have to save it as (I have Python by the way, and I tried saving it as a .py file but I didn't know if it was correct or where to go from there). Additionally, I'm not sure how to get VLC to "read" the script, so to speak - is there a specific location the file needs to be, and do I run the script from another program or through VLC? I'll reiterate that I'm relatively new to this, so if anybody would be so kind as to post a quick list of steps on how to save/place the file and use it with VLC player I really would appreciate it! P.S. I'm not computer illiterate, I'm fine with most programs and I'd understand if you just said things like "C:\Program Files (x86)\VideoLAN\VLC\plugins" or "in VLC, select Tools Plugins and extensions", I just wouldn't catch on to anything about adding a line of coding that does something without being told exactly what to write! Many thanks in advance! :) Todd
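
    For what it's worth, here is a rough sketch of the kind of standalone .py script that thread describes, written against the python-vlc bindings (installed with pip install python-vlc, on top of a normal desktop install of VLC). It drives libvlc directly from Python rather than being loaded as a plugin inside the VLC interface, so you can save it anywhere (e.g. playclips.py) and run it with python playclips.py. The folder path, file extensions, and five-minute interval below are placeholders, not anything VLC requires.

```python
#!/usr/bin/env python
# Rough sketch: play a random section from a random video, switching every
# few minutes. Assumes the python-vlc bindings (pip install python-vlc) and
# a normal VLC install. Paths, extensions, and timings are placeholders.
import glob
import os
import random
import time

import vlc

VIDEO_DIR = os.path.expanduser("~/Videos")   # folder containing your video files
CLIP_SECONDS = 300                           # play each random section for 5 minutes

videos = []
for ext in ("*.mp4", "*.avi", "*.mkv"):
    videos.extend(glob.glob(os.path.join(VIDEO_DIR, ext)))

while videos:
    path = random.choice(videos)
    player = vlc.MediaPlayer(path)
    player.play()
    time.sleep(1)                            # give VLC a moment to open the file
    length_ms = player.get_length()          # total length in milliseconds
    clip_ms = CLIP_SECONDS * 1000
    if length_ms > clip_ms:
        player.set_time(random.randint(0, length_ms - clip_ms))  # random start point
    time.sleep(CLIP_SECONDS)
    player.stop()
```

    If get_length() comes back as 0 or -1, the file has not finished opening yet; sleeping a little longer before seeking works around that.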

    Read the article

  • fatal error 'stdio.h' Python 2.7 on Mac OS X 10.7.5 [closed]

    - by DjangoRocks
    I have this weird issue on my Mac OS X 10.7.5 machine:
      /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33:10: fatal error: 'stdio.h' file not found
    What caused the above error? This error has been bugging me and I can't install mysql-python, as I'm stuck at this step. I'm using Python 2.7.3. Things like Google App Engine (Python), Python scripts, and Tornado generally work on my Mac, but not mysql-python. I've installed MySQL using the dmg image and have copied the mysql folder to /usr/local/. How do I fix this?
    ======UPDATE======
    I ran the command and tried to install mysql-python by running sudo python setup.py install, but received the following:
      running install
      running bdist_egg
      running egg_info
      writing MySQL_python.egg-info/PKG-INFO
      writing top-level names to MySQL_python.egg-info/top_level.txt
      writing dependency_links to MySQL_python.egg-info/dependency_links.txt
      writing MySQL_python.egg-info/PKG-INFO
      writing top-level names to MySQL_python.egg-info/top_level.txt
      writing dependency_links to MySQL_python.egg-info/dependency_links.txt
      reading manifest file 'MySQL_python.egg-info/SOURCES.txt'
      reading manifest template 'MANIFEST.in'
      writing manifest file 'MySQL_python.egg-info/SOURCES.txt'
      installing library code to build/bdist.macosx-10.6-intel/egg
      running install_lib
      running build_py
      copying MySQLdb/release.py -> build/lib.macosx-10.6-intel-2.7/MySQLdb
      running build_ext
      gcc-4.2 not found, using clang instead
      building '_mysql' extension
      clang -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,4,'rc',5) -D__version__=1.2.4c1 -I/usr/local/mysql/include -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.6-intel-2.7/_mysql.o -Os -g -fno-common -fno-strict-aliasing -arch x86_64
      In file included from _mysql.c:29:
      /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33:10: fatal error: 'stdio.h' file not found
      #include <stdio.h>
      ^
      1 error generated.
      error: command 'clang' failed with exit status 1
    What other possible ways can I fix it? Thanks! Best Regards.

    Read the article

  • Does fast typing influence fast programming?

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. After some experience one realizes that this is not the case; you have to think much more than you type. At some point my room-mate forced me to turn off the light (he sleeps during the night). I had to learn to touch type, and I experienced an actual improvement in programming skill. The most surprising part was that the improvement was not due to sheer typing speed, but to a change in mindset. I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag. Has anyone of you had a similar experience? I have now trained touch typing a little with KTouch. I find auto-generated lessons the best. I can use this program to create new lessons out of text files, but that is only verbatim training, not lessons auto-generated from a language model. Do you know any touch typing program that allows creation of custom, but randomized, lessons?
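
    I don't know of a trainer that does exactly this, but the "randomized lessons from a language model" idea is easy to prototype. The sketch below (a hypothetical helper, not part of KTouch) reads any plain-text file, weights each word by how often it appears, and prints random practice lines whose word mix mirrors the source text; the output can be pasted into KTouch as a custom lesson.

```python
#!/usr/bin/env python3
# Generate randomized typing drills from a text file so the word mix follows
# real language statistics instead of repeating the file verbatim.
import collections
import random
import re
import sys

def build_lessons(corpus_path, lines=20, words_per_line=8):
    with open(corpus_path, encoding="utf-8", errors="ignore") as fh:
        words = re.findall(r"[a-z']+", fh.read().lower())
    freq = collections.Counter(words)
    vocab = list(freq)
    weights = [freq[w] for w in vocab]
    for _ in range(lines):
        # sample words in proportion to their frequency in the corpus
        print(" ".join(random.choices(vocab, weights=weights, k=words_per_line)))

if __name__ == "__main__":
    build_lessons(sys.argv[1] if len(sys.argv) > 1 else "corpus.txt")
```

    Run it as, for example, python3 lessons.py somebook.txt > lesson.txt and import the result; a bigram model would give more natural word order, but frequency-weighted sampling already beats verbatim lines for drill variety.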

    Read the article

  • httpd.config Easy Apache WHM CentOS

    - by jessie
    First let me explain how I got into this situation. I run a streaming video site. Videos are about 100-250 MB in size, and at any given time there are 500 people on the site. So I guess that would make them static. Recently my site started getting really slow, and the only way to fix it temporarily was to restart Apache. There was no change in traffic that could have caused this, and my site is not being attacked. My hosting company recommended implementing mpm_mod and suPHP. They did that by using EasyApache in WHM. After that everything was working fine, but a little slow. From what I've researched, my understanding is that mpm will do that but be more stable. I was told that installing FastCGI would speed things up just enough. Well, that made everything worse. The site is slow and times out. I used WHM and took FastCGI off, but it's still the same; it seems like nothing I do now changes anything. I even did a rollback on the httpd.conf file, but that didn't work. I'm not sure how to fix this, and my hosting network guy won't be able to touch the problem until Tuesday. I have root access.

    Read the article

  • Reconfiguring, then deleting obsolete pagefile.sys from C: in one go using a batch script

    - by DanielSmedegaardBuus
    I'm trying to set up an automated script for a Windows XP installer. It's a batch script that runs on first boot after installation, and among the things I'm trying to accomplish is removing the pagefile from C: entirely and putting a 16-768 MB pagefile on D: instead. Here are my batch file instructions:
      echo === Creating new page file on D: ...
      cscript %windir%\system32\pagefileconfig.vbs /create /i 16 /m 768 /vo d: >nul
      echo.
      echo === Removing old page file from C: ...
      cscript %windir%\system32\pagefileconfig.vbs /delete /vo C:
      attrib -s -h c:\pagefile.sys
      del c:\pagefile.sys
    My problem is that while these are sane commands, the removal of the pagefile on C: requires me to reboot before those commands succeed. Or, in other words — I have to first create the D: pagefile, then reboot and delete the c:\pagefile.sys file, or I'm stuck with a c:\pagefile.sys file which isn't even recognized by Windows itself (it'll just say that there's a page file on D:, and that C: has no pagefile at all), obviously because some pages have already been written to the C:\pagefile.sys file. So how would I go about accomplishing this in one go? Or in two gos, if this is "batch scriptable" :) TIA, Daniel :)
    EDIT: I should probably clarify: the commands above are all valid, but they'll only succeed fully if I re-run the "attrib" and "del" commands at the next boot. The C: pagefile is in use at the time, so I cannot delete the file it uses, and Windows itself won't remove it when I configure it to not use C: as a page file drive. Instead, it'll leave an orphaned c:\pagefile.sys file behind (which is really large). I don't necessarily need this to work in one go; registering the last two commands to run after a reboot would also be great :)
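
    Since c:\pagefile.sys is locked for the whole session, the usual way to get the "register the last two commands to run after a reboot" behavior is to ask Windows to delete the file at the next boot via MoveFileEx with the MOVEFILE_DELAY_UNTIL_REBOOT flag (which writes a PendingFileRenameOperations entry that is processed early in the next boot, before the pagefile is reopened). Below is a hedged sketch of that call using Python's ctypes, assuming Python is present on the image and the script runs with admin rights; the same API can be called from any other language or a small helper tool if Python is not available.

```python
# Sketch: schedule C:\pagefile.sys for deletion at the next reboot, because it
# cannot be deleted while Windows is running. Uses the Win32 MoveFileEx API
# with MOVEFILE_DELAY_UNTIL_REBOOT; requires administrator rights.
import ctypes

MOVEFILE_DELAY_UNTIL_REBOOT = 0x4

def delete_on_reboot(path):
    # A null destination tells Windows to delete the file during the next
    # boot, before anything has a chance to open it again.
    if not ctypes.windll.kernel32.MoveFileExW(path, None, MOVEFILE_DELAY_UNTIL_REBOOT):
        raise ctypes.WinError()

if __name__ == "__main__":
    delete_on_reboot(u"C:\\pagefile.sys")
```

    With the delete registered this way, the attrib and del lines in the batch file should become unnecessary; the pagefileconfig.vbs calls still run first so that D: has a pagefile before the reboot.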

    Read the article

  • How to rewrite these URLs?

    - by Evik James
    I am brand new to URL rewriting. I am using an Apache rewriting module on IIS 7.5 (I think). Either way, I am able to do rewrites successfully, but am having trouble with a few key things. I want this pretty URL to rewrite to this ugly URL:
      mydomain.com/bike/1234 (pretty)
      mydomain.com/index.cfm?Section=Bike&BikeID=1234 (ugly)
    This works great with this rule:
      RewriteRule ^bike/([0-9]+)$ /index.cfm?Section=Bike&BikeID=$1
    Issue #1: I want to be able to add a description and have it go to exactly the same place, so that the useful info is completely ignored by my application.
      mydomain.com/bike/1234/a-really-great-bike (pretty and useful)
      mydomain.com/index.cfm?Section=Bike&BikeID=1234
    Issue #2: I need to be able to add a second or third parameter and value to the URL to get extra info for the db, like this:
      mydomain.com/bike/1234/5678
      mydomain.com/index.cfm?Section=Bike&BikeID=1234&FeatureID=5678
    This works using this rule:
      RewriteRule ^bike/([0-9]+)/([0-9]+)$ /index.cfm?Section=Bike&BikeID=$1&FeatureID=$2
    Again, I need to be able to add some extra info, like in the first example:
      mydomain.com/bike/1234/5678/a-really-great-bike (pretty and useful)
      mydomain.com/index.cfm?Section=Bike&BikeID=1234&FeatureID=5678
    So, how can I combine these rules so that I can have one, two, or three parameters and any of the "useful words" are completely ignored?
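
    One way to think about combining them: make the FeatureID segment and the trailing description each an optional group in a single pattern, with the description group non-capturing so it never reaches the query string. Before committing anything to the rewrite config, the regex can be sanity-checked in Python, since mod_rewrite uses the same PCRE-style syntax; the pattern below is my assumption of what you want (it allows only letters, digits, and hyphens in the slug), not tested Apache configuration.

```python
# Sanity-check a single combined pattern covering /bike/ID, /bike/ID/slug,
# /bike/ID/FEATURE and /bike/ID/FEATURE/slug. Group 1 = BikeID, group 2 =
# FeatureID (None when absent); the slug group is non-capturing and ignored.
import re

PATTERN = re.compile(r"^bike/([0-9]+)(?:/([0-9]+))?(?:/[A-Za-z0-9-]+)?$")

tests = [
    "bike/1234",
    "bike/1234/a-really-great-bike",
    "bike/1234/5678",
    "bike/1234/5678/a-really-great-bike",
]

for url in tests:
    m = PATTERN.match(url)
    if m:
        print("%s -> BikeID=%s, FeatureID=%s" % (url, m.group(1), m.group(2)))
    else:
        print("%s -> no match" % url)
```

    The same pattern, with its two capture groups, should drop into one RewriteRule targeting /index.cfm?Section=Bike&BikeID=$1&FeatureID=$2; the one caveat is that FeatureID arrives as an empty value whenever the second group does not match, so the application needs to tolerate that (or a separate rule without FeatureID can handle the no-feature case).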

    Read the article

  • Unix VPS server going down at almost the same time every day

    - by ronnz
    My server load seems to be really spiking, and many times the server goes down at the same time each night (around midnight). I have about 20 cPanel accounts hosted on it and have tried everything I know to find what is causing the issue. Some of the things I have tried: combined all site access logs found in /etc/httpd/domlogs and cannot see anything unusual at the time the server goes down; checked most other logs in the /var/log directory and found nothing indicating the issue at the time the server is going down; checked cron logs and cannot see anything unusual (see below). Last night CPU spiked to 7.5 at 00:14. What else can I be checking? How can I really monitor to find out the root cause?
      Dec 8 00:05:01 v1 crond[6082]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
      Dec 8 00:05:01 v1 crond[6084]: (root) CMD (/usr/local/cpanel/whostmgr/bin/dnsqueue >/dev/null 2>&1)
      Dec 8 00:10:01 v1 crond[6435]: (root) CMD (/usr/lib64/sa/sa1 1 1)
      Dec 8 00:10:01 v1 crond[6436]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6775]: (root) CMD (/usr/local/cpanel/scripts/autorepair recoverymgmt >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6776]: (root) CMD (/usr/local/cpanel/scripts/recoverymgmt >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6777]: (root) CMD (/usr/local/cpanel/bin/dbindex >/dev/null 2>&1)
      Dec 8 00:15:12 v1 crond[6781]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
      Dec 8 00:20:33 v1 crond[7047]: (root) CMD (/usr/lib64/sa/sa1 1 1)
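
    Since the cron entries themselves look routine, one way to get more signal is to bucket the combined domlogs by minute around midnight and see whether request volume (or one particular vhost) ramps up just before the load does. A rough sketch of that, assuming the standard Apache combined log format, is below; pass one or more domlog files as arguments and compare the busiest minutes against the 00:14 spike.

```python
#!/usr/bin/env python
# Rough sketch: count requests per minute in Apache access logs (combined
# format) to see whether traffic ramps up just before the nightly load spike.
import collections
import re
import sys

# matches e.g. [08/Dec/2013:00:14:32 +1100] and keeps everything up to minutes
TIMESTAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2} ")

per_minute = collections.Counter()
for path in sys.argv[1:]:
    with open(path) as log:
        for line in log:
            m = TIMESTAMP.search(line)
            if m:
                per_minute[m.group(1)] += 1

# print the 30 busiest minutes, busiest first
for minute, hits in sorted(per_minute.items(), key=lambda kv: kv[1], reverse=True)[:30]:
    print("%6d  %s" % (hits, minute))
```

    Running the same idea per file (one counter keyed by domlog name) narrows it down to a single account if one of the 20 turns out to be responsible.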

    Read the article

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of the applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual web site (or groups of 2-3) as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site. Typical hosting requirements:
      - Linux, Apache, PHP 5, MySQL
      - Supports Drupal
      - Ability to set up a cron task (but we don't need SSH access)
      - Daily backups
      - Virtualized/cloud hosting (we want to avoid shared)
      - Pricing per site around $25/month
      - OS is patched automatically
    Some options we have considered but won't work:
      - MediaTemple: Two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability.
      - Slicehost: This would require us to manage the entire server, which we don't want to do.
      - Rackspace Cloud Sites (formerly Mosso): No backup options.
    Do you have any recommended hosting options given these requirements?

    Read the article

  • Chipset fan on the fritz - compressed air hasn't fixed anything - is there anything I can do?

    - by Anthony
    Yesterday, my computer started to make an annoying whining noise. Knowing that this is likely a fan issue, I opened the case and proceeded to determine which fan was causing the issue. I got some compressed air and tried cleaning out the dust around it (and the rest of the computer while I was at it). This hasn't seemed to fix the issue. Now, if it were just any fan, I would probably just replace the fan - they're relatively cheap after all. However, this is a special fan. Aside: For what its worth, I feel bad that the graphics card blocks part of the fan, but it is the only slot the graphics card fits, so I had no choice. After pulling out my motherboard user guide, it looks like this is a fan placed directly on top of the chipset. To be perfectly honest, I have no clue what the purpose of the chipset is - but it sounds important. After some quick research, I see that it is responsible for providing the bridge between my CPU, RAM and graphics, among other things. Just a quick search at Newegg tells me that chipset fans can be purchased at pretty reasonable prices (< 20 dollars). Is it practical to replace this fan? It is an old computer as computers go and I wouldn't be terribly upset to upgrade the motherboard and processor, so perhaps this is a sign. Hardware Specs: Motherboard: Asus A8N-E Chipset: NVIDIA nForce4 Ultra

    Read the article

  • locked files on HFS+ home partition shared between OSX/Linux

    - by HazyBlueDot
    I dual boot into Arch Linux and OS X 10.6 on my MacBook Pro. I synced my UID between both OSes and created an HFS partition (with no journaling) to use as a shared home/Users partition. For the most part it works just as I'd expect, but sometimes when I'm booted into OS X certain files are "locked" (when I Get Info on a particular file, the "Locked" box is checked under the "General" pane; I can resolve the issue by manually unchecking the box) and/or I get "Operation not permitted" when I try deleting or chmod'ing a file. In both cases I don't see anything out of the ordinary in the permission bits displayed with ls -l, except for a trailing '@' character in the position where the sticky bit would normally occur:
      -rw-r--r--@ 1 myuser mygroup 296 Mar 29 11:44 myfile
    This '@' character shows up on ALL normal files, so it doesn't seem to be linked to the locked/"Operation not permitted" situation. On the Linux side of things I never have permission problems. To the best of my limited knowledge and experience with ACLs, I've not found any ACLs on any of the files in question. For what it's worth, I do most of my file editing using Emacs (Aquamacs in OS X); is it possible it is setting weird permission bits? My questions:
      - What is the "locked" setting that OS X uses, and does it have a permission bit equivalent (so that, at the very least, I could recursively unlock all files in my home directory from the terminal)?
      - Why might some, but not other, files get "locked" when booting into OS X?
      - What is the meaning of the '@' character?
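
    For the terminal route: as far as I know, the Finder "Locked" checkbox corresponds to the BSD user-immutable flag (uchg, visible with ls -lO and cleared with chflags nouchg), and the trailing '@' simply means the file has extended attributes (ls -l@ lists them); neither is a normal permission bit. Below is a hedged sketch that recursively clears the user-immutable flag under a directory using Python's os.chflags. It has to run on the OS X side, where st_flags and chflags are meaningful, and it only touches files that actually have the flag set.

```python
#!/usr/bin/env python
# Sketch: recursively clear the BSD "user immutable" flag (Finder's Locked
# checkbox) from every file and directory under a given root. OS X only.
import os
import stat
import sys

def unlock_tree(root):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_flags & stat.UF_IMMUTABLE:
                # drop only the immutable bit, keep any other flags intact
                os.lchflags(path, st.st_flags & ~stat.UF_IMMUTABLE)
                print("unlocked %s" % path)

if __name__ == "__main__":
    unlock_tree(sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser("~"))
```

    The one-line shell equivalent would be chflags -R nouchg on the directory; why OS X finds the flag set on some files written from the Linux side is a separate question, possibly related to how the Linux HFS+ driver initializes the flags field.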

    Read the article

  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that have recently been setup with a regular internet connection after we removed the T1 and router that connected them directly to our network. Now, when the users are in the office, they log in to the VPN to be able to connect to the network. For the sake of them being able to print and scan from the local multi-function we have setup a split tunnel VPN. We currently have about 15-20 users using this setup around the country without any problems. Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. There are three other users in the same office and they do not have this problem. I assumed that it was something with the computer and went ahead and replaced it with another of the same model. The computer worked fine in our home office; however, when the user received it, she had the exact same problem both at home and in the field office. Thinking it may be a NIC driver issue I sent her another computer, this time a different model, same problem occurred. If I update the host file to point to the correct paths, things will work, and if I connect via a normal VPN connection everything works, but the user cannot scan or print - which is a problem. Have tried to find ways to create another tunnel on a normal VPN and have tried to find ways to force the correct tunnel on the split tunnel VPN. It appears that there is something related to the ISP because if I connect to Comcast or Verizon it is fine but once she connects to Insite then she has problems. I have been unable to get any support from Insite as they don't feel the issue is with them. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.

    Read the article

  • Ctrl + 1 and Ctrl + 2 key combinations don't work

    - by musicfreak
    I noticed back in August (when I got StarCraft 2) that the key combinations Ctrl + 1 and Ctrl + 2 didn't work. I thought this was weird because Ctrl + 3 and all the other combinations worked fine (including Shift + 1, etc), so I didn't think much of it; I just shrugged it off as a SC2 bug. Now, 4 months later, I decided to play a completely unrelated game--Dawn of War 2--and noticed the same thing: those two specific key combinations don't work. To make sure I wasn't going insane, I tried it in Chrome and a couple other applications, and alas, it didn't work. I remember playing strategy games over the summer before StarCraft 2 and it worked fine. Any idea as to what went wrong? My keyboard, a Microsoft Wireless Keyboard 1000 (I know, insert Microsoft joke here), is a little over a year old, so I'm going to assume it's not dying until proven otherwise. Things I've tried:
      - ActiveHotkeys says the key combination is not a global hotkey.
      - Tried another keyboard--still doesn't work.
      - The key combinations do work in a virtual machine (tried with both Windows and Ubuntu as guests).

    Read the article

  • Server configurations for hosting MySQL database

    - by shyam
    I have a web application which uses a MySQL database hosted on a virtual server. I've been using this server since I started the application, when the database was really small. Now it has grown and the server is not able to handle the db, causing frequent db errors. I'm planning to get a new server and I need suggestions for that. Like I said, the db is now 9 GB and is growing considerably fast. There are a number of tables with millions of rows, which are frequently updated and queried. The most frequent error the db shows is "Lock wait timeout exceeded". Previously there used to be "The total number of locks exceeds the lock table size" errors too, but I could avoid those by increasing the InnoDB buffer pool size. Please suggest what configuration I should look for in the server I should buy. I read somewhere that the db should ideally have a buffer pool size greater than the size of its data, so in my case I guess I'd need more than 9 GB of memory. What other things should I look for in the server? Just tell me if I should give you more info about the

    Read the article

  • PsExec and Remote Environment Variables, Logging, Etc.

    - by alharaka
    When I run PsExec on a remote computer, I always fall short of what I want. What I would like ideally in most situations is a) a log on an admin server where each individual log has the name of each the remote computer it was generated from (e.g. COMPNAME1.log, COMPNAME2.log, etc.) or b) a log file on each remote computer with whatever name I specify. When I try scenario (a), I use the following command. %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username cmd /c echo TEST >> \\server.company.tld\share\%computername%.log Problem is that it never works. All the computers just write to the log where %computername% is just the computer I execute PsExec from in my office. What I want are unique logs for each computer specific in the listofcomputers.txt that will correctly use the hostname from the remote environment variable without issue. Is that even possible? It does not seem to work for me. I tried this, and the syntax is clearly wrong. %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo TEST >> \\server.company.tld\share\%computername%.log" PsExec just fails saying the system file cannot be found (read: syntax fail). As for scenario (b), it appears to be a variation of a similar problem. When I run a command like this, it does not work. %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo %computername% >> \\server.company.tld\share\aggregated.log" Is there something I do not understand about remote path and environment variables with PsExec on the cmd.exe console (I have not even tried the dreaded PowerShell yet). I know such things work in a batch file (cmd /c \\server.company.tld\share\runthis.bat), but is there a reason it will not work when executing commands as arguments? I always need this, and can never get it!

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I'm seeing strange behavior on my live server that doesn't happen on my local server. My local server is a Mac, and my live server is Linux. Consider these two files:
      http://redddor.babonmultimedia.com/assets/images/map-1.jpg
    This one works correctly.
      http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php
    This one returns 404. I'm pretty sure the file is there and there is no typo. How come it gives me a 404? There is only one .htaccess in the server root and its configuration is like this:
      # For full documentation and other suggested options, please see
      # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions
      # including for unexpected logouts in multi-server/cloud environments
      # and especially for the first three commented out rules
      #php_flag register_globals Off
      #AddDefaultCharset utf-8
      #php_value date.timezone Europe/Moscow
      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /
      <IfModule mod_security.c>
      SecFilterEngine Off
      </IfModule>
      # Fix Apache internal dummy connections from breaking [(site_url)] cache
      RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
      RewriteRule .* - [F,L]
      # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin
      #RewriteCond %{HTTP_HOST} .
      #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
      #RewriteRule (.*) http://www.example.com/$1 [R=301,L]
      # Exclude /assets and /manager directories and images from rewrite rules
      RewriteRule ^(manager|assets)/*$ - [L]
      RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L]
      # For Friendly URLs
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
      # Reduce server overhead by enabling output compression if supported.
      #php_flag zlib.output_compression On
      #php_value zlib.output_compression_level 5

    Read the article

  • WD My Cloud 4TB is Super Slow

    - by Saduser
    I am using a WD my cloud 4Tb and I have read other posts about users complaining about getting only 10Mb per second. My problem is that I am getting about 100kb/s to transfer a 125gb iPhoto library. Estimated time is 11 days to transfer this file. This is unacceptable. On the back of the WD cloud I am getting a solid green light and from what I read this means that I am on a gigabyte network. I have mac book pro running Mac OS Mavericks. I have tried 4 different cables and turned off my router firewall. I don't run anti-virus nor any firewall on the mac. Other things I have checked: direct connection to both router and WD cloud device. Tried wireless but it is even slower. Previously I was able to transfer a 55Gb iPhoto library in 14 hours which I felt was acceptable. I figured it would take approximately double the time to transfer the 125gb file but 11 days is ridiculous. Any other suggestions? Anything else I can check (how to check it) what is the bottle neck?

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'desktop', 'my documents' and 'application data'. But as our network is split across two sites, we have one file server at each site, which are configured to use domain based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replication folder is as follows: \\domain\folderredirection$\<username>\<redirected-folder-name> The real paths are \\site-1-server\folderredirection$\<username>\<redirected-folder-name> and \\site-2-server\folderredirection$\<username>\<redirected-folder-name> As our users all switch between sites (sometimes several time per day), our folder redirection policy has to redirect to the DFS roots rather than hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops, we use offline files for the redirected folders, and this also works fine, however the problem is as follows: When conflicts occur in offline files, it is impossible to resolve the conflicts. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'), however, not one of these options will resolve any conflict, yet no error is produced. We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop, it affects every laptop and every conflicting file in exactly the same way. I would have thought the set up we have is common for companies that have multiple sites, so I'm hoping someone will have seen this before?

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about naming schemes and once that is resolved whether there is compatibility. I will gladly author this question to be more general if there is not already an article helping with the confusion of SATA naming and standards. I see similar, but not identical questions and will accept this as a duplicate if thought as such. The specifications on the eCommerce site for the NAS says, "Controller Interface Type Serial ATA-150", the product home page for the manufacturer says, "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say, "Interface Type Serial ATA-300", the product home page for the manufacturer says, "Interface SATA 3.0 Gbps" Wikipedia says many things about different naming conventions, the closest being, "SATA II 3.0 Gbit/s, which was colloquially referred to as "SATA 3G" [bps] or "SATA 300" [MB/s] since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both "SATA 1.5G" [b/s] or "SATA 150" [MB/s]). Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article
