Search Results

Search found 19796 results on 792 pages for 'bit twiddler'.


  • VMware Workstation "Power off" and "Suspend" buttons are disabled. What is going on?

    - by Ed Norris
    I've been using an XP image for a few months now with no problems. Recently the Power Off and Suspend buttons became disabled and I'm not sure what happened to cause that. In addition, the menu items for those functions are grayed out (like VM | Power | Power Off). I'm using VMware Workstation 6.5.3 on a Windows 7 64-bit host. The image is Windows XP 32-bit. There is plenty of free space and memory on the host and the CPU is not pegged. I am able to power off the image through its Start menu, but that's a workaround, not a fix. Any suggestions? TIA EDIT: After manually shutting down the image and then closing and reopening the image file (which I've done before) in preparation for reinstalling the tools as suggested below, it looks fine. The Power Off and Suspend buttons are enabled and work. So what do I do with this question now? "Close and restart a few times and it may work" doesn't seem useful...

  • Faster, secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at that point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried adjusting various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts: Are there any higher-performance secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux. Should I be using hardware to do the encryption, either in the client or in the network? If so, what would you recommend? I'm not convinced that just swapping the client machine would help that much - its CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea but would be quite disruptive. Am I missing a trick? Thanks in advance.
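
    One cheap way to test the single-threaded-encryption theory before buying hardware is to run several SFTP sessions in parallel, so the crypto work is spread across the client's cores. Below is a minimal sketch using Python with the paramiko library - the host, key and paths are placeholders, not details from the setup above:

        # parallel_sftp.py - hedged sketch: push files over several concurrent
        # SFTP sessions so encryption is not bound to a single core.
        # Host, user, key and paths are assumptions, not the real setup.
        import pathlib
        import queue
        import threading

        import paramiko  # third-party: pip install paramiko

        HOST, USER, KEYFILE = "server.example.com", "xfer", r"C:\keys\xfer_rsa"
        SRC_DIR, DEST_DIR = pathlib.Path(r"D:\outbound"), "/data/inbound"
        WORKERS = 4  # one session per client core is a reasonable start

        jobs = queue.Queue()
        for path in SRC_DIR.glob("*"):
            if path.is_file():
                jobs.put(path)

        def worker():
            # Each thread gets its own SSH transport and SFTP channel.
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh.connect(HOST, username=USER, key_filename=KEYFILE)
            sftp = ssh.open_sftp()
            while True:
                try:
                    path = jobs.get_nowait()
                except queue.Empty:
                    break
                sftp.put(str(path), f"{DEST_DIR}/{path.name}")
            sftp.close()
            ssh.close()

        threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    If four sessions together get close to 4x the single-session rate, the bottleneck is per-connection crypto rather than the link, and a multi-session transfer scheme (or a faster cipher) is the fix.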

  • Program to track recently saved files or downloads?

    - by DiegoDD
    Back in Windows XP days, I used a program named Ava Find that could find any file on the system, and it had a really nice feature called "Scout Bot" that automatically found new files, that is, recently created ones. More info here: AvaFind For example, if I saved or downloaded a file (in any program) but couldn't remember its name or where I put it, I simply looked at the Scout Bot list, which showed me the most recent files, and then I could just open them from there or jump to their path. Since I changed to Windows 7 (passing through Vista), the program no longer works. It simply keeps indexing forever and never finds anything. I don't know if it is incompatible with Windows 7 itself, or with the fact that my Windows 7 is 64-bit. I already tried to contact the AvaFind creators, but they never answered, and the program seems to be no longer supported (although the site still works). Now, is there a similar program that finds RECENT files? E.g. one listing the most recently saved files, system-wide (or even limited to some folders or discs)? And of course, one that is confirmed to work with Windows 7 64-bit. I know that there are many programs that can find any file, like Everything, but what I want is to find recent files and their paths without knowing their name or extension. Everything can also sort results by date, which is almost what I need - simply look for part of the file name or extension and get the most recent one - but if I get too many results it takes a while, plus I STILL need to know the name or at least the extension of the file. What I want is simply a "most recently saved files" tool, i.e. a replacement for Scout Bot in Ava Find. Do you know any alternative?
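
    As a stopgap, an on-demand "most recently saved files" listing is easy to approximate with a short script. This is only a sketch - it rescans the chosen folders each time rather than watching the file system the way Scout Bot did, and the paths are placeholders:

        # recent_files.py - sketch: list the most recently modified files
        # under a set of folders. The folder paths are placeholders.
        import os
        import time

        ROOTS = [r"C:\Users", r"D:\Downloads"]  # folders to scan (assumption)
        LIMIT = 25                              # how many recent files to show

        found = []
        for root in ROOTS:
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    try:
                        found.append((os.path.getmtime(full), full))
                    except OSError:
                        continue  # skip files we can't stat (permissions, locks)

        for mtime, full in sorted(found, reverse=True)[:LIMIT]:
            print(time.strftime("%Y-%m-%d %H:%M", time.localtime(mtime)), full)

    A true Scout Bot replacement would watch the NTFS USN change journal (or use ReadDirectoryChangesW) instead of rescanning, which is presumably what AvaFind itself did.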

  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application, but the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Using the Windows Task Scheduler also succeeds. Basically, the third-party software schedules the .BAT file and shows success within its own user interface. The difference between running from the command prompt and from the software is that the software uses its Windows service to launch the batch. The third-party software shows success since it was able to successfully call the .BAT file, but it has no control over the other EXEs being called within the script. I'm able to run a simple .BAT file in the third-party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE, which launches Excel to write a file to a location. The .BAT file calls something.exe, which then calls Excel.exe: C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot Do you think it's a permissions issue? I used Process Monitor to check for Access Denied errors, but everything seems to be working according to the trace. It worked on a 32-bit OS; I'm currently using Win2008 64-bit.

  • How to detect/list rogue computers connected to a WiFi network without access to the router interface? [migrated]

    - by JJarava
    This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her WiFi/Internet network performance has gone down abysmally lately. She'd like to know if some of the neighbors are using her WiFi network to access the Internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all the WiFi devices... and if the uninvited guest broke the password once, they can do it again... Her WiFi router/Internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's WiFi password, but she doesn't have access to the router status page/config/etc. unless she opts out of the telco's remote support and maintenance service... So, how could she check whether there are guests on the WiFi, given these restrictions and in the most point-and-click way possible? In this case I'd probably use nmap to look for other devices on the network, but I'm not sure that's the easiest way to do it. I'm not a WiFi expert, so I don't know if there are any WiFi-scanning utils that can tell us who's talking to the router... Lastly, she's a Windows user, which I guess will influence the choice of tools available. Any suggestions are more than welcome. Regards!
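
    For the helper at the other end of the phone (rather than the point-and-click user), an ARP sweep of the subnet is one way to enumerate connected devices without touching the router. A hedged sketch using Python with scapy follows - the subnet is a guess, it needs admin rights, and on Windows scapy requires Npcap:

        # arp_scan.py - sketch: broadcast an ARP "who-has" for every address
        # on the subnet and print the devices that answer. The subnet is an
        # assumption; requires scapy (and Npcap on Windows), run as admin.
        from scapy.all import ARP, Ether, srp

        SUBNET = "192.168.1.0/24"  # assumption: typical home router range

        answered, _unanswered = srp(
            Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=SUBNET),
            timeout=3,
            verbose=False,
        )

        print(f"{'IP':16} MAC")
        for _sent, reply in answered:
            print(f"{reply.psrc:16} {reply.hwsrc}")

    Comparing the MAC addresses in the output against her known devices would flag any stranger; nmap -sn 192.168.1.0/24 gives a similar list if installing scapy is too much.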

  • Why do disk images hosted on a read-only HFS+ partition behave differently?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is, or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64-bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these discs (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that leads to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When I move the image files to another Windows 7 computer on the network and mount them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?

  • Migrating WebLogic 10.3.0 to a new host: slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed servers - basically 20x slower than our current production. For instance, the Publish managed server takes ~30-45 seconds in current production; in the new environment it takes ~10 minutes. The setup uses the same domain structure and JVM as the current production environment, and the same setup files are used. We use jdk1.6.0_33 on 64-bit architecture. We used the generic 64-bit WebLogic installer and the pack/unpack utilities to migrate the domain. The JAVA_OPTS to start this server are: "-d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m" The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we were not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during startup, I also had the DBA check that Oracle RAC (11.2.0.3) wasn't hitting some kind of process limit and that there was no TNS listener issue. The new host is quite a bit stricter with its server lock-downs, so there are a few differences: Red Hat 6.3 in the new environment vs. RH 5.7 in current; SELinux is targeted in the new environment and disabled in current; a VM in the new environment vs. dedicated hardware in current; iptables disabled in current (it was enabled in new prod, but I had them disable it just in case). I apologize for not being more specific; I am mostly hoping for some tips. I do not have the typical root access I would normally have in this environment; I am just hoping for a path forward. I did a few 'kill -3's to see if there are blocked threads and got nada. The service works for all intents and purposes; it is just painfully slow. Thank you all in advance for reading, and best regards. Wade

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but I'm not sure it answers my question - and if it does, I'm curious why, since that answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS. I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I think I understand most of what I need, but I'm still a bit confused about how pools work and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using in a mirrored setup (RAID1). This would be for backup, and I'd also like to do offsite backup from this array to my CrashPlan account. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but I'm looking for options. The FreeNAS OS will be running from a 16GB USB flash drive. Would it be wise to use the 1.5TB as a backup of the backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots. Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use, since the 2TB array is plenty for me. Does a dataset basically mimic the idea of a partition or a network share? In other words, would I map \\SERVER\Share to X: on my laptop? Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures - would it have to be in its own pool? If I use jail apps, should they go on the backup RAID1 or somewhere else? Thank you!

  • Developer hardware autonomy in a managed desktop environment [closed]

    - by Troy Hunt
    I'm looking for some feedback on how developer PCs are managed within environments that have a strict managed desktop policy (normally large corporations). For example, many corporate environments control the installation of software and the deployment of patches and virus updates through a centralised channel. This usually means also dictating the OS version and architecture (32-bit versus 64-bit), which will likely also mean standardised hardware configurations. I'm particularly interested in feedback from developers who work in this sort of environment but have a high degree of autonomy over their machines. This might mean choosing your own hardware vendor, OS type and version, and perhaps how the machines are built and maintained. I have several specific questions: How do you satisfy the needs of security, governance etc. whilst maintaining your autonomy? For example, how do you address concerns about keeping virus definitions and OS patches up to date? Do you have a process for gaining exemption from standard desktop builds, and if so, what do you need to demonstrate in order to get this? How have you justified this need to the decision makers? Essentially, what is the benefit to your role as a developer of having this degree of autonomy? Thanks very much everyone. Update: There's a great post from Jean-Paul Boodhoo which addresses the developer tool component of the question here: http://blog.jpboodhoo.com/TheFallacyOfTheStandardizedDeveloperMachineimage.aspx

  • Screen randomly goes blue/black/white

    - by FubsyGamer
    Problem: Randomly, while I'm using my computer, the monitor goes dark grey/almost black, or it goes white with faint grey vertical lines, or it goes blue with black vertical lines. It's as if the computer powers off. People tell me I sign out of Skype, Spotify stops playing when it happens, etc. When I look at the tower, it doesn't seem to be off at all. Nothing changes; fans are spinning, lights are on, etc. If you were only looking at the tower, you'd never know there was a problem. The only way I can get it to come back up is to push and hold the power button and turn it off, then back on. This never happens while I'm playing video games - I've done 5-6 hour sessions of League of Legends and it doesn't do anything. When I'm just browsing the web, reading email, checking Reddit, etc., it happens all the time. It can happen multiple times in a session; it usually takes only about 5 minutes from the time I start browsing to when the computer crashes. This started happening after I moved to a new apartment (this has to be relevant somehow; it was not happening where I lived before). There is nothing in the crash logs or event logs. System specs: i5 2500k CPU, AMD Radeon 6800 GPU, Gigabyte Z68A-D3H-B3 motherboard, WD VelociRaptor 1TB HDD. Screenshots: Device Manager, About screen. Things I have tried: I was getting a WMI error in my event logs, but I fixed it using Microsoft's fix, KB 2545227. I was using Windows 8; I wiped the HDD and downgraded to Windows 7 64-bit. I took out the video card and used a can of air to totally clean the video card, all fans, and the inside of the computer in general; I made sure all of the video card pins were fine, then reconnected it. I tried to update my motherboard BIOS, but anything I downloaded from Gigabyte was only for 32-bit machines, not 64 - I don't even know how to tell what BIOS version my motherboard is on right now. I am using a power strip, and everything else connected to it works just fine. If I re-seat the monitor cable while this is happening, nothing changes. Please help me. I've been battling this for several weeks now, and it's so frustrating it makes me not even want to use the computer.

  • Empty file from Tomcat HTTPS redirect?

    - by amertune
    I am using Tomcat 6.0.20 with JDK 1.6.0_18 on 64-bit Linux (Tomcat downloaded from tomcat.apache.org, not installed from repositories). I have iptables redirects from port 80 to 8080 and from 443 to 8443. In server.xml, the connector for port 8080 redirects to 443, and the 8443 connector has proxyPort="443". In conf/web.xml, I have added this bit at the end of the file (but still inside the <web-app> ... </web-app> tags):

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Protected Context</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
          </user-data-constraint>
        </security-constraint>

    I also have two contexts, ROOT and mywebapp. If I go to http://myurl.com, then I get redirected to https://myurl.com. If I go to http://myurl.com/mywebapp/, then I get redirected to https://myurl.com/mywebapp/. The problem I'm having is when I go to http://myurl.com/mywebapp (no trailing slash). When I do this I get a prompt to download an empty file with an empty name. Going to https://myurl.com/mywebapp works. I would think that a user typing myurl.com/mywebapp is far from rare. Is there something I'm missing?

  • USB-to-Serial showing gibberish at 115200 Baud

    - by Mose
    I've got a serious problem which is driving me crazy, because I've tried everything I could think of. First of all, I made a video: http://youtu.be/boghkuq7L_s but please read the following text for more information, not only view the video! When using a USB-to-Serial interface everything works as long as I don't go beyond 57600 baud. At higher rates I only get gibberish like this: év.­b0JNLYÆÿ¿iëd0U²(kßÞb! ú]/xscB!ï¯!BoXûÿ1ïâÖCÿ6ÌAnè*íÌC)º¿BíÞØ.C.@ÆÃwHJÂs "YE:ñ.èFðÌCÊ÷ÞÄ !x H w6@BtbHJ ̪ Ì6ì H¾a¿bH.">îvy®;f<ßBÌ p­L¨fæH­E ­þ¼MBÞI What makes the problem so strange is that I have exchanged every component and the problem still persists. I tried different OSes (Ubuntu, WinXP, Win7, OS X 10.7), both 32- and 64-bit. I tried USB-to-Serial interfaces from FTDI and Prolific. I tried reading the output from my Raspberry Pi and from an Asterisk appliance. I changed the cables and the wiring. Nothing helped. In the video I made an example with an old notebook with a native COM port, and put the USB-to-Serial on the same connection as a "sniffer" (only Rx and GND connected) to make sure the output and everything is OK, as one can see on the native port. The voltage is OK. Settings for both are 115200 baud, 8 data bits, 1 stop bit, no flow control. The native port is OK; the USB one is messed up. I used the newest drivers and double-checked all connections. I have no idea what is wrong here. As I couldn't find anyone describing problems like this, I'm questioning my long experience in computer science and think I must be doing something completely wrong... Please help :-/
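
    As a sanity check that the settings really are what they claim, a hedged sketch follows that opens the port at exactly 115200 8N1 with Python's pyserial and dumps raw bytes as hex; the port name is a placeholder. If the hex stream is corrupted here too, the problem is below the application layer (driver, clocking or signal quality):

        # serial_dump.py - sketch: read raw bytes at 115200 8N1 and print
        # them as hex so corruption is visible byte by byte. The port name
        # is an assumption; requires the pyserial package.
        import serial  # third-party: pip install pyserial

        PORT = "COM3"  # assumption: change to the USB-to-Serial port in use

        with serial.Serial(
            port=PORT,
            baudrate=115200,
            bytesize=serial.EIGHTBITS,
            parity=serial.PARITY_NONE,
            stopbits=serial.STOPBITS_ONE,
            timeout=1,
        ) as ser:
            while True:
                chunk = ser.read(64)  # up to 64 bytes per read
                if chunk:
                    print(chunk.hex(" "))

    A classic cause of exactly this symptom is a clock mismatch of a few percent that only shows at high baud rates, so comparing the hex dump against known transmitted bytes can be revealing.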

  • Will a SQL Server client alias survive a sysprep?

    - by shufler
    I want to sysprep a Windows Server 2008 R2 SP1 machine that has SQL Server 2008 R2 SP1 installed (for reference, SQL Server 2008 R2 has a new sysprep feature that allows the instance itself to be sysprepped). On the server is a SQL Server client alias that points to the default SQL Server database engine instance. For reference, the alias is called Alias-SQLServer and has been configured in both the 32-bit and 64-bit cliconfg versions (that is, both registry keys exist). The alias points to the local instance, as the image will be used to create development VMs, and the installation script for the application being developed uses the SQL Server client alias in order to keep the installation scripts generic. I can't seem to find information about whether the sysprep tool will update the SQL Server client alias's registry keys with the server's new name once it's unsealed. My guess is that it will not - how would sysprep know that the server name the alias points to will be different for each image? Right? Perhaps if the alias points to localhost instead of the server name this will work?
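
    Client aliases are just registry values, so one low-effort approach is to inspect (or rewrite) them in a first-boot script after the image is unsealed. The following is a hedged sketch: the two ConnectTo paths are the standard alias locations, but everything else (running it at first boot, what to write back) is an assumption:

        # check_aliases.py - sketch: print SQL Server client aliases from the
        # 64-bit and 32-bit (Wow6432Node) ConnectTo keys, so you can verify
        # whether sysprep touched them on the unsealed image. Read-only.
        import winreg

        KEYS = [
            r"SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo",              # 64-bit
            r"SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo",  # 32-bit
        ]

        for path in KEYS:
            print(path)
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
                    index = 0
                    while True:
                        try:
                            name, value, _kind = winreg.EnumValue(key, index)
                        except OSError:
                            break  # no more values under this key
                        print(f"  {name} -> {value}")
                        index += 1
            except FileNotFoundError:
                print("  (key not present)")

    Since sysprep knows nothing about these keys, the two obvious workarounds are the ones hinted at above: point the alias at localhost before sealing, or rewrite the value from a first-boot step once the new machine name is known.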

  • Why is my new PC so slow at startup?

    - by rumtscho
    I bought a new PC this weekend, and it works really well. But I have one big problem: startup time. Its BIOS needs 62 seconds to load, then from the Grub start to the password-entry screen is another 26 seconds. I think this is a lot, because my old PC needs 34 seconds for the BIOS and another 8 seconds to the password screen. After I enter the password, the desktop is usable with practically no delay on both. The new PC is a Core i7-930, running a Lucid Lynx 64-bit from an Intel Postville SSD (no internal HDDs). The old PC is a Pentium 4 Celeron (I forget the clock speed) running a Lucid Lynx 32-bit from an ATA-100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD-ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD-ROM, 2. HDD, 3. floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot time difference was noticeable even before I installed anything. As far as I know, the SSD alone should make a noticeable difference in boot time, in the other direction. I thought that having a good mainboard in the new PC, as opposed to the basic office model in the old one, would also mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number I could post that too.

  • Get Matrox Millennium video card working in Ubuntu 9.10

    - by wcoenen
    I have installed Ubuntu 9.10 on an old PC and it is mostly working, except for some heavy drawing defects that show up whenever I start dragging a window or scrolling inside a window or menu. It looks like the video driver copies the rectangle being moved to the wrong location. I have taken a look in /var/log/Xorg.0.log and the following lines show the detected video card:

        (--) PCI:*(0:0:8:0) 102b:0519:0000:0000 Matrox Graphics, Inc. MGA 2064W [Millennium] rev 1, Mem@ 0xf9800000/16384, 0xfb000000/8388608, BIOS @0x????????/65536
        (==) Using default built-in configuration (30 lines)
        (==) --- Start of built-in configuration ---
        Section "Device"
            Identifier "Builtin Default mga Device 0"
            Driver "mga"
        EndSection

    How do I fix the drawing defects? It turned out that the 24-bit color depth (automatically selected by Ubuntu 9.10) was the problem; apparently the mga driver doesn't handle it well for cards with little memory. I took the following steps to resolve the issue (you can skip the first three if you already have a semi-working xorg.conf file): 1. Reboot Ubuntu in recovery mode to get a root console without X running. 2. Run Xorg -configure to generate an xorg.conf.new file. 3. Copy the file to /etc/X11/xorg.conf with cp xorg.conf.new /etc/X11/xorg.conf (assuming it didn't exist yet; that's why I generated it). 4. Open the new config file with sudo nano /etc/X11/xorg.conf and make sure the screen section is configured for 16-bit color depth, like this:

        Section "Screen"
            Identifier "Screen0"
            Device "Card0"
            Monitor "Monitor0"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth 16
                Modes "1024x768"
            EndSubSection
        EndSection

    I can't guarantee those were the only important changes I made - I tried a few things in my attempts to create a valid xorg.conf file - but I'm pretty sure the screen section was the important part.

  • Windows 2008 R2 IIS worker process memory usage increase

    - by nLL
    I have a web site written in C# with around 400-500 users online at any time. It was on a Windows 2008 32-bit machine before, and it never locked up or slowed down due to increased memory consumption until I upgraded its server to Windows 2008 R2 64-bit. The old server had only 4GB of RAM and a quad-core CPU at 2GHz, and the site worked just fine. Since I upgraded the server I have noticed (twice within 10 days) that it starts to eat RAM; last night it went up to 4GB. As RAM use increases, response slows down quite a lot. Recycling the app pool doesn't help; I have to restart its worker process to recover. I've noticed this usually happens if there are continuous errors. As I didn't change anything in the code, am I safe to assume it is not related to a memory leak in the code? Has anyone come across something like this? The same thing happens if I create continuous errors with classic ASP. Thanks.

  • Linux networking: how to redirect incoming connections from an old server to a new server?

    - by aliz
    Hi. I'm in the process of moving my old server to a new server, but I will keep the old server running for database replication, load balancing, etc. Each server has a separate Internet connection with a static IP, and they are connected to each other through a local Ethernet connection. I've got Ubuntu 8.04 32-bit running on the old server and Debian 6.0 64-bit on the new one. The Shorewall firewall is installed on both servers. There are some outdoor devices which periodically send data to port 43597 on the old server's IP address. I can run multiple instances of the network service which is responsible for receiving data from the devices on one server, but only on different ports. Here's the question: how can I run the service on the new server and have connections coming to the old server redirected to it, while new devices can still connect to the new server's IP address, preferably on the same port and the same service, until all devices get updated to send to the new server? I've tried a Shorewall DNAT rule, but it seems the new server's default route would have to be changed to the Ethernet connection, which breaks other things. I also found out about the redir utility, but haven't tried it yet. Is there any best practice or simple solution for such a scenario that I'm not aware of? Thanks in advance.
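
    The asymmetric-routing problem with DNAT goes away if the old server terminates each connection itself and opens a fresh one to the new server over the LAN link - which is all a userspace relay like redir does. Here is a hedged sketch of such a relay in Python; the new server's LAN address is a placeholder:

        # relay.py - sketch: accept connections on the old server's port
        # 43597 and pipe the bytes to the same service on the new server.
        # Userspace equivalent of redir: no DNAT or routing changes needed.
        # NEW_SERVER is a placeholder address for the LAN link.
        import socket
        import threading

        LISTEN_PORT = 43597
        NEW_SERVER = ("192.168.0.2", 43597)  # assumption: new server over Ethernet

        def pump(src, dst):
            # Copy bytes one way until either side closes the connection.
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            except OSError:
                pass
            finally:
                src.close()
                dst.close()

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", LISTEN_PORT))
        listener.listen(50)
        while True:
            client, _addr = listener.accept()
            upstream = socket.create_connection(NEW_SERVER)
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    One caveat: the service on the new server will see all relayed traffic as coming from the old server's address, so this only works if the application doesn't care about the client IP.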

  • Can I use HP Recovery Discs for a different hard drive capacity and make?

    - by Fasih Khatib
    About two years ago I created HP Recovery Discs (3 of them). Now my hard drive has crashed, and the new one is still a week from delivery. I was reading up on how to reinstall the genuine OS using the Recovery Discs, as I was not given any Windows 7 installation discs. I did my bit of research after getting answers from the community on what these discs do, and found on other sites that people experience issues when recovering their OS from the discs - especially when they change the make or capacity of the hard drive. Unfortunately I had to change the make, as the hard drive that came built in has gone out of production. This question is just part of my checklist to avoid problems when recovering the OS. I have an HP DV4-2126TX (available only in India, I guess) with Windows 7 Home Premium 32-bit. I had a Seagate Momentus 320GB. I ordered a Western Digital Scorpio Black 500GB. Is there a chance of problems due to the changed capacity and make? I only want my genuine OS and drivers - not my data. I was told that Disc 1 contained the OS and drivers, and the rest of the discs contained data, but I couldn't verify that.

  • Can I clone my hard drive to an external and boot from the clone?

    - by willbuntu
    First thing: I am not asking what software I'm supposed to use. I already know the answer: Ghost (proprietary), Clonezilla, and dd (if I'm careful). What I really want to know is if it is possible to (essentially) bit-for-bit clone my entire installation (OS, installed software, activation(s), etc.) to an external USB hard-drive, and then boot off of that (if I need to, I know how to edit BIOS settings and use Plop boot manager), and work with it day-to-day as if there was virtually no difference from using my internal HDD now. Again, I'm not asking how to install Windows to an external (because I know I'd need to do some special workaround), I'm asking if I can clone everything and boot off of it. In case you're wondering why I'm going to this trouble: I'm using a Lenovo Essentials laptop that has an unmodifiable partition table (due to recovery crap), and has all 4 of its partitions spoken for (3 primary, one extended, cannot change the extended). Anyway, my thought is that if I can clone everything and boot off of it when I need to, and just have a Linux distro on the internal HDD, then that could work.

  • How to disable monitor auto detection in Windows 7?

    - by Jay Yother
    I am currently running Windows 7 Ultimate 64-bit with a dual-monitor setup and an NVIDIA 7950 GT graphics card. One monitor is dedicated to this machine, and the other monitor is connected to a DVI KVM switch. When I switch to my other computer, Windows 7 disables the monitor. However, when I switch back it does not re-enable the monitor. The only circumstance that automatically re-enables the second monitor is when I switch back after Windows has put the monitors into power-save mode. I continually have to bring up the NVIDIA control panel to have it re-enable the monitor. Under Windows XP I would just disable the NVIDIA service to prevent it from auto-detecting the monitor (which doesn't solve the problem under Win7), and in Vista there was a registry hack that would prevent this; it looks as though that has been removed in Windows 7. I have found similar questions posted on this site, but nothing that matches my problem exactly. The following link is the question that comes closest, but it does not provide a solution to the problem: http://superuser.com/questions/96683/how-to-fix-monitor-detection-on-windows-7 Is there a way in Windows 7 to disable monitor auto-detection? Update: I just added a second graphics card to my Windows 7 64-bit machine and plugged one monitor into each graphics card. Now, when I use the KVM switch to switch back and forth, it re-enables the second monitor like it should. There are, however, a few quirks with this. If I have a program maximized on the second monitor and it has focus, when I switch it will move to monitor 1. If I have a program maximized on the second monitor and it does not have focus, when I switch it will behave as if it is minimized, and when I bring it back up it will show up maximized on monitor 1. Definitely better than it was, but I'm still looking for a way to disable the auto-detection.

  • Updating an ASUS BIOS on Windows 7 64-bit

    - by joesavage
    I've recently decided that I want to try to utilize my two graphics cards so that I can have a dual-monitor setup. Unfortunately, Windows only seems to notice my most recent graphics card installation, so I've been told that I should look into my BIOS and try to enable both graphics cards. I could not find this setting anywhere in my M2N68-AM Plus v0210 BIOS. After some further research I figured that I should perhaps upgrade my BIOS, so I searched and managed to download the latest version (v1804) as a ROM file. However, I am having difficulty figuring out how to install it. I've tried using the ASUS EZ Flash feature built into my BIOS, but when trying to load up a variety of different ROMs that are for my motherboard/BIOS I get the error "Boot block in file is not valid!" I'm not totally sure what I should do to fix this, so I'm looking into other methods of upgrading my BIOS - but I can't really find any solutions that seem to work. ASUS Update is for 32-bit only, and AFUDOS doesn't appear to work on my Windows 7 64-bit system (I think it's supposed to run in DOS or something - but that just sounds confusing, since I know nothing about DOS). Could anybody help me with this?

  • VM load and ping problems after replacing server motherboard

    - by Andre
    Recently we had to replace the motherboard of one of our servers. The procedure was done by IBM, as the machine was under warranty. The server runs ESXi 5.1 with several virtual machines, including our main mail server (Domino) and a file server. After replacing the motherboard and starting the VMs, ESXi asked us whether we had moved the VM or copied it (a different motherboard makes it look like a different computer). We clicked the latter. We started each machine, and after some basic reconfiguration all of them were up. However, we have been having problems with the mail server: it has been acting really slow at times (this could be when it syncs with the secondary mail server), and we have seen in Centreon (a Nagios frontend) that its CPU load has been a bit high at times, and its ping response too. There was a moment this morning when I tried connecting via SSH and it was really slow to show the login and basic commands like ifconfig and top. This particular mail server is CentOS 4.4.7 64-bit. The little configuring we had to do after restarting it was to set up the network connection, as it was resolving through DHCP. Our mail software is Lotus Notes server 9. Do you know of any way in which this replacement may be causing these difficulties, and how to fix it? Thanks.

  • VLC Crash when playing MKV files in Windows 7

    - by Phelios
    I'm not sure if the MKV file is corrupted, but when it is opened with VLC, VLC loads up and displays nothing. I can't even close VLC after that, and VLC runs at 50% CPU; I have to use End Process to kill it. How do I know if the file is corrupted? How do I solve this? Info from MediaInfo:

        Format : Matroska
        File size : 69.4 MiB
        Duration : 21mn 48s
        Overall bit rate : 445 Kbps
        Encoded date : UTC 2009-11-20 18:33:49
        Writing application : mkvmerge v2.9.7 ('Tenderness') built on Jul 1 2009 18:43:35
        Writing library : libebml v0.7.7 + libmatroska v0.8.1

        Video
        ID : 1
        Format : AVC
        Format/Info : Advanced Video Codec
        Format profile : [email protected]
        Format settings, CABAC : Yes
        Format settings, ReFrames : 2 frames
        Format settings, GOP : N=1
        Codec ID : V_MPEG4/ISO/AVC
        Duration : 21mn 48s
        Width : 640 pixels
        Height : 352 pixels
        Display aspect ratio : 16:9
        Frame rate : 23.976 fps
        Color space : YUV
        Chroma subsampling : 4:2:0
        Bit depth : 8 bits
        Scan type : Progressive

        Audio
        ID : 2
        Format : AAC
        Format/Info : Advanced Audio Codec
        Format profile : HE-AAC / LC
        Codec ID : A_AAC
        Duration : 21mn 48s
        Channel(s) : 2 channels
        Channel positions : Front: L R
        Sampling rate : 48.0 KHz / 24.0 KHz
        Compression mode : Lossy
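
    To answer the "is it corrupted?" part directly, one common check is to have a decoder read the entire file and report every error it hits. A hedged sketch using ffmpeg driven from Python's subprocess module follows (it assumes ffmpeg is installed and on the PATH; the file path is a placeholder):

        # check_mkv.py - sketch: decode the whole file with ffmpeg, discard
        # the output, and collect any decode errors. Assumes ffmpeg is
        # installed and on the PATH; the file path is a placeholder.
        import subprocess

        PATH = r"C:\videos\movie.mkv"  # placeholder path

        # -v error: print only errors; -f null -: decode everything, write nowhere
        result = subprocess.run(
            ["ffmpeg", "-v", "error", "-i", PATH, "-f", "null", "-"],
            capture_output=True,
            text=True,
        )

        if result.stderr.strip():
            print("Decoder reported problems:")
            print(result.stderr)
        else:
            print("No decode errors reported; the file is probably intact.")

    A clean pass here would point the finger at VLC (or its Matroska demuxer) rather than at the file itself.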

  • Problems with Net::FTP slowing down

    - by c0bra
    I'm running into an issue using Net::FTP (latest version, 2.77) to transfer files to a remote host, where a process is waiting to take each file and feed it into some other system. The remote process does this every 5 minutes, but ignores files that were modified in the last 0.2 seconds (that's right, 1/5th of a second). The problem is that for some reason the transfer seems to halt or slow down, and no data is transferred for several seconds; during this time the process picks up the incomplete file and removes it. The weird thing is that when I transfer the file manually using the ftp binary, it transfers fine. I've tried messing with all of the Net::FTP switches (Active/Passive, different block sizes) and nothing seems to help. What's also weird is that the file transfers fairly quickly at first, and occasionally a bit into the transfer: 300-500k will go up immediately, but then it slows down to the point where the file size is only increasing by 2,896 bytes every several seconds. It doesn't seem to happen when I send the file to a different remote host, but since a regular manual ftp transfer works with this host, I don't know what to think. Some combination of Net::FTP and possibly a slow or wonky connection?
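
    One way to narrow down whether the stall lives in the client stack or in the path to this particular host is to reproduce the transfer with a completely different FTP implementation and timestamp its progress. A hedged sketch using Python's ftplib (host, credentials and file names are placeholders):

        # ftp_timing.py - sketch: upload one file with ftplib, printing a
        # timestamped progress line per block so a stall shows up as a gap.
        # Host, credentials and file names are placeholders.
        import time
        from ftplib import FTP

        HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"  # assumptions
        LOCAL_FILE, REMOTE_NAME = "payload.bin", "payload.bin"      # assumptions

        sent = 0
        start = time.time()

        def progress(block):
            # Called by storbinary after each block is sent.
            global sent
            sent += len(block)
            print(f"{time.time() - start:8.2f}s  {sent:>12,} bytes")

        ftp = FTP(HOST)
        ftp.login(USER, PASSWORD)
        ftp.set_pasv(True)  # flip to False to compare, as with Net::FTP's modes
        with open(LOCAL_FILE, "rb") as fh:
            ftp.storbinary(f"STOR {REMOTE_NAME}", fh, blocksize=65536,
                           callback=progress)
        ftp.quit()

    If the same stall pattern appears here, the problem is the host or the network path (TCP window collapse, a packet-shaping middlebox) rather than Net::FTP; if it doesn't, the Perl client deserves closer scrutiny.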

  • Out of memory errors but not actually out of memory...

    - by commradepolski
    So, my fellow support techs and I have been fighting this issue, and we still don't know what the problem is. Let's start with the system specs: Windows XP 32-bit Corporate (SP2 and SP3), Intel D975XBX2 mobo, 4GB of RAM, Intel Core 2 Quad Q6600, ATI Radeon HD 3600 512MB. After a few hours of working on the machine, the end user begins to see the following symptoms: out-of-memory messages; title bars and menus don't draw in properly; problems accessing network resources; problems opening documents such as MS Word and MS PowerPoint files and text files; problems opening Explorer windows; general instability. We have looked at Task Manager while the issue was occurring, and all indicators, like PF usage, threads, handles, etc., are normal. We have had trouble pinpointing the root cause of this issue. It is also not isolated to one user; it affects 8-10. So far we have tried: resetting CMOS (waiting to see results); replacing the video card (didn't help); Windows updates (didn't help); updating network drivers (didn't help); switching users from a 1Gbps to a 100Mbps network connection (awaiting results); swapping the affected users' hardware (waiting for results); increasing the desktop heap size (helped for a bit, but then the issue became more frequent); applying the /3GB switch to XP (didn't help); increasing, decreasing, and setting the page file to the system-managed state (didn't help). We did have a power outage at the office a couple of weeks ago, and all these issues became more frequent. Prior to the power outage it might take a week or so for users to experience the issues, but since the power outage it takes 3-4 hours or less. We haven't had reports of the above issues causing BSODs, although that would be easier to diagnose :). Any help is greatly appreciated.
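
    Since increasing the desktop heap helped briefly, the symptoms (UI elements failing to draw, "out of memory" with plenty of free RAM) fit desktop heap exhaustion, and it may be worth confirming what the affected machines are actually running with. The SharedSection sizes live in the Session Manager registry key; here is a hedged, read-only sketch that prints them:

        # desktop_heap.py - sketch: print the SharedSection portion of the
        # Session Manager "Windows" value, which holds the desktop heap
        # sizes in KB (system,interactive,non-interactive). Read-only.
        import winreg

        KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            value, _kind = winreg.QueryValueEx(key, "Windows")

        for part in value.split():
            if part.startswith("SharedSection="):
                print(part)  # e.g. SharedSection=1024,3072,512 on 32-bit XP

    Note that the /3GB switch shrinks the kernel's share of the address space, which makes session space tighter, so the combination of /3GB plus an enlarged desktop heap can itself be a source of exactly these errors.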
