Search Results

Search found 22263 results on 891 pages for 'desktop background'.


  • 2 routers at home- how to connect with VNC?

    - by Charles Leviton
    I have two routers at home. The first router is upstairs and is connected to the cable modem. The 2nd router is downstairs and acts as a "signal booster" for the 1st router. Devices connected to the upstairs router have IP addresses of the form 192.168.1.n; devices connected to the downstairs router have IP addresses of the form 192.168.2.n. I blindly followed instructions from a website to do this setup, just glad it works!

    Upstairs I have a PC running Win 7 64-bit. Its assigned IP is 192.168.1.7. I have a VNC viewer running on this. Downstairs I have a 2nd PC running Vista Home 32-bit that is connected to the 2nd router and has IP address 192.168.2.114. The VNC server is running on this, listening on 5900. There is no firewall. When I try to connect to this downstairs PC from upstairs it fails with the message "Failed to connect to server". I cannot ping it either. If I try to connect to the downstairs PC using VNC Viewer from another computer that's connected to the same downstairs router then it works like a charm. So what's the workaround if the viewer is on a different "network"? I don't have any problems doing a remote desktop connection from the downstairs PC to the upstairs PC even if they are connected to different routers.

    Router information: upstairs - ASUS RT-N13U, downstairs - DD-WRT v24 RC-5. Thanks! P.S. I posted this on the UltraVNC forum as well, but that doesn't seem to have a lot of activity, so taking the liberty to multipost.
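
    One possible workaround, sketched on the assumption that the DD-WRT box's WAN port is the side plugged into the upstairs network (so it holds a 192.168.1.x address and NATs the 192.168.2.x clients behind it): forward the VNC port on the downstairs router to the server, then point the upstairs viewer at the downstairs router's 192.168.1.x address. In the DD-WRT shell that is roughly:

        iptables -t nat -I PREROUTING -p tcp --dport 5900 -j DNAT --to 192.168.2.114
        iptables -I FORWARD -p tcp -d 192.168.2.114 --dport 5900 -j ACCEPT

    The same rule can be created from the DD-WRT web UI under NAT/QoS > Port Forwarding. This is only a sketch; if the second router is actually bridged rather than NATed, a static route on the upstairs side would be the cleaner fix.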


  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume, as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then, I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
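
    Worth noting: when both paths look like local files (and a FUSE mount such as s3fs does), rsync implies --whole-file and skips its delta-transfer algorithm entirely, which would explain the identical byte counts. A minimal experiment, assuming the same file names as above:

        rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql

    Even then the receiver has to read the whole destination file to compute its checksums, and with s3fs that read is itself a download from S3, so the savings may simply move from "sent" bytes to reads on the mount.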


  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable.

    In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable however, so about once a month this doesn't happen, for reasons outside of our control. This heavily impacts our business since we run with stale data in that situation. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ) but obviously restarts happen, both automatically and manually, and people forget things or systems mess up.

    What I would like to track is events that should occur or things that should exist, like the existence of a process, the execution of a program, or the creation/age of a file, and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't?

    P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, and whose connection to us we can track, might be a much better solution.
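
    In the spirit of the Munin scripts already mentioned, a minimal plugin sketch that reports both a process count and a file age, so Munin's own warning/critical thresholds do the alerting (the process name "workerd" and the file path are placeholders, not anything from the question):

        #!/bin/sh
        # Hypothetical Munin plugin: report a worker-process count and the age of last night's dump.
        case $1 in
            config)
                echo 'graph_title Expected jobs'
                echo 'procs.label worker processes'
                echo 'procs.critical 1:'        # critical if fewer than 1 process
                echo 'file_age.label dump age (hours)'
                echo 'file_age.critical 0:26'   # critical if older than 26 hours
                exit 0;;
        esac
        echo "procs.value $(pgrep -c workerd)"
        echo "file_age.value $(( ($(date +%s) - $(stat -c %Y /data/incoming/dump.csv)) / 3600 ))"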


  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me.

    Background: in my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.

    My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist would be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that using the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them (or run sed) to replace the placeholder value with the intended value? The online documentation for Perforce which I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage would work.
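
    For the read side, a change-content trigger does not see the staged files as ordinary paths on disk; the usual pattern is to pull their in-flight content through the server with p4 print and the @=changelist revision specifier. A rough sketch (depot path, script path and port are hypothetical):

        # entry in the "p4 triggers" table:
        #   stampdocs change-content //depot/docs/... "python /p4/triggers/stamp.py %changelist%"

        # inside the script, read the staged content of a file in the pending changelist:
        p4 -p perforce:1666 print -q //depot/docs/readme.txt@=12345

    Rewriting the staged archive in place is another matter and is not something p4 print offers, so treat this only as the "locate and read" half of the problem.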


  • Are email addresses mandatory for Windows 8 login names?

    - by Cedric Martin
    I've got a computer running Windows 8 and in the user accounts I can see four accounts (they're in French; here's a rough translation):

        [email protected] / administrator
        Veronique YYY / [email protected]
        ASP.NET Machine Account / local account
        guest account / the guest account is deactivated

    I've got several questions, but they're all related to email addresses and login names / accounts:

        Are email addresses mandatory for Windows 8 login names?
        Can you mix live and non-live user accounts on a Windows 8 system?
        Is it possible to have a live Windows 8 user account which is not using a @live.xx email address?
        Is it possible to have a non-live Windows 8 user account which is using a @live.xx email address?
        If the gmail.com email address of the admin is not a live Windows 8 account, does this mean I can create a "fake" email and use that as the email of a new Windows 8 account?

    Basically I don't understand very well why there are email addresses displayed on the login screen and why there are both @live.xx and @gmail.com email addresses on the same system. Answers to the questions above may help me understand a bit better what is going on. (I'm coming from a Linux / OS X background; I literally haven't used Windows in more than a decade.)


  • Can I have a single solid state drive and a RAID array on the same machine? [closed]

    - by jaminto
    Hi. To summarize, I'm looking to use a single solid state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details:

    I built a desktop that has been running 64-bit Vista on two 500 GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80 GB SATA solid-state drive, and was planning on using this as my primary drive and keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then, I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize.

    After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard which has an ATI SB600 RAID controller on board. I switched the "On board SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing, this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives are showing up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.


  • Why does Windows (file) explorer try to connect to port 80 (http) instead of just using smb?

    - by Erik
    Background: on an almost freshly installed PC I get a message along the lines of "Windows cannot find some-file-server-name. Check the spelling and try again"... when trying to access any fileshare.

    Troubleshooting so far:

        pinging works, both by IP and by name
        the almost identical PC next to this one can access the file server
        everyone else can access the file server
        the PC in question cannot access other open fileshares
        but it can connect to the internet

    And now for what I think is the interesting part: running Wireshark with ip.addr == local.ip.add.ress and ip.addr == server.ip.add.ress tells me that:

        it tries to connect over http
        the server replies, but after a few messages back and forth it stops
        the other machine, of course, just uses smb

    I guess port 80 just means it defaults to WebDAV, but I haven't been able to find anything that can cause this. Googling it, the closest thing I found was this: http://www.techrepublic.com/article/get-vista-and-samba-to-work/6353849 - but then again that was an XP PC and I wasn't able to connect to other native Windows shares (and I tried the solution anyway and it didn't work.)
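
    One diagnostic sketch, on the assumption that the WebDAV redirector (the WebClient service) is answering the UNC name before the SMB client gets a chance: stop it temporarily and see whether the share starts resolving over SMB.

        C:\> sc query WebClient
        C:\> net stop WebClient
        C:\> net use \\some-file-server-name\share

    If that helps, the longer-term fix is usually the network provider order rather than leaving the service disabled; this is only a way to confirm which component is answering.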


  • Is there a 'global media cache' in Windows 7 that may be used by third party media players?

    - by Pulse
    Here's the background. I don't use Windows Media Player or Media Centre; in fact both components have been 'turned off' via the 'Programs and Features' option. My media player of choice is a nightly build of MPC-HC, which plays virtually everything. I do, however, have VLC portable available for those rare instances when MPC-HC can't or won't play something correctly.

    This is the situation. I tend to download various media files via torrent, typically game trailers or freely available films, such as the recently released, torrent-only, Pioneer One. These files are often quite large (1 GB+), so I like to preview a file after a significant portion of it has downloaded. For the most part this works quite well, and gives me an idea about the worth of continuing the download. Sometimes, however, the file doesn't play as expected and instead plays a completely unrelated file that was previously played.

    Here's the strange thing: if I try to preview the file in MPC-HC or VLC, both players play the same previously played file, regardless of which player was originally responsible for playback. Most times it's not even a file that's been played recently. I have searched the registry for some sort of MRU cache, but have found nothing. I have made sure each player has had its respective history/cache deleted and can find nothing on disk that seems to be storing this apparently shared data. So, the question is, where are these unrelated players getting the file information from? Thanks.


  • Windows preventing running of Telnet client

    - by palswim
    At first, I had issues because Windows 7 doesn't install the Telnet client by default (also, SuperUser has a thread). So, after installing it (and restarting, like Windows asked, though completely unnecessary), I opened a command prompt and went to run my new Telnet program. I enter telnet, and receive:

        C:\Users\[USER]>telnet
        'telnet' is not recognized as an internal or external command,
        operable program or batch file.

    "That's odd," I think to myself. So, in Windows Explorer, I navigate to \Windows\System32 and see telnet.exe sitting in that folder. If I double-click on the executable file, the Telnet command prompt opens for me without a problem. So, I return to my Windows Command Prompt and enter:

        C:\Users\[USER]>\Windows\System32\telnet.exe
        '\Windows\System32\telnet.exe' is not recognized as an internal or external command,
        operable program or batch file.

    And then (grep comes from cygwin):

        C:\Users\ryan\Desktop>dir \Windows\System32 | grep telnet

    Nothing. I've disabled UAC and have no idea why my Command Prompt is lying to me. Anyone experience something similar? To recap: in Windows 7, I have installed Telnet and can see it in my System32 folder, but cannot run it via a Command Prompt.
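
    A quick check worth sketching here, on the assumption that the prompt in question is a 32-bit process (a 32-bit console, or one launched through a 32-bit parent such as a cygwin shell): on 64-bit Windows such a process has System32 silently redirected to SysWOW64, which does not contain telnet.exe, while the Sysnative alias reaches the real 64-bit System32.

        C:\Users\[USER]>dir %windir%\Sysnative\telnet.exe
        C:\Users\[USER]>%windir%\Sysnative\telnet.exe

    If those work while \Windows\System32\telnet.exe does not, file-system redirection is the culprit, and a 64-bit command prompt would see telnet normally.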


  • Apache: Setting up local test server with subdomains

    - by RC
    Hi everyone, I have XAMPP running on my desktop machine, and I do all my work on it with no issue:

        http://localhost        ---> points to public_html
        http://site1.localhost  ---> points to site 1
        http://site2.localhost  ---> points to site 2
        http://site3.localhost  ---> points to site 3

    Entering the above URLs in my web browser on the machine with Apache works great, and I can work on multiple sites within distinct subdomains. But what I want to do now is to transfer Apache and all the files to another Windows 7 machine within the LAN, but still be able to view the subdomains from my main development machine. With a vanilla XAMPP installation on the new hosting machine, entering the IP address of that machine (e.g. 192.168.1.10) on my development computer would send me to the main public_html folder. But how do I set up subdomains such that I can access them externally? For example, http://site1.devmachine. Thanks for any help.
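
    A sketch of one way to do it, with hypothetical host names and paths: give each site a name the development machine can resolve (a hosts-file entry pointing at the hosting machine), and a matching name-based virtual host on the XAMPP box.

        # On the development machine, C:\Windows\System32\drivers\etc\hosts:
        192.168.1.10    devmachine site1.devmachine site2.devmachine site3.devmachine

        # On the hosting machine, apache\conf\extra\httpd-vhosts.conf (Apache 2.2 style):
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName site1.devmachine
            DocumentRoot "C:/xampp/htdocs/site1"
        </VirtualHost>
        <VirtualHost *:80>
            ServerName site2.devmachine
            DocumentRoot "C:/xampp/htdocs/site2"
        </VirtualHost>

    After restarting Apache, http://site1.devmachine from the development machine should hit the site1 DocumentRoot; the folder names above are assumptions, not XAMPP defaults.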


  • How can you exclude folders from appearing in the Recent Items feature of Windows 7 start menu?

    - by Jordan Weinstein
    To be clear, I like the Recent Items feature; I do not want to turn it off. I work at a law firm where we integrate Office with a document management system (DMS). If Recent Items is turned on, those DMS-opened documents will show up in the recent items of the Windows 7 start menu when hovering over Word (or Excel\PPT etc.). However, the integration doesn't work correctly, so if a user were to click on one of those, something wouldn't work right. In short, we've always needed to turn off Recent Items completely for a DMS-integrated workstation.

    Curious if anyone knows of a way to exclude a directory from being "captured", so to speak. When you open a DMS document, the file gets copied to a local directory where it is saved as you work, until you close it and it gets checked back in to the DMS. I'd like to be able to exclude that local directory from Recent Items, so local files in My Docs and on the Desktop would show up in Recent Items, but not DMS-opened documents. Hope this makes sense.


  • Eclipse CDT won't recognize the standard library

    - by Mike G
    I work primarily on my desktop, but I started working on my Mac laptop and noticed a problem with Eclipse CDT. The standard library was underlined yellow and cout wouldn't work (it wouldn't recognize/find it). I tried restarting the program and that didn't work. I then tried to see if Xcode would work, found that the version of Xcode was too old, and updated Xcode. Eclipse still didn't work, but Xcode did. I tried re-installing Eclipse (the new version) and re-installing CDT. It still wouldn't work. Restarting my computer didn't work either. I'm not sure if this helps (or even matters/applies), but when I type g++ --version into Terminal, it doesn't work. (I don't know if that matters, but some tutorial told me to do that to check if the compiler was working.)

    So, in review, I have:

        re-installed Eclipse
        restarted my computer
        re-installed Xcode (which I think has all the compilers Eclipse uses)
        updated Eclipse
        typed g++ --version to test the compiler
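
    Since g++ --version fails in Terminal, a reasonable first check (a sketch, assuming the command-line tools simply are not on the PATH) is to confirm the toolchain outside Eclipse before blaming CDT:

        $ which g++
        $ g++ --version
        $ xcode-select -print-path

    If which comes back empty, installing Xcode's command-line tools (the "UNIX Development" / command line tools component of the Xcode installer) and then letting CDT rebuild its index is the likely path; CDT can only resolve the standard headers once a working g++ and its include directories exist on the machine.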


  • Removing expired self-signed certificate in IE9 (created with IIS7.5)

    - by Itison
    Over 1 year ago, I created a self-signed certificate in IIS 7.5 and exported it. I then installed it for IE9 (it may have been IE8 at the time), which worked fine until a year later when the certificate expired. I have put this off, but today I created a new self-signed certificate in IIS, exported it, and attempted to install it in IE9. The problem is that for whatever reason, IE cannot seem to forget about the old, expired certificate. Here's what I tried initially:

        1. Accessed my ASP.NET application and saw the certificate error.
        2. Clicked "View certificates".
        3. Clicked "Install Certificate" and then Next/Next/Finish.

    At this point, it says the import is successful, but it still only shows the expired certificate. I've tried simply double-clicking on the exported certificate on my desktop. Initially I chose to automatically select the certificate store, but then I tried it again and manually selected "Trusted Root Certification Authorities". I've also tried dragging/dropping the certificate over an IE window and clicking "Open". The process is then exactly the same as if I had double-clicked on the certificate, but I had hoped that this would somehow specifically tell IE to use this certificate.

    I tried opening MMC and, with the Certificates snap-in, confirmed that the new certificate was added under "Trusted Root Certification Authorities". It was also under my "Personal" certificates (I guess this is where it goes by default). Nothing worked, so I went through every folder in MMC and deleted the expired certificate. I also deleted the expired certificate in IIS. Nothing has worked. Any ideas? I see no clear resolution and I can't seem to find any posts related to this issue.
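
    A command-line view of the stores sometimes shows what the certificate dialogs hide; a sketch using certutil (the serial number in the last line is a placeholder):

        C:\> certutil -store Root | findstr /i "Subject NotAfter Serial"
        C:\> certutil -user -store Root | findstr /i "Subject NotAfter Serial"
        C:\> certutil -user -delstore Root <serial-number-of-the-expired-cert>

    The first two list the machine and current-user Trusted Root stores with their expiry dates, which makes it easy to spot a stale copy living in a store the MMC view wasn't pointed at; the third removes an entry by serial number. Whether that is actually IE's problem here is an assumption - IE also caches SSL state per site until the browser is fully restarted.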


  • Cisco AnyConnect VPN client - prevent connecting as work network

    - by Opmet
    From Windows 7 I'm using "Cisco AnyConnect Secure Mobility Client 3.0" to connect to our corporate network. Every time I establish the VPN connection Windows will set the type as "work network". I don't want this. So I go to "network and sharing center" and manually / interactively change it to "public network". But I have to repeat it for every new VPN connection. Is there any way to make Windows remember / persist this configuration? Can it be configured in the VPN client? Do our IT admins need to change something at server end? Motivation: A "work network" per default uses different firewall settings that allows for stuff like "network discovery" and "file shares". But I just need "remote desktop" (mstsc). Additional info: Our IT admins claimed this would be Windows default behaviour and there was nothing we could do about it: Windows would always initiate a VPN connection as "work network". Based on this statement I assume this is a "general" issue and went ahead posting here (at superuser.com).
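
    If the admins cannot change anything on the VPN head end, one local workaround (a sketch, not a policy recommendation) is to pin the network category of the profile the connection creates, either via secpol.msc > Network List Manager Policies, or directly in the registry; the GUID below is a placeholder for whichever profile matches the VPN's ProfileName:

        C:\> reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles" /s /v ProfileName
        C:\> reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles\{GUID-of-the-VPN-profile}" /v Category /t REG_DWORD /d 0 /f

    Category 0 is public, 1 is private/home, 2 is domain/work; whether AnyConnect recreates the profile with a new GUID on each connect is worth checking before relying on this.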


  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying off some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match).

    I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-Ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.
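
    To narrow down which hop introduces the corruption, hashing the same file at each stage is cheap; Server 2008 R2 ships certutil, so no extra tools are needed in the guest (the path below is a placeholder):

        C:\> certutil -hashfile D:\staging\install.bin MD5

    Comparing that value on the co-worker's machine, on the copy sitting in the guest, and on any intermediate copy should show whether the bytes change in the network transfer, in the virtual disk layer, or on the RAID 0 array underneath - the last being worth extra suspicion since there is no redundancy to mask a failing drive.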


  • Hibernate & Sleep broken after IE 9 RTM installation in Windows 7 x64.

    - by AKa
    I have a question about hibernation. I installed Internet Explorer 9 RTM x64 on my Windows 7 x64 SP1 desktop machine. After this, the computer doesn't enter hibernation or (hybrid) sleep properly. After I hibernate the computer, the monitor goes blank and the keyboard and mouse are inactive, but the machine is still running and the only way to switch it off is with the power button. This is then recognized on the next start as an improper shutdown, with a log entry saying "The previous system shutdown at xx:xx:xx on xx.xx.xx was unexpected" and the menu with the safe mode option.

    I'm not sure whether it has anything to do with the Internet Explorer installation, but I'm certain that before this I never had any problems with hibernation or (hybrid) sleep. There is nothing suspect in the Windows logs. I have switched hibernation off and on, installed new drivers for the mainboard, graphics and network card, and checked the hard disk; nothing helped. This is really a pity, because I don't like to switch the computer completely off, as it takes longer to boot. Any suggestions?
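
    A diagnostic sketch rather than a fix: Windows 7's powercfg can show whether a driver or device started blocking the S3/S4 states after the update, which is a more direct lead than the shutdown log.

        C:\> powercfg /a
        C:\> powercfg -energy
        C:\> powercfg -devicequery wake_armed

    powercfg /a lists which sleep states are currently available, and powercfg -energy (run from an elevated prompt) writes an energy-report.html that names drivers holding power requests or failing the sleep transition; that report is the first place to look for whatever the IE9 update chain, or a graphics driver updated alongside it, may have changed.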


  • How to make Microsoft JVM work on Windows 7?

    - by rics
    I am struggling with the following problem: I cannot install MS JVM 3810 properly on Windows 7. When I start Internet Explorer 8 without starting any Java 1.1 programs, choosing Java custom settings under Internet Options causes the browser to crash. I have some Java 1.1 programs that work well in Internet Explorer 8 on Windows XP after the installation of MS JVM 3810. I know that it is not advised to use this old JVM, but porting the programs to newer Java is not a short-term option since they contain 3rd-party components; a complete rewrite is a long-term plan.

    Strangely, jview and the applet viewer (jview /a) work from a console, so MS JVM 3810 is not completely busted, IE 8 just does not like it. The problem with the applet viewer is that it cannot connect to the server even though both signed and unsigned content in Java custom settings have been set to Enable all. (Since Java custom settings was unreachable due to the crash, the modifications - including My Computer - were performed through the registry and pre-checked to behave correctly on Windows XP and Internet Explorer 8.) If the applet viewer were working then I could at least think of a workaround. Is there a way to configure MS JVM or jview properly on Windows 7? Other options would be:

        checking Internet Explorer 9 Beta
        using VirtualBox with Windows XP and an older IE in it
        delaying the Windows 7 upgrade
        ...

    Update: Finally we have modified all the programs to work both as applet and as application. This way the programs can still be used from the browser on older Windows versions, while on Windows 7 the applications are started from the desktop. Installation on all user machines can easily be solved since they already have a large common application drive. The code update is fortunately only a few lines of modification: including a main method in the applet class. Furthermore, instead of the starting HTML page, a bat file is used to set the classpath before startup with jview.


  • Run as another user, but also as administrator

    - by Tewr
    I am trying to debug a virtual machine (VM) running on a remote computer, from my workstation (A). Both VM and A are running Windows 7 Enterprise. Apparently, I need to start the Remote Debugger Service (RDS) on VM as an administrator. Apparently, I also need to run RDS as the user Tewr logged in on A (domain: DOM). VM runs the services I need to debug, as well as the remote desktop interface, with an account VMUSER in a domain called VMDOMAIN.

    I manage to start RDS as administrator, but then the RDS process is owned by VMUSER and that's not good enough. I also manage to run RDS as DOM\Tewr, but then not as an administrator. I have added DOM\Tewr as an administrator on VM, but that's not good enough because the process is still not run as administrator. How can I run the RDS process as DOM\Tewr and "As Administrator", while logged on in Windows as VMDOMAIN\VMUSER? (Note: I have tried creating an account with the same credentials / password as VMUSER, as hinted in the MS article above, but with no luck...)
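
    One hedged sketch, since runas alone cannot hand out an elevated token: Sysinternals PsExec can start a process on the VM under the other account and, with -h, with that account's elevated token (the path to the remote debugger below is a placeholder for wherever msvsmon.exe lives on the VM).

        C:\> psexec \\VM -u DOM\Tewr -h -i cmd.exe
        C:\> rem ...then start the debugger from that elevated DOM\Tewr prompt, e.g.:
        C:\> "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger\x64\msvsmon.exe"

    PsExec prompts for the DOM\Tewr password when -p is omitted. Whether the remote debugger is happy being launched this way rather than by the logged-on desktop user is exactly the detail to verify, so treat this as an experiment rather than the answer.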


  • Is it possible to command a common router without using the web interface?

    - by MDeSchaepmeester
    Some background: the internet arrangement in my student home is really weird. There is one ethernet outlet and several wifi hotspots. Either way requires a login through a web site to get internet access. This is annoying, as each device needs to log in separately, and with a PS3 for example it is impossible to get connected at all, since the web login procedure doesn't work. Therefore I have installed a D-Link DIR-635 router which is connected to the ethernet outlet. It has DHCP enabled so it uses NAT, but whatever it is connected to also uses NAT, and I've read this should not work. A fellow student tried it with an Apple Airport but that keeps giving errors related to NAT after NAT. Anyway, my setup does work, so bonus points if you can clarify this. I need to log in to the web site I mentioned earlier with any device, after which all devices in my LAN have connectivity. This is great. Except...

    In short: from time to time, I lose internet connectivity and my D-Link DIR-635 router needs to do a DHCP renew. I can do this via the web interface, but my life would be easier if I could just run a cmd file which tells my router to do this without all the hassle. This would set up a connection to my router and execute the proper command. I have tried googling but couldn't find much helpful stuff.
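
    Consumer D-Link firmware has no documented command-line interface, but the web UI's "DHCP renew" button is just an HTTP request, so one hedged approach is to capture that request once (with the browser's developer tools, or Wireshark, while clicking the button) and replay it from a script. Everything below - the address, the endpoint name and the field - is hypothetical and would need to be taken from the captured request:

        curl -s -u admin:ROUTER_PASSWORD "http://192.168.0.1/wan_dhcp.cgi?action=renew"

    If the UI uses a login form and session cookie instead of basic auth, the capture will show that too, and curl's -c/-b cookie options can reproduce it; the point is only that whatever the browser sends, a cmd file running curl can send as well.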


  • Mac always boots with incorrect display gamma (for years now including Lion)

    - by Alex Wayne
    I think somewhere, something got installed, but I have no idea what or how to fix it :(

    Basically, my old MacBook Pro running 10.5 Leopard had a problem where on boot it would show everything on the screen in a sort of crunched color space. Everything below 15% white would be pure black, everything above 85% white would be pure white, and all colors look to be a touch more saturated. It's garish. To fix it, I found that I could boot into almost any fullscreen 3D game. When the game launches the colors are still off, but when I then quit the game and return to the desktop, everything is normal again. I've noticed Blizzard games work most reliably for this (World of Warcraft or Starcraft 2).

    This problem has followed me through the years. When I upgraded to an iMac I migrated everything over to it, and the issue now happens on the iMac too. I then got a new MacBook Pro for work and migrated my iMac over to that, and it has the problem too. I had thought that it was an OS bug, but upgrading to 10.6 Snow Leopard didn't fix it and neither did 10.7 Lion. Furthermore, I can't find any reference on any forum or help site where anyone else has this problem. If anyone has any idea what processes or settings or apps I should look at to figure out why this is happening, I sure would appreciate it! It looks sort of irresponsible when I open my laptop in the office to work and then boot up Starcraft 2 full screen...


  • How is it possible to list all folders that a particular user/group has permissions on?

    - by Lord Torgamus
    Is it possible to list all folders/files that a given group has explicit permissions on, for a machine running Windows Server 2003? If so, how? It would be nice to see inherited permissions as well, but I could do with just explicit permissions. A little background: I'm trying to update groups/permissions on a test server. One of the groups, Devs, wasn't implemented correctly when it was created, and my goal is to remove it from the system. It has been replaced by LeadDevelopers, which has permissions on many — but naturally not all — of the same folders. I want to make sure that I don't accidentally orphan any folders or cause any other issues when I remove Devs. It did have some admin-level permissions. EDIT: The answers so far — at least *cacls and AccessEnum — provide a way to find out which groups/users have permissions on known directories/files. I actually want the reverse of this behavior: I know the group, and I'm looking for the directories/files for which the group has permissions. Also, as I noted in a comment, the Devs group is not itself a member of any other group.
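
    Two command-line sketches for the "given the group, list the paths" direction; the drive letter is a placeholder and both walk the whole tree, so expect them to take a while:

        C:\> accesschk.exe -s -d "MYDOMAIN\Devs" D:\ > devs-dirs.txt
        C:\> icacls D:\ /findsid MYDOMAIN\Devs /t /c /l > devs-acls.txt

    Sysinternals AccessChk (-s recurse, -d directories only) reports what access the named group has on each directory, and icacls /findsid lists every file or folder whose ACL explicitly mentions the group's SID, which maps well to "explicit permissions only". icacls shipped with Server 2003 SP2, though older builds may lack /findsid; AccessChk is a separate download either way.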


  • What's Keeping My Computer Awake?

    - by phantomdata
    Hey guys, first the question: how do I figure out what is preventing my Windows 7 computer from going into sleep mode? Second, some background...

    I've been struggling with this for a few days and am utterly perplexed. I set up sleep mode on my Windows 7 PC a few weeks ago, and all was well. The PC would sleep as expected and I was snug in the knowledge that my computer was saving power and some wear and tear on the components (we'll leave the 'is it better to sleep' debate for another thread/day, please don't start it). Well, I noticed the other night that my system stopped ever going to sleep. I set the sleep time down to 1 minute and wandered fully away from the PC (ensuring that no errant mouse or keyboard movements would occur) and the PC never went to sleep. I've also observed this over longer intervals, such as overnight.

    I have sleep mode enabled, and of course "multimedia settings - When Sharing Media" is set to allow the computer to sleep. "powercfg -lastwake" shows nothing of interest, since it never goes to sleep and can't wake up. "powercfg /requests" shows 3 entries - all "[DRIVER] ?". I assume that 2 of these are my mouse and keyboard, as I've recently used them to run the powercfg command. I'm at a loss for the third though. I've unhooked all USB peripherals save for my keyboard and mouse. Wake on LAN is disabled in my BIOS. I know that you can disable all apps from waking/preventing sleep, but I want that ability to remain for those apps that do legitimately need to keep the system awake. So, does anyone know of a way to figure out what the 3rd phantom "[DRIVER] ?" is in powercfg /requests?
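
    Two powercfg angles that sometimes put a name on an anonymous "[DRIVER] ?" entry, plus the per-caller override this question is really after (the driver name in the last line is a placeholder copied from whatever /requests eventually shows):

        C:\> powercfg /requests
        C:\> powercfg -energy -output C:\energy-report.html -duration 60
        C:\> powercfg -requestsoverride DRIVER "Name As Shown In /requests" SYSTEM

    The energy report frequently identifies the device or driver behind an outstanding power request even when /requests only prints a question mark, and -requestsoverride ignores one specific caller's SYSTEM request rather than disabling the mechanism for every application.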


  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API tools from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I want to run an instance built from this AMI, where does the instance run? In their Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined by ec2-describe-regions) - how would I do that? It seems I have a poor understanding of the relation between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work.

    I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got an error like this:

        Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64)

    My local computer is 32-bit, but I do not want to run the instance on the local computer - I want it on Amazon's servers.
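
    Launching always happens on Amazon's side, in whichever region the command targets; the local machine's 32-bit architecture is irrelevant, and the error only means the instance type passed to ec2-run-instances was a 32-bit (i386) type while the registered AMI/AKI is x86_64. A sketch with placeholder IDs, choosing a 64-bit-capable type:

        $ ec2-describe-images ami-xxxxxxxx --region us-east-1      # confirm the AMI's architecture field
        $ ec2-run-instances ami-xxxxxxxx --instance-type t1.micro --region us-east-1 --key my-keypair

    t1.micro (and larger 64-bit types such as m1.large) matches an x86_64 image. --region decides where the instance lives, and launching in a different region requires the AMI to exist there as well, which for images of that era generally meant recreating the snapshot/AMI in the target region.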


  • Network bandwidth usage dashboard?

    - by SkippyFlipjack
    I have a couple of wifi access points hooked up to my home network, one of which I keep unsecured for some development I do; there are only a couple other homes within range and they've got their own wifi so it's not a big concern. I also have a Sonos system, Tivo, Roku, a couple laptops, a couple phones, an iPad and a desktop machine, all of which are internet-smart. So when my internet bandwidth tanks and it takes five minutes to load a YouTube video, I want to know what's going on, and there are many potential culprits. I'd like to be able to plug my MacBook into the primary router and see a nice little dashboard of the units on the network and what kind of bandwidth each is using at that moment. I could figure this out from WireShark or tcpdump but figure there has to be an easier way. I've tried a few different commercial products but none really presented the right info. Suggestions? (This may be a question for superuser since my Apple Time Capsule's SNMP capabilities are limited, but I figure admins of small business networks would have dealt w/ the same issue..)


  • How to stop Vista from auto changing video resolution?

    - by bialix
    I have a new Acer Aspire Revo R3600 computer with Vista pre-installed. The computer has an NVidia video adapter. When connecting a 17" LCD monitor (LG L1742S) via VGA cable it works fine: I can change the resolution of the display from the maximum of 1920*1024 down to some other value, and after a reboot the settings are restored correctly. But when I connect a bigger full HD 1920*1080 display (LG E2250) via VGA cable, then on every boot I have the same problem: I see the boot progress window, then the MS logo, then the welcome screen, then I start to see the desktop, and suddenly the monitor switches off and shows me a message about an unsupported frequency of the input signal. As I understand it, Vista tries to auto-change the resolution and sets wrong parameters.

    I've tried booting into safe mode and into low-resolution mode; every time I have the same problem: Vista boots up and suddenly the monitor stops working. I've tried connecting this monitor to a notebook with Windows XP and it has no problem driving the display at its native resolution. How can I disable this display resolution auto-changer in Vista? Or maybe there is another workaround?

