Search Results

Search found 28411 results on 1137 pages for 'think'.


  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm making my first attempt at setting up a Node server on an Amazon EC2 Linux instance, and I think I've made it quite far. The first problem I ran into was that the connection timed out while building Node, so it took three attempts until I got this:

        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node

    which tells me Node is built, although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even if I'm doing this:

        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s
        PATH=/usr/local/bin:$PATH make install

    I'm still getting this after cloning:

        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2

    Question: since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is nothing written to disk until I call make install, so I don't need to worry? Thanks for helping out!
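
    A hedged sketch of one way out of this: "Error 127" is bash's "command not found", and the build log shows it is node that the root shell cannot find. A git clone also writes everything under the single directory it creates, so deleting that directory removes the "false clone" completely. Paths below assume Node ended up in /usr/local/bin:

        # Make node visible on root's default PATH (a symlink is the simplest fix)
        sudo ln -s /usr/local/bin/node /usr/bin/node
        # Remove the stray clones (adjust paths if yours differ)
        rm -rf ~/npm
        sudo rm -rf /root/npm
        # Re-clone and install
        git clone git://github.com/isaacs/npm.git ~/npm
        cd ~/npm
        sudo make install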

  • Is there a way to know what the Windows Disk Cleanup utility will delete?

    - by Cam Jackson
    When I run the Disk Cleanup utility that's built into Windows 8, it tells me that it can free up 53GB by deleting 'Temporary Files'. However, a CCleaner analysis on default settings only finds about 300MB worth of space to free up, so I'm wondering what Disk Cleanup has found that CCleaner does not. Note that this question appears to be similar to what I'm asking, but the accepted answer there says that 'Temporary Files' refers to %TEMP%. I've already cleared out most of C:\Users\Cam\AppData\Local\Temp, and it now has only 230MB of stuff in it, even with system files showing. So where is this 53GB located? Is there a way to find out what it is?

    Edit: I should note that this is on a 110GB SSD, so it's almost half the drive. In fact I'm only using 86GB, so if it's really going to clear out 53GB, that would be more than 60% of the stuff on my C drive. I'm starting to think that Disk Cleanup caches its analysis and hasn't updated since I started cleaning up the drive earlier today - although when I run it, it says that it's 'Calculating' how much space can be saved, and it takes about 5-10 seconds to do so. Hmmm...

    Edit 2: Here is what my hard drive looks like, according to SpaceMonger (right click > open image in new tab, so you can see it properly): You can see why I was starting to think that the 53GB figure is actually wrong. Even if 'Temporary Files' includes my hiberfil and everything in WinSxS (about 13GB total), that would be 26GB, which is only halfway there. It's hard to see where there's 53GB of stuff to delete.
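
    One read-only way to see what the utility can even consider, as a sketch: every Disk Cleanup category is a handler registered under a single registry key, so listing that key enumerates the categories (though not their computed sizes):

        REM Read-only query; each subkey is one Disk Cleanup category handler
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches"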

  • On a dual-GPU laptop, is using the discrete GPU ever more power efficient?

    - by Mahmoud Al-Qudsi
    Given a laptop with a dual integrated/discrete GPU configuration, is it ever more power efficient to use the discrete GPU instead of the integrated one? Obviously when writing an email or working on a spreadsheet, the integrated GPU will always use less power. But let's say you're doing something graphics-medium, not graphics-intensive/heavy - is there a point where it actually makes sense to fire up the discrete GPU, not for performance but for power-saving reasons?

    Off the top of my head, I can think of a scenario where the discrete GPU supports hardware decoding of a particular video codec - I'd imagine there is a "price point" where using that GPU saves more energy than decoding the video fully in software would. But I think most GPUs, integrated or discrete, decode pretty much just plain-Jane H.264. Maybe there is something more complicated, though: if you're doing something like desktop/windowing animations, or a Flash animation on a website (not an embedded Flash video), maybe the discrete GPU will use enough less power to make up for switching to it?

    I guess this question can be summed up as: can you say beyond doubt that, on a laptop with two GPUs where you don't care about performance, you should always use the integrated GPU for maximum battery life?

  • Changing externally visible IP on a multi-IP router?

    - by AlternateZ
    I work at a public library and I'm trying to configure OCLC's EzProxy software. I've run into a problem and I think it's related to our network config. I'm punching above my weight here a little, so I need some help: I think I'm trying to configure 1:1 NAT, but I'm not sure how, or whether our hardware supports it.

    The EzProxy machine is on an internet line which supports multiple external IPs. Our router is a Billion BiGuard30. There's another server on this line; let's say its IP is x.x.x.9. The EzProxy machine is x.x.x.11. I've set up port forwarding from x.x.x.11 on the http ports to the EzProxy machine. Browsing to x.x.x.11 from an external PC works fine - we get the EzProxy page we are serving. However, if we go to something like WhatIsMyIP from the EzProxy machine, it says its IP is x.x.x.9. This causes problems with our user authentication software.

    How do we make the rest of the internet see that the machine is x.x.x.11? There doesn't seem to be any "outbound port forwarding" on the Billion router, nor is there any "1:1 NAT" option in its config webpage. The EzProxy machine is running Ubuntu 12.04, if that helps.

  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks: the first is 250GB containing the "C:" drive, the second is 1TB containing "D:". I'd like to create a new virtual machine, stored on D:, which is a copy of the Windows XP Home installation that is currently on C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, while still being able to access the old XP installation if necessary.)

    The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows I booted from an Ubuntu Live CD in order to execute the Linux commands while the Windows system wasn't running. With this method the virtual machine would always blue screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc. Unfortunately for me, my XP disc is scratched and will not boot, so I was unable to try a repair install.

    The second method I tried was to use "VMware Converter Standalone Client". This tool executed without any errors, but again produced a virtual machine that blue screens on startup with the same "STOP" message.

    Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try a more manual process to create the cloned virtual machine: install a fresh copy of Windows XP to a virtual machine, then once that is booting OK, ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems, if the issue is related to the MBR or partition table in some way.
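
    For what it's worth, STOP 0x0000007B is INACCESSIBLE_BOOT_DEVICE, which after a physical-to-virtual move usually points at the storage controller driver (IDE vs. SCSI/AHCI) rather than the MBR or partition table, so the clone-over-a-fresh-install idea stands a reasonable chance. A sketch of that clone step, assuming the VM's virtual disk is attached so it appears as /dev/sdb (with the freshly installed XP partition at /dev/sdb1) and /dev/sda1 is the physical C: partition:

        # ntfsclone takes the target after --overwrite, then the source
        sudo ntfsclone --overwrite /dev/sdb1 /dev/sda1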

  • How to deal with DELL support system?

    - by Nishant Kumar
    We have purchased a Dell Optiplex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard's keys have not been working properly: I had to press those keys twice; the first press did nothing, and the second printed the character two times (as if it were buffering the first character). The two keys that were not working properly:

        Grave accent (below the Esc key: `)
        Double quote (left of the Enter key: ")

    We registered our complaint with Dell and they suggested (in a hard-to-understand and weird English accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC - and it worked well with a different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested re-installing the OS.

    The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't re-install the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver.

    Arguments: I am a software engineer and don't think it is feasible to re-install the entire system for simple problems. Also, this problem has been present since the fresh system installation, so I don't think a re-install will solve it. Finally, I had to find the solution myself, and got it; now I want to convey my disappointment to Dell, or at least tell them that they should improve their support system rather than advise re-installing an entire system for such simple problems.

    Notes: We have purchased 5 years of NEXT-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). It is Dell India's support system.

    So can anyone tell me how to deal with the Dell support system officially, so that they will pay more attention in the near future? Thanks

  • Nagios 403 forbidden, indexes?

    - by Georgi
    I installed Nagios under FreeBSD 9, but can't get it to be reachable publicly in a browser (from other PCs). I think the problem is with indexes, or that there is no index file (instead of main.php). Apache says the syntax is OK. The permissions of the directory are 777. The logs print:

        Directory index forbidden by Options directive: /usr/local/www/nagios/

    This is my configuration:

        ScriptAlias /nagios/cgi-bin/ /usr/local/www/nagios/cgi-bin/
        Alias /nagios /usr/local/www/nagios/

        <Directory /usr/local/www/nagios>
            Options +Indexes FollowSymLinks +ExecCGI
            AllowOverride Indexes AuthConfig FileInfo
            Order allow,deny
            Allow from all
            AuthName "Nagios Access"
            AuthType Basic
            AuthUSerFile /usr/local/etc/nagios/htpasswd.users
            Require valid-user
        </Directory>

        <Directory /usr/local/www/nagios/cgi-bin>
            Options +ExecCGI
            AllowOverride None
            Order allow,deny
            Allow from all
            AuthName "Nagios Access"
            AuthType Basic
            AuthUSerFile /usr/local/etc/nagios/htpasswd.users
            Require valid-user
        </Directory>

    I think the problem is in the indexes, maybe? When I remove the Options line, the site is public and available, but it lists the files and says that indexes are forbidden.
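
    A hedged sketch of two things worth trying here: Apache normally refuses to mix bare options (FollowSymLinks) with +/- prefixed ones in a single Options line, and the Nagios web root usually ships an index.php, so pointing DirectoryIndex at it avoids relying on directory indexes at all:

        <Directory /usr/local/www/nagios>
            Options +Indexes +FollowSymLinks +ExecCGI
            DirectoryIndex index.php
            # keep the AllowOverride and Auth* directives from the block above
        </Directory>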

  • NRPE unable to read output, but why?

    - by ticktockhouse
    I have this problem with NRPE; all the stuff I've found so far on the net seems to point me at things I've already tried. Running:

        # /usr/local/nagios/plugins/check_nrpe -H nrpeclient

    gives "NRPE v2.12" as expected. Running the command by hand, as defined in nrpe.cfg on the client:

        command[check_openmanage]=/usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge

    gives the expected response. But if I try to run the command from the Nagios server, I get the following:

        # /usr/local/nagios/plugins/check_nrpe -H comxps -c check_openmanage
        NRPE: Unable to read output

    Can anyone think of anywhere else I might have made a mistake? I've done the same thing on multiple other servers with no problem. The only difference I can think of is that this box is RHEL 5 based, whereas the others are RHEL 4 based. The two things I've tested above are what most people seem to suggest when others have had this problem.

    I should mention that I get a weird error in the logs when I restart nrpe:

        nrpe[14534]: Unable to open config file '/usr/local/nagios/etc/nrpe.cfg' for reading
        nrpe[14534]: Continuing with errors...
        nrpe[14535]: Starting up daemon
        nrpe[14535]: Warning: Daemon is configured to accept command arguments from clients!
        nrpe[14535]: Listening for connections on port 5666
        nrpe[14535]: Allowing connections from: bodbck,combck,nam-bck

    even though it's plainly reading that /usr/local/nagios/etc/nrpe.cfg file to get the stuff it reports further down.
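
    A few client-side checks, as a sketch (the NRPE daemon typically runs as the nagios user, and RHEL 5 enables SELinux by default, which RHEL 4 boxes often did not):

        # Does the plugin produce output when run as the daemon's user?
        sudo -u nagios /usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge
        # The startup error above hints at a permissions problem on the config file
        ls -l /usr/local/nagios/etc/nrpe.cfg
        # If this prints Enforcing, look for denials in /var/log/audit/audit.log
        getenforce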

  • Auto-archive IMAP mail folders on OS X

    - by Pradeep
    Hi, I am trying to achieve the following:

    1. Download all messages from the mail server (and remove downloaded messages from the server).
    2. Downloaded messages should go into a local mailbox, preserving the folder structure as it was defined on the server.
    3. The download process should be automatic and shouldn't create duplicates.

    I am on OS X and looking for solutions using Apple Mail, Thunderbird or similar. So far I have found that POP is not the way to go (as it loses the folder structure and can potentially cause duplicates). The solution described here seems very good but isn't yet available for Thunderbird or Apple Mail: http://getsatisfaction.com/mozilla_messaging/topics/auto_archive_and_keep_folder_structure.

    The other alternative is Outlook, which has auto-archive, but it is paid and I think it exports to PST instead of the more common mbox format. Yet another alternative is http://www.pop4.org/, which adds folder management to POP, but I don't think it is going to become usable soon.

    Any other, better solutions? Thank you
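
    Not Apple Mail or Thunderbird, but one hedged possibility: OfflineIMAP mirrors an IMAP folder tree into local Maildir folders, preserving the server's folder structure and avoiding duplicates by design. A minimal sketch of its config (host and user names are placeholders; note that it synchronizes rather than moves, so emptying the server side would still be a separate step):

        # ~/.offlineimaprc - a minimal sketch
        [general]
        accounts = Main

        [Account Main]
        localrepository = Local
        remoterepository = Remote

        [Repository Local]
        type = Maildir
        localfolders = ~/Mail

        [Repository Remote]
        type = IMAP
        remotehost = imap.example.com
        remoteuser = john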

  • Can I use this power supply + case combination without causing problems?

    - by evan
    I am putting together a computer with an Antec P180 case and a Thermaltake TR2 RX 650W power supply. The problem is that the Antec P180 case has a separate compartment for the power supply: an opening for the on/off switch and AC connector on one side, a wall with a small hole for routing cables on top, a wall on the bottom, and on the other side a fan which pushes air from the hard drive compartment into the power supply compartment.

    I think the design of the case assumes the power supply's fan is on the side next to the on/off switch, but the fan on my power supply is on top, which makes me worry about overheating it. There is about half an inch between the top of the power supply and the wall, and the other fan should keep air flowing to push out the air that the power supply pushes upwards. Do you think this setup will work, or should I get another power supply? Thanks!

    PS: This computer will be running an Ubuntu server, so it will always be on, but the rest of the components shouldn't generate as much heat as they would on, say, a gaming machine.

  • DisableCrossAccountCopy not working on some Outlook installs, working on others, both going against Exchange

    - by MikeBaz
    As part of a mail migration project from one Exchange organization to another, we need to be able to prevent users from moving/copying messages between their accounts in each organization. (Yes, users will think this is evil; no, it's not my decision; yes, users will hate us.) Luckily, we thought, Outlook 2010 provides the DisableCrossAccountCopy registry value/policy (cf. http://technet.microsoft.com/en-us/library/ff800883.aspx). (Because you can't have multiple Exchange organizations in a single profile before Outlook 2010, this only matters on Outlook 2010. Yes, I'm ignoring copy/move to/from the filesystem for the sake of this question.)

    In our test lab, in a test forest with a test Exchange organization, with a second Exchange account added to the profile in either of the "real" Exchange organizations, and with the value set to "*", everything works as expected. On a workstation in one of the production domains, however, the setting does not seem to work. We have tried it under HKCU, HKLM, HKCU\Software\Policies, and HKLM\Software\Policies. It is simply ignored. The value was set in the OCT on a test machine, but the OCT (and the ADM/ADMX file) have the wrong type for the value. We have located the value in the registry and removed it everywhere it is found, we think, and put it back in HKCU, but it still isn't taking.

    At the moment, a clean Outlook install is not an option - and even if it were, we would still need to know how to fix the pushed copy (I didn't push the copy out to thousands of machines; I've just been asked to help clean up the current mess). Thoughts?
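
    For reference, a hedged sketch of the shape being tried; the type may matter here, since the value is documented as a multi-string, and an ADM/ADMX with the wrong type could be writing REG_SZ instead (the 14.0 path is Outlook 2010's; "*" means restrict every account):

        REM A sketch; verify the exact path and type against the TechNet article above
        reg add "HKCU\Software\Policies\Microsoft\Office\14.0\Outlook" /v DisableCrossAccountCopy /t REG_MULTI_SZ /d "*" /f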

  • Possible DNS issue?

    - by durilai
    I am having an issue which I think stems from DNS. I have two servers. Server 1 is an AD server with DNS, which was configured automatically when installing AD. The second server is a web server that is part of the domain, but is not AD or any other role. I can remote desktop in from server 1 using the internal IP address, but when I attempt to connect from any other computer it fails, even though that computer can connect to server 1. I am able to ping both servers, as well as nslookup both using their FQDNs. I am also able to telnet to port 3389. Any help is appreciated.

    UPDATE: I no longer think it is DNS, but I'm not sure what it is. The remote desktop session connects and I get to the login prompt, but when I start to enter credentials it disconnects. I am then unable to reconnect. If I wait about 10 minutes it will let me try again, but with the same results. UGH!!!

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers suffer from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's as if Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to notice changes to its served files more quickly?

    Thoughts:

    1. I read that the disks on virtual machines are much slower (since they are network attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?

    2. The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    3. We do use the APC opcode cache, but I don't think it's the issue, based on experimentation. The PHP code is in different file paths for each deployment, and we ensure stat=1.

    Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me. One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
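
    One common mitigation, sketched below: ln -fs on an existing link unlinks and recreates it in two steps (and, pointed at a directory symlink, can even create the new link inside the old target), whereas rename(2) is atomic, so Apache never observes a missing or half-switched path. The release paths are placeholders:

        # Build the new link under a temporary name, then rename it over the old one
        ln -s /var/www/releases/20111015 /var/www/current.tmp
        mv -T /var/www/current.tmp /var/www/current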

  • OpenSSL response 404 issue on centOS 6

    - by dsp_099
    I followed this tutorial (though it's for 5.2, I figured I'd be alright). The changes I had to make, which seemed to work:

        Rename ca.csr to ca.cslr (that's the one the command generated)
        List it in ssl.conf as ca.cslr instead of ca.csr

    I have the following in httpd.conf:

        <VirtualHost *:80>
            DocumentRoot /etc/test
            ServerName site.com
        </VirtualHost>

        <VirtualHost *:433>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <Directory /etc/test>
                AllowOverride All
            </Directory>
            DocumentRoot /etc/test
            ServerName cryptokings.com
        </VirtualHost>

    /test contains a folder inside it, accessible via http://site.com/test/foo; however, attempting to access it via https://site.com/test/foo results in a warning that the certificate is untrusted (self-signed, no biggie) and a 404 error. Chrome's complaints about the certificate are the following:

        The identity of this website has not been verified.
        • Server's certificate does not match the URL.
        • Server's certificate is not trusted.

    I think those warnings are a side effect of a self-signed certificate - or is the first one something that needs to be addressed? I seem to be able to fetch the root page via https just fine, though; it shows a standard CentOS setup page. (That said, I haven't added a VirtualHost entry for it, so I suppose that makes sense.)

    I think I've made a mistake somewhere during the setup, as I'm not too familiar with the process. During setup, I was prompted for a type of password that would be required when Apache restarts, but running service httpd restart does not prompt me for one. Any help would be appreciated.
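
    Two read-only checks that usually localize this kind of HTTPS 404, as a sketch (note what port and ServerName each parsed vhost reports):

        apachectl -S                 # dump the parsed virtual hosts, with port and ServerName
        netstat -tlnp | grep httpd   # confirm which ports httpd is actually listening on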

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive with Windows 7 to a traditional hard drive, the average speed is between 25 and 30 MB/s. When doing the reverse - copying to the USB drive - the average speed is 5 MB/s. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same with both FAT32 and exFAT file systems on the USB drive, and NTFS on the internal hard disk.

    I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive in terms of both reading and writing, and that for both memory types reading should be faster than writing. So I wonder: how can copying files from fast-read memory to faster-write memory actually be slower than copying from fast-read memory to slow-write memory? I think the files are staged in RAM before being copied, and there's caching as well, but I don't see how even that could tip the balance - it could only work in favour of writing to the USB drive, since the hard drive is "closer" to the SATA system than the USB port and the stick would receive data from the internal SATA HDD faster. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB pen. But I am curious.

  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a fairly minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made, available for consumers to visit and listen to. We are starting with just a handful of artists, but I think this project will generate more and more artist pages, and eventually I'd like to let consumers create personalized playlists.

    How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with; I'm putting together a business plan and seeking investment. I have a game plan to grow fast enough to be successful and slow enough to manage the financial growth requirements.

    Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance.

    Jordan Westerman
    D.B.A. Badfish Productions, LLC

  • Esx servers in a DMZ

    - by James
    I have two ESX 3.5 servers in a DMZ. I can access these servers on any port from my LAN via a VPN. Servers in the DMZ are unable to initiate connections back to the LAN, for obvious reasons. I have a vCenter server on my LAN and can initially connect to the ESX servers fine. However, the ESX servers then try to send a heartbeat back to the vCenter server on udp/902. Obviously this does not get back to the vCenter server, which then marks the ESX servers as not responding and disconnects.

    There are two broad solutions I can think of:

    1. Tell vCenter to ignore missing heartbeats. The best I can do here is delay the disconnect by 3 minutes.
    2. Some clever network solution - but here again I am at a loss.

    Note: the vCenter server is on a LAN and cannot be given a public IP, so firewall rules back will not work. I also cannot set up a VPN from the DMZ to the LAN.

    To add the explanation from the comments, since maybe this is the bit I'm not explaining well: the DMZ is on a remote site, an entirely independent network (network 1). The vCenter server is on our office LAN (network 2). Network 2 can connect to any machine on any port on network 1, but network 1 is not allowed to initiate a connection to network 2. Any traffic destined for network 2 from network 1 gets dropped by the firewall, as it is traffic to a non-routable address. The only solution I can think of is setting up a VPN from network 1 to network 2, but this is not acceptable. So, any clever folk out there with ideas? J

  • How can I fix Office Sharepoint Search service?

    - by unknown (yahoo)
    For some reason, in Operations > Services on Server, the Office SharePoint Search Service is MISSING! This means I can't get Shared Services working, and in turn I cannot get Usage and Reporting to see who is visiting my website - visit counts, OS/browser, etc. I don't think I have ever seen this work in my two years of trying with SharePoint. So in essence I have two problems: most importantly SEARCH, and second, usage reports.

    When trying to search, all I get is an unknown error, and nothing appears in my event viewer at all. I have tried http://www.cjvandyk.com/blog/Lists/Posts/Post.aspx?ID=96 - in the comments on that site, another person reports that search is missing entirely and that they are going to uninstall. I'm tired of uninstalling and reinstalling SharePoint to fix the odd issue. Things are set up with Team Foundation Server, which took forever to get working, so reinstallation is not my solution.

    As for usage reporting, Microsoft's response to the error "Both Windows SharePoint Services Usage logging and Office SharePoint Usage Processing must be enabled to view usage reports. Please contact your administrator to ensure that these services are enabled." involves steps I cannot complete, since the SSP needs to be set up first, which I can't do per the above.

  • Windows 7 wireless not seeing any networks

    - by jkohlhepp
    I think I have managed to confuse Windows 7. When I did the install, I had the network cable plugged into my router, but the wireless card was also enabled. During the install, Windows 7 seemed to see my wireless network and even asked me for the WEP key. I know that it used the WEP key, because I initially entered an invalid one and it gave me an error. Then the network said "SoAndSoWireless Connected".

    However, when I unplug or disable my wired network card, I have no internet and it can't see any networks. When I plug the wired network card back in, it says "SoAndSoWireless Connected". Under Network and Internet > Network Connections I have "Local Area Connection" and "Wireless Network Connection". The wired one's status is "SoAndSoWireless" and the wireless one's status is "Not Connected". Also, the wireless connection can't seem to see any other wireless networks in the area, and I know there are tons - my neighbors have several.

    I've somehow seemingly confused Windows 7 into thinking that my wired network card is my wireless card or something. Any ideas on how to un-confuse it? This is a desktop machine, by the way, if it matters.

    EDIT: Ah, I think part of the problem is that I accidentally named my network the same as the wireless network being broadcast by the wireless router, so that might be why the hard-wired connection shows that name. Perhaps the drivers for the wireless card are simply not working at all. Thanks, ~ Justin

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft-brand USB device that acts as a receiver for a wireless Microsoft keyboard and a wireless mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter two are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit, running on an AMD Athlon 64 X2 Dual Core 5000+ processor. When the computer goes to sleep (the power indicator on the case is flashing), I can wake it by moving the mouse. So far, all is good.

    However, something changed in, I think, the past couple of weeks (I suspect a Microsoft driver update problem). Before the change, after waking the computer, everything would operate normally as far as I could tell. Now, after waking the computer, the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while doing so, but it has no effect in this state. It's as if the receiver doesn't have power (but how would the system know I moved the mouse unless the power was on until it woke up?).

    I have checked the BIOS/CMOS settings and did not see anything related to USB in the power management section. I have checked Windows 7 Device Manager and ensured that all the USB Root Hub devices have the setting to allow turning off USB power unchecked. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows updates.

  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was doing a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up - I think due to the USB drive running out of space - and it required a reboot. Since then, none of the data on the external drive is accessible. I can see all the files in the root and the directories, but I cannot browse into any of them; Windows 7 gives an error stating they are corrupt.

    I think the file table, or whatever it uses to store the index of what exists on the drive, has been corrupted, since it still shows the drive as being almost full, yet everything I check properties on says it is 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table somehow? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.
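
    One low-risk first step, as a sketch: chkdsk's /f flag asks Windows to repair file-system metadata in place, which covers the sort of "file table" damage described above (on NTFS that metadata is the MFT). E: is a placeholder for the external drive's letter:

        REM Run from an elevated command prompt; E: is hypothetical here
        chkdsk E: /f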

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimizes the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called from Python will update a row in these kinds of tables (with large arrays).

    I currently have PostgreSQL configured to use most of the memory for cache (an effective_cache_size of 57GB, I think) and only a small amount for shared memory (1GB, I think). First, I was wondering what the difference between cache and shared memory is, in regards to PostgreSQL (and my application).

    What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (select unnest(array), etc.). So, secondly, I was wondering what I could do to improve this, so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
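
    On the first question, a sketch of the two settings (the key distinction: shared_buffers is memory PostgreSQL itself allocates and manages as its buffer cache, while effective_cache_size allocates nothing at all - it is only the planner's estimate of how much data the OS page cache can hold; the values below are illustrative, not recommendations):

        # postgresql.conf (illustrative values)
        shared_buffers = 8GB           # shared memory PostgreSQL actually allocates
        effective_cache_size = 57GB    # planner hint only; reserves no memory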

  • Thunderbird: how to move mails into correct thread? (mailing lists)

    - by unor
    I'm subscribed to some mailing lists, and every day people reply to the wrong mail (or don't reply at all), so that their mail lands in the wrong (or a new) thread. I set the mail display in "View → Sort by" to "Threaded". An example from the mailing list "Foobar":

        [Foobar] random topic
        Re: [Foobar] random topic
        Re: [Foobar] random topic
        Re: [Foobar] random topic
        Re: [Foobar] random topic
        [Foobar] I'm John Doe
        Re: [Foobar] I'm John Doe
        Re: [Foobar] Welcome, John
        Re: [Foobar] random topic

    There are two discussions: one about "random topic", one about "John Doe". The subject line changes in the discussion about John Doe, which is fine (no problem here). But the last mail should be pigeonholed into the first thread; instead it sits at the top level.

    Now, how can I move that last mail into the correct thread? I tried to drag & drop it onto the mail I think it should be a reply to, but this doesn't work. Theoretically it should be possible by fiddling with the mail headers after receiving the mail, but this doesn't seem to be a comfortable way.
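
    On the header-fiddling idea, a sketch of what actually drives threading: a threaded view links messages by these RFC 5322 headers (the message IDs below are invented), so hand-editing a stray message's In-Reply-To/References to point at the intended parent, then letting the client re-thread, is the usual manual fix:

        Message-ID: <reply-123@example.org>
        In-Reply-To: <parent-456@example.org>
        References: <root-789@example.org> <parent-456@example.org>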

  • Prevent Network Printers Automatically being added to 'Devices and Printers' in Windows 7

    - by Ben
    I think my question is a duplicate of this one, but the original question was never properly answered (the steps described are for Windows XP). I am aware of the option to "Turn off network discovery" (under Control Panel > All Control Panel Items > Network and Sharing Center > Advanced sharing settings); I set this option (for both Home/Work and Private), but it doesn't seem to stop the printers getting added, and it has the side effect of preventing me from browsing the list of machines on the network (which I need). I've also tried the Windows XP registry option, but it doesn't seem to make any difference:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\NoNetCrawling

    I find it annoying having the printer list cluttered with printers from all over the office which I am never going to use (especially since lots of them no longer physically exist; users just haven't deleted them from their machines). This must be a real problem for people in massive offices with large numbers of printers, but I can't seem to find a lot of people complaining about it, which makes me think I'm missing something obvious. I don't really want to hack the firewall or turn off sharing completely; I still want to be able to select and use network printers and file shares. Any ideas?
