Search Results

Search found 5749 results on 230 pages for 'miles away'.


  • USB drive errors after airport scan

    - by bobobobo
    Well, I just got a new PNY USB drive, and it went through an airport scanner yesterday. I wrote to it afterwards, and when I tried to read from it today I got a corruption error. chkdsk reports errors like:

        Bad links in lost chain at cluster 1179 corrected.
        Lost chain cross-linked at cluster 1200. Orphan truncated.
        Lost chain cross-linked at cluster 1228. Orphan truncated.
        Lost chain cross-linked at cluster 1236. Orphan truncated.
        Lost chain cross-linked at cluster 1237. Orphan truncated.
        Lost chain cross-linked at cluster 1244. Orphan truncated.
        Lost chain cross-linked at cluster 1250. Orphan truncated.
        Lost chain cross-linked at cluster 1266. Orphan truncated.
        Lost chain cross-linked at cluster 1278. Orphan truncated.
        etc.

    What is this from? Could it possibly be the airport scanner, or is it more likely a defective USB chip? How can I test the drive to decide whether I should return it, throw it away, or continue to use it?
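
    In case it helps, one quick way to decide between the two theories is to scan the whole stick and then round-trip a large file by hand. A sketch from the Windows command line, assuming the stick mounts as E: (substitute your own drive letter and test file):

        rem fix filesystem errors and scan every sector for bad blocks
        chkdsk E: /f /r
        rem then copy a large known file on, replug, and compare bit-for-bit
        copy C:\big-test-file.bin E:\
        fc /b C:\big-test-file.bin E:\big-test-file.bin

    If errors keep coming back after a clean chkdsk pass, that points at the flash rather than the scanner.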

    Read the article

  • Completely unintuitive Apache/PHP memory-freeing behavior

    - by David
    Okay, this one's weird. I have a Turnkey Linux server with a gig of dedicated RAM, running WordPress 3.2 with a boatload of plug-ins. It's a new site with very limited traffic (other than search engines, maybe 20 hits a week). For a few weeks, every few days it would max out main RAM, start eating up virtual RAM, and then crash, and I've been trying to figure out which element was causing it. Nine days ago, I pointed my external server monitor at this server: I wrote a 5-line HTML file (not PHP and not WP) that the monitor accesses every minute to see if the server is up. Nine days later, the server has been rock solid, up all the time, no memory leak at all, and I changed NOTHING on the server itself to see this behavior change. Have you EVER seen anything like this? All the server monitor is doing is retrieving a single, super-simple HTML file, and all the memory-leak problems have gone away. Weird, eh?
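
    For anyone who wants to reproduce the monitor's effect while testing, a one-line cron entry on any other machine does the same thing. A sketch; ping.html and the hostname stand in for whatever trivial file and server you actually use:

        # fetch a static file once a minute, discarding the output
        * * * * * curl -s -o /dev/null http://your-server.example.com/ping.html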

    Read the article

  • How should a small company administer their web server?

    - by John Isaacks
    We currently have our website hosted by a small company that is actually a reseller for Rackspace. They act as our server administrators: they configured the servers, they handle the backups, and if there is a problem, we call them and they fix it. We are growing and want to move away from our shared server to either a cloud or a dedicated server. I am leaning toward cloud myself, but I am open to either. The current company doesn't seem to want to offer us anything more than a shared hosting plan. I looked into cloud solutions at vps.net; with them I would have to be the server administrator myself. I am the website programmer, but administering the server is outside my comfort zone. vps.net does have a $99/month plan for Pro-Active Managed Support, but I am not sure if this is the equivalent of a server admin who is there when you need them. We could hire someone in-house, but I think that would be overkill for our needs. I am not exactly sure what we need. I do know we need as close to 100% uptime as we possibly can get, and we need the ability to add/remove/change the server configuration/software/etc. when needed (though changes shouldn't be very frequent once everything is set up right). Can someone point me in the right direction? What do other companies do?

    Read the article

  • Log shipping on select tables

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like to have up-to-date copies of some of the tables sent across the internet to our servers at our office. We only want to copy a few of the tables, because the bandwidth required to log-ship the entire database (our current solution) is too high. Replicating directly to our servers is also out of the question, as our servers are not reachable from the internet and management does not want to use replication (more on that later). One possible idea we had is to use some form of replication to copy the tables we need into a second, smaller database on the same server, and then log-ship that smaller database. Management is concerned, though, because the clients have broken replication on us in the past (it was between two servers on their internal network, however) and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management to do it. This is to be set up at multiple sites running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.
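
    For what it's worth, publishing only the needed tables is standard transactional replication, and the local-subscriber-then-log-ship plan is workable. A minimal T-SQL sketch of the publication side (names are placeholders, and a snapshot agent plus a subscription on the second database would still be needed):

        USE ClientDB;
        EXEC sp_replicationdboption @dbname = N'ClientDB',
             @optname = N'publish', @value = N'true';
        -- publication containing only the tables we care about
        EXEC sp_addpublication @publication = N'SelectedTables',
             @status = N'active', @repl_freq = N'continuous';
        EXEC sp_addarticle @publication = N'SelectedTables',
             @article = N'Orders', @source_object = N'Orders';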

    Read the article

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs of the authentication session: the modem keeps its connection to the DSL service, but we lose authentication, and we either have to restart the router/modem (a combined Belkin unit, not sure of the model number) or unplug the phone cable, wait about 30 seconds, and plug it in again. I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a fair way from the exchange (sync speed of about 3000/900), so I thought it could be line noise, but chasing that hasn't helped. Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; there's no software on the PC side). I've been running PPPoA the whole time. Would it make any difference to change to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It was fine for at least 12 months, then this suddenly started about 2 months ago.

    Read the article

  • Why are ISPs installing routers at my site when the feed is a form of Ethernet already?

    - by Cosmin Prund
    I'm connected to 3 ISPs right now. Two of them already have routers at my site; the third announced that "they need to install some equipment" when I requested a BGP session. I can only assume they need to install a router, since that connection is currently working fine, using the usual /30 net block, and the last-mile solution is not going to change: they only installed it last week, and BGP was in the contract from the beginning. I simply don't understand this: the feed is already a form of Ethernet. Even though they're using different technologies for the last mile, they all enter the ISP's router through an RJ45 WAN port. I assume the ISP router does something really important that can't be done by the big router at the other end of the connection. It must also be something that can hurt them if misconfigured, since they don't trust us (the client) to do that work on our own router. And I'm not talking cheap throw-away routers here: one of the routers is a Cisco 2800. Edit, to add network details: I'm connected to 3 ISPs, two over radio links, one over fiber optic. One of the radio links is going to be dropped, and the other radio link will be converted to fiber sometime next year. The fiber is 20 Mbit, radio 1 is 40 Mbit and radio 2 is 2 Mbit. I've got a /24 of provider-independent address space. I'm not doing anything out of the ordinary with my network; I'm over-connected because my network needs to be up all the time.

    Read the article

  • Copying Windows Home Server backup offsite

    - by Simon
    What ways are there to copy a Windows Home Server backup to an offsite location? I'm talking specifically (and only) about the automated backup of my entire machine, not the shared network folders. I work away from home 90% of the time on my laptop, which has a 640GB drive, so the shared folders are essentially useless to me. I back up every night, but if my house burns down or is broken into, I'm in serious, serious trouble! I'm really looking for some alternative way to back up my entire machine, which must not interfere with the reliability or speed of WHS's nightly backup of my laptop. Either a way to 'export' a complete machine backup from the server, or recommendations on non-conflicting software I can use to back up to a 1TB drive at work, is what I'm looking for. Note: I believe WHS uses its own completely proprietary backup format and doesn't use anything like a 'backup bit' or 'archive bit'; I just don't want to install some other backup software that will conflict. PS: I'm now running Windows 7 and just realized I should probably check out the backup functionality it gives me. I assume that won't conflict, right? Edit: Thanks for the hosted solutions. I'd also appreciate ways to back up to an 'offsite' location that I control, like my office vs. my home. The hosted solutions will, I think, be too slow or expensive for my needs.
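
    One low-tech option for the "offsite location I control" case is simply to mirror the server's backup database folder to an attached drive while no backup is running. A sketch, assuming the default WHS v1 layout; the {GUID} folder name varies per install, so check your own server, and note that copying mid-backup could capture an inconsistent set:

        rem mirror the WHS backup database to an offsite-bound drive
        robocopy "D:\folders\{backup-database-GUID}" "F:\WHS-offsite" /MIR /R:1 /W:5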

    Read the article

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front, so I am not told not to do this:

    - The machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network).
    - Everyone who has the ability to set up a man-in-the-middle attack already has root on the machines.
    - The machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react); I am only trying to make my machine nicer to use.

    I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine, blowing away the old key and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate it. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from each machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
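
    For the whitelist-daemon idea, the core is only a couple of commands. A sketch of the refresh loop, semi-secure by design: it trusts whatever key each host presents at scan time, which matches the threat model above (the whitelist file is a placeholder name, one hostname per line):

        #!/bin/sh
        # refresh stored host keys for the machines that get reinstalled
        while read host; do
            ssh-keygen -R "$host"                             # drop the stale entry
            ssh-keyscan -t rsa "$host" >> ~/.ssh/known_hosts  # record the current key
        done < ~/.ssh/reinstall-whitelist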

    Read the article

  • iTunes title bar shows the location of the library file?

    - by Urda
    I have never been able to get a clear answer on why the latest edition of iTunes does this. I have my entire iTunes library located in C:\itunes\ and the library data files inside C:\itunes\!library_info for backup purposes. But when version 9 of iTunes came out, the title bar went from saying "iTunes" to "!library_info". Is there any way to get around this without moving my data files? It's an annoying "feature", if that is what it is. Again, Apple support and the forums were of no help to me. Anyone have insight on this? System info: Windows Vista x86 Ultimate, latest updates. iTunes version 9.0.3.15. Screenshot: http://farm5.static.flickr.com/4072/4414557544_d0b25eb64c_o.jpg Flickr page: http://www.flickr.com/photos/urda/4414557544/ UPDATE: I put a bounty on this, and would be open to hacking the registry or doing a custom config. Please help me fix my title bar! UPDATE 2: I would prefer not to move my library files out of itunes\!library_info, to avoid inter-mingling them with my music library.

    Read the article

  • Synchronize Active Directory to Database

    - by Tommy Jakobsen
    We are in a situation where we would like to let our customers manage their users themselves. There are around 300 customers with up to 10,000 users in total. Besides creating, updating and removing users, they will very often read information about users for statistics and other useful purposes. All this functionality should be available from an intranet web page (.NET Framework 4) that the users will access through Citrix or similar. Now, the problem is that we would really like the users not to query AD directly for each request, but rather hit a database that is synchronized with AD. It would be sufficient to run this synchronization a few times each day (maybe every 5 hours). When they create a user, it should not be available right away, but reviewed and then created within two days (the next step would be to remove this manual review, but that's out of scope for this question). What do you think about this kind of AD synchronization? Does anyone have experience with it, and is it done in other organizations where you have lots of requests that are better handled by a database than by AD (I presume)? Are there techniques out there for writing such a script that synchronizes AD with database tables? My primary concern is the group/member relations, which can be rather complicated. Or is there software that synchronizes AD with a database? Any comments will be much appreciated. Thank you.
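
    As a starting point for the sync script, a plain export of the attributes you need (including memberOf, which carries the group relations from the member's side) can be done with the stock ldifde tool and then parsed and bulk-loaded into the database. A sketch with a placeholder DN:

        rem dump selected user attributes from AD for one sync cycle
        ldifde -f users.ldf -d "OU=Customers,DC=example,DC=com" ^
               -r "(objectClass=user)" -l "sAMAccountName,displayName,mail,memberOf"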

    Read the article

  • Nginx 502 Bad Gateway: It just won't stop

    - by David
    I have the same problem that most people seem to have with Nginx: 502 Bad Gateway errors. They are intermittent but typically happen more than once per session, which means my users are probably running into them nearly every time they use the app. I've tried adjusting fastcgi_buffers and fastcgi_buffer_size (in both directions) to no avail. I've tried various other things with the configuration file, but nothing seems to work. Here's my config (note that I've stripped away most of the things I've tried, since they didn't work and I didn't want to bloat the file with a bunch of unrelated directives):

        server {
            root /usr/share/nginx/www/;
            index index.php;

            # Make site accessible from http://localhost/
            server_name localhost;

            # Pass PHP scripts to PHP-FPM
            location ~ \.php {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }

            # Lock the site
            location / {
                auth_basic "Administrator Login";
                auth_basic_user_file /usr/share/nginx/.htpasswd;
            }

            # Hide the password file
            location ~ /\. {
                deny all;
            }

            client_max_body_size 8M;
        }

    I'm running a small Rackspace cloud server, which should be plenty for handling an app with a small user base...
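
    Since nginx returns 502 when the upstream (PHP-FPM here) drops or refuses the connection, the FPM side is usually the place to look: its error log, and whether pm.max_children is being exhausted. A sketch of the nginx-side timeout directives, on the assumption that FPM is alive but slow to answer (the values are guesses to tune):

        location ~ \.php {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            # assumption: FPM is alive but slow; give it more time
            fastcgi_connect_timeout 10s;
            fastcgi_read_timeout 120s;
        }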

    Read the article

  • Moving a site from IIS6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS6 (Win Server 2003) and onto IIS7.5 (Win Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual dir and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple form that lets a user submit some values, runs a stored proc on a SQL Server DB, and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or in each app's individual web.config (lots of duplication there). To add a little spice to the exercise:

    - I have no idea how to admin IIS.
    - The company no longer employs the sysadmin or the guys who set this thing up. (They're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task.)
    - The servers are on mutually firewalled networks, and I have to perform a convoluted, multi-step process to copy anything from one to the other.

    Would someone please point me to a crash-course tutorial for accomplishing the above? I have:

    - a complete copy of the site's filesystem on the new box
    - the 3rd-party charting tool installed on the new system
    - a config.xml file from the "all tasks - save configuration to a file" right-click menu; there doesn't seem to be a way to import it on the new system, however.

    The newer IIS Manager has a completely different UI, and I'm totally lost. Please help.
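
    If both boxes can run Web Deploy (msdeploy), its sync verb was built for exactly this IIS6-to-IIS7.x move. A sketch, assuming the site is metabase ID 1 and that the package file can be carried across the firewalled networks by whatever convoluted process applies:

        rem on the IIS6 box: package the site and its settings
        msdeploy -verb:sync -source:metaKey=lm/w3svc/1 -dest:package=c:\site.zip
        rem on the IIS7.5 box: unpack it into the new configuration
        msdeploy -verb:sync -source:package=c:\site.zip -dest:metaKey=lm/w3svc/1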

    Read the article

  • Need Help Scoping a Server to use for study (MCITP Ent Admin + SharePoint 2010)

    - by AVFamily76
    I need to study for the MCITP, but I also need to study for SharePoint 2010. I have a PowerEdge 1850 with two single-core CPUs and two 73GB drives. It kills me on electricity, so I don't want to use it, and it won't do VT, but it could be one of three boxes for a lab that's cheap to buy but will cost a lot in electricity. I was thinking...

    OPTION #1: An Opteron 4170 HE (50-watt chip), 6-core, only two bills ($200), but the boards are $250, so that's an $800 box; then get another box to dual-boot Win7/Hyper-V on the cheap...?
    OPTION #2: A used quad-core, but how many VMs that are really banging away could it run at the same time? (Server 2008 R2, SQL 2008 R2, Search Server)
    OPTION #3: Study from books and just get one box that can run two VMs at the same time, even if slowly.

    The last time I had and used a home lab was five years ago, when I had a DC, SQL, Exchange and a business-app box. That's where I got my server skills: just banging on it for four years. But I didn't read any books, so now I have to get certified and know the material, and I'm just not sure how much attention I should pay to the box I use versus the studying and reading time. Sorry, it's a subjective question, and I'm obviously open to all sorts of abuse here, but I hope you can also tell me how many VMs I can run at the same time given what they'll be doing (SQL and the SharePoint FAST search server are resource-hungry). Thanks!

    Read the article

  • H.264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site and am currently having issues with the H.264 format. Looking at YouTube, I noticed they put their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500MB+ at somewhat high resolution) take FOREVER to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in the slowness. So I tried putting the same exact H.264 movie into an FLV container, and playback buffering starts almost instantly: no slowness. So what am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why. And as a side question: I tried using HTML5's VIDEO tag with the same H.264 MP4 movie, and the scrubbing is LIGHTNING FAST! I looked into lighttpd's log file and noticed that Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to ALL who can help. I very much appreciate this community.
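
    For reference, the two standard preparation steps described above look like this on the command line (file names are placeholders):

        # interleave audio/video in 500 ms chunks and write the moov atom up front
        MP4Box -inter 500 movie.mp4
        # or, with the tool that ships alongside ffmpeg
        qt-faststart movie.mp4 movie-fast.mp4

    If the file has already had both treatments and still stalls, the remaining suspect is the server module rather than the container.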

    Read the article

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as the firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal: basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple of site-to-site IPsec links to other branches, etc., and now I'm working on VPN. I have the WebVPN (clientless SSL VPN) working, and it even traverses the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick-client VPN. What I would like to do is standardize on an authentication method for all VPN, then switch to the Cisco's IPsec thick-VPN server. I'm trying to figure out what's really possible for authenticating these VPN users (thick-client and clientless). My organization uses Google Apps, and we already use dotnetopenauth to authenticate users for a couple of internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively, a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth, especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server, but we have LDAP accounts created for only about 10% of the user base (mostly developers, for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but interfaces with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. RSA public keys would be the runner-up, and much, much better than standalone or even LDAP auth. What's really practical here?
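
    If a middleware shim ends up being the answer, the Cisco side of it is the easy part, since the ASA only needs to be pointed at something that speaks LDAP. A sketch of that configuration (addresses, DNs and the server name are placeholders):

        aaa-server SHIM protocol ldap
        aaa-server SHIM (inside) host 10.0.0.10
         ldap-base-dn dc=example,dc=com
         ldap-scope subtree
         ldap-naming-attribute uid
         server-type openldap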

    Read the article

  • Using Windows 8 as a webserver

    - by Jason
    I have a few hobby websites that I currently host on CentOS 6: Apache, mail serving, PHP, MySQL, nothing special. In the past I used Windows XP for this same task, for years, and I was OK. I switched to Linux, and for the last few years it has been such a pain: updates break things, and certain apps only support certain distros unless you compile from source. It keeps me from working on my hobby sites because I am always fixing something. With Windows, I locked the box down, ran a hardware firewall and packet analyser, kept up on updates and A/V, and never had a problem. I don't allow RDC from outside the local LAN, no FTP open, and I run OpenSSH on an obscure port. I am considering switching to Windows 8 (since it is a cheaper license now than Windows 7) and running Apache, hMailServer, PHP and MySQL, just like my CentOS install. My questions:

    - I am not familiar with Windows 8; can the above be done as on XP? Are there new security restrictions, or does the OS prevent any of this?
    - The machine is an Athlon 64-bit X2 with 32GB of RAM. Will Windows 8 see all of the RAM?

    Technically the machine came with Windows 7, and there is a serial number on it, but I am sure I wiped away the Windows 7 recovery partition when I switched to Linux...

    Read the article

  • How do I stop Linux from trying to mount an Android phone as USB storage?

    - by user1160711
    When I plug my Motorola Triumph into a USB port on my Fedora 17 Linux box, I get an endless series of errors as the box desperately attempts to mount the phone as a USB drive. Stuff like this:

        Jun 23 10:26:00 zooty kernel: [528926.714884] end_request: critical target error, dev sdg, sector 4
        Jun 23 10:26:00 zooty kernel: [528926.715865] sd 16:0:0:1: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 23 10:26:00 zooty kernel: [528926.715869] sd 16:0:0:1: [sdg] Sense Key : Illegal Request [current]
        Jun 23 10:26:00 zooty kernel: [528926.715872] sd 16:0:0:1: [sdg] Add. Sense: Invalid field in cdb
        Jun 23 10:26:00 zooty kernel: [528926.715876] sd 16:0:0:1: [sdg] CDB: Read(10): 28 20 00 00 00 00 00 00 04 00

    If I go ahead and tell the phone to allow Linux to mount the USB storage, the messages stop and I get a mounted drive, but if all I want to do is use the debug bridge, my log on the Linux side will continue to fill with this junk. Is there some udev magic I can use to make the system ignore this particular device as far as USB storage goes? I just noticed that if I tell the phone to enable USB storage, let Linux recognize the new disk, then tell the phone to disable USB storage again, I get one additional log message about the capacity changing to zero, but the endless spew stops. So I guess one workaround is to enable and disable USB storage right away.
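
    Something along these lines should be the udev magic in question. A sketch: the IDs are placeholders (read the real idVendor/idProduct from lsusb while the phone is attached), and the environment variable shown is the udisks2 one; older udisks used UDISKS_PRESENTATION_HIDE instead:

        # /etc/udev/rules.d/99-ignore-phone-storage.rules
        SUBSYSTEM=="block", ATTRS{idVendor}=="22b8", ATTRS{idProduct}=="xxxx", ENV{UDISKS_IGNORE}="1"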

    Read the article

  • Intel SSD + Win 7: cannot repair or reinstall after a crash

    - by Ori
    I have a Lenovo W520. After I bought it, I took out the old HDD (no longer with me) and replaced it with an Intel SSD. It worked perfectly for a year or so. Today my system froze, and after waiting for some time I did a hard reset. Now it cannot boot at all: I never see any messages from Windows, it only loads the Intel boot utility, which suggests picking one of 3 devices to boot from. My drive is listed there, but nothing happens. I don't have the recovery tools from Lenovo, since I moved to another country. I got a Win 7 CD from a friend (it came with his laptop), and with AHCI set in the BIOS the installer doesn't see my SSD; in compatible mode it sees it, but format is not available, and partition creation gives me an 8007045 error. I tried diskpart: in compatible mode it sees my disk, but recover and clean all don't do anything, and the Win 7 disk tools also do nothing when I try a boot fix... I am OK with erasing the drive, but I don't seem to be able to. I just need the machine to work ASAP; all my files are on external drives, so I don't care about formatting. Please help! A friend has given me a very old machine so I am able to browse the internet... it runs XP...
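
    If the goal is just to blank the drive so the installer will take it, the usual route from the Win 7 DVD is Shift+F10 at the first setup screen to open a command prompt, then the plain clean command (destructive; double-check the disk number against the size column before confirming):

        diskpart
        list disk
        select disk 0
        clean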

    Read the article

  • CentOS Installation on a Cisco MCS 7800

    - by William
    Hello, I'm having some problems installing CentOS 5.5 Final (i386) onto my server, a Cisco MCS 7800. The problem comes very early in the installation. When the welcome screen comes up and gives you the option of how to boot the DVD, I press Enter to go into the graphical installer. The screen then shows a blinking cursor in the top left and never gets any further (I thought it might just need time, but I let it sit for over 5 hours). I booted again and tried 'linux text', thinking it was a problem with the graphical installer. That didn't work: same problem. Then I tried a DVD of RHEL 5 and got the same problem, both graphical and text. At this point I think it's a hardware problem. The server has 2GB of ECC RAM, 1 Pentium 4 CPU @ 3.06GHz and 2 WD hard drives (80GB) configured for RAID 0. (Also, there is an option in the BIOS for OS type, and that is set to Linux.) If anyone has any idea what is going on, it would be helpful.

    Edit: ooshro, typing "text" doesn't change a thing; still stuck at the blinking cursor. I looked it up, and it's really the same thing as typing "linux text", which, as stated in the first part of my question, I've already done.
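
    When the graphical and text installers both hang this early, framebuffer or ACPI probing is a common culprit. A sketch of boot-prompt options worth trying one at a time (not a guaranteed fix):

        boot: linux text nofb
        boot: linux text acpi=off
        boot: linux text noapic xdriver=vesa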

    Read the article

  • How to diagnose a hard system seizure? Dell + Ubuntu

    - by rob
    I've got Ubuntu 9.10 on a Dell Vostro 420 desktop, a little over a year old, which I use for plain vanilla work stuff (email, web, terminal, text editor). Every now and then, at totally random times, it completely freezes on me. Hard. Mouse and keyboard stop working, cursor stops blinking, clock stops moving. All I can do is hold down the power button on the front of the box to shut it off. Sometimes it happens after several months of continuous uptime; sometimes it happens a few minutes after a reboot, while all I've done is open a terminal to look at log files, or maybe firefox to do a google search. Each time, there is nothing at all in /var/log/messages at the time of the crash. This makes it seem like a hardware problem, and indeed a few months ago I opened the box and wiggled everything and the problem went away for a while. But now it's back. I went in and checked everything, took out each RAM card and reseated. No luck. I ran all the system diagnostics (the long version) and everything passed with flying colors. Something is messed up in this box, but without any useful logs or failed tests, how in the world am I going to find it? And of course, Dell's not gonna help me cause I went and replaced Windows with Ubuntu. What steps would you take next to track down this problem?
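
    With nothing in the logs, the usual next step is to stress the parts the vendor self-tests barely touch. A sketch (memtest86+ is on the Ubuntu boot menu for an overnight RAM run; smartctl needs the smartmontools package):

        # long SMART self-test of the system disk, then read the results
        sudo smartctl -t long /dev/sda
        sudo smartctl -a /dev/sda
        # after the next freeze, look for machine-check or thermal events
        grep -i -e mce -e thermal /var/log/kern.log*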

    Read the article

  • How to migrate WinXP from failing old HD to new one

    - by Péter Török
    Following this issue, we have all our important data backed up now. I also bought and installed a new replacement hard disk (WD 160GB PATA) as secondary (slave) drive. I created two primary NTFS partitions on it: a 40 GB system partition, and a 110GB data partition. In theory I could start reinstalling WinXP from scratch on the new system partition, then copying over all user data from the old drive to the new data partition. Once this is done, I could even throw away the old drive, or keep it just to see what happens. (Note: I don't want to clone the whole drive as it contains a dual boot setup with an old Linux installation which I don't need anymore, and anyway, a fresh reinstall would do WinXP good to get rid of many years' clutter.) However, I am lazy :-) The old HD is still functioning, the problem has not manifested again since. So I feel there is no need to hurry with a complete OS reinstall. What I don't know though is whether I will be able to install WinXP on the new system partition at a later stage without affecting the contents of the data partition on the same drive. If this is possible, I can just move over all our data to the new data partition to have it safe, then continue running WinXP from the old drive as long as it works. Does anyone see any problems/risks with this plan?

    Read the article

  • IIS 7 and ASP.NET State Service Configuration

    - by Shawn
    We have 2 web servers, load balanced, and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State Service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions, but we don't have the resources for that. I've followed these instructions to no avail. The session still isn't being shared between the two servers, and I'm not receiving any errors. I have the same machine key on both servers, and I've set the application ID to a unique value that matches between the two. Any suggestions on how I can troubleshoot this issue? Update: I turned on the session state service on my local machine and pointed both servers at my local machine's IP address, and it worked as expected: the session was shared between both servers. This leads me to believe the problem might be that I'm not using a standalone server as my state service. Perhaps the problem is that I am using the address 127.0.0.1 on one server and a different IP address on the other. Unfortunately, when I try to use the network IP address instead of localhost, the connection doesn't work from the host server itself. Any insight on whether my suspicions are correct would be appreciated.
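
    For the remote case, two things usually have to line up: both web servers need an identical sessionState snippet pointing at the state box's network address (never 127.0.0.1), and the state-service box has to be told to accept non-local connections. A sketch with a placeholder IP:

        <!-- web.config on both web servers -->
        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=192.168.1.10:42424"
                        timeout="20" />
        </system.web>

        rem on the state-service box: allow remote connections, then restart the service
        reg add HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters ^
            /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        net stop aspnet_state & net start aspnet_state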

    Read the article

  • Troubleshooting an overheating CPU

    - by Jeff Fry
    My father and I just recently put together a new PC. Specs below. From the very beginning, on boot it will often complain that the CPU is too hot. If I sit in the BIOS and watch the CPU, it'll drop back down from red to blue (<72C), at which point I've tended to just boot into Windows... and haven't had any problems. In fact, I've played a couple of hours straight of Skyrim at max settings and not had any visible issues. That said, I've occasionally walked away and come back to find that it's crashed. Yesterday, it crashed (while idle) twice in 12 hours, which shifted the balance from busy-with-life to nervous-I'm-about-to-melt-something. I just installed Core Temp, which shows my 4 cores fluctuating between 70-98C. I'm guessing at this point that the CPU fan may be incorrectly installed or defective. My first thought is to either (a) add water cooling (which the case supports) and/or (b) replace the CPU fan with an after-market one. That said, I'm very open to suggestions. A note: while I certainly don't want to burn money here, I have a baby coming any day now and am still unpacking from a recent move, so if I have a choice between an option that costs money and another that takes a while... I'll happily spend a bit extra. Side question: should I be nervous to even have this on at this point? Let me know if there's something useful I could add to my report. Otherwise, I'm looking forward to your suggestions! Thanks.

    CPU: Intel i7-2600 w/ stock fan
    Other HW:
    - ASUS P8Z68-V Pro motherboard
    - 64GB SSD boot drive
    - 4 older SATA HDs
    - GIGABYTE ATI Radeon HD6950 1 GB DDR5
    - 8GB Kingston T1 Series RAM
    - Corsair 650W Gold Certified power supply
    - Antec P280 case

    Read the article

  • How to handle files that don't need version control in Mercurial

    - by richardh
    I am new to Mercurial, and for the most part I do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. For LaTeX, all I really care about is the .tex file. For R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' in a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert, and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions: (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources? (2) If I should keep excluding these files from the repo, is there a way I can make this option stick? I.e., add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands? Thanks!
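
    For question (2), two sketches that should make the exclusion stick; the patterns are examples, and the [defaults] section is a Mercurial feature of this era rather than a guaranteed long-term one:

        # .hgignore in the repo root
        syntax: glob
        *.sqlite
        *.csv

        # ~/.hgrc: apply the excludes to every bare 'hg add'
        [defaults]
        add = -X **.sqlite -X **.csv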

    Read the article

  • Toshiba A205-5804 freezes when plugged in

    - by heron
    Well, I have a Toshiba A205-5804, and the problem is that the screen freezes any time I plug the PC into the external power supply. Unlike most of the computers having this issue, my computer DOES freeze in safe mode too, and I really can't bear this problem much longer... It's not an overheating problem; the computer is not getting hot or anything like that. I've already tried changing the AC adapter, and booting on AC only with no battery, and also all of these suggestions:

    - In the BIOS setup, under the 'Advanced' tab, change Dynamic CPU Frequency: Mode = Always Low (NOT Dynamic). My laptop then ran on AC power without a problem for 24 hours, including many restarts, but when I went back to the original BIOS setting, the problem returned almost straight away.

    Edit: other suggestions I found on the web, from here and here:

    1. Set the power plan to high performance.
    2. Set the power plan to "Minimal Power Management" (1 and 2 do conflict).
    3. Start - Control Panel - Device Manager - Processor - disable one of the two processors - reboot normally.
    4. Do this: plug only the battery into the laptop; turn it on and start Windows normally; plug the AC adapter into the laptop (the screen will freeze); leave the laptop that way for 12-24 hours; after 12-24 hours, turn it off the hard way; once it is off, turn it back on. The laptop is working now.

    I have no idea what it can be...

    Read the article
