Search Results

Search found 7204 results on 289 pages for 'almost dead'.

Page 111/289 | < Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >

  • Mac internet problems

    - by Bradley Herman
    Our office is set up with mostly Macs (seven of them), but we also have a Windows laptop and a Windows desktop on the network. The network is configured with a modem feeding a switch/router that runs throughout the office to the computers, along with a wireless router. Everything runs fine most of the time, but periodically while using the web, certain sites will stop loading and time out repeatedly. This usually lasts 20 minutes or so and can be incredibly annoying. Resetting the modem/router and/or rebooting the computer never helps. The weirdest part is that in almost every case, the websites are fine on our Windows machines. I frequently use GitHub, Google, Stack Overflow, and the jQuery reference, and I can count on the sites being unavailable to me at least once a day. While I can't get them to load, I can spin my chair around to the Windows server behind me and load the sites just fine. Any idea what the hell could be going on here?
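
    One hedged first step, assuming the Mac-only pattern points at DNS or an IPv6 path problem (a common culprit when one platform can reach a site and another can't), is to compare resolution and connectivity from a Mac terminal while a site is failing; these are standard OS X tools, shown here against one of the affected sites:

        # Which DNS servers is the Mac actually using?
        scutil --dns | grep nameserver

        # Compare the local resolver's answer with a public one
        dig github.com
        dig github.com @8.8.8.8

        # Force IPv4, then IPv6, to see whether only one path times out
        curl -4 -sI https://github.com | head -1
        curl -6 -sI https://github.com | head -1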

    Read the article

  • PC shuts down automatically after 10-20 seconds. No POST screen, no beeps

    - by emzero
    I have this not-so-old computer that hasn't been used for a year or so. Specs: Motherboard: ASUS P5N-E SLI; CPU: Intel Core 2 Duo E4300; RAM: 2x2GB SuperTalent DDR2-800; VGA: Zogis GeForce 7950GT; PSU: Vitsuba San-55-S 550W; HD: no hard drives yet. When I power on the computer, everything seems to start, but right away the whole system shuts down. I've removed and swapped the RAM sticks, taken out the VGA card, everything I could think of. So what could be causing this? The PSU? A dead motherboard? The CPU? Any help isolating the problem would be useful. Thanks. PS: Please don't close the question; this could be helpful to anybody having a similar problem, even with different hardware. UPDATE: I've removed the old thermal paste and applied a fresh coat. I also cleaned out all the dust with a can of compressed air. Checked for bad capacitors; all of them seem OK. Opened the PSU, removed giant dust balls, and cleaned it the same way. Still the same problem, but now it stays powered on for almost 20 seconds. No POST screen, though, and no beeps at all, nothing. So I suspect a motherboard or PSU failure. Unfortunately I don't have a power supply tester to test the PSU... Don't know what else to try. I don't have another Socket 775 motherboard to test the CPU, RAM, and VGA with.

    Read the article

  • Which is generally considered faster or best practice: symlinks or Apache aliases?

    - by Christopher W. Allen-Poole
    I'm curious as to what most people's views are on this subject. Personally, I will almost always prefer symlinks unless I have no other option -- I find that it is far more obvious when someone is navigating the file system. On the other hand, aliasing is more platform independent. Windows XP, for example, doesn't have anything remotely comparable to symlinks (NTFS junctions are not interpreted correctly by at least some environments), which means that anything relying on symlinks in a *nix-based system cannot be transferred. (I know that 64-bit Windows versions have symlinks, but I haven't seen whether they can be read correctly by the environments previously mentioned.) In addition to this, I was also wondering which is considered faster. Is this even possible to know? Do you have a conjecture? I would imagine that since symlinks are resolved at a lower level than Apache, they would be referenced faster; but then again, Apache has to do a filesystem lookup in either case, so it may simply be disk-read dependent.
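
    For readers weighing the two, here is a minimal sketch of each approach, using Apache 2.2-era syntax and hypothetical paths (not taken from the question):

        # Apache alias, in httpd.conf or a vhost:
        Alias /docs /srv/shared/docs
        <Directory /srv/shared/docs>
            Order allow,deny
            Allow from all
        </Directory>

        # Symlink equivalent: ln -s /srv/shared/docs /var/www/html/docs
        # Apache only follows it if the parent directory allows it:
        <Directory /var/www/html>
            Options FollowSymLinks
        </Directory>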

    Read the article

  • How to get partition information from non-booting server?

    - by gravyface
    I need to manually rebuild a mirrored array on a server and am in the process of reinstalling SBS 2003 on it. However, it's a Dell server, and I know that there's the Dell FAT32 diagnostics partition, a system partition, and a data partition, but I do not know the size of each. I'm planning on reinstalling SBS 2003 and all applications on the server, then doing a System State restore, but I figured that not having the correct partitions will cause some grief: am I right? I'm almost thinking the size of the partitions shouldn't matter, but I'm not positive. Question: should I care about the size of the partitions? If so, how can I get this partition information from a non-booting drive? We have an Acronis image of the one working disk, and the partitions are mounted/viewable in Explorer on a workstation, but I'm not sure where the Logical Disk Manager/Disk Management data is stored and/or whether there's a way to retrieve it without a working Windows installation.
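
    Since the Acronis image mounts on a workstation, one hedged route to the exact layout is to read the partition table of the raw disk (or the restored image) from a Linux live CD; these are generic commands, not Dell-specific:

        # Exact start/end sectors and sizes of every partition
        fdisk -lu /dev/sda

        # Or a friendlier table, including filesystem types
        parted /dev/sda unit MB print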

    Read the article

  • How to repair a broken .EXE file association

    - by Pointy
    After (hopefully) scrubbing viruses out of a Windows 7 installation (and after deciding not to simply run over the laptop repeatedly with my car), I've got everything almost back to normal. The only lingering issue is that for my non-admin users, the ".exe" file association doesn't work. That is, clicking on the various desktop application links results in a "How do you want to open this?" dialog. I've been through the alleged registry fix from "winhelponline", and that had absolutely no effect. I've tried running "assoc" for the affected users, but it reports the .exe association as "exefile" even though it persistently does not work. Right-clicking on a desktop icon and then choosing "start" does successfully open the application, but that's clearly a terrible situation. For my admin user, things seem to work fine. What do I need to do to get things working?
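
    One commonly cited fix for a per-user .exe hijack - offered as a hedged suggestion, since the exact damage malware leaves behind varies - is to delete the per-user FileExts override so the machine-wide "exefile" association applies again. Run this in the affected user's session, then restart Explorer:

        reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.exe" /f
        taskkill /f /im explorer.exe & start explorer.exe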

    Read the article

  • SMTP SASL authentication failure

    - by cromestant
    Hello, I have configured and fixed almost all the problems with my Postfix + Courier + MySQL setup for virtual mailboxes. I can now receive mail and send it from webmail (SquirrelMail). BUT, what I can't do is authenticate from an outside client. Since my ISP blocks port 25, I set up Postfix to listen on port 1025 for SMTP and enabled verbose logging. Here is the verbose log of a failed authentication process: LOG. Authentication for IMAP and POP3 seems to be working, but SMTP is not. Here is the postconf -n output. Also, through MySQL I can verify that it is trying to validate against the system: running a query returns the encrypted password stored in the database. I can't seem to find the cause of this. Thank you in advance.
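
    Without the log to hand, one hedged sanity check is to confirm the SASL basics in main.cf and to test the SASL layer outside Postfix entirely; the lines below are a typical Cyrus SASL sketch with a hypothetical mailbox and password, not the poster's verified configuration:

        # main.cf (sketch)
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_mynetworks,
            permit_sasl_authenticated, reject_unauth_destination

        # If saslauthd is in the chain, test it directly, bypassing Postfix
        testsaslauthd -u user@example.com -p secret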

    Read the article

  • A simple volume replication tool for a large data set?

    - by Jin
    I'm looking for a solution to the following: Server A (Site A) - Win 2008 R2 - approx 10TB (15TB max) of data - well over 8 million files. Server B (Site B) - Win 2008 R2. I want to asynchronously replicate Server A's volume to a volume on Server B for data redundancy - something that lets me say to my users, "go here for data" when/if Server A goes belly-up due to machine problems, disaster, etc. Windows 2008 R2 does have DFS, but Microsoft apparently does not support a dataset this large (or more accurately, more than 8 million files, according to the docs I could find). I also looked at Veritas Volume Replication, but that seems like too much, as I would also need Veritas Volume Manager. There are numerous backup tools that make a 1:1 copy, which would be OK, but since the replication will run over the internet, I'd like something with compression during transfer, like DFS has. Does anyone have any suggestions?
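
    One hedged option, given the compression requirement, is rsync through a Windows port such as cwRsync: it compresses in transit with -z and resends only changed file portions. The host and paths below are hypothetical:

        rsync -avz --delete --partial \
            /cygdrive/d/data/ backup@serverb.example.com:/cygdrive/e/data/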

    Read the article

  • wbadmin incremental system state backup

    - by user74513
    I am doing system state backups on a Windows Server 2008 R2 Enterprise (Service Pack 1) machine and expected the backups after the first one to be incremental. However, with each backup a new directory of VHD files is created, and the VHD files are almost the same size as those from the first backup, so the backups do not seem to be incremental. I used the following command: wbadmin start systemstatebackup -backupTarget:f: I played around with the settings under "Configure Performance Settings" in the Windows Server Backup plugin in Server Manager, but according to the description at the top of the dialog, those settings do not apply to system state backups. Are there any settings available to make wbadmin system state backups incremental?

    Read the article

  • Wireless Keyboard/Mouse: Should batteries be removed when not in use?

    - by abel
    I recently bought a Logitech wireless keyboard and mouse. I use them almost daily, but for only a couple of hours. The keyboard takes 2 AAA batteries and the mouse takes 1 AA battery. The box says the keyboard has a 24-month battery life and the mouse a 5-month battery life. Should I keep the batteries in the keyboard/mouse when they are not in use? Is it safe? And does the battery life mean 24 months of continuous usage or 24 months of average usage?

    Read the article

  • Why is my NTP controlled computer clock two minutes ahead?

    - by Martin Liversage
    The clock in my computer is configured to be synchronized using NTP. To verify this I have tried two NTP clients using various NTP servers. My computer and the NTP clients are in complete agreement about the current time even across a wide range of NTP servers. I also have a GPS and my national phone company provides an accurate clock available by calling a specific phone number. Both my GPS and the phone company agrees on the current time. However, my computer is almost precisely two minutes (or 1 minute and 59 seconds) ahead of what I believe to be the "real" current time where I live. Why is my computer two minutes ahead? I realize that synchronizing clocks using the internet may not be entirely accurate as there is latency, but two minutes is a very long time on the internet. Is NTP really two minutes ahead? I'm running Windows 7 and live in the time zone UTC+1, but I don't think that is important in understanding my problem.
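
    On Windows 7, a hedged way to see what the time service itself believes - and to measure the live offset against a known server - is:

        w32tm /query /status
        w32tm /query /peers
        w32tm /stripchart /computer:pool.ntp.org /samples:5 /dataonly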

    Read the article

  • Amazon EC2, fastest way to get a node into an existing cluster

    - by imaginative
    I'm new to Amazon AWS. I often hear about folks spawning instances and almost instantly putting them behind a load balancer and into an existing cluster. In the traditional world of managed machines, this would include provisioning hardware, installing an OS, configuring the network on the machine, and, once the network is available, using a tool of your choice such as CFEngine, Puppet, or Chef to bootstrap the machine based on its class. It seems like there are "shortcuts" that can get a server of a particular class up and running in Amazon EC2. If I have a particular stack on my servers, such as Erlang, Tomcat 6, etc., what's the fastest way to get a new instance up and running and hooked into Amazon's load balancer - from network, to software stack, to kernel tuning? Is it a combination of creating an AMI and then running a tool like Puppet against the new instance? Any ideas?
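
    As a sketch of the combination described above (a baked AMI finished off by configuration management), user data passed at launch can complete the bootstrap; the script below is hypothetical, assuming a Debian-flavored AMI and an existing Puppet master:

        #!/bin/bash
        # EC2 user data: runs once at first boot of the new instance
        apt-get update -y
        apt-get install -y puppet
        # Pull this node's class (erlang, tomcat6, ...) from the master
        puppet agent --server puppet.internal.example.com --waitforcert 60 --test
        # Registering with the load balancer is then done from outside,
        # e.g. via the EC2 API, once the instance reports healthy.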

    Read the article

  • Just one client bound to an address and port: does broadcast versus unicast make a difference in overhead?

    - by chrisapotek
    Scenario: I am implementing failover for a network node, so my idea is to have the master node listen on a broadcast IP address and port. If the master node fails, a failover node will start listening on this broadcast address (and port) and take over. Question: my concern is that I will be using a broadcast IP address for just a single node: the master. The failover node only binds if the master fails - in other words, almost never. In terms of network traffic overhead, is it bad to talk to a single node through a broadcast address, or is the network somehow smart enough to know that nobody else is listening on this broadcast address and treat it like a unicast in terms of overhead? My concern is that I will be flooding my network with packets from this broadcast address even though I am really talking to a single node (the master). But I can't use unicast, because the failover node has to be able to pick up the master's stream quickly and transparently if the master fails.

    Read the article

  • Windows XP Boot Issue - Diagnosing A Hard Drive Failure

    - by duffymo
    My five-year-old HP desktop running Windows XP SP3 wouldn't boot from the hard drive yesterday afternoon. I would see the boot sequence begin, then nothing but a black screen. Fortunately, I had just done an Acronis backup to my external drive in the morning, and I have a bootable USB key. I put the USB key into the drive, powered up the machine, and put the USB key first in line in the boot sequence. Voila! My machine came alive. But now I'm confused as to what the problem is and what to do next. I assumed that my hard drive was toast. But now that the machine is alive I can see files on my C: drive that have changes I made just yesterday. Clearly the drive is not dead. Here are my questions: What could explain my inability to boot from the hard drive? What would a remedy be? What's my best course of action? Should I replace the hard drive with a new one? If I replace the hard drive, do I reinstall the OS and apply the backup I did yesterday? If I decide that re-installing Windows XP makes no sense, how do I get back the Acronis backup that I did yesterday? I don't want to lose that. UPDATE: I just learned one more key fact. I'm having some work done on my house. I neglected to shut my machine down before the contractor came. My wife said he shut down the power to do some work on a circuit and then powered the house back up. I have a surge protector, but is it possible that cycling the power did some damage?
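
    Before replacing hardware, one hedged check is whether the power cycling damaged only the boot sector or MBR rather than the drive itself; from the XP Recovery Console (boot from the install CD and press R), the standard repair sequence is:

        chkdsk C: /r
        fixboot C:
        fixmbr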

    Read the article

  • Multisession burning in ImgBurn

    - by blntechie
    Is multisession burning available in ImgBurn? If not, any idea whether it will be implemented in the future? I almost recommended ImgBurn instead of Nero or Roxio to one of my friends, but he requires multisession burning and I found no option to enable it in Options. Note: please don't question the question - e.g., "Why would you want multisession anyway?" or "Isn't a USB stick or RW disc what you need instead of a read-only CD/DVD?" Please keep the answers in context. I know I can use USB sticks instead of CDs/DVDs, and my friend requires multisession anyway. Maybe I can ask him to keep Nero as a backup for this purpose if ImgBurn doesn't support it.

    Read the article

  • Set up SSL/HTTPS in zend application via .htaccess

    - by davykiash
    I have been battling with .htaccess rules to get my SSL setup working right for the past few days. I get a "requested URL not found" error whenever I try to access any request that does not go through the index controller. For example, this URL works fine if I enter it manually: https://www.example.com/index.php/auth/register However, my application has been built in such a way that the URL should be https://www.example.com/auth/register and that gives the "requested URL not found" error. My other URLs, such as https://www.example.com/index/faq https://www.example.com/index/blog https://www.example.com/index/terms work just fine. What rule do I need to write in my .htaccess to get the URL https://www.example.com/auth/register working? My .htaccess file looks like this:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    I posted an almost identical question on Stack Overflow.
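
    One hedged possibility is that the rewrite rules simply never run for HTTPS requests because the SSL virtual host doesn't permit .htaccess overrides; here is a minimal sketch of the relevant vhost settings, with hypothetical paths, in Apache 2.2-era syntax:

        <VirtualHost *:443>
            DocumentRoot /var/www/example/public
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/example.crt
            SSLCertificateKeyFile /etc/ssl/private/example.key
            <Directory /var/www/example/public>
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>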

    Read the article

  • Remote support via VPN without split tunnel

    - by Robe Eleckers
    My title might not be very clear, so I'll explain my setup in more detail. We have several customers (companies) that need remote support. At these customers we have servers running our software that need to be serviced. These servers are (almost) never connected to the internet, so we have multiple PCs running VPN clients. These PCs run a VNC server so our service engineers can log in from their home laptops and connect from there to the customer via the VPN connection on the PC. The problem, however, is that several customers do not allow split tunneling. That means that when we connect via VPN to such a customer, the VNC connection drops. Our current workaround is a Citrix VM that we control via the XenCenter console, but it's quite slow. Are there common solutions for handling this?

    Read the article

  • MySQL keeps crashing OS server. Please help adjust my.ini!

    - by TruMan1
    I have MySQL 5.0 installed on a Windows 2008 machine (3GB RAM). The server crashes on a regular basis (almost once a day) with this message:

        Changed limits: max_open_files: 2048 max_connections: 800 table_cache: 619

    I did not use the heavy InnoDB .ini file, although I am now rethinking whether I should have. I am worried that big configuration changes will make my current sites stop working. What should I do? Here are my current ini settings:

        default-character-set=latin1
        default-storage-engine=INNODB
        max_connections=800
        query_cache_size=84M
        table_cache=1520
        tmp_table_size=30M
        thread_cache_size=38
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=30M
        key_buffer_size=129M
        read_buffer_size=64K
        read_rnd_buffer_size=256K
        sort_buffer_size=256K
        innodb_additional_mem_pool_size=6M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=3M
        innodb_buffer_pool_size=250M
        innodb_log_file_size=50M
        innodb_thread_concurrency=10
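
    A hedged back-of-the-envelope check of these settings (rough figures, assuming MySQL defaults for anything not shown): the global buffers total roughly 470MB, and each of the 800 permitted connections can claim around another 1MB in read/sort buffers and thread stack, for a worst case near 1.3GB before Windows and anything else on the box. If far fewer simultaneous clients are actually needed, this is a low-risk first change:

        # Worst-case memory sketch, assuming defaults elsewhere:
        #   250M innodb_buffer_pool + 129M key_buffer + 84M query_cache
        #   + ~10M misc                           ~= 470M global
        #   + 800 connections x ~1M per thread    ~= 800M
        max_connections=300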

    Read the article

  • Disabling a laptop's (PB TJ-75) faulty card reader in Linux

    - by Gab
    My problem is that my laptop [PB TJ-75] has a faulty Alcor card reader. It's 100% certain: the device is dead and unusable whatever the OS. It cannot be disabled in the BIOS [latest: Vendor: Phoenix Technologies LTD, Version: V1.26, Release Date: 05/04/2010]. If I could remove it from the main board easily, and with that the system would never look for it again, I'd be very happy! Is that possible - has anyone ever tried it? Or maybe replacing the BIOS with a more open one that lets you disable the card reader - does that exist? Here's what I've tried to disable it so far. In Win7, I chose "disable" in Device Manager, and that's OK; otherwise the device keeps appearing and disappearing and a lot of resources are used. In Lubuntu 13.04, I got extra boot time, with the message "sdb, assuming drive cache, etc." I tried other distros (ISOs booted by GRUB). I can boot Puppy, GParted, and Redobackup apparently without any problem. I cannot boot Debian (live or install), Crunchbang, or Tails: I get a loop of "usb device, scsi n+1" messages. I tried "nousb": no result. I blacklisted EHCI: no result. Then the usb_storage module: better boot time in Lubuntu, with just the message "...data transfer failed", and better shutdown time too - but no way to use USB storage media. In Debian, it ends at a BusyBox prompt. Is it possible to just disable that Alcor card reader? Does it have a specific module? Is there a special kernel boot option that I missed? Does it have something to do with kernel recompiling, and if so, how do I do that with ISOs? Writing a driver that says everything is OK (beyond my abilities for the moment)? Disabling the device by vendor ID? What is the best way?
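
    One hedged software-side option (no soldering, and only applicable if the reader is attached via internal USB, as most laptop readers are) is to de-authorize the device on the USB bus so the kernel never binds a driver to it; the IDs below are placeholders - read the real ones from lsusb first (Alcor readers often report vendor 058f):

        # Find the reader's vendor:product IDs
        lsusb

        # /etc/udev/rules.d/99-disable-cardreader.rules
        SUBSYSTEM=="usb", ATTRS{idVendor}=="058f", ATTRS{idProduct}=="6362", ATTR{authorized}="0"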

    Read the article

  • FTP client that supports 2 concurrent FTP sessions

    - by oninea
    I'm looking for an FTP client that can connect to two different FTP servers at the same time and allow file transfer or synchronization between those two servers. Basically, what I want to achieve is to transfer/synchronize files between two different sites from my local machine. Are there any clients around that support this functionality? If there are none, is there an alternative way to achieve this? I've taken a look at net2ftp, a web-based FTP client, which provides almost exactly the functionality I need; what I'm looking for, though, is a desktop app. Any ideas?

    Read the article

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or PayPal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that Twitter and PayPal both have "Verisign Class 3 Extended Validation SSL CA" certificates; it is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on Verisign's root certificate support page, which worked fine. Just to be sure, I downloaded the root package file and installed all the included Verisign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, which also has a Verisign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing - or I did something that worked in a delayed fashion. Frustrating, but a win is a win. I'll take it.
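
    While it was still failing, one hedged way to separate a TCP-level problem from a certificate problem would have been to drive the connection by hand from a terminal:

        # Does the TCP connection to port 443 open at all?
        nc -vz twitter.com 443

        # If so, does the TLS handshake complete, and which chain is sent?
        openssl s_client -connect twitter.com:443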

    Read the article

  • Creating/renaming folder in Windows 7x64 extremely slow

    - by Newtopian
    Hi, I have this very annoying problem: whenever I want to create or rename a folder on my system, it takes a very long time to complete. Right-click, New Folder... wait... wait... wait a good 30-60 seconds, then type the name and press Enter... wait again 30-60 seconds, and only then can I enter it. Browsing is normal, and I have no problem creating folders through applications like Eclipse, but through Explorer it is a real pain. Renaming folders has the same effect. Otherwise the computer is (almost) normal. Any ideas?

    Read the article

  • Outlook new message size nearly 1MB

    - by Yossi Dahan
    I've been using Outlook 2010 for several weeks with no issues. Suddenly, a few days ago, the size of my outgoing messages got huge. Looking at it, it appears that a huge CSS style block is being created, with around 14,000 definitions for list items, making each message almost 1MB before I've even typed a single word. Emails before that point were very small. Needless to say, I can't remember changing anything, nor can anyone around here offer a possible explanation... Any ideas?

    Read the article

  • RedHat 6 GUI installation vs. kickstart gives me different packages?

    - by jonaz
    If I do the graphical install and select Basic Server plus aide and screen, I get a system with 535 installed packages. If I look at the /root/anaconda-ks.cfg file on that freshly installed system, I see:

        %packages
        @base
        @console-internet
        @core
        @debugging
        @directory-client
        @hardware-monitoring
        @java-platform
        @large-systems
        @network-file-system-client
        @performance
        @perl-runtime
        @security-tools
        @server-platform
        @server-policy
        @system-admin-tools
        pax
        python-dmidecode
        oddjob
        sgpio
        certmonger
        pam_krb5
        krb5-workstation
        nscd
        pam_ldap
        nss-pam-ldapd
        perl-DBD-SQLite
        aide
        screen

    If I then install a new system using a kickstart containing only those packages, I get 620 installed packages. So basically my question is: why does the system install almost 100 more packages when using kickstart than with the GUI installation, when the exact same package groups are selected?
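
    A hedged way to pinpoint where the extra ~100 packages come from is to diff the installed package sets of the two systems; the filenames here are arbitrary:

        # On the GUI-installed system
        rpm -qa --qf '%{NAME}\n' | sort -u > gui.txt

        # On the kickstart-installed system
        rpm -qa --qf '%{NAME}\n' | sort -u > ks.txt

        # Packages only the kickstart install pulled in
        comm -13 gui.txt ks.txt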

    Read the article

  • Any opinions on Paragon's NTFS for Mac solution?

    - by AngryHacker
    I am currently using the free NTFS-3G to access my NTFS drive from the Mac. It seems pretty stable (except once, in the very beginning, when it locked up the Mac and corrupted my NTFS drive, which I then fixed with chkdsk from a PC). However, speed is NOT one of its virtues. I've been looking at buying Paragon NTFS for Mac OS X 7.0. Their product comparison PDF claims nearly double the speed (vs. NTFS-3G) in almost every category (read, write, etc.). Can someone tell me whether the product is stable and whether the speed claims are true?

    Read the article

  • Ubuntu Pound reverse proxy load balancing based on active server load?

    - by Andrew
    I have Pound installed on a load balancer. It seems to work OK, except that it randomly assigns a backend server to forward each request to. I've put one backend machine under so much load that it went into swap, and I can't even SSH into it to test this scenario. I would like the load balancer to realize that the machine is overloaded and send requests to a different backend machine, but it doesn't. I've read the man page, and it seems like the directive "DynScale 1" is what should handle this, but it still redirects to the overloaded server. I've also added "HAport 22" to the backends, figuring that since I can't SSH in, neither could the load balancer, and it would consider the backend server dead until it sheds the load and responds - but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.

    Read the article
