Search Results

Search found 88447 results on 3538 pages for 'running time'.


  • Can't seem to get chassis fans running

    - by TK Kocheran
    I've got an ASUS ROG Maximus V Extreme and I'm trying to connect my fans to the chassis fan pins to get them running under motherboard control. I know for sure that my fans work, as when I test them with my Molex connector they all happily power on. Here are two of my chassis fan connectors (there are 3-4): Here's the connector that came with either my motherboard or the PSU, can't remember :) I've never seen one of these strange cables before. All I know is that if I plug the 4-pin mobo connector into either of these fan plugs, the fans don't come on and don't show up in the BIOS. (The motherboard has a crazy awesome UEFI BIOS and shows you if it sees the fans.) If I try plugging the 4-pin connection into the mobo and the other side into the PSU, I can't POST. If I plug the PSU connector in without the mobo connector, the fans come on. What could I be doing wrong here? Is it a problem with the cable I'm using? Is there something I may have missed in the build?


  • How to make HOME or END keys work in mc running on OS X (ssh)

    - by Sorin Sbarnea
    I installed MacPorts on OS X 10.5 and found out that when I connect to the computer using SSH and use mc - Midnight Commander - the HOME and END keys do not work. I should mention that I'm using PuTTY, and the keyboard works perfectly well on Linux machines like Fedora, Ubuntu, etc. Here is the PuTTY keyboard configuration (one I found to be optimal over time):
      Backspace key: 127
      Home/End keys: Standard
      Function keys: Xterm R6
      Cursor keys: Normal
      Numpad: normal
      Terminal type string: xterm-color
    I'm looking for a command-line solution/script that makes these changes; that would make it much easier to write a setup script for configuring a new OS.
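
    One command-line approach worth sketching (an assumption, not a verified fix for this exact setup) is to install a per-user terminfo entry whose khome/kend capabilities match the sequences PuTTY sends with "Standard" Home/End keys (ESC [ 1 ~ and ESC [ 4 ~):

      # dump the entry mc sees, rewrite Home/End, and compile a user-local override
      infocmp xterm-color > xterm-color.ti
      sed -e 's/khome=[^,]*/khome=\\E[1~/' -e 's/kend=[^,]*/kend=\\E[4~/' \
          xterm-color.ti > xterm-color-fixed.ti
      tic -o ~/.terminfo xterm-color-fixed.ti   # ncurses consults ~/.terminfo first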


  • Running python script in incrontab in Debian

    - by WilliamMayor
    I have a user, dropbox, that runs the Dropbox daemon, and I want to monitor the directories in the Dropbox directory for new files and run a Python script when they appear. I have the Python script, which I know works:
      $ /home/dropbox/monitor.py
      Trying to get lock
      Got lock, waiting for Dropbox to be idle
      Dropbox idle
      Finding instructions
      Done, releasing lock
    I have an incrontab entry:
      $ incrontab -l
      /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor.py | logger
      /home/dropbox/test IN_CREATE logger "$$ $@ $# $% $&"
    When I add a file to the test directory I see the output in /var/log/syslog:
      $ touch /home/dropbox/test/a
      $ tail /var/log/syslog
      ...
      Nov 9 10:18:27 vps incrond[1354]: (dropbox) CMD (logger "$ /home/dropbox/test a IN_CREATE 256")
      Nov 9 10:18:27 vps logger: "$ /home/dropbox/test a IN_CREATE 256"
      ...
    However, when I add a file to the Dropbox directory the command doesn't seem to run:
      $ touch /home/dropbox/Dropbox/a
      $ tail /var/log/syslog
      ...
      Nov 9 10:24:16 vps incrond[1354]: (dropbox) CMD (/home/dropbox/monitor.py | logger)
      ...
    So the incron daemon notices the new file and finds the correct command, but the command never actually gets executed, and there are no error messages either. It kind of seems like incrontab can only be used to run the simplest of commands. This might be a similar question to "Incrond running but not executing commands CentOS 6.4", but I don't think I have environment problems; every path is absolute. I tried changing .../monitor.py to /usr/bin/python2.7 .../monitor.py just in case, but it didn't make any difference.
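
    One thing worth checking (an assumption, not a confirmed diagnosis): incron runs the command directly rather than through a shell, so the "| logger" part may be handed to monitor.py as plain arguments instead of creating a pipe. A sketch of a workaround is to move the pipeline into a small wrapper script and point the incrontab entry at that:

      #!/bin/bash
      # /home/dropbox/run-monitor.sh  (hypothetical wrapper; make it executable with chmod +x)
      /home/dropbox/monitor.py 2>&1 | logger

      # the incrontab entry then becomes:
      # /home/dropbox/Dropbox IN_CREATE /home/dropbox/run-monitor.sh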


  • Running a webserver behind a firewall I have no access to

    - by reijin
    I'm having a bad time in my student apartment: I want to run a web server on my laptop, which should be reachable from outside of the net. I'm sitting behind some proxy server that passes outgoing packets to the matching server, but when it comes to incoming connections it doesn't route them correctly to my PC. (It seems packets only get passed if some PC from within the student flat is already connected to the sending server.) In the past I had a small virtual private server that was sending incoming website requests over a reverse shell to my PC, which then returned the website content, and the visitor could see my website. Sadly I don't have that server anymore... Do you have any ideas that might solve my problem? Greetings, Benedikt
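
    For reference, the old VPS-based setup can be approximated with OpenSSH's remote port forwarding, assuming some reachable host is available again (vps.example.com below is a placeholder):

      # forward port 8080 on the remote host back to the laptop's local web server on port 80
      ssh -N -R 8080:localhost:80 user@vps.example.com
      # visitors then browse http://vps.example.com:8080/
      # (GatewayPorts yes may be needed in the remote sshd_config for external clients)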


  • How to netboot Ubuntu running inside VirtualBox on a Mac Air

    - by murungu
    Having configured a virtual machine for Ubuntu in VirtualBox on my Mac Air, I need to install the Ubuntu OS itself. I have selected the hard drive as the primary boot device and the network as the secondary boot device, so I am not prompted for an Ubuntu installation disc at boot time. It attempts to netboot but is unable to locate Ubuntu, and I cannot find anywhere in the configuration where I can explicitly specify where to find an Ubuntu image, so I assume it reverts to some default location and fails. Has anybody out there ever successfully installed Ubuntu on VirtualBox on their Mac Air? What steps did you take to get it right?
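
    A simpler route than netbooting (a sketch; the VM and controller names are examples, not values from the question) is to attach a downloaded Ubuntu ISO to the VM's virtual optical drive and boot from it:

      # attach the installer ISO to the VM's storage controller
      VBoxManage storageattach "Ubuntu" --storagectl "IDE Controller" \
          --port 1 --device 0 --type dvddrive --medium ~/Downloads/ubuntu-10.10-desktop-i386.iso
      # boot from the virtual DVD first, then the hard disk
      VBoxManage modifyvm "Ubuntu" --boot1 dvd --boot2 disk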


  • VirtualBox VM running web server not accessible via external IP

    - by mwigdahl
    I have a Windows 7 machine running VirtualBox with an Ubuntu guest. The guest has a Bitnami LAMP stack installed. I have the guest configured for bridged networking, and I can access the guest web server just fine from other machines on my LAN using the guest's IP. I'm trying to configure port forwarding so that I can access the web server from outside my LAN. (The router is a 2WIRE model, as I'm on AT&T's U-verse.) I've set up port forwarding for ports 80 and 443 to the guest's IP in a similar manner to how I had them set up for my previous, physical web server, which worked just fine. However, I cannot seem to access the new, virtual web server using my external IP on the forwarded port. I suspected Windows Firewall issues on the host, but disabling it didn't solve the issue. Does anyone have advice on what I should try next? EDIT: I've now attempted disabling the firewall on the guest with sudo ufw disable -- that doesn't seem to help either. However, after checking the router's port forwarding in more detail I may see the problem. My VM is named "linux" and in the router's configuration pages it shows up inconsistently. Sometimes it reports a valid LAN IP and other times it doesn't show up with any IP. Even when it shows the correct IP, the router indicates that it is disconnected. Could this be an indication that the 2WIRE router doesn't play well with VirtualBox's bridged networking mode?
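
    A couple of checks that might narrow this down (a sketch; the IP is a placeholder): confirm the guest is actually listening on the bridged interface, and test the forwarded port from outside the LAN rather than from inside it, since many consumer routers do not loop external requests back to internal hosts (NAT hairpinning):

      # on the Ubuntu guest: is the web server bound on ports 80/443?
      sudo netstat -tlnp | grep -E ':80 |:443 '
      # from a host outside the LAN (e.g. a phone on mobile data or a remote shell):
      curl -I http://203.0.113.10/   # placeholder external IP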


  • Running phpMyAdmin under XAMPP on Ubuntu 12.10

    - by Luigi Tiburzi
    I know this is a common problem and there are many solutions on the web, but I've tried everything and nothing is working; I can't get phpMyAdmin running on my machine. I installed XAMPP through:
      sudo tar xvfz ./Downloads/xampp-linux-1.8.1.tar.gz -C /opt
    then I did the chmod trick that is supposed to put an end to access issues, and I changed the default location of my PHP projects from /var/www to Dropbox/php. Then I started XAMPP in the usual way:
      sudo /opt/lampp/lampp start
    When I run one of my PHP projects the output in the browser is fine, but if, for example, I type localhost in my browser I get "It works" and not the usual XAMPP interface. Worst of all, when I try to access localhost/phpmyadmin I get the login page, enter the username (root) and password, and then I get:
      You don't have permission to access /phpmyadmin/index.php on this server.
      Apache/2.2.22 (Ubuntu) Server at localhost Port 80
    I tried the "Require all granted" trick and some others, but nothing is working. I even tried to uninstall and reinstall phpmyadmin, but that isn't working either. I don't know how to proceed. Thanks for your help.
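
    The "It works" page suggests (this is an assumption, not something confirmed in the question) that Ubuntu's own Apache is answering on port 80 instead of the XAMPP/LAMPP Apache, so /phpmyadmin is being served by the wrong server. A sketch of how to check and free the port:

      # see which apache owns port 80
      sudo netstat -tlnp | grep ':80 '
      # if it is the distro's apache2, stop it and keep it from starting at boot
      sudo service apache2 stop
      sudo update-rc.d apache2 disable
      # then restart the XAMPP stack
      sudo /opt/lampp/lampp restart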


  • Celery daemon run as an Ubuntu service does not consume tasks, while running it from the terminal does

    - by Guy
    On Ubuntu 11.10, I have to issue Python tasks from Django using Celery. I'm currently testing on the same machine, but eventually the Celery worker should run on a remote machine. Django uses the following settings:
      BROKER_HOST = "127.0.0.1"
      BROKER_PORT = 5672
      BROKER_VHOST = "/my_vhost"
      BROKER_USER = "celery"
      BROKER_PASSWORD = "celery"
    I can also see my task queued in http://localhost:55672/#/queues. The Celery daemon uses the following configuration (celeryconfig.py):
      BROKER_HOST = "127.0.0.1"
      BROKER_PORT = 5672
      BROKER_USER = "celery"
      BROKER_PASSWORD = "celery"
      BROKER_VHOST = "/my_vhost"
      CELERY_RESULT_BACKEND = "amqp"
      import os
      import sys
      sys.path.append(os.getcwd())
      CELERY_IMPORTS = ("tasks", )
    Running celeryd -l info works well, and now I want to run it as a service. I've followed the instructions from http://ask.github.com/celery/cookbook/daemonizing.html and now I'm trying to run it using:
      sudo /etc/init.d/celeryd start
    But the message is not being consumed, and there is no error in the celery log either. /etc/default/celeryd:
      CELERYD_NODES="w1"
      CELERYD_CHDIR="/path/to/django/project"
      CELERYD_OPTS="--time-limit=300 --concurrency=1"
      CELERY_CONFIG_MODULE="celeryconfig"
      # %n will be replaced with the nodename.
      CELERYD_LOG_FILE="/var/log/celery/%n.log"
      CELERYD_PID_FILE="/var/run/celery/%n.pid"
      # Workers should run as an unprivileged user.
      CELERYD_USER="celery"
      CELERYD_GROUP="celery"
    I've also created the user celery in Ubuntu; I'm not sure if it's necessary. Any help will be appreciated. Thanks, Guy
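
    A first thing worth trying (an assumption, not a confirmed cause): when celeryd starts as the unprivileged celery user, it must be able to write its log and pid directories and read the Django project, and failures there can be silent. Running the worker by hand as that user often surfaces the real error; a sketch:

      # make sure the directories referenced in /etc/default/celeryd exist and are writable
      sudo mkdir -p /var/log/celery /var/run/celery
      sudo chown celery:celery /var/log/celery /var/run/celery
      # run the worker in the foreground as the service user to see any errors
      sudo -u celery sh -c 'cd /path/to/django/project && celeryd -l debug'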


  • SEAGATE Barracuda 7200.11 HDD not running

    - by Dane411
    After a huge amount of research, I'm stuck at the beginning of getting my HDD data back. What's happening is that the moment I plug the power cable into my external 1TB SEAGATE Barracuda 7200.11 ST31000333AS HDD (Fw LC15), it sounds like it spins up to almost full speed, then shuts down and spins up again, and so on. It's well known that these HDDs have a firmware bug that someday randomly strikes. There are two main problems identified: the BSY (busy) state and the LBA0 error. The last time I connected it to power, nothing happened; it didn't try to start at all. Is that the so-called bricked state? I guess my HDD's error is the first one, but I don't really know whether what I described is the BSY state or not, nor do I know how to check it. How can I find out? Thank you so much!


  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a swf movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. The results of each test are sent back to the launching script across a port and written out to a local XML file. Once the tests are completed, the movie closes down, and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual space for the Flash movie to run its tests in. I've passed this information on to our sysadmin team (along with the link to the article), and I've been told that because this is a Solaris box, we can't use that approach - Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
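
    One possible alternative (a sketch; whether Xvfb is present on this particular Solaris install is an assumption, and the standalone player path is a placeholder) is to give the Flash player a virtual framebuffer instead of a VNC server:

      # start a virtual X display and point the test run at it
      Xvfb :99 -screen 0 1024x768x24 &
      export DISPLAY=:99
      # launch the FlexUnit test runner movie against that display
      /path/to/flashplayer /path/to/TestRunner.swf &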


  • How to know whether MongoDB is running in 64-bit or 32-bit mode

    - by Jim Thio
    My programmer installed MongoDB. Then somehow it doesn't work. I run:
      C:\mongod\bin>mongod
      mongod --help for help and startup options
      Sat Aug 11 22:57:50
      Sat Aug 11 22:57:50 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
      Sat Aug 11 22:57:50
      Sat Aug 11 22:57:50 [initandlisten] MongoDB starting : pid=3800 port=27017 dbpath=/data/db 32-bit host=haryantoi5
      Sat Aug 11 22:57:50 [initandlisten]
      Sat Aug 11 22:57:50 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
      Sat Aug 11 22:57:50 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
      Sat Aug 11 22:57:50 [initandlisten] **       with --journal, the limit is lower
      Sat Aug 11 22:57:50 [initandlisten]
      Sat Aug 11 22:57:50 [initandlisten] db version v2.0.7-rc1, pdfile version 4.5
      Sat Aug 11 22:57:50 [initandlisten] git version: 9efe4cce272373b52b96de1309c1fbf0c984305f
      Sat Aug 11 22:57:50 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_42
      Sat Aug 11 22:57:50 [initandlisten] options: {}
      **************
      Unclean shutdown detected.
      Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
      *************
      Sat Aug 11 22:57:50 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
      Sat Aug 11 22:57:50 dbexit:
      Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close listening sockets...
      Sat Aug 11 22:57:50 [initandlisten] shutdown: going to flush diaglog...
      Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close sockets...
      Sat Aug 11 22:57:50 [initandlisten] shutdown: waiting for fs preallocator...
      Sat Aug 11 22:57:50 [initandlisten] shutdown: closing all files...
      Sat Aug 11 22:57:50 [initandlisten] closeAllFiles() finished
      Sat Aug 11 22:57:50 dbexit: really exiting now
    It seems that mongod is running as 32-bit. I have a 64-bit computer and I want to run MongoDB in a 64-bit environment. How do I do so?
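
    Apart from reading the startup banner (the "32-bit host=" line above already answers the question for this install), a running server can be asked directly: the build-info document it returns includes a "bits" field of 32 or 64. Switching to 64-bit is then a matter of downloading the 64-bit Windows build from mongodb.org and pointing it at the same dbpath. A sketch of the check:

      mongo --eval "printjson(db.serverBuildInfo())"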


  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days; the website is a Rails app using Thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:
      upstream domain1 {
        least_conn;
        server 127.0.0.1:3009;
        server 127.0.0.1:3010;
        server 127.0.0.1:3011;
      }
      server {
        listen 80; # default_server;
        server_name xyz.com *.xyz.com;
        client_max_body_size 5M;
        access_log /home/ubuntu/www/xyz/current/log/access.log;
        root /home/ubuntu/www/xyz/current/public/;
        index index.html;
        location / {
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_redirect off;
          proxy_read_timeout 150;
          if (!-f $request_filename) {
            proxy_pass http://domain1;
            break;
          }
        }
      }
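
    Since 502 generally means the upstream did not answer and 499 means the client gave up waiting, one sketch of a first check (using the ports and paths quoted above) is whether the three Thin instances are still alive and responsive after a few days of uptime:

      # are the Thin workers still listening?
      sudo netstat -tlnp | grep -E ':3009|:3010|:3011'
      # does each one answer directly, bypassing nginx?
      for p in 3009 3010 3011; do curl -sS -o /dev/null -w "port $p -> %{http_code}\n" http://127.0.0.1:$p/; done
      # nginx's error log usually names the failing upstream
      sudo tail -n 50 /var/log/nginx/error.log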


  • Moved servers running Windows Server 2003

    - by Charles
    Our company has two locations, and each location has a Windows Server 2003 machine as the DC and several servers, running on two different subnets. We are consolidating the locations. I changed the IP address on one of the web servers prior to moving to the main location. I didn't change the IP address on either the DC or the other web servers prior to moving. Now, only the web server whose IP was changed is able to serve pages. The other web servers are not able to serve pages, cannot be pinged, and cannot be accessed via RDP. Since we don't need the second DC, it has been powered down. When I tried to ping it, the previous IP address was returned. My colleague changed the IP address in the DC's DNS, but when I ping it, a timeout error is received. I know that I should have read a lot more before doing this. What can I do to fix it? Thanks, in advance, for your help! Update: MarkM, thanks for the info on demoting a DC. That's one of the things I want to do after everything is working. Is there a good, clear article you recommend? Rusty, there are no DMZs involved at this point. I need to set up a DMZ, but that's another project.


  • Poor write performance on Debian server running NFS with 22TB exported JFS filesystem

    - by user143546
    I am currently running a Debian server that is exporting a large JFS filesystem (22TB) over NFS (nfs-kernel-server). When attempting to write to the NFS share, the performance is very poor. The 22TB disk is sitting on a NAS mounted using iSCSI. Writes burst for a moment near the expected line speed and then sit idle for several seconds, with very little traffic measured, in the low kB/sec; I/O wait peaks on writes. When reading from the NFS mount, the system operates at the expected speed (11MB/sec). The issue does not occur when using SFTP, rsync, or local copying (non-NFS). The issue persists between stable and testing releases. On the same machine I have a 14TB ext4 filesystem using the exact same export configuration that does not share the issue; that share is not in regular use and thus not consuming resources.
    NFS server:
      cat /etc/exports
      /data2 10.1.20.86(rw,no_subtree_check,async,all_squash)
      cat /sys/block/sdb/queue/scheduler
      noop [deadline] cfq
      cat /etc/default/nfs-kernel-server
      RPCNFSDCOUNT=8
      RPCNFSDPRIORITY=0
      RPCMOUNTDOPTS=--manage-gids
      NEED_SVCGSSD=
      RPCSVCGSSDOPTS=
    NFS client:
      cat /etc/fstab
      10.1.20.100:/data2 /root/incoming nfs rw,noatime,soft,intr,noacl 0 2
      cat /sys/block/sdb/queue/scheduler
      noop [deadline] cfq
      cat /proc/mounts
      10.1.20.100:/data2/ /root/incoming nfs4 rw,noatime,vers=4,rsize=262144,wsize=262144,namlen=255,soft,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.1.20.86,minorversion=0,addr=10.1.20.100 0 0
    This problem has me pretty stumped. Any help would be greatly welcomed. Thanks.
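
    One way to narrow down where the stall happens (a sketch; the paths are the ones quoted above) is to benchmark the same synchronous write locally on the JFS volume and then through the NFS mount, so the iSCSI back end and the NFS layer can be separated, while watching the nfsd counters on the server:

      # locally on the server, straight onto the JFS/iSCSI volume
      dd if=/dev/zero of=/data2/ddtest bs=1M count=2048 conv=fdatasync
      # on the client, through the NFS mount
      dd if=/dev/zero of=/root/incoming/ddtest bs=1M count=2048 conv=fdatasync
      # on the server, watch NFS thread usage while the client test runs
      watch -n1 cat /proc/net/rpc/nfsd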


  • Problem uninstalling and installing Java on a new PC running Windows 7 64-bit

    - by Brian Gerrin
    I have a new Dell Studio XPS running the Windows 7 64-bit OS. I am attending online classes which require IE 8 and Java version 6 build 20. The PC came with IE 8 32-bit and Java 6 build 21 already installed. I tried to uninstall Java using Add and Remove Programs, but after about 45 minutes of "Preparing to remove application" I got an error referring to a missing DLL file and the uninstall failed. I used a third-party program to remove Java and downloaded Java 6 build 20. My problem is that when I try to install it I get the box telling me "Installing program ... this may take a few minutes", however after 30 to 45 minutes nothing has happened and there is no indication in the progress bar that anything is happening; then all of a sudden the progress bar is full and the program is supposedly installed. When I try to run it, however, it doesn't work. Someone help please! I can't get access to my classwork without this! Thanks


  • Extending a home wireless network using two routers running tomato

    - by jalperin
    I have two Asus RT-N16 routers each flashed with Tomato (actually Tomato USB). UPSTAIRS: Router 'A' (located upstairs) is connected to the internet via the WAN port and connected via a LAN port to a 10/100/1000 switch (Switch A). Several desktops are also attached to Switch A. Router A uses IP 192.168.1.1. DOWNSTAIRS: I've just acquired Router 'B' and set it to IP 192.168.1.2. I have a cable running from Switch A downstairs to another switch (Switch B). Tivo, a blu-ray player and a Mac are connected to Switch B. My plan was to connect Router B to Switch B so that I have improved wireless access downstairs. (The wireless signal from Router A gets weak downstairs in a number of locations.) How should I configure Router B so that all devices in the house can see and talk to one another? I know that I need to change DHCP on Router B so that it doesn't cover the same range as DHCP on Router A. Should I be using WDS on the two routers, or is that unnecessary since I already have a wired connection between the two routers? Any other thoughts or suggestions? Thanks! --Jeff


  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running an Ubuntu Server in a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible with the apache2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's IP address/~username/phpfile.php) it does not display as it should. Instead it offers to save the file / asks what program to open it with. Interestingly, though, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed on the system:
      PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
      Copyright (c) 1997-2009 The PHP Group
      Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies
    And the following server:
      Server version: Apache/2.2.14 (Ubuntu)
      Server built: Nov 18 2010 21:19:09
    If anyone knows what might be causing this, or potential solutions, it would make me very happy :) EDIT: It turns out this behaviour is only apparent on files in the ~/public_html/ directory. All PHP files in /var/www/ work fine. Prizes go to whoever can explain why? :D (And by prizes I just mean a well done, no actual prizes I'm afraid.)
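
    A plausible explanation (an assumption based on Ubuntu's stock packaging of that era, not something stated in the question) is that mod_php is deliberately disabled for user directories: /etc/apache2/mods-available/php5.conf ships with an <IfModule mod_userdir.c> block that turns the PHP engine off under public_html. A sketch of the check and fix:

      # look for the userdir block that switches PHP off
      grep -n -A4 'mod_userdir' /etc/apache2/mods-available/php5.conf
      # if present, comment out the lines inside <IfModule mod_userdir.c> ... </IfModule>
      # (the ones setting the PHP engine Off), then restart Apache
      sudo /etc/init.d/apache2 restart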


  • Best method(s) to back up VMs running on Hyper-V?

    - by Kara Marfia
    We're in the middle of P2V'ing most of the network, so the current backup method is likely the worst - the backup agent is still installed on the guest OSs, and the backup device is dutifully pulling them onto tape, one file at a time. I suspect there's a clever way to script (PowerShell?) a suspend on the VMs, then backup of the .vhd files, and unsuspend the VMs. This seems like it would provide big speed benefits, while losing file-level restore (might be best for things like DCs and app servers). What methods/policies have you hammered out?


  • Problem with ubuntu 10.10 running from USB drive

    - by Surjya Narayana Padhi
    I recently downloaded Ubuntu 10.10 and created a USB drive with it. I started to run Ubuntu from that USB drive, but I am facing a lot of problems. I keep wondering why it isn't as easy as Windows to do all my work in Ubuntu; I always get some error message or have to install something. This time I am getting the following errors. I am trying to download and install Aircrack-ng, so I used the command sudo apt-get install aircrack-ng, but the installation stops with the following error:
      update-initramfs: deferring update (trigger activated)
      cp: cannot stat `/vmlinuz': No such file or directory
      dpkg: error processing bcmwl-kernel-source (--configure):
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       initramfs-tools
       bcmwl-kernel-source
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    I don't even have the aptitude command installed yet. Are all these errors because I am running Ubuntu from a USB drive? Is there any simple and easy way to go to the Ubuntu Software Center and download all the required essentials in one shot, and then Aircrack-ng? I could not find Aircrack-ng in the Ubuntu Software Center. Can anybody give me detailed steps to solve all the problems above? I am frustrated searching for updates and installations, when some things work and some don't. Can anybody suggest how I should proceed after installing Ubuntu to run from a USB drive, so that I can use the OS like Windows - software downloads, wireless driver, sound, video, documents, C:, D:, everything should be there. Please somebody help.
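
    A first step that might clear the package errors (a sketch; whether the missing /vmlinuz is simply a side effect of the live-USB session is an assumption) is to let dpkg and apt finish the half-configured packages before retrying aircrack-ng:

      sudo dpkg --configure -a         # finish any half-configured packages
      sudo apt-get -f install          # let apt repair broken dependencies
      sudo apt-get update
      sudo apt-get install aircrack-ng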


  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application, however the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Running it through the Windows Task Scheduler also succeeds. Basically, the 3rd-party software will schedule the .BAT file and it shows success within the 3rd-party user interface. The difference between running from the command prompt and from the software is that the software uses its Windows service to launch the batch. The 3rd-party software shows success since it was able to call the .BAT file, however it has no control over the other EXEs being called within the script. I'm able to run a simple .BAT file in the 3rd-party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE which launches Excel to create a file in a location. The .bat file calls something.exe, which then calls Excel.exe:
      C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot
    Do you think it's a permissions issue? I used Process Monitor to check for any Access Denied errors, but everything seems to be working according to the trace. It worked on a non-64-bit OS; I'm currently using Win2008 64-bit.


  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS box. The node.js version is:
      $ node -v
      v0.10.28
    Here's my app.js:
      // Include http module,
      var http = require("http"),
      // And url module, which is very helpful in parsing request parameters.
          url = require("url");
      // show message at console
      console.log('Node.js app is running.');
      // Create the server.
      http.createServer(function (request, response) {
          request.resume();
          // Attach listener on end event.
          request.on("end", function () {
              // Parse the request for arguments and store them in _get variable.
              // This function parses the url from request and returns object representation.
              var _get = url.parse(request.url, true).query;
              // Write headers to the response.
              response.writeHead(200, { 'Content-Type': 'text/plain' });
              // Send data and end response.
              response.end('Here is your data: ' + _get['data']);
          });
      // Listen on the 8080 port.
      }).listen(8080);
    However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't get access to it in my browser: http://123.456.78.9:8080/?data=123. The browser returned Error code: ERR_CONNECTION_TIMED_OUT. The same app.js code runs fine on my local machine; is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
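
    If the app starts but remote connections time out, a common cause on CentOS (an assumption here, not something confirmed in the question) is that the default firewall only allows SSH in. A sketch for a CentOS 6-style iptables setup:

      # check whether anything is listening on 8080 and whether iptables filters it
      sudo netstat -tlnp | grep :8080
      sudo iptables -L -n --line-numbers
      # open the port and persist the rule
      sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
      sudo service iptables save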


  • APC serving old code intermittently, running with lighttpd and PHP FastCGI

    - by APZ
    I recently started facing this problem: APC serves old code when we upload an HTML template file to fix or change something on our websites. We run APC with apc.stat=0 and want to keep it that way because we seldom make changes to templates. Every time we upload a template we make sure to flush the APC cache, and we execute this script (only part of the script is shown here) to clear it:
      apc_clear_cache();
      apc_clear_cache('user');
      apc_clear_cache('system');
      apc_clear_cache(opcode);
    We use lighttpd and PHP FastCGI, and the FastCGI config has "max-procs" = 2, "PHP_FCGI_CHILDREN" = "5". Even after flushing APC once the upload is complete, it serves the old template intermittently. Any help would be appreciated.
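
    One hypothesis worth checking (an assumption, not a confirmed diagnosis): with "max-procs" = 2 there are two independent PHP master processes, each with its own APC shared memory, and an apc_clear_cache() call made through a web request only clears the cache of whichever process served that request. The usual lighttpd-side workaround is a single master whose children share one cache; a sketch:

      # in the lighttpd fastcgi.server section (mirrors the settings quoted above):
      #   "max-procs"         = 1,
      #   "PHP_FCGI_CHILDREN" = "10",
      # then restart so every child is forked from the single parent
      sudo /etc/init.d/lighttpd restart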


  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12GB of RAM. I often have multiple windows and a ton of tabs open. I use the Session Buddy extension to restore all my windows and tabs once memory usage gets too high. My 12GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome down, restore the same set of windows and tabs, and it will only use about 25% of memory, then over several hours it climbs back up to the 90% zone. It seems that when I close tabs the memory is not freed, which is why usage keeps growing as new tabs are opened and closed - it just adds up. This sounds like a huge bug in Chrome. Just as an example, I rebooted my system with only 1 window and 4 tabs open, and Task Manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab, and it made 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix? UPDATE: I just read that each plugin and extension creates a chrome.exe process; I counted 24 extensions, so that helps explain a portion of the processes. Still not sure about the memory not being freed up, though!


  • Rescuing a system running TFS that BSODs into VMware ESXi

    - by 3molo
    Hi, after moving to new facilities, one of our old Dell servers running Windows Server 2003 R2 on PowerEdge 2650 hardware BSODs with 0x8e. The server runs Team Foundation Server, so we have a few guys dependent on it. No one here knows TFS, so we have no idea how difficult it would be to set up from scratch. We have the MSSQL database(s) backed up - a recent, fresh copy. I tried removing and reseating the memory modules, but with no success. The system boots into safe mode but hangs occasionally. I booted a Linux live CD and did a dd of both C: and D:, so I have all the data in compressed images on a VMware machine. For the guest, I created a 38GB (actually it became 40GB) partition to act as C:, and booted a live CD. I then uncompressed the compressed disk image of C: and dd'd it to the new C: using 'gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M'. The operation ran for about 1000 seconds and completed successfully. I assumed it would at least try to boot Windows (but most likely BSOD due to not having the correct drivers), but the VMware ESXi guest does not seem to recognize it as a bootable disk. We don't have the VMware enterprise license, so VMware Converter cold cloning is not an option. Did I do something wrong in my dd's of the images, or why won't it even try to boot? Am I wasting my time? What other approach is there? I will continue trying to remove services and drivers to make the physical machine at least work reasonably well in safe mode. What do you suggest?
      1. Continue to get the dd'd images onto the virtual disk and get it to boot.
      2. Install a new Windows server, get Team Foundation Server, and restore from backup.
      3. Focus on the old problematic hardware.
    Any help appreciated.
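
    One possible reason the guest won't even try to boot (an assumption): dd'ing only the C: partition onto /dev/sda1 leaves the new disk without the original MBR boot code and without an active partition flag, so the BIOS has nothing to hand off to (and even after that is fixed, a 0x7B stop from the different storage controller is likely, which is a separate problem). A sketch of what to try from the live CD inside the guest:

      # mark the partition holding C: as active/bootable
      sudo fdisk /dev/sda     # use the 'a' command on partition 1, then 'w' to write
      # write generic MBR boot code that chains to the active partition
      # (the mbr package is a Debian/Ubuntu live-CD assumption)
      sudo apt-get install mbr
      sudo install-mbr /dev/sda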

