Search Results

Search found 19499 results on 780 pages for 'transaction log'.

Page 654/780 | < Previous Page | 650 651 652 653 654 655 656 657 658 659 660 661  | Next Page >

  • ssh hangs on "Last login" line

    - by Pavel H
    This happened for the first time three days ago - I ssh to the server, authenticate using a password, get the welcome message but it remains hanging on the "Last login:..." line. The command line doesn't show and the server doesn't react to my input. Other services on the server keep working ok (apache, tomcat, database, ..). The box has an out-of-band management using which I was able to restart it. After the restart the ssh worked ok again and I didn't find anything suspicious in the logs. Three days later the same problem occurs on this box again, and newly on yet another server in the cluster - 100% same symptoms. Both servers have about 2 month old installation of Debian Squeeze (6.0.2) and the problem never occurred before despite frequent ssh-ing, so it should not be a problem of settings. We haven't been installing anything new for quite some time now. I also made sure there is enough disk space on both servers. Since it started to happen all of a sudden on two servers at about the same time, I suspect some bug may have been introduced via Debian updates, yet I haven't been able to find anyone with the same problem. Most similar issues I have found: ssh freezes at the "Last Login Line" - in our case everything worked fine until recently, so nothing related to settings should be our problem. Diskspace checked, I couldn't check the memory but I would expect something would be in the logs if the system had been running out of it. Remote Fedora system unresponsive, odd but consistent behavior when trying to log in - problem with high load on the server; unlike in this case, nothing changes even if I wait for 10+ minutes

    Read the article

  • Virtual box host-only adapter configuration

    - by Xoundboy
    I have VirtualBox 4 running on Win 7 with a Centos 6 guest VM set up for hosting my dev server. When I'm connected to my home network the guest can be accessed via a static IP address that I configured (192.168.56.2), but not when I'm in the office. I'm guessing that the DHCP server in the office doesn't have a gateway configured for the 192.168.56.x IP range. I read something about the VB host-only adapter that should allow me to set this guest VM up in such a way that I don't need to be on any network to be able to access the guest from the host using a static IP. I've not been able to find out exactly how to configure this though. Can anyone give me an example configuration, thanks. UPDATE: Thanks for your responses. I've now set up a single virtual network adapter in VirtualBox and set it to host-only: C:\Users\Ben>vboxmanage list hostonlyifs Name: VirtualBox Host-Only Ethernet Adapter GUID: d419ef62-3c46-4525-ad2d-be506c90459a Dhcp: Disabled IPAddress: 192.168.56.2 NetworkMask: 255.255.255.0 IPV6Address: fe80:0000:0000:0000:78e3:b200:5af3:2a57 IPV6NetworkMaskPrefixLength: 64 HardwareAddress: 08:00:27:00:94:e8 MediumType: Ethernet Status: Up VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter On the guest I've set up eth0 to use the same IP address as the host-only adapter (192.168.56.2) but when I try to log in using Putty I still get "Network Error : connection refused". VirtualBox DHCP servier is enabled but I can't ping the gateway (192.168.56.1) from either host nor guest. There's no firewall running on either OS. What next?

    Read the article

  • "Network is unreachable" When pinging google, can connect to internal computers on debian VM

    - by musher
    Similar to this SU question: "Network is unreachable" when attempting to ping google, but internal addresses work Actually, it's pretty much the same base issue. I went through that thread trying to find a solution, I changed my resolv.conf: before: domain [my work domain] search [my work domain] nameserver [my gateway] nameserver [my gateway2] I changed it to: after: domain [my work domain] search [my work domain] nameserver 8.8.8.8 nameserver 8.8.4.4 However, any time I reboot the computer the resolv.conf gets overwritten to the previous version (the 'before' above). The issues began after I installed virtualbox additions, X server and (specifically) LXDE: Cat of apt history.log: Start-Date: 2014-08-21 10:03:42 Commandline: apt-get install virtualbox-guest-utils virtualbox-guest-dkms Install: x11-xkb-utils:amd64 (7.7+1, automatic), libxaw7:amd64 (1.0.12-2, automatic), xfonts-utils:$ End-Date: 2014-08-21 10:03:56 Start-Date: 2014-08-21 10:18:39 Commandline: apt-get install lxde Install: desktop-base:amd64 (7.0.3, automatic), libgoa-1.0-0b:amd64 (3.12.4-1, automatic), lxmenu-d$ End-Date: 2014-08-21 10:21:52 Start-Date: 2014-08-21 10:26:40 Commandline: apt-get upgrade Upgrade: libio-socket-ssl-perl:am ifconfig on the guest: root@Peridot:~# ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:89:c9:20 og inet addr:172.31.2.102 Bcast:172.31.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe89:c920/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2281 errors:0 dropped:1 overruns:0 frame:0 TX packets:463 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:266507 (260.2 KiB) TX bytes:120554 (117.7 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:240 (240.0 B) TX bytes:240 (240.0 B) The adapter in VBox is a bridged adapter directly onto my ethernet connection; as are my other 2 VMs (which work) Other SU questions I've tried: "connect: Network is unreachable" in VirtualBox VM

    Read the article

  • Rails 3 + Nginx + Passenger -- Routing index

    - by Bijan
    I have no index.html file in my public folder. My rails routes file routes this, and it works fine when I run 'rails server' on my machine. I'm trying to deploy the app. I have passenger and nginx running When I run rails server on my local machine, it works fine. But it's just trying to access static file when I try to access it on the production server. Here's my nginx conf: worker_processes 1; #pid logs/nginx.pid; events { worker_connections 1024; } http { passenger_root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.2; passenger_ruby /usr/bin/ruby; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name mmjconsult.com; root /www/mmjs/public; access_log logs/host.access.log; passenger_enabled on; } } Thank you for any help. I really appreciate it.

    Read the article

  • Matching up HomeGroup users created in different versions of Windows

    - by pdr
    In our household, we have 2 laptops and a desktop, all currently on Windows 8.1. The desktop has been upgraded in stages from Windows 7. Laptop 1 was bought with Windows 8 and has now been upgraded. Laptop 2 is new and came with Windows 8.1. We've been trying to set up a HomeGroup such that we have one user on laptop 1, one on laptop 2 and both can log into the desktop, with each user able to edit their files on their laptop and desktop, while the other user has read-only access. And we now have that situation ... except ... because the user on laptop 1 was created in Windows 8 but the same user on the desktop was created in 8.1, they have different names in Windows Explorer (say, firstuser and first_000). Likewise, the other use was created on laptop 2 in Windows 8.1 and on the desktop in Windows 7 (so let's say secon_000 and seconduser). This is ultimately confusing. Now if we expand the HomeGroup, we get four users (firstuser, first_000, seconduser and secon_000) and each has a single computer inside it. What I'd like to see is firstuser = laptop1, desktop and seconduser = laptop2, desktop. An acceptable alternative is first_000 = laptop1, desktop and secon_000 = laptop2, desktop. But what I don't want to have to do is delete firstuser and seconduser and rebuild them. Is there a better way?

    Read the article

  • DirectAdmin Centos4 server has virus

    - by Rogier21
    Hello all, I have a problem with a webserver that runs Centos4 with DirectAdmin. Since a few weeks some websites hosted on it are not redirecting on search engines properly, they are redirected to some malware site, resulting in a ban from google. Now I have used 3 virusscanners: ClamAV: Didn't find anything Bitdefender: Found a 2-3 files with JS infection, deleted them AVG: Finds lots of files, but doesn't have the option to clean! The virus that it finds is: JS/Redir JS/Dropper Still the strange thing is: website a (www.aa.com) does not have any infected files (have gone through all the files manually, is a custom PHP app, nothing special) but does still have the same virus. Website b (www.bb.com) does have the infected files as only one. I deleted all these files and suspended the account, but no luck, still the same error. I do get the log entries on the website from the searchengines so the DNS entries are not changed. But now I have gone through the httpd files but cannot find anything. Where can I start looking for this?

    Read the article

  • virtual web folder served by PHP script

    - by Martin
    I am trying to configure my apache to be able to display (virtual) pages like: mywebpage.com/something1 mywebpage.com/something2 mywebpage.com/folder/something3 I would like these "somethingX" and "folder" folders to be only virtual, not physical directories. For a start it would be great to send all requests to mywebpage to one PHP script which will somehow receive the original path information (there is some SERVER array as far as I know) and call necessary PHP functions (so far I use addresses like mywebpage.com/index.php?page=blabla&otherparameters=values...). Is that possible? I am struggling with different combination, currently I am with following file in /etc/apache2/conf.d/something.conf (not working of course). What is the correct way to proceed? Thanks. <Location /myweb> SetHandler my-handler Action my-handler /srv/www/htdocs/myweb/product.php virtual </Location> My pages are in /srv/www/htdocs/myweb. I tried with Location, with Directory, with Action and SetHandler, with AddHandler... ;-) Some configurations were ignored, some caused "object not found" with nothing relevant in error log.

    Read the article

  • PHP application failed to connect after a network plugged back in

    - by tntu
    My data-center appears to have had some issues with their network and thus my server has suffered from on an off network connectivity for about an hour. After the connection has been completely re-established my code still kept reporting the same issue over and over until I have restarted the service. The code is a simple PHP code that loops forever checking the Apple feed-back server and then sleeps for a few minutes and then it begins all over again. Now I understand the error being generated if the network is down but once it got back up why did it continue until I have restarted the code? Does PHP have something that needs to be re-initialized or something?? Messges log: Dec 20 08:57:22 server kernel: r8169: eth0: link down Dec 20 08:57:28 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:29 server kernel: r8169: eth0: link down Dec 20 08:57:33 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:33 server kernel: r8169: eth0: link down Dec 20 08:57:37 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:38 server kernel: r8169: eth0: link down Dec 20 08:57:44 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:44 server kernel: r8169: eth0: link down Dec 20 08:57:52 server kernel: r8169 0000:06:00.0: eth0: link up Dec 20 08:57:52 server kernel: r8169: eth0: link down Dec 20 09:10:58 server kernel: r8169 0000:06:00.0: eth0: link up PHP Error: PHP Warning: stream_socket_client(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/push/feedback.php on line 36 Code Line 36: $apns = stream_socket_client('ssl://feedback.sandbox.push.apple.com:2196', $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context);

    Read the article

  • How does one debug Windows network share authentication?

    - by ajs410
    I have machine0 with 32-bit Vista, logged in as a domain user, running a VMWare image of 32-bit Vista, logged in as a local user, with the VM set to bridge the network. From an administrator account (called admin) within the VM, I try to access the hidden C$ share on machine0 (i.e. start - run - "\\machine0\C$\"). I get no prompts for credentials. Worse, machine0 has an admin account (different password), and machine0\admin gets locked out when VM\admin tries to access the network share. I get a message several seconds later, which feels like a cached credential failure leading to the lockout. I have checked several places for cached credentials; net use, Stored Usernames and Passwords, mapped shares. I rebooted (both machine0 and VM) to make sure the session was clear of any cached credentials. I can force net use to use my domain credentials when accessing machine0, and then I can see the share. I can also see shares that do not require credentials. I decided to try another machine on the network (machine1), 64-bit Vista, local user. This machine has no lockout policy, and after several seconds (feels like failed cached credentials again) it prompts me for credentials. After I enter them, it re-prompts me, saying "logon unsuccessful" (tried my domain credentials, and also machine1\admin's). Which is bogus, because I proceed to log on with remote desktop using the machine1\admin credentials. I have tried this on another machine (machine2, 64-bit Vista), running a copy of the same 32-bit VM, and I don't remember having this problem. machine0 has a fingerprint reader...could that try storing passwords and interfere? Are there any places I'm missing where there could be cached credentials? Is there a way to see what credentials are flying around when I try to connect?

    Read the article

  • Uninstall Glassfish and metro completely

    - by user775829
    I thought of updating my Glassfish server from 2.1 to 3.1.1 in a Linux machine. I downloaded the .ZIP package. However during uninstalling of Glassfish v2.1 I did not find the uninstall.sh file in "bin" directory. Following are a few steps which I did... I removed the glassfish folder (rm -rf ...) After removing files in the end it gave me a notification that it could not remove 2 files used by Metro. I cant recollect those file names, but I manually deleted that folder. I made a mistake by first not uninstalling Metro. I uninstalled metro completely after that. but it seemed pointless (it uninstalled successfully :P ) I transfered the Glassfish 3.1.1 ZIP file and unzipped and configured it. FOllow are a few Problems I am facing I cannot deploy any of my WAR file. Its giving errors saying " Error creating bean,Instantiation of bean failed etc etc." (However the WAR file is getting deployed successfully in other Linux Machine) When I try installing Metro v2.1 separately, it does not show the admin console or it timesout while starting the domain. The Log File of the Domain says it has started the domain successfully and the process is also created. But after running the command (asadmin) it takes like forever and times out without showing Domain Started Successfully, There is no uninstall.sh in Glassfishv3.1.1 bin directory. How do I completely uninstall Glassfish v 3.1.1 and Metro 2.1 ??? What are the files which I will have to manually remove?

    Read the article

  • Chrome Lockups Windows 7 64-bit

    - by Mike Chess
    I'm running Google Chrome (6.0.427.0 dev) on Windows 7 Home Premium 64-bit (AMD Phenom 3.00 GHz, 8 GB RAM). The computer lockups hard after running Chrome for about five minutes. The lockup happens whether Chrome is actually being used to browse web sites or it is just idling. No programs can be started or interacted with when this happens. The computer must be power-cycled to recover. The lockup happens regardless of which web sites are being browsed. The system event logs do not show any events around the time when the lockup transpired. All other applications run just fine on this system. In fact, Chrome ran without issue for several months on this system (the system was brand new 03-2010). I also run the same version of Chrome on other computers (Windows XP SP3) without issue. I've come to really like Chrome and use it as my default browser whenever possible. What could be causing Chrome to cause the system to lockup as it does? Does Chrome have any logs that aren't part of the Windows event log? Does Chrome have a debug command line switch that might reveal more about what happens?

    Read the article

  • SQL Server 2012 Maintenance Plan can't modify

    - by Crazyd
    Click on any created Maintenance Plan: TITLE: Microsoft SQL Server Management Studio Value cannot be null. Parameter name: component (System.Design) BUTTONS:OK Create a new Plan I get this error: TITLE: Maintenance Plan Wizard Progress Saving maintenance plan failed. ADDITIONAL INFORMATION: The SaveToSQLServer method has encountered OLE DB error code 0x80004005 (Unspecified error). The SQL statement that was issued has failed. The SaveToSQLServer method has encountered OLE DB error code 0x80004005 (Unspecified error). The SQL statement that was issued has failed. BUTTONS:OK Edit an already created Backup Plan: Error 1 Error loading 'BackupDb' : The LoadFromSQLServer method has encountered OLE DB error code 0x80004005 (Unspecified error). The SQL statement that was issued has failed. . server=SERVER;package=Maintenance Plans\BackupLeadsDb; 1 1 Attemped Solutions: I've changed password for SA Account; I use Windows Authentication to log in; and I've registered C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTS.dll. Repair SQL Server 2012, Uninstall/ReInstall SQL Server 2012.

    Read the article

  • finding files that match a precise size: a multiple of 4096 bytes

    - by doub1ejack
    I have several drupal sites running on my local machine with WAMP installed (apache 2.2.17, php 5.3.4, and mysql 5.1.53). Whenever I try to visit the administrative page, the php process seems to die. From apache_error.log: [Fri Nov 09 10:43:26 2012] [notice] Parent: child process exited with status 255 -- Restarting. [Fri Nov 09 10:43:26 2012] [notice] Apache/2.2.17 (Win32) PHP/5.3.4 configured -- resuming normal operations [Fri Nov 09 10:43:26 2012] [notice] Server built: Oct 24 2010 13:33:15 [Fri Nov 09 10:43:26 2012] [notice] Parent: Created child process 9924 [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Child process is running [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Acquired the start mutex. [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting 64 worker threads. [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting thread to listen on port 80. Some research has led me to a php bug report on the '4096 byte bug'. I would like to see if I have any files whose filesize is a multiple of 4096 bytes, but I don't know how to do that. I have gitBash installed and can use most of the typical linux tools through that (find, grep, etc), but I'm not familiar enough with linux to figure it out on my own. Little help?

    Read the article

  • Active Directory FRS problems. 13508 error and other problems

    - by user59232
    I have 3 Domain Controllers. We will call them DC1, DC2 and DC3. DC3 and DC2 show Event ID 13508 in their FRS logs with no follow-up event(13509 I think) to say the error had been fixed. DC1's FRS log no matter what you do never shows any events besides FRS service stopped and started. DC1 holds the SYSVOL that needs to be replicated to the other DC's. The other DC's sysvol folders are empty. I have tried the burflag method of fixing this but I haven't had any luck. My procedure for that was to stop all FRS services on all DC's. Then set the burflag on DC1 to D4 and the other two DCs burflag to D2. Started FRS on DC1 and the only event's I see in DC1's FRS event logs are service stopped and service started messages. This fact is leading me to believe that something is wrong on FRS for DC1. I believe there should be events 13553 and 13516 in the FRS event logs after an authoritative sysvol restore. The other two DC's do not have anything in their SYSVOL, otherwise I would have made one of them the authoritative sysvol. DC1 is MS Server 2003 Enterprise Edition SP2 DC2 is MS Server 2003 Standard Edition SP1 DC3 is MS Server 2003 R2 Standard Edition SP2 I did not setup this domain originally but I am now the administrator of it, so I don't have a lot of background on why certain things may have been done in the past. My main goal is to try and fix these issues to get myself better prepared to decommision DC1 and add a DC running Server 2008 to my domain. Thanks.

    Read the article

  • Why am I experiencing random connection timeouts? (CentOS)

    - by Ryan
    I have a CentOS server setup that currently hosts several websites (all relative of each other in some form or another). As of recently throughout the day at the most random times the website speed will lag to a crawl and eventually hit a connection timeout. When I say random times this typically happens anywhere between 10am and 1pm usually, however, this morning this happened to me at 8am. I do not have a lot of familiarity with server knowledge as far as what I am looking for in this situation. What are some possible causes of why my server is slowing the websites down to a complete crawl or timing out? Are there specific things I should be checking for when this happens? I have noticed using: tail /var/log/httpd/access_log That usually when this down time occurs there are lot of IP addresses related to BingBot, Googlebot, and sometimes various bots or spiders that I am unfamiliar with. Could this be related and if so how can I avoid this from causing my websites to lag out? Thanks in advance for any help or advice. The websites that are timing out are built with PHP and use a MySQL database to display information.

    Read the article

  • lenovo x1 carbon windows 8 frequent wifi disconnect issue

    - by hIpPy
    I'm having frequent wifi disconnects on my Lenovo X1 Carbon Touch laptop. I got this laptop 2 months back and it has been happening ever since about 3-5 times a day and 10 times a week on average. I've Frontier Fios internet. Power connected or not does not matter. Once I get disconnected, I try below to connect again in that order: turn Airplane mode on and off, troubleshoot network problems windows troubleshooter), restart the laptop I'd find that the WiFi adapter would get disabled and sometimes windows troubleshooting would help but more than often I'd end up restarting the laptop. A week back, I upgraded my wifi network adapter drivers (now Intel, version 15.5.6.48, 10/3/2012). I still get disconnected frequently but turning Airplane mode on and off gets me connected again. So the driver update did help. Windows 8 is updated. None of the other devices (nexus, iphone phones, nexus7, ipad tablets) would have wifi issues when my laptop would get disconnected. config: Intel(R) Centrino(R) Advanced-N 6205 (WiFi network adapter) Microsoft Windows 8 Pro Microsoft Windows [Version 6.2.9200] x64-based PC LENOVO System Model: 3443CTO X1 Carbon Touch I recently noticed this log message When I got disconnected in event viewer: Your computer was not assigned an address from the network (by the DHCP Server) for the Network Card with network address 0x[XXXXXXXXXXXX]. The following error occurred: 0x79. Your computer will continue to try and obtain an address on its own from the network address (DHCP) server. Any idea?

    Read the article

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following: location /admin/ { alias /usr/share/php/wtlib_4/apps/admin/; location ~* .*\.php$ { try_files $uri $uri/ @php_admin; } location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ { expires 7d; access_log off; } } location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ { alias /usr/share/php/wtlib_4/modules/$1/admin/$2; } location ~ ^/admin/modules/([^/]+)(.*)$ { try_files $uri @php_admin_modules; } location @php_admin { if ($fastcgi_script_name ~ /admin(/.*\.php)$) { set $valid_fastcgi_script_name $1; } fastcgi_pass $byr_pass; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; } location @php_admin_modules { if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) { set $byr_module $1; set $byr_rest $2; } fastcgi_pass $byr_pass; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; } Following is the requested url which ends up with "404": http://www.{domainname}.com/admin/modules/cms/styles/cms.css Following is the error log: [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com" Following urls works fine: http://www.{domainname}.com/admin/modules/store/?a=manage http://www.{domainname}.com/admin/modules/cms/?a=cms.load Can anyone see what the problem could be? Thanks. PS. I am trying to migrate existing sites from apache to nginx.

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with django on a micro ec2 instance with 613mb memory and as such have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat" as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py: import djcelery djcelery.setup_loader() BROKER_BACKEND = 'djkombu.transport.DatabaseTransport' CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler' CELERY_RESULT_BACKEND = 'database' BROKER_POOL_LIMIT = 2 CELERYD_CONCURRENCY = 1 CELERY_DISABLE_RATE_LIMITS = True CELERYD_MAX_TASKS_PER_CHILD = 20 CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60 CELERYD_TASK_TIME_LIMIT = 6 * 60 Here's the details via top: PID USER NI CPU% VIRT SHR RES MEM% Command 1065 wuser 10 0.0 283M 4548 85m 14.3 python manage_prod.py celeryd --beat 1025 wuser 10 1.0 577M 6368 67m 11.2 python manage_prod.py celeryd --beat 1071 wuser 10 0.0 578M 2384 62m 10.6 python manage_prod.py celeryd --beat That's about 214mb of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;) Update: here's my upstart config: description "Celery Daemon" start on (net-device-up and local-filesystems) stop on runlevel [016] nice 10 respawn respawn limit 5 10 chdir /home/wuser/wuser/ env CELERYD_OPTS=--concurrency=1 exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log Update 2: I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup. root 34580 1556 sudo -u wuser -H /usr/bin/python manage_prod.py celeryd wuser 577M 67548 +- python manage_prod.py celeryd --beat --concurrency=1 wuser 578M 63784 +- python manage_prod.py celeryd --beat --concurrency=1 wuser 271M 76260 +- python manage_prod.py celeryd --beat --concurrency=1

    Read the article

  • Which program is locking all my executable files?

    - by Tom Wijsman
    When updating any software product, as well as manually trying to replace .exe files, it says that access is denied to the file and in fact the System process is holding a handle to the file when I check it with Process Explorer. This must be a driver or something that is malfunctioning was my first though, but now I wonder how I figure out which driver / program is doing this and why it is so. Unlocker doesn't seem to be working for me, unless someone can tell me how to use it properly other than making it appear a magical wand in the notification area.... This is what Unlocker puts in my event log: The description for Event ID 1060 from source Application Popup cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: \??\C:\Program Files (x86)\Unlocker\UnlockerDriver5.sys the message resource is present but the message is not found in the string/message table Upon searching event 1060 I get: <file name> has been blocked from loading due to incompatibility with this system. Perhaps it is because I have 64 bit?

    Read the article

  • How to install Red Hat Enterprise Linux on Apple Macbook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one year old Mac Book Pro that I am trying to get RHEL 5.4 installed on via bootcamp. No matter what I do I can't get the installer to boot. I have tried multiple DVD's and even verified the install works on a new Mac Book Pro. Most of the time the installer simply locks up. I usually use Linux text with all-generic-ide on the boot line. I removed the ide parameter and I just used linux text. The results I get are that a bunch of kernel messages appear then the background turns blue and a thin text box pops up saying its loading ata..... something it disappears too fast for me to read. Then the machine freezes. I pressed the alt function keys to see if I could look at the system log, here is what it says: Alt-f3 says "trying to mount CD device hda" Alt-f4 says status error: hda: lastFailedSense Hda: Failed opcode was: unknown Hda: Lost interrupt Hda: Drive not ready for command Ide-cd: command 0x3 timed out Above this junk it looks like it found the partition because it knew it was 20 GB and listed as /dev/sda3. I think it has something to do with the CD drive, is that possible? Thanks again for the support. PS I posted in the apple support forums ( Apple.com Support Discussions Boot Camp Installation and Storage) and didn't get an answer.

    Read the article

  • Mac OS X - Time Machine backup fails verification - What can I do to save the history?

    - by usermac75
    Hi, How do I make Time Machine to make a new complete backup without losing older versions of backed up files? Verbose: I am using the Time Machine backup on my OS X (Snow Leopard) to backup the whole computer to an external drive. I especially like the "history", i.e. the feature that allows you to restore the older version of a file. Problem: I had some data corruption on my external backup drive, I repaired it with the System Tool for doing that, it found some faults. I had the disk tool repair the external drive. After that, the external drive was OK and I could use Time Machine again. I let Time Machine do one more backup. Now I made a verification according to http://superuser.com/questions/47628/verifying-time-machine-backups, namely along sudo diff -qr . $HOME/Desktop 2>&1 | tee $HOME/timemachine-diff.log However: After doing the command above, several differences and missing files were reported, approx. 200 files in sum. Whereas some of the missing files were cache or excluded directories, the differences do bother me, especially as some important documents from me are listed as differing. How can I make sure that the data on the external drive is synced correctly? Is it possible to have Time Machine to do a complete new backup without losing the history? Or to have Time Machine compare all files for differences and re-write all files that are different? Or can I set some flags on the files that do not match to have them copied again? (like the archive-flag in Windows/Dos). I'd rather not touch the files because I would like to keep the date of last change/date of creation) Thank you for your thoughts!

    Read the article

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf for Apache config below, sanitized of course) ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin" <Directory "/usr/local/nagios/sbin"> #SSLRequireSSL Options ExecCGI AllowOverride none AuthType Basic AuthName "LDAP Authentication" AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE AuthzLDAPAuthoritative off AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller" AuthLDAPBindPassword "myPassword" require valid-user </Directory> Alias /nagios "/usr/local/nagios/share" <Directory /usr/local/nagios/share> #SSLRequireSSL Options None AllowOverride none AuthBasicProvider ldap AuthType Basic AuthName "LDAP Authentication" AuthzLDAPAuthoritative off AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller" AuthLDAPBindPassword "myPassword" require valid-user </Directory> Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else, it prompts you for authentication, fails (asking for a re-prompt), and gives this error message: [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php I'm almost certain its a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance or help would be greatly appreciated.

    Read the article

  • Error authenticating git repository with Redmine

    - by woni
    I've setup Redmine 2.1 on my Debian Squeeze server following this Tutorial HowTo configure Redmine for advanced git integration (I tried to use the grack path). Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says: error: The requested URL returned error: 500 while accessing The apache error.log shows this entry: [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs It also asks me for user and password when cloning, but it shouldn't if I understand the tutorial right. I'm using the Redmine authentication module: <VirtualHost *:80> ServerName my.server.at DocumentRoot "/var/www/my.server.at/public" PerlLoadModule Apache::Redmine <Directory "/var/www/my.server.at/public"> Options None AllowOverride None Order allow,deny Allow from all </Directory> SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER" SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/ SetEnv GIT_HTTP_EXPORT_ALL ScriptAlias /git/ /usr/lib/git-core/git-http-backend <Location /> Order allow,deny Allow from all AuthType Basic AuthName Git Require valid-user AuthBasicAuthoritative Off AuthUserFile /dev/null AuthGroupFile /dev/null PerlAccessHandler Apache::Authn::Redmine::access_handler PerlAuthenHandler Apache::Authn::Redmine::authen_handler RedmineDSN "DBI:mysql:database=redmine;host=localhost" RedmineDbUser "user" RedmineDbPass "password" RedmineGitSmartHttp yes </Location> </VirtualHost> Can someone help me please and explain the error and what I can do to solve my problem?

    Read the article

  • Getting a "CPU over temperature error", but temperatures seem to be normal

    - by Luis Parker
    I built a PC with the following parts: CPU: i5-2500 Motherboard: Asus P8H67-M EVO rev 3.0 RAM: G-Skill Ripjaws Series 8GB (2 x 4GB) DDR3 1333 Video: GTX 560-Ti 1GB I used a crappy (but functioning) old case and a 500W Powercooler supply. The rig also includes 3 HDs and a DVD-RW. Whenever I push the system mildly it resets right away (looks more like power off/power on) and gives me a "CPU over temperature error". However, BIOS always reports <65ºC for the CPU at the moment of the reset, same as Core Temp and RealTemp. This is RealTemp's log since launching BF3 until the reset (just over 1 minute, as you can see) Just to be sure I've checked the CPU cooler and re-applied the thermal paste twice, but nothing changed. I'm not overclocking at all. What am I missing here? Could it be that the old power supply is generating this error? Maybe the mobo isn't reporting temperatures correctly? I don't have a clue on how to troubleshoot this, help Thanks in advance!

    Read the article

  • How to handle OpenVPN client as a service, when the laptop is physically on the network already?

    - by James
    The Setup I've gotten OpenVPN working on our Windows XP laptops. Users are limited, so I went ahead and set OpenVPN client to run as a service, which is great anyway because that means they are on the VPN before logging in, so login scripts work, plus we can do remote support even if the user can not log in (such as connecting via VNC or resetting passwords). It is also configured to send all traffic over the tunnel, so when, for example, they browse the internet it is just like browsing from our corporate network. The Qestion(s) So, I'm wondering how does the OpenVPN client act when the computer is already physically on the same network as the OpenVPN server? Right now, the client is configured to connect the the public dns name which will resolve to the public ip address which will NOT get reflected back to the OpenVPN server, so it is affectively blocked from connecting to the OpenVPN server while on the network. Is that a good thing? Or will it constantly try to connect, using up system resources and network resources? We will likely have hundreds of laptops regularly on the physical network with this, so it could contribute to a lot of unnecessary network chatter. Alternatively Would it be better to have the firewall reflect the port back to the OpenVPN server and let it connect? Or have our internal dns resolve the name to the private ip and allow them to connect directly? Would traffic then go over the vpn connection (which I do not want, when already on the physical network)? Or is it possible to tell it to ignore the connection when the client and server are already on the same network? TLDR What's a sane way of handling OpenVPN client running as an always-on service when the client and server will often be on the same network?

    Read the article

< Previous Page | 650 651 652 653 654 655 656 657 658 659 660 661  | Next Page >