Search Results

Search found 5286 results on 212 pages for 'logs'.

  • How can I display additional boot and shutdown information on the Windows 7 welcome screen?

    - by Daniel Saner
    There is a small tweak, I believe it is a registry key, that allows Windows to display additional information on the Welcome and Shutting down screens of Windows 7 (and most likely Vista, too). I have activated this tweak on one of my systems; unfortunately I forgot how I did it, and I can't seem to find the website that originally gave me that information. Usually, the Windows 7 welcome screen will just display "Welcome" when logging in. With the tweak activated, my Welcome screen gives status information such as "Loading user settings" or "Preparing desktop". When shutting down, the default screen simply says "Shutting down". With the tweak activated, it gives additional status information such as "Stopping Windows services". This appears in the same way that Windows shows information when updates are installed or configured during the startup or shutdown procedure, and I find it quite helpful for getting a feel for how long each task takes during that process. The only setting I was able to find is the Boot log checkbox on the Boot tab of the msconfig application. However, this results in Windows displaying console logs of the drivers it is loading, etc., instead of the animated Windows title. This is NOT the setting I am looking for. The "additional boot information" setting that I have activated on this system still displays the regular animated Windows logo, and only replaces the strings displayed on the blue Welcome and Shutdown screens. Could someone direct me to the registry key (or whatever setting) that is used to get this behaviour? Edit: Here are a few pictures of the enhanced Welcome and Shutdown screens taken with my mobile phone - they're in German though. Login screens: "Waiting for User Profile Service" and "Preparing desktop". Logout screen: "Stopping Windows services".

  • With nginx, having the base URL rewrite to HTTPS

    - by jchysk
    I'd like only my base domain, www.domain.com, to be rewritten to https://www.domain.com. By default, my https block reroutes to http:// if the request is not for the base domain ($uri = "/") or static content:

        server {
            listen 443;
            set $ssltoggle 2;
            if ($uri ~ ^/(img|js|css|static)/) { set $ssltoggle 1; }
            if ($uri = '/') { set $ssltoggle 1; }
            if ($ssltoggle != 1) { rewrite ^(.*)$ http://$server_name$1 permanent; }
        }

    So in my http block I need to do the rewrite to https:

        server {
            listen 80;
            if ($uri = '/') { set $ssltoggle 1; }
            if ($ssltoggle = 1) { rewrite ^(.*)$ https://$server_name$1 permanent; }
        }

    If I don't have the $uri = '/' if-statement in the http block, then https works fine when I go to it directly, but I won't get redirected if I go to regular http, which is expected. If I do put that if-statement in the http block, then everything stops working within minutes. It might work for a few requests, but it always stops within a minute or so, and in browsers I just get a blank page for all requests. If I restart nginx, it continues not to work until I remove both if-statement blocks (in both the https and http blocks) and restart nginx again. When I look in the error logs I don't see anything logged. When I look in the access log I see this message: "-" 400 0 "-" "-", which I assume means a 400 error. I don't understand why this doesn't work for me. My end goal is to have the base domain be https-only while all other pages default to http. How can I achieve this?

  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a CentOS 5.7 64-bit machine with 24 GB of RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and Xcache as extra applications. About 3 weeks ago the memory behavior on this machine changed and I cannot explain why. There are no crons running which flush anything like this, and there are no large numbers of files being deleted or changed during these drops. The 'cached' memory gets dropped about every few hours, but it's never a set gap between flushes, which indicates to me that some bottleneck gets reached instead. It also always seems to happen when total memory usage gets to about 18 GB, but again, not always exactly 18 GB. This is a graph of my memory usage: as you can see in the graph, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped. Running vmstat -m, I captured the memory usage just before and just after a memory drop. The output is here: http://pastebin.com/diff.php?i=hJqZqztm ('old version' being before, 'new version' being after a drop). About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine, this odd behavior started. I have checked a bunch of logs and restarted the machine again, and cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behavior? Clearly my RAM isn't full, so I am curious why this could be happening.
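    For anyone trying to pin down exactly when these drops happen, the following is a minimal Python sketch (not from the question; the 60-second interval and the choice of fields are arbitrary) that samples /proc/meminfo and prints the cache and buffer figures over time, so a drop can be matched against cron runs, traffic spikes or other events:

        #!/usr/bin/env python3
        # Sample /proc/meminfo once a minute and print the fields relevant to the
        # question; values are printed in kB, as the kernel reports them.
        import time

        FIELDS = ("MemTotal", "MemFree", "Buffers", "Cached")

        def snapshot():
            values = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    if key in FIELDS:
                        values[key] = int(rest.strip().split()[0])  # value in kB
            return values

        if __name__ == "__main__":
            while True:
                snap = snapshot()
                print(time.strftime("%Y-%m-%d %H:%M:%S"),
                      " ".join("%s=%d" % (k, snap.get(k, 0)) for k in FIELDS))
                time.sleep(60)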

  • Managing per-user rc.d init scripts

    - by Steve Schnepp
    I want to delegate SysV init scripts to each user. As with SysV init, each item in ${HOME}/rc.d starting with S will be launched on server start-up with the start argument; the same goes for server shut-down with the ones starting with K, which get the stop argument. I thought about scripting it myself, but maybe there is already some kind of implementation out there [1]. In summary, it would be a script in /etc/init.d/ that iterates through all the users and launches run-parts as the user on the relevant scripts. The platform here is Linux (Debian flavour), but I think the solution would be quite portable among various Unix-like platforms. Update: the point here is for users to be able to create their own init scripts that are launched on their behalf when the system boots up. As Dan Carley pointed out, the services won't be able to access any system asset (privileged ports, system logs, ...). [1] This way I don't have to think that much about all the subtle security implications, such as script timeouts for example...
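    For illustration only, here is a rough Python sketch of the kind of /etc/init.d wrapper described above. It assumes home directories live under /home and invokes each matching script directly through su rather than using run-parts; it is a starting point, not a hardened implementation (no timeouts, no logging):

        #!/usr/bin/env python3
        # Run each user's ~/rc.d scripts at boot ("start" runs S*) or shutdown
        # ("stop" runs K*), on that user's behalf so they hold no system privileges.
        import glob
        import os
        import subprocess
        import sys

        def run_user_rcd(action):
            prefix = "S" if action == "start" else "K"
            for rcdir in sorted(glob.glob("/home/*/rc.d")):   # assumed layout
                user = rcdir.split("/")[2]
                for script in sorted(glob.glob(os.path.join(rcdir, prefix + "*"))):
                    if not os.access(script, os.X_OK):
                        continue
                    subprocess.call(["su", "-", user, "-c", "%s %s" % (script, action)])

        if __name__ == "__main__":
            if len(sys.argv) != 2 or sys.argv[1] not in ("start", "stop"):
                sys.exit("usage: %s start|stop" % sys.argv[0])
            run_user_rcd(sys.argv[1])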

  • mod_rpaf with Apache error_log

    - by Camden S.
    I'm using mod_rpaf with Apache 2.4 and it's working properly (showing the real client IPs) in my Apache access_log, but not in my error_log. The error_log only shows the client IP address of the proxy server (my load balancer in this case). Here's an example of what I see in my error_log, where 123.123.123.123 is the IP of my load balancer/proxy:

        == /usr/local/apache2/logs/error_log <==
        [Tue Jun 05 20:24:31.027525 2012] [access_compat:error] [pid 9145:tid 140485731845888] [client 123.123.123.123:20396] AH01797: client denied by server configuration: /wwwroot/private/secret.pdf

    The exact same request produces the following in my access_log, where 456.456.456.456 is a real client IP (not the IP of the load balancer):

        456.456.456.456 - - [05/Jun/2012:20:24:31 +0000] "GET /wwwroot/private/secret.pdf HTTP/1.1" 403 228 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:12.0) Gecko/20100101 Firefox/12.0"

    Here's my httpd.conf entry:

        # RPAF
        LoadModule rpaf_module modules/mod_rpaf-2.0.so
        RPAFenable On
        RPAFproxy_ips 127.0.0.1 123.123.123.123
        RPAFsethostname On
        RPAFheader X-Forwarded-For

    What do I need to do to get the real IP addresses showing in my Apache error_log?

  • Postfix qmgr process causes heavy overload on mailservers

    - by Mattias
    We are using Postfix as the MTA for our e-mail marketing software, and once in a while we see the load on one of the mail servers rise above 5. The load is caused by the qmgr process, which is the heart of Postfix, and I can see that it is consuming a lot of CPU resources. The process seems to be stuck, because after 15 minutes it is still doing the same thing and still increasing the load. Once I restart the postfix service, the load rapidly decreases to below 1 and Postfix continues to send e-mails without any problems. I'm wondering if anyone else has encountered this problem and if people have suggestions on how to prevent it. The problem shows up on all our mail servers, but almost never on more than one at a time. It seems to be triggered only when we are sending a mailing, but the size (10 or 100,000 e-mails) doesn't seem to make a difference. It happens maybe once a week or even less often, and the time and day are different every time. We tried to solve the problem by decreasing the number of messages qmgr is allowed to process, but this didn't solve it. We are using Postfix 2.5.5 on Debian Lenny 5.0.8 (Postfix is installed through the default Debian repository). No special messages can be found in the logs (syslog, messages, mail.*). Thank you for your time.
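    As a stopgap while the root cause is unknown, one option is simply to watch for the stuck qmgr before the load climbs. A hedged Python sketch (it relies on the third-party psutil package, and the 80% threshold and polling intervals are arbitrary assumptions):

        #!/usr/bin/env python3
        # Poll the Postfix qmgr process(es) and report when one stays busy on the CPU,
        # so the stuck state can be inspected (or the service restarted) early.
        import time
        import psutil

        THRESHOLD = 80.0  # percent CPU, arbitrary

        if __name__ == "__main__":
            while True:
                for proc in psutil.process_iter(["name"]):
                    try:
                        if proc.info["name"] != "qmgr":
                            continue
                        cpu = proc.cpu_percent(interval=1.0)
                        if cpu > THRESHOLD:
                            print(time.strftime("%H:%M:%S"),
                                  "qmgr pid %d at %.0f%% CPU" % (proc.pid, cpu))
                    except psutil.NoSuchProcess:
                        continue
                time.sleep(30)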

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question that it was a single machine running both the IIS and SQL instances. The application trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The connection string being used in the web.config for the application is:

        data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    And the application pool is set to NetworkService for Identity. So, in the web app I get the following error:

        Cannot open database "MyDatabase" requested by the login. The login failed.
        Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

        Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

        USE [MyDatabase]
        GO
        SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
        FROM sys.database_principals SDP
        INNER JOIN sys.database_role_members SDRM ON SDP.principal_id = SDRM.member_principal_id
        INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    gets me this result:

        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DDLADMIN
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAREADER
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAWRITER

    which appears to indicate that I've got the permissions right. Does anyone have any idea why it's not working, or how I can narrow the issue down some more?

  • What steps should I take to debug this non-starting HVM virtual machine?

    - by Ophidian
    I have a dom0 machine running CentOS 5.4 with all the latest updates, using Xen as my hypervisor. I am using Xen in part because this machine was set up prior to KVM being included in RHEL, and in part because KVM's network bridging configuration is not nearly as simple as Xen's. The dom0 machine is headless and I do all of my VM management via virsh from the command line. I have two HVM domUs:

    - A web server running CentOS 5.4
    - A mail server running Gentoo

    Both VMs are backed by LVs on the dom0 but do not use LVM in the domU. Both have virtually identical libvirt configurations (differing by expected things like name, UUID, NIC MAC, VNC port, etc). The web server domU (WSdomU hereafter) does not start since applying the most recent kernel update (kernel-xen-2.6.18-164.15.1.el5.x86_64 and kernel-2.6.18-164.15.1.el5.x86_64 for the dom0 and WSdomU respectively). By 'not start' I mean it appears to be running, but it does not use any CPU cycles, does not bring up a graphical console, and does not respond on the network. The WSdomU is listed as "no state" rather than the normal "running" or "blocked" in xentop. The mail server domU starts fine and functions normally. Here are the steps I have taken so far that did not solve the problem:

    - Reboot the dom0 to see if things come up on their own
    - Check xen dmesg on the dom0
    - Check the xend logs (a cursory viewing did not show anything blatant; specific suggestions of things to look for would be appreciated)
    - Attempt to connect to the WSdomU's graphical (VNC) console from the dom0
    - Shut down the mail server domU and attempt to start the WSdomU
    - Check the SELinux labels on the backing LVs (they're the same)
    - Set SELinux to permissive and attempt to start the WSdomU
    - Use virsh edit to try tweaking the WSdomU config
    - virsh undefine, reboot, virsh define the WSdomU config
    - dd the WSdomU LV to an .img file, copy it to my Fedora desktop and run it under KVM (works fine)

    What steps should I take next to debug this? I will edit in any additional configuration requested in the comments.

  • Win 2003 SBS - secure enough by default?

    - by Pekka
    I have to set up a Windows 2003 Small Business Server to work as a Subversion repository and possibly as an e-mail server later. The machine is a virtual one, hosted with a hosting company, and freshly initialized. I used the Security Configuration Wizard to deactivate all server roles. After I install Subversion, I will open the necessary ports for the service; in addition, obviously, RDP will stay open so I can remote-control the machine. Automatic updates are activated, and I will set up e-mail notification every time somebody logs on to the server. I'm a programmer and not a professional systems administrator, so I would like to know whether you would regard this as a sane and secure setup for a (publicly available) box to host sensitive code and/or e-mail on. Is there anything in addition I should do to make the machine secure? Is there anything I can do on a long-term basis to keep the machine secure, apart from monitoring the event log (as far as I can make sense of it) and seeing that any hotfixes are installed properly?

  • Domain workstation acting up and I can't track it down.

    - by DevNULL
    I have a developer with a Windows XP (SP2) 64-bit machine. If the machine is left on overnight (or for any period longer than 5-6 hours), it takes 2-3 minutes to open any local drive and his network drives are no longer accessible. Here's what the system logs report. Any help would be appreciated. BTW: the problem just started a week ago and nothing has changed on the domain controller / AD or on his machine.

    ERROR 1:

        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 5719
        Date: 6/8/2010
        Time: 9:17:26 AM
        User: N/A
        Computer: BFC1
        Description: This computer was not able to set up a secure session with a domain controller in domain UR due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.
        ADDITIONAL INFO: If this computer is a domain controller for the specified domain, it sets up the secure session to the primary domain controller emulator in the specified domain. Otherwise, this computer sets up the secure session to any domain controller in the specified domain. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: 5e 00 00 c0 ^..A

    ERROR 2:

        The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {555F3418-D99E-4E51-800A-6E89CFD8B1D7} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19). This security permission can be modified using the Component Services administrative tool. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    ERROR 3:

        Event Type: Error
        Event Source: RemoteAccess
        Event Category: None
        Event ID: 20106
        Date: 6/8/2010
        Time: 10:12:18 AM
        User: N/A
        Computer: BFC1
        Description: Unable to add the interface {E76F0A78-7A0B-4EBB-A081-BA3BD452FC4C} with the Router Manager for the IP protocol. The following error occurred: Cannot complete this function. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: eb 03 00 00 e...

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes, which exports storage via several NFS shares for a handful of Linux and Solaris clients. There is a Linux system that has one of these NFS filesystems mounted, and I/O to this filesystem from that system is moderately heavy. Every 3-4 weeks (it's not on any kind of discernible schedule, and is sometimes more or less frequent than this), we notice that all activity ceases on this NFS mount (the process hangs, as if the network stopped working, so the process is stuck in uninterruptible sleep) - 30 minutes later, the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:

        Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
        Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK

    The relevant /etc/fstab line:

        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0

    I've checked to see if there are any scheduled processes (e.g. cron jobs) or Isilon-related functions (e.g. snapshots) that might be causing these hangups, but I can't seem to find anything. I'm also not aware of any network-related issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid the problem of processes accessing the filesystem hanging; however, I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing (ARR) and was hoping someone might be able to help. The problem is that only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail. My current server setup looks like the following:

    ARR:
    - Two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2
    - Anonymous authentication turned on for the Default Web Site

    Web farm:
    - Two servers running IIS 7.5 on Windows 2008 R2
    - Three web sites set up using port binding to differentiate between virtual hosts; the ports being used are 8000, 8001, and 8002
    - Application pools for Windows Authentication all use a common domain account
    - SPNs added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>

    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:

        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
        2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218

    Does anyone know what might cause this problem and how I can resolve it?

  • How to diagnose remote assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and a work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago this stopped working and I don't know why. We're both running the latest versions of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work and MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this? More info: I just tried direct Remote Desktop between the work PC and the home PC and it works fine - so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely: with Remote Desktop the person at the other end gets booted off and can't see, while with Remote Assistance they can follow along step by step. Some comments below suggest using other solutions, which is fine and they do work, but there must be a way to diagnose RA and get it working. Experimenting with this some more, the notebook that I was using at work today, which refused to connect, works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that 3389 is open on the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if the client and server are both behind non-UPnP NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled but my home one does. I've also scoured the event logs on both ends - nothing noteworthy (unless I'm looking in the wrong spot). Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA - it's just bugging me. The question is only about Remote Assistance; there is no need to propose solutions based on other programs. [/edit by Gnoupi]

  • Redis as a substitute for Memcache

    - by Boban P.
    We have a distributed web app and for now, as the session handler, we use two separate instances of memcache in redundancy, so everything that is written to one memcache is also written to the other. Memcache is fairly easy to install, use, and maintain, but we have one problem: if one memcache instance fails, everything is fine - PHP communicates with the other instance, which has all the data (although half of the connections have a delay because they try to use the failed one, wait a little, and then contact the other memcache). When the failed instance comes back to life again, it starts up empty. If an established session requests data from that instance, the session fails and the user gets logged out, and that happens to half of the users. So, we are thinking about switching to Redis for session handling, and maybe keeping memcache for caching only. My questions are:

    1. If we set up Redis instances as master-slave, and the master fails, can Sentinel promote the slave as the new master? And when the old master comes back to life, will it stay a slave or not?
    2. Does Redis call malloc at startup to allocate part of memory, like memcache or Varnish, or does it call malloc for every key inserted? And what are the pros and cons of that?
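    Regarding the first question, the usual pattern is to let clients discover the current master through Sentinel instead of hard-coding it. Purely as an illustration (this is a Python sketch rather than PHP, and the host names, ports and the "mymaster" service name are placeholders):

        # Ask Sentinel for the current master before operating; after a failover
        # the same call transparently returns the newly promoted node.
        from redis.sentinel import Sentinel

        sentinel = Sentinel([("sentinel1", 26379), ("sentinel2", 26379)],
                            socket_timeout=0.5)

        master = sentinel.master_for("mymaster", socket_timeout=0.5)
        master.setex("session:abc123", 3600, "serialized-session-data")

        # Reads that can tolerate slight staleness may be sent to a slave instead.
        slave = sentinel.slave_for("mymaster", socket_timeout=0.5)
        print(slave.get("session:abc123"))

    When a failed master rejoins, Sentinel reconfigures it as a slave of the new master, so it does not come back as the authoritative (but empty) copy.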

  • How can I enable pid and ppid fields in psacct dump-acct?

    - by annavt
    I am currently using the psacct package on CentOS to perform accounting on processes run by users. The info file suggests that it is possible to output pid and ppid depending on what information your operating system provides in its struct acct. pid and ppid are listed in /usr/include/linux/acct.h on my system:

        struct acct_v3
        {
            char    ac_flag;       /* Flags */
            char    ac_version;    /* Always set to ACCT_VERSION */
            __u16   ac_tty;        /* Control Terminal */
            __u32   ac_exitcode;   /* Exitcode */
            __u32   ac_uid;        /* Real User ID */
            __u32   ac_gid;        /* Real Group ID */
            __u32   ac_pid;        /* Process ID */
            __u32   ac_ppid;       /* Parent Process ID */
            ...

    But pid and ppid are not output when I run dump-acct:

        # dump-acct /var/account/pacct.1 | tail
        awk             |  0.0|  0.0| 81.0|    0|    0|8792.0|Thu Nov 24 04:03:04 2011
        tmpwatch        |  0.0|  0.0|  1.0|    0|    0|3816.0|Thu Nov 24 04:03:04 2011
        cups            |  0.0|  0.0|  4.0|    0|    0|8728.0|Thu Nov 24 04:03:04 2011
        awk             |  0.0|  0.0|  4.0|    0|    0|8792.0|Thu Nov 24 04:03:04 2011
        runlevel        |  0.0|  0.0|  0.0|    0|    0|3804.0|Thu Nov 24 04:03:04 2011
        chkconfig       |  0.0|  0.0|  0.0|    0|    0|3840.0|Thu Nov 24 04:03:04 2011
        inn-cron-expire |  0.0|  0.0|  0.0|    0|    0|8728.0|Thu Nov 24 04:03:04 2011
        awk             |  0.0|  0.0|  0.0|    0|    0|8792.0|Thu Nov 24 04:03:04 2011
        gzip            |  5.0|  0.0|  9.0|    0|    0|4044.0|Thu Nov 24 04:03:04 2011
        accton          |  0.0|  0.0|  1.0|    0|    0|   0.0|Thu Nov 24 04:03:04 2011

    Is it likely that there is no support in my kernel for this feature, or that my psacct version does not support it? How can I add pid and ppid to my accounting logs?

        CentOS release 5.6
        Kernel 2.6.18-238.19.1.el5
        psacct 6.3.2

    Thanks in advance, Anna
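    As a workaround while dump-acct doesn't show these columns, the fields can be read straight out of the raw pacct file. A rough Python sketch based on the struct acct_v3 layout quoted above - the 64-byte record size, little-endian byte order, and the assumption that the file really is in the v3 format are guesses about this particular kernel, so treat it as illustrative only:

        #!/usr/bin/env python3
        # Decode the leading fields of each struct acct_v3 record plus the trailing
        # 16-byte command name, and print uid/pid/ppid per process.
        import struct
        import sys

        RECORD_SIZE = 64                      # assumed sizeof(struct acct_v3)
        HEADER = struct.Struct("<BBHIIIII")   # flag, version, tty, exitcode, uid, gid, pid, ppid

        def dump(path):
            with open(path, "rb") as f:
                while True:
                    record = f.read(RECORD_SIZE)
                    if len(record) < RECORD_SIZE:
                        break
                    (flag, version, tty, exitcode,
                     uid, gid, pid, ppid) = HEADER.unpack_from(record)
                    comm = record[-16:].split(b"\0", 1)[0].decode(errors="replace")
                    print("%-16s uid=%-6d pid=%-6d ppid=%-6d" % (comm, uid, pid, ppid))

        if __name__ == "__main__":
            dump(sys.argv[1] if len(sys.argv) > 1 else "/var/account/pacct")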

  • Why do lighttpd and fastcgi keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config:

        server.modules = (
            "mod_compress", "mod_access", "mod_alias", "mod_rewrite", "mod_redirect",
            "mod_secdownload", "mod_h264_streaming", "mod_flv_streaming",
            "mod_accesslog", "mod_auth", "mod_status", "mod_expire", "mod_fastcgi"
        )
        [...]
        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
            "max-procs" => 1,
            "kill-signal" => 9,
            "idle-timeout" => 10,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "200",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "/pyapps/essai/blondes.fcgi" => (
                "main" => (
                    "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
                ),
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )))
        [...]
        $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" {
            server.document-root = "/home/cam/web/"
            accesslog.filename = "/home/cam/log/access.log"
            server.errorlog = "/home/cam/log/error.log"
            server.follow-symlink = "enable"
            # files to check for if .../ is requested
            server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb")
            url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" )
        }

    I have the following dir tree:

        /home/tv/web/
        `-- pyapps
            `-- essai
                |-- __init__.py
                |-- blondes.fcgi
                |-- blondes.pid
                |-- django-fcgi.py
                |-- manage.py
                |-- manage.pyo
                |-- plop
                |-- settings.py
                `-- urls.py

    There are no errors when restarting lighttpd. Then I run:

        ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid

    No errors either. I then go to http://cam.com/blondes/. It offers me to download an empty file. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?

  • Plesk Postfix local delivery error in maillog - how to troubleshoot it

    - by RCNeil
    I'm using the PHP mail() function with Postfix, on CentOS 6 with Plesk 10.4, and my email is not getting delivered to a particular address. My personal Gmail and Yahoo email addresses receive email from my server fine and do not produce errors. After a wonderful suggestion on here, I checked my mail logs, and this is the error I see:

        Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: from= <[email protected]>, size=645, nrcpt=1 (queue active)
        Apr 10 10:26:29 ######### postfix-local[8331]: postfix-local: [email protected], [email protected], dirname=/var/qmail/mailnames
        Apr 10 10:26:29 ######### postfix-local[8331]: cannot chdir to mailname dir name: No such file or directory
        Apr 10 10:26:29 ######### postfix-local[8331]: Unknown user: [email protected]
        Apr 10 10:26:29 ######### postfix/pipe[8330]: 19EA21827: to=<[email protected]>, relay=plesk_virtual, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered via plesk_virtual service)
        Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: removed

    [email protected] is the name I've declared in php.ini:

        sendmail_from = "[email protected]"
        sendmail_path = "/usr/sbin/sendmail -t -f [email protected]"

    and the recipient is supposed to be [email protected]. Is this an error on my side or on the recipient's? Can I address this on my server? Many thanks, SF.
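    One way to narrow this down is to take PHP's mail()/sendmail path out of the picture and hand an equivalent message to Postfix directly over SMTP on localhost, then watch whether the same postfix-local error appears in the maillog. A minimal Python sketch (the addresses are hypothetical placeholders, not the real ones from the logs above):

        #!/usr/bin/env python3
        # Submit a test message to the local Postfix instance via SMTP,
        # bypassing PHP's mail() and the sendmail binary entirely.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "[email protected]"        # placeholder sender
        msg["To"] = "[email protected]"   # placeholder for the failing recipient
        msg["Subject"] = "delivery test"
        msg.set_content("test body")

        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)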

  • Unattended Kickstart Install

    - by Eric
    I've looked around quite a bit and have seen similar setups and questions, but none seem to work for me. I'm using the following command to create a custom ISO:

        /usr/bin/livecd-creator --config=/usr/share/livecd-tools/test.ks --fslabel=TestAppliance --cache=/var/cache/live

    This works great and creates the ISO with all of the packages and configs I want on it. My issue is that I want the install to be unattended. However, every time I start the CD, it asks for all of the info such as keyboard, time zone, root password, etc. These are the basic settings I have in my kickstart script prior to the packages section:

        cdrom
        install
        autopart
        autostep
        xconfig --startxonboot
        rootpw testpassword
        lang en_US.UTF-8
        keyboard us
        timezone --utc America/New_York
        auth --useshadow --enablemd5
        selinux --disabled
        services --enabled=iptables,rsyslog,sshd,ntpd,NetworkManager,network --disabled=sendmail,cups,firstboot,ip6tables
        clearpart --all

    After looking around, I was told that I need to modify my isolinux.cfg file to use either "ks=http://X.X.X.X/location/to/test.ks" or "ks=cdrom:/test.ks". I've tried both methods and it still forces me to go through the install process. When I tail the apache logs on the server, I see that the ISO never even tries to get the file. Below is the exact syntax I'm trying in my isolinux.cfg file:

        label http
          menu label HTTP
          kernel vmlinuz0
          append initrd=initrd0.img ks=http://192.168.56.101/files/test.ks ksdevice=eth0
        label localks
          menu label LocalKS
          kernel vmlinuz0
          append initrd=initrd0.img ks=cdrom:/test.ks
        label install0
          menu label Install
          kernel vmlinuz0
          append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM
          menu default
        EOF_boot_menu

    The first two give me a "dracut: fatal: no or empty root=" error until I give them a root= option, and then they just skip the kickstart completely. The last one is my default option, which works fine but requires a lot of user input. Any help would be greatly appreciated.

  • OS X borked; need to back up outside of Time Machine

    - by rlbgator
    Quick background: iMac G5 (the white one; 4 years old?) running Leopard 10.5.something. Time Machine started failing on me, and every time I touch the Finder, things beachball like crazy. Booting from the install disk and then using Disk Utility to "Repair Disk" also fails. I'm left with the conclusion that I have a corrupt file somewhere important that's (i) keeping TM from working and (ii) messing with basic functionality. I am not (yet) savvy enough in OS X to know which logs to look in or how to decipher them - but a corrupt file seems to be the likely case, based on my readings of apple.com forum threads. So I think I need to back up outside of Time Machine, then install a fresh OS X on a new drive (or maybe SpinRite the current drive?). I am able to attach a (non-Time Machine) external USB drive, so I dragged all 3 users' folders to it... am I done backing up? Am I going to have a massive permissions problem trying to put things back together after a reinstall? Thanks for reading.

  • Sending email to a Google Apps mailbox via exim4

    - by Andrey
    I have a hosting server with several users. One of the customers decided to move his email account to Google Apps and added the corresponding MX records, so he can receive email now. But when it comes to sending email from my server to those addresses, the messages don't make it. I guess it's because Exim still thinks these domains are local. This is what I see in the logs (example.com is my domain, example.net is the customer's domain):

        2010-06-02 14:55:37 1OJmXp-0006yh-UG <= [email protected] U=root P=local S=342 T="lsdjf" from <[email protected]> for [email protected]
        2010-06-02 14:55:38 1OJmXp-0006yh-UG ** [email protected] F=<[email protected]> R=virtual_aliases:
        2010-06-02 14:55:38 1OJmXq-0006yl-2A <= <> R=1OJmXp-0006yh-UG U=mail P=local S=1113 T="Mail delivery failed: returning message to sender" from <> for [email protected]
        2010-06-02 14:55:38 1OJmXp-0006yh-UG Completed
        2010-06-02 14:55:38 1OJmXq-0006yl-2A User 0 set for local_delivery transport is on the never_users list
        2010-06-02 14:55:38 1OJmXq-0006yl-2A == [email protected] R=localuser T=local_delivery defer (-29): User 0 set for local_delivery transport is on the never_users list
        2010-06-02 14:55:38 1OJmXq-0006yl-2A ** [email protected]: retry timeout exceeded
        2010-06-02 14:55:38 1OJmXq-0006yl-2A [email protected]: error ignored
        2010-06-02 14:55:38 1OJmXq-0006yl-2A Completed

    What should I do to fix this?

  • Windows memory logged on vs logged off

    - by Adi
    Let's say I power on my freshly installed Windows 7 x64 machine. After Windows boots up, a bunch of services are started in the background that begin allocating memory. Then I enter my user/pass and Windows logs me in. Let's suppose I don't do anything else (I don't explicitly start any application) and I haven't installed any other apps myself - so it's a fresh install of my machine. My question is: how much memory is needed for all the UI and other stuff? Is it a good indicator to look in Task Manager, check all the processes started under my user name, and sum up the memory consumed by those processes to get the total amount of memory I am consuming just to stay logged on? Basically, that is my question: how much memory is needed just to stay logged on? Now, if I log off, will all that memory be released back to the system so that the background services can benefit from it? Also, I assume there might be a different discussion for each Windows flavor(?)
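    To make the "sum up the processes under my user name" idea concrete, here is a small Python sketch. It relies on the third-party psutil package (an assumption - it is not part of Windows) and totals each user's resident (working-set) memory, which roughly matches the per-process figures Task Manager shows:

        # Total resident memory per user, roughly the Task Manager exercise described above.
        import getpass
        from collections import defaultdict

        import psutil

        totals = defaultdict(int)
        for proc in psutil.process_iter(["username", "memory_info"]):
            info = proc.info
            if info["username"] and info["memory_info"]:
                totals[info["username"]] += info["memory_info"].rss

        me = getpass.getuser()
        for user, rss in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
            # Windows reports users as DOMAIN\name; flag the one matching the current login.
            marker = "  <- current user" if user.rsplit("\\", 1)[-1] == me else ""
            print("%-35s %8.1f MB%s" % (user, rss / (1024.0 * 1024.0), marker))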

  • Exim4 Smart Host Relay

    - by ColinM
    I am running Exim 4.71. I want to:

    - Route all email from A.com through mail.A.com
    - Route all email from [B-E].com through mail.B.com
    - Send all other email directly.

    Here is the configuration I have, which doesn't work like I hoped:

        domainlist a_domains = a.com
        domainlist b_domains = b.com : c.com : d.com : e.com

        begin routers

        smart_route_a:
          driver = manualroute
          domains = +a_domains
          transport = remote_smtp
          route_list = +a_domains mail.a.com
          no_more

        smart_route_b:
          driver = manualroute
          domains = +b_domains
          transport = remote_smtp
          route_list = +b_domains mail.mollenhour.com
          no_more

        dnslookup:
          driver = dnslookup
          domains = ! +local_domains
          transport = remote_smtp
          ignore_target_hosts = 0.0.0.0 : 127.0.0.0/8
          no_more

    When I send an email, e.g. with PHP's mail() or Zend_Mail_Transport_Smtp, setting both From: and Return-Path: to [email protected], the smart_route_a router is not used; dnslookup is used instead. Disabling dnslookup results in no mail being sent. From the logs it appears that email sent to [email protected] uses smart_route_a, but the same email sent from [email protected] to [email protected] is sent using dnslookup. How do I make email from [email protected] be relayed via mail.a.com?

  • Wireless disconnects every 30 minutes

    - by Kez
    I have had a look through all the related questions and I get the feeling my problem is unique. My wireless connection disconnects every 30 minutes, for maybe 1 to 3 seconds. If I am browsing the web while it happens, I get the "page cannot be displayed" error message. I checked the event logs, as I was curious to know if there was anything in there. There is:

        Event 8033: BROWSER - The browser has forced an election on network \Device\NetBT_Tcpip_{B919CC30-25A9-45DD-A09F-549A6262FC9E} because a master browser was stopped.

    It is reported exactly every 30 minutes, which coincides with my wireless problem. I am running Windows 7 Ultimate, 32-bit. My wireless is a Realtek RTL8187 integrated into an ASUS P5K-E/WiFi motherboard. It is on a workgroup and has never been on a domain. This problem does not affect any other computers. Wireless reception is great, and I have ensured that the wireless unit is transmitting on a frequency not used by any nearby wireless base stations. How can I fix this pesky problem?

  • How do I secure SQL Server 2008 R2?

    - by Mark Tait
    I have both a dedicated server and a VPS (from Fasthosts); the web sites/applications I run on these access SQL Server instances hosted on the same servers. Until now, I have logged on to SQL Server on both the dedicated and the VPS server from SQL Server Management Studio - until I noticed, in my server application logs, multiple attempts to log on to SQL Server using the 'sa' username with a failed password. So someone (or a bot) is trying hard (repeatedly, every couple of hours, with approximately 20 attempts each time) to log on... so obviously I have to lock down remote access to SQL Server. What I have done is gone into Configuration Manager, and in SQL Server Network Configuration - Protocols for Sql2008, and also in SQL Native Client 10.0 Configuration - Client Protocols, I have disabled Named Pipes and TCP/IP (VIA is disabled by default). I have left Shared Memory enabled. I also disabled the SQL Server Browser under SQL Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop. Can anyone confirm whether this is the correct way of stopping anyone maliciously logging on to SQL Server? (I'm not a DBA or security expert - and there are hundreds of articles advising all different ways - but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.) Thank you, Mark

  • Ubuntu NBR karmic boot freezes at fsck from util-linux-ng 2.16

    - by Bluebill
    I have a netbook (an eMachines e250 - equivalent to an Acer Aspire One) with Ubuntu NBR 9.10 installed on it. Every other cold boot freezes at the following error message:

        fsck from util-linux-ng 2.16

    There is no disk activity, no activity whatsoever. I have left the machine sitting for over an hour and nothing happens. It takes a couple of hard resets to be able to boot properly. Once it boots, everything works great (wireless, suspend/resume, etc.)! I have spent the last couple of weeks researching the problem, and the only thing that seems to work is setting nolapic in the boot string in GRUB - then it boots every time. Unfortunately, nolapic disables the second core and causes problems with suspend/resume. At first I thought it was an fsck problem with the first partition on the hard disk, as it is a hidden NTFS partition containing the Windows XP recovery information, so in /etc/fstab I set that partition to be ignored by fsck. This didn't seem to do anything. I have these partitions:

    - /dev/sda1 - an NTFS recovery partition
    - /dev/sda2 - /boot
    - /dev/sda3 - swap
    - /dev/sda5 - /
    - /dev/sda6 - /home

    I am running kernel version 2.6.31-19-generic and have all the patches (as indicated by Update Manager). I also have no splash screen, so I can see the boot progress. I have only been using NBR since January; I have been using Ubuntu on my desktop since last June (2009-06). What logs should I be looking at? Is there a log for failed boots?
