Search Results

Search found 892 results on 36 pages for 'unavailable'.


  • Generic delegate instances

    - by Luc C
    I wonder if C# (or the underlying .NET framework) supports some kind of "generic delegate instances": that is a delegate instance that still has an unresolved type parameter, to be resolved at the time the delegate is invoked (not at the time the delegate is created). I suspect this isn't possible, but I'm asking it anyway... Here is an example of what I'd like to do, with some "???" inserted in places where the C# syntax seems to be unavailable for what I want. (Obviously this code doesn't compile) class Foo { public T Factory<T>(string name) { // implementation omitted } } class Test { public void TestMethod() { Foo foo = new Foo(); ??? magic = foo.Factory; // No type argument given here yet to Factory! // What would the '???' be here (other than 'var' :) )? string aString = magic<string>("name 1"); // type provided on call int anInt = magic<int>("name 2"); // another type provided on another call // Note the underlying calls work perfectly fine, these work, but i'd like to expose // the generic method as a delegate. string aString2 = foo.Factory<string>("name 1"); int anInt2 = foo.Factory<int>("name 2"); } } Is there a way to actually do something like this in C#? If not, is that a limitation in the language, or is it in the .NET framework?
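
    One workaround worth noting: a C# delegate must be bound to a closed (fully specified) generic method, so the type argument cannot stay open inside a delegate instance; the usual pattern is to capture the Foo in a small wrapper whose own method remains generic, so the type argument is supplied at call time. A sketch (the Magic class is a hypothetical name for illustration, not a framework type):

      // Wrapper that defers the type argument to call time; "Magic" is
      // a hypothetical illustration, not part of the .NET framework.
      class Magic
      {
          private readonly Foo foo;
          public Magic(Foo foo) { this.foo = foo; }
          public T Create<T>(string name) { return foo.Factory<T>(name); }
      }

      // Usage mirrors the "???" example above:
      // var magic = new Magic(foo);
      // string aString = magic.Create<string>("name 1");
      // int anInt = magic.Create<int>("name 2");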


  • Apprequest Dialog stopped working (FB-AS3 API)

    - by warran
    I'm writing an ActionScript game and wanted to integrate it with FB, so I used http://code.google.com/p/facebook-actionscript-api/ with a custom dialog function I found in an issue thread. It goes like this: protected function dialog(method:String, callback:Function, stageReference:Stage, stageWebView:StageWebView, params:* = null):void { dialogCallback = callback; stageRef = stageReference; webView = stageWebView; webView.stage = stageReference; webView.assignFocus(); dialogWindow = new DialogWindow(handleDialog); dialogWindow.open(method, applicationId, webView, params); } I wrote a module to handle all the FB stuff and it worked perfectly. But a few days ago I noticed that the dialog shows up, but when I select friends and try to send an apprequest to them I get an error: An error occurred with your app. Please try again later. API Error Code: 2 API Error Description: Service temporarily unavailable Error Message: User can't send this request: Unknown error I checked it and found that after selecting friends and clicking send, the dialog changes location to http://www.facebook.com/dialog/apprequest, the error occurs, and then after clicking "ok" it changes location to redirect_uri. Do you have any ideas? Is this my fault or Facebook's?


  • Group Policy error 1006 with error code 52

    - by Bernesto
    I have a Hyper-V cluster operating on Win2k8 R2 in a 2003 forest. These servers are at our NOC with a DC that connects to our PDC at HQ via a persistent VPN. The cluster boxes are reporting an error event ID 1006, shown below. The DC is also reporting an error 5805, also shown below. I have found numerous posts regarding 1006 errors, but none for error code 52. It's weird: I can ping, and I can browse network shares on the DC from each. I thought it might be a DNS or network issue, but nslookup works too. Event 1006 <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-GroupPolicy" Guid="{AEA1B4FA-97D1-45F2-A64C-4D69FFFD92C9}" /> <EventID>1006</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>1</Opcode> <Keywords>0x8000000000000000</Keywords> <TimeCreated SystemTime="2013-12-17T00:08:19.582292600Z" /> <EventRecordID>41808</EventRecordID> <Correlation ActivityID="{26B10592-6228-4A3E-845B-E04B49702A54}" /> <Execution ProcessID="964" ThreadID="1384" /> <Channel>System</Channel> <Computer>NEOREEFVH1.neoreef.com</Computer> <Security UserID="S-1-5-18" /> </System> <EventData> <Data Name="SupportInfo1">1</Data> <Data Name="SupportInfo2">5012</Data> <Data Name="ProcessingMode">0</Data> <Data Name="ProcessingTimeInMilliseconds">1138</Data> <Data Name="ErrorCode">52</Data> <Data Name="ErrorDescription">Unavailable</Data> <Data Name="DCName" /> </EventData> </Event> Event 5805 Event Type: Error Event Source: NETLOGON Event Category: None Event ID: 5805 Date: 12/16/2013 Time: 2:32:01 PM User: N/A Computer: NEOREEFSRV15 Description: The session setup from the computer NEOREEFVH3 failed to authenticate. The following error occurred: Access is denied. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 22 00 00 c0 "..À Here are the networks on the hosts: any with "Enabled" are virtual switches.


  • `svn checkout` on the SVN server causes the repo to break with a 301 error

    - by Phillip Oldham
    We have an nginx server which proxies to a standard set-up of Apache+SVN. The nginx set-up is a very simple proxy: server { server_name svn.ourdomain.tld; location / { proxy_pass http://localhost:8080; } } Apache is set up as follows: <Location /> DAV svn SVNParentPath /var/svn AuthType Basic AuthName "Authentication Required" AuthUserFile /var/svn/.auth Require valid-user </Location> ...which allows us to access repositories using something like http://svn.ourdomain.tld/repo. We've been running this set-up now for about 2 years without issue. Recently we've found that we need to check out one of the repositories onto the server itself, however whenever we do so it seems to break the repo. From that point on, it will only respond with a 301 Moved Permanently error. We've tried: svn co file:///path/to/repo svn co svn://localhost/repo svn co svn://svn.ourdomain.tld/repo svn co svn+ssh://localhost/repo svn co svn+ssh://svn.ourdomain.tld/repo svn co http://localhost/repo svn co http://svn.ourdomain.tld/repo Also tried bypassing nginx, and got the same error: svn co http://localhost:8080/repo svn co http://svn.ourdomain.tld:8080/repo Checking out from a different machine works as expected until we attempt to check out on the server; after that it refuses with the same 301 error. What is more confusing is that this repository server also hosts our HudsonCI server, which pulls and builds our projects hourly. This leads us to suspect that it's the svn client which is causing an error in communication. It's also very confusing that removing then re-creating the repo using svnadmin doesn't reset the error - the repo is still unavailable even though it's "new"! Restarting apache and subversion (svnserve) has no effect on this, or the original error. Version information: OS: 64-bit CentOS 4.2, 2.6.27 kernel svn client: 1.4.2 (same for both server and remote clients) svn server: 1.4.2 httpd: 2.2.3 UPDATE: This also happens with svn export when run on the repo server. Run from any other box/client, there isn't a problem. Here's the workflow, to help clarify the error: [~repo-server~]# svnadmin create {repo}; chown -Rf www:www {repo} [remote-client]# svn checkout http://svn.ourdomain.tld/repo [remote-client]# svn add file; svn ci -m '' [~repo-server~]# cd /var/www; svn export file:///path/to/repo/trunk ourproject [remote-client]# svn update fails with 301 error I can also confirm that the hostname of the box doesn't have an effect here, which is very odd: whether or not svn.ourdomain.tld is added to /etc/hosts it still breaks - we thought it could be an issue with localhost routing, but that doesn't seem to be the case. Are we missing something in the documentation which states you can't check out a repo when the server is on the same box? How can we stop the repos becoming corrupt when we check out locally?
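
    One variable worth eliminating (an assumption drawn from the config above, not a confirmed diagnosis): the nginx proxy block never forwards the original Host header, so Apache sees requests addressed to localhost:8080, and mod_dav_svn can answer with redirects to URLs built from that host. A sketch of the proxy with the header passed through:

      server {
          server_name svn.ourdomain.tld;
          location / {
              # Forward the client's Host so mod_dav_svn generates URLs
              # for svn.ourdomain.tld rather than localhost:8080.
              proxy_set_header Host $host;
              proxy_pass http://localhost:8080;
          }
      }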


  • Xen won't start after it had been working

    - by Paul Tomblin
    I've been setting up this Debian Stable system with a dom0 and 3 domUs. It was working fine for several days, and I'm almost ready to deploy it to the rack. But last night I shut it down with all three domUs still running for the first time, and today when I started it up, xend won't start. In /var/log/messages, I have: Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: blktapctrl: v1.0.0 Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (aio)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [raw image (sync)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [vmware image (vmdk)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [ramdisk image (ram)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Found driver: [qcow disk (qcow)] Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: couldn't find device number for 'blktap0' Apr 18 13:01:33 xen-test BLKTAPCTRL[4248]: Unable to start blktapctrl and in /var/log/xen/xend.log, I have this: [2010-04-18 12:46:32 3523] INFO (SrvDaemon:219) Xend exited with status 1. [2010-04-18 13:01:34 4255] INFO (SrvDaemon:331) Xend Daemon started [2010-04-18 13:01:34 4255] INFO (SrvDaemon:335) Xend changeset: unavailable. [2010-04-18 13:01:34 4255] INFO (SrvDaemon:342) Xend version: Unknown. [2010-04-18 13:01:34 4255] ERROR (SrvDaemon:353) Exception starting xend (no element found: line 1, column 0) Traceback (most recent call last): File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvDaemon.py", line 345, in run servers = SrvServer.create() File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvServer.py", line 251, in create root.putChild('xend', SrvRoot()) File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvRoot.py", line 40, in __init__ self.get(name) File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 82, in get val = val.getobj() File "/usr/lib/xen-3.2-1/lib/python/xen/web/SrvDir.py", line 52, in getobj File "/usr/lib/xen-3.2-1/lib/python/xen/xend/server/SrvNode.py", line 30, in __init__ self.xn = XendNode.instance() File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 709, in instance inst = XendNode() File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendNode.py", line 164, in __init__ saved_pifs = self.state_store.load_state('pif') File "/usr/lib/xen-3.2-1/lib/python/xen/xend/XendStateStore.py", line 104, in load_state dom = minidom.parse(xml_path) File "/usr/lib/python2.5/xml/dom/minidom.py", line 1915, in parse return expatbuilder.parse(file) File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 924, in parse result = builder.parseFile(fp) File "/usr/lib/python2.5/xml/dom/expatbuilder.py", line 211, in parseFile parser.Parse("", True) ExpatError: no element found: line 1, column 0 [2010-04-18 13:01:34 4253] INFO (SrvDaemon:219) Xend exited with status 1. Any clues as to what might be going wrong?
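
    For what it's worth, the traceback bottoms out in minidom.parse() raising "no element found: line 1, column 0" on the saved 'pif' state, which is the parser's way of saying the XML file is empty — consistent with a truncated write during the unclean shutdown. A hedged check (the path is the usual XendStateStore location for xen 3.2 on Debian, inferred from the traceback rather than confirmed):

      # Look for zero-length state files that minidom cannot parse:
      ls -l /var/lib/xend/state/
      # Move the empty file aside and let xend recreate it on startup:
      mv /var/lib/xend/state/pif.xml /var/lib/xend/state/pif.xml.bad
      /etc/init.d/xend start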


  • cron doesn't execute its commands

    - by Silvio Keller
    I built my own small server with Debian. Last night I updated it. The update produced an error while generating the initrd, and the machine didn't boot. Today I booted from another filesystem and ran dpkg --configure -a in a chroot. I also checked the filesystem. Now everything should be OK. But cron doesn't work :-( It is the same /etc/crontab file, but it doesn't work. I reinstalled cron and tried many things. Is there a way to see cron's log? I have only read about rsyslog, but rsyslog is not installed, because the server is based on a minimal system (FreeAgent Dockstar). Does anyone have an idea? Best regards, Silvio Keller Update There is no file /var/log/syslog and dpkg -l|grep syslog gives no output, so I think syslog is not installed. It is only a minimal system. cron -l gives: cron: can't lock /var/run/crond.pid, otherpid may be 687: Resource temporarily unavailable So I stopped cron with /etc/init.d/cron stop and executed cron -l again, which gave no output. At that moment I tried to start cron with /etc/init.d/cron start: Starting periodic command scheduler: cron failed! But there's no additional error info... I then noticed a process called cron -l running in the background. If I stop it, /etc/init.d/cron start works: Starting periodic command scheduler: cron. I have always used the crontab file /etc/crontab and it always worked, until I updated the kernel and the initrd. The file's content is: SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly 25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) 47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) 52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) 00 5 * * * root dummy 23 45 * * 7 root dummy 00 * * * * root dummy */1 * * * * root dummy 00 1 * * * root dummy 00 4 * * * root dummy */5 * * * * root dummy #00 */10 * * * root dummy 01 0 * * * root dummy 00 5 * * * root dummy 00 4 * * * root dummy # If I start crontab -e it creates a new file /tmp/crontab.vn87tv/crontab, which is unfortunately on a tmpfs and which also doesn't work. Thanks & best regards
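
    Two hedged notes on the update above: on Debian, cron's -l flag enables LSB-style names for /etc/cron.d (it does not list logs), so "cron -l" was starting a second daemon — which explains the pid-lock message. And without any syslog installed, a minimal way to see whether cron fires at all is to make a job write to a file directly (a test entry, assuming /etc/crontab is being read):

      # Append a once-a-minute test job, restart cron, and wait a bit:
      echo '* * * * * root date >> /tmp/cron-test.log 2>&1' >> /etc/crontab
      /etc/init.d/cron restart
      # After a couple of minutes, this file should contain timestamps:
      cat /tmp/cron-test.log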


  • Mac OS X behind OpenLDAP and Samba

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate on my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs, and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that will pass the smbldap-populate when combined with apple.ldif and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X Machine. The problem is that when I login, the home directory is not created or pulled from the server. I get the following in system.log Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null) Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority All that's great and good and I authenticate. Then I get CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile. Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000 If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value and the NFSHomeDirectory on the mac should point to apple-user-homeDirectory I also set the attr apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir> which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point. 
By the way, I intend to create a blog on my vps just for this, and create an install script in python that people can download so no one has to go through what I've had to go through this week :) After some sleep I am going to try to login from a windows machine and report back here. Thanks Sam
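
    For reference, the two Apple attributes described above look roughly like this in LDIF form (a sketch only — it assumes the Apple schema is loaded and uses an illustrative DN):

      dn: uid=sam,ou=Users,dc=example,dc=com
      changetype: modify
      replace: apple-user-homeDirectory
      apple-user-homeDirectory: /Network/Servers/172.17.148.186/home/sam
      -
      replace: apple-user-homeurl
      apple-user-homeurl: <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>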


  • Why am I getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting: Include /private/etc/apache2/extra/httpd-ssl.conf in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include: DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs" ServerName myserver.com:443 ServerAdmin [email protected] ... SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem" SSLCertificateKeyFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem" Then I restart Apache (going to Start - All Programs - Apache Server 2.2 - Control - Restart) and go to localhost on port 443 in Firefox, where I get: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> </head> <body> <h1>Index of /</h1> <ul><li><a href="MyPageLinks/"> Links/</a></li> ..... .... </ul> </body></html> But when displaying the webpage I see: Unable to connect Firefox can't establish a connection to the server at localhost. *The site could be temporarily unavailable or too busy. Try again in a few moments. *If you are unable to load any pages, check your computer's network connection. *If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web. I read: Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X? and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error: The Apache2.2 service is running. (OS 5)Access is denied. : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem I checked the permissions for cert.pem: all permissions (Full control, Read, Read and modify, Execute, Write) are granted to Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
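
    Since the log complains "(OS 5) Access is denied" rather than a bad path, one thing worth checking is which account the Apache service actually runs as (services.msc shows it; often Local System or a dedicated service user) — the logged-in Admin's permissions don't apply to the service account. A hedged sketch granting read access to Local System, assuming that is the service account:

      icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem" /grant SYSTEM:R
      icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\key.pem" /grant SYSTEM:R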


  • Linux Mint Wireless doesn't connect

    - by guisantogui
    I'm having a big problem. I've installed Linux Mint Debian Edition (LMDE) and, following this tutorial http://community.linuxmint.com/tutorial/view/161, installed the network driver. The available connections appear, but when I try to connect to my connection the first time, I get this message: "(4) Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken." And on the following tries, I get this other message: "(32) Insufficient privileges." I'm accepting ideas. Thanks. EDIT: The last piece of the logs: Oct 5 00:22:38 gsouza-host ntpd[2116]: peers refreshed Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): bringing up device. Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: nl80211: 'nl80211' generic netlink not found Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: Failed to initialize driver 'nl80211' Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: rfkill: WLAN soft blocked Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> WiFi hardware radio set enabled Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> WiFi now enabled by radio killswitch Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): supplicant interface state: starting -> ready Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): device state change: unavailable -> disconnected (reason 'supplicant-available') [20 30 42] Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <info> (wlan0): supplicant interface state: ready -> inactive Oct 5 00:22:42 gsouza-host NetworkManager[2019]: <warn> Trying to remove a non-existant call id. Oct 5 00:22:42 gsouza-host wpa_supplicant[2055]: rfkill: WLAN unblocked Oct 5 00:22:44 gsouza-host avahi-daemon[1827]: Joining mDNS multicast group on interface wlan0.IPv6 with address fe80::7ae4:ff:fe4a:13a9. Oct 5 00:22:44 gsouza-host avahi-daemon[1827]: New relevant interface wlan0.IPv6 for mDNS. Oct 5 00:22:44 gsouza-host avahi-daemon[1827]: Registering new address record for fe80::7ae4:ff:fe4a:13a9 on wlan0.*. Oct 5 00:22:46 gsouza-host ntpd[2116]: Listen normally on 7 wlan0 fe80::7ae4:ff:fe4a:13a9 UDP 123 Oct 5 00:22:46 gsouza-host ntpd[2116]: peers refreshed
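
    A hedged note: "(32) Insufficient privileges" from NetworkManager is usually a D-Bus/PolicyKit authorization failure rather than a driver problem, and on Debian-based systems the common fix is making sure the desktop user is in the netdev group. A sketch (the user name here is assumed from the host name in the logs):

      groups gsouza                         # is "netdev" listed?
      sudo adduser gsouza netdev            # if not, add the user
      sudo service network-manager restart  # then restart NM or log out/in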



  • DNSBL listed at zen.spamhaus.org - can't get outgoing mail working? Am I interpreting the response correctly?

    - by Joe Hopfgartner
    I have a problem with a mail server and there is something I don't quite understand. I can connect, authenticate, and specify the sender address - but when specifying the receiver I get an error 550 which looks like this: RCPT TO:[email protected] 550-DNSBL listed at zen.spamhaus.org 550 http://www.spamhaus.org/query/bl?ip=62.178.15.161 Now the strange thing is that 62.178.15.161 is my local client address, not the server's IP address. Also, the error code 550 seems to be defined like so: 550 Requested action not taken: mailbox unavailable To me that makes no sense at all. Why this error code with this Spamhaus message? Why the local IP address and not the server's? Exim is running, and nothing turns up in the logs mail.err, mail.info, mail.log, mail.warn in /var/log. I looked up both the server's and the client's IP address on blacklists. The client's IP address is listed on some (as expected), but the server is totally clean. Here is the complete telnet log when I reproduced the error. Mail clients like Evolution and Thunderbird give me the same Spamhaus error message. joe@joe-desktop:~$ telnet mail.hunsynth.org 25 Trying 193.164.132.42... Connected to mail.hunsynth.org. Escape character is '^]'. 220 hunsynth.org ESMTP Exim 4.69 Sat, 01 Jan 2011 17:52:45 +0100 HELP 214-Commands supported: 214 AUTH STARTTLS HELO EHLO MAIL RCPT DATA NOOP QUIT RSET HELP EHLO AUTH 250-hunsynth.org Hello chello062178015161.6.11.univie.teleweb.at [62.178.15.161] 250-SIZE 52428800 250-PIPELINING 250-AUTH PLAIN LOGIN CRAM-MD5 250-STARTTLS 250 HELP AUTH LOGIN 334 VXNlcm5hbWU6 dGVzdEBodW5zeW50aC5vcmc= 334 UGFzc3dvcmQ6 ***** 235 Authentication succeeded MAIL FROM:[email protected] 250 OK RCPT TO:[email protected] 550-DNSBL listed at zen.spamhaus.org 550 http://www.spamhaus.org/query/bl?ip=62.178.15.161 quit 221 hunsynth.org closing connection Connection closed by foreign host. joe@joe-desktop:~$ Update: I tried the same thing from my other server and could successfully send an email. So it really looks like the server checks whether the IP that establishes the connection is on some blacklist. This is theoretically a good thing, but shouldn't authentication on the server prevent that? It would be absurd if I couldn't send email over my own SMTP server from my dynamic ISP connection just because the dynamic IP is listed, although I have a clean server with a login.
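
    For what it's worth, authentication does not bypass DNSBL checks in Exim by itself — the RCPT ACL has to accept authenticated senders before the dnslists test is reached. A sketch of the relevant acl_check_rcpt fragment (assuming a stock configuration with a dnslists deny; statement placement is the point, names are illustrative):

      acl_check_rcpt:
        # Let authenticated clients past the blacklist check entirely;
        # this must appear before the dnslists deny below.
        accept authenticated = *

        deny message  = DNSBL listed at zen.spamhaus.org
             dnslists = zen.spamhaus.org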


  • Need to increase nginx throughput to an upstream unix socket -- linux kernel tuning?

    - by Ben Lee
    I am running an nginx server that acts as a proxy to an upstream unix socket, like this: upstream app_server { server unix:/tmp/app.sock fail_timeout=0; } server { listen ###.###.###.###; server_name whatever.server; root /web/root; try_files $uri @app; location @app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://app_server; } } Some app server processes, in turn, pull requests off /tmp/app.sock as they become available. The particular app server in use here is Unicorn, but I don't think that's relevant to this question. The issue is, it just seems that past a certain amount of load, nginx can't get requests through the socket at a fast enough rate. It doesn't matter how many app server processes I set up, it doesn't even matter what the app is (tried it with a dummy app with just a single endpoint that returned an empty page with status 404). The bottleneck seems to be the socket, not the app. I'm getting a flood of these messages in the nginx error log: connect() to unix:/tmp/app.sock failed (11: Resource temporarily unavailable) while connecting to upstream Many requests result in status code 502, and those that don't take a long time to complete. The nginx write queue stat hovers around 1000. Anyway, I feel like I'm missing something obvious here, because this particular configuration of nginx and app server is pretty common, especially with Unicorn (it's the recommended method in fact). Are there any linux kernel options that needs to be set, or something in nginx? Any ideas about how to increase the throughput to the upstream socket? Something that I'm clearly doing wrong? Additional information on the environment: $ uname -a Linux app1 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ ruby -v ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux] $ unicorn -v unicorn v4.3.1 $ nginx -V nginx version: nginx/1.2.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled Current kernel tweaks: net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_mem = 16777216 16777216 16777216 net.ipv4.tcp_window_scaling = 1 net.ipv4.route.flush = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1 net.core.somaxconn = 8192 net.netfilter.nf_conntrack_max = 131072
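
    One hedged lead: connect() on a UNIX socket fails with EAGAIN ("Resource temporarily unavailable") when the socket's listen backlog overflows, and Unicorn's default backlog is 1024 — so raising it (still capped by net.core.somaxconn, already 8192 above) is a cheap experiment rather than a confirmed diagnosis. A sketch for the Unicorn config file:

      # unicorn.rb -- raise the UNIX socket's listen backlog; bursts past
      # the default of 1024 make nginx's connect() return EAGAIN.
      listen "/tmp/app.sock", :backlog => 2048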


  • Varnish configuration to only cache for non-logged in users

    - by davidsmalley
    I have a Ruby on Rails application fronted by varnish+nginx. As most of the site's content is static unless you are a logged-in user, I want to cache the site heavily with varnish when a user is logged out, but only cache static assets when they are logged in. When a user is logged in they will have the cookie 'user_credentials' present in their Cookie: header; in addition I need to skip caching on /login and /sessions in order that a user can get their 'user_credentials' cookie in the first place. Rails by default does not set a cache-friendly Cache-Control header, but my application sets a "public,s-max-age=60" header when a user is not logged in. Nginx is set to return 'far future' expires headers for all static assets. The configuration I have at the moment is totally bypassing the cache for everything when logged in, including static assets — and is returning cache MISS for everything when logged out. I've spent hours going around in circles and here is my current default.vcl director rails_director round-robin { { .backend = { .host = "xxx.xxx.xxx.xxx"; .port = "http"; .probe = { .url = "/lbcheck/lbuptest"; .timeout = 0.3 s; .window = 8; .threshold = 3; } } } } sub vcl_recv { if (req.url ~ "^/login") { pipe; } if (req.url ~ "^/sessions") { pipe; } # The regex used here matches the standard rails cache buster urls # e.g. /images/an-image.png?1234567 if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.cookie; lookup; } else { if (req.http.cookie ~ "user_credentials") { pipe; } } # Only cache GET and HEAD requests if (req.request != "GET" && req.request != "HEAD") { pipe; } } sub vcl_fetch { if (req.url ~ "^/login") { pass; } if (req.url ~ "^/sessions") { pass; } if (req.http.cookie ~ "user_credentials") { pass; } else { unset req.http.Set-Cookie; } # cache CSS and JS files if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.Set-Cookie; } if (obj.status >=400 && obj.status <500) { error 404 "File not found"; } if (obj.status >=500 && obj.status <600) { error 503 "File is Temporarily Unavailable"; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } }
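
    One detail that may explain the all-or-nothing behaviour: pipe hands the entire client connection to the backend, so any further requests on that keep-alive connection skip vcl_recv (and the cache) completely — pass is the usual verb for "don't cache this one request". A sketch of the cookie branch with pipe swapped for pass (same logic, Varnish 2.x syntax matching the config above):

      sub vcl_recv {
        if (req.url ~ "^/login" || req.url ~ "^/sessions") {
          pass;
        }
        if (req.http.cookie ~ "user_credentials") {
          pass;
        }
      }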


  • web services not being reached through the web browser

    - by Tony
    I am trying to reference my .asmx web services in .NET, but my server is not exposed to the internet. When I browse to the following address I get the message mentioned below. What's the reason for not being able to see the directory? Am I missing something in my IIS configuration? Am I missing anything in my permissions? Just for reference, I have other folders with web services and they have the same issue. When I log in to the server I do it with my Windows user and password (I am using Windows authentication). I should mention that when I enter the URL I get a popup asking for my user ID and password, but it seems unable to validate them since it keeps asking a couple of times. Let me know if you need more information to address this issue. http://appsvr02/Inetpub/wwwroot/DevWebApi/ Internet Explorer cannot display the webpage What you can try: It appears you are connected to the Internet, but you might want to try to reconnect to the Internet. Retype the address. Go back to the previous page. Most likely causes: •You are not connected to the Internet. •The website is encountering problems. •There might be a typing error in the address. More information This problem can be caused by a variety of issues, including: •Internet connectivity has been lost. •The website is temporarily unavailable. •The Domain Name Server (DNS) is not reachable. •The Domain Name Server (DNS) does not have a listing for the website's domain. •If this is an HTTPS (secure) address, click tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section. For offline users You can still view subscribed feeds and some recently viewed webpages. To view subscribed feeds 1.Click the Favorites Center button , click Feeds, and then click the feed you want to view. To view recently visited webpages (might not work on all pages) 1.Click Tools , and then click Work Offline. 2.Click the Favorites Center button , click History, and then click the page you want to view.


  • Tomcat with virtual hosts - 404

    - by Thardas
    I have a CentOS 5.2 server set up with Apache 2.2.3 and Tomcat 5.5.27. The server hosts multiple virtual hosts connected to multiple Tomcats. For instance, we have one Tomcat for development and testing and one Tomcat for production. project.demo.us.com points to the dev Tomcat and project.us.com points to the production Tomcat. Here's the virtual host's configuration: <VirtualHost *:80> ServerName project.demo.us.com CustomLog logs/project.demo.us.com/access_log combined env=!VLOG ErrorLog logs/project.demo.us.com/error_log DocumentRoot /var/www/vhosts/project.demo.us.com <Directory /var/www/vhosts/project.demo.us.com> Allow from all AllowOverride All Options -Indexes FollowSymLinks </Directory> ########## ########## ########## JkMount /project/* online </VirtualHost> The JkMount line defines that we use the online worker, and our workers.properties contains this: worker.list=..., online, ... worker.online.port=7703 worker.online.host=localhost worker.online.type=ajp13 worker.online.lbfactor=1 And Tomcat's conf/server.xml contains: <Connector port="7703" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" URIEncoding="UTF-8" maxThreads="80" minSpareThreads="10" maxSpareThreads="15"/> I'm not sure what redirectPort is, but I tried to telnet to that port and no one answers, so it shouldn't matter? Tomcat's webapps directory contains project.war, and the server automatically deployed it under the project directory, which contains index.jsp and hello.html. The latter is for static debugging purposes. Now when I try to access http://project.demo.us.com/project/index.jsp, I get Tomcat's HTTP Status 404 - The requested resource () is not available. The same thing happens with hello.html, so it's not working with static content either. Apache's access_log contains: 88.112.152.31 - - [10/Aug/2009:12:15:14 +0300] "GET /demo/index.jsp HTTP/1.1" 404 952 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2" I couldn't find any mention of the request in Tomcat's logs. If I shut down this specific Tomcat, I no longer get Tomcat's 404 but Apache's 503 Service Temporarily Unavailable, so I should be configuring the correct Tomcat. Is there something obvious that I'm missing? Is there any place where I could find out what path the Tomcat is using to look for requested files?


  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to set up HAProxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAProxy servers are running Ubuntu 12.04. The HAProxy version is 1.4.18. When I start HAProxy I get the following error: Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms. I am not really sure what the issue could be. I have verified the following: EC2 security group ports are open Pored over my config files looking for issues. I currently do not see any. Ensured that xinetd was installed Ensured that I am using the correct IP address of the MySQL server. Any help with this is greatly appreciated. Here are my current configs. Load Balancer /etc/haproxy/haproxy.cfg global log 127.0.0.1 local0 log 127.0.0.1 local1 notice maxconn 4096 user haproxy group haproxy debug #quiet daemon defaults log global mode http option tcplog option dontlognull retries 3 option redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 frontend pxc-front bind 0.0.0.0:3307 mode tcp default_backend pxc-back frontend stats-front bind 0.0.0.0:22002 mode http default_backend stats-back backend pxc-back mode tcp balance leastconn option httpchk server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3 backend stats-back mode http balance roundrobin stats uri /haproxy/stats MySQL Server /etc/xinetd.d/mysqlchk # default: on # description: mysqlchk service mysqlchk { # this is a config for xinetd, place it in /etc/xinetd.d/ disable = no flags = REUSE socket_type = stream port = 9200 wait = no user = nobody server = /usr/bin/clustercheck log_on_failure += USERID #only_from = 0.0.0.0/0 # recommended to put the IPs that need # to connect exclusively (security purposes) per_source = UNLIMITED } MySQL Server /etc/services Added the line mysqlchk 9200/tcp # mysqlchk MySQL Server /usr/bin/clustercheck #!/bin/bash # # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly # # Author: Olaf van Zandwijk <[email protected]> # Documentation and download: https://github.com/olafz/percona-clustercheck # # Based on the original script from Unai Rodriguez # MYSQL_USERNAME="testuser" MYSQL_PASSWORD="" ERR_FILE="/dev/null" AVAILABLE_WHEN_DONOR=0 # # Perform the query to check the wsrep_local_state # WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}` if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]] then # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200 /bin/echo -en "HTTP/1.1 200 OK\r\n" /bin/echo -en "Content-Type: text/plain\r\n" /bin/echo -en "\r\n" /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n" /bin/echo -en "\r\n" else # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503 /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n" /bin/echo -en "Content-Type: text/plain\r\n" /bin/echo -en "\r\n" /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n" /bin/echo -en "\r\n" fi
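
    Since HAProxy reports a socket error rather than a failed HTTP check, it is worth hitting the xinetd health-check port by hand from the HAProxy box — that isolates the check service from the load balancer. A quick probe (IP and port taken from the configs above):

      # Expect "HTTP/1.1 200 OK ... Percona XtraDB Cluster Node is synced."
      curl -i http://10.86.154.105:9200/
      # If this hangs or is refused, restart xinetd after the config edits:
      sudo service xinetd restart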


  • Windows cannot find the host name "download.microsoft.com" using DNS

    - by joedotnot
    When trying to download a file found on the Microsoft downloads center that starts with, for example, http://download.microsoft.com/download/6/8/7/(some_GUID)/(some_file_name.ext) I get a timeout with "Internet Explorer cannot display the webpage". More information says: Internet connectivity has been lost. The website is temporarily unavailable. The Domain Name Server (DNS) is not reachable. The Domain Name Server (DNS) does not have a listing for the website's domain. If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section. Diagnose Connection problems says: Windows cannot find the host name "download.microsoft.com" using DNS Bear with me while I expand on the problem: It all started when I tried to download Windows XP Mode for my Windows 7 machine. I went to the Virtual PC site, then through the motions of Windows Genuine Advantage, which validated OK, but when it redirects to grab the file it just times out with the above error. (NB: I also tried with the latest Chrome and Firefox, but no use due to the Genuine Advantage stuff, so I decided to stick with IE.) I am behind an ADSL2+ modem router connecting via wireless (Win 7 Pro laptop); so I hop over to the desktop connected via Ethernet (Vista Business), and same result; I begin to think the download.microsoft.com site is down. So I give it a break and read up on EDNS, flushing the cache, hosts file, etc... Try again an hour later on the Win 7 machine, still no go; so I turn off the Win 7 (software) firewall, and lo and behold, I can connect and grab any files from download.microsoft.com (...nice, so we have a Micro$0ft firewall preventing access to a Micro$0ft website, no wonder my auto-updates kept failing, but that's another story). But I am still not happy that the desktop connected via Ethernet still cannot get to download.microsoft.com, even though I turned off all firewalls, defenders, anti-virus, etc. What is so special/specific about the URL download.microsoft.com? Any other site is OK, including www.microsoft.com. Any networking gurus know what's REALLY going on, and how can I get the desktop to connect? Ping download.microsoft.com - Ping request could not find host download.microsoft.com. Please check the name and try again. Pinging google.com or even www.microsoft.com works and gives me an IP address. NB: On the wireless laptop ping download.microsoft.com works; I get xxxx.ms.akamai.net [202.7.177.33].
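
    To separate the resolver path from port filtering on the desktop, nslookup can be pointed at an explicit DNS server; download.microsoft.com resolves through akamai.net (as the laptop's successful ping shows), and some resolvers mishandle its long CNAME chain. A sketch (any reachable public resolver will do; 8.8.8.8 is just an example):

      nslookup download.microsoft.com
      nslookup download.microsoft.com 8.8.8.8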


  • Snow Leopard can see Windows shares in Finder but can't connect

    - by Randy Miller
    I have an iMac with the latest version of Snow Leopard on it. I have a NAS drive and a Windows machine that both show up in the Finder's 'Shared' section. However, if I click on them, Finder says "Connection Failed". Clicking on 'Connect As...' gives an error dialog that says "The server 'blah' may not exist or it is unavailable at this time." Points of interest: All machines are receiving their IP/DNS info from the router using DHCP. I have a Mac Mini on the same network that connects to the NAS drive and Windows machine perfectly with no config (i.e. it worked out of the box). Both Macs are on the same version of Snow Leopard. There is no password required to access the NAS share. I've never set up a WINS server on any machine, and all machines are using 'workgroup' by default. I've tried putting "workgroup" in the Mac's workgroup entry and have tried leaving it blank; neither solves the problem. Here are some things I have tried: Finder-Connect To Server: smb:///share. This works, but by name it does not. Terminal-mount_smbfs //@/share share. This also works by IP, but not by name, resulting in "mount_smbfs: server connection failed: No route to host". If I put the IP address of the NAS in the WINS server entry in the Mac's network setup, I can connect by name. It obviously seems to be a name resolution error, but I can't figure out why. The only thing that has changed since it used to work is that I got a new router that now gives out DHCP addresses of 192.168.x.x (all machines are DHCP clients), where it used to be 10.0.x.x. I've grep'd through everything that might have saved that old address, but can't find anything. It's also worth noting that the second Mac (the one that connects successfully) was added to the network after the router change. Please let me know if there are additional points of information needed to troubleshoot this further. Thanks, Randy
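
    Given that a WINS entry fixes it, the failure is NetBIOS name resolution; a low-tech cross-check is pinning the names in /etc/hosts on the Mac (illustrative name and address below — substitute the NAS's actual DHCP lease):

      echo "192.168.1.50  nas" | sudo tee -a /etc/hosts
      dscacheutil -flushcache    # flush Snow Leopard's resolver cache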


  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power cycled and with that I power cycled my router. After that I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue as I've power cycled my networking equipment in the past without issues and none of my settings appear to have been lost. I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive", from there I enter \\SERVER\Storage as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive: Windows cannot access \\Server\Storage Check the spelling of the name. Otherwise there might be a problem with your network. To try to identify and resolve network problems, click Diagnose. Details: Error code: 0x80070035 The network path was not found. When I click Diagnose I get the following: Problems found file and print sharing resource (SERVER) is online but isn't responding to connection attempts. The remote computer isn't responding to connection on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn't find any problems with the firewall on your computer. I've tried this from multiple computers with the same issue too. To resolve the problems so far I've tried: Disabling the firewall on SERVER Reinstalling File Services Modifying NetBT\Parameters registry values Adding a custom inbound rule for port 445 Adding port forwarding on my router for port 445 Recreating the shared folders Checking and rechecking the shared folder permissions. Resetting my user account password on the server used to access the shared folder. I'm pulling my hair out with this problem mainly because it came out of nowhere. It was working fine the night before and the next day it just stopped working. Any ideas of what I could try next are much appreciated. It should also be noted that this server is used as a web server too and that functionality still works correctly.
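
    Since the diagnostic singles out port 445, a raw TCP test from the Windows 7 client shows whether anything is listening before worrying about shares or permissions (the telnet client is an optional Windows feature and may need enabling first):

      rem From the client -- a blank screen means TCP 445 is reachable:
      telnet SERVER 445
      rem On the server -- confirm something is bound to 445:
      netstat -an | findstr :445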


  • Varnish 503 Guru Meditation errors with pfSense and healthy Apache

    - by Fammy
    We are running a pfSense firewall/load balancer with Varnish as a service, in front of Fedora Linux web servers running Apache. We are getting intermittent 503 Guru Meditation errors. We are a bit stuck scratching our heads because it is not easily repeatable. The timeouts are set to 30s (connect and first byte), yet the 503 page shows instantly, not after 30s. Then if you refresh immediately it may very well work instantly, and sometimes for 100 refreshes. The load average on the web servers is < 1, and on the DB server is < 3 (all servers - web, db, pfsense/varnish - are physical rather than VMs). I would have thought that if the timeouts were being hit, the 503 page would only appear after 30s; am I mistaken? Also, when an error happens there does not appear to be any corresponding error in Apache's log files. This seems to affect pages as well as images, so it is possible to have the page load fine, with 9 out of 10 images on the page fine but 1 failing. An example of the varnish debug is below. It says no backend connection, but I can't figure out why; if the load were high on Apache I could understand it being flaky. The machines are on the same gigabit Ethernet LAN. 21 ReqStart c *IP-REMOVED* 33418 1274368062 21 RxRequest c GET 21 RxURL c /fashion/ 21 RxProtocol c HTTP/1.1 21 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.5) Gecko/2008121622 Fedora/3.0.5-1.fc10 Firefox/3.0.5 21 RxHeader c Host: *ourdomain.com* 21 RxHeader c Accept: */* 21 RxHeader c Accept-Encoding: deflate, gzip 21 VCL_call c recv lookup 21 VCL_call c hash 21 Hash c /fashion/ 21 Hash c *ourdomain.com* 21 VCL_return c hash 21 VCL_call c miss fetch 21 FetchError c no backend connection 21 VCL_call c error restart 21 VCL_call c recv lookup 21 VCL_call c hash 21 Hash c /fashion/ 21 Hash c *ourdomain.com* 21 VCL_return c hash 21 VCL_call c miss fetch 21 FetchError c no backend connection 21 VCL_call c error restart 21 VCL_call c recv lookup 21 VCL_call c hash 21 Hash c /fashion/ 21 Hash c *ourdomain.com* 21 VCL_return c hash 21 VCL_call c miss fetch 21 FetchError c no backend connection 21 VCL_call c error deliver 21 VCL_call c deliver deliver 21 TxProtocol c HTTP/1.1 21 TxStatus c 503 21 TxResponse c Service Unavailable 21 TxHeader c Server: Varnish 21 TxHeader c Content-Type: text/html; charset=utf-8 21 TxHeader c Content-Length: 384 21 TxHeader c Accept-Ranges: bytes 21 TxHeader c Date: Wed, 11 Apr 2012 10:36:17 GMT 21 TxHeader c X-Varnish: 1274368062 21 TxHeader c Age: 0 21 TxHeader c Via: 1.1 varnish 21 TxHeader c Connection: close 21 TxHeader c X-Cache: MISS 21 Length c 384 21 ReqEnd c 1274368062 1334140577.449995041 1334140577.450334787 1.794108152 0.000282764 0.000056982
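
    A hedged observation: an instant 503 with "FetchError no backend connection" means Varnish declined or failed the connect outright — e.g. the backend's .max_connections was exhausted or the connect was refused — rather than waiting out a 30s timeout. Making the limits explicit on the backend definition makes this easier to rule in or out (values illustrative, Varnish 2.x syntax):

      backend app1 {
        .host = "10.0.0.10";       # illustrative backend address
        .port = "80";
        .connect_timeout = 30s;
        .first_byte_timeout = 30s;
        .max_connections = 250;    # instant 503s appear when exhausted
      }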


  • apcupsd on Linux does not report on APC BackUPS Pro 900

    - by lserni
    From what documentation I could find, the UPS should be (is!) supported by Linux and ought to work with apcupsd. I looked for specific problems such as the infamous Microlink protocol, and found none. I have found a feedback from a guy in UK that reports using this very model on a not-too-different OS version (his OpenSuSE 12.1, mine 12.3 x86_64). The USB port is detected, lsusb reports Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply and lsusb -v -s002:003 confirms and expands: Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x051d American Power Conversion idProduct 0x0002 Uninterruptible Power Supply bcdDevice 0.90 iManufacturer 1 American Power Conversion iProduct 2 Back-UPS RS 900G FW:879.L4 .I USB FW:L4 bNumConfigurations 1 Configuration Descriptor: [...] Interface Descriptor: [...] bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.00 bCountryCode 33 US bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 1134 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 100 Device Status: 0x0000 (Bus Powered) The kernel recognizes this and duly sets up crw------- 1 root root 180, 96 Nov 4 16:11 /dev/usb/hiddev0 As far as I know, everything is as it should be. I have put the standard configuration in /etc/apcupsd/apcupsd.conf (which is Unix-terminated, ASCII-only, no BOM (just in case)) UPSCABLE usb UPSTYPE usb DEVICE (I have also tried commenting out DEVICE, and setting a device of /dev/puppa results in an access attempt to /dev/puppa, not some /var/lib/dev/puppa or /dev/puppa\r\n). Yet, what apcaccess tells me is VERSION : 3.14.10 (13 September 2011) suse CABLE : USB Cable DRIVER : USB UPS Driver UPSMODE : Stand Alone STARTTIME: 2013-11-04 16:24:22 +0100 MODEL : STATUS : NOBATT LINEV : 000.0 Volts LOADPCT : 0.0 Percent Load Capacity BCHARGE : 000.0 Percent TIMELEFT : 0.0 Minutes MBATTCHG : 5 Percent MINTIMEL : 3 Minutes MAXTIME : 0 Seconds SENSE : Low LOTRANS : 000.0 Volts HITRANS : 000.0 Volts It doesn't recognize the model, and reports no battery (and no voltage). This confirms that it's not the Microlink problem, or it would report the battery status, if precious little else. If I disconnect the USB cable, I get an apcupsd message to the effect that communications have been lost; and I get the "communication restored" broadcast too, if I reconnect the cable. apcupsd is monitoring. So everything tells me that it should work -- only it doesn't. Does anyone spot what I'm missing?
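
    apcupsd ships an interactive probe, apctest, which talks to the UPS directly with the daemon stopped and shows what the USB driver can actually read — useful for separating a HID/cable problem from an apcupsd.conf problem. A sketch:

      /etc/init.d/apcupsd stop   # apctest cannot run while apcupsd holds the device
      apctest                    # then pick the status/battery test items from the menu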


  • What Are All the Variables Necessary to Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance, missing from the hit tracking log formats: keep alive status, remote port, child processes, bytes sent, etc. LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox I'm trying to recreate as much of the format for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names. access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/O?/B?' Here's a table of the variables I've been able to map from the Nginx documentation. %a = $remote_addr - The IP address of the remote client. %S = $remote_port - The port of the remote client. %X = ? - Keep alive status. %t = $time_local - The start time of the request. %r = $request - The first line of request containing method verb, path and protocol. %s = ? - Status before any redirections. %>s = $status - Status after any redirections. %{pid}P = $pid - The process id. %{tid}P = N/A - The thread id, which is not applicable to Nginx. %T = ? - The time in seconds to handle the request. %D = ? - The time in milliseconds to handle the request. %I = ? - The count of bytes received including headers. %O = ? - The count of bytes sent including headers. %B = ? - The count of bytes sent excluding headers, but with a 0 for none instead of '-'. Looking for help filling in the missing variables, or confirmation that the missing variables are in fact unavailable in Nginx.


  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log that records a lot of detailed information missing in the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format, like a blackbox flight recorder on an aircraft, that gathers information used to profile server performance, missing from the hit tracking log formats: keep alive status, remote port, child processes, bytes sent, etc. LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox I'm trying to recreate as much of the format for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like; the unmapped Apache directives have question marks after their names. access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"' 's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent' Here's a table of the variables I've been able to map from the Nginx documentation. %a = $remote_addr - The IP address of the remote client. %S = $remote_port - The port of the remote client. %X = ? - Keep alive status. %t = $time_local - The start time of the request. %r = $request - The first line of request containing method verb, path and protocol. %s = ? - Status before any redirections. %>s = $status - Status after any redirections. %{pid}P = $pid - The process id. %{tid}P = N/A - The thread id, which is not applicable to Nginx. %T = ? - The time in seconds to handle the request. %D = $request_time - The time in milliseconds to handle the request. %I = ? - The count of bytes received including headers. %O = $bytes_sent - The count of bytes sent including headers. %B = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'. Looking for help filling in the missing variables, or confirmation that the missing variables are in fact unavailable in Nginx.
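
    Pulling the confirmed mappings together, a format along these lines is expressible with documented nginx variables ($request_length covers %I, and $connection_requests — nginx 1.1.18+ — is the closest stand-in for keep-alive status); the pre-redirect status %s and the child/thread ids have no nginx equivalent:

      log_format blackbox '$remote_addr/$remote_port $connection_requests [$time_local] '
                          '"$request" -/$status $pid/0 $request_time '
                          '$request_length/$bytes_sent/$body_bytes_sent';
      access_log /var/log/nginx/blackbox.log blackbox;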

    Read the article

  • Ubuntu stopped recognizing my iPod

    - by flashnode
    Rhythmbox on Ubuntu 10.10 used to recognize my 3rd-gen Nano and transfer MP3s. Now when I plug it in, Ubuntu no longer pops up the dialog asking what I want to do; the iPod is only recognized if it is already plugged in when I reboot. Here is the output of 'lsusb -v -s bus:device':

    Bus 001 Device 008: ID 05ac:1262 Apple, Inc. iPod Nano 3.Gen
    Device Descriptor:
      bLength 18
      bDescriptorType 1
      bcdUSB 2.00
      bDeviceClass 0 (Defined at Interface level)
      bDeviceSubClass 0
      bDeviceProtocol 0
      bMaxPacketSize0 64
      idVendor 0x05ac Apple, Inc.
      idProduct 0x1262 iPod Nano 3.Gen
      bcdDevice 0.01
      iManufacturer 1 Apple Inc.
      iProduct 2 iPod
      iSerial 3 000A27001A670128
      bNumConfigurations 2
      Configuration Descriptor:
        bLength 9
        bDescriptorType 2
        wTotalLength 32
        bNumInterfaces 1
        bConfigurationValue 1
        iConfiguration 0
        bmAttributes 0xc0 Self Powered
        MaxPower 500mA
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 0
          bAlternateSetting 0
          bNumEndpoints 2
          bInterfaceClass 8 Mass Storage
          bInterfaceSubClass 6 SCSI
          bInterfaceProtocol 80 Bulk (Zip)
          iInterface 0
          Endpoint Descriptor:
            bLength 7
            bDescriptorType 5
            bEndpointAddress 0x83 EP 3 IN
            bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data
            wMaxPacketSize 0x0200 1x 512 bytes
            bInterval 0
          Endpoint Descriptor:
            bLength 7
            bDescriptorType 5
            bEndpointAddress 0x02 EP 2 OUT
            bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data
            wMaxPacketSize 0x0200 1x 512 bytes
            bInterval 0
      Configuration Descriptor:
        bLength 9
        bDescriptorType 2
        wTotalLength 149
        bNumInterfaces 3
        bConfigurationValue 2
        iConfiguration 4 iPod USB Interface
        bmAttributes 0xc0 Self Powered
        MaxPower 500mA
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 0
          bAlternateSetting 0
          bNumEndpoints 0
          bInterfaceClass 1 Audio
          bInterfaceSubClass 1 Control Device
          bInterfaceProtocol 0
          iInterface 0
          AudioControl Interface Descriptor:
            bLength 9
            bDescriptorType 36
            bDescriptorSubtype 1 (HEADER)
            bcdADC 1.00
            wTotalLength 30
            bInCollection 1
            baInterfaceNr( 0) 1
          AudioControl Interface Descriptor:
            bLength 12
            bDescriptorType 36
            bDescriptorSubtype 2 (INPUT_TERMINAL)
            bTerminalID 1
            wTerminalType 0x0201 Microphone
            bAssocTerminal 2
            bNrChannels 2
            wChannelConfig 0x0003 Left Front (L) Right Front (R)
            iChannelNames 0
            iTerminal 0
          AudioControl Interface Descriptor:
            bLength 9
            bDescriptorType 36
            bDescriptorSubtype 3 (OUTPUT_TERMINAL)
            bTerminalID 2
            wTerminalType 0x0101 USB Streaming
            bAssocTerminal 1
            bSourceID 1
            iTerminal 0
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 1
          bAlternateSetting 0
          bNumEndpoints 0
          bInterfaceClass 1 Audio
          bInterfaceSubClass 2 Streaming
          bInterfaceProtocol 0
          iInterface 0
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 1
          bAlternateSetting 1
          bNumEndpoints 1
          bInterfaceClass 1 Audio
          bInterfaceSubClass 2 Streaming
          bInterfaceProtocol 0
          iInterface 0
          AudioStreaming Interface Descriptor:
            bLength 7
            bDescriptorType 36
            bDescriptorSubtype 1 (AS_GENERAL)
            bTerminalLink 2
            bDelay 1 frames
            wFormatTag 1 PCM
          AudioStreaming Interface Descriptor:
            bLength 35
            bDescriptorType 36
            bDescriptorSubtype 2 (FORMAT_TYPE)
            bFormatType 1 (FORMAT_TYPE_I)
            bNrChannels 2
            bSubframeSize 2
            bBitResolution 16
            bSamFreqType 9 Discrete
            tSamFreq[ 0] 8000
            tSamFreq[ 1] 11025
            tSamFreq[ 2] 12000
            tSamFreq[ 3] 16000
            tSamFreq[ 4] 22050
            tSamFreq[ 5] 24000
            tSamFreq[ 6] 32000
            tSamFreq[ 7] 44100
            tSamFreq[ 8] 48000
          Endpoint Descriptor:
            bLength 9
            bDescriptorType 5
            bEndpointAddress 0x81 EP 1 IN
            bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data
            wMaxPacketSize 0x00c0 1x 192 bytes
            bInterval 4
            bRefresh 0
            bSynchAddress 0
            AudioControl Endpoint Descriptor:
              bLength 7
              bDescriptorType 37
              bDescriptorSubtype 1 (EP_GENERAL)
              bmAttributes 0x01 Sampling Frequency
              bLockDelayUnits 0 Undefined
              wLockDelay 0 Undefined
        Interface Descriptor:
          bLength 9
          bDescriptorType 4
          bInterfaceNumber 2
          bAlternateSetting 0
          bNumEndpoints 1
          bInterfaceClass 3 Human Interface Device
          bInterfaceSubClass 0 No Subclass
          bInterfaceProtocol 0 None
          iInterface 0
          HID Device Descriptor:
            bLength 9
            bDescriptorType 33
            bcdHID 1.01
            bCountryCode 0 Not supported
            bNumDescriptors 1
            bDescriptorType 34 Report
            wDescriptorLength 208
            Report Descriptors:
              ** UNAVAILABLE **
          Endpoint Descriptor:
            bLength 7
            bDescriptorType 5
            bEndpointAddress 0x83 EP 3 IN
            bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data
            wMaxPacketSize 0x0040 1x 64 bytes
            bInterval 1
    Device Qualifier (for other device speed):
      bLength 10
      bDescriptorType 6
      bcdUSB 2.00
      bDeviceClass 0 (Defined at Interface level)
      bDeviceSubClass 0
      bDeviceProtocol 0
      bMaxPacketSize0 64
      bNumConfigurations 2
    Device Status: 0x0000 (Bus Powered)

    This Ubuntu forum post told me to check the automount setting under /apps/nautilus/preferences/media_automount_open in gconf-editor, and I did that. Any clues?
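    For what it's worth, here is a quick sketch of how to inspect and flip that gconf key from a terminal, and to check whether the kernel still notices the hotplug event at all. The key path is the one from the question; the rest is standard Ubuntu 10.10 tooling:

    gconftool-2 --get /apps/nautilus/preferences/media_automount_open
    gconftool-2 --set /apps/nautilus/preferences/media_automount_open --type bool true
    udevadm monitor    # leave running, plug the iPod in, and watch for add events
    dmesg | tail       # kernel messages from the most recent hotplug

    If udevadm shows no events on plug-in, the problem is below the desktop layer (cable, port, or kernel), not in the automount setting.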

    Read the article

  • PHP-FPM performing worse than mod_php

    - by lordstyx
    Recently the website I maintain has been growing a lot, and I saw the point coming where I'd want to switch from Apache to Nginx, because I kept reading that it performs much better. Now that I've made the switch, I have to say Nginx is keeping up just fine. However, PHP-FPM has become a problem. Where the PHP pages used to take 0.1 seconds to generate, under the same load they now take around 3 seconds! Furthermore, the Nginx error.log is being spammed with errors like:

    upstream timed out (110: Connection timed out) while connecting to upstream, client: ...

    I also tried using Unix sockets instead, but those complain about the following:

    connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    I've fiddled with settings here and there, but nothing seems to work. Changing pm.max_children doesn't seem to help much either, but at its current value of 350 it seems to be the lesser of all evils. The server has 3 GB of RAM (not all of it free, since a MySQL server is also running) along with two dual-core processors (4 cores in total). Am I doing something majorly wrong with the settings here, or is the server simply not capable enough? EDIT: Here is the nginx server block:

    server {
        listen 80;
        listen [::]:80 default ipv6only=on;
        root /var/www;
        index index.php index.html index.htm;
        server_name localhost;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /doc/ {
            alias /usr/share/doc/;
            autoindex on;
            allow 127.0.0.1;
            deny all;
        }

        location = /50x.html {
            root /usr/share/nginx/www;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            try_files $uri =404;
            # With php5-cgi alone:
            fastcgi_pass 127.0.0.1:9000;
            # With php5-fpm:
            #fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }

        location ~ /\.ht {
            deny all;
        }
    }

    And the php-fpm pool:

    [www]
    user = www-data
    group = www-data
    listen = 127.0.0.1:9000
    ;listen = /tmp/php5-fpm.sock
    listen.backlog = -1
    pm = dynamic
    pm.max_children = 350
    pm.start_servers = 200
    pm.min_spare_servers = 10
    pm.max_spare_servers = 350
    pm.max_requests = 1536
    rlimit_files = 65536
    rlimit_core = unlimited
    chdir = /
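    As a point of comparison, here is a memory-based sizing sketch for the pool. The 40 MB per-worker figure and the assumption that roughly 1.5 GB of the 3 GB remains after MySQL are guesses to replace with real numbers from ps or top:

    ; pm.max_children ~ usable RAM / per-worker RSS, e.g. 1500 MB / 40 MB ~ 37
    pm = dynamic
    pm.max_children = 37
    pm.start_servers = 12
    pm.min_spare_servers = 6
    pm.max_spare_servers = 18
    ; recycle workers periodically in case of slow memory leaks
    pm.max_requests = 500

    With only 4 cores, 350 concurrent PHP workers mostly compete for CPU and push the box toward swap, which would explain the jump from 0.1 s to 3 s; far fewer workers plus a finite listen.backlog is usually the less painful trade.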

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36  | Next Page >