Search Results

Search found 3942 results on 158 pages for 'logged'.


  • Configuring multiple WCF binding configurations for the same scheme doesn't work

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username authenticated, the account that a client (of my web application) uses to logon ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" bindingConfiguration="public" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" bindingConfiguration="public" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want live together in my host. When I remove either of them, all's fine but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking into the right direction or is there a better way to solve this?

    Read the article

  • Write-error on swap-device, Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK

    - by Jan
    My root server at 1&1 was unresponsive on HTTP and SSH, so I logged into the serial console . It flooded my connection with endless error messages like quoted below. I initiated a reboot and now everything seems to work properly. After googling, I installed smartctl and ran a short self test, which told me the device was healthy. Is this likely a disk failure soon to happen or could it be just some program going wild? I assume, the swap device could also grow full when huge amounts of memory get consumed by a buggy program? How can I find out for sure? The sever was already unresponsive a week ago when I just restarted it without proper investigation. The server is running on CentOS. Write-error on swap-device (8:16:8351055) Write-error on swap-device (8:16:8351063) Write-error on swap-device (8:16:8351071) Write-error on swap-device (8:16:8351079) Write-error on swap-device (8:16:8351087) Write-error on swap-device (8:16:8351095) Write-error on swap-device (8:16:8351103) Write-error on swap-device (8:16:8351111) Write-error on swap-device (8:16:8351119) Write-error on swap-device (8:16:8351127) Write-error on swap-device (8:16:8351135) Write-error on swap-device (8:16:8351143) Write-error on swap-device (8:16:8351151) Write-error on swap-device (8:16:8351159) Write-error on swap-device (8:16:8351167) Write-error on swap-device (8:16:8351175) Write-error on swap-device (8:16:8351183) Write-error on swap-device (8:16:8351191) sd 1:0:0:0: [sdb] Unhandled error code sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 9c 00 ef 00 00 08 00 end_request: I/O error, dev sdb, sector 10223855 Write-error on swap-device (8:16:10223863) sd 1:0:0:0: [sdb] Unhandled error code sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 9c 0e 97 00 00 10 00 end_request: I/O error, dev sdb, sector 10227351 Write-error on swap-device (8:16:10227359) Write-error on swap-device (8:16:10227367) sd 1:0:0:0: [sdb] Unhandled error code sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 9c b0 1f 00 00 10 00 end_request: I/O error, dev sdb, sector 10268703 Write-error on swap-device (8:16:10268711) Write-error on swap-device (8:16:10268719) sd 1:0:0:0: [sdb] Unhandled error code sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 a0 84 7f 00 00 08 00 end_request: I/O error, dev sdb, sector 10519679 Write-error on swap-device (8:16:10519687) sd 1:0:0:0: [sdb] Unhandled error code sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 a7 26 af 00 04 00 00 end_request: I/O error, dev sdb, sector 10954415 Write-error on swap-device (8:16:10954423) Write-error on swap-device (8:16:10954431) Write-error on swap-device (8:16:10954439) Write-error on swap-device (8:16:10954447) Write-error on swap-device (8:16:10954455) Write-error on swap-device (8:16:10954463) Write-error on swap-device (8:16:10954471) Write-error on swap-device (8:16:10954479) Write-error on swap-device (8:16:10954487) Write-error on swap-device (8:16:10954495) Write-error on swap-device (8:16:10954503) Write-error on swap-device (8:16:10954511) Write-error on swap-device (8:16:10954519) Write-error on swap-device (8:16:10954527) Write-error on swap-device (8:16:10954535) Write-error on swap-device (8:16:10954543) Write-error on swap-device 
(8:16:10954551) Write-error on swap-device (8:16:10954559) Write-error on swap-device (8:16:10954567) Write-error on swap-device (8:16:10954575) Write-error on swap-device (8:16:10954583) Write-error on swap-device (8:16:10954591) Write-error on swap-device (8:16:10954599) Write-error on swap-device (8:16:10954607) Write-error on swap-device (8:16:10954615) Write-error on swap-device (8:16:10954623) Write-error on swap-device (8:16:10954631) Write-error on swap-device (8:16:10954639) Write-error on swap-device (8:16:10954647) Write-error on swap-device (8:16:10954655) Write-error on swap-device (8:16:10954663) Write-error on swap-device (8:16:10954671) Write-error on swap-device (8:16:10954679) Write-error on swap-device (8:16:10954687) Write-error on swap-device (8:16:10954695) Write-error on swap-device (8:16:10954703) Write-error on swap-device (8:16:10954711) Write-error on swap-device (8:16:10954719) Write-error on swap-device (8:16:10954727) Write-error on swap-device (8:16:10954735) Write-error on swap-device (8:16:10954743) Write-error on swap-device (8:16:10954751) Write-error on swap-device (8:16:10954759) Write-error on swap-device (8:16:10954767) Write-error on swap-device (8:16:10954775) Write-error on swap-device (8:16:10954783) Write-error on swap-device (8:16:10954791) Write-error on swap-device (8:16:10954799) Write-error on swap-device (8:16:10954807) Write-error on swap-device (8:16:10954815) Write-error on swap-device (8:16:10954823) Write-error on swap-device (8:16:10954831) Write-error on swap-device (8:16:10954839) Write-error on swap-device (8:16:10954847) Write-error on swap-device (8:16:10954855) Write-error on swap-device (8:16:10954863) Write-error on swap-device (8:16:10954871) Write-error on swap-device (8:16:10954879) Write-error on swap-device (8:16:10954887) Write-error on swap-device (8:16:10954895) Write-error on swap-device (8:16:10954903) Write-error on swap-device (8:16:10954911) Write-error on swap-device (8:16:10954919) Write-error on swap-device (8:16:10954927) Write-error on swap-device (8:16:10954935) Write-error on swap-device (8:16:10954943) Write-error on swap-device (8:16:10954951) Write-error on swap-device (8:16:10954959) Write-error on swap-device (8:16:10954967) Write-error on swap-device (8:16:10954975) Write-error on swap-device (8:16:10954983) Write-error on swap-device (8:16:10954991) Write-error on swap-device (8:16:10954999) Write-error on swap-device (8:16:10955007) Write-error on swap-device (8:16:10955015) Write-error on swap-device (8:16:10955023) Write-error on swap-device (8:16:10955031) Write-error on swap-device (8:16:10955039) Write-error on swap-device (8:16:10955047) Write-error on swap-device (8:16:10955055) Write-error on swap-device (8:16:10955063) Write-error on swap-device (8:16:10955071) Write-error on swap-device (8:16:10955079) Write-error on swap-device (8:16:10955087) Write-error on swap-device (8:16:10955095) Write-error on swap-device (8:16:10955103) Write-error on swap-device (8:16:10955111) Write-error on swap-device (8:16:10955119) Write-error on swap-device (8:16:10955127) Write-error on swap-device (8:16:10955135) Write-error on swap-device (8:16:10955143) Write-error on swap-device (8:16:10955151) Write-error on swap-device (8:16:10955159) Write-error on swap-device (8:16:10955167) Write-error on swap-device (8:16:10955175) Write-error on swap-device (8:16:10955183)
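    On the "How can I find out for sure?" part: the pass/fail verdict of a short self-test says less than the drive's raw SMART counters. Below is a minimal Python sketch (an illustration added here, not part of the original question) that shells out to smartctl for /dev/sdb, the device named in the log; it assumes smartmontools is installed and that the drive reports the usual attribute names, which vary by vendor.

        #!/usr/bin/env python
        # Sketch: report a few SMART counters for the disk that threw the write
        # errors (/dev/sdb in the log above). Assumes smartmontools is installed;
        # attribute names differ between drives, so treat the list as illustrative.
        import subprocess

        DEVICE = "/dev/sdb"
        WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

        def smart_attributes(device):
            out = subprocess.check_output(["smartctl", "-A", device]).decode("utf-8", "replace")
            attrs = {}
            for line in out.splitlines():
                fields = line.split()
                # Attribute rows look like:
                # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
                if len(fields) >= 10 and fields[0].isdigit():
                    attrs[fields[1]] = fields[9]
            return attrs

        if __name__ == "__main__":
            counters = smart_attributes(DEVICE)
            for name in WATCH:
                print("%-25s %s" % (name, counters.get(name, "not reported")))

    Rising raw values for reallocated or pending sectors generally point at the disk itself; if they stay at zero, a runaway process filling memory and hammering swap becomes the more likely explanation.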

    Read the article

  • How to configure multiple distinct WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username authenticated, the account that a client (of my web application) uses to logon ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want live together in my host. When I remove either of them, all's fine but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking into the right direction or is there a better way to solve this?

    Read the article

  • How does this decorator make a call to the 'register' method?

    - by BryanWheelock
    I'm trying to understand what is going on in the decorator @not_authenticated. The next step in the TraceRoute is to the method 'register' which is also located in django_authopenid/views.py which I just don't understand because I don't see anywhere that register is even mentioned in signin() How is the method 'register' called? def not_authenticated(func): """ decorator that redirect user to next page if he is already logged.""" def decorated(request, *args, **kwargs): if request.user.is_authenticated(): next = request.GET.get("next", "/") return HttpResponseRedirect(next) return func(request, *args, **kwargs) return decorated @not_authenticated def signin(request,newquestion=False,newanswer=False): """ signin page. It manage the legacy authentification (user/password) and authentification with openid. url: /signin/ template : authopenid/signin.htm """ request.encoding = 'UTF-8' on_failure = signin_failure next = clean_next(request.GET.get('next')) form_signin = OpenidSigninForm(initial={'next':next}) form_auth = OpenidAuthForm(initial={'next':next}) if request.POST: if 'bsignin' in request.POST.keys() or 'openid_username' in request.POST.keys(): form_signin = OpenidSigninForm(request.POST) if form_signin.is_valid(): next = clean_next(form_signin.cleaned_data.get('next')) sreg_req = sreg.SRegRequest(optional=['nickname', 'email']) redirect_to = "%s%s?%s" % ( get_url_host(request), reverse('user_complete_signin'), urllib.urlencode({'next':next}) ) return ask_openid(request, form_signin.cleaned_data['openid_url'], redirect_to, on_failure=signin_failure, sreg_request=sreg_req) elif 'blogin' in request.POST.keys(): # perform normal django authentification form_auth = OpenidAuthForm(request.POST) if form_auth.is_valid(): user_ = form_auth.get_user() login(request, user_) next = clean_next(form_auth.cleaned_data.get('next')) return HttpResponseRedirect(next) question = None if newquestion == True: from forum.models import AnonymousQuestion as AQ session_key = request.session.session_key qlist = AQ.objects.filter(session_key=session_key).order_by('-added_at') if len(qlist) > 0: question = qlist[0] answer = None if newanswer == True: from forum.models import AnonymousAnswer as AA session_key = request.session.session_key alist = AA.objects.filter(session_key=session_key).order_by('-added_at') if len(alist) > 0: answer = alist[0] return render('authopenid/signin.html', { 'question':question, 'answer':answer, 'form1': form_auth, 'form2': form_signin, 'msg': request.GET.get('msg',''), 'sendpw_url': reverse('user_sendpw'), }, context_instance=RequestContext(request)) Looking at the request, it seems that account/register/ does reference the register method with 'PATH_INFO': u'/account/register/' Here is the request: <WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'username': [u'BryanWheelock'], u'email': [u'[email protected]'], u'bnewaccount': [u'Signup']}>, COOKIES:{'__utma': '127460431.1218630960.1266769637.1266769637.1266864494.2', '__utmb': '127460431.3.10.1266864494', '__utmc': '127460431', '__utmz': '127460431.1266769637.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)', 'sessionid': 'fb15ee538320170a22d3a3a324aad968'}, META:{'CONTENT_LENGTH': '74', 'CONTENT_TYPE': 'application/x-www-form-urlencoded', 'DOCUMENT_ROOT': '/usr/local/apache2/htdocs', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTP_ACCEPT': 'application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', 'HTTP_ACCEPT_ENCODING': 'gzip,deflate,sdch', 
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8', 'HTTP_CACHE_CONTROL': 'max-age=0', 'HTTP_CONNECTION': 'close', 'HTTP_COOKIE': '__utmz=127460431.1266769637.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utma=127460431.1218630960.1266769637.1266769637.1266864494.2; __utmc=127460431; __utmb=127460431.3.10.1266864494; sessionid=fb15ee538320170a22d3a3a324aad968', 'HTTP_HOST': 'workproject.com', 'HTTP_ORIGIN': 'http://workproject.com', 'HTTP_REFERER': 'http://workproject.com/account/signin/complete/?next=%2F&janrain_nonce=2010-02-22T18%3A49%3A53ZG2KXci&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fud&openid.response_nonce=2010-02-22T18%3A49%3A53Znxxxxxxxxxw&openid.return_to=http%3A%2F%2Fworkproject.com%2Faccount%2Fsignin%2Fcomplete%2F%3Fnext%3D%252F%26janrain_nonce%3D2010-02-22T18%253A49%253A53ZG2KXci&openid.assoc_handle=AOQobUepU4xs-kGg5LiyLzfN3RYv0I0Jocgjf_1odT4RR9zfMFpQVpMg&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle&openid.sig=Jf76i2RNhqpLTJMjeQ0nnQz6fgA%3D&openid.identity=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItxxxxxxxxxs9CxHQ3PrHw_N5_3j1HM&openid.claimed_id=https%3A%2F%2Fwww.google.com%2Faccounts%2Fo8%2Fid%3Fid%3DAItOaxxxxxxxxxxx4s9CxHQ3PrHw_N5_3j1HM', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.7 Safari/532.9', 'HTTP_X_FORWARDED_FOR': '96.8.31.235', 'PATH': '/usr/bin:/bin', 'PATH_INFO': u'/account/register/', 'PATH_TRANSLATED': '/home/spirituality/webapps/work/spirit_app.wsgi/account/register/', 'QUERY_STRING': '', 'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '59956', 'REQUEST_METHOD': 'POST', 'REQUEST_URI': '/account/register/', 'SCRIPT_FILENAME': '/home/spirituality/webapps/spirituality/spirit_app.wsgi', 'SCRIPT_NAME': u'', 'SERVER_ADDR': '127.0.0.1', 'SERVER_ADMIN': '[no address given]', 'SERVER_NAME': 'workproject.com', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.0', 'SERVER_SIGNATURE': '', 'SERVER_SOFTWARE': 'Apache/2.2.12 (Unix) mod_wsgi/2.5 Python/2.5.4', 'mod_wsgi.application_group': 'www.workProject.com|', 'mod_wsgi.callable_object': 'application', 'mod_wsgi.listener_host': '', 'mod_wsgi.listener_port': '25931', 'mod_wsgi.process_group': '', 'mod_wsgi.reload_mechanism': '0', 'mod_wsgi.script_reloading': '1', 'mod_wsgi.version': (2, 5), 'wsgi.errors': <mod_wsgi.Log object at 0xb7ce0038>, 'wsgi.file_wrapper': <built-in method file_wrapper of mod_wsgi.Adapter object at 0xb7e94b18>, 'wsgi.input': <mod_wsgi.Input object at 0x999cc78>, 'wsgi.multiprocess': True, 'wsgi.multithread': False, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0)}>
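    On the decorator mechanics themselves: @not_authenticated is just shorthand for signin = not_authenticated(signin), so the only thing it can ever do is either redirect an already-authenticated user or call the wrapped signin view; nothing in it calls register. The POST to /account/register/ visible in the WSGI request is almost certainly produced by URL routing (the OpenID-completion flow sending a not-yet-registered user to the registration view declared in urls.py), not by the decorator. A self-contained Python sketch of the same pattern, with the Django objects replaced by stand-ins so it runs on its own:

        # Minimal stand-ins for request.user and request.GET, just to exercise the decorator.
        class FakeUser(object):
            def __init__(self, authenticated):
                self._authenticated = authenticated
            def is_authenticated(self):
                return self._authenticated

        class FakeRequest(object):
            def __init__(self, user, next_url="/"):
                self.user = user
                self.GET = {"next": next_url}

        def not_authenticated(func):
            """Redirect users who are already logged in; otherwise run the view."""
            def decorated(request, *args, **kwargs):
                if request.user.is_authenticated():
                    return "redirect to %s" % request.GET.get("next", "/")
                return func(request, *args, **kwargs)
            return decorated

        @not_authenticated              # same as: signin = not_authenticated(signin)
        def signin(request):
            return "render signin page"

        print(signin(FakeRequest(FakeUser(False))))            # -> render signin page
        print(signin(FakeRequest(FakeUser(True), "/home/")))   # -> redirect to /home/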

    Read the article

  • Un-failing over a Cisco PIX 515e

    - by ABrown
    We had a power outage at our data center last week and when our dual PIX 515E running IOS 7.0(8) (configured with a failover cable) came back, they were in a failed over state where the Secondary unit is active and the Primary unit is standby I have tried 'failover reset', 'failover active', and 'failover reload-standby' as well as executing reloads on both units in a variety of orders, and they don't come back Primary/Active Secondary/Standby. The only thing in my arsenal that I haven't tried is driving to the data center and performing a hard reboot, which I hate to do. I have read How Failover Works on the Cisco Secure Firewall and it seems like this should be wicked straight forward. output of show failover on Primary: Failover On Cable status: Normal Failover unit Primary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:52:05 UTC Mar 10 2010 This host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Other host: Secondary - Active Active time: 897045 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. output of show failover on Secondary: Failover On Cable status: Normal Failover unit Secondary Failover LAN Interface: N/A - Serial-based failover enabled Unit Poll frequency 15 seconds, holdtime 45 seconds Interface Poll frequency 15 seconds Interface Policy 1 Monitored Interfaces 2 of 250 maximum Version: Ours 7.0(8), Mate 7.0(8) Last Failover at: 02:03:04 UTC Feb 28 2010 This host: Secondary - Active Active time: 896925 (sec) Interface outside (x.x.x.164): Normal Interface inside (y.y.y.4): Normal Other host: Primary - Standby Ready Active time: 0 (sec) Interface outside (x.x.x.165): Normal Interface inside (y.y.y.3): Normal Stateful Failover Logical Update Statistics Link : Unconfigured. I'm seeing the following in my syslog: Mar 10 03:05:00 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:05:09 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reload-standby' command. Mar 10 03:05:12 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:05:12 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:06:09 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:06:10 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:06:10 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:06:23 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:06:23 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:06:24 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. Mar 10 03:07:05 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. 
Mar 10 03:07:31 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover active' command. Mar 10 03:08:04 fw1 %PIX-5-611103: User logged out: Uname: enable_1 Mar 10 03:08:04 fw1 %PIX-6-315011: SSH session from admin1_int on interface inside for user "pix" terminated normally Mar 10 03:08:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed. Mar 10 03:08:39 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed. Mar 10 03:09:10 fw1 %PIX-6-605005: Login permitted from admin1_int/36891 to inside:192.168.4.4/ssh for user "pix" Mar 10 03:09:23 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command. Mar 10 03:09:38 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed. Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up. Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed. Mar 10 03:09:52 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready. Mar 10 03:09:52 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready. Mar 10 03:09:53 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready. I'm not exactly sure how to interpret that syslog data. Primary doesn't seem to even try to become Active. When I reload the individual units separately, my connections are retained, so it doesn't seem like I have a real hardware failure. Is there something I can query (IOS or SNMP) to check for hardware issues? Any thoughts? My IOS-fu is weak. Thanks for any help you might provide, Aaron

    Read the article

  • JBoss logging issue

    - by balaji
    Our application is deployed on JBoss AS 4.0.x and we face an issue with JBoss logging. Whenever the server is restarted, JBoss stops logging and server.log is no longer updated. We then run touch on log4j.xml, which makes JBoss recreate the log files and resume logging. This happens on both nodes, and we cannot keep touching the file after every restart, so please help me fix it; I cannot figure out where the problem is. For any other issue we would check the log files, but if the log itself is not being written, how can we investigate without recent logs? Contents of log4j.xml, copied from the comments below:
        <appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
            <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
            <param name="File" value="${jboss.server.log.dir}/server.log"/>
            <param name="Append" value="false"/>
            <param name="DatePattern" value="'.'yyyy-MM-dd"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
            </layout>
        </appender>
        <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
            <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
            <param name="Target" value="System.out"/>
            <param name="Threshold" value="INFO"/>
            <layout class="org.apache.log4j.PatternLayout">
                <!-- The default pattern: Date Priority [Category] Message\n -->
                <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}] %m%n"/>
            </layout>
        </appender>
        <root>
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="FILE"/>
        </root>
        <category name="org.apache">
            <priority value="INFO"/>
        </category>
        <category name="org.apache.axis">
            <priority value="INFO"/>
        </category>
        <category name="org.jgroups">
            <priority value="WARN"/>
        </category>
        <category name="jacorb">
            <priority value="WARN"/>
        </category>
        <category name="org.jboss.management">
            <priority value="INFO"/>
        </category>

    Read the article

  • Server overloaded with log messages: tty_release_dev: pts0: read/write wait queue active!

    - by Raph
    In the logs, I have this (extract from the full kernel messages logges at 06:01:14): Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500) And then the server logs flooded by this message: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active! In the end, 2 hours later, I had to reboot because it had become inaccessible: the load hat grown to 160%. The last command does not show anyone logged on pts0 at that time. I also don't know where this telnet process could come from.... This is an AWS instance running UBUNTU 10.04 LTS And here are the complete logs: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861007] IP: [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861019] PGD ee13d067 PUD f8698067 PMD 0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861025] Oops: 0000 [#1] SMP Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861028] last sysfs file: /sys/devices/xen/vbd-2208/block/sdk/removable Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861032] CPU 0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861034] Modules linked in: ipv6 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861040] Pid: 20247, comm: telnet Not tainted 2.6.32-312-ec2 #24-Ubuntu Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861042] RIP: e030:[<ffffffff81363dde>] [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861047] RSP: e02b:ffff8800f8599d88 EFLAGS: 00010246 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861049] RAX: 0000000000000015 RBX: ffff8800f8598000 RCX: 0000000001aed069 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861052] RDX: 0000000000000000 RSI: ffff8800f8599e67 RDI: ffff8801dd833d1c Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861054] RBP: ffff8800f8599e98 R08: ffffffff8135eb10 R09: 7fffffffffffffff Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861057] R10: 0000000000000000 R11: 0000000000000246 R12: ffff8801dd833800 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861059] R13: 0000000000000000 R14: ffff8801dd833a68 R15: ffff8801dd833d1c Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861065] FS: 00007f90121f6720(0000) GS:ffff880002c40000(0000) knlGS:0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861068] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861070] CR2: 0000000000000015 CR3: 0000000032a59000 CR4: 0000000000002660 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861073] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861076] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500) Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861083] Stack: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861085] 0000000000000000 0000000001aed069 ffff8801dd8339c8 ffff8800024d4500 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861089] <0> ffff8801dd8339c0 ffff8801dd833c90 0000000001aed027 ffff8800024d4500 Apr 21 06:01:14 
ip-10-49-109-107 kernel: [233185.861094] <0> ffff8801dd8338d8 0000000000000000 ffff8800024d4500 0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861099] Call Trace: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861107] [<ffffffff81034bc0>] ? default_wake_function+0x0/0x10 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861113] [<ffffffff8135ebb6>] tty_read+0xa6/0xf0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861118] [<ffffffff810ee7e5>] vfs_read+0xb5/0x1a0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861122] [<ffffffff810ee91c>] sys_read+0x4c/0x80 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861127] [<ffffffff81009ba8>] system_call_fastpath+0x16/0x1b Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861131] [<ffffffff81009b40>] ? system_call+0x0/0x52 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861133] Code: 85 d2 0f 84 92 00 00 00 45 8b ac 24 5c 02 00 00 f0 45 0f b3 2e 45 19 ed 49 63 84 24 5c 02 00 00 49 8b 94 24 50 02 00 00 4c 89 ff <0f> be 1c 02 e8 a9 d3 14 00 41 8b 94 24 5c 02 00 00 41 83 ac 24 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88> Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88> Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861177] CR2: 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861205] ---[ end trace f10eee2057ff4f6b ]--- Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active!

    Read the article

  • postfix relaying all mail through office365 problems

    - by amrith
    This is a rather long question with a long list of things tried and travails so please bear with me. The summary is this. I am able to relay email from ubuntu through office365 using postfix; the configuration works. It only works as one of the users; more specifically the user who authenticates against office365 is the only valid "from" More details follow. I have a machine in Amazon's cloud on which I run a bunch of jobs and would like to have statuses mailed over to me. I use office365 at work so I want to relay mail through office365. I'm most familiar with postfix so I used that as the MTA. Configuration is ubuntu 12.04LTS; I've installed postfix and mail-utils. For this example, let me say my company is "company.com" and the machine in question (through an elastic IP and a DNS entry) is called "plaything.company.com". hostname is set to "plaything.company.com", so is /etc/mailname On plaything, I have the following users registered alpha, bravo, and charlie. I have the following configuration files. alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no config_directory = /etc/postfix inet_interfaces = all inet_protocols = ipv4 mailbox_size_limit = 0 mydestination = plaything.company.com, localhost.company.com, , localhost myhostname = plaything.company.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname readme_directory = no recipient_delimiter = + relayhost = [smtp.office365.com]:587 sender_canonical_maps = hash:/etc/postfix/sender_canonical smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous smtp_sasl_tls_security_options = noanonymous smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes As the machine is called plaything.company.com I went through the exercise of registering all the appropriate DNS entries to make office365 recognize that I owned plaything.company.com and allowed me to create a user called [email protected] in office365. In office365, I setup [email protected] as having another email address of [email protected]. Then, I made the following sender_canonical [email protected] [email protected] I created a sasl_passwd file that reads: smtp.office365.com [email protected]:123456password123456 let's just say that the password for [email protected] is 1234...456 With all this setup, login as alpha and mail [email protected] Cc: Subject: test test and the whole thing works wonderfully. email gets sent off by postfix, TLS works like a champ, authenticates as daemon@... and [email protected] in Office365 gets an email message. The issue comes up when logged in as bravo to the machine. sender is [email protected] and office365 says: status=bounced (host smtp.office365.com[132.245.12.25] said: 550 5.7.1 Client does not have permissions to send as this sender (in reply to end of DATA command)) this is because I'm trying to send mail as bravo@... and authenticating with office365 as daemon@.... The reason it works with alpha@... is because in office365, I setup [email protected] as having another email address of [email protected]. 
In Postfix Relay to Office365, Miles Erickson answers the question thusly: Don't send mail to Office365 as a user from your Office365-hosted e-mail domain. Use a subdomain instead, e.g. [email protected] instead of [email protected]. It wouldn't hurt to set up an SPF record for services.mydomain.com or whatever you decide to use. Don't authenticate against mail.messaging.microsoft.com as an Office365 user. Just connect on port 25 and deliver the mail to your domain as any foreign SMTP agent would do. OK, I've done #1, I have those records on DNS but for the most part they are not relevant once Office365 recognizes that I own the domain. Here are those records: CNAME records: - msoid.plaything.company.com - autodiscover.plaything.company.com MX record: - plaything.company.com (plaything-company-com.mail.protection.outlook.com) TXT record: - plaything.company.com (v=spf1 include:spf.protection.outlook.com -all) I've tried #2 but no matter what I do, office365 just blows away the connection with "not authenticated". I can try even a simple telnet to port 25 and attempt to send and it doesn't work. 250 BY2PR01CA007.outlook.office365.com Hello [54.221.245.236] 530 5.7.1 Client was not authenticated Connection closed by foreign host. Is there someone out there who has this kind of a configuration working where multiple users on a linux machine are able to relay mail using postfix through office365? There has to be someone out there doing this who can tell me what is wrong with my setup ...
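    One way to narrow this down is to reproduce the relay attempt outside postfix. The Python 3 sketch below (illustrative only; the daemon/alpha/bravo accounts are the placeholders from the question and plaything.company.com is assumed as the mail domain) performs authenticated submission against smtp.office365.com:587 with STARTTLS. Sending with From: bravo@... while authenticating as daemon@... should fail with the same 550 5.7.1, and switching From: to alpha@... (a registered alias of the authenticating account) should succeed, which would confirm the behaviour is Office365's send-as policy rather than anything in the postfix configuration.

        #!/usr/bin/env python3
        # Sketch: reproduce the relay attempt without postfix in the middle.
        # Addresses and password are the placeholders used in the question.
        import smtplib
        from email.message import EmailMessage

        AUTH_USER = "daemon@plaything.company.com"
        AUTH_PASS = "123456password123456"

        msg = EmailMessage()
        msg["From"] = "bravo@plaything.company.com"   # change to alpha@... to see it accepted
        msg["To"] = "someone@company.com"
        msg["Subject"] = "relay test"
        msg.set_content("test message via smtp.office365.com:587")

        with smtplib.SMTP("smtp.office365.com", 587) as smtp:
            smtp.ehlo()
            smtp.starttls()                    # Office365 requires TLS before AUTH on 587
            smtp.login(AUTH_USER, AUTH_PASS)
            smtp.send_message(msg)             # an smtplib exception here carries the 550 text

    As for the port-25, no-authentication route suggested in the quoted answer: that style of direct send normally targets the domain's MX host (plaything-company-com.mail.protection.outlook.com here), not smtp.office365.com, which would explain the "Client was not authenticated" response seen in the telnet test.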

    Read the article

  • Missing Password check

    - by AAA
    I am using the code below, it checks for empty fields and verifies email, but even if the password is correct it won't login. the password has been inserted with md5 protection, below is the code. I am new to this so please bare with me. Thanks! PHP: session_start(); //Checks if there is a login cookie if(isset($_COOKIE['ID_my_site'])) //if there is, it logs you in and directes you to the members page { $email = $_COOKIE['ID_my_site']; $pass = $_COOKIE['Key_my_site']; $check = mysql_query("SELECT * FROM accounts WHERE email = '$email'")or die(mysql_error()); while($info = mysql_fetch_array( $check )) { if ($pass != $info['password']) { } else { header("Location: home.php"); } } } //if the login form is submitted if (isset($_POST['submit'])) { // if form has been submitted // makes sure they filled it in if(!$_POST['email'] | !$_POST['password']) { die('You did not fill in a required field.'); } // checks it against the database if (!get_magic_quotes_gpc()) { $_POST['email'] = addslashes($_POST['email']); } $check = mysql_query("SELECT * FROM accounts WHERE email = '".$_POST['email']."'")or die(mysql_error()); //Gives error if user dosen't exist $check2 = mysql_num_rows($check); if ($check2 == 0) { die('That user does not exist in our database. <a href=add.php>Click Here to Register</a>'); } while($info = mysql_fetch_array( $check )) { $_POST['password'] = stripslashes($_POST['password']); $info['password'] = stripslashes($info['password']); $_POST['password'] = md5($_POST['password']); //gives error if the password is wrong if ($_POST['password'] != $info['password']) { die('Incorrect password, please try again.'); } else { // if login is ok then we add a cookie $_POST['email'] = stripslashes($_POST['email']); $hour = time() + 3600; setcookie(ID_my_site, $_POST['email'], $hour); setcookie(Key_my_site, $_POST['password'], $hour); //then redirect them to the members area header("Location: home.php"); } } } else { // if they are not logged in <form action="<?php echo $_SERVER['PHP_SELF']?>" method="post"> <table border="0"> <tr><td colspan=2><h1>Login</h1></td></tr> <tr><td>email:</td><td> <input type="text" name="email" maxlength="40"> </td></tr> <tr><td>Password:</td><td> <input type="password" name="password" maxlength="50"> </td></tr> <tr><td colspan="2" align="right"> <input type="submit" name="submit" value="Login"> </td></tr> </table> </form> } Here is the registration code: PHP: // here we encrypt the password and add slashes if needed $_POST['password'] = md5($_POST['password']); if (!get_magic_quotes_gpc()) { $_POST['password'] = mysql_escape_string($_POST['password']); $_POST['email'] = mysql_escape_string($_POST['email']); $_POST['full_name'] = mysql_escape_string($_POST['full_name']); $_POST['user_url'] = mysql_escape_string($_POST['user_url']); } // now we insert it into the database $insert = "INSERT INTO accounts (Uniquer, Full_name, Email, Password, User_url) VALUES ('".$uniquer."','".$_POST['full_name']."', '".$_POST['email']."','".$_POST['password']."', '".$_POST['user_url']."')"; $add_member = mysql_query($insert); After using ini_set function i got to see the error, i am getting this message but not sure what it means: Notice: Undefined index: password in /var/www/domain.com/htdocs/login.php on line 103 Notice: Use of undefined constant password - assumed 'password' in /var/www/domain.com/htdocs/login.php on line 11

    Read the article

  • Bundler doesn't want to install hpricot on Windows XP with Ruby 1.8.7

    - by Nick Gorbikoff
    Hello I develop on a Windows machine but deploy to Debian. Trying to use hpricot with Rails 3 app. I can get the gem to install using : gem install hpricot --platform=mswin32 But when I do this in the bundle file - it keeps throwing an error (I think it's trying to install the wrong version of hpricot (not windows specific) group :production do gem "hpricot", "0.8.3" end group :development, :test do gem "hpricot", "0.8.3", :platforms => [:mswin, :mingw] end This is from another question here on stackoverflow - but it's not working for me. Any ideas? P.S.: Windows XP sp3 with Ruby 1.8.7 with Rails 3.0.3 with bundler 1.0.7 EDIT Forgot to paste my error: bundle install Fetching source index for http://rubygems.org/ which: no sudo in (.;C:\Program Files\ImageMagick-6.6.5-Q16;C:\ruby\Ruby187\bin;C:\Program Files\ActiveState Komodo Edit 6\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\e\cmd;C:\Program Files\MySQL\MySQL Server 5.1\bin;C:\WINDOWS\system32\WindowsPowerShell\v1.0;c:\tools;C:\gnuwin32\bin;C:\tools\wkhtmltopdf;C:\Python31;C:\Program Files\TortoiseHg\;C:\Program Files\TortoiseGit\bin; c:\program files\videolan\vlc;C:\Program Files\SMPlayer\mplayer;C:\Program Files\Git\cmd;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Calibre2\;c:\ruby\jruby-1.5.5\bin;C:\Program Files\Common Files\Shoes\0.r1514\..) Using rake (0.8.7) Using abstract (1.0.0) Using activesupport (3.0.3) Using builder (2.1.2) Using i18n (0.4.2) Using activemodel (3.0.3) Using erubis (2.6.6) Using rack (1.2.1) Using rack-mount (0.6.13) Using rack-test (0.5.6) Using tzinfo (0.3.23) Using actionpack (3.0.3) Using mime-types (1.16) Using polyglot (0.3.1) Using treetop (1.4.9) Using mail (2.2.10) Using actionmailer (3.0.3) Using arel (2.0.4) Using activerecord (3.0.3) Using activeresource (3.0.3) Using bcrypt-ruby (2.1.4) Using bundler (1.0.7) Using cancan (1.5.0) Using haml (3.0.24) Using compass (0.10.6) Using warden (1.0.3) Using devise (1.1.5) Installing hpricot (0.8.3) Temporarily enhancing PATH to include DevKit... with native extensions C:/ruby/Ruby187/lib/ruby/site_ruby/1.8/rubygems/installer.rb:483:in `build_extensions': ERROR: Failed to build gem native extension. (Gem::Installer::ExtensionBuildError) C:/ruby/Ruby187/bin/ruby.exe extconf.rb checking for stdio.h... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=C:/ruby/Ruby187/bin/ruby Gem files will remain installed in C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/hpricot-0.8.3 for inspection. 
Results logged to C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/hpricot-0.8.3/ext/fast_xs/gem_make.out from C:/ruby/Ruby187/lib/ruby/site_ruby/1.8/rubygems/installer.rb:446:in `each' from C:/ruby/Ruby187/lib/ruby/site_ruby/1.8/rubygems/installer.rb:446:in `build_extensions' from C:/ruby/Ruby187/lib/ruby/site_ruby/1.8/rubygems/installer.rb:198:in `install' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/source.rb:95:in `install' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/installer.rb:55:in `run' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/spec_set.rb:12:in `each' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/spec_set.rb:12:in `each' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/installer.rb:44:in `run' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/installer.rb:8:in `install' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/cli.rb:225:in `install' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/vendor/thor/task.rb:22:in `send' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/vendor/thor/task.rb:22:in `run' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/vendor/thor/invocation.rb:118:in `invoke_task' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/vendor/thor.rb:246:in `dispatch' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/vendor/thor/base.rb:389:in `start' from C:/ruby/Ruby187/lib/ruby/gems/1.8/gems/bundler-1.0.7/bin/bundle:13 from C:/ruby/Ruby187/bin/bundle:19:in `load' from C:/ruby/Ruby187/bin/bundle:19

    Read the article

  • Cannot install .NET Framework 4.0 on Windows XP SP3

    - by Bob
    I'm using Windows XP SP3 logged in as the administrator. I had RAID Mirroring running. The motherboard broke earlier in the year. When I got a new battery I did not resync. I just use the disks as two separate disks. I searched Google for the errors but I didn't find anything detailed enough. The following Microsoft components are in Add/Remove programs: .NET Framework 1.1 .NET Framework 2.0 Service Pack 2 .NET Framework 3.0 Service Pack 2 .NET Framework 3.5 SP1 Microsoft Compression Client Pack 1.0 for Windows XP Microsoft Office Enterprise 2007 Microsoft Silverlight Microsoft USB Flash Driver Manager Micrsoft User-mode Driver Framework Feature Pack 1.0 Microsoft Visual C++ 2005 ATL Update KB 973923 x86 8.050727.4053 Microsoft Visual C++ 2005 Redistributable This is the installation log: Exists: evaluating... [10/21/2011, 22:17:14]MsiGetProductInfo with product code {3C3901C5-3455-3E0A-A214-0B093A5070A6} found no matches [10/21/2011, 22:17:14] Exists evaluated to false [10/21/2011, 22:15:50]calling PerformAction on an installing performer [10/21/2011, 22:15:50] Action: Performing actions on all Items... [10/21/2011, 22:15:50]Wait for Item (clr_optimization_v2.0.50727_32) to be available [10/21/2011, 22:15:50]clr_optimization_v2.0.50727_32 is now available to install [10/21/2011, 22:15:50]Creating new Performer for ServiceControl item [10/21/2011, 22:15:50] Action: ServiceControl - Stop clr_optimization_v2.0.50727_32... [10/21/2011, 22:15:50]ServiceControl operation succeeded! [10/21/2011, 22:15:50] Action complete [10/21/2011, 22:15:50]Error 0 is mapped to Custom Error: [10/21/2011, 22:15:50]Wait for Item (Windows6.0-KB956250-v6001-x86.msu) to be available [10/21/2011, 22:15:51]Windows6.0-KB956250-v6001-x86.msu is now available to install [10/21/2011, 22:15:51]Created new DoNothingPerformer for File item [10/21/2011, 22:15:51]No CustomError defined for this item. [10/21/2011, 22:15:51]Wait for Item (Windows6.1-KB958488-v6001-x86.msu) to be available [10/21/2011, 22:15:51]Windows6.1-KB958488-v6001-x86.msu is now available to install [10/21/2011, 22:15:51]Created new DoNothingPerformer for File item [10/21/2011, 22:15:51]No CustomError defined for this item. [10/21/2011, 22:15:51]Wait for Item (netfx_Core.mzz) to be available [10/21/2011, 22:15:52]netfx_Core.mzz is now available to install [10/21/2011, 22:15:52]Created new DoNothingPerformer for File item [10/21/2011, 22:15:52]No CustomError defined for this item. [10/21/2011, 22:15:52]Wait for Item (netfx_Core_x86.msi) to be available [10/21/2011, 22:15:52]netfx_Core_x86.msi is now available to install [10/21/2011, 22:15:52]Creating new Performer for MSI item [10/21/2011, 22:15:52] Action: Performing Action on MSI at F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_4.0.30319\netfx_Core_x86.msi... [10/21/2011, 22:15:52]Log File F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_20111021_221545515-MSI_netfx_Core_x86.msi.txt does not yet exist but may do at Watson upload time [10/21/2011, 22:15:52]Calling MsiInstallProduct(F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_4.0.30319\netfx_Core_x86.msi, EXTUI=1 [10/21/2011, 22:17:14]MSI (F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_4.0.30319\netfx_Core_x86.msi) Installation failed. 
Msi Log: Microsoft .NET Framework 4 Client Profile Setup_20111021_221545515-MSI_netfx_Core_x86.msi.txt [10/21/2011, 22:17:14]PerformOperation returned 1603 (translates to HRESULT = 0x80070643) [10/21/2011, 22:17:14] Action complete [10/21/2011, 22:17:14]OnFailureBehavior for this item is to Rollback. [10/21/2011, 22:17:14] Action: Performing actions on all Items... [10/21/2011, 22:17:14] Action complete [10/21/2011, 22:17:14] Action complete [10/21/2011, 22:17:14]Final Result: Installation failed with error code: (0x80070643), "Fatal error during installation. " (Elapsed time: 0 00:01:29). [10/21/2011, 22:17:41]WM_ACTIVATEAPP: Focus stealer's windows WAS visible, NOT taking back focus SECOND LOG REQUESTED BELOW: MSI (s) (6C:EC) [22:17:13:828]: Invoking remote custom action. DLL: F:\WINDOWS\Installer\MSIBB4.tmp, Entrypoint: NgenUpdateHighestVersionRollback MSI (s) (6C:64) [22:17:13:984]: Executing op: ActionStart(Name=CA_NgenRemoveNicPFROs_I_DEF_x86.3643236F_FC70_11D3_A536_0090278A1BB8,,) MSI (s) (6C:64) [22:17:13:984]: Executing op: ActionStart(Name=CA_NgenRemoveNicPFROs_I_RB_x86.3643236F_FC70_11D3_A536_0090278A1BB8,,) MSI (s) (6C:64) [22:17:13:984]: Executing op: CustomActionRollback(Action=CA_NgenRemoveNicPFROs_I_RB_x86.3643236F_FC70_11D3_A536_0090278A1BB8,ActionType=17729,Source=BinaryData,Target=NgenRemoveNicPFROs,) MSI (s) (6C:AC) [22:17:13:984]: Invoking remote custom action. DLL: F:\WINDOWS\Installer\MSIBB5.tmp, Entrypoint: NgenRemoveNicPFROs MSI (s) (6C:64) [22:17:14:000]: Executing op: End(Checksum=0,ProgressTotalHDWord=0,ProgressTotalLDWord=0) MSI (s) (6C:64) [22:17:14:000]: Error in rollback skipped. Return: 5 MSI (s) (6C:64) [22:17:14:015]: No System Restore sequence number for this installation. MSI (s) (6C:64) [22:17:14:015]: Unlocking Server MSI (s) (6C:64) [22:17:14:015]: PROPERTY CHANGE: Deleting UpdateStarted property. Its current value is '1'. MSI (s) (6C:64) [22:17:14:031]: Note: 1: 1708 MSI (s) (6C:64) [22:17:14:031]: Product: Microsoft .NET Framework 4 Client Profile -- Installation failed. MSI (s) (6C:64) [22:17:14:078]: Cleaning up uninstalled install packages, if any exist MSI (s) (6C:64) [22:17:14:078]: MainEngineThread is returning 1603 MSI (s) (6C:EC) [22:17:14:171]: Destroying RemoteAPI object. MSI (s) (6C:9C) [22:17:14:171]: Custom Action Manager thread ending. MSI (c) (F4:C4) [22:17:14:203]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (F4:C4) [22:17:14:203]: MainEngineThread is returning 1603 === Verbose logging stopped: 10/21/2011 22:17:14 ===

    Read the article

  • Perl - WWW::Mechanize Cookie Session Id is being reset with every get(), how to make it stop?

    - by Phill Pafford
    So I'm scraping a site that I have access to via HTTPS, I can login and start the process but each time I hit a new page (URL) the cookie Session Id changes. How do I keep the logged in Cookie Session Id? #!/usr/bin/perl -w use strict; use warnings; use WWW::Mechanize; use HTTP::Cookies; use LWP::Debug qw(+); use HTTP::Request; use LWP::UserAgent; use HTTP::Request::Common; my $un = 'username'; my $pw = 'password'; my $url = 'https://subdomain.url.com/index.do'; my $agent = WWW::Mechanize->new(cookie_jar => {}, autocheck => 0); $agent->{onerror}=\&WWW::Mechanize::_warn; $agent->agent('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.3) Gecko/20100407 Ubuntu/9.10 (karmic) Firefox/3.6.3'); $agent->get($url); $agent->form_name('form'); $agent->field(username => $un); $agent->field(password => $pw); $agent->click("Log In"); print "After Login Cookie: "; print $agent->cookie_jar->as_string(); print "\n\n"; my $searchURL='https://subdomain.url.com/search.do'; $agent->get($searchURL); print "After Search Cookie: "; print $agent->cookie_jar->as_string(); print "\n"; The output: After Login Cookie: Set-Cookie3: JSESSIONID=367C6D; path="/thepath"; domain=subdomina.url.com; path_spec; secure; discard; version=0 After Search Cookie: Set-Cookie3: JSESSIONID=855402; path="/thepath"; domain=subdomain.com.com; path_spec; secure; discard; version=0 Also I think the site requires a CERT (Well in the browser it does), would this be the correct way to add it? $ENV{HTTPS_CERT_FILE} = 'SUBDOMAIN.URL.COM'; ## Insert this after the use HTTP::Request... Also for the CERT In using the first option in this list, is this correct? X.509 Certificate (PEM) X.509 Certificate with chain (PEM) X.509 Certificate (DER) X.509 Certificate (PKCS#7) X.509 Certificate with chain (PKCS#7)
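    The cookie jar appears to be doing its job here; a new JSESSIONID on every response usually means the server is not associating the follow-up request with the logged-in session (for example because the login never actually succeeded, or because a client certificate is required on every request). One way to separate those possibilities is to repeat the flow in a second HTTP client and compare. A hedged Python sketch using requests, purely as a cross-check (the certificate file names are hypothetical placeholders; the original Perl remains the code under discussion):

        # Sketch: same flow in Python/requests as a cross-check. A Session keeps
        # cookies across requests, like WWW::Mechanize's cookie_jar; the cert
        # file names below are hypothetical placeholders.
        import requests

        session = requests.Session()
        session.cert = ("client_cert.pem", "client_key.pem")   # only if the site needs a client cert

        session.post("https://subdomain.url.com/index.do",
                     data={"username": "username", "password": "password"})
        print("after login:", session.cookies.get("JSESSIONID"))

        session.get("https://subdomain.url.com/search.do")
        print("after search:", session.cookies.get("JSESSIONID"))

    If the ID still changes here, the server is resetting the session regardless of the client, and the thing to chase is the login step (or the certificate requirement) rather than the cookie handling.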

    Read the article

  • ASP.NET Membership API not working on Win2008 server/IIS7

    - by Program.X
    I have a very odd problem. I have a web app that uses the .NET Membership API to provide login functionality. This works fine on my local dev machine, using WebDev 4.0 server. I'm using .NET 4.0 with some URL Rewriting, but not on the pages where login is required. I have a Windows Server 2008 with IIS7 However, the Membership API seemingly does not work on the server. I have set up remote debugging and the LoginUser.LoggedIn event of the LoginUser control gets fired okay, but the MembershipUser is null. I get no answer about the username/password being invalid so it seems to be recognising it. If I enter an invalid username/password, I get an invalid username/password response. Some code, if it helps: <asp:ValidationSummary ID="LoginUserValidationSummary" runat="server" CssClass="validation-error-list" ValidationGroup="LoginUserValidationGroup"/> <div class="accountInfo"> <fieldset class="login"> <legend>Account Information</legend> <p> <asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">Username:</asp:Label> <asp:TextBox ID="UserName" runat="server" CssClass="textEntry"></asp:TextBox> <asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName" CssClass="validation-error" Display="Dynamic" ErrorMessage="User Name is required." ToolTip="User Name is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p> <asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label> <asp:TextBox ID="Password" runat="server" CssClass="passwordEntry" TextMode="Password"></asp:TextBox> <asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password" CssClass="validation-error" Display="Dynamic" ErrorMessage="Password is required." ToolTip="Password is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p> <asp:CheckBox ID="RememberMe" runat="server"/> <asp:Label ID="RememberMeLabel" runat="server" AssociatedControlID="RememberMe" CssClass="inline">Keep me logged in</asp:Label> </p> </fieldset> <p class="login-action"> <asp:Button ID="LoginButton" runat="server" CommandName="Login" CssClass="submitButton" Text="Log In" ValidationGroup="LoginUserValidationGroup"/> </p> and the code behind: protected void Page_Load(object sender, EventArgs e) { LoginUser.LoginError += new EventHandler(LoginUser_LoginError); LoginUser.LoggedIn += new EventHandler(LoginUser_LoggedIn); } void LoginUser_LoggedIn(object sender, EventArgs e) { // this code gets run so it appears logins work Roles.DeleteCookie(); // this behaviour has been removed for testing - no difference } void LoginUser_LoginError(object sender, EventArgs e) { HtmlGenericControl htmlGenericControl = LoginUser.FindControl("errorMessageSpan") as HtmlGenericControl; if (htmlGenericControl != null) htmlGenericControl.Visible = true; } I have "Fiddled" with the Login form reponse and I get the following Cookie-Set headers: Set-Cookie: ASP.NET_SessionId=lpyyiyjw45jjtuav1gdu4jmg; path=/; HttpOnly Set-Cookie: .ASPXAUTH=A7AE08E071DD20872D6BBBAD9167A709DEE55B352283A7F91E1066FFB1529E5C61FCEDC86E558CEA1A837E79640BE88D1F65F14FA8434AA86407DA3AEED575E0649A1AC319752FBCD39B2A4669B0F869; path=/; HttpOnly Set-Cookie: .ASPXROLES=; expires=Mon, 11-Oct-1999 23:00:00 GMT; path=/; HttpOnly I don't know what is useful here because it is obviously encrypted but I find the .APXROLES cookie having no value interesting. It seems to fail to register the cookie, but passes authentication

    Read the article

  • Odd tcp deadlock under windows

    - by John Robertson
    We are moving large amounts of data on a LAN and it has to happen very rapidly and reliably. Currently we use Windows TCP as implemented in C++. Using large (synchronous) sends moves the data much faster than a bunch of smaller (synchronous) sends, but will frequently deadlock for large gaps of time (.15 seconds), causing the overall transfer rate to plummet. This deadlock happens in very particular circumstances, which makes me believe it should be preventable altogether. More importantly, if we don't really know the cause, we don't really know it won't happen some time with smaller sends anyway. Can anyone explain this deadlock? Deadlock description (OK, zombie-locked, it isn't dead, but for .15 or so seconds it stops, then starts again): 1. The receiving side sends an ACK. 2. The sending side sends a packet containing the end of a message (push flag is set). 3. The call to socket.recv takes about .15 seconds(!) to return. 4. About the time the call returns, an ACK is sent by the receiving side. 5. The next packet from the sender is finally sent (why is it waiting? the TCP window is plenty big). The odd thing about (3) is that typically that call doesn't take much time at all and receives exactly the same amount of data. On a 2 GHz machine that's 300 million instructions worth of time. I am assuming the call doesn't (heaven forbid) wait for the received data to be ACKed before it returns, so the ACK must be waiting for the call to return, or both must be delayed by something else. The problem NEVER happens when there is a second packet of data (part of the same message) arriving between steps 1 and 2. That part very clearly makes it sound like it has to do with the fact that Windows TCP will not send back a no-data ACK until either a second packet arrives or a 200 ms timer expires. However, the delay is less than 200 ms (it's more like 150 ms). The third unseemly character (and to my mind the real culprit) is (5). Send is definitely being called well before that .15 seconds is up, but the data NEVER hits the wire before that ACK returns. That is the most bizarre part of this deadlock to me. It's not a TCP blockage, because the TCP window is plenty big since we set SO_RCVBUF to something like 500*1460 (which is still under a meg). The data is coming in very fast (basically there is a loop spinning out data via send) so the buffer should fill almost immediately. According to MSDN, the buffer being full and at least one pending send should cause the data to be sent (though in another place it mentions that there are various "heuristics" used in deciding when a send hits the wire). Anyway, why the sender doesn't actually send more data during that .15 second pause is the most bizarre part to me. The information above was captured on the receiving side via Wireshark (except of course the socket.recv return times, which were logged in a text file). We tried changing the send buffer to zero and turning off Nagle on the sender (yes, I know Nagle is about not sending small packets - but we tried turning Nagle off in case that was part of the unstated "heuristics" affecting whether the message would be posted to the wire. Technically, Microsoft's Nagle is that a small packet isn't sent if the buffer is full and there is an outstanding ACK, so it seemed like a possibility).
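    A minimal repro sketch may help separate the stack's behaviour from the application code. This is Python rather than the original C++, with a hypothetical port number, payload size and file name (save it as, say, tcp_probe.py); it times every blocking send and recv on a plain TCP socket pair with the same SO_RCVBUF and TCP_NODELAY settings described above. If the same ~150 ms stalls show up here, the ACK/send heuristics of the stack are the likely culprit rather than anything in the sending loop.
        import socket, sys, time

        PORT = 50007            # hypothetical test port
        CHUNK = 512 * 1024      # one "large synchronous send"
        RCVBUF = 500 * 1460     # same receive buffer size as in the question

        def receiver():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCVBUF)  # set before listen so accepted sockets inherit it
            srv.bind(("0.0.0.0", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            total = 0
            while True:
                t0 = time.perf_counter()
                data = conn.recv(65536)
                dt = time.perf_counter() - t0
                if not data:
                    break
                total += len(data)
                if dt > 0.05:                 # log any recv() that stalls noticeably
                    print("recv stalled %.3f s after %d bytes" % (dt, total))

        def sender(host):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)      # Nagle off, as already tried above
            s.connect((host, PORT))
            payload = b"x" * CHUNK
            for i in range(200):
                t0 = time.perf_counter()
                s.sendall(payload)            # synchronous send of a large block
                dt = time.perf_counter() - t0
                if dt > 0.05:                 # a blocking send this slow points at the stack, not the app
                    print("send %d stalled %.3f s" % (i, dt))
            s.close()

        if __name__ == "__main__":
            receiver() if sys.argv[1] == "recv" else sender(sys.argv[2])
    Run "python tcp_probe.py recv" on the receiving machine and "python tcp_probe.py send <receiver-ip>" on the sending machine; matching stall timestamps on both ends would support the delayed-ACK/send-heuristic theory.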

    Read the article

  • Active Directory Password Policy Problem

    - by Will
    To Clarify: my question is why isn't my password policy applying to people in the domain. Hey guys, having trouble with our password policy in Active Directory. Sometimes it just helps me to type out what I’m seeing It appears to not be applying properly across the board. I am new to this environment and AD in general but I think I have a general grasp of what should be going on. It’s a pretty simple AD setup without too many Group Policies being applied. It looks something like this DOMAIN Default Domain Policy (link enabled) Password Policy (link enabled and enforce) Personal OU Force Password Change (completely empty nothing in this GPO) IT OU Lockout Policy (link enabled and enforced) CS OU Lockout Policy Accouting OU Lockout Policy The password policy and default domain policy both define the same things under Computer ConfigWindows seetings sec settings Account Policies / Password Policy Enforce password History : 24 passwords remembered Maximum Password age : 180 days Min password age: 14 days Minimum Password Length: 6 characters Password must meet complexity requirements: Enabled Store Passwords using reversible encryption: Disabled Account Policies / Account Lockout Policy Account Lockout Duration 10080 Minutes Account Lockout Threshold: 5 invalid login attempts Reset Account Lockout Counter after : 30 minutes IT lockout This just sets the screen saver settings to lock computers when the user is Idle. After running Group Policy modeling it seems like the password policy and default domain policy is getting applied to everyone. Here is the results of group policy modeling on MO-BLANCKM using the mblanck account, as you can see the policies are both being applied , with nothing important being denied Group Policy Results NCLGS\mblanck on NCLGS\MO-BLANCKM Data collected on: 12/29/2010 11:29:44 AM Summary Computer Configuration Summary General Computer name NCLGS\MO-BLANCKM Domain NCLGS.local Site Default-First-Site-Name Last time Group Policy was processed 12/29/2010 10:17:58 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (15), Sysvol (15) WSUS-52010 NCLGS.local/WSUS/Clients AD (54), Sysvol (54) Password Policy NCLGS.local AD (58), Sysvol (58) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Security Group Membership when Group Policy was applied BUILTIN\Administrators Everyone S-1-5-21-507921405-1326574676-682003330-1003 BUILTIN\Users NT AUTHORITY\NETWORK NT AUTHORITY\Authenticated Users NCLGS\MO-BLANCKM$ NCLGS\Admin-ComputerAccounts-GP NCLGS\Domain Computers WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 10:17:59 AM EFS recovery Success (no data) 10/28/2010 9:10:34 AM Registry Success 10/28/2010 9:10:32 AM Security Success 10/28/2010 9:10:34 AM User Configuration Summary General User name NCLGS\mblanck Domain NCLGS.local Last time Group Policy was processed 12/29/2010 11:28:56 AM Group Policy Objects Applied GPOs Name Link Location Revision Default Domain Policy NCLGS.local AD (7), Sysvol (7) IT-Lockout NCLGS.local/Personal/CS AD (11), Sysvol (11) Password Policy NCLGS.local AD (5), Sysvol (5) Denied GPOs Name Link Location Reason Denied Local Group Policy Local Empty Force Password Change NCLGS.local/Personal Empty Security Group Membership when Group Policy was applied NCLGS\Domain Users Everyone BUILTIN\Administrators BUILTIN\Users NT AUTHORITY\INTERACTIVE NT AUTHORITY\Authenticated Users 
LOCAL NCLGS\MissingSkidEmail NCLGS\Customer_Service NCLGS\Email_Archive NCLGS\Job Ticket Users NCLGS\Office Staff NCLGS\CUSTOMER SERVI-1 NCLGS\Prestige_Jobs_Email NCLGS\Telecommuters NCLGS\Everyone - NCL WMI Filters Name Value Reference GPO(s) None Component Status Component Name Status Last Process Time Group Policy Infrastructure Success 12/29/2010 11:28:56 AM Registry Success 12/20/2010 12:05:51 PM Scripts Success 10/13/2010 10:38:40 AM Computer Configuration Windows Settings Security Settings Account Policies/Password Policy Policy Setting Winning GPO Enforce password history 24 passwords remembered Password Policy Maximum password age 180 days Password Policy Minimum password age 14 days Password Policy Minimum password length 6 characters Password Policy Password must meet complexity requirements Enabled Password Policy Store passwords using reversible encryption Disabled Password Policy Account Policies/Account Lockout Policy Policy Setting Winning GPO Account lockout duration 10080 minutes Password Policy Account lockout threshold 5 invalid logon attempts Password Policy Reset account lockout counter after 30 minutes Password Policy Local Policies/Security Options Network Security Policy Setting Winning GPO Network security: Force logoff when logon hours expire Enabled Default Domain Policy Public Key Policies/Autoenrollment Settings Policy Setting Winning GPO Enroll certificates automatically Enabled [Default setting] Renew expired certificates, update pending certificates, and remove revoked certificates Disabled Update certificates that use certificate templates Disabled Public Key Policies/Encrypting File System Properties Winning GPO [Default setting] Policy Setting Allow users to encrypt files using Encrypting File System (EFS) Enabled Certificates Issued To Issued By Expiration Date Intended Purposes Winning GPO SBurns SBurns 12/13/2007 5:24:30 PM File Recovery Default Domain Policy For additional information about individual settings, launch Group Policy Object Editor. Public Key Policies/Trusted Root Certification Authorities Properties Winning GPO [Default setting] Policy Setting Allow users to select new root certification authorities (CAs) to trust Enabled Client computers can trust the following certificate stores Third-Party Root Certification Authorities and Enterprise Root Certification Authorities To perform certificate-based authentication of users and computers, CAs must meet the following criteria Registered in Active Directory only Administrative Templates Windows Components/Windows Update Policy Setting Winning GPO Allow Automatic Updates immediate installation Enabled WSUS-52010 Allow non-administrators to receive update notifications Enabled WSUS-52010 Automatic Updates detection frequency Enabled WSUS-52010 Check for updates at the following interval (hours): 1 Policy Setting Winning GPO Configure Automatic Updates Enabled WSUS-52010 Configure automatic updating: 4 - Auto download and schedule the install The following settings are only required and applicable if 4 is selected. 
Scheduled install day: 0 - Every day Scheduled install time: 03:00 Policy Setting Winning GPO No auto-restart with logged on users for scheduled automatic updates installations Disabled WSUS-52010 Re-prompt for restart with scheduled installations Enabled WSUS-52010 Wait the following period before prompting again with a scheduled restart (minutes): 30 Policy Setting Winning GPO Reschedule Automatic Updates scheduled installations Enabled WSUS-52010 Wait after system startup (minutes): 1 Policy Setting Winning GPO Specify intranet Microsoft update service location Enabled WSUS-52010 Set the intranet update service for detecting updates: http://lavender Set the intranet statistics server: http://lavender (example: http://IntranetUpd01) User Configuration Administrative Templates Control Panel/Display Policy Setting Winning GPO Hide Screen Saver tab Enabled IT-Lockout Password protect the screen saver Enabled IT-Lockout Screen Saver Enabled IT-Lockout Screen Saver executable name Enabled IT-Lockout Screen Saver executable name sstext3d.scr Policy Setting Winning GPO Screen Saver timeout Enabled IT-Lockout Number of seconds to wait to enable the Screen Saver Seconds: 1800 System/Power Management Policy Setting Winning GPO Prompt for password on resume from hibernate / suspend Enabled IT-Lockout
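    One way to see what is actually in force, regardless of what Group Policy Modeling reports, is to read the policy attributes that the winning domain-linked GPO stamps onto the domain object itself, since password and lockout settings for domain accounts are only ever taken from there. A hedged sketch using the Python ldap3 module follows; the DC name, credentials and base DN are placeholders to substitute.
        from ldap3 import Server, Connection, ALL, NTLM

        # Placeholder connection details: use a real DC and an account that can read the domain head.
        server = Server("dc01.nclgs.local", get_info=ALL)
        conn = Connection(server, user="NCLGS\\someadmin", password="secret",
                          authentication=NTLM, auto_bind=True)

        base = "DC=NCLGS,DC=local"
        attrs = ["minPwdLength", "pwdHistoryLength", "maxPwdAge", "minPwdAge",
                 "lockoutThreshold", "lockoutDuration", "lockOutObservationWindow"]
        conn.search(base, "(objectClass=domainDNS)", attributes=attrs)

        # maxPwdAge/minPwdAge/lockoutDuration come back as negative 100-nanosecond
        # intervals (divide by -864000000000 for days); the other values are plain integers.
        print(conn.entries[0])
    If minPwdLength there is not 6 and maxPwdAge does not work out to 180 days, then the Password Policy GPO is not the one actually winning at the domain level (or the change has not reached the PDC emulator yet), which would explain the policy not applying to anyone.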

    Read the article

  • Nginx + PHP-FPM executes script, but returns 404

    - by MorfiusX
    I am using Nginx + PHP-FPM to run a Wordpress based site. I have a URL that should return dynamically generated JSON data for use with the DataTables jQuery plugin. The data is returned properly, but with a return code of 404. I think this is a Nginx config issue, but I haven't been able to figure out why. The script 'getTable.php' works properly on the production version of the site which is currently using Apache. Anyone know how I can get this to work on Nginx? URL: http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php SERVER: CentOS 6 + Varnish (caching disabled for development) + Nginx + PHP-FPM + Wordpress + W3 Total Cache Nginx Config: server { # Server Parameters listen 127.0.0.1:8082; server_name dev.iloveskydiving.org; root /var/www/dev.iloveskydiving.org/html; access_log /var/www/dev.iloveskydiving.org/logs/access.log main; error_log /var/www/dev.iloveskydiving.org/logs/error.log error; index index.php; # Rewrite minified CSS and JS files location ~* \.(css|js) { if (!-f $request_filename) { rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last; expires max; } } # Set a variable to work around the lack of nested conditionals set $cache_uri $request_uri; # Don't cache uris containing the following segments if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|wp-.*\.php|index\.php|wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $cache_uri "no cache"; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp\-postpass|wordpress_logged_in") { set $cache_uri 'no cache'; } # Use cached or actual file if they exists, otherwise pass request to WordPress location / { try_files /wp-content/w3tc/pgcache/$cache_uri/_index.html $uri $uri/ /index.php?q=$uri&$args; } # Cache static files for as long as possible location ~* \.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { try_files $uri =404; expires max; access_log off; } # Deny access to hidden files location ~* /\.ht { deny all; access_log off; log_not_found off; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include /etc/nginx/fastcgi_params; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_intercept_errors on; fastcgi_pass unix:/var/lib/php-fpm/php-fpm.sock; # port where FastCGI processes were spawned } } Fast CGI Params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; UPDATE: Upon further digging, it looks like Nginx is generating the 404 and 
PHP-FPM is executing the script properly and returning a 200. UPDATE: Here are the contents of the script: <?php /** * Connect to Wordpres */ require(dirname(__FILE__) . '/../../../../wp-blog-header.php'); /** * Define temporary array */ $aaData = array(); $aaData['aaData'] = array(); /** * Execute Query */ $query = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => '-1' ) ); foreach ($query->posts as $post) { array_push( $aaData['aaData'], array( $post->post_title ) ); } /** * Echo JSON encoded array */ echo json_encode($aaData);
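    A quick way to tell whether the 404 status is coming from the nginx location blocks or from PHP itself is to compare the status code with the body. A small check, assuming the Python requests module is available:
        import json
        import requests  # assumed available; any HTTP client will do

        url = "http://dev.iloveskydiving.org/wp-content/plugins/ils-workflow/lib/getTable.php"
        resp = requests.get(url)

        print("status      :", resp.status_code)
        print("content-type:", resp.headers.get("Content-Type"))

        try:
            data = json.loads(resp.text)
            print("body is valid JSON with", len(data.get("aaData", [])), "rows")
        except ValueError:
            print("body is not JSON: the 404 page came from nginx (try_files), not from the script")
    If the body is valid JSON alongside the 404, the status is most likely being set inside PHP: including wp-blog-header.php runs WordPress's template loader, which marks any URL it does not recognise (such as a direct plugin path) as a 404 even though the script output is fine. Loading wp-load.php instead, or calling status_header(200) after the include, is the usual workaround; worth verifying against the Apache setup before relying on it.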

    Read the article

  • Layout does not show up after Activity launch

    - by Peter
    I have an activity which invokes an onItemClick and launches another activity. This activity has a static layout(for testing purposes), but only thing I see is black(I even set the text color to white to check it out). My listener list.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> arg0, View arg1, int arg2,long arg3) { //create new intent Intent item = new Intent(getApplicationContext(), Item.class); // Close all views before launching logged //item.putExtra("name", ((TextView)arg1).getText()); //item.putExtra("uid", user_id); item.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); startActivity(item); // Close Login Screen onPause(); } }); My activity is here(not much to do it just launches the layout) public class Item extends Activity{ protected SQLiteDatabase myDB=null; protected String name; protected int uid; TextView yeart,year,itemname,comment,commentt,value,valuet,curr,currt; protected void onStart(Bundle savedInstanceState){ super.onCreate(savedInstanceState); setContentView(R.layout.herp); /*name=getIntent().getStringExtra("name"); uid=Integer.parseInt(getIntent().getStringExtra("uid")); itemname=(TextView) findViewById(R.id.itemName);//itemname.setText(name); year=(TextView) findViewById(R.id.itemYear); yeart=(TextView) findViewById(R.id.year); comment=(TextView) findViewById(R.id.itemComments); commentt=(TextView) findViewById(R.id.comments); curr=(TextView) findViewById(R.id.itemcurrent); currt=(TextView) findViewById(R.id.current); value=(TextView) findViewById(R.id.itemValue); valuet=(TextView) findViewById(R.id.value);*/ Database openHelper = new Database(this); myDB = openHelper.getReadableDatabase(); myDB=SQLiteDatabase.openDatabase("data/data/com.example.login2/databases/aeglea", null, SQLiteDatabase.OPEN_READONLY); }} And finally my XML layout <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" > <TextView android:id="@+id/itemName" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="asdasd" android:gravity="center" android:layout_marginBottom="10px" android:textAppearance="?android:attr/textAppearanceLarge" android:textColor="#fff" /> <TextView android:id="@+id/current" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Current" android:textSize="20dp" android:textStyle="bold" /> <TextView android:id="@+id/itemcurrent" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="asdasd" /> <TextView android:id="@+id/year" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Year" android:textSize="20dp" android:textStyle="bold" /> <TextView android:id="@+id/itemYear" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="asdasd" /> <TextView android:id="@+id/value" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Value" android:textSize="20dp" android:textStyle="bold" /> <TextView android:id="@+id/itemValue" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="TextView" /> <TextView android:id="@+id/comments" android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Comments" android:textSize="20dp" android:textStyle="bold" /> <TextView android:id="@+id/itemComments" android:layout_width="match_parent" 
android:layout_height="wrap_content" android:text="TextView" /> </LinearLayout>

    Read the article

  • How to correctly use DERIVE or COUNTER in munin plugins

    - by Johan
    I'm using munin to monitor my server. I've been able to write plugins for it, but only if the graph type is GAUGE. When I try COUNTER or DERIVE, no data is logged or graphed. The plugin i'm currently stuck on is for monitoring bandwidth usage, and is as follows: /etc/munin/plugins/bandwidth2 #!/bin/sh if [ "$1" = "config" ]; then echo 'graph_title Bandwidth Usage 2' echo 'graph_vlabel Bandwidth' echo 'graph_scale no' echo 'graph_category network' echo 'graph_info Bandwidth usage.' echo 'used.label Used' echo 'used.info Bandwidth used so far this month.' echo 'used.type DERIVE' echo 'used.min 0' echo 'remain.label Remaining' echo 'remain.info Bandwidth remaining this month.' echo 'remain.type DERIVE' echo 'remain.min 0' exit 0 fi cat /var/log/zen.log The contents of /var/log/zen.log are: used.value 61.3251953125 remain.value 20.0146484375 And the resulting database is: <!-- Round Robin Database Dump --><rrd> <version> 0003 </version> <step> 300 </step> <!-- Seconds --> <lastupdate> 1269936605 </lastupdate> <!-- 2010-03-30 09:10:05 BST --> <ds> <name> 42 </name> <type> DERIVE </type> <minimal_heartbeat> 600 </minimal_heartbeat> <min> 0.0000000000e+00 </min> <max> NaN </max> <!-- PDP Status --> <last_ds> 61.3251953125 </last_ds> <value> NaN </value> <unknown_sec> 5 </unknown_sec> </ds> <!-- Round Robin Archives --> <rra> <cf> AVERAGE </cf> <pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds --> <params> <xff> 5.0000000000e-01 </xff> </params> <cdp_prep> <ds> <primary_value> NaN </primary_value> <secondary_value> NaN </secondary_value> <value> NaN </value> <unknown_datapoints> 0 </unknown_datapoints> </ds> </cdp_prep> <database> <!-- 2010-03-28 09:15:00 BST / 1269764100 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:20:00 BST / 1269764400 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:25:00 BST / 1269764700 --> <row><v> NaN </v></row> <snip> The value for last_ds is correct, it just doesn't seem to make it into the actual database. If I change DERIVE to GAUGE, it works as expected. munin-run bandwidth2 outputs the contents of /var/log/zen.log I've been all over the (sparse) docs for munin plugins, and can't find my mistake. Modifying an existing plugin didn't work for me either.
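    One detail that would explain exactly this behaviour: rrdtool only accepts whole numbers for COUNTER and DERIVE data sources, so the fractional values in /var/log/zen.log (61.3251953125) are stored as unknown, which matches the NaN rows in the dump, while GAUGE happily takes floats. DERIVE also graphs a per-second rate, so "used" really wants to be fed as an ever-growing integer counter (for example bytes), and "remain" arguably belongs as a GAUGE. A hedged Python rewrite of the plugin along those lines, assuming zen.log is changed to hold a raw byte counter:
        #!/usr/bin/env python3
        # bandwidth2: emit integer byte counters so the DERIVE data source accepts them.
        import sys

        LOG = "/var/log/zen.log"   # assumed to now contain e.g. "used.value 65851234304" (bytes)

        if len(sys.argv) > 1 and sys.argv[1] == "config":
            print("graph_title Bandwidth Usage 2")
            print("graph_vlabel bytes per second")
            print("graph_category network")
            print("graph_info Bandwidth usage.")
            print("used.label Used")
            print("used.info Bandwidth used so far this month.")
            print("used.type DERIVE")
            print("used.min 0")
            sys.exit(0)

        with open(LOG) as fh:
            for line in fh:
                key, _, value = line.partition(" ")
                if key == "used.value":
                    # truncate to an integer; a fractional value here is what makes rrdtool store NaN
                    print("used.value %d" % int(float(value)))
    The same shell plugin should also start working if the log file simply contains integers for any field declared DERIVE or COUNTER.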

    Read the article

  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. the difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this: SMART error logs shows no errors. Normally when a disk starts acting up, SMART my pass, but it still records a read-error in the error log. SMART self-test (smartctl -t long /dev/sda) completes without errors. re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed. dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors. What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again, this is a disk forensic question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are logs and such. This is the smartctl -a output: smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family Device Model: ST3120026A Serial Number: 5JT1CLQM Firmware Version: 3.06 User Capacity: 120,034,123,776 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 6 ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2 Local Time is: Mon Jul 1 21:18:33 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 24) The self-test routine was aborted by the host. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 85) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 050 046 006 Pre-fail Always - 47766662 3 Spin_Up_Time 0x0003 097 096 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 10 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 31 7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 820305 9 Power_On_Hours 0x0032 048 048 000 Old_age Always - 46373 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 605 194 Temperature_Celsius 0x0022 036 065 000 Old_age Always - 36 195 Hardware_ECC_Recovered 0x001a 050 046 000 Old_age Always - 47766662 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 196 000 Old_age Always - 6 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Aborted by host 80% 46361 - # 2 Extended offline Completed without error 00% 46358 - # 3 Short offline Completed without error 00% 12046 - # 4 Extended offline Completed without error 00% 10472 - # 5 Short offline Completed without error 00% 10471 - # 6 Short offline Completed without error 00% 10471 - # 7 Short offline Completed without error 00% 6770 - # 8 Extended offline Aborted by host 90% 5958 - # 9 Extended offline Aborted by host 90% 5951 - #10 Short offline Completed without error 00% 5024 - #11 Extended offline Aborted by host 80% 5024 - #12 Short offline Completed without error 00% 3697 - #13 Short offline Completed without error 00% 237 - #14 Short offline Completed without error 00% 145 - #15 Short offline Completed without error 00% 69 - #16 Extended offline Completed without error 00% 68 - #17 Short offline Completed without error 00% 66 - #18 Short offline Completed without error 00% 49 - #19 Short offline Completed without error 00% 29 - #20 Short offline Completed without error 00% 29 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors): [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00 [1755091.211166] end_request: I/O error, dev sda, sector 150908216
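    Two hedged observations: the dmesg errors carry a host-level result (DID_BAD_TARGET), which usually points at the controller, cable or the drive dropping off the bus rather than at bad media, consistent with SMART finding nothing wrong with the platters, and the nonzero UDMA_CRC_Error_Count raw value hints the same way. For a read test that, unlike a single dd, survives errors and records exactly where they happen, a sketch like this (Python, run as root against the whole device) may help:
        #!/usr/bin/env python3
        # Read-scan of a block device that keeps going after an error and logs the
        # byte and sector offset of every failed read. Reads only; needs root.
        import os, sys

        DEV = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
        CHUNK = 1024 * 1024                   # 1 MiB per read

        fd = os.open(DEV, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)
        os.lseek(fd, 0, os.SEEK_SET)

        offset = 0
        errors = 0
        while offset < size:
            try:
                data = os.read(fd, min(CHUNK, size - offset))
                if not data:
                    break
                offset += len(data)
            except OSError as exc:
                errors += 1
                print("read error at byte %d (sector %d): %s" % (offset, offset // 512, exc))
                offset += CHUNK               # skip past the bad region and carry on
                os.lseek(fd, offset, os.SEEK_SET)

        os.close(fd)
        print("scanned %d bytes, %d read errors" % (offset, errors))
    If this scan also comes back clean while the machine still throws errors under real load, swapping the cable or port, or trying the disk in another box, would help settle whether the disk or the controller side is at fault before binning anything.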

    Read the article

  • ERROR: Failed to build gem native extension on Mavericks

    - by Kyle Decot
    I'm attempting to run bundle in my Rails project on OSX 10.9. It fails when getting to the pg gem with this error: Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /Users/kyledecot/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb checking for pg_config... no No pg_config... trying anyway. If building fails, please try again with --with-pg-config=/path/to/pg_config checking for libpq-fe.h... yes checking for libpq/libpq-fs.h... yes checking for pg_config_manual.h... yes checking for PQconnectdb() in -lpq... yes checking for PQconnectionUsedPassword()... yes checking for PQisthreadsafe()... yes checking for PQprepare()... yes checking for PQexecParams()... yes checking for PQescapeString()... yes checking for PQescapeStringConn()... yes checking for PQescapeLiteral()... yes checking for PQescapeIdentifier()... yes checking for PQgetCancel()... yes checking for lo_create()... yes checking for pg_encoding_to_char()... yes checking for pg_char_to_encoding()... yes checking for PQsetClientEncoding()... yes checking for PQlibVersion()... yes checking for PQping()... yes checking for PQsetSingleRowMode()... yes checking for rb_encdb_alias()... yes checking for rb_enc_alias()... no checking for rb_thread_call_without_gvl()... yes checking for rb_thread_call_with_gvl()... yes checking for rb_thread_fd_select()... yes checking for rb_w32_wrap_io_handle()... no checking for PGRES_COPY_BOTH in libpq-fe.h... no checking for PGRES_SINGLE_TUPLE in libpq-fe.h... no checking for PG_DIAG_TABLE_NAME in libpq-fe.h... no checking for struct pgNotify.extra in libpq-fe.h... yes checking for unistd.h... yes checking for ruby/st.h... yes creating extconf.h creating Makefile make "DESTDIR=" compiling gvl_wrappers.c clang: warning: argument unused during compilation: '-fno-fast-math' compiling pg.c clang: warning: argument unused during compilation: '-fno-fast-math' pg.c:272:9: warning: implicit declaration of function 'PQlibVersion' is invalid in C99 [-Wimplicit-function-declaration] return INT2NUM(PQlibVersion()); ^ In file included from pg.c:48: In file included from ./pg.h:17: In file included from /Users/kyledecot/.rvm/rubies/ruby-2.0.0-p247/include/ruby-2.0.0/ruby.h:33: /Users/kyledecot/.rvm/rubies/ruby-2.0.0-p247/include/ruby-2.0.0/ruby/ruby.h:1167:21: note: instantiated from: # define INT2NUM(v) INT2FIX((int)(v)) ^ pg.c:272:9: note: instantiated from: return INT2NUM(PQlibVersion()); ^ pg.c:272:17: note: instantiated from: return INT2NUM(PQlibVersion()); ^ pg.c:375:48: error: use of undeclared identifier 'PQPING_OK' rb_define_const(rb_mPGconstants, "PQPING_OK", INT2FIX(PQPING_OK)); ^ pg.c:375:56: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_OK", INT2FIX(PQPING_OK)); ^ pg.c:377:52: error: use of undeclared identifier 'PQPING_REJECT' rb_define_const(rb_mPGconstants, "PQPING_REJECT", INT2FIX(PQPING_REJECT)); ^ pg.c:377:60: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_REJECT", INT2FIX(PQPING_REJECT)); ^ pg.c:379:57: error: use of undeclared identifier 'PQPING_NO_RESPONSE' rb_define_const(rb_mPGconstants, "PQPING_NO_RESPONSE", INT2FIX(PQPING_NO_RESPONSE)); ^ pg.c:379:65: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_NO_RESPONSE", INT2FIX(PQPING_NO_RESPONSE)); ^ pg.c:381:56: error: use of undeclared identifier 'PQPING_NO_ATTEMPT' rb_define_const(rb_mPGconstants, "PQPING_NO_ATTEMPT", INT2FIX(PQPING_NO_ATTEMPT)); ^ pg.c:381:64: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_NO_ATTEMPT", 
INT2FIX(PQPING_NO_ATTEMPT)); ^ 1 warning and 4 errors generated. make: *** [pg.o] Error 1 Gem files will remain installed in /Users/kyledecot/.rvm/gems/ruby-2.0.0-p247@skateboxes/gems/pg-0.17.0 for inspection. Results logged to /Users/kyledecot/.rvm/gems/ruby-2.0.0-p247@skateboxes/gems/pg-0.17.0/ext/gem_make.out An error occurred while installing pg (0.17.0), and Bundler cannot continue. Make sure that `gem install pg -v '0.17.0'` succeeds before bundling.
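    The build log shows "checking for pg_config... no", so the gem fell back to whatever libpq headers it could find, and the PQPING_* constants it then fails on were only added in PostgreSQL 9.1, which suggests the libpq-fe.h actually being included is older than the library the configure checks linked against. A small, assumption-laden check of which pg_config installations are visible (the Homebrew and Postgres.app paths below are guesses):
        #!/usr/bin/env python3
        # List the pg_config binaries a native-extension build could pick up, with versions.
        import shutil, subprocess

        candidates = [
            "pg_config",                                                         # whatever is on PATH
            "/usr/local/bin/pg_config",                                          # typical Homebrew location (assumed)
            "/Applications/Postgres.app/Contents/Versions/latest/bin/pg_config"  # typical Postgres.app location (assumed)
        ]

        for cand in candidates:
            path = shutil.which(cand)
            if not path:
                print("not found:", cand)
                continue
            version = subprocess.check_output([path, "--version"]).decode().strip()
            includedir = subprocess.check_output([path, "--includedir"]).decode().strip()
            print("%s -> %s (headers in %s)" % (path, version, includedir))
    If one of these reports 9.1 or newer, pointing the gem at it is the usual fix, for example gem install pg -v '0.17.0' -- --with-pg-config=/usr/local/bin/pg_config, or bundle config build.pg --with-pg-config=/usr/local/bin/pg_config; the paths here are illustrative.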

    Read the article

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .Net client application: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host." By sporadic I mean that it might occur zero, once every few days, or a half-dozen times a day for some users. It will never occur for the first web service call of a user. And the subsequent (usually the same) call will always work immediately after the failure. The failures happen across a variety of methods in the service and usually happens between 15-20 seconds (according to the log) from the time of the request. Looking in the IIS site log for the particular call will show one or the other of the following windows error codes: 121: The semaphore timeout period has elapsed. 1236: The network connection was aborted by the local system. Some additional environment details: Running on internal network web farm consisting of two servers running IIS7 on Windows Server 2008 OS. These problems did not occur when running in an older IIS6 web farm of three servers running on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMWare virtual machines, not sure if that is a surprise anymore or not. The web service is a .Net 2.0/3.5 compiled .asmx web service that has its own application pool (.Net 2.0, integrated pipeline). Only has Windows Authentication enabled. We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled. This is used for a portion of our ERP system. Have tried using the same and different application pool - no effect on the error. This site isn't hit as often as the primary site and has never had an error. As mentioned, the error will only happen when called from the .Net client - not from other applications. The client application is always creating a new web service object for each request and setting the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Those users running in Citrix doesn't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and are located in the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx). I've checked the OS logs to see if I can see any problems but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check out the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not, I'll have to check out the logs of previous errors. I'm probably grasping for straws but the details: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 7/8/2009 1:39:50 PM Event ID: 5159 Task Category: Filtering Platform Connection Level: Information Keywords: Audit Failure User: N/A Computer: is071019.<**.net Description: The Windows Filtering Platform has blocked a bind to a local port. 
Application Information: Process ID: 1260 Application Name: \device\harddiskvolume1\windows\system32\svchost.exe Network Information: Source Address: 0.0.0.0 Source Port: 54802 Protocol: 17 Filter Information: Filter Run-Time ID: 0 Layer Name: Resource Assignment Layer Run-Time ID: 36 I've also tried to use Failed Request Tracing in IIS7 but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked out the DNS and any NIC settings are correct so there is no 'flapping'. Everything pans out. I'm not sure that they checked out any domain controller servers though to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge on what to investigate from the networking side of things - although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.
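    The combination of "a connection that was expected to be kept alive was closed by the server", failures never happening on a user's first call, and only remote subnets being affected is consistent with an idle HTTP keep-alive connection being dropped somewhere between those subnets (firewall, VPN or virtual-switch idle timeout) and the .NET client then reusing the dead connection. A crude probe, sketched in Python with the requests module and a placeholder URL, run from an affected client machine; a 401 response is fine here because only the reuse of the underlying TCP connection matters, and note that the client library may silently reopen an obviously dead socket, so treat a clean run as inconclusive and a connection error as a strong signal:
        #!/usr/bin/env python3
        # Probe for an intermediary dropping idle keep-alive connections to the web farm.
        import time
        import requests  # assumed available

        URL = "http://webfarm.example.local/Service.asmx"   # hypothetical service URL
        session = requests.Session()                        # reuses one pooled keep-alive connection

        for idle in (5, 15, 30, 60, 120, 240):
            session.get(URL, timeout=30)                    # establish/refresh the connection
            time.sleep(idle)                                # let it sit idle
            try:
                r = session.get(URL, timeout=30)
                print("idle %3ds: ok (HTTP %d)" % (idle, r.status_code))
            except requests.exceptions.ConnectionError as exc:
                print("idle %3ds: connection dropped -> %s" % (idle, exc))
    If drops only appear above a particular idle interval, that points at a device between the client subnets and the farm rather than at IIS or the service code.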

    Read the article

  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends: Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test. Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test. The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb: smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.8 family Device Model: ST3250823AS Serial Number: 3ND1GNBC Firmware Version: 3.03 User Capacity: 250,059,350,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 7 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Sun Apr 25 13:15:34 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 84) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 047 039 006 Pre-fail Always - 168450357 3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 33 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 9 7 Seek_Error_Rate 0x000f 087 060 030 Pre-fail Always - 654745480 9 Power_On_Hours 0x0032 055 055 000 Old_age Always - 40141 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 51 194 Temperature_Celsius 0x0022 037 062 000 Old_age Always - 37 (0 17 0 0) 195 Hardware_ECC_Recovered 0x001a 047 039 000 Old_age Always - 168450357 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 TA_Increase_Count 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 40131 - # 2 Extended offline Completed: read failure 30% 40129 379795511 # 3 Short offline Completed without error 00% 40084 - # 4 Short offline Completed without error 00% 40060 - # 5 Short offline Completed without error 00% 40036 - # 6 Short offline Completed without error 00% 40013 - # 7 Short offline Completed without error 00% 39990 - # 8 Extended offline Completed without error 00% 39977 - # 9 Short offline Completed without error 00% 39919 - #10 Short offline Completed without error 00% 39895 - #11 Short offline Completed without error 00% 39872 - #12 Short offline Completed without error 00% 39848 - #13 Short offline Completed without error 00% 39824 - #14 Short offline Completed without error 00% 39801 - #15 Extended offline Completed without error 00% 39789 - #16 Short offline Completed without error 00% 39754 - #17 Short offline Completed without error 00% 39732 - #18 Short offline Completed without error 00% 39707 - #19 Short offline Completed without error 00% 39683 - #20 Short offline Completed without error 00% 39660 - #21 Short offline Completed without error 00% 39636 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?
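    On the skipped tests: smartd declines to start a new scheduled test while a previous self-test still shows progress remaining, so the roughly 84-minute extended test overlapping the four-times-daily offline tests is the likely source of the skip messages. On prediction: the indicators that reportedly correlate most strongly with imminent failure are the reallocated, pending and offline-uncorrectable sector counts plus read failures in the self-test log, and the extended test here has already failed with a read error at LBA 379795511, which is generally a stronger signal than the overall PASSED verdict. A small hedged script to pull just those indicators out of smartctl:
        #!/usr/bin/env python3
        # Pull the handful of SMART indicators worth watching out of `smartctl -a`.
        # Attribute names match the output shown above; smartctl's nonzero exit codes
        # are a status bitmask, so they are not treated as a failure here.
        import subprocess, sys

        DEV = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"
        WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
                 "Offline_Uncorrectable", "UDMA_CRC_Error_Count"}

        proc = subprocess.run(["smartctl", "-a", DEV], capture_output=True, text=True)

        for line in proc.stdout.splitlines():
            cols = line.split()
            if len(cols) >= 10 and cols[1] in WATCH:
                raw = cols[9]
                flag = "  <-- nonzero, keep an eye on it" if raw not in ("0", "-") else ""
                print("%-24s raw=%s%s" % (cols[1], raw, flag))
            if "read failure" in line:
                print("self-test failure :", line.strip())
    Moving the long test to a quieter slot (or dropping the frequent offline tests) should stop the skips, but given the failed extended test and the drive's age, planning for replacement is probably the safer reading.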

    Read the article

  • claimsResponse Always Return Null

    - by Chirag Pandya
    hello i have a following code in asp.net. i have used DotNetOpenAuth.dll for openID. the code is under protected void openidValidator_ServerValidate(object source, ServerValidateEventArgs args) { // This catches common typos that result in an invalid OpenID Identifier. args.IsValid = Identifier.IsValid(args.Value); } protected void loginButton_Click(object sender, EventArgs e) { if (!this.Page.IsValid) { return; // don't login if custom validation failed. } try { using (OpenIdRelyingParty openid = this.createRelyingParty()) { IAuthenticationRequest request = openid.CreateRequest(this.openIdBox.Text); // This is where you would add any OpenID extensions you wanted // to include in the authentication request. ClaimsRequest objClmRequest = new ClaimsRequest(); objClmRequest.Email = DemandLevel.Request; objClmRequest.Country = DemandLevel.Request; request.AddExtension(objClmRequest); // Send your visitor to their Provider for authentication. request.RedirectToProvider(); } } catch (ProtocolException ex) { this.openidValidator.Text = ex.Message; this.openidValidator.IsValid = false; } } protected void Page_Load(object sender, EventArgs e) { this.openIdBox.Focus(); if (Request.QueryString["clearAssociations"] == "1") { Application.Remove("DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.ApplicationStore"); UriBuilder builder = new UriBuilder(Request.Url); builder.Query = null; Response.Redirect(builder.Uri.AbsoluteUri); } OpenIdRelyingParty openid = this.createRelyingParty(); var response = openid.GetResponse(); if (response != null) { switch (response.Status) { case AuthenticationStatus.Authenticated: // This is where you would look for any OpenID extension responses included // in the authentication assertion. var claimsResponse = response.GetExtension<ClaimsResponse>(); State.ProfileFields = claimsResponse; // Store off the "friendly" username to display -- NOT for username lookup State.FriendlyLoginName = response.FriendlyIdentifierForDisplay; // Use FormsAuthentication to tell ASP.NET that the user is now logged in, // with the OpenID Claimed Identifier as their username. FormsAuthentication.RedirectFromLoginPage(response.ClaimedIdentifier, false); break; case AuthenticationStatus.Canceled: this.loginCanceledLabel.Visible = true; break; case AuthenticationStatus.Failed: this.loginFailedLabel.Visible = true; break; // We don't need to handle SetupRequired because we're not setting // IAuthenticationRequest.Mode to immediate mode. ////case AuthenticationStatus.SetupRequired: //// break; } } } private OpenIdRelyingParty createRelyingParty() { OpenIdRelyingParty openid = new OpenIdRelyingParty(); int minsha, maxsha, minversion; if (int.TryParse(Request.QueryString["minsha"], out minsha)) { openid.SecuritySettings.MinimumHashBitLength = minsha; } if (int.TryParse(Request.QueryString["maxsha"], out maxsha)) { openid.SecuritySettings.MaximumHashBitLength = maxsha; } if (int.TryParse(Request.QueryString["minversion"], out minversion)) { switch (minversion) { case 1: openid.SecuritySettings.MinimumRequiredOpenIdVersion = ProtocolVersion.V10; break; case 2: openid.SecuritySettings.MinimumRequiredOpenIdVersion = ProtocolVersion.V20; break; default: throw new ArgumentOutOfRangeException("minversion"); } } return openid; } for above code i am always getting var claimsResponse = response.GetExtension<ClaimsResponse>(); i am always getting claimsResponse= null. what is the reason why it happen. is there any requirement which is required for openid like domain validation for RelyingParty?? 
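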
    Please give me an answer as soon as possible.

    Read the article

  • Print driver installs failing

    - by Kasius
    All of the Windows 7 64-bit Enterprise machines in my organization are failing to install a good number of printer drivers that previously installed without issue. This only happens with printer drivers. And not with all printer drivers. Just some. Network drivers, video drivers, etc. have had no problems. Here is part of setupapi.dev.log for a Dymo LabelWriter printer driver that is failing to install: dvi: {Plug and Play Service: Device Install for USBPRINT\DYMOLABELWRITER_450_TURBO\6&538F51D&0&USB001} ump: Creating Install Process: DrvInst.exe 09:36:58.071 ndv: Infpath=C:\Windows\INF\oem0.inf ndv: DriverNodeName=dymo.inf:DYMO.NTamd64.6.0:LW_450_TURBO_VISTA:8.1.0.363:usbprint\dymolabelwriter_450_aa08 ndv: DriverStorepath=C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf ndv: Building driver list from driver node strong name... dvi: Searching for hardware ID(s): dvi: usbprint\dymolabelwriter_450_aa08 dvi: dymolabelwriter_450_aa08 inf: Opened PNF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) dvi: Selected driver installs from section [LW_450_TURBO_VISTA] in 'c:\windows\system32\driverstore\filerepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf'. dvi: Class GUID of device changed to: {4d36e979-e325-11ce-bfc1-08002be10318}. dvi: Set selected driver complete. ndv: {Core Device Install} 09:36:58.133 inf: Opened INF: 'C:\Windows\INF\oem0.inf' ([strings]) inf: Saved PNF: 'C:\Windows\INF\oem0.PNF' (Language = 0409) dvi: {DIF_ALLOW_INSTALL} 09:36:58.164 dvi: Using exported function 'ClassInstall32' in module 'C:\Windows\system32\ntprint.dll'. dvi: Class installer == ntprint.dll,ClassInstall32 dvi: No CoInstallers found dvi: Class installer: Enter 09:36:58.164 dvi: Class installer: Exit dvi: Default installer: Enter 09:36:58.180 dvi: Default installer: Exit dvi: {DIF_ALLOW_INSTALL - exit(0xe000020e)} 09:36:58.180 ndv: Installing files... dvi: {DIF_INSTALLDEVICEFILES} 09:36:58.180 dvi: Class installer: Enter 09:36:58.180 inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) !!! dvi: Class installer: failed(0x00000490)! !!! dvi: Error 1168: Element not found. dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063 ndv: Device install status=0x00000490 ndv: Performing device install final cleanup... ! ndv: Queueing up error report since device installation failed... ndv: {Core Device Install - exit(0x00000490)} 09:37:22.063 dvi: {DIF_DESTROYPRIVATEDATA} 09:37:22.063 dvi: Class installer: Enter 09:37:22.063 dvi: Class installer: Exit dvi: Default installer: Enter 09:37:22.063 dvi: Default installer: Exit dvi: {DIF_DESTROYPRIVATEDATA - exit(0xe000020e)} 09:37:22.063 ump: Server install process exited with code 0x00000490 09:37:22.063 ump: {Plug and Play Service: Device Install exit(00000490)} Notice these lines in particular: !!! dvi: Class installer: failed(0x00000490)! !!! dvi: Error 1168: Element not found. dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063 ndv: Device install status=0x00000490 From what I have read, the "Element not found" error should be accompanied by an event describing what element was not found. 
The error that appears in Device Manager is "The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner." It appears to be signed fine though. It has an accompanying .CAT file and worked previously. And when installing, the following messages are logged in setupapi.dev.log: sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE} 09:36:56.277 inf: Opened INF: 'C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf' ([strings]) sig: {_VERIFY_FILE_SIGNATURE} 09:36:56.292 sig: Key = dymo.inf sig: FilePath = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf sig: Catalog = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\DYMO.CAT sig: Success: File is signed in catalog. sig: {_VERIFY_FILE_SIGNATURE exit(0x00000000)} 09:36:56.355 sto: Validating driver package files against catalog 'DYMO.CAT'. sto: Driver package is valid. sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE exit(0x00000000)} 09:36:56.402 sto: Verified driver package signature: sto: Digital Signer Score = 0x0D000005 sto: Digital Signer Name = Microsoft Windows Hardware Compatibility Publisher Now here's where it gets strange. If I take it off the domain, it installs fine. But it doesn't seem to have anything to do with Group Policy. I moved the machine to an OU that blocks inheritance, ran a gpupdate, ran rsop.msc to verify, and tried again. And it still didn't work. Likewise, I removed a machine from the domain, manually set all of the domain Group Policy settings in gpedit.msc, and tried that way, and it worked fine. So it seems like the Group Policy settings are irrelevant. What other domain-related issue could be causing this though? Any ideas on what to try next would be greatly appreciated. I'm not sure where to go from here. Thanks!

    Read the article

  • Quartz Thread Execution Parallel or Sequential?

    - by vikas
    We have a Quartz-based scheduler application which runs about 1000 jobs per minute, evenly distributed across the seconds of each minute, i.e. about 16-17 jobs per second. Ideally, these 16-17 jobs should fire at the same time; however, the first statement of the job's execute method, which simply logs the time of execution, is being called very late. e.g. let us assume we have 1000 jobs scheduled per minute from 05:00 to 05:04. So, ideally the job which is scheduled at 05:03:50 should have logged the first statement of the execute method at 05:03:50; however, it is doing it at about 05:06:38. I have tracked down the time taken by the scheduled job, which comes to around 15-20 milliseconds. This scheduled job is fast enough because we just send a message on an ActiveMQ queue. We have specified the number of threads of Quartz to be 100 and even tried increasing it to 200 and more, but no gain. One more thing we noticed is that logs from the scheduler come out sequentially after the first minute, i.e. [Quartz_Worker_28] <Some log statement> .. .. [Quartz_Worker_29] <Some log statement> .. .. [Quartz_Worker_30] <Some log statement> .. .. So it suggests that after some time Quartz is running threads almost sequentially. Maybe this is happening due to the time taken to notify the persistence store (which is a separate Postgres database in this case) of job completion, and/or context switching. What can be the reason behind this strange behavior? EDIT: More detailed log: [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] fired job [<job_name>] scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000 [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute begin--------- ScheduledLocateJob with key: <job_name> started at Fri Jul 06 10:08:37 EDT 2012 [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement> [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement> [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement> [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute end--------- ScheduledLocateJob with key: <job_name> ended at Fri Jul 06 10:08:37 EDT 2012 [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] completed firing job [<job_name>] with resulting trigger instruction code: DO NOTHING. Next scheduled at: 06-07-2012 10:34:53.000 I am suspicious of this section of the above log (scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000), because this job was scheduled for 10:04:53, but it fired at 10:08:33 and still Quartz didn't consider it a misfire. Shouldn't it be a misfire?
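    A back-of-the-envelope check (plain Python arithmetic, with the per-firing job-store overhead as an assumed figure) of whether the worker pool can even be the bottleneck:
        # Numbers from the question; the job-store overhead per firing is an assumption
        # made purely to illustrate where the time could be going.
        jobs_per_minute     = 1000
        job_runtime_s       = 0.020    # 15-20 ms measured for the job body itself
        jobstore_overhead_s = 0.075    # assumed: trigger acquisition + completion update in Postgres

        busy_seconds_per_minute = jobs_per_minute * (job_runtime_s + jobstore_overhead_s)
        threads_needed = busy_seconds_per_minute / 60.0

        print("busy seconds per minute: %.1f" % busy_seconds_per_minute)   # 95.0
        print("worker threads needed  : %.1f" % threads_needed)            # about 1.6
    Even with a generous per-firing overhead, a couple of busy threads would keep up, so a 100-thread pool is nowhere near the limit; the serialized-looking logs point instead at the JDBC job store, where trigger acquisition and completion updates are serialized through database locking (the QRTZ_LOCKS row), so profiling the Postgres round-trips is probably more useful than adding threads. As for the misfire question: org.quartz.jobStore.misfireThreshold defaults to 60000 ms, so a trigger firing three and a half minutes late would normally be treated as a misfire unless that threshold has been raised in the scheduler properties.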

    Read the article

< Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >