Search Results

Search found 2183 results on 88 pages for 'dbconsole certificate expiry'.

  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
    Introduction

    Secure Sockets Layer (SSL) can be used to secure the connection between the middle tier “client”, WebLogic Server (WLS) in this case, and the Oracle database server. Data between WLS and the database can be encrypted. The server can be authenticated, by validating a certificate from the server, so you have proof that the database can be trusted. The client can be authenticated so that the database only accepts connections from clients that it trusts.

    Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates. By using it correctly, clear-text passwords can be eliminated from the JDBC configuration, and client/server configuration can be simplified by sharing the wallet across multiple datasources.

    There is a very good Oracle Technical White Paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1]. The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. The information in this article is a guide to the steps that need to be taken for the variety of available options; use the links above for details.

    SSL from the driver to the database server is basically turned on by specifying a protocol of “tcps” in the URL. However, there is a fair amount of setup needed. Also remember that there is a performance overhead.

    Creating the wallets

    The common use cases are:
    1. “data encryption and server-only authentication”, requiring just a trust store, or
    2. “data encryption and authentication of both tiers” (client and server), requiring a trust store and a key store.

    It is recommended to use the auto-login wallet type so that clear-text passwords are not needed in the datasource configuration to open the wallet. The store type for an auto-login wallet is “SSO” (Single Sign On), not “JKS” or “PKCS12” as in [LINK2]. The file name is “cwallet.sso”.

    Wallets are created using the orapki tool. They need to be created based on the usage (encryption and/or authentication). This is discussed in detail in [LINK1] in Appendix B or in the Advanced Security Administrator’s Guide of the Database documentation.

    Database Server Configuration

    It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION. These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false).

    The Oracle Listener must also be configured to use the TCPS protocol. The recommended port is 2484.

    LISTENER = (ADDRESS_LIST= (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484)))

    WebLogic Server Classpath

    The WebLogic Server CLASSPATH must have three additional security files. The files that need to be added to the WLS CLASSPATH are:

    $MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar
    $MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar
    $MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar

    One way to do this is to add them to the PRE_CLASSPATH environment variable for use with the standard WebLogic scripts.

    Setting the Oracle Security Provider

    It’s necessary to enable the Oracle PKI provider on the client side. This can either be done statically by updating the java.security file under the JRE, or dynamically by setting it in a WLS startup class using java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider(), 3); see the full example of the startup class in [LINK2].

    Datasource Configuration

    When creating a WLS datasource, set the PROTOCOL in the URL to tcps as in the following:

    jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice)))

    For encryption and server authentication, use the datasource connection properties:
    - javax.net.ssl.trustStore=location of wallet file on the client
    - javax.net.ssl.trustStoreType=”SSO”

    For client authentication, use the datasource connection properties:
    - javax.net.ssl.keyStore=location of wallet file on the client
    - javax.net.ssl.keyStoreType=”SSO”

    Note that the driver connection properties for the wallet require a file name, not a directory name.

    Active GridLink ONS over SSL

    For completeness, there is another SSL usage for WLS datasources. The communication with the Oracle Notification Service (ONS) for load-balancing information and node up/down events can also use SSL.

    Create an auto-login wallet and use the wallet on the client and server. The following is a sample sequence to create a test wallet for use with ONS:

    orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet
    orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet
    orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet

    On the database server side, it’s necessary to define the wallet file directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start. When configuring an Active GridLink datasource, the connection to the ONS must be defined. In addition to the host and port, the wallet file directory must be specified. By not giving a password, an SSO wallet is assumed.

    Summary

    To use SSL with the Oracle thin driver without any clear-text passwords, use an SSO Oracle Wallet. SSL support in the Oracle thin driver is available starting in 10g Release 2.

    Read the article

  • How do I dig myself out of this DEEP hole? [closed]

    - by user74847
    I may be a bit biased in the way I word this, but any opinions and suggestions are welcome. I should start by saying I have an MSc in CS and a degree in new media, plus 6 years' experience, and I'm probably around a middleweight developer. I started a web development company with my friend from uni a year ago; there was a 4-month gap in the middle where I went miles away to work on a big project. I've since returned and picked up where we left off. A year on, though, I find I'm still staying up til 5am and getting up at 9, sometimes going 2-3 days without sleep. While I was away I was working 9-5 and struggling to keep up with doing stuff for my clients 8 hours ahead, after work, so things stagnated. We currently have about 12 active projects, with one other part-time developer and a full-time freelancer who is dealing with one of our major projects. I am solely responsible for concurrently developing 2 big sites similar to Gumtree in functionality, at the same time as about 5-6+ small WordPress-based 5-10 page sites. A lot of the content isn't in yet, or the client is delaying, so I chop and change project every other day, which does my head in. Is it reasonable to expect myself to remember the intricate details of each project when I come back to it a week later, and remember the details of a task which hasn't been written down? My business partner seems to think so. Or am I just forgetful? I'm particularly bad at estimating timescales, which doesn't help; added to that, a lot of the technologies I am using are new to me (a Magento site took weeks to theme rather than days and was full of bugs, even after 1000's of Google searches and hours reading forums). I'm still trying to learn and find the best CMS for us to use and getting my head around the likes of Bootstrap and jQuery, cPanel / Linux (we just got a blank VPS for me to set up with no experience); even installing an SSL certificate caused everyone's mail clients to go down, which was more stress for me to sort out. I find the pressure of the workload and timescales, and trying to learn this stuff so fast, is beginning to turn me against my career path. The fact that I never seem to get anything done really winds up my business partner, and I've come to associate him with the stress and pain of the whole situation, especially when I get berated or a look that says "oh you retard" when I forget something. Even today I spent hours learning how a particular ThemeForest theme worked with WordPress and how I could twist it to work for our particular needs; on the surface I had done no work, and that triggered a 30-minute tirade of anger and stress and questioning of what I had done from my business partner. Had I taken too long to work on that? Should I have done it in 2 hours instead of 6? I told him I would take 2 hours. I was wrong. I feel like I'm running myself into the ground. My sleeping pattern has got so bad that when I'm working I'm half asleep and making mistakes, my eyes are constantly purple underneath, and I literally fall asleep at my desk. It's affecting my social life too; I've not slept more than lightly for the last year and grind through impossible code puzzles in my half-sleep, which keeps me awake when I'm already exhausted. Plus the work is rushed and buggy when it does get done, so it drags on into the next project. I also procrastinate quite badly, pacing the living room, looking out the window when I'm alone for three days straight in the flat, and start to get cabin fever, which means I do even less work and the negative feedback loop continues.
    I get told I'm the only one with the problem when I say that I can't work from home any more, and examples of other freelancers get brought up. An office wouldn't bring any extra cash in to the company, but I'm convinced that moving more than 2 meters away from my bed to go to "work" would get me working; at the moment I feel guilty, like I should be working 24-7. It is important that we do all this work to raise enough cash to get our business to the next level, but every month still feels like a struggle to pay the rent (there is about £20K coming in by Jan) and I have to borrow money from friends often to buy food or get a taxi to a meeting, so it is vital the money keeps coming in. (I'm also 20 mins late for nearly all meetings, but that's a different issue.) Have you experienced anything similar? How can I deal with the issues I've raised? Is it realistic to develop 10 sites at once? How can I improve my relationship with my business partner? Do you struggle to work at home? How do you deal with that? I think if I don't get my life on track by Feb I will seriously consider giving it all up, but that seems like such a waste. Any ideas!? I need help! Thanks.

    Read the article

  • Configuring multiple distinct WCF binding configurations causes an exception to be thrown

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all is fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction, or is there a better way to solve this?
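
    Not from the original question: a minimal self-hosted sketch (the question's services are IIS-hosted, where addresses come from the site bindings, so this only illustrates the shape of the configuration) of keeping the two security modes apart by giving each binding its own endpoint and listener address. The contract, service type, certificate subject name, and ports below are placeholders, not values taken from the question.

    using System;
    using System.Security.Cryptography.X509Certificates;
    using System.ServiceModel;

    [ServiceContract]
    public interface IService2
    {
        [OperationContract]
        string Ping();
    }

    public class Service2 : IService2
    {
        public string Ping() { return "pong"; }
    }

    class HostProgram
    {
        static void Main()
        {
            // Endpoint for the authenticated services: message-level username credentials
            // carried over TCP transport security.
            var userNameBinding = new NetTcpBinding();
            userNameBinding.Security.Mode = SecurityMode.TransportWithMessageCredential;
            userNameBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

            // Endpoint for the pre-logon ("public") service: transport security with Windows credentials.
            var publicBinding = new NetTcpBinding();
            publicBinding.Security.Mode = SecurityMode.Transport;
            publicBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

            var host = new ServiceHost(typeof(Service2));

            // TransportWithMessageCredential needs a service certificate for the TLS transport;
            // the subject name below is a placeholder.
            host.Credentials.ServiceCertificate.SetCertificate(
                StoreLocation.LocalMachine, StoreName.My,
                X509FindType.FindBySubjectName, "myservercert");

            // Each security configuration gets its own listener address/port.
            host.AddServiceEndpoint(typeof(IService2), userNameBinding,
                "net.tcp://localhost:8081/Service2.svc");
            host.AddServiceEndpoint(typeof(IService2), publicBinding,
                "net.tcp://localhost:8082/Service2Public.svc");

            host.Open();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }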

    Read the article

  • Ajax call to a WCF Windows service over SSL (HTTPS)

    - by bpatrick100
    I have a Windows service which exposes an endpoint over HTTP. Again, this is a Windows service (not a web service hosted in IIS). I then call methods from this endpoint, using JavaScript/Ajax. Everything works perfectly, and this is the code I'm using in my Windows service to create the endpoint: //Create host object WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri("http://192.168.0.100:1213")); //Add Https Endpoint WebHttpBinding binding = new WebHttpBinding(); webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty); //Add MEX Behavior and EndPoint ServiceMetadataBehavior metadataBehavior = new ServiceMetadataBehavior(); metadataBehavior.HttpGetEnabled = true; webServiceHost.Description.Behaviors.Add(metadataBehavior); webServiceHost.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName, MetadataExchangeBindings.CreateMexHttpBinding(), "mex"); webServiceHost.Open(); Now, my goal is to get this same model working over SSL (HTTPS, not HTTP). So, I have followed the guidance of several MSDN pages, like the following: http://msdn.microsoft.com/en-us/library/ms733791(VS.100).aspx I have used makecert.exe to create a test cert called "bpCertTest". I have then used netsh.exe to configure my port (1213) with the test cert I created, all with no problem. Then, I've modified the endpoint code in my Windows service to be able to work over HTTPS as follows: //Create host object WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri("https://192.168.0.100:1213")); //Add Https Endpoint WebHttpBinding binding = new WebHttpBinding(); binding.Security.Mode = WebHttpSecurityMode.Transport; binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate; webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty); webServiceHost.Credentials.ServiceCertificate.SetCertificate("CN=bpCertTest", StoreLocation.LocalMachine, StoreName.My); //Add MEX Behavior and EndPoint ServiceMetadataBehavior metadataBehavior = new ServiceMetadataBehavior(); metadataBehavior.HttpsGetEnabled = true; webServiceHost.Description.Behaviors.Add(metadataBehavior); webServiceHost.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName, MetadataExchangeBindings.CreateMexHttpsBinding(), "mex"); webServiceHost.Open(); The service creates the endpoint successfully, recognizes my cert in the SetCertificate() call, and the service starts up and runs successfully. Now, the problem is my JavaScript/Ajax call cannot communicate with the service over HTTPS. I simply get a generic communication error (12031). So, as a test, I changed the port I was calling in the JavaScript to some other random port, and I get the same error - which tells me that I'm obviously not even reaching my service over HTTPS. I'm at a complete loss at this point; I feel like everything is in place, and I just can't see what the problem is. If anyone has experience with this scenario, please provide your insight and/or solution! Thanks!
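
    Not part of the original post: a small hedged sketch of one way to check whether the HTTPS listener is reachable from .NET at all, independently of the browser. The URL is the one from the post; the blanket certificate-validation override exists only because "bpCertTest" is a self-signed test certificate and must not be kept in production code.

    using System;
    using System.IO;
    using System.Net;

    class HttpsProbe
    {
        static void Main()
        {
            // Accept the self-signed test certificate for this check only.
            ServicePointManager.ServerCertificateValidationCallback =
                (sender, cert, chain, errors) => true;

            // Address taken from the post; adjust host, port, and path as needed.
            var request = (HttpWebRequest)WebRequest.Create("https://192.168.0.100:1213/");

            // The binding in the post asks for a client certificate
            // (HttpClientCredentialType.Certificate); if required, one can be attached:
            // request.ClientCertificates.Add(new X509Certificate2(@"c:\certs\client.pfx", "password"));

            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    string body = reader.ReadToEnd();
                    Console.WriteLine("HTTP {0}, {1} characters returned", (int)response.StatusCode, body.Length);
                }
            }
            catch (WebException ex)
            {
                // Distinguishes "cannot connect at all" from "connected but got an HTTP error".
                Console.WriteLine("Request failed: {0} - {1}", ex.Status, ex.Message);
            }
        }
    }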

    Read the article

  • WCF, Rampart, ADFS2 and SAML Interop issue

    - by user317647
    Hi, I'm working on establishing interoperability between .NET WCF 3.5 and Axis2/Rampart using ADFS2 as the STS and using SAML authentication. Initially I used Axis 1.4.1/Rampart 1.4 but in an attempt to rule out issues relating to WS-* standards compatbility have also created a duplicate environment running Axis 1.5.1/Rampart 1.5. Both envionment use Eclipse 3.5.1 (Galileo)/Tomcat 5.5 for the Java service side. My objective is: WCF-ADFS2-SAML token-Axis2/Rampart Using Kerberos authentication to obtain a SAML token from ADFS2 and propagating this to Rampart. Much progress has been made so far, but the error I'm now getting on Rampart is as follows (on both versions 1.4 & 1.5): [ERROR] General security error (SAML token security failure) org.apache.axis2.AxisFault: General security error (SAML token security failure) Caused by: org.apache.ws.security.WSSecurityException: General security error (SAML token security failure) at org.apache.ws.security.saml.SAMLUtil.getSAMLKeyInfo(SAMLUtil.java:169) at org.apache.ws.security.saml.SAMLUtil.getSAMLKeyInfo(SAMLUtil.java:73) at org.apache.ws.security.processor.DerivedKeyTokenProcessor.extractSecret(DerivedKeyTokenProcessor.java:170) at org.apache.ws.security.processor.DerivedKeyTokenProcessor.handleToken(DerivedKeyTokenProcessor.java:74) at org.apache.ws.security.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:326) at org.apache.ws.security.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:243) at org.apache.rampart.RampartEngine.process(RampartEngine.java:144) After building source versions for Rampart (just 1.4 so far) I've traced this problem to the following source code: SAMUtil.java Element e = samlSubj.getKeyInfo(); X509Certificate[] certs = null; try { KeyInfo ki = new KeyInfo(e, null); if (ki.containsX509Data()) { X509Data data = ki.itemX509Data(0); XMLX509Certificate certElem = null; if (data != null && data.containsCertificate()) { certElem = data.itemCertificate(0); } if (certElem != null) { X509Certificate cert = certElem.getX509Certificate(); certs = new X509Certificate[1]; certs[0] = cert; return new SAMLKeyInfo(assertion, certs); } } The line ki.containsX509Data() above return false and fails. The value from the Element e is as follows: CN=Root Agency -147027885241304943914470421251724308948 JMYzUkmrT13JoYj2pGN5o/vxpGq8bKFXI1m18iEFu+5rF0wA4MYURGIEWE9/zg1apgjElQHus5qb4ZRCzg7IHyENCGq7um2w1SXxPzstoMsZ7oZ83Uq08lDdNV51QGzCCOdCi+YizKT7AJ1B6gaplxMnFEJ8TlnzFBCavMxSCho= The attempt to obtain the X509 data above is failing even when it appears in the message? (IssuerSerial). All references I've seen so far indicate that the style of X509 reference is supported by Rampart and WSS4J (default?!). This key reference is the certificate that ADFS2 has used to encrypt the message. Any help at all would be greatly appreciated! Thanks Jason

    Read the article

  • How to get a BinarySecurityToken into the WCF SOAP request

    - by Mr Bell
    I need to sign my soap request to a 3rd party. The provided an example what the call should look like. And I am trying, rather unsuccessfully to make this call with wcf. I need to make a wcf soap call where the header contains BinarySecurityToken, Signature, and SecurityTokenReference. Here is the example they sent me (with some of the values omitted) I have a certificate for signing, but I cant for the life of me figure out how to make this work <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Header><wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:BinarySecurityToken EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3" wsu:Id="SecurityToken-..omitted.." xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">..omitted..</wsse:BinarySecurityToken> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#Body"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>..omitted...</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> ..omitted.. </ds:SignatureValue> <ds:KeyInfo><wsse:SecurityTokenReference><wsse:Reference URI="#SecurityToken-..omitted.." ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/></wsse:SecurityTokenReference></ds:KeyInfo></ds:Signature></wsse:Security></soapenv:Header><soapenv:Body wsu:Id="Body"><in0 xmlns="http://test.3rdParty.com">123</in0></soapenv:Body></soapenv:Envelope>

    Read the article

  • The HTTP request is unauthorized with client authentication scheme 'Anonymous'

    - by user1246429
    I use the First Data web service API and call it from a C# client with WCF, but it shows this error message: "System.ServiceModel.Security.MessageSecurityException: The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was ''. --- System.Net.WebException: The remote server returned an error: (401) Unauthorized. at System.Net.HttpWebRequest.GetResponse() at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout) --- End of inner exception stack trace --- Server stack trace: at System.ServiceModel.Channels.HttpChannelUtilities.ValidateAuthentication(HttpWebRequest request, HttpWebResponse response, WebException responseException, HttpChannelFactory factory) at System.ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout) at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout) at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation) at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) at com.firstdata.globalgatewaye4.api.ServiceSoap.SendAndCommit(SendAndCommitRequest request) at com.firstdata.globalgatewaye4.api.ServiceSoapClient.com.firstdata.globalgatewaye4.api.ServiceSoap.SendAndCommit (SendAndCommitRequest request) at com.firstdata.globalgatewaye4.api.ServiceSoapClient.SendAndCommit(Transaction SendAndCommitSource)" My web.config info is below: <behaviors> <endpointBehaviors> <behavior name="FDGGBehavior"> <clientCredentials> <clientCertificate findValue="WS1909642825._.1" storeLocation="LocalMachine" x509FindType="FindBySubjectName" storeName="TrustedPeople" /> <serviceCertificate> <authentication certificateValidationMode="PeerTrust" /> </serviceCertificate> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> <binding name="ServiceSoap" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportWithMessageCredential"> <transport clientCredentialType="Certificate" proxyCredentialType="Ntlm" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> <endpoint address="https://api.globalgatewaye4.firstdata.com/transaction/v11" binding="basicHttpBinding"
    bindingConfiguration="ServiceSoap" contract="com.firstdata.globalgatewaye4.api.ServiceSoap" name="ServiceSoap" behaviorConfiguration="FDGGBehavior" /> How can I resolve this?
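
    Not from the original post, and only a guess at the cause: with security mode TransportWithMessageCredential and a UserName message credential, the gateway credentials normally have to be supplied on the generated proxy's ClientCredentials before calling SendAndCommit. A minimal hedged sketch; the type names come from the post's generated proxy, and the gateway ID and password values are placeholders.

    using com.firstdata.globalgatewaye4.api;

    class GatewayExample
    {
        static void Send(Transaction transaction)
        {
            // "ServiceSoap" matches the endpoint name in the web.config from the post.
            var client = new ServiceSoapClient("ServiceSoap");

            // Placeholders: use the gateway ID and password issued for the terminal.
            client.ClientCredentials.UserName.UserName = "your-gateway-id";
            client.ClientCredentials.UserName.Password = "your-gateway-password";

            var response = client.SendAndCommit(transaction);
        }
    }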

    Read the article

  • What is "Either the application has not called WSAStartup, or WSAStartup failed"

    - by Am1rr3zA
    I am developing a program which connects to a web server over the network. Everything works fine until I try to use a dongle to protect my software. The dongle has a network feature too, and its API works over the network infrastructure. When I added the dongle-checking code to my program, I got this error: "Either the application has not called WSAStartup, or WSAStartup failed". I don't have any idea what is going on. I have put the block of code which encounters the exception here. The scenario in which I get the exception is: I log in to the program (everything works fine), then I unplug the dongle, the program stops and asks for the dongle, I plug the dongle back in and try to log in, but I get the exception on the line response = (HttpWebResponse)request.GetResponse(); DongleService unikey = new DongleService(); checkDongle = unikey.isConnectedNow(); if (checkDongle) { isPass = true; this.username = txtbxUser.Text; this.pass = txtbxPass.Text; this.IP = combobxServer.Text; string uri = @"https://" + combobxServer.Text + ":5002num_events=1"; request = (HttpWebRequest)WebRequest.Create(uri); request.Proxy = null; request.Credentials = new NetworkCredential(this.username, this.pass); ServicePointManager.ServerCertificateValidationCallback = ((sender, certificate, chain, sslPolicyErrors) => true); response = (HttpWebResponse)request.GetResponse(); Properties.Settings.Default.User = txtbxUser.Text; int index = _servers.FindIndex(p => p == combobxServer.Text); if (index == -1) { _servers.Add(combobxServer.Text); Config_Save.SaveServers(_servers); _servers = Config_Save.LoadServers(); } Properties.Settings.Default.Server = combobxServer.Text; // also save the password if (checkBox1.CheckState.ToString() == "Checked") Properties.Settings.Default.Pass = txtbxPass.Text; Properties.Settings.Default.settingLoginUsername = this.username; Properties.Settings.Default.settingLoginPassword = this.pass; Properties.Settings.Default.settingLoginPort = "5002"; Properties.Settings.Default.settingLoginIP = this.IP; Properties.Settings.Default.isLogin = "guest"; Properties.Settings.Default.Save(); response.Close(); request.Abort(); this.isPass = true; this.Close(); } else { MessageBox.Show("Please Insert Correct Dongle!", "Dongle Error", MessageBoxButtons.OK, MessageBoxIcon.Stop); }

    Read the article

  • Storing session data in MySQL using PHP is not retrieving the data properly from the tables

    - by Ronedog
    I have a problem retrieving some data from the $_SESSION using php and mysql. I've commented out the line in php.ini that tells the server to use the "file" to store the session info so my database will be used. I have a class that I use to write the information to the database and its working fine. When the user passes their credentials the class gets instantiated and the $_SESSION vars get set, then the user gets redirected to the index page. The index.php page includes the file where the db session class is, which when instantiated calles session_start() and the session variables should be in $_SESSION, but when I do var_dump($_SESSION) there is nothing in the array. However, when I look at the data in mysql, all the session information is in there. Its acting like session_start() has not been called, but by instantiating the class it is. Any idea what could be wrong? Here's the HTML: <?php include_once "classes/phpsessions_db/class.dbsession.php"; //used for sessions var_dump($_SESSION); ?> <html> . . . </html> Here's the dbsession class: <?php error_reporting(E_ALL); class dbSession { function dbSession($gc_maxlifetime = "", $gc_probability = "", $gc_divisor = "") { // if $gc_maxlifetime is specified and is an integer number if ($gc_maxlifetime != "" && is_integer($gc_maxlifetime)) { // set the new value @ini_set('session.gc_maxlifetime', $gc_maxlifetime); } // if $gc_probability is specified and is an integer number if ($gc_probability != "" && is_integer($gc_probability)) { // set the new value @ini_set('session.gc_probability', $gc_probability); } // if $gc_divisor is specified and is an integer number if ($gc_divisor != "" && is_integer($gc_divisor)) { // set the new value @ini_set('session.gc_divisor', $gc_divisor); } // get session lifetime $this->sessionLifetime = ini_get("session.gc_maxlifetime"); //Added by AARON. cancel the session's auto start,important, without this the session var's don't show up on next pg. 
session_write_close(); // register the new handler session_set_save_handler( array(&$this, 'open'), array(&$this, 'close'), array(&$this, 'read'), array(&$this, 'write'), array(&$this, 'destroy'), array(&$this, 'gc') ); register_shutdown_function('session_write_close'); // start the session @session_start(); } function stop() { $new_sess_id = $this->regenerate_id(true); session_unset(); session_destroy(); return $new_sess_id; } function regenerate_id($return_val=false) { // saves the old session's id $oldSessionID = session_id(); // regenerates the id // this function will create a new session, with a new id and containing the data from the old session // but will not delete the old session session_regenerate_id(); // because the session_regenerate_id() function does not delete the old session, // we have to delete it manually //$this->destroy($oldSessionID); //ADDED by aaron // returns the new session id if($return_val) { return session_id(); } } function open($save_path, $session_name) { // global $gf; // $gf->debug_this($gf, "GF: Opening Session"); // change the next values to match the setting of your mySQL database $mySQLHost = "localhost"; $mySQLUsername = "user"; $mySQLPassword = "pass"; $mySQLDatabase = "sessions"; $link = mysql_connect($mySQLHost, $mySQLUsername, $mySQLPassword); if (!$link) { die ("Could not connect to database!"); } $dbc = mysql_select_db($mySQLDatabase, $link); if (!$dbc) { die ("Could not select database!"); } return true; } function close() { mysql_close(); return true; } function read($session_id) { $result = @mysql_query(" SELECT session_data FROM session_data WHERE session_id = '".$session_id."' AND http_user_agent = '".$_SERVER["HTTP_USER_AGENT"]."' AND session_expire > '".time()."' "); // if anything was found if (is_resource($result) && @mysql_num_rows($result) > 0) { // return found data $fields = @mysql_fetch_assoc($result); // don't bother with the unserialization - PHP handles this automatically return unserialize($fields["session_data"]); } // if there was an error return an empty string - this HAS to be an empty string return ""; } function write($session_id, $session_data) { // global $gf; // first checks if there is a session with this id $result = @mysql_query(" SELECT * FROM session_data WHERE session_id = '".$session_id."' "); // if there is if (@mysql_num_rows($result) > 0) { // update the existing session's data // and set new expiry time $result = @mysql_query(" UPDATE session_data SET session_data = '".serialize($session_data)."', session_expire = '".(time() + $this->sessionLifetime)."' WHERE session_id = '".$session_id."' "); // if anything happened if (@mysql_affected_rows()) { // return true return true; } } else // if this session id is not in the database { // $gf->debug_this($gf, "inside dbSession, trying to write to db because session id was NOT in db"); $sql = " INSERT INTO session_data ( session_id, http_user_agent, session_data, session_expire ) VALUES ( '".serialize($session_id)."', '".$_SERVER["HTTP_USER_AGENT"]."', '".$session_data."', '".(time() + $this->sessionLifetime)."' ) "; // insert a new record $result = @mysql_query($sql); // if anything happened if (@mysql_affected_rows()) { // return an empty string return ""; } } // if something went wrong, return false return false; } function destroy($session_id) { // deletes the current session id from the database $result = @mysql_query(" DELETE FROM session_data WHERE session_id = '".$session_id."' "); // if anything happened if (@mysql_affected_rows()) { // return true 
return true; } // if something went wrong, return false return false; } function gc($maxlifetime) { // it deletes expired sessions from database $result = @mysql_query(" DELETE FROM session_data WHERE session_expire < '".(time() - $maxlifetime)."' "); } } //End of Class $session = new dbsession(); ?>

    Read the article

  • WCF: HTTPS on basicHttpBinding

    - by Andrew Kalashnikov
    Hello, colleagues. I've written a WCF service. Unfortunately, I have to use basicHttpBinding for PHP callers, but I need security, so I've decided to use transport security with HTTPS. I host it on IIS 6.0. I enabled SSL in IIS and assigned a certificate, but when I try to open the service through a browser I get a standard error. What's wrong? Please help; I haven't been able to resolve this for several hours. <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BindingConfiguration1" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/> <security mode="Transport"> <transport clientCredentialType="None" /> </security> </binding> </basicHttpBinding> </bindings> <services> <service name="RegistratorService.Registrator" behaviorConfiguration="RegistratorService.Service1Behavior"> <endpoint address="https://192.168.0.8/MyService.svc" binding="basicHttpBinding" contract="RegistratorService.IRegistrator" bindingConfiguration="BindingConfiguration1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="RegistratorService.Service1Behavior"> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors>
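
    Not part of the original question: once the service does open over HTTPS, a .NET test client has to mirror the Transport security mode of the basicHttpBinding. A minimal hedged sketch; the address is taken from the configuration above, and it assumes the RegistratorService.IRegistrator contract type is available to the client.

    using System;
    using System.ServiceModel;

    class RegistratorTestClient
    {
        static void Main()
        {
            // Client-side binding must match the service: Transport security, no client credential.
            var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;

            // Address taken from the service configuration in the question.
            var address = new EndpointAddress("https://192.168.0.8/MyService.svc");

            var factory = new ChannelFactory<RegistratorService.IRegistrator>(binding, address);
            var channel = factory.CreateChannel();

            // ... call service operations on the channel here ...

            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }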

    Read the article

  • Why can't I get my Azure, WCF, REST, SSL project working? What am I doing wrong?

    - by Mark E
    I'm trying to get SSL, WCF and REST under Azure, but the page won't even load. Here are the steps I followed: 1) I mapped the www.mydomain.com CNAME to my azuresite.cloudapp.net 2) I procured an SSL certificate for www.mydomain.com and properly installed it at my azuresite.cloudapp.net hosted service project 3) I deployed my WCF REST service to Azure and started it. Below is my web.config configuration. The http (non-https) binding version worked correctly. My service URL, http: //www.mydomain .com/service.svc/sessions worked just fine. When I deployed the project with the web.config below, enabling SSL, https: //www.mydomain .com/service.svc/sessions does not even pull up at all. What am I doing wrong? <system.serviceModel> <services> <service name="Service"> <!-- non-https worked just fine --> <!-- <endpoint address="" binding="webHttpBinding" contract="IService" behaviorConfiguration="RestFriendly"> </endpoint> --> <!-- This does not work, what am I doing wrong? --> <endpoint address="" binding="webHttpBinding" bindingConfiguration="TransportSecurity" contract="IService" behaviorConfiguration="RestFriendly"> </endpoint> </service> </services> <behaviors> <endpointBehaviors> <behavior name="RestFriendly"> <webHttp></webHttp> </behavior> </endpointBehaviors> </behaviors> <bindings> <webHttpBinding> <binding name="TransportSecurity"> <security mode="Transport"> <transport clientCredentialType="None"/> </security> </binding> </webHttpBinding> </bindings> </system.serviceModel>

    Read the article

  • Delphi LoadLibrary failing to find a DLL in another directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both aps load the DLL with this code: FOOLib := LoadLibrary('foo.dll'); ... If FOOLib <> 0 then begin FOOProc := GetProcAddress(FOOLib , 'xInjectCert'); FOOProc(myHttpRequest, Data, CertName); end; It works great for foo.exe, as the dll is right there. sadapp.exe fails to load the library, so FOOLib is 0, and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production, it the cert is missing, do the connection fails. Obviously, we should have fully-qualified the path to the DLL. Without going into a lot of details, there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options that I've overlooked. While we can't change the code (for this release), we CAN tweak the installer. I've found that placing c:\fooapp into the path works. As does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find it that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? This is the sort of "magic bullet" that I'm looking for. I know I can: Modify the windows path, probably in the installer. That's ugly. Add a second copy of the DLL, directly into the unwantedstepchild folder. Also ugly Delay the project while we code and test a proper fix. Unacceptable. Other? Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!

    Read the article

  • Exchange web service C# code to send email from home

    - by KK
    Is it possible to write C# code as below and send email using my home network? I have a valid user name and password on that Exchange server. Is there any configuration that I can set to achieve this? By the way, this code below works when I run it within the office network; I want this code to work when run from any network. Thank you for your help, guys. String cMSExchangeWebServiceURL = (String)System.Configuration.ConfigurationSettings.AppSettings["MSExchangeWebServiceURL"]; String cEmail = (String)System.Configuration.ConfigurationSettings.AppSettings["Cemail"]; String cPassword = (String)System.Configuration.ConfigurationSettings.AppSettings["Cpassword"]; String cTo = (String)System.Configuration.ConfigurationSettings.AppSettings["CTo"]; ExchangeServiceBinding esb = new ExchangeServiceBinding(); esb.Timeout = 1800000; esb.AllowAutoRedirect = true; esb.UseDefaultCredentials = false; esb.Credentials = new NetworkCredential(cEmail, cPassword); esb.Url = cMSExchangeWebServiceURL; ServicePointManager.ServerCertificateValidationCallback += delegate(object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) { return true; }; // Create a CreateItem request object CreateItemType request = new CreateItemType(); // Setup the request: // Indicate that we only want to send the message. No copy will be saved. request.MessageDisposition = MessageDispositionType.SendOnly; request.MessageDispositionSpecified = true; // Create a message object and set its properties MessageType message = new MessageType(); message.Subject = subject; message.Body = new TestOutgoingEmailServer.com.cogniti.mail1.BodyType(); message.Body.BodyType1 = BodyTypeType.HTML; message.Body.Value = body; message.ToRecipients = new EmailAddressType[3]; message.ToRecipients[0] = new EmailAddressType(); //message.ToRecipients[1] = new EmailAddressType(); //message.ToRecipients[2] = new EmailAddressType(); message.ToRecipients[0].EmailAddress = "[email protected]"; message.ToRecipients[0].RoutingType = "SMTP"; //message.CcRecipients = new EmailAddressType[1]; //message.CcRecipients[0] = new EmailAddressType(); //message.CcRecipients[0].EmailAddress = toEmailAddress.ElementAt(1).ToString(); //message.CcRecipients[0].RoutingType = "SMTP"; //There are some more properties in MessageType object //you can set all according to your requirement // Construct the array of items to send request.Items = new NonEmptyArrayOfAllItemsType(); request.Items.Items = new ItemType[1]; request.Items.Items[0] = message; // Call the CreateItem EWS method. CreateItemResponseType response = esb.CreateItem(request);
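
    Not from the original post: a hedged sketch of a quick reachability check that reads the same app settings. If it fails from home with a name-resolution or connect error, the configured EWS URL is only reachable inside the office network and an externally published URL (or a VPN) would be needed; if it fails with a 401/403, the URL is reachable and the credentials or authentication scheme are the problem.

    using System;
    using System.Configuration;
    using System.Net;

    class EwsReachabilityCheck
    {
        static void Main()
        {
            // Same settings the mailing code reads.
            string url = ConfigurationManager.AppSettings["MSExchangeWebServiceURL"];
            string user = ConfigurationManager.AppSettings["Cemail"];
            string pass = ConfigurationManager.AppSettings["Cpassword"];

            // Accept the server certificate unconditionally, as the original code does (test use only).
            ServicePointManager.ServerCertificateValidationCallback = (s, cert, chain, errors) => true;

            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Credentials = new NetworkCredential(user, pass);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("EWS endpoint reachable, HTTP " + (int)response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                // NameResolutionFailure / ConnectFailure: not reachable from this network.
                // ProtocolError (401/403): reachable, but authentication is the issue.
                Console.WriteLine("Failed: " + ex.Status + " - " + ex.Message);
            }
        }
    }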

    Read the article

  • Java HTTP Request Occasionally Hangs

    - by behrk2
    Hello Everyone, For the majority of the time, my HTTP Requests work with no problem. However, occasionally they will hang. The code that I am using is set up so that if the request succeeds (with a response code of 200 or 201), then call screen.requestSucceeded(). If the request fails, then call screen.requestFailed(). When the request hangs, however, it does so before one of the above methods are called. Is there something wrong with my code? Should I be using some sort of best practice to prevent any hanging? The following is my code. I would appreciate any help. Thanks! HttpConnection connection = (HttpConnection) Connector.open(url + connectionParameters); connection.setRequestMethod(method); connection.setRequestProperty("WWW-Authenticate", "OAuth realm=api.netflix.com"); if (method.equals("POST")) { connection.setRequestProperty("Content-type", "application/x-www-form-urlencoded"); } int responseCode = connection.getResponseCode(); System.out.println("RESPONSE CODE: " + responseCode); if (connection instanceof HttpsConnection) { HttpsConnection secureConnection = (HttpsConnection) connection; String issuer = secureConnection.getSecurityInfo() .getServerCertificate().getIssuer(); UiApplication.getUiApplication().invokeLater( new DialogRunner( "Secure Connection! Certificate issued by: " + issuer)); } if (responseCode != 200 && responseCode != 201) { screen.requestFailed("Unexpected response code: " + responseCode); connection.close(); return; } String contentType = connection.getHeaderField("Content-type"); ByteArrayOutputStream baos = new ByteArrayOutputStream(); InputStream responseData = connection.openInputStream(); byte[] buffer = new byte[20000]; int bytesRead = 0; while ((bytesRead = responseData.read(buffer)) > 0) { baos.write(buffer, 0, bytesRead); } baos.close(); connection.close(); screen.requestSucceeded(baos.toByteArray(), contentType); } catch (IOException ex) { screen.requestFailed(ex.toString()); }

    Read the article

  • Which network protocol to use for lightweight notification of remote apps?

    - by Chris Thornton
    I have this situation.... Client-initiated SOAP 1.1 communication between one server and let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, https, etc.. They can be anywhere, and usually have their own firewalls, NAT routers, etc... They're truely external, not just remote corporate offices. They could be in a corporate/campus network, DSL/Cable, even Dialup. Client uses Delphi (2005 + SOAP fixes from 2007), and the server is C#, but from an architecture/design standpoint, that shouldn't matter. Currently, clients push new data to the server and pull new data from the server on 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method, to see if there is new data to pull. If 0, it sleeps for another 15 min and checks again. We're trying to get that down to 7 seconds. If this were an internal app, with one or just a few dozen clients, we'd write a cilent "listener" soap service, and would push data to it. But since they're external, sit behind their own firewalls, and sometimes private networks behind NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their messagecount every 10 seconds, is going to be 1000/sec messages that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack. I don't think it's practical to have the server send soap messages to the client (push) as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as: 1) Is there a way for the client to make a request for GetMessageCount() via Soap 1.1, and get the response, and then perhaps, "stay on the line" for perhaps 5-10 minutes to get additional responses in case new data arrives? i.e the server says "0", then a minute later in response to some SQL trigger (the server is C# on Sql Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"? 2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request? 3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request, which would include info for "oh by the way, in case the answer changes in the next hour, ping me at this address...". Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open, simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris
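
    Not part of the original question: the "stay on the line" idea in option 1 is essentially long polling. A rough server-side sketch in C# (method and storage names are made up; in the real system this would sit behind the existing SOAP GetMessageCount operation) of holding a request open until data arrives or a window expires:

    using System;
    using System.Threading;

    public class MessageCountService
    {
        // Holds the client's request open for up to maxWait, re-checking periodically,
        // so an idle client costs one open connection instead of a poll every few seconds.
        public int GetMessageCountLongPoll(string clientId, TimeSpan maxWait)
        {
            DateTime deadline = DateTime.UtcNow + maxWait;
            while (DateTime.UtcNow < deadline)
            {
                int count = QueryMessageCount(clientId);   // hypothetical data-access call
                if (count > 0)
                    return count;                          // answer as soon as something arrives

                Thread.Sleep(TimeSpan.FromSeconds(2));     // cheap server-side re-check interval
            }
            return 0;                                      // nothing arrived within the window
        }

        private int QueryMessageCount(string clientId)
        {
            // Placeholder: the real service would query the SQL Server message table here.
            return 0;
        }
    }

    Each waiting client still ties up a request for the duration, which is exactly the server-sizing concern raised above; an asynchronous or event-driven implementation on the server would reduce that cost, but the client-side contract (call, wait, get a count) stays the same.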

    Read the article

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully setup my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it. My question isn't really about getting it working. I'm more confused about the certificates and where they need to go. Here are my two programs that do work: Server: #!/bin/env python from OpenSSL import SSL import socket import pickle def verify_cb(conn, cert, errnum, depth, ok): print('Got cert: %s' % cert.get_subject()) return ok ctx = SSL.Context(SSL.TLSv1_METHOD) ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb) # ?????? ctx.use_privatekey_file('./Dmgr-key.pem') ctx.use_certificate_file('Dmgr-cert.pem') # ?????? ctx.load_verify_locations('./CAcert.pem') server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM)) server.bind(('', 50000)) server.listen(3) a, b = server.accept() c = a.recv(1024) print(c) Client: from OpenSSL import SSL import socket import pickle def verify_cb(conn, cert, errnum, depth, ok): print('Got cert: %s' % cert.get_subject()) return ok ctx = SSL.Context(SSL.TLSv1_METHOD) ctx.set_verify(SSL.VERIFY_PEER, verify_cb) # ?????????? ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem') ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem') # ????????? ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem') sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM)) sock.connect(('10.0.0.3', 50000)) a = Tester(2, 2) b = pickle.dumps(a) sock.send("Hello, world") sock.flush() sock.send(b) sock.shutdown() sock.close() I found this information from ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections between the " # ????????." I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right? I tried alternating removing either the pkey or cert on either box, and I get the following error no matter which I remove: OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')] Could someone explain if this is the expected behavior for SSL. Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!

    Read the article

  • Google Map only showing Grey Blocks on load - Debug Cert has been obtained

    - by Tom
    I am attempting to follow the Google Map View under the views tutorial for the Android. I have followed step by step but still only see grey blocks when viewed. First: I created a Virtual Device using "Google API's(Google Inc.) Platform 2.2 API Level 8" Second: When creating my project I selected "Google API's Google Inc. Platform 2.2 API Level 8". Third: I obtained the SDK Debug Certificate Fouth: Began Coding. Main.xml <?xml version="1.0" encoding="utf-8"?> <com.google.android.maps.MapView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/mapview" android:layout_width="fill_parent" android:layout_height="fill_parent" android:clickable="true" android:apiKey="0l4sCTTyRmXTNo7k8DREHvEaLar2UmHGwnhZVHQ" / HelloGoogleMaps.java package com.example.googlemap; import android.app.Activity; import android.os.Bundle; import com.google.android.maps.MapView; import com.google.android.maps.MapActivity; public class HelloGoogleMaps extends MapActivity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); } @Override protected boolean isRouteDisplayed() { return false; } } HelloGoogleMaps Manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.googlemap" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <uses-library android:name="com.google.android.maps" /> <activity android:name=".HelloGoogleMaps" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> <uses-permission android:name="android.permission.INTERNET" /> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> </manifest> Any thoughts?? Thanks!

    Read the article

  • Android: Unable to access a local website over HTTPS

    - by user1253789
    I am trying to access a locally hosted website and get its HTML source to parse. I have few questions: 1) Can I use "https://An IP ADDRESS HERE" as a valid URL to try and access. I do not want to make changes in the /etc/hosts file so I want to do it this way. 2) I cannot get the html, since it is giving me Handshake exceptions and Certificate issues. I have tried a lot of help available over the web , but am not successful. Here is the code I am using: public class MainActivity extends Activity { private TextView textView; String response = ""; String finalresponse=""; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); textView = (TextView) findViewById(R.id.TextView01); System.setProperty("javax.net.ssl.trustStore","C:\\User\\*" ); System.setProperty("javax.net.ssl.trustStorePassword", "" ); } private class DownloadWebPageTask extends AsyncTask<String, Void, String> { @Override protected String doInBackground(String... urls) { TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() { public java.security.cert.X509Certificate[] getAcceptedIssuers() { return null; } public void checkClientTrusted(java.security.cert.X509Certificate[] certs, String authType) { } public void checkServerTrusted(java.security.cert.X509Certificate[] certs, String authType) { } } }; try { SSLContext sc = SSLContext.getInstance("SSL"); sc.init(null, trustAllCerts, new java.security.SecureRandom()); HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory()); } catch (Exception e) { } try { URL url = new URL("https://172.27.224.133"); HttpsURLConnection con =(HttpsURLConnection)url.openConnection(); con.setHostnameVerifier(new AllowAllHostnameVerifier()); finalresponse=readStream(con.getInputStream()); } catch (Exception e) { e.printStackTrace(); } return finalresponse; } private String readStream(InputStream in) { BufferedReader reader = null; try { reader = new BufferedReader(new InputStreamReader(in)); String line = ""; while ((line = reader.readLine()) != null) { response+=line; } } catch (IOException e) { e.printStackTrace(); } finally { if (reader != null) { try { reader.close(); } catch (IOException e) { e.printStackTrace(); } } } return response; } @Override protected void onPostExecute(String result) { textView.setText(finalresponse); } } public void readWebpage(View view) { DownloadWebPageTask task = new DownloadWebPageTask(); task.execute(new String[] { "https://172.27.224.133" }); } }

    Read the article

  • Red Hat Yum not working out of the box?

    - by Tucker
    I have a server running Red Hat Enterprise Linux v5.6 in the cloud. My project constraints do not allow me to use another OS. When I created the cloud server, I was able to SSH into it and access the shell. I next ran the command:

    sudo yum update

    But the command failed. About a month ago I created another server with the same machine image and didn't have that error. Why is it failing now? The following is the terminal output:

    sudo yum update
    Loaded plugins: security
    Repository rhel-server is listed more than once in the configuration
    Traceback (most recent call last):
      File "/usr/bin/yum", line 29, in ?
        yummain.user_main(sys.argv[1:], exit_code=True)
      File "/usr/share/yum-cli/yummain.py", line 309, in user_main
        errcode = main(args)
      File "/usr/share/yum-cli/yummain.py", line 178, in main
        result, resultmsgs = base.doCommands()
      File "/usr/share/yum-cli/cli.py", line 345, in doCommands
        self._getTs(needTsRemove)
      File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs
        self._getTsInfo(remove_only)
      File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo
        pkgSack = self.pkgSack
      File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 662, in <lambda>
        pkgSack = property(fget=lambda self: self._getSacks(),
      File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 502, in _getSacks
        self.repos.populateSack(which=repos)
      File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack
        sack.populate(repo, mdtype, callback, cacheonly)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 168, in populate
        if self._check_db_version(repo, mydbtype):
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 226, in _check_db_version
        return repo._check_db_version(mdtype)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1233, in _check_db_version
        repoXML = self.repoXML
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1406, in <lambda>
        repoXML = property(fget=lambda self: self._getRepoXML(),
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1398, in _getRepoXML
        self._loadRepoXML(text=self)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1388, in _loadRepoXML
        return self._groupLoadRepoXML(text, ["primary"])
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1372, in _groupLoadRepoXML
        if self._commonLoadRepoXML(text):
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1208, in _commonLoadRepoXML
        result = self._getFileRepoXML(local, text)
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 989, in _getFileRepoXML
        cache=self.http_caching == 'all')
      File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 826, in _getFile
        http_headers=headers,
      File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 412, in urlgrab
        return self._mirror_try(func, url, kw)
      File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 398, in _mirror_try
        return func_ref( *(fullurl,), **kwargs )
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 936, in urlgrab
        return self._retry(opts, retryfunc, url, filename)
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 854, in _retry
        r = apply(func, (opts,) + args, {})
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 922, in retryfunc
        fo = URLGrabberFileObject(url, filename, opts)
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1010, in __init__
        self._do_open()
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1093, in _do_open
        fo, hdr = self._make_request(req, opener)
      File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1202, in _make_request
        fo = opener.open(req)
      File "/usr/lib64/python2.4/urllib2.py", line 358, in open
        response = self._open(req, data)
      File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
        '_open', req)
      File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
        result = func(*args)
      File "/usr/lib64/python2.4/site-packages/M2Crypto/m2urllib2.py", line 82, in https_open
        h.request(req.get_method(), req.get_selector(), req.data, headers)
      File "/usr/lib64/python2.4/httplib.py", line 810, in request
        self._send_request(method, url, body, headers)
      File "/usr/lib64/python2.4/httplib.py", line 833, in _send_request
        self.endheaders()
      File "/usr/lib64/python2.4/httplib.py", line 804, in endheaders
        self._send_output()
      File "/usr/lib64/python2.4/httplib.py", line 685, in _send_output
        self.send(msg)
      File "/usr/lib64/python2.4/httplib.py", line 652, in send
        self.connect()
      File "/usr/lib64/python2.4/site-packages/M2Crypto/httpslib.py", line 47, in connect
        self.sock.connect((self.host, self.port))
      File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 174, in connect
        ret = self.connect_ssl()
      File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 167, in connect_ssl
        return m2.ssl_connect(self.ssl, self._timeout)
    M2Crypto.SSL.SSLError: certificate verify failed

    Read the article

  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to setting up a RADIUS server to authenticate WPA2 against an LDAP directory. On Ubuntu.
    I got a working LDAP, but as it is not in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We got a separate physical network just for WiFi, so not too many worries about security on that front. Our APs are HP's low-end enterprise stuff - they seem to support whatever you can think of. All Ubuntu Server, baby!
    And the bad news: I know somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with the exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure" things if avoidable.

    UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle:

    Ignoring EAP-Type/tls because we do not have OpenSSL support.
    Ignoring EAP-Type/ttls because we do not have OpenSSL support.
    Ignoring EAP-Type/peap because we do not have OpenSSL support.

    Basically the Ubuntu version of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP types useless. Bummer. But some useful documentation for anybody interested:
    http://vuksan.com/linux/dot1x/802-1x-LDAP.html
    http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius

    UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (see the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible). Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works.

    UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP. I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry roughly of:

    uid: testuser
    sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05
    sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90
    userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja

    When using radtest, I can connect fine:

    > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password
    Sending Access-Request of id 215 to 130.225.235.6 port 1812
        User-Name = "msiebuhr"
        User-Password = "mr2Yx36N"
        NAS-IP-Address = 127.0.1.1
        NAS-Port = 0
    rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20
    >

    But when I try through the AP, it doesn't fly - while it does confirm that it figures out the NT and LM passwords:

    ...
    rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930
    rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035
    [ldap] looking for reply items in directory...
    WARNING: No "known good" password was found in LDAP. Are you sure that the user is configured correctly?
    [ldap] user testuser authorized to use remote access
    rlm_ldap: ldap_release_conn: Release Id: 0
    ++[ldap] returns ok
    ++[expiration] returns noop
    ++[logintime] returns noop
    [pap] Normalizing NT-Password from hex encoding
    [pap] Normalizing LM-Password from hex encoding
    ...

    It is clear that the NT and LM passwords differ from the above, yet the log says "[ldap] user testuser authorized to use remote access" - and the user is later rejected...

    Read the article

  • Integrating HP Systems Insight Manager into an existing environment

    - by ewwhite
    I'm working with an environment that spans multiple data centers/sites and consists primarily of HP ProLiant servers (G5-G7) running Linux. The mix is 30% RHEL/CentOS; the rest are Gentoo :(. I also have a few dozen virtual machines running back-office and Windows servers on VMware ESX hosts.
    I run OpenNMS to pull SNMP data from the various server nodes and networking devices. While OpenNMS works wonderfully for up/down, thresholds and notifications, its native handling of traps is a little rough and the graphs are not particularly pretty. I use Orca/RRD graphs for performance trending and nice graphs.
    I'm tasked with inventorying the environment and wanted to come up with a clean way to organize server information. Since my environment is mostly HP, I've been playing with HP Systems Insight Manager as a way to extract server data and to deploy HP health/monitoring packages and firmware. The Gentoo systems eventually have to be converted to CentOS, so getting a quick assessment of what hardware is where would be great. Although I've read through a few hundred pages of HP manuals, I'm having a difficult time understanding how to get HP SIM to do what I want, though. My main problems are:
    - I have about 40 subnets to deal with, 98% connected with private lines to facilities across the globe. I don't want to initiate an HP SIM discovery only to pull back every piece of intermediate networking hardware and equipment from all of the locations. I'd like this to focus on the servers.
    - I have OpenNMS configured to accept traps. I don't want HP SIM to duplicate that effort. It seems like the built-in software deployment tool wants to overwrite the trapsink parameters for the systems it encounters during discovery.
    - I have about 10 administrative username/password combinations in use across this infrastructure. Is there a more efficient way to get HP SIM to do the discovery, or to break discovery into manageable chunks?
    - In terms of general workflow, do people typically install the HP Management Agents during the initial OS deployment (e.g. kickstart post script) or afterwards from HP SIM?
    - Is HP SIM too thick/fat to be an inventory tool? I can't tell if it's meant to be used standalone or alongside other monitoring products.
    - Since the majority of the systems I'm trying to track are those running Gentoo (in order to plan the move to CentOS), is there any way for HP SIM to extract system model information from them (like dmidecode)?
    - I have systems where I may have an SSH key established, but not direct user or login access. Is there a way for me to import an SSH private/public key pair into HP SIM to reach out to the servers that can't accept standard credentials?
    - There are a handful of sites where I have inconsistent access or have a double-NAT situation. I may be able to poke a server, but it may not be able to find its way back to the management system. Is there a workaround for this?
    - The certificate configuration for HP SIM seems complicated. What is the preferred setup for trust between systems?
    I'd also appreciate any notes or recommendations on using this product. Or if there's a better way to do this, I'd like to know.

    Read the article

  • CakePhp on IIS: How can I Edit URL Rewrite module for SSL Redirects

    - by AdrianB
    I've not dealt much with IIS rewrites, but I was able to import (and edit) the rewrites found throughout the Cake structure (.htaccess files). I'll explain my configuration a little, then get to the meat of the problem.
    My CakePHP framework is working well, made possible by URL Rewrite Module 2.0, which I have successfully installed and configured for the site. The way Cake is set up, the webroot folder (for Cake, not IIS) is set as the default folder for the site and exists inside the following hierarchy:

    inetpub
    -wwwroot
    --cakePhp root
    ---application
    ----models
    ----views
    ----controllers
    ----WEBROOT  // *** HERE ***
    ---cake core
    --SomeOtherSite Folder

    For this implementation, the URL rewrite module uses the following rules (from the web.config file):

    <rewrite>
      <rules>
        <rule name="Imported Rule 1" stopProcessing="true">
          <match url="^(.*)$" ignoreCase="false" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
        </rule>
        <rule name="Imported Rule 2" stopProcessing="true">
          <match url="^$" ignoreCase="false" />
          <action type="Rewrite" url="/" />
        </rule>
        <rule name="Imported Rule 3" stopProcessing="true">
          <match url="(.*)" ignoreCase="false" />
          <action type="Rewrite" url="/{R:1}" />
        </rule>
        <rule name="Imported Rule 4" stopProcessing="true">
          <match url="^(.*)$" ignoreCase="false" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php?url={R:1}" appendQueryString="true" />
        </rule>
      </rules>
    </rewrite>

    I've installed my SSL certificate and created a site binding, so that if I use the https:// protocol, everything works fine within the site. I fear that the attempts I have made at creating a redirect rule are too far off base to produce results I can interpret. The rules need to switch protocol without affecting the current set of rules, which pass along URL components to index.php (Cake's entry point).
    My goal is this: create a couple of rewrite rules that will [#1] redirect all user pages (in this general form: http://domain.com/users/page/param/param/?querystring=value) to use SSL, e.g. http://domain.com/users/login, http://domain.com/users/profile/uid:12345 and http://domain.com/users/payments?firsttime=true should all become https://domain.com/users/login, https://domain.com/users/profile/uid:12345 and https://domain.com/users/payments?firsttime=true; and then [#2] direct all other https requests back to http (is this even necessary?).
    Any help would be greatly appreciated.

    Read the article

  • Issue in setting up VPN connection (IKEv1) using android (ICS vpn client) with Strongswan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues setting up a VPN connection (IKEv1) using Android (the ICS VPN client) and a strongSwan 4.5.0 server. Below is the setup:
    The strongSwan server is running on an Ubuntu Linux machine connected to a wifi hotspot. Using the steps in this guide link, I generated the CA, server and client certificates. Once the certificates were generated, the client files (clientCert.p12 and caCert.pem) were sent to the mobile via mail and installed on the Android device.
    The IP addresses assigned to the various interfaces are: Linux server wlan0 interface (where the server is running): 192.168.43.212; Android device eth0 interface: 192.168.43.62. The Android device is attached to the same wifi hotspot.
    On the Android device, I use the IPsec Xauth RSA option for the VPN authentication configuration. I am using the following ipsec.conf configuration:

    # basic configuration
    config setup
        plutodebug=all
        # crlcheckinterval=600
        # strictcrlpolicy=yes
        # cachecrls=yes
        nat_traversal=yes
        # charonstart=yes
        plutostart=yes

    # Add connections here.
    # Sample VPN connections
    conn ios1
        keyexchange=ikev1
        authby=xauthrsasig
        xauth=server
        left=%defaultroute
        leftsubnet=0.0.0.0/0
        leftfirewall=yes
        leftcert=serverCert.pem
        right=192.168.43.62
        rightsubnet=10.0.0.0/24
        rightsourceip=10.0.0.2
        rightcert=clientCert.pem
        pfs=no
        auto=add

    With the above configuration, when I enable the VPN on the Android device the connection is not successful and it times out in the authentication phase. I ran wireshark on both the Android device and the strongSwan server; from the tcpdump, these are the observations:
    Initially, Identity Protection (Main Mode) exchanges happen between the device and the server, and all are successful. After the successful Identity Protection (Main Mode) exchanges, the server sends a Transaction (Config Mode) message to the device. In reply, the Android device sends an Informational message instead of a Transaction (Config Mode) message. The server then keeps re-sending the Transaction (Config Mode) message while the device again sends Identity Protection (Main Mode) messages. Finally a timeout occurs and the connection fails.
    I also captured the strongSwan server logs; below are snippets from the server logs that show the same behaviour:

    Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message:
    Apr 27 21:09:40 Linux pluto[12105]: | initiator cookie:
    Apr 27 21:09:40 Linux pluto[12105]: | 06 fd 61 b8 86 82 df ed
    Apr 27 21:09:40 Linux pluto[12105]: | responder cookie:
    Apr 27 21:09:40 Linux pluto[12105]: | 73 7a af 76 74 f0 39 8b
    Apr 27 21:09:40 Linux pluto[12105]: | next payload type: ISAKMP_NEXT_HASH
    Apr 27 21:09:40 Linux pluto[12105]: | ISAKMP version: ISAKMP Version 1.0
    Apr 27 21:09:40 Linux pluto[12105]: | exchange type: ISAKMP_XCHG_INFO
    Apr 27 21:09:40 Linux pluto[12105]: | flags: ISAKMP_FLAG_ENCRYPTION
    Apr 27 21:09:40 Linux pluto[12105]: | message ID: a2 80 ad 82
    Apr 27 21:09:40 Linux pluto[12105]: | length: 92
    Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE: 06 fd 61 b8 86 82 df ed
    Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE: 73 7a af 76 74 f0 39 8b
    Apr 27 21:09:40 Linux pluto[12105]: | peer: c0 a8 2b 3e
    Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25
    Apr 27 21:09:40 Linux pluto[12105]: | state object not found
    Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA
    Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9

    Can anyone please provide an update on this issue? Why does the VPN connection time out, and why are the ISAKMP exchanges not proceeding properly between Android and the strongSwan server?

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88  | Next Page >