Search Results

Search found 1499 results on 60 pages for 'wildcard certificates'.

Page 49/60 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • smtp.gmail.com from bash gives "Error in certificate: Peer's certificate issuer is not recognized."

    - by ndasusers
    I needed my script to email the admin if there is a problem, and the company only uses Gmail. Following a few posts' instructions I was able to set up mailx using a .mailrc file. There was first the error of nss-config-dir; I solved that by copying some .db files from a Firefox directory to ./certs and pointing to it in .mailrc. A mail was sent. However, the error above came up. By some miracle, there was a Google certificate in the .db. It showed up with this command: ~]$ certutil -L -d certs Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI GeoTrust SSL CA ,, VeriSign Class 3 Secure Server CA - G3 ,, Microsoft Internet Authority ,, VeriSign Class 3 Extended Validation SSL CA ,, Akamai Subordinate CA 3 ,, MSIT Machine Auth CA 2 ,, Google Internet Authority ,, Most likely, it can be ignored, because the mail worked anyway. Finally, after pulling some hair and many Google searches, I found out how to rid myself of the annoyance. First, export the existing certificate to an ASCII file: ~]$ certutil -L -n 'Google Internet Authority' -d certs -a > google.cert.asc Now re-import that file, and mark it as trusted for SSL certificates, like so: ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc After this, the listing shows it as trusted: ~]$ certutil -L -d certs Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI ... Google Internet Authority C,, And mailx sends out with no hitch. ~]$ /bin/mailx -A gmail -s "Whadda ya no" [email protected] ho ho ho EOT ~]$ I hope it is helpful to someone looking to be done with the error. Also, I am curious about some things. How could I get this certificate if it were not in the Mozilla database by chance? Is there, for instance, something like this? ~]$ certutil -A -t "C,," \ -n 'gmail.com' \ -d certs \ -i 'http://google.com/cert/this...'
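    One way to answer that closing question, assuming OpenSSL is available alongside the NSS certutil tools: the chain that smtp.gmail.com presents can be pulled straight off the SMTPS port and imported, so no browser database is needed. A rough sketch (port 465 is the SMTPS port; the nickname and file names are arbitrary):

        # dump every certificate in the chain the server presents
        ~]$ openssl s_client -connect smtp.gmail.com:465 -showcerts </dev/null > chain.txt
        # copy the issuer's -----BEGIN/END CERTIFICATE----- block from chain.txt into issuer.pem,
        # then import it as a CA trusted for SSL
        ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i issuer.pem

    There is no certutil option that fetches a certificate from a URL directly, so the download step stays separate.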

    Read the article

  • Dozer deep mapping not working

    - by user363900
    I am trying to use dozer to map between classes. I have a source class that looks like this: public class initRequest{ protected String id; protected String[] details } I have a destination class that looks like this: public class initResponse{ protected String id; protected DetailsObject detObj; } public class DetailsObject{ protected List<String> details; } So essentially i want the string in the details array to be populated into the List in the Details object. I have tried a mapping like this: <mapping wildcard="true" > <class-a>com.starwood.valhalla.sgpms.dto.InitiateSynchronizationDTO</class-a> <class-b>com.starwood.valhalla.psi.sgpms.dto.InitiateSynchronizationDTO</class-b> <field> <a is-accessible="true">reservations</a> <b is-accessible="true">reservations.confirmationNum</b> </field> </mapping> But I get this error: Exception in thread "main" net.sf.dozer.util.mapping.MappingException: java.lang.NoSuchFieldException: reservations.confirmationNum at net.sf.dozer.util.mapping.util.MappingUtils.throwMappingException(MappingUtils.java:91) at net.sf.dozer.util.mapping.propertydescriptor.FieldPropertyDescriptor.<init>(FieldPropertyDescriptor.java:43) at net.sf.dozer.util.mapping.propertydescriptor.PropertyDescriptorFactory.getPropertyDescriptor(PropertyDescriptorFactory.java:53) at net.sf.dozer.util.mapping.fieldmap.FieldMap.getDestPropertyDescriptor(FieldMap.java:370) at net.sf.dozer.util.mapping.fieldmap.FieldMap.getDestFieldType(FieldMap.java:103) at net.sf.dozer.util.mapping.util.MappingsParser.processMappings(MappingsParser.java:95) at net.sf.dozer.util.mapping.util.CustomMappingsLoader.load(CustomMappingsLoader.java:77) at net.sf.dozer.util.mapping.DozerBeanMapper.loadCustomMappings(DozerBeanMapper.java:149) at net.sf.dozer.util.mapping.DozerBeanMapper.getMappingProcessor(DozerBeanMapper.java:132) at net.sf.dozer.util.mapping.DozerBeanMapper.map(DozerBeanMapper.java:94) at com.starwood.valhalla.mte.translator.StarguestPMSIntegrationDozerAdapter.adaptInit(StarguestPMSIntegrationDozerAdapter.java:30) How can i map this so that it works?

    Read the article

  • Can I configure Windows NDES server to use Triple DES (3DES) algorithm for PKCS#7 answer encryption?

    - by O.Shevchenko
    I am running an SCEP client to enroll certificates on an NDES server. If OpenSSL is not in FIPS mode, everything works fine. In FIPS mode I get the following error: pkcs7_unwrap():pkcs7.c:708] error decrypting inner PKCS#7 139968442623728:error:060A60A3:digital envelope routines:FIPS_CIPHERINIT:disabled for fips:fips_enc.c:142: 139968442623728:error:21072077:PKCS7 routines:PKCS7_decrypt:decrypt error:pk7_smime.c:557: That's because the NDES server uses the DES algorithm to encrypt the returned PKCS#7 packet. I used the following debug code: /* Copy enveloped data from PKCS#7 */ bytes = BIO_read(pkcs7bio, buffer, sizeof(buffer)); BIO_write(outbio, buffer, bytes); p7enc = d2i_PKCS7_bio(outbio, NULL); /* Get encryption PKCS#7 algorithm */ enc_alg=p7enc->d.enveloped->enc_data->algorithm; evp_cipher=EVP_get_cipherbyobj(enc_alg->algorithm); printf("evp_cipher->nid = %d\n", evp_cipher->nid); The last line always prints: evp_cipher->nid = 31, which is defined in openssl-1.0.1c/include/openssl/objects.h: #define SN_des_cbc "DES-CBC" #define LN_des_cbc "des-cbc" #define NID_des_cbc 31 I use the 3DES algorithm for PKCS#7 request encryption in my code (pscep.enc_alg = (EVP_CIPHER *)EVP_des_ede3_cbc()) and the NDES server accepts these requests, but it always returns the answer encrypted with DES. Can I configure the Windows NDES server to use the Triple DES (3DES) algorithm for PKCS#7 answer encryption?

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never
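    One detail worth checking in the trace above: the Java client's ClientHello offers no AES-256 suites at all, while slapd's TLSCipherSuite line restricts the server to TLS_RSA_AES_256_CBC_SHA, which would leave no cipher in common and end the handshake exactly as logged (and could explain why one JRE works and another does not, if their installed JCE policy files differ). A quick way to test that theory, assuming OpenSSL on any client machine (host name taken from the post):

        # should succeed if slapd accepts AES-256
        openssl s_client -connect ldap.natraj.com:636 -cipher AES256-SHA
        # should fail during the handshake if AES-256 is the only suite slapd allows
        openssl s_client -connect ldap.natraj.com:636 -cipher AES128-SHA

    If that is the cause, either relaxing TLSCipherSuite or installing the unlimited-strength JCE policy files for the failing JRE would be the obvious follow-ups.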

    Read the article

  • squid ssl bump sslv3 enforce to allow old sites

    - by Shrey
    Important: I posted this question on Stack Overflow, but somebody told me this is a more relevant place for it. Thanks. I have configured Squid (3.4.2) as an SSL-bumped proxy, and I am setting the proxy in Firefox (29) to use Squid for HTTPS/HTTP. It works for most sites, but some sites that only support the old SSL protocol (SSLv3) break, and I see Squid not employing any workarounds for those like browsers do. Sites which should work: https://usc-excel.officeapps.live.com/ , https://www.mahaconnect.in As a workaround I have set sslproxy_version 3, which enforces SSLv3, and the above sites work. My question: is there a better way to do this that does not involve enforcing SSLv3 for servers supporting TLS 1.0 or better? I know OpenSSL doesn't automatically handle that, but I imagined Squid would. My squid.conf snippet: http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/certs/SquidCA.pem always_direct allow all ssl_bump server-first all sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB client_persistent_connections on server_persistent_connections on sslproxy_version 3 sslproxy_options ALL cache_dir aufs /usr/local/squid/var/cache/squid 100 16 256 coredump_dir /usr/local/squid/var/cache/squid strip_query_terms off httpd_suppress_version_string on via off forwarded_for transparent vary_ignore_expire on refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 UPDATE: I have tried compiling Squid 3.4.5 with OpenSSL 1.0.1h. No improvement.
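    Before pinning SSLv3 globally, it may be worth confirming what the two problem sites actually negotiate when the protocol version is forced from a client; if they complete a handshake with -tls1, the breakage looks more like version-negotiation intolerance than a hard SSLv3-only requirement. A quick check with OpenSSL (host name taken from the question):

        # does the site finish a TLS 1.0 handshake?
        openssl s_client -connect usc-excel.officeapps.live.com:443 -tls1 </dev/null
        # or does it only answer an SSLv3 hello?
        openssl s_client -connect usc-excel.officeapps.live.com:443 -ssl3 </dev/null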

    Read the article

  • Cisco ASA Act as a Hardware Security Module?

    - by Derek
    Hello, We have a partner that is requiring us to get a HSM for a web application that we host for them. This is something new for us, we've always installed our SSL certificates on our web servers and never needed a hardware device. We currently have 2 Cisco ASA 5510 firewalls in an active/standby configuration. Both ASAs have a ASA-SSM-10 security module installed in them. The web application is a standard HTTPS webpage with no authentication required. I was wondering if we could use our Cisco ASAs to meet this requirement or if we'll have to buy another device. I was doing some searching and read about Cisco's clientless webvpn feature. It sounds like it might work, but I'm not sure. We basically want the ASA to handle the SSL and proxy the connection to our web servers. We do not want to prompt for a username or password to connect or show any portals, just display the web page. If the ASA cannot do this, does any one have any recommendations for network attached hardware security modules? We are using VMware vCenter, so we'd rather have an external device attached to the network, rather than buying HSM cards for every ESXi host. Thanks, Derek

    Read the article

  • Disable Razors default .cshtml handler in a ASP.NET Web Application

    - by mythz
    Does anyone know how to disable the .cshtml extension completely from an ASP.NET Web Application? In essence I want to hijack the .cshtml extension and provide my own implementation based on a RazorEngine host, although when I try to access the page.cshtml directly it appears to be running under an existing WebPages razor host that I'm trying to disable. Note: it looks like its executing .cshtml pages under the System.Web.WebPages.Razor context as the Microsoft.Data Database is initialized. I don't even have any Mvc or WebPages dlls referenced, just System.Web.dll and a local copy of System.Web.Razor with RazorEngine.dll I've created a new ASP.NET Web .NET 4.0 Application and have tried to clear all buildProviders and handlers as seen below: <system.web> <httpModules> <clear/> </httpModules> <compilation debug="true" targetFramework="4.0"> <buildProviders> <clear/> </buildProviders> </compilation> <httpHandlers> <clear/> <add path="*" type="MyHandler" verb="*"/> </httpHandlers> </system.web> <system.webServer> <modules runAllManagedModulesForAllRequests="true"> <clear/> </modules> <handlers> <clear/> <add path="*" name="MyHandler" type="MyHandler" verb="*" preCondition="integratedMode" resourceType="Unspecified" allowPathInfo="true" /> </handlers> </system.webServer> Although even with this, when I visit any page.cshtml page it still bypasses My wildcard handler and tries to execute the page itself. Basically I want to remove all traces of .cshtml handlers/buildProviders/preprocessing so I can serve the .cshtml pages myself, anyone know how I can do this?

    Read the article

  • How to inspect remote SMTP server's TLS certificate?

    - by Miles Erickson
    We have an Exchange 2007 server running on Windows Server 2008. Our client uses another vendor's mail server. Their security policies require us to use enforced TLS. This was working fine until recently. Now, when Exchange tries to deliver mail to the client's server, it logs the following: A secure connection to domain-secured domain 'ourclient.com' on connector 'Default external mail' could not be established because the validation of the Transport Layer Security (TLS) certificate for ourclient.com failed with status 'UntrustedRoot. Contact the administrator of ourclient.com to resolve the problem, or remove the domain from the domain-secured list. Removing ourclient.com from the TLSSendDomainSecureList causes messages to be delivered successfully using opportunistic TLS, but this is a temporary workaround at best. The client is an extremely large, security-sensitive international corporation. Our IT contact there claims to be unaware of any changes to their TLS certificate. I have asked him repeatedly to please identify the authority that generated the certificate so that I can troubleshoot the validation error, but so far he has been unable to provide an answer. For all I know, our client could have replaced their valid TLS certificate with one from an in-house certificate authority. Does anyone know a way to manually inspect a remote SMTP server's TLS certificate, as one can do for a remote HTTPS server's certificate in a web browser? It could be very helpful to determine who issued the certificate and compare that information against the list of trusted root certificates on our Exchange server.
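    Yes: any machine with OpenSSL available can open an SMTP session, issue STARTTLS and print the remote certificate, much as a browser does for HTTPS. A sketch, assuming outbound port 25 is reachable and using a placeholder for the client's MX host:

        # show the full certificate chain the remote MTA presents after STARTTLS
        openssl s_client -starttls smtp -connect mail.ourclient.com:25 -showcerts
        # or just the issuer, subject and validity dates of the leaf certificate
        openssl s_client -starttls smtp -connect mail.ourclient.com:25 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

    The issuer line should show whether the certificate now chains to an in-house CA rather than a publicly trusted root.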

    Read the article

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to 3 named individuals. I thought I could do the following: Create a local Windows group on the database server and add the named individuals to it. Create a Windows login in SQL Server mapped to the local Windows group. Map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name. When I try to do step 3, I get the following error: Msg 15353, Level 16, State 1, Line 1 An entity of type database cannot be owned by a role, a group, an approle, or by principals mapped to certificates or asymmetric keys. I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful. Can anyone tell me: Why this restriction exists? It seems very arbitrary. More importantly, can I accomplish my requirement some other way? Other info that might be pertinent: The server is fully up to date with service packs and hotfixes. All objects in the database are owned by the "dbo" schema, and it's not feasible to change that. The database is running in compatibility level 80, and it's not feasible to change that to 90 yet. I am free to make any other changes (within reason, depending on what they are).

    Read the article

  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I'm making some great headway in offering our services over HTTPS with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But I've just run into one whopper of a problem. Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over 2 servers. One is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So I'll need to purchase 2 certs for both www and services, import them into IIS6 on their respective machines, then export them with the private key (making sure to Include all certificates in the certification path if possible... that had me stumped for a while), and then finally import them into ISA's local computer Personal store. The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com... because requests need to be forwarded to different web servers. Each of these firewall rules uses the same HTTP listener. I have just found out that you can only use 1 certificate per HTTP listener. To make matters worse, you can only have a single HTTP listener per IP/port. Is this correct? I can only use a single certificate for a single IP address? This would seem to be a severe limitation. Am I wrong? If I'm not, then I've got a whole lot more work ahead of me, as I'll have to set up extra IPs, add them to the firewall's network interface, create new listeners using those IPs, etc... Can someone please confirm whether I'm doing this correctly or incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.

    Read the article

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to use my Synology DS212 NAS box also act as VPN gateway to my companies VPN. Sadly, they only use Cisco ASA and to complicate stuff even further, we've got to use personal certificates (which is of course more secure, but more complicate to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so: /lib/ld-linux.so.3 --library-path /opt/lib \ /opt/openconnect/sbin/openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 It fails :( Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Using client certificate '/[email protected]/OU=Company VPN' 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Loading private key failed (see above errors) Loading certificate failed. Aborting. Failed to open HTTPS connection to <ip> Failed to obtain WebVPN cookie When I run the same command with the same cert/key files on a Ubuntu 12.04 box, it works: openconnect \ --certificate=$VPN_CFG/alexander.crt \ --sslkey=$VPN_CFG/alexander.key \ --cafile=$VPN_CFG/Company_VPN_CA.crt \ --user=alexander --verbose <ip>:443 Attempting to connect to <ip>:443 Using certificate file $VPN_CFG/alexander.crt Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH' SSL negotiation with <ip> Server certificate verify failed: self signed certificate Certificate from VPN server "<ip>" failed verification. Reason: self signed certificate Enter 'yes' to accept, 'no' to abort; anything else to view: yes Connected to HTTPS on <ip> GET https://<ip>/ […] Well… The error on the NAS is this: 5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315: Any ideas, what's causing this? On Syno, I use OpenConnect 4.06. On Ubuntu, I just compiled and installed to a custom location OpenConnect 4.06 as well. Thanks, Alexander
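    That particular "wrong tag" ASN.1 error while loading the private key often means the key's encoding is something the OpenSSL build on the NAS cannot parse (for example PKCS#8 or DER where a traditional PEM RSA key is expected), rather than a problem in OpenConnect itself; that is an assumption, but a cheap one to test. Re-encoding the key on the Ubuntu box, where it loads fine, and copying the converted file to the Synology would rule it in or out:

        # rewrite the key as a traditional PEM RSA key
        openssl rsa -in alexander.key -out alexander-rsa.key
        # or, the other direction, as unencrypted PKCS#8
        openssl pkcs8 -topk8 -nocrypt -in alexander.key -out alexander-pkcs8.key

    Then point --sslkey at the converted file in the openconnect command above.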

    Read the article

  • Lion server profile manager, device enrollment doesn't work

    - by user964406
    I am in the process of setting up Lion Servers profile manager to manage iPads on our local school network. I don't need to manage them while they are outside the network. I have successfully had it working on my personal network. The school network is behind a proxy which we have no control over. I can get the iPads to view the mydevices page and install a trust cert. I have managed to get an iPad to successfully install the remote management profile. After this the profile manager bugs out. It will list the active task of 'new device (sending)' but it's unable to complete the task. If I click on the device on profile manager and try any of the actions out they will all fail to complete. I am using the auto generated certificates and this works if I bring the server and iPad outside of the school network. Shortly after device enrollment the system log on the Lion server reports the following Replaced the actual ip address with INTERNALIP Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080 Jun 4 08:40:53 mini sandboxd[760] ([778]): applepushservice(778) deny network-outbound INTERNALIP:8080 Jun 4 08:40:53 mini applepushserviced[778]: Got connection error Error Domain=NSPOSIXErrorDomain Code=1 "The operation couldn\u2019t be completed. Operation not permitted" UserInfo=0x7fa483b1a340 {NSErrorFailingURLStringKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS, NSErrorFailingURLKey=https://albert.apple.com/WebObjects/ALUnbrick.woa/wa/deviceActivation?device=Mac OS} Jun 4 08:40:53 mini applepushserviced[778]: Failed to get client cert on attempt 2, will retry in 15 seconds Does anyone have any ideas on how to get past this stage? Thanks in advance.

    Read the article

  • Unable to install PEM/pkcs12 created by gnutls to Cisco ASA

    - by ACiD GRiM
    I've been pulling some hair out trying to figure out why cisco devices don't like my certificates. My primary need is to get a trustpoint set up with CA,cert,key on the ASA for VPN systems, however I'm having the same issues on my IOS devices. I created a pkcs12 with openssl a few months ago that imported with no issues, but now that I'm getting ready to move this lab to production I'm using gnutls certtool as I found it adds alt_dns and ip_address fields properly to the certificate, (which cost me a few more hairs trying to get to work with openssl's ca tool) I'm including the current test certs below, don't worry I'm not using these in production ;) The maddening thing is that after I thought gnutls was generating certs incorrectly, I tried making a pkcs12 for a printserver and it imported with no issues. Here's my command flow for creating these certs: certtool --generate-privkey --disable-quick-random --outfile nn-ca.key certtool --generate-self-signed --load-privkey nn-ca.key --outfile nn-ca.crt certtool --generate-privkey --disable-quick-random --outfile nn-g0.key certtool --generate-certificate --load-privkey nn-g0.key --outfile nn-g0.crt --load-ca-privkey nn-ca.key --load-ca-certificate nn-ca.crt openssl pkcs12 -export -certfile nn-ca.crt -in nn-g0.crt -inkey nn-g0.key -out nn-g0.p12 openssl enc -base64 -in nn-g0.p12 -out nn-g0.base64.p12 The password for the attatched pkcs12 is "ciscohelp" without quotes. Thanks for any help TestCerts
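    Older Cisco code is fussy about the PKCS#12 encryption and MAC algorithms, so one thing worth trying is re-exporting the bundle with the legacy 3DES/SHA-1 parameters pinned explicitly (same files as above; the -macalg flag needs OpenSSL 1.0.0 or newer). This is a guess at the cause, not a confirmed fix:

        openssl pkcs12 -export -in nn-g0.crt -inkey nn-g0.key -certfile nn-ca.crt \
            -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -macalg sha1 \
            -out nn-g0.p12
        openssl enc -base64 -in nn-g0.p12 -out nn-g0.base64.p12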

    Read the article

  • How do I renew a Web Server certificate in Windows Server 2008?

    - by Mark Seemann
    The SSL certificate for my web site just expired a few days ago, and I would like to renew it. I originally issued it two years ago using my Windows 2008 Certificate Authority, and it's worked without a hitch in all that time, so I would like to renew the certificate as simply as possible to make sure that all the applications relying on that certificate continue to work. I can open an MMC instance and add the Certificates snap-in for the Local Computer. I can find the relevant certificate under Personal, but I can't renew it. When I select Renew certificate with new key I get the following message: Web Server Status: Unavailable The permissions on the certificate template do not allow the current user to enroll for this type of certificate. You do not have permission to request this type of certificate. However, I can't understand this, as I'm logged on as a Domain Admin and I'm running the MMC instance in elevated mode. I've checked the Web Server certificate template, and Domain Admins have the Enroll permission on this template. FWIW, I also tried rebooting the server. How can I renew the certificate?

    Read the article

  • Trouble with local id / remote id configuration of VPN

    - by Lynn Owens
    I have a NetGear UTM firewall and a Windows machine running NetGear's VPN client. The Windows machine I can put on the UTM network and take off of it. When I am cabled into the local (internal) the following configuration works: UTM: Local Id: Local Wan IP: (The UTM's WAN IP address) Remote Id: User FQDN: utm_remote1.com Client: Local Id: DNS: utm_remote1.com Remote Id: (The UTM's WAN IP address) Gateway authentication: preshared key Policy remote endpoint: FQDN: utm_remote1.com But when I'm off the UTM's internal local network and simply coming in from the internet, this does not work. It simply repeats SEND phase 1 before giving up. Since I know that the UTM WAN IP is accessible from both inside and outside the network, I figured the problem was with the Client local id. So, I tried the following: UTM: Local Id: Local Wan IP: (The UTM's WAN IP address) Remote Id: (A DN of a self-signed certificate I created for the client and uploaded into the UTM certificates) Client: Local Id: (The DN of the aforementioned self signed cert) Remote Id: (The UTM's WAN IP address) Gateway authentication: (the aforementioned self signed cert) Policy remote end point: ... er, ... my choices are IP and FQDN.... Not sure what to put here No matter what I've tried, it just keeps repeating the SEND phase 1. Any ideas?

    Read the article

  • Why is my RapidSSL certificate chain not trusted on Ubuntu?

    - by olouv
    I have a website that works perfectly with Chrome & other browser but i get some errors with PHP in CLI mode so i'm investigating it, running this: openssl s_client -showcerts -verify 32 -connect dev.carlipa-online.com:443 Quite suprisingly my HTTPS appears untrusted with a Verify return code: 27 (certificate not trusted) Here is the raw output : verify depth is 32 CONNECTED(00000003) depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA verify error:num=20:unable to get local issuer certificate verify return:1 depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA verify error:num=27:certificate not trusted verify return:1 depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA verify return:1 depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com verify return:1 So GeoTrust Global CA appears to be not trusted on the system (Ubuntu 11.10). Added Equifax_Secure_CA to try to solve this... But i get in this case Verify return code: 19 (self signed certificate in certificate chain) ! Raw output : verify depth is 32 CONNECTED(00000003) depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority verify error:num=19:self signed certificate in certificate chain verify return:1 depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority verify return:1 depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA verify return:1 depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA verify return:1 depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com verify return:1 Edit Looks like my server does not trust/provide the Equifax Root CA, however i do correctly have the file in /usr/share/ca-certificates/mozilla/Equifax...
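    Note that s_client does not read the system trust store unless it is told where to find it, so the "not trusted" verdict can be an artifact of the test command rather than of the server or the installed CA bundle. Re-running the check against Ubuntu's stock CA locations should separate the two cases:

        # use the hashed CA directory
        openssl s_client -connect dev.carlipa-online.com:443 -CApath /etc/ssl/certs
        # or the single bundled file
        openssl s_client -connect dev.carlipa-online.com:443 -CAfile /etc/ssl/certs/ca-certificates.crt

    If PHP in CLI mode still fails while the commands above verify cleanly, the next place to look is the CA path the PHP build itself is compiled or configured to use.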

    Read the article

  • Using hibernate criteria, is there a way to escape special characters?

    - by Kevin Crowell
    For this question, we want to avoid having to write a special query since the query would have to be different across multiple databases. Using only hibernate criteria, we want to be able to escape special characters. This situation is the reason for needing the ability to escape special characters: Assume that we have table 'foo' in the database. Table 'foo' contains only 1 field, called 'name'. The 'name' field can contain characters that may be considered special in a database. Two examples of such a name are 'name_1' and 'name%1'. Both the '_' and '%' are special characters, at least in Oracle. If a user wants to search for one of these examples after they are entered in the database, problems may occur. criterion = Restrictions.ilike("name", searchValue, MatchMode.ANYWHERE); return findByCriteria(null, criterion); In this code, 'searchValue' is the value that the user has given the application to use for its search. If the user wants to search for '%', the user is going to be returned with every 'foo' entry in the database. This is because the '%' character represents the "any number of characters" wildcard for string matching and the SQL code that hibernate produces will look like: select * from foo where name like '%' Is there a way to tell hibernate to escape certain characters, or to create a workaround that is not database type sepecific?

    Read the article

  • Cannot connect to TFS 2012 server through SSL with invalid certificate

    - by DaveWut
    I saw this problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available through the internet, with the help of apache2 on a different UNIX-based physical server. That part is working perfectly; I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged and I do have access to the TFS control panel through it. How does it work? Well, with apache2 you can set up a virtual host with the ProxyPass and ProxyPassReverse settings, so the traffic comes in externally over a secure SSL connection through a specified domain or sub-domain, but is redirected locally to a clear HTTP session on a different port. This is my current setup. As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png The technical information tells me: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I add it to the Trusted Root Certification Authorities either on my TFS server or on my local workstation, it doesn't work; I still receive the same error. Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates, so it should work as mentioned on some forums... If you need more information, please ask me, I'll be pleased to provide more. Dave
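    Visual Studio hands certificate checks to Windows, so there is no supported "ignore validation" switch for a TFS connection; the usual route is to get the exact certificate apache2 serves into the workstation's machine-wide Trusted Root store (importing into the current user's store, or importing a different certificate than the one actually served, would leave the error in place). A sketch from an elevated command prompt, with an arbitrary file name for the exported certificate:

        rem import the self-signed certificate into the local machine's Trusted Root store
        certutil -addstore -f Root tfs-something-com.cer

    Even then the certificate has to carry tfs.something.com as its subject or a subject alternative name, otherwise the same exception comes back as a name mismatch.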

    Read the article

  • running red5 on port 80

    - by ArneLovius
    I have a red5 application http://code.google.com/p/openmeetings that runs under red5, and is accessible on port 5080 and 8443 I've installed it on Ubuntu 10.04 The eventual aim is to have it accessible via https on 443 instead of 8443, but I thought I would initially try on 80 so that any issues were just down to the port configuration and not SSL certificates. I've tried changing the port from 5080 to 80 in the red5.properties file, but it fails to start. In the red5.log I have seen ERROR o.a.coyote.http11.Http11Protocol - Error initializing endpoint java.net.BindException: Permission denied /0.0.0.0:80 In the error.log I have seen ERROR o.a.coyote.http11.Http11Protocol - Error initializing endpoint java.net.BindException: Permission denied /0.0.0.0:80 and ERROR org.red5.server.tomcat.TomcatLoader - Error loading tomcat, unable to bind connector. You may not have permission to use the selected port org.apache.catalina.LifecycleException: Protocol handler initialization failed: java.net.BindException: Permission denied /0.0.0.0:80 There is nothing else installed or running on port 80, so I presume that this is a "needs to be root" situation. I would rather not run an Internet accessible web service as root. I know that Tomcat can run on port 80 by changing “#AUTHBIND=no” to “AUTHBIND=yes” in /etc/default/tomcat6 but I have not been able to find anything similar for red5. Am I on a hiding to nothing, or is there better way than running as root ? Thanks!
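    Rather than running Red5 as root, a common pattern is to leave it listening on 5080/8443 as an unprivileged user and let the kernel redirect the privileged ports in front of it. A sketch with iptables, assuming no conflicting firewall rules (the rules are not persistent across reboots unless saved, e.g. with iptables-persistent):

        # send incoming HTTP to Red5's unprivileged port
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 5080
        # same idea later for the HTTPS listener
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443

    authbind (as used in /etc/default/tomcat6) can also work, but Red5's startup script would have to be changed to launch the JVM through it.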

    Read the article

  • Workaround for GNU Make 3.80 eval bug

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, kind of like they discuss in the eval documentation. I've run into a known bug with GNU Make 3.80. When $(eval) evaluates a line that is over 193 characters, Make crashes with a "Virtual Memory Exhausted" error. The code I have that causes the issue looks like this. SRC_DIR = ./src/ PROG_NAME = test define PROGRAM_template $(1)_SRC_DIR = $$(SRC_DIR)$(1)/ $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c) $(1)_OBJ_FILES = $$($(1)_SRC_FILES):.c=.o) $$($(1)_OBJ_FILES) : $$($(1)_SRC_FILES) # This is the problem line endef $(eval $(call PROGRAM_template,$(PROG_NAME))) When I run this Makefile, I get gmake: *** virtual memory exhausted. Stop. The expected output is that all .c files in ./src/test/ get compiled into .o files (via an implicit rule). The problem is that $$($(1)_SRC_FILES) and $$($(1)_OBJ_FILES) are together over 193 characters long (if there are enough source files). I have tried running the make file on a directory where there is only 2 .c files, and it works fine. It's only when there are many .c files in the SRC directory that I get the error. I know that GNU Make 3.81 fixes this bug. Unfortunately I do not have the authority or ability to install the newer version on the system I'm working on. I'm stuck with 3.80. So, is there some workaround? Maybe split $$($(1)_SRC_FILES) up and declare each dependency individually within the eval?

    Read the article

  • GoDaddy SSL on Shared Hosting

    - by Jon
    So I'm very new to using SSL certificates and I have been trying to install one on a site for a client. He is using shared hosting for multiple domains through GoDaddy, and the site we're working on is not the primary domain. He purchased a UCC certificate for multiple domains and I installed it on the shared hosting account. My thought was that since the domains were under the same hosting account, then they would each be protected under the certificate. This was not the case...apparently. I checked both domains with an SSL checker and the primary domain checked out. The domain that we wanted the SSL on showed the following errors: None of the common names in the certificate match the name that was entered (www.CLIENTDOMAIN.com). You may receive an error when accessing this site in a web browser. I'm not sure how to fix this. It was just purchased yesterday, so if necessary, I guess I could un-install it or re-key it (???). Is there a way to just change the common name to www.CLIENTDOMAIN.com (the correct domain)?

    Read the article

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required a client certificate be provided before the website could be accessed... this all worked fine and dandy. Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating the CA for the certificate needs to be installed in the trusted root certificate authorities store on the ISA Server (i have done this), as well as installing the client certificate on the ISA Server (done as well). I have also verified that the ISA Server is able to access the CRL for our CA no problem... In the listener properties for the web publishing rule, under Authentication, and Client Authentication Method, there is an option for SSL Client Certificate Authentication... i select this, but it appears the only Authentication Validation Method selectable is Windows (Active Directory).... there is no Active Directory in this environment. When i configure the rule with the defaults, I then try to hit my website and it prompts for my certificate, i choose it and hit ok... then I'm given the following error Error Code: 500 Internal Server Error. The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. (12202) I check the event logs on the ISA Server and in Security Logs, i see Event ID 536, Failure Aud. The reason: The NetLogon component is not active. I think this is pretty obvious since there is no active directory available. Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

    Read the article

  • How do I compress a Json result from ASP.NET MVC with IIS 7.5

    - by Gareth Saul
    I'm having difficulty making IIS 7 correctly compress a Json result from ASP.NET MVC. I've enabled static and dynamic compression in IIS. I can verify with Fiddler that normal text/html and similar records are compressed. Viewing the request, the accept-encoding gzip header is present. The response has the mimetype "application/json", but is not compressed. I've identified that the issue appears to relate to the MimeType. When I include mimeType="*/*", I can see that the response is correctly gzipped. How can I get IIS to compress WITHOUT using a wildcard mimeType? I assume that this issue has something to do with the way that ASP.NET MVC generates content type headers. The CPU usage is well below the dynamic throttling threshold. When I examine the trace logs from IIS, I can see that it fails to compress due to not finding a matching mime type. <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files" noCompressionForProxies="false"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="application/json" enabled="true" /> </staticTypes> </httpCompression>
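    One frequently reported cause for exactly this symptom: ASP.NET MVC's JsonResult sends "application/json; charset=utf-8", and some IIS 7.x builds only compress when a dynamicTypes entry matches that full value, so the bare "application/json" entry never fires. Adding the charset-qualified variant (rather than a wildcard) may be all that is needed; a sketch using appcmd, followed by an IIS restart:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost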

    Read the article

  • Algorithm to match list of regular expressions

    - by DSII
    I have two algorithmic questions for a project I am working on. I have thought about these, and have some suspicions, but I would love to hear the community's input as well. Suppose I have a string, and a list of N regular expressions (actually they are wildcard patterns representing a subset of full regex functionality). I want to know whether the string matches at least one of the regular expressions in the list. Is there a data structure that can allow me to match the string against the list of regular expressions in sublinear (presumably logarithmic) time? This is an extension of the previous problem. Suppose I have the same situation: a string and a list of N regular expressions, only now each of the regular expressions is paired with an offset within the string at which the match must begin (or, if you prefer, each of the regular expressions must match a substring of the given string beginning at the given offset). To give an example, suppose I had the string: This is a test string and the regex patterns and offsets: (a) his.* at offset 0 (b) his.* at offset 1 The algorithm should return true. Although regex (a) does not match the string beginning at offset 0, regex (b) does match the substring beginning at offset 1 ("his is a test string"). Is there a data structure that can allow me to solve this problem in sublinear time? One possibly useful piece of information is that often, many of the offsets in the list of regular expressions are the same (i.e. often we are matching the substring at offset X many times). This may be useful to leverage the solution to problem #1 above. Thank you very much in advance for any suggestions you may have!

    Read the article

  • Exchange 2010 POP3/IMAP4/Transport services complaining that they can't find SSL certificate after blue screen

    - by Graeme Donaldson
    We have a single-server Exchange 2010 setup. In the early hours of this morning the server had a blue screen and rebooted. After coming back up the POP3/IMAP4 and Transport services are complaining that they cannot find the correct SSL certificate for mail.example.com. POP3: Log Name: Application Source: MSExchangePOP3 Date: 2012/04/23 11:45:15 AM Event ID: 2007 Task Category: (1) Level: Error Keywords: Classic User: N/A Computer: exch01.domain.local Description: A certificate for the host name "mail.example.com" couldn't be found. SSL or TLS encryption can't be made to the POP3 service. IMAP4: Log Name: Application Source: MSExchangeIMAP4 Date: 2012/04/23 08:30:44 AM Event ID: 2007 Task Category: (1) Level: Error Keywords: Classic User: N/A Computer: exch01.domain.local Description: A certificate for the host name "mail.example.com" couldn't be found. Neither SSL or TLS encryption can be made to the IMAP service. Transport: Log Name: Application Source: MSExchangeTransport Date: 2012/04/23 08:32:27 AM Event ID: 12014 Task Category: TransportService Level: Error Keywords: Classic User: N/A Computer: exch01.domain.local Description: Microsoft Exchange could not find a certificate that contains the domain name mail.example.com in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector Default EXCH01 with a FQDN parameter of mail.example.com. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key. The odd part is that Get-ExchangeCertificate show the cert as enabled for all the relevant services, and OWA is working flawlessly using this certificate. [PS] C:\Users\graeme\Desktop>Get-ExchangeCertificate Thumbprint Services Subject ---------- -------- ------- XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ....S. CN=exch01 YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY ....S. CN=exch01 ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ IP.WS. CN=mail.example.com, OU=Domain Control Validated, O=mail.exa... Here's the certificate in the computer account's personal cert store: Does anyone have any pointers for getting POP3/IMAP4/SMTP to use the cert again?
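    Given that Get-ExchangeCertificate still lists the certificate with the right services while every consumer of it reports it missing, one plausible casualty of the crash is the association between the certificate and its private key in the machine store. A hedged repair attempt, using the mail.example.com thumbprint shown above and then re-binding the services:

        C:\> certutil -repairstore my "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"
        [PS] C:\> Enable-ExchangeCertificate -Thumbprint ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -Services POP,IMAP,SMTP

    If certutil reports that it cannot find the private key at all, the certificate will need to be re-imported from a PFX backup instead.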

    Read the article
