Search Results

Search found 3266 results on 131 pages for 'san certificate'.

  • fdisk -l only displays boot partition

    - by Franklin
    I have a SAN, and it's able to read and write to the 50TB RAID just fine, but when I run fdisk -l it only lists the boot partition of the SAN server and doesn't display anything about the other partitions on the RAID. I've also tried parted -l with the same result. Yet when I type mount, it shows that the partitions are mounted just fine. I've never seen this happen. The box is running Openfiler 2.3 (I know it's old; we're in the process of upgrading all our old equipment). We have another SAN that's configured almost identically, and it's able to display the partition info with either of the two commands mentioned above.
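
    A diagnostic sketch that may help narrow this down (device names like /dev/sdb are placeholders):

        # Block devices and partitions the kernel currently knows about
        cat /proc/partitions

        # Point the tools at the RAID device explicitly instead of letting them enumerate
        fdisk -l /dev/sdb
        parted /dev/sdb print

        # Filesystems behind the existing mounts
        blkid
        mount

        # If the volume is an LVM physical volume it may have no partition table at all
        pvs; vgs; lvs

    Two hedged possibilities worth ruling out: fdisk of the Openfiler 2.3 era predates GPT support, and a 50TB volume cannot use an MBR partition table; alternatively, the RAID volume may carry no partition table at all (e.g. it is used directly as an LVM physical volume or raw device), in which case fdisk and parted list nothing while mount works fine.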

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use Remote Desktop to connect to the IIS machine and look around (probably debugging).

    1. Create certificate
    1.1 Right-click the cloud project (marked in red) and select "Configure remote desktop".
    1.2 In the drop-down list of certificates, choose <create> at the bottom.
    1.3 Follow the instructions; you can set it up with default values.
    1.4 When done, choose the certificate and click "Copy to File…" as seen in the left of the picture above.
    1.5 Save the file with any name you want. Next we will add it to the local certificate store so we can import it into our solution through the Azure configuration manager in step 3.

    2. Save certificate to local storage
    Now we need to add it to our local certificate store to be able to reach it from the configuration manager in Visual Studio. Microsoft provides the following steps (http://support.microsoft.com/kb/232137).

    To view the Certificates store on the local computer:
    - Click Start, and then click Run.
    - Type "MMC.EXE" (without the quotation marks) and click OK.
    - Click Console in the new MMC you created, and then click Add/Remove Snap-in.
    - In the new window, click Add.
    - Highlight the Certificates snap-in, and then click Add.
    - Choose the Computer option and click Next.
    - Select Local Computer on the next screen, and then click OK.
    - Click Close, and then click OK.

    You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use.

    Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps:
    - Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: certificates may not be listed; if not, it is because there are no certificates installed.
    - Right-click Certificates (or Personal if that option does not exist), choose All Tasks, and then click Import.
    - When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key, then click Next.
    - Enter the password you gave the PFX file when you created it. Be sure the "Mark the key as exportable" option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key.
    - Click Next, and then choose the certificate store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, they will also be added to this store.
    - Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish.

    You will now see the server certificate for your Web server in the list of Personal Certificates, denoted by the common name of the server (found in the subject section of the certificate).

    Now that you have the certificate imported into the certificate store, you can enable Internet Information Services to use that certificate (and the corresponding private key):
    - Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on.
    - Right-click the site and click Properties, then click the Directory Security tab.
    - Under the Secure Communications section, click Server Certificate. This starts the Web Site Certificate Wizard. Click Next.
    - Choose the "Assign an existing certificate" option and click Next.
    - You will now see the contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next.
    - You will now see a summary screen showing all the details about the certificate you are installing. Be sure this information is correct, or you may have problems using SSL or TLS in HTTP communications.
    - Click Next, and then click OK to exit the wizard.

    You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.

    3. Import the certificate into your Visual Studio project
    3.1 Right-click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose Properties.
    3.2 Choose Certificates. Click the ellipsis to the right of the "Thumbprint" field and you should be able to select your newly created certificate. After selecting it, save the file.

    4. Upload the certificate to your Azure subscription
    4.1 Go to the Azure management portal, click the services menu icon to the left, choose the service, and click Upload in the bottom menu.

    5. Connect to the server
    Since I tried to use my account settings (you have to use another name), we have to set up a new name for the connection. No biggie.
    5.1 Go to the Azure management portal, select your service, and in the bottom menu choose "REMOTE". This displays the configuration for remote connection; it will actually change your ServiceConfiguration.cscfg file. After you change it here, it is a good idea to choose Download and replace the file in your project. Set a name that is not your Windows Azure account name and not Administrator.
    5.2 Go to Visual Studio, open Server Explorer, choose the role instance as selected in the picture below, and click "Connect using Remote Desktop".
    5.3 You will now be able to log in with the name and password set up in step 5.1, and voila! Windows Server 2012, IIS, and other nice stuff!

    For this I've been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can find some of this information and more.
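
    As a shortcut to the MMC walkthrough in step 2, the same import can be done from an elevated command prompt; a minimal sketch, where the path and password are placeholders for your own:

        REM Import the PFX into the Local Computer\Personal store (run elevated)
        certutil -f -p MyPfxPassword -importpfx "C:\certs\azure-rd.pfx"

        REM List the store to confirm the import and read off the thumbprint for step 3.2
        certutil -store My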

  • Java Embedded @ JavaOne: Q & A

    - by terrencebarr
    There has been a lot of interest in Java Embedded @ JavaOne since it was announced a short while ago (see my previous post). As this is a new conference, we got a number of questions about it, so we put together a brief Q & A on audience focus, dates, registration, pricing, submissions, etc. Hope this helps and, remember, the Call for Papers ends next week, Jul 18th 2012! Cheers, – Terrence

    Java Embedded @ JavaOne: Q & A

    Q. Where can I learn more about "Java Embedded @ JavaOne"?
    A. Please visit: http://oracle.com/javaone/embedded

    Q. What is the purpose of "Java Embedded @ JavaOne"?
    A. This new event is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, a unique occasion to come together and learn how they can use Java Embedded technologies for new business opportunities.

    Q. What broad audiences would benefit by attending?
    A. Java licensees; government agencies; ISVs; device manufacturers; service providers such as telcos, utilities, healthcare, energy, smart grid/smart metering; automotive/telematics; home/building automation; factory automation; media/TV; and payment vendors.

    Q. What business titles would benefit by attending?
    A. The ideal audience is business and technical decision makers (e.g. system integrators, CTO, CXO, chief architects/architects, business development managers, project managers, purchasing managers, technical leads, senior decision makers, practice leads, R&D heads, and development managers/leads).

    Q. When is "Java Embedded @ JavaOne" taking place?
    A. Wednesday, Oct. 3rd through Thursday, Oct. 4th.

    Q. Where is it taking place?
    A. The Hotel Nikko.

    Q. Won't it impact the flagship JavaOne conference, since the Hotel Nikko is one of the three flagship JavaOne venue hotels?
    A. No. Separate space in the Hotel Nikko will be used for "Java Embedded @ JavaOne", and it will in no way impact the scale and scope of the flagship JavaOne conference's content mix.

    Q. Will there be a call for papers?
    A. Yes. The call for papers has started, but it is ONLY for business-focused submissions.

    Q. What type of business submissions can I make?
    A. We are accepting three types of business submissions:
    - Best Practices: Java Embedded business solutions, methods, and techniques that consistently show results superior to those achieved by other means, as well as discussions on how Java Embedded can improve business operations and increase competitive differentiation and profitability.
    - Case Studies: Discussions with Oracle customers and partners that describe the unique business drivers that convinced them to implement Java Embedded as part of an infrastructure technology mix. The discussions will highlight the issues they faced, the decision making involved, and the implementation choices made to create value and improve business differentiation.
    - Panel: Moderator-driven open discussion focused on the emerging opportunities Java Embedded offers businesses, as well as other topics such as strategy, overcoming common challenges, etc.

    Q. What is the call for papers timeline?
    A. The timeline is as follows:
    - CFP launched – June 18th
    - Deadline for submissions – July 18th
    - Notifications (accepts/declines) – week of July 29th
    - Deadline for speakers to accept speaker invitation – August 10th
    - Presentations due for review – August 31st

    Q. Where can I find more call for papers details?
    A. Please go to: http://www.oracle.com/javaone/embedded/call-for-papers/information/index.html

    Q. How much does it cost to attend?
    A. $595.00 U.S. — Early Bird (launch date – July 13, 2012); $795.00 U.S. — Pre-Registration (July 14 – September 28, 2012); $995.00 U.S. — Onsite Registration (September 29 – October 4, 2012).

    Q. Can an attendee of the flagship JavaOne event and Oracle OpenWorld attend "Java Embedded @ JavaOne"?
    A. Yes. Attendees of both the flagship JavaOne event and Oracle OpenWorld can attend by purchasing a $100.00 U.S. upgrade to their full conference pass.

    Filed under: Mobile & Embedded Tagged: Call for Papers, Java Embedded @ JavaOne, JavaOne San Francisco

  • How do you import CA certificates onto an Android phone?

    - by f50driver
    Hi all, I want to connect to my University's wireless using my Nexus One. When I go to "Add Wi-Fi network" in Wireless Settings, I fill in the Network SSID, select 802.1x Enterprise for the security, and fill everything out. The problem is that our university's wireless uses the Thawte Premium Server CA certificate for certification. When I click the drop-down list for CA certificate I get nothing in the list (just N/A). I have the certificate (Thawte Premium Server CA.pem) and have moved it to my SD card, but Android doesn't seem to detect it automatically. Where should I put the certificate so that the Android wireless manager recognizes it? In other words, how can I import a CA certificate so that Android recognizes that it is on the phone and displays it in the CA Certificate drop-down list? Thanks for any help, Tomek. P.S. My phone is not rooted. EDIT: After doing some research, it looks like you can install certificates by going to your phone's Settings > Location & Security > Install from SD card. Unfortunately, the only accepted file extension appears to be .p12; there does not seem to be a way to import .cer or .pem files (the only two files that come with the Thawte certificates) at the moment. It does look like you can use a converter to turn .cer or .pem files into .p12, but a key file is needed: https://www.sslshopper.com/ssl-converter.html. I do not know where to get this key file for the Thawte certificates.
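
    A possible workaround, sketched with OpenSSL (untested against Android's importer, and assuming the .pem file sits in the current directory): a CA certificate has no private key of its own, so no key file should be needed; OpenSSL can wrap a bare certificate into a PKCS#12 (.p12) file with the -nokeys flag.

        # Wrap the CA certificate alone (no private key) into a .p12 file;
        # you will be prompted for an export password, which Android asks for on import
        openssl pkcs12 -export -nokeys -in "Thawte Premium Server CA.pem" -out thawte-ca.p12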

  • apache renew ssl not working [on hold]

    - by Varun S
    I downloaded a new SSL cert from GoDaddy and installed it on an Apache2 server:
    - put the cert in the /etc/ssl/certs/ folder
    - put gd_bundle.crt in the /etc/ssl/ folder
    - the private key is in /etc/ssl/private/private.key

    I just replaced the original files with the new files and did not replace the private key. I restarted the server, but the website is still showing the old certificate date. What am I doing wrong and how do I resolve it? My httpd.conf file is empty; the certificate config is in the sites-enabled/default-ssl file. The server is Apache2 running on Ubuntu 14.04.

        SSLEngine on
        # A self-signed (snakeoil) certificate can be created by installing
        # the ssl-cert package. See
        # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
        # If both key and certificate are stored in the same file, only the
        # SSLCertificateFile directive is needed.
        SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
        SSLCertificateKeyFile /etc/ssl/private/private.key
        # Server Certificate Chain:
        # Point SSLCertificateChainFile at a file containing the
        # concatenation of PEM encoded CA certificates which form the
        # certificate chain for the server certificate. Alternatively
        # the referenced file can be the same as SSLCertificateFile
        # when the CA certificates are directly appended to the server
        # certificate for convinience.
        SSLCertificateChainFile /etc/ssl/gd_bundle.crt

        -rwxr-xr-x 1 root root 1944 Aug 16 06:34 /etc/ssl/certs/2b1f6d308c2f9b.crt
        -rwxr-xr-x 1 root root 3197 Aug 16 06:10 /etc/ssl/gd_bundle.crt
        -rw-r--r-- 1 root root 1679 Oct  3  2013 /etc/ssl/private/private.key

        /etc/apache2/sites-available/default-ssl: # SSLCertificateFile directive is needed.
        /etc/apache2/sites-available/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
        /etc/apache2/sites-available/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
        /etc/apache2/sites-available/default-ssl: # Point SSLCertificateChainFile at a file containing the
        /etc/apache2/sites-available/default-ssl: # the referenced file can be the same as SSLCertificateFile
        /etc/apache2/sites-available/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt
        /etc/apache2/sites-enabled/default-ssl: # SSLCertificateFile directive is needed.
        /etc/apache2/sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
        /etc/apache2/sites-enabled/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
        /etc/apache2/sites-enabled/default-ssl: # Point SSLCertificateChainFile at a file containing the
        /etc/apache2/sites-enabled/default-ssl: # the referenced file can be the same as SSLCertificateFile
        /etc/apache2/sites-enabled/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt
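
    A quick way to narrow this down (a sketch; substitute your real hostname) is to compare the dates on the certificate file on disk with the certificate Apache is actually serving, and to confirm the new certificate still matches the old private key:

        # Dates on the file Apache is configured to serve
        openssl x509 -in /etc/ssl/certs/2b1f6d308c2f9b.crt -noout -dates

        # Dates on the certificate actually presented on port 443
        echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -dates

        # The cert and key match only if these two digests are identical
        openssl x509 -in /etc/ssl/certs/2b1f6d308c2f9b.crt -noout -modulus | md5sum
        openssl rsa  -in /etc/ssl/private/private.key -noout -modulus | md5sum

        # Re-check the config and restart Apache
        apachectl configtest && service apache2 restart

    If the dates on disk are new but the served dates are old, Apache is reading a different file (or a stale copy) than the one replaced; if the modulus digests differ, the new cert was issued against a different key than the one configured.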

  • Ubuntu 12.04 LDAP SSL self-signed cert not accepted

    - by MaddHacker
    I'm working with Ubuntu 12.04, using OpenLDAP server. I've followed the instructions on the Ubuntu help pages and can happily connect without security. To test my connection I'm using ldapsearch; the command looks like:

        ldapsearch -xv -H ldap://ldap.[my host].local -b dc=[my domain],dc=local -d8 -ZZ

    I've also used:

        ldapsearch -xv -H ldaps://ldap.[my host].local -b dc=[my domain],dc=local -d8

    As far as I can tell I've set up my certificate correctly, but no matter what I try, I can't seem to get ldapsearch to accept my self-signed certificate. So far I've tried:

    Updating my /etc/ldap/ldap.conf file to look like:

        BASE dc=[my domain],dc=local
        URI ldaps://ldap.[my host].local
        TLS_CACERT /etc/ssl/certs/cacert.crt
        TLS_REQCERT allow

    Updating my /etc/ldap.conf file to look like:

        base dc=[my domain],dc=local
        uri ldapi:///ldap.[my host].local
        uri ldaps:///ldap.[my host].local
        ldap_version 3
        ssl start_tls
        ssl on
        tls_checkpeer no
        TLS_REQCERT allow

    Updating my /etc/default/slapd to include:

        SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

    Plus several hours of Googling, most of which resulted in adding the TLS_REQCERT allow line. The exact error I'm seeing is:

        ldap_initialize( ldap://ldap.[my host].local )
        request done: ld 0x20038710 msgid 1
        TLS certificate verification: Error, self signed certificate in certificate chain
        TLS: can't connect.
        ldap_start_tls: Connect error (-11)
            additional info: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    After several hours of this, I was hoping someone else has seen this issue and/or knows how to fix it. Please let me know if I should add more information or if you need further data.
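
    One way to separate an ldapsearch configuration problem from a certificate problem (a sketch; hostnames and the server cert path are placeholders, and it assumes ldaps is listening on 636) is to verify the server's chain directly with OpenSSL, and to confirm that the file named by TLS_CACERT is the CA certificate that signed the server certificate rather than the server certificate itself:

        # Does the chain the server presents verify against the CA file?
        openssl s_client -connect ldap.example.local:636 -CAfile /etc/ssl/certs/cacert.crt < /dev/null

        # The issuer of the server cert must match the subject of the CA cert
        openssl x509 -in /etc/ssl/certs/server.crt -noout -issuer
        openssl x509 -in /etc/ssl/certs/cacert.crt -noout -subject

        # The CA file can also be forced per-run, which rules out doubt about
        # which ldap.conf the client is actually reading
        LDAPTLS_CACERT=/etc/ssl/certs/cacert.crt ldapsearch -xv -H ldaps://ldap.example.local -b dc=example,dc=local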

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file; the log file is on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same disk contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately, which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
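
    For reference, spreading a filegroup across several data files is done with ALTER DATABASE; a minimal T-SQL sketch, where the database, filegroup, and path names are all hypothetical:

        -- Add a filegroup for the write-heavy tables, then give it multiple files
        ALTER DATABASE MyDb ADD FILEGROUP HotTables;
        ALTER DATABASE MyDb ADD FILE
            (NAME = HotTables1, FILENAME = 'E:\SQLData\HotTables1.ndf', SIZE = 10GB),
            (NAME = HotTables2, FILENAME = 'E:\SQLData\HotTables2.ndf', SIZE = 10GB)
        TO FILEGROUP HotTables;

        -- New objects can then be placed on the filegroup explicitly
        CREATE TABLE dbo.Orders (OrderId int NOT NULL) ON HotTables;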


  • unable to destroy windows 2008 r2 failover cluster after SAN rebuild

    - by Zack
    I created a Windows 2008 R2 failover cluster for a SQL 2008 active/passive cluster. This two-node cluster was using a SAN device for a quorum disk resource as well as an MSDTC resource. Well... I decided to reconfigure the SAN device, but I didn't destroy the cluster first. Now that the quorum disk and MSDTC disk are completely gone, the cluster is obviously not working, but I can't even destroy the cluster and start again. I've tried from the Failover Cluster Manager as well as the command line. I was able to get the cluster service to start using the "/fixquorum" parameter, and after doing this I was able to remove the passive node from the cluster, but it wouldn't let me destroy the cluster because the default resource group and MSDTC are still attached as resources. I tried to delete these resources from both the GUI tool and the command line; it either freezes for several minutes and crashes the program, or (in one case) it even BSOD'd the server. Can someone advise on how to destroy this cluster so I can start over?
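
    If the cluster database itself is the obstacle, one hedged option (destructive, so only for nodes you intend to rebuild) is to wipe the cluster configuration from each node directly rather than destroying the cluster as an object; both of these exist on Windows Server 2008 R2 (NODE1 is a placeholder):

        # PowerShell (FailoverClusters module): force-remove the node's cluster configuration
        Import-Module FailoverClusters
        Clear-ClusterNode -Name NODE1 -Force

        REM Legacy cluster.exe equivalent
        cluster node NODE1 /forcecleanup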

  • To what extent do code-signing certificates boost sales of your software?

    - by Dan W
    In the experiences of everyone here, have you found a certificate to boost sales of your (downloadable) program? I produce .NET software and upon clicking the installation file, Windows 7 pops up a message saying the software is from an "unknown publisher" and to proceed with caution. For Windows 8, this appears to be even more prominent, and may adversely affect the number of downloads, and therefore the number of sales. A certificate will help soften this 'warning' by (for example) changing the warning's colour from orange to blue, and give the publisher's name instead of 'unknown'. But I'd like more tangible evidence since many people are obviously used to that message, and may not care and download anyway. So has anyone noticed a jump in sales after the switch?

  • Exchange 2010 internal Autodiscover: migrate away from current .local DNS name

    - by Bryan
    We have an Exchange 2010 server running within our Active Directory domain, with an internal hostname of server.example.local. The server is configured for Outlook Anywhere, but currently has a self-signed certificate with a name of server.example.local installed. Internally, clients connect and work fine, but externally we are getting certificate errors, as you would expect. I'm about to purchase a UCC SSL certificate with all the relevant SANs to correct this, but because of the obvious problem of obtaining a trusted cert with .local as a subject alternative name, I'm looking to configure clients on the internal network so that they don't use any reference to the .local hostname. I've configured our external DNS name for the server as exchange.example.com and have created a CNAME for autodiscover.example.com which also (correctly) points to exchange.example.com. I've also configured internal DNS records for these two hostnames which point to the internal interface of the same server; I don't anticipate any problems there. I'm now trying to reconfigure Autodiscover internally so that Outlook attempts to connect to exchange.example.com. I've followed the steps in KB940726 to prepare for this, and it appeared to work fine: no errors were generated and I was able to verify the CAS name in AD using ADSI Edit. I've just tested this with a newly created user account complete with a new Exchange mailbox; Outlook 2007 connects fine on the internal network, but looking deeper in the Exchange profile, Outlook is still resolving the server name as server.example.local. Could it be the self-signed cert that is causing Outlook to display the server name as server.example.local, or is there still something wrong with my internal Autodiscover configuration? Edit: I've proven it isn't the certificate that is responsible for Outlook returning server.example.local by installing another self-signed certificate with a name of test.example.com. When creating a new Outlook profile I get the mismatch error I'm expecting, but after accepting the cert and finishing the configuration of the Outlook profile, it still shows server.example.local as the server name. This means that if I were to purchase the UCC cert now, external clients would work fine, but internal clients would show a certificate name mismatch. Any ideas where to start diagnosing this?
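
    For reference, the internal URL changes KB940726 describes are made in the Exchange Management Shell; a sketch using the hostnames from the question, where the server name and virtual directory identities are assumptions to adapt:

        Set-ClientAccessServer -Identity SERVER `
            -AutodiscoverServiceInternalUri https://exchange.example.com/Autodiscover/Autodiscover.xml
        Set-WebServicesVirtualDirectory -Identity "SERVER\EWS (Default Web Site)" `
            -InternalUrl https://exchange.example.com/EWS/Exchange.asmx
        Set-OABVirtualDirectory -Identity "SERVER\OAB (Default Web Site)" `
            -InternalUrl https://exchange.example.com/OAB

    Note also that the server name an Outlook profile displays against an Exchange 2010 mailbox comes from the RPC Client Access endpoint, which is set per mailbox database (Set-MailboxDatabase -RpcClientAccessServer) rather than by the Autodiscover URLs above; that may be worth checking before buying the certificate.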

  • x509 certificate Information

    - by sid
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 95 (0x5f)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: C=, O=, CN=
            Validity
                Not Before: Apr 22 16:42:11 2008 GMT
                Not After : Apr 22 16:42:11 2009 GMT
            Subject: C=, O=, CN=, L=, ST=
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                RSA Public Key: (1024 bit)
                    Modulus (1024 bit):
                        ...
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Key Usage: critical
                    Digital Signature, Key Encipherment
                X509v3 Extended Key Usage: critical
                    Code Signing
                X509v3 Authority Key Identifier:
                    keyid: ...
        Signature Algorithm: sha1WithRSAEncryption
            a9:55:56:9b:9e:60:7a:57:fd:7:6b:1e:c0:79:1c:50:62:8f:
            ...
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

    In this certificate, which part is the public key? Is it the Modulus? What does the Signature Algorithm value (a9:55:56:...) represent? Is it a message digest? And what is between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----? Is that the whole certificate? As a novice I'm a little confused between the message digest and the public key. Thanks in advance, opensid
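
    A quick way to see the pieces for yourself (a sketch, assuming the PEM block is saved as cert.pem): the public key is the modulus-plus-exponent pair, and OpenSSL can pull it out of the certificate directly.

        # Dump the full text, as shown above
        openssl x509 -in cert.pem -noout -text

        # Extract just the public key (modulus + exponent) in PEM form
        openssl x509 -in cert.pem -noout -pubkey

        # Show the RSA modulus on its own
        openssl x509 -in cert.pem -noout -modulus

    The hex bytes after the second "Signature Algorithm" line are the issuer's signature over the certificate body (a digest of the body, signed with the CA's private key, so not itself a plain message digest), and the BEGIN/END block is the entire certificate, DER-encoded and then base64-armored.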

  • Dirty Cache Dell Equallogic Storage Array

    - by Jermal Smith
    Has anyone ever run into a dirty cache issue with an EqualLogic SAN? Even after replacement of the controller cards, the EqualLogic storage array fails offline with a dirty cache. I have listed steps on my blog to bring the SAN online again, but this is not the best solution as it continues to fail: http://jermsmit.com/dirty-cache-dell-equallogic-storage-array/ If you have any info on this please share. Thanks, Jermal

  • How to share a volume between VMs in ESX 4?

    - by edomaur
    I want to access a single volume from several VMware ESX 4 VMs, spread across three ESX hosts with datastores on an EqualLogic PS6000 SAN. I know how to manage the data, but I cannot seem to find a way to do this. How can I share a VMDK across hosts (the relevant files are on the SAN)? Is this even possible? Is there a way to do this with RDM?
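
    For completeness, one mechanism VMware documents for letting several VMs open the same VMFS-backed VMDK is the multi-writer flag (it disables VMFS's simultaneous-write protection, so the guests themselves must coordinate access, e.g. with a cluster filesystem). A hedged sketch, with datastore and disk names as placeholders:

        # Create an eager-zeroed disk on the shared EqualLogic datastore
        vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/ps6000-ds/shared/shared-disk.vmdk

        # .vmx entries on every VM that opens the disk (scsi1:0 is an assumed slot)
        scsi1:0.present = "TRUE"
        scsi1:0.fileName = "/vmfs/volumes/ps6000-ds/shared/shared-disk.vmdk"
        scsi1:0.sharing = "multi-writer"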

  • SMB returns the entire file instead of header info

    - by billdlawson
    A section of our code starts by checking for access to many data files (flat files, so each table is a file). When we do a packet capture, in our capture only the header info is sent by the server to the client. However, I have one customer using a SAN whose clients get the whole file instead of just the header info, and besides just being slower, this is causing file access issues. They have already turned off oplocks at the server and at the workstations. This is not client/server: the data files and the application reside on the server, but the users run the application locally via a shortcut with a mapped drive or UNC path. So when I simply select an option that prompts for a vehicle number (not trying to select a record, just verifying the data files are accessible), that window opens in 1-2 seconds for me. When they do the same thing it takes 6-15 seconds once several users are running the program; the maximum number of users is 15. The program has a lot of small modules (800 .cob modules), so it is very chatty, but these are data files. We have Wireshark captures that show their server pulling the whole file where ours returns just the header (their capture vs. ours), so we suspect the SAN. Has anyone ever heard of a SAN improperly interpreting runtime requests, i.e. an SMB request? This is Acucobol-GT (now Micro Focus); the application is written in COBOL. This is not a new program, just a new problem, and this is one customer out of over a thousand who are otherwise running smoothly, so we are totally stumped. All users are on XP, the server is Windows 2003 (with Virtual Server), and I don't yet know the SAN details. Also, we have many installations running virtual servers, but only a few on SANs (or we just don't know it). This is not a network throughput issue: the load is less than 5% on the server and there are no timeouts or retransmits. P.S. If it wasn't for Wireshark I'd still be chasing my tail; an application trace file on their installation just looks like they run slower. If you want the Wireshark trace file I can make it available. Thanks in advance, and please excuse my verbosity, but I'm not sure what's relevant.

  • Using installed identity certificate from within an app on iPhone

    - by Sabi Tinterov
    Hi, my question is: is there a way to use the identity certificates installed on the phone from within my app? For example, similar to the case with Safari: if a site requires a client certificate, the user has to install it on the phone, and then when authenticating, Safari uses the installed certificate. I need to do the same: 1. The user installs a certificate on the phone. 2. The user starts the application and authenticates using the installed certificate. Thanks

  • OpenSSL Handshake Failure (14094410) - Erroneous Client Certificate Check from Mobile Phone

    - by Clayton Sims
    I'm running a proxy server through Apache with mod_ssl, which we're using to proxy POSTs from mobile devices to another internal server. This works successfully for most clients, but requests from a specific phone model (Nokia 2690) show a bizarre handshake failure. It looks as though OpenSSL is either requesting (or attempting to read an unsolicited) client certificate from the phone, which is especially strange because J2ME's KSSL implementation doesn't support client certs. I've disabled client certificates with the SSLVerifyClient none directive in both the virtual host conf and the mod_ssl conf. The trace from error.log on debug level is (details and empty BIO dump frames redacted):

        [client 41.220.207.10] Connection to child 0 established (server www.myserver.org:443)
        [info] Seeding PRNG with 656 bytes of entropy
        [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/accept initialization
        [debug] ssl_engine_io.c(1882): OpenSSL: read 11/11 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows)
        [debug] ssl_engine_io.c(1882): OpenSSL: read 49/49 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90db] (BIO dump follows)
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 read client hello A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server hello A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write certificate A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server done A
        [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 flush data
        [debug] ssl_engine_io.c(1882): OpenSSL: read 5/5 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows)
        [debug] ssl_engine_io.c(1882): OpenSSL: read 2/2 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d5] (BIO dump follows)
        [debug] ssl_engine_kernel.c(1879): OpenSSL: Read: SSLv3 read client certificate A
        [debug] ssl_engine_kernel.c(1898): OpenSSL: Exit: failed in SSLv3 read client certificate A
        [client 41.220.207.10] SSL library error 1 in handshake (server www.myserver.org:443)
        [info] SSL Library Error: 336151568 error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        [client 41.220.207.10] Connection closed to child 0 with abortive shutdown (server www.myserver.org:443)

    I've tried temporarily enabling all ciphers and all protocols with mod_ssl; neither seemed to be the issue. The phone should be using RSA_RC4_128_MD5 and SSLv3, all of which are available. Am I missing something more fundamental about what's failing here? Since the certificate request might have been part of a renegotiation failure, I also tried enabling SSLInsecureRenegotiation On on the virtual host, in case the phone's SSL stack doesn't support the new renegotiation protocol, but to no avail. Currently running: Apache/2.2.16 (Ubuntu), mod_ssl/2.2.16, OpenSSL/0.9.8o, Apache proxy_html/3.0.1.
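
    One hedged way to reproduce the phone's side of the handshake from a desktop (a sketch; substitute the real hostname) is to offer exactly the protocol and cipher the handset should use and watch where the handshake stops:

        # Offer only SSLv3 with RC4-MD5 (the OpenSSL name for RSA_RC4_128_MD5), as the Nokia should
        openssl s_client -connect www.myserver.org:443 -ssl3 -cipher RC4-MD5 -state

        # For comparison, a default handshake
        openssl s_client -connect www.myserver.org:443 -state

    If the restricted handshake fails where the default one succeeds, the cipher/protocol combination is being rejected server-side despite the configuration; if both succeed, the failure is specific to something the phone sends.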

  • GeoTrust SSL brand name used by re-sellers

    - by Christopher
    I feel like I got the bait-and-switch from my web host provider, since they advertise "GeoTrust SSL" for $99. I purchased it thinking the certificate would be issued from geotrust.com, but then I got an email from Comodo saying they are providing it. My host provider says they get a discount by using Comodo. I purchased the certificate with the understanding it would be issued by GeoTrust. I called the host provider and they said they usually expect it from GeoTrust, but someone from email support responded saying the product name is "GeoTrust SSL" but they use Comodo to get a discount. I think this is bogus and an unfair trade practice. However, searching for "GeoTrust" on Google brings up a ton of websites selling "GeoTrust" certificates. How can companies get away with this? Since the host provider is part of the BBB, I plan to ask my host to update the purchase page on their website to state clearly that "This certificate is provided at a discount and may be issued by a provider other than GeoTrust.com, such as Comodo.com." Any feedback on this is appreciated.

  • MS NPS denying access, can't validate server certificate

    - by Fred Weston
    At my office we use a Cisco WLC2504 wireless controller, and starting about a week ago we began having problems with users connecting to one of our secure wireless networks. We are running AD on Windows Server 2008 R2 and use Network Policy Server to control access to our wireless network. When I look at the logs in Event Viewer after a failed connection attempt, I see an access reject message:

        Reason Code: 262
        Reason: The supplied message is incomplete. The signature was not verified.

    Looking this up on Google I found this article: http://support.microsoft.com/kb/838502. I tried disabling server certificate validation on my computer, and as soon as I did that I was able to connect to the network, so it seems there is some sort of certificate validation issue. I'm not sure which certificate is failing validation or how to fix it. This used to work and stopped suddenly by itself, so I am thinking a certificate may have expired. When I go to NPS > Policies > Network Policies > My policy > Constraints > Authentication methods > Microsoft PEAP and view the properties, the certificate specified there expires in 2016, so it doesn't seem as though this could be the problem. Any suggestions on how to troubleshoot this issue?
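
    A hedged way to check the server side of that chain (a sketch; first export the certificate NPS presents to a file, e.g. nps-server.cer, a name assumed here) is to let Windows walk the chain and revocation data explicitly:

        REM Walk the chain, fetching AIA and CRL/OCSP data; errors point at the broken link
        certutil -verify -urlfetch nps-server.cer

        REM List certificates (with expiry dates) in the computer's Personal store on the NPS box
        certutil -store My

    Since clients succeed the moment server certificate validation is disabled, an expired or missing certificate somewhere in the server's chain (root or intermediate), rather than the leaf shown in the PEAP properties, is a plausible suspect.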

  • How to get Multipath IO with a Dell MD3600i into an active/active setup?

    - by Disco
    I'm desperately trying to improve the performance of my SAN connection. Here's what I have:

        [root@xnode1 dell]# multipath -ll
        mpath1 (36d4ae520009bd7cc0000030e4fe8230b) dm-2 DELL,MD36xxi
        [size=5.5T][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
        \_ round-robin 0 [prio=200][active]
         \_ 18:0:0:0 sdb 8:16  [active][ready]
         \_ 19:0:0:0 sdd 8:48  [active][ghost]
         \_ 20:0:0:0 sdf 8:80  [active][ghost]
         \_ 21:0:0:0 sdh 8:112 [active][ready]

    And multipath.conf:

        defaults {
            udev_dir                /dev
            polling_interval        5
            prio_callout            none
            rr_min_io               100
            max_fds                 8192
            user_friendly_names     yes
            path_grouping_policy    multibus
            default_features        "1 fail_if_no_path"
        }
        blacklist {
            device {
                vendor  "*"
                product "Universal Xport"
            }
        }
        devices {
            device {
                vendor              "DELL"
                product             "MD36xxi"
                path_checker        rdac
                path_selector       "round-robin 0"
                hardware_handler    "1 rdac"
                failback            immediate
                features            "2 pg_init_retries 50"
                no_path_retry       30
                rr_min_io           100
                prio_callout        "/sbin/mpath_prio_rdac /dev/%n"
            }
        }

    And sessions:

        [root@xnode1 dell]# iscsiadm -m session
        tcp: [13] 10.0.51.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [14] 10.0.50.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [15] 10.0.51.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [16] 10.0.50.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c

    I'm getting very poor read performance with:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=1000

    The SAN is configured as follows:

        CTRL0,PORT0 : 10.0.50.220
        CTRL0,PORT1 : 10.0.50.221
        CTRL1,PORT0 : 10.0.51.220
        CTRL1,PORT1 : 10.0.51.221

    And on the host (dual 10GbE Ethernet card, Intel DA2):

        IF0 : 10.0.50.1
        IF1 : 10.0.51.1

    It's connected to a 10GbE switch dedicated to SAN traffic. My question is: why is the connection set up as 'ghost' and not 'ready', like an active/active configuration?

  • Setup of HP ProCurve 2810-24G for iSCSI?

    - by 3molo
    Hi, I have a pair of ProCurve 2810-24G that I will use with a Dell EqualLogic SAN and VMware ESXi. Since ESXi does MPIO, I am a little uncertain about the configuration of the links between the switches. Is a trunk the right way to go between the switches? I know that the ports for the SAN and the ESXi hosts should be untagged, so does that mean that I want a tagged VLAN on the trunk ports? This is more or less the configuration:

        trunk 1-4 Trk1 Trunk
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 24,Trk1
           ip address 10.180.3.1 255.255.255.0
           no untagged 5-23
           exit
        vlan 801
           name "Storage"
           untagged 5-23
           tagged Trk1
           jumbo
           exit
        no fault-finder broadcast-storm
        stack commander "sanstack"
        spanning-tree
        spanning-tree Trk1 priority 4
        spanning-tree force-version RSTP-operation

    The EqualLogic PS4000 SAN has two controllers with two network interfaces each, and Dell recommends that each controller be connected to each of the switches. From the VMware documentation, it seems creating one VMkernel port per pNIC is recommended. With MPIO, this could allow for more than 1 Gbps throughput.

  • RemoteApp shows no certificate available but RD Session host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a root CA that is trusted by all of the end computers; that CA has signed a wildcard cert I am trying to use for the server. I added the PFX of the wildcard cert to the local machine Personal store, and from there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp, the certificate does not show up. What do I need to do to get my certificate to be available for use?

    UPDATE: The certificate was generated with the following commands:

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx

    The resulting PFX was added to the machine's local store via MMC. Oddly, going into PowerShell, if I add the -CodeSigningCert flag the wildcard certificate is excluded from the search results for Get-ChildItem in my Cert:\LocalMachine\My path, but if I don't include the flag, the certificate is there.
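
    That -CodeSigningCert behavior suggests the certificate lacks the Code Signing enhanced key usage (OID 1.3.6.1.5.5.7.3.3), which the -CodeSigningCert filter keys on and which RemoteApp signing plausibly requires as well. A hedged sketch of regenerating it with makecert's -eku switch, keeping the file names from the question:

        REM Reissue the wildcard cert with the Code Signing EKU included
        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature ^
            -eku 1.3.6.1.5.5.7.3.3 ^
            -ic VetWebCA.cer -iv VetWebCA.pvk ^
            -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx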

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for guidance on good idea / bad idea strategies; I'm sure I'm not in the "best practices" budget range. Currently I have three Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server, serving about 6TB. One is the primary; the other two are rsync targets for backup. The 6TB is stored on their respective local storage disks as an LVM of 3x2TB virtual disks, and the file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows, and the like. Several of those VMs' disks reside on a QNAP NAS so we can play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, have the VM images on a SAN or NAS, and use two or more NAS devices as NFS-serving file appliances? Or a hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat, and we've already skinned it a few ways. What makes sense?
