Search Results

Search found 2436 results on 98 pages for 'verify'.

Page 18/98

  • How to avoid lftp Certificate verification error?

    - by pattulus
    I'm trying to get my Pelican blog working. It uses lftp to transfer the actual blog to one's server, but I always get an error: mirror: Fatal error: Certificate verification: subjectAltName does not match 'blogname.com' I think lftp is checking the SSL certificate, and Pelican's quick setup doesn't account for the fact that I don't have SSL on my FTP. This is the code in Pelican's Makefile: ftp_upload: $(OUTPUTDIR)/index.html lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit" which renders in the terminal as: lftp ftp://[email protected] -e "mirror -R /Volumes/HD/Users/me/Test/output /myblog_directory ; quit" What I have managed so far is denying the SSL check by changing the Makefile to: lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no" "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit" With this (incorrect) implementation I get logged in correctly (lftp [email protected]:~>) but the one-liner feature doesn't work anymore and I have to enter the mirror command by hand: mirror -R /Volumes/HD/Users/me/Test/output/ /myblog_directory This works without an error or timeout. The question is how to do this with a one-liner. In addition I tried: set ssl:verify-certificate/ftp.myblog.com no and this trick to disable certificate verification in lftp: $ cat ~/.lftp/rc set ssl:verify-certificate no However, it seems there is no "rc" file in my lftp directory - so this approach has no chance to work.
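
    For reference, a minimal sketch of the one-liner, assuming the same Makefile variables: lftp accepts several commands inside a single -e string when they are separated by semicolons, so the set command can be chained with mirror and quit:

      lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no; mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR); quit"

    Alternatively, ~/.lftp/rc is a plain file rather than a folder (create it if it does not exist), and a line such as set ssl:verify-certificate no in it applies to every session.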

    Read the article

  • curl FTPS with client certificate to a vsftpd

    - by weeheavy
    I'd like to authenticate FTP clients either via username+password or a client certificate. Only FTPS is allowed. User/password works, but while testing with curl (I don't have another option) and a client certificate, I need to pass a user. Isn't it technically possible to authenticate only by providing a certificate? vsftpd.conf passwd_chroot_enable=YES chroot_local_user=YES ssl_enable=YES rsa_cert_file=usrlocal/ssl/certs/vsftpd.pem force_local_data_ssl=YES force_local_logins_ssl=YES Tested with curl -v -k -E client-crt.pem --ftp-ssl-reqd ftp://server:21/testfile the output is: * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Request CERT (13): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS handshake, CERT verify (15): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using DES-CBC3-SHA * Server certificate: * SSL certificate verify result: self signed certificate (18), continuing anyway. > USER anonymous < 530 Anonymous sessions may not use encryption. * Access denied: 530 * Closing connection #0 * SSLv3, TLS alert, Client hello (1): curl: (67) Access denied: 530 This is theoretically ok, as I forbid anonymous access. If I specify a user with -u username:pass it works, but it would work without a certificate too. The client certificate seems to be ok; it looks like this: client-crt.pem -----BEGIN RSA PRIVATE KEY----- content -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- content -----END CERTIFICATE----- What am I missing? Thanks in advance. (The OS is Solaris 10 SPARC).
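
    A hedged sketch of the vsftpd side, assuming the CA that signed the client certificate is available as a PEM file (the path is an assumption): vsftpd can be told to demand and validate a client certificate, but this is checked in addition to, not instead of, the USER/PASS login, so a user still has to be supplied:

      require_cert=YES
      validate_cert=YES
      ca_certs_file=/etc/vsftpd/ca.pem

    The matching curl test would then pass both the certificate and a user, e.g. curl -v -E client-crt.pem --ftp-ssl-reqd -u username:pass ftp://server:21/testfile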

    Read the article

  • Zscaler. Certs, cookies, and port 80 traffic

    - by 54's_lol
    So I work at HQ for a large company that shall remain nameless. We use Zscaler, and I had to roll out a 2048-bit cert per Zscaler's request. People around me at work don't understand the technology and think that the certs are what allows internet connectivity. From my understanding (and please chime in), it is the cookie located at C:\Users\$$$$$$4$$\AppData\Roaming\Macromedia\Flash Player#SharedObjects\Q3JQJQJV\gateway.zscaler.net\zscaler.swf that gets created when you provide your creds the first time you use the browser. The certs are simply a way of inspecting the SSL traffic, as Zscaler had no way of doing this before without them. They are essentially using the classic MITM attack to parse your SSL traffic. Gmail is smart enough to recognize this, as you get a warning. My question is this: is there a product or service that I can use to verify that my web browser, when at home (i.e. off the company network), isn't still getting routed to Zscaler's cloud? If I do a tracert, that will work fine. It's the port 80 and 443 web traffic Zscaler and my company are after. I would like to verify that when I'm off their premises my web traffic is using only my ISP and the path to whatever content I'm searching for. Do the certs I'm pushing and the browser authentication do something behind the curtain that forces web traffic to get routed to Zscaler? I searched quite a bit and would very much like to know if I'm ever off company scrutiny. I do know Zscaler offers a service to force the scenario I'm asking about. Can I prove how my web traffic is getting routed? Thanks for any insight. I've been a fan for a long time and you guys' kung fu is very strong :-)
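
    For reference, one hedged way to check from home (the test site is arbitrary): inspect the certificate chain a public HTTPS site presents. If the issuer shown is the pushed Zscaler/company root rather than the site's public CA, the traffic is being intercepted; if the public chain comes back, the connection is going straight out over the ISP.

      openssl s_client -connect www.google.com:443 -servername www.google.com </dev/null 2>/dev/null | grep -E '^(subject|issuer)='

    The same check can be done in the browser by viewing the certificate details for any HTTPS page.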

    Read the article

  • Problems configuring an SSH tunnel to a Nexentastor appliance for use with headless Crashplan

    - by Rob Smallshire
    Problem I am attempting to configure an SSH tunnel to a NexentaStor appliance from either a Windows or Linux computer so that I can connect a CrashPlan Desktop GUI to a headless CrashPlan server running on the Nexenta box, according to these instructions on the CrashPlan support site: Connect to a Headless CrashPlan Desktop. So far, I've failed to get a working SSH tunnel from either a Windows client (using PuTTY) or a Linux client (using command line SSH). I'm fairly sure the problem is at the receiving end with NexentaStor. A blog article - CrashPlan for Backup on Nexenta - indicates that it could be made to work only "after enabling TCP forwarding in Nexenta in /etc/ssh/sshd_config" - although I'm not sure how to go about that or specifically what I need to do. Things I have tried: Ensuring the CrashPlan server on the Nexenta box is listening on port 4243 $ netstat -na | grep LISTEN | grep 42 127.0.0.1.4243 *.* 0 0 131072 0 LISTEN *.4242 *.* 0 0 65928 0 LISTEN Establishing a tunnel from a Linux host: $ ssh -L 4200:localhost:4243 admin:10.0.0.56 and then, from another terminal on the Linux host, using telnet to verify the tunnel: $ telnet localhost 4200 Trying ::1... Connected to localhost. Escape character is '^]'. with nothing more, although the CrashPlan server should respond with something. From Windows, using PuTTY, I have followed the instructions on the CrashPlan support site to establish an equivalent tunnel, but then telnet on Windows gives me no response at all and the CrashPlan GUI can't connect either. The PuTTY log for the tunnelled connection shows reasonable output: ... 2011-11-18 21:09:57 Opened channel for session 2011-11-18 21:09:57 Local port 4200 forwarding to localhost:4243 2011-11-18 21:09:57 Allocated pty (ospeed 38400bps, ispeed 38400bps) 2011-11-18 21:09:57 Started a shell/command 2011-11-18 21:10:09 Opening forwarded connection to localhost:4243 but the telnet localhost 4200 command from Windows does nothing at all - it just waits with a blank terminal. On the NexentaStor server I've examined the /etc/ssh/sshd_config file and everything seems 'normal' - and I've commented out the ListenAddress entries to ensure that I'm listening on all interfaces. How can I establish a tunnel, and how can I verify that it is working?
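
    A hedged sketch of the server-side change the blog article alludes to, assuming NexentaStor's Solaris-style service manager: in /etc/ssh/sshd_config on the appliance set

      AllowTcpForwarding yes

    then restart the SSH service (svcadm restart ssh) and rebuild the tunnel, e.g. ssh -L 4200:localhost:4243 [email protected], after which telnet localhost 4200 should produce a response from the CrashPlan engine listening on 4243.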

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps as of now: Install TFS2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier); SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier); Reporting Services is installed on the app tier. After this installation worked fine we installed SharePoint 2010 on the app tier. After installation we followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred: TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. We have also noticed that the Documents folder in the Team project has a red x. Please help. Thanks upfront.

    Read the article

  • openssl client authentication error: tlsv1 alert unknown ca: ... SSL alert number 48

    - by JoJoeDad
    I've generated a certificate using openssl and placed it on the client's machine, but when I try to connect to my server using that certificate, I get the error mentioned in the subject line back from my server. Here's what I've done. 1) I do a test connect using openssl to see what the acceptable client certificate CA names are for my server, I issue this command from my client machine to my server: openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -prexit and part of what I get back is as follows: Acceptable client certificate CA names /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected] /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected] 2) Here is what is in the apache configuration file on the server regarding SSL client authentication: SSLCACertificatePath /etc/apache2/certs SSLVerifyClient require SSLVerifyDepth 10 3) I generated a self-signed client certificate called "client.pem" using mypos.pem and mypos.key, so when I run this command: openssl x509 -in client.pem -noout -issuer -subject -serial here is what is returned: issuer= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected] subject= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=mlR::mlR/[email protected] serial=0E (please note that mypos.pem is in /etc/apache2/certs/ and mypos.key is saved in /etc/apache2/certs/private/) 4) I put client.pem on the client machine, and on the client machine, I run the following command: openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -status -cert client.pem and I get this error: CONNECTED(00000003) OCSP response: no response sent depth=1 /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected] verify error:num=19:self signed certificate in certificate chain verify return:0 574:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s3_pkt.c:1102:SSL alert number 48 574:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s23_lib.c:182: I'm really stumped as to what I've done wrong. I've searched quite a bit on this error and what I found is that people are saying the issuing CA of the client's certificate is not trusted by the server, yet when I look at the issuer of my client certificate, it matches one of the accepted CAs returned by my server. Can anyone help, please? Thank you in advance.
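
    One hedged thing to check, assuming the issuing certificate is /etc/apache2/certs/mypos.pem: with SSLCACertificatePath, mod_ssl looks CA certificates up by hash symlink, so the directory has to be rehashed whenever a CA file is added, or the CA can be named explicitly instead:

      c_rehash /etc/apache2/certs          # creates the <hash>.0 symlinks mod_ssl expects
      # or, in the vhost configuration:
      SSLCACertificateFile /etc/apache2/certs/mypos.pem

    After reloading Apache, repeating the s_client test with -CAfile mypos.pem added also lets the client verify the server's chain (the 'self signed certificate in certificate chain' warning).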

    Read the article

  • Kerberos authentication not working for one single domain

    - by Buddy Casino
    We have a strange problem regarding Kerberos authentication with Apache mod_auth_kerb. We use a very simple krb5.conf, where only a single (main) AD server is configured. There are many domains in the forest, and it seems that SSO is working for most of them, except one. I don't know what is special about that domain; the error message that I see in the Apache logs is "Server not found in Kerberos database": [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1025): [client xx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(714): [client xx.xxx.xxx.xxx] Trying to get TGT for user [email protected] [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(625): [client xx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected] [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(640): [client xx.xxx.xxx.xxx] krb5_get_credentials() failed when verifying KDC [Wed Aug 31 14:56:02 2011] [error] [client xx.xxx.xxx.xxx] failed to verify krb5 credentials: Server not found in Kerberos database [Wed Aug 31 14:56:02 2011] [debug] src/mod_auth_kerb.c(1110): [client xx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=401 user=(NULL) authtype=(NULL) When I try to kinit that user on the machine on which Apache is running, it works. I also checked that DNS lookups work, including reverse lookup. Who can tell me what's going on?
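
    One hedged avenue, with placeholder names: in a multi-domain forest the Kerberos libraries have to be able to map the problem domain's DNS names onto a realm, so an explicit [domain_realm] entry in krb5.conf for that child domain (and, if needed, a [realms] entry pointing at its DCs) is worth checking:

      [domain_realm]
          .child.example.com = CHILD.EXAMPLE.COM
          child.example.com = CHILD.EXAMPLE.COM

    "Server not found in Kerberos database" generally means the KDC could not map the requested principal to a registered account, so the SPN/realm mapping for that one domain is the first thing to verify.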

    Read the article

  • What causes Windows Media Player on Windows 8 to not play the entire library?

    - by somequixotic
    Behavior 1: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). A random song from any track in the Library is randomly chosen and played. When observing the Playlist (clicking the "Play" tab), the entire contents of the Library appears in the Playlist. Behavior 2: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). The button visually depresses like it has registered the click, but nothing happens. Absolutely nothing. Moreover, the "Previous" button is grayed out. When observing the Playlist, only the one song that was double-clicked appears in the Playlist. What causes Behavior 2? I cannot correlate any specific action I've taken with Behavior 2, and Behavior 1 has been the case as long as I can remember, all the way back to Windows XP. Even earlier during my usage of Windows 8, I recall Behavior 1 working correctly. But suddenly, inexplicably, without changing any settings in WMP, Behavior 2 kicked in, and persists after reboots. I've tried sfc /scannow in an administrator prompt. All system files are in order. I've downloaded all Windows Updates and driver updates. I've attempted to alter WMP options and playback settings to no avail. So... what is causing Behavior 2? Is this an intended, valid behavior, or is something malfunctioning? How would I know what that "something" is? How would I go about fixing it without just reinstalling Windows 8 fresh?

    Read the article

  • LTO 2 tape performance in LTO 3 drive

    - by hmallett
    I have a pile of LTO 2 tapes, and both an LTO 2 drive (HP Ultrium 460e), and an autoloader with an LTO 3 drive in (Tandberg T24 autoloader, with a HP drive). Performance of the LTO 2 tapes in the LTO 2 drive is adequate and consistent. HP L&TT tells me that the tapes can be read and written at 64 MB/s, which seems in line with the performance specifications of the drive. When I perform a backup (over the network) using Symantec Backup Exec, I get about 1700 MB/min backup and verify speeds, which is slower, but still adequate. Performance of the LTO 2 tapes in the LTO 3 drive in the autoloader is a different story. HP L&TT tells me that the tapes can be read at 82 MB/s and written at 49 MB/s, which seems unusual at the write speed drop, but not the end of the world. When I perform a backup (over the network) using Symantec Backup Exec though, I get about 331 MB/min backup speed and 205 MB/min verify speeds, which is not only much slower, but also much slower for reads than for writes. Notes: The comparison testing was done on the same server, SCSI card and SCSI cable, with the same backup data set and the same tape each time. The tape and drives are error-free (according to HP L&TT and Backup Exec). The SCSI card is a U160 card, which is not normally recommended for LTO 3, but we're not writing to LTO 3 tapes at LTO 3 speeds, and a U320 SCSI card is not available to me at the moment. As I'm scratching my head to determine the reason for the performance drop, my first question is: While LTO drives can write to the previous generation LTO tapes, does doing so normally incur a performance penalty?

    Read the article

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up openvpn, but now I want to integrate a user/pass authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect it returns the following message on the client side: ERROR: could not read Auth username from stdin My server.conf file contains basic stuff; everything works up until I try to implement this form of authentication. mode server dev tun proto tcp port 1194 keepalive 10 120 plugin /usr/lib/openvpn/openvpn-auth-pam.so login client-cert-not-required username-as-common-name auth-user-pass-verify /etc/openvpn/auth.pl via-env ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt cert /etc/openvpn/easy-rsa/2.0/keys/server.crt key /etc/openvpn/easy-rsa/2.0/keys/server.key dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem user nobody group nogroup server 10.8.0.0 255.255.255.0 persist-key persist-tun #persist-local-ip status openvpn-status.log verb 3 client-to-client push "redirect-gateway def1" push "dhcp-option DNS 10.8.0.1" log-append /var/log/openvpn comp-lzo I searched all over the net for a solution and all answers seem to be related to the auth-nocache param, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, that script auth.pl returns exit 0 no matter what the input is, but it seems that the file is not executed before the error is raised. auth.pl file contents: #!/usr/bin/perl my $user = $ENV{username}; my $passwd = $ENV{password}; printf("$user : $passwd\n"); exit 0; Any ideas?
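
    A hedged sketch of the client side, since this error is raised by the client before the server-side script ever runs: auth-user-pass reads the username and password from the first two lines of the file passed to it, and if the option is missing or the file is unreadable the client falls back to prompting on stdin, which fails when there is no attached terminal. Assuming the path from the question (names are placeholders):

      # client config
      auth-user-pass /home/user/.openvpn/users.db

      # /home/user/.openvpn/users.db -- exactly two lines
      myusername
      mypassword

    The file needs to be readable by the user (or daemon) that starts the OpenVPN client.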

    Read the article

  • Postfix Postscreen: how to use postscreen for smtp and smtps both

    - by petermolnar
    I'm trying to get postscreen to work. I've followed the man page and it's already running correctly for smtp. But if I want to use it for smtps as well (adding the same line as for smtp in master.cf, but with smtps) I receive failure messages in syslog like: postfix/postscreen[8851]: fatal: btree:/var/lib/postfix/postscreen_cache: unable to get exclusive lock: Resource temporarily unavailable Some say that postscreen can only run once; that's ok. But can I use the same postscreen session for both smtp and smtps? If not, how do I enable postscreen for smtps as well? Any help would be appreciated! The parts of the configs: main.cf postscreen_access_list = permit_mynetworks, cidr:/etc/postfix/postscreen_access.cidr postscreen_dnsbl_threshold = 8 postscreen_dnsbl_sites = dnsbl.ahbl.org*3 dnsbl.njabl.org*3 dnsbl.sorbs.net*3 pbl.spamhaus.org*3 cbl.abuseat.org*3 bl.spamcannibal.org*3 nsbl.inps.de*3 spamrbl.imp.ch*3 postscreen_dnsbl_action = enforce postscreen_greet_action = enforce master.cf (full) smtpd pass - - n - - smtpd smtp inet n - n - 1 postscreen tlsproxy unix - - n - 0 tlsproxy dnsblog unix - - n - 0 dnsblog ### the problematic line ### smtps inet n - - - - smtpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp relay unix - - - - - smtp showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache dovecot unix - n n - - pipe flags=DRhu user=virtuser:virtuser argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender}
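
    For reference, a hedged sketch of the usual arrangement: postscreen is designed to protect the public port 25 only, and only one postscreen instance can own its cache, so smtps (and submission) are normally handed straight to smtpd, typically with wrapper-mode TLS and SASL enabled:

      smtps     inet  n       -       -       -       -       smtpd
        -o smtpd_tls_wrappermode=yes
        -o smtpd_sasl_auth_enable=yes

    Clients on 465/587 authenticate anyway, so the botnet checks postscreen performs on port 25 are not meant to apply there.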

    Read the article

  • GnuPG Command Line - Verifying KeePass Signature

    - by Stisfa
    I'm trying to verify the PGP Signature of the latest version of KeePass 2.14's setup file against this signature, but this is the output I receive: C:\Program Files (x86)\GNU\GnuPG>gpg.exe --verify C:\Users\User\Desktop\KeePass-2.14-Setup.exe gpg: no valid OpenPGP data found. gpg: the signature could not be verified. Please remember that the signature file (.sig or .asc) should be the first file given on the command line. C:\Program Files (x86)\GNU\GnuPG> I found this command here, but it made no mention about ".sig" or ".asc" files, so I figured I did something wrong. By reading (http://www.gnupg.org/documentation/manuals/gnupg/gpgv.html#gpgv), I further tried the following: C:\Program Files (x86)\GNU\GnuPG>gpg.exe --pgpfile C:\Users\User\Desktop\KeePass-2.14-Setup.exe gpg: Invalid option "--pgpfile" C:\Program Files (x86)\GNU\GnuPG> As you can see, the results are quite obfuscating... I took a look at this on SuperUser (http://superuser.com/questions/16160/short-easy-to-understand-explanation-of-gpg-pgp-for-nontechnical-people - I couldn't use "a href" due to the built in spam filter that discriminates against users with < 10 rep; this is the same reason for the link above this link), but none of the links seemed to really address my question, at least not directly enough for me to get any idea on how to move forward on this. Can anybody here help me with the esoteric technicality of OpenPGP & the associated use of the GnuPG program? I've felt pretty dumb learning VBS, but this is beyond humiliating: it's absolutely debilitating and maiming whatever confidence I had with my IT skills (then again, I have no justification for making any boast either, as I have yet to get my A+ Cert, lol).
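
    For reference, a minimal sketch of the usual detached-signature workflow (file names are assumptions): the KeePass signing key is imported first, then gpg --verify is given the signature file followed by the installer it covers:

      gpg --import keepass-signing-key.asc
      gpg --verify KeePass-2.14-Setup.exe.asc KeePass-2.14-Setup.exe

    Running --verify against the .exe alone produces exactly the "no valid OpenPGP data found" message, because the installer itself contains no OpenPGP data; the detached .asc/.sig file from the KeePass site is what carries the signature.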

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts about this. I now suspect that it is in fact only 32 bit SQL, however I'd like to verify this. Here's some details: The OS is definitely 64 bit. xp_msver shows Platform as NT INTEL X86 SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However sqlservr.exe is not shown with '* 32' in taskmgr, does anyone know why this is the case, if it is in fact 32 bit as claimed? Despite this, it does seem to be running out of the x86 program files folder. If I do the same checks on a confirmed 64 bit installation, it does give back the expected 64 bit readings, which can only prove that this server in question is only running in 32 bit. Now, that being the case, the question arises about how much memory this '32 bit' install can use. Task manager reports about 3.5GB memory usage for sqlservr.exe (The server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64 bit) if SQL is simply using a 32bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64 bit in order to fully utilise the hardware platform, however it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (Unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
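
    A hedged sketch of the interim AWE route for 32-bit SQL Server 2005 on a 64-bit OS (the memory cap is an example value): grant the SQL Server service account the "Lock Pages in Memory" right, then enable AWE and set an explicit ceiling:

      EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
      EXEC sp_configure 'awe enabled', 1; RECONFIGURE;
      EXEC sp_configure 'max server memory', 12288; RECONFIGURE;  -- MB; leave headroom for the OS

    A restart of the instance is needed for AWE to take effect; a 64-bit reinstall remains the cleaner long-term fix.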

    Read the article

  • Coldfusion 9 losing connection to MySQL 5 database server a couple of weeks after the server is started

    - by user1503757
    We get the following ColdFusion error message after our server has been running for a couple of weeks: Error Executing Database Query. Could not create connection to database server. Attempted reconnect 3 times We run ColdFusion Enterprise 9 on a one-year-old XServer with Snow Leopard and MySQL 5. The server has about ten DSNs set up in the ColdFusion Administrator, all local, with default advanced settings, and host set to "localhost". The server is not under heavy load. The strange thing is that after a restart of the server, everything works fine. Then after a week or so, some databases will stop working, in the sense that ColdFusion cannot create a connection to them. If I then go to the ColdFusion Administrator and click "Verify all datasources", I will get that only 2 or 3 got verified and the other ones failed, and it is always the same datasources that can't be verified if I try to verify again while the server is behaving like this, BUT NOT necessarily the same datasources that couldn't be verified the last time the server behaved like this. I know about the setting "max_connections"; we have included a line for that setting in the MySQL config file and set it to 2000, and when we read it by a query it says "2000", so that can't be the problem. Anyone?
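
    One hedged possibility, since the failures appear only after days of uptime: MySQL closes connections that have been idle longer than wait_timeout (28800 seconds, i.e. 8 hours, by default), and pooled JDBC connections that ColdFusion later reuses then fail. Checking the current value is a one-liner:

      SHOW VARIABLES LIKE 'wait_timeout';

    If that turns out to be the cause, the usual remedies are to set the datasource Timeout in the ColdFusion Administrator below MySQL's wait_timeout, or to enable a validation query on the datasource so stale pooled connections are discarded instead of reused.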

    Read the article

  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    We have an online shopping site. When I go to the checkout page I am getting an error like this: "error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)" From the apache error log I can see some attempts to connect to api.paypal.com. Here is the relevant part of my apache error log: * About to connect() to api.paypal.com port 443 (#0) * Trying 66.211.168.123... * connected * Connected to api.paypal.com (66.211.168.123) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure * Closing connection #0 When I tried to connect to api.paypal.com using curl I got an error like this: curl -iv https://api.paypal.com/ * About to connect() to api.paypal.com port 443 (#0) * Trying 66.211.168.91... connected * Connected to api.paypal.com (66.211.168.91) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Request CERT (13): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS alert, Server hello (2): * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure * Closing connection #0 curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure Can anyone help me figure this out? Thanks in advance. Arun S
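
    A hedged reading of the trace: the server sends "Request CERT" and the client answers with an empty certificate (no --cert was given), after which the handshake is rejected, which is the pattern seen when api.paypal.com (the certificate-based Classic API endpoint) is called without the PayPal API certificate. Assuming certificate credentials are in use, a test along these lines should complete the handshake (file names are placeholders):

      curl -v --cert paypal-api-cert.pem --key paypal-api-key.pem https://api.paypal.com/

    If the integration actually uses API signature credentials rather than an API certificate, the calls are normally pointed at api-3t.paypal.com instead.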

    Read the article

  • OpenVPN bad source address from client

    - by Bogdan
    I have one problem with OpenVPN. There are a lot of drop records in the openvpn log file on the server: Mon Oct 22 10:14:41 2012 us=726541 laptop/???:1194 MULTI: bad source address from client [192.168.1.107], packet dropped grep -E "^[a-z]" server.conf ----- port 1194 proto udp dev tun ca data/ca.crt cert data/server.crt key data/server.key dh data/dh1024.pem tls-server tls-auth data/ta.key 0 remote-cert-tls client cipher AES-256-CBC tun-mtu 1200 server 10.10.10.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 8.8.8.8" client-to-client client-config-dir /etc/openvpn/ccd route 10.10.10.0 255.255.255.0 keepalive 10 120 comp-lzo persist-key persist-tun max-clients 5 status /var/log/status-openvpn.log log /var/log/openvpn.log verb 4 auth-user-pass-verify /etc/openvpn/verify.sh via-file tmp-dir /tmp script-security 2 ----- cat ccd/laptop ----- iroute 10.10.10.0 255.255.255.0 ----- cat client.conf ----- remote server ip 1194 client dev tun ping 10 comp-lzo proto udp tls-client tls-auth data/ta.key 1 pkcs12 data/vpn.laptop.p12 remote-cert-tls server #ns-cert-type server persist-key persist-tun cipher AES-256-CBC verb 3 pull auth-user-pass /home/user/.openvpn/users.db ----- According to "Jan Just Keijser - OpenVPN 2 Cookbook", the root of the problem is incorrect config options; see the screenshot. But, as you see, my config has such options. Could you please help me to solve this problem? @week Verb level=6; client log. Mon Oct 22 16:06:02 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0 Mon Oct 22 16:06:02 2012 /sbin/ifconfig tun0 10.10.10.3 pointopoint 10.10.10.5 mtu 1500 Mon Oct 22 16:06:02 2012 /sbin/route add -net xxxx netmask 255.255.255.255 gw 192.168.1.1 Mon Oct 22 16:06:02 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.10.10.5 Mon Oct 22 16:06:02 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.10.10.5 Mon Oct 22 16:06:02 2012 Initialization Sequence Completed cat ccd/latop iroute 10.10.10.0 255.255.255.0 ifconfig-push 10.10.10.3 10.10.10.5
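
    For reference, a hedged sketch of what that message usually means: the client is forwarding packets whose source address is its own LAN (192.168.1.x here), and the server drops them because no iroute tells it that this network lives behind the "laptop" client. Assuming 192.168.1.0/24 is the LAN behind the client, the usual pairing is:

      # server.conf
      route 192.168.1.0 255.255.255.0
      # ccd/laptop  (the file name must match the client certificate's common name)
      iroute 192.168.1.0 255.255.255.0

    The existing iroute 10.10.10.0 255.255.255.0 only re-announces the VPN subnet itself, which does not cover traffic sourced from the client's LAN.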

    Read the article

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file Any advice here, or an idea of where to look for a better solution? FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as is meets all my requirements and does it well... save 1: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file for the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits as Duplicati/Duplicity but actually works across platforms?

    Read the article

  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this: mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M My understanding is that this runs the checksum queries on the master which then propagate via normal replication to the slaves. So, no locking is needed because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master: mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx If there are any differences, I repair the slaves like this: mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx After doing all of this, I repaired both slaves with mk-table-sync. However, everytime I run this sequence (after everything has already been repaired), one slave is perfectly in sync but one slave always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output: Differences on h=x.x.x.x,p=...,u=maatkit DB TBL CHUNK CNT_DIFF CRC_DIFF BOUNDARIES IRC product 10 0 1 product_id = 147377 AND product_id < 162085 IRC post_order_survey 0 0 1 1=1 IRC mk_heartbeat 0 0 1 1=1 IRC mailing_list 0 0 1 1=1 IRC honey_pot_log 0 0 1 1=1 IRC product 12 0 1 product_id = 176793 AND product_id < 191501 IRC product 18 0 1 product_id = 265041 IRC orders 26 0 1 order_id = 694472 IRC orders_product 6 0 1 op_id = 935375
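
    For reference, a hedged way to double-check the tables that keep showing up, using the maatkit tools already in play: run the sync in dry-run mode so it re-compares the flagged chunks against live data and prints the statements it would execute instead of applying them:

      mk-table-sync --print --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    If --print emits no statements for those tables, the rows really match, and the persistent differences are an artifact of the checksum pass itself (commonly chunks on tables without a usable primary key, or floating-point columns whose checksums are not deterministic across servers).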

    Read the article

  • openSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an OpenSSL homework. We have a bunch of PDF files; I'm the CA, and each one should send me a signed PDF for me to verify. I've told them to do the following (and tried to do it by myself): Request and obtain a certificate (I'll skip this part) Create a MIME message with the PDF file in it makemime -c "text/pdf" -a "Content-Disposition: attachment; filename="Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg Sign with OpenSSL openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt Verify with openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt Extract attachment with munpack Elaborato-verified.msg View with Acrobat Reader The problem is that even if I get a file that (from its binary content) resembles a PDF file, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
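
    A hedged sketch of an alternative worth testing, assuming the same file and key names: let openssl smime do the MIME wrapping itself, with -binary to suppress the text-mode line-ending canonicalisation that silently alters binary payloads (a likely cause of the slightly smaller, unreadable PDF), and -nodetach so the content travels inside the PKCS#7 structure:

      openssl smime -sign -binary -nodetach -in Elaborato.pdf -out Elaborato.pdf.p7m -signer nomegruppo.crt -inkey nomegruppo.key -certfile ca.pem
      openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-restored.pdf -CAfile ca.pem

    With this variant the -out of the verify step is the original PDF again, and makemime/munpack drop out of the workflow.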

    Read the article

  • L2TP over IPSec VPN with OpenSwan and XL2TPD can't connect, timeout on Centos 6

    - by Disco
    I'm setting up L2TP over IPSec on my CentOS 6.3 fresh install. I have iptables flushed, permit all. Whenever I try to connect, I get a 'no reply from vpn' and nothing more. Here's my ipsec.conf file (Server is 1.2.3.4): config setup nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12 oe=off protostack=netkey conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=3 rekey=no ikelifetime=8h keylife=1h type=transport left=1.2.3.4 leftprotoport=17/1701 right=%any rightprotoport=17/%any My /etc/ipsec.secrets 1.2.3.4 %any: PSK "password" My sysctl.conf (appended lines) net.ipv4.ip_forward = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.all.send_redirects = 0 net.ipv4.conf.default.send_redirects = 0 net.ipv4.conf.all.log_martians = 0 net.ipv4.conf.default.log_martians = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.conf.default.accept_redirects = 0 net.ipv4.icmp_ignore_bogus_error_responses = 1 Here's what 'ipsec verify' gives: # ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.32/K2.6.32-279.19.1.el6.x86_64 (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing for disabled ICMP send_redirects [OK] NETKEY detected, testing for disabled ICMP accept_redirects [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] And I see xl2tpd is listening on 1701/udp : udp 0 0 1.2.3.4:1701 0.0.0.0:* 2096/xl2tpd
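
    For reference, a hedged first diagnostic (the interface name is an assumption): watching the IKE, NAT-T, ESP and L2TP traffic on the server shows where the exchange stops, i.e. whether the client's UDP 500/4500 packets arrive at all, whether phase 1/2 completes, and whether any UDP 1701 ever reaches xl2tpd:

      tcpdump -n -i eth0 'udp port 500 or udp port 4500 or ip proto 50 or udp port 1701'

    If nothing arrives, the problem is upstream (client side or an intermediate NAT/firewall); if IKE completes but no L2TP follows, the usual suspect for Windows clients behind NAT is the AssumeUDPEncapsulationContextOnSendRule registry setting rather than the Openswan configuration shown.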

    Read the article

  • Checking partition alignment with PowerCLI

    - by Julian
    I'm trying to verify that the file system partitions within each of the servers I'm working on are aligned correctly. I've got the following script which, when I try running it, will claim either that all virtual servers are aligned or that none are, depending on which if statement I use (one is commented out): $myArr = @() $vms = get-vm | where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*" } | sort name foreach($vm in $vms){ $wmi = get-wmiobject -class "win32_DiskPartition" -namespace "root\CIMV2" -ComputerName $vm foreach ($partition in $wmi){ $Details = "" | Select-Object VMName, Partition, Status #if (($partition.startingoffset % 65536) -isnot [decimal]){ if ($partition.startingoffSet -eq "65536"){ $Details.VMName = $partition.SystemName $Details.Partition = $partition.Name $Details.Status = "Partition aligned" } else{ $Details.VMName = $partition.SystemName $Details.Partition = $partition.Name $Details.Status = "Partition not aligned" } $myArr += $Details } } $myArr | Export-CSV -NoTypeInformation "C:\users\myself\Documents\Scripts\PartitionAlignment.csv" Would anyone know what is wrong with my code? I'm still learning about partitions, so I'm not sure how I need to check the starting offset number to verify alignment. EDIT: $myArr = @() $vms = get-vm | where {$_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Microsoft Windows*" } | sort name $wmi = get-wmiobject -class "win32_DiskPartition" -namespace "root\CIMV2" -ComputerName $vm #foreach ($_ In Get-WMIObject Win32_DiskPartition | Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_} foreach ($wmi| Select Name, BlockSize, NumberOfBlocks, StartingOffSet, @{n='Alignment'; e={$_.StartingOffSet/$_.BlockSize}}) {$_}
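
    A hedged sketch of the comparison itself (a 64 KB boundary is assumed; adjust if aligning to 1 MB): the starting offset needs a modulo test for divisibility rather than an equality or type test, since aligned partitions can start at many different offsets:

      if (($partition.StartingOffset % 65536) -eq 0) {
          $Details.Status = "Partition aligned"      # offset is an exact multiple of 64 KB
      } else {
          $Details.Status = "Partition not aligned"
      }

    Note also that Get-WmiObject -ComputerName expects a hostname, so passing $vm.Name (or the guest's DNS name) rather than the VM object may be needed for the WMI query to target the right machine.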

    Read the article

  • Drupal 7: One-time user account

    - by Noob
    I'm going to create a survey in Drupal 7 with the webform module, installed on a Debian system which may be adapted in every way. The users (personally known, approx. 120) doing that survey will walk into a room and complete the survey in browsers on different computers. After that, they'll leave the room and other persons will enter, complete the survey on the same computers, and so on. Each user may enter only one submission. The process needs to be anonymous, i.e. I mustn't have any idea of who did which submission. My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there will be a password field where the one-time password is entered. The value is checked by webform to be unique. I'll get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords. The problem is: I don't like the idea of manually generating the password list, copying it to Excel and doing a manual check. That's a good idea for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users and let drupal/debian/whatever check whether a submission is valid or not. Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7 and verify that each user may submit the form only once? Or do you have a better idea of how to accomplish the task within the intranet? Thank you for your help.
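
    For reference, a hedged sketch of the batch part using drush (account names, mail domain and password length are arbitrary); Webform's own per-user submission limit ("limit each user to 1 submission") then enforces the one-submission rule:

      for i in $(seq -w 1 120); do
        pass=$(openssl rand -hex 4)                                 # short random one-time password
        drush user-create "survey$i" --mail="survey$i@example.invalid" --password="$pass"
        echo "survey$i $pass" >> codes.txt                          # hand these out without recording who gets which
      done

    After each survey round the accounts can be blocked or removed (drush user-block / user-cancel) and regenerated for the next run.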

    Read the article

  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using diskshadow to backup live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts, the first will create the shadow copies and expose them, the second uses robocopy to copy them to a remote location and the third unexposes the shadow copies again. The first script – the one that runs correctly but fails to do what it's supposed to: # DiskShadow script file to backup VM from a Hyper-V host # First, delete any shadow copies of the drives. System Drives needs to be included. Delete Shadows volume C: Delete Shadows volume D: Delete Shadows volume E: #Ensure that shadow copies will persist after DiskShadow has run set context persistent # make sure the path already exists set verbose on begin backup add volume D: alias VirtualDisk add volume C: alias SystemDrive # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines! writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de} create end backup # Backup is exposed as drive X: make sure your drive letter X is not in use Expose %VirtualDisk% X: Exit The next is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay because I really only need the Hyper-V writer. Also I double checked the GUID for the writer, it's correct. During the time when the Hyper-V writer becomes active, 2 things will happen on the guest machines: The Debian/Linux machine will go to a saved state and restore when done, all fine. The Windows guests will "creating vss snapshop-sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is, for some reason, the VHD files I get over seems to be old copies, they miss files, users and updates that are on the actual machines. I also tried putting the machines in a saved sate manually, didn't change the outcome. I hope someone here has an idea of how to solve this.
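
    One hedged thing to rule out, since the copies look stale rather than corrupt: if any of the guests have Hyper-V snapshots/checkpoints, their recent changes live in .avhd differencing disks next to the parent .vhd files, and a copy step that only picks up *.vhd will always produce out-of-date images. Assuming the exposed shadow copy is X: and the target share is a placeholder:

      dir /s X:\*.avhd
      robocopy X:\ \\backuphost\share /E /COPY:DAT     # copy the whole exposed snapshot, not just *.vhd

    If .avhd files do turn up, either merge the snapshots before the backup or make sure they are copied alongside the .vhd files.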

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this: mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M My understanding is that this runs the checksum queries on the master which then propagate via normal replication to the slaves. So, no locking is needed because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master: mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx If there are any differences, I repair the slaves like this: mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx After doing all of this, I repaired both slaves with mk-table-sync. However, everytime I run this sequence (after everything has already been repaired), one slave is perfectly in sync but one slave always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output: Differences on h=x.x.x.x,p=...,u=maatkit DB TBL CHUNK CNT_DIFF CRC_DIFF BOUNDARIES IRC product 10 0 1 product_id = 147377 AND product_id < 162085 IRC post_order_survey 0 0 1 1=1 IRC mk_heartbeat 0 0 1 1=1 IRC mailing_list 0 0 1 1=1 IRC honey_pot_log 0 0 1 1=1 IRC product 12 0 1 product_id = 176793 AND product_id < 191501 IRC product 18 0 1 product_id = 265041 IRC orders 26 0 1 order_id = 694472 IRC orders_product 6 0 1 op_id = 935375

    Read the article
