Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • openssl client authentication error: tlsv1 alert unknown ca: ... SSL alert number 48

    - by JoJoeDad
    I've generated a certificate using openssl and placed it on the client's machine, but when I try to connect to my server using that certificate, I get the error mentioned in the subject line back from my server. Here's what I've done.

    1) To see what the acceptable client certificate CA names are for my server, I issue this command from my client machine:

        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -prexit

    Part of what I get back is as follows:

        Acceptable client certificate CA names
        /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
        /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]

    2) Here is what is in the Apache configuration file on the server regarding SSL client authentication:

        SSLCACertificatePath /etc/apache2/certs
        SSLVerifyClient require
        SSLVerifyDepth 10

    3) I generated a self-signed client certificate called "client.pem" using mypos.pem and mypos.key, so when I run this command:

        openssl x509 -in client.pem -noout -issuer -subject -serial

    here is what is returned:

        issuer= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]
        subject= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=mlR::mlR/[email protected]
        serial=0E

    (Please note that mypos.pem is in /etc/apache2/certs/ and mypos.key is saved in /etc/apache2/certs/private/.)

    4) I put client.pem on the client machine, and on the client machine I run the following command:

        openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -status -cert client.pem

    and I get this error:

        CONNECTED(00000003)
        OCSP response: no response sent
        depth=1 /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
        verify error:num=19:self signed certificate in certificate chain
        verify return:0
        574:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s3_pkt.c:1102:SSL alert number 48
        574:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s23_lib.c:182:

    I'm really stumped as to what I've done wrong. I've searched quite a bit on this error, and what I found is that people are saying the issuing CA of the client's certificate is not trusted by the server. Yet when I look at the issuer of my client certificate, it matches one of the acceptable CAs returned by my server. Can anyone help, please? Thank you in advance.
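
    (A hedged suggestion: mod_ssl's SSLCACertificatePath only consults certificates reachable through hash-named symlinks, so a CA file that merely sits in the directory is invisible to Apache even though nothing in the errors says so. Assuming mypos.pem is the CA that signed client.pem, something like this on the server may be worth trying:

        # confirm the client cert actually chains to the CA Apache should trust
        openssl verify -CAfile /etc/apache2/certs/mypos.pem client.pem

        # (re)create the hash symlinks that SSLCACertificatePath requires
        c_rehash /etc/apache2/certs

    Alternatively, "SSLCACertificateFile /etc/apache2/certs/mypos.pem" points Apache at the file directly and sidesteps the hashing requirement. This is a guess from the symptoms, not a confirmed fix.)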


  • maillog "No route to host" error

    - by Sherwood Hu
    I have a CentOS server. It has sendmail installed but is not used as a mail server. I forwarded root's email to another email address. However, I keep getting errors in maillog:

        Dec 6 08:49:16 server1 sm-msp-queue[16191]: qB6601et005433: to=root, ctladdr=root (0/0), delay=08:49:15, xdelay=00:00:00, mailer=relay, pri=883224, relay=[127.0.0.1], dsn=4.0.0, stat=Deferred: [127.0.0.1]: No route to host
        Dec 6 08:49:16 server1 sendmail[16190]: qB39nDfQ014062: to=<[email protected]>, delay=3+05:00:02, xdelay=00:00:00, mailer=esmtp, pri=6965048, relay=subdomain.example.com., dsn=4.0.0, stat=Deferred: subdomain.example.com.: No route to host
        Dec 6 08:49:16 server1 sendmail[16190]: qB39nDfR014062: to=<[email protected]>, delay=3+05:00:02, xdelay=00:00:00, mailer=esmtp, pri=7004959, relay=subdomain.example.com., dsn=4.0.0, stat=Deferred: subdomain.example.com.: No route to host

    At the forwarded email address, I received the notification "it can't deliver email to [email protected]". subdomain.example.com does not have an MX record, and I do not want to add one. Is there any configuration I can change to prevent this error? I want all email to root to be forwarded to the forwarding address.
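
    (A sketch of one possible fix, assuming the goal is to hand all outbound mail to a relay that can actually route it rather than delivering directly; the relay hostname is a placeholder:

        define(`SMART_HOST', `mail.yourprovider.example')dnl

        # rebuild sendmail.cf and restart (CentOS paths)
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart

    This assumes the "No route to host" comes from sendmail trying to connect directly to hosts it cannot reach. Note also the sm-msp-queue line: if the local submission program cannot even reach the MTA on 127.0.0.1, check that sendmail is listening on localhost first.)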


  • Configuring vsftpd anonymous upload: creates files but freezes at 0 bytes

    - by Wayne
    I installed vsftpd on Ubuntu with sudo apt-get install vsftpd, then did the configuration shown in the /etc/vsftpd.conf file below. Anonymous ftp allows cd into the upload directory and allows put myfile.txt, which gets created on the server, but then the client hangs and never proceeds. The file on the server remains at 0 bytes.

    Here are the folders and permissions:

        root@support:/home/ftp# ls -ld .
        drwxr-xr-x 3 root root 4096 Jun 22 00:00 .
        root@support:/home/ftp# ls -ld pub
        drwxr-xr-x 3 root root 4096 Jun 21 23:59 pub
        root@support:/home/ftp# ls -ld pub/upload
        drwxr-xr-x 2 ftp ftp 4096 Jun 22 00:06 pub/upload

    Here's the vsftpd.conf file:

        root@support:/home/ftp# grep -v '#' /etc/vsftpd.conf
        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        anon_root=/home/ftp/pub/
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=ftp
        nopriv_user=ftp
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Here's an example file from an attempted upload:

        root@support:/home/ftp/pub/upload# ls -l
        total 0
        -rw------- 1 ftp nogroup 0 Jun 22 00:06 build.out

    This is the client attempting to upload; it is frozen at this point:

        $ ftp 173.203.89.78
        Connected to 173.203.89.78.
        220 (vsFTPd 2.0.6)
        User (173.203.89.78:(none)): ftp
        331 Please specify the password.
        Password:
        230 Login successful.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        553 Could not create file.
        ftp> cd upload
        250 Directory successfully changed.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        150 Ok to send data.
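
    (A hedged guess from the symptoms: the control connection works and the file is created, but the active-mode (PORT) data connection from server port 20 back to the client never gets through, which is typical when a firewall or NAT sits between the two. A minimal sketch of a passive-mode configuration to try in /etc/vsftpd.conf, with the port range chosen arbitrarily:

        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100

    Then open 40000-40100/tcp on the server's firewall and have the client use passive mode; most GUI clients do by default, but the command-line ftp client shown above does not. If uploads must stay in active mode, the client-side firewall has to permit the incoming data connection instead.)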


  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I'm wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication?

    My Zimbra server version is:

        [zimbra@zimbra ~]$ zmcontrol -v
        Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.

    My LDAP server status is:

        [zimbra@ldap ~]$ zmcontrol status
        Host ldap.domain.com
                ldap                    Running
                snmp                    Running
                stats                   Running
                zmconfigd               Running

    I already installed the nss-pam-ldapd packages on my servers:

        [root@www]# rpm -qa | grep ldap
        nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
        apr-util-ldap-1.3.9-3.el6_0.1.x86_64
        pam_ldap-185-11.el6.x86_64
        openldap-2.4.23-32.el6_4.1.x86_64

    My /etc/nslcd.conf is:

        [root@www]# tail -n 7 /etc/nslcd.conf
        uid nslcd
        gid ldap
        # This comment prevents repeated auto-migration of settings.
        uri ldap://ldap.domain.com
        base dc=domain,dc=com
        binddn uid=zimbra,cn=admins,cn=zimbra
        bindpw **pass**
        ssl no
        tls_cacertdir /etc/openldap/cacerts

    When I run:

        [root@www ~]# id username
        id: username: No such user

    But I am sure that the username user exists on the LDAP server.

    EDIT: When I run an ldapsearch command I get all results, with credentials and dn:

        [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'
        # extended LDIF
        #
        # LDAPv3
        # base <dc=domain,dc=com> (default) with scope subtree
        # filter: objectclass=*
        # requesting: ALL
        #

        # domain.com
        dn: dc=domain,dc=com
        zimbraDomainType: local
        zimbraDomainStatus: active
        .
        .
        .
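
    (A sketch of the pieces that are usually still missing at this point, assuming nslcd is the mechanism in use: NSS has to be told to consult LDAP, and the nslcd daemon has to be running. On CentOS 6 that would look roughly like:

        # /etc/nsswitch.conf
        passwd:     files ldap
        shadow:     files ldap
        group:      files ldap

        service nslcd start
        chkconfig nslcd on

    One more assumption to verify: "id username" only works if the LDAP entries carry posixAccount attributes (uid, uidNumber, gidNumber, homeDirectory). Zimbra accounts do not have these by default, so they may need to be added to the Zimbra schema, or nslcd needs filter/map directives pointing at the attributes Zimbra does provide.)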


  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below:

        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password
        auth       required     pam_google_authenticator.so

    I've already tried adding an "auth sufficient pam_exec.so /etc/pam.d/ip.sh" line above the google-authenticator line, but I can't understand how to check an IP address in the bash script.
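
    (A sketch of the commonly cited pam_access recipe for this, with the trusted address as a placeholder: pam_access succeeds when a "+" rule matches the connecting host, and the [success=1 default=ignore] control skips the next module, i.e. google-authenticator, on success. Replace the last auth line with:

        auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-local.conf
        auth required pam_google_authenticator.so

    and create /etc/security/access-local.conf:

        # skip the verification code from the trusted address
        + : ALL : 192.168.1.10
        # everyone else falls through to google-authenticator
        - : ALL : ALL

    This is the standard pattern rather than something verified on this exact setup, so test from a second session before closing the first.)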


  • Ping: sendmsg: operation not permitted error after installing iptables on Arch GNU/Linux

    - by estol
    Yesterday I got a new computer to use as my home server, an HP ProLiant MicroServer. I installed Arch Linux on it, with kernel version 3.2.12. After installing iptables (1.4.12.2, the current version afaik), changing the net.ipv4.ip_forward key to 1, enabling forwarding in the iptables configuration file, and rebooting, the system cannot use any of its network interfaces. Ping fails with:

        ping: sendmsg: operation not permitted

    If I remove iptables completely, networking is okay, but I need to share the Internet connection with the local network.

        eth0 - WAN: NIC integrated on the motherboard (no idea of vendor, probably HP)
        eth1 - LAN: NIC in a PCI Express slot (Intel Gigabit CT Desktop, http://www.intel.com/content/www/us/en/network-adapters/gigabit-network-adapters/gigabit-ct-desktop-adapter.html)

    Since it works without iptables (the server can access the internet, and I can log in with ssh from the internal network), I assume it has something to do with iptables. I do not have much experience with iptables, so I used these as reference (separate from each other, of course):

        wiki.archlinux.org/index.php/Simple_stateful_firewall#Setting_up_a_NAT_gateway
        revsys.com/writings/quicktips/nat.html
        howtoforge.com/nat_iptables

    On my previous server, I used the revsys guide to set up NAT, and it worked like a charm. Has anyone experienced anything like this before? What am I doing wrong? Thanks, estol
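
    (A hedged guess: "operation not permitted" from ping usually means a rule or a default DROP policy in the OUTPUT chain is rejecting locally generated packets, which fits a half-applied firewall ruleset being restored at boot. A minimal sketch of a working NAT gateway for this layout, assuming eth0 is WAN and eth1 is LAN:

        # allow the box itself to talk
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth1 -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT

        # forward LAN -> WAN and masquerade
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Comparing "iptables -L -v -n" and "iptables -t nat -L -v -n" against whatever /etc/iptables/iptables.rules loads at boot should show which chain is eating the packets.)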


  • How to rip AVI from VOB using ffmpeg

    - by Linux Jedi
    I am trying to convert a VOB to an AVI. I have ripped an AVI from this VOB before using ffmpeg, but for some reason it's not working this time. This is what I tried:

        ffmpeg -sameq -acodec copy -i VTS_01_2.VOB output.avi

    This is the output I get:

        FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
          built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664)
          configuration:
          libavutil     50.15. 1 / 50.15. 1
          libavcodec    52.72. 2 / 52.72. 2
          libavformat   52.64. 2 / 52.64. 2
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0.11. 0 /  0.11. 0
        [mpeg2video @ 0x101014200]mpeg_decode_postinit() failure
            Last message repeated 6 times
        Input #0, mpeg, from 'VTS_01_2.VOB':
          Duration: 26:30:29.20, start: 140.171311, bitrate: 90 kb/s
            Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x480 [PAR 32:27 DAR 16:9], 9800 kb/s, 31.44 fps, 29.97 tbr, 90k tbn, 59.94 tbc
            Stream #0.1[0xa0]: Audio: pcm_s16be, 48000 Hz, 2 channels, s16, 1536 kb/s
        Output #0, avi, to 'output.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0: Video: mpeg4, yuv420p, 720x480 [PAR 32:27 DAR 16:9], q=2-31, 200 kb/s, 29.97 tbn, 29.97 tbc
            Stream #0.1: Audio: pcm_s16be, 48000 Hz, 2 channels, 1536 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Could not write header for output file #0 (incorrect codec parameters ?)
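
    (A hedged reading of that last line: the failure is most likely the audio stream, since "-acodec copy" tries to put big-endian PCM (pcm_s16be, DVD LPCM) into an AVI container, which AVI cannot carry. Also note that in this ffmpeg version, options placed before -i apply to the input file, so the flags were probably not doing what was intended. A sketch that re-encodes the audio instead of copying it:

        ffmpeg -i VTS_01_2.VOB -sameq -acodec libmp3lame -ab 192k output.avi

    If this build lacks libmp3lame, "-acodec ac3" or "-acodec pcm_s16le" (little-endian PCM, which AVI does accept) are alternatives to try.)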


  • Server taking too long to respond error

    - by DCJones
    This is my first post on Server Fault and my first venture into web server configuration.

    The hardware and software:

        CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
        OS: Linux 2.6.18-128.el5
        Memory: 2Gb

    Background: I am running a small database (MySQL), around 1000 records, with each record containing 44 fields. At the start of each day ("00:01") the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox browser. All remote PCs are connected to the internet using a minimum 4Gb broadband connection. Each remote PC loads a URL which displays a dynamic page of data that is refreshed every 20 seconds. This is a continual process, 24 hours a day.

    The problem I am having is that, on odd occasions throughout the day, the PC browsers error out with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very helpful. Best regards, Dereck

    Server config file, httpd.conf:

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5
        <IfModule prefork.c>
        StartServers       8
        MinSpareServers    5
        MaxSpareServers   20
        ServerLimit      256
        MaxClients       254
        MaxRequestsPerChild  4000
        </IfModule>
        <IfModule worker.c>
        StartServers         2
        MaxClients         150
        MinSpareThreads     25
        MaxSpareThreads    150
        ThreadsPerChild     25
        MaxRequestsPerChild  0
        </IfModule>
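
    (A hedged observation rather than a definitive fix: with 10 clients polling every 20 seconds the request rate is tiny, so this config is unlikely to run out of workers; intermittent timeouts at this load more often point at the nightly table rebuild, slow MySQL queries, or memory pressure on a 2Gb box. Still, a sketch of how to keep pollers from pinning idle workers and how to watch worker usage while the errors occur, using standard Apache 2.x prefork directives:

        KeepAlive On
        KeepAliveTimeout 2

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then "curl http://localhost/server-status" during an incident shows whether all workers are busy. Correlating the timeout occurrences with the MySQL slow query log would confirm or rule out the database side.)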


  • Windows 7 - "A disk read error occured. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after the BIOS POST, a cursor blinks for about 5 seconds and then I get this error message:

        A disk read error occurred. Press Ctrl + Alt + Del to restart.

    I am able to go into the BIOS, but the Windows loader doesn't even start. This message is shown after my motherboard logo comes and goes.

    Symptoms: I did notice my system freezing for minutes at a time over the past two days. Also in the past two days, it stopped halfway through the Windows boot process, and I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message.

    Configuration:

        Operating System: Windows 7 Ultimate 32-bit only
        Hard disk: 1 physical disk, 80GB SATA
        Partitions: two (2), C: and D:
        File System: NTFS
        No drive encryption or compression is turned on.

    After searching the net, I have found people mentioning these possible causes:

        - Hard disk is physically failing
        - Corrupt MBR
        - Bad sector

    I am planning to buy a new hard disk, install Windows on it, and continue. But I need data from the old hard disk. The data I want is in the D: drive, outside any Windows user folder, and is not encrypted, compressed, or protected in any way. I think if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk?

    Update: I bought a new disk, installed Windows on it, and added the defective one as a slave. I was then able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore, though it seems to be working at the moment.


  • Exim 4 Virtual Domains and Catchall on Debian (Squeeze)

    - by parazuce
    Hello, I've been at it for about 4 hours now, searching as well as trying different tutorials. Here's my setup: I have 2 domains, both under my own DNS server (MX records set up as well). I have exim4 successfully running, and it is able to send messages from both of those domains. I have tested this using sendmail and manually setting the "From" attribute. Exim successfully delivers mail to users no matter which domain was specified.

    I'm fine with that, but I'm having an issue with editing virtual domains and adding custom delivery options (such as a catch-all). I've been searching for about 4 hours, and I can't find any up-to-date documentation on how to do this. The old method would be to add a line such as:

        domainlist local_domains = @:localhost:dsearch;/etc/exim4/virtual

    Once that line was added, I made a directory at /etc/exim4/virtual, then created files inside, such as example.com, which would then contain rules for delivery under that domain. This did not work, however. Searching further, I've found that exim no longer supports dsearch (I guess because they claim it never has?). This is where I'm stuck. I'm on a "split" configuration as well.
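
    (A sketch of a router-based approach that avoids dsearch entirely, assuming a split configuration where router files live under /etc/exim4/conf.d/router/. The idea is a redirect router that looks the local part up in a per-domain alias file, with lsearch* providing the "*" catch-all; file names and the domain list are placeholders:

        # /etc/exim4/conf.d/router/350_local_virtual
        virtual_aliases:
          driver = redirect
          domains = example.com : example.org
          data = ${lookup{$local_part}lsearch*{/etc/exim4/virtual/$domain}}

    and, say, /etc/exim4/virtual/example.com:

        info: bob
        *: catchall-mailbox

    Then run update-exim4.conf and restart exim4. One hedged note: dsearch is only available if the Exim binary was built with LOOKUP_DSEARCH, which may be why the dsearch-based recipes fail here; the lsearch lookup above is compiled in everywhere.)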


  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS, and I believe I don't have the correct DNS setup.

    I created a small VPS instance with the default Windows Server 2012 configuration, RDP'd in, and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen; instead my browser cannot connect. I attempted to navigate to the site by typing my virtual IP address into the browser and still cannot connect. I RDP'd back into the machine and turned off Windows Firewall. No change; still cannot navigate to my website.

    From within IIS I double-checked my binding. If I click "Browse *:80" I can bring up my website in IE at the http:// localhost address. If I click "Browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http:// 127.0.0.1, but not if I navigate to my virtual IP, nor can I view the page if I navigate to http:// mywebservername.cloudapp.net

    I'm thinking I must be fundamentally misunderstanding how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information. (Spaces added after the http:// so Server Fault doesn't try to render them as valid URLs.)
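
    (A hedged pointer, assuming this is a classic Azure IaaS VM of that era: traffic never reaches the guest unless an endpoint for the port is defined on the cloud service, so turning off the guest firewall alone is not enough. The endpoint can be added in the portal (VM > Endpoints > Add, TCP 80 -> 80) or, as a sketch with the Azure PowerShell module, with service and VM names as placeholders:

        Get-AzureVM -ServiceName "mywebservername" -Name "mywebservername" |
            Add-AzureEndpoint -Name "HTTP" -Protocol tcp -PublicPort 80 -LocalPort 80 |
            Update-AzureVM

    Once http:// mywebservername.cloudapp.net responds, the DNS side is just an A record to the virtual IP or a CNAME to mywebservername.cloudapp.net; the CNAME is often recommended because the A record only survives as long as the VIP does.)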


  • Nginx with PAM authentication through pam_script

    - by Envek
    Has anyone set up such a configuration? It doesn't work for me. I've installed nginx-extras on Ubuntu 12.04 (it's built with the PAM module), and written in the site config:

        location ^~ /restricted_place/ {
            auth_pam              "Please specify login and password from main_site";
            auth_pam_service_name "nginx";
        }

    Afterwards, in /etc/pam.d/nginx:

        auth required pam_script.so dir=/path/to/my/auth_scripts

    And wrote the simplest possible /path/to/my/auth_scripts/pam_script_auth (I've also tried writing complicated scripts):

        #!/bin/sh
        exit 0 # should allow anyone

    It doesn't work. The script is launched (I've written a fully functional script that executes successfully, checks credentials, writes to its own log, returns the correct exit code, and takes noticeably long to execute), but no access is granted, only rejected. In /var/log/nginx/error.log the following record appears:

        2012/09/13 10:44:42 [alert] 1666#0: waitpid() failed (10: No child processes)

    If I specify in /etc/pam.d/nginx:

        auth required pam_unix.so

    and grant the www-data user the right to read /etc/shadow, unix authorization works fine. But script auth doesn't work. I can't understand where the trouble is: in the nginx module, or in the pam_script module.
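
    (A hedged debugging step rather than a fix: testing the PAM stack outside nginx separates an nginx-module problem from a pam_script problem. The pamtester utility is packaged on Ubuntu, and running it as the worker user exercises the same permissions nginx would have; "someuser" is a placeholder:

        sudo apt-get install pamtester
        sudo -u www-data pamtester nginx someuser authenticate

    If this succeeds while nginx still rejects, the waitpid() alert suggests the nginx worker is reaping the script's child process before pam_script can collect its exit status, i.e. an nginx/auth_pam interaction; if pamtester also fails, the problem is in the PAM stack itself.)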


  • KVM Guest with NAT + Bridged networking

    - by Daniel
    I currently have a few KVM guests on a dedicated server with bridged networking (this works), and I can successfully ping the outside IPs I assign via ifconfig (in the guest). However, because I only have 5 public IPv4 addresses, I would like to port forward services like so:

        hostip:port -> kvm_guest:port

    UPDATE: I found out KVM comes with a "default" NAT interface, so I added the virtual NIC to the guest's virsh configuration and then configured it in the guest; it has the IP address 192.168.122.112. I can successfully ping 192.168.122.112 and access all ports on 192.168.122.112 from the KVM host, so I tried to port forward like so:

        iptables -t nat -I PREROUTING -p tcp --dport 5222 -j DNAT --to-destination 192.168.122.112:2521
        iptables -I FORWARD -m state -d 192.168.122.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT

    "telnet KVM_HOST_IP 5222" just hangs on "trying", while "telnet 192.168.122.112 2521" works.

        [root@node1 ~]# tcpdump port 5222
        tcpdump: WARNING: eth0: no IPv4 address assigned
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        23:43:47.216181 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445777813 ecr 0,sackOK,eol], length 0
        23:43:48.315747 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445778912 ecr 0,sackOK,eol], length 0
        23:43:49.415606 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445780010 ecr 0,sackOK,eol], length 0
        7 packets received by filter
        0 packets dropped by kernel

        [root@node1 ~]# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             192.168.122.0/24    state NEW,RELATED,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    All help is appreciated. Thanks.
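
    (A hedged diagnosis from the capture: the SYNs arrive on eth0 but nothing ever comes back, which is consistent with the DNATed packets not being forwarded or the reply path being broken. A sketch of things to check, in order:

        # 1. forwarding must be on
        sysctl -w net.ipv4.ip_forward=1

        # 2. the return traffic from the guest must also be allowed
        iptables -I FORWARD -s 192.168.122.0/24 -m state --state RELATED,ESTABLISHED -j ACCEPT

        # 3. watch the bridge side to see whether the DNATed SYN crosses virbr0
        tcpdump -ni virbr0 port 2521

    One libvirt-specific assumption worth knowing: libvirt inserts its own FORWARD rules for the default network, and rules appended after them can sit below a REJECT; inserting with -I, as above, keeps the accept rules on top.)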


  • Does WebDAV even work on IIS 7? I say nay

    - by FlavorScape
    I've tried every configuration from the top 10 Stack Overflow and Server Fault results for WebDAV 405 on IIS (for the verbs PROPFIND and PUT). I'm running Server 2008 SP2 and followed all the instructions here. I'm no stranger to configuring servers, but this has gotten nowhere after 8 hours.

    Confirmed system.webServer in applicationHost.config:

        <add name="WebDAV" path="*" verb="PROPFIND,PROPPATCH,MKCOL,PUT,COPY,DELETE,MOVE,LOCK,UNLOCK" modules="WebDAVModule" resourceType="Unspecified" requireAccess="None" />

        - Port 443 with basic auth: same issue.
        - Tried port 80 with Windows auth: broken (405).
        - Windows authentication: check.
        - Added authoring rules for the default site and application: check.
        - Not the firewall: check.
        - Added the "Desktop Experience" role feature.
        - Tried HTTPS with Basic Authentication on port 443: does not work.
        - No other services like SharePoint are running: check.
        - Confirmed the user has read/write NT-level permissions for the folder/virtual dir.
        - Tried "net use * http://localhost /user:MYDOMAIN\me myPass"; I get error 1920, and if I don't authenticate I get error 67.
        - Confirmed I'm not applying request filtering to WebDAV:

        <requestFiltering>
            <fileExtensions applyToWebDAV="false" />
            <verbs applyToWebDAV="false" />
            <hiddenSegments applyToWebDAV="false" />

    The error:

        405 - HTTP verb used to access this page is not allowed.
        The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.

    SHOULD I JUST GIVE UP? Other questions that helped none:

        405 - 'Method not Allowed' adding service hosted in IIS7
        webdav on iis7.5 - simply cannot make it work
        http://studentguru.gr/b/kingherc/archive/2009/11/21/webdav-for-iis-7-on-windows-server-2008-r2.aspx
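
    (A hedged sanity check, since a 405 on PROPFIND usually means the request never reached the WebDAV module at all: verify that both the module and the handler mapping are actually active on the site, not just present in applicationHost.config. A sketch with appcmd; the site name is assumed:

        %windir%\system32\inetsrv\appcmd.exe list modules | findstr /i webdav
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/webdav/authoring

    If the module is missing or authoring shows enabled="false", that would produce exactly this symptom. Another known conflict on IIS 7.0: a site-level handler such as a wildcard ASP.NET mapping can swallow PROPFIND before WebDAV ever sees it.)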


  • Configuring vsftpd with nginx on Ubuntu 12.04 LTS

    - by arby
    I've attempted to configure an nginx / vsftpd server on Ubuntu 12.04 LTS (via Amazon EC2) a couple of times now, but I seem to keep making a mistake along the way. Currently, when I try to connect to my ftp server it takes a minute or so to connect, and then every command I issue times out with an "operation failed" error. Aside from these issues, I'm not completely confident about the file ownership and permissions or the configuration settings, so I think it's best if I just re-install and re-configure correctly.

    I believe the nginx installation comes with a default user of www-data:www-data and web root directory ownership by root:root. Vsftpd, however, needs a user created with the same group as the nginx user (www-data) and the same home directory as the nginx web root (/usr/share/nginx/www), with g+w chmod permissions granted on that directory. The vsftpd.conf file should disable anonymous logins and enable local logins, file writing, and chrooting of local users. In my previous config, I had /bin/false set as the ftp user's shell and pam_shells.so disabled. I also had local_umask set to 0027.

    So, starting with a fresh EC2 instance, I've got:

        sudo apt-get install vsftpd
        sudo apt-get install nginx

    For the firewall I issued this command (not sure if necessary):

        sudo ufw allow ftp

    Which commands / config are recommended from here? I only need one ftp user that I can use to log in with my ftp client to modify the single nginx web domain, which will need PHP and SQL for WordPress.
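
    (A sketch of one workable sequence, with the username, port range, and paths as assumptions to adjust. A hedged note first: the minute-long connect followed by timed-out commands is the classic signature of a passive-mode port range that the EC2 security group doesn't allow, so the pasv lines and a matching security-group rule matter as much as the user setup.

        # user whose home is the nginx web root, in the www-data group
        sudo adduser --home /usr/share/nginx/www --shell /bin/false --ingroup www-data ftpuser
        sudo chown -R ftpuser:www-data /usr/share/nginx/www
        sudo chmod -R g+w /usr/share/nginx/www

    /etc/vsftpd.conf essentials:

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES
        local_umask=0027
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        pasv_address=<your elastic IP>

    Then allow TCP 21 and 40000-40100 in the EC2 security group, comment out pam_shells.so in /etc/pam.d/vsftpd (or add /bin/false to /etc/shells) so the no-shell user can log in, and restart vsftpd.)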


  • stunnel client uses improper SNI when talking to Apache

    - by Huckle
    I have stunnel listening on port 80 and acting as a client connecting to Apache listening on port 443. Configuration is below. What I'm finding is that if I attempt to connect to localhost:80 the connection is fine, but if I connect to 127.0.0.1:80 the request is rejected with a 400. When I check Apache's logs, they indicate that stunnel is using localhost as the SNI both times, while the HTTP request lists localhost in one case and 127.0.0.1 in the other. Is it possible to tell stunnel to either use whatever is in the HTTP request or to somehow configure two clients, each with different SNI values?

    stunnel.conf:

        debug = 7
        options = NO_SSLv2

        [xmlrpc-httpd]
        client = yes
        accept = 80
        connect = 443

    Apache error.log:

        [error] Hostname localhost provided via SNI and hostname 127.0.0.1 provided via HTTP are different

    Apache access.log:

        "GET / HTTP/1.1" 200 2138 "-" "Wget/1.13.4 (linux-gnu)"
        "GET / HTTP/1.1" 400 743 "-" "Wget/1.13.4 (linux-gnu)"

    wget:

        $ wget -d localhost
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: localhost
        Connection: Keep-Alive
        ---request end---

        $ wget -d 127.0.0.1
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: 127.0.0.1
        Connection: Keep-Alive
        ---request end---

    Edit, Apache config: nothing out of the ordinary, it's just a virtual host listening on 443:

        <VirtualHost *:443>
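
    (A hedged sketch: stunnel cannot see the HTTP Host header, so per-request SNI is out. What stunnel (4.45 and later) does offer is an "sni =" option in client mode, so one service per expected hostname works when the names are real hostnames, e.g.:

        [xmlrpc-althost]
        client = yes
        accept = 127.0.0.2:80
        connect = 443
        sni = althost.example

    For the literal 127.0.0.1 case there is no valid value to send, since SNI is defined for DNS names, not IP literals; Apache is flagging the mismatch between SNI "localhost" and Host "127.0.0.1". The pragmatic fix is probably on the Apache side, making both names resolve to the same virtual host so the consistency check passes, or simply always using "localhost" in clients.)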


  • Setup site folders on Apache and PHP

    - by Cobus Kruger
    I'm trying to set up my first Apache server on my Windows PC at home, and I have real trouble finding out which configuration settings go where. I downloaded and installed XAMPP, which seemed to get everything nicely set up, and I can see a working website on http://localhost. So far so good.

    The point of this is to develop a website, of course, and to make my life easier (irony?) I wanted to let the web site root point to my Eclipse project folder. So I opened httpd-vhosts.conf, uncommented a VirtualHost block, and changed its DocumentRoot to my local path. Now when I try to load http://localhost I get a 403 (Access denied) error. So where do I configure permissions for my folder? And is that all I need to let my site run from the specified folder, or am I going to have to clear another hurdle?

    Update: I tried to simplify things a little, so I reinstalled XAMPP and got back to a working http://localhost. Then I confirmed that httpd-vhosts.conf is included in httpd.conf and made the following changes to httpd-vhosts.conf: uncommented the line NameVirtualHost *:80, added the virtual host shown below, restarted Apache, and saw the expected page on http://localhost.

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/"
            ServerName localhost
            ErrorLog "logs/dummy-host2.localhost-error.log"
            CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    I then created a new folder named C:\testweb, added an index.html file, and changed the DocumentRoot line shown above. For all intents and purposes I would then expect the two configurations to be equivalent, but this setup gives me an error 403. Even though the C:\testweb folder already had the same permissions as the C:\xampp\htdocs folder, I went further and gave the Everyone group full control of C:\testweb, and got exactly the same problem. So what did I miss?
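
    (A hedged answer sketch: the 403 here is almost always Apache's own access control, not Windows permissions. XAMPP's httpd.conf grants access to C:/xampp/htdocs via a <Directory> block, and any DocumentRoot outside that tree needs its own. Assuming an Apache 2.2-era XAMPP, add a block like this inside the vhost:

        <VirtualHost *:80>
            DocumentRoot "C:/testweb"
            ServerName localhost
            <Directory "C:/testweb">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    On an Apache 2.4-based XAMPP the last two lines become "Require all granted". Restart Apache after the change.)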


  • Arch linux - strange behaviour after installing fglrx

    - by kosto
    I have a problem with drivers on Arch Linux. I installed Catalyst through the unofficial catalyst repo, as the wiki says:

        pacman -S catalyst catalyst-utils
        aticonfig --initial

    After this operation I rebooted the system. KDM loaded successfully, but when I tried to switch to a console (Ctrl+Alt+F1/F2/F3) I saw only some strange dots, as if the pixels from the text had been scattered across the whole screen. I was able to go back to KDM and enter my account details, though. This gave me a hang just before KDE loaded. Here's a video where I show the above actions: http://glothriel.org/arch/arch_problem.ogg Does anybody know what caused the problem? I can still chroot in to fix some issues. Thanks for your interest. The same thing happens on GNOME/GDM; this is my second try at installing Catalyst on Arch. The open drivers drain the battery twice as fast.

    EDIT: OK, I found a solution, so I'm posting it in case someone else shares my problem. Catalyst does not support KMS, so you need to disable it from GRUB. You must know where your /etc and /boot partitions are mounted; if you have only one partition for /, it's even simpler. Mount / on /mnt:

        mount /dev/sdaX /mnt    # where X is the number of the partition with your / on it
        arch-chroot /mnt
        nano /etc/default/grub

    and add the line:

        GRUB_CMDLINE_LINUX="nomodeset"

    Save and quit, then run (this will delete your Windows GRUB configuration):

        grub-mkconfig -o /boot/grub/grub.cfg
        exit
        umount /mnt
        reboot


  • Apache Derby running within Tomcat causes shutdown issues

    - by Luke
    I have set up Derby Network Server to be hosted within a Tomcat environment. This works great. However, when I shut down Tomcat I get the following errors:

        04/01/2011 10:41:41 AM org.apache.catalina.core.StandardService stop
        INFO: Stopping service Catalina
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JBDC driver [org.apache.derby.jdbc.ClientDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JBDC driver [org.apache.derby.jdbc.AutoloadedDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [derby.NetworkServerStarter] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [NetworkServerThread_4] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_5] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_13] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.coyote.http11.Http11Protocol destroy
        INFO: Stopping Coyote HTTP/1.1 on http-8080

    I'm currently starting and stopping Tomcat with the following commands:

        ./catalina run
        ./catalina stop

    Is there a better way to shut down Tomcat with Derby, or can this be solved by a configuration change?
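
    (A hedged sketch: the warnings mean the Derby network server and its JDBC driver outlive the webapp. One approach that avoids code changes is to stop Derby explicitly before stopping Tomcat, using Derby's own command line; DERBY_HOME and the port are assumptions:

        # ask the network server to shut down cleanly first
        java -jar "$DERBY_HOME/lib/derbyrun.jar" server shutdown -p 1527

        ./catalina stop

    The more thorough fix is a ServletContextListener in the hosting webapp whose contextDestroyed() calls NetworkServerControl.shutdown() and deregisters the JDBC driver, so Derby's threads die with the webapp; the command-line version above is just the low-effort equivalent.)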


  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them.

    The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam: it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database!

    So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected that the "dspam" configuration files would have an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing.

    When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
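
    (A hedged sketch: dspam.conf has a Trust directive that lets named non-root users pass --user, which may make the root-avoidance constraint moot. Assuming "brandon" is added as a trusted user:

        # in dspam.conf
        Trust brandon

        # then train against the fully qualified identity
        for file in Mail/Inbox/*; do dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"; done
        for file in Mail/spam/*;  do dspam --user brandon@mydomain --class=spam     --source=corpus < "$file"; done

    Whether the running dspamd really uses "brandon@mydomain" verbatim is worth confirming against the PostgreSQL users table or dspam_stats output before committing to a long training run.)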


  • VMware virtual machine network devices malfunctioning

    - by sheepz
    I'm running Ubuntu 10.04 LTS and VMware Workstation 7.0.1 build-227600. The virtual machine I'm running in VMware is a custom distribution built on Debian Linux version 3.1. I'm still pretty much a beginner at UNIX administration. After having messed around with the VM (I changed only the name of the folder the .vmx was in, renamed the .vmx and other .v* files accordingly, and updated the configuration in the .vmx file to match), the network devices on the virtual machine do not work anymore. The virtual machine is used for securely sending messages.

    As far as I know, a perl script called proxy-gen-ifalias is responsible for properly setting up the two virtual network devices eth0 and eth1. The virtual machine comes with a GUI in which I have set up two ethernet network devices, one internal, the other external. Now, after having messed around with this, the UI gives me this error message:

        perl proxy-gen-ifalias eth0 /etc/modprobe.d/alias-eth0
        /sbin/update-modules
        perl proxy-gen-ifalias eth1 /etc/modprobe.d/alias-eth1
        /sbin/update-modules
        ifdown eth0
        ifdown: interface eth0 not configured
        ifdown eth1
        ifdown: interface eth1 not configured
        perl proxy-gen-netcfg /etc/network/interfaces
        ifup eth0
        SICCSIFADDR: No such device
        eth0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        eth0: ERROR while getting interface flags: No such device
        Failed to bring up eth0.
        ifconfig eth0
        eth0: error fetching interface information: Device not found
        make: *** [/etc/network/interfaces] Error 1

    Here are the contents of the two perl files referred to in the message: paste.pocoo.org/show/2AMzAYhoCRZqlGY7wUFk/ proxy-gen-netcfg
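
    (A hedged guess at the cause: renaming or moving a VM and telling Workstation it was "copied" makes VMware generate new MAC addresses, and a guest that has the old MACs pinned (via module aliases or hotplug rules) then sees no eth0/eth1 at all, which matches "No such device". A sketch of things to check:

        # in the guest: do any interfaces exist under a different name?
        ifconfig -a
        dmesg | grep -i eth

        # on the host: what MACs is the VM using now?
        grep -i "generatedAddress" *.vmx

    If the MACs changed, the options are to answer "I moved it" next time (which keeps the old MACs), to set the old MAC back in the .vmx by hand, or to update the guest's interface configuration to the new MACs.)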


  • Virtual bridged networking with VLAN, could not ping

    - by v.yegy
    I need to build a virtual network with VLANs between two virtual hosts, which can be LXC containers or VirtualBox guests (Ubuntu or Windows XP). I tried LXC and VirtualBox with Ubuntu and found it difficult to make work even without VLANs, but was successful with VirtualBox and XP:

        vbox-xp1 --- br1 ---------------- br2 --- vbox-xp2

    The config is:

        brctl addbr br1; brctl addbr br2
        ifconfig br1 up; ifconfig br2 up
        stp br1 off; stp br2 off
        ip link add name br1-br2-l0 type veth peer name br1-br2-l1
        sudo brctl addif br1 br1-br2-l0
        sudo brctl addif br2 br1-br2-l1

    The VirtualBox XP guests 1 and 2 use bridged networking on br1 and br2 respectively. The adapter is an Intel PRO/1000 MT Server, with the driver installed in the guests. I configured IPs, and the two hosts pinged!

    VLAN config:

        ip link add link br1 name br1-2.5 type vlan id 5
        brctl addif br2 br1-2.5

    I then created VLAN 5 in XP guests 1 and 2 and assigned IP addresses. With this config, ping does not get through. A wireshark trace on interface br1-br2-l1 / br1-2.5 shows that one ping results in ~240 ping packets, each growing by 4 bytes (the first one being correct, at 60 bytes); the ping does not reach the other host, and I can see the MAC is not learnt [arp -a].

    If br1-2.5 is not configured, I see untagged packets on br1-br2-l1/l0, but they still do not reach the other host, as the MAC is not learnt. If br1-br2-l0/l1 is brought down, even with br1-2.5 up, I see no packets at all.

    I tried ebtables, but still could not come up with a correct working config. If anyone here knows of a suitable configuration, please let me know. I need to build a network of switches; it seems I have a very long way to go. Sorry for a very long question. Thanks and regards, vy
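
    (A hedged reading of the 4-bytes-per-packet growth: the setup has two parallel paths between br1 and br2 (the veth pair and the br1-2.5 VLAN device), i.e. a bridging loop with STP off, and each pass through the VLAN device appears to add another 4-byte 802.1Q tag; the ~240 copies would be the frame circulating until it exceeds the MTU. A sketch of the simplest consistent topology, keeping a single link:

        # break the loop: the veth pair alone can carry tagged frames
        brctl delif br2 br1-2.5
        ip link del br1-2.5

    A Linux bridge without VLAN filtering forwards 802.1Q-tagged frames as ordinary traffic, so tags applied by the guests' Intel drivers should survive the trip across the plain veth "trunk" with no VLAN device needed on the host at all. Untested on this exact setup, but it removes the loop that the growing packets point to.)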


  • PXE boot -- kernel not found on TFTP server

    - by user70523
    I followed this guide for PXE boot: http://www.howtoforge.com/setting-up-a-pxe-install-server-on-ubuntu-9.10-p3 I was able to ping the client from the server, and when I booted the client it got its IP address from the server. But later I got this error:

        PXELinux 3.82 2009-06-09 . . . [other information]
        !PXE Entry point found (we hope) at 9D3B:0109 via plan A
        UNDI code segment at 9D3B len 16C2
        UNDI data segment at 933B len A000
        Getting cached packet 01 02 03 . . . [other information]
        TFTP prefix:
        Trying to load: pxelinux.cfg/ec5db4c0-74fe-d511-b9e7-3d9235afe5a1
        Trying to load: pxelinux.cfg/01-00-17-31-b6-5e-a8
        Trying to load: pxelinux.cfg/0A64491E
        Trying to load: pxelinux.cfg/0A64491
        Trying to load: pxelinux.cfg/0A6449
        Trying to load: pxelinux.cfg/0A644
        Trying to load: pxelinux.cfg/0A64
        Trying to load: pxelinux.cfg/0A6
        Trying to load: pxelinux.cfg/0A
        Trying to load: pxelinux.cfg/0
        Trying to load: pxelinux.cfg/default
        Unable to locate configuration file
        Boot failed: press a key to retry or wait for reset

    I have put all the files mentioned in the guide in tftpboot. Can anyone explain what the problem could be? Thanks in advance.
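
    (A hedged checklist: the client clearly reaches TFTP, since it fetched pxelinux.0 and is requesting config names, so the failure is just that pxelinux.cfg/default is missing, unreadable, or in the wrong directory relative to the TFTP root. A sketch, assuming the root is /var/lib/tftpboot and Ubuntu installer paths:

        mkdir -p /var/lib/tftpboot/pxelinux.cfg
        cat > /var/lib/tftpboot/pxelinux.cfg/default <<'EOF'
        DEFAULT install
        LABEL install
          KERNEL ubuntu-installer/i386/linux
          APPEND initrd=ubuntu-installer/i386/initrd.gz
        EOF
        chmod -R a+r /var/lib/tftpboot

        # confirm from another machine that TFTP actually serves it
        tftp <server-ip> -c get pxelinux.cfg/default

    Note the empty "TFTP prefix:" line in the output: pxelinux.cfg/ is resolved relative to where pxelinux.0 was loaded from, so if pxelinux.0 lives in a subdirectory, pxelinux.cfg must live there too.)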


  • Failed to open the Parallels networking module

    - by user49204
    I'm running Parallels Desktop 7 on OS X Lion and encounter the issue in the subject every time I try to launch a Parallels VM. The error message contains a hint to restore the network configuration defaults, which does not help. As advised by some forums, I ran the following script:

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hypervisor.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_hid_hook.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_usb_connect.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"
        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_vnic.kext"

    The output of:

        sudo kextutil "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext"

    is:

        Diagnostics for /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext:
        Warnings:
            The booter does not recognize symbolic links; confirm these files/directories aren't needed for startup:
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeDirectory
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeRequirements
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeResources
                /Library/Parallels/Parallels Service.app/Contents/Kexts/10.6/prl_netbridge.kext/Contents/CodeSignature
        Dependency Resolution Failures:
            No kexts found for these libraries:
                com.parallels.kext.prl_hypervisor

    I've noticed that prl_netbridge is not being loaded (when I try to unload it, I'm notified that it is not loaded). Am I doing something wrong? What could be the reason for such behaviour?
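
    (A hedged note on the dependency failure: kextutil resolves library kexts against the system extension folders, so a dependency that lives inside the Parallels app bundle has to be supplied explicitly with -d, or loaded first. A sketch:

        cd "/Library/Parallels/Parallels Service.app/Contents/Kexts/10.6"
        sudo kextload prl_hypervisor.kext
        sudo kextutil -d prl_hypervisor.kext prl_netbridge.kext

    If loading still fails, /var/log/system.log right after the attempt usually names the real refusal; persistent failures after an OS X update are commonly cured by reinstalling Parallels so its kexts get reinstalled cleanly.)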


  • What does this ssh error mean?

    - by kevin
    This is my last resort; I've been trying to figure out the problem here for hours. Here's the deal: I have copied my private key from machine #1 onto machine #2. Machine #1 is able to connect via ssh to a server with my public key just fine, but machine #2 gives the following output when trying to connect to the same server:

        $ ssh -vvv -i /home/kevin/.ssh/kev_rsa [email protected] -p 22312
        OpenSSH_5.3p1 Debian-3ubuntu6, OpenSSL 0.9.8k 25 Mar 2009
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.244 [192.168.1.244] port 22312.
        debug1: Connection established.
        debug3: Not a RSA1 key file /home/kevin/.ssh/kev_rsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        ...
        Permission denied (publickey).

    There is obviously more debug output, which I have omitted; I can provide it upon request. I am convinced, however, that it doesn't like my private key file. I also suspect it has to do with how I copied it from machine #1 to machine #2: I copy/pasted the text of the private key onto a flash drive. This might be the problem; however, when I duplicated this method on another working private key file and ran a diff between the original and the copy/pasted one, they were identical.

    I've been struggling with this. If I could just get a little more information on why it doesn't like my key, I could fix it, I'm sure. Does anyone have any ideas? Is there some metadata somewhere that tells ssh that a file is in fact an RSA key?
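
    (A hedged note: the "Not a RSA1 key file" and key_read lines are normal debug noise; ssh emits them for any RSA2 key, so they don't by themselves indicate a corrupt file. Two quick checks that usually localize this:

        # does the private key parse? prints the public half on success
        ssh-keygen -y -f /home/kevin/.ssh/kev_rsa

        # private keys must not be group/world readable
        chmod 600 /home/kevin/.ssh/kev_rsa

    If ssh-keygen -y prints a key, compare it with the entry in the server's authorized_keys; if it errors, the copy/paste mangled the file (line endings or a missing trailing newline are typical). The server's auth log line for the rejected attempt would also say exactly why it refused.)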

