Search Results

Search found 29284 results on 1172 pages for 'weblogic 10 x'.


  • Cisco ASA site-to-site vpn not initiating phase 1 (not sending udp 500 packets)

    - by Sean Steadman
    I am hoping someone here can help me with my problem. I am trying to set up an IPSEC site-to-site VPN between two Cisco ASA 5520s in GNS3 (both running 8.4.2). I have been unsuccessful in getting the tunnel up, and it appears neither ASA is sending any phase 1 or phase 2 packets (tested with Wireshark: NO UDP 500 packets are seen at all). show ipsec sa and such show nothing on either side:

      CALIFORNIA(config)# show ipsec sa
      There are no ipsec sas
      FLA-ASA# show ipsec sa
      There are no ipsec sas

    I will attach both configurations as two pastebin links to keep this post a bit cleaner. Essentially the California side has 172.20.1.0/24 and the Florida side has 10.10.10.0/24.

      California ASA config: http://pastebin.com/v0pngYzF
      Florida ASA config: http://pastebin.com/E2geybta

    Please let me know if there is any other vital information that could help. I have gotten IPSEC tunnels to work using Openswan (Linux) and Cisco routers, but cannot for the life of me get ASA IPSEC tunnels to work. The ASDM is out of the question; I only use the CLI. Thanks for any useful help!
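
    A common reason an ASA never emits the first UDP 500 packet is that IKEv1 was never enabled on the outside interface, or the crypto map was never applied to it. For reference, a minimal 8.4-style IKEv1 sketch for the California side is below; the peer address, object names, and pre-shared key are placeholders, not taken from the pastebin configs:

      crypto ikev1 policy 10
       authentication pre-share
       encryption aes-256
       hash sha
       group 2
       lifetime 86400
      crypto ikev1 enable outside

      access-list VPN-ACL extended permit ip 172.20.1.0 255.255.255.0 10.10.10.0 255.255.255.0
      crypto ipsec ikev1 transform-set TSET esp-aes-256 esp-sha-hmac
      crypto map CMAP 10 match address VPN-ACL
      crypto map CMAP 10 set peer 203.0.113.2
      crypto map CMAP 10 set ikev1 transform-set TSET
      crypto map CMAP interface outside

      tunnel-group 203.0.113.2 type ipsec-l2l
      tunnel-group 203.0.113.2 ipsec-attributes
       ikev1 pre-shared-key PLACEHOLDERKEY

    Also remember that an ASA only initiates phase 1 once interesting traffic matches the crypto ACL, for example a ping sourced from an inside host, so an idle lab tunnel will legitimately send nothing.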

    Read the article

  • MX records set up

    - by andrei.troll
    I just want to know how other people set up their MX entries for mail accounts used with Google Apps. I work at a local web-hosting firm and we get a lot of tickets from clients who want these settings set up. I usually set them up something like this:

      example.com. 14400 IN MX 10 ALT1.ASPMX.L.GOOGLE.COM.
      example.com. 14400 IN MX 10 ASPMX4.GOOGLEMAIL.COM.
      example.com. 14400 IN MX 15 ASPMX5.GOOGLEMAIL.COM.
      example.com. 14400 IN MX 15 ASPMX2.GOOGLEMAIL.COM.
      example.com. 14400 IN MX 30 ASPMX3.GOOGLEMAIL.COM.

    I see another (rival) firm setting up way more MX records, roughly 10-15 entries. Am I doing something wrong? Is more better in this case? Is there a secret that I'm not in on?
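
    For comparison, the set Google itself documented for Google Apps at the time was seven records with a single lowest-preference primary; a zone-file sketch from memory, so verify it against Google's current documentation:

      example.com. 14400 IN MX 1  ASPMX.L.GOOGLE.COM.
      example.com. 14400 IN MX 5  ALT1.ASPMX.L.GOOGLE.COM.
      example.com. 14400 IN MX 5  ALT2.ASPMX.L.GOOGLE.COM.
      example.com. 14400 IN MX 10 ASPMX2.GOOGLEMAIL.COM.
      example.com. 14400 IN MX 10 ASPMX3.GOOGLEMAIL.COM.
      example.com. 14400 IN MX 10 ASPMX4.GOOGLEMAIL.COM.
      example.com. 14400 IN MX 10 ASPMX5.GOOGLEMAIL.COM.

    Piling on more records than this buys nothing: delivery simply tries the lowest preference first and falls back through the rest.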

    Read the article

  • How do I count the times each number appears in columns of numbers?

    - by Andy C.
    I am sure this must be easy, but I am inexperienced. The best way to think of my problem is as trying to sort and then count lottery numbers. To stay simple, let's use a Pick 3 game and look at 10 drawings, splitting each drawn number into a separate column:

      DATE  BALL#1  BALL#2  BALL#3
      3/1   1       3       5
      3/2   3       7       8
      3/3   2       2       1
      3/4   5       7       6
      3/5   2       3       1
      3/6   0       5       9
      3/7   3       7       0
      3/8   6       8       4
      3/9   2       4       3
      3/10  7       1       2

    I would like to build formulas into cells that tell me how many times each number appeared overall, and how many times it appeared in each position. Like this (using the above example):

      Number  Overall Count  Ball#1 Count  Ball#2 Count  Ball#3 Count
      0       2              1             0             1
      1       4              1             1             2

    (That is, the number zero appears twice overall: once as the first number drawn, zero times as the middle ball, and once as the third ball. Likewise, the number 1 was drawn four times in our 10-day period: it was the first ball once, the second ball once, and the third ball twice.) And so on. All help appreciated. I have access to Excel and Microsoft Works, or a Google Docs way to handle this would also work. Thanks for any help.
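
    COUNTIF covers both counts directly. A sketch, assuming the drawn numbers sit in B2:D11 and the digits 0 through 9 are listed down column A starting at A14 (the cell addresses are assumptions):

      =COUNTIF($B$2:$D$11, $A14)     overall count for the digit in A14; fill down for 0-9
      =COUNTIF(B$2:B$11, $A14)       count for BALL#1; fill right for BALL#2 and BALL#3

    The same formulas work unchanged in Google Docs spreadsheets.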

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several RAID 1s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is whether to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s and then stripe across them using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.
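
    For reference, the two layouts being compared look like this with mdadm and LVM; the device names are placeholders, a sketch rather than a recommendation:

      # one RAID 10 across all 14 data disks
      mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[c-p]1

      # vs. seven RAID 1 pairs striped together with LVM
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
      # ...repeat for md2 through md7...
      pvcreate /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7
      vgcreate dw /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7
      lvcreate -i 7 -I 512 -l 100%FREE -n data dw    # -i 7 stripes across all seven mirrors

    The second form trades a little management overhead for the ability to lose any one whole pair without touching the others' rebuild windows.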

    Read the article

  • scponly worked but didn't chroot the home folder; the user can still browse the entire server

    - by Mint
    So I followed the "Chroot and Debian" tutorial at http://sublimation.org/scponly/wiki/index.php/FAQ. When I log into the user "upload" via ssh I have no access to the command line (this is what I wanted). But when I SFTP in as the upload user I can still see all the root files (/); it didn't chroot me to just /home/upload. What's going on? …. So I added this to the end of my /etc/ssh/sshd_config file, then did a restart:

      Subsystem sftp internal-sftp
      UsePAM yes
      Match User upload
          ChrootDirectory /home/upload
          AllowTCPForwarding no
          X11Forwarding no
          ForceCommand internal-sftp

    Now when I log in with sftp I can only see my upload folder (this is what I want), but scp no longer works :P SCP accepts my password and then:

      debug1: Next authentication method: password
      upload@10.10.10.2's password:
      debug1: Authentication succeeded (password).
      debug1: channel 0: new [client-session]
      debug1: Requesting no-more-sessions@openssh.com
      debug1: Entering interactive session.
      debug1: Sending environment.
      debug1: Sending env LANG = en_NZ.UTF-8
      debug1: Sending command: scp -v -t /test

    It hangs on that last debug message. Any help would be greatly appreciated. Note: running Debian Lenny.
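
    That hang is expected behaviour rather than a bug: ForceCommand internal-sftp replaces the remote scp server the client is waiting for, so SFTP is the only protocol available inside the chroot. Under that assumption, the transfer works as soon as the client speaks SFTP instead; a sketch:

      sftp upload@10.10.10.2                                  # interactive session
      echo "put test /test" | sftp -b - upload@10.10.10.2     # scripted one-shot, scp-style

    Newer OpenSSH clients also grew an scp mode that rides on SFTP, but on Lenny-era tooling, switching the client command is the practical fix.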

    Read the article

  • Windows Not Honoring DHCP Scope

    - by jerhinesmith
    Please bear with me, as I'm not a networking person by trade. Our current configuration at work includes two Windows servers serving as DHCP/Active Directory servers (if that makes sense), one replicating from the other. On both machines, DNS resolution is set up as:

      Main Windows box (10.x.x.x address)
      Public IP address (for Verizon)
      Public IP address (secondary Verizon)
      Secondary Windows box (10.x.x.x address)

    Assuming our domain is foo.com, we maintain the foo.com website on a hosted VPS with its own IP address. The problem is that even though bar.foo.com is an internal server and is defined in DNS on the primary Windows machine, when I ping bar or even bar.foo.com it resolves to the hosted IP address instead of the 10.x.x.x address. I tried taking both public IP addresses out of the DHCP scope, and that seemed to work, but it completely slowed down access to any external sites, so that wasn't acceptable. I also tried adding the two Windows machines as the DNS servers on my desktop. That too worked, but I'd rather not have everyone enter their DNS servers by hand, as the above setup should theoretically be working. Is there anything I could check to see why pinging bar.foo.com isn't resolving to the DNS entry on the Windows machines? Here's a summary of the ping results, if they help:

      Pinging from servers with static IP: bar.foo.com resolves to the correct IP address
      Pinging from Linux machines not joined to the domain: bar.foo.com resolves to the correct IP address
      Pinging from users' desktop machines, joined to the domain but with dynamic IP: bar.foo.com resolves to the incorrect IP address

    This is driving me crazy!
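
    The pattern (only DHCP clients resolve wrongly) suggests the scope is handing out the public Verizon resolvers alongside the internal ones, and Windows happily uses whichever answers first; the public resolvers only know the hosted record. Two quick checks from an affected desktop, as a sketch (the internal server address is hypothetical):

      ipconfig /all                     # which DNS servers did DHCP hand out, and in what order?
      nslookup bar.foo.com              # what the default resolver returns
      nslookup bar.foo.com 10.0.0.10    # ask the internal DNS server directly

    If the direct query returns the 10.x address, the usual fix is DHCP option 006: list only the internal DNS servers there, and let those servers forward to Verizon for external names instead of giving clients the public resolvers directly.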

    Read the article

  • MS Excel and Access - which is better for reports?

    - by Nat
    Where I work, staff have just started (1 October) to use a basic table in Excel to record sales; it has about 10 columns (name, client, renewed, discount, paid, etc.). I record the data (total sold etc.) every hour and email it to the manager. Each member of staff has their own file on the network which they use constantly during the day (e.g. John 08-10.xlsx, John 09-10.xlsx) and they have been told to save the file after completing a row of client data. I can open each file (in read-only mode) to update the report, but I am sure there must be a way to auto-update from their worksheets in something like real time. I can link worksheets and workbooks to my main workbook, but only manually. Does anyone have suggestions on how to do this in Excel? Or would Access allow me to build a report showing the sales total for that hour without the staff closing the file or clicking save every few minutes? We use Office 2010. Thanks.
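
    On the Excel side, an external reference can pull from each staff file without opening it; the link refreshes whenever your report workbook recalculates or reopens. A sketch, with the path, sheet name, and range all assumed:

      ='\\server\sales\[John 08-10.xlsx]Sheet1'!$J$2:$J$50

    Wrap it in SUM for the hourly totals. The catch is that external links only ever see the last saved state of each file, which is exactly why an Access table (or a shared workbook) suits concurrent entry better than a folder of per-person spreadsheets.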

    Read the article

  • Openvpn - stuck on Connecting

    - by user224277
    I've got a problem with an OpenVPN server. Every time I try to connect to the VPN, I get a window with login and password boxes, so I type my login (= the certificate Common Name, user1) and the challenge password from the client certificate. Logs:

      Jun 7 17:03:05 test ovpn-openvpn[5618]: Authenticate/Decrypt packet error: packet HMAC authentication failed
      Jun 7 17:03:05 test ovpn-openvpn[5618]: TLS Error: incoming packet authentication failed from [AF_INET]80.**.**.***:54179

    client.ovpn:

      client
      #dev tap
      dev tun
      #proto tcp
      proto udp
      remote [Server IP] 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca ca.crt
      cert user1.crt
      key user1.key
      <tls-auth>
      -----BEGIN OpenVPN Static key V1-----
      d1e0...
      -----END OpenVPN Static key V1-----
      </tls-auth>
      ns-cert-type server
      cipher AES-256-CBC
      comp-lzo yes
      verb 0
      mute 20

    My openvpn.conf:

      port 1194
      #proto tcp
      proto udp
      #dev tap
      dev tun
      #dev-node MyTap
      ca /etc/openvpn/keys/ca.crt
      cert /etc/openvpn/keys/VPN.crt
      key /etc/openvpn/keys/VPN.key
      dh /etc/openvpn/keys/dh2048.pem
      server 10.8.0.0 255.255.255.0
      ifconfig-pool-persist ipp.txt
      #push "route 192.168.5.0 255.255.255.0"
      #push "route 192.168.10.0 255.255.255.0"
      keepalive 10 120
      tls-auth /etc/openvpn/keys/ta.key 0
      #cipher BF-CBC        # Blowfish
      #cipher AES-128-CBC   # AES
      #cipher DES-EDE3-CBC  # Triple-DES
      comp-lzo
      #max-clients 100
      #user nobody
      #group nogroup
      persist-key
      persist-tun
      status openvpn-status.log
      #log openvpn.log
      #log-append openvpn.log
      verb 3

    sysctl:

      net.ipv4.ip_forward=1
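
    "packet HMAC authentication failed" is the classic signature of a tls-auth mismatch. The server declares its key direction with tls-auth ta.key 0, but the client's inline <tls-auth> block carries no direction at all, so both ends key the HMAC the same way. Assuming that is the cause here, the client needs the complementary direction declared separately; a sketch of the changed client lines:

      # client.ovpn: an inline tls-auth block needs an explicit key-direction
      key-direction 1
      <tls-auth>
      -----BEGIN OpenVPN Static key V1-----
      ...
      -----END OpenVPN Static key V1-----
      </tls-auth>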

    Read the article

  • How to display/define mirror/striping pairs with mdadm

    - by Chris
    I want to make a standard Linux software RAID 10 over 4 HDDs. The server has 4 HDDs, 2 pairs from different vendors, in order to avoid batch problems. I want each mirror to span the two vendors, and then the stripe to run over the mirror pairs. I could do that by manually creating RAID 1/0, but mdadm supports RAID level 10 directly. I just can't figure out how the RAID 10 is then laid out and how the data is distributed.

      mdadm --detail /dev/md10
      /dev/md10:
              Version : 1.2
        Creation Time : Wed May 28 11:06:23 2014
           Raid Level : raid10
           Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
        Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
          Update Time : Wed May 28 11:06:23 2014
                State : clean, resyncing (PENDING)
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0
               Layout : near=2
           Chunk Size : 512K
                 Name : pdwhost:10 (local to host pdwhost)
                 UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
               Events : 0

          Number  Major  Minor  RaidDevice  State
             0    8      1      0           active sync  /dev/sda1
             1    8      81     1           active sync  /dev/sdf1
             2    8      97     2           active sync  /dev/sdg1
             3    8      113    3           active sync  /dev/sdh1

    That does not really give any information about the pairing. How it should be:

      RAID 1 / mirror over /dev/sda1 + /dev/sdf1, and over /dev/sdg1 + /dev/sdh1
      RAID 0 over the two RAID 1 pairs

    Is it possible to do that with the built-in level=10, and how can I see which pairs are mirrored? Thanks a lot for your help.
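
    With the near=2 layout shown above, md keeps the two copies of each chunk on adjacent devices in RaidDevice order, so the mirror pairing follows the order the devices were listed at creation time: RaidDevice 0/1 form one pair and 2/3 the other. Assuming one disk of each vendor alternates in that order, the desired pairing falls out directly; a sketch:

      # sda1+sdf1 become one mirror, sdg1+sdh1 the other (near=2 pairs adjacent devices)
      mdadm --create /dev/md10 --level=10 --layout=n2 --raid-devices=4 \
          /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1

    So the --detail output does encode the answer, just implicitly: read the RaidDevice column two at a time.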

    Read the article

  • Trouble getting started with the STEALTH monitoring package

    - by dlanced
    Is anyone here familiar with the Linux-based STEALTH package (for monitoring the filesystem integrity of client systems)? I'm trying to get started with a very simple configuration, but I'm running into trouble (this is running under Ubuntu 14.04):

      Config line `USE BASE/root/stealth/10.0.0.79' invalid
      STEALTH (2.11.02) started at Fri, 30 May 2014 15:25:00 +0000
      Program terminated due to non-zero exit value for
      -type f -exec /usr/bin/sha1sum {} \; (EOC Fri May 30 15:25:00 2014 127)

    Stealth is creating a binary tmp file in the Stealth server root and generating a "report" file in the start directory, but not much else. Regarding the "USE BASE...invalid" error, and just to be sure, I manually created the directories in /root, but it didn't help. And, by the way, I am running stealth with sudo. Everything seems to be configured correctly: I'm able to ssh into root@client from the stealth machine without a password. Here's my "policy" file (I've removed the email directives just for simplicity):

      DEFINE SSHCMD /usr/bin/ssh [email protected] -T -q exec /bin/bash --noprofile
      DEFINE EXECSHA1 -xdev -perm +u+s,g+s ( -user root -or -group root ) \
          -type f -exec /usr/bin/sha1sum {} \;
      USE BASE/root/stealth/10.0.0.79
      USE SSH ${SSHCMD}
      USE DD /bin/dd
      USE DIFF /usr/bin/diff
      USE PIDFILE /var/run/stealth-
      USE REPORT report
      USE SH /bin/sh
      GET /usr/bin/sha1sum /root/tmp
      LABEL \nchecking the client's /usr/bin/find program
      CHECK LOG = remote/binfind /usr/bin/sha1sum /usr/bin/find
      LABEL \nsuid/sgid/executable files uid or gid root on the / partition
      CHECK LOG = remote/setuidgid /usr/bin/find / ${EXECSHA1}
      LABEL \nconfiguration files under /etc
      CHECK LOG = remote/etcfiles \
          /usr/bin/find /etc -type f -not -perm /6111 \
          -not -regex "/etc/(adjtime\|mtab)" \
          -exec /usr/bin/sha1sum {} \;

    Any ideas? Thanks.
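
    The error message quotes the offending line verbatim: USE BASE/root/stealth/10.0.0.79 has no whitespace between the keyword and its value, so the parser sees an unknown keyword BASE/root/... and rejects the line. Assuming that parse rule, the fix is a single space:

      USE BASE /root/stealth/10.0.0.79

    That would also plausibly explain the follow-on failure: with no valid BASE, the CHECK command loses its prefix and a bare "-type f -exec ..." is executed, which exits non-zero (status 127, command not found).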

    Read the article

  • krenew command not working : Permission Denied

    - by prathmesh.kallurkar
    I am using a Linux server to run my simulations. The login and the filesystem of the server are protected by Kerberos; the filesystem is served over NFS. Since my simulations take a long time to run, my ssh sessions used to hang regularly, so I started running my simulations in byobu (similar to screen). In order to make sure that my Kerberos session remains active, I use the krenew command. I have put the following in my .bash_profile (I am sure it is called on every login):

      killall -9 krenew 2> /dev/null
      krenew -b -t -K 10

    So every time I ssh to the server, I kill the existing krenew and spawn a new one with -b (run in the background), -t (I forget why I was using this option!), and -K 10 (run every 10 minutes and refresh the Kerberos cache). When I run a simulation, it runs for 14 hours and then suddenly I start getting Permission denied errors when reading files. Is the command I am running incorrect?
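
    One thing krenew cannot do is renew a ticket past its renewable lifetime; once that ceiling passes, renewal fails and NFS access dies with exactly this symptom, so a hard stop at 14 hours would fit a short "renew until" limit rather than anything wrong with the command. Worth checking, as a sketch:

      klist -f          # the 'renew until' timestamp is the hard ceiling krenew cannot extend
      kinit -r 7d       # request a longer renewable lifetime up front, if the KDC permits it
      krenew -b -t -K 10

    (For the record, and if memory serves, -t additionally runs aklog after each refresh to renew AFS tokens; it is harmless without AFS.)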

    Read the article

  • Implementing an NGINX load balancer

    - by Alaa Alomari
    I have two servers (ServerA 192.168.1.10, ServerB 192.168.1.11) and the DNS for test.mysite.com points to ServerA. On ServerA I have this:

      upstream lb_units {
          server 192.168.1.10 weight=2 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES1
          server 192.168.1.11 weight=2 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES2
      }
      server {
          listen 80;                    # Listen on the external interface
          server_name test.mysite.com;  # The server name
          root /var/www/test;
          index index.php;
          location / {
              proxy_pass http://lb_units;  # Load balance the URL location "/" to the upstream lb_units
          }
          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /var/www/test/$fastcgi_script_name;
          }
      }

    ServerB runs Apache with the following:

      <VirtualHost *:80>
          RewriteEngine on
          <Directory "/var/www/test">
              AllowOverride all
          </Directory>
          DocumentRoot "/var/www/test"
          ServerName test.mysite.com
      </VirtualHost>

    But whenever I browse test.mysite.com, it is served from ServerA. I also tried marking ServerA as down (server 192.168.1.10 down;) in lb_units, and it is still served from ServerA. Any idea what I have done wrong?
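
    Two things in that config would produce exactly this behaviour. First, the regex location ~ \.php$ takes precedence over the prefix location /, so every PHP request is handed straight to the local FastCGI on ServerA and never reaches the upstream at all. Second, the upstream contains the balancer itself on the same port 80 it listens on, which risks nginx proxying to itself. Assuming the intent is to balance everything, one sketch is to proxy PHP too and move any local backend off port 80 (the 8080 port is an assumption):

      upstream lb_units {
          server 192.168.1.10:8080 weight=2 max_fails=3 fail_timeout=30s;  # local backend moved off :80
          server 192.168.1.11:80   weight=2 max_fails=3 fail_timeout=30s;
      }
      server {
          listen 80;
          server_name test.mysite.com;
          location / {
              proxy_pass http://lb_units;
              proxy_set_header Host $host;   # so Apache's name-based vhost still matches
          }
      }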

    Read the article

  • What could be wrong with my VLAN?

    - by Matt
    I've got VLAN 10 set up as a management VLAN. The management VLAN comes off port 48 and links to another set of switches that do not support VLANs, so port 48 was, I believe, set up as an untagged access port. In the past this was a different brand of switch and it worked fine. However, since changing to the HP V1910-48G series, I can't seem to get this working. I must point out that as far as I'm aware it is wired up properly (I can't check physically as I'm working remotely, and I have asked the tech who has access to double-check for me). Now, I don't have a huge amount of experience with VLAN environments, but AFAIK this is right: I've set port 48 (linked to the management switches) as an untagged port with PVID 10 and access link type. Is that all I need to do, configuration-wise, to ensure all devices connected to port 48 end up on VLAN 10 without needing to tag their frames themselves? i.e. the tag would be added by the switch before being forwarded.

    Read the article

  • Open an X application going through many hoops (SSH, VPN, etc.)

    - by ??O?????
    The players:

      HOME:   my home computer, running Linux with an X server running.
      SITE:   a remote site, to which I can connect over the internet using a VPN.
      MIDDLE: a Linux computer at the remote site, to which I can connect with ssh -X and nicely have X clients displaying on my local server.
      ONYX:   a very old Irix machine (an Onyx) at the remote site, which has no SSH server (therefore I can't ssh -X to it), only an ssh client.

    Purpose: I need to run an X11 application on the ONYX machine and see the GUI on HOME. I think I am stumbling over xauth issues. So far, the current situation is:

      - HOME connects to SITE
      - A vncserver starts on MIDDLE:7
      - vncviewer on HOME connects to the vncserver on MIDDLE
      - ONYX starts a forwarding ssh session to MIDDLE: ssh -TfN -L 6007:127.0.0.1:6007 MIDDLE
      - DISPLAY=localhost:7 xclient on ONYX fails with: Xlib: connection to "127.0.0.1:7.0" refused by server

    I do know that the forwarding (6007:127.0.0.1:6007) succeeds. A previous attempt was:

      - HOME connects to SITE
      - HOME connects to MIDDLE: ssh -X MIDDLE (xclock displays on HOME, DISPLAY is 127.0.0.1:10)
      - ONYX starts an SSH tunnel to MIDDLE: ssh -TfN -L 6010:127.0.0.1:6010 MIDDLE
      - DISPLAY=127.0.0.1:10 xclient fails with: X connection to 127.0.0.1:10.0 broken (explicit kill or server shutdown),
        while an error pops up in the MIDDLE session: X11 connection rejected because of wrong authentication.

    Despair. How can I achieve my purpose?
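
    "Wrong authentication" means the tunnel itself is fine and only the MIT magic cookie is missing on ONYX. Copying MIDDLE's cookie for display :10 into ONYX's .Xauthority usually unblocks the second setup; a sketch (the cookie value is hypothetical):

      # on MIDDLE, inside the ssh -X session:
      xauth list $DISPLAY
      #   -> middle/unix:10  MIT-MAGIC-COOKIE-1  1a2b3c4d...

      # on ONYX, register the same cookie for the forwarded display:
      xauth add 127.0.0.1:10 MIT-MAGIC-COOKIE-1 1a2b3c4d...
      DISPLAY=127.0.0.1:10 xclient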

    Read the article

  • Have to run auto-negotiate between clients and switch - "old" switch works fine - "new" switch results in "port flapping"?

    - by ConfusedAboutSwitching
    I need some help understanding a problem we're having at work. We run Altiris/Deployment Solution and have to use auto-negotiation between client systems and our switches (Altiris apparently requires this for imaging, PXE boot, and other functions). We have several areas with old wiring (Cat 3 and Cat 5) that have old 10/100 Cisco switches in them, and we can set these systems up "auto/auto" (auto-negotiate on both the NIC and the switch port) and everything has been working fine. But our networking crew changed out a couple of old switches for 10/100/1000 Cisco switches, and now they are claiming that "auto/auto" won't work because the new switches can't auto-negotiate the way the old 10/100 switches did, and that if we try to auto-negotiate, the switch port starts "port flapping" and shuts down. But if we put the old switch back in, it works "auto/auto" just fine, no port flapping. The networking crew is telling me that the problem is putting "new switches" on "old wire", and that the old cabling can't or won't support auto-negotiation with these new switches...? There's something about this that doesn't make sense to me. Can someone explain it? Or is our networking crew just doing something wrong in the configuration of these new switches? Why will the old switches work "auto/auto" but the new switches won't? HELP!! ...and thanks!! M
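
    There is a plausible physical explanation: 1000BASE-T needs all four pairs, while 10/100 only needs two, so Cat 3 or partially terminated Cat 5 runs can negotiate gigabit, fail the link-up, renegotiate, and so on, which looks exactly like port flapping. On most Cisco IOS switches you can keep auto-negotiation (which Altiris wants) while simply withholding the 1000 Mb/s advertisement; a sketch, with the interface name hypothetical:

      interface GigabitEthernet0/1
       speed auto 10 100   ! still auto-negotiates, but never offers 1000BASE-T over the old pairs
       duplex auto

    If the new switches still flap with gigabit withheld, then the cabling story doesn't hold and it really is a configuration problem.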

    Read the article

  • "Dictionary problem." Error with VMPlayer

    - by George Mauer
    I'm pretty new to VMware virtualization (I've been a VirtualBox user), so I'm hoping you guys can help me out. I recently got an external USB disk containing a VM for a client, downloaded VMPlayer, set it up with "Open a Virtual Machine", and ran it. Easy as pie. After working with it a bit this morning, I shut the VM down, and now, trying to start it back up again, I get the "Dictionary problem" error dialog from the title (screenshot not reproduced here). I tried removing the VM from my library; now the error happens whenever I try to add it back in. In the meantime I can still access other virtual machines, so it seems the problem is with this virtual disk. So, two questions: first, this is obviously not a very helpful error message; where can I go to get more information? My application event log doesn't contain anything from VMware. Second, what steps can I take to fix the problem? Edit: a couple more pieces of information. I did not take any snapshots; I don't think VMPlayer even has that ability. I have a zip file of (what I assume is) the state of the VM when it was sent to me. I cannot unzip it, as it is huge and simply requires more disk space than I have available, but I did extract the vmx file and examine it. Other than the UUIDs, and the fact that mine reads cleanShutdown = "FALSE", they are identical. The log contains the following lines:

      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead: Unable to load dict from 'E:....\MachineName.vmsd'.
      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead failed for file 'E:....\MachineName.vmx': Dictionary problem (6)
      Jun 23 10:11:18.082: vmx| SNAPSHOT: Snapshot_TimeStampTiers failed: Dictionary problem (6)

    Read the article

  • How to boot XBMC 10.1 ISO on USB via grub?

    - by Shi
    I am trying to boot the XBMC Live image (http://xbmc.org/download/) as an ISO from USB via grub 1.98. I have a Kubuntu 11.04 image there as well, and it works using the following configuration:

      menuentry "Kubuntu 11.04 64bit" {
          loopback loop /boot/iso/kubuntu-11.04-desktop-amd64.iso
          linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/boot/iso/kubuntu-11.04-desktop-amd64.iso noeject noprompt
          initrd (loop)/casper/initrd.gz
      }

    However, when I try to boot XBMC in an analogous way, I always get the error "Unable to find a medium containing a live file system". I found different approaches to installing XBMC, but they are all about installing the distribution on USB, or using grub4dos or unetbootin. I found out that XBMC 10.1 is based on Ubuntu 10.04.2 LTS, so I tried those settings, even though they are quite similar to Kubuntu 11.04's. Finally, the ISO contains a grub configuration of its own in boot/grub/grub.cfg, but even with those parameters I get the error above. My current configuration is the following:

      menuentry "xbmc 10.1" {
          loopback loop /boot/iso/xbmc-10.1-live.iso
          linux (loop)/live/vmlinuz video=vesafb boot=live iso-scan/filename=/boot/iso/xbmc-10.1-live.iso xbmc=autostart,nodiskmount splash quiet loglevel=0 persistent quickreboot quickusbmodules notimezone noaccessibility noapparmor noaptcdrom noautologin noxautologin noconsolekeyboard nofastboot nognomepanel nohosts nokpersonalizer nolanguageselector nolocales nonetworking nopowermanagement noprogramcrashes nojockey nosudo noupdatenotifier nouser nopolkitconf noxautoconfig noxscreensaver nopreseed union=aufs
          initrd (loop)/live/initrd.img
      }

    Any more ideas, or any more information I should supply?
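
    One detail stands out: boot=live means the XBMC image uses Debian's live-initramfs, and iso-scan/filename= is a casper-only parameter, so the initrd never learns where the ISO is. live-initramfs expects fromiso= instead; a sketch of just the changed kernel line, assuming the USB stick is the first partition of the first disk:

      linux (loop)/live/vmlinuz video=vesafb boot=live fromiso=/dev/sda1/boot/iso/xbmc-10.1-live.iso xbmc=autostart,nodiskmount splash quiet

    Treat the fromiso= path syntax as an assumption to verify against the live-initramfs documentation for the 10.04-era scripts.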

    Read the article

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken. In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on, with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more.

    However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was indeed alive, the server would probe every 300 s (5 minutes), the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes the server would abort the connection. To our surprise, the idle but alive connections get killed after about 40 minutes, as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server.

    What could be happening here? If the keepalive settings on the server are time=300, intvl=180, probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds, before killing the connection. Am I right? One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection, so we think it affects all TCP connections.
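
    "No keepalives on the wire" has a simpler explanation worth ruling out first: the kernel sysctls only tune sockets whose application has set SO_KEEPALIVE; they are not global. If the Teradata listener never enables the option, no probes are ever sent regardless of the sysctl values. You can see whether the timer is armed per connection; a sketch:

      ss -tno state established '( sport = :1025 )'
      # a Timer column like 'timer:(keepalive,4min,0)' means SO_KEEPALIVE is set;
      # 'timer:(off)' means the application never enabled it, so the sysctls are moot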

    Read the article

  • Why does Apache httpd tell me that my name-based virtual hosts only work with SNI-enabled browsers (RFC 4366)?

    - by Arlukin
    Why does Apache give me this error message in my logs? Is it a false positive?

      [warn] Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)

    I have recently upgraded from CentOS 5.7 to 6.3, and thereby to a newer httpd version. I have always written my SSL virtualhost configurations as below, where all domains that share the same certificate (mostly/always wildcard certs) share the same IP, but I never got this error message before (or did I? maybe I haven't looked closely enough in my logs). From what I have learned, this should work without SNI (Server Name Indication). Here are the relevant parts of my httpd.conf file; without the second VirtualHost I don't get the error message.

      NameVirtualHost 10.101.0.135:443

      <VirtualHost 10.101.0.135:443>
          ServerName sub1.domain.com
          SSLEngine on
          SSLProtocol -all +SSLv3 +TLSv1
          SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
          SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
          SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
          SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
      </VirtualHost>

      <VirtualHost 10.101.0.135:443>
          ServerName sub2.domain.com
          SSLEngine on
          SSLProtocol -all +SSLv3 +TLSv1
          SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
          SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
          SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
          SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
      </VirtualHost>

    Read the article

  • Trying to communicate between virtual servers on the same host through ipv6

    - by Daniele Testa
    I am running KVM on a host with 2 virtual servers. Each virtual server has its own bridge interface on the host:

      VPS1 has br1
      VPS2 has br2

    Each virtual server has its own IPv4 and IPv6 address. The virtual servers have no problem communicating with the internet or with each other over IPv4. Over IPv6, however, they can only communicate with the internet and NOT with each other. The host can ping both virtual servers without any problems, but they cannot ping each other. iptables is set to ACCEPT on all chains, so that is not the problem.

      VPS1 has ipv6 = 2a01:4f8:xxx:xxx::10
      VPS2 has ipv6 = 2a01:4f8:xxx:xxx::5

    The host has the following routes set:

      ip route add 2a01:4f8:xxx:xxx::10 dev br1
      ip route add 2a01:4f8:xxx:xxx::5 dev br2

    When I do a ping from VPS2 to VPS1, I see the following on the host:

      tcpdump -i br1
      15:32:27.704404 IP6 2a01:4f8:xxx:xxx::10 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2a01:4f8:xxx:xxx::5, length 32

    So the host sees the request coming from VPS1 on br1, but for some reason it does not forward it to br2; instead the guest keeps asking over IPv6 multicast where the destination IP is. Anyone have a clue what is going on? I find this very strange, as it works fine over IPv4 with the exact same settings and routes.
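
    The capture is the clue: the guest is neighbor-soliciting an address that lives behind another bridge, nothing on br1 answers, so the packet never even reaches the host's routing layer. With IPv4 the host's ARP behaviour tends to paper over this; with IPv6 the host must be told to answer NDP on each guest's behalf. A sketch, assuming that is the missing piece:

      sysctl -w net.ipv6.conf.all.proxy_ndp=1
      ip -6 neigh add proxy 2a01:4f8:xxx:xxx::5  dev br1   # answer for VPS2 on VPS1's bridge
      ip -6 neigh add proxy 2a01:4f8:xxx:xxx::10 dev br2   # answer for VPS1 on VPS2's bridge
      sysctl -w net.ipv6.conf.all.forwarding=1             # and make sure IPv6 forwarding is on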

    Read the article

  • Determine from where "sh" is being run under the apache www-data user, using ps or netstat

    - by Eugene van der Merwe
    I am working with a compromised Ubuntu 8.04 Plesk 9.5.4 server. It seems that a script on the server is continuously doing reverse lookups against random IPs on the internet. I first spotted it while using top, and then noticed flashes of this coming up continuously:

      sh -c host -W 1 '198.204.241.10'

    I wrote this script to interrogate ps every second and see how frequently it happens:

      #!/bin/bash
      while :
      do
          ps -ef | egrep -i "sh -c host"
          sleep 1
      done

    The result is that the command runs often, every few seconds:

      www-data 17762  8332 1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
      www-data 17772  8332 1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
      www-data 17879 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      www-data 17879 17869 1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      www-data 17879 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      root     18031 17756 0 10:07 pts/2 00:00:00 egrep -i sh -c host
      www-data 18078 16704 0 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
      www-data 18125 17996 0 10:07 ?     00:00:00 sh -c host -W 1 '91.124.51.65'
      root     18131 17756 0 10:07 pts/2 00:00:00 egrep -i sh -c host
      www-data 18137 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      www-data 18137 17869 1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'

    My theory is that if I can see who is launching the sh process, or from where it is launched, I can isolate the problem further. Can somebody please guide me, using netstat or ps, to identify where sh is being run from? I might get many suggestions that the OS and the Plesk are out of date, but please bear in mind there are some very concrete reasons why this server is running legacy software. My question is aimed at advanced Linux systems administrators who have in-depth experience with security compromises and with using netstat and ps to get to the bottom of them.
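
    The second column of that ps output is the PID and the third is the parent PID, and the parents repeat (8332, 17869, 16704, 17996), so those are the processes to inspect. /proc reveals their real identity even if they disguise their argv; a sketch using the PPIDs from the listing above:

      ps -fp 8332 17869 16704 17996          # what the parents claim to be
      ls -l /proc/8332/exe /proc/8332/cwd    # real binary and working directory, survives argv spoofing
      lsof -p 8332 | head -30                # open files, sockets, and any script being interpreted

    With Plesk, the parents are commonly Apache children; in that case the next step is matching the timestamps against the vhost access logs to locate the PHP backdoor being hit.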

    Read the article

  • Router intermittently failing

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to handle routing my home network. I have a few complications, but I adapted my configuration from a previously working configuration, and I don't see why I am having intermittent problems. But I am having them! Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop, I am unable to use the router's DNS server, I can't ping the router, etc. All of these things work most of the time, but break down intermittently, for a few minutes at a time. (I can provide more details, but I'm not sure what will be helpful.)

      /etc/network/interfaces:

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # Gigabit ethernet, internal network
      auto eth0
      allow-hotplug eth0
      iface eth0 inet manual

      # USB ethernet, internet
      auto eth1
      allow-hotplug eth1
      iface eth1 inet dhcp

      # Xen bridge
      auto xlan0
      iface xlan0 inet static
          bridge_ports eth0
          address 10.47.94.1
          netmask 255.255.255.0

    As I understand it, this is sufficient to create the network interfaces, and even do some switching between Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing between the bridge and my internet-facing interface:

      /etc/shorewall/zones:
      fw   firewall
      net  ipv4
      lan  ipv4

      /etc/shorewall/interfaces:
      net eth1  detect dhcp,tcpflags,nosmurfs,routefilter,logmartians
      lan xlan0 detect dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge

      /etc/shorewall/policy:
      net all DROP   info
      fw  net ACCEPT info
      all all REJECT info

      /etc/shorewall/rules:
      DNS(ACCEPT)  fw  net
      DNS(ACCEPT)  lan fw
      Ping(ACCEPT) lan fw
      ... and so on; these all work, when the router is accepting traffic at all.

      /etc/shorewall/masq:
      eth1 10.47.94.0/24

    Also, the router is currently "working", and I checked from a problematic client:

      $ arp infrastructure
      infrastructure.mydomain (10.47.94.1) at 0:23:54:bb:7d:ce on en0 ifscope [ethernet]

    I tried it while the router was down and (eventually) got the same response, though it took about 30 seconds to return.
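
    Symptoms that come and go for minutes at a time, while ARP still resolves, often point to either a duplicate IP on the LAN or a link/route flapping on the router itself rather than to Shorewall. Two cheap checks before digging into the firewall, as a sketch:

      arping -D -I xlan0 10.47.94.1   # duplicate-address detection: any reply means another host claims the router's IP
      ip monitor link addr route      # leave running; it logs interfaces, addresses, or routes bouncing in real time

    A USB ethernet adapter on the WAN side is also a classic flapping suspect, so dmesg around an outage is worth a look too.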

    Read the article

  • VirtualBox with Ubuntu Server guest can't ping outside

    - by Danidan
    Here's my situation: an Ubuntu 12.04 host running VirtualBox, with two guest VMs running Ubuntu Server 12.04, on a home network, so my host PC has a wireless connection to my ISP's router. My problem is in one of the virtual machines. It has 3 NICs, one in NAT mode and the other two in host-only mode. My purpose is to use eth0 (NAT) for internet access and eth1/eth2 (host-only) for management of the internal virtual network (eth1 uses a VBoxNet with IP 192.168.69.254). Whenever I ping 8.8.8.8 I get "Destination Host Unreachable", while pinging 192.168.69.10, the IP of the other VM, works. I also cannot ping my host or my router. My /etc/network/interfaces file is:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet dhcp

      auto eth1
      iface eth1 inet static
          address 192.168.69.1
          netmask 255.255.255.0

      auto eth2
      iface eth2 inet manual
          up ifconfig $IFACE 0.0.0.0 up
          up ip link set $IFACE promisc on
          down ip link set $IFASE promisc off
          down ifconfig $IFACE down

    route -n returns:

      Destination    Gateway   Genmask         Flags Metric Ref Use Iface
      0.0.0.0        10.0.2.2  0.0.0.0         UG    100    0   0   eth0
      10.0.2.0       0.0.0.0   255.255.255.0   U     0      0   0   eth0
      192.168.69.0   0.0.0.0   255.255.255.0   U     0      0   0   eth1
      192.168.100.0  0.0.0.0   255.255.255.0   U     0      0   0   virbr0

    Forgetting for now what eth2 needs to do and its setup: why can't I get outside the host box? What can I do to help you help me? :-)
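
    With a VirtualBox NAT adapter, the guest side is always 10.0.2.0/24 with the gateway at 10.0.2.2 and a DNS proxy at 10.0.2.3, and the routing table above matches that, so the next step is to test each hop of the NAT path directly; a sketch:

      ip route get 8.8.8.8    # should report: 8.8.8.8 via 10.0.2.2 dev eth0
      ping 10.0.2.2           # the NAT gateway VirtualBox emulates
      ping 10.0.2.3           # the NAT DNS proxy
      sudo dhclient -v eth0   # confirm the lease really came from 10.0.2.2

    If the emulated gateway itself does not answer, check that the VM's adapter 1 really is the NAT one: with three NICs it is easy for the guest's eth0 to end up mapped to a host-only adapter after a MAC change and a udev persistent-net reshuffle. That is an assumption about the cause, but a cheap one to verify in the VM settings.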

    Read the article

  • OpenVPN - Cannot browse ipv4 websites

    - by user1494428
    I have set up an OpenVPN tunnel on my VPS (OpenVZ, Ubuntu 12.04). The problem is that through the tunnel I can only browse websites which support IPv6, like Google. http://whatismyv6.com/ reports that I have an IPv6 address, so I guess this is the problem. Server configuration:

      dev tun
      server 10.8.0.0 255.255.255.0
      ifconfig-pool-persist ipp.txt
      ca /etc/openvpn/easy-rsa/keys/ca.crt
      cert /etc/openvpn/easy-rsa/keys/server.crt
      key /etc/openvpn/easy-rsa/keys/server.key
      dh /etc/openvpn/easy-rsa/keys/dh1024.pem
      push "route 10.8.0.0 255.255.255.0"
      push "dhcp-option DNS 8.8.8.8"
      push "dhcp-option DNS 8.8.4.4"
      push "redirect-gateway def1"
      comp-lzo
      persist-tun
      persist-key
      status openvpn-status.log
      log /var/log/openvpn.log
      verb 3

    Client configuration:

      client
      remote xx.xx.xx.xx 1194
      dev tun
      comp-lzo
      ca ca.crt
      cert client1.crt
      key client1.key
      redirect-gateway def1
      verb 3

    I have configured NAT with this command:

      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to xx.xx.xx.xx

    Can someone explain how I can make it work (forcing IPv4?). I had the same problem with another VPS, and I also tried another client (all Windows 7).
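
    The IPv6-only symptom actually fits a broken IPv4 forwarding path: sites reachable over IPv6 bypass the tunnel's IPv4 route entirely, so only they load. SNAT alone only rewrites addresses; the VPS must also be allowed to forward, and on OpenVZ the public interface is normally venet0, which is worth naming explicitly. A sketch of the usual trio (the interface name is an assumption about the OpenVZ layout):

      sysctl -w net.ipv4.ip_forward=1        # or persist it in /etc/sysctl.conf
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source xx.xx.xx.xx
      iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT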

    Read the article

  • Exim, how to route local mail to another address

    - by kheraud
    I have set up an Exim4 server on my Debian Wheezy machine. This mail server only sends mail originating from localhost; the purpose is sending mail for my website. I have cron tasks and other services generating mail for the root user. These mails are not stored in /var/mail as before, but sent by exim to [email protected]. I am trying to make exim send mail for root to [email protected] rather than [email protected]. I tried adding a .forward in /root with [email protected] as its content. I also tried changing /etc/aliases with root: [email protected]. The fact is that the routing works for root@localhost but not for a bare root, which is resolved as [email protected]. I tested how the routing is resolved with exim -bt:

      root@srv02:~# exim -bt root@localhost
      R: system_aliases for root@localhost
      R: dnslookup for [email protected]
      [email protected] <-- root@localhost
        router = dnslookup, transport = remote_smtp
        host gmail-smtp-in.l.google.com      [173.194.67.27]  MX=5
        host alt1.gmail-smtp-in.l.google.com [74.125.143.27]  MX=10
        host alt2.gmail-smtp-in.l.google.com [74.125.25.27]   MX=20
        host alt3.gmail-smtp-in.l.google.com [173.194.64.27]  MX=30
        host alt4.gmail-smtp-in.l.google.com [74.125.142.27]  MX=40

      root@srv02:~# exim -bt root
      R: dnslookup for [email protected]
      [email protected]
        router = dnslookup, transport = remote_smtp
        host aspmx.l.google.com      [173.194.78.27]  MX=1
        host alt1.aspmx.l.google.com [74.125.143.27]  MX=5
        host alt2.aspmx.l.google.com [74.125.25.27]   MX=5
        host alt4.aspmx.l.google.com [74.125.142.27]  MX=10
        host alt3.aspmx.l.google.com [173.194.64.27]  MX=10

    I bet this is a matter of how my server is configured rather than how exim is configured, but to understand it properly I would like a solution for both: how do I get a bare root resolved as root@localhost, and how do I get [email protected] routed to [email protected]?
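
    The -bt output shows the difference: a bare root gets qualified with the machine's own domain, and since that domain is not in exim's local domain list, the system_aliases router never fires and dnslookup ships the message straight to that domain's MX. On Debian's split configuration, the usual fix is to declare the domain local; a sketch, assuming the stock update-exim4.conf setup and a hypothetical hostname:

      # /etc/exim4/update-exim4.conf.conf: add the qualified domain to the local list
      dc_other_hostnames='srv02.example.com'

      # then rebuild the config and restart
      update-exim4.conf && service exim4 restart

    Once the domain counts as local, the root: entry in /etc/aliases applies, and both test addresses route through system_aliases to the Gmail address.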

    Read the article
