Search Results

Search found 4906 results on 197 pages for 'ssh tunnel'.

Page 190/197

  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".
    Stuff we need:
    - Expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3TB of files we need to host, and that's likely to grow by another TB every 6-12 months based on recent history, so I need to be able to add additional discs with minimal pain.
    - Needs to store all the media (i.e. photos, video, music) we have, and run services to serve the various devices we have in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z)
    - BitTorrent client
    - ssh, NFS, Samba access
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
    - Ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS's batteries)
    - FOSS software
    - A modern distributed version control system running on the box, such as Mercurial
    Stuff I'd like to have on the server, but can live without:
    - PVR capability, so I could record TV to the box
    - Web server. We currently run a small Web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server just to save some electricity.
    - Nagios + mrtg
    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult. From what I've found:
    - I've got most experience with various Linux distros, but am happy to use another Unix.
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS.
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's.
    - btrfs, while looking very good, doesn't seem ready for prime time yet.
    - ZFS doesn't let you (easily?) add new discs to a RAID 5 or RAID-Z.
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data.
    At the moment, I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to be able to add new discs on a fairly regular basis to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance.
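
    The RAID-Z growth concern is real: a raidz vdev cannot be widened one disc at a time; a pool grows by adding whole vdevs. A minimal sketch of what that workflow looks like (device names are placeholders, not from the question):

        # Initial pool: one 3-disc raidz vdev
        zpool create tank raidz da0 da1 da2
        # Expansion means adding another complete vdev, not a single disc:
        zpool add tank raidz da3 da4 da5
        # The snapshot/rollback workflow for rescuing deleted school assignments:
        zfs snapshot tank/home@sunday
        zfs rollback tank/home@sunday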

    Read the article

  • Scoping a home dev server

    - by AbhikRK
    Hi. I'm looking to build a multi-purpose home development server. In this post I'm looking to outline what I want from such a system, the 'why's of it to some limited extent, and finally some rudiments of how I'm looking to go about that. I'm mostly a developer, with just about some sysadmin familiarity, so please excuse me, correct me, and advise on any ignorance which comes across in the following ;-)
    It will serve the following goals to start with:
    1. NAS (looking at using ZFS)
    2. Source control repo, e.g. Git server
    3. Database, e.g. MySQL server
    4. Continuous integration, e.g. Hudson server
    5. Other stuff as and when they come up, e.g. RabbitMQ etc.
    6. A development sandbox to play around with new stuff
    I want to achieve high uptime for 2-5 as much as possible. They should run as independent services and with minimal maintenance (e.g. TurnKey Linux appliances). I'm thinking of running them as individual Xen DomUs; then maybe the NAS can be the Dom0 and 6 can be another DomU. The user for this would be mostly me. I can see 2-4 being sometimes used by 2-3 users, but that would be infrequent. I'm looking for a repeatable setup: ideally I'd like to automate it through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN/on-the-go.
    My queries are:
    1. Is putting 1-6, all of them, on a single box a good idea? If so, what kind of hardware should I be looking at for a low-cost, low-power setup?
    2. Although not at present, in future I might be looking at adding audio/media servers to the mix. Would that impact the answers to 1?
    3. I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use?
    4. I had a look at the SheevaPlug, and was wondering if I could split off the NAS on its own using that, but ruled it out preliminarily due to its reported heating issues. Is it something I should still consider?
    Thanks in advance. I posted this question previously on Super User but got no responses there, so I was wondering if this is a more apt forum for it.
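
    For the "repeatable setup" goal, the DomU creation itself scripts well. A sketch with xen-tools on a Debian-family Dom0 (the hostname, sizes and config path are illustrative, not from the question):

        # Build a paravirtualised guest image for one of the services:
        xen-create-image --hostname=git-server --size=10Gb --memory=512Mb --dhcp
        # Boot it:
        xm create /etc/xen/git-server.cfg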

    Read the article

  • Problems migrating an EBS backed instance over AWS Regions

    - by gshankar
    Note: I asked this question on the EC2 forums too but haven't received any love there. Hopefully the Server Fault community will be more awesome.
    The new AWS Sydney region opening up is something that we've been waiting for for a long time, but I'm having a lot of trouble migrating our instances over from N. California. I managed to migrate one instance over using CloudyScripts to move a snapshot and then firing up a new instance in the Sydney region. This was a very new instance, so both the source and destination were running Ubuntu 12.04 LTS, and I had no issues there. However, the rest of our instances are all Ubuntu 10.04 LTS, and with these I'm having a lot of problems. I've tried the following:
    1. Following the AWS whitepaper on moving instances, which was given to us at the recent Customer Appreciation Day in Sydney where the new region was launched. The problem with this approach was with the last step (step 19), where you register the image:

        ec2-register -s snap-0f62ec3f -n "Wombat" -d "migrated Wombat" --region ap-southeast-2 -a x86_64 --kernel aki-937e2ed6 --block-device-mapping "/dev/sdk=ephemeral0"

    I keep getting this error:

        Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' does not exist

    which I think is due to the kernel_id not existing in the Sydney region?
    2. Using CloudyScripts to move a snapshot and then creating a new volume and attaching it to a new instance in Sydney. This results in the instance just hanging on boot and failing the status checks. I can't SSH in or look at the server log.
    I suspect that my issue is with finding the right kernel_id for the volume in the new region. However, I can't seem to work out how to find this kernel_id: the ones I've tried from the original instance result in the Client.InvalidAMIID.NotFound error above, and any other kernel_id just won't boot. I've tried both 12.04 and 10.04 versions of Ubuntu. Nothing seems to work; I've been banging my head against a wall for a while now, please help!
    New (broken) instance: i-a1acda9b, ami-9b8611a1, aki-31990e0b
    Source instance: i-08a6664e, ami-b37e2ef6, aki-937e2ed6
    P.S. I also tried following this guide on updating my Ubuntu LTS version to 12.04 before doing the migration, but it didn't seem to work either; I still get stuck on the kernel_id: http://ubuntu-smoser.blogspot.com.au/2010/04/upgrading-ebs-instance.html
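
    Kernel (aki) IDs are region-specific, so the source region's ID will never resolve in ap-southeast-2, but valid ones can be listed. A sketch with the EC2 API tools (the name filter pattern is an assumption; Amazon's pv-grub kernel naming can vary):

        # List Amazon's pv-grub kernels available in the Sydney region:
        ec2-describe-images -o amazon --region ap-southeast-2 --filter "name=pv-grub-*"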

    Read the article

  • Rsync: windows 7, synology: login error and permission denied error

    - by loonboon
    Good day to all of you. I'm running into strange/stupid errors, and I hope somebody will be so kind as to help me out. I have to admit I am by no means a guru, so please bear with me :-)
    Situation:
    - Synology NAS (runs Linux) and a Windows 7 desktop (one normal/restricted user Lisa and one admin user).
    - Data from the W7 desktop is to be rsynced to the Synology: /volume1/homes/Lisa/Backup
    - Rsync command: c:\cygwin\bin\rsync -avz /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup
    - I've set up ssh per these two threads: http://www.cesareriva.com/archives/102 and http://www.cesareriva.com/archives/112
    Now the horrors begin:
    - root is allowed to run the rsync successfully; however, root doesn't log in automatically, so I can not use rsync in W7 batch scripts, which is of course required.
    - Lisa is allowed to log in automatically, but can not successfully finish the rsync command because of permission errors: rsync change dir /volume1/homes/Lisa/Backup failed: permission denied. This happens for each and every file and subdirectory rsync tries to create. However, the main directory (Backup) is created.
    When I try to copy files from Windows Explorer to the directory 'Backup' using the very same user Lisa, everything goes smoothly. So, obviously, there is a permission problem somewhere: either my rsync command isn't correct, or the folder permissions for homes/Lisa aren't correct (but then again, Windows 7 copies files to that folder without any problems, so that does make me believe the homes/Lisa permissions aren't the problem). I also tried adding --chmod=Dugo+x --chmod=ugo+r, which I found somewhere on the web, to the rsync command, but this didn't solve anything and gave the exact same errors.
    Would anybody please help me fix this? I am utterly frustrated, because I have been trying for a month to get everything to work and it simply doesn't. I bought the big Synology to end the horrors of 20 external USB disks for once and for all (we have many pictures and home videos of our deceased dogs and want to watch these, the horrors being 'what material is on what disk'). I'll gladly return the favour of somebody helping me out by buying you a nice beer (PayPal). I am not extremely skilled on Linux (not at all :-( ), so if you could add an extra word where possible so I understand what to do, I'd be very grateful. I really hope somebody can help me out. Thank you in advance, Lisa
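
    Since rsync (as Lisa, over ssh) can create the Backup directory but nothing beneath it, the first thing worth checking is Unix ownership on the NAS side, which Windows/Samba access would mask. A sketch (run over ssh on the Synology as root; user and path come from the question, the fix itself is an assumption):

        # Make Lisa the owner of the backup tree and give her full access:
        chown -R Lisa /volume1/homes/Lisa/Backup
        chmod -R u+rwX /volume1/homes/Lisa/Backup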

    Read the article

  • Ubuntu: Multiple NICs, one used only for Wake-On-LAN

    - by jcwx86
    This is similar to some other questions, but I have a specific need which is not covered in those questions. I have an Ubuntu server (11.10) with two NICs. One is built into the motherboard and the other is a PCI Express card. I want to have my server connected to the internet via my NAT router and also have it able to wake from suspend using a Magic Packet (henceforth referred to as Wake-On-LAN, WOL). I can't do this with just one of the NICs, because each has an issue: the built-in NIC will crash the system if it is placed under heavy load (typically downloading data), whilst the PCI Express NIC will crash the system if it is used for WOL. I have spent some time investigating these individual problems, to no avail.
    My plan is thus: use the built-in NIC solely for WOL, and use the PCI Express card for all other network communication except WOL. Since I send the WOL Magic Packet to a specific MAC address, there is no danger of hitting the wrong NIC, but there is a danger of using the built-in NIC for general network access, overloading it and crashing the system. Both NICs are wired to the same LAN with address space 192.168.0.0/24. The built-in Ethernet card is set to have interface name eth1 and the PCI Express card is eth0 in Ubuntu's udev persistent rules (so they stay the same upon reboot). I have been trying to set this up with the /etc/network/interfaces file. Here is where I am currently:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.0.3
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

        auto eth1
        iface eth1 inet static
            address 192.168.0.254
            netmask 255.255.255.0

    I think by not specifying a gateway for eth1, I prevent it being used for outgoing requests. I don't mind if it can be reached on 192.168.0.254 on the LAN, i.e. via SSH -- its IP is irrelevant to WOL, which is based on MAC addresses -- I just don't want it to be used to access internet resources. My kernel routing table (from route -n) is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

    My question is this: is this sufficient for what I want to achieve? My research has thrown up the idea of using static routing to specify that eth1 should only be used for WOL on the local network, but I'm not sure this is necessary. I have been monitoring the activity of the interfaces using iptraf, and it seems like eth0 takes the vast majority of the packets, though I am not sure that this will stay consistent with my configuration. Given that if I mess up the configuration my system will likely crash, it is important to me to have this set up correctly!
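
    One detail worth pinning down: many NICs only honour magic packets if WOL is explicitly armed on the interface before suspend. A sketch of the eth1 stanza with that added (assumes the ethtool package is installed; "g" enables magic-packet wake):

        auto eth1
        iface eth1 inet static
            address 192.168.0.254
            netmask 255.255.255.0
            post-up ethtool -s eth1 wol g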

    Read the article

  • RSH between servers not working

    - by churnd
    I have two servers: one CentOS 5.8 and one Solaris 10. Both are joined to my workplace AD domain via PBIS-Open. A user will log into the Linux server and run an application which issues commands over RSH to the Solaris server. Some commands are also run on the Linux server, so both are needed. Due to the application these servers are being used for (proprietary GE software), the software on the Linux server needs to be able to issue rsh commands to the Solaris server on behalf of the user (the user just runs a script and the rest is automatic). However, rsh is not working for the domain users. It does work for a local user, so I believe I have the necessary trust settings between the two servers correct. Meanwhile, I can rlogin as a domain user from the Linux server to the Solaris server, and SSH works too (how I wish I could use it). Some relevant info.
    Via rlogin:

        [user@linux~]$ rlogin solaris
        connect to address 192.168.1.2 port 543: Connection refused
        Trying krb4 rlogin...
        connect to address 192.168.1.2 port 543: Connection refused
        trying normal rlogin (/usr/bin/rlogin)
        Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
        solaris%

    Via rsh:

        [user@linux ~]$ rsh solaris ls
        connect to address 192.168.1.2 port 544: Connection refused
        Trying krb4 rsh...
        connect to address 192.168.1.2 port 544: Connection refused
        trying normal rsh (/usr/bin/rsh)
        permission denied.
        [user@linux ~]$

    Relevant snippet from /etc/pam.conf on the Solaris box:

        #
        # rlogin service (explicit because of pam_rhost_auth)
        #
        rlogin  auth sufficient    pam_rhosts_auth.so.1
        rlogin  auth requisite     pam_lsass.so set_default_repository
        rlogin  auth requisite     pam_lsass.so smartcard_prompt try_first_pass
        rlogin  auth requisite     pam_authtok_get.so.1 try_first_pass
        rlogin  auth sufficient    pam_lsass.so try_first_pass
        rlogin  auth required      pam_dhkeys.so.1
        rlogin  auth required      pam_unix_cred.so.1
        rlogin  auth required      pam_unix_auth.so.1
        #
        # Kerberized rlogin service
        #
        krlogin auth required      pam_unix_cred.so.1
        krlogin auth required      pam_krb5.so.1
        #
        # rsh service (explicit because of pam_rhost_auth,
        # and pam_unix_auth for meaningful pam_setcred)
        #
        rsh     auth sufficient    pam_rhosts_auth.so.1
        rsh     auth required      pam_unix_cred.so.1
        #
        # Kerberized rsh service
        #
        krsh    auth required      pam_unix_cred.so.1
        krsh    auth required      pam_krb5.so.1
        #

    I have not really seen anything useful in either system log that seems directly related to the failed login attempts. I've tail -f'd /var/adm/messages on the Solaris box and /var/log/messages on the Linux box during the failed attempts and nothing shows up. Maybe I need to be doing something else?
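
    Note that unlike the rlogin stack above, the rsh stack contains no pam_lsass line, so PBIS/AD users never reach a module that knows about them: if pam_rhosts_auth fails, the stack ends at "permission denied". Separately, it is cheap to confirm the Solaris rsh service itself (a diagnostic sketch using the standard SMF names):

        # Is in.rshd enabled, and what is its state?
        svcs -l svc:/network/shell:default
        # Enable it if it is disabled:
        svcadm enable svc:/network/shell:default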

    Read the article

  • iptables & allowed port refusing connection

    - by marfarma
    Can you see what I'm doing wrong? On Ubuntu Server 9.10, I'm attempting to allow traffic on port 1143 for a non-privileged IMAP host. The connection is refused when testing with telnet example.com 1143, but allowed when testing with telnet example.com 80, from my PC to the remote internet-hosted server. Both rules appear identical and are located near each other in the rules file, with no intervening rules rejecting connections. I can't figure it out. iptables -L returns this:

        Chain INPUT (policy ACCEPT)
        target     prot opt source     destination
        ACCEPT     all  --  anywhere   anywhere
        REJECT     all  --  anywhere   127.0.0.0/8   reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere   anywhere      state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:www
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:https
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:http-alt
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:7070
        ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:1143
        ACCEPT     tcp  --  anywhere   anywhere      state NEW tcp dpt:ssh
        ACCEPT     icmp --  anywhere   anywhere      icmp echo-request
        LOG        all  --  anywhere   anywhere      limit: avg 5/min burst 5 LOG level debug prefix `iptables denied: '
        REJECT     all  --  anywhere   anywhere      reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination
        REJECT     all  --  anywhere   anywhere      reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination
        ACCEPT     all  --  anywhere   anywhere

    and my rules file contains this:

        # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
        *nat
        :PREROUTING ACCEPT [3556:217296]
        :POSTROUTING ACCEPT [6909:414847]
        :OUTPUT ACCEPT [6909:414847]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
        COMMIT
        # Completed on Wed May 26 19:08:34 2010
        # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
        *filter
        :INPUT ACCEPT [1:52]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1:212]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 7070 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 1143 -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        -A INPUT -j REJECT --reject-with icmp-port-unreachable
        -A FORWARD -j REJECT --reject-with icmp-port-unreachable
        -A OUTPUT -j ACCEPT
        COMMIT
        # Completed on Wed May 26 19:08:34 2010
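
    With an ACCEPT rule in place, "connection refused" usually comes from the host itself rather than the firewall: either no process is listening on 1143, or it is bound to localhost only. A quick check worth running on the server (a diagnostic sketch, not from the question):

        # Is anything listening on 1143, and on which address?
        netstat -tlnp | grep 1143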

    Read the article

  • Subversion/Hudson/Sonar/Artifactory - too much for my little server to handle! Help!

    - by Ricket
    I have a little dedicated server. It's at a cheap price and has a simple AMD 1800+ (1.5 GHz), 256 MB DDR RAM... need I continue? And I think I'm overloading it already. It is running CentOS 5.4, and I have installed the following:
    - Webmin
    - Apache
    - MySQL
    - Subversion as an Apache module
    - Hudson (standalone)
    - Sonar (standalone, runs with a standalone Jetty install)
    - Artifactory (standalone)
    That's pretty much it. But I'm having problems: pages are loading quite slowly. Network speed of the server is excellent, but I think I'm just running out of CPU and/or memory. A side effect of the slow pages is that Hudson sometimes times out, not being able to start Maven or contact Sonar in a certain amount of time. I think the next step to speed things up might be to move to an application server and run the WAR versions of Hudson, Sonar and Artifactory together on that server. I don't know that it will help, but it just seems to make sense, especially with Sonar running on its own Jetty install and the other two probably running their own mini application servers as well. Am I correct in thinking this? Is this the right course of action? Any other tips on how to make the server run faster? I can post more data if you'd like; just let me know what else would help you answer my question.
    Oh, also, just to cure any suspicions: I don't have any sort of virus or spyware. I protect my SSH access with DenyHosts (which has blocked 300+ brute forcers in the past few months), and I have confirmed that the top four processes in terms of memory and CPU usage are Sonar, Artifactory, Hudson, and MySQL.
    Edit: I just thought of another thing I'd like you to comment on as well: Apache currently has 8 spawned slave processes, taking 42 MB of RAM apiece, and this is not my web server. Is everything else able to function if I shut down Apache? Can you point me towards a tutorial or something on migrating Subversion from Apache into something that might work along with the other three applications, maybe even making Subversion a WAR file or something?
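
    On the Subversion question: the repository itself is independent of Apache, so dropping mod_dav_svn does not require converting anything to a WAR. A sketch of the lighter-weight alternative (the repository path is a placeholder):

        # Serve existing repositories over the svn:// protocol instead of HTTP:
        svnserve -d -r /srv/svn
        # Clients then check out with, e.g.:
        svn checkout svn://example.com/myproject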

    Read the article

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon). s3fs is compiled from source according to the instructions etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

        # /etc/fstab: static file system information.
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc             /proc            proc  nodev,noexec,nosuid 0 0
        LABEL=uec-rootfs /                ext4  defaults 0 0
        /dev/sda2        /mnt             auto  defaults,nobootwait,comment=cloudconfig 0 2
        /dev/sda3        none             swap  sw,comment=cloudconfig 0 0
        s3fs#mybucket    /mnt/s3/mybucket fuse  default_acl=public-read,use_cache=/tmp,allow_other 0 0

    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:

        $ sudo umount /mnt/s3/mybucket
        $ sudo mount -a
        $ mountpoint /mnt/s3/mybucket
        /mnt/s3/mybucket is a mountpoint

    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:

        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated the problem and built a Fabric task that reproduces it:

        def remount_s3fs():
            sudo("mount -a")

    Which does:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint, I get:

        $ mountpoint /mnt/s3/mybucket
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    Done. But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that "transport endpoint not connected" error. I've also tried copying and pasting the exact command run into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?
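
    "Transport endpoint is not connected" is the classic symptom of a FUSE mount whose userspace process has died while the mount point stayed registered, and the s3fs daemon being tied to Fabric's pseudo-tty would fit that. A hedged sketch of the task with two things to try (the pty behaviour is an assumption about Fabric's defaults, not something the question confirms):

        def remount_s3fs():
            # Clear any stale FUSE endpoint before remounting:
            sudo("fusermount -u /mnt/s3/mybucket || true")
            # Run without a pseudo-tty so the s3fs daemon is not killed with it:
            sudo("mount -a", pty=False)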

    Read the article

  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing Ambari (not through SSH, because I couldn't get keyless to work) and everything installed OK, except for Hive and Ganglia. I got this message:

        stderr: None
        stdout:
        warning: Unrecognised escape sequence '\;' in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32
        warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
        notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as '{md5}7f1d24221266a2330ec55ba620c015a9'
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as '{md5}0c429dc9ae0867b5af74ef85b5530d84'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as '{md5}bae7742f7083db968cb6b2bd208874cb'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o

    When I go to the alerts and health checks I'm getting this:

        Hive Metastore status check CRIT for 42 minutes
        CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect.

    What am I doing wrong? I have already tried ambari-server reset on the database, without results.
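
    The "Unable to instantiate HiveMetaStoreClient" failures suggest the smoke test cannot reach a running metastore at all. Before another reset, it may be worth checking the service directly on the Hive host (the default Thrift port 9083 and config path are assumptions; your hive.metastore.uris value is authoritative):

        # Is the metastore process up and listening?
        netstat -tlnp | grep 9083
        # What does hive-site.xml actually point at?
        grep -A1 hive.metastore.uris /etc/hive/conf/hive-site.xml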

    Read the article

  • Apache Alias subfolder and starting with dot

    - by MauricioOtta
    I have a multi-purpose server running Arch Linux that currently serves multiple virtual hosts from /var/www/domains/EXAMPLE.COM/html and /var/www/domains/EXAMPLE2.COM/html. I deploy those websites (mostly using the Kohana framework) with a Jenkins job that checks out the project, removes the .git folder, ssh-copies the tar.gz to /var/www/domains/ on the server, and untars it. Since I don't want to have to re-install phpMyAdmin after each deploy, I decided to use an alias. I would like the alias to be something like /.tools/phpMyAdmin/ so I could have more "tools" later if I wanted to. I have tried just changing the default httpd-phpmyadmin.conf that was installed by following the official wiki (https://wiki.archlinux.org/index.php/Phpmyadmin):

        Alias /.tools/phpMyAdmin/ "/usr/share/webapps/phpMyAdmin"
        <Directory "/usr/share/webapps/phpMyAdmin">
            AllowOverride All
            Options FollowSymlinks
            Order allow,deny
            Allow from all
            php_admin_value open_basedir "/var/www/:/tmp/:/usr/share/webapps/:/etc/webapps:/usr/share/pear/"
        </Directory>

    Changing only that doesn't seem to work with my current setup, and Apache forwards the request to the framework, which 404s (as there's no route to handle /.tools/phpMyAdmin). I have mass virtual hosting enabled and set up like this:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:8000
        # get the server name from the Host: header
        UseCanonicalName On
        # splittable logs
        LogFormat "%{Host}i %h %l %u %t \"%r\" %s %b" vcommon
        CustomLog logs/access_log vcommon
        <Directory /var/www/domains>
            # ExecCGI is needed here because we can't force
            # CGI execution in the way that ScriptAlias does
            Options FollowSymLinks ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        RewriteEngine On
        # a ServerName derived from a Host: header may be any case at all
        RewriteMap lowercase int:tolower
        ## deal with normal documents first:
        # allow Alias /icons/ to work - repeat for other aliases
        RewriteCond %{REQUEST_URI} !^/icons/
        # allow CGIs to work
        RewriteCond %{REQUEST_URI} !^/cgi-bin/
        # do the magic
        RewriteCond %{SERVER_NAME} ^(www\.|)(.*)
        RewriteRule ^/(.*)$ /var/www/domains/${lowercase:%2}/html/$1
        ## and now deal with CGIs - we have to force a MIME type
        RewriteCond %{REQUEST_URI} ^/cgi-bin/
        RewriteRule ^/(.*)$ /var/www/domains/${lowercase:%{SERVER_NAME}}/cgi-bin/$1 [T=application/x-httpd-cgi]

    There is also nginx running on this server on port 80, as a reverse proxy for Apache:

        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8000;
        }

    Everything else was set up by following the official wiki, so I don't think that would cause trouble. Do I need to set up the alias for phpMyAdmin alongside the mass virtual hosting, or can it live in a separate include file for the alias to work?
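
    The mass-vhost rewrite runs on every request and only exempts /icons/ and /cgi-bin/, so a request for /.tools/phpMyAdmin/ is rewritten into the per-domain document root before the Alias is ever consulted; the config's own comment ("repeat for other aliases") points at the fix. A sketch of the extra exclusion (the escaped leading dot is the only addition):

        # allow the /.tools/ aliases to work, like /icons/ and /cgi-bin/:
        RewriteCond %{REQUEST_URI} !^/\.tools/

    Note also that the nginx location shown only proxies URLs ending in .php, so a request for the bare /.tools/phpMyAdmin/ directory may never reach Apache on port 8000 at all; that is worth testing separately.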

    Read the article

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:
    1. communicate with each other (for instance, a virtual web server has to access a virtual database server);
    2. be reachable from the dedicated server (for instance, SSH access); and
    3. access the Internet via the dedicated server (for instance, to download security updates).
    Currently, this is achieved by a host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve 1 and 2), and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve 3). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (there, bound to vboxnet0); this allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).
    This setup with two virtual adapters frequently leads to problems and at least complicates many things. For instance, on the dedicated server there is OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE... Therefore, I would prefer to have only one single virtual adapter on each guest, used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the questions arise:
    - Which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case).
    - Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", thus without a counterpart on the dedicated server?
    - How would I have to set up the routing on the host server?
    Please note that the host/dedicated server has only one network adapter (eth0), which is connected to the provider's network.
    Regards, Martin
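
    Since bridging onto eth0 is off the table, one host-only adapter per guest can already satisfy all three goals if the host routes and NATs for vboxnet0, which removes the second adapter without needing a bridge. A sketch of both halves (the VM name, subnet and interface names are illustrative, not from the question):

        # Guest side: a single adapter, attached to the host-only network:
        VBoxManage modifyvm "webserver" --nic1 hostonly --hostonlyadapter1 vboxnet0
        # Host side: forward and masquerade the host-only subnet out of eth0:
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE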

    Read the article

  • How to recover a MySQL InnoDB table?

    - by Kau-Boy
    When I try to launch the Plesk administration page of my server, I get the following error:

        ERROR: PleskMainDBException
        MySQL query failed: MySQL server has gone away

    The MySQL server itself is working well. However, the Plesk (psa) database seems to be somehow corrupt, and any action on it results in a restart of the mysql process, so even queries to other databases on the same MySQL server get lost. If I try to connect to the psa database using phpMyAdmin, I can only see the number of tables the database originally had, but I am not able to open the table listing; as soon as I try, the mysql process crashes again. Connecting to the database over ssh works: I can even run a SELECT statement against any table and get a result. I don't know if it is a Plesk error, an error in the psa database, or even the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost doing it?
    Edit: Trying to dump the database, I get the following error:

        mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES

    Edit: I could find out that the table 'data_bases' is responsible for the crash of the MySQL server process, but trying to repair it using a REPAIR TABLE statement doesn't work.
    Edit: I have now dropped the whole database and restored it from a dump. But when I try to recover the data_bases table, I get the following error:

        ERROR 1005 (HY000) at line 24: Can't create table './psa/data_bases.frm' (errno: 121)

    I am not able to create the table again; somewhere in the MySQL system there is still some information about this table. I tested the same thing locally: if I just delete the table files and then try to create the table again, I get the same error. If I drop the table through MySQL, I can create it again afterwards. But when I try to drop this table through MySQL, the whole MySQL system crashes. Is there any way to solve this issue?
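
    Two hedged pointers that match these symptoms: crashing on access to one InnoDB table is the textbook case for starting mysqld with innodb_force_recovery and dumping what is readable, and errno 121 on CREATE usually means an orphaned constraint with the same name still exists in the InnoDB data dictionary. A sketch (recovery levels above 1 exist but get progressively more destructive):

        # Start MySQL in forced-recovery mode, then dump what can be read:
        mysqld_safe --innodb-force-recovery=1 &
        mysqldump --single-transaction psa > psa.sql
        # For the errno 121 hint, inspect the data dictionary state from the client:
        # SHOW ENGINE INNODB STATUS;   (check the LATEST FOREIGN KEY ERROR section)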

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface: I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template, since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort in deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP; therefore, I need a local DHCP server to assign IP addresses to the nodes, so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start.
    Question: What DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option, by performing a DNS lookup on that very host name?
    What I've tried: I tried to make it work using the ISC DHCP server; however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the hardware
        address can be used to match a host declaration, or the host-identifier option
        parameter for DHCPv6 servers. For example, it is not possible to match a host
        declaration to a host-name option. This is because the host-name option cannot
        be guaranteed to be unique for any given client, whereas both the hardware
        address and dhcp-client-identifier option are at least theoretically guaranteed
        to be unique to a given client.

    I also tried to create a class that matches the hostname, like this:

        class "my-client-name" {
            match if option host-name = "my-client-name";
            fixed-address my-client-name.my-domain.com;
        }

    Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
            option routers 10.103.1.1;
            class "my-client-name" {
                match if option host-name = "my-client-name";
            }
            pool {
                allow members of "my-client-name";
                range 10.103.1.2 10.103.1.2;
            }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.
    About security: since this is only used in the bootstrapping phase on an internal network, and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
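
    Stock dhcpd cannot do a DNS lookup at lease time, but the working class-plus-pool pattern above can be generated from Route 53 at deploy time, so the addresses stay administered in one place. A sketch (the host list and include path are illustrative; the generated stanzas belong inside the subnet declaration):

        # Regenerate 1-address pools from DNS, then reload dhcpd:
        for h in my-client-name another-client; do
          ip=$(dig +short "$h.my-domain.com")
          printf 'class "%s" { match if option host-name = "%s"; }\n' "$h" "$h"
          printf 'pool { allow members of "%s"; range %s %s; }\n' "$h" "$ip" "$ip"
        done > /etc/dhcp/generated-pools.conf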

    Read the article

  • Transient network dropout for Xen DomU's

    - by Stephen C
    We've got a CentOS server running a cluster of virtuals. Occasionally the cluster's internal network drops out for a minute or so... and then comes back. The problem is somehow related to the actual network traffic, but it is not a simple load issue. (The system is generally lightly loaded, and the problem occurs irrespective of actual load.)
    The setup:
    - CentOS 5.6 on Dom0, various CentOS versions on the DomUs
    - Hardware: a Dell R710 with a Broadcom NetXtreme II NIC (sigh), using the latest drivers for the NIC from Broadcom
    - Xen configured to use network-bridge and vif-bridge
    - Some iptables tweaks to route an unrelated port to one of the virtuals
    The system has one externally visible IP address, and Dom0 runs an Apache httpd configured with a number of virtual hosts, each of which reverse-proxies to web servers running on the virtuals. (The virtuals have to be NAT'ed, primarily because we don't have enough allocated public IP addresses.)
    The symptoms:
    - Works fine most of the time.
    - When someone tries to UPLOAD a large file to one of the virtuals, the internal network drops out... for all virtuals.
    - The Dom0 httpd sees a network timeout talking to the backend server on the virtual and reports a 502.
    - A previously established ssh connection from Dom0 to any of the DomUs freezes.
    - Our monitoring shows ping failures for traffic between virtuals.
    - The Xen consoles to the DomUs do not freeze.
    - No log messages in any log files that I can see, on either Dom0 or the DomUs... apart from the Dom0 httpd logs.
    - After a minute or so, the problem clears by itself.
    This is 100% reproducible.
    What we've tried:
    - Downloading, building and installing the latest bnx2 driver on Dom0
    - Turning off MSI on the NIC: adding "options bnx2 disable_msi=1" to /etc/modprobe.conf
    - Turning off TCP segmentation offload: "ethtool -K eth0 tso off"
    - Sacrificing a black rooster at midnight.
    I've exhausted all my options apart from switching to KVM... or slaughtering more roosters. Any suggestions?
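
    Since TSO alone didn't help, it may be worth disabling the whole family of offloads on the bridged interface; bnx2 offload bugs under bridging tend to show up exactly under large sustained receives like uploads. A diagnostic sketch (interface naming under Xen's network-bridge varies, e.g. peth0 vs eth0, and older ethtool releases may reject some flags -- drop the ones yours doesn't accept):

        # Turn off every offload the NIC supports, then retest the large upload:
        ethtool -K peth0 tso off gso off gro off sg off rx off tx off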

    Read the article

  • Internet Explorer cannot display page from apache with single SSL virtual host

    - by P.scheit
    I have a question that has come up somehow in different questions, but I still can't find the solution. My problem is that I'm hosting a site on Apache 2.4 on Debian with SSL, and Internet Explorer 7 on Windows XP shows "Internet Explorer cannot display the webpage". I have only ONE virtual host that uses SSL, but DIFFERENT virtual hosts that use plain HTTP. Here is my config for the site with SSL enabled (/etc/sites-available/default-ssl is NOT linked):

        <VirtualHost xx.yyy.86.193:443>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            DocumentRoot "/var/local/www/my-certified-domain.de/current/www"
            Alias /files "/var/local/www/my-certified-domain.de/current/files"
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            <Directory "/var/local/www/my-certified-domain.de/current/www">
                AllowOverride All
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/www.my-certified-domain.de.crt
            SSLCertificateKeyFile /etc/ssl/private/www.my-certified-domain.de.key
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
            SSLCertificateChainFile /etc/apache2/ssl.crt/www.my-certified-domain.de.ca
            BrowserMatch "MSIE [2-8]" nokeepalive downgrade-1.0 force-response-1.0
        </VirtualHost>
        <VirtualHost *:80>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            Redirect permanent / https://www.my-certified-domain.de/
        </VirtualHost>

    My ports.conf looks like this:

        NameVirtualHost *:80
        Listen 80
        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>
        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    The output from apache2ctl -S is like this:

        xx.yyy.86.193:443      www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80    is a NameVirtualHost
                default server phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                port 80 namevhost phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                port 80 namevhost staging.my-certified-domain.de (/etc/apache2/sites-enabled/010-staging.my-certified-domain.de:1)
                port 80 namevhost testing.my-certified-domain.de (/etc/apache2/sites-enabled/015-testing.my-certified-domain.de:1)
                port 80 namevhost www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:31)

    I included the solution from this question: "Internet explorer cannot display the page, other browsers can, possibly htaccess / server error". And I understand the answer to this question: "How to setup Apache NameVirtualHost on SSL?". In fact I have only one SSL certificate for the domain, and I only want to run ONE virtual host with SSL, so I just want to use the one IP for the SSL virtual host. But still (after rebooting/restarting/testing), Internet Explorer will not show the page. When I interpret the apache2ctl -S output, I already have only one SSL host, and this should respond to the initial SSL handshake, shouldn't it? What is wrong in this setup? Thank you so much. Philipp
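
    A way to see exactly what the XP client experiences is to test the handshake without SNI, which is how IE7 on XP connects (a diagnostic sketch; run from any machine with OpenSSL installed):

        # Deliberately omit -servername to simulate a non-SNI client:
        openssl s_client -connect www.my-certified-domain.de:443

    If the handshake itself fails here, the vhost layout is probably not the IE-specific problem; then suspect the cipher string (XP-era clients need something like RC4 or 3DES offered) or the mod_gnutls Listen clause competing with mod_ssl for port 443.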

    Read the article

  • graphic error on console

    - by Christian Elsner
    I have Linux on an embedded system. There is no graphics system, but I still have display errors on the console. For example, if I type:

        ifconfig eth2 hw ether 00:0e:8c:d0:59:d2

    I see:

        ifconfig eth hw ether 00:0e:8:2:2

    If I press Enter, it accepts the command I actually typed, so it's just a matter of displaying. Everything is fine when I log in via SSH. Anyone any ideas what the cause could be, or where to look? Output of lspci:

        00:00.0 Host bridge: Intel Corporation 3100 Chipset Memory I/O Controller Hub
        00:00.1 Unassigned class [ff00]: Intel Corporation 3100 DRAM Controller Error Reporting Registers
        00:01.0 System peripheral: Intel Corporation 3100 Chipset Enhanced DMA Controller
        00:02.0 PCI bridge: Intel Corporation 3100 Chipset PCI Express Port A
        00:03.0 PCI bridge: Intel Corporation 3100 Chipset PCI Express Port A1
        00:1c.0 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 2 (rev 01)
        00:1d.0 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #2 (rev 01)
        00:1d.7 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset EHCI USB2 Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c9)
        00:1f.0 ISA bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC Interface Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation 631xESB/632xESB/3100 Chipset SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation 631xESB/632xESB/3100 Chipset SMBus Controller (rev 01)
        02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
        03:01.0 Network controller: Siemens Nixdorf AG Device 4003 (rev 02)
        03:01.1 Unassigned class [ff00]: Siemens Nixdorf AG Device 4003 (rev 02)
        03:02.0 Ethernet controller: Siemens Nixdorf AG Device 4047 (rev 01)
        03:03.0 Ethernet controller: National Semiconductor Corporation DP83815 (MacPhyter) Ethernet Controller
        03:04.0 Unassigned class [ff00]: Siemens Nixdorf AG Device 4057 (rev 01)
        04:00.0 PCI bridge: Texas Instruments XIO2000(A)/XIO2200(A) PCI Express-to-PCI Bridge (rev 03)
        05:00.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 44)
        06:00.0 PCI bridge: Texas Instruments XIO2000(A)/XIO2200(A) PCI Express-to-PCI Bridge (rev 03)
        07:00.0 VGA compatible controller: Silicon Motion, Inc. SM720 Lynx3DM (rev c1)
        07:01.0 USB Controller: NEC Corporation USB (rev 43)
        07:01.1 USB Controller: NEC Corporation USB (rev 43)
        07:01.2 USB Controller: NEC Corporation USB 2.0 (rev 04)

    The whole thing is running on an Intel Core 2 Duo U2500.
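
    Since SSH sessions render correctly, the corruption is local to the console path (terminal state, console driver, or the SM720 video hardware). Two cheap checks before blaming the hardware (a sketch, assuming standard util-linux tools on the box):

        # Reinitialise the terminal state on the affected console:
        reset
        # If this is a serial console, a mismatched line speed shows similar symptoms:
        stty -a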

    Read the article

  • Excessive Outbound DNS Traffic

    - by user1318414
    I have a VPS which I ran for three years on one host without issue. Recently, the host started sending an extreme amount of outbound DNS traffic to 31.193.132.138. Due to the way that Linode responded to this, I have recently left Linode and moved to 6sync. The server was completely rebuilt on 6sync, with the exception of the Postfix mail configuration. Currently the daemons run are as follows:
    - sshd
    - nginx
    - postfix
    - dovecot
    - php5-fpm (localhost only)
    - spampd (localhost only)
    - clamsmtpd (localhost only)
    Given that the server was 100% rebuilt, that I can't find any serious exploits against the daemons stated above, that passwords have changed, that ssh keys don't even exist on the rebuild yet, etc., it seems extremely unlikely that this is a compromise being used to DoS the address. The IP in question is noted online as a known spam source. My initial assumption was that something was attempting to use my Postfix server as a relay, and the bogus addresses it was providing were domains with that IP registered as their nameservers. I would imagine, given my Postfix configuration, that DNS queries for things such as SPF information would come in with equal or greater frequency than the number of attempted spam e-mails sent. Both Linode and 6sync have said the outbound traffic is extremely disproportionate. The following is all the information I received from Linode regarding the outbound traffic:

        21:28:28.647263 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647264 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647264 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647265 IP 97.107.134.33 > 31.193.132.138: udp
        21:28:28.647265 IP 97.107.134.33.32775 > 31.193.132.138.53: 28720 op8+% [b2&3=0x4134] [17267a] [30550q] [28773n] [14673au][|domain]
        21:28:28.647266 IP 97.107.134.33 > 31.193.132.138: udp

    6sync cannot confirm whether or not the recent spike in outbound traffic was to the same IP or over DNS, but I have presumed as much. For now my server is blocking the entire 31.0.0.0/8 subnet to help deter this while I figure it out. Does anyone have any idea what is going on?
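
    Those tcpdump lines look like malformed DNS (the opcode and the absurd answer/query counts are what tcpdump prints when the payload doesn't parse), which points at raw packets aimed at port 53 rather than real queries from your resolver or Postfix. Capturing a sample with payloads on your own box would settle it (a diagnostic sketch):

        # Grab 100 packets to/from that host, with hex/ASCII payload dumps:
        tcpdump -n -c 100 -X host 31.193.132.138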

    Read the article

  • unable to access a NAT'ed IP via a VPN on Windows 7

    - by crmpicco
    I connect to a range of servers hosted by one provider via a VPN. I can connect to the VPN fine; however, when I then try to connect to the server(s), it fails. A NAT'ed IP address that has worked up until today has stopped working over both SSH and SFTP. As you can see below, if I try to ping the IP it responds with "Destination host unreachable", but for some reason it says the reply is from 192.168.0.8. If I enter this IP address in my browser, I get nothing. Where is this IP coming from, and is there any good reason why I cannot access the IP I am trying to ping?

        C:\Users\crmpicco>ping 172.26.100.x

        Pinging 172.26.100.x with 32 bytes of data:
        Reply from 192.168.0.8: Destination host unreachable.
        Reply from 192.168.0.8: Destination host unreachable.
        Reply from 192.168.0.8: Destination host unreachable.
        Reply from 192.168.0.8: Destination host unreachable.

        Ping statistics for 172.26.100.x:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    I have the VPN remote host address of 80.75.67.x, which shows me as being connected, but I'm unsure whether a config issue at the server side or at my end has caused this. I have had some recent Windows 7 (automatic) updates, but it's hard to tell if they caused this problem. This is my output from arp:

        C:\Users\cmorton>arp -a

        Interface: 192.168.0.8 --- 0xe
          Internet Address      Physical Address      Type
          192.168.0.1           00-18-4d-b9-68-5e     dynamic
          192.168.0.6           00-f4-b9-68-0c-9a     dynamic
          192.168.0.7           08-00-27-f2-9f-d6     dynamic
          192.168.0.255         ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.251           01-00-5e-00-00-fb     static
          224.0.0.252           01-00-5e-00-00-fc     static
          239.255.255.250       01-00-5e-7f-ff-fa     static
          255.255.255.255       ff-ff-ff-ff-ff-ff     static

        Interface: 192.168.56.1 --- 0x15
          Internet Address      Physical Address      Type
          192.168.56.255        ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.251           01-00-5e-00-00-fb     static
          224.0.0.252           01-00-5e-00-00-fc     static
          255.255.255.255       ff-ff-ff-ff-ff-ff     static
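
    "Reply from 192.168.0.8: Destination host unreachable" means your own machine (192.168.0.8 is your local adapter) has no route for 172.26.100.x, so the traffic never enters the tunnel. It is worth checking whether the VPN client still installs its route (a diagnostic sketch for the Windows command prompt; the gateway placeholder is whatever route print shows for the tunnel adapter, not a known value):

        :: Does any route cover the 172.26.0.0 range, and via which interface?
        route print
        :: If it is missing, re-adding it manually can confirm the diagnosis:
        route add 172.26.100.0 mask 255.255.255.0 <tunnel-gateway-ip>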

    Read the article

  • Moving from VPS to Cloud

    - by GRIGORE-TURBODISEL
    ...and I have a few questions. I'm basically working on a MySQL+PHP based webapp. Since I don't have on-demand scaling with a VPS, I'm planning to move from VPS to cloud when I hit the 1000-subscriber barrier. I'm looking at Windows Azure, but I'm OK with other suggestions. So here are my questions:
    1. Will it really cost me a kidney? Every subscriber needs to download around 4-5 MB of static resources each day. Bandwidth is free on the VPS, but here I see costs can easily get to $800.00/mo; this makes me very insecure about the whole thing. I mean, the VPS is just $2,000/yr.
    2. Do I need another VM, or is PHP included in the Web Sites? I have basic sysadmin skills, and I think I can handle setting up a PHP install, but will I have to? If yes, what other services do I need to set up manually? What about Memcached, MySQL, etc.?
    3. What security protections does it include? For example, I currently have some basic protection included, like against directory traversals and executable file uploads; I also have CloudFlare on my other websites for DDoS protection. Will I need to do the same thing here too, can it even be installed, can I edit my DNS records, etc.?
    4. How are e-mails, subdomains, add-on domains, parked domains, etc. handled? I haven't seen any references to e-mail boxes. On the VPS I simply add them from cPanel ([email protected] / whatever.mysite.com / ...); do I have a similar management interface here?
    5. Do I get SSH access? Or at least FTP, remote MySQL access and maybe some incremental backups or something? Can I see my quotas and advanced traffic info?
    I must mention that I really like the idea of the whole "cloud" concept, the added reliability and everything, but I really need maybe a parallel to regular hosting or something, so I know what to expect.

    Read the article

  • Keep source IP after NAT

    - by John Miller
    Until today I used a cheap router so I could share my internet connection and keep a webserver online too, while using NAT. The users' IP ($_SERVER['REMOTE_ADDR']) was fine; I was seeing their real addresses. But as traffic grew, I had to install a Linux server (Debian) to share my internet connection, because my old router couldn't keep up with the traffic anymore. I shared the internet via iptables using NAT, but now, after forwarding port 80 to my webserver, instead of seeing real user IPs, I see my gateway IP (the Linux box's internal IP) as every user's address. How do I solve this issue? Here is the script I currently use to set the rules:

        #!/bin/sh
        # I made a script to set the rules

        # I flush everything here.
        iptables --flush
        iptables --table nat --flush
        iptables --delete-chain
        iptables --table nat --delete-chain
        iptables -F
        iptables -X

        # I drop everything as a general rule, but this is disabled under testing
        # iptables -P INPUT DROP
        # iptables -P OUTPUT DROP

        # these are the loopback rules
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # here I set the SSH port rules, so I can connect to my server
        iptables -A INPUT -p tcp --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

        # These are the forwards for port 80
        iptables -t nat -A PREROUTING -p tcp -s 0/0 -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p tcp -s 192.168.42.3 --sport 80 -j ACCEPT

        # These are the forwards for bind/dns
        iptables -t nat -A PREROUTING -p udp -s 0/0 -d xx.xx.xx.xx --dport 53 -j DNAT --to 192.168.42.3:53
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p udp -s 192.168.42.3 --sport 53 -j ACCEPT

        # And these are the rules so I can share my internet connection
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0:1 -j ACCEPT

    If I delete the MASQUERADE part, I see my real IP while echoing it with PHP, but I don't have internet. How do I keep internet access and still see the real client IP while ports are forwarded too?
    (xx.xx.xx.xx is my public IP; I hid it for security reasons.)
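
    A likely culprit: with the LAN hanging off an eth0:1 alias, iptables treats the alias as eth0, so the blanket MASQUERADE on -o eth0 also rewrites the source of packets being forwarded in to 192.168.42.3. Excluding the internal subnet from the masquerade is a common fix (a sketch, assuming the web server's LAN is 192.168.42.0/24):

        # Masquerade only traffic that is actually leaving for the internet:
        iptables -t nat -A POSTROUTING -o eth0 ! -d 192.168.42.0/24 -j MASQUERADE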

    Read the article

  • What info is really useful in my iptables log and how do I disable the useless bits?

    - by anthony01
    In my iptables rules file, I entered this at the end:

        -A INPUT -j LOG --log-level 4 --log-ip-options --log-prefix "iptables: "

    I DROP everything except INPUT for SSH (port 22). I have a web server, and when I try to connect to it through my browser on a forbidden port number (on purpose), I get something like this in my iptables.log:

        Sep 24 14:05:57 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=64 TOS=0x00 PREC=0x00 TTL=54 ID=59351 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0
        Sep 24 14:06:01 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=48 TOS=0x00 PREC=0x00 TTL=54 ID=63377 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0
        Sep 24 14:06:09 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=48 TOS=0x00 PREC=0x00 TTL=54 ID=55025 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0
        Sep 24 14:06:25 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=48 TOS=0x00 PREC=0x00 TTL=54 ID=54521 DF PROTO=TCP SPT=63776 DPT=1999 WINDOW=65535 RES=0x00 SYN URGP=0
        Sep 24 14:06:55 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=100 TOS=0x00 PREC=0x00 TTL=54 ID=35050 PROTO=TCP SPT=63088 DPT=22 WINDOW=33304 RES=0x00 ACK PSH URGP=0
        Sep 24 14:06:55 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=52 TOS=0x00 PREC=0x00 TTL=54 ID=14076 PROTO=TCP SPT=63088 DPT=22 WINDOW=33264 RES=0x00 ACK URGP=0
        Sep 24 14:06:55 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=52 TOS=0x00 PREC=0x00 TTL=54 ID=5277 PROTO=TCP SPT=63088 DPT=22 WINDOW=33248 RES=0x00 ACK URGP=0
        Sep 24 14:06:56 myserver kernel: [xx.xx] iptables: IN=eth0 OUT= MAC=aa:bb:cc SRC=yy.yy.yy.yy DST=xx.xx.xx.xx LEN=100 TOS=0x00 PREC=0x00 TTL=54 ID=25501 PROTO=TCP SPT=63088 DPT=22 WINDOW=33304 RES=0x00 ACK PSH URGP=0

    As you can see, I typed xx.xx.xx.xx:1999 in my browser, and it tried to connect until it timed out.
    1. There are many similar lines for just one event. Do I really need all of them? How would I avoid the duplicates?
    2. The last four lines are for port 22. But since I allow port 22 INPUT for my web server, why are they here?
    3. Do I need info like LEN, TOS, PREC and the others? I'm trying to find a page that explains them one by one, but I can't find anything.
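
    On question 1: the duplicate lines are TCP retransmissions of the same SYN, so logging only packets that would open a new connection collapses them to roughly one line per attempt. A sketch of the usual pattern (state matching and rate limiting are assumed available, as in most default setups):

        # Log (rate-limited) only NEW connection attempts before the final DROP:
        -A INPUT -m state --state NEW -m limit --limit 5/min -j LOG --log-level 4 --log-ip-options --log-prefix "iptables: "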

    Read the article

  • Port forwarding not working properly

    - by sudo work
    I'm trying to host a small web server from my home network; however, I have not been able to successfully forward ports to the local server. My current network topology looks like this:

        Cable Modem/Router <-> Secondary Wireless Router <-> Many computers (including the server)

    The modem/router is a Cisco (Scientific Atlanta) DPC2100, provided by my ISP. The wireless router that I'm using as the central hub of my home network is a Linksys E3000. The computer being used as a server is running Ubuntu 10.04 Server Edition. The main issue is that I can't access the server remotely using my WAN IP address. I have set up port forwarding on the wireless router; however, I believe that I need to somehow set my modem to bridge mode, and as far as I can tell this isn't possible. Here are the various IP address settings:
    DPC2100:
    - WAN: 69.xxx.xxx.xxx
    - Internal IP: 192.168.100.1
    - Internal network: 192.168.7.0
    E3000:
    - IP address: 192.168.7.2
    - Gateway: 192.168.7.1
    - Internal IP: 192.168.1.1
    - Internal network: 192.168.1.0
    Server:
    - IP address: 192.168.1.123
    - Gateway: 192.168.1.1
    I can run nmap against various nodes, and here are the results (from the server):

        nmap localhost:      22,25,53,80,110,139,143,445,631,993,995,3306,5432,8080 open
        nmap 192.168.7.2:    22,25,80 (filtered),110,139,445 open (ports I have forwarded in the E3000)*
        nmap 69.xxx.xxx.xxx: 1720 open

    *For some reason, I can SSH into the server at 192.168.7.2, but not view the website.
    Here are also some other settings. /etc/hosts:

        127.0.0.1 localhost
        127.0.1.1 servername
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    /etc/apache2/sites-available/default snippet:

        <VirtualHost *:80>
            DocumentRoot /srv/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
            ...
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
            ...
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
            ...
            </Directory>
        </VirtualHost>

    Let me know if you need any other information; some stuff probably slipped my mind.
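
    The nmap results already localise the problem: port 80 shows "filtered" at 192.168.7.2 while 22 is open, so the E3000's SSH forward works and the HTTP one does not (or the modem is intercepting port 80). Testing each hop separately narrows it further (a diagnostic sketch; run the last command from a host outside the network, e.g. a phone off Wi-Fi):

        # From the server itself -- confirms Apache answers locally:
        curl -I http://localhost/
        # From a LAN machine, against the E3000's WAN-side address:
        curl -I http://192.168.7.2/
        # From outside, against the public address:
        curl -I http://69.xxx.xxx.xxx/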

    Read the article
