Search Results

Search found 3260 results on 131 pages for 'debian squeeze'.


  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have three sites, each with its own IP address, running on Apache 2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and which IP it should correspond to. So I have the following in that directory:

        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr

    My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar that is also hosting the site's data. Then I did an update, reinstalled mysql-server and mysql-common, and did I-have-forgotten-what to reinstall the locales (utf8 and such), which had vanished for some reason. This fixed my first site.

    Now I have noticed that the other two sites are offline. Pointing a browser at them just hangs until timeout. They used to work, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default.ssl in it. Why are there two directories, sites-enabled and sites-available? Should I copy the files from sites-enabled into sites-available? Or should I put a modified version of each in sites-available?

    Output of apache2ctl -S:

        VirtualHost configuration:
        92.243.20.169:80       Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80       xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80        xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and default servers:
        *:80                   is a NameVirtualHost
                 default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
                 port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
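
    For reference, on a stock Debian/Ubuntu layout the two directories separate configuration from activation: sites-available holds every virtual host definition, and sites-enabled holds symlinks to the subset Apache should actually load, managed by a2ensite/a2dissite. A sketch of moving one of the sites above into that layout (paths taken from the question; adjust as needed):

        # keep the real file in sites-available and let a2ensite manage the symlink
        sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        sudo a2ensite 001-www.lapf.eu
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload

    Plain files in sites-enabled do work, which is why sites-available is not strictly necessary; it only preserves the ability to disable a site without deleting its configuration.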

    Read the article

  • Can I create an SSH user which can access only certain directory?

    - by RiMMER
    I have a virtual private server that I can connect to over SSH with my root account, which can obviously execute any Linux command and access the whole disk. I would like to create another user account that can also access this server using SSH, but only within a certain directory, for example /var/www/example.com/. For example, imagine this user has a HUGE error log file (500 MB) located at /var/www/example.com/logs/error.log. When accessing this file over FTP, the user needs to download all 500 MB just to view the last lines of the log, but I'd like him to be able to run something like:

        tail error.log

    So I need him to be able to access the server over SSH, but I don't want to grant him access to the rest of the server. How can I do this?
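
    One common approach (a sketch, untested here; 'logreader' is a made-up account name) is OpenSSH's built-in chroot support. The chroot target must be owned by root and not group-writable:

        # the chroot directory itself must be root-owned
        sudo chown root:root /var/www/example.com

        # then append a Match block to /etc/ssh/sshd_config:
        Match User logreader
            ChrootDirectory /var/www/example.com
            AllowTcpForwarding no
            X11Forwarding no

    After restarting sshd (sudo /etc/init.d/ssh restart), the user lands inside /var/www/example.com at login. One caveat: for commands like tail to run inside the chroot, the binary and its shared libraries have to be copied into it; the lighter alternative is to add ForceCommand internal-sftp and let the user fetch the tail of the file over SFTP instead.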

    Read the article

  • DKIM-filter: no signature data

    - by Vineet Sharma
    I have installed dkim-filter on Postfix after reading this tutorial: http://www.unibia.com/unibianet/systems-networking/how-setup-domainkeys-identified-mail-dkim-postfix-and-ubuntu-server My email now has a DKIM signature, but it still lands in the SPAM folder. Here is the header:

        Received-SPF: neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=69.164.193.167;
        Authentication-Results: mx.google.com; spf=neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected];
               dkim=hardfail (test mode) [email protected]
        Received: from promote.a2labs.in (localhost [127.0.0.1])
               by promote.a2labs.in (Postfix) with ESMTPA id 34858530E8
               for <[email protected]>; Mon, 28 Feb 2011 12:23:07 +0530 (IST)
        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=a2labs.in; s=mail;
               t=1298875987; bh=bo+H1VYPIHMja2u7i1lnzr4k/j4Pe8iSf79bVw94XpI=;
               h=To:Subject:Message-ID:Date:From:Reply-To:MIME-Version:
                Content-Type:Content-Transfer-Encoding;
               b=nhTdlnUwo0iUJ92ycQzKSRjw5Pfya0DJcJrAc8Mr2hIv8OLpgzBCzdOMWTGqR5nuUmAzgCGYBhYAM2XZwVxo9JG/iz7oYKysmNQnskFx0TRyW3UOkDWcfHcPnCL6Y7fGzZWinmsyjsg47k+mKZg/e8jqlwTAMOPYKkt5pBz7SM0=

    My mail.err file also shows:

        Feb 28 12:17:03 ivineet dkim-filter[32181]: 1F788530E1: no signature data
        Feb 28 12:18:02 ivineet dkim-filter[32181]: 432BA530E2: no signature data

    How can I fix it?
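
    The dkim=hardfail (test mode) result means Google fetched a key for selector mail at domain a2labs.in and the signature did not verify against it. A first check worth running (a suggestion, based on the d= and s= tags in the header above) is whether the published DNS record exists and matches the private key the filter signs with:

        # the public key must live at <selector>._domainkey.<domain>
        dig +short TXT mail._domainkey.a2labs.in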

    Read the article

  • SVN: Error validating server certificate in a post-commit hook (Linux)

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and a TortoiseSVN client on Windows. I made a post-commit hook for a test project; the hook updates the web directory so the PHP app runs with the newest version. It all works when done over the shell. The only problem is that when I commit the changes from the client on Windows, the change is committed but the hook throws an error:

        post-commit hook failed (exit code 1) with output:
        Error validating server certificate for 'https://SERVER_IP:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
         - The certificate hostname does not match.
        Certificate information:
         - Hostname: DEVSRVR
         - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
         - Issuer: PHP, SS, SS, SRB
         - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
        (R)eject, accept (t)emporarily or accept (p)ermanently?
        svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
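
    The hook runs non-interactively, so it can never answer the (R)eject/(t)emporarily/(p)ermanently prompt. A common remedy (a sketch; which account the hook runs as is an assumption, often www-data or the daemon user running Apache/svnserve) is to accept the certificate once, interactively, as that user so it gets cached under its ~/.subversion/auth:

        # run once as the user the hook executes as, and answer 'p' at the prompt
        sudo -u www-data svn info https://SERVER_IP/svn/myproject/trunk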

    Read the article

  • Upgrade from Etch to Lenny, server no longer reboots

    - by Jack
    Hi, I just upgraded my (remote) web server from Etch to Lenny, and no real issues occurred except that I cannot reboot the server (I can only start it in rescue mode). I recall that an MBR change was performed during the upgrade, and I am wondering if that could be the reason for this failure. Would someone know how I can revert this MBR change? I have already switched back to my previous Linux kernel and still cannot reboot. I would appreciate your help. Thanks, Jack
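
    If the MBR is the culprit, reinstalling GRUB from the rescue system usually overwrites whatever the upgrade put there. A sketch, assuming the root filesystem is on /dev/sda1 (adjust the device names to this server):

        mount /dev/sda1 /mnt
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        chroot /mnt grub-install /dev/sda
        chroot /mnt update-grub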

    Read the article

  • OOM-killer called every now and then

    - by SpyrosP
    Hello there, I have a dedicated server where I've installed apache2 as well as Rails Passenger. Although I have 2 GB of RAM and about 1.5 GB is usually free, at random times I lose SSH and general connectivity because oom-killer is killing processes. I suppose there is a memory leak, but I cannot find out where it comes from; oom-killer kills apache2, mysql, passenger, whatever. Yesterday I ran

        cat syslog | grep -c oom-killer

    and got 57 occurrences! It seems that something seriously destroys the memory. Once I reboot, everything comes back to normal. I suspect it may be related to Passenger, but I'm still trying to figure it out. Can you think of anything else, or do you have anything to suggest that would make identifying the leak easier? (I was even thinking of writing a bash script to be run from cron every 5 minutes.)
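
    A minimal sketch of that cron script: snapshot overall memory and the top RSS consumers every five minutes, so after the next incident the log shows which process was growing (the script path and log file are made-up names):

        #!/bin/bash
        # /usr/local/bin/memlog.sh -- crontab entry: */5 * * * * /usr/local/bin/memlog.sh
        {
          date
          free -m                          # overall memory and swap usage
          ps aux --sort=-rss | head -n 15  # 15 largest processes by resident memory
          echo
        } >> /var/log/memlog 2>&1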

    Read the article

  • Create .gitconfig for chrooted users

    - by Vincent LITUR
    I have several chrooted users on my server, and I want to enable git for specific users. I get stuck at this command, run while connected as the user:

        git config --global user.name "user_name"

    which fails with:

        error: could not lock config file /home/username/.gitconfig: Permission denied

    I tried creating the file as root and then doing chmod 755 and chown username .gitconfig, but I still get the error. Is there a way to do this?

    Edit: this question answers mine: http://stackoverflow.com/questions/17908386/unable-to-create-gitconfig-file-for-user
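
    Two things worth checking inside the chroot (suggestions only): what $HOME actually resolves to there, since in a chroot it often differs from the real home directory and git writes .gitconfig wherever $HOME points; and, if the per-user file stays unwritable, whether a system-wide config works instead:

        echo "$HOME"                               # where git will try to write .gitconfig
        touch "$HOME/.gitconfig" && echo writable  # quick writability check
        git config --system user.name "user_name"  # writes /etc/gitconfig (inside the chroot); needs write access there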

    Read the article

  • Apache, ISPConfig & Roundcube alias

    - by Jay Zus
    I'm using ISPConfig to set up all the websites on my server, but I also like to fiddle with the files myself to see how they work. Like you guessed, yes, I've broken something. I can no longer reach the webmail that is set up by default on the server under the alias /webmail (I access it via http://xxx.xxx.xx.xx/webmail). Firefox tells me that the page isn't redirecting properly: "Firefox has detected that the server is redirecting the request for this address in a way that will never complete." So I cleaned up my vhost files, and one of my websites works as intended; I think the problem comes from my default.vhost. Here is its content:

        <Directory /var/www/>
            AllowOverride None
            Order Deny,Allow
            Deny from all
        </Directory>
        <VirtualHost *:80>
            DocumentRoot /var/www/
            ServerAdmin [email protected]
            <Directory /var/www/>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    This isn't a lot, and I can't really see what's wrong with it; all I know is that it isn't the one that came with ISPConfig, and I can't find an original one. Here's the roundcube.conf that Apache loads:

        # Those aliases do not work properly with several hosts on your apache server
        # Uncomment them to use it or adapt them to your configuration
        #    Alias /roundcube/program/js/tiny_mce/ /usr/share/tinymce/www/
        #    Alias /roundcube /var/lib/roundcube
        Alias /webmail /var/lib/roundcube/

        # Access to tinymce files
        <Directory "/usr/share/tinymce/www/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        <Directory /var/lib/roundcube/>
            Options +FollowSymLinks
            # This is needed to parse /var/lib/roundcube/.htaccess. See its
            # content before setting AllowOverride to None.
            AllowOverride All
            order allow,deny
            allow from all
        </Directory>

        # Protecting basic directories:
        <Directory /var/lib/roundcube/config>
            Options -FollowSymLinks
            AllowOverride None
        </Directory>

        <Directory /var/lib/roundcube/temp>
            Options -FollowSymLinks
            AllowOverride None
            Order allow,deny
            Deny from all
        </Directory>

        <Directory /var/lib/roundcube/logs>
            Options -FollowSymLinks
            AllowOverride None
            Order allow,deny
            Deny from all
        </Directory>

    I didn't touch that file, but I guess it has something to do with the problem; I just can't find why it doesn't work.

    EDIT: these are the errors in my Apache log:

        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 540 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        [21/Jun/2013:21:10:54 +0200] "GET /webmail/ HTTP/1.1" 302 475 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0"
        (the same 302 on /webmail/ repeats five more times in the same second)
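
    The repeated 302s on /webmail/ mean something keeps redirecting the request back to itself. A quick way to see where each hop points (just a diagnostic suggestion) is to inspect the Location headers from the command line:

        # -s: quiet, -I: headers only, -L: follow redirects
        curl -sIL http://xxx.xxx.xx.xx/webmail/ | grep -iE '^(HTTP|Location)'

    If the Location header points back at /webmail/, the usual culprit is a rewrite rule in /var/lib/roundcube/.htaccess interacting with the AllowOverride All block above.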

    Read the article

  • Is there any way to limit my Internet connection to a per program basis?

    - by Igoru
    My Linux connection is REALLY free. I live in Brazil, so where I live I can only get 1 Mbit/s. Yes, I know it's sad, but that's not the point. Every time I'm updating my Ubuntu 9.04 or downloading something, it eats all my bandwidth. While update-manager is downloading the packages, I can see in the netspeed applet in my panel that incoming traffic goes up to 110 kB/s; then my Emesene suddenly gets disconnected and I can't browse. As you can imagine, I can't use my Internet connection again until the packages have all downloaded or I cancel the update in the middle. As I said, the same thing happens when I'm downloading something else, but it's less intrusive and immediate. The question is: is there any way to limit the APT/download traffic so I can still use my other Internet services, or to reserve some bandwidth for everyday browsing (like we have on Windows, but I forgot that thing's name)?
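
    Two options that fit this setup (sketches; the rate numbers are arbitrary, and check that your APT version supports the option): APT can cap its own HTTP download rate, and trickle can shape an individual program:

        # cap APT's HTTP downloads at ~60 KB/s
        echo 'Acquire::http::Dl-Limit "60";' | sudo tee /etc/apt/apt.conf.d/75download-limit

        # or shape a single command with trickle (sudo apt-get install trickle)
        trickle -d 60 wget http://example.com/file.iso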

    Read the article

  • Not getting gigabit from a gigabit link?

    - by marcusw
    I just upgraded my LAN to gigabit. This is what netperf has to say about things. Before:

        marcus@lt:~$ netperf -H 192.168.1.1
        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec

         87380  16384  16384    10.02      94.13

    After:

        marcus@lt:~$ netperf -H 192.168.1.1
        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 (192.168.1.1) port 0 AF_INET : demo
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec

         87380  16384  16384    10.01     339.15

    Only 340 Mbit/s? What's up with that? Background info: I'm connecting through a gigabit switch to a SheevaPlug. I have Cat5e wiring in the walls, and the run is maybe 30 feet. If you're not familiar with netperf, it has a tendency to give very stable results and never lie.

    Read the article

  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    On a system running Ubuntu 10.04, my RAID-1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!):

        dimmer@paimon:~$ cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdc1[2] sdb1[1]
              1953513408 blocks [2/1] [_U]
              [====>................]  recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec

        unused devices: <none>

    even though I have set the kernel variables to reasonably quick values:

        dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min
        1000000
        dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max
        100000000

    I am using two 2.0 TB Western Digital hard disks, a WDC WD20EARS-00M and a WDC WD20EARS-00J. I believe they have been partitioned so that their sectors are aligned:

        dimmer@paimon:/sys$ sudo parted /dev/sdb
        GNU Parted 2.2
        Using /dev/sdb
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) p
        Model: ATA WDC WD20EARS-00M (scsi)
        Disk /dev/sdb: 2000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2000GB  2000GB  ext4

        (parted) unit s
        (parted) p
        Number  Start  End          Size         File system  Name  Flags
         1      2048s  3907028991s  3907026944s  ext4

        (parted) q

        dimmer@paimon:/sys$ sudo parted /dev/sdc
        GNU Parted 2.2
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) p
        Model: ATA WDC WD20EARS-00J (scsi)
        Disk /dev/sdc: 2000GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2000GB  2000GB  ext4

    I am beginning to think that I have a hardware problem; otherwise I can't imagine why the mdadm restore should be so slow. I benchmarked /dev/sdc using Ubuntu's Disk Utility GUI app, and the results looked normal, so I know sdc is capable of writing faster than this. I also had the same problem on a similar WD drive that I RMA'd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although no SMART values are showing them yet. Any ideas? Thanks.

    As requested, the output of top sorted by CPU usage (notice there is ~0 CPU usage; iowait is also zero, which seems strange):

        top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30
        Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   3096304k total,  1482164k used,  1614140k free,   617672k buffers
        Swap:  1526132k total,        0k used,  1526132k free,   535416k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
           45 root     20   0     0    0    0 S    0  0.0  2:17.02 scsi_eh_0
            1 root     20   0  2808 1752 1204 S    0  0.1  0:00.46 init
            2 root     20   0     0    0    0 S    0  0.0  0:00.00 kthreadd
            3 root     RT   0     0    0    0 S    0  0.0  0:00.02 migration/0
            4 root     20   0     0    0    0 S    0  0.0  0:00.17 ksoftirqd/0
            5 root     RT   0     0    0    0 S    0  0.0  0:00.00 watchdog/0
            6 root     RT   0     0    0    0 S    0  0.0  0:00.02 migration/1
          ...
    dmesg errors, definitely looking like hardware:

        [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [202884.007015] ata5.00: failed command: FLUSH CACHE EXT
        [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        [202884.013730]          res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout)
        [202884.033667] ata5.00: status: { DRDY }
        [202884.040329] ata5: hard resetting link
        [202889.400050] ata5: link is slow to respond, please be patient (ready=0)
        [202894.048087] ata5: COMRESET failed (errno=-16)
        [202894.054663] ata5: hard resetting link
        [202899.412049] ata5: link is slow to respond, please be patient (ready=0)
        [202904.060107] ata5: COMRESET failed (errno=-16)
        [202904.066646] ata5: hard resetting link
        [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [202905.849178] ata5.00: configured for UDMA/133
        [202905.849188] ata5: EH complete
        [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [203899.007096] ata5.00: failed command: IDENTIFY DEVICE
        [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [203899.013843]          res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout)
        [203899.041232] ata5.00: status: { DRDY }
        [203899.048133] ata5: hard resetting link
        [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [203899.826062] ata5.00: configured for UDMA/133
        [203899.826079] ata5: EH complete
        [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [204375.007421] ata5.00: failed command: IDENTIFY DEVICE
        [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [204375.014800]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
        [204375.044374] ata5.00: status: { DRDY }
        [204375.051842] ata5: hard resetting link
        [204380.408049] ata5: link is slow to respond, please be patient (ready=0)
        [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [204384.449938] ata5.00: configured for UDMA/133
        [204384.449955] ata5: EH complete
        [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [204395.988140] ata5.00: failed command: IDENTIFY DEVICE
        [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [204395.988149]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
        [204395.988151] ata5.00: status: { DRDY }
        [204395.988156] ata5: hard resetting link
        [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [204399.330487] ata5.00: configured for UDMA/133
        [204399.330503] ata5: EH complete
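
    Given the repeated command timeouts and link resets above, a sensible next step (a suggestion; mapping ata5 to a device node varies per system, so confirm which disk it is before acting) is to query SMART directly and run a self-test:

        sudo smartctl -a /dev/sdc          # attributes plus the drive's own error log
        sudo smartctl -t short /dev/sdc    # self-test; results appear in -a output afterwards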

    Read the article

  • Best Postfix spam RBL policy weight daemon?

    - by TRS-80
    I just heard about policyd-weight, so I did an apt-cache search policyd, which returns three options:

        policyd-weight
        postfix-policyd
        postfwd

    Which one is the best, and do you have any tips on setting them up? Our current setup is whitelister plus postgrey to greylist RBL'd hosts, then fail2ban them for 10 minutes if they have 10 failures, followed by content filtering (Kaspersky Anti-Spam). The content filtering is pretty good, but a lot of spam still gets through the RBL greylisting.
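
    For reference, wiring any of these into Postfix looks roughly the same regardless of which daemon wins: each one listens on a local port that gets added as a check_policy_service under smtpd_recipient_restrictions. A sketch for policyd-weight, whose default port is 12525 (adapt the restriction list to your existing setup):

        sudo postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, check_policy_service inet:127.0.0.1:12525'
        sudo postfix reload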

    Read the article

  • Added autossh in rc.local, but the dynamic port forwarding won't work

    - by rankjie
    I am using Raspbian on my newly arrived Raspberry Pi and decided to make it my own proxy server, so I need to set up an SSH tunnel from the Pi to my Linode server and make it start automatically with the system. What I did: added this line to /etc/rc.local:

        autossh -f theRemoteServer -N -D 5555 -L 1234:localhost:22

    After a reboot, I found that I can't use localhost:5555 as a SOCKS proxy. When I run ps -A | grep ssh, I can see autossh and ssh running:

        pi@raspberrypi ~ $ ps -A | grep ssh
         2018 ?        00:00:00 sshd
         2116 ?        00:00:00 autossh
         2119 ?        00:00:00 sshd
         2195 ?        00:00:00 sshd
         3173 ?        00:00:00 ssh

    (I've installed autossh, and the command works if I type it manually. I use passwordless key auth, so I don't have to enter a password.) Much appreciated, and sorry for my poor English.
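
    Two quick checks that narrow this down (suggestions only): confirm something is actually listening on port 5555, and remember that rc.local runs as root, so the SSH keys and known_hosts that matter there are root's, not pi's:

        sudo netstat -tlnp | grep 5555           # is the SOCKS port open, and owned by which process?
        sudo ssh -v -N -D 5555 theRemoteServer   # run root's ssh in the foreground to surface auth/host-key errors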

    Read the article

  • DNS propagation delay or bad configuration?

    - by Javier Martinez
    I have been waiting for DNS propagation for almost 24 hours. I'm not impatient, but I want to know whether I configured my zone correctly or whether there is an error in it. I think it is good, because if I use my server's DNS as my secondary DNS, I can resolve and look up hosts fine.

        ;
        ; BIND data file for mydomain.net
        ;
        $TTL    86400
        @       IN      SOA     mydomain.net. mydomain.net. (
                                20120629        ; Serial
                                10800           ; Refresh 3 hours
                                3600            ; Retry 1 hour
                                604800          ; Expire 1 week
                                86400 )         ; Negative Cache TTL
        ;
        @       IN      NS      ns1
        @       IN      NS      ns2
                IN      MX      10 mail
        ns1     IN      A       5.39.X.Y
        ns2     IN      A       5.39.X.Z

    There are no errors about the BIND daemon in /var/syslog. Is everything correct? Do I only need to wait up to 48 hours for DNS propagation? My nslookup from a remote machine using the BIND host as the nameserver:

        $ nslookup mydomain.net
        Server:   bind-host-ip
        Address:  bind-host-ip#53

        Name:     mydomain.net
        Address:  domain-ip
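
    A way to separate "propagation delay" from "broken delegation" (a suggestion) is to walk the chain from the root and to ask the parent servers directly for your NS records:

        dig +trace mydomain.net SOA                # follows delegation from the root down
        dig NS mydomain.net @a.gtld-servers.net    # what the .net servers currently hand out

    If the parent already returns ns1/ns2 with the 5.39.X.Y addresses, the zone above should be resolvable and waiting longer won't change anything; if it doesn't, the problem is at the registrar, not in the zone file.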

    Read the article

  • Bacula vs. BackupPC

    - by chronoz
    I have been googling about the differences between them:

        - Bacula has lots of roles
        - BackupPC is easier to configure
        - Bacula works with an agent rather than rsync (great for Windows backups)

    It seems that Bacula is most often compared to Amanda, though, while BackupPC seems a perfectly lovely and popular backup solution too. I currently back up my servers with rsnapshot, but I am looking for a professional, scalable solution that could also back up 50 hosts without problems; preferably a solution that can offer bare-metal restores for my Linux servers. I am not looking to reinstall the exact same version of Plesk, the software, etc.

    Read the article

  • Is there software that will help me convert my PST files into a searchable web archive?

    - by chronoz
    I have used POP3 for many, many years and always used PST files for backup purposes. I'd like to create a searchable mail archive out of this 12 GB worth of e-mail. I used Horde + Qmail for a while for searching e-mail, but it was truly horrible, and extremely slow even when searching a few tens of thousands of e-mails, let alone more than a million. I would prefer a free solution that provides fast searching through historical e-mails, preferably hosted on a server, so I don't have to worry about backing up yet more crucial data on my desktop.
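
    Whatever indexer ends up hosting the archive, the PST files first have to be converted to an open format; readpst from the pst-utils package is the usual tool on Linux (a sketch; the file names are made up):

        sudo apt-get install pst-utils
        mkdir archive
        readpst -r -o archive/ backup.pst   # -r recurses folders, -o sets the output directory

    The resulting mbox folders can then be imported into an IMAP server or fed to a full-text indexer.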

    Read the article

  • mount dev, proc, sys in a chroot environment?

    - by Patrick
    I'm trying to create a Linux image with hand-picked packages, following the guide at http://www.olpcnews.com/forum/index.php?topic=4766.0 However, when I tried to install some packages, configuration failed because the proc, sys, and dev directories were missing. I learned from other sources that I need to "mount" the host's proc and friends into my chroot environment, but I have seen two syntaxes and am not sure which one to use. On the host machine:

        mount --bind /proc <chroot dir>/proc

    and another syntax (inside the chroot environment):

        mount -t proc none /proc

    Which one should I use, and what is the difference?

    Edit: What I'm trying to do is hand-craft the packages I'm going to use on an XO laptop, because compiling packages takes a really long time on the real XO hardware. If I can build all the packages I need and just flash the image to the XO, I can save time and space.
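
    For what it's worth, the two commands differ in where the filesystem instance comes from: --bind re-exposes the host's already-mounted /proc, while -t proc mounts a fresh procfs; for /proc the visible result is the same, since procfs always reflects the running kernel. A common full sequence before chrooting looks like this (a sketch; adjust the chroot path):

        sudo mount --bind /proc /path/to/chroot/proc
        sudo mount --bind /sys  /path/to/chroot/sys
        sudo mount --bind /dev  /path/to/chroot/dev
        sudo chroot /path/to/chroot /bin/bash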

    Read the article

  • FTP not listing files behind firewall (setsockopt (ignored): Permission denied)

    - by KennyDs
    We are developing a Magento application that has a module that works with FTP. Today we deployed this on the testing environment, which is set up as follows: a gateway server with these iptables rules:

        # iptables -L -n -v
        Chain INPUT (policy ACCEPT 2 packets, 130 bytes)
         pkts bytes target   prot opt in   out  source     destination
            0     0 ACCEPT   all  --  lo   *    0.0.0.0/0  0.0.0.0/0
          165 13720 ACCEPT   all  --  *    *    0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED

        Chain FORWARD (policy ACCEPT 7 packets, 606 bytes)
         pkts bytes target   prot opt in   out  source     destination
            0     0 ACCEPT   all  --  eth1 eth0 0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED
           15   965 ACCEPT   all  --  eth0 eth1 0.0.0.0/0  0.0.0.0/0
            0     0 REJECT   all  --  eth1 eth1 0.0.0.0/0  0.0.0.0/0   reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 126 packets, 31690 bytes)
         pkts bytes target   prot opt in   out  source     destination

    These are set at runtime via the following bash script:

        #!/bin/sh
        PATH=/usr/sbin:/sbin:/bin:/usr/bin
        #
        # delete all existing rules.
        #
        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -X

        # Always accept loopback traffic
        iptables -A INPUT -i lo -j ACCEPT

        # Allow established connections, and those not coming from the outside
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow outgoing connections from the LAN side.
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

        # Masquerade.
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

        # Don't forward from the outside to the inside.
        iptables -A FORWARD -i eth1 -o eth1 -j REJECT

        # Enable routing.
        echo 1 > /proc/sys/net/ipv4/ip_forward

    The gateway server is connected to the WAN via eth1 and to the internal network via eth0. One of the servers from eth1 has the following problem when trying to list files over FTP:

        $ ftp -vd myftpserver.com
        Connected to myftpserver.com
        220 Welcome to MY FTP Server
        ftp: setsockopt: Bad file descriptor
        Name (myftpserver.com:magento): XXXXXXXX
        ---> USER XXXXXXXX
        331 User XXXXXXXX, password please
        Password:
        ---> PASS XXXX
        230 Password Ok, User logged in
        ---> SYST
        215 UNIX Type: L8
        Remote system type is UNIX. Using binary mode to transfer files.
        ftp> ls
        ftp: setsockopt (ignored): Permission denied
        ---> PORT 192,168,19,15,135,75
        421 Service not available, remote server has closed connection

    When I try listing the files in passive mode, I get the same result. When I run the same command on the gateway server, everything works fine, so I believe the iptables rules are not forwarding properly. Does anyone have an idea which rule I need to add to make this work?
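
    FTP opens a second, dynamically negotiated connection for directory listings and transfers, and stateful NAT can only classify that connection as RELATED if the FTP connection-tracking helper is loaded; that module is the usual missing piece behind exactly this PORT-then-disconnect symptom (a suggestion; the module name is per mainline kernels):

        sudo modprobe nf_conntrack_ftp                      # ip_conntrack_ftp on older kernels
        echo nf_conntrack_ftp | sudo tee -a /etc/modules    # also load it at boot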

    Read the article

  • How to fix grub after moving root partition?

    - by Grzenio
    Hi, because I am using one of the new WD disks, I am trying to align my root partition with the real sectors, as described here: http://community.wdc.com/t5/Desktop/Problem-with-WD-Advanced-Format-drive-in-LINUX-WD15EARS/m-p/10920#M631 So I copied all the files to a temporary location, deleted my partition (/dev/sda3), recreated it a few cylinders later (same name), and copied the files to the newly created partition. But now when I try to boot, I get my old GRUB menu, but after selecting my kernel version it hangs. Any idea how I can fix it?
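
    The menu still appearing while the kernel fails to load is consistent with GRUB locating its files by stale block locations after the partition moved. If this is GRUB legacy (a guess, given the era and the WD EARS drives), re-embedding it from the GRUB shell is often enough; (hd0,2) below assumes /dev/sda3 is the third partition of the first disk:

        sudo grub
        grub> root (hd0,2)
        grub> setup (hd0)
        grub> quit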

    Read the article

  • How to make VirtualBox headless answer on rdp port?

    - by stiv
    I'd like to run Windows XP over RDP:

        $ VBoxManage modifyvm winxp32 --vrdeport 3389
        $ VBoxHeadless -s winxp32 -v on
        Oracle VM VirtualBox Headless Interface 4.1.18_Debian
        (C) 2008-2012 Oracle Corporation
        All rights reserved.
        (waiting)

    In another window:

        $ telnet localhost 3389
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    Yes, I've read about the extension pack:

        $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack
        0%...
        Progress state: NS_ERROR_FAILURE
        VBoxManage: error: Failed to install "Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack":
        Extension pack 'Oracle VM VirtualBox Extension Pack' is already installed.
        In case of a reinstallation, please uninstall it first

    I have looked through all the manuals and help requests with no success. What's wrong? Any ideas?
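
    One detail worth double-checking (a guess, based on the commands shown): setting --vrdeport does not by itself switch the VRDE server on. Enabling it explicitly and confirming its state would rule that out:

        VBoxManage modifyvm winxp32 --vrde on --vrdeport 3389
        VBoxManage showvminfo winxp32 | grep -i vrde    # should report VRDE enabled and the port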

    Read the article

  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install WordPress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/ However, the last command at step 7, service nginx reload, gave me a strange error. A copy-paste from my terminal:

        root@server:~# service nginx reload
        Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7
        nginx: configuration file /etc/nginx/nginx.conf test failed

    When I nano into sites-enabled/wordpress, I can't find anything strange on the 7th line:

        <!DOCTYPE html>
        <html class=" ">
        <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
        <meta charset='utf-8'>
        <meta http-equiv="X-UA-Compatible" content="IE=edge">

    Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

    Any help is appreciated, thanks a lot in advance!
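
    A hedged observation: the sites-enabled/wordpress excerpt above is an HTML document, not nginx configuration, which would produce exactly this kind of parse error (the file may have been saved from the tutorial's web page rather than its code listing). What nginx expects in that file is a server block along these lines (a minimal sketch; the paths and PHP socket are assumptions):

        server {
            listen 80;
            server_name example.com;
            root /var/www/wordpress;
            index index.php;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;  # assumed PHP-FPM socket path
            }
        }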

    Read the article

  • How to use the second volume device of Amazon EC2

    - by Khoyendra Pande
    I have two volumes on Amazon EC2; the default 1 GiB volume, which I have been using, is full. Now I want to use my second volume, which is 9 GiB. I ran cat /proc/partitions and got:

        major minor  #blocks  name
         202      1  1048576  xvda1
         202     80  9437184  xvdf

    Then I ran mkfs.ext3 -F /dev/sdf, which reports:

        mkfs.ext3: No such file or directory while trying to determine filesystem size

    Then I ran df and got:

        Filesystem   1K-blocks    Used Available Use% Mounted on
        /dev/xvda1     1032088 1031280         0 100% /
        tmpfs           313160       8    313152   1% /lib/init/rw
        udev            297800      24    297776   1% /dev
        tmpfs           313160       4    313156   1% /dev/shm
        overflow          1024      32       992   4% /tmp

    So I am still unable to use my 9 GiB volume. I have confirmed that I have two volumes, with attachment information i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached), where only sda1 is in use. Does anyone know how I can use the second volume (sdf)? Thanks
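
    Judging by the /proc/partitions output above, the kernel exposes the attachment /dev/sdf as /dev/xvdf (the usual Xen device renaming on EC2), so formatting and mounting that name should work (a sketch; the mount point is arbitrary):

        sudo mkfs.ext3 /dev/xvdf
        sudo mkdir -p /mnt/data
        sudo mount /dev/xvdf /mnt/data
        df -h /mnt/data    # confirm the 9 GiB filesystem is mounted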

    Read the article
