Search Results

Search found 29876 results on 1196 pages for 'ubuntu 8 04 lts'.


  • Setting up vsftpd, hangs on list command

    - by Victor
    I installed vsftpd and configured it. When I try to connect to the FTP server using Transmit, it manages to connect but hangs on Listing "/". Then I get a message stating: Could not retrieve file listing for "/". Control connection timed out. Does it have anything to do with my iptables? My rules are as follows:

        *filter
        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT
        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT
        # Allows SSH connections
        #
        # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        #
        -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        # log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT
        COMMIT
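    A likely culprit (an assumption; the question does not confirm it) is the FTP data connection: the rules above accept port 21, which carries only the control channel, while the directory listing travels over a separate data connection that the default-deny policy rejects. A minimal sketch of two common fixes:

        # Fix 1: load the FTP connection-tracking helper so data connections
        # are classified RELATED and matched by the existing state rule
        sudo modprobe nf_conntrack_ftp                     # ip_conntrack_ftp on older kernels
        echo nf_conntrack_ftp | sudo tee -a /etc/modules   # persist across reboots

        # Fix 2: pin vsftpd to a fixed passive port range (ports here are arbitrary)
        # /etc/vsftpd.conf
        #   pasv_enable=YES
        #   pasv_min_port=40000
        #   pasv_max_port=40100
        # ...and open that range in the filter:
        -A INPUT -p tcp -m tcp --dport 40000:40100 -j ACCEPT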

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Or should I just start with one server and worry about scaling horizontally later, by dedicating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSes?
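    Colocating all three is common; the main risk is Mongo and Redis competing for RAM under load. A hedged sketch of bounding each service (values are illustrative, not recommendations):

        # /etc/redis/redis.conf - cap Redis so it cannot starve MongoDB
        maxmemory 2gb
        maxmemory-policy allkeys-lru

        # PHP-FPM pool config (path varies by distro) - bound worker count so
        # Apache traffic spikes cannot eat the database's working set
        pm = dynamic
        pm.max_children = 40

    MongoDB of that era (MMAPv1) memory-maps its data files and yields to OS memory pressure, so capping its neighbours is usually the practical lever.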

    Read the article

  • Can't re-open "Group My Tabs" page after killing Firefox

    - by isomorphismes
    Due to memory problems I closed Firefox while the Group My Tabs (Ctrl+E) feature was displaying the multiple tab groups. After 20 minutes the process still hadn't finished, so I killed it with killall firefox. When I restart Firefox, a JavaScript script called tabview.js hangs unless I click Stop at the warning dialog. If I do hit Stop, I can no longer open the tab view, so I can't get to any of the tabs except those in the group I was last looking at. Any suggestions?
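    The tab-group state lives in the session store, so it may be worth backing that file up before experimenting, and ruling out an extension with safe mode. A sketch, assuming a default profile name (adjust the glob to your profile):

        # back up the session file first
        cp ~/.mozilla/firefox/*.default/sessionstore.js ~/sessionstore.backup.js
        # start with extensions and hardware acceleration disabled
        firefox -safe-mode

    Whether this recovers the groups depends on how badly the session file was damaged by the kill.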

    Read the article

  • Dual-head monitor system Kubuntu 10.04

    - by andrii
    I have an Asus V6X00V notebook with a 1400x1050 internal panel (name: LVDS) and a Dell 1920x1080 monitor (VGA-0). I want a dual-monitor setup. Under MS Windows everything works fine. During the Kubuntu installation both monitors ran at their correct resolutions (1920x1080 and 1400x1050), but at some stage both were changed to 1152x864. The correct resolutions now appear only during shutdown and on the console, so the system clearly can drive them; the problem is just a setting. I am using Size & Orientation in System Settings to adjust it, but any option that changes the resolution or position (Absolute, Left Of, Right Of and so on) of either monitor causes coloured line noise on the screens. I have tried xrandr:

        xrandr --output LVDS --mode 1400x1050 --pos 0x0 \
               --output VGA-0 --mode 1920x1080 --right-of LVDS --pos 1400x0

    but got the same result. I have found out that, for example, the previous version of RandR (1.2; I now have xrandr 1.3) needed an xorg.conf modification to create a big virtual screen, but Kubuntu 10.04 doesn't ship an xorg.conf and I don't know whether I should create one for xrandr 1.3. Please help me solve this problem.
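    Screen noise like this often means the framebuffer is too small for the combined layout. With RandR 1.3 you can usually grow it on the fly; a sketch (the size is computed from the two modes: 1400 + 1920 wide, max height 1080; whether this is the actual cause here is an assumption):

        # enlarge the framebuffer before positioning the outputs
        xrandr --fb 3320x1080 \
               --output LVDS  --mode 1400x1050 --pos 0x0 \
               --output VGA-0 --mode 1920x1080 --pos 1400x0

    If the driver still refuses, a minimal xorg.conf with a Virtual line in the Display subsection (e.g. Virtual 3320 1080) forces the allocation at startup.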

    Read the article

  • Apache not directing to correct VHost

    - by BANANENMANNFRAU
    I have set up the following virtual host (the angle-bracket wrapper is restored here; it was stripped in the original post):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName mysite.com
            ServerAlias www.mysite.com
            DocumentRoot /var/www/homepage/public_html
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    When I hit my URL, Apache still shows the default page, not the index I've created in the given DocumentRoot. In my domain I have set the A record to the IP of my VPS. apache2ctl -S output:

        VirtualHost configuration:
        *:80    is a NameVirtualHost
                default server xxxxxx.stratoserver.net (/etc/apache2/sites-enabled/000-default.conf:1)
                port 80 namevhost xxxxxxx.stratoserver.net (/etc/apache2/sites-enabled/000-default.conf:1)
                port 80 namevhost mysite.com (/etc/apache2/sites-enabled/homepage.conf:1)
                        alias www.mysite.com
        ServerRoot: "/etc/apache2"
        Main DocumentRoot: "/var/www"
        Main ErrorLog: "/var/log/apache2/error.log"
        Mutex default: dir="/var/lock/apache2" mechanism=fcntl
        Mutex mpm-accept: using_defaults
        Mutex watchdog-callback: using_defaults
        PidFile: "/var/run/apache2/apache2.pid"
        Define: DUMP_VHOSTS
        Define: DUMP_RUN_CFG
        User: name="www-data" id=33 not_used
        Group: name="www-data" id=33 not_used

    How would I need to set up my virtual host so that Apache serves the correct site depending on the domain the request arrives for?
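    Since apache2ctl -S shows the vhost loaded, the usual suspect (an assumption here) is that the request's Host header doesn't match ServerName/ServerAlias, so Apache falls back to the first-listed default site. A sketch of checks:

        # confirm which vhost answers for the Host header you expect
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: mysite.com' http://<VPS-IP>/
        # disable the default site so unmatched requests cannot fall through to it
        sudo a2dissite 000-default
        sudo service apache2 reload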

    Read the article

  • VMware postfix server drops connection

    - by nicoX
    Our physical server godzilla forwards mail to our virtual VMware server b4. They are on the same network. The connection often drops: we can't ping godzilla from b4, which means mail from godzilla won't reach b4 and piles up in the mail queue. Sometimes it takes some hours and the issue fixes itself: b4 wakes up and the mail is delivered. Also, if we SSH into b4 remotely, b4 wakes up and receives any queued mail from godzilla.

        netadmin@b4:/var/log$ arp -a
        ? (192.168.209.80) at 00:1E:C9:AE:79:9D [ether] on eth0

        root@godzilla:/usr/local/bin# arp -a
        ? (192.168.209.20) at 00:50:56:91:7d:b2 [ether] on eth0
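    The "wakes up when you SSH in" pattern suggests (an assumption, not confirmed by the question) that the guest's ARP entry expires and nothing refreshes it until the VM itself generates traffic. A hedged workaround keeps the link warm from the guest side, using the address from the arp output above:

        # /etc/cron.d/keepalive-godzilla - hypothetical cron entry on b4
        * * * * * root ping -c 1 192.168.209.80 >/dev/null 2>&1

    A proper fix would be in the VMware NIC/vSwitch settings, but the cron ping confirms whether stale ARP is really the trigger.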

    Read the article

  • How to toggle location bar format in Nautilus?

    - by C.W.Holeman II
    In a previous version of Nautilus the location bar had a button for toggling between a text field for the directory path and a row of buttons (one per directory in the path). With the install of Ubuntu 10.04, Nautilus 2.30.1 has just the button bar and no button to toggle to the text field. How do I get the control that was available before?
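    In Nautilus 2.30 the text entry is still there, just without the button. A sketch of both the per-window shortcut and the GConf key that makes it the default:

        # temporarily: press Ctrl+L in any Nautilus window
        # permanently: make the text entry the default
        gconftool-2 --type bool --set /apps/nautilus/preferences/always_use_location_entry true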

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some less, so some of them I wish to put on RAID-6, while for others RAID-5 is sufficient. It is difficult to predict at array-creation time how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100 GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more RAID-6 space, for example, I'd take a 100 GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would sit completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure reflecting the logical structure of the data rather than that of the storage. But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it could help me. If so, what do you think would be the best setup?
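    For context: ZFS sets redundancy per vdev, not per directory, but per-dataset options give some of the same flexibility. A sketch (device names are placeholders):

        # Option A: one raidz2 (RAID-6-like) pool across all five disks,
        # with per-dataset tuning instead of per-directory RAID levels
        zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
        zfs create -o compression=on tank/important
        zfs create -o compression=on tank/bulk

        # Option B: keep extra block replicas only for the important dataset
        zfs set copies=2 tank/important

    Datasets share the pool's free space, which sidesteps the "how much of each type to declare up front" problem entirely.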

    Read the article

  • Java JRE: Setting default heap size

    - by AndiDog
    I'm having trouble with Java on a virtual server; it always gives me the following error:

        # java
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I first solved this by using the CACAO virtual machine (from OpenJDK, by putting it first in jvm.cfg), but then I ran into problems with my web application (it is based on the Play! framework and gives me nasty LinkageErrors), so I cannot use that VM. Instead I'd like to use the normal server VM and set -Xmx128M by default. How can I do that? Related: this question
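    The JRE prepends the contents of the _JAVA_OPTIONS and JAVA_TOOL_OPTIONS environment variables to every launch, which gives a per-user or system-wide default without touching each command. A sketch:

        # /etc/environment or ~/.profile - picked up by every java invocation
        export _JAVA_OPTIONS="-Xmx128m"
        # JAVA_TOOL_OPTIONS is the standardised equivalent and is also honoured
        export JAVA_TOOL_OPTIONS="-Xmx128m"

    Java prints "Picked up _JAVA_OPTIONS: ..." on startup, so it is easy to confirm the setting took effect.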

    Read the article

  • localhost as hostname confusion [duplicate]

    - by Baboon
    I have a basic understanding of hostnames and FQDNs. Now I am confused: do I really have to specify a name for my hostname? So for example:

        Hostname: somename
        Domain:   mydomain.com
        FQDN:     somename.mydomain.com

    Now I see a machine whose hostname is localhost. What is the difference and the impact of that? So if localhost were my hostname, my FQDN would be localhost.mydomain.com, right?
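    On Debian/Ubuntu the two pieces live in separate files; a sketch of the conventional layout:

        # /etc/hostname - the short host name
        somename
        # /etc/hosts - maps it to the FQDN (127.0.1.1 is the Debian convention)
        127.0.1.1   somename.mydomain.com   somename

        # verify
        hostname      # -> somename
        hostname -f   # -> somename.mydomain.com

    Leaving the hostname as localhost "works", but it makes logs, mail HELO names and cluster tooling ambiguous, which is why a unique name is recommended.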

    Read the article

  • Difference between key_buffers and recommendation

    - by Typeoneerror
    I'm looking to add a bit of memory to MySQL on a Linode VPS on which I run a small Facebook (canvas app) PHP app backed by MySQL. I'm not super familiar with MySQL optimization, so I'm hoping for a simple answer. I think I want to increase the key_buffer size (the default is 16M) to something like 32M to start, but I'm not sure if I need to tweak anything else as well. All I've done so far is increase query_cache_size from 16M to 32M. There's also key_buffer under [mysqld] and key_buffer under [isamchk]. What is the difference between those two? If I have a Linode 2048 MB (http://www.linode.com) VPS, what would you recommend I set the buffers to? I don't expect this site to have tons of visitors, but I'd like it to be as optimized as possible. It is definitely far heavier on database access than on PHP, with very few HTTP requests.
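    To answer the terminology part: key_buffer under [mysqld] is the server's MyISAM index cache, while the one under [isamchk] applies only to the offline table-repair utility and has no effect on the running server. A hedged my.cnf sketch for a 2 GB VPS (values are illustrative starting points, not tuned recommendations):

        [mysqld]
        key_buffer_size   = 64M    # MyISAM index cache (the server-side one)
        query_cache_size  = 32M
        query_cache_limit = 1M

        [isamchk]
        key_buffer = 128M          # used only by the repair tool, so it can be generous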

    Read the article

  • Print the same text several times on one page

    - by RiaD
    I have an .odt (or PDF, or PS) file. Really I have .odt, but I can easily convert it. It consists of one page. Now I want to print it to another PDF with that page repeated 4 times on one page. There is a "pages per side" option, so if I copy-paste my document 4 times and set this option to 4, I get the expected result. But I want to do it without copy-pasting, because it's quite annoying to do before each printing. Is there a simpler way?
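    A minimal sketch using pdfjam (shipped in texlive-extra-utils), which accepts a page specification, so the same page can be repeated without editing the document:

        # repeat page 1 four times, laid out 2x2 on a single output page
        pdfjam input.pdf '1,1,1,1' --nup 2x2 --outfile out.pdf

    Convert the .odt to PDF first (e.g. with LibreOffice's export); input.pdf and out.pdf are placeholder names.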

    Read the article

  • Unable to get to remote samba share

    - by tubaguy50035
    I have a remote VPS on which I would like to set up Samba, allowing only my IP access to it. I currently have in my smb.conf:

        [global]
        netbios name = apollo
        security = user
        encrypt passwords = true
        socket options = TCP_NODELAY
        printing = bsd
        log level = 3
        log file = /var/log/samba/log/%m
        debug timestamp = yes
        max log size = 100

        [hosting]
        path = /hosting/
        comment = Hosting Folder
        browseable = yes
        read only = yes
        guest account = yes
        valid users = nick

    I have the ports (137, 138, 139, 445) open in iptables (they're open to everyone right now while I debug) and I see nothing in the syslog about iptables blocking my requests. When I open a file browser to \\ipaddress, it hangs for a good thirty seconds and then shows a login box. I enter my username and password for the server and hit OK. It then shows the same box; I enter my credentials again and hit Enter. Windows then tells me it could not connect. My user account is already added to Samba. Does anybody have any suggestions for getting this working?
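    Two things worth checking: a repeated password prompt usually means the SMB password was never set (it is separate from the Unix one), and "guest account = yes" looks off, since guest account expects a username ("guest ok" is the yes/no option). A sketch:

        # set a Samba password for the account (independent of the Unix password)
        sudo smbpasswd -a nick
        # test locally before involving Windows
        smbclient -L localhost -U nick
        # to restrict access to one address, in [global] or per share (IP is a placeholder):
        #   hosts allow = 203.0.113.5
        #   hosts deny  = 0.0.0.0/0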

    Read the article

  • Separation of memory-intensive and CPU-intensive processes

    - by Jeevan Dongre
    I am a DevOps guy working for an e-commerce company, running an e-commerce application built with Ruby on Rails and Spree Commerce. I am presently running two medium instances in production: one high-memory instance with 3.8 GB RAM and a single-core CPU, and one high-CPU instance with a dual-core CPU. AWS calls them m1.medium and c1.medium respectively. My question: is it possible to separate the processes into CPU-intensive and memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me some pointers. Thank you.
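    Standard procps tooling is enough to classify the processes; a sketch:

        # top memory consumers
        ps aux --sort=-%mem | head -n 10
        # top CPU consumers
        ps aux --sort=-%cpu | head -n 10

    In practice the split is then done at the service level (for example, database and cache on the high-memory box, Rails app servers on the high-CPU box) rather than by migrating individual processes.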

    Read the article

  • Desktop directory disappears in gnome-terminal, then appears again, but all files in it are deleted

    - by Ingen
    I am able to see my desktop with all its various links and files. But in the terminal, when I try to access the Desktop directory:

        cd ~/Desktop

    I get:

        bash: cd: /home/administrator/Desktop: No such file or directory

    Then I find I am unable to open any of the files on the desktop when I click on them, although the file icons are there. Then the icons disappear after my clicking on them. Then I am able to access the Desktop directory in the terminal, but the directory is empty, i.e. all the files/folders have been deleted. What's going on? How can I fix this?
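    Two quick checks that would narrow this down (paths are the XDG defaults; the underlying cause here is not confirmed by the question):

        # is the desktop folder remapped somewhere unexpected?
        grep DESKTOP ~/.config/user-dirs.dirs
        # a full or read-only filesystem, or disk errors, can produce exactly
        # this now-you-see-it, now-you-don't behaviour
        df -h ~
        dmesg | tail -n 20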

    Read the article

  • How to avoid re-pulling from git each time you do a make world on the Xen source

    - by Registered User
    I am compiling Xen from source, and each time I do a make world it gives me some error or other. My problem is not those errors (I am debugging them), but that each time I do a make world, Xen pulls the kernel tree from the git repository again:

        + rm -rf linux-2.6-pvops.git linux-2.6-pvops.git.tmp
        + mkdir linux-2.6-pvops.git.tmp
        + rmdir linux-2.6-pvops.git.tmp
        + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp
        Initialized empty Git repository in /usr/src/xen-4.0.1/linux-2.6-pvops.git.tmp/.git/
        remote: Counting objects: 1941611, done.
        remote: Compressing objects: 100% (319127/319127), done.
        remote: Total 1941611 (delta 1614302), reused 1930655 (delta 1604595)
        Receiving objects: 20% (1941611/1941611), 98.17 MiB | 87 KiB/s, done.

    As the last line shows, it is still consuming my bandwidth pulling things from the internet. How can I skip this step each time and use the existing git repository?
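    One generic way to stop the re-download without patching the build scripts is git's URL rewriting: keep a local mirror once, then redirect every clone of that URL to it. A sketch (/srv/mirror is an arbitrary path):

        # one-time: local mirror of the kernel tree
        git clone --mirror git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git /srv/mirror/xen.git
        # rewrite every clone/fetch of that URL to read from disk instead
        git config --global url."/srv/mirror/xen.git".insteadOf \
            "git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git"

    The build's own clone step then hits the mirror; refresh it occasionally with git --git-dir=/srv/mirror/xen.git fetch.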

    Read the article

  • pdns-recursor allocates resources to non-existing queries

    - by azzid
    I've got a lab server running pdns-recursor. I set it up to experiment with rate limiting, so it has been resolving requests openly from the whole internet for weeks. My idea was that sooner or later it would get abused, giving me a real use case to experiment with. To keep track of usage I set up Nagios to monitor the number of concurrent queries to the server. Today Nagios notified me that my specified limit had been reached. I logged in to start trimming away the malicious queries I was expecting; however, when I started looking, I couldn't see the expected traffic. The following command describes the situation well:

        $ sudo rec_control get concurrent-queries; sudo rec_control top-remotes
        22
        Over last 0 queries:

    How can there be 22 concurrent queries when the server has 0 queries registered?

    EDIT: Figured it out! To get top-remotes working I needed to set:

        #################################
        # remotes-ringbuffer-entries   maximum number of packets to store statistics for
        #
        remotes-ringbuffer-entries=100000

    It defaults to 0, storing no information on which to base top-remotes statistics.

    Read the article

  • [RESOLVED] Why does iptables NAT stop working when I enable the firewall's third interface?

    - by Kronick
    On my firewall I have three interfaces:

        eth0   : public IP (46.X.X.X)
        eth0:0 : public IP (46.X.X.Y)
        eth1   : public IP (88.X.X.X)
        eth2   : private LAN (172.X.X.X)

    I've set up basic NAT, which works great until I bring up the eth1 interface, at which point I basically lose connectivity. When I take the interface down (ifconfig eth1 down), the NAT works again. I've added some policy routing via iproute, which makes all three public IPs reachable. I don't understand why bringing eth1 up makes the LAN unavailable.

    PS: weirder still, when I bring eth1 up BUT remove the NAT, the firewall is reachable on its public IPs. So to me it's purely a NAT issue: without the NAT the network works, and with the NAT but without the second public interface the NAT also works.

    Regards

    EDIT: I've been able to make it work using iproute2 rules. It was definitely a routing issue. Here is what I did:

        ip rule add prio 50 table main
        ip rule add prio 201 from ip1/netmask table 201
        ip rule add prio 202 from ip2/netmask table 202
        ip route add default via gateway1 dev interface1 src ip1 proto static table 201
        ip route append prohibit default table 201 metric 1 proto static
        ip route add default via gateway2 dev interface2 src ip2 proto static table 202
        ip route append prohibit default table 202 metric 1 proto static
        # multipath
        ip rule add prio 221 table 221
        ip route add default table 221 proto static \
            nexthop via gateway1 dev interface1 weight 2 \
            nexthop via gateway2 dev interface2 weight 3

    Read the article

  • Ubuntu 10.04/CURL: How do I fix/update the CA Bundle?

    - by Nick
    I recently upgraded our server from 8.04 to 10.04, and all the software along with it. From what I've found online, it seems that the new version of cURL doesn't include a CA bundle and, as a result, fails to verify that the certificate of the server you're connecting to is signed by a valid authority. The actual error is:

        CURL error: SSL certificate problem, verify that the CA cert is OK.
        Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE: certificate verify failed

    Some places I've found suggest manually specifying a CA file or disabling the check altogether by setting an option when you call cURL, but I'd much rather fix the issue globally than modify each application's cURL calls. Is there a way to fix cURL's CA problem server-wide so that all the existing application code works as is, without needing to be modified?
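    A sketch of the system-wide angle, assuming the distribution CA bundle is the missing piece:

        # make sure the distribution CA bundle is present and current
        sudo apt-get install --reinstall ca-certificates
        # the command-line curl tool honours this variable globally
        export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt

    For libcurl embedded in PHP, a global php.ini knob (curl.cainfo) only arrived in PHP 5.3.7; on the PHP shipped with 10.04 the CA file still has to be set per handle via CURLOPT_CAINFO, so a shared wrapper function around curl_init() is the usual compromise.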

    Read the article

  • Is it possible to extend a 504 timeout in nginx on a per-location basis?

    - by codecowboy
    Is it possible to set timeout directives within a location block to prevent nginx returning a 504 from a long-running PHP script (PHP-FPM)?

        location /myurlsegment/ {
            client_body_timeout 1000000;
            send_timeout 1000000;
            fastcgi_read_timeout 1000000;
        }

    This has no effect when making a request to example.com/myurlsegment: the timeout occurs after approximately 60 seconds. PHP is configured to allow the script to run until completion (set_time_limit(0)). I don't want to set a global timeout for all scripts.
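    fastcgi_read_timeout only takes effect in the location that actually proxies to PHP-FPM; if these requests are handled by a generic "location ~ \.php$" block, the block above never fires. A sketch of a dedicated location for the slow endpoint (socket path and URL pattern are assumptions):

        location ~ ^/myurlsegment/.*\.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_read_timeout 600s;    # overrides the 60s default just here
        }

    Also check PHP-FPM's own request_terminate_timeout, which kills workers independently of nginx's timeouts.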

    Read the article

  • Problems bringing up a second virtual network interface

    - by tubaguy50035
    I'm having issues adding a second IP address to one interface. Below is my /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # eth0 is our main IP address
        auto eth0
        iface eth0 inet static
            address 198.58.103.*
            netmask 255.255.255.0
            gateway 198.58.103.1

        # eth0:0 is our private address
        auto eth0:0
        iface eth0:0 inet static
            address 192.168.129.134
            netmask 255.255.128.0

        # eth0:1 is for www.site.com
        auto eth0:1
        iface eth0:1 inet static
            address 198.58.104.*
            netmask 255.255.255.0
            gateway 198.58.104.1

    When I run /etc/init.d/networking restart, I get a failure bringing up eth0:1:

        RTNETLINK answers: File exists
        Failed to bring up eth0:1.

    Any reason this would be? I didn't have any problems when I first set up eth0 and eth0:0.
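    "RTNETLINK answers: File exists" is the classic symptom of trying to install a second default route: eth0 already added one, and the gateway line on eth0:1 tries to add another. A sketch of the eth0:1 stanza without the conflict:

        auto eth0:1
        iface eth0:1 inet static
            address 198.58.104.*
            netmask 255.255.255.0
            # no gateway line: only one default route can exist.
            # If replies must leave via 198.58.104.1, use policy routing instead:
            # post-up ip route add default via 198.58.104.1 table 100
            # post-up ip rule add from 198.58.104.0/24 table 100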

    Read the article

  • Can't get samba to see other PCs in Kubuntu 10.04 (Lucid)

    - by MaurizioPz
    I'm new to networking. I'm trying to share a folder between two computers (both running Kubuntu 10.04). I'm able to share a folder with Samba and can see that folder through Samba on the same computer, but from the other PC I can't see the first one. Both PCs are in the "workgroup" workgroup. I've tried disabling the firewall with Firestarter. Can somebody help me? Thanks.

    Update: here's my smb.conf: http://pastebin.com/SpuES468
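    Browsing (seeing machines in the network view) depends on NetBIOS name resolution, which often fails even when the share itself is reachable. Connecting directly by IP separates the two problems; a sketch (IP, share and user names are placeholders):

        # does the server answer by IP?
        smbclient -L 192.168.1.10 -U yourusername
        smbclient //192.168.1.10/shared -U yourusername
        # in Dolphin, smb://192.168.1.10/shared does the same

    If the direct connection works, the issue is discovery (nmbd / the master browser), not the share configuration.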

    Read the article

  • Authentication error in LTSP client

    - by sat
    I am building an LTSP server with LDAP authentication for LTSP clients. I have configured the LDAP server as well. When I try to log in from an LTSP client in the GUI, I get "No response from server, restarting"; it then restarts the GUI and returns to the login screen. I thought there could be a problem with LDAP authentication, but when I try to log in from the Ctrl+Alt+F1 terminal on the LTSP client, the LDAP user logs in successfully. The LDAP server and authentication are working fine. Even after executing the commands below, I still get the same error:

        ltsp-update-sshkeys
        ltsp-update-kernels
        ltsp-update-image --arch i386

    Do I need to configure anything for GUI login from the LTSP client? How can I fix this issue?
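    LDM (the LTSP login screen) reaches the server over SSH, so a failing GUI login with a working console login often means the SSH leg fails for LDAP users, commonly because no home directory exists yet. A hedged sketch of checks (pam_mkhomedir being the missing piece is an assumption):

        # from the client shell (Ctrl+Alt+F1), test the same path LDM uses
        ssh ldapuser@server
        # on the server, auto-create home directories on first LDAP login:
        # /etc/pam.d/common-session
        session required pam_mkhomedir.so skel=/etc/skel umask=0022

    The client-side /var/log/ldm.log usually names the exact failure.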

    Read the article
