Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.

Page 172/1831 | < Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >

  • Active node stops resources when passive node is shut down

    - by Wakaru44
    2 nodes, active/pasive. 2 resources, a virtual ip, openLdap, and the nfs mount where openldap saves the data. When both nodes are up, things worked fine. You could move resources away and put the active in stanby. But when i rebooted the passive node, ( with the resources in the active node), and the passive node loses conectivity, all the resources in the active where stopped by pacemaker. I'm reading the documentation right now, but I just need a little quick tip to figure what could be hapenning here. Im using: corosync pacemaker RHEL 6

    Read the article
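
    A common cause in a two-node cluster is loss of quorum: when the peer goes away, the survivor no longer holds a majority, and Pacemaker's default no-quorum-policy of "stop" halts every resource. A minimal sketch of the usual workaround, assuming the crm shell is installed and that running without quorum is acceptable for this cluster:

        # Keep resources running when quorum is lost
        # (only sane for a two-node cluster, ideally with fencing configured).
        crm configure property no-quorum-policy=ignore

        # Confirm the property and watch cluster state on the next peer reboot.
        crm configure show | grep no-quorum-policy
        crm_mon -1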

  • Impossible to connect after days of trying

    - by dany
    I have a problem: I am on Debian. I configured my nic with a static ip (192.168.1.56). When I try to connect to a network, initially with ifconfig eth2 I get (correctly): eth2 inet addr:192.168.1.56 .... inet6 addr: fe80:221:ff:fe96:4598/64 but after a few seconds the 102.168.1.56 disappears and after some other seconds disappears the inet6 address too. When I press in the nm-applet it requires me the password but in the meantime it try to connect. At uni, the connection is a DHCP one. It works for the first few seconds but after it doesn't. Any possible solution? Here it is the relevant part of the syslog: (static ip configuration) http://pastebin.com/u3BPAsda

    Read the article
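
    Symptoms like this often come from NetworkManager and ifupdown both managing the same interface: the static address gets applied, then NetworkManager tears it down while it negotiates. A minimal sketch, assuming the interface really is eth2 and that /etc/network/interfaces should own it (address and gateway below are illustrative):

        # /etc/network/interfaces -- let ifupdown own eth2
        auto eth2
        iface eth2 inet static
            address 192.168.1.56
            netmask 255.255.255.0
            gateway 192.168.1.1

        # Tell NetworkManager to leave ifupdown-managed interfaces alone:
        # in /etc/NetworkManager/NetworkManager.conf set
        #   [ifupdown]
        #   managed=false
        # then restart both:
        service network-manager restart
        ifdown eth2 && ifup eth2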

  • How to change the Nginx default folder?

    - by Ido Bukin
    I setup a server with Nginx and i set my Public_HTML in - /home/user/public_html/website.com/public And its always redirect to - /usr/local/nginx/html/ How can i change this ? Nginx.conf - user www-data www-data; worker_processes 4; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 5; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include /usr/local/nginx/sites-enabled/*; } /usr/local/nginx/sites-enabled/default - server { listen 80; server_name localhost; location / { root html; index index.php index.html index.htm; } # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } /usr/local/nginx/sites-available/website.com - server { listen 80; server_name website.com; rewrite ^/(.*) http://www.website.com/$1 permanent; } server { listen 80; server_name www.website.com; access_log /home/user/public_html/website.com/log/access.log; error_log /home/user/public_html/website.com/log/error.log; location / { root /home/user/public_html/website.com/public/; index index.php index.html; } # pass the PHP scripts to FastCGI server listening on # 127.0.0.1:9000 location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name; } } The error message I get is Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php' the server try to find the file in the Nginx folder and not in my Public_Html

    Read the article
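
    Requests fall back to /usr/local/nginx/html when no enabled server block matches the request's Host header, which usually means the website.com vhost never made it into sites-enabled, or the request arrives under a name that only the catch-all default server matches. A minimal sketch, assuming nginx was built from source into /usr/local/nginx (binary path assumed from a default source install):

        # Enable the vhost so it is picked up by
        # "include /usr/local/nginx/sites-enabled/*;" in nginx.conf.
        ln -s /usr/local/nginx/sites-available/website.com \
              /usr/local/nginx/sites-enabled/website.com

        # Optionally disable the catch-all default site so unmatched requests
        # no longer land in /usr/local/nginx/html.
        rm /usr/local/nginx/sites-enabled/default

        # Check the configuration and reload.
        /usr/local/nginx/sbin/nginx -t
        /usr/local/nginx/sbin/nginx -s reload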

  • Disabling auto-mount on a Lubuntu LiveCD

    - by PxE Booter
    In cooperation with my IT teacher we want to boot all PC's in IT class with Lubuntu. I've successfully set up PXE server, but there is one thing that worries us. Harddrives shouldn't be accessible from booted Lubuntu(normal user only). Would adding to fstab something like: /dev/sda1 /Idk/What auto noauto work? I'd like to add that I can uncompress squashfs livecd filesystem. If no, what other solution is there, to block auto-mounting /dev/sda drive?

    Read the article
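
    An fstab entry with noauto only stops mount -a from mounting the partition; desktop auto-mounting goes through udisks, so hiding the disk from udisks inside the remastered squashfs is usually the more reliable route. A minimal sketch of a udev rule baked into the image (the filename is arbitrary; which UDISKS variable matters depends on whether the image ships udisks or udisks2, so both are set here):

        # /etc/udev/rules.d/99-hide-internal-disks.rules
        # Hide the internal drive and its partitions from the desktop automounter.
        KERNEL=="sda*", ENV{UDISKS_IGNORE}="1", ENV{UDISKS_PRESENTATION_HIDE}="1"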

  • How do I set up a virtual host? (It's not working, and I've done everything right)

    - by piratepartypumpkin
    My router redirects port 80 to port 8080. My router works fine and my domain name is routed properly. This is my virtual hosts file: NameVirtualHost *:80 <VirtualHost *:80> DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress ServerName example.com ServerAlias www.example.com </VirtualHost> I can access my website by entering "mywebsite.com:8080" but I cannot access it by entering "mywebsite.com" For further information, this is a part of my httpd.conf: Listen 8080 Servername localhost:8080 DocumentRoot "/home/admins/lampstack-5.3.16-0/apache2/htdocs <Directory /> Options FollowSymLinks AllowOverride None Order deny, allow deny from all </Directory> <Directory "/home/admins/lampstack-5.3.16-0/apache2/htdocs"> Options FollowSymLinks AllowOverride None Order allow, deny allow from all </Directory>

    Read the article
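
    Since Apache itself only listens on 8080 (the router translates external port 80 to it), a virtual host declared for *:80 never matches any incoming connection. A minimal sketch, assuming the router's port-forward stays in place:

        # httpd.conf already has:  Listen 8080
        NameVirtualHost *:8080

        <VirtualHost *:8080>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>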

  • Does the mplayer-mozilla plugin have a problem with long DIVX videos or is this a problem with the U

    - by creamcheese
    Pretty consistently when I'm watching a long DIVX movie in the browser on Karmic - say 2 hours or so - the mplayer-mozilla player stops after a half-hour or an hour and resets back to 0, so I have to reload the whole movie again over the web. It happens repeatedly so I never get past the half-way point in the DIVX version of the movie - I have to watch a flash version instead. I don't know if this is an mplayer issue or a Firefox memory issue. Anyone have any idea how to resolve this?

    Read the article

  • Ubuntu 10.10 - PC shuts down shortly after BIOS loads, before booting

    - by clem
    Since installing Ubuntu 10.10 from Karmic I've started getting problems with starting up the PC. I've done a complete wipe (Boot and Nuke) of the hard drive and reinstalled Ubuntu 10.10 but the problem still occurs. There is no dual boot on the PC, just Ubuntu. Here is the problem: Each morning, when I turn the PC on from being off overnight, the PC starts up and loads the BIOS. I get the following message Verifying DMI Pool Data... K8 NPT Data Change...Update New Data to DMI!....... Then poof the computer shuts off. However, after switching the computer back on around 6 or 7 times after it's turned itself off, it will eventually boot up without any problem. Also, once up and running for a while, I can shutdown and restart the PC first time, without any issues. I have also noticed a problem with the USB mouse being recognised and once I finally get the computer booted up, I need to unplug and then plug the mouse back in to get it working. I've opened the PC up and checked the connections (cables, cards and memory) and it all seems fine. The main issue with troubleshooting this problem is I cannot test any suggestions or fixes until the next morning because once the computer is up and running it will remain so! I do not leave the computer on overnight to save energy. So.. Is this a hardware / boot software issue? This is a very odd problem and I have googled to no avail. Any suggestions?

    Read the article

  • 100% CPU load on Ubuntu 10.04.3 LTS 64-bit

    - by deadtired
    I have 2 days since I am trying to fix this issue, with no success. The server is a mysql database server. Hardware: DELL Poweredge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 Gb mem, 2x 146Gb SAS (software RAID1) Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41 Issue: while mysql is not used and runs with no database, everything seems alright. As soon as I install a database, it has the reason to bring all 8 cores in 100% with low memory consumption. So, you can imagine the load average goes high (I saw 212 load average for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the project installed. Additional info: the database used is not more than 24MB and it was moved from a server with less resources and a lot more larger databases. So it's not the database/project. my.cnf is not a reason also, as I used both default one and the one I use on the same distribution on another server.What is interesting is that mysql doesn't close any process and runs to the limit of the max_connections. Logs are quiet. Nothing there. I switched to this Ubuntu version after I suspected some problems in the newly Ubuntu 11.10 server. This one worked alright for an hour after I made a kernel upgrade to 3.0.1 (it was using the memory also) I tested disk speed and seems alright. Some more output on the running server: dstat -cndymlp -N total -D total 3: htop command: Idea? Did anyone meet the same problem? Any fix you can think of?

    Read the article
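
    A few generic commands that usually narrow down whether the time is burned inside queries, inside InnoDB, or elsewhere; this is a diagnostic sketch rather than a fix, and it assumes root access to MySQL:

        # What is MySQL actually executing while the cores are pegged?
        mysqladmin -u root -p --verbose processlist

        # Per-thread CPU usage: note the ids of the hot threads.
        top -H -p "$(pgrep -o mysqld)"

        # InnoDB internals: semaphore waits, pending I/O, history list length.
        mysql -u root -p -e 'SHOW ENGINE INNODB STATUS\G' | less

        # Log queries that run longer than a second, then inspect the slow log.
        mysql -u root -p -e "SET GLOBAL slow_query_log=1; SET GLOBAL long_query_time=1;"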

  • Windows-to-Linux: PuTTY with SSH and a private/public key pair

    - by Johnny Kauffman
    I spent about 3 hours trying to figure out how to connect to a linux box from my windows machine using putty without having to send the password. This is connecting to an Ubuntu server that is using OpenSSH. The private key is SSH-2 RSA, 1024 bits. I am connecting using SSH2. I have run into the more common problems already: Putty generated the public key in the "wrong format". I have corrected this (as seen on this blog post). However, since I am not yet connected, I cannot absolutely confirm that this file is in the correct format. The key is all on a single line now, and I have tried adding/removing line breaks at the end of the file. I've also tried the public file doctoring process a few times to ensure that I haven't flubbed up the manual conversion. Even so, I have no way to verify accuracy here. The permissions were at once point wrong as well, specifically meaning that the file had too many permissions. I had to solve this too and I know it got past this because I no longer see a related error in /var/log/auth.log. I've tried both authorized_keys and authorized_keys2 in case the server has an old version of OpenSSH, but this changed nothing. I do have access as a user. After this keyfile stuff fails, I can enter my password instead The only remaining nibble of information I have is that it claims I have the alleged password wrong: sshd[22288]: Failed password for zzzzzzz from zz.zz.zz.zz port 53620 ssh2 Even so, as far as I can tell, this is just a lazy try/catch somewhere, since I don't think there's a password involved at all. I see nothing else in any of the /var/log files of use. What else could be wrong?

    Read the article
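
    For reference, a working OpenSSH setup needs the public key on one line in authorized_keys and tight permissions on both the directory and the file; sshd's log usually says which check failed. A minimal sketch on the Ubuntu side (paths relative to the target user's home; the key filename is a placeholder):

        # Permissions sshd insists on before it will even read the key.
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # The key must be a single line: "ssh-rsa AAAA... comment".
        # If the export is in SSH2/RFC 4716 format ("---- BEGIN SSH2 PUBLIC KEY ----"),
        # convert it on the server instead of hand-editing:
        ssh-keygen -i -f key_from_puttygen.pub >> ~/.ssh/authorized_keys

        # Watch the server's reasoning while PuTTY retries:
        sudo tail -f /var/log/auth.log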

  • Best practices to avoid Jenkins error: sudo: no tty present and no askpass program specified

    - by s g
    When running any sudo command from Jenkins I get the following error: sudo: no tty present and no askpass program specified I understand that I can solve this by adding a NOPASSWD entry to my /etc/sudoers file which will allow user jenkins to run commands without needing a password. I can add an entry like this: %jenkins ALL=(ALL)NOPASSWD:/home/vts_share/test/sudotest.sh ...but this leads to the following issue: how to avoid specifying full path in sudoers file? I can add an entry like this: %jenkins ALL=NOPASSWD: ALL ...but this allows user jenkins to avoid the password prompt for all commands, which seems a bit unsafe. I'm just curious what my options are here, and if there are any best practices I should consider.

    Read the article
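
    A middle ground between one hard-coded script path and a blanket NOPASSWD: ALL is to whitelist a directory of deployment scripts (or a Cmnd_Alias listing them) that only root can write. A sketch, assuming the scripts live under /opt/jenkins-scripts (a hypothetical path) and are root-owned:

        # Create with: visudo -f /etc/sudoers.d/jenkins
        Cmnd_Alias JENKINS_CMDS = /opt/jenkins-scripts/*
        %jenkins ALL=(ALL) NOPASSWD: JENKINS_CMDS

        # Alternative: keep NOPASSWD but list the exact binaries the jobs call,
        # e.g. /usr/sbin/service, /usr/bin/apt-get, instead of a wildcard.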

  • Linux container bridge filters ARP replies

    - by Dani Camps
    I am using kernel 3.0, and I have configured a linux container that is bridged to a tap interface in my host computer. This is the bridge configuration: :~$ brctl show bridge-1 bridge name bridge id STP enabled interfaces bridge-1 8000.9249c78a510b no ns3-mesh-tap-1 vethjUErij My problem is that this bridge is dropping ARP replies that come from the ns3-mesh-tap-1 interface. Instead, if I statically populate the ARP tables and ping directly everything works, so it has to be something related to ARP. I have read about similar problems in related posts, and I have tried with the solutions explained therein but nothing seems to work. Specifically: ~$ grep net.bridge /etc/sysctl.conf net.bridge.bridge-nf-call-arptables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-filter-vlan-tagged = 0 net.bridge.bridge-nf-filter-pppoe-tagged = 0 arptables and ebtables are not installed. iptables FORWARD is all set to accept: Chain FORWARD (policy ACCEPT) target prot opt source destination The bridged interfaces are set to PROMISC: ~$ ifconfig ns3-mesh-tap-1 Link encap:Ethernet HWaddr 1a:c7:24:ef:36:1a ... UP BROADCAST PROMISC MULTICAST MTU:1500 Metric:1 vethjUErij Link encap:Ethernet HWaddr aa:b0:d1:3b:9a:0a .... UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 The macs learned by the bridge are correct (checked with brctl showmacs). Any insight on what I am doing wrong would be greatly appreciated. Best Regards Daniel

    Read the article
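
    One way to pin down where the reply dies is to capture ARP on each leg of the path: if the frame shows up on ns3-mesh-tap-1 but never on bridge-1 or vethjUErij, the bridge itself is the suspect. A diagnostic sketch:

        # Same filter on each interface, in three terminals:
        tcpdump -n -e -i ns3-mesh-tap-1 arp
        tcpdump -n -e -i bridge-1       arp
        tcpdump -n -e -i vethjUErij     arp

        # Also worth checking: is the replying MAC learned on the expected port,
        # and is the bridge in a forwarding state on that port?
        brctl showmacs bridge-1
        brctl showstp  bridge-1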

  • How to configure a large MTU (Linux)

    - by Somejan
    I have a gigabit ethernet connection from my laptop to my router, and a working ipv6 connection to the internet. I can receive very large packets from sites on the internet, with sizes up to at least 10000 bytes (according to wireshark). (edit: turns out to be linux's 'generic receive offload') However, when trying to send anything, my local computer fragments at just below 1500 bytes for ipv6. (On ipv4, I can send tcp packets to the internet of at least 1514 bytes, I can ping with packets up to the configured mtu of 6128 but they are blackholed.) I'm on ubuntu 12.04. I have configured an mtu for my eth0 of 6128 (the maximum it accepts), both using ip link set dev eth0 mtu 6128 and in the NetworkManager applet gui, and restarted the connection. ip link show eth0 shows the 6128 mtu is indeed set. ip -6 route shows that none of the paths the kernel knows about have an mtu set. I can ping over ipv4 with packets up to 6128 bytes (though I don't get responses), but when I do ping6 myrouter -c3 -s1500 -Mdo I get error replies from my own computer saying that the packets are too large and the mtu is 1480. I have confirmed with Wireshark that nothing is put on the wire, and the replies are indeed generated by my own computer. So, how do I get my computer to use the larger mtu?

    Read the article
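
    IPv6 takes its packet size from the route, not just the interface, and a Router Advertisement carrying an MTU option will silently pin every learned route to that value (1480 here hints at a tunnel somewhere upstream). A sketch of things to inspect plus a per-route override, assuming the local segment really does carry jumbo frames end to end (prefix and addresses are placeholders):

        # Does the router advertise an MTU option? (look for "MTU: 1480")
        rdisc6 eth0          # from the laptop (package: ndisc6)
        radvdump             # on the router, if it runs radvd

        # Per-route override for the local prefix and the default route:
        ip -6 route change <local-prefix>/64 dev eth0 mtu 6128
        ip -6 route change default via <router-link-local> dev eth0 mtu 6128

        # Re-test without fragmentation:
        ping6 -c3 -s2000 -Mdo <router-address>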

  • Unable to mount root fs over NFS [on hold]

    - by johnmadrak
    I am attempting to set up a Raspberry Pi running Pidora to boot from an NFS share. My configuration in cmdline.txt is: dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs nfsroot=<serverip>:/fake/path,nfsvers=3,rw,nolock nfsrootdebug ip=dhcp elevator=deadline rootwait On the Pi, the output I see is: IP-Config: Got DHCP answer from <router>, my address is <clientip> IP-Config: Complete: device=eth0, hwaddr=<macaddress>, ipaddr=<clientip>, mask=255.255.255.0, gw=<routerip> host=<clientip>, domain=, nis-domain=(none) bootserver=<routerip>, rootserver=<serverip>, rootpath= nameserver0=<routerip> (It pauses for a bit here) VFS: Unable to mount root fs via NFS, trying floppy VFS: Cannot open root device "nfs" or unknown-block(2,0); error -6 Please append a correct "root=" boot option; here are the available partitions: ..... On the NFS Server (an OpenVZ Container), the output I see in the /var/log/messages is: Aug 22 23:24:01 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:783 for /fake/path (/fake/path) Aug 22 23:24:38 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:741 for /fake/path (/fake/path) Aug 22 23:25:25 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:752 for /fake/path (/fake/path) Aug 22 23:26:12 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:876 for /fake/path (/fake/path) To test, I've made sure I can mount (non-root) from both the Pi and another machine and it worked. Does anyone have an idea on what could be wrong or how to narrow it down? Thank you in advanced for your help.

    Read the article
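
    mountd clearly accepts the mount, so the failure is probably in what happens next: the client's root getting squashed to an unprivileged user, or an NFS version/transport the kernel client and the container's NFS server do not agree on. A hedged sketch of an export line and two cmdline variations to try (the path and addresses are placeholders, exactly as in the question):

        # /etc/exports on the server -- let root on the Pi actually be root,
        # and don't be picky about source ports:
        /fake/path  <clientip>(rw,no_root_squash,no_subtree_check,insecure)
        exportfs -ra

        # cmdline.txt variants: pin NFSv3 over UDP, or over TCP, explicitly:
        root=/dev/nfs nfsroot=<serverip>:/fake/path,vers=3,proto=udp ip=dhcp rootwait
        root=/dev/nfs nfsroot=<serverip>:/fake/path,vers=3,proto=tcp ip=dhcp rootwait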

  • Some novice questions about a VPS

    - by Camran
    I have ordered a VPS today, and I am still setting it up. I have some Q though: Also, I have just installed apache, php and mysql. 1- I don't understand which folder my website should be in. Is it the 'www' folder or the htdocs folder I should upload my site to? Because they are in entirely different directories. 2- How do I even upload files? FTP program? How do I install one, and which one? Thanks

    Read the article
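
    A quick way to answer both questions from the shell: Apache's own configuration says where it serves files from, and since SSH is already running on the VPS, scp or any SFTP client uploads files without installing an FTP server. A sketch (the destination path is whatever the first command reports):

        # Where does this Apache serve files from?
        grep -Ri '^\s*DocumentRoot' /etc/apache2/    # Debian/Ubuntu layout
        apachectl -S                                 # lists vhosts and their roots

        # Upload over the existing SSH service (no FTP daemon needed):
        scp -r ./mysite/ user@your-vps:/var/www/
        # ...or point an SFTP client such as FileZilla at port 22.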

  • netmask: command not found

    - by Ian R.
    I purchased a new server with a few ip's so I modified the /etc/network/interfaces file recently so that my ip's can go live. While editing that file I created a backup and deleted the original file. I recreated the interfaces file using the touch command and gave +x permissions but now, when trying to restart the interface (/etc/network/interfaces restart) I get all sorts of errors: /etc/network/interfaces: line 10: iface: command not found /etc/network/interfaces: line 11: address: command not found /etc/network/interfaces: line 12: netmask: command not found /etc/network/interfaces: line 13: auto: command not found Can any1 point what I forgot to do? Thanks.

    Read the article
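
    Those "command not found" errors are the shell trying to execute /etc/network/interfaces as a script; it is a configuration file read by ifup, not something you run, so the +x bit and the direct "restart" are the problem rather than the file's contents. A minimal sketch (the address is illustrative):

        chmod -x /etc/network/interfaces

        # A stanza of the usual shape:
        #   auto eth0
        #   iface eth0 inet static
        #       address 203.0.113.10
        #       netmask 255.255.255.0
        #       gateway 203.0.113.1

        # Apply it the supported way:
        ifdown eth0 && ifup eth0
        # or
        /etc/init.d/networking restart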

  • Does anyone know why rsync would keep sending the files over and over again?

    - by beagleguy
    I'm trying to using rsync to backup some files, about half a TB. It's now it a state where it keeps sending the same files everytime it runs. for example: rsync -av /data/source/* user@host:/data/dest sending incremental file list source/file1.txt source/file2.txt I then verify those files are copied over... then the next time it runs it does the same thing rsync -av /data/source/* user@host:/data/dest sending incremental file list source/file1.txt source/file2.txt any idea why it's getting stuck on these files? I've tried to wipe the whole dest directory out and start over but no luck. thanks,

    Read the article
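
    rsync's itemized output says exactly why it thinks each file differs (size, mtime, permissions, checksum), which usually points at timestamps the destination filesystem cannot store or clock skew between the hosts. A diagnostic sketch:

        # Dry run with an explanation code per file (e.g. ">f..t......" = mtime differs):
        rsync -avni /data/source/ user@host:/data/dest/ | head

        # If only timestamps differ, compare by size instead, or force checksums once:
        rsync -av --size-only /data/source/ user@host:/data/dest/
        rsync -avc            /data/source/ user@host:/data/dest/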

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have this sorts of entries in my /var/log/auth.log: Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root Situation: All users are real accounts in /etc/passwd; None of the users has its own crontab; All of those users are logged in the machine some time ago via SSH or No Machine - time varies from few minutes to few hours; no cron jobs are scheduled to run at that time, anacron is removed; I can see similar entries for other days and other times. The common part is the users are logged in when it appears. It does not appear during login, but some time afterwards. This machine has similar setup with few others but it is the only one where I see these entries. What causes them? Thanks

    Read the article
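
    If nothing in cron explains the pattern, auditd can record which process invokes su and on whose behalf, which makes the source of these entries straightforward to trace. A sketch, assuming auditd is installed (the key name is arbitrary):

        # Watch executions of su and tag them:
        auditctl -w /bin/su -p x -k su_trace

        # After the next burst of auth.log entries, see who called it,
        # including the parent process and working directory:
        ausearch -k su_trace -i | less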

  • IBM Server searching for secondary server

    - by user1241438
    I just bought the following server IBM System x3950 Server, 4 x 3.0GHz Dual Core, 32GB, 6 x 73.4GB 10K SAS RAID, 256MB BBWC, 2x Power, CD-RW/DVD When i boot it up, it says "Searching for secondary server" and hangs their for almost 10 mins. After 10 mins, it says timeout on searching chassis 2. But after this it proceed to boot the OS properly. But my frustration, i need to wait for almost 15 mins to boot everytime. How do i prevent this error message.

    Read the article

  • After upgrade to Lucid, computer doesn't respond to clicks or pressed keys until X restart

    - by Victor Stanciu
    I upgraded to Lucid today, and after the update the display is "frozen". I can move the cursor, but I cannot click on anything, nor does it respond to any keys. The only way to fix it is to SSH into the machine (which, by the way, works just fine), and kill and start X. Then I'm taken back to the login screen and everything works. This happens every time I boot. Let me know if there are more details that I can provide.

    Read the article

  • Print from Linux to Windows networked printer

    - by wonkothenoob
    I want to print from a Debian (Lenny) workstation to a Windows networked printer. I'm not even sure what type of Windows network this is. Our tech-support is friendly but doesn't want to get involved with supporting Linux. I need to use it for a variety of reasons and am completely stumped because I know nothing about Windows networking. They gave me URI smb://msprint.ourorg.edu as the "address" of the printer and further confirmed that the domain is "OURORG" and the share is "PHYS-PRI". I've installed CUPS and made sure that it's running as a daemon, I've clicked on the system-config-printer[1] icon, selected the printer as a Windows printer shared via SAMBA and entered the above URI. Attempting to print a testpage just sees it sit in the queue. I attempted to see if I could access the share using two other methods. Method 1. First I tried the "smbclient" from the CLI: $ smbclient -L //msprint.ourorg.edu -U user23 timeout connecting to 192.168.44.3:445 timeout connecting to 192.168.44.3:139 Connection to msprint.ourorg.edu failed (Error NT_STATUS_ACCESS_DENIED) Method 2. I tried to use the GUI tool Smb4K. This shows me four other toplevel (I'm assuming they're domains?) groupings one of which is the one which our IT department supplied to me. Clicking them shows a bunch of other machines with (what I assume are NetBIOS names?) including my own. I see all sorts of other networked printers belonging to other departments but none within mine. Certainly not the PHYS-PRI one suggested to me by the IT folks. I realize that I'm probably using the wrong terminology for the windows network, but can anyone help me with this? What steps should I be taking in debugging this? Do I need to actually run my machine as a SAMBA server to authenticate to the printer or should I just be able to communicate using CUPS? It's a GUI to CUPS configuration http://cyberelk.net/tim/software/system-config-printer/

    Read the article
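
    Two things stand out: the device URI handed to CUPS has no share name on it, and smbclient is timing out on 445/139, which looks more like a firewall between workstation and print server than an authentication problem. A sketch of the URI format the CUPS smb backend expects plus a quick reachability check (queue name and credentials are placeholders; a proper PPD for the printer model still has to be added):

        # CUPS smb backend URI: smb://[user:pass@][workgroup/]server/share
        lpadmin -p physpri -E -v 'smb://user23:PASSWORD@OURORG/msprint.ourorg.edu/PHYS-PRI'

        # Is SMB reachable at all? The earlier timeouts suggest it may not be.
        nc -zv -w5 msprint.ourorg.edu 445
        nc -zv -w5 msprint.ourorg.edu 139
        smbclient -L //msprint.ourorg.edu -U user23 -W OURORG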

  • How do I restore the default applets to Gnome's notification area?

    - by gbacon
    I have a fresh install of Karmic Koala. In a botched attempt at trying to change my default window manager, I somehow removed at least three applets from the notification area: network manager (nm-applet), volume control (gnome-volume-control-applet), and the battery meter (???). Now if I logout and back in, these applets don't run, but I can start them from the command line. Because it's a fresh install, I completely removed my luser account and home directory. After recreating my account, I was frustrated to find that the applets are still missing and no obvious way to add them back. How can I restore the default configuration?

    Read the article

  • Configuring Apache with mod_mono for a .NET app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where i'm going wrong. I'm using mono-server4. I'm trying to use a seperate port from the main website. So I have in /etc/apache2/sites-available (with a link from sites-enabled) a vhost configuration that looks like this: <VirtualHost *:9999> ServerName XXX ServerAdmin web-admin@XXX DocumentRoot /var/xxx MonoServerPath XXX "/usr/bin/mod-mono-server4" MonoDebug XXX true MonoSetEnv XXX MONO_IOMAP=all MonoApplications XXX "/:/var/xxx" <Location "/"> Allow from all Order allow,deny MonoSetServerAlias XXX SetHandler mono SetOutputFilter DEFLATE SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary </Location> <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript </IfModule> </VirtualHost> I used mono-server4-admin to create the application mono-server4-admin --path=/var/xxx --app=/XXX --port=9999 When i start apache, it gives the error: Syntax error on line 13 of /etc/apache2/sites-enabled/xxx: Server alias 'XXX, not found. This corresponds with the MonoSetServerAlias statement. So I commented it out, and when I do that apache starts. However, when I try to access the site, I get a 500 error. The access log indicates that it's trying to access the app on port 80, rather than 9999. I'm not sure what the problem is here. Can anyone help me get figure out where I went wrong? My mono-server4-hosts.conf contains this: # start /etc/mono-server4/conf.d/RMRSite/10_XXX Alias /XXX "/var/xxx" AddMonoApplications default "/XXX:/var/xxx" <Directory /var/xxx> SetHandler mono <IfModule mod_dir.c> DirectoryIndex index.aspx </IfModule> </Directory> # end /etc/mono-server4/conf.d/XXX/10_XXX Also, my /etc/mono-server4/conf.d/XXX/10_XXX contains this: This is the configuration file for the XXX virtualhost path = /var/xxx alias = /XXX vhost = localhost port = 9999

    Read the article
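
    Two things look suspect: nothing tells Apache to listen on 9999 (the vhost declares the port, but a Listen directive still has to open it), and the application is declared twice, once by hand in the vhost and once by the auto-generated /etc/mono-server4 include, which can leave MonoSetServerAlias referring to an alias registered under a different name and mounted at a different path. A hedged sketch that keeps only the hand-written vhost (paths as in the question; removing the generated site is an assumption, not a documented requirement):

        # In ports.conf or the vhost file, make Apache accept the port:
        Listen 9999

        <VirtualHost *:9999>
            ServerName XXX
            DocumentRoot /var/xxx

            # Declare the application first, then reference the same alias.
            MonoServerPath   XXX "/usr/bin/mod-mono-server4"
            MonoApplications XXX "/:/var/xxx"

            <Location "/">
                Order allow,deny
                Allow from all
                MonoSetServerAlias XXX
                SetHandler mono
            </Location>
        </VirtualHost>

        # ...and move the duplicate auto-generated site out of the way so the two
        # definitions do not clash, e.g.:
        # mv /etc/mono-server4/conf.d/XXX/10_XXX /root/10_XXX.disabled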

  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately I am stuck at this point now. Running root@X100e:/var/cache/apt/archives# apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following NEW packages will be installed: file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python python-minimal python2.6 python2.6-minimal readline-common 0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/5,204kB of archives. After this operation, 19.7MB of additional disk space will be used. Do you want to continue [Y/n]? Y (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb results in above error. Running root@X100e:/var/cache/apt/archives# dpkg -i --force-depends python2.6-minimal_2.6.6-5ubuntu1_i386.deb (Reading database ... 6108 files and directories currently installed.) Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ... new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages. please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead aborting installation of python2.6-minimal dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: python2.6-minimal_2.6.6-5ubuntu1_i386.deb is not able to fix this. Any clues how to fix this?

    Read the article
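
    The preinst script refuses to run because /usr/lib/python2.6/site-packages is a real directory where the package expects a symlink to /usr/local/lib/python2.6/dist-packages. A hedged workaround sketch (move the directory aside rather than deleting it, in case something in it matters):

        # Put the offending directory out of the way and give dpkg the symlink it wants.
        mv /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages.moved
        mkdir -p /usr/local/lib/python2.6/dist-packages
        ln -s /usr/local/lib/python2.6/dist-packages /usr/lib/python2.6/site-packages

        # Retry the installation and let apt finish configuring everything.
        dpkg -i /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb
        apt-get -f install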

  • Linux ARP cache timeout values

    - by Jak
    I'm trying to configure sane values for the Linux kernel arp cache timeout, but I can't find a detailed explanation as to how they work anywhere. Even the kernel.org documentation doesn't give a good explanation, I can only find recommended values to alleviate overflow. Here is an example of the values I have: net.ipv4.neigh.default.gc_thresh1 = 128 net.ipv4.neigh.default.gc_thresh2 = 512 net.ipv4.neigh.default.gc_thresh3 = 1024 Now, from what I've gathered so far: gc_thresh1 is the number of arp entries allowed before the garbage collector starts removing any entries at all. gc_thresh2 is the soft-limit, which is the number of entries allowed before the garbage collector actively removes arp entries. gc_thresh3 is the hard limit, where entries above this number are aggressively removed. Now, if I understand correctly, if the number of arp entries goes beyond gc_thresh1 but remains below gc_thresh2, the excess will be removed periodically with an interval set by gc_interval. My question is, if the number of entries goes beyond gc_thresh2 but below gc_thresh3, or if the number goes beyond gc_thresh3, how are the entries removed? In other words, what does "actively" and "aggressively" removed mean exactly? I assume it means they are removed more frequently than what is defined in gc_interval, but I can't find by how much.

    Read the article
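
    For reference, the knobs usually tuned together: gc_interval controls how often the collector runs at all, gc_stale_time how old an unreferenced entry must be before it becomes a removal candidate, and the three thresholds scale how hard the collector works. A sketch of a /etc/sysctl.conf fragment sized for a network with a few thousand neighbours (the values are illustrative, not a recommendation):

        # Run the garbage collector every 30 s; treat entries as stale after 60 s.
        net.ipv4.neigh.default.gc_interval = 30
        net.ipv4.neigh.default.gc_stale_time = 60

        # Leave headroom so the collector rarely has to work "aggressively".
        net.ipv4.neigh.default.gc_thresh1 = 1024
        net.ipv4.neigh.default.gc_thresh2 = 2048
        net.ipv4.neigh.default.gc_thresh3 = 4096

        # Apply with:  sysctl -p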
