Search Results

Search found 24666 results on 987 pages for 'cooperative linux'.


  • How can I ensure that my static ip address is read from /etc/network/interfaces rather than dhcp?

    - by jonderry
    This is a follow-up to the following question. I'm trying to set a static IP by changing /etc/network/interfaces to the following:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.2.133
            netmask 255.255.255.0
            gateway 192.168.2.1
            dns-nameservers 8.8.8.8

    and then running /sbin/ifdown eth0; /sbin/ifup eth0. However, the change in IP address doesn't take effect unless I first edit /etc/dhcp/dhclient.conf and comment out the following line before running ifdown; ifup:

        request subnet-mask, broadcast-address, time-offset, routers,
            domain-name, domain-name-servers, domain-search, host-name,
            dhcp6.name-servers, dhcp6.domain-search, netbios-name-servers,
            netbios-scope, interface-mtu, rfc3442-classless-static-routes,
            ntp-servers, dhcp6.fqdn, dhcp6.sntp-servers;

    Strangely, after commenting out this line, ifdown; ifup works, but when I uncomment it, the behavior does not revert to ignoring changes in /etc/network/interfaces. That isn't a problem in itself, but I need to be able to reproduce the failure so that I can be confident my solution is robust. Also, I'd rather not have to edit /etc/dhcp/dhclient.conf to change my static IP, since it seems I should be able to do this by editing interfaces alone. Can anyone explain the issues I'm seeing above, and suggest a reproducible way to make static IP changes take effect?
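    The usual culprit in cases like this is a dhclient process that keeps running (and renewing its lease) after ifdown. A minimal hedged sketch of a cleaner cycle, assuming the interface is eth0:

        sudo ifdown eth0
        sudo dhclient -r eth0          # release any DHCP lease still held
        sudo pkill dhclient            # stop any dhclient left renewing in the background
        sudo ip addr flush dev eth0    # clear the old address from the interface
        sudo ifup eth0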

    Read the article

  • Subversion Permission Denied when adding or committing

    - by Rungano
    Hi guys, I am running Subversion 1.4 on CentOS 5.2 and my clients are using TortoiseSVN to do their checkouts, commits, etc. I think I have a permissions problem: I have made the folder accessible to everyone with 777 permissions, but I seem not to be getting anywhere. Tortoise generates this error: "svn: Can't open file 'PATH/TO/MY/FILES/entries': Permission denied". Someone suggested that indexing software installed on the client machine, like Google Desktop, might be interfering. Any suggestions?
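    Note that the entries file lives in the client's working copy (.svn/entries), so it is worth checking both sides. A hedged sketch for the server side, assuming the repository lives at /var/svn/repo and is served by Apache running as the apache user (adjust paths and users to your layout):

        ls -laR /var/svn/repo | head -20       # look for files owned by the wrong user
        chown -R apache:apache /var/svn/repo
        chmod -R u+rwX,g+rwX /var/svn/repo
        # on CentOS, SELinux can also cause Permission denied despite 777:
        getenforce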

    Read the article

  • What is excessive swapping?

    - by amateur barista
    This post led me to ask that question. Cache contention: On a large site, if you are using MyISAM, contention occurs in the database tables when the cache is forced to clear after a node or a comment is added. With tens of thousands of filter text snippets needing to be deleted, the table will be locked for a long period, and any accesses to it will be queued pending the purge of the data in it. The same is true for the page cache as well. This often causes a "site hang" for a minute or two. During that time new requests keep piling up, and if you do not have Apache's MaxClients parameter set up correctly, the system can go into thrashing because of excessive swapping.
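    "Excessive" swapping here means the kernel is moving memory pages to and from disk so often that page I/O, rather than useful work, dominates (thrashing). A quick hedged way to see it on Linux is to watch the si/so columns of vmstat; sustained non-trivial values under load mean the box is swapping heavily:

        # sample memory/swap activity once per second, five times;
        # si = KB swapped in from disk, so = KB swapped out to disk
        vmstat 1 5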

    Read the article

  • Ubuntu reset network configuration

    - by user1103294
    When I boot up my Ubuntu server, it can no longer connect to my wireless network. It says "waiting for network configuration" for 60 seconds, boots up, but there is no wireless. I suspect it's because of the following. I used to connect to a wireless network named 2WIRE555 with password 123abc. Then I upgraded my connection, and the new wireless network was named 2WIRE444 with password 111111. Being lazy, I simply renamed 2WIRE555 to 2WIRE444 in my configuration and changed the password accordingly. I was hoping this would work, but ever since then my network configuration has been messed up. So, back to the issue: how do I reset the network configuration on my Ubuntu 11.10 server?
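    On a server install the wireless settings usually live in /etc/network/interfaces rather than in a desktop network manager. A minimal hedged sketch of a clean WPA stanza, assuming the interface is wlan0 and wpasupplicant is installed (YOUR_PASSPHRASE is a placeholder; WPA passphrases must be 8-63 characters):

        auto wlan0
        iface wlan0 inet dhcp
            wpa-ssid 2WIRE444
            wpa-psk  YOUR_PASSPHRASE

    After editing, restart the interface with sudo ifdown wlan0; sudo ifup wlan0, or reboot.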

    Read the article

  • I have two swap partitions, how can I delete one?

    - by Gp2mv3
    I installed Ubuntu on my computer, but I made a mistake during the installation and it created two swap partitions. I originally had three partitions (system, home, swap), but the installer crashed, so I restarted the installation and it installed everything into the system partition. I have since moved home to its own partition, but I am left with two swap partitions. How can I delete one, and what happens if I do?
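    A hedged sketch of removing the redundant swap safely; the device name /dev/sda3 is a placeholder, so check swapon -s for yours:

        swapon -s                  # or: cat /proc/swaps — list the active swap areas
        sudo swapoff /dev/sda3     # stop using the redundant one
        # then remove (or comment out) its line in /etc/fstab so it is not
        # re-enabled at boot, and reclaim the partition with GParted or fdisk

    Nothing bad happens as long as the remaining swap partition is large enough; the system simply has one swap area instead of two.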

    Read the article

  • How to auto detect text file encoding?

    - by ???
    There are many plain text files that were encoded in various charsets. I want to convert them all to UTF-8, but before running iconv I need to know each file's original encoding. Most browsers have an Auto Detect option for encodings; however, I can't check these text files one by one because there are too many. Once I know the original encoding, I can convert the text with iconv -f DETECTED_CHARSET -t utf-8. Is there any utility to detect the encoding of plain text files? It doesn't have to be 100% correct, but it should recognize most of them.
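    A hedged sketch of batch detection plus conversion, assuming uchardet is installed (enca and Python's chardet are alternatives; file -bi also gives a rough guess):

        for f in *.txt; do
            enc=$(uchardet "$f")                       # prints the detected charset
            iconv -f "$enc" -t UTF-8 "$f" > "$f.utf8"  # convert into a copy
        done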

    Read the article

  • How do I set default group ownership for files in a directory?

    - by tnichols
    I am running a CakePHP web app on a Linode LAMP stack. I am finding that my temp files are created with root:root ownership, but the web app runs with Apache's permissions (www-data). This causes warnings whenever a new file is created, because it is not writable by user www-data. How do I change the default ownership to www-data for any new files created in the temp folder? Thanks for your help!
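    Group ownership (though not user ownership) of new files can be inherited via the setgid bit on the directory. A hedged sketch, assuming the app's temp directory is /var/www/app/tmp (a placeholder path):

        sudo chgrp -R www-data /var/www/app/tmp
        sudo chmod g+ws /var/www/app/tmp        # setgid: new files inherit the www-data group
        # if the filesystem supports POSIX ACLs, a default ACL goes further:
        sudo setfacl -d -m g:www-data:rwX /var/www/app/tmp

    It is also worth finding out why the files come out as root:root at all; a root cron job writing into the app's tmp directory is a common cause.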

    Read the article

  • Creating full, global clang+llvm environment

    - by Griwes
    What is the easiest way to set up the full Clang, libc++, and LLVM stack as the default global toolchain? All of my attempts to build it, in most of the configurations I could think of, resulted in a working Clang that nevertheless used GCC's default libstdc++ headers rather than libc++'s, leading to numerous failures from incompatible library code. I would like it to work out of the box, without having to do magic in .bashrc or pass -stdlib=libc++ and -lc++ to the compiler and linker every time.
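    A hedged sketch of a CMake configuration that bakes libc++ in as clang++'s default standard library; the flag names assume a reasonably recent LLVM source tree:

        cmake -G Ninja ../llvm \
            -DCMAKE_BUILD_TYPE=Release \
            -DLLVM_ENABLE_PROJECTS="clang" \
            -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi" \
            -DCLANG_DEFAULT_CXX_STDLIB=libc++

    With CLANG_DEFAULT_CXX_STDLIB set, plain clang++ invocations use libc++ without any -stdlib flag.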

    Read the article

  • Why does my CentOS logrotate run at random times?

    - by Mike Pennington
    I put a logrotate configuration file in /etc/logrotate.d/ and expected the logs to rotate at a consistent time; however, they do not... log rotation times are seemingly random, +/- one hour. Why are the log rotation start times random, and how can I change this? Informational: my logrotate config file looks like this:

        /opt/backups/network/*.conf {
            copytruncate
            rotate 30
            daily
            create 644 root root
            dateext
            maxage 30
            missingok
            notifempty
            compress
            delaycompress
            postrotate
                ## Create symbolic links in daily/
                PATH=`/usr/bin/dirname $1`;
                FILE=`/bin/basename $1`;
                /bin/ln -s $1 $PATH/daily/$FILE
            endscript
        }
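    logrotate itself has no schedule: it is normally run from /etc/cron.daily, and on recent CentOS releases cron.daily jobs are dispatched by anacron, whose RANDOM_DELAY setting adds a random start offset. A hedged check:

        grep -E 'RANDOM_DELAY|START_HOURS_RANGE' /etc/anacrontab 2>/dev/null
        grep -r cron.daily /etc/crontab /etc/anacrontab 2>/dev/null

    Lowering RANDOM_DELAY, or running logrotate from a fixed crontab entry instead, should pin the start time.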

    Read the article

  • nfs mount fails in Ubuntu 10, but not with -v

    - by stuartreynolds
        (1) mount -t nfs remotehost:/remotedir localmountpoint -o owner,rw
        (2) mount -v -t nfs remotehost:/remotedir localmountpoint -o owner,rw

    Command (1) used to work with Ubuntu 9 but now fails with Ubuntu 10 (2.6.32-21-generic kernel) with the error:

        mount.nfs: an incorrect mount option was specified

    Strangely, adding -v (verbose) as in (2) makes the problem go away. This is currently a blocker for me, because the fstab line

        remotehost:/remotedir localmountpoint nfs owner,rw 0 0

    causes the same error (I don't believe I can specify verbose in fstab). Is this a bug in mount, or are my options really incorrect?
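    One hedged reading: owner is an option interpreted by mount itself (it mainly makes sense in fstab, where it lets the device's owner mount it), and newer mount.nfs versions reject options they do not recognize. A sketch of an fstab line that simply drops it, assuming only rw is actually needed:

        remotehost:/remotedir  localmountpoint  nfs  rw,hard,intr  0 0

    If unprivileged mounting is the goal, the user option in fstab is the usual route.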

    Read the article

  • Free web gallery installation that can use existing directory hierarchy in filesystem?

    - by user1338062
    There are several different free software gallery projects (Gallery, Coppermine, etc.), but as far as I know each of them creates a copy of imported images in its internal storage, be it a directory structure or a database. Is there any gallery software that keeps the existing directory hierarchy of media files (images, videos) as-is and just stores their metadata in a database? I guess at least various NAS solutions ship with software like this.

    Read the article

  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server, hosted in-house. I'm trying to supply as much relevant information below; please let me know if you require additional information in order to assist. The server runs one single website, an online pizza ordering platform built on Zend Framework (ver1). Traffic stats from the last month show approx. 6,000 pageloads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2 Mbit aDSL line to 100/100 Mbit fiber, and we still have performance issues at dinnertime (we had assumed the 2 Mbit line was the issue). The website is pretty snappy in low-load periods.

    Hardware:

        CPU:  Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
        Mem:  328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
        Swap: 16G Total, 468K Used, 16G Free (6GB physical, 16GB swap)

        Filesystem   Type   Size  Used  Avail  Capacity  Mounted on
        /dev/ad7s1a  ufs    4.8G  768M  3.7G   17%       /
        devfs        devfs  1.0K  1.0K  0B     100%      /dev
        /dev/ad7s1g  ufs    176G  5.2G  157G   3%        /home
        /dev/ad7s1e  ufs    4.8G  2.8M  4.5G   0%        /tmp
        /dev/ad7s1f  ufs    19G   3.5G  14G    19%       /usr
        /dev/ad7s1d  ufs    4.8G  550M  3.9G   12%       /var

    Server OS: FreeBSD 8.2-RELEASE

    Software: apache-2.2.17, php5-5.3.8, mysql-server-5.5

    Apache footprint (example, taken from top):

        31140 www  1  45  0  377M  41588K  lockf  2  0:00  0.00%  httpd
        31122 www  1  44  0  375M  35416K  lockf  2  0:00  0.00%  httpd
        31109 www  1  44  0  375M  38188K  lockf  2  0:00  0.00%  httpd
        31113 www  1  44  0  375M  35188K  lockf  2  0:00  0.00%  httpd

    Apache uses the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (as in, it doesn't really work, so it isn't used). There is a file containing settings for the MPM modules, but as far as I can see it is not included from httpd.conf — the Include line is commented out — so I would guess the prefork MPM is running on default values too. Here are some other Apache conf values I found that are included in httpd.conf:

        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        UseCanonicalName Off
        HostnameLookups Off
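    With prefork at defaults, dinner-time peaks can spawn more httpd children than RAM allows and push the box into swap. A hedged sketch of an explicit prefork section; the numbers are purely illustrative — size MaxClients so that MaxClients times the resident size per child fits comfortably in physical memory:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients           50
            MaxRequestsPerChild 500
        </IfModule>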

    Read the article

  • How can I make zsh completion behave like Bash completion?

    - by Nate
    I switched to zsh, but I dislike the completion. If I have 20 files, each with a shared prefix, then on pressing Tab zsh will fully complete the first file and keep cycling through the list with each further press of Tab. If I want one near the end, I have to press Tab many times. In bash this was simple: press Tab and I would get the common prefix; if I continued typing (and pressing Tab), bash would complete as far as it could with certainty. I find this behavior much more intuitive, but I prefer zsh's other features to bash's. Is there a way to get this style of completion? Google suggested setopt bash_autolist, but this had no effect for me (and no error message was printed upon starting my shell). Thanks.
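    bash_auto_list alone is not enough; the menu-cycling options usually need to be turned off as well. A hedged ~/.zshrc sketch (option names from zshoptions(1)):

        unsetopt menu_complete   # don't insert the first match outright
        unsetopt auto_menu       # don't cycle through matches on repeated Tab
        setopt bash_auto_list    # second Tab lists the choices, like bash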

    Read the article

  • How do I reduce RAM usage on my server?

    - by Abs
    I have recently launched a site that is very popular, but I am having trouble with scalability. My site makes heavy use of FFmpeg, and at peak times RAM usage quickly hits the 2 GB mark and the swap file starts being used; CPU usage rises too. Users complain that the site is slow, and they are right: all the FFmpeg instances run very slowly because of how many are running at the same time. Users invoke FFmpeg on my server in real time. Is there anything I can consider or do to keep the server load and RAM usage from shooting up? Maybe there is something better than FFmpeg(!). Or is the only solution "throwing some cash" at a more powerful server? I have given little information; please ask for more, so this problem can be solved.
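    Two cheap mitigations short of new hardware: make the encodes yield to the web server, and cap how many run at once. A hedged sketch of the first (nice and ionice are standard Linux tools; the filenames are placeholders):

        # run an encode at the lowest CPU priority and idle disk priority;
        # each job finishes slower, but the site stays responsive
        nice -n 19 ionice -c3 ffmpeg -i input.mp4 output.mp4

    Capping concurrency usually means putting a small job queue in front of FFmpeg rather than spawning one process per request.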

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80; it fails due to a timeout (no response after 30 seconds). Here's what we've already checked, without success:

    - Since we run our web server behind a load balancer, I set up the Pingdom test against both the load balancer's public DNS and the web server's public DNS, in order to find out whether there is a problem with the AWS load balancer. Both tests return the same result.
    - We set up Munin on our web server. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only samples every 5 minutes).
    - I checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

          touch -t 201209020000 start
          touch -t 201209020015 end
          find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
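    Sunday 0:00 is when many weekly jobs fire, and /etc/crontab is not the only place they hide. A hedged sketch that widens the search to per-user crontabs and rotation hooks:

        ls /etc/cron.weekly/
        sudo ls /var/spool/cron/crontabs/    # per-user crontabs on Ubuntu
        for u in $(cut -d: -f1 /etc/passwd); do sudo crontab -l -u "$u" 2>/dev/null; done

    Log rotation that restarts Apache is a classic cause of a brief recurring outage, so the postrotate section of /etc/logrotate.d/apache2 is worth a look too.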

    Read the article

  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until 4 weeks later, so essentially I have four revolving backups: week1, week2, week3, and week4. Now, I need the week1 backup script to run every 4 weeks, but I also want week2's script to run every four weeks. I know that I can tell crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts actually runs on a different week? How do I avoid all 4 scripts running on the same night, dutifully waiting for weeks, and then all running again together? Thanks.
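    Cron has no native "every 4 weeks", but a guard on the week number does the job. A hedged crontab sketch (the script names are placeholders, and note that % must be escaped as \% inside crontab entries):

        # every Friday at 23:00, but each script fires only in its own week of the cycle
        0 23 * * 5  [ $(( $(date +\%V) \% 4 )) -eq 0 ] && /usr/local/bin/backup-week1.sh
        0 23 * * 5  [ $(( $(date +\%V) \% 4 )) -eq 1 ] && /usr/local/bin/backup-week2.sh
        0 23 * * 5  [ $(( $(date +\%V) \% 4 )) -eq 2 ] && /usr/local/bin/backup-week3.sh
        0 23 * * 5  [ $(( $(date +\%V) \% 4 )) -eq 3 ] && /usr/local/bin/backup-week4.sh

    One caveat: ISO week numbers (date +%V) jump at year end, so the cycle can occasionally skip or repeat a slot around New Year.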

    Read the article

  • Question marks showing in ls of a directory, and I/O errors too

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 array mounted on my server, and for whatever reason it started showing this:

        jason@box2:/mnt/raid1/cra$ ls -alh
        ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
        ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
        total 0
        drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
        drwxr-xr-x 3 root root  16 2009-08-14 17:15 ..
        ?????????? ? ?    ?      ?                ? 257ad35ee0b12a714530c30dccf9210f
        drwxr-xr-x 3 root root  57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
        ?????????? ? ?    ?      ?                ? e6eacc985fea729b2d5bc74078632738

    The md5 strings are actual directory names, not part of the error. The question marks are odd, and any directory with a question mark throws an I/O error when you attempt to use or delete it. I was unable to umount the drive because it was "busy". Rebooting the server "fixed" it, but it threw some RAID errors on shutdown. I have configured two RAID 5 arrays, and both started doing this on random files. Both use the following configuration:

        mkfs.xfs -l size=128m -d agcount=32
        mount -t xfs -o noatime,logbufs=8

    Nothing too fancy, just part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?
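    Question marks in ls mean stat() failed on those entries, which points at filesystem or device-level trouble rather than an ls quirk. A hedged first-response sketch, assuming the array is /dev/md0 (a placeholder device name):

        cat /proc/mdstat              # is the array degraded or rebuilding?
        dmesg | tail -50              # look for ata/md/xfs errors
        sudo umount /mnt/raid1        # xfs_repair needs the filesystem unmounted
        sudo xfs_repair -n /dev/md0   # -n = check only, report what it would fix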

    Read the article

  • Is there a BSD equivalent to "!!"?

    - by CT
    I often find myself issuing a command without the elevated privileges it needs. On Ubuntu I could use sudo !!, which reissues the previous command with sudo privileges. Is there an equivalent on OpenBSD? Edit: I should have been more specific about the version of OpenBSD. I am using OpenBSD 4.8, where sudo seems to be installed by default. I have already created a user besides root and edited my sudoers file to allow that user to use sudo. My question is whether there is a built-in shortcut like "!!" for reusing the previous command.
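    "!!" is history expansion from csh/bash; OpenBSD's default ksh doesn't do it, but its history builtins cover the same ground. A hedged sketch (r is a stock alias for fc -e - in OpenBSD's ksh):

        r                    # re-run the previous command
        sudo $(fc -ln -1)    # re-run the previous command under sudo
                             # (fine for simple commands; quoting is not preserved)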

    Read the article

  • Solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always fetch their packages from online repositories somewhere. I know I could probably download the sources and compile them myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the package source for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply run apt-get install --download-only nginx so that the package and its dependencies are downloaded into my VM, and my server can use the VM as its source repo for apt-get?

    Plan B: Run a server VM on my Mac (which I can set up and update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I were working on Windows, I imagine it'd be easier, because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
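    For the Plan A download step, apt's own download-only mode is enough on its own. A hedged sketch, run inside the online VM (the hostname devserver is a placeholder; downloaded .debs land in /var/cache/apt/archives):

        sudo apt-get install --download-only nginx
        # then move the packages to the offline server and install them there:
        scp /var/cache/apt/archives/*.deb devserver:/tmp/debs/
        ssh devserver 'sudo dpkg -i /tmp/debs/*.deb'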

    Read the article
