Search Results

Search found 24814 results on 993 pages for 'linux distro'.


  • Rebuilding RAID1 in Ubuntu

    - by John Utech
The second HD in my RAID1 developed bad sectors, so I got another drive, pulled out the bad-sector drive and put the new drive in. With the original working RAID1 drive in the computer, it failed to boot. I manually copied everything from the old drive over via a GParted live CD; still no boot. I'm kind of scratching my head here, as I can see that both of the drives have data on them but am unable to get either of them to boot. I used an Ubuntu live CD and couldn't even manually mount either of the drives, which I thought was the really odd part. Not sure where to go from here.
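
    For reference, the usual way to rebuild a degraded mdadm RAID1 (rather than copying files by hand) is a sketch like the following, assuming the array is /dev/md0, the surviving disk is /dev/sda and the replacement is /dev/sdb; all three names are assumptions:

        # copy the partition table from the surviving disk to the new disk (MBR disks)
        sfdisk -d /dev/sda | sfdisk /dev/sdb
        # add the new partition to the degraded array; the kernel starts a resync
        mdadm --manage /dev/md0 --add /dev/sdb1
        # watch the resync progress
        cat /proc/mdstat
        # reinstall the boot loader on the new disk so either drive can boot
        grub-install /dev/sdb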

    Read the article

  • What causes high CPU usage on the server during file upload

    - by bosiang
    When I try to upload a huge file (approx. 2 GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted in the wrong forum; please point me in the right direction. Here is the result of the top command while uploading 4 files (18 MB, 38 MB, 60 MB, 33 MB):

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
         1904 apache 20  0 33504 5740 1952 R 28.3  0.2 0:02.19 httpd
         1905 apache 20  0 33504 5740 1952 R 28.3  0.2 0:01.99 httpd
         1903 apache 20  0 33232 6968 3060 R 28.0  0.2 0:01.98 httpd
         1910 apache 20  0 33240 6020 2248 S 11.5  0.2 0:02.85 httpd
         2133 root   20  0  2656 1124  896 R  1.6  0.0 0:00.71 top
            1 root   20  0  2864 1404 1188 S  0.0  0.0 0:03.99 init

    Below is the chunking code, although even when I don't use it (just a simple file upload) the CPU usage is still that high:

        function sendRequest() {
            //clean the screen
            //bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100 MB chunks (the original comment said 10 MB)
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    if (blob.webkitSlice) {        // for Google Chrome
                        var chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) {    // for Mozilla Firefox
                        var chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

    Read the article

  • I want to save some terminal commands in a file

    - by Jakob Abfalter
    I am using openSUSE 12.3. What I want to do is create a link on my desktop for some specific terminal commands. The background is that I do some backups via rsync and don't want to type the commands anew every time. I also don't want to use a cron job, since my computer isn't running all the time. Perfect would be some desktop icons which, on clicking, execute the command(s). Could somebody tell me how to do this?
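
    A common approach is to wrap each command in a small script and point a desktop launcher at it. A minimal sketch, assuming a script at ~/bin/backup.sh; the path and the rsync source/target are placeholders:

        #!/bin/bash
        # ~/bin/backup.sh -- runs one backup job (make it executable with chmod +x)
        rsync -av --delete "$HOME/Documents/" /media/backupdisk/Documents/

    and a launcher file such as ~/Desktop/backup.desktop (Terminal=true keeps a window open so the rsync output stays visible):

        [Desktop Entry]
        Type=Application
        Name=Run Backup
        Exec=/home/yourname/bin/backup.sh
        Terminal=true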

    Read the article

  • What will happen if my DB server runs out of space?

    - by Noam
    I'm seeing a huge difference in free disk space between df -h and du -sxh /. I understood in my question "Resolving unix server disk space not adding up" that du -sxh / is the better estimate of when I will run out of disk space. Having said that, assuming that in my case this proves wrong and I do run out of disk space soon, what will happen? I assume MySQL will fail INSERT queries, but other than that, will I just need to delete some files, or will it be a more problematic situation?

    Read the article

  • What is the bash syntax to create a new directory in the directory above?

    - by mozerella
    I aim to make a script for mogrify. The mogrify command will resize images in a directory and put the resized images into a directory on the same level, with the same name as the working directory but with a suffix (_a). The new directory will be moved to another collection later on. Something like this:

        #!/bin/bash
        mkdir ../n_a
        for file in *.JPG *.jpg; do mogrify -path ../n_a -resize 1200x1200 -quality 96 "$file"; done

    I'm guessing ../ denotes the parent dir when working in a child directory, but I need help here. Edit: "n" needs to be replaced with the syntax for the working directory name. (Sorry, there was a typo in the third script line as well; it should have read n, not x.) Edit 2: This script does exactly what I need, and it's silent:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p $DEST
        mogrify -path $DEST -resize 1200x1200 -quality 96 *.jpg *.JPG

    Thanks to vgoff for the correct PWD syntax and cesareriva (http://www.cesareriva.com/archives/722) for showing me the DEST trick. Something else: ${PWD##*/}_a does not care for spaces in the directory name, and the script fails: an empty dir is created in the same dir as the images. Found it out now: quotes are needed on $DEST too, presumably to let mkdir create the dir with a space in the name and mogrify write the files to the right place, like this:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p "$DEST"
        mogrify -path "$DEST" -resize 1200x1200 -quality 96 *.jpg *.JPG

    Read the article

  • crontab still sending emails even with > /dev/null

    - by user2344668
    I have a crontab (root) that runs a script with output redirected to /dev/null, but I always get an email whenever it runs. I only want to receive error emails.

        # Rackspace driveclient update (12pm MST)
        0 12 * * * /root/scripts/driveclient-update > /dev/null

    The only way I can get the mail to stop entirely is to use > /dev/null 2>&1, but then I won't get error emails either. This is happening on three different CentOS servers: two are 6.3 and one is 6.4. NOTE: I have read over and over that > /dev/null is supposed to send stdout there and prevent the email when the script produces nothing but stdout, so it works for at least some people; I cannot figure out why it is not working on these servers. Here's an example of where /dev/null is supposed to work: http://www.alphadevx.com/a/384-Suppressing-Cron-Job-Email-Notifications
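
    Since > /dev/null already discards stdout, cron mail in this situation almost always means the script is writing to stderr. A diagnostic sketch, with an assumed log path, to capture what is actually being mailed:

        # temporarily append stderr to a file to see what the script emits
        0 12 * * * /root/scripts/driveclient-update > /dev/null 2>> /var/log/driveclient-err.log

    Once the stderr output has been identified and fixed (or silenced inside the script), the original > /dev/null line should again produce mail only on real errors.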

    Read the article

  • Separate virtual networks with the same subnet range on 2 interfaces

    - by Coolpet
    I'm having some problems with routing in the following setup: I have a server with 2 interfaces, each carrying one alias in the same subnet. The 2 interfaces are connected to 2 switches which are separated from each other.

    Infrastructure:

        eth0        192.168.16.2/20
        eth0:eth0   192.168.1.222/20
        eth1        192.168.32.3/20
        eth1:eth1   192.168.1.223/20

    I have a PC which has the IP address 192.168.1.3/24. The problem is this: if the PC is on subnet 1, I can ping it; if the PC is on subnet 2, I can't. traceroute shows the route going across 192.168.1.222, and ping -I 192.168.1.223 192.168.1.3 does not work on subnet 2. The ARP entries show the MAC address belonging to the correct interface (eth1 on subnet 2). How can I force the server to look on both interfaces of the same-ranged subnet for a specific IP? It searches only the first subnet. The routing table has these 2 entries:

        192.168.0.0/20 dev eth0  proto kernel  scope link  src 192.168.1.222
        192.168.0.0/20 dev eth1  proto kernel  scope link  src 192.168.1.223
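
    One way to make traffic leave through the interface that owns the source address is policy routing with iproute2; a sketch (the table names and numbers are arbitrary choices, not existing configuration):

        # register two extra routing tables
        echo "100 sub1" >> /etc/iproute2/rt_tables
        echo "101 sub2" >> /etc/iproute2/rt_tables
        # duplicate the connected route into each table with the matching source address
        ip route add 192.168.0.0/20 dev eth0 src 192.168.1.222 table sub1
        ip route add 192.168.0.0/20 dev eth1 src 192.168.1.223 table sub2
        # select the table by source address
        ip rule add from 192.168.1.222 table sub1
        ip rule add from 192.168.1.223 table sub2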

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load? "top" and similar tools aren't the answer, because they show either CPU or memory usage, but not both at the same time. What I need is a single command which I might be able to type as it happens, something that will figure out any of: "System is trying to swap 8 GB of RAM to disk because process X ...", "process X seeks all over the disk", or "process X uses 400% CPU". So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

        1235 cp        - disk thrashing
          87 chrome    - uses 2 GB of RAM
         137 nfs_bench - uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers which I can analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", and is quickly overwhelmed when it comes to "resident size", "virtual memory" or "process life cycle". My argument goes like this: a user notices a problem, for which there can be thousands of reasons (well, almost :-)), and wants to know its source. The current solutions give me lots of numbers, and I need to know what these numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem, so what the tool should do is look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this one produces many IRQs, this one allocates a lot of RAM (and it's still growing)". That would be a relatively short list, and it would be much simpler for someone new to this to locate the culprit there than in the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM; the machine ought to swap itself to death, but of course this is a misinterpretation of the data that is easy to make).
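
    For what it's worth, no single stock tool produces exactly that list, but a few one-liners sketch a per-resource "top offender" view (ps is from procps; iostat needs the sysstat package):

        ps -eo pid,comm,%cpu --sort=-%cpu | head -n 6   # top CPU consumers
        ps -eo pid,comm,rss --sort=-rss | head -n 6     # top resident-memory consumers
        iostat -xd 1 3                                  # per-device I/O saturation
        vmstat 1 5                                      # si/so columns reveal swap thrashing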

    Read the article

  • After my laptop wakes up from sleeping/hibernating, the LCD/brightness is very low. How can I set it to default?

    - by meder
    In Power Management Preferences, on the AC Power tab, I have brightness set to 100%, and "Dim display when idle" is not checked. I know for sure my LCD is capable of going brighter, because if I hit Fn+F7 it resets the monitor brightness and settings for a few seconds, but then the resolution breaks and the brightness goes back down. PS: the OS is Debian Lenny (I set the tags, but for clarification) and the laptop is a ThinkPad.

    Read the article

  • What logs / process stats to monitor on an Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server and Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking the health of the system. I'm thinking I can look at various parameters output by the ps command and compare them over time to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command I can run as a cron job late at night; how often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid"; is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP, and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks.
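
    On the periodic-check question: for ext2/3/4, the forced fsck is driven by mount-count and interval settings that tune2fs can inspect and relax; a sketch, assuming the filesystem lives on /dev/sda1 (an assumption):

        tune2fs -l /dev/sda1 | grep -Ei 'mount count|check'   # show current counters and intervals
        tune2fs -c 50 -i 180d /dev/sda1                       # check every 50 mounts or 180 days, whichever comes first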

    Read the article

  • Setting up a Samba PDC - error with testparm

    - by Rungano
    Hi guys, I have installed a Samba PDC, but when I test the Samba configuration file I get errors like this:

        Invalid combination of parameters for service homes. Map system can only
        work if create mask includes octal 010 (S_IXGRP).

    My configuration file is as follows:

        [homes]
           comment = Home Directories
           path = /home_srv1/%u
           valid users = %S
           read only = No
           create mask = 0660
           directory mask = 0770
           browseable = No

    I tried to Google but with no luck; Server Fault is always my best hope. Thanks for helping out.
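
    For reference, that testparm message means a map system = yes (presumably set in [global]) requires the group-execute bit in create mask. A sketch of the two usual fixes, pick one:

        create mask = 0670    ; in [homes]: adds S_IXGRP (octal 010), which map system needs

    or, if DOS system-attribute mapping is not needed, in [global]:

        map system = no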

    Read the article

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the Java app should be run by another user. (The OS I'm using is Debian Squeeze.) I already have this:

        /bin/su - $USER -c "cd $PATH; echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
        PID=$!
        /bin/su - $USER -c "echo $PID > $PIDFILE"

    But this will of course only save the PID of the /bin/su process instead of the PID of the Java process it creates.
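
    A common workaround is to let the inner shell background the Java process and record $! itself; a sketch (APP_DIR is a placeholder, and note that $PATH is a risky variable name in the original, since it clobbers the shell's search path):

        # the escaped \$! expands inside the su shell, yielding the Java PID
        /bin/su - "$USER" -c "cd '$APP_DIR' && $JAVA -Xmx256m -jar app.jar -d > /dev/null 2>&1 & echo \$! > '$PIDFILE'"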

    Read the article

  • memtest86+ crashing on server

    - by user148723
    We have a few Dell 1950 servers. One of these servers runs CentOS 6.3 and reboots randomly, so I suspected hardware (no log entries are generated); the other 4 servers do not randomly reboot. We ran memtest86+ on all 5 servers, and on 3 of them memtest86+ crashes, displaying an odd and colorful screen, as if a video card had failed. However, when I tested the old memtest86 (not +), none of the servers crashed. I also tested other RAM-testing utilities; no tool failed. Have any of you guys experienced this? Thanks.

    Read the article

  • RAID 5 GPT partitioning

    - by user39325
    I have a Dell PowerEdge R710 server with five 1 TB disks, all of them in RAID 5. I was trying to install CentOS, but it says "Your boot partition is on a disk using the GPT partitioning scheme...". I read somewhere that CentOS can't install on a disk larger than 2 TB, so I made some partitions smaller, but it's not working. PS: I am going to install Proxmox on it, but Proxmox also won't accept disks larger than 2 TB.

    Read the article

  • mutt isn't sending large messages

    - by Guy
    I'm using mutt in the following way:

        echo <MESSAGE> | mutt -s <SUBJECT> -- <TO-ADDR>

    This usually works when I try a small message (~10 lines in the body), but when I try a very large message (~200 lines) the email just isn't received. Any ideas?
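
    One thing worth ruling out is the pipe itself: feeding the body from a file behaves the same for any size, so a sketch like this isolates whether the shell invocation or the mail path is at fault (message.txt is a placeholder):

        mutt -s "$SUBJECT" -- "$TO" < message.txt

    If that also fails, the MTA log (e.g. /var/log/mail.log) will usually show whether the message was rejected for size.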

    Read the article

  • Help remembering name of boot CD (maybe it wasn't a live CD)

    - by daneee
    I am struggling to remember the name of a boot CD I have since lost the disc for. It was great for cloning discs and resetting passwords. It's NOT UBCD4Win, and it definitely wasn't Knoppix. I have checked the LiveCD list and can't seem to find it there by sorting. I seem to remember it was called something like "GS Tools", but that might be more or less completely wrong. It had an unusual but memorable name, which makes me wonder how I came across it in the first place!

    Read the article

  • Can't create LVM VG due to "not found (or ignored by filtering)"

    - by James
    I'm planning to use LVM for KVM, but when I try to create a VG it fails. How can I create my VG and LV? Thanks.

        [root@server ~]# vgcreate virtual-machines /dev/sda
          Device /dev/sda not found (or ignored by filtering).
          Unable to add physical volume '/dev/sda' to volume group 'virtual-machines'.
        [root@server ~]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3             2.0T  929G  976G  49% /
        tmpfs                 3.9G  124K  3.9G   1% /dev/shm
        /dev/sda1             194M   57M  128M  31% /boot
        [root@server ~]# pvscan
          No matching physical volumes found
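
    The error is expected here: /dev/sda is already partitioned, and its partitions (sda1, sda3) are mounted, so LVM filters the whole disk out. A sketch of the normal sequence on a genuinely free disk or partition; /dev/sdb1 is hypothetical:

        pvcreate /dev/sdb1                       # label the partition as an LVM physical volume
        vgcreate virtual-machines /dev/sdb1      # create the volume group on it
        lvcreate -L 50G -n vm1 virtual-machines  # carve out a logical volume for a guest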

    Read the article

  • Prevent rmdir -p from traversing above a certain directory

    - by thepurplepixel
    I hacked together this script to rsync some files over SSH. The --remove-source-files option of rsync removes the files it transfers, which is what I want, but I also want the directories those files were placed in to be gone. The current part of the find command, -exec rmdir -p {} \;, tries to remove the parent directory (in this case /srv/torrents) as well, but fails because it doesn't have the right permissions. What I'd like to do is stop rmdir from traversing above the directory find is run in, or find another solution to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I thought it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory? Thanks in advance!

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='<destination directory>'
        SOURCE='/srv/torrents/'
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats -m --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -mindepth 1 -type d -empty -prune -exec rmdir -p \{\} \;
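
    With GNU find, the -delete action is a simpler alternative: it processes entries depth-first and never removes anything above the starting directory, so the last line of the script could become:

        find "$SOURCE" -mindepth 1 -type d -empty -delete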

    Read the article

  • Set LD_LIBRARY_PATH and CLASSPATH on cluster nodes before running a Hadoop job

    - by Ashish Sharma
    I need to set LD_LIBRARY_PATH and CLASSPATH before running a job on a cluster. In LD_LIBRARY_PATH I need to add the location of some native libraries, and in CLASSPATH some jars, which are required while running the job; these are already available on my cluster nodes. I have a 3-node cluster, and I need to set LD_LIBRARY_PATH and CLASSPATH on all 3 data nodes so that the jars are available while the job runs.
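
    The conventional place for per-node environment like this is hadoop-env.sh on every node; the file path and the /opt locations below are assumptions, and HADOOP_CLASSPATH is the standard hook for extra jars:

        # e.g. in /etc/hadoop/conf/hadoop-env.sh (location varies by distribution)
        export LD_LIBRARY_PATH=/opt/native-libs:$LD_LIBRARY_PATH
        export HADOOP_CLASSPATH=/opt/jars/mylib.jar:$HADOOP_CLASSPATH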

    Read the article

  • File descriptor linked to socket or pipe in proc

    - by primero
    I have a question regarding file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc, ls -la /proc/1234/fd, I get the following output:

        lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
        lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
        l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
        lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
        lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log

    I get the meaning of a file descriptor, and from my example I understand file descriptors 0, 1, 2 and 6: they are tied to physical resources on my computer, and I guess 5 is connected to some resource on the network (because of the socket). But what I don't understand is the meaning of the numbers in the brackets: do they point to some property of the resource? Also, why are some of the links "broken"? And lastly, as long as I'm asking questions already :), what is a pipe?
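
    For reference, the bracketed numbers are inode numbers in the kernel's pipe and socket pseudo-filesystems, and they can be traced to their endpoints; a sketch, assuming lsof is installed and reusing the inodes from the question:

        lsof | grep 2744159739    # every process holding an end of this pipe
        ss -e | grep 2744160313   # match the socket inode (the ino: field) to a connection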

    Read the article

  • What could be causing LVM errors on first boot after install in Debian?

    - by ianfuture
    Hi, I've installed Debian (Lenny) on a machine at home. During the install it was set up with a /boot partition; the rest was encrypted, with LVM on top of that and all the other partitions inside the LVM. After the install completed, on first boot it asked for the password to decrypt the drives (the same password for both), then showed an error saying LVM could not find a physical device with a particular UUID, or something similar. The LVM install spans two HDs, one 120 GB and one 40 GB. The 120 GB disk is master on its IDE cable and holds /boot; the 40 GB disk is slave on the other IDE cable. Is there anything that can be done to rescue this install, or to diagnose the problem? It took ages to install because of the time spent encrypting the drives, and I'd rather not go through that again. :( Thanks, Ian.

    Read the article

  • How to set the laptop screen brightness programmatically?

    - by zls
    I'm currently migrating to Openbox without a GNOME session. In Unity I can use the vendor keys to set the screen brightness, but in Openbox I'm on my own. Writing to /sys/class/backlight/acpi_video0/brightness works fine; the problem is that I need sudo to set the brightness, and that doesn't work with keyboard mappings. xbacklight -get/-set doesn't do or output anything, and I don't really want to use xrandr --brightness. Are there any other options, or a way to fix the problems with xbacklight or acpi_video0?
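
    One pattern that fits keyboard mappings is a tiny root-owned helper plus a passwordless sudoers rule; all paths and names below are assumptions:

        #!/bin/sh
        # /usr/local/bin/setbacklight -- install root-owned, mode 0755
        echo "$1" > /sys/class/backlight/acpi_video0/brightness

    with a sudoers rule added via visudo (the user name is a placeholder):

        youruser ALL=(root) NOPASSWD: /usr/local/bin/setbacklight

    An Openbox keybinding can then execute, e.g., sudo /usr/local/bin/setbacklight 50 without a password prompt.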

    Read the article

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard-link rsync archive for daily backups, saved to a loop file. This is great, because each backup only takes up as much space as what has changed each day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental destruction of the backup data: as it stands now, users can actually change, destroy or add to backed-up data they originally owned. I've been looking for a way to mount this filesystem similarly to an ro mount option, but something that would still allow rw access to root; I've had absolutely no luck. In other words, I want users to be able to view and copy their backed-up data without being able to change it, with the data keeping its original permissions. I've got no real preference as far as the filesystem goes, as long as it's a standard Unix filesystem that can preserve permissions, supports hard links, and denies write access to users without actually stripping the w permission from everything.
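
    One trick that matches this wish list without a special filesystem is a read-only bind mount: users browse the read-only view while root keeps read-write access through the original mount point. A sketch (the paths are assumptions; older kernels need the two-step remount):

        mount --bind /backup /srv/backup-ro
        mount -o remount,ro,bind /srv/backup-ro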

    Read the article
