Search Results

Search found 36698 results on 1468 pages for 'old linux fan'.

  • How to force a resolution in linux?

    - by hi
    I have an HP LP3065, which requires a dual-link DVI cable to go up to 2560x1600. I do not have that cable, but I would settle for 1920x1200. However, the nVidia Display Settings (and the system display settings) only go up to 1280x1024, which looks horribly pixelated on the 30". How do I force the 1920x1200 resolution? I tried adding the mode to my xorg.conf file, but it still would not take it. I know my video card can do 1920x1200, since it works on a 24" monitor with the same DVI cable. My specs: Fedora 12, nVidia Quadro NVS 420, Intel Xeon E5530 CPU, 6 GB memory. Thanks
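
    One thing worth trying, sketched below on the assumption that the output is called DVI-I-1 (the real name comes from xrandr -q): define the mode at runtime with xrandr instead of in xorg.conf. The modeline is the standard CVT timing that cvt 1920 1200 prints.

        # assumption: the monitor is on output DVI-I-1; substitute whatever xrandr -q reports
        cvt 1920 1200
        xrandr --newmode "1920x1200_60.00"  193.25  1920 2056 2256 2592  1200 1203 1209 1245 -hsync +vsync
        xrandr --addmode DVI-I-1 "1920x1200_60.00"
        xrandr --output DVI-I-1 --mode "1920x1200_60.00"

    If that works, the same Modeline can then be copied into the Monitor section of xorg.conf to make it permanent.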

  • Linux needs to discover local SATA mirror before HBA-attached SCSI

    - by Ryan
    (none of the machines mentioned are in production) Hello, I'm trying to install CentOS 5.4, which wants to put the boot loader either in the boot sector of the boot drive (a local SATA mirror, recognized second as sdb) or in the MBR of an HBA-attached SCSI array (recognized first as sda). There's already a LILO install in the MBR of sdb, which keeps trying to boot first. If I zero out the MBR of sdb, would the boot loader at sdb1 be found and booted? I'm treating that as plan B; what I'd really like is to coax CentOS into finding the local mirror first and bringing it up as sda, but I haven't found info on how to do this anywhere.
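
    For the plan-B route, a minimal sketch: the boot code lives in the first 446 bytes of the MBR, so it can be wiped without touching the partition table. The device name is taken from the question; double-check it before running anything destructive.

        # wipe only the LILO boot code in sdb's MBR, keeping the partition table (bytes 446-511)
        dd if=/dev/zero of=/dev/sdb bs=446 count=1

    This only removes the stale LILO; whether anything then boots from sdb1 still depends on some other boot loader or the controller BIOS chaining to it.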

  • find command in Linux

    - by Martin
    My goal is to find all PDF files on a remote machine, so I resort to the useful find command. I type find ~ *.pdf or find ~ "*.pdf" and get nothing. I do the same on my own machine and also get nothing, yet a regular search from the desktop menu turns up quite a few PDF files. Would somebody please tell me what I am doing wrong?
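
    A sketch of the usual fix: find needs an explicit test such as -name, and the pattern has to be quoted so the shell does not expand it before find sees it.

        # match *.pdf anywhere under the home directory
        find ~ -name '*.pdf'
        # case-insensitive variant (also catches .PDF)
        find ~ -iname '*.pdf'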

  • Linux Scheduler (not using all cores on multi-core machine) RHEL6

    - by User512
    I'm seeing strange behavior on one of my servers (running RHEL 6). There seems to be something wrong with the scheduler. Here's the test program I'm using:

        #include <stdio.h>
        #include <stdlib.h>
        #include <stdbool.h>
        #include <unistd.h>

        /* busy-loop forever so each child should keep one core at 100% */
        void RunClient(int i) {
          printf("Starting client %d\n", i);
          while (true) {
          }
        }

        int main(int argc, char** argv) {
          for (int i = 0; i < 4; ++i) {
            pid_t p_id = fork();
            if (p_id == -1) {
              perror("fork");
            } else if (p_id == 0) {
              RunClient(i);
              exit(0);
            }
          }
          return 0;
        }

    This machine has a lot more than 4 cores, so we'd expect all four processes to run at 100%. When I check top, the CPU usage varies: sometimes it's split (100%, 33%, 33%, 33%), other times (100%, 100%, 50%, 50%). When I run the same test on another of our servers (running RHEL 5), there are no issues; it's 100%, 100%, 100%, 100% as expected. What's causing this and how can I fix it? Thanks
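
    A hedged sketch of one way to narrow this down, with <pid> standing in for one of the child PIDs: check whether the children are being confined to a subset of cores, and compare against explicit pinning.

        # which CPUs is the process allowed to run on?
        grep Cpus_allowed_list /proc/<pid>/status
        taskset -cp <pid>
        # pin it to a specific core for comparison (core 3 here is arbitrary)
        taskset -cp 3 <pid>

    If the allowed list already covers all cores and the usage still collapses, the difference between the RHEL 5 and RHEL 6 boxes is more likely scheduler behaviour than affinity.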

  • Krusader on Linux

    - by Guus
    In Krusader you can create a New Network Connection (CTRL-N). How can I save those connections? Right now I have to enter all the details every time.

  • Linux dd command: partition-to-partition copy

    - by Ben Jackson
    I just used the dd command to copy the contents of one partition over to another partition on another drive, like this:

        dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror

    The sda2 partition was 66GB and sdb2 is 250GB. I read that by doing this the extra space on the target drive will be wasted; is this true? I wasn't worried about losing the extra space for the time being. However, I just ran sudo kill -USR1 (PID) to view dd's current status, and it has written over 66GB of data. Will it continue writing until it reaches 250GB? If so, is there a way to stop the process without corrupting anything, as waiting for it to write blank space seems like a waste of time.
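
    A short sketch of how this usually plays out: dd reads until it hits the end of /dev/sda2 and then exits on its own, so it will not keep writing up to 250GB. Assuming the copied filesystem is ext2/3/4, the leftover space on sdb2 can be reclaimed afterwards:

        # ask the running dd for a progress report
        sudo pkill -USR1 -x dd
        # once dd has finished, grow the filesystem to fill the larger partition
        sudo e2fsck -f /dev/sdb2
        sudo resize2fs /dev/sdb2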

  • Force spin-down of external hard-drive on linux (raspberry pi)

    - by user258346
    I'm currently setting up a home server using a Raspberry Pi with an external hard drive connected via USB. However, the drive never spins down when idle. I already tried the hints provided at raspberrypi.org, without any success.

    1.) sudo hdparm -S5 /dev/sda returns

        /dev/sda:
        setting standby to 5 (25 seconds)
        SG_IO: bad/missing sense data, sb[]:  70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    2.) sudo hdparm -y /dev/sda returns

        /dev/sda:
        issuing standby command
        SG_IO: bad/missing sense data, sb[]:  70 00 04 00 00 00 00 0a 00 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    3.) sudo sdparm --flexible --command=stop /dev/sda returns

        /dev/sda: HDD 1234

    ...all without the drive spinning down. I use the following hardware: an Inateck FDU3C-2 dual-port USB 3.0 HDD docking station and a Western Digital WD10EZRX Green 1TB. Is it possible that the spin-down commands being sent are getting lost or ignored somewhere along the way?
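
    One more angle worth a try, as a hedged sketch (sg_start comes from the sg3-utils package, which may need installing first): some USB-SATA bridges reject the ATA commands that hdparm passes through but do accept a plain SCSI START STOP UNIT.

        # ask the bridge to stop the spindle via SCSI rather than ATA pass-through
        sudo sg_start --stop /dev/sda
        # then check whether the drive reports standby
        sudo hdparm -C /dev/sda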

  • Lazy umount or Unmounting a busy disk in Linux

    - by deed02392
    I have read that it is possible to umount a disk that is otherwise busy by using the 'lazy' option. The manpage has this to say about it:

        umount - unmount file systems
        -l     Lazy unmount. Detach the filesystem from the filesystem hierarchy
               now, and clean up all references to the filesystem as soon as it
               is not busy anymore. This option allows a "busy" filesystem to be
               unmounted. (Requires kernel 2.4.11 or later.)

    But what would be the point of that? I considered why we unmount partitions at all: to remove the hardware, or to perform operations on the filesystem that would be unsafe to do while it is mounted. In either of these cases, all a lazy unmount does, IMHO, is make it harder to determine whether the disk really is unmounted so that you can actually proceed. The only application for umount -l seems to be to let inexperienced users 'feel' like they've achieved something they haven't. Why would you use a lazy unmount?
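
    A small sketch of the scenario where it tends to get used, with a made-up mount point: a filesystem (say an NFS export whose server has gone away) is pinned by a process that cannot be killed right now, but you want it out of the namespace so nothing new can open files on it.

        # see which processes are still holding the mount open
        fuser -vm /mnt/stuck
        # detach it from the tree anyway; the real unmount happens when the last user exits
        umount -l /mnt/stuck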

  • Conditional dev|nfs mount in Linux

    - by o_O Tync
    I have a mount point (call it /media/question) and two possible devices: a physical HDD and a remote NFS folder. Sometimes I plug the device in physically; in other cases I mount it via NFS. Is there a way to specify both of them in fstab so that running mount /media/question prefers the physical volume and falls back to NFS when it is not available?
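
    fstab itself has no fallback mechanism, so here is a hedged sketch of one common workaround; the device name /dev/sdb1, the export server:/export/question and the ext4 type are all assumptions.

        # /etc/fstab: both entries present but neither mounted automatically
        #   /dev/sdb1               /media/question  ext4  noauto,defaults  0 0
        #   server:/export/question /media/question  nfs   noauto,defaults  0 0

        # small wrapper used instead of a bare `mount /media/question`
        if [ -b /dev/sdb1 ]; then
            mount /dev/sdb1 /media/question
        else
            mount -t nfs server:/export/question /media/question
        fi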

  • Looking for Linux text editor

    - by Daniel
    I'm looking for a VIM replacement. My key points are:

    - Extensible in a sane language (such as Python, Ruby, or even Lua; after vimscript anything will do). The GUI part should be extensible too, so no Sublime Text 2.
    - GUI, preferably GTK+.
    - Lightweight. I don't understand IDEs like Eclipse/NetBeans consuming up to 1 GB of RAM.
    - File browser panel.
    - Splits, tabs and windows. It should be possible to split views and tabs any number of times (or as long as they fit on screen).
    - VCS support (optional; especially Git).
    - Snippets & autocompletion (not mandatory, but I would very much love to have those).

    Any ideas?

  • Monitoring and terminating a hung process in Linux

    - by Yoav
    Hi, I'm writing a script that runs many simultaneous instances of the "dig" command. Once in a while (rarely, but it happens in every run since I invoke dig so many times) a dig command hangs with 0% CPU, so my script never terminates. I've created a monitor process for each dig command I run, which kills it after a while, but I was wondering whether there is a simpler and more efficient way to run a process with a pre-determined "expiration date", i.e. if the process runs for more than X seconds it gets a signal that terminates it. Thanks!
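
    A sketch of the usual shortcut, with placeholder dig arguments: the coreutils timeout wrapper (available in reasonably recent coreutils) does exactly this, and dig itself also has query-timeout options.

        # SIGTERM after 30 seconds, SIGKILL 5 seconds later if it ignores that
        timeout -k 5 30 dig example.com
        # alternatively, cap dig's own per-try timeout and retry count
        dig +time=2 +tries=1 example.com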

  • Linux: logging to diagnose a memory-hogging process

    - by helpmhost
    Hi, we have a VPS (it uses Virtuozzo). On a few occasions now, the VPS memory has been fully used up and no new connections could be made to the server over SSH, SMTP, or POP; the only thing that still works is connecting to the web service. Luckily, Plesk is running on the VPS and we have been able to reboot it through Plesk (and see that the RAM is 100% used). I would like to find out what process is causing this. I have a feeling it's MySQL, but I don't really know. Is there some sort of logging I could implement that would help me find the cause the next time it happens? Thanks.
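
    One lightweight option, sketched with an assumed file name and log path: have cron record the biggest memory consumers every few minutes, so the history is on disk after the next lock-up.

        # /etc/cron.d/memlog  (the file name and log path are placeholders)
        */5 * * * * root { date; ps -eo pid,rss,comm --sort=-rss | head -n 15; } >> /var/log/memlog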

  • WPA2 and the linux wireless tools

    - by Bill Grey
    I would like to know a distribution-independent way to connect to WPA2 wireless networks. Do the wireless tools (iwconfig and such) support WPA2, or is it necessary to use wpa_supplicant? Having to edit a config file every time when switching between many networks quickly gets frustrating. I am aware of tools like wicd, but would like to know whether there is a standard way to do this on all distributions without requiring third-party software.
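
    For what it's worth, the classic wireless tools only handle WEP; WPA/WPA2 needs wpa_supplicant. A sketch of keeping several networks in one config so nothing has to be edited when switching (SSIDs, passphrases and the interface name are placeholders):

        # /etc/wpa_supplicant.conf
        network={
            ssid="HomeNet"
            psk="home-passphrase"
        }
        network={
            ssid="OfficeNet"
            psk="office-passphrase"
        }

        # wpa_supplicant picks whichever configured network is in range
        wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
        dhclient wlan0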

  • Run Linux command as predefined user

    - by vijay.shad
    Hi all, I have created a shell script to start a server program:

        startup.sh start

    When the above command executes, it should start the server as adminuser. To achieve this, my script is written like this:

        SUBIT="su - adminuser -c "
        SERVER_BOX_COMMAND_A="Server"

        ##############
        # Function to start cluster
        function start(){
            $SUBIT "$SERVER_BOX_COMMAND_A"
        }

    When I execute the command it asks for a password. Is there any way to do this so that it will not ask for a password? I have seen this behavior in the JBoss startup script: it changes the user to jboss and then starts the JBoss server. I want my script to behave the same way.
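
    A hedged sketch of the usual answer: su only skips the password prompt when the caller is already root (barring PAM tweaks), so either run the startup script as root, or grant a passwordless sudo rule. The user names and command path below are assumptions.

        # /etc/sudoers.d/server  (edit with visudo)
        #   deployuser ALL=(adminuser) NOPASSWD: /opt/server/bin/Server

        # then the script can use:
        sudo -u adminuser /opt/server/bin/Server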

  • Measuring CPU load with hyperthreading on Linux

    - by dronus
    How can I get the true usage of a multicore, hyperthreading-enabled CPU? For example, consider a 2-core CPU exposing 4 virtual cores. A single-threaded workload shows up as 100% in top, as one of the virtual cores is completely used. The CPU and top work as expected, as if there were 4 real cores.

    With two threads, however, things get awkward: if all works well they are balanced across the two real cores, so we get 200% usage (two times 100%, plus two idle virtual cores) and all of the available CPU power is in use. That seems fine to me. But if the two threads run on a single real core, they still show up as two times 100%, i.e. 200% of virtual-core usage, while on the real side one core is sharing its power between the two threads, which therefore get only half of the total CPU power. So the usage numbers shown by top cannot be used to measure the total CPU workload.

    I also wonder how hyperthreading balances two virtual cores on one real core. If the two threads take different numbers of cycles, do the virtual cores 'adapt' so that both show a 100% load even though the real loads differ?
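
    A sketch of the tooling side of this, without resolving the accounting question: per-virtual-core utilisation plus the virtual-to-physical mapping at least shows when two busy threads have landed on the same physical core.

        # per-CPU utilisation, refreshed every second (from the sysstat package)
        mpstat -P ALL 1
        # map logical CPUs to physical cores and sockets
        lscpu --extended=CPU,CORE,SOCKET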

  • Linux cron spams me about php/suhosin, then stops

    - by acidzombie24
    My server emails me whenever any message goes to root, and cron sends me messages. Today I got over 300 emails from my server, all of which are:

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/suhosin.so' - /usr/lib/php5/20090626+lfs/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0

    I have no idea why. I went to debug it, but it stopped about 5 hours ago, so there's nothing I can look at except maybe logs. Why might this have happened? The disk isn't full and I have enough RAM available.
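
    A couple of checks that can pin this down, sketched with an assumed Debian-style config layout: find which ini file still loads the extension and whether the .so is actually present.

        # which php.ini / conf.d file references suhosin? (config path is an assumption)
        grep -r suhosin /etc/php5/
        # does the module file exist at the path from the warning?
        ls -l /usr/lib/php5/20090626+lfs/suhosin.so
        # does CLI PHP load it at all?
        php -m | grep -i suhosin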

  • How to make a specified directory the FTP home directory (Linux)

    - by Mirage
    I have a directory called /backups where the backups for all users are stored in dated folders. Now I want to create one FTP user who, on connecting via FTP, lands straight in that folder so he can download those backups. On my WHM/cPanel server I have Pure-FTPd installed. I don't want to create a full account for that user the way I have a website account for each user, just something that lets this user download those files. Any ideas?
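
    A hedged sketch using Pure-FTPd's virtual users (this assumes the PureDB authentication backend is enabled, which a stock cPanel build may or may not have; the user and group names are placeholders):

        # create a virtual FTP user whose home directory (and chroot) is /backups
        pure-pw useradd backupuser -u ftpuser -g ftpgroup -d /backups
        # commit the change to the PureDB database
        pure-pw mkdb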

  • linux audit - exclude a process that updates the time

    - by user185704
    I have set my auditd rules to log when the system time is changed. However, our servers are VMs and therefore suffer from clock drift, so we use a VMware tool to regularly synchronize the time. My problem now is that my audit logs are overwhelmed with time-change entries like this:

        Jun 1 15:08:39 ***** audispd: node=****** type=SYSCALL msg=audit(1338559719.053:344291): arch=c000003e syscall=159 success=yes exit=5 a0=7ffff2084050 a1=0 a2=144b a3=485449575f4c4c55 items=0 ppid=1 pid=1348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="vmtoolsd" exe="/usr/lib/vmware-tools/bin64/appLoader" key="time_change"

    How can I exclude this VMware tool from the audit, but still capture a user changing the time? Here are my current audit rules for capturing time changes:

        -a always,exit -F arch=b32 -S adjtimex -S settimeofday -k time_change
        -a always,exit -F arch=b32 -S clock_settime -k time_change

  • Linux Bluetooth [closed]

    - by Chris
    Not sure if this is the proper forum; please forgive me if it's not. I have a Lenovo S10 netbook on which I've installed Fedora 17 (the "LXDE spin"). So far pretty much everything works great except the on-board Bluetooth. lsusb shows the controller present (0a5c:2101 Broadcom Corp. Bluetooth Controller) and hcitool dev shows hci0 present. But when I put my mouse into pairing mode (a "Lenovo Bluetooth Laser Mouse" that pairs perfectly with a MacBook Pro, a MacBook Air, a Mac mini, and a Lenovo SL500 running Windows 7 with a USB dongle) and run either hcitool scan (which reports "Scanning ..." and returns to the shell prompt with no further information or error message) or bluetooth-wizard (from the gnome-bluetooth package) to detect the mouse, I get nothing. Frustrating! Thanks to anyone who can point me in the right direction!
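
    A quick sketch of the basics worth ruling out before digging deeper (hci0 is taken from the question):

        # is the adapter soft- or hard-blocked?
        rfkill list bluetooth
        # make sure the interface is actually up, then scan again
        sudo hciconfig hci0 up
        sudo hcitool scan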

  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system that is going to boot from, and host its rootfs on, an SSD. We are currently looking at using Intel X18-M SSDs. The filesystem structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point in trying to optimise the choice of filesystem that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?
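
    On the mount-option side, a hedged sketch of the common SSD-friendly settings (the device names and the choice of ext4 are assumptions, and TRIM via the discard option only helps if both the drive firmware and the kernel support it):

        # /etc/fstab
        /dev/sda1  /     ext4  noatime,discard  0  1
        /dev/sda2  /var  ext4  noatime,discard  0  2

    SMART itself works over the normal ATA path, so smartctl -a /dev/sda (from smartmontools) reports the drive's own health attributes.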
