Search Results

Search found 24933 results on 998 pages for 'arch linux'.


  • dynamic routing between openvpn tunnels

    - by pQd
    I'm thinking about using dynamic routing [OSPF or RIP] via OpenVPN tunnels. Right now I have a few offices connected in a full mesh, but this is not a scalable solution as we add more locations. I would like to avoid a situation where plenty of internal traffic is affected if one of the two VPN termination points that I plan to use goes down. Do you have a similar configuration working in production? If so, what routing daemon did you use: quagga? Something else? Did you encounter any problems? Thanks!
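    In case it helps, a minimal sketch of what the quagga side of an OSPF-over-OpenVPN setup might look like; the router-id, subnets and area below are made-up examples, not a tested production config:

        ! /etc/quagga/ospfd.conf (sketch)
        router ospf
         ospf router-id 10.8.0.1
         ! advertise the local office LAN and the OpenVPN tunnel subnet
         network 192.168.10.0/24 area 0
         network 10.8.0.0/24 area 0

    Each office would run zebra plus ospfd, with the tun interfaces brought up by OpenVPN in a routed (not bridged) topology, so a failed tunnel simply drops its OSPF adjacency and traffic reconverges over the remaining mesh links.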

    Read the article

  • Understanding Linux SCSI queue depths

    - by Troels Arvin
    I'm experimenting with the effects of different SCSI queue depth values on a Dell server running CentOS Linux 5.4 (x86_64). The server has two QLogic QLE2560 FC HBAs connected via multipathing to a storage system. The storage system has allocated two LUNs to the server, each connected through four paths in an active-active-active-active round-robin configuration. All in all, the two LUNs exist as eight /dev/sdX devices, represented by two devices in /dev/mpath. I currently adjust the queue depth values in /etc/modprobe.conf and check the result (after rebooting) by looking in the seventh column of /proc/scsi/sg/devices. Two questions related to that:
    1. Is there a way to adjust queue depths without rebooting or unloading the qla2xxx kernel module? E.g., can I echo a new queue depth value into some /proc- or /sys-like file to update the queue depth?
    2. If I set the queue depth to 128, is that 128 in total for all devices handled by the qla2xxx module, or 128 for each HBA (256 in total), or 128 for each of the eight /dev/sdX devices (1024 in total), or 128 for each of the two /dev/mpath/... devices (256 in total)?
    This is important for me to know so that my server doesn't flood the storage system, affecting other servers connected to it.
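    Not an authoritative answer, but as a sketch: on recent kernels the per-LUN queue depth is exposed as a writable sysfs attribute, so it can usually be changed at runtime without rebooting or reloading qla2xxx; the device name below is just an example.

        # current per-device (per-LUN) queue depth; sdb is an example device
        cat /sys/block/sdb/device/queue_depth
        # set a new value at runtime, no reboot or module reload needed
        echo 64 > /sys/block/sdb/device/queue_depth
        # the new value should also show up in column 7 of /proc/scsi/sg/devices
        cat /proc/scsi/sg/devices

    For what it's worth, the qla2xxx ql2xmaxqdepth parameter is generally described as a per-LUN limit, i.e. per /dev/sdX path, but that is worth verifying against the driver documentation for your kernel.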

    Read the article

  • How to roll-your-own live CD for safe home browsing

    - by user36533
    Hi, I'm interested in booting off flash (i.e. like a live CD) for more secure online banking at home.
    - I like SystemRescueCd, but AFAIK it doesn't have the wifi drivers. (These are convenient.)
    - The Ubuntu live CD has the wifi drivers, but also has a lot of stuff I don't need.
    - I'd like a way to save some basic config settings (e.g. wifi SSID and passphrase), so that wifi works on startup without having to re-enter the settings.
    What's the best way to roll my own slightly customized boot-from-flash live CD? Thanks, bill
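    One hedged approach, assuming an Ubuntu-based live USB stick: casper will reuse a casper-rw file on the stick when the system is booted with the "persistent" option, which is enough to keep things like wifi settings between boots. The mount point and size below are only examples.

        # on the FAT partition of the live USB stick, create a persistence file
        dd if=/dev/zero of=/media/usbstick/casper-rw bs=1M count=256
        mkfs.ext3 -F /media/usbstick/casper-rw
        # then add "persistent" to the kernel line in the stick's syslinux/grub config
        # so the overlay is used on every boot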

    Read the article

  • Attach radeon driver to specific PCI devices?

    - by genpfault
    I have two Radeon cards in this machine, a 6570 and a 6950: lspci | grep VGA: 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Turks [Radeon HD 6570] 02:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Cayman PRO [Radeon HD 6950] I'm trying to get VGA passthrough to work with KVM on Debian Wheezy, passing through the 6950 as a secondary video card to a Windows 7 guest. This works fine if I blacklist the radeon kernel module via /etc/modprobe.d/. If I remove the blacklist to run X11 (or even just a KMS console) on the 6570 the radeon module seems to attach to both cards: dmesg | egrep "01:00.0|02:00.0|radeon": pci 0000:01:00.0: [1002:6759] type 0 class 0x000300 pci 0000:01:00.0: reg 10: [mem 0xe0000000-0xefffffff 64bit pref] pci 0000:01:00.0: reg 18: [mem 0xf7e20000-0xf7e3ffff 64bit] pci 0000:01:00.0: reg 20: [io 0xe000-0xe0ff] pci 0000:01:00.0: reg 30: [mem 0xf7e00000-0xf7e1ffff pref] pci 0000:01:00.0: supports D1 D2 pci 0000:02:00.0: [1002:6719] type 0 class 0x000300 pci 0000:02:00.0: reg 10: [mem 0xd0000000-0xdfffffff 64bit pref] pci 0000:02:00.0: reg 18: [mem 0xf7d20000-0xf7d3ffff 64bit] pci 0000:02:00.0: reg 20: [io 0xd000-0xd0ff] pci 0000:02:00.0: reg 30: [mem 0xf7d00000-0xf7d1ffff pref] pci 0000:02:00.0: supports D1 D2 vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none vgaarb: device added: PCI:0000:02:00.0,decodes=io+mem,owns=none,locks=none vgaarb: bridge control possible 0000:02:00.0 vgaarb: bridge control possible 0000:01:00.0 pci 0000:01:00.0: Boot video device [drm] radeon kernel modesetting enabled. radeon 0000:01:00.0: setting latency timer to 64 radeon 0000:01:00.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used) radeon 0000:01:00.0: GTT: 512M 0x0000000040000000 - 0x000000005FFFFFFF [drm] radeon: 1024M of VRAM memory ready [drm] radeon: 512M of GTT memory ready. radeon 0000:01:00.0: irq 46 for MSI/MSI-X radeon 0000:01:00.0: radeon: using MSI. [drm] radeon: irq initialized. radeon 0000:01:00.0: WB enabled [drm] radeon: ib pool ready. [drm] radeon: power management initialized fbcon: radeondrmfb (fb0) is primary device fb0: radeondrmfb frame buffer device [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0 radeon 0000:02:00.0: enabling device (0000 -> 0003) radeon 0000:02:00.0: setting latency timer to 64 radeon 0000:02:00.0: VRAM: 2048M 0x0000000000000000 - 0x000000007FFFFFFF (2048M used) radeon 0000:02:00.0: GTT: 512M 0x0000000080000000 - 0x000000009FFFFFFF [drm] radeon: 2048M of VRAM memory ready [drm] radeon: 512M of GTT memory ready. radeon 0000:02:00.0: irq 49 for MSI/MSI-X radeon 0000:02:00.0: radeon: using MSI. [drm] radeon: irq initialized. radeon 0000:02:00.0: WB enabled [drm] radeon: ib pool ready. [drm] radeon: power management initialized fb1: radeondrmfb frame buffer device [drm] Initialized radeon 2.12.0 20080528 for 0000:02:00.0 on minor 1 [drm] radeon: finishing device. radeon 0000:02:00.0: ffff88041a941800 unpin not necessary [drm] radeon: ttm finalized pci-stub 0000:02:00.0: claimed by stub pci-stub 0000:02:00.0: irq 49 for MSI/MSI-X This causes the Win7 VM to bluescreen on boot. How can I configure things so that the radeon module only attaches to the 6570 and not the 6950?
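    A hedged sketch of one way to keep radeon away from the 6950 while still using it on the 6570: have pci-stub claim the Cayman by its PCI ID (1002:6719, taken from the dmesg above) before radeon loads, assuming pci-stub is built into the kernel or available in the initramfs.

        # /etc/default/grub: claim the 6950 early so radeon never binds to it
        GRUB_CMDLINE_LINUX="pci-stub.ids=1002:6719"
        # regenerate the grub config and the initramfs, then reboot
        update-grub
        update-initramfs -u

    The 6570 (1002:6759) is left alone, so X and KMS keep working on it, while the 6950 stays bound to pci-stub and remains available for passthrough.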

    Read the article

  • The People Who Support Linux

    Linux.com: "The Linux Foundation's individual members help to support the work of Linux creator Linus Torvalds and other important activities that advance Linux, while getting a variety of other fun and valuable benefits. The series begins with Matthew Fernandez, a senior application developer based in Sydney, Australia. Matthew has been using Linux since 2001 and just recently became a Linux Foundation member."

    Read the article

  • SMB from fedora 12 to windows network

    - by Jean
    Hello, I installed the Samba server and Samba client on Fedora 12 and permitted them through iptables. For some reason, I cannot seem to browse my Windows network. What did I do wrong, and how can I achieve this? Thanks, Jean
    [edit] - I want to browse the Windows workgroup
    [edit] - Set the workgroup name and restarted smb in the services; I can now browse the network, but only my Fedora 12 system is being shown
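    A hedged sketch of the usual pieces, in case it helps; the workgroup and host names are placeholders. The workgroup in smb.conf has to match the Windows machines, iptables has to allow UDP 137-138 and TCP 139/445, and browsing can be tested from the command line before touching the GUI:

        # /etc/samba/smb.conf, [global] section: workgroup must match the Windows side
        #   workgroup = WORKGROUP
        service smb restart
        service nmb restart                 # nmbd handles NetBIOS browsing/name service
        smbtree -N                          # list workgroups, hosts and shares seen on the network
        smbclient -L //SOMEWINDOWSHOST -N   # list the shares of one specific host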

    Read the article

  • Is it possible to be a Linux professional studying on your own?

    - by Marc Jr
    I read economics at university (nothing to do with Linux, is it? :P). I have some basic knowledge about the boot process, compiling the Linux kernel from source, and stuff like that. But of course I still have much to learn; sometimes an error appears and voila, I am lost. I have had Ubuntu, Fedora, openSUSE and Arch, and am using Gentoo now. I'd like to know what you Linux users, professionals and administrators think is the best way to learn Linux in a professional way. Is studying for and passing the LPIC test enough to work in the Linux world, or do I need to go to an IT university? I've heard LFS is a good way of learning about Linux; is that true? I've been thinking about doing LFS to learn more deeply about how Linux works and to learn scripting. Is it possible to do it this way? If anyone has a tip or a good way of doing it, maybe from someone who has done it, any tip is very welcome. Words from a person in love with Linux. :D The best, Marc

    Read the article

  • How to configure wpa_supplicant on RHEL6?

    - by Yang Jy
    I am running RHEL 6 on my laptop. I uninstalled the default NetworkManager so that I could configure the network entirely from the command line. The Ethernet part is okay, but I have a problem bringing up the wireless interface. What I get is:
    Bringing up interface wlan0: Determining IP information for wlan0... failed; no link present. Check cable?
    I did exactly what this article says. I am not sure if it is because the article is obsolete or something else. Please help.
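    Not a definitive recipe, but a sketch of the usual manual setup once NetworkManager is out of the picture; the SSID, passphrase and driver are placeholders and may need adjusting (the "no link present" message typically just means ifup ran before the card had associated):

        # generate a network block from the passphrase and append it to the config
        wpa_passphrase "MySSID" "mypassphrase" >> /etc/wpa_supplicant/wpa_supplicant.conf
        # start the supplicant by hand on wlan0 (driver is usually nl80211 or wext)
        wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -D nl80211
        # once associated, ask for an address
        dhclient wlan0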

    Read the article

  • process and memory issue on linux server

    - by zapping
    Need some assistance in analyzing Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4GB of RAM. When the website on it is being accessed, top displays this:
    PID   USER      PR NI VIRT RES  SHR  S %CPU %MEM TIME+    COMMAND
    23459 username1 16 0  151m 27m  8388 S 11.3 0.7  0:11.71  php5
    23730 username1 16 0  151m 28m  8388 S 11.3 0.7  0:03.87  php5
    23458 username1 16 0  151m 28m  8388 S  3.0 0.7  0:19.20  php5
    16202 mysql     15 0  459m 38m  4624 S  0.7 1.0  62:33.81 mysqld
    24141 nobody    15 0  311m 5832 2304 S  0.3 0.1  0:00.03  httpd
    Why does the command show php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and DB on a different server, but on it the process always shows as httpd and never php5. The site uses a MySQL DB. The problem is that the server load seems to go up to about 5.x when the website is accessed by about 16 users. When I run free -m, the output shows:
                 total  used  free  shared  buffers  cached
    Mem:          3941  3727   213       0      236    2734
    -/+ buffers/cache:   756  3184
    Swap:         4095     0  4095
    A lot of memory seems to be in cache and free memory is low. Even when the website is not accessed, leaving the machine pretty much idle for about two days, free memory showed just 190. When the site is accessed, free memory seems to drop to about 90MB and then climb back to about 150MB; it always seems to stay at just about 200MB. Is this somehow related to the server load showing 5.x? Will adding some more RAM resolve the load issue?
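    As a side note on reading that output, a sketch using the numbers above: the "Mem:" row counts the page cache as used, so the row that matters for "is the box out of memory" is "-/+ buffers/cache":

        free -m
        #                     total   used   free
        # Mem:                 3941   3727    213   <- "used" includes ~2.7GB of page cache
        # -/+ buffers/cache:           756   3184   <- only ~756MB really used by processes,
        #                                              ~3.1GB would be reclaimable if needed

    So a low "free" figure by itself is normal on Linux; a load of 5.x is more likely a CPU or I/O matter than a memory shortage, though that is only a guess from the numbers shown.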

    Read the article

  • Linux on Sony Vaio VPCEB1S1E

    - by Jaakko
    I bought a Sony Vaio VPCEB1S1E and was able to surf the net. Then I tried to install Ubuntu 9.04 and Linux Mint on it, but neither allows me access to the Internet. How can I configure Mint so that I can get online and fetch updates via apt-get?
    jaakko@jaakko-laptop ~ $ ifconfig -a
    lo       Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:12 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:720 (720.0 B) TX bytes:720 (720.0 B)
    pan0     Link encap:Ethernet HWaddr 46:83:d4:f4:36:bc BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
    wlan0    Link encap:Ethernet HWaddr 78:dd:08:c5:61:88 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
    wmaster0 Link encap:UNSPEC HWaddr 78-DD-08-C5-61-88-00-00-00-00-00-00-00-00-00-00 UP RUNNING MTU:0 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
    jaakko@jaakko-laptop ~ $ ping 8.8.8.8
    connect: Network is unreachable

    Read the article

  • SVN and WebSVN with different users access restriction on multiple repositories on linux

    - by user55658
    And first of all, sorry for my English. I've installed an Ubuntu Server 10.04.1 with apache2, subversion, svn_dav and websvn (and other services of course, like php5, mysql 5.1, etc). I've configured svn with multiple repositories, each one with different groups and users, like:
    /var/myrepos/repo1  group: mygroup1
    /var/myrepos/repo2  group: mygroup2
    /var/myrepos/repo3  user: johndoe
    With plain access through svn_dav it works perfectly, i.e. http://myserver/svnrepo1 is accessible only to users in mygroup1, with their Linux usernames and SVN passwords. It also works for the other repos with their users and groups. But when I try WebSVN, it shows all repos and doesn't care that a user in mygroup1 shouldn't be able to view repo2 (which is what I don't want). You can log in as any user in mygroup1, mygroup2, or as johndoe, and you get into all the repositories. I'll keep trying to find a solution and will post any news; if anyone can help me with this I'd appreciate it very much! Thanks for everything. Here are my files:
    /etc/apache2/mods-available/dav_svn.conf
    <Location /svnrepo1>
      DAV svn
      SVNPath /var/myrepos/repo1
      AuthType Basic
      AuthName "Repositorio Subversion de MD"
      AuthUserFile /etc/apache2/dav_svn.passwd
      Require valid-user
    </Location>
    <Location /websvn/>
      Options FollowSymLinks
      order allow,deny
      allow from all
      AuthType Basic
      AuthName "Subversion Repository"
      AuthUserFile /etc/apache2/dav_svn.passwd
      Require valid-user
    </Location>
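    A hedged sketch of one way to get per-repository restrictions that both mod_dav_svn and WebSVN respect: a path-based access file enforced by mod_authz_svn, which WebSVN can be pointed at as well. The usernames below are placeholders; the group and repository names follow the question.

        # /etc/apache2/dav_svn.authz (sketch)
        [groups]
        mygroup1 = alice,bob
        mygroup2 = carol
        [repo1:/]
        @mygroup1 = rw
        [repo2:/]
        @mygroup2 = rw
        [repo3:/]
        johndoe = rw

    That file would be referenced from each <Location> block, next to AuthUserFile, with AuthzSVNAccessFile /etc/apache2/dav_svn.authz (mod_authz_svn must be enabled), and recent WebSVN versions can reportedly reuse the same file from include/config.php via $config->useAuthenticationFile('/etc/apache2/dav_svn.authz'); treat the exact directive and function names as things to verify against the installed versions.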

    Read the article

  • Deploying a Git server in an AWS Linux instance

    - by Leroux
    I'm setting up a git server on my Linux instance in AWS. I tried doing it using these instructions, but in the end I always get stuck with a "Permission denied (publickey)" message. So here are my detailed steps; the client is my Windows machine running msysGit and the server is the AWS Ubuntu instance:
    1) I created the user git with a simple password.
    2) Created the ssh directory ~/.ssh
    3) On the client I created ssh keys using ssh-keygen -t rsa -b 1024; they got dropped in my /Users/[Name]/.ssh directory, and the id_rsa and id_rsa.pub key pair was created.
    4) Using notepad I copy-pasted the text into newly created files on the server in the ~/.ssh directory of my git user. ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub were copied.
    5) On the server I made the authorized_hosts file using "cat id_rsa.pub authorized_hosts" (while inside the .ssh directory)
    6) Now to test it, on my client machine I did ssh -v git@[ip.address]
    7) Result:
    debug1: Host 'ip.address' is known and matches the RSA host key.
    debug1: Found key in /c/Users/[Name]/.ssh/known_hosts:1
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Trying private key: /c/Users/[Name]/.ssh/identity
    debug1: Trying private key: /c/Users/[Name]/.ssh/id_rsa
    debug1: Offering public key: /c/Users/[Name]/.ssh/id_dsa
    debug1: Authentications that can continue: publickey
    debug1: No more authentication methods to try.
    Permission denied (publickey).
    I would appreciate any insight anyone can give me.
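    A hedged guess at the culprit plus a sketch of the usual server-side setup: sshd reads ~/.ssh/authorized_keys (not authorized_hosts), only the public key should ever leave the client, and the permissions matter. Paths are the ones from the question.

        # as the git user on the server
        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        cat id_rsa.pub >> ~/.ssh/authorized_keys     # append the .pub file only, never id_rsa
        chmod 600 ~/.ssh/authorized_keys
        rm -f ~/.ssh/id_rsa ~/.ssh/id_rsa.pub        # the client's key pair does not belong on the server
        # then retest from the Windows client
        ssh -v git@[ip.address]

    The debug output also shows the client ending up offering id_dsa rather than id_rsa, so forcing the intended key with ssh -i /c/Users/[Name]/.ssh/id_rsa may be worth a try as well.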

    Read the article

  • Video card not detected on Lenovo T410 in Linux

    - by wich
    I have a T410 with an nVidia NVS 3100M. This is not a hybrid system; there is no Optimus (no option in the BIOS for Optimus, and lspci in Linux as well as the Windows device manager only show the nVidia card). Using lspci I see the GPU as a present device; however, I cannot, for the life of me, get any video driver to work that will let me start an X session. Every time, X craps out with the error (EE) No devices detected. I have tried the nVidia binary blob (with nvidia-xconfig, and made sure there is no nvidia support in the kernel), I have tried nouveau, I have tried nv, I have even tried generic vesa; nothing will work. When I compare the dmesg output I get when loading the nvidia kernel module, I see that it is missing some lines compared with another system that also has an nVidia card; specifically, the line mentioning the GPU name (3100M) is not there. I have checked every option in the BIOS; there is nothing to control except for the BIOS video output port, which is set to the LCD panel. I have no idea anymore what the problem may be, or even how I can diagnose this further. Any help will be appreciated.

    Read the article

  • Rsync push files from Linux to Windows. SSH issue - connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:
    rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"
    where 10.0.0.60 is my Windows machine and I am running the above command on Linux (CentOS 5.5), I get the following error message:
    ssh: connect to host 10.0.0.60 port 22: Connection refused
    rsync: connection unexpectedly closed (0 bytes received so far) [sender]
    rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
    [root@localhost sync]# ssh [email protected]
    ssh: connect to host 10.0.0.60 port 22: Connection refused
    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine, so I tried installing OpenSSH and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux and it works fine. But I want the command to run on the Linux machine so that I can embed it in a shell script. Can you suggest what I am missing? I already have cwRsync installed on Windows and running in daemon mode using the --daemon option, and I am able to log in using ssh from the Windows machine to the Linux machine. When I issue the command below, it just blocks for 120 seconds (the timeout I specified) and exits saying there was a timeout:
    rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"
    After starting rsync on Windows, I checked that rsync is running. Windows firewall settings are set to minimal, and on the Linux machine I stopped the iptables service so that port 873 (the default rsync port) is not blocked. What can be the possible reason that the Linux machine is not able to connect to the rsync daemon on the Windows machine?
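    Since there is no sshd on the Windows side but cwRsync is already running with --daemon, one hedged alternative is to skip ssh entirely and talk to the daemon on port 873 using the double-colon (module) syntax. The module name "temp" and its path below are assumptions that have to match the rsyncd.conf on the Windows machine, and TCP 873 must be allowed inbound by the Windows firewall.

        # rsyncd.conf on the Windows/cwRsync side (sketch)
        #   [temp]
        #   path = /cygdrive/d/temp
        #   read only = false
        # from the Linux box, no ssh involved:
        rsync -Wgovz --timeout 120 --delay-updates --remove-sent-files \
            /usr/local/src/piyush/sync/* 10.0.0.60::temp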

    Read the article

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try at this was with wondershaper, which I heard about on SuperUser here. Unfortunately, it is useful for exactly the opposite situation to the one I have: it's useful on the client side, not on the Internet side. My second attempt was using the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:
    tc qdisc add dev eth0.113 root handle 13: htb default 100
    tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
    tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
    tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
    tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2
    What I want this to do is restrict the bandwidth of VLAN 113 (subnet 192.168.13.0/24) to 3Mbit up and 3Mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
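    Two things stand out, offered as hedged suggestions rather than a definitive fix: in tc, "mbps" means megabytes per second (use "mbit" for megabits), and eth0.113 only carries the download direction; the upload direction leaves the router on eth2, so it has to be shaped there. A sketch:

        # download direction: everything leaving eth0.113 is already traffic to 192.168.13.0/24
        tc qdisc add dev eth0.113 root handle 13: htb default 10
        tc class add dev eth0.113 parent 13: classid 13:10 htb rate 3mbit ceil 3mbit
        # upload direction: classify by source subnet where it leaves for the Internet (eth2)
        tc qdisc add dev eth2 root handle 1: htb
        tc class add dev eth2 parent 1: classid 1:13 htb rate 3mbit ceil 3mbit
        tc filter add dev eth2 protocol ip parent 1:0 prio 1 u32 \
            match ip src 192.168.13.0/24 flowid 1:13
        # traffic from the other subnets stays unclassified on eth2 and is not shaped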

    Read the article

  • Weird permission issue with POSIX ACLs, NFS v3 on Linux

    - by jon
    I have two Linux systems, both running Debian Squeeze. Versions of (I think) the relevant pieces are:
    kernel: 2.6.32-5-xen-amd64
    ii nfs-kernel-server 1:1.2.2-4squeeze2 support for NFS kernel server
    ii libnfsidmap2 0.23-2 An nfs idmapping library
    ii nfs-common 1:1.2.2-4squeeze2 NFS support files common to client and server
    ii portmap 6.0.0-2 RPC port mapper
    (The client doesn't have nfs-kernel-server involved.) I have a directory with ACLs:
    # file: dirname
    # owner: jon
    # group: foogroup
    # flags: -s-
    user::rwx
    user:www-data:rwx
    group::r-x
    group:foogroup:rwx
    mask::rwx
    other::r-x
    default:...
    There are two users, neither of whom owns the directory:
    uid=3001(jake) gid=3001(jake) groups=3001(jake),104(wheel),3999(foogroup)
    uid=3005(nic) gid=3005(nic) groups=3005(nic),3999(foogroup)
    The jake user can create files in the directory without issues. The nic user can't. All UIDs/GIDs are the same on the client and server. I've verified by packet sniffing that the right uids/gids get sent via AUTH_UNIX (uid=gid=3005, auxiliary gids=3005,3999) and that the server replies with NFS3ERR_ACCESS, which the kernel on the client maps to EACCES (Permission denied). Can anyone help me here?

    Read the article

  • Gentoo Linux useful utilities

    - by Alakdae
    I want to make a list of utilities that come in handy in Gentoo (general Linux tools available in all distributions are also appreciated). What tools and commands do you use and consider helpful in the administration of a Gentoo server? I will update the list with commands from the answers from time to time.
    eclean: utility for cleaning distfiles and binary packages. Usage example: eclean distfiles (cleans out the files in /usr/portage/distfiles; pretty handy). Package: app-portage/gentoolkit
    eix: very useful tool for getting information about a package. Similar to "emerge -s" but much faster and more precise. Usage example: eix gentoolkit (shows information about the package, such as available versions, masked versions, installed versions and description). Package: app-portage/eix
    eix-test-obsolete: checks the system for obsolete, redundant and uninstalled entries in package.keywords, package.mask, package.unmask, package.use and package.cflags. Usage example: eix-test-obsolete (shows non-matching entries, redundant entries, and uninstalled entries). Package: app-portage/eix
    equery: another very useful tool for getting information about packages (listing package files, checking which files belong to which package, and much more). Usage example: equery b emerge (shows which package installed a file called emerge). Package: app-portage/gentoolkit
    genlop: utility for extracting information about emerged ebuilds. Usage example: genlop -l --date yesterday (shows a list of packages that were emerged yesterday). Package: app-portage/genlop
    glsa-check: checks whether the system is affected by GLSAs (security issues). Usage example: glsa-check -l affected (lists the GLSAs the system is affected by). Package: app-portage/gentoolkit
    rc-update: utility for managing (adding, deleting) runlevel scripts. Usage example: rc-update add syslog-ng default (adds syslog-ng to the default runlevel). Package: sys-apps/baselayout
    revdep-rebuild: scans libraries and binaries for missing shared library dependencies. Usage example: revdep-rebuild (gathers binary and library information, checks for dependencies, and rebuilds packages with missing dependencies). Package: app-portage/gentoolkit

    Read the article

  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded gigabit ethernet interfaces on the inside. We currently have budget for 2Gbit/s. If we exceed that rate by more than 5% average for a month, then we'll be charged for the whole 10Gbit/s capacity: quite a step up in dollar terms. So, I want to limit this to 2Gbit/s on the 10GbE interface. A TBF filter might be ideal, but this comment is of concern: "On all platforms except for Alpha, it is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates." Should I be using TBF or some other filter to apply this rate to the interface, and how would I do it? I don't understand the example given here: Traffic Control HOWTO, in particular "Example 9. Creating a 256kbit/s TBF":
    tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0
    tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps
    How is the 256kbit/s rate calculated? In this example, 32000bps = 32k bytes per second, since tc uses bps to mean bytes per second. I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate? This is not a mistake: I tested this and it gave a rate close to 256k, but not exactly that.
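    On the arithmetic: 32000 bytes/s times 8 bits is 256,000 bits/s, which is where the 256kbit/s in the example's title comes from; burst and limit only control how bursty the token bucket is allowed to be, not the average rate. For the actual goal, a hedged, untested sketch (the interface name is a placeholder, and tc also accepts explicit bit units, which avoids the bytes-vs-bits confusion):

        # cap egress on the 10GbE interface at 2 gigabit/s with a plain TBF
        tc qdisc add dev eth0 root tbf rate 2gbit burst 1mb latency 50ms
        # burst must be large enough for the rate (roughly rate/HZ worth of bytes,
        # and always well above the MTU), otherwise the achieved rate falls short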

    Read the article

  • Hyperic HQ- Monitor process statistics for 50+ processes on Linux machine

    - by Chris
    Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually, all starting with the prefix XYZ. I have created a query using the sigar shell: ps State.Name.sw=XYZ, which gives me the list of processes that I want. What I need to do is define this list of processes through said query and collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end.
    Note: the Hyperic HQ server is installed on a Windows machine and I'm monitoring a Linux box via an agent.
    Thanks, Chris
    Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks.
    <!DOCTYPE plugin [ <!ENTITY process-metrics SYSTEM "/pdk/plugins/process-metrics.xml"> ]>
    <plugin>
      <server name="ABCStats">
        <config>
          <option name="process.query" description="Process Query" default="State.Name.sw=XYZ"/>
        </config>
        <metric name="Availability" alias="Availability"
                template="sigar:Type=ProcState,Arg=%process.query%:State"
                category="AVAILABILITY" indicator="true" units="percentage"
                collectionType="dynamic"/>
        &process-metrics;
        <plugin type="autoinventory"/>
        <plugin type="measurement" class="org.hyperic.hq.product.MeasurementPlugin"/>
      </server>
    </plugin>

    Read the article

  • Moving domain and keeping IMAP email - Linux Evolution, Mac Mail

    - by Douglas Squirrel
    This question is about keeping email during a server move, where the clients are Linux (me) and Mac (my wife) using IMAP. I receive email at [email protected] using a webmail service that my hosting company (1and1) provides. I read it via IMAP in evolution, so I should have copies of all the emails on my local machine. I have just moved mydomain.com from one type of account to another, and the hosting company don't move my existing email on the server when I do this - I assume they move my account to a different mailserver, and don't choose to provide a migration path for the email to move too (yes, this is annoying). Before migrating, I backed up Evolution (File - Backup settings) and did a spot-check in the evolution-backup.tar.gz file to be sure that my mail was in there. After migrating, I restored (File - Restore settings) and had hoped that I would see all my mail again. Unfortunately, Evolution just shows me new mail sent to the account, not the old mail. Is there a way to get the old mail back in the mailserver, or at least displaying in Evolution, as it was before the move? If not, can I read it in some convenient way, e.g. in Evolution offline or in a text file (then I can pick the mails I really want to keep and resend them to myself)? Also, I am about to do a similar move for my wife's domain, [email protected]. She reads her mail on a Mac using IMAP to Apple Mail. Is there anything I can do to make the move smooth for her? (I have backed up [her user]/Library/Mail already, but not sure what to do once the move is done.)

    Read the article

  • Oracle RDBMS Server 11gR2 Pre-Install RPM for Oracle Linux 6 has been released

    - by Lenz Grimmer
    Now that the certification of the Oracle Database 11g R2 with Oracle Linux 6 and the Unbreakable Enterprise Kernel has been announced, we are glad to announce the availability of oracle-rdbms-server-11gR2-preinstall, the Oracle RDBMS Server 11gR2 Pre-install RPM package (formerly known as oracle-validated). Designed specifically for Oracle Linux 6, this RPM aids in the installation of the Oracle Database. In order to install the Oracle Database 11g R2 on Oracle Linux 6, your system needs to meet a few prerequisites, as outlined in the Linux Installation Guides. Using the Oracle RDBMS Server 11gR2 Pre-install RPM, which is now available from the Unbreakable Linux Network or via the Oracle public yum repository, you can complete most of the pre-installation configuration tasks. The pre-install package is available for x86_64 only. Specifically, the package:
    - Causes the download and installation of various software packages and specific versions needed for database installation, with package dependencies resolved via yum
    - Creates the user oracle and the groups oinstall and dba, which are the defaults used during database installation
    - Modifies kernel parameters in /etc/sysctl.conf to change settings for shared memory, semaphores, the maximum number of file descriptors, and so on
    - Sets hard and soft shell resource limits in /etc/security/limits.conf, such as the number of open files, the number of processes, and stack size, to the minimum required based on the Oracle Database 11g Release 2 Server installation requirements
    - Sets numa=off in the kernel boot parameters for x86_64 machines
    Please see the release announcement for further details and instructions. Also take a look at Ginny Henningsen's "How I Simplified Oracle Database Installation on Oracle Linux" article on the Oracle Technology Network for a general description of how to perform the installation of the Oracle Database on Oracle Linux. While the article refers to Oracle Linux 5 and the former "oracle-validated" package, the steps for Oracle Linux 6 are still very similar (we're looking into updating that article for Oracle Linux 6).
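    For completeness, the install step itself on a machine registered with ULN or pointed at the public yum repository is a single command (a sketch; run as root):

        # pulls in the dependency packages, creates the oracle user and the oinstall/dba
        # groups, and applies the sysctl and limits changes described above
        yum install oracle-rdbms-server-11gR2-preinstall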

    Read the article

  • Linux: Managing users, groups and applications

    - by RN
    I am fairly new to Linux administration, so this may sound like quite a noob question. I have a VPS account with root access. I need to install Tomcat and Java on it, and later other open-source applications as well. Installation for all of these is as simple as unzipping the .gz in a folder. My questions are:
    A) Where should I keep all these programs? In Windows, I typically have a folder called programs under c:\ where I unzip all applications. I plan to have something similar here as well. Currently, I have all of these in an apps folder under /root, which I am guessing is a bad idea.
    B) To what group should Tom belong? I need a user, say Tom, who can simply execute these programs. Do I need to create a new group, or just add Tom to some existing group?
    C) Finally, am I doing something really stupid by installing all these applications by simply unzipping them? I mean, an alternative would be to use yum or RPM or something like that to install them. Given my familiarity (and tight budget), that seems too much to me. I feel uncomfortable running commands which I don't understand too well.
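    Not the only sane layout, but a common convention is to unpack third-party software under /opt and run it as an unprivileged user; a hedged sketch, where the group name, user name and the Tomcat tarball version are just examples:

        # create a group and an unprivileged user for running the applications
        groupadd appusers
        useradd -m -g appusers tom
        # unpack under /opt instead of /root and hand ownership to that user
        tar xzf apache-tomcat-6.0.29.tar.gz -C /opt
        chown -R tom:appusers /opt/apache-tomcat-6.0.29
        # start it as tom rather than root
        su - tom -c "/opt/apache-tomcat-6.0.29/bin/startup.sh"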

    Read the article

  • What do "Unknown SSAP" and "Unknown DSAP" mean in tcpdump?

    - by lacker
    While trying to fix a problem with intermittently losing the internet connection on a machine with a wireless connection to a router, I ran tcpdump and noticed packets with "Unknown SSAP" and "Unknown DSAP" errors coming at a rate of a few per second:
    20:27:21.703178 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 171
    20:27:21.724726 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 104
    20:27:21.746449 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe4 Information, send seq 0, rcv seq 16, Flags [Response], length 88
    20:27:21.970963 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe8 Information, send seq 0, rcv seq 16, Flags [Response], length 76
    20:27:22.016565 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 88
    20:27:22.038471 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 171
    What do "Unknown SSAP" and "Unknown DSAP" mean, and do they indicate a problem?

    Read the article
