Search Results

Search found 24666 results on 987 pages for 'cooperative linux'.

Page 61 of 987

  • Arch Linux - strange behaviour after installing fglrx

    - by kosto
    I have a problem with drivers on Arch Linux. I installed Catalyst through the unofficial catalyst repo, as the wiki says:

        pacman -S catalyst catalyst-utils
        aticonfig --initial

    After this I rebooted the system. KDM loaded successfully, but when I tried to switch to a console (Ctrl+Alt+1/2/3) I saw only strange dots, as if the pixels of the text were scattered across the whole screen. I was able to go back to KDM and enter my account details, though. This gave me a hang just before KDE loaded. Here's a video showing the actions above: http://glothriel.org/arch/arch_problem.ogg Anybody know what caused the problem? I can still chroot in to fix things. Thanks for your interest. The same thing happens on GNOME/GDM; this is my second try at installing Catalyst on Arch. The open drivers drain the battery twice as fast.

    EDIT: OK, I found a solution, so I'm posting it in case someone else shares my problem. Catalyst does not support KMS, so you need to disable it from GRUB. You must know where your /etc and /boot partitions are mounted; if you have only one partition for / it's even simpler. Mount / on /mnt:

        mount /dev/sdaX /mnt    # where X is the number of the partition holding /
        arch-chroot /mnt
        nano /etc/default/grub

    and add the line:

        GRUB_CMDLINE_LINUX="nomodeset"

    Save and quit, then run (this will delete your Windows GRUB configuration):

        grub-mkconfig -o /boot/grub/grub.cfg
        exit
        umount /mnt
        reboot

    Read the article

  • Rsync push files from Linux to Windows - SSH issue: connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

        rsync -e "ssh -l "piyush"" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine and I am running the command on Linux (CentOS 5.5), I get the following error message:

        ssh: connect to host 10.0.0.60 port 22: Connection refused
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
        [root@localhost sync]# ssh [email protected]
        ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine, so I tried installing OpenSSH and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux and it works fine, but I want the command to run on the Linux machine so that I can embed it in a shell script. Can you suggest anything I am missing? I already have cwRsync installed on Windows and running in daemon mode using the --daemon option, and I am able to log in over SSH from the Windows machine to the Linux machine. When I issue the command below, it just blocks for 120 seconds (the timeout I specified) and then exits saying there was a timeout:

        rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    After starting rsync on Windows I checked that rsync is running, the Windows firewall settings are set to minimal, and on the Linux machine I stopped the iptables service so that port 873 (the default rsync port) is not blocked. What can be the possible reason that the Linux machine is not able to connect to the rsync daemon on the Windows machine?
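    Since cwRsync is running as a daemon on the Windows side (listening on port 873) rather than behind an SSH server, the rsync daemon syntax, without -e ssh, may be closer to what is needed. A rough sketch, assuming a module named temp has been defined in the cwRsync rsyncd.conf pointing at d:\temp (the module name is an assumption for illustration, not taken from the question):

        # talk to the rsync daemon directly on port 873 instead of tunnelling over SSH
        rsync -Wgovz --timeout 120 --delay-updates --remove-sent-files \
            /usr/local/src/piyush/sync/* rsync://10.0.0.60/temp/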

    Read the article

  • Weird permission issue with POSIX ACLs, NFS v3 on Linux

    - by jon
    I have two Linux systems, both running Debian Squeeze. Versions of (I think) the stuff involved are:

        kernel: 2.6.32-5-xen-amd64
        ii  nfs-kernel-server  1:1.2.2-4squeeze2   support for NFS kernel server
        ii  libnfsidmap2       0.23-2              An nfs idmapping library
        ii  nfs-common         1:1.2.2-4squeeze2   NFS support files common to client and server
        ii  portmap            6.0.0-2             RPC port mapper

    (The client doesn't have nfs-kernel-server involved.) I have a directory with ACLs:

        # file: dirname
        # owner: jon
        # group: foogroup
        # flags: -s-
        user::rwx
        user:www-data:rwx
        group::r-x
        group:foogroup:rwx
        mask::rwx
        other::r-x
        default:...

    There are two users, neither one of which owns the directory:

        uid=3001(jake) gid=3001(jake) groups=3001(jake),104(wheel),3999(foogroup)
        uid=3005(nic) gid=3005(nic) groups=3005(nic),3999(foogroup)

    The jake user can create files in the directory without issues. The nic user can't. All UIDs/GIDs are the same on the client and server. I've verified (packet sniffing) that the right uids/gids get sent via AUTH_UNIX are correct -- uid=gid=3005, auxiliary gids=3005,3999 -- and that the server replies with NFS3ERR_ACCESS, which the kernel on the client maps to EACCES (Permission denied). Can anyone help me here?
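    One way to double-check what the server itself believes it should grant, sketched here with the directory name from the listing above (this is a diagnostic sketch, not an implied cause):

        # on the server: confirm the effective rights the ACL mask leaves for the group entries
        getfacl dirname

        # on the server: confirm which groups the NFS server resolves for the user; note that
        # if rpc.mountd runs with --manage-gids, the gid list sent via AUTH_UNIX is ignored
        # and the server's own group database is used instead
        id nic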

    Read the article

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try was wondershaper, which I heard about here on SuperUser. Unfortunately, it is useful for exactly the opposite situation to mine: it works on the client side, not on the Internet side. My second attempt used the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3 Mbit up and 3 Mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
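    A minimal sketch of one way this kind of per-subnet limit is sometimes expressed, assuming traffic toward the subnet leaves the router via eth0.113 and traffic from the subnet leaves via eth2; note that in tc units "3mbps" means 3 megabytes per second while "3mbit" means 3 megabits per second, and the interface names and rates below are illustrative assumptions rather than a verified fix for this setup:

        # shape traffic going TO 192.168.13.0/24 (egress on the VLAN interface)
        tc qdisc del dev eth0.113 root 2>/dev/null
        tc qdisc add dev eth0.113 root handle 1: htb default 10
        tc class add dev eth0.113 parent 1: classid 1:10 htb rate 3mbit ceil 3mbit

        # shape traffic coming FROM 192.168.13.0/24 (egress on the uplink); unclassified
        # traffic from other subnets is left unshaped because no default class is set
        tc qdisc add dev eth2 root handle 1: htb
        tc class add dev eth2 parent 1: classid 1:13 htb rate 3mbit ceil 3mbit
        tc filter add dev eth2 protocol ip parent 1:0 prio 1 u32 \
            match ip src 192.168.13.0/24 flowid 1:13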

    Read the article

  • Gentoo Linux useful utilities

    - by Alakdae
    I want to make a list of utilities that come in handy in Gentoo (general Linux tools available in all distributions are also appreciated). What tools and commands do you use and consider helpful in administration of a Gentoo server? I will update the list with commands from answers from time to time.

    eclean
    Utility for cleaning distfiles and binary packages.
    Usage example: eclean distfiles
    Usage example output: Cleans out the files in /usr/portage/distfiles. Pretty handy.
    Package: app-portage/gentoolkit

    eix
    Very useful tool for getting information about a package. Similar to "emerge -s" but much faster and more precise.
    Usage example: eix gentoolkit
    Usage example output: Shows information about the package, such as available versions, masked versions, installed versions and description.
    Package: app-portage/eix

    eix-test-obsolete
    Checks the system for obsolete, redundant, or uninstalled entries in package.keywords, package.mask, package.unmask, package.use and package.cflags.
    Usage example: eix-test-obsolete
    Usage example output: Shows non-matching entries, redundant entries, and uninstalled entries.
    Package: app-portage/eix

    equery
    Another very useful tool for getting information about packages (listing package files, checking which files belong to which package and much more).
    Usage example: equery b emerge
    Usage example output: Shows which package installed a file called emerge.
    Package: app-portage/gentoolkit

    genlop
    Utility for extracting information about emerged ebuilds.
    Usage example: genlop -l --date yesterday
    Usage example output: Shows a list of packages that have been emerged yesterday.
    Package: app-portage/genlop

    glsa-check
    Checks whether the system is affected by GLSAs (security issues).
    Usage example: glsa-check -l affected
    Usage example output: List of GLSAs that the system is affected by.
    Package: app-portage/gentoolkit

    rc-update
    Utility for managing (adding, deleting) runlevel scripts.
    Usage example: rc-update add syslog-ng default
    Usage example output: Adds syslog-ng to the default runlevel.
    Package: sys-apps/baselayout

    revdep-rebuild
    Scans libraries and binaries for missing shared library dependencies.
    Usage example: revdep-rebuild
    Usage example output: Gathers binary and library information, checks for dependencies, rebuilds packages with missing dependencies.
    Package: app-portage/gentoolkit

    Read the article

  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded gigabit ethernet interfaces on the inside. We currently have budget for 2 Gbit/s. If we exceed that rate by more than 5% average for a month, then we'll be charged for the whole 10 Gbit/s capacity, quite a step up in dollar terms. So I want to limit this to 2 Gbit/s on the 10GbE interface. A TBF filter might be ideal, but this comment is of concern: "On all platforms except for Alpha, it is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates." Should I be using TBF or some other filter to apply this rate to the interface, and how would I do it? I don't understand the example given in the Traffic Control HOWTO, in particular "Example 9. Creating a 256kbit/s TBF":

        tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0
        tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps

    How is the 256 kbit/s rate calculated? In this example, 32000bps = 32,000 bytes per second, since tc uses bps to mean bytes per second. I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate? This is not a mistake: I tested this and it gave a rate close to 256K but not exactly that.
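    For reference, the arithmetic in that example works out as 32,000 bytes/s x 8 = 256,000 bits/s, i.e. 256 kbit/s. A minimal sketch of how a 2 Gbit/s cap might be expressed with a plain TBF root qdisc (the interface name, burst and latency values here are assumptions chosen to illustrate the syntax, not tested figures for this link):

        # cap egress on the external 10GbE interface at 2 Gbit/s
        # burst should be at least rate / HZ; with HZ=1000 that is roughly 250 KB per timer
        # tick, so 500k leaves some headroom
        tc qdisc add dev eth2 root tbf rate 2gbit burst 500k latency 50ms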

    Read the article

  • Hyperic HQ - Monitor process statistics for 50+ processes on Linux machine

    - by Chris
    Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually that all start with the prefix XYZ. I have created a query using the sigar shell: ps State.Name.sw=XYZ, which will give me a list of the processes that I want. What I need to do is define this list of processes through said query and collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end. Note: the Hyperic HQ server is installed on a Windows machine and I'm monitoring a Linux box via an agent. Thanks, Chris

    Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks.

        <!DOCTYPE plugin [
          <!ENTITY process-metrics SYSTEM "/pdk/plugins/process-metrics.xml">
        ]>
        <plugin>
          <server name="ABCStats">
            <config>
              <option name="process.query" description="Process Query" default="State.Name.sw=XYZ"/>
            </config>
            <metric name="Availability" alias="Availability"
                    template="sigar:Type=ProcState,Arg=%process.query%:State"
                    category="AVAILABILITY" indicator="true" units="percentage"
                    collectionType="dynamic"/>
            &process-metrics;
            <plugin type="autoinventory"/>
            <plugin type="measurement" class="org.hyperic.hq.product.MeasurementPlugin"/>
          </server>
        </plugin>

    Read the article

  • Oracle RDBMS Server 11gR2 Pre-Install RPM for Oracle Linux 6 has been released

    - by Lenz Grimmer
    Now that the certification of Oracle Database 11g R2 with Oracle Linux 6 and the Unbreakable Enterprise Kernel has been announced, we are glad to announce the availability of oracle-rdbms-server-11gR2-preinstall, the Oracle RDBMS Server 11gR2 Pre-install RPM package (formerly known as oracle-validated). Designed specifically for Oracle Linux 6, this RPM aids in the installation of the Oracle Database. In order to install Oracle Database 11g R2 on Oracle Linux 6, your system needs to meet a few prerequisites, as outlined in the Linux Installation Guides. Using the Oracle RDBMS Server 11gR2 Pre-install RPM, which is now available from the Unbreakable Linux Network or via the Oracle public yum repository, you can complete most of the pre-installation configuration tasks. The pre-install package is available for x86_64 only. Specifically, the package:

    - Causes the download and installation of various software packages and specific versions needed for database installation, with package dependencies resolved via yum
    - Creates the user oracle and the groups oinstall and dba, which are the defaults used during database installation
    - Modifies kernel parameters in /etc/sysctl.conf to change settings for shared memory, semaphores, the maximum number of file descriptors, and so on
    - Sets hard and soft shell resource limits in /etc/security/limits.conf, such as the number of open files, the number of processes, and stack size, to the minimum required based on the Oracle Database 11g Release 2 Server installation requirements
    - Sets numa=off in the kernel boot parameters for x86_64 machines

    Please see the release announcement for further details and instructions. Also take a look at Ginny Henningsen's "How I Simplified Oracle Database Installation on Oracle Linux" article on the Oracle Technology Network for a general description of how to perform the installation of the Oracle Database on Oracle Linux. While the article refers to Oracle Linux 5 and the former "oracle-validated" package, the steps for Oracle Linux 6 are still very similar (we're looking into updating that article for Oracle Linux 6).
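    As a rough illustration, installing the package from an already configured channel would look something like the following (the exact channel setup depends on whether the system is registered with ULN or uses the public yum repository):

        # on Oracle Linux 6: pulls in the required packages and applies the users, groups,
        # kernel parameters and resource limits described above
        yum install oracle-rdbms-server-11gR2-preinstall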

    Read the article

  • Linux: Managing users, groups and applications

    - by RN
    I am fairly new to Linux administration, so this may sound like quite a noob question. I have a VPS account with root access. I need to install Tomcat and Java on it, and later other open source applications as well. Installation for all of these is as simple as unzipping the .gz in a folder. My questions are:

    A) Where should I keep all these programs? In Windows, I typically have a folder called programs under C:\ where I unzip all applications. I plan to have something similar here as well. Currently, I have all these under an apps folder under /root, which I am guessing is a bad idea.

    B) To what group should Tom belong? I would need a user, say Tom, who can simply execute these programs. Do I need to create a new group, or just add Tom to some existing group?

    C) Finally, am I doing something really stupid by installing all these applications by simply unzipping them? I mean, an alternate way would be to use yum or RPM or something like that to install these applications. Given my familiarity (and tight budget), that seems too much to me. I feel uncomfortable running commands which I don't understand too well.
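    One common convention, sketched here with assumed names and paths (the user "tom", the group "appusers", the /opt/tomcat directory and the tarball name are illustrative choices, not requirements):

        # unpack third-party software under /opt rather than under /root
        mkdir -p /opt/tomcat
        tar xzf apache-tomcat-7.0.x.tar.gz -C /opt/tomcat --strip-components=1   # tarball name is illustrative

        # create a dedicated group and an unprivileged user to run it
        groupadd appusers
        useradd -m -g appusers -s /bin/bash tom
        chown -R tom:appusers /opt/tomcat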

    Read the article

  • Moving domain and keeping IMAP email - Linux Evolution, Mac Mail

    - by Douglas Squirrel
    This question is about keeping email during a server move, where the clients are Linux (me) and Mac (my wife) using IMAP. I receive email at [email protected] using a webmail service that my hosting company (1and1) provides. I read it via IMAP in Evolution, so I should have copies of all the emails on my local machine.

    I have just moved mydomain.com from one type of account to another, and the hosting company don't move my existing email on the server when I do this - I assume they move my account to a different mailserver, and don't choose to provide a migration path for the email to move too (yes, this is annoying). Before migrating, I backed up Evolution (File - Backup settings) and did a spot-check in the evolution-backup.tar.gz file to be sure that my mail was in there. After migrating, I restored (File - Restore settings) and had hoped that I would see all my mail again. Unfortunately, Evolution just shows me new mail sent to the account, not the old mail.

    Is there a way to get the old mail back on the mailserver, or at least displaying in Evolution, as it was before the move? If not, can I read it in some convenient way, e.g. in Evolution offline or in a text file (then I can pick the mails I really want to keep and resend them to myself)? Also, I am about to do a similar move for my wife's domain, [email protected]. She reads her mail on a Mac using IMAP to Apple Mail. Is there anything I can do to make the move smooth for her? (I have backed up [her user]/Library/Mail already, but not sure what to do once the move is done.)

    Read the article

  • What do "Unknown SSAP" and "Unknown DSAP" mean in tcpdump?

    - by lacker
    While trying to fix a problem with intermittently losing internet connection on a machine with a wireless connection to a router, I ran tcpdump and noticed packets with "Unknown SSAP" and "Unknown DSAP" errors coming at a rate of a few per second:

        20:27:21.703178 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 171
        20:27:21.724726 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 104
        20:27:21.746449 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe4 Information, send seq 0, rcv seq 16, Flags [Response], length 88
        20:27:21.970963 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe8 Information, send seq 0, rcv seq 16, Flags [Response], length 76
        20:27:22.016565 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 88
        20:27:22.038471 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 171

    What do "Unknown SSAP" and "Unknown DSAP" mean, and do they indicate a problem?

    Read the article

  • Oracle E-Business Suite 12 Certified on Oracle Linux 6 (x86-64)

    - by John Abraham
    Oracle E-Business Suite Release 12 (12.1.1 and higher) is now certified on 64-bit Oracle Linux 6 with the Unbreakable Enterprise Kernel (UEK). New installations of the E-Business Suite R12 on this OS require version 12.1.1 or higher. Cloning of existing 12.1 Linux environments to this new OS is also certified using the standard Rapid Clone process. There are specific requirements to upgrade technology components such as the Oracle Database (to 11.2.0.3) and Fusion Middleware as necessary for use on Oracle Linux 6. These and other requirements are noted in the Installation and Upgrade Notes (IUN) below.

    Certification for other Linux distros still underway: certifications of Release 12 with 32-bit Oracle Linux 6, 32-bit and 64-bit Red Hat Enterprise Linux (RHEL) 6 and the Red Hat default kernel are in progress.

    References:
    - Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86-64 (My Oracle Support Document 761566.1)
    - Cloning Oracle Applications Release 12 with Rapid Clone (My Oracle Support Document 406982.1)
    - Interoperability Notes Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (My Oracle Support Document 1058763.1)
    - Oracle Linux website

    Read the article

  • How to transfer files via infrared on Linux?

    - by arielnmz
    I know this is a way too old technology, but I've got some files inside a very old cellphone that I need to transfer to a very old computer. So far my infrared USB device works well; it's detected by the machine (lsusb output):

        Bus 002 Device 002: ID 0df7:0620 Mobile Action Technology, Inc. MA-620 Infrared Adapter

    I've tried to send the files over MMS and even email (the phone lacks Bluetooth, not to mention USB), but this cellphone's firmware doesn't let me attach the files. The file was originally transferred via IrDA, and the phone only has internal memory (a whole 2 million bytes! whoa!). I found a package called irda-utils, but it seems that there are only two executables: irdaping and irdadump. I think the dump utility might do the job (which as far as I can see is kind of a version of tcpdump but for IrDA), but I don't even know how to process the received frames. Could this question be what I'm looking for?

    EDIT: While reading through the Linux Infrared HOWTO I found the OpenObex project, which may be what I'm looking for...

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
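    As a rough worked example of the arithmetic: two levels of 256 hex-prefix directories each give 65,536 leaf directories, so three million files average out to roughly 46 files per directory, while a single level of 256 gives about 11,700 per directory. A minimal sketch of bucketing by hash prefix (the two-level layout and the choice of md5 are assumptions for illustration):

        # place a file under ./<aa>/<bb>/ based on the first two bytes of its name's md5
        f="abc.ext"
        h=$(printf '%s' "$f" | md5sum | cut -c1-4)
        d="./${h:0:2}/${h:2:2}"
        mkdir -p "$d" && mv "$f" "$d/"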

    Read the article

  • ClamAV eating up all available disk space

    - by Ra
    Today I found that my Red Hat server has run out of hard disk space. The culprit seems to be a program called ClamAV, which fills the /tmp directory with thousands of subfolders with names like clamav-004adb870cd79534. All these folders contain this:

        drwx------  2 root root 4.0K Apr 21 07:56 .
        drwxrwxrwt 68 root root  64K Apr 21 08:03 ..
        -rw-------  1 root root  18K Apr 21 07:56 COPYING
        -rw-------  1 root root 4.6M Apr 21 07:56 main.db
        -rw-------  1 root root  14K Apr 21 07:56 main.fp
        -rw-------  1 root root 1.5M Apr 21 07:56 main.hdb
        -rw-------  1 root root  901 Apr 21 07:56 main.info
        -rw-------  1 root root  33M Apr 21 07:56 main.mdb
        -rw-------  1 root root  16M Apr 21 07:56 main.ndb
        -rw-------  1 root root  217 Apr 21 07:56 main.zmd

    When I deleted them they came back and filled my hard drive again in about an hour. How do I go about this? Can I safely stop ClamAV? It seems to me that ClamAV is trying to upgrade unsuccessfully.
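    A minimal diagnostic sketch along those lines, assuming the repeated updates are driven by freshclam and that the exact service names on this system may differ (they vary between ClamAV packages):

        # run a signature update by hand and watch for errors (disk full, bad mirror, permissions)
        freshclam --verbose

        # once the failing updates have been stopped, clear out the leftover temporary databases
        find /tmp -maxdepth 1 -type d -name 'clamav-*' -exec rm -rf {} +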

    Read the article

  • MySQL 5.5 Available on Oracle Linux 6 and RHEL 6

    - by Bertrand Matthelié
    Following the availability of MySQL 5.5 on Oracle Linux 6 with the Unbreakable Enterprise Kernel, MySQL 5.5 is now also available on Red Hat Enterprise Linux 6 (RHEL 6) and Oracle Linux 6 with the Red Hat compatible kernel. MySQL users can download MySQL 5.5 Community Edition binaries for Oracle Linux and Red Hat Linux 6 here. MySQL customers can rely on Oracle Premier Support for MySQL when using the MySQL database on either Oracle Linux or Red Hat Enterprise Linux 6. In addition to offering direct Linux support to customers running RHEL 6, Oracle Linux 6, or a combination of both, Oracle also provides Oracle Linux 6 binaries, updates and errata for free via http://public-yum.oracle.com.

    Read the article

  • DON'T MISS THE ORACLE LINUX GENERAL SESSION @ORACLE OPENWORLD

    - by Zeynep Koch
    We have had great sessions today at OpenWorld, but tomorrow will be even better. The sessions you should not miss on Tuesday, Oct 2nd:

    General Session: Oracle Linux Strategy and Roadmap, 10:15am, Moscone South #103
    Wim Coekaerts, Sr. VP, Oracle Linux and Virtualization Engineering, will talk about the Oracle Linux strategy and what is coming in the next 12 months. This is one session you should not miss and people are already registering. Stop by to hear Wim and ask questions about Linux development.

    Top Technical Tips for Automatic and Secure Oracle Linux Deployments, 11:45am, Moscone South #270
    In this session, you will hear about deployment best practices and tips from Lenz Grimmer from Oracle and two Linux customers, Martin Breslin from SEI and Ed Bailey from Transunion, who will talk about their experiences and insights.

    Why Switch to Oracle Linux?, 3:30pm, Moscone South #270
    In this session you will learn why Oracle Linux is best for your enterprise. There will be an Oracle speaker, and Mike Radomski from SUNY will talk about why they chose Oracle Linux.

    Please also visit the Oracle Linux Pavilion. If you stop by one of our partners' booths you can enter the drawing for this beautiful, plush penguin. See you all tomorrow.

    Read the article

  • Linux (DUP!) ping packets

    - by Darkmage
    I can't seem to figure out what is going on here. The Linux machine I am using is running as a VM on a Win7 machine, using VirtualBox running as a service. If I ping the Win7 host I get an OK result:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.100
        PING 192.168.1.100 (192.168.1.100) 10(38) bytes of data.
        18 bytes from 192.168.1.100: icmp_seq=1 ttl=128 time=1.78 ms
        18 bytes from 192.168.1.100: icmp_seq=2 ttl=128 time=1.68 ms

    If I ping localhost I'm OK:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 localhost
        PING localhost (127.0.0.1) 10(38) bytes of data.
        18 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.255 ms
        18 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.221 ms

    But if I ping the gateway I get DUP packets:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 10(38) bytes of data.
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.27 ms
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.46 ms (DUP!)
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.1 ms
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.4 ms (DUP!)

    If I ping another machine on the same LAN I still get dups. Pinging remote hosts also gives (DUP!) results:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 www.vg.no
        PING www.vg.no (195.88.55.16) 10(38) bytes of data.
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.0 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.3 ms (DUP!)
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.3 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.6 ms (DUP!)
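    One way to narrow this down, sketched here under the assumption that the guest's interface is eth0 and the gateway is 192.168.1.1, is to capture the ICMP traffic while a ping is running and check whether the duplicate replies actually arrive twice on the wire (pointing at the bridged wireless adapter or the network) or only appear doubled inside the guest:

        # watch echo requests and replies to the gateway while the ping runs in another shell
        tcpdump -n -i eth0 'icmp and host 192.168.1.1'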

    Read the article

  • Dropped connections between Linux Servers in Data Center

    - by Emil H
    I have a number of Linux servers at a US-based datacenter. The servers were installed by the hosting company and are running Fedora Core. We're experiencing problems with dropped connections. The issue seems to be that when we attempt to connect to one of the other servers after a period of inactivity, the first connection attempt will fail, and sometimes the second. However, after that the connection succeeds and it works for a while. This happens for both MySQL connections and raw socket connections, but only seems to occur when connecting to some of our servers. The confusing part is that some of the servers for which we see different behaviors have identical hardware configuration and software. For example, it happens when connecting to a server called mysql2, but not for a server called mysql3. These servers were installed at the same time, with the same specifications. The problem can be reproduced somewhat reliably, but only after waiting fifteen minutes to half an hour. This makes it hard to diagnose, and even harder since I'm not really sure what to look for. I realize that connections sometimes fail and that we should write our applications to compensate for this, but these servers are all in the same data center. Why would it matter if two servers haven't communicated for a while? Does anybody have an idea what might be causing this? Is it a server configuration problem or a network problem that I should contact the hosting company about? What do I tell them to look for? Unfortunately our experience has been that the support staff doesn't investigate problems in depth unless we give them detailed directions.
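    A small diagnostic sketch for the next time a first attempt fails, assuming the servers reach each other over eth0 and MySQL listens on port 3306 (adjust the hostname, interface and port to the actual setup): watching whether the SYN goes unanswered, gets a RST, or whether address resolution is what stalls can tell you whether to point the hosting company at the network or at the hosts.

        # on the client, capture the failing connection attempt
        # (use the peer's IP address instead of mysql2 if name resolution is in doubt)
        tcpdump -n -i eth0 'host mysql2 and (arp or tcp port 3306)'

        # check whether the neighbour cache entry for the peer has gone STALE or FAILED
        ip neigh show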

    Read the article

  • Encryption setup for Linux NAS?

    - by Daniel
    There's a bazillion hard disk encryption HOWTOs, but somehow I can't find one that actually does what I want. Which is: I have a home NAS running Ubuntu, which is being accessed by a Linux and a Win XP client. (Hopefully Mac OS X soon...) I want to set up encryption for the home dirs on the NAS so that:

    - it does not interfere with the boot process (since the NAS is tucked away in a cupboard),
    - the home dirs are accessible as a regular file system on the client(s) (e.g. via SMB),
    - it is easy to use by 'normal' people (so it does not require SSH-ing to the NAS, mounting the encrypted partition on the command line, then connecting via SMB, and finally unmounting the partition after being done -- I can't explain that to my mom, or in fact to anyone),
    - it does not store the encryption key on the NAS itself,
    - it encrypts file metadata and content (i.e. it is safe against the 'RIAA' attack, where an intruder should not be able to identify which songs are in your MP3 collection).

    What I hoped to do was use Samba + PAM. The idea was that on connecting to the SMB server, I'd have to enter the password on the client, which sends it to the server for authentication, which would use the password to mount the encrypted partition, and would unmount it again when the session was closed. Turns out that doesn't really work, because SMB does not transmit the password in the plain and hence I can't configure PAM to use the incoming password to mount the encrypted partition. So... anything I'm overlooking? Is there any way in which I can use the password entered on the client (e.g. on SMB connect) to initiate mounting the encrypted dir on the server?

    Read the article

  • Bad results converting PDF to EPS on Linux

    - by Tim
    I'm having some trouble converting PDFs (created by Adobe Illustrator on a Mac) to EPS. I have tried several things but I am wondering if there is a better option. The following list is ordered by decreasing quality:

    1. inkscape --export-area-page --export-eps=out.eps in.pdf -- using the graphical program Inkscape works best, but is a bit slow;
    2. pdftops -eps in.pdf out.eps -- uses Poppler, works well and is fast;
    3. pdf2ps in.pdf out.eps -- uses Ghostscript and works OK for simple documents;
    4. convert in.pdf out.eps -- uses ImageMagick and always rasterizes the image.

    I haven't tested the following:

    - acroread -toPostScript -- uses acroread (Linux only)

    Some issues I've found:

    - Transparency is not supported in EPS, but instead of flattening the layers, most programs rasterize the image, producing big files and ugly graphs. Inkscape does this best by only rasterizing the unsupported area.
    - Gradients are rendered properly by Inkscape, but Poppler somehow chops up the gradient into many shapes of different colors.
    - Greek symbols are seemingly not supported by Ghostscript and are rasterized (using pdf2ps).

    What are your experiences for this kind of task? Did I forget certain programs and/or command-line options that improve quality? I found some posts on this, but not a (thorough) comparison of possibilities; please correct me if I'm wrong.

    Related posts: How to convert PDF to EPS? on TeX

    Read the article

  • Move files and resize partition automatically?

    - by Rob
    I'm in a bit of an odd situation. I've recently been working on switching from Debian to Arch, and I've got my home partition for both pointing to the same partition (different usernames, so that's not an issue). What I want to do is one of two things, either:

    1. Set up the user on Arch with the same username and group as on Debian, and have everything just sort of work! (See the sketch below.)
    2. Move the files I'd like to share between home folders to their own partition, and mount it with fstab.

    For the second one, I have around 150 GB of files that would need to be moved to their own partition, and I've got about 15 GB of free space on my home partition. So what I'd want to do is somehow make a 10 GB ext4 partition, move 10 GB-ish of files, expand the partition again, move files again, and so on until all the files are moved to their own partition. I can do it manually, but it'd be easier if I could say "Move 10 GB-ish of files from here to there, then resize it and repeat until I'm out of files". Is that even possible?
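    For the first option, what usually matters is that the numeric UID and GID match, not the username itself. A minimal sketch, assuming the Debian user's IDs are 1000/1000 and the shared partition is already mounted at /home (all names and numbers here are placeholders to check against the real system):

        # on Debian: find the numeric ids that own the existing home directory
        id myuser                      # e.g. uid=1000(myuser) gid=1000(myuser)

        # on Arch: create a matching group and user pointing at the same home,
        # without creating a new home directory (-M) since it already exists
        groupadd -g 1000 myuser
        useradd -u 1000 -g 1000 -d /home/myuser -M -s /bin/bash myuser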

    Read the article

  • LSI MegaRAID on Linux: back to Optimal after degradation, but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                 :
        RAID Level           : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                 : 2.727 TB
        Mirror Data          : 2.727 TB
        State                : Optimal
        Strip Size           : 256 KB
        Number Of Drives per span: 2
        Span Depth           : 3
        Default Cache Policy : WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy : WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy    : Disk's Default
        Encryption Type      : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message: "Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs will actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing." (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? Here is the information about the battery:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status                            : None
          Voltage                                    : OK
          Temperature                                : OK
          Learn Cycle Requested                      : No
          Learn Cycle Active                         : No
          Learn Cycle Status                         : OK
          Learn Cycle Timeout                        : No
          I2c Errors Detected                        : No
          Battery Pack Missing                       : No
          Battery Replacement required               : No
          Remaining Capacity Low                     : No
          Periodic Learn Required                    : No
          Transparent Learn                          : No
          No space to cache offload                  : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No

    Where could the problem be? I have disabled the alarms, but I do get them if they are enabled, and I don't know how to find the root of the problem.
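    A couple of commands that are often used to dig into this kind of state, sketched under the assumption that the MegaCli utility (sometimes installed as MegaCli64) is available on the box; adapter numbering and output file name are illustrative:

        # full battery backup unit status, including charge state and learn-cycle information
        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL

        # controller event log, to see when the degradation and cache policy change happened
        MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL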

    Read the article
