Search Results

Search found 24814 results on 993 pages for 'linux distro'.


  • Linux (DUP!) ping packets

    - by Darkmage
    I can't seem to figure out what is going on here. The Linux machine I am using runs as a VM on a Win7 machine, using VirtualBox running as a service. If I ping the Win7 host I get an OK result:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.100
        PING 192.168.1.100 (192.168.1.100) 10(38) bytes of data.
        18 bytes from 192.168.1.100: icmp_seq=1 ttl=128 time=1.78 ms
        18 bytes from 192.168.1.100: icmp_seq=2 ttl=128 time=1.68 ms
    If I ping localhost I'm also OK:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 localhost
        PING localhost (127.0.0.1) 10(38) bytes of data.
        18 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.255 ms
        18 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.221 ms
    But if I ping the gateway I get DUP packets:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 10(38) bytes of data.
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.27 ms
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.46 ms (DUP!)
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.1 ms
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.4 ms (DUP!)
    If I ping another machine on the same LAN I still get dups. Pinging remote hosts also gives (DUP!) results:
        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 www.vg.no
        PING www.vg.no (195.88.55.16) 10(38) bytes of data.
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.0 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.3 ms (DUP!)
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.3 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.6 ms (DUP!)
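    A quick way to see where the duplicates come from is to capture the echo replies with their link-layer headers and compare the source MAC addresses; replies arriving twice from two different MACs usually point at the VirtualBox bridged/promiscuous setup rather than at the remote host. A minimal diagnostic sketch (the interface name eth0 is an assumption):
        # Show ICMP echo replies together with the Ethernet header (-e)
        tcpdump -n -e -i eth0 'icmp and icmp[icmptype] == icmp-echoreply'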

    Read the article

  • Dropped connections between Linux Servers in Data Center

    - by Emil H
    I have a number of Linux servers at a US-based data center. The servers were installed by the hosting company and are running Fedora Core. We're experiencing problems with dropped connections. The issue seems to be that when we attempt to connect to one of the other servers after a period of inactivity, the first connection attempt will fail, and sometimes the second. However, after that the connection succeeds and it works for a while. This happens for both MySQL connections and raw socket connections, but only seems to occur when connecting to some of our servers. The confusing part is that some of the servers for which we see different behavior have identical hardware and software configurations. For example, it happens when connecting to a server called mysql2, but not for a server called mysql3. These servers were installed at the same time, with the same specifications. The problem can be reproduced somewhat reliably, but only after waiting fifteen minutes to half an hour. This makes it hard to diagnose, and even harder since I'm not really sure what to look for. I realize that connections sometimes fail, and that we should write our applications to compensate for this, but these servers are all in the same data center. Why would it matter if two servers haven't communicated for a while? Does anybody have an idea what might be causing this? Is it a server configuration problem or a network problem that I should contact the hosting company about? What do I tell them to look for? Unfortunately our experience has been that the support staff doesn't investigate problems in depth unless we give them detailed directions.
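    The "first connection after idle fails" pattern is often a stateful device (firewall or load balancer) between the servers quietly expiring idle flow or ARP state. A rough sketch of what to capture while reproducing the problem, so the hosting company gets concrete evidence (interface name and hostnames are assumptions):
        # Watch the failing attempt from the client side: a SYN that is never
        # answered (as opposed to an immediate RST) suggests state dropped in transit.
        tcpdump -n -i eth0 host mysql2 and tcp port 3306

        # Current TCP keepalive settings; more aggressive keepalives on pooled
        # connections can work around middleboxes that expire idle sessions.
        sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes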

    Read the article

  • Encryption setup for Linux NAS?

    - by Daniel
    There's a bazillion hard disk encryption HOWTOs, but somehow I can't find one that actually does what I want. Which is: I have a home NAS running Ubuntu, which is being accessed by a Linux and a Win XP client. (Hopefully Mac OS X soon...) I want to set up encryption for the home dirs on the NAS so that:
        - it does not interfere with the boot process (since the NAS is tucked away in a cupboard),
        - the home dirs are accessible as a regular file system on the client(s) (e.g. via SMB),
        - it is easy to use by 'normal' people (so it does not require SSH-ing to the NAS, mounting the encrypted partition on the command line, then connecting via SMB, and finally unmounting the partition after being done; I can't explain that to my mom, or in fact to anyone),
        - it does not store the encryption key on the NAS itself,
        - it encrypts file metadata and content (i.e. it is safe against the 'RIAA' attack, where an intruder should not be able to identify which songs are in your MP3 collection).
    What I hoped to do was use Samba + PAM. The idea was that on connecting to the SMB server, I'd have to enter the password on the client, which sends it to the server for authentication, which would use the password to mount the encrypted partition, and would unmount it again when the session was closed. Turns out that doesn't really work, because SMB does not transmit the password in the clear and hence I can't configure PAM to use the incoming password to mount the encrypted partition. So... anything I'm overlooking? Is there any way in which I can use the password entered on the client (e.g. on SMB connect) to initiate mounting the encrypted dir on the server?
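    For the record, the approach usually suggested for this exact setup is pam_mount together with Samba passing authentication through PAM; it only works if plaintext SMB authentication is acceptable on the LAN (which is precisely the limitation described above), so treat this as a rough, unverified sketch rather than a recipe. The device path, user name and option names here are assumptions to be checked against the pam_mount and smb.conf documentation:
        # /etc/security/pam_mount.conf.xml -- open a per-user LUKS volume at login,
        # keyed by the password PAM receives (hypothetical device path):
        #   <volume user="mom" fstype="crypt"
        #           path="/dev/disk/by-uuid/XXXXXXXX" mountpoint="/home/mom" />
        #
        # smb.conf -- let Samba hand the password to PAM (requires plaintext auth):
        #   obey pam restrictions = yes
        #   encrypt passwords = no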

    Read the article

  • Bad results converting PDF to EPS on Linux

    - by Tim
    I'm having some trouble converting PDFs (created by Adobe Illustrator on a Mac) to EPS. I have tried several things but I am wondering if there is a better option. The following list is ordered by decreasing quality:
        - inkscape --export-area-page --export-eps=out.eps in.pdf: using the graphical program Inkscape works best, but is a bit slow;
        - pdftops -eps in.pdf out.eps: uses Poppler, works well and is fast;
        - pdf2ps in.pdf out.eps: uses Ghostscript and works OK for simple documents;
        - convert in.pdf out.eps: uses ImageMagick and always rasterizes the image.
    I haven't tested the following:
        - acroread -toPostScript: uses acroread (Linux only).
    Some issues I've found:
        - Transparency is not supported in EPS, but instead of flattening the layers, most programs rasterize the image, producing big files and ugly graphs. Inkscape does this best by only rasterizing the unsupported area.
        - Gradients are rendered properly by Inkscape, but Poppler somehow chops up the gradient into many shapes of different colors.
        - Greek symbols are seemingly not supported by Ghostscript and are rasterized (using pdf2ps).
    What are your experiences with this kind of task? Did I forget certain programs and/or command-line options that improve quality? I found some posts on this, but not a (thorough) comparison of possibilities; please correct me if I'm wrong. Related posts: How to convert PDF to EPS? on TeX
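    For batch conversions the Inkscape route scripts easily; a small sketch using the same flags as above (Inkscape 0.4x-era CLI):
        # Convert every PDF in the current directory to EPS via Inkscape
        for f in *.pdf; do
            inkscape --export-area-page --export-eps="${f%.pdf}.eps" "$f"
        done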

    Read the article

  • Move files and resize partition automatically?

    - by Rob
    I'm in a bit of an odd situation. I've recently been working on switching from Debian to Arch, and I've got my home partition for both pointing to the same partition (different usernames, so that's not an issue). What I want to do is one of two things, either: set up a user on Arch with the same username and group as on Debian, and have everything just sort of work! OR move the files I'd like to share between home folders to their own partition, and mount it with fstab. For the second one, I have around 150GB of files that would need to be moved to their own partition, and I've got about 15GB of free space on my home partition. So what I'd want to do is somehow make a 10GB ext4 partition, move 10GB-ish of files, expand the partition again, move files again, etc. until all the files are moved to their own partition. I can do it manually, but it'd be easier if I could say "Move 10GB-ish of files from here to there, then resize it and repeat until I'm out of files". Is that even possible?
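    The second option can be scripted as a move-a-chunk-then-grow loop; a rough sketch, assuming the shared files go to a new ext4 partition /dev/sdb1 mounted at /mnt/shared and that the data splits into subdirectories small enough to fit the free space. The partition-table resize itself still has to be done offline, e.g. from a GParted live disk, and only with backups:
        # 1. Move one chunk and delete the now-empty source directories
        rsync -a --remove-source-files /home/rob/music/ /mnt/shared/music/
        find /home/rob/music -depth -type d -empty -delete

        # 2. After enlarging the partition in the partition table (parted/gparted,
        #    with the filesystem unmounted), grow the filesystem to fill it
        e2fsck -f /dev/sdb1 && resize2fs /dev/sdb1

        # 3. Repeat with the next chunk until everything has been moved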

    Read the article

  • LSI MegaRAID on Linux back to Optimal after degradation, but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller had a degraded array, but after some time the RAID status changed back to Optimal.
        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                :
        RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                : 2.727 TB
        Mirror Data         : 2.727 TB
        State               : Optimal
        Strip Size          : 256 KB
        Number Of Drives per span:2
        Span Depth          : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy   : Disk's Default
        Encryption Type     : None
        Is VD Cached: No
    But now I'm getting a RAID BIOS POST message:
        Your battery is either charging, bad or missing, and you have VDs configured
        for write-back mode. Because the battery is not currently usable, these VDs
        will actually run in write-through mode until the battery is fully charged
        or replaced if it is bad or missing.
    (Image: http://cl.ly/image/1h1O093b1i2d)
    So could a battery issue have caused the problem? This is the information I get about the battery:
        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status              : None
          Voltage                      : OK
          Temperature                  : OK
          Learn Cycle Requested        : No
          Learn Cycle Active           : No
          Learn Cycle Status           : OK
          Learn Cycle Timeout          : No
          I2c Errors Detected          : No
          Battery Pack Missing         : No
          Battery Replacement required : No
          Remaining Capacity Low       : No
          Periodic Learn Required      : No
          Transparent Learn            : No
          No space to cache offload    : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No
    Where could the problem be? I have disabled the alarms, but I get them if they are enabled, and I don't know how to find the root of the problem.
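    If MegaCli/MegaCli64 is available on the box, the controller's own event log usually records the battery and cache-policy transitions, which is the quickest way to tell whether the BBU caused the WriteThrough fallback; a sketch, assuming a single adapter:
        # Detailed BBU status (charge state, "replacement required", relearn info)
        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL

        # Dump the controller event log and look for battery / virtual drive events
        MegaCli64 -AdpEventLog -GetEvents -f /tmp/mr-events.log -aALL
        grep -iE 'battery|bbu|state change' /tmp/mr-events.log

        # Optionally kick off a manual learn cycle once the pack is suspect
        MegaCli64 -AdpBbuCmd -BbuLearn -aALL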

    Read the article

  • rfkill unblock all does not activate a certain wireless card

    - by Davidos
    With an Intel 1000 wireless card:
        rfkill list
        0: acer-wireless: Wireless LAN
                Soft blocked: yes
                Hard blocked: no
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        2: tpacpi_bluetooth_sw: Bluetooth
                Soft blocked: yes
                Hard blocked: no
        rfkill unblock all
        0: acer-wireless: Wireless LAN
                Soft blocked: yes
                Hard blocked: no
        1: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        2: tpacpi_bluetooth_sw: Bluetooth
                Soft blocked: no
                Hard blocked: no
        3: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
    Why does my wireless card not turn on?
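    If rfkill unblock all leaves the acer-wireless switch soft-blocked (as in the listing above), it is worth unblocking that index explicitly and, if it flips straight back, reloading the vendor hotkey module; a sketch (the acer_wmi module name is an assumption for this platform):
        # Target the platform kill switch directly and re-check
        rfkill unblock 0
        rfkill list

        # If it re-blocks itself, try reloading the platform hotkey driver
        modprobe -r acer_wmi && modprobe acer_wmi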

    Read the article

  • CentOS 6.5 proxy bypass/no_proxy not working

    - by Naruto Uzumaki
    I am running CentOS 6.5 on my desktop. I've set the network proxy using the Network Proxy application provided under Preferences. I've also set the following exceptions:
        localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16
    But whenever I use wget (I'm testing the proxy settings using wget), wget tries to connect to the proxy for private addresses, while wget localhost works fine and doesn't use the proxy. I also removed all the proxy settings and set the proxy in the shell:
        export http_proxy="<proxy_url>:<port>"
        export https_proxy="<proxy_url>:<port>"
        export no_proxy="localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16"
    It works when I use the command wget <external_url> or wget localhost, but fails when I use the command wget <private address from the $no_proxy variable>. I also tried setting the variables in Ubuntu 14.04 and am facing the same issue there. Regards,
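    One detail worth checking: wget matches no_proxy entries as host/domain suffixes and does not understand CIDR notation, so entries like 172.16.0.0/12 are effectively ignored. A sketch of an exception list wget will actually honor (the concrete hosts are placeholders):
        # List concrete IPs, hostnames or domain suffixes instead of CIDR ranges
        export no_proxy="localhost,127.0.0.1,192.168.1.10,172.16.5.20,.corp.example.com"
        wget http://192.168.1.10/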

    Read the article

  • Wine and Kernel Access

    - by Kyle Rozendo
    My knowledge on the topic is rather limited, but does one have Kernel access/the general ability to change programs at run time whilst running Wine? For Clarification: Can the user of the computer access any information they want via the Kernel on the underlying system running Wine, or does normal Windows security still apply?

    Read the article

  • Broken fonts in Konsole KDE 4.3.4

    - by depesz
    I have a strange situation: after some upgrades a couple of days ago, the fonts in KDE Konsole broke. To make it more specific: standard fonts look more or less OK, but when I use my national characters (like ąćęłńśóżź) they all look broken, as if from another font or badly scaled. The same problem doesn't exist in GNOME Terminal. I usually use the Terminus font, so I used it for the demonstration, but the problem shows up in other fonts as well; if necessary I will provide a list. Konsole shot and GNOME Terminal shot: (screenshots not reproduced here). As for my settings:
        =$ cat /etc/X11/xorg.conf
        Section "Device"
            Identifier "Builtin Default intel Device 0"
            Driver "intel"
        EndSection
        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Monitor Vendor"
            ModelName "Monitor Model"
        EndSection
        Section "Screen"
            Identifier "Builtin Default intel Screen 0"
            Device "Builtin Default intel Device 0"
            Monitor "Monitor0"
        EndSection
        Section "InputDevice"
            Identifier "touchpad"
            Driver "synaptics"
            Option "CorePointer"
        EndSection
        Section "ServerLayout"
            Identifier "Builtin Default Layout"
            Screen "Builtin Default intel Screen 0"
            InputDevice "touchpad"
        EndSection

        =$ xdpyinfo | grep -E resolution\|dimensions
        dimensions:    1680x1050 pixels (444x277 millimeters)
        resolution:    96x96 dots per inch
    I tried forcing the DPI in System Settings (to 120), and adding the monitor size to xorg.conf; so far nothing has helped. Any idea what I should do to make it work sanely again?
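    Since GNOME Terminal renders the same characters correctly, it helps to check which font file fontconfig actually hands to Qt/Konsole when Polish glyphs are requested; if the chosen Terminus variant lacks them, a substitute font gets mixed in and looks broken. A quick check:
        # Which Terminus files are installed?
        fc-list | grep -i terminus

        # What does fontconfig pick (and fall back to) for Polish text?
        fc-match -s "Terminus:lang=pl" | head -n 5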

    Read the article

  • How to simply remove everything from a directory on Linux

    - by Tometzky
    How do I simply remove everything from the current or a specified directory on Linux? Several approaches:
        rm -fr *
        rm -fr dirname/*
    Does not work: it will leave hidden files (the ones that start with a dot) and files starting with a dash in the current dir, and will not work with too many files.
        rm -fr -- *
        rm -fr -- dirname/*
    Does not work: it will leave hidden files and will not work with too many files.
        rm -fr -- * .*
        rm -fr -- dirname/* dirname/.*
    Don't try this: it will also remove the parent directory, because ".." also starts with a ".".
        rm -fr * .??*
        rm -fr dirname/* dirname/.??*
    Does not work: it will leave files like ".a", ".b" etc., and will not work with too many files.
        find -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr
        find dirname -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr
    As far as I know correct, but not simple.
        find -delete
        find dirname -delete
    AFAIK correct for the current directory, but used with a specified directory it will delete that directory as well.
        find -mindepth 1 -delete
        find dirname -mindepth 1 -delete
    AFAIK correct, but is it the simplest way?
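    If a bash-specific answer is acceptable, dotglob sidesteps both the hidden-file and the ".." problems, though very large directories can still hit the argument-length limit; a sketch:
        # Make * match dotfiles too (and expand to nothing in an empty directory)
        shopt -s dotglob nullglob
        rm -rf -- dirname/*
        shopt -u dotglob nullglob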

    Read the article

  • Ubuntu 12.04 server and TFTP access violation issue on put command

    - by SMYERS
    I installed tftp as per this document: http://icesquare.com/wordpress/solvedtftp-error-code-2-access-violation/ I followed it to the letter 3 times, and every time I put a file I get:
        root@CiscoCFG:~# tftp localhost
        tftp> put test
        Error code 2: Access violation
        root@CiscoCFG:~# tftp localhost
        tftp> put test
        Error code 2: Access violation
    If I touch the file, chmod 777 it, and then do a put, it works perfectly fine. My config is as follows:
        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /svr/tftp
            disable     = no
        }
    The permissions on the directory /svr/tftp are 777:
        drwxrwxrwx 3 nobody nobody 4096 Nov 14 10:32 svr
    This thing should have full permissions, as would anyone who wanted to write to or read from that directory. I see nothing in the logs; I'm really stumped on this. If the file is already in the directory I can read it all day long, and I can do gets, but I cannot create NEW files with put: I can only put to an existing file that already has 777 permissions. Thanks
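    For what it's worth, the stock in.tftpd (tftpd-hpa) refuses to create files that do not already exist unless it is started with -c, which matches the "put only works after touch" symptom exactly; a sketch of the relevant xinetd line:
        # -s chroots into the directory, -c allows new files to be created on upload
        # (created files will be owned by the service user, "nobody" here)
        server_args = -c -s /svr/tftp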

    Read the article

  • Error while bringing up eth1

    - by mhay
    I'm getting this error while bringing up my network card:
        (process:2550): WARNING **: _nm_object_get_property: Error getting 'State' for /org/freedesktop/NetworkManager/ActiveConnection/3:
        (19) Method "Get" with signature "ss" on interface "org.freedesktop.DBus.Properties" doesn't exist
    I'm using the following commands:
        1. ifup eth1
        2. /etc/init.d/network restart
    I have installed a fresh copy of CentOS 6.2 and configured the network card.
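    The warning appears to come from the ifup scripts querying NetworkManager over D-Bus; on CentOS 6 machines that are managed with the classic network service, the usual cleanup is to take the interface out of NetworkManager's hands, roughly like this:
        # Let the legacy network service own eth1 instead of NetworkManager
        echo "NM_CONTROLLED=no" >> /etc/sysconfig/network-scripts/ifcfg-eth1

        # Or disable NetworkManager entirely if nothing on the box needs it
        service NetworkManager stop
        chkconfig NetworkManager off
        service network restart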

    Read the article

  • Coloring the Z Shell [closed]

    - by Richard
    Because I have to stare at my command prompt all the time on my computer, it should look at least half-decent, so I am trying to get it colored. The expected outcome is as seen on this site. I have the colors I want set in my .Xdefaults file, but they of course do not color my prompt. My .zshrc is Phil's Prompt. My .Xdefaults is:
        *background: #121212

        !black
        xterm*color0: #353535
        xterm*color8: #666666

        !red
        xterm*color1: #AE4747
        xterm*color9: #EE6363

        !green
        xterm*color2: #556B2F
        xterm*color10: #9ACD32

        !brown/yellow
        xterm*color3: #DAA520
        xterm*color11: #FFC125

        !blue
        xterm*color4: #6F99B4
        xterm*color12: #7C96B0

        !magenta
        xterm*color5: #8B7B8B
        xterm*color13: #D8BFD8

        !cyan
        xterm*color6: #A7A15E
        xterm*color14: #F0E68C

        !white
        xterm*color7: #DDDDDD
        xterm*color15: #FFFFFF

        *foreground: #DDDDDD
    Help will be appreciated.
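    The .Xdefaults palette only defines which RGB values the 16 terminal colors map to; the prompt itself has to request colors via zsh prompt escapes (which is what Phil's prompt does internally). A minimal standalone sketch for ~/.zshrc:
        # Colored prompt using zsh's %F{...}/%f escapes (zsh 4.3.7+)
        autoload -U colors && colors
        PROMPT='%F{green}%n@%m%f %F{blue}%~%f %# '
        RPROMPT='%F{yellow}%T%f'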

    Read the article

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These boxes are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation regarding each site and how services/files depend on each other, which means there is a lot of detective work to be done. My goal is to properly document each site, what makes it work, etc., so future admins have at least something to work with. Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files: DB connections, Apache configs, etc. Then, create a nice spreadsheet with these findings and migrate everything out to the new server. My question to Server Fault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.
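    For the detective-work phase, a coarse first sweep over each docroot for framework configs and connection strings saves a lot of manual reading; a sketch (paths and patterns are assumptions to adapt per site):
        # Likely configuration files across a site tree
        find /var/www/site1 -maxdepth 4 -type f \
            \( -name 'wp-config.php' -o -name 'database.yml' -o -name 'settings.php' -o -name '.env' \) -print

        # Common connection patterns in application code
        grep -RIl --include='*.php' --include='*.rb' --include='*.yml' \
            -e 'mysql_connect' -e 'DB_PASSWORD' -e 'adapter:' /var/www/site1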

    Read the article

  • Selective pointer device remapping in Linux

    - by user6368
    I just got an HP 2710p (HP tablet, with digitizer), and I've played around with Linux for a while now, so I thought I would go ahead and install it. Everything works fine, except the normal tablet functions, which is to be expected. I'm working on the screen rotation, and there are on-screen keyboards, etc., but I'm having issues with the stylus. I can tap and left-click with the stylus as normal, but the side button (which in Windows functions as a right mouse button) appears as 'button 2' to xev (a middle/scroll-wheel button). I can switch 'button 2' and 'button 3' universally using xmodmap, but I'd like to do so exclusively for the stylus so I don't screw up regular pointing devices. Altering xorg.conf (which is surprisingly bare) with the recommended sections (adding sections for each of the stylus buttons) does nothing. I'm running CrunchBang, which is an Ubuntu/Debian variant with Openbox as the window manager. Thanks. Also, as a separate note, does anybody know how to detect when I rotate and/or latch the lid shut? I was thinking maybe I could run a script to switch the buttons when I close it, but I can't find any information.
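    Per-device button remapping can be done with xinput, which only touches the one input device and leaves other pointers alone; a sketch, where the exact device name should be taken from xinput list (the one below is hypothetical):
        # Find the stylus entry
        xinput list

        # Swap buttons 2 and 3 for the stylus only (device name is a placeholder)
        xinput set-button-map "Wacom ISDv4 Tablet stylus" 1 3 2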

    Read the article

  • How to install Pear Linux's shell in Ubuntu?

    - by Emerson Hsieh
    For people who don't know what Pear Linux is: Pear Linux is a French Ubuntu-based desktop Linux distribution. Some of its features include ease of use, a custom user interface with a Mac OS X-style dock bar, and out-of-the-box support for many popular multimedia codecs. (Excerpt from DistroWatch.) When this Linux distribution came out, I immediately went to the website and found out that Pear Linux is actually Mac OS X with a pear. I was going to download it and install Pear Linux as a triple-boot on my computer (Windows and Ubuntu already installed). Then I remembered that Pear Linux is Ubuntu-based, so I thought of a better idea: install only the Comice OS shell (the desktop environment of Pear Linux) in Ubuntu, so that I can select it at the login screen. Is that possible? EDIT: Found this.

    Read the article

  • Static Network configuration with bridge in CentOS 6.2

    - by Kyle
    I have a server with CentOS 6.2 installed, and I want to use it as a VM host to run some Windows installations for development purposes. I wanted to be able to RDP directly into these Windows server installations and serve websites from IIS on them, so I figured I would set up bridged networking. I have been struggling with this all morning, usually with the result that when I brought up the bridge interface all network connectivity to the CentOS box would go away; however, I think I finally have that figured out. Here's what happens now. The eth0 and br0 interfaces are defined in /etc/sysconfig/network-scripts with ifcfg-eth0 and ifcfg-br0. I DO NOT have ifup, ifdown or any other files for these interfaces, and I have not found out whether they are needed. I can log in and use Firefox to browse the web; however, running ifconfig reveals that my eth0 does not have an IP address, but br0 does. I can actually RDP into the Windows installation and browse the internet from there as well, but I cannot directly connect (via PuTTY, VNC, nor viewing web pages) to the CentOS box. Any idea what's up?
        ifcfg-eth0
        DEVICE=eth0
        BOOTPROTO=none
        IPADDR=192.168.1.20
        GATEWAY=192.168.1.1
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        BRIDGE=br0

        ifcfg-br0
        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        DNS1=192.168.1.1
        DNS2=8.8.8.8
        GATEWAY=192.168.1.1
        IPADDR=192.168.1.2
        NETMAS=255.255.255.0
        ONBOOT=yes
    I know some of the options are inconsistent (DNS and BOOTPROTO) because I tried changing them in the eth0 file to make it work, and the changes haven't adversely affected web browsing or the other functionality. Thank you!
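    Once eth0 is enslaved with BRIDGE=br0, the IPADDR in ifcfg-eth0 is ignored and the host only answers on the bridge's address, which here is 192.168.1.2; if connections are being attempted against the old 192.168.1.20, that alone would explain the failures. A hedged sketch of a cleaned-up pair of files, with the expected address moved onto the bridge:
        # ifcfg-eth0 -- enslaved port, carries no IP of its own
        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0

        # ifcfg-br0 -- the host's address lives here
        DEVICE=br0
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=192.168.1.20
        NETMASK=255.255.255.0
        GATEWAY=192.168.1.1
        DNS1=192.168.1.1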

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with Heartbeat (floating VIP) using an active/passive approach. The heartbeat runs over both a serial connection and an Ethernet connection. Ultimately, my goal is to maintain this same level of availability, but between data centers. I want to dynamically fail over between both data centers without manual intervention and still maintain data integrity. There would be BGP on top, with web clusters in both locations which would have the potential to route to the databases on either side. If the Internet connection went down at site 1, clients would route through site 2 to the web cluster, and then to the database in site 1, if the link between both sites is still up. With this scenario, due to the lack of a physical (serial) link, there is a higher chance of split brain. If the WAN went down between both sites, the VIP would end up at both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is difficulty scaling this infrastructure to a third data center in the future. The network layer is not a focus; the architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.

    Read the article

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (kernel 3.0.0-12-server x86_64):
        => ctrl all show config
        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)
        array A (SAS, Unused Space: 1335535 MB)
            logicaldrive 1 (279.4 GB, RAID 1, OK)
            physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
            physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)
    Initially there were two 300GB disks that got replaced by 1TB disks, and I now have to extend the logical drive to use that extra space. However, when trying to do so I get the following warning:
        => ctrl slot=0 ld 1 modify size=max
        Warning: Extension may not be supported on certain operating systems.
        Performing extension on these operating systems can cause data to become
        inaccessible. See ACU documentation for details. Continue? (y/n)
    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a GParted live disk. Here's the config of the RAID controller in use:
        => ctrl all show detail
        Smart Array P410i in Slot 0 (Embedded)
        Bus Interface: PCI
        Slot: 0
        Serial Number: removed
        RAID 6 (ADG) Status: Disabled
        Controller Status: OK
        Hardware Revision: Rev C
        Firmware Version: 5.12
        Rebuild Priority: Medium
        Expand Priority: Medium
        Surface Scan Delay: 15 secs
        Surface Scan Mode: Idle
        Wait for Cache Room: Disabled
        Surface Analysis Inconsistency Notification: Disabled
        Post Prompt Timeout: 0 secs
        Cache Board Present: False
        Drive Write Cache: Disabled
        SATA NCQ Supported: True
    And the partition table:
        Number  Start   End    Size    Type      File system     Flags
         1      1049kB  274GB  274GB   primary   ext4            boot
         2      274GB   300GB  25.8GB  extended
         5      274GB   300GB  25.8GB  logical   linux-swap(v1)
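    The warning is aimed at operating systems that cannot cope with a block device growing underneath a mounted filesystem; with backups verified and the follow-up work done offline (as planned with the GParted live disk), the usual sequence after the extension is roughly as follows (device names are assumptions, and note that the extended/swap partition sits right after partition 1 and would need to be moved or recreated at the end of the disk first):
        # Extend the logical drive, then confirm the new size is visible
        hpacucli ctrl slot=0 ld 1 modify size=max
        hpacucli ctrl slot=0 ld 1 show

        # Offline (live CD): after relocating the swap/extended partition, grow
        # partition 1 into the freed space, check, then resize the ext4 filesystem
        parted /dev/sda resizepart 1 <new_end>
        e2fsck -f /dev/sda1
        resize2fs /dev/sda1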

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the rpmforge repo and the dag repo. Here are some guidelines I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it then it will complain that it is missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me though that SDL is not in rpmforge or dag; according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no go:
        error: Failed dependencies:
            SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
            alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
            libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
            libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
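    Since the plan is to build from svn anyway, note that SDL is only needed for ffplay; ffmpeg itself configures fine without it, so the missing libSDL dependency can simply be sidestepped. A rough sketch of a minimal build with libx264 enabled (package names are approximate for an RHEL/Amazon-style AMI, and x264 with its headers must already be installed):
        # Build prerequisites
        yum install -y gcc make git yasm pkgconfig

        # In the ffmpeg source tree: enable x264, skip everything SDL-related
        ./configure --prefix=/usr/local --enable-gpl --enable-libx264
        make && make install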

    Read the article

  • Weird fluctuating time on a Xen Linux guest

    - by Vin-G
    I have a weird problem with some servers here at work. We have a few Xen guests whose current time fluctuates:
        # date;date;date;date;date;date;date
        Thu Feb 25 16:00:40 PHT 2010
        Thu Feb 25 16:00:48 PHT 2010
        Thu Feb 25 16:00:40 PHT 2010
        Thu Feb 25 16:00:48 PHT 2010
        Thu Feb 25 16:00:40 PHT 2010
        Thu Feb 25 16:00:48 PHT 2010
        Thu Feb 25 16:00:40 PHT 2010
    As seen above, the time fluctuates between 16:00:48 and 16:00:40, which is problematic for us since computing time differences in some of our scripts becomes inaccurate (e.g. what should be a few ms of difference becomes a few seconds, and sometimes even a negative difference). The problematic servers are Linux guests on a Xen host. The time fluctuates on the guest systems, but it is okay on the host itself. I've ruled out ntpd, since this happens regardless of whether ntpd is running on the guest systems or not. The guests run fully virtualised. The time on the host and the guest does match, except that the time in the guest fluctuates by a few seconds around the host's time, while the host time does not fluctuate. /proc/sys/xen/independent_wallclock is 0 on the host and does not exist in the guest. The ntpd service was stopped and disabled. Setting independent_wallclock to 1 on the host has no effect (that is, the time still fluctuates in the guest). I was not able to restart the guest, though, as it is a production server; I might be able to do that over the weekend. Any ideas on what to check and how to resolve this problem?
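    Since the guests are fully virtualised, the PV-only independent_wallclock knob is a dead end; time that bounces back and forth between two values is more typically a guest clocksource issue. A diagnostic sketch that can be tried without rebooting:
        # Which clocksource is the guest using, and which others are available?
        cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        cat /sys/devices/system/clocksource/clocksource0/available_clocksource

        # Try another source from the 'available' list (e.g. acpi_pm or hpet)
        echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource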

    Read the article

  • Server and user directly connected, no pinging...

    - by jtzero
    I have a server (Fedora 12) with two NICs, directly connected to, say, 192.168.1.0 and 192.168.2.0. The route table looks like this:
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     192.168.1.1     255.255.255.0   U     0      0        0 eth0
        192.168.2.0     *               255.255.255.0   U     0      0        0 eth1
    eth0 = 192.168.1.15, eth1 = 192.168.2.1. There is also a directly connected user (Mythdora) on the 192.168.2.0 network with IP 192.168.2.2 and a route table like so:
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     *               255.255.255.0   U     0      0        0 eth0
    The cable is a crossover and it works; all three NICs work. I connected my laptop to either end, assigned it a valid 192.168.2.0 IP, and the pings work. In fact, if I disconnect the server side, plug the Ethernet cable into the laptop, have the box ping the laptop continually, then remove the cable and plug it back into the server, both sides ping... unfortunately the box, realizing it's connected to a different PC, wipes its route table after say ten minutes or so. If I do a traceroute from a box on the 1.0 network to the server's 192.168.2.1 interface, I never get a reply from it. As a note, at one point I could ping the server from the 192.168.2.2 box, but the server couldn't ping the 192.168.2.2 box.
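    With a directly connected link where only one direction can ping, ARP state and reverse-path filtering on the dual-homed server are the usual suspects to rule out first; a diagnostic sketch to run on the server:
        # Is the Mythdora box's MAC actually being learned on eth1?
        arp -n | grep 192.168.2.2
        tcpdump -n -e -i eth1 arp or icmp

        # Strict reverse-path filtering can silently drop traffic on multi-homed hosts
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter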

    Read the article

  • Reasonable automatic HTML to PDF conversion (in UNIX/Linux environment)

    - by Alex Balashov
    Is there a way to generate PDF documents from HTML files automatically in Linux where the PDF offers some kind of reasonable level of resemblance to the input file? A command-line tool - as opposed to an interactive GUI of some kind - is key. I have tried htmldoc and some related cousins, of course. But these tools are hopelessly stone-age; htmldoc doesn't support CSS at all. You won't find a lot of HTML documents these days that don't have at least some CSS styling. I don't really care about stupid effects or minor embellishments, but the issue is that CSS is at the core of most layouts these days; not many folks are using 6 layers of nested tables anymore. So, if the conversion tool has no grasp of CSS whatsoever, it's not just a matter of "the document doesn't look quite right"; it is likely to not meet the minimum standard of usability at all. It has been suggested to me by some folks to try to use the Gecko rendering engine to generate images that can be converted to PDFs, but I have no idea how one would go about doing this, let alone easily. I have no trouble believing that there are good commercial tools that do this, but I'm really looking for an open-source package if possible, as the endeavour itself is an open-source one and doesn't pay. Thanks in advance!
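    One open-source tool in exactly this space is wkhtmltopdf, which drives the WebKit rendering engine from the command line and produces real (text, not rasterised) PDFs with CSS support; a minimal sketch:
        # Straight conversion, letting WebKit handle the CSS
        wkhtmltopdf --page-size A4 --margin-top 15mm in.html out.pdf

        # On a headless server, older builds may need a virtual X display
        xvfb-run -a wkhtmltopdf in.html out.pdf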

    Read the article

  • Rewrite URL based off of IP on OpenWRT

    - by Scott
    We are running OpenWRT on a WRT54GL. I have been looking for an answer to this, but I can't seem to figure out what to search for, whether it's possible, or what combination of programs to use. I want to be able to redirect an HTTP request from a WiFi device based on its MAC address. This should all be transparent to the device. Basically we are trying to redirect any non-registered devices to a website to register the device (at this point, we would push a new config to the router that would allow this MAC address "full access"). Once a device is registered, it will be redirected to a transparent Squid proxy server on another machine for caching/blocking certain sites. I looked at tinyproxy and Polipo, which redirect, but I won't have the MAC address to know whether a device is registered or not. Any help (Google suggestions, programs, anything!) would be very much appreciated!
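    The registered/unregistered split can be expressed directly in the router's firewall: DNAT port 80 to the registration page for unknown MACs, and exempt approved MACs from the rule. On OpenWRT such rules typically go into /etc/firewall.user, and the registration step would append a new exemption and reload the firewall. A hedged sketch of the iptables form, with placeholder addresses and MACs:
        # Already-registered device: leave its web traffic alone
        iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 80 \
            -m mac --mac-source 00:11:22:33:44:55 -j RETURN

        # Everyone else: transparently send port 80 to the registration server
        iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 80 \
            -j DNAT --to-destination 192.168.1.10:80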

    Read the article
