Search Results

Search found 747 results on 30 pages for 'tintu mon'.

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5 an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not, this is the Xorg.0.log for the Centos5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
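
    Before digging further into driver options it may be worth confirming whether a mach64 DRM module exists at all for the running kernel; it was dropped from many 2.6.18-era builds, so this check may simply explain why DRI cannot start. A hedged sketch:

        # does the running kernel ship a mach64 DRM module at all?
        find /lib/modules/$(uname -r) -name 'mach64*'
        # if one turns up, try loading it and re-checking the device node
        sudo modprobe mach64 && ls -l /dev/dri/card0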

    Read the article

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows: #! /bin/sh set -e mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and that I can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.) mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and what I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases: From: root@<hostname> (Cron Daemon) To: root@<hostname> Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly Content-Type: text/plain; charset=ANSI_X3.4-1968 X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin> X-Cron-Env: <HOME=/root> X-Cron-Env: <LOGNAME=root> Message-Id: <id@hostname> Date: Fri, 7 May 2010 13:17:01 +0930 (CST) /etc/cron.hourly/mdadm: mdadm: Monitor using email address "<root_alias@domain>" from config file I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files: # /etc/crontab: system-wide crontab # <snip comments> SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly <snip anacron jobs> Should I simply remove the --report, or is there something else in my cron config somewhere causing this?
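
    The mail body shows cron is simply forwarding the one informational line mdadm prints on startup, and run-parts --report mails any output it sees. Silencing stdout in the hourly script should stop the mails while leaving real errors (on stderr) visible; a minimal sketch:

        #!/bin/sh
        set -e
        # discard mdadm's "Monitor using email address ..." startup notice so
        # run-parts --report has nothing to mail; stderr still gets through
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot > /dev/null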

    Read the article

  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL5, I have a small MySQL database that has to write temp files. To speed up this process, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf: tmpdir=/dev/shm/mysqltmp I can create /dev/shm/mysqltmp just fine and do chown mysql:mysql /dev/shm/mysqltmp chcon --reference /tmp/ /dev/shm/mysqltmp I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL is writing its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages: SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t). SELinux is a hard mistress. Details: Source Context root:system_r:mysqld_t Target Context system_u:object_r:tmpfs_t Target Objects /dev/shm [ dir ] Source mysqld Source Path /usr/libexec/mysqld Port <Unknown> Host db.example.com Source RPM Packages mysql-server-5.0.77-3.el5 Target RPM Packages Policy RPM selinux-policy-2.4.6-255.el5_4.1 Selinux Enabled True Policy Type targeted MLS Enabled True Enforcing Mode Enforcing Plugin Name catchall_file Host Name db.example.com Platform Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64 Alert Count 46 First Seen Wed Nov 4 14:23:48 2009 Last Seen Thu Nov 5 09:46:00 2009 Local ID e746d880-18f6-43c1-b522-a8c0508a1775 ls -lZ /dev/shm shows drwxrwxr-x mysql mysql system_u:object_r:tmp_t mysqltmp and permissions for /dev/shm itself are drwxrwxrwt root root system_u:object_r:tmpfs_t shm I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql with no better results. Shouldn't it be enough to tell SELinux, hey, this is a temp directory just like MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?
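
    One hedged route, assuming the standard policycoreutils tools are installed, is to build a local policy module from the logged denials and label the directory with the type MySQL already uses for its temp files (mysqld_tmp_t here is an assumption about the targeted policy; verify it against the audit log):

        # generate and install an allow module from the recorded denials
        grep mysqld /var/log/audit/audit.log | audit2allow -M mysqltmp
        semodule -i mysqltmp.pp
        # label the directory itself; note /dev/shm is tmpfs, so this label
        # disappears on reboot and must be reapplied at boot time
        chcon -t mysqld_tmp_t /dev/shm/mysqltmp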

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume? mgorven@moab:~% sudo lvdisplay /dev/moab/backup --- Logical volume --- LV Name /dev/moab/backup VG Name moab LV UUID nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5 LV Write Access read/write LV Status available # open 1 LV Size 500.00 GiB Current LE 128000 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 2048 Block device 252:3 mgorven@moab:~% sudo cryptsetup status backup /dev/mapper/backup is active and is in use. type: LUKS1 cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/mapper/moab-backup offset: 3072 sectors size: 1048572928 sectors mode: read/write mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup tune2fs 1.42 (29-Nov-2011) Filesystem volume name: backup Last mounted on: /srv/backup Filesystem UUID: 63877e0e-0549-4c73-8535-b7a81eb363ed Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131071616 Reserved block count: 0 Free blocks: 112894078 Free inodes: 32044830 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stride: 128 RAID stripe width: 128 Flex block group size: 16 Filesystem created: Sun Mar 11 19:24:53 2012 Last mount time: Sat May 19 13:29:27 2012 Last write time: Fri Jun 1 11:07:22 2012 Mount count: 0 Maximum mount count: 100 Last checked: Fri Jun 1 11:03:50 2012 Check interval: 31104000 (12 months) Next check after: Mon May 27 11:03:50 2013 Lifetime writes: 118 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 383bcbc5-fde9-4720-b98e-2d6224713ecf Journal backup: inode blocks
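
    A possible sequence, working from the top of the stack down (the numbers are illustrative and must be verified before running; an undersized filesystem or mapping destroys data):

        sudo umount /srv/backup
        sudo e2fsck -f /dev/mapper/backup
        # shrink the filesystem to comfortably less than the target first
        sudo resize2fs /dev/mapper/backup 90G
        # 100GiB is 209715200 sectors of 512 bytes; subtract the 3072-sector
        # LUKS offset shown by cryptsetup status
        sudo cryptsetup resize backup --size 209712128
        sudo lvreduce -L 100G /dev/moab/backup
        # finally grow the filesystem back to fill the shrunk volume exactly
        sudo resize2fs /dev/mapper/backup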

    Read the article

  • How do I access the web server on my desktop from my laptop?

    - by Steven
    I'm running Apache on my desktop and I would like to access a website through my laptop. This is some of the Apache config: NameVirtualHost 127.0.0.1:80 <VirtualHost 127.0.0.1:80> ServerName mysite.com DocumentRoot I:/wamp/www/mysite/ </VirtualHost> ServerName localhost:80 <Directory /> Options FollowSymLinks AllowOverride all Order deny,allow Deny from all </Directory> On my laptop I've added the following to the HOSTS file: 10.0.0.3 mysite.com But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work? Update: I'm running WAMPSERVER 2.1 (Apache 2.2.17). Apache is up and running. I can ping 10.0.0.3 from the laptop. I'm not able to ping http://mysite.com from the laptop. IE gives me a 403 Forbidden - The website declined to show this webpage. The only log that gets entries when trying to access the website from my laptop is access.log. access.log 10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202 apache_error.log [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results UPDATE 2: My Apache config has the following entry: AllowOverride all Order Deny,Allow Deny from all Allow from 127.0.0.1 Could it be that this Allow from is stopping other computers accessing the page?
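
    Two things in the quoted config only admit the local machine: the vhost is bound to 127.0.0.1, so requests arriving on 10.0.0.3 never match it, and the Allow list stops at 127.0.0.1. A hedged sketch of the relevant changes (the subnet is an assumption; adjust it to the LAN):

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot I:/wamp/www/mysite/
        </VirtualHost>
        # and in the directory block that currently denies everything:
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
        Allow from 10.0.0.0/24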

    Read the article

  • Getting an error while running the RED5 server - class path resource [red5.xml] cannot be opened because it does not exist

    - by sunil221
    Hi, I have installed Java version "1.6.0_14" and Ant version 1.8.2 for the Red5 server. When I try to run the Red5 server I get the following error, please help: Root: /usr/local/red5 Deploy type: bootstrap Logback selector: org.red5.logging.LoggingContextSelector Setting default logging context: default 11:27:39.838 [main] INFO org.red5.server.Launcher - Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/) Red5 Server 1.0.0 RC1 $Rev: 4171 $ (http://code.google.com/p/red5/) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/usr/local/red5/red5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/usr/local/red5/lib/logback-classic-0.9.26.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 11:27:39.994 [main] INFO o.s.c.s.FileSystemXmlApplicationContext - Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@39d85f79: startup date [Mon Dec 21 11:27:39 EST 2009]; root of context hierarchy 11:27:40.149 [main] INFO o.s.b.f.xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [red5.xml] Exception org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [red5.xml]; nested exception is java.io.FileNotFoundException: class path resource [red5.xml] cannot be opened because it does not exist Bootstrap complete
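
    red5.xml normally lives in Red5's conf/ directory, which ends up on the classpath only when the server is launched from the install root; a sketch of the usual fix (the assumption being that the stock red5.sh start script is in use and honours RED5_HOME):

        cd /usr/local/red5
        export RED5_HOME=/usr/local/red5
        ./red5.sh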

    Read the article

  • Browser sends HTTP request with RANGE

    - by nute
    I have a local testing environment in a Fedora virtual machine. Strangely, resources (CSS and JS files) don't seem to work. Looking at Firebug, I see that the browser sends the HTTP request with "Range bytes=0-". The server responds with either an empty 200 OK or an empty 206 Partial Content. Here is an example: Response Headers Date Mon, 23 Nov 2009 23:33:26 GMT Server Apache/2.2.13 (Fedora) Last-Modified Thu, 19 Nov 2009 22:58:55 GMT Etag "18-3aec-478c14dbee138" Accept-Ranges bytes Content-Length 15084 Content-Range bytes 0-15083/15084 Connection close Content-Type text/css Request Headers Host fedora.test User-Agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5 Accept text/css,*/*;q=0.1 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 300 Connection keep-alive Referer http://fedora.test/pictures/ Cookie __utma=26341546.1613992749.1258504422.1258569125.1258752550.4; __utmz=26341546.1258504422.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=tqf8jfmc77qihe97rl4tmhq685 Range bytes=0- If-Range "18-3aec-478c14dbee138" I don't know if the browser is sending the wrong request, or if it's the server that is doing this. Requests made to the outside (such as Google Analytics) are working fine. This is running in Fedora 11 in VirtualBox. Apache. PHP. The files are being served through the "shared folders" feature of VirtualBox (could it be related?). The error logs were no help.
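
    Serving content from VirtualBox shared folders is a known trouble spot for Apache's zero-copy file I/O, so the shared-folders hunch is plausible; a hedged workaround to try in httpd.conf:

        # vboxsf does not reliably support sendfile/mmap; make Apache read
        # files through normal read() calls instead
        EnableSendfile Off
        EnableMMAP Off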

    Read the article

  • HTML Redirect issue with Apache2

    - by Vijit Jain
    I am facing an issue with the ProxyPass on my Apache server on Ubuntu. I have configured Apache to deal with Virtual Hosts on my server. There is an application which runs on the server and uses ports 8001 and 8002. I need to do something like www.example.com/demo/origin to display the contents that I would see when I visit www.example.com:8000. The contents to be displayed are a host of HTML pages. This is the section of the virtual host config that has issues: ProxyPass /demo/vader http://www.example.com:8001/ ProxyPassReverse /demo/vader http://www.example:8001/ ProxyPass /demo/skywalker http://www.example.com:8002/ ProxyPassReverse /demo/skywalker http://www.example.com:8002/ Now when I visit example.com/demo/skywalker, I see the first page of port 8002, say the login.html page. The second should have been www.example.com/demo/skywalker/userAction.html; instead the server shows www.example.com:8000/login.html. In the error logs I see something like: [Mon Nov 11 18:01:20 2013] [debug] mod_proxy_http.c(1850): proxy: HTTP: FILE NOT FOUND /htdocs/js/demo.72fbff3c9a97f15a4fff28e19b0de909.min.js I do not have any folder htdocs in the system. This is only an issue while viewing .html pages. Otherwise, no such issue occurs. When I visit localhost:8001 it will show any and all contents without any errors or issues. www.example.com/demo/skywalker displays a separate webpage, www.example.com/demo/origin displays a different webpage, and www.example.com/demo/vader displays a different webpage. I have also tried to use one more type of combination, <Location /demo/origin/> ProxyPass http://localhost:8000/ ProxyPassReverse http://localhost:8000/ ProxyHTMLURLMap http://localhost:8000/ / </Location> This fails as well. I would greatly appreciate it if anyone can help me resolve this issue.
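
    One likely culprit is the missing trailing slash on the path side of each mapping, which makes links on the proxied pages resolve against the site root instead of the /demo/... prefix; a sketch of the corrected pair:

        ProxyPass        /demo/skywalker/ http://www.example.com:8002/
        ProxyPassReverse /demo/skywalker/ http://www.example.com:8002/
        # absolute links inside the returned HTML still need rewriting, which
        # is what ProxyHTMLURLMap (mod_proxy_html) is for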

    Read the article

  • How to deal with a job that stops and cannot continue unless brought to the foreground?

    - by Vi
    Recent example: mountlo (using UML): vi@vi-notebook:~/b$ mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other& [1] 32561 vi@vi-notebook:~/b$ Checking that ptrace can change system call numbers...OK Checking syscall emulation patch for ptrace...OK Checking advanced syscall emulation patch for ptrace...OK Checking PROT_EXEC mmap in /tmp...OK Checking for the skas3 patch in the host: - /proc/mm...not found - PTRACE_FAULTINFO...not found - PTRACE_LDT...not found UML running in SKAS0 mode [1]+ Stopped mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other vi@vi-notebook:~/b$ bg [1]+ mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other & [1]+ Stopped mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other vi@vi-notebook:~/b$ bg [1]+ mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other & [1]+ Stopped mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other vi@vi-notebook:~/b$ bg [1]+ mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other & [1]+ Stopped mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other vi@vi-notebook:~/b$ fg mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8,allow_other Linux version 2.6.15 (miko@dorka) (gcc version 3.3.5 (Debian 1:3.3.5-13)) #1 Mon Feb 27 13:27:52 CET 2006 (normal output) ... vi@vi-notebook:~/b$ socat - exec:'mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat -o iocharset=utf8\,allow_other',pty,ctty fusermount: waitpid: No child processes vi@vi-notebook:~/b$ Also happens with Gimp (when it runs its plug-ins). Parts of Gimp started by `gimp q.jpg&' freeze and cannot continue unless sent "killall -CONT" or brought to the foreground. Is it a bug? How can I reliably start things in the background?
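
    A background process that reads from its controlling terminal is stopped with SIGTTIN, which matches the endless Stopped/bg loop above; a sketch that detaches stdin so the job never touches the tty:

        mountlo -m 16 -d /dev/uba1 /home/vi/mnt/usb -t vfat \
            -o iocharset=utf8,allow_other < /dev/null &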

    Read the article

  • Problems sending email using .Net's SmtpClient

    - by Jason Haley
    I've been looking through questions on Stack Overflow and Server Fault but haven't found the same problem mentioned - though that may be because I just don't know enough about how email works to understand that some of the questions are really the same as mine ... here's my situation: I have a web application that uses .Net's SmtpClient to send email. The configuration of the SmtpClient uses an SMTP server, username and password. The SmtpClient code executes on a server that has an IP address not in the domain the SMTP server is in. In most cases the emails go without a problem - but not AOL (and maybe others - but that is one we know for sure right now). When I look at the headers in the message that was kicked back from AOL, it has one less line than the successful messages Hotmail gets: AOL Bad Message: Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Mon, 18 Jun 2012 09:48:24 -0500 MIME-Version: 1.0 From: "[email protected]" <[email protected]> ... Good Hotmail Message: Received: from mail.domainofsmtp.com ([###.###.###.###]) by subdomainsof.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900); Thu, 21 Jun 2012 09:29:13 -0700 Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Thu, 21 Jun 2012 11:29:03 -0500 MIME-Version: 1.0 From: "[email protected]" <[email protected]> ... Notice the Hotmail message headers have an additional line. I'm confused as to why the web server's name and IP address are even in the headers, since I thought I was using the SmtpClient to go through the SMTP server (hence the need for the username and password of a valid email box). I've read about SPF, DKIM and Sender ID, but at this point I'm not sure if I would need to do something with the web server (and its IP/domain) or the domain the SMTP mail is coming from. Has anyone had to do anything similar before? Am I using the SMTP server as a relay? Any help on how to describe what I'm doing would also help.
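
    If the receiving side is checking whether the sending server's address is allowed to send for the From domain, publishing an SPF record that authorizes it is one common remedy; a hypothetical zone-file sketch (every value here is a placeholder):

        example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 include:domainofsmtp.com ~all"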

    Read the article

  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need Nginx to serve a file relative from document root if it exists, then fallback to an upstream server if it doesn't. This can be accomplished with something like: server { listen 80; server_name localhost; location / { root /var/www/nginx/; try_files $uri @my_upstream; } location @my_upstream { internal; proxy_pass http://127.0.0.1:8000; } } Fair enough. The problem is, my upstream is not serving the contents of URI directly, but instead, returning X-Accel-Redirect with a location relative to document root (it generates this file on-the-fly): % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg HTTP/1.0 200 OK Date: Mon, 26 Nov 2012 20:58:25 GMT Server: WSGIServer/0.1 Python/2.7.2 X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg Content-Type: text/html; charset=utf-8 Apparently, this should work. The problem though is that Nginx tries to serve this file from some internal default document root instead of using the one specified in the location block: 2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80" How do I force Nginx to serve the file relative to the right document root? According to XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
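
    Two hedged adjustments that fit the error: have the upstream return the redirect path with a leading slash, and set root at the server level so the internal redirect inherits it rather than falling back to the compiled-in html/ prefix:

        server {
            listen 80;
            server_name localhost;
            root /var/www/nginx/;    # inherited by the internal redirect
            location / {
                try_files $uri @my_upstream;
            }
            location @my_upstream {
                proxy_pass http://127.0.0.1:8000;
            }
        }
        # upstream sends: X-Accel-Redirect: /animals/kitten.jpg__100x100__crop.jpg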

    Read the article

  • How to mount vfat drive on Linux with ownership other than root?

    - by Norman Ramsey
    I'm running into trouble mounting an iPod on a newly upgraded Debian Squeeze. I suspect either a protocol has changed or I've tickled a bug, which I don't know where to report. I'm trying to mount the iPod so that I have permission to read and write it. But my efforts come to nothing: $ sudo mount -v -t vfat -o uid=32074,gid=6202 /dev/sde2 /mnt /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202) $ ls -l /mnt total 80 drwxr-xr-x 2 root root 16384 Jan 1 2000 Calendars drwxr-xr-x 2 root root 16384 Jan 1 2000 Contacts drwxr-xr-x 2 root root 16384 Jan 1 2000 Notes drwxr-xr-x 3 root root 16384 Jun 23 2007 Photos drwxr-xr-x 6 root root 16384 Jun 19 2007 iPod_Control $ sudo umount /mnt $ sudo mount -v -t vfat -o uid=nr,gid=nr /dev/sde2 /mnt /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202) $ ls -l /mnt total 80 drwxr-xr-x 2 root root 16384 Jan 1 2000 Calendars drwxr-xr-x 2 root root 16384 Jan 1 2000 Contacts drwxr-xr-x 2 root root 16384 Jan 1 2000 Notes drwxr-xr-x 3 root root 16384 Jun 23 2007 Photos drwxr-xr-x 6 root root 16384 Jun 19 2007 iPod_Control As you see, I've tried both symbolic and numeric IDs, but the files persist in being owned by root (and only writable by root). The IDs are really mine; I've had the UID since 1993. $ id uid=32074(nr) gid=6202(nr) groups=6202(nr),0(root),2(bin),4(adm),... I've put an strace at http://pastebin.com/Xue2u9FZ, and the mount(2) call looks good: mount("/dev/sde2", "/mnt", "vfat", MS_MGC_VAL, "uid=32074,gid=6202") = 0 Finally, here's my kernel version from uname -a: Linux homedog 2.6.32-5-686 #1 SMP Mon Jun 13 04:13:06 UTC 2011 i686 GNU/Linux Does anyone know if I should be doing something different, or if there is a workaround, or if this is a bug, where it should be reported?
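
    One quick sanity check is what the kernel actually recorded for the mount; if the uid/gid options never show up in /proc/mounts, the problem sits between the mount command and the vfat driver rather than in the options themselves:

        grep ' /mnt ' /proc/mounts
        # expect something like:
        # /dev/sde2 /mnt vfat rw,...,uid=32074,gid=6202,... 0 0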

    Read the article

  • Ubuntu-VirtualBox-LikeWiseOpen network disaster

    - by Sergio
    I've a virtual machine on VirtualBox 4.1.4 with Ubuntu 11.04. It was working perfectly, but after a reboot something really wrong happened: I wasn't able to connect to the internal network (same for NAT). $ sudo dhclient -v Internet Systems Consortium DHCP Client 4.1.1-P1 Copyright 2004-2010 Internet System Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Error creating socket to list interfaces; Permission denied Can't get list of interfaces. The network interface is PCnet-FAST III. Additional information: $ uname -a Linux LinuxFileServer 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux Any ideas? Thanks EDIT: $ sudo ifconfig -a eth1 Link encap:Ethernet HWaddr 08:00:27:af:f2:c7 indirizzo inet6: fe80::a00:27ff:feaf:f2c7/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 collisioni:0 txqueuelen:1000 Byte RX:0 (0 B) Byte TX:3870 (3.8 KB) Interrupt:10 lo Link encap:Loopback locale indirizzo inet:127.0.0.1 Maschera:255.0.0.0 indirizzo inet6: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:16 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 collisioni:0 txqueuelen:0 Byte RX:960 (960.0 B) Byte TX:960 (960.0 B)
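
    The adapter turning up as eth1 (with no IPv4 address) usually means udev pinned a previous MAC address to eth0; on Ubuntu guests, clearing the persistent naming rules and rebooting is a common first step. A hedged sketch:

        sudo rm /etc/udev/rules.d/70-persistent-net.rules
        sudo reboot
        # afterwards the NIC should reappear as eth0 and pick up DHCP normally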

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing: [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm Preparing... ########################################### [100%] 1:jre ########################################### [100%] Unpacking JAR files... rt.jar... jsse.jar... charsets.jar... localedata.jar... plugin.jar... javaws.jar... deploy.jar... error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5 [root@host java]# rpm -qi jre Name : jre Relocations: /usr/java Version : 1.6.0_20 Vendor: Sun Microsystems, Inc. Release : fcs Build Date: Mon Apr 12 19:34:13 2010 Install Date: Thu May 6 06:36:17 2010 Build Host: jdk-lin-1586 Group : Development/Tools Source RPM: jre-1.6.0_20-fcs.src.rpm Size : 50708634 License: Sun Microsystems Binary Code License (BCL) Signature : (none) Packager : Java Software <[email protected]> URL : http://java.sun.com/ Summary : Java(TM) Platform Standard Edition Runtime Environment Description : The Java Platform Standard Edition Runtime Environment (JRE) contains everything necessary to run applets and applications designed for the Java platform. This includes the Java virtual machine, plus the Java platform classes and supporting files. The JRE is freely redistributable, per the terms of the included license. [root@host java]# I couldn't find any results on Google for any parts of that error message, and I have very little experience of rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
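
    Since the payload unpacked and only the %post scriptlet failed, inspecting the scriptlet and, if it is only doing optional registration work, reinstalling without it are reasonable next steps; a hedged sketch:

        # see what %post actually tries to do
        rpm -qp --scripts jre-6u20-linux-i586.rpm
        # reinstall over the half-recorded package, skipping the scriptlets
        rpm -Uvh --replacepkgs --noscripts jre-6u20-linux-i586.rpm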

    Read the article

  • Forward Apache to Django dev server

    - by Alex Jillard
    I'm trying to get Apache to forward all requests on port 80 to 127.0.0.1:8000, which is where the Django dev server runs. I think I have it forwarding properly, but there must be an issue with 127.0.0.1:8000 not being run by Apache? I'm running the Django dev server in an Ubuntu VMware instance, and I'd like other people in the office to see the apps in development without having to promote anything to our actual dev/staging servers. Right now the virtual machine picks up an IP for itself, and when I point a browser to that URL with the default Apache config, I get the default Apache page. I've since changed the httpd.conf file to the following to try and get it to forward the requests to the Django dev server: ServerName localhost <Proxy *> Order deny,allow Allow from all </Proxy> <VirtualHost *> ServerName localhost ServerAdmin [email protected] ProxyRequests off ProxyPass * http://127.0.0.1:8000 </VirtualHost> All I get are 404s with this, and in error.log I get the following (192.168.1.101 is the IP of my computer, 192.168.1.142 is the IP of the virtual machine): [Mon Mar 08 08:42:30 2010] [error] [client 192.168.1.101] File does not exist: /htdocs
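
    ProxyPass * ... is not valid mod_proxy syntax; the first argument must be a URL path prefix. A hedged version of the vhost (assuming mod_proxy and mod_proxy_http are both enabled):

        <VirtualHost *:80>
            ServerName localhost
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:8000/
            ProxyPassReverse / http://127.0.0.1:8000/
        </VirtualHost>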

    Read the article

  • What could cause a WMV to not play to completion in a browser?

    - by Ty W
    A realtor has had videos created for a community she is selling homes for; the people who made the videos gave them to us in WMV format. I can play these videos without any problem in Windows Media Player, VLC, and QuickTime (via Flip4Mac). I can play the videos from their location at videohomeguide.com in my browser without any trouble. However, when I upload the files to our server the video stops at about the 1 minute mark in Safari and Firefox on Mac OS X Snow Leopard. I'm not sure if Windows browsers have the same issue because they are loaded using Windows Media Player. http://carolepaul.com/images/uploads/cottageslsjamestown.wmv <- our server, will fail at 1:09ish. http://www.videohomeguide.com/media/cottageslsjamestown.wmv <- should play to completion (3:27ish) The files generate the same MD5 hash on my desktop and on our server. I used wget to transfer the files, always downloading from videohomeguide.com. Since the files are identical and are playable using VLC/WMP/QuickTime, and playable in the browsers from videohomeguide.com, it seems to me that it is some sort of server config... maybe incorrect headers sent to the browsers? Here are the headers sent and received by Firefox on OS X: http://carolepaul.com/images/uploads/cottageslsjamestown.wmv GET /images/uploads/cottageslsjamestown.wmv HTTP/1.1 Host: carolepaul.com User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive HTTP/1.1 200 OK Date: Mon, 29 Mar 2010 20:43:20 GMT Server: Apache/1.3.41 (Unix) PHP/5.2.6 FrontPage/5.0.2.2635 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b Last-Modified: Wed, 02 Dec 2009 18:08:46 GMT Etag: "1e7919c-198eadc-4b16ad2e" Accept-Ranges: bytes Content-Length: 26798812 Keep-Alive: timeout=10, max=200 Connection: Keep-Alive Content-Type: video/x-ms-wmv
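
    Since the same bytes behave differently on the two hosts, diffing the response headers is a cheap way to isolate the server-side difference; a sketch:

        curl -sI http://carolepaul.com/images/uploads/cottageslsjamestown.wmv > ours.txt
        curl -sI http://www.videohomeguide.com/media/cottageslsjamestown.wmv > theirs.txt
        diff ours.txt theirs.txt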

    Read the article

  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh, works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, provides the following error: rsync -av /source ssh user@remotehost:/target/ protocol version mismatch -- is your shell clean? (see the rsync man page for an explanation) rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6] Here's the relevant section from the rsync man page: This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this: ssh remotehost /bin/true > out.dat then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most com- mon cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins. Trying this on my system produced the following in out.dat: ssh-dummy-shell: Command not allowed. As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using fuse with sshfs - however it is extremely slow, and not fit for production use. Is there any chance of getting rsync sftp to work?
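
    rsync has to execute a remote rsync binary over the shell channel, which an sftp-only account forbids, so an sftp-native mirroring tool is the usual substitute; a hedged sketch with lftp (-R pushes local changes to the remote):

        lftp -u user, sftp://remotehost -e "mirror -R --delete /source /target; quit"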

    Read the article

  • PPP kernel module fails to load

    - by Harel
    I am trying to deal with a problem on a server I don't normally deal with. Out of the blue a script using PPP started failing, saying that the ppp kernel module is not loaded. When I try to modprobe it, it complains about missing files. Note below that the kernel version the server thinks it is at does not match the kernel version directory in /lib/modules. I'm not sure how this could have happened. Could the other maintainers of the server have botched a kernel upgrade? My question is how can I fix this discrepancy. Can I simply rename the lib directory and hope for the best? I don't want to break stuff for the people who actually maintain the server, but I do need to fix the PPP issue. $ sudo /sbin/modprobe -v ppp FATAL: Could not load /lib/modules/2.6.35.4-rscloud/modules.dep: No such file or directory $ cat /proc/version Linux version 2.6.35.4-rscloud ([email protected]) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #8 SMP Mon Sep 20 15:54:33 UTC 2010 $ ls /lib/modules/ 2.6.33.5-rscloud
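
    The quickest check is whether any installed module tree matches the running kernel; renaming the 2.6.33.5 directory will not help, since modules are built per kernel version. A sketch of the check and the two realistic ways out:

        uname -r            # reports 2.6.35.4-rscloud
        ls /lib/modules/    # contains only 2.6.33.5-rscloud
        # mismatch: either reboot into the 2.6.33.5-rscloud kernel if it is
        # still in the boot menu, or install the module tree built for
        # 2.6.35.4-rscloud from whoever supplied that kernel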

    Read the article

  • Setting up Apache with multiple virtual hosts when using Plone 4.1

    - by Shaun Owens
    I have a Plone server running on CentOS. I have multiple instances of Plone running 4.0 and 4.1, and I also have multiple sites. I am new to Linux and am having problems getting Apache to work with multiple virtual hosts. The first host listed works just fine but the second host does not. I get the following error message when I start HTTPD: Starting httpd: [Mon Nov 07 14:38:31 2011] [warn] VirtualHost ordevel3.ucdavis.edu:80 overlaps with VirtualHost ordevel4.ucdavis.edu:80, the first has precedence, perhaps you need a NameVirtualHost directive. What am I missing to get the virtual hosts to work correctly? Below is my syntax in httpd.conf. <VirtualHost ordevel3.abc.edu:80> ServerAlias ordevel3.abc.edu ServerAdmin [email protected] ServerSignature On <IfModule mod_rewrite.c> RewriteEngine On # serving icons from apache 2 server RewriteRule ^/icons/ - [L] RewriteRule ^/(.*) \ http://localhost:8080/VirtualHostBase/http/%{SERVER_NAME}:80/itsdevel3/VirtualHostRoot/$1 [L,P] </IfModule> <IfModule mod_proxy.c> ProxyVia On # prevent the webserver from beeing used as proxy <LocationMatch "^[^/]"> Deny from all </LocationMatch> </IfModule> </VirtualHost> <VirtualHost ordevel4.abc.edu:80> ServerAlias ordevel4.abc.edu ServerAdmin [email protected] ServerSignature On <IfModule mod_rewrite.c> RewriteEngine On # serving icons from apache 2 server RewriteRule ^/icons/ - [L] RewriteRule ^/(.*) \ http://localhost:8180/VirtualHostBase/http/%{SERVER_NAME}:80/ITS/VirtualHostRoot/$1 [L,P] </IfModule> <IfModule mod_proxy.c> ProxyVia On # prevent the webserver from beeing used as proxy <LocationMatch "^[^/]"> Deny from all </LocationMatch> </IfModule> </VirtualHost>
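
    That warning is Apache's standard complaint that name-based virtual hosting was never switched on; declaring it once, keying both vhosts to the same address/port, and giving each an explicit ServerName usually clears it. A sketch of the skeleton:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName ordevel3.ucdavis.edu
            # ... rewrite/proxy rules for the first instance ...
        </VirtualHost>
        <VirtualHost *:80>
            ServerName ordevel4.ucdavis.edu
            # ... rewrite/proxy rules for the second instance ...
        </VirtualHost>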

    Read the article

  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN Client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached: Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [D:\Temp\020810-23431-01.dmp] Mini Kernel Dump File: Only registers and stack trace are available Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols Executable search path is: Windows 7 Kernel Version 7600 MP (8 procs) Free x64 Product: WinNt, suite: TerminalServer SingleUserTS Built by: 7600.16385.amd64fre.win7_rtm.090713-1255 Machine Name: Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50 Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1) System Uptime: 0 days 7:52:06.120 Loading Kernel Symbols ............................................................... ................................................................ .............. Loading User Symbols Loading unloaded module list .... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck A, {0, 2, 0, fffff800028d50b6} Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2 *** WARNING: Unable to verify timestamp for vfilter.sys *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys Probably caused by : vfilter.sys ( vfilter+29a6 ) Followup: MachineOwner Machine is a brand spanking new Dell Precision T5500. Superuser appears to have several recommendations for the Shrew VPN Client as an alternative to the Cisco VPN client on 64 bit machines, so I wondered if anyone here has seen this problem and possibly found a solution to the problem? I've decided to run the VPN client under Windows 7 compatibility mode for the moment (Vista SP2) with administrator privileges to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected. I've noticed it when browsing using Google Chrome and Internet Explorer, usually when I open up a new tab. If it carries on I think I'll be forced to shell out the 120 EUR for the NCP Client instead.

    Read the article

  • LDAP Authentication fails with 500 or 401 depending on bind for Apache2

    - by Erik
    I'm setting up LDAP authentication for our Subversion repository hosted through Apache on a RHEL 5 system. I run into two different issues when I try to authenticate against Active Directory. <Location /svn/> Dav svn SvnParentPath /srv/subversion SVNListParentPath On AuthType Basic AuthName "Subversion Repository" AuthBasicProvider ldap AuthLDAPBindDN "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com" AuthLDAPBindPassword "mypass" AuthLDAPUrl "ldap://my.example.com:389/ou=User Accounts,dc=my,dc=example,dc=com?sAMAccountName?sub?(objectClass=user)" NONE Require valid-user </Location> If I use the above configuration it continually prompts me with the Basic prompt and I have to eventually select Cancel, which returns a 401 (Authorization Required). If I comment out the bind parts it returns 500 (Internal Server Error), griping that authentication failed: [Mon Nov 02 12:00:00 2009] [warn] [client x.x.x.x] [10744] auth_ldap authenticate: user myuser authentication failed; URI /svn [ldap_search_ext_s() for user failed][Operations error] When I perform the bind using ldapsearch and filter for a simple attribute it returns correctly: ldapsearch -h my.example.com -p 389 -D "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com" -b "ou=User Accounts,dc=my,dc=example,dc=com" -w - "&(objectClass=user)(cn=myuser)" sAMAccountName Unfortunately I have no control or insight into the AD part of the system, only the RHEL server. Does anyone know what the hang up is here?
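
    An "Operations error" on a sub-tree search against Active Directory is very often the result of chasing referrals; querying the global catalog on port 3268 instead of 389 is the classic workaround. A hedged sketch of the one changed line:

        AuthLDAPUrl "ldap://my.example.com:3268/dc=my,dc=example,dc=com?sAMAccountName?sub?(objectClass=user)"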

    Read the article

  • How can I configure apache to cache the images that it is serving? Right now it is giving headers t

    - by Tchalvak
    Serving up images that don't seem to cache There's a LAPP stack (PostgreSQL instead of MySQL) running over on http://ninjawars.net. I just recently noticed that images don't seem to be caching with any kind of good frequency as I was reloading a page with a few images on it here: http://www.ninjawars.net/attack_player.php Here is an example image (they're probably all being served exactly the same): http://www.ninjawars.net/images/characters/fighter.png Checking the header, it seems that the caching is set to: Cache-Control:max-age=0 (the full header for this image-like-all-the-others is... Request URL:http://www.ninjawars.net/images/characters/fighter.png Request Method:GET Status Code:200 OK Request Headers Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Cache-Control:max-age=0 Referer:http://www.ninjawars.net/images/characters/fighter.png User-Agent:Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.3 Safari/533.4 Response Headers Accept-Ranges:bytes Content-Length:938 Content-Type:image/png Date:Thu, 13 May 2010 21:24:07 GMT ETag:"ffd4d-3aa-4837efc120540" Last-Modified:Mon, 05 Apr 2010 15:28:45 GMT Server:Apache ) So what modules or config or htaccess or whatever do I change to have it cache images, e.g. for 24 hours?
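
    With mod_expires enabled, a stanza along these lines asks clients to cache images for a day (a sketch for the vhost or an .htaccess file; extend the types as needed):

        ExpiresActive On
        ExpiresByType image/png  "access plus 24 hours"
        ExpiresByType image/jpeg "access plus 24 hours"
        ExpiresByType image/gif  "access plus 24 hours"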

    Read the article

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently: On-site file server (Mac OS X Server) that is used by GFX designers who have a working 1TB of data. Offsite server with 2TB available storage (CentOS 6). Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...). Mac OS X server does a full data backup to LTO 4 tape on a 10 day recycle (Mon-Fri for 2 weeks). rsync pushes about 60GB of file changes a day. The problem: The onsite tape backup is failing as 1TB of graphics files don't compress well enough to fit onto an 800GB LTO4 tape. Backup is incredibly slow doing a full backup. Pain in the backside getting people to remember to change the tape; it often gets forgotten, etc. The quick solution: Buy an LTO5 drive and tapes. However, this has been turned down because of the cost... What I would like: Something that works in the same way rsync works. Only changed data is sent over the wire and can be scheduled to run multiple times during the day. Data that is sent is compressed and sent over SSH. Something that keeps a 14-day retention but doesn't keep duplicate data. So, as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server. To work with the Mac OS X server and CentOS 6 server. Not cost an arm and a leg. Must be a cheaper solution than buying an LTO5 drive with tapes (around £1500). Be able to be set up to run autonomously. Have some sort of control panel that will allow an admin to easily restore a file/folder. Any recommendations?
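
    rsync's --link-dest gives almost exactly this: each run transfers only the day's delta over ssh, unchanged files are hard-linked against the previous snapshot (so 14 days of 1TB plus roughly 60GB a day lands near the 1.84TB estimate), and pruning a day is just deleting a directory. A minimal sketch with placeholder paths and host:

        TODAY=$(date +%F)
        rsync -az --delete -e ssh \
            --link-dest=../latest \
            /Volumes/WorkData/ user@offsite:/backups/$TODAY/
        ssh user@offsite "ln -sfn $TODAY /backups/latest && \
            find /backups -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +"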

    Read the article
