Search Results

Search found 4355 results on 175 pages for 'filesystem notification'.

Page 146/175 | < Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what I do, I keep getting the following:

        File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'

    I've noticed that the first part of the '3063-3e6' in the above message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk. [404, #0]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy. [404, #160013]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch' [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server where this occurs. The permissions on this repository match every other repository, and disk space is not an issue.
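
    Since branching works over file:/// but fails only through the proxy, one known culprit with nginx in front of Apache/mod_dav_svn is the Destination header that WebDAV COPY/MOVE requests carry: if its scheme or host doesn't match what Apache expects, mod_dav_svn fails with exactly this kind of 404. A sketch of the usual workaround in the nginx server block (the location path and backend address are placeholders; adjust to the real proxy config):

        location /svn/ {
            # COPY/MOVE put the target URL in the Destination header;
            # rewrite https:// to http:// so the backend recognizes it
            set $fixed_destination $http_destination;
            if ($http_destination ~ "^https://(.*)$") {
                set $fixed_destination http://$1;
            }
            proxy_set_header Destination $fixed_destination;
            proxy_pass http://127.0.0.1:8080;
        }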

  • Todo/task manager for android and desktop linux

    - by RiaD
    I'm looking for a task/to-do manager. It should work under Android and Linux (or the web). If it's very good, just one of those platforms is OK too; Android is preferable in that case.

    Necessary features:
      - Russian or English language
      - Possibility to mark a task as finished

    Interesting features (I want as many as possible, though it's OK if some aren't available):
      - A nice and easy-to-use interface
      - Possibility to set a finish time for a task; after that, it's shown as overdue
      - Possibility to set a start time for a task; before that, the task is inactive and may be shown or hidden
      - Possibility to add nested tasks; a task is marked completed when all its sub-tasks are completed
      - Possibility to add dependencies; if A depends on B and B isn't finished yet, A is inactive as in the third feature
      - Possibility to create a task in a moment, without filling in all the forms
      - Synchronization between versions (Android to Linux, or Android to web)
      - Notifications (on Android only)

    Features I'm not interested in:
      - Creating more than one to-do list
      - Sharing to-dos

    I've seen a couple of managers. The best I've found so far is Task Coach for Linux, but I don't much like its interface, and there is no version for Android.

  • Linux 2.6.24-gentoo-r3-comtrance on x86_64: high usage for unknown reasons

    - by Dorjan
    Hello everyone, I'm a complete rookie when it comes to all things Linux-related, so please treat me as such and assume I know nothing. That being said, my top says this:

        top - 12:08:03 up 11 days, 15:36, 0 users, load average: 5.47, 5.53, 5.46
        Tasks: 296 total, 2 running, 294 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.3%us, 1.4%sy, 0.0%ni, 71.3%id, 20.6%wa, 0.0%hi, 0.3%si, 0.0%st
        Mem: 8176880k total, 8118236k used, 58644k free, 89312k buffers
        Swap: 1004052k total, 0k used, 1004052k free, 7235652k cached

          PID USER    PR  NI VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1229 root    15  -5    0    0    0 D    1  0.0 199:28.63 kjournald
         2946 root    20   0 1716  676  552 D    1  0.0 145:02.94 syslogd
        14553 root    20   0 2644 1268  876 R    1  0.0   0:00.34 top
        14609 postfix 20   0 7896 1884 1460 D    1  0.0   0:00.02 bounce
        14630 postfix 20   0 7896 1876 1452 R    0  0.0   0:00.00 bounce

    And df says:

        > df -k
        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda3        4925556   4474836    200508  96% /
        /dev/sda5         489992     36090    428602   8% /tmp
        /dev/sda6      377951852 236171160 122581816  66% /var
        none             4088440         0   4088440   0% /dev/shm

    It has been like this for a few days now... I don't know what is causing the high server load (normally around 1.3). Can anyone give any tips on how to track down the culprit? Many thanks,
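
    With 20.6%wa in the Cpu(s) line, kjournald and syslogd sitting in D state, and / at 96% full, the numbers above point at disk I/O rather than CPU. A minimal way to confirm which device and which processes are responsible (a sketch, assuming the sysstat and iotop packages can be installed):

        # extended per-device statistics, refreshed every 5 seconds
        iostat -x 5
        # per-process I/O, showing only processes actually doing I/O (needs root)
        iotop -o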

  • Windows 7 - ignore security when reading external drive

    - by w-
    My system hard drive on an XP computer kind of failed (random corrupt sectors). So I got a new hard drive and am trying to recover the files. The filesystem is NTFS. The system I'm using for the recovery is Windows 7, and I'm obviously an admin on this box. The last data I'm trying to recover is the stuff in the Documents and Settings folder. I'm using a SATA-to-USB cable thingy so that I can just plug the old drive in as an external hard drive. The problem: in Windows Explorer, when I try to copy the data, I keep getting prompted with security warnings and error messages. It keeps telling me I have to change the owner permissions of the folder and all its contents. If I tell it to change all the file and folder permissions, it takes a really long time, because it has to recurse through all the folder contents to change them. Is there a way for me to ignore the file permissions when doing this? Thanks
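
    One way to sidestep the ACLs entirely, rather than re-owning every file first, is to copy with backup semantics from an elevated prompt. A sketch using robocopy, which ships with Windows 7 (E: and D:\Recovered are placeholder paths):

        :: /B = backup mode, bypasses file security checks
        :: /E = copy subdirectories, including empty ones
        :: /R:0 /W:0 = don't retry or wait on the drive's bad sectors
        robocopy "E:\Documents and Settings" "D:\Recovered" /B /E /R:0 /W:0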

  • VNC Server that can be used from command line?

    - by jesusiniesta
    I'm looking for a replacement for a custom VNC server that we have been using in my company for a long time. I need a simple executable that can be run from the command line by an IT support application without the user noticing it (our application will warn the user; we just don't want him to see that we are using a VNC server underneath). I need it to support Windows, and preferably also OS X. The only option I've found is UltraVNC, but I can't configure it from the command line to accept loopback connections without authentication. We already have a whole VNC viewer + VNC repeater + bouncers architecture; the only missing piece is the VNC server. Do you know of any solution you could suggest? I'm afraid I'll end up developing a new VNC server myself, maybe based on an open-source one. EDIT: When I said I don't want the user to notice this VNC server, I should have added that I don't want him to even notice the installation. So it's better if it can be installed silently, or run as a portable executable (for instance, UltraVNC can be installed and run as a service from the command line, or simply executed quietly with only a notification icon; its problem is that I can't run it without authentication).

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem, it came right back up. Then we shut it down and let it install Windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press ctrl-alt-delete" screen; so far, so good. I logged in. The system got as far as "Applying Group Policy" -- then spent almost an hour applying drive mappings. It finally finished that, and has now spent 30 minutes waiting for the Event Notification Service. I still haven't been able to log in. The Remote Desktop service doesn't appear to be running yet. I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts:

        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these]
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service.

    What can I do? Should I try shutting it down remotely, or will that do more damage?
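
    While the console is stuck, the box can usually still be interrogated remotely with stock tools; a sketch (SERVER is a placeholder, run from another domain machine with admin rights):

        rem query service states remotely
        sc \\SERVER query

        rem pull the 20 newest System-log events as text
        wevtutil qe System /c:20 /rd:true /f:text /r:SERVER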

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes. We want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, and everything is executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option to serve the filesystem's extra contents, and file access from RAM is fast. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now, our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the download, and if so, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD @ €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
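
    For what it's worth, decompression of an initramfs is done by the kernel, not by PXELINUX, which only transfers the file; so any format the booted kernel was compiled with (gzip, bzip2, LZMA, XZ, LZO) should work. A sketch of repacking an unpacked initramfs tree with XZ, which the kernel requires to be built with the CRC32 check flag:

        cd /path/to/initramfs-root
        find . | cpio -o -H newc | xz --check=crc32 > ../initrd.img.xz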

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems via cron. In the past, I know that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/, and I could just navigate there and pull the file out. This is documented on Oracle's website. However, I went to do this the other day and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, and .zfs/snapshot correctly contains a directory for each snapshot, and in each snapshot there is a directory present for each ZFS filesystem. However, these directories appear to be empty. I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine. So the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?
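
    Before assuming breakage, it's worth checking whether the snapshot contents are merely hidden and whether the snapshots actually reference data; a sketch with stock ZFS commands (pool/filesystem is a placeholder):

        # is the .zfs directory hidden or visible for this filesystem?
        zfs get snapdir pool/filesystem
        # make it visible (doesn't touch the data)
        zfs set snapdir=visible pool/filesystem
        # how much data does each snapshot hold and reference?
        zfs list -t snapshot -o name,used,referenced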

  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it. The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't suppose any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition. My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size — the first 18GB was compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem — but it was very slow on my admittedly somewhat derelict Pentium M processor. The time to do that first 18GB was about 30 minutes. So I've resorted to dumping the disk state and partition data separately. I've dumped the partition state with

        sfdisk -d /dev/sdb > sfdisk.-d.out

    I've also created a compressed image of the NTFS partition (the only one on the disk) with

        ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz

    Is there anything else I should do to ensure that I can restore the precise original state of the drive?
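
    One piece the two dumps above don't capture is the boot sector itself, which matters here given the drive's autoplay trickery; a sketch of grabbing it, and of the matching restore path (same device names as above; double-check them before writing):

        # save boot code + partition table (first 512 bytes)
        dd if=/dev/sdb of=mbr.bin bs=512 count=1

        # --- restore ---
        sfdisk /dev/sdb < sfdisk.-d.out
        dd if=mbr.bin of=/dev/sdb bs=446 count=1   # boot code only; keeps the table sfdisk wrote
        gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -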

  • OpenVZ container is running but does not show in vzlist nor can I find the private/conf files for the container

    - by Kakeakeai
    I was creating a new OpenVZ container on one of our VPS nodes when the power went out for that machine. After bringing the machine back online, I could no longer access the container CTID=101. I cannot destroy it using "vzctl destroy 101", I cannot enter or control it, and "vzlist -a" does NOT display any containers at all (this was a fresh node and the first container was being created). I decided to create a new container at this point, assuming that the old container just hadn't been saved for some reason. However, when I went to add the IP/host to the new container, I got a warning that the IP is already in use. After doing a ping to the IP, I realized there was a machine on that IP. I SSHed into the machine and discovered it is the OLD container, which has somehow been orphaned. I cannot find it on the filesystem, I cannot find it using VZ commands, and it is set to start on node boot, so it is impossible to shut down (even SSHing in and typing the "shutdown now" command just reboots the container rather than shutting it down). Is this a flaw in OpenVZ, or am I missing something? I have all the outputs and logs if needed. Thank you all so much in advance.
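
    A couple of places worth checking, since vzlist reads its own records while the kernel tracks what is actually running; a sketch (the paths are OpenVZ defaults and may differ on this node):

        # running containers as the kernel sees them, bypassing vzlist
        cat /proc/vz/veinfo
        # default locations of container private areas and configs
        ls /vz/private/ /etc/vz/conf/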

  • Debian: Unable to mount a second drive as a subdirectory inside of another partition.

    - by jkndrkn
    Hello. I have the following /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point>     <type>      <options>                  <dump> <pass>
        proc            /proc             proc        defaults                   0      0
        /dev/md1        /                 ext3        defaults,errors=remount-ro 0      1
        /dev/md0        /boot             ext3        defaults                   0      2
        /dev/md5        /home             ext3        defaults                   0      2
        /dev/md3        /opt              ext3        defaults                   0      2
        /dev/md6        /tmp              ext3        defaults                   0      2
        /dev/md2        /usr              ext3        defaults                   0      2
        /dev/md4        /var              ext3        defaults                   0      2
        /dev/md7        none              swap        sw                         0      0
        /dev/sdc        /home/httpd       ext3        defaults                   0      2
        /dev/hda        /media/cdrom0     udf,iso9660 user,noauto                0      0
        /dev/sdc1       /mnt/usb/backup-1 auto        defaults                   0      0

    I am unable to get /dev/sdc to mount at /home/httpd on reboot. The /home/httpd directory exists. Mounting via mount -t ext3 /dev/sdc /home/httpd works just fine. Mounting via mount -a generates the following error message:

        mount: you must specify the filesystem type

    This is, incidentally, the same message that I see while booting. The error message goes away if I comment out the line in fstab starting with /dev/sdc.
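
    Note that the table lists both /dev/sdc (the whole disk, at /home/httpd) and /dev/sdc1 (a partition of that same disk, at /mnt/usb/backup-1). If the ext3 filesystem sits directly on the whole device, there is no partition table, so /dev/sdc1 has nothing to probe — which matches the "you must specify the filesystem type" error. A quick check with stock tools (a sketch):

        # show filesystem type and UUID of whatever actually carries a filesystem
        blkid /dev/sdc /dev/sdc1
        # the kernel's view of the disk's partitions
        grep sdc /proc/partitions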

  • How to fix Truecrypt MBR using Command Prompt or Linux live USB?

    - by Michal Stefanow
    I was playing with TrueCrypt and decided to do a fresh installation of Windows 7 from a USB stick. Unfortunately the Windows 7 installer says: "setup was unable to create a new system partition". My entire HDD has been formatted and is visible as 320GB of unallocated space, but neither fdisk nor the Windows 7 installer nor the Windows XP installer could help. (Windows XP doesn't even see the HDD; it sees only the USB stick and says "not enough space to install".) It may be related to TrueCrypt's pre-boot authentication, boot loader and/or MBR. As I don't have an optical drive, I could not create a rescue disk. Right now I need a rescue of some kind, presumably by erasing/fixing the MBR using a Linux live USB or the Command Prompt. Another approach is to click the "repair your computer" menu in the Windows 7 installer, then click "restore your computer", then click OK upon the error, and get access to the Command Prompt. Yet another approach: if I start the computer without the Linux USB, I receive this:

        error: unknown filesystem.
        grub rescue>

    Any help would be greatly appreciated, as my laptop is kind of not fully operational now. UPDATE: This was asked a long time ago; I ended up formatting everything (eventually it worked using a different bootable USB)...
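
    For reference (the update above says the machine was eventually reformatted): the usual way to clear a pre-boot loader like TrueCrypt's from a Linux live USB is to zero the start of the disk; a sketch, assuming the internal drive shows up as /dev/sda (destructive — double-check the device first):

        # wipe the boot code only, keeping the partition table
        dd if=/dev/zero of=/dev/sda bs=446 count=1
        # or wipe the full MBR, partition table included
        dd if=/dev/zero of=/dev/sda bs=512 count=1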

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSDs in a RAID 1 array. I would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to fire off an email if such an event occurs? Here are some additional specs: Supermicro X9SCM-IIF motherboard utilising the onboard RAID controller; OS = Windows 2012 Standard. Also, is it possible to simulate a disk failure by pulling a disk out of the bay? SSDs appear to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down, so I can swap them out with minimum delay. UPDATE 26th June 2013: None of the software that ships with the Supermicro X9SCM-* motherboards offers support for RAID monitoring. As has been pointed out here, these boards are built on an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm One small issue: the software only allows you to send email notifications without SMTP authentication. There's a bunch of different workarounds here: http://communities.intel.com/thread/30771
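
    Independent of the RAID layer, smartmontools (which also runs on Windows Server) can watch each member disk's SMART state and mail on failure indicators, which speaks directly to the "SSDs fail close together" worry; a minimal smartd.conf sketch (the address and device names are placeholders):

        # -a: monitor all SMART attributes, -m: mail this address on trouble
        /dev/sda -a -m admin@example.com
        /dev/sdb -a -m admin@example.com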

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop: the laptop has an xfs filesystem with full-disk encryption. It uses the aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation: the files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (the cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance: I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes  939 Mbits/sec
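
    Since disk and network each test fine in isolation, the remaining suspects are the transport (ssh encrypts on the laptop's CPU, which has no AES-NI) and rsync's own overhead; a quick way to separate the two is to compare the same transfer over ssh and over the unencrypted rsync daemon protocol (hostname and module name are placeholders; the daemon side needs an rsyncd module configured):

        # over ssh: encryption cost included
        rsync -a --progress bigfile laptop:/tmp/
        # over the plain rsync protocol: no encryption
        rsync -a --progress bigfile rsync://laptop/tmp-module/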

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    My three system partitions are created with LVM on a LUKS partition (dm-crypt). They are /home, / and swap, and the filesystem is ext4. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, so they can access my encrypted partitions. I don't want those people to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home, to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But so far I haven't found any way to wipe this free space successfully. I tried

        dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so that the partition was completely full of data; df /dev/mapper/home said 100% was used and 0 sectors were available. But I could still recover gigs of data with photorec, even though I selected recovery from the free space only. photorec displays /dev/mapper/home - 340 GB / 317 GiB (RO), but df says the size of /home is just 313G. Why the difference, and what do the 340 GB refer to? It looks like there is a region of my /dev/mapper/home partition that I can't access to overwrite, but can access to recover from. I also checked for corrupted sectors; there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?
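
    Two details may account for much of the gap: 340 GB and 317 GiB are the same size in different units (df's 313G is also GiB), and ext4 reserves 5% of blocks for root, which a dd run as an ordinary user can never fill. A sketch for checking and working around the reservation:

        # show the reserved block count of the filesystem
        tune2fs -l /dev/mapper/home | grep -i reserved
        # run the fill as root so the reserved blocks are included, then clean up
        dd if=/dev/zero of=/home/fillitup bs=1M
        sync
        rm /home/fillitup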

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disk crash of one of two hard disks in a software RAID with LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group is detected fine, but only one LV is left. (Some hashes are replaced by "x".)

        # lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
          VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
          LV UUID                x-x-x-x-x-x-vQmZ6C
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                4.00 MiB
          Current LE             1
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0

        root@rescue ~ # vgdisplay
          --- Volume group ---
          VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  4
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                1
          Open LV               0
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               698.62 GiB
          PE Size               4.00 MiB
          Total PE              178848
          Alloc PE / Size       1 / 4.00 MiB
          Free  PE / Size       178847 / 698.62 GiB
          VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system here. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is access the root filesystem. But everything was in the LVM. We have only this:

        (parted) print
        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  750GB  750GB  primary               boot, lvm

    And this 750GB LVM volume is exactly what we see on top.
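
    If the LVM metadata was damaged rather than the data itself, LVM normally keeps metadata backups under /etc/lvm/archive of the installed system; whether those are reachable from a rescue system depends on where /etc lived, but if they are, a sketch of the usual recovery path:

        # list available metadata backups for the volume group
        vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae
        # restore the newest backup that still lists all LVs, then activate
        vgcfgrestore -f /etc/lvm/archive/<chosen-backup>.vg VG_XenStorage-x-x-x-x-408b91acdcae
        vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae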

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O performance, and tons of unused disk space. I guess it's a universal issue, since vendors offer 300 GB as their smallest drives, but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that occupies 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time. The redo logs get their own 300 GB mirror but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so the sysadmin team is reminded about the risk of hindering the performance of the main db/app? Or, even better, protected from that risk? The typical situation that comes to my mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's see df -h and we'll figure something out in no time..." I don't put the emphasis on strictness of security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?
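
    On Linux, capping the I/O rate doesn't need a custom filesystem: the blkio cgroup controller can throttle a block device to a fixed byte rate, so scratch space carved from the leftover capacity can be made deliberately slow. A cgroup-v1 sketch (the device numbers come from ls -l /dev/sdb; the 5 MB/s figure is arbitrary):

        mkdir /sys/fs/cgroup/blkio/scratch
        # throttle reads and writes on device 8:16 to 5 MB/s
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_bps_device
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_bps_device
        # move the current shell (and its children) into the cgroup
        echo $$ > /sys/fs/cgroup/blkio/scratch/tasks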

  • Trying to set up multiple primary partitions on Ubuntu Linux

    - by JohnMerlino
    I currently have Ubuntu Desktop installed on a hard drive. I want to partition the drive so that I can reserve 30 gigs for Ubuntu Server and 30 gigs for Ubuntu Desktop. The drive has 300 gigs available. Right now I am booting from the DVD drive and installing Ubuntu Server. I selected "Guided partitioning" and created a 30 gig primary partition with an ext4 journaling filesystem, set "yes, format it" for "format partition", and set the bootable flag to on. I intend to use this 30 gig partition to hold Ubuntu Server and to boot from it. Now I have two other partitions. They are both set to "logical". One currently occupies 285.8 gigs and is ext4 (when I try to set its bootable flag to true, it gives the warning "You are trying to set the bootable flag on a logical partition. The bootable flag is only useful on primary partitions"). More alarmingly, it says "No existing file system was detected in this partition". Actually, I'm thinking that this is the partition that is supposed to be holding my current Ubuntu Desktop. And of course I want this to be bootable and a primary partition, so I can dual-boot it alongside the server partition. The third partition is also set to logical, and it is used as swap area. My question is about that second partition. It's supposed to be a primary partition holding my existing Ubuntu Desktop edition. How do I switch it to primary and make sure that it's pointing to my existing desktop installation?
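
    Whatever route is taken, turning a logical partition into a primary one means rewriting the partition table, so the first step should be a dump that can be put back if the installer gets it wrong; a sketch with stock tools (assuming the disk is /dev/sda):

        # save the current partition table to a restorable text file
        sfdisk -d /dev/sda > partition-table.backup
        # put it back later if needed
        sfdisk /dev/sda < partition-table.backup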

  • file read performance degrades as number of files increases

    - by bfallik-bamboom
    We're observing poor file read I/O performance that we'd like to understand better. We can use fio to write 100 files at a sustained aggregate throughput of ~700 MB/s. When we switch the test to read instead of write, the aggregate throughput is only ~55 MB/s. The drop seems related to the number of files, since throughput for read and write is comparable for a single file and then diverges proportionally as we increase the number of files. The test server has 24 CPU cores, 48 GB of memory, and is running CentOS 6.0. The disk hardware is a RAID 6 array of 12 disks with a Dell H800 controller. The device is partitioned with ext4 using the default settings. Increasing the readahead (using blockdev) improves read throughput significantly, but it still doesn't match the write speed. For instance, increasing the readahead from 128KB to 1M improved read throughput to ~145 MB/s. Is this a known performance issue in our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to isolate the issue further? Thanks.
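
    For anyone reproducing this, a minimal fio invocation in the same spirit as the test described (the directory, file count and sizes are placeholders; sequential buffered reads spread over many files):

        fio --name=manyfiles --directory=/data/fiotest \
            --rw=read --bs=1M --size=1G --nrfiles=100 \
            --ioengine=sync --group_reporting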

  • Why am I getting programs stuck in log_wait_commit under Linux?

    - by staticsan
    There is something subtly wrong with my Linux install that I just can't locate. It is Ubuntu Lucid Lynx (10.04) 64-bit. The hardware is a Dell Optiplex 960: Intel Core 2 Quad CPU, 8 GB of RAM, 2x 300 GB HDDs. /home is ext3 on one disk and everything else is on the other (/ is also ext3). I have VirtualBox running a 64-bit Vista image for Outlook calendaring, but the heavyweight apps are IntelliJ, NetBeans, MySQL and Opera. Opera also loads my mail (IMAP), of which there are over 10,000 messages. The problem is that Opera stalls for a few seconds from time to time. Watching the process list shows it's in log_wait_commit, which means (as far as I have figured out) the filesystem is holding things up. Sometimes I can make this happen by doing a subversion update, but usually it happens for no reason I can see. It usually happens to Opera, but I've seen NetBeans go under too. It doesn't make the app crash - it's just completely unresponsive for a few seconds. Googling has not helped. The closest I got was removing the sync attribute on the file system. This achieved nothing. On the advice of a Linux guru friend, I lowered /proc/sys/vm/dirty_writeback_centisecs to 300, but that didn't do anything either, and it was all he could think of. What is going on, and can I fix it? (And how?)
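
    log_wait_commit means the process is blocked waiting for an ext3 journal commit, so the usual knobs to experiment with are the commit interval (changeable on a live system) and the data journaling mode (data=writeback, which has to be set in /etc/fstab and applied at mount time). A sketch of the low-risk experiment:

        # commit the journal every 60 s instead of the default 5 s (revert with commit=5)
        mount -o remount,commit=60 /home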

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process takes a very long time to run, with lots of disk I/O. On my CPU usage graph, the cleanup runs show up as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute marks. At 15:00 I removed the 39-minute run from cron, so a cleanup job twice the size now runs half as often (you can see the peaks get twice as wide and half as frequent). The graphs for I/O time and disk operations show the corresponding load. At the peak, when about 14,000 sessions were active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one CPU core and what seems to be 100% of the disk I/O for the entire period. Why is it so resource-intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
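
    For the experiment mentioned in the edit: fuser probes every candidate session file for open handles, spawning work per file, so dropping it reduces the job to a plain find-and-delete. A sketch of the leaner cron line (same paths and schedule as above):

        09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] \
            && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
            -cmin +$(/usr/lib/php5/maxlifetime) -delete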

  • How does KMS (Windows Server 2008 R2) differentiate clients?

    - by Joe Taylor
    I have recently installed a KMS server in our domain and deployed 75 new Windows 7 machines using an image I made with Acronis True Image. There are two variations of this image currently rolled out. When I go to activate the machines, they report that the KMS count is not sufficient. On the server, slmgr /dlv shows:

        Key Management Service is enabled on this machine.
        Current count: 2
        Listening on Port: 1688
        DNS publishing enabled
        KMS Priority: Normal
        KMS cumulative requests received from clients: 366
        Failed requests received: 2
        Requests with License status Unlicensed: 0
        Requests with License status Licensed: 0
        Requests with License status Initial grace period: 1
        Requests with License status License expired or hardware out of tolerance: 0
        Requests with License status Non-genuine grace period: 0
        Requests with License status Notification: 363

    Is this to do with the fact that I've used the same image for all the PCs? If so, how do I get around it? Would changing the SID help? OK, knowing I've been thick, what's the best way to rectify the situation? Can I sysprep the machines to OOBE on each individual machine? Or would NewSID work?
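
    KMS counts clients by their Client Machine ID (CMID), which sysprep /generalize regenerates and NewSID does not; clones made without generalizing all report the same CMID, which would keep the count stuck at 2 exactly as above. A sketch for verifying and fixing on each clone (rearm resets the CMID and needs a reboot):

        rem show the CMID this client reports to the KMS host
        cscript //nologo %windir%\system32\slmgr.vbs /dlv
        rem regenerate the activation state, including the CMID
        cscript //nologo %windir%\system32\slmgr.vbs /rearm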

  • cpusets not working - threads aren't running in the cpuset I specified?

    - by lori
    I have used cpuset to shield some CPUs for exclusive use by some realtime threads. Displaying the cpuset config with the test app RealtimeTest1 running and its tasks moved into the cpusets:

        $ cset set --list -r
        cset:
                 Name         CPUs-X   MEMs-X Tasks Subs Path
        ------------- ------------ - ------ - ----- ---- ----------
                 root         0-23 y    0-1 y   279    2 /
               system 0,2,4,6,8,10 n      0 n   202    0 /system
               shield 1,3,5,7,9,11 n      1 n     0    2 /shield
        RealtimeTest1      1,3,5,7 n      1 n     0    4 /shield/RealtimeTest1
              thread1            3 n      1 n     1    0 /shield/RealtimeTest1/thread1
              thread2            5 n      1 n     1    0 /shield/RealtimeTest1/thread2
                 main            1 n      1 n     1    0 /shield/RealtimeTest1/main

    I can interrogate the cpuset filesystem to show that my tasks are supposedly pinned to the CPUs I requested:

        /cpusets/shield/RealtimeTest1 $ for i in `find -name tasks`; do echo $i; cat $i; echo "------------"; done
        ./thread1/tasks
        17651
        ------------
        ./main/tasks
        17649
        ------------
        ./thread2/tasks
        17654
        ------------

    Further, if I use sched_getaffinity, it reports what cpuset does - that thread1 is on CPU 3 and thread2 is on CPU 5. However, if I run top -p 17649 -H with f,j to bring up the last-used CPU, it shows that thread1 is running on thread2's CPU, and the main thread is running on a CPU in the system cpuset. (Note that thread 17654 is running FIFO, hence thread 17651 is blocked.)

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ P COMMAND
        17654 root  -2   0 54080  35m 7064 R  100  0.4 5:00.77 3 RealtimeTest
        17649 root  20   0 54080  35m 7064 S    0  0.4 0:00.05 2 RealtimeTest
        17651 root  20   0 54080  35m 7064 R    0  0.4 0:00.00 3 RealtimeTest

    Also, looking at /proc/17649/task to find the last CPU each of its tasks ran on:

        /proc/17649/task $ for i in `ls -1`; do cat $i/stat | awk '{print $1 " is on " $(NF - 5)}'; done
        17649 is on 2
        17651 is on 3
        17654 is on 3

    So cpuset and sched_getaffinity report one thing, but the reality is another. I would say that cpuset is not working? My machine configuration is:

        $ cat /etc/SuSE-release
        SUSE Linux Enterprise Server 11 (x86_64)
        VERSION = 11
        PATCHLEVEL = 1

        $ uname -a
        Linux foobar 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
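
    One more cross-check, since cpuset, top's last-used-CPU column, and /proc can disagree while threads migrate: ask the scheduler directly for each thread's currently allowed CPU list (thread IDs taken from the output above):

        # print the current CPU affinity list of each thread
        for t in 17649 17651 17654; do taskset -pc $t; done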

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a microSD card online. It's a SanDisk 16GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it, which doesn't solve the problem. I have tried Windows and Linux (Ubuntu); both show the problem. I've used my USB microSD readers, and even tried putting it in my phone and writing data to it from there. All of these show the same problem. Now, the really odd thing is that, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any kind of error. Now, I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of garbled (random UTF-8 symbols, I suppose) folder and file names, and it is impossible to do anything with those. All the other data (outside the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3~4GB. After that I can still access the data. But as soon as I eject/safely remove/unmount it, the bad things happen somehow. The next time I plug it in, the folders I most recently wrote to (and sometimes also the folders I wrote to the time before that) are all gibberish. Does anybody have any clue what might be going on here?
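
    Corruption that reliably appears once roughly 3~4GB have been written is a classic signature of a counterfeit card that reports more capacity than its flash actually has; because such cards alias high addresses onto low ones, chkdsk and badblocks -w can read back what they just wrote and still pass. f3 (or H2testw on Windows) instead writes uniquely numbered files across the whole advertised capacity and then verifies them; a sketch (the mount point is a placeholder):

        # fill the mounted card with numbered test files...
        f3write /media/sdcard
        # ...then verify that every sector reads back what was written
        f3read /media/sdcard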

  • Why can I not edit or delete directories inside this directory

    - by user43053
    Hello there. First I thought this was PHP-related, but maybe it isn't. My original post, which may be irrelevant now, is located at the bottom. The problem is I have a directory, /articles/, with 10 subdirectories in it. I have been changing the permissions lately, but now it seems all the permissions of the parent folder, sub-folders and files are either chmod 755 or 777. I cannot move, delete or edit files inside this parent directory or its subdirectories with my FTP client. I can, however, edit, delete, and create new files and directories, and change them, with PHP functions without problems. What may the problem be? OLD POST (ignore everything below this line): If I create a directory with mkdir(), or create a file with fopen(), file_put_contents() or SimpleXMLElement::asXML(), I am unable to access the file with my FTP client or the cPanel File Manager. If I try to delete or edit them, I get errors. Dreamweaver suggests it is a permission problem or a network or filesystem fault (but I've set the permissions with chmod() to 0777, and when I check the cPanel, it confirms chmod 777). I also tried to use fileowner(), and the function returns int(99), the same owner as those files that I could access with my FTP client. It seems files and directories created with PHP can only be modified or deleted with PHP. I thought this must be a server-setup-related issue, so I'm asking here. I am on a shared server, and I have no idea about setting up servers. EDIT: It seems the problem is different. I cannot move files with the FTP client into the parent or subdirectories either, so this problem may not be PHP-related after all: it seems to apply to any directory, regardless of whether it was created by PHP. EDIT 2: The parent directory has chmod 755. Thank you for your time. Kind regards, Marius
