Search Results

Search found 2486 results on 100 pages for 'eon rusted du plessis'.


  • What is /opt/sun_docs used for, in Solaris 10?

    - by benc
    Solaris 10, SPARC. While trying to clean up my "/opt" directory, I saw the "sun_docs" directory. I scanned the contents with "du -a", and also found a single, possibly related file (/var/opt/sun_docs/sundocs.html). If I understand correctly, it looks like a local set of HTML files, designed to be read by a locally running browser? It looks like it could be shared via http, if an admin knew how to turn that on. I did google and checked docs.sun.com. -ben

    Read the article

  • After using lvextend, I can't recover unused space

    - by Cory Gagliardi
    I needed to add more disk space to my CentOS VM, so I added another virtual disk, then used lvextend to add the space to the existing partition. The steps I followed were:

        echo "- - -" > /sys/class/scsi_host/host0/scan
        pvcreate /dev/sdb
        vgextend VolGroup00 /dev/sdb
        lvextend -l +100%FREE /dev/VolGroup00/LogVol00
        resize2fs /dev/VolGroup00/LogVol00

    This worked fine. I subsequently filled up the VM, then deleted most of the used disk space. However, the unused disk space was never recovered after I deleted all of the files. This will illustrate what I'm saying better:

        # df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00   61G   32G   26G  56% /
        /dev/sda1                         99M   20M   75M  21% /boot
        tmpfs                           1006M     0 1006M   0% /dev/shm
        # pwd; du -h --max-depth=0
        /
        5.1G    .

    I cannot figure out how to get the partition to see that only 5.1 GB is used. Any ideas what I'm doing wrong?
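
    A common cause of a df/du mismatch like this is space still held by files that were deleted while a process kept them open; a minimal diagnostic sketch, assuming such a process exists:

        # list open files whose link count is zero (deleted but still held open)
        lsof +L1
        # if a process shows up holding a large deleted file, restarting that
        # process releases the space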

    Read the article

  • Free Windows traffic monitor that can run a command when reaching a limit

    - by leromarinvit
    Does anyone know a free traffic monitor tool for Windows XP which can run a configurable program when reaching a limit? Hoo Net Meter and DU Meter can do that, but they both cost money. What I'm trying to do is throttle the connection when getting close to the monthly limit, so that watching YouTube or downloading large files isn't fun any longer, but e-mail or looking something up still work. The throttling part is settled, Traffic Shaper XP works nicely. I just need something to automatically call a batch script when reaching the limit.

    Read the article

  • Why no "da-doomp" (disconnect notification sound) sometimes when unplugging wireless mouse receiver?

    - by DanH
    Sometimes (maybe one case in 3), when I unplug the wireless mouse receiver on my Sony VGN-CS215J laptop, there is no "da-doomp" sound, even after a minute or two. And if I plug the receiver back in there is no corresponding "du-dump" sound and the mouse is still (immediately) "live". This can happen when the activity light is out and there's nothing obviously going on -- it's not simply that the box is too busy. Other times one gets the expected behavior (and usually I get the correct behavior if I plug the receiver back in for a few seconds and then unplug it after a "failure"). The reason this is significant is that if I get no "da-doomp" then the laptop will not sleep properly -- it will go to sleep initially, but then reawaken a few minutes later inside my laptop case and proceed to run the battery down (and no doubt overheat the unit). Any ideas?

    Read the article

  • How can I free up disk space in my Ubuntu Hardy Heron install?

    - by rvs
    I'd like to make some room on /dev/sda1 without necessarily having to remove a whole bunch of applications (I've already gone through and deleted all frivolous apps). This is the state of /dev/sda1 currently:

        Dir: /
        Type: ext3
        Total: 9.4GiB
        Free: 488.6MiB
        Available: 0bytes
        Used: 8.9GiB

    EDIT: added du output from comments below:

        769068  /var/lib/mysql
        351208  /usr/lib
        297060  /usr/local/bin/eclipse/plugins
        184124  /usr/bin
        175924  /usr/lib/openoffice/program
        143940  /usr/local/bin/eclipsePHP/plugins
        92520   /boot
        81200   /opt/android-sdk-linux/add-ons/google_apis-6_r01/images
        79964   /opt

    That's funny, because the tables in /var/lib/mysql are the reason that I ran out in the first place. But I need them, and room for many more possibly large db's.
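
    On a Debian-style system like Hardy, one safe place to reclaim space without removing applications is the APT package cache; a minimal sketch, assuming the cache has accumulated downloaded .debs:

        # see how much the package cache is holding
        du -sh /var/cache/apt/archives
        # remove cached package files (they can be re-downloaded if needed)
        sudo apt-get clean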

    Read the article

  • is it worth defragging an iPod

    - by alimack
    Essentially my 5G iPod was cutting tracks off and generally misbehaving. So I did the following:

    1) Used DiskWarrior - heavy directory fragmentation, which it fixed;
    2) Used iDefrag - some fragmentation, but it kept halting as it couldn't move files;
    3) Tried to write out the drive with Disk Utility - got a warning from DU, so gave up before I started;
    4) Completely restored using iTunes;
    5) Reran DiskWarrior - still heavy directory fragmentation;
    6) Reran iDefrag - still fragmentation, although limited to two bands.

    The iPod is much quicker to traverse menus and there is no more track skipping. My question is this - is defragging worth it, or does the heat generated by the process kill the drive and make it a self-defeating exercise? Anyone have any metrics/figures? Clearly it's a bad idea for solid state drives like the nano & touch.

    Read the article

  • Calculate disk space occupied by many .png files

    - by Alexander Farber
    I have 357 .png files located in different subdirs of the current dir:

        settings# find . -name \*.png | wc -l
        357
        settings# find . -name \*.png | head
        ./assets/authenticationIcons/audio.png
        ./assets/authenticationIcons/bbid.png
        ./assets/authenticationIcons/camera.png
        ./bin/icons/ca_video_chat.png
        ./bin/icons/ca_voice_control.png
        ./bin/icons/ca_vpn.png
        ./bin/icons/ca_wifi.png

    Is there a one-liner to calculate the total disk space occupied by them (before I pngcrush them)? I've tried (unsuccessfully):

        settings# find . -name \*.png | xargs du -s
        4  ./assets/support/wifi_locked_icon_white.png
        1  ./assets/support/wifi_vpn_icon_connected.png
        1  ./assets/support/wi_fi.png
        1  ./assets/support/wi_fi_conected.png
        8  ./bin/blackberry-tablet-icon.png
        2  ./bin/icons/ca_about.png
        2  ./bin/icons/ca_accessibility.png
        2  ./bin/icons/ca_accounts.png
        2  ./bin/icons/ca_airplane_mode.png
        2  ./bin/icons/ca_application_permissions.png
        1  ./bin/icons/ca_balance.png
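
    A minimal sketch of one way to get a single grand total, assuming GNU findutils and coreutils (-c makes du print a cumulative total, and --files0-from handles file names safely):

        # total disk usage of all .png files under the current dir
        find . -name '*.png' -print0 | du -ch --files0-from=- | tail -n 1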

    Read the article

  • Constantly diminishing free space on fedora 17

    - by Varun Madiath
    I don't know how to explain this other than to say that my computer seems to magically run out of free space when it runs for a while. The output of df -h . on my home directory is below:

        /dev/mapper/vg_vmadiath--dev-lv_home   50G   47G     0 100% /home

    When I run sudo du -cks * | sort -rn | head -11 on /home I get the following output (I got this from "decreasing free space on fedora 12"):

        32744344  total
        32744328  vmadiath
        16        lost+found

    If I restart my system, things seem to fix themselves and I'm left with about 20 or 25GB of free space. I'm running XFCE with XMonad as my window manager under Fedora 17. Programs I'm running include the XFCE terminal, grep, find, firefox, eclipse, libre-office writer, zsh, emacs. Any help will be greatly appreciated. I'll gladly give you any other output you might need.

    Read the article

  • Disk space mismatch on OS X Server (Leopard)

    - by John Gardeniers
    My Nagios system sent me an alert to inform me that the disk space on one of the drives on our OS X server is very low. When I run df /Volumes/Apps/ I get:

        /dev/disk0s3   117209520  114932472  2277048  99%  /Volumes/Apps

    When I run du -c /Volumes/Apps it reports:

        11489944 total

    Why might there be such a vast difference? Even more importantly, how do I find the problem and what can I do about it? I'm essentially just a Windows admin, so am well out of my comfort zone here. I use a Mac but I'm not a Mac admin in any real sense of the word.
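
    One thing worth ruling out first: du silently skips directories it cannot read, so when run without root it can under-report badly; a minimal sketch, assuming permission gaps are the issue:

        # run the same measurement as root so no directory is skipped
        sudo du -sh /Volumes/Apps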

    Read the article

  • Using AutoMySQLBackup on Rackspace Cloud

    - by xref
    Since Rackspace Cloud only allows FTP access, it makes using AutoMySQLBackup a little trickier, and while it is at least creating DB dumps, I get errors in the backup log:

        ###### WARNING ######
        Errors reported during AutoMySQLBackup execution.. Backup failed
        Error log below..
        .../backups/automysqlbackup: line 1791: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1855: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 803: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1972: /usr/bin/du: Permission denied

    Since files are being created, I'm assuming the failing find command has to do with actually rotating out and deleting the old backups? Line 803:

        find "${CONFIG_backup_dir}/${subfolder}${subsubfolder}" -mtime +"${rotation}" -type f -exec rm {} \;

    Any ideas for alternatives?
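
    If /usr/bin/find itself cannot be executed in this environment, the rotation step can be approximated in plain shell; a hedged sketch, assuming bash and GNU stat are available (the variable names are borrowed from the script):

        # delete regular files older than $rotation days without calling find
        cutoff=$(( $(date +%s) - rotation * 86400 ))
        for f in "${CONFIG_backup_dir}/${subfolder}${subsubfolder}"/*; do
            [ -f "$f" ] || continue
            [ "$(stat -c %Y "$f")" -lt "$cutoff" ] && rm -- "$f"
        done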

    Read the article

  • Disk space profiling in Unix

    - by user1677770
    I'm looking for a tool to summarize how disk space is being used on very large partitions. Our file system is around 950TB, mostly broken up into 20TB partitions. There are some really nice graphical tools for visualising these file spaces:

        http://www.disksavvy.com/disksavvy_screenshots.html
        http://methylblue.com/filelight/

    But I'm really not sure how well they will scale. Does anybody have any experience of these tools and can make any recommendations? Even something that parses and summarises a really big du output would be a good start.
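
    As a starting point for summarising a big tree without a GUI, a minimal sketch that caps the depth and sorts the heaviest directories first (the depth and partition path are placeholders):

        # top 50 heaviest directories, up to three levels deep, one filesystem only
        du -x --max-depth=3 /partition | sort -rn | head -n 50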

    Read the article

  • CentOS 100% disk full - how to remove log files, history, etc.?

    - by kopeklan
    mysqld won't start because disk space is full:

        101221 14:06:50 [ERROR] /usr/libexec/mysqld: Error writing file '/var/run/mysqld/mysqld.pid' (Errcode: 28)
        101221 14:06:50 [ERROR] Can't start server: can't create PID file: No space left on device

    Running df -h:

        Filesystem  Size  Used Avail Use% Mounted on
        /dev/sda2    16G  3.2G   12G  23% /
        /dev/sda5   4.8G  4.6G     0 100% /var
        /dev/sda3   430G  855M  407G   1% /home
        /dev/sda1    76M   24M   49M  33% /boot
        tmpfs       956M     0  956M   0% /dev/shm

    du -sh * in /var:

        12K   account
        56M   cache
        24K   db
        32K   empty
        8.0K  games
        1.5G  lib
        8.0K  local
        32K   lock
        221M  log
        16K   lost+found
        0     mail
        24K   named
        8.0K  nis
        8.0K  opt
        8.0K  preserve
        8.0K  racoon
        292K  run
        70M   spool
        8.0K  tmp
        76K   webmin
        2.6G  www
        20K   yp

    On /dev/sda5 there are website files in /var/www. Because this is my first time, I have no idea which files to remove, other than moving /var/www to another partition. And one more thing: what is the right way to remove log files, history, etc. on /dev/sda5?
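
    For the log side of this, a minimal sketch for finding the biggest logs and emptying them in place without breaking the daemons that still hold them open (the file names are examples, not a recommendation of what to delete):

        # find the largest files under /var/log
        du -a /var/log | sort -rn | head -n 10
        # truncate a large log in place rather than deleting it,
        # so the writing process keeps a valid file handle
        : > /var/log/messages
        # or force a rotation if logrotate is configured
        logrotate -f /etc/logrotate.conf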

    Read the article

  • df command shows no output

    - by user119720
    I'm running Linux on my server. When I want to verify the size of the disk, I issue this command:

        df -h

    But it does not produce ANY output. Strangely enough, when I issue other commands such as fdisk -l or du -h, they show output normally. Does anyone know why this is happening? Thanks.

    Edit: here is the output of cat /etc/fstab:

        none /dev/pts devpts rw 0 0

    And this is for the mount command:

        none on /dev/pts type devpts (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

    Edit (2): here is the output of cat /proc/mounts:

        /dev/vzfs / vzfs rw,relatime,usrquota,grpquota 0 0
        proc /proc proc rw,relatime 0 0
        sysfs /sys sysfs rw,relatime 0 0
        none /dev tmpfs rw,relatime 0 0
        none /dev/pts devpts rw,relatime 0 0
        none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
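
    Unlike du or fdisk, df builds its list from the mount table (/etc/mtab), so an empty or stale mtab produces exactly this symptom; a hedged sketch of a common workaround on OpenVZ-style containers (inspect /etc/mtab before replacing it):

        # see what df would read
        cat /etc/mtab
        # if it is empty or stale, point it at the kernel's view of mounts
        ln -sf /proc/mounts /etc/mtab
        df -h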

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes, and # of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-Months it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, [s3cmd] doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l but that seems like a hack. The Ruby AWS::S3 library looked promising, but only provides the # of bucket items, not the total bucket size. Is anyone aware of any other command line tools or libraries (prefer Perl, PHP, Python, or Ruby) which provide ways of getting this data?
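
    For completeness, a hedged sketch using the newer AWS CLI, assuming it is installed and configured; its --summarize flag prints both the object count and the total size that the question asks for:

        # total object count and size for a bucket
        aws s3 ls s3://bucket_name --recursive --summarize --human-readable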

    Read the article

  • When using gt5 in my home directory I get a blank page.

    - by MT
    When using gt5 in various directories on my system (including my home directory) I get blank results. If I limit the max-depth enough, I get results. For example, in my home directory 'gt5 --max-depth 2' produces a listing, while 'gt5 --max-depth 3' produces a blank page. I've noticed that the temporary HTML file that gets created in /tmp (such as '/tmp/gt5.9035.kJVM08Y9/gt5.html') is a zero-byte file. I can successfully do a du in the same directory (which is what I thought gt5 was using), so I'm not sure what else to check.

    Read the article

  • Creating a Scheduled Task that runs forever on Windows XP

    - by Mike Fiedler
    When I create a scheduled task, I do so via the command line:

        schtasks.exe /Create /TN "startup-script" /TR "C:\startup.bat" /RU taskuser /RP taskpasswd /SC ONLOGON

    The idea is that this task runs forever. The batch file starts a Java process that is never meant to end. I've used ONLOGON, as the machine automatically logs in as taskuser. All this works fine for about 72 hours, after which the Duration flag kicks in and ends the process. Windows XP doesn't have the /DU flag on the command line - is there an alternative method of creating a task that is meant to run from system startup (doesn't even require logon) and runs forever, without touching a GUI?
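
    The 72-hour cutoff is the task's "Stop the task if it runs for:" setting, which schtasks on XP cannot change. One hedged alternative is to skip the Task Scheduler entirely and wrap the batch in a service, which starts at boot with no logon and has no duration limit; a sketch, assuming the instsrv/srvany tools from the Windows Server 2003 Resource Kit are installed (the service name and paths are examples):

        rem create a service that hosts srvany.exe
        instsrv StartupScript "C:\Program Files\Windows Resource Kits\Tools\srvany.exe"
        rem tell srvany what to run: cmd.exe /c C:\startup.bat
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\StartupScript\Parameters" /v Application /t REG_SZ /d "C:\WINDOWS\system32\cmd.exe"
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\StartupScript\Parameters" /v AppParameters /t REG_SZ /d "/c C:\startup.bat"
        net start StartupScript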

    Read the article

  • NAT: understanding interconnection

    - by PITCHY
    I have 2 routers, A and B, interconnected with a serial connection, with the IPs 10.0.0.1/30 for A and 10.0.0.2/30 for B. On router A, NAT was activated with the pool 200.0.0.1 - 200.0.0.15/28. When traffic goes out through this router, it takes an IP from the pool, for example 200.0.0.10. Knowing that my new IP (200.0.0.10) is not on the same network as my destination interface (10.0.0.2), how can this work?

    Read the article

  • How to get the summarized sizes of folders and their subfolders?

    - by Kau-Boy
    Let's say I want to get the size of each folder of a Linux file system. When I use ls -la I don't really get the summarized size of the folders. If I use df I get the size of each mounted file system, but that also doesn't help me. And with du I get the size of each subfolder and the summary of the whole file system. But I want to have only the summarized size of each folder within the ROOT folder of the file system. Is there any command to achieve that?
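
    A minimal sketch of the usual approach, assuming GNU du (-s summarizes each argument instead of listing every subfolder):

        # one summarized line per top-level directory
        du -sh /*
        # or, equivalently, limit the recursion depth
        du -h --max-depth=1 /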

    Read the article

  • Find out the size of a .tar.gz archive in the terminal without unpacking

    - by Sven
    I have a 32GB .tar.gz archive and I'd like to know the size of the files if I unpack this compressed archive. I'd like to avoid unpacking the archive first and then using e.g. du. Is it possible to find out the size of the contained files without unpacking the compressed archive (on a Linux and/or Mac OS X system)? Of another archive I know that it also contains .tar.gz files. Is it also possible to calculate the size of the unpacked archives that are contained within an archive? (For example, by setting a level to which the "unpacking" should be simulated?)
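
    A minimal sketch for the first part, assuming GNU tar (BSD tar's listing has a different column layout): the verbose listing streams through the decompressor without writing anything to disk, and the third column of each line is the member's size in bytes:

        # sum the sizes of all members without extracting anything
        tar -tzvf archive.tar.gz | awk '{ total += $3 } END { print total " bytes" }'

    gzip -l would be cheaper, but its uncompressed-size field wraps at 4GB, so it is unreliable for an archive this large.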

    Read the article

  • Tools for tracking disk usage

    - by Carey
    I manage a number of Linux fileservers. These all run applications written from 0-10 years ago. As sometimes happens, a machine will come close to, or run out of, disk space. Reasons include applications not rotating log files, a machine with 500GB of disk producing 150GB of new files every month that were not written to tape, databases gradually increasing in size, people doing silly things... generally a bit of chaos. Anyway, when a machine unexpectedly goes from 50% to 100% full in a couple of hours, I figure out what broke (lots of "du") and delete files or contact someone. I can also look at cacti graphs to figure out what the machine's normal disk usage is (e.g. for /home). Does anyone know of any tools that will give finer-grained information on historical usage than a cacti/RRD graph? Like "/home/abc/xyz increased 50GB in the last day".
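
    Absent a dedicated tool, a hedged sketch of a low-tech approach: snapshot per-directory usage from cron and diff the snapshots (the paths and dates below are placeholders):

        # daily cron job: record per-directory usage, three levels deep
        du -x --max-depth=3 /home > /var/log/du-snapshots/home-$(date +%F).txt
        # later: see which directories grew between two days
        diff /var/log/du-snapshots/home-2012-06-01.txt /var/log/du-snapshots/home-2012-06-02.txt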

    Read the article

  • After deleting log files, Ubuntu server still saying there is no space

    - by Mark
    My Ubuntu server has stopped due to a lack of disk space. I deleted some log files which had grown huge very quickly. But df -h still shows I have no space left. When I run du -sh /* I can see that I should have plenty of disk space left after deleting the logs. I ran lsof +L1 and it brought up two files: /var/log/mail.log and /var/log/mail.err. These are two logs I had deleted. I restarted apache, postfix and mysql (mysql won't restart because of the lack of disk space, I think) but df -h still shows no space.
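
    Since lsof already names the deleted files, the space can usually be reclaimed without a restart by truncating them through /proc; a minimal sketch where the PID and FD number are hypothetical placeholders, read the real ones from the lsof output:

        # lsof +L1 shows which process (PID) holds which descriptor (FD) open;
        # suppose PID 1234 holds FD 3 on the deleted /var/log/mail.log:
        : > /proc/1234/fd/3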

    Read the article

  • Windows 8 on iSCSI with LIO target: thin provisioning

    - by LubosD
    I have installed Windows 8.1 on an iSCSI target. This target is provided by Linux LIO and is backed by a sparse file. One of the reasons I created such an installation was thin provisioning. In other words, when I free disk space on Windows, LIO should punch holes into the file, thus freeing storage space on the Linux server as well. I have checked my kernel's sources and the SCSI UNMAP command really is supported for file-backed targets. On the other hand, deleting files on Windows doesn't lower the amount of space taken by the backing file on Linux (checked with du). Actually, the backing file sometimes grows even more. Some sources on Google say Win8 should support UNMAP/DISCARD on iSCSI, but even in Wireshark I only see ordinary read and write commands when files are being deleted. Is there any way to fix or troubleshoot this?
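
    On the Windows side, a hedged first check is whether Windows is issuing delete notifications (TRIM/UNMAP) at all, and whether an explicit retrim changes the Wireshark picture; a sketch using built-in tools from an elevated prompt:

        rem 0 means delete notifications (TRIM/UNMAP) are enabled
        fsutil behavior query DisableDeleteNotify

        rem from an elevated PowerShell, request an explicit retrim of the volume:
        rem Optimize-Volume -DriveLetter C -ReTrim -Verbose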

    Read the article

  • USB flash drive showing empty but half of the capacity is in use

    - by tamakisquare
    Not sure if I should post my question on Super User, but it looks like the most appropriate place among all Stack Exchange sites. I have a 16GB Kingston DataTraveler USB drive. When I tried to use it this morning, it showed nothing in there, yet its details showed that half of the capacity was in use. I tried it with OS X, Ubuntu, and Windows 7 and the results were the same. I tried to create a new folder and it worked. Apparently, the drive is working but somehow not showing my previously stored data. Note that I was still using the drive last night and there weren't any problems. Following @rob's suggestion, du -h gave me:

        16K    ./.Trashes
        960K   ./.Spotlight-V100/Store-V1/Stores/2620683B-A38B-42F4-A247-45CAF4826ADE
        976K   ./.Spotlight-V100/Store-V1/Stores
        1008K  ./.Spotlight-V100/Store-V1
        1.0M   ./.Spotlight-V100
        1.1M   .

    And df -h gave me:

        /dev/sdb1   15G  7.9G  7.1G  53% /media/KINGSTON

    Confirming what I reported. Anyone got a clue/answer to this issue? Thanks.
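
    Given the df/du disagreement, one hedged first step is a filesystem check, which can both explain the invisible files and recover orphaned clusters; a sketch, assuming the stick is FAT-formatted and the device name matches the df output (unmount it first):

        sudo umount /media/KINGSTON
        # check and repair a FAT filesystem (from dosfstools); -a = automatic repair
        sudo dosfsck -v -a /dev/sdb1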

    Read the article

  • MySQL showing some high spikes

    - by user111196
    We have one MySQL server, and all of a sudden access to it has become slow. I read somewhere that it may be due to the size of /var - is that right? I am not too sure how to check the root cause of it. The CPU is at nearly 150%. Any indication would help. I have tried this so far, du -sh * in /var:

        4.0K  account
        67M   cache
        4.0K  cvs
        16K   db
        8.0K  empty
        4.0K  games
        4.0K  gdm
        148G  lib
        4.0K  local
        16K   lock
        624M  log
        0     mail
        4.0K  nis
        4.0K  opt
        4.0K  preserve
        400K  run
        298M  spool
        4.0K  tmp
        359M  www
        12K   yp
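
    Since /var/lib is the 148G outlier and plausibly holds the MySQL data directory, a hedged sketch for seeing which databases account for it, from inside MySQL itself:

        -- per-database on-disk footprint (data + indexes), largest first
        SELECT table_schema,
               ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
        FROM information_schema.tables
        GROUP BY table_schema
        ORDER BY size_gb DESC;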

    Read the article

  • web grid server pagination triggers multiple controller calls when changing page

    - by Thomas Scattolin
    When I apply the server filter "au" to my web grid and then change page, multiple calls to the controller are made: the first with no filter, the second filtering on "a", the third filtering on "au". My table holds huge data, so the first call takes longer than the others. I see the grid display the third call's result first, then the second, and finally the first (this order corresponds to the response times of my controller, which vary with the filter parameter). Why are all those controller calls made? Can't my controller just be called once, with my complete filter "au"? What should I do? Here is my grid:

        $("#" + gridId).kendoGrid({
            selectable: "row",
            pageable: true,
            filterable: true,
            scrollable: true,
            //scrollable: {
            //    virtual: true //false // Bug: causes multiple renderings...
            //},
            navigatable: true,
            groupable: true,
            sortable: {
                mode: "multiple", // enables multi-column sorting
                allowUnsort: true
            },
            dataSource: {
                type: "json",
                serverPaging: true,
                serverSorting: true,
                serverFiltering: true,
                serverGrouping: false, // does not work...
                pageSize: '@ViewBag.Pagination',
                transport: {
                    read: {
                        url: Procvalue + "/LOV",
                        type: "POST",
                        dataType: "json",
                        contentType: "application/json; charset=utf-8"
                    },
                    parameterMap: function (options, type) {
                        // Rework the format in which the parameters are sent
                        // so they can be interpreted correctly server-side.
                        // Build the sort parameter:
                        if (options.sort != null) {
                            var sort = options.sort;
                            var sort2 = "";
                            for (i = 0; i < sort.length; i++) {
                                sort2 = sort2 + sort[i].field + '-' + sort[i].dir + '~';
                            }
                            options.sort = sort2;
                        }
                        if (options.group != null) {
                            var group = options.group;
                            var group2 = "";
                            for (i = 0; i < group.length; i++) {
                                group2 = group2 + group[i].field + '-' + group[i].dir + '~';
                            }
                            options.group = group2;
                        }
                        if (options.filter != null) {
                            var filter = options.filter.filters;
                            var filter2 = "";
                            for (i = 0; i < filter.length; i++) {
                                // Check whether the column type == string:
                                // walk the columns to find the one with the same field name.
                                var type = "";
                                for (j = 0; j < colonnes.length; j++) {
                                    if (colonnes[j].champ == filter[i].field) {
                                        type = colonnes[j].type;
                                        break;
                                    }
                                }
                                if (filter2.length == 0) {
                                    if (type == "string") {
                                        // With '' around the value.
                                        filter2 = filter2 + filter[i].field + '~' + filter[i].operator + "~'" + filter[i].value + "'";
                                    } else {
                                        // Without '' around the value.
                                        filter2 = filter2 + filter[i].field + '~' + filter[i].operator + "~" + filter[i].value;
                                    }
                                } else {
                                    if (type == "string") {
                                        // With '' around the value.
                                        filter2 = filter2 + '~' + options.filter.logic + '~' + filter[i].field + '~' + filter[i].operator + "~'" + filter[i].value + "'";
                                    } else {
                                        filter2 = filter2 + '~' + options.filter.logic + '~' + filter[i].field + '~' + filter[i].operator + "~" + filter[i].value;
                                    }
                                }
                            }
                            options.filter = filter2;
                        }
                        var json = JSON.stringify(options);
                        return json;
                    }
                },
                schema: {
                    data: function (data) {
                        return eval(data.data.Data);
                    },
                    total: function (data) {
                        return eval(data.data.Total);
                    }
                },
                filter: {
                    logic: "or",
                    filters: filtre(valeur)
                }
            },
            columns: getColonnes(colonnes)
        });

    Here is my controller:

        [HttpPost]
        public ActionResult LOV([DataSourceRequest] DataSourceRequest request)
        {
            return Json(CProduitsManager.GetProduits().ToDataSourceResult(request));
        }

    Read the article
