Search Results

Search found 5942 results on 238 pages for 'total stranger'.


  • How to compress PDFs from Word 2007?

    - by chobo2
    Hi, I am trying to send my cover letter and resume away, but apparently the message is too big to send through craigslist (my computer says the total size is 500 KB) because of its 600 KB limit (so small - it should be at least a meg):

        Hi there. You recently tried to email Some Job Email, an anonymous craigslist address. However, your message was too big to be sent through our system. Craigslist has a 600KB limit on the messages we'll send. Please reduce the size of your mail and try again. Thanks for using craigslist.

    The problem is that when I convert my Word 2007 (.docx) files to PDF they become huge - they go from 32 KB to about 320 KB. So is there a way I can either get around the craigslist limit or compress my PDFs a bit to make it happy? I don't want to send zips and such, since the person who gets them might not know what to do with them. I'd also rather not send .docx, since I'm not sure the recipient will have Office 2007 or the Compatibility Pack installed, and some places require PDFs anyway. Thanks
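    One way to shrink an existing PDF, assuming Ghostscript is available (it ships with most Linux distributions and has Windows builds), is to re-distill it with a lower image-quality preset; the file names here are placeholders:

        gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
           -dNOPAUSE -dBATCH -sOutputFile=resume-small.pdf resume.pdf

    /ebook downsamples images to 150 dpi (/screen, at 72 dpi, is smaller still); on a mostly-text document the bloat usually comes from embedded images or fonts. Note also that email attachments are base64-encoded, which inflates them by roughly a third, so 500 KB of attachments becomes roughly 670 KB on the wire - which would already explain tripping a 600 KB limit.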

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But the problem is that I doubt there's much memory optimization left to milk from the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer the option to use swap!

    So, in the meantime (as I wait to secure the resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory, at which point all I can do is request a VPS restart (I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a critical threshold - say, when free RAM falls below 10% - which I've noticed occurs after the VPS has been up for a long time and traffic suddenly spikes on some of the heavier apps (most are just staging apps anyway). I then wish to kill/restart the offending process(es) - most likely Apache. Doing that manually in these situations has restored sane memory usage levels, a hint that possibly one or more of the Django apps has a memory leak. In brief:

        - Monitor overall system RAM usage.
        - When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or, simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, then kill/restart it (a sketch of such a watchdog follows).
        - Rinse and repeat...
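    A minimal cron-driven watchdog along those lines might look like this; the 10% threshold and the apache2 service name are assumptions to adjust, and the /proc/meminfo handling is the pre-MemAvailable style appropriate to a Wheezy-era kernel:

        #!/bin/bash
        # memwatch.sh - restart Apache when "available" RAM drops below a threshold.
        THRESHOLD=10
        read -r total free buffers cached <<< "$(awk '
            /^MemTotal/ {t=$2} /^MemFree/ {f=$2}
            /^Buffers/  {b=$2} /^Cached/  {c=$2}
            END {print t, f, b, c}' /proc/meminfo)"
        avail=$(( free + buffers + cached ))   # free + easily reclaimable, in kB
        pct=$(( avail * 100 / total ))
        if [ "$pct" -lt "$THRESHOLD" ]; then
            logger "memwatch: only ${pct}% RAM available, restarting apache2"
            service apache2 restart
        fi

    Run it from root's crontab, e.g. every minute: * * * * * /usr/local/sbin/memwatch.sh. A supervisor such as monit can express the same policy declaratively, and the kernel's OOM killer is the last-resort version of this already.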

    Read the article

  • RAID-5 performance-per-spindle scaling

    - by Bill N.
    So I am stuck in a corner. I have a storage project that is limited to 24 spindles and requires heavy random write (the corresponding read side is purely sequential). It needs every bit of space on my drives - ~13 TB total in an n-1 RAID-5 - and has to go fast: over 2 GB/s sort of fast. The obvious answer is to use a stripe/concat (RAID-0/1), or better yet RAID-10 in place of the RAID-5, but that is disallowed for reasons beyond my control. So I am here asking for help in getting a suboptimal configuration to be as good as it can be.

    The array is built on direct-attached SAS-2 10K RPM drives, backed by an ARECA 18xx series controller with 4 GB of cache, using 64 KB array stripes and a 4K stripe-aligned XFS file system with 24 allocation groups (to avoid some of the penalty for being RAID-5).

    The heart of my question is this: in the same setup with 6 spindles/AGs I see near disk-limited performance on the write, ~100 MB/s per spindle; at 12 spindles I see that drop to ~80 MB/s, and at 24, ~60 MB/s. I would expect that with distributed parity and matched AGs, performance should scale with the number of spindles, or be worse at small spindle counts, but this array is doing the opposite. What am I missing? Should RAID-5 performance scale with the number of spindles? Many thanks for your answers and any ideas, input, or guidance. --Bill

    Edit: Improving RAID performance - the other relevant thread I was able to find discusses some of the same issues in the answers, though it still leaves me without an answer on the performance scaling.
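    A back-of-the-envelope check that is at least consistent with the observed scaling: with a 64 KB stripe unit, a full stripe on 24 drives spans 23 x 64 KB ~= 1.4 MB of data plus parity, versus 5 x 64 KB = 320 KB on 6 drives. Any write smaller than a full stripe forces a parity read-modify-write, so as the array widens, a fixed-size random write covers a shrinking fraction of its stripe and the parity overhead per spindle grows - per-spindle throughput falling from ~100 to ~60 MB/s as the stripe width quadruples fits that picture.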

    Read the article

  • In Windows 7's Task Manager, why is memory 1118 MB Available but only 62 MB Free? [closed]

    - by Jian Lin
    Possible Duplicate: Windows 7 memory usage

    What are the "Cached", "Available", and "Free" memory figures in Windows 7's Task Manager (from a screenshot, not reproduced here)? If 1118 MB is Available, why isn't it Free (to use)? As I understand it, if a bowl of noodles is available, that doesn't mean it is free... it may still cost $7. But what about in the Task Manager - when memory is Available but not Free, does it cost $2 per MB?

    And what about the "Cached" figure - what exactly is cached memory? We may put some hard disk data in RAM, caching it for faster access (that's the operating system's job). So if the total physical RAM is 6 GB, what is the 1106 Cached? Cached where? Caching physical RAM in... somewhere? It is also strange that the Cached value is sometimes higher and sometimes lower than the Available value. Can somebody who is knowledgeable about this shed some light on these meanings?

    Read the article

  • Hyper-V Virtual Machine Networking issues related to Max Ethernet Frame Size

    - by Goatmale
    I fixed an issue earlier today, but I'm interested in learning WHY the fix worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working. HTTPS, pings, everything else was working fine. After months of prodding around, I took a shot in the dark: on the Hyper-V host server, the physical NIC had an advanced setting, "Max Ethernet Frame Size", set to 1500. After setting it to 1514 the issue was fixed. Setting it to 1512, by contrast, did not solve the issue; 1514 is the magic number.

    My best guess is that with the setting at 1500, pings were still getting through because their data payload is a lot smaller than that of HTTP traffic. As for HTTPS, I read about something called "Path MTU discovery", which I'm going to assume is why HTTPS traffic was getting through fine, albeit more slowly. Looking at this post, people agree that 1518 is the max total frame size. Why didn't I need to change the setting to 1518 instead of 1514 bytes? And why is the default 1500, if that's the maximum size of the Ethernet payload and not the maximum frame size?
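    For reference, the arithmetic behind those numbers: an Ethernet frame is a 14-byte header (6-byte destination MAC + 6-byte source MAC + 2-byte EtherType) plus up to 1500 bytes of payload plus a 4-byte FCS, so 1500 + 14 = 1514 bytes excluding the checksum, and 1514 + 4 = 1518 including it. A NIC limit of 1500 applied to whole frames rather than payloads would silently drop full-sized packets while letting small ones (pings) through, which matches the symptoms; and since the FCS is typically generated and checked in hardware rather than counted against the frame-size setting, 1514 rather than 1518 being sufficient is plausible.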

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003 and is going to be migrated to SBS 2008 on new hardware. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of the activity is a 6-disk 7200 RPM RAID 6, and it struggles to keep up during high-traffic times (idle time often only 10-20%; response times peaking at 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at a 70/30 read-to-write ratio).

    I'm considering a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters in my calculations, I get an average of 583 random IOPS. Granted, SBS 2008 is not the same beast as 2003, but I'd like to assume it will be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk, and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something?

    I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O headroom, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity with a single, well-performing array. So as not to make you do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (Given similar technology - not going from DAS to SAN or anything.)
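    For reference, the usual back-of-the-envelope formula - every number below is an assumption to substitute, not a measurement: effective random IOPS ~= (N x IOPS-per-disk) / (R + W x P), where R and W are the read and write fractions and P is the RAID write penalty (2 for RAID 10, 4 for RAID 5, 6 for RAID 6). The existing array at ~100 IOPS per 7200 RPM disk gives 600 / (0.7 + 0.3 x 6) = 240, right around the observed ~245; ten 10K disks at ~140 IOPS each in RAID 10 give 1400 / (0.7 + 0.3 x 2) ~= 1077, so the proposed array should have comfortable headroom even if the per-disk figures are on the optimistic side.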

    Read the article

  • Apache2 random 403 errors & "server seems busy" log entries on Ubuntu

    - by risyasin
    Hello, I have a strange situation with apache2: meaningless, random 403 errors. Any page (HTML, PHP, etc.) normally works, but if I request repeatedly by pressing the browser's refresh button, it randomly interrupts and sends a 403; after a few seconds it works again. In the error log I see:

        client denied by server configuration

    The main Apache error log says:

        [info] server seems busy, (you may need to increase StartServers, or Min/MaxSpareServers), spawning 8 children, there are 99 idle, and 137 total children

    My current values:

        <IfModule mpm_prefork_module>
            StartServers          120
            MinSpareServers       100
            MaxSpareServers       200
            MaxClients            256
            MaxRequestsPerChild   500
        </IfModule>

    I've increased these 10 by 10, starting from 20, but nothing solved it. I've also disabled KeepAlive. What may cause this problem? Thank you in advance.

    This is a fresh install: Ubuntu Server x86 8.04.4, Virtualmin from its website (not from the Debian repositories), Linux 2.6.24-27-server #1 SMP i686, Apache 2.2.8 with the prefork MPM, Virtualmin version 3.78.gpl GPL, PHP version 5.2.4-2ubuntu5.10.

    Loaded modules: core_module (static), log_config_module (static), logio_module (static), mpm_prefork_module (static), http_module (static), so_module (static), actions_module (shared), alias_module (shared), auth_basic_module (shared), auth_digest_module (shared), authn_file_module (shared), authz_default_module (shared), authz_groupfile_module (shared), authz_host_module (shared), authz_user_module (shared), autoindex_module (shared), cache_module (shared), cgi_module (shared), deflate_module (shared), dir_module (shared), env_module (shared), expires_module (shared), fcgid_module (shared), file_cache_module (shared), headers_module (shared), mime_module (shared), mime_magic_module (shared), evasive20_module (shared), negotiation_module (shared), php5_module (shared), rewrite_module (shared), setenvif_module (shared), ssl_module (shared), status_module (shared). Syntax OK.
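    One module in that list worth ruling out is evasive20_module (mod_evasive): its whole purpose is to answer rapid repeated requests for the same URL with 403s, which matches the refresh-button symptom exactly. Its thresholds look like the following (these are the commonly cited defaults, shown as a sketch rather than this server's actual values):

        <IfModule mod_evasive20.c>
            DOSHashTableSize    3097
            DOSPageCount        2
            DOSPageInterval     1
            DOSSiteCount        50
            DOSSiteInterval     1
            DOSBlockingPeriod   10
        </IfModule>

    With DOSPageCount 2 and DOSPageInterval 1, two refreshes within a second are already enough to trigger a temporary 403. Disabling the module briefly would confirm or eliminate it.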

    Read the article

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped network drive? And is there any way to improve this behaviour?

    What happens is that I tend to use a few different flash drives on my PC, as well as having both a BlackBerry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC, I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in I'll have drive letters like this:

        C: - Local drive
        D: - DVD drive
        G: - Login-script mapped drive
        J: - Login-script mapped drive

    When I plug the BlackBerry in, it mounts two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive, it will mount as G:, even though that letter is already taken by a mapped network drive. This leaves me with the following drives:

        C: - Local drive
        D: - DVD drive
        E: - USB drive (BlackBerry)
        F: - USB drive (BlackBerry)
        G: - Login-script mapped drive
        [G: - USB drive - mounted but not visible in Explorer or the command prompt]
        J: - Login-script mapped drive

    I then have to go into Disk Management, find the new USB drive that's mounted to G:, and re-assign it to another letter, e.g. Z:. Once this is done, AutoPlay detects it and throws up its normal dialog, and it's browseable in Explorer. While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means your normal login account only has User access to workstations and you have to use a different account for any privileged action.

    I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are handled at the user level. For USB devices that are already plugged in before login, obviously they're mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then Windows must know which letters are available?
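    A scriptable alternative to clicking through Disk Management (still requiring elevation, and with the volume number and Z: as placeholders) is diskpart:

        C:\> diskpart
        DISKPART> list volume
        DISKPART> select volume 5      (the number "list volume" shows for the USB drive)
        DISKPART> assign letter=Z

    Windows records the assignment per device under the MountedDevices registry key, so each stick should only need this done once.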

    Read the article

  • ProCurve 1800 switch issue

    - by user98651
    I recently deployed ProCurve 1800-24G switches in place of some older ProCurve 2424M switches in my network. However, I'm having a serious problem with the switch connected to the router. It seems that every night, when our Windows 2008 R2 server (off site) runs a backup to an iSCSI target (on site), facilitated through a PPTP tunnel, the LAN loses connectivity with the router. To clarify, there is only one router, and it is connected to the switch affected by this problem. The only way to resolve the issue is to either reboot the router or pull the Ethernet cable that goes to it and plug it back in. During the outage, clients cannot make DHCP requests, DNS requests, pings, or anything else involving the router.

    Neither the switch nor the router is configured extensively, and the issue only seems to have surfaced with the new switch in place. I have tried a number of things, including replacing cables, rebooting, and checking the switch configuration (it is literally as basic as you can get at this point - flat LAN, no trunking). Interestingly, the router (accessed externally) shows no changes in configuration or status during this state, but similarly cannot ping or access other hosts on the network. The issue occurs at different stages of the backup (i.e., different amounts transferred). I've also dumped packets from the switch into Wireshark but cannot seem to find any anomaly yet (I'm looking at packets around the time the issue appeared and at the time I reset the NIC).

    Any suggestions for what to look for? Ideas on what could be causing this? I'm seeing some transmit/receive errors on the NIC from both the router and switch sides, but nothing serious compared to the total packet counts. I'm seriously doubting hardware at this point, as I have tried another switch, different cables, and a different NIC on the router.

    Read the article

  • IPMI sdr entity 8 (memory module) only showing 3 records?

    - by thinice
    I've got two Dell PE R710s:

        A has a single socket and 3 DIMMs in one bank
        B has both sockets and 6 DIMMs (2 banks @ 3 DIMMs) filled

    The output from "ipmitool sdr entity 8" confuses me - according to the OpenIPMI documentation these records are supposed to represent DIMM slots.

    Output from A (1 CPU, 3 DIMMs, 1 bank):

        ~#: ipmitool sdr entity 8
        Temp | 0Ah | ok  | 8.1 | 27 degrees C
        Temp | 0Bh | ns  | 8.1 | Disabled
        Temp | 0Ch | ucr | 8.1 | 52 degrees C

    Output from B (2 CPUs, 3 DIMMs in both banks, 6 total):

        ~#: ipmitool sdr entity 8
        Temp | 0Ah | ok  | 8.1 | 26 degrees C
        Temp | 0Bh | ok  | 8.1 | 25 degrees C
        Temp | 0Ch | ucr | 8.1 | 51 degrees C

    Now I'm starting to think this output isn't the DIMMs themselves, but maybe a sensor for each bank and something else (otherwise, shouldn't I see 6 readings for the machine with both banks active?). The CPUs aren't near 50 deg C, so I doubt the significantly higher reading is due to proximity. Is anyone able to explain what I'm seeing? Does the output from my ipmitool sdr entity 8 -v, here on pastebin, seem to hint at different sensors? The sensor naming conventions are poor - seems like a Dell thing. There is also output from racadm racdump.
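    It may help to dump the sensor records with their entity IDs visible: in the output above, the entity column reads 8.1 for all three sensors, i.e. entity type 8 (memory module), instance 1 - three sensors on one entity instance, not one sensor per DIMM, which supports the per-bank reading. Something like:

        ipmitool sdr elist full    # name | sensor ID | status | entity ID.instance | reading
        ipmitool sensor            # the same sensors with their threshold values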

    Read the article

  • 16TB Volumes and SNMP on Windows

    - by John K
    As volumes larger than 16 TB became more common, it was recognized that the 32-bit value used to report disk size and usage within the standard HOST-RESOURCES MIB in SNMP was not large enough to report the proper disk size. Net-SNMP seems to have addressed this issue by simply manipulating the value of "AllocationUnits" to keep the 32-bit block count in range (since total disk size/usage equals the 32-bit count times the allocation unit), allowing volumes larger than 8/16 TB to be calculated. Presuming you don't have any reporting interest in the allocation unit, this seems like a fine solution: https://bugzilla.redhat.com/show_bug.cgi?id=654384

    Windows' built-in SNMP service, however, seems to continue to suffer from this error, simply reporting the modulo of the used/assigned disk space, resulting in inaccurate disk size reporting. Is there a way to make Windows correctly report disk usage for volumes over 16 TB? We attempted to simply install Net-SNMP 5.5 x64 and disable the Windows SNMP service entirely, but unfortunately this did not fix our issue. I've seen people in the Cacti community mention simply scripting a solution; unfortunately, we're using Observium for quick and basic systems monitoring. If the issue can't be corrected on the Windows side, can Observium be made to report custom MIBs?
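    The 8/16 TB figure falls straight out of the MIB's data types: hrStorageSize and hrStorageUsed are 32-bit integers counting allocation units, so with the common 4096-byte allocation unit the largest representable volume is about 2^31 x 4096 bytes = 8 TiB (16 TiB with 8 KB units) - hence Net-SNMP's trick of inflating hrStorageAllocationUnits instead. The relevant objects can be inspected with, for example:

        snmpwalk -v2c -c public <host> HOST-RESOURCES-MIB::hrStorageAllocationUnits
        snmpwalk -v2c -c public <host> HOST-RESOURCES-MIB::hrStorageSize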

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not; git show says "unknown revision".

    I set up the repo as follows:

        git svn init http://svn.ip.addr/repo
        git svn fetch svn-remote

    My script looks like this:

        cd /gitsvn/dir
        git svn fetch svn-remote
        git push pub

    The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this:

        fatal: unable to run 'git-svn'
        Counting objects: 21, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (10/10), done.
        Writing objects: 100% (11/11), 59.08 KiB, done.
        Total 11 (delta 8), reused 0 (delta 0)
        To /gitpub/repo.git
           360faf5..a153b0d  trunk -> trunk

    The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.
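    The "fatal: unable to run 'git-svn'" line is the classic signature of cron's minimal environment: cron typically runs with PATH=/usr/bin:/bin, so helper commands that git finds from an interactive shell may not be found. A sketch of the usual fix, inside the script itself (the git-core location varies by distribution; git --exec-path prints the right one):

        #!/bin/bash
        # make sure git's helper commands (git-svn etc.) are on PATH under cron
        export PATH="/usr/local/bin:/usr/bin:/bin:$(git --exec-path)"
        cd /gitsvn/dir || exit 1
        git svn fetch svn-remote
        git push pub

    If the empty random-named files are temp-file droppings from the failed git-svn invocations, they should stop appearing once the fetch actually runs - that part is conjecture, but worth testing before digging deeper.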

    Read the article

  • How do I repair the corrupted files found by sfc /scannow? "Windows Resource Protection found corrupt files but was unable to fix some of them."

    - by galacticninja
    After running chkdsk C: /F /R and finding out that my hard disk has 24 KB in bad sectors (log posted below), I decided to run Windows 7's System File Checker utility (sfc /scannow). SFC showed the following message after I ran it:

        "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.Log windir\Logs\CBS\CBS.log."

    Since the CBS.log file is too large, I ran findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt" (as per Microsoft's KB 928228 article) to get only the log text pertaining to the corrupt files (also posted below). How do I troubleshoot and repair the corrupted files mentioned by sfc /scannow? My OS is Windows 7, 64-bit.

    chkdsk log:

        Checking file system on C:
        The type of the file system is NTFS.
        A disk check has been scheduled.
        Windows will now check the disk.
        CHKDSK is verifying files (stage 1 of 5)...
          936192 file records processed.
        File verification completed.
          25238 large file records processed.
          0 bad file records processed.
          4 EA records processed.
          44 reparse records processed.
        CHKDSK is verifying indexes (stage 2 of 5)...
          1051640 index entries processed.
        Index verification completed.
          0 unindexed files scanned.
          0 unindexed files recovered.
        CHKDSK is verifying security descriptors (stage 3 of 5)...
          936192 file SDs/SIDs processed.
        Cleaning up 24 unused index entries from index $SII of file 0x9.
        Cleaning up 24 unused index entries from index $SDH of file 0x9.
        Cleaning up 24 unused security descriptors.
        Security descriptor verification completed.
          57725 data files processed.
        CHKDSK is verifying Usn Journal...
          36994248 USN bytes processed.
        Usn Journal verification completed.
        CHKDSK is verifying file data (stage 4 of 5)...
          936176 files processed.
        File data verification completed.
        CHKDSK is verifying free space (stage 5 of 5)...
          306238 free clusters processed.
        Free space verification is complete.
        Adding 1 bad clusters to the Bad Clusters File.
        Correcting errors in the Volume Bitmap.
        Windows has made corrections to the file system.
          488282111 KB total disk space.
          485595420 KB in 766458 files.
          401856 KB in 57726 indexes.
          24 KB in bad sectors.
          1059863 KB in use by the system.
          65536 KB occupied by the log file.
          1224948 KB available on disk.
          4096 bytes in each allocation unit.
          122070527 total allocation units on disk.
          306237 allocation units available on disk.
        Internal Info:
        00 49 0e 00 81 93 0c 00 34 01 17 00 00 00 00 00 .I......4.......
        6b 29 00 00 2c 00 00 00 00 00 00 00 00 00 00 00 k)..,...........
        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

    sfc /scannow log (via findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt"):

    Note: the full log is at http://pastebin.com/raw.php?i=gTEGZmWj . Only parts of it (mostly from the last section) are quoted below as a preview, as the full log won't fit within the character limit for questions.

        ...
        2013-12-28 19:37:50, Info CSI00000542 [SR] Beginning Verify and Repair transaction
        2013-12-28 19:37:55, Info CSI00000544 [SR] Verify complete
        2013-12-28 19:37:56, Info CSI00000545 [SR] Verifying 95 (0x000000000000005f) components
        2013-12-28 19:37:56, Info CSI00000546 [SR] Beginning Verify and Repair transaction
        2013-12-28 19:38:03, Info CSI00000548 [SR] Verify complete
        2013-12-28 19:38:03, Info CSI00000549 [SR] Repairing 43 (0x000000000000002b) components
        2013-12-28 19:38:03, Info CSI0000054a [SR] Beginning Verify and Repair transaction
        ...
        2013-12-28 19:38:15, Info CSI00000730 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:62{31}]"GroupPolicy-Admin-Gpedit-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000733 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:30{15}]"frs-core-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000736 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"gpmgmt-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000739 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:74{37}]"MediaServer-ASPAdmin-Migration-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI0000073c [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:36{18}]"Ldap-Client-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI0000073f [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"iSNS_Service-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000742 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"MediaServer-Multicast-Migration-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000745 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:78{39}]"Kerberos-Key-Distribution-Center-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000748 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:86{43}]"GroupPolicy-CSE-SoftwareInstallation-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI0000074b [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:28{14}]"ieframe-dl.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI0000074e [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"GroupPolicy-Admin-Gpedit-Snapin-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000751 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:32{16}]"IPSec-Svc-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000754 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:22{11}]"HTTP-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000757 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:56{28}]"MediaServer-Migration-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI0000075a [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"GPBase-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI0000075d [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"IasMigPlugin-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:15, Info CSI00000760 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:50{25}]"International-Core-DL.man"; source file in store is also corrupted
        2013-12-28 19:38:16, Info CSI00000762 [SR] Cannot repair member file [l:24{12}]"wbemdisp.dll" of Microsoft-Windows-WMI-Scripting, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
        2013-12-28 19:38:16, Info CSI00000763 [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery"
        2013-12-28 19:38:16, Info CSI00000766 [SR] Could not reproject corrupted file [ml:58{29},l:56{28}]"\??\C:\Windows\SysWOW64\wbem"\[l:24{12}]"wbemdisp.dll"; source file in store is also corrupted
        2013-12-28 19:38:16, Info CSI00000768 [SR] Cannot repair member file [l:56{28}]"Microsoft.MediaCenter.UI.dll" of Microsoft.MediaCenter.UI, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_MSIL (8), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
        2013-12-28 19:38:16, Info CSI00000769 [SR] This component was referenced by [l:176{88}]"Microsoft-Windows-MediaCenter-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.MediaCenter"
        2013-12-28 19:38:16, Info CSI0000076c [SR] Could not reproject corrupted file [ml:520{260},l:40{20}]"\??\C:\Windows\ehome"\[l:56{28}]"Microsoft.MediaCenter.UI.dll"; source file in store is also corrupted
        2013-12-28 19:38:16, Info CSI0000076e [SR] Cannot repair member file [l:24{12}]"ReAgentc.exe" of Microsoft-Windows-WinRE-RecoveryTools, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
        2013-12-28 19:38:16, Info CSI0000076f [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery"
        2013-12-28 19:38:16, Info CSI00000772 [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:24{12}]"ReAgentc.exe"; source file in store is also corrupted
        2013-12-28 19:38:16, Info CSI00000774 [SR] Cannot repair member file [l:82{41}]"System.Management.Automation.dll-Help.xml" of Microsoft-Windows-PowerShell-PreLoc.Resources, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
        2013-12-28 19:38:16, Info CSI00000775 [SR] This component was referenced by [l:266{133}]"Microsoft-Windows-Client-Features-Package~31bf3856ad364e35~amd64~en-US~6.1.7601.17514.Microsoft-Windows-Client-Features-Language-Pack"
        2013-12-28 19:38:16, Info CSI00000778 [SR] Could not reproject corrupted file [ml:520{260},l:104{52}]"\??\C:\Windows\System32\WindowsPowerShell\v1.0\en-US"\[l:82{41}]"System.Management.Automation.dll-Help.xml"; source file in store is also corrupted
        2013-12-28 19:38:16, Info CSI0000077a [SR] Cannot repair member file [l:18{9}]"hlink.dll" of Microsoft-Windows-HLink, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch
        2013-12-28 19:38:16, Info CSI0000077b [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery"
        2013-12-28 19:38:16, Info CSI0000077e [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:18{9}]"hlink.dll"; source file in store is also corrupted
        2013-12-28 19:38:16, Info CSI00000780 [SR] Repair complete
        2013-12-28 19:38:16, Info CSI00000781 [SR] Committing transaction
        2013-12-28 19:38:19, Info CSI00000785 [SR] Verify and Repair Transaction completed. All files and registry keys listed in this transaction have been successfully repaired
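    When the log says "source file in store is also corrupted", sfc cannot heal the system from within itself - its repair source is the broken store. The usual next steps on Windows 7 are Microsoft's System Update Readiness Tool (CheckSUR), which replaces corrupt store files from known-good copies, or booting the installation DVD's recovery environment and running sfc offline (the drive letters below assume the offline Windows is on C:):

        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows

    Failing both, a repair (in-place upgrade) install replaces the system binaries while keeping programs and data. Given that chkdsk also found bad sectors, checking the disk's SMART status before investing in repairs would be prudent.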

    Read the article

  • Ways to optimize website performance: WordPress and Apache on Amazon EC2 with RDS MySQL

    - by fuzzybee
    I have 6 WordPress websites running on a single EC2 instance. All the websites connect to databases in one RDS instance. Earlier today, traffic to the largest website peaked and the RDS instance became the bottleneck - CPU utilization was 100% for over an hour. It affected all of my websites, as they all took forever to load. In order to prevent such an issue from happening again, which of the following should I invest time and effort in first? (I will work on all of them later; I just need to prioritise now.)

        - Improve caching for all websites
        - Fine-tune the database server
        - Fine-tune my Apache server

    What will the effect on user experience be for my websites? Some quick searches show that I should limit the number of concurrent connections to my web server, but wouldn't that prevent users from accessing my websites?

    More background: my largest website has 140k visits and 660k page views a month; the other 5 websites add up to much less than that. I'm using a large EC2 instance as the web server and a medium RDS instance as the database server. What I've already done: use the W3 Total Cache plugin on most of the websites, especially the largest one (there is barely anything else I could do in terms of caching for the largest website). Am I using my resources wastefully, or is there simply not enough resource for my websites - or rather, how do I answer that question myself?
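    One way to turn "which first?" into a data question is to switch on MySQL's slow query log before tuning anything; on RDS this is set through a DB parameter group rather than my.cnf (parameter names as commonly documented - worth double-checking against your engine version):

        slow_query_log  = 1     # in the instance's DB parameter group
        long_query_time = 1     # log statements slower than 1 second

    An hour of that during peak traffic usually shows whether the 100% RDS CPU comes from a few unindexed queries (fine-tune the database) or from sheer request volume (improve caching so fewer requests reach MySQL at all).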

    Read the article

  • How to achieve the following RTO & RPO with log shipping only, using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log shipping solution for achieving the following:

        - 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time)
        - 5-minute Recovery Time Objective (must be able to get the db back up and running within 5 minutes)

    I'm considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration: there is a 40 Mbit/sec fiber link between the primary and disaster recovery (DRC) sites, which are about 600 km apart; at close of business, the amount of data generated is predicted to be about 150 MB/sec; log backup is planned for every 5 min.

    Doing some rough calculation, I came up with the following numbers: 40 Mbit/sec = 5 MB/sec at 100% network efficiency, i.e. 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. If we cut it down to 3 minutes of log shipping time, that equals ~900 MB over 3 minutes at 100% network efficiency, leaving about 1 minute of backup time and 1 minute of restore time. I currently have no information on whether the system being used can restore 900 MB in 1 minute, but assume it can.

    For the close-of-business scenario: at 150 MB/sec, and considering the 3-minute log shipping window, that equals about 27 GB of data over 3 minutes...??? I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Mbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMware ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this guest, which seems to have caused the server to run really slowly without any obvious resource issues. Examples of the slow performance:

        - The time taken to bring up the password prompt over ssh is now a few seconds, where it was previously instantaneous.
        - The time taken to unzip a zip file, previously a few seconds, is now around 30 seconds.
        - The time taken to compile VMware Tools has increased by a similar factor.

    Neither the VMware console nor the monitoring commands report high CPU or memory usage, but something is obviously slowing the server down. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom

    Edit: As per your questions, I've looked at some of the performance indicators on both the VM host and the VM guest. Firstly, I tried reserving the full amount of memory (3 GB) for this VM - no other machine on this server has a memory reservation. The swap-in and swap-out rates for the host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5 GB (total memory on the host is 12 GB). The swap rate for the guest is also zero; swap used by the host is 200 MB on average. Compression and decompression rates for the host and guest are zero, and command aborts on the host are zero. Read latency is very low - maximum 10 ms, average 0.8 ms. Write latency is higher - a few spikes to 170 ms but mostly around 25 ms - is this bad? Queue command latency is zero. Physical disk read latency averages 5 ms but is often 10 ms; physical disk write latency averages 15 ms but is often 20 ms. I hope this helps - let me know if you need any more information.
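    One metric the vSphere summary screens tend to hide is CPU ready time - how long the guest's virtual CPUs sit runnable but waiting for physical cores. From the ESXi shell or over SSH, esxtop exposes it:

        esxtop      # press 'c' for the CPU view; watch %RDY (and %CSTP for multi-vCPU VMs)

    Sustained %RDY beyond a few percent per vCPU would produce exactly this kind of across-the-board sluggishness while CPU usage inside the guest still looks idle; it's a hypothesis worth checking here rather than a diagnosis.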

    Read the article

  • Please advise on VPS config choice (with SQL Server Express)

    - by tjeuten
    Hi all, I might be interested in getting a VPS hosting plan for some small personal sites and .NET projects. I was thinking of the Softsys Bronze plan, as my current shared hosting plan is with them too. The stuff I want to host has grown beyond the capabilities of a shared hosting plan, and I also want more control over the IIS/ASP.NET configuration; that's why I'm considering a VPS. The main config details would be: Hyper-V, 30 GB of disk space, 1 GB of RAM. More info here: http://www.softsyshosting.com/Windows-VPS-HyperV.aspx

    Does anyone have experience with this plan (or something similar from another host), and could you maybe answer this couple of questions:

        1. Bronze has a total disk space of 30 GB. Is the OS part of this quota or not? If so, how much does a base configuration with Windows 2008 take up in disk space?
        2. Would you advise Windows 2008 R2 or the original Windows 2008? Or would you advise using Windows 2003 with this config?
        3. I'm planning on running a SQL Server Express install too. Would 1 GB of RAM be enough for both Windows 2008 (R2) and SQL Express? The database load will not be very high (a couple of thousand records returned each day). The DB will most likely be far from the 4 GB limit; that's why I'd go for SQL Express instead of paying extra licensing costs for a SQL Web install. But I'm more concerned about performance.
        4. Would you recommend Softsys as a VPS host? I've been with them for one year on my shared hosting plan and have no complaints so far.
        5. As I have no VPS experience, what are the pitfalls I need to be aware of - in terms of performance mainly, but maybe in other areas too?

    Mathieu

    Read the article

  • Why doesn't the monitor output anything in Linux console mode?

    - by flypen
    I installed Linux without graphics support. I previously used a monitor with 720p support, and it displayed normally. Now I have changed to a monitor with 1080p support. I can see the BIOS and GRUB info on the monitor, and kernel messages in the early boot stages; however, the monitor then immediately reports that there is no input, and after that I can't see anything. It seems to happen after something initializes. Is it related to vesafb?

        vesafb: mode is 1280x1024x32, linelength=5120, pages=0
        vesafb: scrolling: redraw
        vesafb: Truecolor: size=8:8:8:8, shift=24:16:8:0
        mtrr: type mismatch for 7f800000,800000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,400000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,200000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,100000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,80000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,40000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,20000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,10000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,8000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,4000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,2000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,1000 old: write-back new: write-combining
        vesafb: framebuffer at 0x7f800000, mapped to 0xffffc90011380000, using 5120k, total 5120k
        Console: switching to colour frame buffer device 160x64
        fb0: VESA VGA frame buffer device
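    Since the messages stop right where vesafb switches the console to a 1280x1024 framebuffer mode the new panel apparently rejects, one sketch of a workaround (Debian/Ubuntu-style GRUB 2 paths assumed) is to keep the kernel console in plain text mode:

        # /etc/default/grub
        GRUB_GFXPAYLOAD_LINUX=text        # don't switch to a graphical framebuffer
        # then regenerate the config:
        update-grub

    The mtrr "type mismatch" lines are usually harmless noise; the framebuffer mode switch is the main suspect. On a vesafb setup the legacy vga= boot parameter is the other knob for forcing an explicit mode the monitor accepts.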

    Read the article

  • tftpd-hpa service must be restarted before working after fresh boot

    - by Steve
    I'm running Ubuntu 12.04 inside a VirtualBox VM. I've installed tftpd-hpa so I can boot an embedded Linux device via TFTP. My problem is that after a fresh boot of the VM, tftpd doesn't seem to work until I restart the service, after which it works great until the system is rebooted. The transcript below should explain the situation.

    EDIT: After the fresh boot, I execute netstat -a | grep tftp and find nothing. After restarting the service, the same command returns udp 0 0 *:tftp *:* (whitespace removed). I think this might be the key to the problem; I'm just not sure how to resolve it. I don't think it's related to this specific issue, but I had another problem with tftpd that was asked and answered in an earlier question.

        steve@steve-VirtualBox:~$ cat /etc/default/tftpd-hpa
        # /etc/default/tftpd-hpa
        TFTP_USERNAME="tftp"
        TFTP_DIRECTORY="/var/lib/tftpboot"
        TFTP_ADDRESS="0.0.0.0:69"
        TFTP_OPTIONS="--secure"
        steve@steve-VirtualBox:~$ ls -l /var/lib/tftpboot
        total 8204
        -rw-r--r-- 1 root root   34352 May 28 08:22 am335x-boneblack.dtb
        -rw-r--r-- 1 root root   33206 May 28 08:22 am335x-bone.dtb
        -rw-r--r-- 1 root root   41564 May 28 08:22 am335x-evm.dtb
        -rw-r--r-- 1 root root   38048 May 28 08:22 am335x-evmsk.dtb
        -rwxr-xr-x 1 root root 4117904 May 20 09:39 zImage
        -rw-r--r-- 1 root root 4117616 May 28 08:22 zImage-am335x-evm.bin
        steve@steve-VirtualBox:~$ tftp localhost
        tftp> get zImage
        Transfer timed out.
        tftp> quit
        steve@steve-VirtualBox:~$ sudo service tftpd-hpa restart
        [sudo] password for steve:
        tftpd-hpa stop/waiting
        tftpd-hpa start/running, process 2106
        steve@steve-VirtualBox:~$ tftp localhost
        tftp> get zImage
        Received 4143798 bytes in 1.4 seconds
        tftp> quit
        steve@steve-VirtualBox:~$
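    Pending a proper fix, one blunt workaround consistent with the symptom (the daemon apparently starts before the VM's networking is fully up, so it never binds *:tftp) is simply to kick the service once after boot from root's crontab:

        @reboot sleep 30 && service tftpd-hpa restart

    The 30-second delay is an arbitrary guess at how long the interface takes to come up; adjusting the service's startup ordering so it starts after networking would be the cleaner cure.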

    Read the article

  • EFI vs MBR - Installing Windows Server 2008 R2 or 2012 on 8TB

    - by Riaan de Lange
    I'm having some difficulty installing Windows Server 2008 R2 and Windows Server 2012 on an Intel server platform. The server specs are as follows:

        Intel Grizzly Pass Server System - R2308GZ4GC
        2x Intel Xeon E5-2620 - 2.0 GHz - BX80621E52620
        132 GB of REG-DIMM memory - TS1GKR72V6H
        4x Seagate Constellation ES 2TB 3.5" 7200rpm 6Gb/s - ST32000645NS
        Intel Big Laurel 4CH 6G SAS RAID, 512MB - RS2BL040

    In the Intel RAID controller setup, I have configured the HDDs as RAID-0 for testing purposes (they will ultimately be configured in RAID-5), so the total usable HDD space is 7.6 TB or so. When I install the server OSes, they don't seem to go beyond 2 TB (1.76 TB). I have read up on EFI and UEFI boot, and this seemed to work in 2012, but I could not install any drivers for the motherboard... So I also tried EFI for 2008 R2; this worked while installing the OS, but it did not work with the Windows Boot Manager option in the BIOS - it kept freezing once it tried to load the partition.

    My idea was to allocate the complete 8 TB to the OS and load a few VMs on there. I have now started with a new approach: a 256 GB OS partition and a secondary 7.5 TB data partition. I also did a diskpart "convert gpt" on the disk while installing 2008 R2, after which the whole disk, 7.6 TB, was accessible.

    Can anyone please clarify that EFI/UEFI is meant for larger boot volumes - bigger than 2 TB? Would the ideal situation be to run my OS on a 256 GB SSD and attach the 8 TB array to the OS as a normal data disk? Am I correct in saying that if I wanted to boot from an 8 TB partition, I would need to force the BIOS to boot via EFI? The limit for MBR is 2 TB as far as I know... (FYI: the motherboard is EFI-ready.)
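    The 2 TB wall is exactly the MBR address arithmetic: an MBR partition entry stores its start and length as 32-bit LBA sector counts, and 2^32 sectors x 512 bytes/sector = 2 TiB, so nothing beyond that offset is addressable from an MBR partition table regardless of the OS. GPT uses 64-bit LBAs, and UEFI is what knows how to boot from a GPT disk - so booting from the full 8 TB volume requires the EFI path, while under BIOS+MBR the array can only be carved into boot partitions of at most 2 TiB (GPT data disks, by contrast, work fine without UEFI, which matches what happened after convert gpt).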

    Read the article

  • Basic OpenVPN setup

    - by WalterJ89
    I am attempting to connect two Windows 7 (x64 + x86) computers (there will be 4 in total) using OpenVPN. Right now they are on the same network, but the intention is to be able to access the client remotely regardless of its location. The problem I am having is that I am unable to ping or tracert between the two computers. They seem to be on different subnets, even though I have the mask set to 255.255.255.0: the server ends up as 10.8.0.1 255.255.255.252, the client as 10.8.0.6 255.255.255.252, and a third ends up as 10.8.0.10. I don't know if this is a Windows 7 problem or something I have wrong in my config. It's a very simple setup; I'm not connecting two LANs.

    This is the server config (extra lines removed):

        port 1194
        proto udp
        dev tun
        ca keys/ca.crt
        cert keys/server.crt
        key keys/server.key  # This file should be kept secret
        dh keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 6

    This is the client config:

        client
        dev tun
        proto udp
        remote thisdomainis.random.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca keys/ca.crt
        cert keys/client.crt
        key keys/client.key
        ns-cert-type server
        comp-lzo
        verb 6

    Is there anything I missed in this? The keys are all correct and the VPNs connect fine; it's just the subnet or route issue. Thank you.
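    Those /30 addresses (10.8.0.1, 10.8.0.6, 10.8.0.10 with mask 255.255.255.252) are the signature of OpenVPN's default net30 topology, used historically for Windows TAP clients: each client gets a private four-address subnet, which is why the /24 mask never shows up. If every endpoint runs OpenVPN 2.1 or newer, the usual cure is one server-side line:

        topology subnet

    after which clients receive plain addresses inside 10.8.0.0/24 and, with client-to-client already present, should reach each other. The Windows firewall blocking ICMP on the TAP interface is the other classic cause of "connects fine but can't ping" and is worth ruling out too.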

    Read the article

  • Can I get 2Gbit over 1Gbit NICs?

    - by Daniel
    So this really baffles me. Apparently, because 1Gbit Ethernet can transmit data in both directions simultaneously, it should be possible to get 2Gbit of data transfer on a single NIC (1Gbit of send flow and 1Gbit of receive). People claim that because 1Gbit is (almost always) full duplex, it is really 2Gbit in total. My intuition and electrical background tell me that something is not right here: 4 twisted pairs with 250Mbit capacity each gives 1Gbit - unless it really is possible to transfer data in both directions simultaneously.

    I did a test with iperf between an Ubuntu Server 12.04 box and a MacBook Pro, both with decent CPU speed. Testing the speed of the connection individually in each direction, on the Mac I see 112 MB/s regardless of which direction the data is going; on Ubuntu, with vnstat and ifstat, I got 970Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using two iperf clients shows that, for example, the Ubuntu box is sending at 600Mbit and receiving 350Mbit, which adds up to pretty much a 1Gbit link. So to me there is no magical 2Gbit. Can someone confirm this, or tell me why I'm wrong?

    Another thing that confuses me is the fact that a 24-port switch has, for example:

        Throughput up to: 50.6 Mpps
        Switching capacity: 68 Gbps
        Switch fabric speed: 88 Gbps

    which would suggest it can handle 2Gbit per port.
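    For what it's worth, iperf can drive both directions of one link in a single run, which removes the juggling of two server/client pairs; -d requests a simultaneous bidirectional test:

        iperf -s                     # on one host
        iperf -c <server> -d -t 30   # on the other: send and receive at once

    If both streams then report ~940Mbit, the link really is moving ~2Gbit of aggregate traffic: full duplex means each direction has its own 1Gbit, not that either direction can exceed 1Gbit. The 600/350 split observed here more likely reflects a CPU, bus, or disk bottleneck at one endpoint than a shared budget on the wire - 1000BASE-T transmits on all four pairs in both directions at once, using echo cancellation, so the 4 x 250Mbit figure counts only one direction.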

    Read the article

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software which can transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data out to it); the rest of the time, the HDD should not be spinning, due to noise concerns.

    Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU, and an SSD. The problem is it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night. Throughout every day, this computer automatically downloads about 5 GB of data, which is retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a noisy 3 TB hard disk drive which spins day and night. Sometimes I'll need to access some data from several months ago, but most of the time I'll only need data from the last 14 days, which would fit on the SSD.

    Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
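    Failing a cache layer that cooperates, the script fallback mentioned at the end could be as small as landing each day's downloads in a staging area on the SSD and draining it once nightly; the paths and device name are placeholders:

        #!/bin/bash
        # nightly: spin the HDD up once, drain the SSD staging area, spin it down
        rsync -a --remove-source-files /ssd/staging/ /hdd/archive/ \
            && find /ssd/staging -depth -type d -empty -delete
        hdparm -y /dev/sdX    # request immediate standby (spin-down) again

    hdparm -S can set an idle spin-down timeout as well, if the manual -y proves too blunt. The cost is giving up the single-filesystem transparency bcache was meant to provide: reads have to check /ssd/staging first and fall back to /hdd/archive.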

    Read the article

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue with the following server:

        Intel(R) Xeon(TM) MP CPU 3.16GHz

        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
             description: RAID bus controller
             product: MegaRAID
             vendor: LSI Logic / Symbios Logic
             physical id: 7
             bus info: pci@0000:13:07.0
             logical name: scsi2
             version: 01
             width: 32 bits
             clock: 66MHz
             capabilities: storage pm bus_master cap_list rom
             configuration: driver=megaraid latency=32
             resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10x 148GB SCSI U320 15k in RAID 5:

        /dev/sdb1 807G 674G 93G 88% /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig bond0
        bond0  Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
               inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
               inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
               UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
               RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
               TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:10000
               RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server, on Debian 6:

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    What happens is this: once every day or two, the load average goes up - it can reach around 40 - but if I do an nfs-kernel-server restart, everything is OK again. Then on the next day, or a little later, the load average goes up again. The servers are connected to a D-Link DGS 1016D with 24 Gbit ports. I have tried everything to find out what the problem is and why it's happening, but I still cannot resolve this issue. Any ideas on what is happening here?
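    Two quick things worth checking when an NFS server's load periodically runs away like this: whether the nfsd thread count suits an 8-core box, and what the server-side counters say while the load average is climbing. On Debian the thread count lives in /etc/default/nfs-kernel-server (sketch; 32 is an arbitrary starting point, the default is 8):

        # /etc/default/nfs-kernel-server
        RPCNFSDCOUNT=32

        # and while the problem is happening:
        nfsstat -s        # server-side RPC/NFS counters (badcalls, retransmissions)
        top -H            # are the nfsd kernel threads the ones burning CPU?

    Too few nfsd threads makes clients queue and retry, which can snowball into exactly this kind of periodic load spike once a heavy client workload piles onto regular traffic.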

    Read the article

  • Trying to delete a directory stored on a Windows server, mounted on a Mac

    - by AdamG
    I am trying to delete a directory stored on a Windows 2008 R2 server, mounted on a Mac as a network home (10.8.5). The directory was created by Safari and stores temporary internet files. I need to be able to delete this folder on logout from a Mac bash script. The Terminal on the Mac shows the directory as empty:

        36W-FacRm-02:History lwickham$ cd /home/lwickham/Library/Caches/Metadata/Safari/History
        36W-FacRm-02:History lwickham$ ls -al
        total 0
        drwx------ 1 lwickham CGPS\Domain Users 264 Nov  8 09:24 .
        drwx------ 1 lwickham CGPS\Domain Users 264 Nov  8 09:28 ..

    However, on the Windows server it has a single 0 KB file that doesn't start with a "." but is nonetheless invisible to the Mac:

        E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History>dir
         Volume in drive E is FacultyUsers2
         Volume Serial Number is 8C17-4EF3

         Directory of E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History

        11/08/2013  09:24 AM    <DIR>          .
        11/08/2013  09:24 AM    <DIR>          ..
        11/07/2013  04:28 PM                 0 http?%2F%2Fwww.google.com%2Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CFsQFjAF&url=http%253A%252F%252Fwww.usbanklocations.com%252Fhsbc-bank-usa-96th-street-branch.html&ei=5vR7UtmXEPjfsATe0YCIBA&usg=AFQjCNF9ypKbpYbXRng00FY3W8Y6cF1Tiw&bvm=bv.56146854,d.
                       1 File(s)              0 bytes
                       2 Dir(s)  514,231,967,744 bytes free

    All my attempts to delete the directory from the Mac have failed:

        36W-FacRm-02:History lwickham$ rm -fr /home/lwickham/Library/Caches/Metadata/Safari/History/*
        36W-FacRm-02:History lwickham$ rm -frd /home/lwickham/Library/Caches/
        rm: /home/lwickham/Library/Caches//Metadata/Safari/History: Directory not empty
        rm: /home/lwickham/Library/Caches//Metadata/Safari: Directory not empty
        rm: /home/lwickham/Library/Caches//Metadata: Directory not empty
        rm: /home/lwickham/Library/Caches/: Directory not empty
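    Since the blocker is one zero-byte file whose name contains characters the Mac side can't pass through (the ? and %2F escapes), deleting it on the Windows side via its auto-generated 8.3 short name may be the path of least resistance; the short name below is hypothetical - use whatever dir /x actually prints:

        E:\...\Safari\History> dir /x                 (shows each file's 8.3 short name)
        E:\...\Safari\History> del HTTP_~1            (hypothetical short name)
        E:\...\Safari\History> cd .. & rd /s /q History

    For the recurring logout-script case, removing the whole History directory server-side with rd /s /q sidesteps the unrepresentable file name entirely.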

    Read the article
