Search Results

Search found 14653 results on 587 pages for 'disk cache'.

Page 94/587 | < Previous Page | 90 91 92 93 94 95 96 97 98 99 100 101  | Next Page >

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on for a connected node graph. The nodes are classed into two types, "Networks" and "Routers." In this picture, the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice versa; routers cannot connect directly to other routers, and networks cannot connect directly to other networks. I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, but it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get the path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient? As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Impossible paths can also occur, and the network/router graph topology can change at any time, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
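
    Since the question doesn't name a language, here is a minimal illustrative sketch in C++ (the type and function names are invented, not taken from the game's code). Because every hop costs exactly one network crossed, a plain breadth-first search from each network yields the whole all-pairs cache in O(N * (V + E)), which is negligible for ~55 networks and ~20 routers:

        // Sketch only: the graph layout and cache shape are assumptions for illustration.
        #include <queue>
        #include <vector>

        struct Graph {
            // networkToRouters[n] lists router indices attached to network n, and vice versa.
            std::vector<std::vector<int>> networkToRouters;
            std::vector<std::vector<int>> routerToNetworks;
        };

        // pathCache[src][dst] = number of networks crossed from src to dst, or -1 if unreachable.
        std::vector<std::vector<int>> BuildPathCache(const Graph& g) {
            const int n = static_cast<int>(g.networkToRouters.size());
            std::vector<std::vector<int>> pathCache(n, std::vector<int>(n, -1));

            for (int src = 0; src < n; ++src) {
                std::vector<int>& dist = pathCache[src];
                dist[src] = 0;
                std::queue<int> frontier;
                frontier.push(src);

                // Plain BFS; one hop = network -> router -> next network.
                while (!frontier.empty()) {
                    int net = frontier.front();
                    frontier.pop();
                    for (int router : g.networkToRouters[net]) {
                        for (int next : g.routerToNetworks[router]) {
                            if (dist[next] == -1) {
                                dist[next] = dist[net] + 1;
                                frontier.push(next);
                            }
                        }
                    }
                }
            }
            return pathCache; // rebuild whenever the topology changes
        }

    This stores only distances; if the actual route is needed, record a predecessor per node inside the BFS and walk it backwards when a path is requested.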

    Read the article

  • Storing large numbers of varying size objects on disk

    - by Foredecker
    I need to develop a system for storing large numbers (10's to 100's of thousands) of objects. Each object is email-like - there is a main text body, and several ancillary text fields of limited size. A body will be from a few bytes to several KB in size. Each item will have a single unique ID (probably a GUID) that identifies it. The store will only be written to when an object is added to it. It will be read often. Deletions will be rare. The data is almost all human-readable text, so it will be readily compressible. A system that lets me issue the I/Os and manage the memory and caching would be ideal. I'm going to keep the indexes in memory, using them to map index entries to the single (and primary) key for the objects. Once I have the key, I'll load the object from disk, or from the cache. The data management system needs to be part of my application - I do not want to depend on OS services or separately installed packages. Native (C++) would be best, but a managed (C#) solution would be OK. I believe that a database is an obvious choice, but this needs to be super-fast for looking up an object and loading it into memory. I am not experienced with database tech and I'm concerned that general relational systems will not handle all this variable-sized data efficiently. (Note, this has nothing to do with my job - it's a personal project.) In your experience, what are the viable alternatives to a traditional relational DB? Or would a DB work well for this?
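
    A minimal sketch of the pattern described above, an append-only data file plus an in-memory GUID-to-offset index, written in C++ since a native solution is preferred; the class name, on-disk layout and (deliberately thin) error handling are assumptions, not an existing library:

        // Sketch only: Put() appends the body and records where it landed; Get() is one seek + one read.
        #include <cstddef>
        #include <cstdint>
        #include <fstream>
        #include <string>
        #include <unordered_map>

        class ObjectStore {
        public:
            explicit ObjectStore(const std::string& path) {
                // Create the file if it does not exist yet, then reopen it for read/write.
                file_.open(path, std::ios::in | std::ios::out | std::ios::binary);
                if (!file_.is_open()) {
                    std::ofstream(path, std::ios::binary).close();
                    file_.open(path, std::ios::in | std::ios::out | std::ios::binary);
                }
            }

            void Put(const std::string& guid, const std::string& body) {
                file_.clear();
                file_.seekp(0, std::ios::end);
                Entry e;
                e.offset = static_cast<std::uint64_t>(file_.tellp());
                e.length = body.size();
                file_.write(body.data(), static_cast<std::streamsize>(body.size()));
                file_.flush();
                index_[guid] = e;               // in-memory index: GUID -> (offset, length)
            }

            bool Get(const std::string& guid, std::string* body) {
                auto it = index_.find(guid);
                if (it == index_.end()) return false;
                body->resize(it->second.length);
                file_.clear();
                file_.seekg(static_cast<std::streamoff>(it->second.offset));
                file_.read(&(*body)[0], static_cast<std::streamsize>(it->second.length));
                return static_cast<bool>(file_);
            }

        private:
            struct Entry { std::uint64_t offset = 0; std::size_t length = 0; };
            std::fstream file_;
            std::unordered_map<std::string, Entry> index_;
        };

    Since deletions are rare, an occasional compaction pass plus persisting the index (so it can be rebuilt quickly at startup) would round this out, and a small cache can sit in front of Get().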

    Read the article

  • ERROR with Ubuntu: Cannot open the disk 'D:\My Documents\My Virtual Machines\Ubuntu\Ubuntu-1.vmdk' or one of the snapshot disks it depends on

    - by leiyu
    Cannot open the disk 'D:\My Documents\My Virtual Machines\Ubuntu\Ubuntu-1.vmdk' or one of the snapshot disks it depends on. Reason: The physical disk is already in use. ====================== When I powered on my Ubuntu virtual machine in VMware, a window showed up with the words above. I tried removing the old hard disk in settings and creating a new one, but it still does not work. I also tried deleting the .lck files and even the doc. BUT....... Has someone solved this problem? PLEASE do me a favour!! Many thanks!!

    Read the article

  • How do I get the Grub menu back after installing Windows on a separate disk?

    - by Shazzner
    I tried sudo grub-install on sda1, but it complained that it was a BAD IDEA. I had to install Windows for a work-related issue, so I used a separate disk (I had used it for Ubuntu on this computer, but bought a bigger disk, installed Ubuntu on that, and left the old one in, in case I needed an old file). Windows installed fine but overwrote GRUB, so if I choose the Ubuntu disk to boot first in the BIOS I get a blank screen. I googled and followed this advice: https://help.ubuntu.com/community/RecoveringUbuntuAfterInstallingWindows However, when I get down to this section: sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda1 I get this: Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea… --recheck does nothing. Any ideas?

    Read the article

  • Why does the first partition start at sector 34 when I choose "Guided - Use entire disk" during install?

    - by Kent
    After choosing "Guided - Use entire disk" during installation I find that the first partition starts on sector 34. Why that specific sector and not the first one? (parted) print Model: ATA WDC WD30EZRX-00M (scsi) Disk /dev/sda: 5860533168s Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 34s 390659s 390626s fat32 boot 2 390660s 890660s 500001s ext2 3 890661s 5860533118s 5859642458s (parted) In case you prefer bytes as the unit: (parted) unit B (parted) print Model: ATA WDC WD30EZRX-00M (scsi) Disk /dev/sda: 3000592982016B Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 17408B 200017919B 200000512B fat32 boot 2 200017920B 456018431B 256000512B ext2 3 456018432B 3000592956927B 3000136938496B

    Read the article

  • USB disk not recognized after detaching from DVD player. What to do?

    - by MMA
    I had a Transcend 4GB USB stick formatted to NTFS, and it was working fine. Today I inserted this disk into a DVD player, and it said "loading". After a long time, nothing happened, and it seemed that the stick (NTFS) was not recognized by it. I took out the stick and tried to reformat it to FAT32, but the stick is not being recognized on my machine (Ubuntu 12.04). I tried the advice from "USB drive not recognized after Erase Disk", without any success. When I tried Disk Utility, the stick is shown as a generic device (see image). Formatting this device fails, saying "No medium found" (again, see image). gparted does not even list this device, and the same thing happens with fdisk: it is not listed there. Have I totally lost this stick? What should I do?

    Read the article

  • How to use Android's CacheManager?

    - by punnie
    I'm currently developing an Android application that fetches images using HTTP requests. It would be quite swell if I could cache those images in order to improve performance and bandwidth use. I came across the CacheManager class in the Android reference, but I don't really know how to use it, or what it really does. I already looked through this example, but I need some help understanding it: /core/java/android/webkit/gears/ApacheHttpRequestAndroid.java Also, the reference states: "Network requests are provided to this component and if they can not be resolved by the cache, the HTTP headers are attached, as appropriate, to the request for revalidation of content." I'm not sure what this means or how it would work for me, since CacheManager's getCacheFile accepts only a String URL and a Map containing the headers; I'm not sure what the attachment mentioned means. An explanation or a simple code example would really make my day. Thanks! Update: Here's what I have right now. I am clearly doing it wrong, I just don't know where.

        public static Bitmap getRemoteImage(String imageUrl) {
            URL aURL = null;
            URLConnection conn = null;
            Bitmap bmp = null;
            CacheResult cache_result = CacheManager.getCacheFile(imageUrl, new HashMap());
            if (cache_result == null) {
                try {
                    aURL = new URL(imageUrl);
                    conn = aURL.openConnection();
                    conn.connect();
                    InputStream is = conn.getInputStream();
                    cache_result = new CacheManager.CacheResult();
                    copyStream(is, cache_result.getOutputStream());
                    CacheManager.saveCacheFile(imageUrl, cache_result);
                } catch (Exception e) {
                    return null;
                }
            }
            bmp = BitmapFactory.decodeStream(cache_result.getInputStream());
            return bmp;
        }

    Read the article

  • ASP.NET CacheDependency out of ThreadPool

    - by Stephen
    In an async HTTP handler, we add items to the ASP.NET cache with dependencies on some files. If the async method executes on a thread from the ThreadPool, all is fine:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoProcessRequest), result);

    But as soon as we try to execute on a thread outside the ThreadPool:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        Runner runner = new Runner(result);
        Thread thread = new Thread(new ThreadStart(runner.Run));

    ...where Runner.Run just invokes DoProcessRequest, the dependencies trigger right after the thread exits. I.e. the items are immediately removed from the cache, the reason being the dependencies. We want to use an out-of-pool thread because the processing might take a long time, so obviously something's missing when we create the thread. We might need to propagate the call context, the HTTP context... Has anybody already encountered this issue? Note: off-the-shelf custom thread pools probably solve this, and writing our own thread pool is probably a bad idea (think NIH syndrome). Yet I'd like to understand this in detail.

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                      <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
    - predictable ordering of messages (ie, we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message

    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache since message objects need to be copied for each handler to use
    - large overhead for small handlers

    The pseudocode of this approach would be as follows:

        for each message in message_sequence                               <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                                   <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
    - unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

        parallel_for each message in message_sequence                  <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                               <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
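
    A rough sketch of that second approach using TBB's parallel_for_each (the Message type, Handler alias and handler-table lookup below are placeholders, not the asker's actual code): messages are distributed across worker threads, while each message's handlers run in order on the thread that owns it, keeping the message hot in that core's cache.

        // Sketch of approach 2: messages in parallel, handlers per message in sequence.
        #include <tbb/parallel_for_each.h>
        #include <functional>
        #include <unordered_map>
        #include <vector>

        struct Message { int type; /* payload ... */ };
        using Handler = std::function<void(Message&)>;
        using HandlerTable = std::unordered_map<int, std::vector<Handler>>;

        void Dispatch(std::vector<Message>& messages, const HandlerTable& handlers) {
            // The handler table is only read here, so sharing it across workers is safe.
            tbb::parallel_for_each(messages.begin(), messages.end(),
                [&](Message& msg) {
                    auto it = handlers.find(msg.type);
                    if (it == handlers.end()) return;
                    for (const Handler& h : it->second)   // handlers stay sequential per message
                        h(msg);
                });
        }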

    Read the article

  • How do I stop js files being cached in IE?

    - by DoctaJonez
    Hello stackers! I've created a page that uses the CKEditor javascript rich edit control. It's a pretty neat control, especially seeing as it's free, but I'm having serious issues with the way it allows you to add templates. To add a template you need to modify the templates js file in the CKEditor templates folder. The documentation page describing it is here. This works fine until I want to update a template or add a new one (or anything else that requires me to modify the js file). Internet Explorer caches the js file and doesn't pick up the update. Emptying the cache allows the update to be picked up, but this isn't an acceptable solution. Whenever I update a template I do not want to tell all of the users across the organisation to empty their IE cache. There must be a better way! Is there a way to stop IE caching the js file? Or is there another solution to this problem?

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple other tables that define what the question text is for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, its working... The problem is that it seems slow and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out what respondents are being exported and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table, RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application that is running on the site. If we put a few static file maps (for .js, .css, .png, etc) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching the integrated mode in terms of performance? The thing i'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow for the output cache to handle non ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically i've found that the two configurations operate on par when things go well, but if the page references resources that are not available, the integrated mode tends to fail much more quickly than the classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.

    Read the article

  • UIImagePickerController Save to Disk then Load to UIImageView

    - by Harley Gagrow
    Hi, I have a UIImagePickerController that saves the image to disk as a png. When I try to load the PNG and set a UIImageView's imageView.image to the file, it is not displaying. Here is my code: (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage]; NSData *imageData = UIImagePNGRepresentation(image); // Create a file name for the image NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init]; [dateFormatter setTimeStyle:NSDateFormatterShortStyle]; [dateFormatter setDateStyle:NSDateFormatterShortStyle]; NSString *imageName = [NSString stringWithFormat:@"photo-%@.png", [dateFormatter stringFromDate:[NSDate date]]]; [dateFormatter release]; // Find the path to the documents directory NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; // Now we get the full path to the file NSString *fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName]; // Write out the data. [imageData writeToFile:fullPathToFile atomically:NO]; // Set the managedObject's imageLocation attribute and save the managed object context [self.managedObject setValue:fullPathToFile forKey:@"imageLocation"]; NSError *error = nil; [[self.managedObject managedObjectContext] save:&error]; [self dismissModalViewControllerAnimated:YES]; } Then here is how I try to load it: self.imageView.backgroundColor = [UIColor lightGrayColor]; self.imageView.frame = CGRectMake(10, 10, 72, 72); if ([self.managedObject valueForKey:@"imageLocation"] != nil) { NSLog(@"Trying to load the imageView with: %@", [self.managedObject valueForKey:@"imageLocation"]); UIImage *image = [[UIImage alloc] initWithContentsOfFile:[self.managedObject valueForKey:@"imageLocation"]]; self.imageView.image = image; } else { self.imageView.image = [UIImage imageNamed:@"no_picture_taken.png"]; } I get the message that it's trying to load the imageView in the debugger, but the image is never displayed in the imageView. Can anyone tell me what I've got wrong here? Thanks a bunch.

    Read the article

  • GoDaddy Subdomain Hosting Issue/Question with Disk Access (C#/ASP.NET 3.5)

    - by Vogel
    This isn't a very complicated scenario really, but as I start to type out the problem I'm realizing how convoluted it can become textually. Let me try and be very clear: First, the set up... I have a C#/ASP.NET web application that is publicly facing on my main domain (www), let's call it www.mysite.com. Nothing fancy, just a front-end that connects to SQL to display records. Then, I have a second C#/ASP.NET web application that is secured using forms authentication running on a subdomain, let's call it admin.mysite.com. This is a very light-weight CMS system to administer the public site. Now, the problem... Both of these sites run fine for basic tasks, however, my problem arises when I try to gain access to the file system for uploading. GoDaddy requires subdomains to run as a virtual directories under the main application in IIS (so the subdomains actually resolve/re-direct to www.mysite.com/admin when you type in admin.mysite.com), but because of this I am unable to write to my website root from the subfolder. Let me explain a little more... The CMS system (running as a virtual directory) gives the admin the ability to upload photos for display on the main site, the target folder of which is www.mysite.com/images - when attempting disk access from the root app, I am able to write to the virtual directory, but cannot do the opposite -- that is, write to the root from the virtual directory, getting security violations. If I can only upload to the /admin/ virtual directory, the entire point is moot because it's a secured folder that the public can't see! The only solution I can think of is to upload the files to the /admin/ virtual directory, then call a URL in the root that moves files from /admin/ back to the root, but that is entirely ghetto. I hope this post makes sense. Anyone else experience anything like this? The bottom line is that it seems virtual directories ONLY have access to themselves, and not their parent directories, no matter what credentials are used. Thanks!

    Read the article

  • how to save sitecore webpage in html file on local disk or server

    - by Sam
        WebRequest mywebReq;
        WebResponse mywebResp;
        StreamReader sr;
        string strHTML;
        StreamWriter sw;
        mywebReq = WebRequest.Create("http://domain/sitecore/content/test/page10.aspx");
        mywebResp = mywebReq.GetResponse();
        sr = new StreamReader(mywebResp.GetResponseStream());
        strHTML = sr.ReadToEnd();
        sw = File.CreateText(Server.MapPath("hello.html"));
        sw.WriteLine(strHTML);
        sw.Close();

    Hi, I want to save a Sitecore .aspx page as an HTML file on the local disk, but I am getting an exception. If I use any other web page (e.g. google.com) it works fine. The exception:

        System.Net.WebException was caught
        HResult=-2146233079
        Message=The remote server returned an error: (404) Not Found.
        Source=System
        StackTrace:
            at System.Net.WebClient.DownloadFile(Uri address, String fileName)
            at System.Net.WebClient.DownloadFile(String address, String fileName)
            at BlueDiamond.addmodule.btnSubmit(Object sender, EventArgs e) in c:\inetpub\wwwroot\STGLocal\Website\addmodule.aspx.cs:line 97
        InnerException:

    Any help? Thanks in advance.

    Read the article

  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password on a command line to connect to the database. This is a terrible security practice, but we are stuck with it. I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules:

    - The system has multiple users.
    - We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable).
    - I don't mind anyone with superuser access looking at our stored password.

    Here is what I've got so far (sketched in code below):

    - Store the password in password.txt
    - chmod 600 password.txt
    - The process reads from password.txt when it's needed
    - The buffer is overwritten with zeros when it's no longer needed

    Although I'm sure there is a better way.
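
    A minimal C++ sketch of the steps listed above (the file name, buffer size and function names are just placeholders); note that a plain memset can be optimized away, so the wipe is done through a volatile pointer:

        // Sketch only: reads the 0600-permission password file, lets the caller use the
        // password, and scrubs the buffer afterwards. Error handling is minimal on purpose.
        #include <cstddef>
        #include <cstdio>

        static void SecureWipe(char* data, std::size_t len) {
            // Write through a volatile pointer so the compiler cannot elide the wipe.
            volatile char* p = data;
            for (std::size_t i = 0; i < len; ++i) p[i] = 0;
        }

        bool RunJobWithPassword(const char* path /* e.g. "password.txt" */) {
            std::FILE* f = std::fopen(path, "r");
            if (!f) return false;

            char buf[256] = {0};
            std::size_t n = std::fread(buf, 1, sizeof(buf) - 1, f);
            std::fclose(f);
            while (n > 0 && (buf[n - 1] == '\n' || buf[n - 1] == '\r')) buf[--n] = 0;

            // ... build the command line / invoke the connection program here ...

            SecureWipe(buf, sizeof(buf));   // overwrite the password once it is no longer needed
            return true;
        }

    Keep in mind that the password still ends up on the child process's command line, where other local users can see it with ps; that is the part the question already flags as unavoidable.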

    Read the article

  • Cachefilesd (cachefiles) everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. I have seemingly everything set up, however cachefilesd starts normally, however caching isn't functioning. Here is output of commands, which I ran in the same order 1 sudo mount ... cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc) ... 2 lsbmod | grep cachefiles cachefiles 40555 1 fscache 57430 4 nfs,cifs,cachefiles,nfsv4 3 [edited - deleted] 4 uname -r 3.8.0-34-generic 5 grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic CONFIG_NFS_FSCACHE=y 6 lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 13.04 Release: 13.04 Codename: raring 7 sudo service cachefilesd restart * Restarting FilesCache daemon cachefilesd [ OK ] 8 dmesg [6211206.141781] FS-Cache: Withdrawing cache "mycache" [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles) [6211210.135242] CacheFiles: File cache on sdb1 registered [6214644.348929] CacheFiles: File cache on sdb1 unregistering [6214644.348935] FS-Cache: Withdrawing cache "mycache" [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles) [6214654.575915] CacheFiles: File cache on sdb1 registered 9 ps aux | grep cachefilesd root 65399 0.0 0.0 4460 540 ? Ss 23:14 0:00 /sbin/cachefilesd 1000 65464 0.0 0.0 8160 916 pts/0 S+ 23:16 0:00 grep --color=auto cachefilesd finally, biggest problem is 10 cat /proc/fs/nfsfs/volumes NV SERVER PORT DEV FSID FSC v3 64476645 801 0:24 233e020f0da07a93 no tl;dr I think I configured everything properly but FSC mount option + fscachefilesd don't seem to work

    Read the article

  • Chef: nested data bag data to template file returns "can't convert String into Integer"

    - by Dalho Park
    I'm creating a simple test recipe with a template and data bag. What I'm trying to do is create a config file from a data bag that has simple nested information, but I receive the error "can't convert String into Integer". Here are my setting files.

    1) recipe/default.rb

        data1 = data_bag_item( 'mytest', 'qa' )['test']
        data2 = data_bag_item( 'mytest', 'qa' )

        template "/opt/env/test.cfg" do
          source "test.erb"
          action :create_if_missing
          mode 0664
          owner "root"
          group "root"
          variables({
            :pepe1 => data1['part.name'],
            :pepe2 => data2['transport.tcp.ip2']
          })
        end

    2) my data bag named "mytest"

        $ knife data bag show mytest qa
        id:   qa
        test:
          part.name:          L12
          transport.tcp.ip:   111.111.111.111
          transport.tcp.port: 9199
          transport.tcp.ip2:  222.222.222.222

    3) template file test.erb

        part.name=<%= @pepe1 %>
        transport.tcp.binding=<%= @pepe2 %>

    The error returns when I run chef-client on my server:

        [2013-06-24T19:50:38+00:00] DEBUG: filtered backtrace of compile error: /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]', /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file', /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'
        [2013-06-24T19:50:38+00:00] DEBUG: filtered backtrace of compile error: /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]', /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file', /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'
        [2013-06-24T19:50:38+00:00] DEBUG: backtrace entry for compile error: '/var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]''
        [2013-06-24T19:50:38+00:00] DEBUG: Line number of compile error: '19'

        Recipe Compile Error in /var/chef/cache/cookbooks/config_test/recipes/default.rb

        TypeError
        can't convert String into Integer

        Cookbook Trace:
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `[]'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:19:in `block in from_file'
          /var/chef/cache/cookbooks/config_test/recipes/default.rb:12:in `from_file'

        Relevant File Content:
        /var/chef/cache/cookbooks/config_test/recipes/default.rb:

         12: template "/opt/env/test.cfg" do
         13:   source "test.erb"
         14:   action :create_if_missing
         15:   mode 0664
         16:   owner "root"
         17:   group "root"
         18:   variables({
         19>>    :pepe1 => data1['part.name'],
         20:     :pepe2 => data2['transport.tcp.ip2']
         21:   })
         22: end
         23:

    I tried many things, and if I comment out ":pepe1 => data1['part.name']," then :pepe2 => data2['transport.tcp.ip2'] works fine. Only the nested data "part.name" cannot be set to @pepe1. Does anyone know why I receive the errors? Thanks,

    Read the article

  • How to increase hard drive transfer speed

    - by atif089
    This is my motherboard: GA-M61PME-S2 (SATA up to 3.0 Gbps). And this is my hard disk, a Samsung HD502HI:

        Capacity                 500 GB
        Cache                    16 MB
        Disks / Heads            1 / 2
        Interface                SATA 3Gb/s
        Spindle Speed            5400 RPM
        Sustained Data Rate OD   100 MB/s
        Average Seek             8.9 ms
        Average Latency          5.56 ms
        Data Transfer Rate       300 MB/sec
        Weight                   470 grams
        Power: Idle / Seek / R-W / Spin-up    3.9W / 4.8W / 5.1W / ~24W
        Acoustics (sound power)  2.2 / 2.7 Bel (idle / quiet seek / performance seek)

    When I copy things from one partition to another they transfer at a maximum of 30 MBps. However, the drive supports up to 300 MBps, right? How do I increase the transfer speeds? P.S. - Using Windows XP, all partitions are NTFS.

    Read the article

  • OS X won't boot up unless I hold down option key

    - by Gazzer
    I have a strange issue on an early 2008 Mac Pro running OS 10.6:

    - If I restart the computer, it restarts normally.
    - If I shut down and boot, it stops at the grey screen just before the boot process.
    - If I shut down and boot but hold down the option key, I can select the boot disk and all is good.

    I've just cloned the disk, and the same thing happens. The disk is a SAMSUNG HD154UI. The disk is partitioned (the second partition holds a clone of the Snow Leopard install disk). One weird thing on the original disk was that one of the partitions said 'EFI Boot' in a non-aliased font rather than the name of the disk when the disks are listed upon holding down option.

    Solution: it seems that there was a problem with the disk. Part of the difficulty in finding the solution was that you need to remove the disk from the computer completely. For example, a good disk in Bay 3 wouldn't boot up if the bad disk was in Bay 2, so for ages I thought the problem was hardware related in Bay 3. So if you think you have a dodgy disk, remove it totally if you are testing the hardware with a 'clean' disk.

    Read the article

  • Mysterious "media" volume mounted on desktop Mac OS X

    - by Allen
    I have a mysterious volume mounted on my desktop that I can't seem to forcibly unmount. I've tried using umount and also diskutil, but it seems to automatically remount itself. I've copied my hdd with Time Machine, and copied it onto a new computer, and it also has the drive mounted on it. It's not pointing to anything and I can't open it, nor can I forcibily remove it by hand with rm -Rf. Any ideas? I noticed this problem after I upgraded to Mountain Lion from Lion. It causes problems because when I try to select a file using the built in Finder dialog box, it freezes for a few minutes because it tries to cache or read into the "media" mounted volume.

    Read the article

  • Storage sizing for virtual machines

    - by njo
    I am currently doing research to determine the consolidation ratio my company could expect should we start using a virtualization platform. I find myself continually running into a dead end when researching how to translate observed performance (weeks of perfmon data) to hdd array requirements for a virtualization server. I am familiar with the concept of IOPs, but they seem to be an overly simplistic measurement that fails to take into account cache, write combining, etc. Is there a seminal work on storage array performance analysis that I'm missing? This seems like an area where hearsay and 'black magic' have taken over for cold, hard fact.
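
    For context, a common first-pass sizing rule of thumb based on IOPS looks like the following; this is generic background rather than anything from the question, the example figures are assumed, and cache and write coalescing only improve on this worst-case baseline:

        required IOPS = read IOPS + (write IOPS x RAID write penalty)
        spindles     ~= required IOPS / IOPS per disk

        Example (assumed numbers): 600 reads/s + 200 writes/s on RAID 10 (write penalty 2),
        with 15k RPM disks at roughly 175 IOPS each:
            600 + (200 x 2) = 1000 IOPS  ->  1000 / 175 ~= 6 disks before cache effects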

    Read the article

  • body onload cache not clearing

    - by Mad Cow
    I'm using an image swapping function generated by Dreamweaver to allow an image to change when moused over. The images are small. I have a problem because the images are getting stored in cache and without clearing it out I cant get the new images to show. It works on some browsers, but unfortunately not on all... I've read about putting "a random query" into the javascript to force the page to reload, but I dont know where to put it (the code was generated for me by dreamweaver). A subset of my code is : <script type="text/javascript"> function MM_preloadImages() { //v3.0 var d=document; if(d.images){ if(!d.MM_p) d.MM_p=new Array(); var i,j=d.MM_p.length,a=MM_preloadImages.arguments; for(i=0; i<a.length; i++) if (a[i].indexOf("#")!=0){ d.MM_p[j]=new Image; d.MM_p[j++].src=a[i];}} } function MM_swapImgRestore() { //v3.0 var i,x,a=document.MM_sr; for(i=0;a&&i<a.length&&(x=a[i])&&x.oSrc;i++) x.src=x.oSrc; } function MM_findObj(n, d) { //v4.01 var p,i,x; if(!d) d=document; if((p=n.indexOf("?"))>0&&parent.frames.length) { d=parent.frames[n.substring(p+1)].document; n=n.substring(0,p);} if(!(x=d[n])&&d.all) x=d.all[n]; for (i=0;!x&&i<d.forms.length;i++) x=d.forms[i][n]; for(i=0;!x&&d.layers&&i<d.layers.length;i++) x=MM_findObj(n,d.layers[i].document); if(!x && d.getElementById) x=d.getElementById(n); return x; } function MM_swapImage() { //v3.0 var i,j=0,x,a=MM_swapImage.arguments; document.MM_sr=new Array; for(i=0;i<(a.length-2);i+=3) if ((x=MM_findObj(a[i]))!=null){document.MM_sr[j++]=x; if(!x.oSrc) x.oSrc=x.src; x.src=a[i+2];} } </script> </head> <body onload="MM_preloadImages('../images/navigation/social-about-us-over.jpg','../images/navigation/social-about-us.jpg','../images/navigation/social-activities-over.jpg','../images/navigation/social-ourservices-over.jpg','../images/navigation/social-howwework-over.jpg','../images/navigation/social-fundraising-over.jpg','../images/navigation/social-howtohelp-over.jpg','../images/navigation/social-contactus-over.jpg')"> My website is http://www.clockhouse.org.uk/ I'm sure there is a better way i could have written this, but if anyone can help me fix this code I'd be very grateful Many thanks

    Read the article

  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class), which has zero or one parent Category and many child Categories -- it's a tree structure. The Category data is stored in an RDBMS, so for better performance I want to load all categories and cache them in memory while launching the application. Our system can have plugins, and we allow the plugin authors to access the Category Tree, but they should not modify the cached items or the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree. Here is some demo code:

        public interface ITreeNode<T> where T : ITreeNode<T>
        {
            // No setter
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        // This class is generated by an O/R mapping tool (e.g. Entity Framework)
        public class Category : EntityObject
        {
            public string Name { get; set; }
        }

        // Because Category is not stateless, I create a cleaner view class for Category.
        // This class is the node type of the Category Tree.
        public class CategoryView : ITreeNode<CategoryView>
        {
            public string Name { get; private set; }

            #region ITreeNode Members
            public CategoryView Parent { get; private set; }

            private List<CategoryView> _childNodes;
            public IEnumerable<CategoryView> ChildNodes { get { return _childNodes; } }
            #endregion

            public static CategoryView CreateFrom(Category category)
            {
                // here I can set the CategoryView.Name property
            }
        }

    So far so good. However, I want to make the ITreeNode interface reusable, and for some other types the tree should not be readonly. We are not able to do this with the above readonly ITreeNode, so I want ITreeNode to be like this:

        public interface ITreeNode<T>
        {
            // has setter
            T Parent { get; set; }
            // use ICollection<T> instead of IEnumerable<T>
            ICollection<T> ChildNodes { get; }
        }

    But if we make ITreeNode writable, then we cannot make the Category Tree readonly, which is not good. So I think we can do it like this:

        public interface ITreeNode<T>
        {
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        public interface IWritableTreeNode<T> : ITreeNode<T>
        {
            new T Parent { get; set; }
            new ICollection<T> ChildNodes { get; }
        }

    Is this good or bad? Are there better designs? Thanks a lot! :)

    Read the article

  • Sening a file from memory (rather than disk) over HTTP using libcurl

    - by cinek1lol
    Hi! I would like to send pictures via a program written in C++. This works:

        WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL);

    but I would like to send the pictures from a pre-loaded buffer, i.e. a char variable (you know what I mean? First I load the picture into a variable and then send the variable), because right now I have to specify the path of the picture on disk. I want to write this program in C++ using the curl library, not by shelling out to the .exe. I have also found such a program (which has been modified by me a bit):

        #include <stdio.h>
        #include <string.h>
        #include <iostream>
        #include <curl/curl.h>
        #include <curl/types.h>
        #include <curl/easy.h>

        int main(int argc, char *argv[])
        {
            CURL *curl;
            CURLcode res;
            struct curl_httppost *formpost = NULL;
            struct curl_httppost *lastptr = NULL;
            struct curl_slist *headerlist = NULL;
            static const char buf[] = "Expect:";

            curl_global_init(CURL_GLOBAL_ALL);

            /* Fill in the file upload field */
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "send",
                         CURLFORM_FILE, "nowy.jpg",
                         CURLFORM_END);
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "nowy.jpg",
                         CURLFORM_COPYCONTENTS, "nowy.jpg",
                         CURLFORM_END);
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "submit",
                         CURLFORM_COPYCONTENTS, "send",
                         CURLFORM_END);

            curl = curl_easy_init();
            headerlist = curl_slist_append(headerlist, buf);
            if (curl) {
                curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php");
                if ((argc == 2) && (!strcmp(argv[1], "xml=yes")))
                    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist);
                curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
                res = curl_easy_perform(curl);
                curl_easy_cleanup(curl);
                curl_formfree(formpost);
                curl_slist_free_all(headerlist);
            }
            system("pause");
            return 0;
        }
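
    For the in-memory case, libcurl's form API can take the part's data from a buffer instead of a file via CURLFORM_BUFFER / CURLFORM_BUFFERPTR / CURLFORM_BUFFERLENGTH. A minimal sketch follows; the field name ("fileupload") and URL mirror the command line quoted above, while the function name and the way the image bytes are obtained are assumptions:

        // Sketch: upload image bytes already held in memory instead of naming a file on disk.
        // Assumes curl_global_init(CURL_GLOBAL_ALL) was already called once, as in the program above.
        #include <curl/curl.h>
        #include <vector>

        bool PostImageFromMemory(const std::vector<char>& imageBytes) {
            CURL* curl = curl_easy_init();
            if (!curl) return false;

            curl_httppost* formpost = NULL;
            curl_httppost* lastptr = NULL;

            // CURLFORM_BUFFER supplies a file name for the part; CURLFORM_BUFFERPTR and
            // CURLFORM_BUFFERLENGTH point at the in-memory data. The buffer must stay
            // alive until curl_easy_perform() returns.
            curl_formadd(&formpost, &lastptr,
                         CURLFORM_COPYNAME, "fileupload",
                         CURLFORM_BUFFER, "ok.jpg",
                         CURLFORM_BUFFERPTR, imageBytes.data(),
                         CURLFORM_BUFFERLENGTH, (long)imageBytes.size(),
                         CURLFORM_END);

            curl_slist* headers = curl_slist_append(NULL, "Expect:");
            curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php");
            curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
            curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);

            CURLcode res = curl_easy_perform(curl);

            curl_easy_cleanup(curl);
            curl_formfree(formpost);
            curl_slist_free_all(headers);
            return res == CURLE_OK;
        }

    Newer libcurl versions prefer the curl_mime API over curl_formadd, but the form API above matches the program in the question.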

    Read the article
