Search Results

Search found 14435 results on 578 pages for 'disk usage'.

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) with priority given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once on a PHP file containing a var_export'd array of data? My tests are inconclusive, as my development box (XAMPP with PHP 5.3) keeps throwing errors when I install any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of all files being loaded up front. Mostly each has a require before it, so no autoloading needs to take place; however, this is not the question. Because a page script can have up to 50-60 helper files included, I have a feeling that under pressure the site would buckle because of all the I/O this incurs. Ignore for the moment that the output cache in place would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried is joining all the helper files required for a script's execution into one single file. This is achievable and works well; however, it has the side effect of increasing memory usage dramatically, even though technically the same code is being used. What are your thoughts and opinions on this?
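
    To make the two-tier lookup in Q1 concrete, here is a minimal sketch of the pattern in Python terms rather than PHP (the class and file layout are illustrative, not from the post): check the fast in-process store first, fall back to a file holding a literal dump of the data (the analogue of require_once on a var_export'd array), and promote hits into memory.

        import ast, os

        class TwoTierCache:
            def __init__(self, cache_dir):
                self.memory = {}            # tier 1: fast, in-process
                self.cache_dir = cache_dir  # tier 2: slow, one file per key

            def get(self, key):
                if key in self.memory:                    # tier-1 hit: no disk I/O
                    return self.memory[key]
                path = os.path.join(self.cache_dir, key + '.cache')
                if os.path.exists(path):                  # tier-2 hit: one file read
                    with open(path) as fh:
                        value = ast.literal_eval(fh.read())
                    self.memory[key] = value              # promote to tier 1
                    return value
                return None

            def put(self, key, value):
                self.memory[key] = value
                with open(os.path.join(self.cache_dir, key + '.cache'), 'w') as fh:
                    fh.write(repr(value))                 # a literal, like var_export

    Whether tier 1 beats tier 2 is exactly the Q1 benchmark question: the file route pays for a stat, a read and a parse on every miss, while the memory route pays only a hash lookup.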

  • Error // Usage: rails new APP_PATH [options] // when running 'rails server'

    - by madphill
    Background info: I'm using Git to fetch a repository of a project with Ruby files in it. The project lives in my Sites folder under my home directory on my Mac. I have Ruby 1.8.7 and have just upgraded Rails to 3.0.3. All I am trying to accomplish is to be able to open localhost:3000 in my browser for the Git project I've already downloaded, so I can work on it locally. I ran the command 'rails server' and was returned the message below:

        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]        # Don't create a Gemfile
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -G, [--skip-git]            # Skip Git ignores and keeps
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -f, [--force]    # Overwrite files that already exist
          -s, [--skip]     # Skip files that already exist
          -p, [--pretend]  # Run but do not make any changes
          -q, [--quiet]    # Supress status output

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
          The 'rails new' command creates a new Rails application with a default directory structure and configuration at the path you specify.

        Example:
          rails new ~/Code/Ruby/weblog

          This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going.

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting, with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the date of the backup, upload to S3 using the Python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup), it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down.

    So this is the question: is there any way to not load the whole file into memory, or are there any other Python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be quit:

        filedata = open(filename, 'rb').read()
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME,
                                  'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read',
                                   'Content-Type': content_type})
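
    A sketch of the split-and-upload route, assuming the boto library and its S3 multipart-upload support (a different library from Amazon's sample S3 module used above; bucket, key and file names are placeholders). Only one fixed-size part is ever held in memory at a time, so the peak stays near the part size instead of the archive size.

        import math, os
        import boto  # assumes AWS credentials in the environment or boto config

        PART_SIZE = 25 * 1024 * 1024  # 25MB parts keep the peak well under 60MB

        def upload_in_parts(bucket_name, key_name, filename):
            conn = boto.connect_s3()
            bucket = conn.get_bucket(bucket_name)
            mp = bucket.initiate_multipart_upload(key_name)
            try:
                total = os.path.getsize(filename)
                part_count = int(math.ceil(total / float(PART_SIZE)))
                with open(filename, 'rb') as fh:
                    for part_num in range(1, part_count + 1):
                        remaining = total - (part_num - 1) * PART_SIZE
                        # boto reads only `size` bytes from fh for this part
                        mp.upload_part_from_file(fh, part_num,
                                                 size=min(PART_SIZE, remaining))
                mp.complete_upload()
            except Exception:
                mp.cancel_upload()  # avoid leaving billable orphaned parts
                raise

        upload_in_parts('my-backup-bucket', 'daily/backup.tar.gz', 'backup.tar.gz')

    Note that S3 requires every part except the last to be at least 5MB, so the part size can be tuned anywhere between 5MB and the memory budget.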

  • I'm stumped: why is UIImage/Texture2D memory not being freed?

    - by howsyourface
    I've been looking everywhere trying to find a solution to this problem. Nothing seems to help. I've set up this basic test to try to find the cause of why my memory wasn't being freed up:

        if (texture != nil) {
            [texture release];
            texture = nil;
        } else {
            UIImage* ui = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"]];
            texture = [[Texture2D alloc] initWithImage:ui];
        }

    I place this in touchesBegan and test by monitoring the memory usage using Instruments. At the start it is normally 11.5-12MB. After the first touch, with no object existing, the texture is created and memory jumps to 13.5-14MB. However, after the second touch the memory does decrease, but only to around 12.5-13MB. There is a noticeable chunk of memory still occupied. I tested this on a much larger scale, loading 10 of these large textures at a time. The memory jumps to over 30MB and remains there, but on the second touch, after releasing the textures, it only falls to around 22MB. I tried the test another time, loading the images with [UIImage imageNamed:], but because of the caching this method performs, the full 30MB simply remains in memory.

  • Comet, responseText and memory usage

    - by ithcy
    Is there a way to clear out the responseText of an XHR object without destroying the XHR object? I need to keep a persistent connection open to a web server to feed live data to a browser. The problem is, there is a relatively large amount of data coming through (several hundred K per second constantly), so memory usage is a big problem, because this connection must remain open for at least several minutes. responseText gets very big very quickly, even though the JSON I send back has been crunched as small as it can get.

    Due to the way the server-side app works, if I use AJAX-style short polling and just destroy the XHR object when I'm done with it, I miss significant amounts of important data even in the few milliseconds it takes to parse the response, create a new XHR and send it out. I do not have the option to use overlapping requests, as the web server only accepts one connection at a time. (Don't ask.) So Comet is exactly the model I need.

    What I would like to do is parse each JSON chunk as it comes back from the server, and then clear out responseText so that I can keep using the same connection. However, responseText is read-only. It cannot be directly emptied by any method I have found. Is there a part of the picture I am missing here? Does anyone know any tricks I can use to free up responseText when I'm done reading it? Or is there another place the server responses can go? I am not including code because this is really almost a code-agnostic question. The Javascript routines that spawn the XHRs and handle the returned data are very, very simple.

  • C# WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    I am developing a WPF application, and a client reports extremely high CPU usage (90%), whereas I am unable to reproduce that behavior. I have traced the bottleneck down to these lines. It is a simple glowing animation for a small single-LED control (a blinking LED). What could be the reason for this simple animation consuming so much CPU?

        <Trigger Property="State">
          <Trigger.Value>
            <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus>
          </Trigger.Value>
          <Trigger.EnterActions>
            <BeginStoryboard Name="beginStoryBoard">
              <Storyboard>
                <DoubleAnimation Storyboard.TargetName="glow"
                                 Storyboard.TargetProperty="Opacity"
                                 AutoReverse="True"
                                 From="0.0" To="1.0"
                                 Duration="0:0:0.5"
                                 RepeatBehavior="Forever"/>
              </Storyboard>
            </BeginStoryboard>
          </Trigger.EnterActions>
          <Trigger.ExitActions>
            <StopStoryboard BeginStoryboardName="beginStoryBoard"/>
          </Trigger.ExitActions>
        </Trigger>

  • Proper Usage of SqlConnection in .NET

    - by Jojo
    Hi guys, I just want an opinion on the proper usage of, or a proper design with regard to, the SqlConnection object. Which of the two below is the better use?

    A data provider class whose methods each contain a SqlConnection object (disposed when done):

        IList<Employee> GetAllEmployees()
        {
            using (SqlConnection connection = new SqlConnection(this.connectionString))
            {
                // Code goes here...
            }
        }

        Employee GetEmployee(int id)
        {
            using (SqlConnection connection = new SqlConnection(this.connectionString))
            {
                // Code goes here...
            }
        }

    or a shared connection:

        SqlConnection connection; // initialized in constructor

        IList<Employee> GetAllEmployees()
        {
            this.TryOpenConnection(); // tries to open member SqlConnection instance
            // Code goes here...
            this.CloseConnection();
            // return
        }

        Employee GetEmployee(int id)
        {
            this.TryOpenConnection(); // tries to open member SqlConnection instance
            // Code goes here...
            this.CloseConnection();
            // return
        }

    Or is there a better approach than this? I have a focused web-crawler type of application that will crawl 50 or more websites simultaneously (multithreaded), with each website contained in a crawler object and each crawler object holding an instance of the data provider class above. Please advise. Thanks.
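
    For what it's worth, the first pattern is the shape most stacks settle on, because connection pooling underneath the API makes open-per-call cheap, and it avoids sharing one connection object across crawler threads. A sketch of that shape in Python with sqlite3 (table name and path are placeholders, and sqlite3 itself does not pool the way ADO.NET does; the point is only the per-call open/close idiom):

        import contextlib, sqlite3

        DB_PATH = 'app.db'  # placeholder

        def get_employee(employee_id):
            # A fresh connection per call, deterministically closed on exit,
            # mirroring the using(...) block in option 1.
            with contextlib.closing(sqlite3.connect(DB_PATH)) as conn:
                return conn.execute(
                    'SELECT id, name FROM employees WHERE id = ?',
                    (employee_id,)).fetchone()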

  • Dynamic table memory usage

    - by Dan
    I use a dynamic table:

        <html>
        <body>
        <button id="button">Build table</button>
        <div id="container">
        <script type="text/javascript">
        window.onload = function() {
            var table = null;
            var row = "<tr><td>111111111111111111111111111111111111111111111111111111</td>" +
                      "<td>222222222222222222222222222222222222222222222222222222</td>" +
                      "<td>333333333333333333333333333333333333333333333333333333</td></tr>";
            var data = null;
            for (var i = 0; i < 2000; i++) {
                data += row;
            }
            var obj = document.getElementById("button");
            obj.onclick = function buildTable() {
                document.getElementById("container").innerHTML =
                    "<div><table><tbody>" + data + "</tbody></table></div>";
            };
        };
        </script>
        </body>
        </html>

    Using Chrome's task manager, each time new data is loaded the memory usage increases considerably and doesn't go down, so after some time the app consumes a lot of memory and requires the browser to be closed. Is there any change I can make to the code to solve this, or is it a browser-side problem?

  • Multiple usage of MenuItems declared once (WPF)

    - by Alex Kofman
    Is it possible in WPF to define a menu structure once and then use it in multiple contexts? For example, I'd like to use a set of menu items from resources in a ContextMenu, in a Window's menu, and in a ToolBar (the ToolBar with icons only, without headers), so that item order, commands, icons and separators are defined just once. I am looking for something like this:

    Declaration in resources:

        <MenuItem Command="MyCommands.CloneObject" CommandParameter="{Binding SelectedObject}" Header="Clone">
          <MenuItem.Icon>
            <Image Source="Images\Clone.png" Height="16" Width="16"/>
          </MenuItem.Icon>
        </MenuItem>
        <MenuItem Command="MyCommands.RemoveCommand" CommandParameter="{Binding SelectedObject}" Header="Remove">
          <MenuItem.Icon>
            <Image Source="Images\Remove.png" Height="16" Width="16"/>
          </MenuItem.Icon>
        </MenuItem>
        <Separator/>
        <MenuItem Command="MCommands.CreateChild" CommandParameter="{Binding SelectedObject}" Header="Create child">
          <MenuItem.Icon>
            <Image Source="Images\Child.png" Height="16" Width="16"/>
          </MenuItem.Icon>
        </MenuItem>

    Usage:

        <ToolBar MenuItems(?)="{Reference to set of items}" ShowText(?)="false" />

    and

        <ContextMenu MenuItems(?)="{Reference to set of items}" />

  • When are predicates appropriate and what is the best pattern for usage

    - by Maxim Gershkovich
    When are predicates appropriate, and what is the best pattern for their usage? What are the advantages of predicates? It seems to me that in most cases where a predicate can be employed, a tight loop would accomplish the same functionality. I don't see a reusability argument, given that you will probably only implement a predicate in one method, right? They look and feel nice, but beyond that it seems like you would only employ them when you need a quick hack on the collection classes?

    UPDATE: But why would you be rewriting the tight loop again and again? In my mind/code, when it comes to collections I always end up with something like:

        Class Person
        End Class

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As Person
                ' tight loop....
            End Function
        End Class

    @Ani: By that same logic I could implement the methods as such:

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As PersonList
            End Function

            Function FindByAge(Age) As PersonList
            End Function

            Function FindBySocialSecurityNumber(SocialSecurityNumber) As PersonList
            End Function
        End Class

    and call them as such:

        Dim res As PersonList = MyList.FindByName("Max").FindByAge(25).FindBySocialSecurityNumber(1234)

    and the result, along with the amount of code and its reusability, is largely the same, no? I am not arguing, just trying to understand.
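
    For contrast, here is the predicate-style counterargument in a Python sketch (names are illustrative, not from the thread): the "tight loop" is written exactly once, in one generic finder, and every query becomes a one-off condition, including ad-hoc combinations that would otherwise each need their own FindByX method.

        class Person:
            def __init__(self, name, age, ssn):
                self.name, self.age, self.ssn = name, age, ssn

        # The one and only loop; callers supply the condition.
        def find(people, predicate):
            return [p for p in people if predicate(p)]

        people = [Person('Max', 25, 1234), Person('Ann', 30, 5678)]

        # No FindByName/FindByAge/FindBySocialSecurityNumber methods needed:
        result = find(people, lambda p: p.name == 'Max' and p.age == 25)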

  • Inheritance and usage of dynamic_cast

    - by Mewzer
    Hello, suppose I have three classes as follows (as this is an example, it will not compile!):

        class Base
        {
        public:
            Base(){}
            virtual ~Base(){}
            virtual void DoSomething() = 0;
            virtual void DoSomethingElse() = 0;
        };

        class Derived1
        {
        public:
            Derived1(){}
            virtual ~Derived1(){}
            virtual void DoSomething(){ ... }
            virtual void DoSomethingElse(){ ... }
            virtual void SpecialD1DoSomething(){ ... }
        };

        class Derived2
        {
        public:
            Derived2(){}
            virtual ~Derived2(){}
            virtual void DoSomething(){ ... }
            virtual void DoSomethingElse(){ ... }
            virtual void SpecialD2DoSomething(){ ... }
        };

    I want to create an instance of Derived1 or Derived2 depending on some setting that is not available until run-time. As I cannot determine the derived type until run-time, do you think the following is bad practice?

        class X
        {
        public:
            ....
            void GetConfigurationValue()
            {
                ....
                // Get configuration setting, I need a "Derived1"
                b = new Derived1();
                // Now I want to call the special DoSomething for Derived1
                (dynamic_cast<Derived1*>(b))->SpecialD1DoSomething();
            }

        private:
            Base* b;
        };

    I have generally read that usage of dynamic_cast is bad, but as I said, I don't know which type to create until run-time. Please help!

  • FileConnection Blackberry memory usage

    - by Dean
    Hello, I'm writing a BlackBerry application that reads ints and strings out of a database. This is my first time dealing with reading/writing on the BlackBerry, so forgive me if this is a dumb question. The database file I'm reading is only about 4kB. I open the file with the following code:

        fconn = (FileConnection) Connector.open("file_path_here", Connector.READ);
        if (fconn.exists() == false) {
            fconn.close();
            return;
        }
        is = fconn.openDataInputStream();
        while (!eof) {
            // etc...
        }
        is.close();
        fconn.close();

    The problem is, this code appears to be eating a LOT of memory. Using breakpoints and the "Memory Statistics" view, I determined the following:

        calling Connector.open creates 71 objects and changes "RAM Bytes in use" by 5376
        calling fconn.openDataInputStream() increases RAM usage by a whopping 75920

    Is this normal? Or am I doing something wrong? And how can I fix this? 75KB of RAM is a LOT of memory to waste on a handheld device, especially when the file I'm reading is only 4kB and I haven't even begun reading any data! How is this even possible?

  • gcc compilations (sometimes) result in cpu underload

    - by confusedCoder
    I have a larger C++ program which starts by reading thousands of small text files into memory and storing the data in STL containers. This takes about a minute. Periodically, a compilation will exhibit behavior where that initial part of the program runs at about 22-23% CPU load. Once that step is over, it goes back to ~100% CPU. It is more likely to happen with the -O2 flag turned on, but not consistently. It happens even less often with the -p flag, which makes it almost impossible to profile. I did capture it once, but the gprof output wasn't helpful: everything runs at the same relative speed, just at low CPU usage.

    I am quite certain that this has nothing to do with multiple cores. I do have a quad-core CPU, and most of the code is multi-threaded, but I tested this issue running a single thread. Also, when I run the problematic step in multiple threads, each thread only runs at ~20% CPU. I apologize ahead of time for the vagueness of the question, but I have run out of ideas as to how to troubleshoot it further, so any hints might be helpful.

    UPDATE: Just to make sure it's clear, the problematic part of the code does sometimes (~30-40% of the compilations) run at 100% CPU, so it's hard to buy the (otherwise reasonable) argument that I/O is the bottleneck.
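
    One cheap way to pin down where the missing CPU time goes (a diagnostic suggestion, not from the post): compare process CPU time against wall-clock time around the suspect phase. A ratio near 1.0 means the phase is genuinely CPU-bound; a ratio around 0.22 means the process spends most of its time waiting (disk I/O, page faults, lock or allocator contention). A sketch of the measurement, in Python for brevity:

        import time

        def measure(phase):
            """Run `phase` and report how much of the elapsed time was on-CPU."""
            wall0 = time.perf_counter()   # wall-clock seconds
            cpu0 = time.process_time()    # CPU seconds consumed by this process
            phase()
            wall = time.perf_counter() - wall0
            cpu = time.process_time() - cpu0
            print('CPU/wall ratio: %.2f (%.2fs CPU over %.2fs elapsed)'
                  % (cpu / wall, cpu, wall))

        measure(lambda: sum(i * i for i in range(10 ** 7)))  # stand-in workload

    The same comparison works directly on the C++ binary by running it under /usr/bin/time and comparing user+sys against real.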

  • Fatal error: Allowed memory size exhausted...

    - by Nano HE
    Hi, I uploaded my PHP testing script to an online VPS server just now. The script parses a fairly large XML file (about 4MB, 7000 lines), but my browser shows the error message below:

        Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to allocate 77 bytes) in /var/www/test/result/index.php on line 26

    I am sure I already tested the PHP script on localhost successfully. Is there any configuration that needs to be enabled/modified on my VPS, such as php.ini or some setting for the Apache server? I just verified that about 200MB of memory is available for my VPS. How can I fix this?

        ......
        function startElementHandler($parser, $name, $attrib) {
            global $usercount;
            global $userdata;
            global $state; // Line #26
            // Debug
            // print "name is: ".$name."\n";
            switch ($name) {
                case $name == "_ID": {
                    $userdata[$usercount]["first"] = $attrib["FIRST"];
                    $userdata[$usercount]["last"]  = $attrib["LAST"];
                    $userdata[$usercount]["nick"]  = $attrib["NICK"];
                    $userdata[$usercount]["title"] = $attrib["TITLE"];
                    break;
                }
                ......
                default: { $state = $name; break; }
            }
        }

  • PHP memory: how much is too much?

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need; I've no need for something like Zend or CakePHP). I've done a lot of work in making sure everything is cached properly, caching pages in files to avoid SQL queries and generally limiting the number of SQL queries. Overall it looks very speedy. The average time taken for the front page (taken over 100 runs) is 0.046152 seconds.

    But one thing I'm not sure about is whether I've done enough to reduce PHP memory usage. The only time I've ever encountered problems with it is when uploading large files. Using memory_get_peak_usage(TRUE), which I THINK returns the highest amount of memory used while the script has been running, the average (taken over 100 runs) is 1572864 bytes. Is that good?

    I realise you don't know what it is I'm doing (it's rather simple: get the 10 latest articles, the comment count for each, the user controls, popular tags in the sidebar, etc.). But would you be at all worried about a script using that sort of memory getting hit 50,000 times a day? Or once every second at peak times? I realise this is a very open-ended question. Hopefully you can understand that it's a bit of a stab in the dark, and I'm really just looking for some reassurance that it's not going to die horribly come relaunch day.

  • Incorrect usage of UPDATE and ORDER BY

    - by nico55555
    I have written some code to update certain rows of a table with a decreasing sequence of numbers. To select the correct rows I have to JOIN two tables. The last row in the table needs to have a value of 0, the second last -1, and so on. To achieve this I use ORDER BY ... DESC. Unfortunately my code brings up the following error:

        Incorrect usage of UPDATE and ORDER BY

    My reading suggests that I can't use UPDATE, JOIN and ORDER BY together. I've read that maybe subqueries might help, but I don't really have any idea how to change my code to do this. Perhaps someone could post a modified version that will work?

        while ($row = mysql_fetch_array($result)) {
            $products_id = $row['products_id'];
            $products_stock_attributes = $row['products_stock_attributes'];

            mysql_query("SET @i = 0");
            $result2 = mysql_query("UPDATE orders_products op, orders ord
                SET op.stock_when_purchased = (@i := (@i - op.products_quantity))
                WHERE op.orders_id = ord.orders_id
                AND op.products_id = '$products_id'
                AND op.products_stock_attributes = '$products_stock_attributes'
                AND op.stock_when_purchased < 0
                AND ord.orders_status = 2
                ORDER BY orders_products_id DESC") or die(mysql_error());
        }

  • ERROR with Ubuntu: Cannot open the disk 'D:\My Documents\My Virtual Machines\Ubuntu\Ubuntu-1.vmdk' or one of the snapshot disks it depends on

    - by leiyu
    Cannot open the disk 'D:\My Documents\My Virtual Machines\Ubuntu\Ubuntu-1.vmdk' or one of the snapshot disks it depends on.
        Reason: The physical disk is already in use.

    When I powered on my Ubuntu virtual machine in VMware, a window showed up with the words above. I tried to remove the old hard disk in the settings and created a new one, but it still does not work. Also, I tried to delete the .lck files and even the doc, but no luck. Has someone solved this problem? Please do me a favour! Many thanks!

  • How do I get the Grub menu back after installing Windows on a separate disk?

    - by Shazzner
    I tried sudo grub-install on sda1, but it complained about being a BAD IDEA. I had to install Windows for a work-related issue, so I used a separate disk (I had used it for Ubuntu on this computer, but bought a bigger disk, installed Ubuntu on that, and left the old one in, in case I needed an old file). Windows installed fine but overwrote GRUB, so if I choose the Ubuntu disk to boot first in the BIOS I get a blank screen. I googled and followed this advice: https://help.ubuntu.com/community/RecoveringUbuntuAfterInstallingWindows

    However, when I get down to this section:

        sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda1

    I get this:

        Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea…

    --recheck does nothing. Any ideas?

  • Why does the first partition start at sector 34 when I choose "Guided - Use entire disk" during install?

    - by Kent
    After choosing "Guided - Use entire disk" during installation I find that the first partition starts on sector 34. Why that specific sector and not the first one? (parted) print Model: ATA WDC WD30EZRX-00M (scsi) Disk /dev/sda: 5860533168s Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 34s 390659s 390626s fat32 boot 2 390660s 890660s 500001s ext2 3 890661s 5860533118s 5859642458s (parted) In case you prefer bytes as the unit: (parted) unit B (parted) print Model: ATA WDC WD30EZRX-00M (scsi) Disk /dev/sda: 3000592982016B Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 17408B 200017919B 200000512B fat32 boot 2 200017920B 456018431B 256000512B ext2 3 456018432B 3000592956927B 3000136938496B

  • USB disk not recognized after detaching from DVD player. What to do?

    - by MMA
    I had a Transcend 4GB USB stick formatted as NTFS that was working fine. Today I inserted this stick into a DVD player, and it said "loading". After a long time nothing happened; it seemed the NTFS stick was not recognized by it. I took out the stick and tried to reformat it to FAT32, but now the stick is not being recognized on my machine (Ubuntu 12.04). I tried the advice from "USB drive not recognized after Erase Disk", without any success. When I try Disk Utility, the stick is indicated as a generic device, and formatting the device fails, saying "No medium found". gparted does not even list this device. The same thing happens with fdisk; it is not listed there. Have I totally lost this stick? What should I do?

  • UIImagePickerController Save to Disk then Load to UIImageView

    - by Harley Gagrow
    Hi, I have a UIImagePickerController that saves the image to disk as a PNG. When I try to load the PNG and set a UIImageView's imageView.image to the file, it is not displaying. Here is my code:

        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
        {
            UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
            NSData *imageData = UIImagePNGRepresentation(image);

            // Create a file name for the image
            NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
            [dateFormatter setTimeStyle:NSDateFormatterShortStyle];
            [dateFormatter setDateStyle:NSDateFormatterShortStyle];
            NSString *imageName = [NSString stringWithFormat:@"photo-%@.png", [dateFormatter stringFromDate:[NSDate date]]];
            [dateFormatter release];

            // Find the path to the documents directory
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];

            // Now we get the full path to the file
            NSString *fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName];

            // Write out the data.
            [imageData writeToFile:fullPathToFile atomically:NO];

            // Set the managedObject's imageLocation attribute and save the managed object context
            [self.managedObject setValue:fullPathToFile forKey:@"imageLocation"];
            NSError *error = nil;
            [[self.managedObject managedObjectContext] save:&error];

            [self dismissModalViewControllerAnimated:YES];
        }

    Then here is how I try to load it:

        self.imageView.backgroundColor = [UIColor lightGrayColor];
        self.imageView.frame = CGRectMake(10, 10, 72, 72);
        if ([self.managedObject valueForKey:@"imageLocation"] != nil) {
            NSLog(@"Trying to load the imageView with: %@", [self.managedObject valueForKey:@"imageLocation"]);
            UIImage *image = [[UIImage alloc] initWithContentsOfFile:[self.managedObject valueForKey:@"imageLocation"]];
            self.imageView.image = image;
        } else {
            self.imageView.image = [UIImage imageNamed:@"no_picture_taken.png"];
        }

    I get the message that it's trying to load the imageView in the debugger, but the image is never displayed in the imageView. Can anyone tell me what I've got wrong here? Thanks a bunch.

  • What are Private Bytes, Virtual Bytes and Working Set?

    - by Devil Jin
    I am using the Windows Perfmon utility to debug a memory leak in a process. Perfmon's explanations of the counters:

    Working Set: the current size, in bytes, of the Working Set of this process. The Working Set is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the Working Set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from Working Sets. If they are needed, they will then be soft-faulted back into the Working Set before leaving main memory.

    Virtual Bytes: the current size, in bytes, of the virtual address space the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite, and the process can limit its ability to load libraries.

    Private Bytes: the current size, in bytes, of memory that this process has allocated that cannot be shared with other processes.

    Q1. Is it Private Bytes I should measure to be sure the process has a leak, since it does not involve any shared libraries and any leak, if happening, will be coming from the process itself?

    Q2. What is the total memory consumed by the process? Is it the Virtual Bytes size, or the sum of Virtual Bytes and Working Set?

    Q3. Is there any relation between Private Bytes, Working Set and Virtual Bytes?

    Q4. Is there any tool that gives better memory information?
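
    If it helps to watch these counters from a script while chasing the leak, below is a small sketch using the third-party psutil package (my illustration; the pid is a placeholder). rss corresponds to the Working Set; note that on Windows psutil's vms reports the committed (pagefile-backed) size rather than Perfmon's Virtual Bytes, and the Windows-only private field lines up with Private Bytes.

        import time
        import psutil  # third-party: pip install psutil

        PID = 1234  # placeholder: the process being debugged

        proc = psutil.Process(PID)
        while True:
            info = proc.memory_info()
            line = 'working set: %d  committed: %d' % (info.rss, info.vms)
            private = getattr(info, 'private', None)  # Windows-only field
            if private is not None:
                line += '  private bytes: %d' % private
            print(line)
            time.sleep(5)  # a steady climb in private bytes suggests a leak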

  • GoDaddy Subdomain Hosting Issue/Question with Disk Access (C#/ASP.NET 3.5)

    - by Vogel
    This isn't a very complicated scenario really, but as I start to type out the problem I'm realizing how convoluted it can become textually. Let me try to be very clear.

    First, the setup: I have a C#/ASP.NET web application that is publicly facing on my main domain (www); let's call it www.mysite.com. Nothing fancy, just a front-end that connects to SQL to display records. Then I have a second C#/ASP.NET web application that is secured using forms authentication, running on a subdomain; let's call it admin.mysite.com. This is a very lightweight CMS system for administering the public site.

    Now, the problem: both of these sites run fine for basic tasks; however, my problem arises when I try to gain access to the file system for uploading. GoDaddy requires subdomains to run as virtual directories under the main application in IIS (so the subdomains actually resolve/redirect to www.mysite.com/admin when you type in admin.mysite.com), but because of this I am unable to write to my website root from the subfolder. Let me explain a little more: the CMS system (running as a virtual directory) gives the admin the ability to upload photos for display on the main site, the target folder of which is www.mysite.com/images. When attempting disk access from the root app, I am able to write to the virtual directory, but I cannot do the opposite, that is, write to the root from the virtual directory; I get security violations. If I can only upload to the /admin/ virtual directory, the entire point is moot, because it's a secured folder that the public can't see!

    The only solution I can think of is to upload the files to the /admin/ virtual directory, then call a URL in the root that moves files from /admin/ back to the root, but that is entirely ghetto. I hope this post makes sense. Anyone else experience anything like this? The bottom line is that it seems virtual directories ONLY have access to themselves, and not their parent directories, no matter what credentials are used. Thanks!

  • XElement vs Dictionary

    - by user135498
    Hi all, I need advice. I have an application that imports 10,000 rows containing name & address from a text file into XElements, which are subsequently added to a synchronized queue. When the import is complete, the app spawns worker threads that process the XElements by dequeuing them, making a database call, inserting the database output into the request document, and inserting the processed document into an output queue. When all requests have been processed, the output queue is written to disk as an XML doc.

    I used XElements for the requests because I needed the flexibility to add fields to the request during processing, i.e. depending on the job type, the app might need to add a phone number, date of birth or email address to a request, based on a name/address match against a public-record database.

    My question is: the XElements seem to use quite a bit of memory, and I know there is a lot of parsing as the document makes its way through the processing methods. I'm considering replacing the XElements with a Dictionary object, but I'm skeptical the gain will be worth the effort. In essence it will accomplish the same thing. Thoughts?
