Search Results

Search found 16794 results on 672 pages for 'memory usage'.


  • create a list of threads in C

    - by Hristo
    My implementation (in C) isn't working... the first node is always NULL, so I can't free the memory. I'm doing:

        thread_node_t *thread_list_head = NULL; // done in the beginning of the program
        ...
        // later on in the program
        thread_node_t *thread = (thread_node_t*) malloc(sizeof(thread_node_t));
        pthread_create(&(thread->thread), NULL, client_thread, &csFD);
        thread->next = thread_list_head;
        thread_list_head = thread;

    So when I try to free this memory, thread_list_head is NULL:

        while (thread_list_head != NULL) {
            thread_node_t *temp = thread_list_head;
            thread_list_head = thread_list_head->next;
            free(temp);
            temp = NULL;
        }

    Any advice on how to fix this, or just a new way to create this list that would work better? Thanks, Hristo
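
    The prepend snippet itself looks correct, so one likely cause (an assumption, not a confirmed diagnosis) is that the list is being built through a local copy of the head, e.g. inside a function that receives thread_list_head by value. A minimal sketch of a prepend helper that takes the head by address, with a hypothetical name:

        #include <pthread.h>
        #include <stdlib.h>

        typedef struct thread_node {
            pthread_t thread;
            struct thread_node *next;
        } thread_node_t;

        /* Prepend through a pointer-to-pointer so the caller's head
           variable is the one that actually gets updated. */
        static void push_thread(thread_node_t **head, pthread_t t) {
            thread_node_t *node = malloc(sizeof *node);
            if (node == NULL) return;   /* allocation failed */
            node->thread = t;
            node->next = *head;
            *head = node;               /* updates the real head */
        }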

    Read the article

  • J2ME Reduce Image color-depth/ Compress Image size

    - by updateraj
    Hi, I need to transmit an image from the mobile phone to the server. I am able to reduce the image's screen size, but not its memory size. I understand I have to deal with the color depth. J2ME does not seem to offer the scaling methods that are available in J2SE:

        Image rescaled = image.getScaledInstance(thumbWidth, thumbHeight, Image.SCALE_AREA_AVERAGING);
        BufferedImage biRescaled = toBufferedImage(rescaled, thumbWidth, thumbHeight, BufferedImage.TYPE_INT_RGB);

    How would I tackle this? I would like to reduce the image's memory size before I transmit it to the server. Thank you
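
    MIDP does offer Image.getRGB and Image.createRGBImage, so one common route is to pull the ARGB pixels out and downscale them yourself before encoding and transmitting; the arithmetic is language-agnostic. A minimal nearest-neighbor sketch in C over a 32-bit ARGB buffer (a sketch with a hypothetical helper name, not the J2ME API itself):

        #include <stdint.h>

        /* Nearest-neighbor downscale of a 32-bit ARGB pixel buffer.
           dst must have room for dw*dh pixels. */
        void scale_argb(const uint32_t *src, int sw, int sh,
                        uint32_t *dst, int dw, int dh) {
            for (int y = 0; y < dh; y++) {
                int sy = y * sh / dh;          /* source row */
                for (int x = 0; x < dw; x++) {
                    int sx = x * sw / dw;      /* source column */
                    dst[y * dw + x] = src[sy * sw + sx];
                }
            }
        }

    Fewer pixels (and, if needed, masking off low-order color bits to reduce depth) directly shrinks the encoded payload sent to the server.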

    Read the article

  • I don't understand Application Domains

    - by Jeremy Edwards
    .NET has this concept of Application Domains which, from what I understand, can be used to load an assembly into memory. I've done some research on Application Domains, and also went to my local book store for additional knowledge on this subject matter, but it seems very scarce. All I know is that Application Domains let me load assemblies into memory and unload them when I want. What capabilities do Application Domains have other than the ones I have mentioned? Do threads respect Application Domain boundaries? Are there any drawbacks to loading assemblies in an Application Domain other than the main one, beyond the performance cost of cross-domain communication? Links to resources that discuss Application Domains would be nice as well. I've already checked out MSDN, which doesn't have that much information about them.

    Read the article

  • what's faster: merging lists or dicts in python?

    - by tipu
    I'm working with an app that is CPU-bound more than memory-bound, and I'm trying to merge two things, whether they be sets or dicts. Now the thing is I can choose either one, but I'm wondering if merging dicts would be faster since it's all in memory? Or is it always going to be O(n), n being the size of the smaller set? The reason I asked about dicts rather than sets is that I can't convert a set to JSON, because that results in {key1, key2, key3} and JSON needs key/value pairs, so I am using a dict so json.dumps returns {key1:1, key2:1, key3:1}. Yes, this is wasteful, but if it proves to be faster then I'm okay with it.

    Read the article

  • CUDA threads output different value

    - by kar
    Hi, I wrote a CUDA program; I have given the kernel function below. The device memory is allocated through cudaMalloc(); the value of *md is 10.

        __global__ void add(int *md)
        {
            int x, oper = 2;
            x = threadIdx.x;
            *md = *md * oper;
            if (x == 1) { *md = *md * 0; }
            if (x == 2) { *md = *md * 10; }
            if (x == 3) { *md = *md + 1; }
            if (x == 4) { *md = *md - 1; }
        }

    I executed the above code as add<<<1,5>>>(md) and add<<<1,4>>>(md). For <<<1,5>>> the output is 19; for <<<1,4>>> the output is 21.

    1) I have a doubt: will cudaMalloc() allocate in device main memory?
    2) Why is the last thread alone always executed in the above program?

    Thank you
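
    For what it's worth, every thread in this kernel reads and writes the same *md with no synchronization, so the final value depends on how the read-modify-write sequences interleave; the same hazard can be shown in plain C with pthreads (an illustrative sketch, not the poster's code):

        #include <pthread.h>
        #include <stdio.h>

        static int md = 10;           /* shared and unprotected, like *md */

        static void *worker(void *arg) {
            (void)arg;
            md = md * 2;              /* unsynchronized read-modify-write */
            return NULL;
        }

        int main(void) {
            pthread_t t[5];
            for (int i = 0; i < 5; i++)
                pthread_create(&t[i], NULL, worker, NULL);
            for (int i = 0; i < 5; i++)
                pthread_join(t[i], NULL);
            printf("%d\n", md);       /* not guaranteed to be 10 * 2^5 */
            return 0;
        }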

    Read the article

  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The couple of lines below are called 15 times during initialization. The tx-size is reported as 512 every time, so this will allocate a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations! (15*2)+1

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx-size, tx-size, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        free(spriteData);

    Likewise, in another area of my program that allocates 6 256x256x4 (256 kB) textures, I see 13 sitting there. (6*2)+1 Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.
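
    One possible reading (an assumption, not a diagnosis): glTexImage2D hands the pixels to the driver, which keeps its own copy, so a tool can see both the app-side buffer and the driver-side allocation until the texture is deleted, and the simulator's software renderer may not behave like the device. A minimal sketch of the full texture lifecycle in C, with a hypothetical helper name:

        #include <OpenGLES/ES1/gl.h>   /* iPhone OpenGL ES 1.x header */
        #include <stdlib.h>

        /* Upload pixel data, free the client-side copy immediately,
           and return the texture name for later deletion. */
        GLuint upload_texture(void *spriteData, GLsizei txSize) {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, txSize, txSize, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
            free(spriteData);          /* GL now holds its own copy */
            return tex;
        }

        /* ... when the sprite is released:
           glDeleteTextures(1, &tex);  frees the driver-side copy */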

    Read the article

  • Algorithm: How to tell if an array is a permutation in O(n)?

    - by Iulian Serbanoiu
    Hello. Input: a read-only array of N elements containing integer values from 1 to N, and a memory zone of a fixed size (10, 100, 1000 etc. - not depending on N). How to tell in O(n) if the array represents a permutation? What I achieved so far: I use the limited memory area to store the sum and the product of the array, and I compare (1) the sum with N*(N+1)/2 and (2) the product with N!. I know that if condition (2) is true I might have a permutation. I'm wondering if there's a way to prove that condition (2) is sufficient to tell if I have a permutation. So far I haven't figured this out... Thanks, Iulian
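
    For reference, a minimal C sketch of this style of single-pass, O(1)-memory test (sum plus XOR against the identity permutation's values; both are necessary conditions, and whether such checks are sufficient is exactly what the question asks):

        #include <stdbool.h>
        #include <stddef.h>

        bool could_be_permutation(const int *a, size_t n) {
            unsigned long long sum = 0, ref_sum = 0;
            unsigned long long xr = 0, ref_xr = 0;
            for (size_t i = 0; i < n; i++) {
                sum += (unsigned long long)a[i];  /* running sum */
                xr  ^= (unsigned long long)a[i];  /* running XOR */
                ref_sum += i + 1;                 /* sum of 1..N */
                ref_xr  ^= i + 1;                 /* XOR of 1..N */
            }
            return sum == ref_sum && xr == ref_xr;
        }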

    Read the article

  • Handling exception from unmanaged dll in C#

    - by StuffHappens
    Hello. I have the following function written in C#:

        public static string GetNominativeDeclension(string surnameNamePatronimic)
        {
            if (surnameNamePatronimic == null)
                throw new ArgumentNullException("surnameNamePatronimic");
            IntPtr[] ptrs = null;
            try
            {
                ptrs = StringsToIntPtrArray(surnameNamePatronimic);
                int resultLen = MaxResultBufSize;
                int err = decGetNominativePadeg(ptrs[0], ptrs[1], ref resultLen);
                ThrowException(err);
                return IntPtrToString(ptrs, resultLen);
            }
            catch
            {
                return surnameNamePatronimic;
            }
            finally
            {
                FreeIntPtr(ptrs);
            }
        }

    The function decGetNominativePadeg lives in an unmanaged dll:

        [DllImport("Padeg.dll", EntryPoint = "GetNominativePadeg")]
        private static extern Int32 decGetNominativePadeg(IntPtr surnameNamePatronimic, IntPtr result, ref Int32 resultLength);

    and it throws an exception: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." The catch in the C# code doesn't actually catch it. Why? How do I handle this exception? Thank you for your help!

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities and so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, hyper-threaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow and I could type complete sentences and watch the VM "catch up". My company has training laptops that we provide for our classes. They are Dell Precision M6400s (Intel Core 2 Duo P8700 running at 2.54 GHz with 8 GB of memory) and the experience on those laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment.

    Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html):

        3100M = 111th (E6510)
        FX 2700M = 47th (Precision M6400)
        Radeon HD 5870 = 8th (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: Is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • streaming XML serialization in .net

    - by Luca Martinetti
    Hello, I'm trying to serialize a very large IEnumerable<MyObject> using an XmlSerializer without keeping all the objects in memory. The IEnumerable<MyObject> is actually lazy. I'm looking for a streaming solution that will:

      - take an object from the IEnumerable<MyObject>
      - serialize it to the underlying stream using the standard serialization (I don't want to handcraft the XML here!)
      - discard the in-memory data and move to the next

    I'm trying with this code:

        using (var writer = new StreamWriter(filePath))
        {
            var xmlSerializer = new XmlSerializer(typeof(MyObject));
            foreach (var myObject in myObjectsIEnumerable)
            {
                xmlSerializer.Serialize(writer, myObject);
            }
        }

    but I'm getting multiple XML headers and I cannot specify a root tag <MyObjects>, so my XML is invalid. Any idea? Thanks

    Read the article

  • Efficient banner rotation with PHP

    - by reggie
    I rotate a banner on my site by selecting it randomly from an array of banners. Sample code as demonstration:

        <?php
        $banners = array(
            '<iframe>...</iframe>',
            '<a href="#"><img src="#.jpg" alt="" /></a>',
            //and so on
        );
        echo $banners[rand(0, count($banners) - 1)];
        ?>

    The array of banners has become quite big. I am concerned with the amount of memory that this array adds to the execution of my page. But I can't figure out a better way of showing a random banner without loading all the banners into memory...

    Read the article

  • Node.js for lua?

    - by Shahbaz
    I've been playing around with node.js (nodejs) for the past few days and it is fantastic. As far as I can tell, lua doesn't have a similar integration of libev and libio which lets one avoid almost any blocking calls and interact with the network and the filesystem in an asynchronous manner. I'm slowly porting my java implementation to nodejs, but I'm shocked that luajit is much faster than v8 JavaScript AND uses far less memory! I imagine writing my server in such an environment (very fast and responsive, very low memory usage, very expressive) would improve my project immensely. Being new to lua, I'm just not sure if such a thing exists. I'll appreciate any pointers. Thanks

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and the string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options activated) way to do this? Best wishes, Alexander

    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by standard named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start; but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.
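
    A minimal sketch of how such objcopy-embedded data is typically consumed from the C/C++ side (symbol names follow the poster's pe-i386 output; the exact spelling, with or without a leading underscore, is platform-dependent, and the helper name is hypothetical):

        #include <stddef.h>

        /* The linker supplies these symbols; they carry no storage of
           their own, only the addresses delimiting the embedded blob. */
        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        void load_training_data(void) {
            const char *begin = &binary_inbuilt_training_data_bin_start;
            const char *end   = &binary_inbuilt_training_data_bin_end;
            size_t size = (size_t)(end - begin);
            /* ... parse the 'size' bytes starting at 'begin' ... */
            (void)size;
        }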

    Read the article

  • Can you decode a mutable Bitmap from an InputStream?

    - by Daniel Lew
    Right now I've got an Android application that:

      - downloads an image
      - does some pre-processing to that image
      - displays the image

    The dilemma is that I would like this process to use less memory, so that I can afford to download a higher-resolution image. However, when I download the image now, I use BitmapFactory.decodeStream(), which has the unfortunate side effect of returning an immutable Bitmap. As a result, I'm having to create a copy of the Bitmap before I can start operating on it, which means I have to have 2x the size of the Bitmap's memory allocated (at least for a brief period of time; once the copy is complete I can recycle the original). Is there a way to decode an InputStream into a mutable Bitmap?

    Read the article

  • Java object caching, which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at a point where I need to decide what to do when caching of objects reaches the configured threshold. Should I store the objects in an indexed file (like the one provided by JCS) and read them from the file (file I/O) when required, or have the objects stored in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS.

    Adding some more information: I ask this question so as to determine whether I can switch to distributed caching. The remote server which will hold the cache will have more memory and a better disk, and will only be used for caching. One of the reasons we cannot increase the number of locally cached objects is that they are stored in the JVM heap, which has limited memory (we use a 32-bit JVM).

    Update: Thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.

    Read the article

  • MySQL query cache vs caching result-sets in the application layer

    - by GetFree
    I'm running a PHP/MySQL-driven website with a lot of visits, and I'm considering the possibility of caching result sets in shared memory in order to reduce database load. However, right now MySQL's query cache is enabled, and it seems to be doing a pretty good job, since if I disable query caching, CPU usage jumps to 100% immediately. Given that situation, I don't know if caching result sets (or even the generated HTML code) locally in shared memory with PHP will result in any noticeable performance improvement. Does anyone out there have any experience on this matter? PS: Please avoid suggesting heavy-artillery solutions like memcached. Right now I'm looking for simple solutions that don't require too much time to implement, deploy and maintain.

    Read the article

  • Do I need to force the GAC to reload an assembly? Is this possible?

    - by Ben McCormack
    I've added types to the .NET classes that I'm using for COM interop. To get it to work with my VB6 application, I unregistered the DLL and re-registered it (using regasm). I then uninstalled it from and reinstalled it to the GAC (using gacutil). The types are showing up in the VB6 object explorer, but when I run the application in the VB6 IDE, it breaks on the line that instantiates the new types with the error: "Automation Error - The system cannot find the file specified." I thought this odd since I had already updated the GAC, so I uninstalled the DLL from the GAC and got the exact same error, which seems to indicate that the older version of the DLL is already in memory and needs to be "reloaded" so that the newer DLL is in memory. Is this possible, and if so, what do I need to do?

    Read the article

  • HttpClient multithread performance

    - by pepper
    I have an application which downloads more than 4500 HTML pages from 62 target hosts using HttpClient (4.1.3 or 4.2-beta). It runs on Windows 7 64-bit. Processor - Core i7 2600K. Network bandwidth - 54 Mb/s. At this moment it uses these parameters:

      - DefaultHttpClient and PoolingClientConnectionManager;
      - it also has an IdleConnectionMonitorThread from http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html;
      - maximum total connections = 80;
      - default maximum connections per route = 5;
      - for thread management it uses ForkJoinPool with parallelism level = 5 (do I understand correctly that this is the number of working threads?)

    In this case my network usage (in Windows Task Manager) does not rise above 2.5%. To download 4500 pages takes 70 minutes. And in the HttpClient logs I have things like this:

        DEBUG ForkJoinPool-2-worker-1 [org.apache.http.impl.conn.PoolingClientConnectionManager]: Connection released: [id: 209][route: {}-http://stackoverflow.com][total kept alive: 6; route allocated: 1 of 5; total allocated: 10 of 80]

    The number of allocated connections does not rise above 10-12, in spite of the fact that I've set it to 80 connections. If I try to raise the parallelism level to 20 or 80, network usage remains the same, but a lot of connection timeouts are generated. I've read the tutorials on hc.apache.org (HttpClient Performance Optimization Guide and HttpClient Threading Guide) but they don't help. The task's code looks like this:

        public class ContentDownloader extends RecursiveAction {

            private final HttpClient httpClient;
            private final HttpContext context;
            private List<Entry> entries;

            public ContentDownloader(HttpClient httpClient, List<Entry> entries) {
                this.httpClient = httpClient;
                context = new BasicHttpContext();
                this.entries = entries;
            }

            private void computeDirectly(Entry entry) {
                final HttpGet get = new HttpGet(entry.getLink());
                try {
                    HttpResponse response = httpClient.execute(get, context);
                    int statusCode = response.getStatusLine().getStatusCode();
                    if ((statusCode >= 400) && (statusCode <= 600)) {
                        logger.error("Couldn't get content from " + get.getURI().toString()
                                + "\n" + response.toString());
                    } else {
                        HttpEntity entity = response.getEntity();
                        if (entity != null) {
                            String htmlContent = EntityUtils.toString(entity).trim();
                            entry.setHtml(htmlContent);
                            EntityUtils.consumeQuietly(entity);
                        }
                    }
                } catch (Exception e) {
                } finally {
                    get.releaseConnection();
                }
            }

            @Override
            protected void compute() {
                if (entries.size() <= 1) {
                    computeDirectly(entries.get(0));
                    return;
                }
                int split = entries.size() / 2;
                invokeAll(new ContentDownloader(httpClient, entries.subList(0, split)),
                          new ContentDownloader(httpClient, entries.subList(split, entries.size())));
            }
        }

    And the question is: what is the best practice for using a multi-threaded HttpClient? Maybe there are some rules for setting up the ConnectionManager and HttpClient? How can I use all 80 connections and raise network usage? If necessary, I will provide more code.

    Read the article

  • How to insert HTML into a UIWebView

    - by Mohan Gulati
    I have HTML content which was being displayed in a text view. The next iteration of my app displays the HTML contents in a UIWebView. So I basically replaced my UITextView with a UIWebView. However, I could not figure out how to insert my HTML snippet into the view. It seems to need a URLRequest, which I do not want. I have already stored the HTML content in memory and want to load and display it from memory. Any ideas how I should proceed?

    Read the article

  • Core dump utility for .NET

    - by Dave
    In my past life as a COBOL mainframe developer I made extensive use of a tool called Abendaid which, in the event of an exception, would give me a complete memory dump, including a formatted list of every variable in memory as well as a complete stack trace of the program with the offending statement highlighted. This made pinpointing the cause of an error much simpler and saved a lot of step-through debugging and/or trace statements. Now that I've made the transition to C# and .NET web development, I find that the information provided by ASP.NET only tells half the story, giving me a stack trace but not any of the variable or class information. This makes debugging more difficult, as you then have to run the process again with the debugger to try to reproduce the error, which is not easy with intermittent errors or with assemblies that run under the likes of SQL Server or CRM. I've looked around quite a lot for something that does this but I can't find anything obvious. Does anyone have any idea if there is one, or if not, what I'd need to start with in order to write one?

    Read the article

  • Taming the malloc/free beast -- tips & tricks

    - by roufamatic
    I've been using C on some projects for a master's degree but have never built production software with it. (.NET & Javascript are my bread and butter.) Obviously, the need to free() memory that you malloc() is critical in C. This is fine, well and good if you can do both in one routine. But as programs grow, and structs deepen, keeping track of what's been malloc'd where and what's appropriate to free gets harder and harder. I've looked around on the interwebs and only found a few generic recommendations for this. What I suspect is that some of you long-time C coders have come up with your own patterns and practices to simplify this process and keep the evil in front of you. So: how do you recommend structuring your C programs to keep dynamic allocations from becoming memory leaks?
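
    One widely used pattern (a sketch of one approach, not the only answer): give every struct that owns heap memory a matching create/destroy pair, so allocation and cleanup live in exactly two places. Hypothetical example:

        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char *name;    /* owned by the struct */
            int  *scores;  /* owned by the struct */
        } student_t;

        /* All allocation happens here; returns NULL on failure. */
        student_t *student_create(const char *name, size_t nscores) {
            student_t *s = calloc(1, sizeof *s);
            if (s == NULL) return NULL;
            s->name = strdup(name);                 /* POSIX strdup */
            s->scores = calloc(nscores, sizeof *s->scores);
            if (s->name == NULL || s->scores == NULL) {
                free(s->name);
                free(s->scores);
                free(s);
                return NULL;
            }
            return s;
        }

        /* The only place this struct's memory is ever freed. */
        void student_destroy(student_t *s) {
            if (s == NULL) return;
            free(s->scores);
            free(s->name);
            free(s);
        }

    Every malloc then has exactly one owner and one matching free, which is most of the battle as structs deepen.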

    Read the article

  • pushScene and popScene or replaceScene: which should we use and when?

    - by srikanth rongali
    I am using pushScene to get to the next scene. But I read that with each pushScene the scene is stored on a stack, so the memory usage is higher. For that reason I am using replaceScene in place of pushScene. But with replaceScene I am getting a memory bad-access message in the debugger. So I want to popScene after using it, so that the retain count is zero. But I am confused about using popScene. Say I have Scene1 and Scene2, and I used the following to go into Scene2:

        [[CCDirector sharedDirector] pushScene:Scene2];

    Now I need to remove Scene1 from the stack. Where should I call popScene to pop Scene1? And how do I get the previous scene from the currently running scene? Thank you.

    Read the article

  • is my function correct?

    - by sbsp
    This is part of an assignment, so please don't post solutions; just point me in the right direction if possible. I am passing a pointer to a char array to my method, as well as a value for the actual height of the char array. I am looping through to see if all values are 0; if they are, then return 0, else return 1. The method is used as a test to see if I should free memory or not, and set the pointer to NULL if it is full of 0's. The issue I am having is that the program should have "some unfreed" memory at the end, so I have no idea whether or not it's doing it correctly, and I struggle with gdb immensely. Thanks for reading

        int shouldBeNull(char *charPointer, int sizeOfCharArray)
        {
            int isIn = 0;
            int i = 0;
            while (i < sizeOfCharArray) {
                if (*charPointer != '0') {
                    isIn = 1;
                    break;
                }
                i++;
                charPointer++;
            }
            return isIn;
        }

    Read the article

  • What is the best way to implement an object cache with Entity Framework?

    - by Harshal
    Say I have a table of "BlogPosts" in a database and I want to be able to cache the ones that were already retrieved in memory, for further reads. I can just use a standard hashtable-type memory cache like System.Web.Caching.Cache. But if I then need to update a property on one of these blog posts, e.g. blogPost.Title, and update the record in the DB, I cannot do this without fetching it again from the database, as the Entity Framework context used to fetch this record when it was loaded into my cache is already disposed. How do I write code so that I can get an object from my cache, update one property, and just call the SaveChanges method without incurring an extra read?

    Read the article

  • Strange behavior with gcc inline assembly

    - by Chris
    When inlining assembly in gcc, I find myself regularly having to add empty asm blocks in order to keep variables alive in earlier blocks, for example:

        asm("rcr $1,%[borrow];"
            "movq 0(%[b_],%[i],8),%%rax;"
            "adcq %%rax,0(%[r_top],%[i],8);"
            "rcl $1,%[borrow];"
            : [borrow]"+r"(borrow)
            : [i]"r"(i),[b_]"r"(b_.data),[r_top]"r"(r_top.data)
            : "%rax","%rdx");

        asm("" : : "r"(borrow) : ); // work-around to keep borrow alive
        ...

    Another example of weirdness is that the code below works great without optimizations, but with -O3 it seg-faults:

        ulong carry = 0,hi = 0,qh = s.data[1],ql = s.data[0];
        asm("movq 0(%[b]),%%rax;"
            "mulq %[ql];"
            "movq %%rax,0(%[sb]);"
            "movq %%rdx,%[hi];"
            : [hi]"=r"(hi)
            : [ql]"r"(ql),[b]"r"(b.data),[sb]"r"(sb.data)
            : "%rax","%rdx","memory");

        for (long i = 1; i < b.size; i++) {
            asm("movq 0(%[b],%[i],8),%%rax;"
                "mulq %[ql];"
                "xorq %%r10,%%r10;"
                "addq %%rax,%[hi];"
                "adcq %%rdx,%[carry];"
                "adcq $0,%%r10;"
                "movq -8(%[b],%[i],8),%%rax;"
                "mulq %[qh];"
                "addq %%rax,%[hi];"
                "adcq %%rdx,%[carry];"
                "adcq $0,%%r10;"
                "movq %[hi],0(%[sb],%[i],8);"
                "movq %[carry],%[hi];"
                "movq %%r10,%[carry];"
                : [carry]"+r"(carry),[hi]"+r"(hi)
                : [i]"r"(i),[ql]"r"(ql),[qh]"r"(qh),[b]"r"(b.data),[sb]"r"(sb.data)
                : "%rax","%rdx","%r10","memory");
        }

        asm("movq -8(%[b],%[i],8),%%rax;"
            "mulq %[qh];"
            "addq %%rax,%[hi];"
            "adcq %%rdx,%[carry];"
            "movq %[hi],0(%[sb],%[i],8);"
            "movq %[carry],8(%[sb],%[i],8);"
            : [hi]"+r"(hi),[carry]"+r"(carry)
            : [i]"r"(long(b.size)),[qh]"r"(qh),[b]"r"(b.data),[sb]"r"(sb.data)
            : "%rax","%rdx","memory");

    I think it has to do with the fact that it's using so many registers. Is there something I'm missing here, or is register allocation just really buggy with gcc inline assembly?
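
    One plausible culprit (an assumption based on the constraints shown, not a confirmed diagnosis): an output that is written before all inputs have been consumed needs the early-clobber modifier &, otherwise gcc is free to assign an input and an output to the same register once optimization is on, and writes through the output then corrupt a live input. A minimal sketch of the idiom:

        // "=&r" (early-clobber) tells gcc the output is written before
        // the inputs are dead, so it must not share their registers.
        unsigned long scale_add(unsigned long a, unsigned long b) {
            unsigned long out;
            asm("movq %[a],%[out];"    // out is written first...
                "addq %[b],%[out];"    // ...while b is still needed
                : [out]"=&r"(out)
                : [a]"r"(a), [b]"r"(b)
                : "cc");
            return out;
        }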

    Read the article
