Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 495/672 | < Previous Page | 491 492 493 494 495 496 497 498 499 500 501 502  | Next Page >

  • .NET COM Interop with references to other libraries

    - by user262190
    Hello, I'm up against a problem when loading a class in a managed library from a COM Interop library. Basically I have some unmanaged C++ code and a COM Interop library written in C#, and finally a third library which is referenced by the COM Interop library and which contains a class: public class MyClass{ public MyClass(){} } What I'd like to do is call a function in the Interop library from my unmanaged C++ code. The C++ code doesn't need to know of the existence of the third library; it's only used within the Interop. Init(){ MyClass _class = new MyClass(); } For some reason the line "MyClass _class = new MyClass();" in Init fails, and I don't get very useful error messages; all I have to go on is a few of these in my output window: "First-chance exception at 0x7c812afb in DotNet_Com_Call.exe: Microsoft C++ exception: [rethrow] at memory location 0x00000000.." and the HRESULT of the "HRESULT hr = pDotNetCOMPtr->Init();" line in my C++ code is "The system cannot find the specified file". I'm new to COM, so if anyone has any ideas or pointers to get me going in the right direction, I'd appreciate it. Thanks
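
    A common cause of "The system cannot find the specified file" in this setup is that the CLR probes for the referenced assembly next to the unmanaged host EXE rather than next to the interop DLL. Below is a minimal C# sketch of one workaround, hooking AppDomain.AssemblyResolve inside the COM-visible class; ComEntryPoint and the file layout are hypothetical placeholders, not part of the asker's code.

        using System;
        using System.IO;
        using System.Reflection;

        public class ComEntryPoint
        {
            // Register the resolver before any method touches types from the
            // third library; the CLR raises AssemblyResolve when probing fails.
            static ComEntryPoint()
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    // Look for the requested assembly next to this interop DLL
                    // instead of next to the unmanaged host executable.
                    string folder = Path.GetDirectoryName(typeof(ComEntryPoint).Assembly.Location);
                    string candidate = Path.Combine(folder, new AssemblyName(args.Name).Name + ".dll");
                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };
            }

            public void Init()
            {
                // The type from the referenced library can now be resolved.
                MyClass _class = new MyClass();
            }
        }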

    Read the article

  • UIScrollView and UIImage in it with paging enabled

    - by Infinity
    Hello guys! I need to display 10 - 1000 images in a UIScrollView with paging enabled, so every page of the scroll view is an image (a UIImage). I can load 10 images into 10 UIImages that stay in memory, but with 1000 images I have problems on the iPhone or on the iPad. I am trying to unload and then load the images while scrolling: at any time I display 3 images, the current page's image plus the -1 and +1 pages. When I scroll, I unload an image and then load the next. With this method I have two problems: the scrolling is laggy, and if I scroll very fast the images don't appear. Is there any good solution for these problems? Can you suggest one?

    Read the article

  • Outlets buggy, causing crash?

    - by Moshe
    I've been having issues with making outlets and I was wondering if there is a difference and what the difference could be... EDIT: Using outlets seems to be causing SIGABRT errors. // // hebOmer.h // tizkorPush // // Created by Moshe -...- . // Copyright 2010 __MyCompanyName__. All rights reserved. // #import <UIKit/UIKit.h> #import "support.h" #import "hdate.h" @interface hebOmer : UIViewController{ IBOutlet UIScrollView * hebrewScrollView; UILabel *hebrewOmer; } @property (nonatomic, retain) UIScrollView * hebrewScrollView; @property (nonatomic, retain) IBOutlet UILabel *hebrewOmer; @end Creating an outlet to the UILabel is causing problems. The implementation seems fine: @synthesize and the memory release functions were called properly.

    Read the article

  • Difference between performSelectorInBackground and NSOperation Subclass

    - by AmitSri
    I have created a test app that runs a deep counting loop. I run the loop function in a background thread, once using performSelectorInBackground and once using an NSOperation subclass. I use performSelectorOnMainThread to notify the main thread from the background-thread method, and [NSNotificationCenter defaultCenter] postNotificationName from the NSOperation subclass, to update the UI. Initially both implementations give me the same result and I am able to update the UI without any problem. The only difference I found is the thread count between the two implementations. The performSelectorInBackground implementation created one thread that got terminated after the loop finished, and my app's thread count went back to 1. The NSOperation subclass implementation created two new threads which keep existing in the application, and I can see 3 threads after the loop has finished in the main() function. So, my question is: why are two threads created by NSOperation, and why don't they get terminated just like the first background-thread implementation? I am a little bit confused and unable to decide which implementation is best in terms of performance and memory management. Thanks

    Read the article

  • How to change the stack size for a C# program?

    - by carter-boater
    Dear friends, I have a program that does recursive calls 2 billion times and the stack overflows. I made changes, and it still needs 40K recursive calls, so I probably need several MB of stack memory. I heard the stack size defaults to 1 MB. I tried searching online; someone said to go to Properties > Linker ... in Visual Studio, but I cannot find it. Does anybody know how to increase it? Also, I am wondering if I can set it somewhere in my C# program? P.S. I am using 32-bit WinXP and 64-bit Win7. Thanks a lot
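
    One way to get a bigger stack without touching project settings is to run the recursion on a dedicated thread, since the Thread constructor accepts a maxStackSize argument. A minimal sketch (DeepRecursion stands in for the real recursive method):

        using System.Threading;

        class Program
        {
            // Placeholder for the actual recursive work.
            static void DeepRecursion(int depth)
            {
                if (depth == 0) return;
                DeepRecursion(depth - 1);
            }

            static void Main()
            {
                // Ask for a 16 MB stack instead of the default ~1 MB.
                var worker = new Thread(() => DeepRecursion(40000), 16 * 1024 * 1024);
                worker.Start();
                worker.Join();
            }
        }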

    Read the article

  • Is tcerl for Mnesia production ready? Are there any alternatives?

    - by Sanoj
    I would like to create a scalable web service using Mnesia as the database. However, Mnesia by default isn't scalable for persistent storage since it uses Dets (which has a 2 GB limit) as its backend. I have seen discussions about extending Mnesia with MnesiaEx and using tcerl as the backend. It sounds good and has shown good performance. However, I have seen in a talk about Tokyo Cabinet and CouchDB with Mnesia that there are some issues: issues with durability, issues with memory leaks, issues with crashes. Is tcerl + Mnesia really production ready? And are there any other alternatives? How do companies overcome these issues if they use Mnesia in bigger systems? Is there a working solution with Mnesia and Tokyo Tyrant that works better?

    Read the article

  • How to debug a native Java crash on Linux?

    - by Paul J. Lucas
    I've seen this question and this article on how to debug a native Java crash. The article is with respect to Windows. What are the equivalent debugging aids on Linux? Note: All I have is this crash log from a user in the field. I do not have access to the machine on which the crash occurred. Update: I am pretty sure the crash is due to JNI code we have. I never meant to imply that it was the JVM itself that was faulty. Per request, here is the crash dump (or as much of it as will fit in the 30K stackoverflow limit): # # An unexpected error has been detected by Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x06300e76, pid=9983, tid=4106996592 # # Java VM: Java HotSpot(TM) Client VM (1.6.0_03-b05 mixed mode, sharing) # Problematic frame: # V [libjvm.so+0x300e76] # # If you would like to submit a bug report, please visit: # http://java.sun.com/webapps/bugreport/crash.jsp # --------------- T H R E A D --------------- Current thread (0x0922e000): VMThread [id=9985] siginfo:si_signo=11, si_errno=0, si_code=1, si_addr=0x00000008 Registers: EAX=0x00000008, EBX=0x88a829b3, ECX=0x88a829b0, EDX=0xa7d6c1dc ESP=0xf4cbba5c, EBP=0xf4cbba68, ESI=0xa7d6d1d8, EDI=0x00000404 EIP=0x06300e76, CR2=0x00000008, EFLAGS=0x00010202 Top of Stack: (sp=0xf4cbba5c) 0xf4cbba5c: a7d6c1c8 0920cc30 aa0de5c0 f4cbba98 0xf4cbba6c: 063517d7 cf8f2a20 a7d6c1c8 0920cc30 0xf4cbba7c: 0920cc30 00000000 00000000 6d224c40 0xf4cbba8c: 00000001 f4cbbbb0 0920b440 f4cbbab8 0xf4cbba9c: 061dd4df 0920cc30 f4cbbb10 f4cbbac8 0xf4cbbaac: 0633cb7e 0643b5b8 f4492968 f4cbbad8 0xf4cbbabc: 061dcd68 f4cbbaf0 0920cc30 f4cbbaf8 0xf4cbbacc: 061df31e f4cbbb10 d4cbcc2c f4cbbb08 Instructions: (pc=0x06300e76) 0x06300e66: 82 39 f2 73 34 90 8d 74 26 00 8b 02 85 c0 74 22 0x06300e76: 8b 18 80 3d 45 10 42 06 00 74 0c 89 d8 31 c9 83 Stack: [0xf4c3c000,0xf4cbd000), sp=0xf4cbba5c, free space=510k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x300e76] V [libjvm.so+0x3517d7] V [libjvm.so+0x1dd4df] V [libjvm.so+0x1dcd68] V [libjvm.so+0x1dc3cc] V [libjvm.so+0x1d4c52] V [libjvm.so+0x1d32cc] V [libjvm.so+0x1d4229] V [libjvm.so+0x1dc82a] V [libjvm.so+0x1d1d34] V [libjvm.so+0x186125] V [libjvm.so+0x1d20bc] V [libjvm.so+0x3b2cbe] V [libjvm.so+0x3c5037] V [libjvm.so+0x3c46bc] V [libjvm.so+0x3c488a] V [libjvm.so+0x3c446f] V [libjvm.so+0x30b719] C [libpthread.so.0+0x5cb2] VM_Operation (0xf2b60728): generation collection for allocation, mode: safepoint, requested by thread 0x09449c00 --------------- P R O C E S S --------------- Java Threads: ( = current thread ) 0x092afc00 JavaThread "RawImageCache" daemon [_thread_blocked, id=10026] 0xf37d1000 JavaThread "TimerQueue" daemon [_thread_blocked, id=10022] 0x09410000 JavaThread "SunTileScheduler0Standard7" daemon [_thread_blocked, id=10021] 0x0940f000 JavaThread "SunTileScheduler0Standard6" daemon [_thread_blocked, id=10020] 0x0946fc00 JavaThread "SunTileScheduler0Standard5" daemon [_thread_blocked, id=10019] 0x0946e800 JavaThread "SunTileScheduler0Standard4" daemon [_thread_blocked, id=10018] 0x0946d400 JavaThread "SunTileScheduler0Standard3" daemon [_thread_blocked, id=10017] 0x0946c000 JavaThread "SunTileScheduler0Standard2" daemon [_thread_blocked, id=10016] 0x0946ac00 JavaThread "SunTileScheduler0Standard1" daemon [_thread_blocked, id=10015] 0x0946a000 JavaThread "SunTileScheduler0Standard0" daemon [_thread_blocked, id=10014] 0x0944a800 JavaThread "Image List Poller" [_thread_blocked, id=10012] 0x09449c00 JavaThread "Image Task Queue" [_thread_blocked, id=10011] 
0xf37e3c00 JavaThread "Laf-Widget fade tracker" [_thread_blocked, id=10010] 0x094abc00 JavaThread "FileCacheMonitor" daemon [_thread_blocked, id=10009] 0xf37e3800 JavaThread "DestroyJavaVM" [_thread_blocked, id=9984] 0xf37ee400 JavaThread "Thread-6" daemon [_thread_blocked, id=10006] 0xf3a7c800 JavaThread "DirectoryMonitor.MonitorThread" daemon [_thread_blocked, id=10005] 0xf3a73800 JavaThread "AWT Watchdog" daemon [_thread_blocked, id=10004] 0xf3adb800 JavaThread "TileReaper" daemon [_thread_blocked, id=10003] 0x093c3c00 JavaThread "process reaper" daemon [_thread_in_native, id=10001] 0x093ac800 JavaThread "Timer-0" daemon [_thread_blocked, id=9999] 0x093a8c00 JavaThread "AWT-EventQueue-0" [_thread_blocked, id=9997] 0x093a8000 JavaThread "AWT-Shutdown" [_thread_blocked, id=9996] 0x09378c00 JavaThread "AWT-XAWT" daemon [_thread_blocked, id=9994] 0x09368400 JavaThread "Java2D Disposer" daemon [_thread_blocked, id=9993] 0x09350000 JavaThread "Thread-1" daemon [_thread_blocked, id=9992] 0x0923b400 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=9990] 0x09239c00 JavaThread "CompilerThread0" daemon [_thread_blocked, id=9989] 0x09238800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=9988] 0x09230800 JavaThread "Finalizer" daemon [_thread_blocked, id=9987] 0x0922f400 JavaThread "Reference Handler" daemon [_thread_blocked, id=9986] Other Threads: =0x0922e000 VMThread [id=9985] 0x09245000 WatcherThread [id=9991] VM state:at safepoint (normal execution) VM Mutex/Monitor currently owned by a thread: ([mutex/lock_event]) [0x09205178/0x092051a0] Threads_lock - owner thread: 0x0922e000 [0x09205638/0x09205650] Heap_lock - owner thread: 0x09449c00 Heap def new generation total 83968K, used 9280K [0x55600000, 0x5b110000, 0x5ec40000) eden space 74688K, 0% used [0x55600000, 0x55600000, 0x59ef0000) from space 9280K, 100% used [0x5a800000, 0x5b110000, 0x5b110000) to space 9280K, 0% used [0x59ef0000, 0x59ef0000, 0x5a800000) tenured generation total 1233640K, used 1233529K [0x5ec40000, 0xaa0fa000, 0xcf800000) the space 1233640K, 99% used [0x5ec40000, 0xaa0de5c0, 0x8b4af400, 0xaa0fa000) compacting perm gen total 13312K, used 13175K [0xcf800000, 0xd0500000, 0xd3800000) the space 13312K, 98% used [0xcf800000, 0xd04ddd70, 0xd04dde00, 0xd0500000) ro space 8192K, 69% used [0xd3800000, 0xd3d8f608, 0xd3d8f800, 0xd4000000) rw space 12288K, 57% used [0xd4000000, 0xd46eee98, 0xd46ef000, 0xd4c00000) Dynamic libraries: [ snip ] VM Arguments: jvm_args: -Dinstall4j.jvmDir=/home/berbmit/bin/LightZone/jre -Dinstall4j.appDir=/home/berbmit/bin/LightZone -Dexe4j.moduleName=/home/berbmit/bin/LightZone/LightZone -Dcom.lightcrafts.licensetype=ESD -Xmx2000000k java_command: com.install4j.runtime.Launcher launch com.lightcrafts.platform.linux.LinuxLauncher true false /home/berbmit/bin/LightZone/LightZone.log /home/berbmit/bin/LightZone/LightZone.log false true false true true -1 -1 20 20 Arial 0,0,0 8 500 20 40 Arial 0,0,0 8 500 -1 Launcher Type: SUN_STANDARD Environment Variables: PATH=/home/berbmit/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games USERNAME=berbmit LD_LIBRARY_PATH=/home/berbmit/bin/LightZone/jre/lib/i386/client:/home/berbmit/bin/LightZone/jre/lib/i386:/home/berbmit/bin/LightZone/jre/../lib/i386:/home/berbmit/bin/LightZone/.: SHELL=/bin/bash DISPLAY=:0.0 Signal Handlers: SIGSEGV: [libjvm.so+0x3b29c0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGBUS: [libjvm.so+0x3b29c0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGFPE: [libjvm.so+0x309ec0], sa_mask[0]=0x7ffbfeff, 
sa_flags=0x10000004 SIGPIPE: SIG_IGN, sa_mask[0]=0x00000000, sa_flags=0x00000000 SIGILL: [libjvm.so+0x309ec0], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGUSR1: SIG_DFL, sa_mask[0]=0x00000000, sa_flags=0x00000000 SIGUSR2: [libjvm.so+0x30bef0], sa_mask[0]=0x00000000, sa_flags=0x10000004 SIGHUP: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGINT: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGQUIT: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGTERM: [libjvm.so+0x30b910], sa_mask[0]=0x7ffbfeff, sa_flags=0x10000004 SIGUSR2: [libjvm.so+0x30bef0], sa_mask[0]=0x00000000, sa_flags=0x10000004 --------------- S Y S T E M --------------- OS:squeeze/sid uname:Linux 2.6.35-23-generic #41-Ubuntu SMP Wed Nov 24 11:55:36 UTC 2010 x86_64 libc:glibc 2.12.1 NPTL 2.12.1 rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 1024, AS infinity load average:0.67 0.54 0.36 CPU:total 8 (8 cores per cpu, 2 threads per core) family 6 model 10 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, ht Memory: 4k page, physical 8191552k(3359308k free), swap 1016828k(1016828k free) vm_info: Java HotSpot(TM) Client VM (1.6.0_03-b05) for linux-x86, built on Sep 24 2007 22:45:46 by "java_re" with gcc 3.2.1-7a (J2SE release)

    Read the article

  • Visual C#, Large Arrays, and LOH Fragmentation. What is the accepted convention?

    - by Gorchestopher H
    I have another active question HERE regarding some hopeless memory issues that possibly involve LOH fragmentation, among other unknowns. My question now is: what is the accepted way of doing things? If my app needs to be done in Visual C#, and needs to deal with large arrays to the tune of int[4000000], how can I not be doomed by the garbage collector's refusal to deal with the LOH? It would seem that I am forced to make any large arrays global, and never use the word "new" around any of them. So I'm left with ungraceful global arrays with "maxindex" variables instead of neatly sized arrays that get passed around by functions. I've always been told that this was bad practice. What alternative is there? Is there some kind of function to the tune of System.GC.CollectLOH("Seriously")? Is there possibly some way to outsource garbage collection to something other than System.GC? Anyway, what are the generally accepted rules for dealing with large (85 KB+) variables?
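
    Since the LOH is not compacted, one commonly used pattern is to allocate the big arrays once and hand them out for reuse instead of repeatedly allocating fresh ones. The sketch below is illustrative only (BigArrayPool is not a framework type) and assumes callers can tolerate reused, uncleared buffers:

        using System.Collections.Generic;

        // Hands out int[4000000] buffers and takes them back, so the large
        // object heap sees a few long-lived allocations instead of a churn
        // of short-lived ones that fragment it.
        static class BigArrayPool
        {
            private const int Size = 4000000;
            private static readonly Stack<int[]> Pool = new Stack<int[]>();
            private static readonly object Gate = new object();

            public static int[] Rent()
            {
                lock (Gate)
                {
                    if (Pool.Count > 0) return Pool.Pop();
                }
                return new int[Size];
            }

            public static void Return(int[] buffer)
            {
                // Caller must not touch the buffer after returning it.
                lock (Gate) { Pool.Push(buffer); }
            }
        }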

    Read the article

  • jquery error() calls showing up in firebug profile

    - by Aros
    I am working on an ASP.NET application that makes a lot of jQuery and JavaScript calls, and I am trying to optimize the client-side code as much as possible. (This web application is only designed to run on special hardware that has very low memory and processing power.) The profiler in Firebug is great for figuring out which calls are taking up the most time. I have already optimized a lot of my selectors and it is much faster. However, the profile shows a lot of jQuery error() calls. In the attached image of the Firebug profile window you can see it was called 52 times, accounting for 15.4 of the processing time. Is it normal for jQuery to call its error() like that? My code works flawlessly, and there are no error messages in the Firefox error console. It seems like that is a significant performance hit. Is there any way to get more info on what the errors are? Thanks.

    Read the article

  • Why are listener lists Lists?

    - by Joonas Pulakka
    Why are listener lists (e.g. in Java, those that use addXxxListener() and removeXxxListener() to register and unregister listeners) called lists, and usually implemented as Lists? Wouldn't a Set be a better fit, since in the case of listeners (1) it doesn't matter in which order they get called (there may well be such needs, but they're special cases; ordinary listener mechanisms make no such guarantees), and (2) there's no need to register the same listener more than once (whether doing that should result in calling the same listener once or N times, or be an error, is another question)? Is it just a matter of tradition? Sets are some kind of lists under the hood anyway. Are there performance differences? Is iterating through a List faster or slower than iterating through a Set? Does either take more or less memory? The differences are certainly almost negligible.

    Read the article

  • MPI_SCATTER Fortran Matrices by Rows

    - by Fortran
    What is the best way to scatter a Fortran 90 matrix by its rows rather than its columns? That is, let's say I have a matrix a(4,50) and I want to MPI_SCATTER it onto two processes where each part is alocal(2,50), where rank 0 has rows 1 and 2, and rank 1 has rows 3 and 4. Now, in C this is simple since arrays are row-major, but in Fortran 90 they are column-major. I'm trying to avoid using TRANSPOSE to flip a before scattering (i.e., doubling the memory use), and I figure there must be a way in MPI to do this. Would it be MPI_TYPE_VECTOR? MPI_TYPE_CREATE_SUBARRAY? Likewise, what if I have a 3D array b(4,50,3) and I want two scattered matrices of blocal(2,50,3) distributed as above?

    Read the article

  • Extending Perforce to use a custom content diff tool for certain file extensions

    - by Fraser Graham
    I have various custom binary files stored in Perforce, and for many of the file types I have built a custom diff tool to show the content creators a diff of the actual changes to the file. E.g. if the file holds simple key-value pairs as a compressed binary blob, the diff tool would load each version into an in-memory format and generate a list of additions, deletions and edits to the file, presented in a nice clean report view. Much like the built-in image diff tool in P4V, I'd like to be able to use my own diff tool for certain file extensions within my depot and allow the users to use the existing P4V interface to pick revisions to diff between and examine history. I am aware you can write add-ins for P4V, but I can't find any documentation on them, so I'd like to know whether this kind of extension functionality is available in P4V and how to use it.

    Read the article

  • Core Data performance deleteObject and save managed object context

    - by Gary
    I am trying to figure out the best way to bulk delete objects inside of my Core Data database. I have some objects with a parent/child relationship. At times I need to "refresh" the parent object by clearing out all of the existing child objects and adding new ones to Core Data. The 'delete all' portion of this operation is where I am running into trouble. I accomplish this by looping through the children and calling deleteObject for each one. I have noticed that the NSManagedObjectContext save call that follows all of the deleteObject calls is very slow when I am deleting 15,000 objects. How can I speed up this call? Are there things happening during the save operation that I can be aware of and avoid, by setting parameters differently or setting up my model another way? I've noticed that memory spikes during this operation as well. I really just want to "delete * from". Thanks.

    Read the article

  • Create a custom worksheet function in Excel VBA

    - by Tomalak
    I have a faint memory of being able to use VBA functions to calculate values in Excel, like this (as the cell formula): =MyCustomFunction(A3) Can this be done? EDIT: This is my VBA function signature: Public Function MyCustomFunction(str As String) As String The function sits in the ThisWorkbook module. If I try to use it in the worksheet as shown above, I get the #NAME? error. Solution (thanks, codeape): The function is not accessible when it is defined in the ThisWorkbook module. It must be in a "proper" module, one that has been added manually to the workbook.

    Read the article

  • CUDA: accumulate data into a large histogram of floats

    - by shoosh
    I'm trying to think of a way to implement the following algorithm using CUDA: working on a large volume of voxels, for each voxel I calculate an index i and a value c. After the calculation I need to perform histogram[i] += c. c is a float value and the histogram can have up to 15,000 bins. I'm looking for a way to implement this efficiently using CUDA. The first obvious problem is that with compute capability 1.3, which is what I'm using, I can't even do an atomicAdd() of floats, so how can I accumulate anything reliably? This example by nVidia does something somewhat simpler: the histograms are saved in shared memory (which I can't do due to its size) and it only accumulates integers. Can this approach be generalized to my case?

    Read the article

  • Illustration of buffer overflows for students (linux, C)

    - by osgx
    Hello. My friend is a teacher of first-year CS students. We want to show them buffer overflow exploitation, but modern distributions are protected from simple buffer overflows: HOME=`perl -e "print 'A'x269"` one_widely_used_utility_is_here --help on Debian (blame it) Caught signal 11, on modern commercial Red Hat *** buffer overflow detected ***: /usr/bin/one_widely_used_utility_is_here terminated ======= Backtrace: ========= /lib/libc.so.6(__chk_fail+0x41)[0xc321c1] /lib/libc.so.6(__strcpy_chk+0x43)[0xc315e3] /usr/bin/one_widely_used_utility_is_here[0x805xxxc] /usr/bin/one_widely_used_utility_is_here[0x804xxxc] /lib/libc.so.6(__libc_start_main+0xdc)[0xb61e9c] /usr/bin/one_widely_used_utility_is_here[0x804xxx1] ======= Memory map: ======== 00336000-00341000 r-xp 00000000 08:02 2751047 /lib/libgcc_s-4.1.2-20080825.so.1 00341000-00342000 rwxp 0000a000 08:02 2751047 /lib/libgcc_s-4.1.2-20080825.so.1 008f3000-008f4000 r-xp 008f3000 00:00 0 [vdso] The same detector fails for more synthetic examples from the internet. How can we demonstrate a buffer overflow with modern non-GPL distributions (there is no Debian in the classes)? How can we disable canary-word checking in the stack? Disable the checked variants of strcpy/strcat? Write an example (in plain C) with a working buffer overrun?

    Read the article

  • Eclipse: Synchronizing project on Thumbdrive with PC

    - by Thomas Matthews
    I have a thumb drive (memory stick, flash drive, etc.) which I use for my projects when I am away from my home PC. Currently I access my Eclipse project directly from the thumb drive when it is connected to my PC. I would like to copy my files to the PC, develop on the PC, then "synchronize" with the thumb drive (update the files on the thumb drive). I also need the reverse process: synchronize the thumb drive files with the files on the PC. I have looked at the FileSync plugin, but it specifically says it is one-way. How can I synchronize my Eclipse project in both directions (PC to thumb drive and thumb drive to PC) on demand (I don't need this done automagically)?

    Read the article

  • Computing MD5SUM of large files in C#

    - by spkhaira
    I am using the following code to compute the MD5SUM of a file: byte[] b = System.IO.File.ReadAllBytes(file); string sum = BitConverter.ToString(new MD5CryptoServiceProvider().ComputeHash(b)); This works fine normally, but if I encounter a large file (~1 GB) - e.g. an ISO image or a DVD VOB file - I get an Out of Memory exception. However, I am able to compute the MD5SUM in Cygwin for the same file in about 10 seconds. Please suggest how I can get this to work for big files in my program. Thanks
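
    The out-of-memory comes from ReadAllBytes pulling the whole file into a single array; hashing a stream instead keeps memory use flat regardless of file size. A minimal sketch:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class Md5Util
        {
            public static string Md5OfFile(string path)
            {
                using (var md5 = MD5.Create())
                using (var stream = File.OpenRead(path))
                {
                    // ComputeHash(Stream) reads the file in buffered chunks, so
                    // even a multi-gigabyte ISO never sits in memory all at once.
                    byte[] hash = md5.ComputeHash(stream);
                    return BitConverter.ToString(hash);
                }
            }
        }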

    Read the article

  • Passing parameters between interrupt handlers on a Cortex-M3 ...

    - by Captain NedD
    I'm building a light kernel for a Cortex-M3. From a high-priority interrupt I'd like to invoke some code to run in a lower-priority interrupt and pass some parameters along. I don't want to use a queue to post work to the lower-priority interrupt; I just have a buffer and a size to pass to it. The programming manual says that the SVC interrupt handler is synchronous, which presumably means that if you invoke it from an interrupt that's a lower priority than SVC's handler, it gets called immediately (the upshot being that you can pass parameters to it as though it were a function call, a little like the BIOS calls in MS-DOS). I'd like to do it the other way: pass parameters from a high-priority interrupt to a lower-priority one (at the moment I'm doing it by leaving the parameters in a fixed location in memory). What's the best way to do this (if at all possible)? Thanks,

    Read the article

  • Using Objective-C blocks with old compiler

    - by H2CO3
    I'm using the open-source iPhone toolchain on Linux for developing for jailbroken iPhones. I'd like to take advantage of the new (4.0, 5.0) iOS SDK features, but I can't, as my old build of GCC doesn't understand the ^ block syntax. I noticed that blocks are just of type id (struct objc_object *). I know this from two resources: first, class-dump reports them as id; second, Apple docs clarify that "blocks can be retained". My question is, how can I take advantage of blocks using this knowledge? I thought of something like: // this is in SDK 4.x/5.x - (void) doSomethingWithBlock:((int)(^block)(int)); // and I modify it like: - (void) doSomethingWithBlock(id)block; The question is: how do I actually call it? How do I create blocks? I can, of course, create function pointers (IMPs in particular), but how do I achieve the object-like memory layout?

    Read the article

  • Response.TransmitFile() with UNC share (ASP.NET)

    - by frankadelic
    In the comments of this page: http://msdn.microsoft.com/en-us/library/12s31dhy.aspx ...it says that TransmitFile() cannot be used with UNC shares. As far as I can tell, this is the case; I get this error in the Event Log when I attempt it: TransmitFile failed. File Name: \\myshare1\e$\file.zip, Impersonation Enabled: 0, Token Valid: 1, HRESULT: 0x8007052e The suggested alternative is to use WriteFile(); however, this is problematic because it loads the file into memory. In my application the files are 200 MB, so this is not going to scale. Is there a method in ASP.NET for streaming files to users that is scalable (doesn't read the entire file into RAM or occupy ASP.NET threads) and works with UNC shares? Mapping a network drive as a virtual directory is not an option for us. I would like to avoid copying the file to the local web server as well. Thanks
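
    One pattern that addresses the memory side (though it still ties up a request thread for the duration of the download) is to copy from a FileStream opened on the UNC path to Response.OutputStream in fixed-size chunks with response buffering turned off. A minimal handler sketch, assuming the worker process or impersonated identity can read the share; the path shown is a placeholder:

        using System.IO;
        using System.Web;

        public class DownloadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Hypothetical path; in practice this would come from
                // configuration or a validated request parameter.
                const string path = @"\\myshare1\files\file.zip";

                context.Response.ContentType = "application/octet-stream";
                context.Response.AddHeader("Content-Disposition", "attachment; filename=file.zip");
                context.Response.BufferOutput = false;   // stream, don't accumulate in memory

                using (var source = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    var buffer = new byte[64 * 1024];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        if (!context.Response.IsClientConnected) break;
                        context.Response.OutputStream.Write(buffer, 0, read);
                    }
                }
            }
        }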

    Read the article

  • Is there a performance advantage in using a 64bit version of openCV+Emgu instead of 32bit?

    - by Jelly Amma
    Hello, I am developing an application that processes images captured in real time by a Point Grey camera (http://www.ptgrey.com/). The Point Grey SDK is a .NET wrapper and can be either 32-bit or 64-bit. To process the captured images I'm using a wrapper for OpenCV called Emgu CV (http://www.emgu.com/), which also comes in both 32-bit and 64-bit flavors. Now, being on Vista64, I went for the 64-bit versions of FlyCapture (Point Grey's SDK) and Emgu CV (which includes OpenCV in its install), hoping to maximize performance. Recently I've been wanting to call my FlyCapture+Emgu DLL code from XNA, which unfortunately only exists in 32-bit, and I realize that I may have to reinstall all those components in 32-bit, as I don't really want to go through IPC, remoting, etc. Apart from the obvious limit to memory space inherent to 32-bit, is there also a performance loss I should be expecting? How dramatic would that be, and why? Thanks in advance for any advice or explanation.
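
    Whatever combination you settle on, it is worth verifying at runtime which bitness the process actually ended up with, since a 32-bit XNA host forces every assembly it loads to run as 32-bit. A trivial check:

        using System;

        class BitnessCheck
        {
            static void Main()
            {
                // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
                Console.WriteLine("Running as {0}-bit", IntPtr.Size == 8 ? 64 : 32);
            }
        }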

    Read the article

  • Large XML files in dataset (outofmemory)

    - by dklein
    Hi folks, I am currently trying to load a fairly large XML file into a DataSet. The XML file is about 700 MB, and every time I try to read it, it takes a long time and after a while throws an "out of memory" exception. DataSet ds = new DataSet(); ds.ReadXml(pathtofile); The main problem is that I need to use those DataSets (I use them to import the data from the XML file into a Sybase database: foreach table, foreach row, foreach column) and that I have no schema file. I have already googled for a while, but I only found solutions that aren't usable for me.
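
    If the end goal is just to push the rows into Sybase, one alternative is to stream the XML with XmlReader and XNode.ReadFrom instead of materializing the whole 700 MB document as a DataSet. A minimal sketch, assuming the file consists of repeated row elements whose name you know ("row" below is a placeholder):

        using System.Collections.Generic;
        using System.Xml;
        using System.Xml.Linq;

        static class XmlImport
        {
            // Streams the file forward-only and yields one XElement per row
            // element, so only a single row is held in memory at a time.
            public static IEnumerable<XElement> ReadRows(string path, string rowElementName)
            {
                using (var reader = XmlReader.Create(path))
                {
                    reader.MoveToContent();
                    while (!reader.EOF)
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == rowElementName)
                        {
                            // ReadFrom consumes the element and advances past it.
                            yield return (XElement)XNode.ReadFrom(reader);
                        }
                        else
                        {
                            reader.Read();
                        }
                    }
                }
            }
        }

    Each yielded row can then be inserted into the database immediately, e.g. foreach (var row in XmlImport.ReadRows(pathtofile, "row")) { /* insert row.Elements() into Sybase */ }.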

    Read the article

  • Entity Framework 4 relationship management in POCO Templates - More lazy than FixupCollection?

    - by Joe Wood
    I've been taking a look at the EF4 POCO templates in Beta 2. The FixupCollection looks fine for maintaining model correctness after updating a relationship collection property (i.e. setting product.Orders would set the order.Product reference). But what about support for handling the scenario where some of those Order objects are removed from the context? This is the use case of maintaining cascading deletes in the in-memory model. The old typed DataSet model used to do this by performing the query through the container to derive the relationship results. Like the DataSet, this would require a reference to the ObjectContext inside the entity class so that it could query the top-level Order collection. Better support for separation of concerns in the ObjectContext would be required. It looks like EF is not suited to this use case, which DataSets handled out of the box... am I right?

    Read the article

  • Use javac fork attribute with IBM JDK

    - by avjaz
    Hi - I have a large Ant build that I'm working on that is currently running out of memory. One way I've read that can help mitigate this problem is to use javac fork="true" to run javac in a separate JVM. My problem is that I need to compile the project with the IBM JDK (this is not the JDK referenced by JAVA_HOME, and I would prefer it not to be). I tried setting the executable attribute of Ant's javac to the path to IBM's javac, but no joy (the project still won't compile). Ant's docs for the executable attribute state: Complete path to the javac executable to use in case of fork="yes". Defaults to the compiler of the Java version that is currently running Ant. Ignored if fork="no". Since Ant 1.6 this attribute can also be used to specify the path to the executable when using jikes, jvc, gcj or sj. Does anyone have any ideas? Thanks -

    Read the article
