Search Results

Search found 12872 results on 515 pages for 'memory alignment'.


  • Python Ctypes Read/WriteProcessMemory() - Error 5/998 Help!

    - by user299805
    Please don't get scared, but the following code should be easy to read if you are familiar with ctypes or C. I have been trying for so long to get my ReadProcessMemory() and WriteProcessMemory() functions working, and have tried almost every possibility but the right one. It launches the target program and returns its PID and handle just fine, but when I run the read function (forget the write for now) I always get error code 5 - ERROR_ACCESS_DENIED. I am launching the target as what I believe to be a CHILD process with PROCESS_ALL_ACCESS or CREATE_PRESERVE_CODE_AUTHZ_LEVEL. I have also tried PROCESS_ALL_ACCESS and PROCESS_VM_READ when I open the handle. I can also say that it is a valid memory location because I can find it in the running program with CheatEngine. As for VirtualQuery(), I get error code 998 - ERROR_NOACCESS, which further confirms my suspicion that it is some security/privilege problem. Any help or ideas would be very appreciated. Again, it's my whole program so far, don't let it scare you =P.

        from ctypes import *
        from ctypes.wintypes import BOOL
        import binascii

        BYTE     = c_ubyte
        WORD     = c_ushort
        DWORD    = c_ulong
        LPBYTE   = POINTER(c_ubyte)
        LPTSTR   = POINTER(c_char)
        HANDLE   = c_void_p
        PVOID    = c_void_p
        LPVOID   = c_void_p
        UNIT_PTR = c_ulong
        SIZE_T   = c_ulong

        class STARTUPINFO(Structure):
            _fields_ = [("cb", DWORD), ("lpReserved", LPTSTR), ("lpDesktop", LPTSTR),
                        ("lpTitle", LPTSTR), ("dwX", DWORD), ("dwY", DWORD),
                        ("dwXSize", DWORD), ("dwYSize", DWORD), ("dwXCountChars", DWORD),
                        ("dwYCountChars", DWORD), ("dwFillAttribute", DWORD), ("dwFlags", DWORD),
                        ("wShowWindow", WORD), ("cbReserved2", WORD), ("lpReserved2", LPBYTE),
                        ("hStdInput", HANDLE), ("hStdOutput", HANDLE), ("hStdError", HANDLE)]

        class PROCESS_INFORMATION(Structure):
            _fields_ = [("hProcess", HANDLE), ("hThread", HANDLE),
                        ("dwProcessId", DWORD), ("dwThreadId", DWORD)]

        class MEMORY_BASIC_INFORMATION(Structure):
            _fields_ = [("BaseAddress", PVOID), ("AllocationBase", PVOID),
                        ("AllocationProtect", DWORD), ("RegionSize", SIZE_T),
                        ("State", DWORD), ("Protect", DWORD), ("Type", DWORD)]

        class SECURITY_ATTRIBUTES(Structure):
            _fields_ = [("Length", DWORD), ("SecDescriptor", LPVOID), ("InheritHandle", BOOL)]

        class Main():
            def __init__(self):
                self.h_process = None
                self.pid = None

            def launch(self, path_to_exe):
                CREATE_NEW_CONSOLE = 0x00000010
                CREATE_PRESERVE_CODE_AUTHZ_LEVEL = 0x02000000
                startupinfo = STARTUPINFO()
                process_information = PROCESS_INFORMATION()
                security_attributes = SECURITY_ATTRIBUTES()
                startupinfo.dwFlags = 0x1
                startupinfo.wShowWindow = 0x0
                startupinfo.cb = sizeof(startupinfo)
                security_attributes.Length = sizeof(security_attributes)
                security_attributes.SecDescriptior = None
                security_attributes.InheritHandle = True
                if windll.kernel32.CreateProcessA(path_to_exe, None,
                                                  byref(security_attributes),
                                                  byref(security_attributes),
                                                  True, CREATE_PRESERVE_CODE_AUTHZ_LEVEL,
                                                  None, None,
                                                  byref(startupinfo),
                                                  byref(process_information)):
                    self.pid = process_information.dwProcessId
                    print "Success: CreateProcess - ", path_to_exe
                else:
                    print "Failed: Create Process - Error code: ", windll.kernel32.GetLastError()

            def get_handle(self, pid):
                PROCESS_ALL_ACCESS = 0x001F0FFF
                PROCESS_VM_READ = 0x0010
                self.h_process = windll.kernel32.OpenProcess(PROCESS_VM_READ, False, pid)
                if self.h_process:
                    print "Success: Got Handle - PID:", self.pid
                else:
                    print "Failed: Get Handle - Error code: ", windll.kernel32.GetLastError()
                windll.kernel32.SetLastError(10000)

            def read_memory(self, address):
                buffer = c_char_p("The data goes here")
                bufferSize = len(buffer.value)
                bytesRead = c_ulong(0)
                if windll.kernel32.ReadProcessMemory(self.h_process, address, buffer,
                                                     bufferSize, byref(bytesRead)):
                    print "Success: Read Memory - ", buffer.value
                else:
                    print "Failed: Read Memory - Error Code: ", windll.kernel32.GetLastError()
                windll.kernel32.CloseHandle(self.h_process)
                windll.kernel32.SetLastError(10000)

            def write_memory(self, address, data):
                count = c_ulong(0)
                length = len(data)
                c_data = c_char_p(data[count.value:])
                null = c_int(0)
                if not windll.kernel32.WriteProcessMemory(self.h_process, address, c_data,
                                                          length, byref(count)):
                    print "Failed: Write Memory - Error Code: ", windll.kernel32.GetLastError()
                    windll.kernel32.SetLastError(10000)
                else:
                    return False

            def virtual_query(self, address):
                basic_memory_info = MEMORY_BASIC_INFORMATION()
                windll.kernel32.SetLastError(10000)
                result = windll.kernel32.VirtualQuery(address, byref(basic_memory_info),
                                                      byref(basic_memory_info))
                if result:
                    return True
                else:
                    print "Failed: Virtual Query - Error Code: ", windll.kernel32.GetLastError()

        main = Main()
        address = None
        main.launch("C:\Program Files\ProgramFolder\Program.exe")
        main.get_handle(main.pid)
        #main.write_memory(address, "\x61")
        while 1:
            print '1 to enter an address'
            print '2 to virtual query address'
            print '3 to read address'
            choice = raw_input('Choice: ')
            if choice == '1':
                address = raw_input('Enter an address: ')
            if choice == '2':
                main.virtual_query(address)
            if choice == '3':
                main.read_memory(address)

    Thanks!
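    One thing worth checking, separate from the code above: on 64-bit Python, ctypes passes handles and addresses as 32-bit C ints unless argtypes/restype are declared, and a truncated handle or address can also surface as ERROR_ACCESS_DENIED. Below is a minimal sketch (not from the original post; the access-right constants and helper name are illustrative) of declaring the prototypes and opening the handle with PROCESS_VM_READ | PROCESS_QUERY_INFORMATION:

        from ctypes import (windll, wintypes, POINTER, byref,
                            c_size_t, c_void_p, create_string_buffer)

        kernel32 = windll.kernel32
        # Declare prototypes so 64-bit handles/addresses are not truncated.
        kernel32.OpenProcess.restype = wintypes.HANDLE
        kernel32.OpenProcess.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
        kernel32.ReadProcessMemory.restype = wintypes.BOOL
        kernel32.ReadProcessMemory.argtypes = [wintypes.HANDLE, c_void_p, c_void_p,
                                               c_size_t, POINTER(c_size_t)]
        kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

        PROCESS_VM_READ = 0x0010
        PROCESS_QUERY_INFORMATION = 0x0400

        def read_bytes(pid, address, size):
            # 'address' must be an integer, not the raw string returned by raw_input().
            h = kernel32.OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, False, pid)
            buf = create_string_buffer(size)
            read = c_size_t(0)
            ok = kernel32.ReadProcessMemory(h, c_void_p(address), buf, size, byref(read))
            kernel32.CloseHandle(h)
            return buf.raw[:read.value] if ok else None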

    Read the article

  • very large image manipulation and tiling

    - by Mohammad
    I need software, a Java program, or a method for tiling very large images (more than 140 MB). I have tried ImageMagick's convert tool, Photoshop, CorelDRAW and MATLAB (on Windows), but I keep running into memory problems: there is not enough memory, ImageMagick is very slow, and the result is not desirable. I don't know how I can load only a small part of the image from the hard disk into RAM, without loading the whole image.
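    A minimal Java sketch (not from the question; class and method names are illustrative) of decoding only one region of a large image with ImageIO's setSourceRegion, so a tile can be produced without holding the whole picture in RAM. How much memory this actually saves depends on the format plugin doing the decoding:

        import java.awt.Rectangle;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.util.Iterator;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageReadParam;
        import javax.imageio.ImageReader;
        import javax.imageio.stream.ImageInputStream;

        public class TileReader {
            // Decode a single tile (x, y, w, h) from a large image file.
            public static BufferedImage readTile(File file, int x, int y, int w, int h) throws Exception {
                ImageInputStream in = ImageIO.createImageInputStream(file);
                try {
                    Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
                    if (!readers.hasNext()) throw new IllegalArgumentException("No reader for " + file);
                    ImageReader reader = readers.next();
                    reader.setInput(in);
                    ImageReadParam param = reader.getDefaultReadParam();
                    param.setSourceRegion(new Rectangle(x, y, w, h)); // decode only this region
                    BufferedImage tile = reader.read(0, param);
                    reader.dispose();
                    return tile;
                } finally {
                    in.close();
                }
            }
        }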

    Read the article

  • Forcing the GC to collect JNI proxy objects

    - by SyBer
    Hi. While I do my best to clean up JNI objects to free native memory at the end of their use, there are still some that hang around for a long time, wasting native system memory. Is there any way to force the GC to give priority to collecting these JNI proxies? I mean, is there a way to make the GC concentrate on a particular kind of object, namely the JNI proxies? Thanks.
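    The GC offers no per-class collection hook, so a common workaround is to stop relying on collection for the native side at all and release it explicitly. A sketch with hypothetical names (the native methods are assumptions, standing in for whatever JNI routines allocate and free the memory):

        // Hypothetical JNI-backed resource that frees its native memory deterministically.
        class NativeBuffer {
            private long handle;                            // assumption: opaque pointer from native code

            NativeBuffer(long size) { handle = alloc(size); }

            private static native long alloc(long size);    // assumption: JNI allocation routine
            private static native void free(long handle);   // assumption: JNI release routine

            public void close() {
                if (handle != 0) {
                    free(handle);   // native memory is released here, not when the GC gets around to it
                    handle = 0;
                }
            }
        }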

    Read the article

  • Large data processing in x86 C# gives System.OutOfMemory exception

    - by Cool
    I am processing XML coming from a server which contains both images and data, in one C# function (compiled as 32-bit). When I try to parse this XML in memory it gives me a System.OutOfMemory exception. Is there any way to avoid this error? My guess is that the system cannot find a contiguous block of 50-100 MB of memory. (My PC has 8 GB of RAM and a quad-core CPU.)
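    A small C# sketch (illustrative only; the element name is an assumption) of the usual way around this: stream the document with XmlReader instead of materialising the whole DOM, so no single 50-100 MB allocation is ever needed:

        using System.Xml;

        class StreamingParser
        {
            // Walk the document node by node; only the current node is held in memory.
            static void Parse(string path)
            {
                var settings = new XmlReaderSettings { IgnoreWhitespace = true };
                using (XmlReader reader = XmlReader.Create(path, settings))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "image")
                        {
                            string payload = reader.ReadElementContentAsString(); // hypothetical element
                            // ... decode or hand off the payload here, then let it go out of scope
                        }
                    }
                }
            }
        }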

    Read the article

  • Bitmap byte-size after decoding?

    - by need4sid
    Hi! How can I determine/calculate the byte size of a bitmap (after decoding with BitmapFactory)? I need to know how much memory space it occupies, because I'm doing memory caching/management in my app. (file size is not enough, since these are jpg/png files) Thanks for any solutions!
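    A short sketch (Android's android.graphics.Bitmap API): the decoded size in memory is the per-row byte count times the number of rows, which is independent of the compressed file size:

        import android.graphics.Bitmap;

        final class BitmapSize {
            // Bytes occupied by the decoded pixels (rowBytes already accounts for the Bitmap.Config).
            static int byteSize(Bitmap bitmap) {
                return bitmap.getRowBytes() * bitmap.getHeight();
            }
        }

    On newer Android API levels there is also Bitmap.getByteCount(), which returns the same figure directly.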

    Read the article

  • How does heap compaction work quickly?

    - by Mason Wheeler
    They say that compacting garbage collectors are faster than traditional memory management because they only have to collect live objects, and by rearranging them in memory so everything's in one contiguous block, you end up with no heap fragmentation. But how can that be done quickly? It seems to me that that's equivalent to the bin-packing problem, which is NP-hard and can't be completed in a reasonable amount of time on a large dataset within the current limits of our knowledge about computation. What am I missing?
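    One detail that may resolve the puzzle: compacting collectors do not bin-pack. They slide (or copy) the live objects toward one end of the heap in their existing order, which takes a small constant number of linear passes rather than solving a packing problem. A toy illustration of the sliding idea (plain Java over an int array, not a real collector):

        import java.util.Arrays;

        public class SlidingCompaction {
            public static void main(String[] args) {
                int[] heap = {7, 3, 9, 4, 8, 2};                  // block contents
                boolean[] live = {true, false, true, true, false, true};

                int free = 0;                                     // next destination slot
                for (int i = 0; i < heap.length; i++) {
                    if (live[i]) {
                        heap[free++] = heap[i];                   // slide each live block down
                    }
                }
                Arrays.fill(heap, free, heap.length, 0);          // everything past 'free' is one hole
                System.out.println(Arrays.toString(heap));        // [7, 9, 4, 2, 0, 0]
            }
        }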

    Read the article

  • WCF streaming files

    - by Pinu
    I need to pass a memory stream to the WCF server. How do I add this data type to my data contract? I will eventually need to convert this to a memory stream and pass it on to my service layer. In the data contract:

        [DataMember]
        Stream str = null;

        public Stream File
        {
            get { return str; }
            set { str = value; }
        }
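    For reference, a hedged C# sketch (contract and member names are assumptions) of the shape WCF expects when the body is to be streamed: the streamed operation takes or returns a single Stream, and the binding uses streamed transfer; a MemoryStream can be passed wherever a Stream is expected:

        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IFileService
        {
            // The streamed payload must be the only thing in the message body.
            [OperationContract]
            void Upload(Stream data);

            [OperationContract]
            Stream Download();
        }

        // Binding side (config or code): transferMode="Streamed" on e.g. basicHttpBinding.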

    Read the article

  • Right to Left UI in iPhone (Hebrew)

    - by Reflog
    Hello, I'm struggling to create an RTL UI in an iPhone application. The framework doesn't seem to have any support for RTL languages. The only thing is the alignment inside labels, which is nice, but it conflicts with other controls' behaviour. The question is: is there working code for an RTL TableView? Something that would handle the disclosure buttons being on the left, section titles being right-aligned, and the index view being left-aligned? As far as I understand I cannot move the index view of the table view; I have to overlay some custom control... Any suggestions/pointers/examples? P.S. This is not a duplicate of this question: http://stackoverflow.com/questions/1677988/right-to-left-alignment-for-uitableview since what I am looking for is a deeper customization, not just a new type of CellView. (Update: Mar 10) For now I've removed support for the indexView from the tableView entirely, implemented the cells as custom views myself (with disclosure buttons on the left), and customized the header/footer of the table as well. The only thing that is left is the Index View. Thanks in advance!

    Read the article

  • question about sorting

    - by skydoor
    Bubble sort is O(n) at best, O(n^2) at worst, and its memory usage is O(1). Merge sort is always O(n log n), but its memory usage is O(n). Which algorithm would we use to implement a function that takes an array of integers and returns the max integer in the collection, assuming that the length of the array is less than 1000? What if the array length is greater than 1000?
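    Worth noting as a baseline (a plain sketch, not from the question): finding the maximum needs no sorting at all; a single linear pass is O(n) time and O(1) extra memory whether the array holds 1,000 elements or far more:

        public class MaxScan {
            static int max(int[] values) {
                if (values == null || values.length == 0)
                    throw new IllegalArgumentException("empty array");
                int best = values[0];
                for (int i = 1; i < values.length; i++)
                    if (values[i] > best) best = values[i];   // keep the largest value seen so far
                return best;
            }

            public static void main(String[] args) {
                System.out.println(max(new int[] {3, 41, 7, 19}));   // prints 41
            }
        }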

    Read the article

  • Suggestion for a Data Structure!

    - by Jay
    I have the following requirements for a data structure: direct access to an element with the help of a key (the key will be an integer, its range the same as the integer range); avoid memory allocation in chunks (allocate contiguous memory for the data structure, including the data); and the structure should be able to grow dynamically. Which data structure would you suggest? Any pointers in that direction will also be of help.
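    One concrete possibility, as a sketch only (names and sizing policy are illustrative): an open-addressing hash table whose slots live in a single contiguous std::vector, giving direct access by integer key, one allocation per capacity, and growth by rehashing into a larger vector:

        #include <cstddef>
        #include <cstdint>
        #include <utility>
        #include <vector>

        class ContiguousMap {
            struct Slot { int32_t key; int32_t value; bool used; };
            std::vector<Slot> slots_;      // one contiguous block holds keys and data
            std::size_t count_ = 0;

            std::size_t probe(int32_t key) const {
                std::size_t i = static_cast<std::uint32_t>(key) % slots_.size();
                while (slots_[i].used && slots_[i].key != key)
                    i = (i + 1) % slots_.size();               // linear probing
                return i;
            }

        public:
            explicit ContiguousMap(std::size_t capacity = 16) : slots_(capacity, Slot{0, 0, false}) {}

            void put(int32_t key, int32_t value) {
                if (2 * (count_ + 1) > slots_.size()) {        // keep load factor under 0.5
                    ContiguousMap bigger(slots_.size() * 2);   // grow: rehash into a larger vector
                    for (const Slot& s : slots_)
                        if (s.used) bigger.put(s.key, s.value);
                    *this = std::move(bigger);
                }
                std::size_t i = probe(key);
                if (!slots_[i].used) { slots_[i].used = true; slots_[i].key = key; ++count_; }
                slots_[i].value = value;
            }

            const int32_t* get(int32_t key) const {
                std::size_t i = probe(key);
                return slots_[i].used ? &slots_[i].value : nullptr;   // null if the key is absent
            }
        };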

    Read the article

  • How to open DataSet in Visual Studio 2008 faster?

    - by Ekkapop
    When I open a DataSet in Visual Studio 2008 to design or modify it, it always takes a very long time (more than five minutes) before I can continue with my job. While I'm waiting I can't do anything in Visual Studio; moreover, CPU and memory usage grow dramatically. Is there any way to reduce this waiting time? Hardware (desktop): CPU Intel Q6600, 4 GB memory, 320 GB 7200 rpm HDD, Windows XP 32-bit with Service Pack 3.

    Read the article

  • one high-end server with one Application Server or multiple Application Servers?

    - by elgcom
    If I have a high-end server, for example with 1 TB of memory and 8 x 4-core CPUs... will it bring more performance if I run multiple App Servers (on different JVMs) rather than just one App Server? On the App Server I will run some services (EARs with message-driven beans) which exchange messages with each other. By the way, does 64-bit Java now have no memory limitation any more? http://java.sun.com/products/hotspot/whitepaper.html#64

    Read the article

  • C Programming - My program is good enough for my assignment but I know its not good

    - by Joe
    Hi there. I'm just starting an assignment for uni and it's raised a question for me: I don't understand how to return a string from a function without having a memory leak.

        char* trim(char* line)
        {
            int start = 0;
            int end = strlen(line) - 1;

            /* find the start position of the string */
            while(isspace(line[start]) != 0)
            {
                start++;
            }
            //printf("start is %d\n", start);

            /* find the position end of the string */
            while(isspace(line[end]) != 0)
            {
                end--;
            }
            //printf("end is %d\n", end);

            /* calculate string length and add 1 for the sentinel */
            int len = end - start + 2;

            /* initialise char array to len and read in characters */
            int i;
            char* trimmed = calloc(sizeof(char), len);
            for(i = 0; i < (len - 1); i++)
            {
                trimmed[i] = line[start + i];
            }
            trimmed[len - 1] = '\0';
            return trimmed;
        }

    As you can see, I am returning a pointer to char, which is an array. I found that if I tried to make the 'trimmed' array by something like char trimmed[len]; then the compiler would throw up a message saying that a constant was expected on this line. I assume this meant that for some reason you can't use variables as the array length when initialising an array, although something tells me that can't be right. So instead I made my array by allocating some memory to a char pointer. I understand that this function is probably waaaaay sub-optimal for what it is trying to do, but what I really want to know is:

    1. Can you normally initialise an array using a variable to declare the length, like char trimmed[len];?
    2. If I had an array of that type (char trimmed[]), would it have the same return type as a pointer to char (i.e. char*)?
    3. If I make my array by callocing some memory and allocating it to a char pointer, how do I free this memory? It seems to me that once I have returned this array, I can't access it to free it, as it is a local variable.

    Many thanks in advance
    Joe
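    On question 3, a small sketch of the calling side (assuming the trim() above): the caller owns the returned buffer, so it frees it when finished; the function itself does not need to free its own result:

        #include <stdio.h>
        #include <stdlib.h>

        char *trim(char *line);   /* the function from the question */

        int main(void)
        {
            char line[] = "   hello world   ";
            char *trimmed = trim(line);   /* heap buffer now owned by the caller */
            printf("[%s]\n", trimmed);
            free(trimmed);                /* released exactly once, when no longer needed */
            return 0;
        }

    On question 1, C99 does allow char trimmed[len] as a variable-length array, but a VLA lives on the stack and cannot be returned from the function.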

    Read the article

  • x86_64 Assembly Command Line Arguments

    - by Brandon oubiub
    I'm new to assembly, and I just got familiar with the call stack, so bear with me. To get the command line arguments in x86_64 on Mac OS X, I can do the following:

        _main:
            sub     rsp, 8          ; 16 byte stack alignment
            mov     rax, 0
            mov     rdi, format
            mov     rsi, [rsp + 32]
            call    _printf

    where format is "%s". rsi gets set to argv[0]. So, from this, I drew out what (I think) the stack looks like initially:

        top of stack      <- rsp after alignment
        return address    <- rsp at beginning (aligned rsp + 8)
        [something]       <- rsp + 16
        argc              <- rsp + 24
        argv[0]           <- rsp + 32
        argv[1]           <- rsp + 40
        ...               ...
        bottom of stack

    And so on. Sorry if that's hard to read. I'm wondering what [something] is. After a few tests, I find that it is usually just 0. However, occasionally, it is some (seemingly) random number. EDIT: Also, could you tell me if the rest of my stack drawing is correct?

    Read the article

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc??

    More details: My program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input & CL) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc() which is out of my control. Here is some relevant gdb output:

        (gdb) info threads
        [...]
        * 11 Thread 0x7ffff0056700 (LWP 73449)  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        [...]
        (gdb) thread 11
        [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        (gdb) bt
        #0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        #1  0x00007ffff6a96085 in abort () from /lib64/libc.so.6
        #2  0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6
        #3  0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6
        #4  0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6
        #5  0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        #7  read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046
        #8  0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239
        #9  handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806
        #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557
        #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1
        #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0
        #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6
        (gdb) f 6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        286         res = calloc(size, 1);
        (gdb) p size
        $2 = 814080
        (gdb)

    The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 2067285
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    The program is not out of memory, it's using 41GB on a machine with 256GB available:

        $ top -b -n 1 | grep gmapper
        73437 user      20   0 41.5g  16g  15g T  0.0  6.6  55:17.24 gmapper-ls

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:        258437     195567      62869          0         82     189677
        -/+ buffers/cache:       5807     252629
        Swap:            0          0          0

    I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.

    Read the article

  • Tuning MySQL to take advantage of a 4GB VPS

    - by alistair.mp
    Hello, we're running a large site at the moment which has a dedicated VPS for its database server, which runs MySQL and nothing else. At the moment all four CPU cores are running close to 100% all of the time, but memory usage sticks at around 268 MB out of an available 4096 MB. I'm wondering what we can do to better utilise the memory and reduce the CPU load by tweaking MySQL's settings? Here is what we currently have in my.cnf: http://pastie.org/private/hxeji9o8n3u9up9mvtinbq Thanks
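    The pasted my.cnf is not reproduced here, so only the general shape can be sketched. For an InnoDB-heavy workload the single most effective lever is usually the buffer pool; the values below are placeholders, not recommendations for this site:

        # my.cnf sketch -- placeholder values, tune against the actual workload
        [mysqld]
        innodb_buffer_pool_size = 2G      # let InnoDB cache data and indexes in RAM
        innodb_log_file_size    = 256M    # larger redo logs reduce checkpoint pressure
        key_buffer_size         = 256M    # only matters for MyISAM indexes
        query_cache_size        = 64M     # can help read-heavy sites; adds locking overhead
        tmp_table_size          = 64M
        max_heap_table_size     = 64M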

    Read the article

  • Calling COM Library From XBAP

    - by Benny
    I am trying to call an old COM library from my XBAP and continue to receive the following exception: System.AccessViolationException was unhandled Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. I have tried adding the HKLM value for RunUnrestricted to no avail. I don't get anything else but this error when calling the library. Any ideas? (This library even works from a pure ASP.NET app)

    Read the article

  • What is NSString in struct?

    - by 4thSpace
    I've defined a struct and want to assign one of its values to an NSMutableDictionary. When I try, I get an EXC_BAD_ACCESS. Here is the code:

        // in .h file
        typedef struct {
            NSString *valueOne;
            NSString *valueTwo;
        } myStruct;

        myStruct aStruct;

        // in .m file
        - (void)viewDidLoad {
            [super viewDidLoad];
            aStruct.valueOne = @"firstValue";
        }

        // at some later time
        [myDictionary setValue:aStruct.valueOne forKey:@"key1"]; // dies here with EXC_BAD_ACCESS

    This is the output in the debugger console:

        (gdb) p aStruct.valueOne
        $1 = (NSString *) 0xf41850

    Is there a way to tell what the value of aStruct.valueOne is? Since it is an NSString, why does the dictionary have such a problem with it?

    ------------- EDIT -------------

    This edit is based on some comments below. The problem appears to be in the struct memory allocation. I have no issues assigning the struct value to the dictionary in viewDidLoad, as mentioned in one of the comments. The problem is that later on, I run into an issue with the struct. Just before the error, I do:

        po aStruct.oneValue
        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: KERN_PROTECTION_FAILURE at address: 0x00000000
        0x9895cedb in objc_msgSend ()
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (_NSPrintForDebugger) will be abandoned.

    This occurs just before the EXC_BAD_ACCESS:

        NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"];
        NSString *date = [formatter stringFromDate:[NSDate date]];
        [formatter release];
        aStruct.valueOne = date;

    So the memory issue is most likely in my releasing of formatter. The date var has no retain. Should I instead be doing

        NSString *date = [[formatter stringFromDate:[NSDate date]] retain];

    which does work, but then I'm left with a memory leak.

    Read the article

  • GCC - How to realign stack?

    - by psihodelia
    I am trying to build an application which uses pthreads and the __m128 SSE type. According to the GCC manual, the default stack alignment is 16 bytes, and using __m128 requires 16-byte alignment. My target CPU supports SSE. I use a GCC compiler which doesn't support runtime stack realignment (e.g. -mstackrealign), and I cannot use any other GCC compiler version. My test application looks like:

        #include <xmmintrin.h>
        #include <pthread.h>

        void *f(void *x) {
            __m128 y;
            ...
        }

        int main(void) {
            pthread_t p;
            pthread_create(&p, NULL, f, NULL);
        }

    The application generates an exception and exits. After some simple debugging (printf "%p", &y), I found that the variable y is not 16-byte aligned. My question is: how can I realign the stack properly (16-byte) without using any GCC flags and attributes (they don't help)? Should I use GCC inline assembler within the thread function f()?
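    One workaround that needs no flags or function attributes, shown as a sketch (the stored value is illustrative): keep the __m128 in manually aligned storage inside the thread function, so correctness no longer depends on how the thread's stack happens to be aligned:

        #include <stdint.h>
        #include <xmmintrin.h>

        void *f(void *x) {
            /* Reserve extra bytes and round the pointer up to a 16-byte boundary. */
            unsigned char raw[sizeof(__m128) + 15];
            __m128 *y = (__m128 *)(((uintptr_t)raw + 15) & ~(uintptr_t)15);

            *y = _mm_set1_ps(1.0f);   /* aligned access, regardless of stack alignment */
            /* ... work with *y ... */
            return 0;
        }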

    Read the article

  • Any tool(s) for knowing the layout (segments) of running process in Windows?

    - by claws
    I've always been curious about how exactly a process looks in memory. What are the different segments (parts) in it? How exactly are the program (on disk) and the process (in memory) related? My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process

    In my quest, I finally found an answer: this excellent article that cleared up most of my queries: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html

    In the above article, the author shows how to get the different segments of a process (on Linux) and compares them with the corresponding ELF file. I'm quoting that section here:

        Curious to see the real layout of process segments? We can use the /proc/<pid>/maps file to reveal it. <pid> is the PID of the process we want to observe. Before we move on, we have a small problem here. Our test program runs so fast that it ends before we can even dump the related /proc entry. I use gdb to solve this. You can use another trick such as inserting sleep() before it calls return(). In a console (or a terminal emulator such as xterm) do:

            $ gdb test
            (gdb) b main
            Breakpoint 1 at 0x8048376
            (gdb) r
            Breakpoint 1, 0x08048376 in main ()

        Hold right here, open another console and find out the PID of program "test". If you want the quick way, type:

            $ cat /proc/`pgrep test`/maps

        You will see an output like below (you might get different output):

            [1]  0039d000-003b2000 r-xp 00000000 16:41 1080084    /lib/ld-2.3.3.so
            [2]  003b2000-003b3000 r--p 00014000 16:41 1080084    /lib/ld-2.3.3.so
            [3]  003b3000-003b4000 rw-p 00015000 16:41 1080084    /lib/ld-2.3.3.so
            [4]  003b6000-004cb000 r-xp 00000000 16:41 1080085    /lib/tls/libc-2.3.3.so
            [5]  004cb000-004cd000 r--p 00115000 16:41 1080085    /lib/tls/libc-2.3.3.so
            [6]  004cd000-004cf000 rw-p 00117000 16:41 1080085    /lib/tls/libc-2.3.3.so
            [7]  004cf000-004d1000 rw-p 004cf000 00:00 0
            [8]  08048000-08049000 r-xp 00000000 16:06 66970      /tmp/test
            [9]  08049000-0804a000 rw-p 00000000 16:06 66970      /tmp/test
            [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0
            [11] bffeb000-c0000000 rw-p bffeb000 00:00 0
            [12] ffffe000-fffff000 ---p 00000000 00:00 0

        Note: I added a number on each line as a reference. Back in gdb, type:

            (gdb) q

        So, in total, we see 12 segments (also known as Virtual Memory Areas--VMAs).

    But I want to know about Windows processes and the PE file format. Are there any tools for getting the layout (segments) of a running process in Windows? Any other good resources for learning more on this subject? EDIT: Are there any good articles which show the mapping between PE file sections & VA segments?

    Read the article

  • Add Data to DataRepeater Control in winform

    - by Mohsan
    Hi. Visual Studio 2008 Service Pack 1 comes with the Visual Basic Power Packs and includes the DataRepeater control. I want to know how I can add data to this control. I have in-memory data. The examples I found on the net are about binding a DataSet to the DataRepeater by fetching data from a database; I want to bind in-memory data. How do I do this?

    Read the article

  • ReadProcessMemory to memcpy conversion. Need help

    - by Phil
    I'm using:

        ReadProcessMemory(hProcess, (PVOID)(dwEngine_DLL + 0x2E15C8 + i), &memSnap[i], 1, NULL); // store the memory into a byte array

    to store a section of memory into an array of bytes, but I realized this is sloppy since I'm in the same address space. I'm just not sure how to do the same thing with memcpy.
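    Within the same address space the call reduces to a plain copy. A short sketch (the helper name is illustrative; the variables in the usage comment are the ones from the snippet above):

        #include <cstddef>
        #include <cstdint>
        #include <cstring>

        // Same-process equivalent of ReadProcessMemory: just copy from the address directly.
        inline void snapshot(unsigned char* dest, std::uintptr_t address, std::size_t count) {
            std::memcpy(dest, reinterpret_cast<const void*>(address), count);
        }

        // e.g. snapshot(&memSnap[i], dwEngine_DLL + 0x2E15C8 + i, 1);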

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analysis

    This question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling had to fill in the gaps: it shows that 40% of my library's runtime is devoted to creating and tearing down HMAC_CTX's behind the scenes when using:

        unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len,
                            const unsigned char *d, int n, unsigned char *md,
                            unsigned int *md_len);

    There are also two additional functions to create and destroy an HMAC_CTX explicitly:

        HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called.

        HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required.

    These two function calls are prefixed with "The following functions may be used if the message is not completely stored in memory". My data fits entirely in memory, so I chose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is made use of by the following two functions:

        HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data).

        HMAC_Final() places the message authentication code in md, which must have space for the hash function output.

    The Scope of the Application

    My application generates an authentic (HMAC, which is also used as a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows / Linux as OS; nginx, Apache and IIS as web servers; and Python / .NET and C++ web-server filters. The description above should clarify that the library needs to be thread safe and potentially have resumable processing state -- i.e., lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture).

    The Question

    How do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resumable-state way? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So, (1) can probably be done using thread-local memory -- but how do I reuse the CTX's? Does the HMAC_Final() call make the CTX reusable? (2) optional: in this case I would have to create a pool of CTX's. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary will be useful.
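    A sketch of reusing one context across messages with the OpenSSL 1.0-era API referenced above (the digest choice and output sizing are assumptions): after HMAC_Final(), calling HMAC_Init_ex() with a NULL key and NULL md re-arms the same context with the already-processed key, avoiding the per-call setup and teardown that the one-shot HMAC() performs internally:

        #include <stddef.h>
        #include <openssl/evp.h>
        #include <openssl/hmac.h>

        /* Compute HMACs for n messages while paying the key-setup cost only once. */
        static void hmac_many(const void *key, int key_len,
                              const unsigned char **msgs, const size_t *lens, size_t n,
                              unsigned char *out /* room for n * EVP_MAX_MD_SIZE bytes */)
        {
            HMAC_CTX ctx;
            unsigned int md_len;
            size_t i;

            HMAC_CTX_init(&ctx);
            HMAC_Init_ex(&ctx, key, key_len, EVP_sha1(), NULL);     /* digest is an assumption */
            for (i = 0; i < n; i++) {
                HMAC_Init_ex(&ctx, NULL, 0, NULL, NULL);            /* reuse key, reset state */
                HMAC_Update(&ctx, msgs[i], lens[i]);
                HMAC_Final(&ctx, out + i * EVP_MAX_MD_SIZE, &md_len);
            }
            HMAC_CTX_cleanup(&ctx);
        }

    Keeping one such context per thread (in thread-local storage or a small pool) preserves thread safety, since an HMAC_CTX must not be shared between threads.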

    Read the article

  • do I need to close an audio Clip?

    - by Michael
    I have an application that processes real-time data and is supposed to beep when a certain event occurs. The triggering event can occur multiple times per second, and if the beep is already playing when another event triggers, the code is just supposed to ignore it (as opposed to interrupting the current beep and starting a new one). Here is the basic code:

        Clip clickClip;

        public void prepareProcess() {
            super.prepareProcess();
            clickClip = null;
            try {
                clipFile = new File("C:/WINDOWS/Media/CHIMES.wav");
                ais = AudioSystem.getAudioInputStream(clipFile);
                clickClip = AudioSystem.getClip();
                clickClip.open(ais);
                fileIsLoaded = true;
            } catch (Exception ex) {
                clickClip = null;
                fileIsLoaded = false;
            }
        }

        public void playSound() {
            if (fileIsLoaded) {
                if ((clickClip == null) || (!clickClip.isRunning())) {
                    try {
                        clickClip.setFramePosition(0);
                        clickClip.start();
                    } catch (Exception ex) {
                        System.out.println("Cannot play click noise");
                        ex.printStackTrace();
                    }
                }
            }
        }

    The prepareProcess method gets run once in the beginning, and the playSound method is called every time a triggering event occurs. My question is: do I need to close the clickClip object? I know I could add an actionListener to monitor for a Stop event, but since the event occurs so frequently I'm worried the extra processing is going to slow down the real-time data collection. The code seems to run fine, but my worry is memory leaks. The code above is based on an example I found while searching the net, but the example used an actionListener to close the Clip, specifically "to eliminate memory leaks that would occur when the stop method wasn't implemented". My program is intended to run for hours, so any memory leaks I have will cause problems. I'll be honest: I have no idea how to verify whether or not I've got a problem. I'm using NetBeans, and running the memory profiler just gave me a huge list of things that I don't know how to read. This is supposed to be the simple part of the program, and I'm spending hours on it. Any help would be greatly appreciated! Michael

    Read the article
