Search Results

Search found 161 results on 7 pages for 'memcpy'.

Page 4/7

  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
    We have a C++ shared library that uses ZeroC's Ice library for RPC and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes and keeps open file descriptors to servers. Additionally, we have a few mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications so we don't have control over when the process calls fork(), so we need a way to safely shut down Ice and lock our mutexes while the process forks. Reading the POSIX standard on pthread_atfork() about handling mutexes and internal state: Alternatively, some libraries might have been able to supply just a child routine that reinitializes the mutexes in the library and all associated states to some known value (for example, what it was when the image was originally executed). This approach is not possible, though, because implementations are allowed to fail *_init() and *_destroy() calls for mutexes and locks if the mutex or lock is still locked. In this case, the child routine is not able to reinitialize the mutexes and locks. On Linux, this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile. This program is linked from this good thread. Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to have pointers to mutexes and then have the child make a new pthread_mutex_t on the heap and leave the parent's mutexes alone, thereby having a small memory leak. The only issue is how to reinitialize the state of the library, and I'm thinking of resetting a pthread_once_t. Maybe, because POSIX has an initializer for pthread_once_t, it can be reset to its initial state. #include <pthread.h> #include <stdlib.h> #include <string.h> static pthread_once_t once_control = PTHREAD_ONCE_INIT; static pthread_mutex_t *mutex_ptr = 0; static void setup_new_mutex() { mutex_ptr = malloc(sizeof(*mutex_ptr)); pthread_mutex_init(mutex_ptr, 0); } static void prepare() { pthread_mutex_lock(mutex_ptr); } static void parent() { pthread_mutex_unlock(mutex_ptr); } static void child() { // Reset the once control. pthread_once_t once = PTHREAD_ONCE_INIT; memcpy(&once_control, &once, sizeof(once_control)); setup_new_mutex(); } static void init() { setup_new_mutex(); pthread_atfork(&prepare, &parent, &child); } int my_library_call(int arg) { pthread_once(&once_control, &init); pthread_mutex_lock(mutex_ptr); // Do something here that requires the lock. int result = 2*arg; pthread_mutex_unlock(mutex_ptr); return result; } In the above sample, in child() I only reset the pthread_once_t by making a copy of a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT. A new pthread_mutex_t is only created when the library function is invoked in the child process. This is hacky, but it's maybe the best way of dealing with this while skirting the standards. If the pthread_once_t contains a mutex then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap then it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special, which would defeat this. 
Searching the comp.programming.threads group for pthread_atfork() shows a lot of good discussion and how little the POSIX standard really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers, and it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?

    Read the article

  • Counting leaf nodes in hierarchical tree

    - by timn
    This code fills a tree with values based on their depths. But when traversing the tree, I cannot manage to determine the actual number of children. node->cnt is always 0. I've already tried node->parent->cnt but that gives me lots of warnings in Valgrind. Anyway, is the tree type I've chosen even appropriate for my purpose? #include <string.h> #include <stdio.h> #include <stdlib.h> #ifndef NULL #define NULL ((void *) 0) #endif // ---- typedef struct _Tree_Node { // data ptr void *p; // number of nodes int cnt; struct _Tree_Node *nodes; // parent nodes struct _Tree_Node *parent; } Tree_Node; typedef struct { Tree_Node root; } Tree; void Tree_Init(Tree *this) { this->root.p = NULL; this->root.cnt = 0; this->root.nodes = NULL; this->root.parent = NULL; } Tree_Node* Tree_AddNode(Tree_Node *node) { if (node->cnt == 0) { node->nodes = malloc(sizeof(Tree_Node)); } else { node->nodes = realloc( node->nodes, (node->cnt + 1) * sizeof(Tree_Node) ); } Tree_Node *res = &node->nodes[node->cnt]; res->p = NULL; res->cnt = 0; res->nodes = NULL; res->parent = node; node->cnt++; return res; } // ---- void handleNode(Tree_Node *node, int depth) { int j = depth; printf("\n"); while (j--) { printf(" "); } printf("depth=%d ", depth); if (node->p == NULL) { goto out; } printf("value=%s cnt=%d", node->p, node->cnt); out: for (int i = 0; i < node->cnt; i++) { handleNode(&node->nodes[i], depth + 1); } } Tree tree; int curdepth; Tree_Node *curnode; void add(int depth, char *s) { printf("%s: depth (%d) > curdepth (%d): %d\n", s, depth, curdepth, depth > curdepth); if (depth > curdepth) { curnode = Tree_AddNode(curnode); Tree_Node *node = Tree_AddNode(curnode); node->p = malloc(strlen(s)); memcpy(node->p, s, strlen(s)); curdepth++; } else { while (curdepth - depth > 0) { if (curnode->parent == NULL) { printf("Illegal nesting\n"); return; } curnode = curnode->parent; curdepth--; } Tree_Node *node = Tree_AddNode(curnode); node->p = malloc(strlen(s)); memcpy(node->p, s, strlen(s)); } } void main(void) { Tree_Init(&tree); curnode = &tree.root; curdepth = 0; add(0, "1"); add(1, "1.1"); add(2, "1.1.1"); add(3, "1.1.1.1"); add(4, "1.1.1.1.1"); add(2, "1.1.2"); add(0, "2"); handleNode(&tree.root, 0); }
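    For reference, a minimal counting sketch under the assumption of the Tree_Node layout above (Tree_CountLeaves is a made-up helper). Two things worth noting: node->p is never null-terminated, because add() allocates and copies only strlen(s) bytes, so printf("%s", node->p) reads past the buffer; and since Tree_AddNode stores children by value in a realloc'd array, previously saved child and parent pointers can be invalidated when the array grows, which would explain the Valgrind warnings.

        /* Counts nodes that have no children. */
        int Tree_CountLeaves(const Tree_Node *node) {
            if (node->cnt == 0) {
                return 1;                        /* no children: this is a leaf */
            }
            int leaves = 0;
            for (int i = 0; i < node->cnt; i++) {
                leaves += Tree_CountLeaves(&node->nodes[i]);
            }
            return leaves;
        }

        /* Safer string copy for add(): allocate one extra byte and terminate. */
        /*   node->p = malloc(strlen(s) + 1);   */
        /*   memcpy(node->p, s, strlen(s) + 1); */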

    Read the article

  • How to copy bytes from buffer into the managed struct?

    - by Chupo_cro
    I have a problem with getting the code to work in a managed environment (VS2008 C++/CLI Win Forms App). The problem is I cannot declare the unmanaged struct (is that even possible?) inside the managed code, so I've declared a managed struct but now I have a problem with how to copy bytes from the buffer into that struct. Here is the pure C++ code that obviously works as expected: typedef struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; int offset = 0; int filesize = 256; // simulates filesize int point_num = 10; // simulates number of records int main () { char *buffer_dyn = new char[filesize]; // allocate RAM // here, the file would have been read into the buffer buffer_dyn[0xa8] = 0x1e; // simulates the speed data (1e 00 00 00) buffer_dyn[0xa9] = 0x00; buffer_dyn[0xaa] = 0x00; buffer_dyn[0xab] = 0x00; offset = 0x90; // if the data with this offset is transferred from buffer // to struct, int speed is aligned with the buffer at the // offset of 0xa8 track_point *points = new track_point[point_num]; points[0].speed = 0xff; // (debug) it should change into 0x1e memcpy(&points[0],buffer_dyn+offset,32); cout << "offset: " << offset << "\r\n"; //cout << "speed: " << points[0].speed << "\r\n"; printf ("speed : 0x%x\r\n",points[0].speed); printf("byte at offset 0xa8: 0x%x\r\n",(unsigned char)buffer_dyn[0xa8]); // should be 0x1e delete[] buffer_dyn; // release RAM delete[] points; /* What I need is to rewrite the lines 29 and 31 to work in the managed code (VS2008 Win Forms C++/CLI) What should I have after: array<track_point^>^ points = gcnew array<track_point^>(point_num); so I can copy 32 bytes from buffer_dyn to the managed struct declared as typedef ref struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; */ return 0; } Here is the paste to codepad.org so it can be seen the code is OK. What I need is to rewrite these two lines: track_point *points = new track_point[point_num]; memcpy(&points[0],buffer_dyn+offset,32); to something that will work in a managed application. I wrote: array<track_point^>^ points = gcnew array<track_point^>(point_num); and am now trying to reproduce the described copying of the data from the buffer over the struct, but I have no idea how it should be done. Alternatively, if there is a way to use an unmanaged struct in the same way shown in my code, then I would like to avoid working with the managed struct.

    Read the article

  • iPhone - UIImage Leak, CGBitmapContextCreateImage Leak

    - by bbullis21
    Alright I am having a world of difficulty tracking down this memory leak. When running this script I do not see any memory leaking, but my objectalloc is climbing. Instruments points to CGBitmapContextCreateImage create_bitmap_data_provider malloc, this takes up 60% of my objectalloc. This code is called several times with a NSTimer. //GET IMAGE FROM RESOURCE DIR NSString * fileLocation = [[NSBundle mainBundle] pathForResource:imgMain ofType:@"jpg"]; NSData * imageData = [NSData dataWithContentsOfFile:fileLocation]; UIImage * blurMe = [UIImage imageWithData:imageData]; NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; UIImage * scaledImage = [blurMe _imageScaledToSize:CGSizeMake(blurMe.size.width / dblBlurLevel, blurMe.size.width / dblBlurLevel) interpolationQuality:3.0]; UIImage * labelImage = [scaledImage _imageScaledToSize:blurMe.size interpolationQuality:3.0]; UIImage * imageCopy = [[UIImage alloc] initWithCGImage:labelImage.CGImage]; [pool drain]; // deallocates scaledImage and labelImage imgView.image = imageCopy; [imageCopy release]; Below is the blur function. I believe the objectalloc issue is located in here. Maybe I just need a pair of fresh eyes. Would be great if someone could figure this out. Sorry it is kind of long... I'll try and shorten it. @implementation UIImage(Blur) - (UIImage *)blurredCopy:(int)pixelRadius { //VARS unsigned char *srcData, *destData, *finalData; CGContextRef context = NULL; CGColorSpaceRef colorSpace; void * bitmapData; int bitmapByteCount; int bitmapBytesPerRow; //IMAGE SIZE size_t pixelsWide = CGImageGetWidth(self.CGImage); size_t pixelsHigh = CGImageGetHeight(self.CGImage); bitmapBytesPerRow = (pixelsWide * 4); bitmapByteCount = (bitmapBytesPerRow * pixelsHigh); colorSpace = CGColorSpaceCreateDeviceRGB(); if (colorSpace == NULL) { return NULL; } bitmapData = malloc( bitmapByteCount ); if (bitmapData == NULL) { CGColorSpaceRelease( colorSpace ); } context = CGBitmapContextCreate (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst ); if (context == NULL) { free (bitmapData); } CGColorSpaceRelease( colorSpace ); free (bitmapData); if (context == NULL) { return NULL; } //PREPARE BLUR size_t width = CGBitmapContextGetWidth(context); size_t height = CGBitmapContextGetHeight(context); size_t bpr = CGBitmapContextGetBytesPerRow(context); size_t bpp = (CGBitmapContextGetBitsPerPixel(context) / 8); CGRect rect = {{0,0},{width,height}}; CGContextDrawImage(context, rect, self.CGImage); // Now we can get a pointer to the image data associated with the bitmap // context. srcData = (unsigned char *)CGBitmapContextGetData (context); if (srcData != NULL) { size_t dataSize = bpr * height; finalData = malloc(dataSize); destData = malloc(dataSize); memcpy(finalData, srcData, dataSize); memcpy(destData, srcData, dataSize); int sums[5]; int i, x, y, k; int gauss_sum=0; int radius = pixelRadius * 2 + 1; int *gauss_fact = malloc(radius * sizeof(int)); for (i = 0; i < pixelRadius; i++) { .....blah blah blah... THIS IS JUST LONG CODE THE CREATES INT FIGURES ........blah blah blah...... 
} if (gauss_fact) { free(gauss_fact); } } size_t bitmapByteCount2 = bpr * height; //CREATE DATA PROVIDER CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, srcData, bitmapByteCount2, NULL); //CREATE IMAGE CGImageRef cgImage = CGImageCreate( width, height, CGBitmapContextGetBitsPerComponent(context), CGBitmapContextGetBitsPerPixel(context), CGBitmapContextGetBytesPerRow(context), CGBitmapContextGetColorSpace(context), CGBitmapContextGetBitmapInfo(context), dataProvider, NULL, true, kCGRenderingIntentDefault ); //RELEASE INFORMATION CGDataProviderRelease(dataProvider); CGContextRelease(context); if (destData) { free(destData); } if (finalData) { free(finalData); } if (srcData) { free(srcData); } UIImage *retUIImage = [UIImage imageWithCGImage:cgImage]; CGImageRelease(cgImage); return retUIImage; } The only thing I can think of that is holding up the objectalloc is this UIImage *retUIImage = [UIImage imageWithCGImage:cgImage];...but how to do I release that after it has been returned? Hopefully someone can help please.

    Read the article

  • Problem using void pointer as a function argument

    - by Nazgulled
    Hi, I can't understand this result... The code: void foo(void * key, size_t key_sz) { HashItem *item = malloc(sizeof(HashItem)); printf("[%d]\n", (int)key); ... item->key = malloc(key_sz); memcpy(item->key, key, key_sz); } void bar(int num) { foo(&num, sizeof(int)); } And I do this call: bar(900011009); But the printf() output is: [-1074593956] I really need key to be a void pointer, how can I fix this?
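    A guess at what is happening, sketched below with a hypothetical stand-in for foo(): the printf casts the pointer itself to int, so it prints a (truncated) address rather than 900011009. Dereferencing the void pointer, assuming it really points at an int, gives the expected value; the memcpy of the key bytes is fine as written.

        #include <cstdio>
        #include <cstddef>

        // Hypothetical stand-in for the question's foo().
        void print_key(void *key, std::size_t key_sz) {
            (void)key_sz;
            std::printf("[%d]\n", *(int *)key);   // dereference; don't print the pointer
        }

        int main() {
            int num = 900011009;
            print_key(&num, sizeof num);          // prints [900011009]
            return 0;
        }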

    Read the article

  • OpenCV: How to copy CvSeq data into CvMat?

    - by Can Bal
    I have a CvSeq structure at hand, which is the output of an available OpenCV function. This holds 128 bytes of data in each of the sequence elements. I want to copy each of these 128-byte elements into the rows of a CvMat structure to form an N-by-128 matrix of type CV_32FC1. What would be the most efficient way to do this? I thought of using memcpy but I couldn't come up with a working solution. For the details, I want to calculate the SURF features in an image with the cvExtractSURF() function, and copy the SURF descriptors into a matrix for passing to cvKMeans2().
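    A sketch of one way to do it with the OpenCV 1.x C API, assuming extended SURF descriptors (128 floats per element; use 64 if the extended flag is off). The rows of a CvMat created with cvCreateMat are contiguous, so a per-row memcpy from cvGetSeqElem() is enough; seq_to_mat is a made-up helper name.

        #include <opencv/cv.h>
        #include <cstring>

        CvMat *seq_to_mat(CvSeq *descriptors) {
            const int dims = 128;                                  // assumption: extended SURF
            CvMat *mat = cvCreateMat(descriptors->total, dims, CV_32FC1);
            for (int i = 0; i < descriptors->total; i++) {
                const float *descr = (const float *)cvGetSeqElem(descriptors, i);
                std::memcpy(mat->data.fl + i * dims, descr, dims * sizeof(float));
            }
            return mat;   // caller releases it with cvReleaseMat(&mat)
        }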

    Read the article

  • Cannot convert CString to BYTE array

    - by chekalin-v
    I need to convert a CString to a BYTE array. I don't know why, but everything that I found on the internet does not work :( For example, I have CString str = _T("string"); I've been trying: 1) BYTE *pbBuffer = (BYTE*)(LPCTSTR)str; 2) BYTE *pbBuffer = new BYTE[str.GetLength()+1]; memcpy(pbBuffer, (VOID*)(LPCTSTR)StrRegID, str.GetLength()); 3) BYTE *pbBuffer = (BYTE*)str.GetString(); And pbBuffer always contains just the first letter of str; DWORD dwBufferLen = strlen((char *)pbBuffer)+1; evaluates to 2. But if I use a const string: BYTE *pbBuffer = (BYTE*)"string"; pbBuffer contains the whole string. Where is my mistake?
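    The likely explanation: in a Unicode build, CString stores wchar_t, so the byte right after 's' is 0x00 and strlen() stops there, which is why only the first letter appears (and why dwBufferLen is 2). A sketch of the usual fixes, assuming MFC/ATL's CStringA is available; convert to a narrow string first, or copy the raw UTF-16 bytes if that is what the receiver expects.

        #include <atlstr.h>
        #include <cstring>

        void example() {
            CString str = _T("string");

            CStringA narrow(str);                          // narrow (ANSI) copy
            int len = narrow.GetLength();
            BYTE *pbBuffer = new BYTE[len + 1];
            memcpy(pbBuffer, narrow.GetString(), len + 1); // includes the '\0'
            // ... use pbBuffer ...

            // Alternatively, keep the UTF-16 bytes as-is:
            //   int bytes = (str.GetLength() + 1) * sizeof(TCHAR);
            //   memcpy(pbBuffer, (LPCTSTR)str, bytes);
            delete[] pbBuffer;
        }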

    Read the article

  • I wanted to be a programmer

    - by Henrik P. Hessel
    Hello, let me ask your opinion. I'm 25 now, living in Germany. I started with QBASIC, did some Java in high school, and after school I created some websites in PHP. Now, because my company is a Microsoft Gold Partner, I have to use Microsoft all the time. C#, MSSQL, ASPX and SharePoint <- I really hate it! So, in my spare time I concentrate on gaining more knowledge (C++, Java, Silverlight or WPF), because it feels like I'm so far behind in comparison, for example, to you guys or other, older employees in my company. Do you think that my behaviour is useful? Should I focus my time on becoming, say, a pure C# programmer? I did C# even before I started to learn some C++. Should I learn what to do with pointers, memcpy and stuff like that, even if managed code brings us so many benefits? Or is it a waste of time, better invested in learning the latest technologies? rAyt

    Read the article

  • When does printf("%s", char*) stop printing?

    - by remagen
    In my class we are writing our own copy of C's malloc() function. To test my code (which can currently allocate space fine) I was using: char* ptr = my_malloc(6*sizeof(char)); memcpy(ptr, "Hello\n", 6*sizeof(char)); printf("%s", ptr); The output would typically be this: Hello Unprintable character Some debugging showed that my code wasn't causing this per se, as ptr's memory is as follows: [24 bytes of meta info][Number of requested bytes][Padding] So I figured that printf was reaching into the padding, which is just garbage. So I ran a test of: printf("%s", "test\nd"); and got: test d Which makes me wonder, when DOES printf("%s", char*) stop printing chars?
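    The short answer: printf("%s", ...) keeps reading until it hits the first '\0' byte. The string literal "Hello\n" occupies 7 bytes including its terminator, so copying only 6 drops the '\0' and printf walks on into whatever memory follows; the "test\nd" literal prints cleanly because a literal always carries its own terminator. A minimal sketch of the fixed test (using plain malloc in place of my_malloc):

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        int main() {
            char *ptr = (char *)malloc(7);   // room for 'H' 'e' 'l' 'l' 'o' '\n' '\0'
            memcpy(ptr, "Hello\n", 7);       // copy the terminator too
            printf("%s", ptr);               // stops exactly at the '\0'
            free(ptr);
            return 0;
        }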

    Read the article

  • Copying part of a string in C

    - by wolfPack88
    This seems like it should be really simple, but for some reason, I'm not getting it to work. I have a string called seq, which looks like this: ala ile val I want to take the first 3 characters and copy them into a different string. I use the command: memcpy(fileName, seq, 3 * sizeof(char)); That should make fileName = "ala", right? But for some reason, I get fileName = "ala9". I'm currently working around it by just saying fileName[4] = '\0', but was wondering why I'm getting that 9. Note: After changing seq to ala ile val ser and rerunning the same code, fileName becomes "alaK". Not 9 anymore, but still an erroneous character.
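    The 9 (or K) is simply whatever byte happened to sit at fileName[3]: memcpy copies exactly the bytes it is asked for and never appends a terminator, so the string must be closed off at index 3, not 4. A minimal sketch:

        #include <cstdio>
        #include <cstring>

        int main() {
            const char *seq = "ala ile val";
            char fileName[8];

            memcpy(fileName, seq, 3);     // copies 'a' 'l' 'a' only
            fileName[3] = '\0';           // terminate right after the copied bytes
            printf("%s\n", fileName);     // prints "ala"
            return 0;
        }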

    Read the article

  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data. (So I'm assuming 4 bytes for each pixel.) My application is written in C++/CLI, and the pictureBox is of .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long)*height*width) to copy the image data to the Bitmap. For now, the only PixelFormat that is working is 32bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayscale isn't working. I would ideally want to cast the array from long to word and use a 16-bit format (hoping the 14-bit data will be displayed properly), but I'm not sure if this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks

    Read the article

  • how to convert bitmap to intptr in C#

    - by carl
    The code is as follows: Bitmap bmp = new Bitmap(e.width, e.height, System.Drawing.Imaging.PixelFormat.Format24bppRgb); System.Drawing.Imaging.BitmapData data = bmp.LockBits(new Rectangle(0, 0, e.width, e.height), System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb); int dataLength = e.width * e.height * 3; Win32.memcpy(data.Scan0, (IntPtr)e.data, (uint)dataLength); convertBitmapToIntptr(bmp); How should I implement a function like convertBitmapToIntptr(bmp)? Can anyone give me some ideas? Thank you very much.

    Read the article

  • changing command line arguments

    - by Shadi
    Hi, I am writing a C program. It takes its arguments from the command line. I want to change the command-line arguments in the code. As they are defined as "const char *", I cannot change them using "strcpy", "memcpy", ... Also, you know, I cannot just change their type from "const char *" to "char *". Is there any way to change them? Thank you so much in advance. Best regards, Shadi.
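    Since the strings are declared const here, the usual workaround is to make private, writable copies and modify those; a sketch (copy_args is a made-up helper):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Build a modifiable copy of the argument strings. */
        char **copy_args(int argc, const char *const *argv) {
            char **args = (char **)malloc((size_t)argc * sizeof(char *));
            for (int i = 0; i < argc; i++) {
                args[i] = (char *)malloc(strlen(argv[i]) + 1);
                strcpy(args[i], argv[i]);
            }
            return args;
        }

        int main(int argc, char *argv[]) {
            char **args = copy_args(argc, (const char *const *)argv);
            if (argc > 1)
                args[1][0] = 'X';         /* legal: we own this buffer */
            for (int i = 0; i < argc; i++) {
                puts(args[i]);
                free(args[i]);
            }
            free(args);
            return 0;
        }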

    Read the article

  • trying to copy a bitmap into the WMP Renderer -> upside down!

    - by Roey
    Hi All. I'm writing a video DMO decoder and trying to return a bitmap to the WMP renderer for display ... but WMP displays it upside down! This is the code : HBITMAP* hBmp = new HBITMAP(); int result; m_pScrRenderer->CreateFrame(hBmp, &result); ///This returns the HBITMAP handle. BITMAP bmStruct; memset(&bmStruct, 0, sizeof(BITMAP)); GetObject(*hBmp, sizeof(BITMAP), &bmStruct); int size = bmStruct.bmWidthBytes * bmStruct.bmHeight; memcpy(pbOutData, bmStruct.bmBits, size); //PBoutData is WMP's renderer buffer. This produces an upside down image. What should I change in this code? Thank You! Roey.
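    A likely cause: a GDI bitmap's pixel data is normally stored bottom-up, while the renderer's output buffer evidently expects top-down rows. Two common fixes are to create the bitmap as a top-down DIB (negative height in the BITMAPINFOHEADER) or to flip the rows while copying; a sketch of the latter, reusing bmStruct and pbOutData from the code above:

        const int stride  = bmStruct.bmWidthBytes;
        const int height  = bmStruct.bmHeight;
        const BYTE *src   = (const BYTE *)bmStruct.bmBits;
        BYTE *dst         = (BYTE *)pbOutData;

        for (int y = 0; y < height; ++y) {
            // last source row becomes the first destination row
            memcpy(dst + y * stride, src + (height - 1 - y) * stride, stride);
        }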

    Read the article

  • Converting between unsigned and signed int safely

    - by polemic
    I have an interface between a client and a server where the client sends (1) an unsigned value, and (2) a flag which indicates whether the value is signed or unsigned. The server would then static_cast the unsigned value to the appropriate type. I later found out that this is implementation-defined behavior, and I've been reading about it but I couldn't seem to find an appropriate solution that's completely safe. I've read about type punning, pointer conversions, and memcpy. Would simply using a union type work? A UnionType containing signed and unsigned int, along with the signed/unsigned flag. For signed values, the client sets the signed part of the union, and the server reads the signed part. Same for the unsigned part. Or am I completely misunderstanding something? Side question: how do I know the specific behavior in this case for a specific scenario, e.g. Wind River Diab on PPC? I'm a bit lost on how to find such documentation.
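    One option that avoids the implementation-defined out-of-range conversion is to copy the bit pattern with memcpy (std::bit_cast does the same job in C++20); on a two's-complement target this reproduces the value the client intended. A sketch, assuming 32-bit values on the wire (reading the other member of a union does work on most compilers, but C++ formally does not bless union type punning the way C11 does):

        #include <cstdint>
        #include <cstring>

        std::int32_t as_signed(std::uint32_t wire_value) {
            std::int32_t result;
            static_assert(sizeof(result) == sizeof(wire_value), "size mismatch");
            std::memcpy(&result, &wire_value, sizeof(result));   // copy the bit pattern
            return result;
        }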

    Read the article

  • C++ union assignment, is there a good way to do this?

    - by Sqeaky
    I am working on a project with a library and I must work with unions. Specifically I am working with SDL and the SDL_Event union. I need to make copies of the SDL_Events, and I could find no good information on overloading assignment operators with unions. Provided that I can overload the assignment operator, should I manually sift through the union members and copy the pertinent members, or can I simply copy some members (this seems dangerous to me), or maybe just use memcpy() (this seems simple and fast, but slightly dangerous)? If I can't overload operators, what would my best options be from there? I guess I could make new copies and pass around a bunch of pointers, but in this situation I would prefer not to do that. Any ideas welcome!
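    For what it's worth, SDL_Event is a plain-old-data union (it comes from C code with no constructors or destructors), so ordinary assignment already copies the whole union bitwise; there is nothing to overload, and you could not add an operator to SDL's type anyway. A sketch, assuming SDL's header is on the include path:

        #include <SDL.h>

        void copy_event(const SDL_Event &src) {
            SDL_Event copy = src;                   // plain assignment copies the union
            // memcpy(&copy, &src, sizeof copy);    // equivalent for a POD union
            // ... store `copy`, e.g. push_back into a std::vector<SDL_Event> ...
            (void)copy;
        }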

    Read the article

  • Unit testing with serialization mock objects in C++

    - by lhumongous
    Greetings, I'm fairly new to TDD and ran across a unit test that I'm not entirely sure how to address. Basically, I'm testing a couple of legacy class methods which read/write a binary stream to a file. The class functions take a serializable object as a parameter, which handles the actual reading/writing to the file. For testing this, I was thinking that I would need a serialization mock object that I would pass to this function. My initial thought was to have the mock object hold onto a (char*) which would dynamically allocate memory and memcpy the data. However, it seems like the mock object might be doing too much work, and might be beyond the scope of this particular test. Is my initial approach correct, or can anyone think of another way of correctly testing this? Thanks!
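    The question does not show the serialization interface, so the sketch below assumes a hypothetical Serializable base with raw read/write hooks; the point is that a std::vector<char> inside the mock replaces the manual char*/memcpy bookkeeping and keeps the mock small enough to stay within the scope of the test.

        #include <algorithm>
        #include <cstddef>
        #include <cstring>
        #include <vector>

        // Hypothetical interface; substitute the legacy class's real one.
        struct Serializable {
            virtual ~Serializable() {}
            virtual void write(const void *data, std::size_t size) = 0;
            virtual std::size_t read(void *data, std::size_t size) = 0;
        };

        // In-memory mock: records written bytes and plays them back.
        struct MockSerializable : Serializable {
            std::vector<char> buffer;
            std::size_t readPos;
            MockSerializable() : readPos(0) {}

            void write(const void *data, std::size_t size) {
                const char *p = static_cast<const char *>(data);
                buffer.insert(buffer.end(), p, p + size);
            }
            std::size_t read(void *data, std::size_t size) {
                std::size_t n = std::min(size, buffer.size() - readPos);
                if (n != 0)
                    std::memcpy(data, &buffer[readPos], n);
                readPos += n;
                return n;
            }
        };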

    Read the article

  • c++: Reference array of maps

    - by donalmg
    I have a function which creates an array of Maps: map<string, int> *pMap And a function which writes maps to the array: int iAddMap(map<string, int> mapToAdd, map<string, int> *m, int i) { m = &(pMap[i]); memcpy(m, mapToAdd, sizeof(map<string, int>)); } And a function to get maps from the array map<string, int>& getMap(int i) { return pMap[i]; } I can write maps to the array without any issue, but every get call results in a seg fault: int val; // val defined after this map<string, int> * pGetMap = &(getMap(val)); Any suggestions on why this is happening?
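    The crash is most likely the memcpy itself: std::map is not a trivially copyable type (it manages internal node pointers), so copying one with memcpy produces a corrupt object and the later access faults. A sketch of the same three operations using normal copies, with a std::vector owning the storage instead of a raw pMap array (addMap/getMap are stand-in names):

        #include <cstddef>
        #include <map>
        #include <string>
        #include <vector>

        std::vector<std::map<std::string, int> > maps;

        void addMap(const std::map<std::string, int> &mapToAdd) {
            maps.push_back(mapToAdd);            // proper copy construction
        }

        std::map<std::string, int> &getMap(std::size_t i) {
            return maps[i];                      // reference into the vector
        }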

    Read the article

  • Data-only static libraries with GCC

    - by regularfry
    How can I make static libraries with only binary data, that is without any object code, and make that data available to a C program? Here's the build process and simplified code I'm trying to make work: ./datafile: abcdefghij Makefile: libdatafile.a: ar [magic] datafile main: libdatafile.a gcc main.c libdatafile.a -o main main.c: #define TEXTPTR [more magic] int main(){ char mystring[11]; memset(mystring, '\0', 11); memcpy(TEXTPTR, mystring, 10); puts(mystring); puts(mystring); return 0; } The output I'm expecting from running main is, of course: abcdefghijabcdefghij My question is: what should [magic] and [more magic] be?
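    One well-known recipe for the [magic] parts, sketched below under the assumption of GNU binutils on an ELF target: wrap the raw file in an object with ld -r -b binary, archive it with ar, and reference the generated _binary_datafile_start symbol from C (the exact symbol names and flags are toolchain-specific). Note that to print the file contents, the memcpy arguments go destination buffer first and data second, the reverse of the order in the question's sketch.

        # Makefile sketch (recipe lines must be indented with tabs)
        libdatafile.a: datafile
                ld -r -b binary -o datafile.o datafile
                ar rcs libdatafile.a datafile.o

        main: libdatafile.a main.c
                gcc main.c libdatafile.a -o main

        /* main.c sketch */
        #include <stdio.h>
        #include <string.h>

        extern const char _binary_datafile_start[];   /* emitted by ld -r -b binary */

        int main(void) {
            char mystring[11];
            memset(mystring, '\0', sizeof mystring);
            memcpy(mystring, _binary_datafile_start, 10);
            puts(mystring);
            puts(mystring);
            return 0;
        }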

    Read the article

  • c/c++ how to convert short to char

    - by changed
    Hi, I am using MS C++. I am using a struct like struct header { unsigned port : 16; unsigned destport : 16; unsigned not_used : 7; unsigned packet_length : 9; }; struct header HR; I need to put this header value into a separate char array. I did memcpy(&REQUEST[0], &HR, sizeof(HR)); but the value of packet_length is not appearing properly. For example, if I assign HR.packet_length = 31; I get -128 (at the fifth byte) and 15 (at the sixth byte). Can you help me with this, or is there a more elegant way to do it? Thanks
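    What you are seeing is actually the expected bit-field packing: not_used (7 bits) and packet_length (9 bits) share one 16-bit chunk, so 31 gets shifted up by 7 bits, giving 0x0F80, which appears little-endian as 0x80 (printed as -128 through a signed char) followed by 0x0F (15). Bit-field layout is implementation-defined, so if the array has to match a wire format it is usually safer to pack the bytes by hand; a sketch (pack_header is a made-up helper, and the byte/bit order is an assumption to adjust to your protocol):

        struct header {
            unsigned port          : 16;
            unsigned destport      : 16;
            unsigned not_used      : 7;
            unsigned packet_length : 9;
        };

        void pack_header(const header &h, unsigned char out[6]) {
            out[0] = (unsigned char)(h.port & 0xFF);            // little-endian port
            out[1] = (unsigned char)(h.port >> 8);
            out[2] = (unsigned char)(h.destport & 0xFF);        // little-endian destport
            out[3] = (unsigned char)(h.destport >> 8);
            out[4] = (unsigned char)(h.packet_length & 0xFF);   // 31 -> 0x1F
            out[5] = (unsigned char)((h.packet_length >> 8) & 0x01);
        }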

    Read the article

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. What is the speed of cache access for modern CPUs? How many bytes can be read from or written to memory every processor clock tick by an Intel P4, Core2, Core i7, or AMD CPU? Please answer with both theoretical numbers (width of the ld/sd unit with its throughput in uOPs/tick) and practical numbers (even memcpy speed tests, or the STREAM benchmark), if any. P.S. This question relates to the maximal rate of load/store instructions in assembler. There is a theoretical rate of loading (if all instructions issued per tick were the widest loads), but the processor can deliver only part of that, which is the practical limit of loading.
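    Since the question explicitly invites memcpy speed tests, here is a crude micro-benchmark sketch; keeping the buffers well below the L1/L2 capacity measures cache rather than DRAM bandwidth, and the absolute numbers depend heavily on compiler, alignment, turbo and prefetchers, so treat the result as a rough lower bound.

        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <cstring>
        #include <vector>

        int main() {
            const std::size_t size = 16 * 1024;        // assumption: roughly L1-sized
            const int iters = 200000;
            std::vector<char> src(size, 1), dst(size, 0);

            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < iters; i++)
                std::memcpy(&dst[0], &src[0], size);
            auto t1 = std::chrono::steady_clock::now();

            double secs = std::chrono::duration<double>(t1 - t0).count();
            double gb_per_s = (double)size * iters / secs / 1e9;
            std::printf("memcpy: %.2f GB/s (each copy is a read plus a write)\n", gb_per_s);
            return (int)dst[0];   // keep the copies from being optimized away
        }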

    Read the article

  • Float32 to Float16

    - by Goz
    Can someone explain to me how I convert a 32-bit floating point value to a 16-bit floating point value? (s = sign e = exponent and m = mantissa) If 32-bit float is 1s7e24m And 16-bit float is 1s5e10m Then is it as simple as doing? int fltInt32; short fltInt16; memcpy( &fltInt32, &flt, sizeof( float ) ); fltInt16 = (fltInt32 & 0x00FFFFFF) >> 14; fltInt16 |= ((fltInt32 & 0x7f000000) >> 26) << 10; fltInt16 |= ((fltInt32 & 0x80000000) >> 16); I'm assuming it ISN'T that simple ... so can anyone tell me what you DO need to do?
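    For what it is worth, a 32-bit IEEE float is laid out as 1 sign, 8 exponent and 23 mantissa bits (not 1s7e24m), so besides moving bits the exponent has to be re-biased from 127 down to 15. A minimal truncating sketch that flushes underflow to zero, turns overflow into infinity and makes no attempt to preserve NaN payloads or round correctly:

        #include <cstdint>
        #include <cstring>

        std::uint16_t float_to_half(float f) {
            std::uint32_t bits;
            std::memcpy(&bits, &f, sizeof(bits));                 // grab the raw bit pattern

            std::uint16_t sign = (std::uint16_t)((bits >> 16) & 0x8000);
            std::int32_t  exp  = (std::int32_t)((bits >> 23) & 0xFF) - 127 + 15;
            std::uint32_t mant = bits & 0x007FFFFF;

            if (exp <= 0)  return sign;                           // too small: signed zero
            if (exp >= 31) return (std::uint16_t)(sign | 0x7C00); // too large: infinity
            return (std::uint16_t)(sign | (exp << 10) | (mant >> 13));
        }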

    Read the article

  • resort on a std::vector vs std::insert

    - by Abruzzo Forte e Gentile
    I have a sorted std::vector of relatively small size (from 5 to 20 elements). I used std::vector since the data is contiguous, so I get speed because of the cache. At a specific point I need to remove an element from this vector. I now have a doubt: which of the two options below is the faster way to remove this value? 1) setting that element to 0 and calling sort to reorder: this has the cost of a sort, but the elements stay on the same cache line. 2) calling erase, which will copy (or memcpy, who knows?) all elements after it back by one place (I need to investigate what happens behind the scenes of erase). Do you know which one is faster? I think the same question applies to inserting a new element without hitting the max capacity of the vector. Regards AFG
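    For a sorted vector this small, erase() (which shifts the tail down one slot and keeps the order intact) is normally the cheaper and simpler of the two; zeroing an element and re-sorting costs a whole sort and still leaves a stale value in the vector. A sketch of removal and of the matching order-preserving insert (function names are made up):

        #include <algorithm>
        #include <vector>

        void remove_value(std::vector<int> &v, int value) {
            std::vector<int>::iterator it =
                std::lower_bound(v.begin(), v.end(), value);   // binary search on sorted data
            if (it != v.end() && *it == value)
                v.erase(it);           // shifts the following elements one slot left
        }

        void insert_value(std::vector<int> &v, int value) {
            v.insert(std::lower_bound(v.begin(), v.end(), value), value);
        }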

    Read the article

  • C++ DWORD* to BYTE*

    - by NomeSkavinski
    My issue: I am trying to convert an array of dynamically allocated memory of type DWORD to BYTEs. Fair enough, I can loop through this and convert each DWORD into a BYTE per entry. But is there a faster way to do this, i.e. to take a pointer to DWORD data and convert the whole piece of data into a pointer to BYTE data, such as with a memcpy operation? I feel this is not possible. I'm not requesting an answer, just an experienced opinion on my approach, as I have tried testing both approaches but can't seem to get my second solution working. Thanks for any input; again, no answers, just a point in the right direction. Nor is this a homework question, which I felt had to be mentioned.
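    It depends on which of two conversions is meant; a sketch of both, with made-up names. Reinterpreting the same memory as bytes needs no copy at all, while narrowing each 32-bit value down to 8 bits genuinely requires a per-element loop, which memcpy cannot do for you:

        #include <windows.h>
        #include <cstddef>

        void example(DWORD *data, std::size_t count, BYTE *narrowed) {
            // 1) Treat the existing memory as raw bytes: just a cast, no copy.
            BYTE *asBytes = reinterpret_cast<BYTE *>(data);
            std::size_t byteCount = count * sizeof(DWORD);
            (void)asBytes; (void)byteCount;

            // 2) Narrow each DWORD to its low 8 bits: a loop (or std::transform)
            //    is unavoidable, since the values themselves must change.
            for (std::size_t i = 0; i < count; ++i)
                narrowed[i] = (BYTE)(data[i] & 0xFF);
        }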

    Read the article

  • How to implement copy operator for such C++ structure?

    - by Kabumbus
    So having struct ResultStructure { ResultStructure(const ResultStructure& other) { // copy code in here ? using memcpy ? how??? } ResultStructure& operator=(const ResultStructure& other) { if (this != &other) { // copy code in here ? } return *this; } int length; char* ptr; }; How to implement copy? (sorry - I am a C++ newbie)
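    A deep-copy sketch: because the struct owns a raw char* buffer of length bytes, both the copy constructor and operator= need to allocate their own storage and copy the bytes across, and a destructor is needed to release it. Allocating the new buffer before freeing the old one keeps the object intact if new throws; the constructor taking (data, len) is just an assumed way of filling the struct for illustration:

        #include <cstring>

        struct ResultStructure {
            ResultStructure(const char *data, int len)
                : length(len), ptr(new char[len]) {
                std::memcpy(ptr, data, len);
            }
            ResultStructure(const ResultStructure &other)
                : length(other.length), ptr(new char[other.length]) {
                std::memcpy(ptr, other.ptr, other.length);
            }
            ResultStructure &operator=(const ResultStructure &other) {
                if (this != &other) {
                    char *copy = new char[other.length];     // allocate first...
                    std::memcpy(copy, other.ptr, other.length);
                    delete[] ptr;                            // ...then release the old buffer
                    ptr = copy;
                    length = other.length;
                }
                return *this;
            }
            ~ResultStructure() { delete[] ptr; }

            int length;
            char *ptr;
        };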

    Read the article
