Search Results

Search found 6123 results on 245 pages for 'unsigned char'.

Page 99/245 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • Crash generated during destruction of hash_map

    - by Alien01
    I am using hash_map in my application as typedef hash_map<DWORD,CComPtr<IInterfaceXX>> MapDword2Interface; In the main application I am using a static instance of this map: static MapDword2Interface m_mapDword2Interface; I have got one crash dump from one of the client machines which points to a crash while clearing this map. I opened that crash dump and here is the assembly during debugging: > call std::list<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> >,std::allocator<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> > > >::clear > mov eax,dword ptr [CMainApp::m_mapDword2Interface+8 (49XXXXX)] Here is the code the crash dump points to; it is from the STL <list> header: void clear() { // erase all #if _HAS_ITERATOR_DEBUGGING this->_Orphan_ptr(*this, 0); #endif /* _HAS_ITERATOR_DEBUGGING */ _Nodeptr _Pnext; _Nodeptr _Pnode = _Nextnode(_Myhead); _Nextnode(_Myhead) = _Myhead; _Prevnode(_Myhead) = _Myhead; _Mysize = 0; for (; _Pnode != _Myhead; _Pnode = _Pnext) { // delete an element _Pnext = _Nextnode(_Pnode); this->_Alnod.destroy(_Pnode); this->_Alnod.deallocate(_Pnode, 1); } } The crash points to the this->_Alnod.destroy(_Pnode); statement in the code above. I am not able to guess what the reason could be. Any ideas? How can I make sure that, even if there is something wrong with the map, it does not crash?
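
    One possible cause, offered only as an assumption since the dump alone cannot confirm it, is static destruction order: m_mapDword2Interface is a static object, so its destructor (and every CComPtr<IInterfaceXX>::Release() it triggers) runs during CRT shutdown, possibly after the owning module or COM itself has already been torn down. A minimal sketch of clearing the map at a controlled point instead (names taken from the question; MSVC's stdext::hash_map assumed):

        #include <atlbase.h>
        #include <hash_map>

        struct IInterfaceXX;  // interface assumed from the question
        typedef stdext::hash_map<DWORD, CComPtr<IInterfaceXX> > MapDword2Interface;

        static MapDword2Interface m_mapDword2Interface;

        // Hypothetical shutdown hook: release every interface pointer while COM
        // is still initialized, instead of during static destruction.
        void ShutdownInterfaceMap()
        {
            m_mapDword2Interface.clear();   // runs each CComPtr::Release() now
            // CoUninitialize() can safely follow once the map is empty.
        }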

    Read the article

  • Porting - Shared Memory x32 & x64 processes

    - by dpb
    A 32-bit host Windows application sets up shared memory (using a memory-mapped file / the CreateFileMapping() API), and then other 32-bit client processes use this shared memory to communicate with each other. I am planning to port the host application to a 64-bit platform, and once it is ready I intend that both 32-bit and 64-bit client processes should be able to use the shared memory set up by the main 64-bit host application. The original code written for the x32 host application uses "size_t" almost everywhere; since this grows from 4 bytes to 8 bytes as we move from x32 to x64, I am looking to replace it. I intend to replace "size_t" with "unsigned long long", so that its size will be the same on 32-bit and 64-bit. Can you please suggest a better alternative? Also, will the use of "unsigned long long" have a performance impact on the x32 app? I guess yes? Research done - found very useful articles - a) 20 issues in porting from 32 bit to 64 bit (www.viva64.com) b) No way to restrict/change "size_t" on the x64 platform to 4 bytes using compiler flags or any hooks/crooks, since it is a typedef
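
    The fixed-width types from <cstdint> are the usual alternative, since they make the layout explicit on both sides of the mapping. A sketch of a shared header under that assumption (the struct and field names are illustrative, not from the original code):

        #include <cstdint>

        // Layout shared by the 32-bit and 64-bit processes. Fixed-width types
        // (instead of size_t) keep sizes and offsets identical on both sides.
        #pragma pack(push, 8)            // agree on one packing explicitly
        struct SharedHeader {
            uint64_t payloadBytes;       // was size_t in the 32-bit code
            uint32_t writerPid;
            uint32_t flags;
        };
        #pragma pack(pop)

        static_assert(sizeof(SharedHeader) == 16, "layout must match on x86 and x64");

    On a 32-bit build, 64-bit arithmetic is done in register pairs, so there is some cost, but for fields that only describe sizes in shared memory it is rarely measurable.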

    Read the article

  • Problem with setjmp/longjmp

    - by user294732
    The code below is just not working. Can anybody point out why? #define STACK_SIZE 1524 static void mt_allocate_stack(struct thread_struct *mythrd) { unsigned int sp = 0; void *stck; stck = (void *)malloc(STACK_SIZE); sp = (unsigned int)&((stck)); sp = sp + STACK_SIZE; while((sp % 8) != 0) sp--; #ifdef linux (mythrd->saved_state[0]).__jmpbuf[JB_BP] = (int)sp; (mythrd->saved_state[0]).__jmpbuf[JB_SP] = (int)sp-500; #endif } void mt_sched() { fprintf(stdout,"\n Inside the mt_sched"); fflush(stdout); if ( current_thread->state == NEW ) { if ( setjmp(current_thread->saved_state) == 0 ) { mt_allocate_stack(current_thread); fprintf(stdout,"\n Jumping to thread = %u",current_thread->thread_id); fflush(stdout); longjmp(current_thread->saved_state, 2); } else { new_fns(); } } } All I am trying to do is to run new_fns() on a new stack, but it is showing a segmentation fault at new_fns(). Can anybody point out what's wrong?
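
    One likely culprit, stated here as an assumption rather than a confirmed diagnosis: sp = (unsigned int)&((stck)); takes the address of the local pointer variable stck, not of the malloc'd block it points to, so the "new stack" ends up inside mt_allocate_stack's own stack frame, which is gone by the time new_fns() runs. A minimal sketch of the intended arithmetic, keeping the question's glibc __jmpbuf layout (note that writing __jmpbuf directly is fragile on modern glibc because of pointer mangling):

        #include <stdint.h>
        #include <stdlib.h>

        static void mt_allocate_stack(struct thread_struct *mythrd)
        {
            char *stck = (char *)malloc(STACK_SIZE);
            uintptr_t sp = (uintptr_t)(stck + STACK_SIZE);  // top of the block; stacks grow down
            sp &= ~(uintptr_t)7;                            // 8-byte alignment

        #ifdef linux
            mythrd->saved_state[0].__jmpbuf[JB_BP] = (int)sp;
            mythrd->saved_state[0].__jmpbuf[JB_SP] = (int)sp - 500;
        #endif
        }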

    Read the article

  • C/C++: feedback in analyzing a code example

    - by KaiserJohaan
    Hello, I have a piece of code from an assignment I am uncertain about. I feel confident that I know the answer, but I just want to double-check with the community in case there's something I forgot. The topic is basically secure coding, and the question is just to explain the results. int main() { unsigned int i = 1; unsigned int c = 1; while (i > 0) { i = i*2; c++; } printf("%d\n", c); return 0; } My reasoning is this: at first glance you could imagine the code would run forever, considering i is initialized to a positive value and ever increasing. This of course is wrong, because eventually the value will grow so large that it will cause an integer overflow. This in turn is not entirely true either, because eventually it will force the variable 'i' to be signed by setting the last bit to 1 and therefore be regarded as a negative number, terminating the loop. So it is not writing to unallocated memory and thereby causing integer overflow, but rather violating the data type and therefore causing the loop to terminate. I am quite sure this is the reason, but I just want to double-check. Any opinions?
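
    For reference, a hedged note on the behaviour itself: unsigned arithmetic in C is defined to wrap modulo 2^N, so i never becomes negative; it wraps from 0x80000000 to 0 and the condition i > 0 fails. On a platform with a 32-bit unsigned int the count therefore comes out to 33:

        #include <stdio.h>

        // Demonstration (assumes a 32-bit unsigned int): doubling wraps to 0
        // after 0x80000000, which ends the loop; %u is the matching conversion.
        int main(void)
        {
            unsigned int i = 1;
            unsigned int c = 1;
            while (i > 0) {
                i = i * 2;          // 1, 2, 4, ..., 0x80000000, then 0
                c++;
            }
            printf("%u\n", c);      // prints 33 for a 32-bit unsigned int
            return 0;
        }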

    Read the article

  • Getting the most recent post based on date

    - by camcim
    Hi guys, how do I go about displaying the most recent post when I have two tables, both containing a column called created_on? This would be simple if all I had to do was get the most recent post based on the post's created_on value; however, if a post has replies I need to factor those into the equation. If a post has a more recent reply, I want to get the reply's created_on value but also the post's post_id and subject. The posts table structure: CREATE TABLE `posts` ( `post_id` bigint(20) unsigned NOT NULL auto_increment, `cat_id` bigint(20) NOT NULL, `user_id` bigint(20) NOT NULL, `subject` tinytext NOT NULL, `comments` text NOT NULL, `created_on` datetime NOT NULL, `status` varchar(10) NOT NULL default 'INACTIVE', `private_post` varchar(10) NOT NULL default 'PUBLIC', `db_location` varchar(10) NOT NULL, PRIMARY KEY (`post_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ; The replies table structure: CREATE TABLE `replies` ( `reply_id` bigint(20) unsigned NOT NULL auto_increment, `post_id` bigint(20) NOT NULL, `user_id` bigint(20) NOT NULL, `comments` text NOT NULL, `created_on` datetime NOT NULL, `notify` varchar(5) NOT NULL default 'YES', `status` varchar(10) NOT NULL default 'INACTIVE', `db_location` varchar(10) NOT NULL, PRIMARY KEY (`reply_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ; Here is my query so far. I've removed my attempt at extracting the dates. $strQuery = "SELECT posts.post_id, posts.created_on, replies.created_on, posts.subject "; $strQuery = $strQuery."FROM posts ,replies "; $strQuery = $strQuery."WHERE posts.post_id = replies.post_id "; $strQuery = $strQuery."AND posts.cat_id = '".$row->cat_id."'";

    Read the article

  • How can I render an in-memory UIViewController's view Landscape?

    - by Aaron
    I'm trying to render an in-memory (but not in hierarchy, yet) UIViewController's view into an in-memory image buffer so I can do some interesting transition animations. However, when I render the UIViewController's view into that buffer, it is always rendering as though the controller is in Portrait orientation, no matter the orientation of the rest of the app. How do I clue this controller in? My code in RootViewController looks like this: MyUIViewController* controller = [[MyUIViewController alloc] init]; int width = self.view.frame.size.width; int height = self.view.frame.size.height; int bitmapBytesPerRow = width * 4; unsigned char *offscreenData = calloc(bitmapBytesPerRow * height, sizeof(unsigned char)); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef offscreenContext = CGBitmapContextCreate(offscreenData, width, height, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast); CGContextTranslateCTM(offscreenContext, 0.0f, height); CGContextScaleCTM(offscreenContext, 1.0f, -1.0f); [(CALayer*)[controller.view layer] renderInContext:offscreenContext]; At that point, the offscreen memory buffers contents are portrait-oriented, even when the window is in landscape orientation. Ideas?

    Read the article

  • Hashtable resizing leaks memory

    - by thpetrus
    I wrote a hashtable and it basically consists of these two structures: typedef struct dictEntry { void *key; void *value; struct dictEntry *next; } dictEntry; typedef struct dict { dictEntry **table; unsigned long size; unsigned long items; } dict; dict.table is an array of buckets holding all the stored key/value pairs, where each bucket is a linked list. If half of the hashtable is full, I expand it by doubling the size and rehashing: dict *_dictRehash(dict *d) { int i; dict *_d; dictEntry *dit; _d = dictCreate(d->size * 2); for (i = 0; i < d->size; i++) { for (dit = d->table[i]; dit != NULL; dit = dit->next) { _dictAddRaw(_d, dit); } } /* FIXME memory leak because the old dict can never be freed */ free(d); // seg fault return _d; } The function above reuses the pointers from the old hash table and stores them in the newly created one. When freeing the old dict d, a segmentation fault occurs. How am I able to free the old hashtable struct without having to allocate the memory for the key/value pairs again?
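
    A hedged guess at the crash, based only on the code shown: _dictAddRaw() presumably relinks dit->next into the new table, so the inner loop's dit = dit->next then walks the new chain instead of the old one; and free(d) releases the dict header but never d->table itself. A sketch of the rehash under those assumptions (dictCreate() is assumed to malloc the bucket array):

        dict *_dictRehash(dict *d)
        {
            dict *nd = dictCreate(d->size * 2);
            unsigned long i;

            for (i = 0; i < d->size; i++) {
                dictEntry *it = d->table[i];
                while (it != NULL) {
                    dictEntry *next = it->next;  /* save it before _dictAddRaw rewrites it */
                    _dictAddRaw(nd, it);
                    it = next;
                }
            }

            free(d->table);  /* the old bucket array */
            free(d);         /* the old header; the entries themselves are reused */
            return nd;
        }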

    Read the article

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's game of life: __global__ void gameOfLife(float* returnBuffer, int width, int height) { unsigned int x = blockIdx.x*blockDim.x + threadIdx.x; unsigned int y = blockIdx.y*blockDim.y + threadIdx.y; float p = tex2D(inputTex, x, y); float neighbors = 0; neighbors += tex2D(inputTex, x+1, y); neighbors += tex2D(inputTex, x-1, y); neighbors += tex2D(inputTex, x, y+1); neighbors += tex2D(inputTex, x, y-1); neighbors += tex2D(inputTex, x+1, y+1); neighbors += tex2D(inputTex, x-1, y-1); neighbors += tex2D(inputTex, x-1, y+1); neighbors += tex2D(inputTex, x+1, y-1); __syncthreads(); float final = 0; if(neighbors < 2) final = 0; else if(neighbors > 3) final = 0; else if(p != 0) final = 1; else if(neighbors == 3) final = 1; __syncthreads(); returnBuffer[x + y*width] = final; } I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure if I get how to do it right. The rest of the app is: memcpy the input array to a 2D texture inputTex stored in a CUDA array. Output is memcpy-ed from global memory to host and then dealt with. As you can see, a thread deals with a single pixel. I am unsure if that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVidia themselves say that the more threads, the better. I would love advice on this from someone with practical experience.

    Read the article

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that baseball database there is a master table which lists every player that has ever played. There is also a batting table, which tracks every players' batting statistics. I created a view to join those two together; hence the masterplusbatting table. CREATE TABLE `Master` ( `lahmanID` int(9) NOT NULL auto_increment, `playerID` varchar(10) NOT NULL default '', `nameFirst` varchar(50) default NULL, `nameLast` varchar(50) NOT NULL default '', PRIMARY KEY (`lahmanID`), KEY `playerID` (`playerID`), ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1; CREATE TABLE `Batting` ( `playerID` varchar(9) NOT NULL default '', `yearID` smallint(4) unsigned NOT NULL default '0', `teamID` char(3) NOT NULL default '', `lgID` char(2) NOT NULL default '', `HR` smallint(3) unsigned default NULL, PRIMARY KEY (`playerID`,`yearID`,`stint`), KEY `playerID` (`playerID`), KEY `team` (`teamID`,`yearID`,`lgID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following.... select f.yearID, f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried. select f.yearID, truncate(f.yearid/10,0) as decade,f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS group by decade You can see that I truncated the yearID in order to get 187, 188, 189 etc instead of 1897, 1885,. I then grouped by the decade, thinking that it would give me the highest per decade, but it is not returning the correct values. For example, it's giving me Adrian Beltre with 48 HR's in 2004 but everyone knows that Barry Bonds hit 73 HR in 2001. Can anyone give me some pointers?

    Read the article

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls: HMAC_update() / HMAC_final() // ripemd160 EVP_CipherUpdate() / EVP_CipherFinal() // cbc_blowfish These algorithms take an unsigned char * pointing to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My beliefs/reasoning are based on the following two properties: std::string (not wstring) internally uses 8-bit chars, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture. Encryption algorithms are either by design, or by implementation, endian neutral. I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.
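
    As a sanity check (a sketch, not the asker's code): both the HMAC and EVP interfaces consume a plain byte sequence plus a length, so the output depends only on those bytes, and identical UTF-8 input gives identical digests on big- and little-endian hosts. For example, a one-shot HMAC-RIPEMD160 over a std::string:

        #include <openssl/hmac.h>
        #include <openssl/evp.h>
        #include <string>
        #include <vector>

        // The digest is a pure function of the key bytes and message bytes;
        // no multi-byte host integers are read from the input buffer.
        std::vector<unsigned char> hmac_ripemd160(const std::string& key,
                                                  const std::string& utf8_msg)
        {
            unsigned char mac[EVP_MAX_MD_SIZE];
            unsigned int mac_len = 0;
            HMAC(EVP_ripemd160(),
                 key.data(), static_cast<int>(key.size()),
                 reinterpret_cast<const unsigned char*>(utf8_msg.data()),
                 utf8_msg.size(),
                 mac, &mac_len);
            return std::vector<unsigned char>(mac, mac + mac_len);
        }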

    Read the article

  • strcmp() but with 0-9 AFTER A-Z? (C/C++)

    - by Aaron
    For reasons I completely disagree with, but which "The Powers (of Anti-Usability) That Be" continue to decree despite my objections, I have a sorting routine which does basic strcmp() compares to sort entries by name. It works great; it's hard to get that one wrong. However, at the 11th hour, it's been decided that entries which begin with a number should come AFTER entries which begin with a letter, contrary to the ASCII ordering. They cite that the EBCDIC standard has numbers following letters, so the prior assumption isn't a universal truth, and I have no power to win this argument... but I digress. Therein lies my problem. I've replaced all appropriate references to strcmp with a new function called nonstd_strcmp, and now need to implement the modifications to accomplish the sort change. I've used a FreeBSD source as my base: http://freebsd.active-venture.com/FreeBSD-srctree/newsrc/libkern/strncmp.c.html if (n == 0) return (0); do { if (*s1 != *s2++) return (*(const unsigned char *)s1 - *(const unsigned char *)(s2 - 1)); if (*s1++ == 0) break; } while (--n != 0); return (0); I guess I might need to take some time away to really think about how it should be done, but I'm sure I'm not the only one who's experienced the brain-deadness of just-before-release spec changes.
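
    One way to do it, sketched under the assumption that the only required change is "digits sort after letters" and everything else keeps its ASCII order: compare through a small ranking function instead of the raw bytes.

        // Rank returns an int so the digits can be pushed past every 8-bit
        // character without colliding with the punctuation that follows 'z'.
        static int rank_of(unsigned char c)
        {
            if (c >= '0' && c <= '9')
                return 256 + (c - '0');   // digits now order after all letters
            return c;
        }

        int nonstd_strcmp(const char *s1, const char *s2)
        {
            while (*s1 != '\0' &&
                   rank_of((unsigned char)*s1) == rank_of((unsigned char)*s2)) {
                ++s1;
                ++s2;
            }
            return rank_of((unsigned char)*s1) - rank_of((unsigned char)*s2);
        }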

    Read the article

  • Managed C++ or C# .NET, Downloading from rapidshare?

    - by cruisx
    I am trying to download a file from rapidshare via C++ .NET but I'm having a bit of trouble. The address used to be "https://ssl.rapidshare.com/cgi-bin/premiumzone.cgi" but that no longer works, does anyone know what the new one is? The code works but the file size is always 1KB, I don't think its connecting to the right server. private: void downloadFileAsync(String^ fileUrl) { String^ uriString; //fileUrl = "http://rapidshare.com/files/356458319/Keeping.Up.with.the.Kardashians.S04E10.Delivering.Baby.Mason.HDTV.XviD-MOMENTUM.rar"; uriString = "https://ssl.rapidshare.com/premzone.html";//"https://ssl.rapidshare.com"; NameValueCollection^ postvals = gcnew NameValueCollection(); postvals->Add("login", "bob"); postvals->Add("password", "12345"); // postvals->Add("uselandingpage", "1"); WebClient^ myWebClient = gcnew WebClient(); array<unsigned char>^ responseArray = gcnew array<unsigned char>(10024); responseArray = myWebClient->UploadValues(uriString, "POST", postvals); StreamReader^ strRdr = gcnew StreamReader(gcnew MemoryStream(responseArray)); String^ cookiestr = myWebClient->ResponseHeaders->Get("Set-Cookie"); myWebClient->Headers->Add("Cookie", cookiestr); //myWebClient->DownloadFileCompleted += gcnew AsyncCompletedEventHandler(myWebClient->DownloadFileCompleted); myWebClient->DownloadFileAsync(gcnew Uri(fileUrl),"C:\rapid\"+Path::GetFileName(fileUrl)); }

    Read the article

  • How to get the size of a binary tree?

    - by Andrei Ciobanu
    I have a very simple binary tree structure, something like: struct nmbintree_s { unsigned int size; int (*cmp)(const void *e1, const void *e2); void (*destructor)(void *data); nmbintree_node *root; }; struct nmbintree_node_s { void *data; struct nmbintree_node_s *right; struct nmbintree_node_s *left; }; Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'. I was thinking of two approaches: 1) Using a recursive function, something like: unsigned int nmbintree_size(struct nmbintree_node* node) { if (node==NULL) { return(0); } return( nmbintree_size(node->left) + nmbintree_size(node->right) + 1 ); } 2) A preorder / inorder / postorder traversal done in an iterative way (using a stack / queue) + counting the nodes. Which approach do you think is more 'memory failure proof' / performant? Any other suggestions / tips? NOTE: I am probably going to use this implementation in the future for small projects of mine, so I don't want it to fail unexpectedly :).
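
    For comparison, a sketch of the iterative option (using the node layout above): an explicit stack avoids overflowing the call stack on very deep, degenerate trees, while the recursive version is usually the simpler and equally fast choice for balanced ones.

        #include <stack>
        #include <cstddef>

        unsigned int nmbintree_size_iter(struct nmbintree_node_s *root)
        {
            unsigned int count = 0;
            std::stack<struct nmbintree_node_s*> pending;
            if (root != NULL)
                pending.push(root);

            while (!pending.empty()) {
                struct nmbintree_node_s *n = pending.top();
                pending.pop();
                ++count;                              // count this node
                if (n->left != NULL)  pending.push(n->left);
                if (n->right != NULL) pending.push(n->right);
            }
            return count;
        }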

    Read the article

  • How to declare an array of 2D array pointers and access them?

    - by vikramtheone
    Hi guys, how can I declare a 2D array of 2D pointers, and later access the individual array elements of the 2D arrays? Is my approach correct? main() { int i, j; int **array[10][10]; int **ptr = NULL; for(i=0;i<10;i++) { for(j=0;j<10;j++) { alloc_2D(&ptr, 10, 10); array[i][j] = ptr; } } //After I do this, how can I access the individual 2D array //and then the individual elements of the 2D arrays? } void alloc_2D(float ***memory, unsigned int rows, unsigned int cols) { float **ptr; *memory = NULL; ptr = malloc(rows * sizeof(float*)); if(ptr == NULL) { status = ERROR; printf("\nERROR: Memory allocation failed!"); } else { int i; for(i = 0; i< rows; i++) { ptr[i] = malloc(cols * sizeof(float)); if(ptr[i]==NULL) { status = ERROR; printf("\nERROR: Memory allocation failed!"); } } } *memory = ptr; }
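
    A hedged sketch of one consistent way to set this up (note the element type has to match: the question's array is int** while alloc_2D hands back float**): array[i][j] stores a pointer to one dynamically allocated rows x cols block, and an element of that block is then array[i][j][r][c].

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            float **array[10][10];                 // 10x10 grid of pointers to 2D blocks

            for (int i = 0; i < 10; i++) {
                for (int j = 0; j < 10; j++) {
                    float **block = (float **)malloc(10 * sizeof(float *));
                    for (int r = 0; r < 10; r++)
                        block[r] = (float *)calloc(10, sizeof(float));
                    array[i][j] = block;           // same role as alloc_2D + assignment
                }
            }

            array[2][3][4][5] = 1.5f;              // element (4,5) of the block at (2,3)
            printf("%f\n", array[2][3][4][5]);
            return 0;
        }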

    Read the article

  • _beginthreadex and socket

    - by user638197
    Hi, I have a question about the _beginthreadex function, regarding the third and fourth parameters. If I have this line to create the thread: hThread=(HANDLE)_beginthreadex(0,0, &RunThread, &m_socket,CREATE_SUSPENDED,&threadID ); m_socket is the socket that I want inside the thread (fourth parameter), and I have the RunThread function (third parameter) declared this way: static unsigned __stdcall RunThread (void* ptr) { return 0; } Is it sufficient to create the thread this way regardless of whether m_socket has something in it or not? Thanks in advance. Thank you for the response, Ciaran Keating; it helped me understand the thread better. I'll explain the situation a little more. I'm creating the thread in this function inside a class: public: void getClientsConnection() { numberOfClients = 1; SOCKET temporalSocket = NULL; firstClient = NULL; secondClient = NULL; while (numberOfClients < 2) { temporalSocket = SOCKET_ERROR; while (temporalSocket == SOCKET_ERROR) { temporalSocket = accept(m_socket, NULL, NULL); //----------------------------------------------- HANDLE hThread; unsigned threadID; hThread=(HANDLE)_beginthreadex(0,0, &RunThread, &m_socket,CREATE_SUSPENDED,&threadID ); WaitForSingleObject( hThread, INFINITE ); if(!hThread) printf("ERROR AL CREAR EL HILO: %ld\n", WSAGetLastError()); //----------------------------------------------- } if(firstClient == NULL) { firstClient = temporalSocket; muebleC1 = temporalSocket; actionC1 = temporalSocket; ++numberOfClients; printf("CLIENTE 1 CONECTADO\n"); } else { secondClient = temporalSocket; muebleC2 = temporalSocket; actionC2 = temporalSocket; ++numberOfClients; printf("CLIENTE 2 CONECTADO\n"); } } } What I'm trying to do is to have the socket inside the thread while it waits for a client connection. Is this feasible with the thread code as I have it? I can change the state of the thread, that is not a problem. Thanks again
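
    A common pattern for this, sketched here as an assumption about the intent (one worker thread per accepted client, and the accept loop not blocked by WaitForSingleObject): pass the accepted SOCKET to the thread by value rather than handing every thread the address of the same member variable.

        #include <winsock2.h>
        #include <windows.h>
        #include <process.h>

        static unsigned __stdcall ClientThread(void *arg)
        {
            SOCKET client = (SOCKET)(ULONG_PTR)arg;   // the accepted socket, by value
            // ... recv()/send() on 'client' here ...
            closesocket(client);
            return 0;
        }

        // Inside the accept loop (listenSock is the listening socket):
        //   SOCKET client = accept(listenSock, NULL, NULL);
        //   if (client != INVALID_SOCKET) {
        //       unsigned tid;
        //       HANDLE h = (HANDLE)_beginthreadex(NULL, 0, ClientThread,
        //                                         (void*)(ULONG_PTR)client, 0, &tid);
        //       if (h) CloseHandle(h);   // let the thread run; don't wait on it here
        //   }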

    Read the article

  • Correct answer will not output

    - by rEgonicS
    I made a program that returns the sum of all primes under 2 million. I really have no idea what's going on with this one; I get 142891895587 as my answer when the correct answer is 142913828922. It seems like it's missing a few primes. I'm pretty sure the getPrimes function works as it is supposed to; I used it a couple of times before and it worked correctly then. The code is as follows: vector<int> getPrimes(int number); int main() { unsigned long int sum = 0; vector<int> primes = getPrimes(2000000); for(int i = 0; i < primes.size(); i++) { sum += primes[i]; } cout << sum; return 0; } vector<int> getPrimes(int number) { vector<bool> sieve(number+1,false); vector<int> primes; sieve[0] = true; sieve[1] = true; for(int i = 2; i <= number; i++) { if(sieve[i]==false) { primes.push_back(i); unsigned long int temp = i*i; while(temp <= number) { sieve[temp] = true; temp = temp + i; } } } return primes; }
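
    A hedged diagnosis, since it cannot be verified without running it: inside getPrimes, i*i is evaluated in int, and for primes larger than about 46341 the product overflows (undefined behaviour). When the wrapped value happens to land back inside the sieve, the while loop then marks an arithmetic progression of genuine primes as composite, which is consistent with the sum coming out slightly too small. Doing the multiplication in a wide unsigned type (or only sieving while i*i <= number) avoids it; a drop-in replacement for the marking loop, assuming the surrounding declarations from the question:

        for (int i = 2; i <= number; i++) {
            if (sieve[i] == false) {
                primes.push_back(i);
                unsigned long long temp =
                    (unsigned long long)i * i;        // no int overflow
                while (temp <= (unsigned long long)number) {
                    sieve[temp] = true;
                    temp += i;
                }
            }
        }

    Declaring sum as unsigned long long (and printing it with a 64-bit unsigned format) is also safer, since unsigned long is only 32 bits on some platforms.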

    Read the article

  • Why do I get this strange output behavior?

    - by WilliamKF
    I have the following program, test.cc: #include <iostream> unsigned char bogus1[] = { // Changing # of periods (0x2e) changes output after periods. 0x2e, 0x2e, 0x2e, 0x2e }; unsigned int bogus2 = 1816; // Changing this value changes output. int main() { std::clog << bogus1; } I build it with g++ -g -c -o test.o test.cc; g++ -static-libgcc -o test test.o using g++ version 3.4.6. I run it through valgrind and nothing is reported wrong. However, the output has two extra control characters and looks like this: .... That's a control-X and a control-G at the end. If you change the value of bogus2 you get different control characters. If you change the number of periods in the array, the issue goes away or changes. I suspect it is a memory corruption bug in the compiler or the iostream package. What is going on here?
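
    A hedged explanation of what the output suggests: operator<< treats an unsigned char* as a NUL-terminated string, and bogus1 has no terminating 0, so the stream keeps reading into whatever the linker placed next, here apparently the bytes of bogus2 (1816 == 0x0718, whose little-endian bytes 0x18 and 0x07 are exactly Ctrl-X and Ctrl-G). It is reading past the array, not compiler or iostream corruption. Two ways to fix it:

        #include <iostream>

        unsigned char bogus1[] = { 0x2e, 0x2e, 0x2e, 0x2e, 0x00 };  // add a terminating NUL

        int main()
        {
            std::clog << bogus1;                                    // now stops at the NUL
            // Or keep the array unterminated and write an explicit length:
            // std::clog.write(reinterpret_cast<const char*>(bogus1), 4);
        }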

    Read the article

  • Is it Possible to Use Constraints on Hierarchical Data in a Self-Referential Table?

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data: +--------+-------------+ | Field | Type | +--------+-------------+ | id | int(10) | | parent | int(10) | | name | varchar(45) | +--------+-------------+ The table is self-referential in that the parent_id refers to id. So you might have the following data: +----+--------+---------------+ | id | parent | name | +----+--------+---------------+ | 1 | 0 | fruit | | 2 | 0 | vegetable | | 3 | 1 | apple | | 4 | 1 | orange | | 5 | 3 | red delicious | | 6 | 3 | granny smith | | 7 | 3 | gala | +----+--------+---------------+ Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data to cascade on update and prevent deletion of a record if it has any "children." So I used the following: CREATE TABLE `test`.`fruit` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `parent` INT(10) UNSIGNED, `name` VARCHAR(45) NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE ON DELETE RESTRICT ) ENGINE = InnoDB; From what I understand, this should fit my requirements. (And parent must default to null to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade: Cannot delete or update a parent row: a foreign key constraint fails (`test`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`) REFERENCES `fruit` (`id`) ON UPDATE CASCADE) What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.

    Read the article

  • Calling from C# to a C function which accepts a struct array allocated by the caller

    - by lifey
    I have the following C struct: struct XYZ { void *a; char fn[MAX_FN]; unsigned long l; unsigned long o; }; And I want to call the following function from C#: extern "C" int func(int handle, int *numEntries, XYZ *xyzTbl); where xyzTbl is an array of XYZ of size numEntries which is allocated by the caller. I have defined the following C# struct: [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential, CharSet = System.Runtime.InteropServices.CharSet.Ansi)] public struct XYZ { public System.IntPtr rva; [System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.ByValTStr, SizeConst = 128)] public string fn; public uint l; public uint o; } and a method: [System.Runtime.InteropServices.DllImport(@"xyzdll.dll", CallingConvention = CallingConvention.Cdecl)] public static extern Int32 func(Int32 handle, ref Int32 numntries, [MarshalAs(UnmanagedType.LPArray)] XYZ[] arr); Then I try to call the function: XYZ[] xyz = new XYZ[numEntries]; for (...) xyz[i] = new XYZ(); func(handle,numEntries,xyz); Of course it does not work. Can someone shed light on what I am doing wrong?

    Read the article

  • Generating Fibonacci numbers using variable-length arrays: compiler error

    - by Nano HE
    Compile error in VS2010 (Win32 Console Application template) for the code below. How can I fix it? unsigned long long int Fibonacci[numFibs]; // error occurs here error C2057: expected constant expression error C2466: cannot allocate an array of constant size 0 error C2133: 'Fibonacci' : unknown size Complete code attached (it's sample code from the Programming in C, 3rd edition book, unmodified): int main() { int i, numFibs; printf("How may Fibonacci numbers do you want (between 1 to 75)? "); scanf("%i", &numFibs); if ( numFibs < 1 || numFibs > 75){ printf("Bad number, sorry!\n"); return 1; } unsigned long long int Fibonacci[numFibs]; Fibonacci[0] = 0; // by definition Fibonacci[1] = 1; // ditto for ( i = 2; i < numFibs; ++i) Fibonacci[i] = Fibonacci[i-2] + Fibonacci[i-1]; for ( i = 0; i < numFibs; ++i) printf("%11u",Fibonacci[i]); printf("\n"); return 0; }
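
    The underlying issue, stated with the usual hedge: this is a C99 variable-length array, and Visual C++ (including VS2010) does not support VLAs, so the array size must be a compile-time constant or the storage must be allocated at run time. A sketch of the run-time-allocation fix for the VLA portion of main (assumes #include <stdlib.h>; %llu is the matching format for unsigned long long):

        unsigned long long *Fibonacci =
            (unsigned long long *)malloc(numFibs * sizeof(unsigned long long));
        if (Fibonacci == NULL)
            return 1;

        Fibonacci[0] = 0;                 // by definition
        if (numFibs > 1)
            Fibonacci[1] = 1;             // ditto
        for (i = 2; i < numFibs; ++i)
            Fibonacci[i] = Fibonacci[i-2] + Fibonacci[i-1];
        for (i = 0; i < numFibs; ++i)
            printf("%llu ", Fibonacci[i]);
        printf("\n");

        free(Fibonacci);
        return 0;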

    Read the article

  • add uchar values in ushort array with sse2 or sse3

    - by pompolus
    I have an unsigned short dst[16][16] matrix and a larger unsigned char src[m][n] matrix. Now I have to index into the src matrix and add a 16x16 submatrix to dst, using SSE2 or SSE3. In my older implementation, I was sure that my summed values were never greater than 256, so I could do this: for (int row = 0; row < 16; ++row) { __m128i subMat = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src)); dst[row] = _mm_add_epi8(dst[row], subMat); src += W; // Step to next row i need to add } where W is an offset to reach the desired rows. This code works, but now my values in src are larger and the sums could be greater than 256, so I need to store them as ushort. I've tried this: for (int row = 0; row < 16; ++row) { __m128i subMat = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src)); dst[row] = _mm_add_epi16(dst[row], subMat); src += W; // Step to next row i need to add } but it doesn't work. I'm not so good with SSE, so any help will be appreciated.
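
    A sketch of the usual SSE2 approach, under the assumption that each dst row is 16 unsigned shorts and each src row is 16 unsigned chars: widen the 16 bytes to two vectors of eight 16-bit lanes by unpacking against zero, then do two 16-bit adds per row.

        #include <emmintrin.h>   // SSE2 intrinsics

        for (int row = 0; row < 16; ++row) {
            __m128i bytes = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src));
            __m128i zero  = _mm_setzero_si128();
            __m128i lo16  = _mm_unpacklo_epi8(bytes, zero);   // src[0..7]  as u16
            __m128i hi16  = _mm_unpackhi_epi8(bytes, zero);   // src[8..15] as u16

            __m128i *drow = reinterpret_cast<__m128i*>(dst[row]);
            _mm_storeu_si128(drow,     _mm_add_epi16(_mm_loadu_si128(drow),     lo16));
            _mm_storeu_si128(drow + 1, _mm_add_epi16(_mm_loadu_si128(drow + 1), hi16));

            src += W;   // step to the next source row, as in the question
        }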

    Read the article

  • How come some C++ functions with unspecified linkage build with C linkage?

    - by christoffer
    This is something that makes me fairly perplexed. I have a C++ file that implements a set of functions, and a header file that defines prototypes for them. When building with Visual Studio or MingW-gcc, I get linking errors on two of the functions, and adding an 'extern "C"' qualifier resolved the error. How is this possible? Header file, "some_header.h": // Definition of struct DEMO_GLOBAL_DATA omitted DWORD WINAPI ThreadFunction(LPVOID lpData); void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen); void CheckValid(DEMO_GLOBAL_DATA *pData); int HandleStart(DEMO_GLOBAL_DATA * pDAta, TCHAR * pLogFileName); void HandleEnd(DEMO_GLOBAL_DATA *pData); C++ file, "some_implementation.cpp" #include "some_header.h" DWORD WINAPI ThreadFunction(LPVOID lpData) { /* omitted */ } void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen) { /* omitted */ } void CheckValid(DEMO_GLOBAL_DATA *pData) { /* omitted */ } int HandleStart(DEMO_GLOBAL_DATA * pDAta, TCHAR * pLogFileName) { /* omitted */ } void HandleEnd(DEMO_GLOBAL_DATA *pData) { /* omitted */ } The implementations compile without warnings, but when linking with the UI code that calls these, I get a normal error LNK2001: unresolved external symbol "int __cdecl HandleStart(struct _DEMO_GLOBAL_DATA *, wchar_t *) error LNK2001: unresolved external symbol "void __cdecl CheckValid(struct _DEMO_MAIN_GLOBAL_DATA * What really confuses me, now, is that only these two functions (HandleStart and CheckValid) seems to be built with C linkage. Explicitly adding "extern 'C'" declarations for only these two resolved the linking error, and the application builds and runs. Adding "extern 'C'" on some other function, such as HandleEnd, introduces a new linking error, so that one is obviously compiled correctly. The implementation file is never modified in any of this, only the prototypes.
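
    For reference, a common arrangement (offered as general practice, not as this project's confirmed cause or fix) is to give the affected prototypes one explicit linkage in the header, so the implementation file and every caller agree no matter which translation unit included what:

        // some_header.h
        #ifdef __cplusplus
        extern "C" {
        #endif

        int  HandleStart(DEMO_GLOBAL_DATA *pData, TCHAR *pLogFileName);
        void CheckValid(DEMO_GLOBAL_DATA *pData);
        void HandleEnd(DEMO_GLOBAL_DATA *pData);

        #ifdef __cplusplus
        }
        #endif

    If both sides genuinely saw the same unmodified prototypes, the mismatch more likely comes from differing type definitions seen by each translation unit (for example TCHAR or the struct tag), since type changes alter the C++-mangled name while leaving a C-linkage name untouched.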

    Read the article

  • How do C++ compilers actually pass reference parameters?

    - by T.E.D.
    This question came about as a result of some mixed-language programming. I had a Fortran routine I wanted to call from C++ code. Fortran passes all its parameters by reference (unless you tell it otherwise). So I thought I'd be clever (bad start right there) in my C++ code and declare the Fortran routine something like this: extern "C" void FORTRAN_ROUTINE (unsigned & flag); This code worked for a while but (of course right when I needed to leave) suddenly started blowing up on a return call. Clear indication of a munged call stack. Another engineer came behind me and fixed the problem, declaring that the routine had to be defined in C++ as extern "C" void FORTRAN_ROUTINE (unsigned * flag); I'd accept that except for two things. One is that it seems rather counter-intuitive for the compiler to not pass reference parameters by reference, and I can find no documentation anywhere that says that. The other is that he changed a whole raft of other code in there at the same time, so it theoretically could have been another change that fixed whatever the issue was. So the question is, how does C++ actually pass reference parameters? Is it perhaps free to do copy-in, copy-out for small values or something? In other words, are reference parameters utterly useless in mixed-language programming? I'd like to know so I don't make this same code-killing mistake ever again.
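
    For what it's worth, a sketch of how the two declarations usually compare at the call site (this is common ABI practice, not a guarantee of the C++ standard): a T& parameter is normally passed as the address of the object, exactly like T*, so the two forms generally produce the same machine-level call and differ only in whether the caller writes the & explicitly.

        // Hypothetical declarations for illustration; FORTRAN_ROUTINE itself is
        // whatever the Fortran compiler actually exports.
        extern "C" void FORTRAN_ROUTINE_REF(unsigned &flag);   // address taken implicitly
        extern "C" void FORTRAN_ROUTINE_PTR(unsigned *flag);   // address taken explicitly

        void call_both(unsigned value)
        {
            FORTRAN_ROUTINE_REF(value);    // compiler passes &value for you
            FORTRAN_ROUTINE_PTR(&value);   // you pass &value yourself
        }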

    Read the article

  • Is it necessary to mysql real escape when using alter table?

    - by cgwebprojects
    I noticed the other day that I cannot bind variables when using PDO with ALTER TABLE; for example, the following will not work: $q = $dbc -> prepare("ALTER TABLE emblems ADD ? TINYINT(1) UNSIGNED NOT NULL DEFAULT '0', ADD ? DATETIME NOT NULL"); $q -> execute(array($emblemDB, $emblemDB . 'Date')); So is it necessary to use mysql_real_escape_string() and do it like below? // ESCAPE NAME FOR MYSQL INSERTION $emblemDB = mysql_real_escape_string($emblemDB); // INSERT EMBLEM DETAILS INTO DATABASE $q = $dbc -> prepare("ALTER TABLE emblems ADD " . $emblemDB . " TINYINT(1) UNSIGNED NOT NULL DEFAULT '0', ADD " . $emblemDB . "Date DATETIME NOT NULL"); $q -> execute(); Or do I not need to add in mysql_real_escape_string(), as the only thing the query can do is ADD columns? Thanks

    Read the article

  • Threads are blocked in malloc and free, virtual size

    - by Albert Wang
    Hi, I'm running a 64-bit multi-threaded program on a Windows Server 2003 (x64) machine. It ran into a case where some of the threads seem to be blocked in the malloc or free function forever. The stack traces are as follows: ntdll.dll!NtWaitForSingleObject() + 0xa bytes ntdll.dll!RtlpWaitOnCriticalSection() - 0x1aa bytes ntdll.dll!RtlEnterCriticalSection() + 0xb040 bytes ntdll.dll!RtlpDebugPageHeapAllocate() + 0x2f6 bytes ntdll.dll!RtlDebugAllocateHeap() + 0x40 bytes ntdll.dll!RtlAllocateHeapSlowly() + 0x5e898 bytes ntdll.dll!RtlAllocateHeap() - 0x1711a bytes MyProg.exe!malloc(unsigned __int64 size=0) Line 168 C MyProg.exe!operator new(unsigned __int64 size=1) Line 59 + 0x5 bytes C++ ntdll.dll!NtWaitForSingleObject() ntdll.dll!RtlpWaitOnCriticalSection() ntdll.dll!RtlEnterCriticalSection() ntdll.dll!RtlpDebugPageHeapFree() ntdll.dll!RtlDebugFreeHeap() ntdll.dll!RtlFreeHeapSlowly() ntdll.dll!RtlFreeHeap() MyProg.exe!free(void * pBlock=0x000000007e8e4fe0) C BTW, the parameter values shown for the new operator are not correct here, maybe due to optimization. Also, at the same time, I found in Process Explorer that the virtual size of this program is 10GB, but the private bytes and working set are very small (<2GB). We do have some threads using VirtualAlloc, but in a way that commits the memory in the call, and these threads are not blocked. m_pBuf = VirtualAlloc(NULL, m_size, MEM_COMMIT, PAGE_READWRITE); ...... VirtualFree(m_pBuf, 0, MEM_RELEASE); This looks strange to me; it seems a lot of virtual space is reserved but not committed, and malloc/free is blocked on a lock. I'm wondering if there is some corruption in the memory/objects, so I plan to turn on gflags with PageHeap to troubleshoot this. Does anyone have similar experience with this? Could you share it with me so I may get more hints? Thanks a lot!

    Read the article
