Search Results

Search found 1466 results on 59 pages for 'sizeof'.

Page 20 of 59

  • Binary Tree in C Insertion Error

    - by Paul
    I'm quite new to C and I'm trying to implement a Binary Tree in C which will store a number and a string and then print them off e.g. 1 : Bread 2 : WashingUpLiquid etc. The code I have so far is: #include <stdio.h> #include <stdlib.h> #define LENGTH 300 struct node { int data; char * definition; struct node *left; struct node *right; }; struct node *node_insert(struct node *p, int value, char * word); void print_preorder(struct node *p); int main(void) { int i = 0; int d = 0; char def[LENGTH]; struct node *root = NULL; for(i = 0; i < 2; i++) { printf("Please enter a number: \n"); scanf("%d", &d); printf("Please enter a definition for this word:\n"); scanf("%s", def); root = node_insert(root, d, def); printf("%s\n", def); } printf("preorder : "); print_preorder(root); printf("\n"); return 0; } struct node *node_insert(struct node *p, int value, char * word) { struct node *tmp_one = NULL; struct node *tmp_two = NULL; if(p == NULL) { p = (struct node *)malloc(sizeof(struct node)); p->data = value; p->definition = word; p->left = p->right = NULL; } else { tmp_one = p; while(tmp_one != NULL) { tmp_two = tmp_one; if(tmp_one->data > value) tmp_one = tmp_one->left; else tmp_one = tmp_one->right; } if(tmp_two->data > value) { tmp_two->left = (struct node *)malloc(sizeof(struct node)); tmp_two = tmp_two->left; tmp_two->data = value; tmp_two->definition = word; tmp_two->left = tmp_two->right = NULL; } else { tmp_two->right = (struct node *)malloc(sizeof(struct node)); tmp_two = tmp_two->right; tmp_two->data = value; tmp_two->definition = word; tmp_two->left = tmp_two->right = NULL; } } return(p); } void print_preorder(struct node *p) { if(p != NULL) { printf("%d : %s\n", p->data, p->definition); print_preorder(p->left); print_preorder(p->right); } } At the moment it seems to work for the ints but the description part only prints out for the last one entered. I assume it has something to do with pointers on the char array but I had no luck getting it to work. Any ideas or advice? Thanks
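
    A likely cause: node_insert() stores the caller's def[] pointer in every node, and the next scanf() overwrites that same buffer, so every node ends up pointing at the last string entered. A minimal sketch of giving each node its own copy (the helper name is illustrative; it reuses the question's struct):

        #include <stdlib.h>
        #include <string.h>

        struct node {
            int data;
            char *definition;
            struct node *left, *right;
        };

        /* Copy the string into memory owned by the node. strdup() is POSIX;
         * malloc(strlen(word) + 1) + strcpy() does the same job elsewhere. */
        struct node *make_node(int value, const char *word)
        {
            struct node *n = malloc(sizeof *n);
            if (n != NULL) {
                n->data = value;
                n->definition = strdup(word);
                n->left = n->right = NULL;
            }
            return n;
        }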

    Read the article

  • FreeType2 Bitmap to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has a unsigned char* buffer that contains the data to write. I have got somewhat working save it disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!! C++/CLI code. FT_Bitmap *bitmap = &face->glyph->bitmap; int width = (face->bitmap->metrics.width / 64); int height = (face->bitmap->metrics.height / 64); // must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); // as *.tga void *buffer = bytes ? malloc(bytes) : NULL; if (buffer) { memset(buffer, 0, bytes); for (int i = 0; i < glyph->rows; ++i) memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch); WriteTGA("Test.tga", buffer, width, height); } array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes); // as *.bmp Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); systemBitmap->UnlockBits(bitmapData); systemBitmap->Save("Test.bmp"); Reference, FT_Bitmap typedef struct FT_Bitmap_ { int rows; int width; int pitch; unsigned char* buffer; short num_grays; char pixel_mode; char palette_mode; void* palette; } FT_Bitmap; Reference, WriteTGA bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height) { FILE *fp = NULL; fopen_s(&fp, filename, "wb"); if (fp) { TGAHeader header; memset(&header, 0, sizeof(TGAHeader)); header.imageType = 3; header.width = width; header.height = height; header.depth = 8; header.descriptor = 0x20; fwrite(&header, sizeof(header), 1, fp); fwrite(pxl, sizeof(uint8) * width * height, 1, fp); fclose(fp); return true; } return false; }
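
    One thing that would explain the broken *.bmp: the glyph is 8-bit grayscale but the destination bitmap is Format24bppRgb, so each source byte has to become three bytes, and each destination row must be padded to a 4-byte boundary (BitmapData::Stride) rather than copied as one flat block. A plain-C sketch of that expansion (names are illustrative):

        /* Expand an 8-bit grayscale glyph into 4-byte-aligned 24bpp BGR rows.
         * 'pitch' is FT_Bitmap.pitch, the source row size in bytes. */
        void gray_to_bgr24(unsigned char *dst, const unsigned char *src,
                           int width, int rows, int pitch)
        {
            int stride = ((width * 24 + 31) & ~31) >> 3;   /* padded 24bpp row */
            for (int y = 0; y < rows; ++y) {
                for (int x = 0; x < width; ++x) {
                    unsigned char g = src[y * pitch + x];
                    unsigned char *out = dst + y * stride + x * 3;
                    out[0] = out[1] = out[2] = g;          /* gray -> B, G, R */
                }
            }
        }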

    Read the article

  • unsigned char* buffer (FreeType2 Bitmap) to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has a unsigned char* buffer that contains the data to write. I have got somewhat working save it disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!! C++/CLI code. FT_Bitmap *bitmap = &face->glyph->bitmap; int width = (face->bitmap->metrics.width / 64); int height = (face->bitmap->metrics.height / 64); // must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); // as *.tga void *buffer = bytes ? malloc(bytes) : NULL; if (buffer) { memset(buffer, 0, bytes); for (int i = 0; i < glyph->rows; ++i) memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch); WriteTGA("Test.tga", buffer, width, height); } // as *.bmp array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes); Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); systemBitmap->UnlockBits(bitmapData); systemBitmap->Save("Test.bmp"); Reference, FT_Bitmap typedef struct FT_Bitmap_ { int rows; int width; int pitch; unsigned char* buffer; short num_grays; char pixel_mode; char palette_mode; void* palette; } FT_Bitmap; Reference, WriteTGA bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height) { FILE *fp = NULL; fopen_s(&fp, filename, "wb"); if (fp) { TGAHeader header; memset(&header, 0, sizeof(TGAHeader)); header.imageType = 3; header.width = width; header.height = height; header.depth = 8; header.descriptor = 0x20; fwrite(&header, sizeof(header), 1, fp); fwrite(pxl, sizeof(uint8) * width * height, 1, fp); fclose(fp); return true; } return false; } Update FT_Bitmap *bitmap = &face->glyph->bitmap; // stride must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, target->PixelFormat); array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); target->UnlockBits(bitmapData);
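
    For the Format8bppIndexed path in the update, the usual catches are that BitmapData->Stride (4-byte aligned) is generally not equal to FT_Bitmap.pitch, so a single flat copy of 'bytes' walks off the end of the glyph buffer, and that an 8bpp indexed bitmap usually needs a grayscale palette assigned before it renders sensibly. A hedged sketch of the row-by-row copy:

        #include <string.h>

        /* Copy 'width' useful bytes per row; source and destination rows have
         * different physical sizes (pitch vs. stride). */
        void copy_rows_8bpp(unsigned char *dst, int dst_stride,
                            const unsigned char *src, int src_pitch,
                            int width, int rows)
        {
            for (int y = 0; y < rows; ++y)
                memcpy(dst + y * dst_stride, src + y * src_pitch, width);
        }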

    Read the article

  • processing an audio wav file with C

    - by sa125
    Hi - I'm working on processing the amplitude of a wav file and scaling it by some decimal factor. I'm trying to wrap my head around how to read and re-write the file in a memory-efficient way while also trying to tackle the nuances of the language (I'm new to C). The file can be in either an 8- or 16-bit format. The way I thought of doing this is by first reading the header data into some pre-defined struct, and then processing the actual data in a loop where I'll read a chunk of data into a buffer, do whatever is needed to it, and then write it to the output. #include <stdio.h> #include <stdlib.h> typedef struct header { char chunk_id[4]; int chunk_size; char format[4]; char subchunk1_id[4]; int subchunk1_size; short int audio_format; short int num_channels; int sample_rate; int byte_rate; short int block_align; short int bits_per_sample; short int extra_param_size; char subchunk2_id[4]; int subchunk2_size; } header; typedef struct header* header_p; void scale_wav_file(char * input, float factor, int is_8bit) { FILE * infile = fopen(input, "rb"); FILE * outfile = fopen("outfile.wav", "wb"); int BUFSIZE = 4000, i, MAX_8BIT_AMP = 255, MAX_16BIT_AMP = 32678; // used for processing 8-bit file unsigned char inbuff8[BUFSIZE], outbuff8[BUFSIZE]; // used for processing 16-bit file short int inbuff16[BUFSIZE], outbuff16[BUFSIZE]; // header_p points to a header struct that contains the file's metadata fields header_p meta = (header_p)malloc(sizeof(header)); if (infile) { // read and write header data fread(meta, 1, sizeof(header), infile); fwrite(meta, 1, sizeof(meta), outfile); while (!feof(infile)) { if (is_8bit) { fread(inbuff8, 1, BUFSIZE, infile); } else { fread(inbuff16, 1, BUFSIZE, infile); } // scale amplitude for 8/16 bits for (i=0; i < BUFSIZE; ++i) { if (is_8bit) { outbuff8[i] = factor * inbuff8[i]; if ((int)outbuff8[i] > MAX_8BIT_AMP) { outbuff8[i] = MAX_8BIT_AMP; } } else { outbuff16[i] = factor * inbuff16[i]; if ((int)outbuff16[i] > MAX_16BIT_AMP) { outbuff16[i] = MAX_16BIT_AMP; } else if ((int)outbuff16[i] < -MAX_16BIT_AMP) { outbuff16[i] = -MAX_16BIT_AMP; } } } // write to output file for 8/16 bit if (is_8bit) { fwrite(outbuff8, 1, BUFSIZE, outfile); } else { fwrite(outbuff16, 1, BUFSIZE, outfile); } } } // cleanup if (infile) { fclose(infile); } if (outfile) { fclose(outfile); } if (meta) { free(meta); } } int main (int argc, char const *argv[]) { char infile[] = "file.wav"; float factor = 0.5; scale_wav_file(infile, factor, 0); return 0; } I'm getting differing file sizes at the end (by 1k or so, for a 40Mb file), and I suspect this is due to the fact that I'm writing an entire buffer to the output, even though the file may have terminated before filling the entire buffer size. Also, the output file is messed up - won't play or open - so I'm probably doing the whole thing wrong. Any tips on where I'm messing up will be great. Thanks!
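
    Two things would account for both symptoms: fwrite(meta, 1, sizeof(meta), outfile) writes only sizeof(header *) bytes (the pointer, not the struct), and the loop writes a full BUFSIZE even when the final fread() returned fewer bytes, padding the output. A minimal sketch of a read/scale/write loop driven by fread()'s return value (8-bit case, names illustrative):

        #include <stdio.h>

        static void scale_8bit(FILE *in, FILE *out, float factor)
        {
            unsigned char buf[4096];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
                for (size_t i = 0; i < n; ++i) {
                    float v = factor * buf[i];
                    buf[i] = (unsigned char)(v > 255.0f ? 255.0f : v);
                }
                fwrite(buf, 1, n, out);   /* write n, not sizeof buf */
            }
        }

    The header itself would then be written with fwrite(meta, 1, sizeof(header), outfile).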

    Read the article

  • multiple valgrind errors: Conditional jump or move depends on uninitialised value(s)

    - by Hristo
    I'm running valgrind and I'm getting the following error (this is not the only one): ==21743== Conditional jump or move depends on uninitialised value(s) ==21743== at 0x4A06509: index (mc_replace_strmem.c:164) ==21743== by 0x33B7CBB3CD: gaih_inet (in /lib64/libc-2.5.so) ==21743== by 0x33B7CBD629: getaddrinfo (in /lib64/libc-2.5.so) ==21743== by 0x401A5F: tunnelURL (proxy.c:336) ==21743== by 0x40142A: client_thread (proxy.c:194) ==21743== by 0x33B8806616: start_thread (in /lib64/libpthread-2.5.so) ==21743== by 0x33B7CD3C2C: clone (in /lib64/libc-2.5.so) My tunnelURL() function looks like this: char * tunnelURL(char *url) { char * a = strstr(url, "//"); a += 2; char * path = strstr(a, "/"); char host[256]; strncpy (host, a, strlen(a)-strlen(path)); /* * The following is courtesy of Beej's Guide */ int status; int proxySocketFD; struct addrinfo hints; struct addrinfo *servinfo; // will point to the results memset(&hints, 0, sizeof(hints)); // make sure the struct is empty hints.ai_family = AF_INET; // don't care IPv4 or IPv6 hints.ai_socktype = SOCK_STREAM; // TCP stream sockets hints.ai_flags = AI_PASSIVE; // fill in my IP for me if ((status = getaddrinfo(host, "80", &hints, &servinfo)) != 0) { perror("getaddrinfo() fail"); exit(1); } // create socket if ((proxySocketFD = socket(servinfo->ai_family, servinfo->ai_socktype, servinfo->ai_protocol)) == -1) { perror("proxy socket() fail"); exit(1); } // connect if (connect(proxySocketFD, servinfo->ai_addr, servinfo->ai_addrlen) != 0) { printf("connect() fail"); exit(1); } // construct request char request[strlen(path) + strlen(host) + 26]; sprintf(request, "GET %s HTTP/1.1\r\nHost: %s\r\n\r\n", path, host); printf("%s", request); // send request send(proxySocketFD, request, strlen(request), 0); // receive response int i = 0; int amntRecvd = 0; char *pageContentBuffer = (char*) malloc(4096 * sizeof(char)); while ((amntRecvd = recv(proxySocketFD, pageContentBuffer + i, 4096, 0)) > 0) { i += amntRecvd; realloc(pageContentBuffer, i * 4096 * sizeof(char)); } // close proxy socket close(proxySocketFD); // deallocate memory freeaddrinfo(servinfo); return pageContentBuffer; } Line 336 corresponds to the if statement with the getaddrinfo() function call. I'm not really sure what I haven't initialized. The string I'm passing in "should" be already set... I'm printing it out just fine. I also get another error corresponding to the same line of code: ==21743== Use of uninitialised value of size 8 ==21743== at 0x33B7D05816: __nscd_cache_search (in /lib64/libc-2.5.so) ==21743== by 0x33B7D0438B: nscd_gethst_r (in /lib64/libc-2.5.so) ==21743== by 0x33B7D04B26: __nscd_gethostbyname2_r (in /lib64/libc-2.5.so) ==21743== by 0x33B7CE9F5E: gethostbyname2_r@@GLIBC_2.2.5 (in /lib64/libc-2.5.so) ==21743== by 0x33B7CBC522: gaih_inet (in /lib64/libc-2.5.so) ==21743== by 0x33B7CBD629: getaddrinfo (in /lib64/libc-2.5.so) ==21743== by 0x401A5F: tunnelURL (proxy.c:336) ==21743== by 0x40142A: client_thread (proxy.c:194) ==21743== by 0x33B8806616: start_thread (in /lib64/libpthread-2.5.so) ==21743== by 0x33B7CD3C2C: clone (in /lib64/libc-2.5.so) Any ideas as to what might becausing this? This is written in C btw... Thanks, Hristo
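
    The uninitialised bytes are almost certainly in host: the array is never zeroed, and strncpy() does not append a '\0' when it copies exactly strlen(a)-strlen(path) characters, so getaddrinfo() (via index()) reads past the copied part. A sketch of a terminated copy (helper name is illustrative):

        #include <string.h>

        /* Copy the "host" part of "host/path" and terminate it explicitly. */
        static void copy_host(char *host, size_t cap, const char *a, const char *path)
        {
            size_t len = strlen(a) - strlen(path);
            if (len >= cap)
                len = cap - 1;
            memcpy(host, a, len);
            host[len] = '\0';
        }

    (The realloc() whose return value is discarded in the receive loop is a separate problem worth fixing while in there.)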

    Read the article

  • 1>main.obj : error LNK2001: unresolved external symbol _D3D10CreateDeviceAndSwapChain@32

    - by numerical25
    having trouble getting my directx going I get the following error 1>Linking... 1>main.obj : error LNK2001: unresolved external symbol _D3D10CreateDeviceAndSwapChain@32 1>C:\Users\numerical25\Desktop\Intro ToDirectX\msdnTutorials\tutorial0\tutorial\Debug\tutorial.exe : fatal error LNK1120: 1 unresolved externals below is my code // include the basic windows header file #include <windows.h> #include <windowsx.h> #include <d3d10.h> ID3D10Device* g_pd3dDevice; IDXGISwapChain* g_pSwapChain; // the WindowProc function prototype LRESULT CALLBACK WindowProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); bool InitDirect3D(HWND); // the entry point for any Windows program int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) { // the handle for the window, filled by a function HWND hWnd; // this struct holds information for the window class WNDCLASSEX wc; // clear out the window class for use ZeroMemory(&wc, sizeof(WNDCLASSEX)); // fill in the struct with the needed information wc.cbSize = sizeof(WNDCLASSEX); wc.style = CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = WindowProc; wc.hInstance = hInstance; wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH)COLOR_WINDOW; wc.lpszClassName = L"WindowClass1"; // register the window class RegisterClassEx(&wc); // create the window and use the result as the handle hWnd = CreateWindowEx(NULL, L"WindowClass1", // name of the window class L"Our First Windowed Program", // title of the window WS_OVERLAPPEDWINDOW, // window style 300, // x-position of the window 300, // y-position of the window 640, // width of the window 480, // height of the window NULL, // we have no parent window, NULL NULL, // we aren't using menus, NULL hInstance, // application handle NULL); // used with multiple windows, NULL // display the window on the screen ShowWindow(hWnd, nCmdShow); // enter the main loop: // this struct holds Windows event messages MSG msg; bool finished = InitDirect3D(hWnd); // Enter the infinite message loop while(TRUE) { // Check to see if any messages are waiting in the queue while(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { // Translate the message and dispatch it to WindowProc() TranslateMessage(&msg); DispatchMessage(&msg); } // If the message is WM_QUIT, exit the while loop if(msg.message == WM_QUIT) break; // Run game code here // ... // ... }; } // this is the main message handler for the program LRESULT CALLBACK WindowProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) { // sort through and find what code to run for the message given switch(message) { // this message is read when the window is closed case WM_DESTROY: { // close the application entirely PostQuitMessage(0); return 0; } break; } // Handle any messages the switch statement didn't return DefWindowProc (hWnd, message, wParam, lParam); } bool InitDirect3D(HWND g_hWnd) { DXGI_SWAP_CHAIN_DESC sd; ZeroMemory( &sd, sizeof(sd) ); sd.BufferCount = 1; sd.BufferDesc.Width = 640; sd.BufferDesc.Height = 480; sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; sd.BufferDesc.RefreshRate.Numerator = 60; sd.BufferDesc.RefreshRate.Denominator = 1; sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; sd.OutputWindow = g_hWnd; sd.SampleDesc.Count = 1; sd.SampleDesc.Quality = 0; sd.Windowed = TRUE; if( FAILED( D3D10CreateDeviceAndSwapChain( NULL, D3D10_DRIVER_TYPE_REFERENCE, NULL, 0, D3D10_SDK_VERSION, &sd, &g_pSwapChain, &g_pd3dDevice ) ) ) { return FALSE; } return TRUE; }
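
    LNK2001 on _D3D10CreateDeviceAndSwapChain@32 normally just means d3d10.lib is not being linked; the header compiles fine without it. One way to pull it in from the source file (MSVC-specific pragma, equivalent to adding it under Linker -> Input -> Additional Dependencies):

        // Link the D3D10 import library that exports D3D10CreateDeviceAndSwapChain.
        #pragma comment(lib, "d3d10.lib")
        #include <d3d10.h>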

    Read the article

  • CryptoExercise Encryption/Decryption Problem

    - by venkat
    I am using apples "cryptoexcercise" (Security.Framework) in my application to encrypt and decrypt a data of numeric value. When I give the input 950,128 the values got encrypted, but it is not getting decrypted and exists with the encrypted value only. This happens only with the mentioned numeric values. Could you please check this issue and give the solution to solve this problem? here is my code (void)testAsymmetricEncryptionAndDecryption { uint8_t *plainBuffer; uint8_t *cipherBuffer; uint8_t *decryptedBuffer; const char inputString[] = "950"; int len = strlen(inputString); if (len > BUFFER_SIZE) len = BUFFER_SIZE-1; plainBuffer = (uint8_t *)calloc(BUFFER_SIZE, sizeof(uint8_t)); cipherBuffer = (uint8_t *)calloc(CIPHER_BUFFER_SIZE, sizeof(uint8_t)); decryptedBuffer = (uint8_t *)calloc(BUFFER_SIZE, sizeof(uint8_t)); strncpy( (char *)plainBuffer, inputString, len); NSLog(@"plain text : %s", plainBuffer); [self encryptWithPublicKey:(UInt8 *)plainBuffer cipherBuffer:cipherBuffer]; NSLog(@"encrypted data: %s", cipherBuffer); [self decryptWithPrivateKey:cipherBuffer plainBuffer:decryptedBuffer]; NSLog(@"decrypted data: %s", decryptedBuffer); free(plainBuffer); free(cipherBuffer); free(decryptedBuffer); } (void)encryptWithPublicKey:(uint8_t *)plainBuffer cipherBuffer:(uint8_t *)cipherBuffer { OSStatus status = noErr; size_t plainBufferSize = strlen((char *)plainBuffer); size_t cipherBufferSize = CIPHER_BUFFER_SIZE; NSLog(@"SecKeyGetBlockSize() public = %d", SecKeyGetBlockSize([self getPublicKeyRef])); // Error handling // Encrypt using the public. status = SecKeyEncrypt([self getPublicKeyRef], PADDING, plainBuffer, plainBufferSize, &cipherBuffer[0], &cipherBufferSize ); NSLog(@"encryption result code: %d (size: %d)", status, cipherBufferSize); NSLog(@"encrypted text: %s", cipherBuffer); } (void)decryptWithPrivateKey:(uint8_t *)cipherBuffer plainBuffer:(uint8_t *)plainBuffer { OSStatus status = noErr; size_t cipherBufferSize = strlen((char *)cipherBuffer); NSLog(@"decryptWithPrivateKey: length of buffer: %d", BUFFER_SIZE); NSLog(@"decryptWithPrivateKey: length of input: %d", cipherBufferSize); // DECRYPTION size_t plainBufferSize = BUFFER_SIZE; // Error handling status = SecKeyDecrypt([self getPrivateKeyRef], PADDING, &cipherBuffer[0], cipherBufferSize, &plainBuffer[0], &plainBufferSize ); NSLog(@"decryption result code: %d (size: %d)", status, plainBufferSize); NSLog(@"FINAL decrypted text: %s", plainBuffer); } (SecKeyRef)getPublicKeyRef { OSStatus sanityCheck = noErr; SecKeyRef publicKeyReference = NULL; if (publicKeyRef == NULL) { NSMutableDictionary *queryPublicKey = [[NSMutableDictionary alloc] init]; // Set the public key query dictionary. [queryPublicKey setObject:(id)kSecClassKey forKey:(id)kSecClass]; [queryPublicKey setObject:publicTag forKey:(id)kSecAttrApplicationTag]; [queryPublicKey setObject:(id)kSecAttrKeyTypeRSA forKey:(id)kSecAttrKeyType]; [queryPublicKey setObject:[NSNumber numberWithBool:YES] forKey:(id)kSecReturnRef]; // Get the key. sanityCheck = SecItemCopyMatching((CFDictionaryRef)queryPublicKey, (CFTypeRef *)&publicKeyReference); if (sanityCheck != noErr) { publicKeyReference = NULL; } [queryPublicKey release]; } else { publicKeyReference = publicKeyRef; } return publicKeyReference; } (SecKeyRef)getPrivateKeyRef { OSStatus resultCode = noErr; SecKeyRef privateKeyReference = NULL; if(privateKeyRef == NULL) { NSMutableDictionary * queryPrivateKey = [[NSMutableDictionary alloc] init]; // Set the private key query dictionary. 
[queryPrivateKey setObject:(id)kSecClassKey forKey:(id)kSecClass]; [queryPrivateKey setObject:privateTag forKey:(id)kSecAttrApplicationTag]; [queryPrivateKey setObject:(id)kSecAttrKeyTypeRSA forKey:(id)kSecAttrKeyType]; [queryPrivateKey setObject:[NSNumber numberWithBool:YES] forKey:(id)kSecReturnRef]; // Get the key. resultCode = SecItemCopyMatching((CFDictionaryRef)queryPrivateKey, (CFTypeRef *)&privateKeyReference); NSLog(@"getPrivateKey: result code: %d", resultCode); if(resultCode != noErr) { privateKeyReference = NULL; } [queryPrivateKey release]; } else { privateKeyReference = privateKeyRef; } return privateKeyReference; }
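
    A likely culprit is size_t cipherBufferSize = strlen((char *)cipherBuffer) in decryptWithPrivateKey: RSA ciphertext is binary and can contain zero bytes, and certain plaintexts (such as these numbers) happen to produce one, so strlen() undercounts and SecKeyDecrypt is handed a short buffer. The length reported back by SecKeyEncrypt should be carried over to the decrypt call instead. A tiny C illustration of why strlen() cannot measure ciphertext:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            unsigned char cipher[] = { 0x9a, 0x00, 0x42, 0x17 };  /* embedded 0x00 */
            printf("strlen sees %zu bytes, real length is %zu\n",
                   strlen((char *)cipher), sizeof cipher);        /* 1 vs 4 */
            return 0;
        }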

    Read the article

  • Padding error - when using AES Encryption in Java and Decryption in C

    - by user234445
    Hi All, I have a problem while decrypting the xl file in rijndael 'c' code (The file got encrypted in Java through JCE) and this problem is happening only for the excel files types which having formula's. Remaining all file type encryption/decryption is happening properly. (If i decrypt the same file in java the output is coming fine.) While i am dumped a file i can see the difference between java decryption and 'C' file decryption. od -c -b filename(file decrypted in C) 0034620 005 006 \0 \0 \0 \0 022 \0 022 \0 320 004 \0 \0 276 4 005 006 000 000 000 000 022 000 022 000 320 004 000 000 276 064 0034640 \0 \0 \0 \0 \f \f \f \f \f \f \f \f \f \f \f \f 000 000 000 000 014 014 014 014 014 014 014 014 014 014 014 014 0034660 od -c -b filename(file decrypted in Java) 0034620 005 006 \0 \0 \0 \0 022 \0 022 \0 320 004 \0 \0 276 4 005 006 000 000 000 000 022 000 022 000 320 004 000 000 276 064 0034640 \0 \0 \0 \0 000 000 000 000 0034644 (the above is the difference between the dumped files) The following java code i used to encrypt the file. public class AES { /** * Turns array of bytes into string * * @param buf Array of bytes to convert to hex string * @return Generated hex string */ public static void main(String[] args) throws Exception { File file = new File("testxls.xls"); byte[] lContents = new byte[(int) file.length()]; try { FileInputStream fileInputStream = new FileInputStream(file); fileInputStream.read(lContents); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e1) { e1.printStackTrace(); } try { KeyGenerator kgen = KeyGenerator.getInstance("AES"); kgen.init(256); // 192 and 256 bits may not be available // Generate the secret key specs. SecretKey skey = kgen.generateKey(); //byte[] raw = skey.getEncoded(); byte[] raw = "aabbccddeeffgghhaabbccddeeffgghh".getBytes(); SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES"); Cipher cipher = Cipher.getInstance("AES"); cipher.init(Cipher.ENCRYPT_MODE, skeySpec); byte[] encrypted = cipher.doFinal(lContents); cipher.init(Cipher.DECRYPT_MODE, skeySpec); byte[] original = cipher.doFinal(lContents); FileOutputStream f1 = new FileOutputStream("testxls_java.xls"); f1.write(original); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } } } I used the following file for decryption in 'C'. #include <stdio.h> #include "rijndael.h" #define KEYBITS 256 #include <stdio.h> #include "rijndael.h" #define KEYBITS 256 int main(int argc, char **argv) { unsigned long rk[RKLENGTH(KEYBITS)]; unsigned char key[KEYLENGTH(KEYBITS)]; int i; int nrounds; char dummy[100] = "aabbccddeeffgghhaabbccddeeffgghh"; char *password; FILE *input,*output; password = dummy; for (i = 0; i < sizeof(key); i++) key[i] = *password != 0 ? *password++ : 0; input = fopen("doc_for_logu.xlsb", "rb"); if (input == NULL) { fputs("File read error", stderr); return 1; } output = fopen("ori_c_res.xlsb","w"); nrounds = rijndaelSetupDecrypt(rk, key, 256); while (1) { unsigned char plaintext[16]; unsigned char ciphertext[16]; int j; if (fread(ciphertext, sizeof(ciphertext), 1, input) != 1) break; rijndaelDecrypt(rk, nrounds, ciphertext, plaintext); fwrite(plaintext, sizeof(plaintext), 1, output); } fclose(input); fclose(output); }
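
    The trailing run of 014 bytes is exactly this: Cipher.getInstance("AES") in Java defaults to AES/ECB/PKCS5Padding, so the last block carries twelve 0x0C padding bytes, while the raw rijndael decrypt in C keeps them. A sketch of stripping PKCS#5/#7 padding after the last decrypted block (helper name is illustrative):

        #include <stddef.h>

        /* Returns the payload length after removing PKCS#7 padding, or 'len'
         * unchanged if the tail does not look like valid padding. */
        static size_t strip_pkcs7(const unsigned char *buf, size_t len)
        {
            if (len == 0)
                return 0;
            unsigned char pad = buf[len - 1];
            if (pad == 0 || pad > 16 || pad > len)
                return len;
            for (size_t i = len - pad; i < len; ++i)
                if (buf[i] != pad)
                    return len;
            return len - pad;
        }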

    Read the article

  • bind() fails with Windows socket error 10038

    - by herrturtur
    I'm trying to write a simple program that will receive a string of max 20 characters and print that string to the screen. The code compiles, but I get a bind() failed: 10038. After looking up the error number on msdn (socket operation on nonsocket), I changed some code from int sock; to SOCKET sock which shouldn't make a difference, but one never knows. Here's the code: #include <iostream> #include <winsock2.h> #include <cstdlib> using namespace std; const int MAXPENDING = 5; const int MAX_LENGTH = 20; void DieWithError(char *errorMessage); int main(int argc, char **argv) { if(argc!=2){ cerr << "Usage: " << argv[0] << " <Port>" << endl; exit(1); } // start winsock2 library WSAData wsaData; if(WSAStartup(MAKEWORD(2,0), &wsaData)!=0){ cerr << "WSAStartup() failed" << endl; exit(1); } // create socket for incoming connections SOCKET servSock; if(servSock=socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)==INVALID_SOCKET) DieWithError("socket() failed"); // construct local address structure struct sockaddr_in servAddr; memset(&servAddr, 0, sizeof(servAddr)); servAddr.sin_family = AF_INET; servAddr.sin_addr.s_addr = INADDR_ANY; servAddr.sin_port = htons(atoi(argv[1])); // bind to the local address int servAddrLen = sizeof(servAddr); if(bind(servSock, (SOCKADDR*)&servAddr, servAddrLen)==SOCKET_ERROR) DieWithError("bind() failed"); // mark the socket to listen for incoming connections if(listen(servSock, MAXPENDING)<0) DieWithError("listen() failed"); // accept incoming connections int clientSock; struct sockaddr_in clientAddr; char buffer[MAX_LENGTH]; int recvMsgSize; int clientAddrLen = sizeof(clientAddr); for(;;){ // wait for a client to connect if((clientSock=accept(servSock, (sockaddr*)&clientAddr, &clientAddrLen))<0) DieWithError("accept() failed"); // clientSock is connected to a client // BEGIN Handle client cout << "Handling client " << inet_ntoa(clientAddr.sin_addr) << endl; if((recvMsgSize = recv(clientSock, buffer, MAX_LENGTH, 0)) <0) DieWithError("recv() failed"); cout << "Word in the tubes: " << buffer << endl; closesocket(clientSock); // END Handle client } } void DieWithError(char *errorMessage) { fprintf(stderr, "%s: %d\n", errorMessage, WSAGetLastError()); exit(1); }
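
    10038 (WSAENOTSOCK) here comes from operator precedence: in if(servSock=socket(...)==INVALID_SOCKET), the == binds first, so servSock receives the comparison result (0) rather than the socket handle that bind() later needs. A minimal fix (the accept() call has the same pattern and the same cure):

        // Parenthesize the assignment so servSock gets the handle, not 0/1.
        SOCKET servSock;
        if ((servSock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
            DieWithError("socket() failed");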

    Read the article

  • Lock-free, wait-free, and wait-freedom algorithms for non-blocking multi-threaded synchronization

    - by GJ
    In multi thread programming we can find different terms for data transfer synchronization between two or more threads/tasks. When exactly we can say that some algorithem is: 1)Lock-Free 2)Wait-Free 3)Wait-Freedom I understand what means Lock-free but when we can say that some synchronization algorithm is Wait-Free or Wait-Freedom? I have made some code (ring buffer) for multi-thread synchronization and it use Lock-Free methods but: 1) Algorithm predicts maximum execution time of this routine. 2) Therad which call this routine at beginning set unique reference, what mean that is inside of this routine. 3) Other threads which are calling the same routine check this reference and if is set than count the CPU tick count (measure time) of first involved thread. If that time is to long interrupt the current work of involved thread and overrides him job. 4) Thread which not finished job because was interrupted from task scheduler (is reposed) at the end check the reference if not belongs to him repeat the job again. So this algorithm is not really Lock-free but there is no memory lock in use, and other involved threads can wait (or not) certain time before overide the job of reposed thread. Added RingBuffer.InsertLeft function: function TgjRingBuffer.InsertLeft(const link: pointer): integer; var AtStartReference: cardinal; CPUTimeStamp : int64; CurrentLeft : pointer; CurrentReference: cardinal; NewLeft : PReferencedPtr; Reference : cardinal; label TryAgain; begin Reference := GetThreadId + 1; //Reference.bit0 := 1 with rbRingBuffer^ do begin TryAgain: //Set Left.Reference with respect to all other cores :) CPUTimeStamp := GetCPUTimeStamp + LoopTicks; AtStartReference := Left.Reference OR 1; //Reference.bit0 := 1 repeat CurrentReference := Left.Reference; until (CurrentReference AND 1 = 0)or (GetCPUTimeStamp - CPUTimeStamp > 0); //No threads present in ring buffer or current thread timeout if ((CurrentReference AND 1 <> 0) and (AtStartReference <> CurrentReference)) or not CAS32(CurrentReference, Reference, Left.Reference) then goto TryAgain; //Calculate RingBuffer NewLeft address CurrentLeft := Left.Link; NewLeft := pointer(cardinal(CurrentLeft) - SizeOf(TReferencedPtr)); if cardinal(NewLeft) < cardinal(@Buffer) then NewLeft := EndBuffer; //Calcolate distance result := integer(Right.Link) - Integer(NewLeft); //Check buffer full if result = 0 then //Clear Reference if task still own reference if CAS32(Reference, 0, Left.Reference) then Exit else goto TryAgain; //Set NewLeft.Reference NewLeft^.Reference := Reference; SFence; //Try to set link and try to exchange NewLeft and clear Reference if task own reference if (Reference <> Left.Reference) or not CAS64(NewLeft^.Link, Reference, link, Reference, NewLeft^) or not CAS64(CurrentLeft, Reference, NewLeft, 0, Left) then goto TryAgain; //Calcolate result if result < 0 then result := Length - integer(cardinal(not Result) div SizeOf(TReferencedPtr)) else result := cardinal(result) div SizeOf(TReferencedPtr); end; //with end; { TgjRingBuffer.InsertLeft } RingBuffer unit you can find here: RingBuffer, CAS functions: FockFreePrimitives, and test program: RingBufferFlowTest Thanks in advance, GJ
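
    One way to pin the terms down in code: a compare-and-swap retry loop is lock-free (the system as a whole always makes progress, but any one thread can be forced to retry indefinitely), whereas a single atomic fetch-and-add is wait-free (every thread completes in a bounded number of its own steps); "wait-freedom" is simply the property the latter has. A C11 sketch of the contrast:

        #include <stdatomic.h>

        /* Lock-free: progress is guaranteed for the system, not for each caller. */
        void increment_lock_free(atomic_int *counter)
        {
            int old = atomic_load(counter);
            while (!atomic_compare_exchange_weak(counter, &old, old + 1))
                ;   /* 'old' is refreshed on failure; retry */
        }

        /* Wait-free: bounded number of steps regardless of other threads. */
        void increment_wait_free(atomic_int *counter)
        {
            atomic_fetch_add(counter, 1);
        }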

    Read the article

  • calling calloc - memory leak valgrind

    - by Mike
    The following code is an example from the NCURSES menu library. I'm not sure what could be wrong with the code, but valgrind reports some problems. Any ideas... ==4803== 1,049 (72 direct, 977 indirect) bytes in 1 blocks are definitely lost in loss record 25 of 36 ==4803== at 0x4C24477: calloc (vg_replace_malloc.c:418) ==4803== by 0x400E93: main (in /home/gerardoj/a.out) ==4803== ==4803== LEAK SUMMARY: ==4803== definitely lost: 72 bytes in 1 blocks ==4803== indirectly lost: 977 bytes in 10 blocks ==4803== possibly lost: 0 bytes in 0 blocks ==4803== still reachable: 64,942 bytes in 262 blocks Source code: #include <curses.h> #include <menu.h> #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) #define CTRLD 4 char *choices[] = { "Choice 1", "Choice 2", "Choice 3", "Choice 4", "Choice 5", "Choice 6", "Choice 7", "Exit", } ; int main() { ITEM **my_items; int c; MENU *my_menu; int n_choices, i; ITEM *cur_item; /* Initialize curses */ initscr(); cbreak(); noecho(); keypad(stdscr, TRUE); /* Initialize items */ n_choices = ARRAY_SIZE(choices); my_items = (ITEM **)calloc(n_choices + 1, sizeof(ITEM *)); for (i = 0; i < n_choices; ++i) { my_items[i] = new_item(choices[i], choices[i]); } my_items[n_choices] = (ITEM *)NULL; my_menu = new_menu((ITEM **)my_items); /* Make the menu multi valued */ menu_opts_off(my_menu, O_ONEVALUE); mvprintw(LINES - 3, 0, "Use <SPACE> to select or unselect an item."); mvprintw(LINES - 2, 0, "<ENTER> to see presently selected items(F1 to Exit)"); post_menu(my_menu); refresh(); while ((c = getch()) != KEY_F(1)) { switch (c) { case KEY_DOWN: menu_driver(my_menu, REQ_DOWN_ITEM); break; case KEY_UP: menu_driver(my_menu, REQ_UP_ITEM); break; case ' ': menu_driver(my_menu, REQ_TOGGLE_ITEM); break; case 10: { char temp[200]; ITEM **items; items = menu_items(my_menu); temp[0] = '\0'; for (i = 0; i < item_count(my_menu); ++i) if(item_value(items[i]) == TRUE) { strcat(temp, item_name(items[i])); strcat(temp, " "); } move(20, 0); clrtoeol(); mvprintw(20, 0, temp); refresh(); } break; } } free_item(my_items[0]); free_item(my_items[1]); free_menu(my_menu); endwin(); }
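
    The loss record points at the calloc'd my_items array: only items 0 and 1 are passed to free_item(), the remaining ITEMs leak (the indirect bytes), and the array itself is never freed. A sketch of the teardown, reusing the example's own variables:

        /* Unpost and free the menu first, then every item, then the array. */
        unpost_menu(my_menu);
        free_menu(my_menu);
        for (i = 0; i < n_choices; ++i)
            free_item(my_items[i]);
        free(my_items);
        endwin();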

    Read the article

  • How do I programmatically send email with an attachment to a known recipient using MAPI in C++ (MAPISendMail)?

    - by Tim
    This question is similar, but does not show how to add a recipient. How do I do both? We'd like the widest support possible for as many Windows platforms as possible (from XP and greater) We're using visual studio 2008 Essentially we want to send an email with: pre-filled destination address file attachment subject line from our program and give the user the ability to add any information or cancel it. EDIT I am trying to use MAPISendMail() I copied much of the code from the questions linked near the top, but I get no email dlg box and the error return I get from the call is: 0x000f - "The system cannot find the drive specified" If I comment out the lines to set the recipient, it works fine (of course then I have no recipient pre-filled in) Here is the code: #include <tchar.h> #include <windows.h> #include <mapi.h> #include <mapix.h> int _tmain( int argc, wchar_t *argv[] ) { HMODULE hMapiModule = LoadLibrary( _T( "mapi32.dll" ) ); if ( hMapiModule != NULL ) { LPMAPIINITIALIZE lpfnMAPIInitialize = NULL; LPMAPIUNINITIALIZE lpfnMAPIUninitialize = NULL; LPMAPILOGONEX lpfnMAPILogonEx = NULL; LPMAPISENDDOCUMENTS lpfnMAPISendDocuments = NULL; LPMAPISESSION lplhSession = NULL; LPMAPISENDMAIL lpfnMAPISendMail = NULL; lpfnMAPIInitialize = (LPMAPIINITIALIZE)GetProcAddress( hMapiModule, "MAPIInitialize" ); lpfnMAPIUninitialize = (LPMAPIUNINITIALIZE)GetProcAddress( hMapiModule, "MAPIUninitialize" ); lpfnMAPILogonEx = (LPMAPILOGONEX)GetProcAddress( hMapiModule, "MAPILogonEx" ); lpfnMAPISendDocuments = (LPMAPISENDDOCUMENTS)GetProcAddress( hMapiModule, "MAPISendDocuments" ); lpfnMAPISendMail = (LPMAPISENDMAIL)GetProcAddress( hMapiModule, "MAPISendMail" ); if ( lpfnMAPIInitialize && lpfnMAPIUninitialize && lpfnMAPILogonEx && lpfnMAPISendDocuments ) { HRESULT hr = (*lpfnMAPIInitialize)( NULL ); if ( SUCCEEDED( hr ) ) { hr = (*lpfnMAPILogonEx)( 0, NULL, NULL, MAPI_EXTENDED | MAPI_USE_DEFAULT, &lplhSession ); if ( SUCCEEDED( hr ) ) { // this opens the email client // create the msg. We need to add recipients AND subject AND the dmp file // file attachment MapiFileDesc filedesc; ::ZeroMemory(&filedesc, sizeof(filedesc)); filedesc.nPosition = (ULONG)-1; filedesc.lpszPathName = "E:\\Development\\Open\\testmail\\testmail.cpp"; // recipient(s) MapiRecipDesc recip; ::ZeroMemory(&recip, sizeof(recip)); recip.lpszName = "QA email"; recip.lpszAddress = "[email protected]"; // the message MapiMessage msg; ::ZeroMemory(&msg, sizeof(msg)); msg.lpszSubject = "Test"; msg.nRecipCount = 1; // if I comment out this line it works fine... msg.lpRecips = &recip; msg.nFileCount = 1; msg.lpFiles = &filedesc; hr = (*lpfnMAPISendMail)(0, NULL, &msg, MAPI_LOGON_UI|MAPI_DIALOG, 0); if ( SUCCEEDED( hr ) ) { hr = lplhSession->Logoff( 0, 0, 0 ); hr = lplhSession->Release(); lplhSession = NULL; } } } (*lpfnMAPIUninitialize)(); } FreeLibrary( hMapiModule ); } return 0; }
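
    The 0x000F return is most likely MAPI_E_BAD_RECIPTYPE (15) rather than a Win32 error string: ZeroMemory left recip.ulRecipClass at 0, which is not MAPI_TO/MAPI_CC/MAPI_BCC. Setting the class (and, for many providers, prefixing the address type) is usually enough; a sketch against the code above:

        // Fill in the recipient class; the address-type prefix is what most
        // MAPI providers expect for an SMTP address.
        MapiRecipDesc recip;
        ZeroMemory(&recip, sizeof(recip));
        recip.ulRecipClass = MAPI_TO;
        recip.lpszName     = "QA email";
        recip.lpszAddress  = "SMTP:[email protected]";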

    Read the article

  • First time using select(), maybe a basic question?

    - by darkletter
    Hello people, i've been working for a few days with this server using select(). What it does, is that, i have two arrays of clients (one is "suppliers", and the other is "consumers"), and the mission of the server is to check whether the suppliers have something to send to the consumers, and in case affirmative, send it. The second part of the server is that, when the consumers have received the suppliers' info, they send a confirmation message to the same suppliers that sent the info. When a client connects, it gets recognized as "undefined", until it sends a message with the word "supplier" or "consumer" (in Spanish, as i'm from there), when the server puts it in the correct clients array. Well, what the server does is not very important here. What's important is that, i'm doing both parts with two different "for" loops, and that's where i'm getting the problems. When the first user connects to the server (be it a supplier or a consumer), the server gets stuck in the first or second loop, instead of just continuing its execution. As it's the first time i'm using select(), i may be missing something. Could you guys give me any sort of help? Thanks a lot in advance. for(;;) { rset=allset; nready=select(maxfd+1,&rset,NULL,NULL,NULL); if (FD_ISSET(sockfd, &rset)){ clilen=sizeof(cliente); if((connfd=accept(sockfd,(struct sockaddr *)&cliente,&clilen))<0) printf("Error"); IP=inet_ntoa(cliente.sin_addr); for(i=0;i<COLA;i++){if(indef[i]<0){indef[i]=connfd;IPind[i]=IP;break;}} FD_SET(connfd,&allset); if(connfd > maxfd) maxfd=connfd; if(i>maxii) maxii=i; if(--nready<=0) continue; }// Fin ISSET(sockfd) for(i=0;i<=maxii;i++){ if((sockfd1=indef[i])<0){ continue;} //! if(FD_ISSET(sockfd1,&rset)){ if((n=read(sockfd1,comp,MAXLINE))==0){close(sockfd1);FD_CLR(sockfd1,&allset);indef[i]=-1;printf("Cliente indefinido desconectado \n");} else{ comp[n]='\0'; if(strcmp(comp,"suministrador")==0){ for(j=0;j<=limite;j++){if(sumi[j]<0){IPsum[j]=IPind[i];sumi[j]=indef[i]; indef[i]=-1;if(j>maxis) {maxis=j;}break; } } } else if(strcmp(comp,"consumidor")==0){ for(o=0;j<=limite;j++){if(consum[o]<0){IPcons[o]=IPind[i];consum[o]=indef[i]; indef[o]=-1;if(o>maxic) {maxic=o;}break; } } } if(--nready <=0)break; } } }//fin bucle for maxii for(i=0;i<=maxis;i++){ if((sockfd2=sumi[i])<0){continue;} if(FD_ISSET(sockfd2,&rset)){ if((n=read(sockfd2,buffer2,MAXLINE))==0){close(sockfd2);FD_CLR(sockfd2,&allset);sumi[i]=-1;printf("Suministrador desconectado \n");} else{ buffer2[n]='\0'; for(j=0;j<=maxic;j++){ if((sockfd3=consum[j])<0){ continue;} else {strcpy(final,IPsum[i]);strcat(final,":");strcat(final,buffer2);write(sockfd3,final,sizeof(final));respuesta[i]=1;} } break; // ? } } }//fin for maxis for(i=miniic;i<=maxic;i++){ if((sockfd4=consum[i])<0){continue;} if(FD_ISSET(sockfd4,&rset)){ if((n=read(sockfd4,buffer3,MAXLINE))==0){close(sockfd4);FD_CLR(sockfd4,&allset);consum[i]=-1;printf("Consumidor desconectado \n");} else{ buffer3[n]='\0'; IP2=strtok(buffer3,":"); obj=strtok(NULL,":"); for(j=0;j<100;j++){ if((strcmp(IPsum[j],IP2)==0) && (respuesta[j]==1)) {write(sumi[j],obj,sizeof(obj)); miniic=i+1; respuesta[j]=0; break; } } } } }
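
    Two observations: the consumer-registration loop reads for(o=0;j<=limite;j++) but tests and increments j where o looks intended, and the overall structure is easier to debug when reduced to the canonical select() skeleton. A minimal sketch of that skeleton, assuming listenfd is already listening and client[] holds descriptors (-1 meaning a free slot):

        #include <sys/select.h>
        #include <sys/socket.h>
        #include <sys/types.h>
        #include <unistd.h>

        void serve(int listenfd, int client[], int nclients)
        {
            fd_set allset, rset;
            int maxfd = listenfd;

            FD_ZERO(&allset);
            FD_SET(listenfd, &allset);

            for (;;) {
                rset = allset;                       /* select() overwrites its copy */
                if (select(maxfd + 1, &rset, NULL, NULL, NULL) < 0)
                    continue;

                if (FD_ISSET(listenfd, &rset)) {     /* new connection */
                    int fd = accept(listenfd, NULL, NULL);
                    if (fd >= 0) {
                        FD_SET(fd, &allset);
                        if (fd > maxfd) maxfd = fd;
                        /* record fd in a free client[] slot here */
                    }
                }

                for (int i = 0; i < nclients; ++i) { /* existing connections */
                    int fd = client[i];
                    if (fd < 0 || !FD_ISSET(fd, &rset))
                        continue;
                    char buf[512];
                    ssize_t n = read(fd, buf, sizeof buf);
                    if (n <= 0) {                    /* peer closed or error */
                        close(fd);
                        FD_CLR(fd, &allset);
                        client[i] = -1;
                    }
                    /* else handle n bytes from buf */
                }
            }
        }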

    Read the article

  • [C++] Adding a string or char array to a byte vector

    - by xeross
    I'm currently working on a class to create and read out packets send through the network, so far I have it working with 16bit and 8bit integers (Well unsigned but still). Now the problem is I've tried numerous ways of copying it over but somehow the _buffer got mangled, it segfaulted, or the result was wrong. I'd appreciate if someone could show me a working example. My current code can be seen below. Thanks, Xeross Main #include <iostream> #include <stdio.h> #include "Packet.h" using namespace std; int main(int argc, char** argv) { cout << "#################################" << endl; cout << "# Internal Use Only #" << endl; cout << "# Codename PACKETSTORM #" << endl; cout << "#################################" << endl; cout << endl; Packet packet = Packet(); packet.SetOpcode(0x1f4d); cout << "Current opcode is: " << packet.GetOpcode() << endl << endl; packet.add(uint8_t(5)) .add(uint16_t(4000)) .add(uint8_t(5)); for(uint8_t i=0; i<10;i++) printf("Byte %u = %x\n", i, packet._buffer[i]); printf("\nReading them out: \n1 = %u\n2 = %u\n3 = %u\n4 = %s", packet.readUint8(), packet.readUint16(), packet.readUint8()); return 0; } Packet.h #ifndef _PACKET_H_ #define _PACKET_H_ #include <iostream> #include <vector> #include <stdio.h> #include <stdint.h> #include <string.h> using namespace std; class Packet { public: Packet() : m_opcode(0), _buffer(0), _wpos(0), _rpos(0) {} Packet(uint16_t opcode) : m_opcode(opcode), _buffer(0), _wpos(0), _rpos(0) {} uint16_t GetOpcode() { return m_opcode; } void SetOpcode(uint16_t opcode) { m_opcode = opcode; } Packet& add(uint8_t value) { if(_buffer.size() < _wpos + 1) _buffer.resize(_wpos + 1); memcpy(&_buffer[_wpos], &value, 1); _wpos += 1; return *this; } Packet& add(uint16_t value) { if(_buffer.size() < _wpos + 2) _buffer.resize(_wpos + 2); memcpy(&_buffer[_wpos], &value, 2); _wpos += 2; return *this; } uint8_t readUint8() { uint8_t result = _buffer[_rpos]; _rpos += sizeof(uint8_t); return result; } uint16_t readUint16() { uint16_t result; memcpy(&result, &_buffer[_rpos], sizeof(uint16_t)); _rpos += sizeof(uint16_t); return result; } uint16_t m_opcode; std::vector<uint8_t> _buffer; protected: size_t _wpos; // Write position size_t _rpos; // Read position }; #endif // _PACKET_H_
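
    For adding a string, the same pattern as the integer overloads applies: resize the vector by 2 + length, memcpy a 16-bit length prefix, then memcpy the characters (no terminating NUL), so the reader knows how many bytes to pull back out. The underlying byte layout, sketched in plain C (assumes the destination already has room):

        #include <stdint.h>
        #include <string.h>

        /* Append a length-prefixed string at 'wpos'; returns the new write position. */
        size_t append_string(uint8_t *buf, size_t wpos, const char *s)
        {
            uint16_t len = (uint16_t)strlen(s);
            memcpy(buf + wpos, &len, sizeof len);   /* 2-byte length prefix */
            wpos += sizeof len;
            memcpy(buf + wpos, s, len);             /* raw characters, no '\0' */
            return wpos + len;
        }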

    Read the article

  • Generating a 2-dimensional VLA ends in a segmentation fault

    - by Framester
    Hi, further developing the code from yesterday (seg fault caused by malloc and sscanf in a function), I tried with the help of some tutorials I found on the net to generate a 2-dim vla. But I get a segmentation fault at (*data)[i][j]=atof(p);. The program is supposed to read a matrix out of a text file and load it into a 2d array (cols 1-9) and a 1D array (col 10) [Example code] #include<stdio.h> #include<stdlib.h> #include<math.h> #include<string.h> const int LENGTH = 1024; void read_data(float ***data, int **classes, int *nrow,int *ncol, char *filename){ FILE *pfile = NULL; char line[LENGTH]; if(!( pfile=fopen(filename,"r"))){ printf("Error opening %s.", filename); exit(1); } int numlines=0; int numcols=0; char *p; fgets(line,LENGTH,pfile); p = strtok (line," "); while (p != NULL){ p = strtok (NULL, ", "); numcols++; } while(fgets(line,LENGTH,pfile)){ numlines++; } rewind(pfile); int numfeats=numcols-1; *data=(float**) malloc(numlines*sizeof(float*)); *classes=(int *)malloc(numlines*sizeof(int)); if(*classes == NULL){ printf("\nOut of memory."); exit(1); } int i=0; while(fgets(line,LENGTH,pfile)){ p = strtok (line," "); for(int j=0;j<numfeats;j++) { (data)[i]=malloc(numfeats*sizeof(float)); printf("%i ",i); (*data)[i][j]=atof(p); p = strtok (NULL, ", "); } (*classes)[i]=atoi(p); i++; } fclose(pfile); *nrow=numlines; *ncol=numfeats; } int main() { char *filename="somedatafile.txt"; float **data2; int *classes2; int r,c; read_data(&data2,&classes2, &r, &c,filename) ; for(int i=0;i<r;i++){ printf("\n"); for(int j=0;j<c;j++){ printf("%f",data2[i][j]); } } return 1; } [Content of somedatafile.txt] 50 21 77 0 28 0 27 48 22 2 55 0 92 0 0 26 36 92 56 4 53 0 82 0 52 -5 29 30 2 1 37 0 76 0 28 18 40 48 8 1 37 0 79 0 34 -26 43 46 2 1 85 0 88 -4 6 1 3 83 80 5 56 0 81 0 -4 11 25 86 62 4 55 -1 95 -3 54 -4 40 41 2 1 53 8 77 0 28 0 23 48 24 4 37 0 101 -7 28 0 64 73 8 1 ...
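
    The crash at (*data)[i][j]=atof(p) follows from the line above it: (data)[i]=malloc(...) writes through the un-dereferenced float*** (clobbering the caller's pointer) instead of allocating the row via (*data)[i], and it does so inside the column loop. A sketch of the intended inner loop, reusing the question's names:

        int i = 0;
        while (fgets(line, LENGTH, pfile)) {
            (*data)[i] = malloc(numfeats * sizeof(float));   /* one row, once */
            p = strtok(line, " ");
            for (int j = 0; j < numfeats; j++) {
                (*data)[i][j] = (float)atof(p);
                p = strtok(NULL, ", ");
            }
            (*classes)[i] = atoi(p);
            i++;
        }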

    Read the article

  • Testing shared memory, strange things happen

    - by barfatchen
    I have 2 program compiled in 4.1.2 running in RedHat 5.5 , It is a simple job to test shared memory , shmem1.c like following : #define STATE_FILE "/program.shared" #define NAMESIZE 1024 #define MAXNAMES 100 typedef struct { char name[MAXNAMES][NAMESIZE]; int heartbeat ; int iFlag ; } SHARED_VAR; int main (void) { int first = 0; int shm_fd; static SHARED_VAR *conf; if((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_EXCL | O_RDWR), (S_IREAD | S_IWRITE))) > 0 ) { first = 1; /* We are the first instance */ } else if((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_RDWR), (S_IREAD | S_IWRITE))) < 0) { printf("Could not create shm object. %s\n", strerror(errno)); return errno; } if((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED) { return errno; } if(first) { for(idx=0;idx< 1000000000;idx++) { conf->heartbeat = conf->heartbeat + 1 ; } } printf("conf->heartbeat=(%d)\n",conf->heartbeat) ; close(shm_fd); shm_unlink(STATE_FILE); exit(0); }//main And shmem2.c like following : #define STATE_FILE "/program.shared" #define NAMESIZE 1024 #define MAXNAMES 100 typedef struct { char name[MAXNAMES][NAMESIZE]; int heartbeat ; int iFlag ; } SHARED_VAR; int main (void) { int first = 0; int shm_fd; static SHARED_VAR *conf; if((shm_fd = shm_open(STATE_FILE, (O_RDWR), (S_IREAD | S_IWRITE))) < 0) { printf("Could not create shm object. %s\n", strerror(errno)); return errno; } ftruncate(shm_fd, sizeof(SHARED_VAR)); if((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED) { return errno; } int idx ; for(idx=0;idx< 1000000000;idx++) { conf->heartbeat = conf->heartbeat + 1 ; } printf("conf->heartbeat=(%d)\n",conf->heartbeat) ; close(shm_fd); exit(0); } After compiled : gcc shmem1.c -lpthread -lrt -o shmem1.exe gcc shmem2.c -lpthread -lrt -o shmem2.exe And Run both program almost at the same time with 2 terminal : [test]$ ./shmem1.exe First creation of the shm. Setting up default values conf->heartbeat=(840825951) [test]$ ./shmem2.exe conf->heartbeat=(1215083817) I feel confused !! since shmem1.c is a loop 1,000,000,000 times , how can it be possible to have a answer like 840,825,951 ? I run shmem1.exe and shmem2.exe this way,most of the results are conf-heartbeat will larger than 1,000,000,000 , but seldom and randomly , I will see result conf-heartbeat will lesser than 1,000,000,000 , either in shmem1.exe or shmem2.exe !! if run shmem1.exe only , it is always print 1,000,000,000 , my question is , what is the reason cause conf-heartbeat=(840825951) in shmem1.exe ? Update: Although not sure , but I think I figure it out what is going on , If shmem1.exe run 10 times for example , then conf-heartbeat = 10 , in this time shmem1.exe take a rest and then back , shmem1.exe read from shared memory and conf-heartbeat = 8 , so shmem1.exe will continue from 8 , why conf-heartbeat = 8 ? I think it is because shmem2.exe update the shared memory data to 8 , shmem1.exe did not write 10 back to shared memory before it took a rest ....that is just my theory... i don't know how to prove it !!
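
    That is the expected outcome of an unsynchronized read-modify-write: both processes load conf->heartbeat, add 1, and store it back, so increments are silently lost whenever the stores interleave, which is why the totals drift below the loop counts. One way to confirm the theory is to make the increment atomic (GCC 4.1+ builtin; it works on process-shared memory):

        /* Single atomic read-modify-write instead of load/add/store. */
        for (idx = 0; idx < 1000000000; idx++)
            __sync_fetch_and_add(&conf->heartbeat, 1);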

    Read the article

  • Problems with 3D Array for Voxel Data

    - by Sean M.
    I'm trying to implement a voxel engine in C++ using OpenGL, and I've been working on the rendering of the world. In order to render, I have a 3D array of uint16's that hold that id of the block at the point. I also have a 3D array of uint8's that I am using to store the visibility data for that point, where each bit represents if a face is visible. I have it so the blocks render and all of the proper faces are hidden if needed, but all of the blocks are offset by a power of 2 from where they are stored in the array. So the block at [0][0][0] is rendered at (0, 0, 0), and the block at 11 is rendered at (1, 1, 1), but the block at [2][2][2] is rendered at (4, 4, 4) and the block at [3][3][3] is rendered at (8, 8, 8), and so on and so forth. This is the result of drawing the above situation: I'm still a little new to the more advanced concepts of C++, like triple pointers, which I'm using for the 3D array, so I think the error is somewhere in there. This is the code for creating the arrays: uint16*** _blockData; //Contains a 3D array of uint16s that are the ids of the blocks in the region uint8*** _visibilityData; //Contains a 3D array of bytes that hold the visibility data for the faces //Allocate memory for the world data _blockData = new uint16**[REGION_DIM]; for (int i = 0; i < REGION_DIM; i++) { _blockData[i] = new uint16*[REGION_DIM]; for (int j = 0; j < REGION_DIM; j++) _blockData[i][j] = new uint16[REGION_DIM]; } //Allocate memory for the visibility _visibilityData = new uint8**[REGION_DIM]; for (int i = 0; i < REGION_DIM; i++) { _visibilityData[i] = new uint8*[REGION_DIM]; for (int j = 0; j < REGION_DIM; j++) _visibilityData[i][j] = new uint8[REGION_DIM]; } Here is the code used to create the block mesh for the region: //Check if the positive x face is visible, this happens for every face //Block::VERT_X_POS is just an array of non-transformed cube verts for one face //These checks are in a triple loop, which goes over every place in the array if (_visibilityData[x][y][z] & 0x01 > 0) { _vertexData->AddData(&(translateVertices(Block::VERT_X_POS, x, y, z)[0]), sizeof(Block::VERT_X_POS)); } //This is a seperate method, not in the loop glm::vec3* translateVertices(const glm::vec3 data[], uint16 x, uint16 y, uint16 z) { glm::vec3* copy = new glm::vec3[6]; memcpy(&copy, &data, sizeof(data)); for(int i = 0; i < 6; i++) copy[i] += glm::vec3(x, -y, z); //Make +y go down instead return copy; } I cannot see where the blocks may be getting offset by more than they should be, and certainly not why the offsets are a power of 2. Any help is greatly appreciated. Thanks.
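
    The accumulating offsets point at translateVertices(): data decays to a pointer inside the function, so sizeof(data) is the size of a pointer, and memcpy(&copy, &data, sizeof(data)) copies the address, leaving copy aliased to the static VERT_X_POS array, whose values then grow with every call. The intended copy is memcpy(copy, data, 6 * sizeof(glm::vec3)); the glBufferData(..., sizeof(points), ...) call has the same sizeof-on-a-pointer problem. A plain-C demonstration of the pitfall:

        #include <stdio.h>
        #include <string.h>

        static void show(const float data[6])
        {
            float *copy_wrong;
            float  copy_right[6];

            /* An array parameter is really a pointer here. */
            printf("sizeof(data) inside the function: %zu\n", sizeof(data));

            memcpy(&copy_wrong, &data, sizeof(data));    /* copies the address */
            memcpy(copy_right, data, 6 * sizeof *data);  /* copies the elements */
            printf("copy_wrong aliases data: %s\n", copy_wrong == data ? "yes" : "no");
        }

        int main(void)
        {
            float verts[6] = { 0, 1, 2, 3, 4, 5 };
            show(verts);
            return 0;
        }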

    Read the article

  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction Hello I will start out by explaining my setup, showing samples as I go along explaining the situation. I'm using these tools: OpenGL 3.3 GLSL 330 C++ Problem The problem is when I render the wavefront obj 3d model it gives a very weird visual glitch the model was supposed to be a square but instead its a triangluated mess with parts of the vertexes pointing in a stretched direction in massive amounts towards the bottom left side of the frustum.... Explanation: I'm using std::vectors to store my wavefront .obj model data using sscanf to get the floating point values into the structure members x,y,z and store them into the Points structure variable p; int index = IndexAssigner(1, 1); ifstream file (list[index].c_str() ); points.push_back(Point()); Point p; int face[4]; while (!file.eof() ) { char modelbuffer[10000]; file.getline(modelbuffer, 10000); switch(modelbuffer[0]) { case 'v' : sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z); points.push_back(p); break; case 'f': sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 ); faces.push_back(face[0]); faces.push_back(face[1]); faces.push_back(face[2]); faces.push_back(face[3]); } //Turn on FileReader aka "RENDER CODE" FileReader = true; } then I render the Points vector using the .data() member of std::vectors to the frustum. Other declarations: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); Vector declarations: struct Point {float x, y , z; }; std::vector<int>faces; std::vector<Point>points; Render code: glGenBuffers(1, &vertexbuffer); glGenTextures(1, &ModelTexture); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBindTexture(GL_TEXTURE_3D, ModelTexture); glTexImage2D(GL_TEXTURE_2D, 0,GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); glEnableVertexAttribArray(3); //Translation Process GLfloat TranslationMatrix[] = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0 }; //Send Translation Matrix up to the vertex shader glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix); glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data()); I tried looking at what was causing this and went through every function every parameter ,etc looked at the man pages. Then found out that it could be my glVertexAttribPointer. Here are the man pages for glVertexAttribPointer http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters is my problem How do I write those 2 last parameters do I try putting the data from Points into it?. glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); How does it work with vectors? Is it fast?* if you can not be bothered too look at the man pages here is the scripts coming from the man pages directly. Stride Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0. Pointer Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0. If you want my full source - http://ideone.com/fPfkg Thanks Again if you do read this.
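
    With a buffer bound to GL_ARRAY_BUFFER, the last argument of glVertexAttribPointer is a byte offset into that buffer (normally 0), not a CPU-side pointer, and the stride is the byte distance between consecutive vertices; the sizes handed to glBufferData are byte counts as well. A sketch against the code above (attribute index 3 is assumed to match a layout(location = 3) in the shader):

        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER,
                     points.size() * sizeof(Point),   /* bytes, not element count */
                     points.data(), GL_STATIC_DRAW);

        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,
                              sizeof(Point),          /* stride between vertices */
                              (void *)0);             /* offset into the bound VBO */
        glEnableVertexAttribArray(3);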

    Read the article

  • How to change LOD in geometry?

    - by ChaosDev
    Im looking for simple algorithm of LOD, for change geometry vertexes and decrease frame time. Im created octree, but now I want model or terrain vertex modify algorithm,not for increase(looking on tessellation later) but for decrease. I want something like this Questions: Is same algorithm can apply either to model and terrain correctly? Indexes need to be modified ? I must use octree or simple check distance between camera and object for desired effect ? New value of indexcount for DrawIndexed function needed ? Code: //m_LOD == 10 in the beginning //m_RawVerts - array of 3d Vector filled with values from vertex buffer. void DecreaseLOD() { m_LOD--; if(m_LOD<1)m_LOD=1; RebuildGeometry(); } void IncreaseLOD() { m_LOD++; if(m_LOD>10)m_LOD=10; RebuildGeometry(); } void RebuildGeometry() { void* vertexRawData = new byte[m_VertexBufferSize]; void* indexRawData = new DWORD[m_IndexCount]; auto context = mp_D3D->mp_Context; D3D11_MAPPED_SUBRESOURCE data; ZeroMemory(&data,sizeof(D3D11_MAPPED_SUBRESOURCE)); context->Map(mp_VertexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(vertexRawData,data.pData,m_VertexBufferSize); context->Unmap(mp_VertexBuffer->mp_buffer,0); context->Map(mp_IndexBuffer->mp_buffer,0,D3D11_MAP_READ,0,&data); memcpy(indexRawData,data.pData,m_IndexBufferSize); context->Unmap(mp_IndexBuffer->mp_buffer,0); DWORD* dwI = (DWORD*)indexRawData; int sz = (m_VertexStride/sizeof(float));//size of vertex element //algorithm must be here. std::vector<Vector3d> vertices; int i = 0; for(int j = 0; j < m_VertexCount; j++) { float x1 = (((float*)vertexRawData)[0+i]); float y1 = (((float*)vertexRawData)[1+i]); float z1 = (((float*)vertexRawData)[2+i]); Vector3d lv = Vector3d(x1,y1,z1); //my useless attempts if(j+m_LOD+1<m_RawVerts.size()) { float v1 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD]]); float v2 = VECTORHELPER::Distance(m_RawVerts[dwI[j]],m_RawVerts[dwI[j+m_LOD+1]]); if(v1>v2) lv = m_RawVerts[dwI[j+1]]; else if(v2<v1) lv = m_RawVerts[dwI[j+2]]; } (((float*)vertexRawData)[0+i]) = lv.x; (((float*)vertexRawData)[1+i]) = lv.y; (((float*)vertexRawData)[2+i]) = lv.z; i+=sz;//pass others vertex format values without change } for(int j = 0; j < m_IndexCount; j++) { //indices ? } //set vertexes to device UpdateVertexes(vertexRawData,mp_VertexBuffer->getSize()); delete[] vertexRawData; delete[] indexRawData; }
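
    For regular-grid terrain there is a much simpler decimation than moving vertices: keep the vertex buffer as-is and rebuild only the index buffer with a step of 2^k between sampled vertices; the count this returns is the new value for DrawIndexed. A sketch of that index generation (a different technique from editing vertex positions, offered as an alternative):

        #include <stddef.h>
        #include <stdint.h>

        /* Fill 'out' with triangle indices for a grid_w x grid_h vertex grid,
         * sampling every (1 << lod)-th vertex. Returns the index count. */
        size_t build_grid_indices(uint32_t *out, int grid_w, int grid_h, int lod)
        {
            int step = 1 << lod;
            size_t n = 0;
            for (int z = 0; z + step < grid_h; z += step) {
                for (int x = 0; x + step < grid_w; x += step) {
                    uint32_t i0 = (uint32_t)(z * grid_w + x);
                    uint32_t i1 = i0 + (uint32_t)step;
                    uint32_t i2 = i0 + (uint32_t)(step * grid_w);
                    uint32_t i3 = i2 + (uint32_t)step;
                    out[n++] = i0; out[n++] = i2; out[n++] = i1;  /* triangle 1 */
                    out[n++] = i1; out[n++] = i2; out[n++] = i3;  /* triangle 2 */
                }
            }
            return n;
        }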

    Read the article

  • Improving the efficiency of frustum culling

    - by DeadMG
    I've got some code which performs frustum culling. However, this defines the "frustum" way too broadly- when I have ~10 objects on screen, the code returns 42 objects to be rendered. I've tried taking "slices" through the frustum to attempt to increase the accuracy of the technique, but it doesn't seem to have made much impact. I also significantly reduced the far plane, so that the objects are barely at the edge. Here's my code (where size is the size in screen space- the resolution of the client area of the window I'm rendering into). Any suggestions? auto&& size = GetDimensions(); D3DVIEWPORT9 vp = { 0, 0, size.x, size.y, 0, 1 }; D3DCALL(device->SetViewport(&vp)); static const int slices = 10; std::vector<Object*> result; for(int i = 0; i < slices; i++) { D3DXVECTOR3 WorldSpaceFrustrumPoints[8] = { D3DXVECTOR3(0, size.y, static_cast<float>(i) / slices), D3DXVECTOR3(size.x, 0, static_cast<float>(i) / slices), D3DXVECTOR3(size.x, size.y, static_cast<float>(i) / slices), D3DXVECTOR3(0, 0, static_cast<float>(i) / slices), D3DXVECTOR3(0, 0, static_cast<float>(i + 1) / slices), D3DXVECTOR3(size.x, 0, static_cast<float>(i + 1) / slices), D3DXVECTOR3(size.x, size.y, static_cast<float>(i + 1) / slices), D3DXVECTOR3(0, size.y, static_cast<float>(i + 1) / slices) }; D3DXMATRIXA16 Identity; D3DXMatrixIdentity(&Identity); D3DXVec3UnprojectArray( WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3), WorldSpaceFrustrumPoints, sizeof(D3DXVECTOR3), &vp, &Projection, &View, &Identity, 8 ); Math::AABB Frustrum; auto world_begin = std::begin(WorldSpaceFrustrumPoints); auto world_end = std::end(WorldSpaceFrustrumPoints); auto world_initial = WorldSpaceFrustrumPoints[0]; Frustrum.BottomLeftClosest.x = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x < rhs.x ? lhs : rhs; }).x; Frustrum.BottomLeftClosest.y = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y < rhs.y ? lhs : rhs; }).y; Frustrum.BottomLeftClosest.z = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z < rhs.z ? lhs : rhs; }).z; Frustrum.TopRightFurthest.x = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.x > rhs.x ? lhs : rhs; }).x; Frustrum.TopRightFurthest.y = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.y > rhs.y ? lhs : rhs; }).y; Frustrum.TopRightFurthest.z = std::accumulate(world_begin, world_end, world_initial, [](D3DXVECTOR3 lhs, D3DXVECTOR3 rhs) { return lhs.z > rhs.z ? lhs : rhs; }).z; auto slices_result = ObjectTree.collision(Frustrum); result.insert(result.end(), slices_result.begin(), slices_result.end()); } return result;
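
    Taking the world-space AABB of all eight unprojected corners will always over-include, because that box is far larger than the frustum it encloses. The usual alternative is to extract the six planes from View * Projection (Gribb/Hartmann) and test each object's AABB against them with the "positive vertex" trick; a sketch of that test, assuming plane normals point into the frustum:

        #include <stdbool.h>

        typedef struct { float a, b, c, d; } Plane;
        typedef struct { float min[3], max[3]; } AABB;

        /* Returns false only when the box is entirely outside one plane. */
        bool aabb_in_frustum(const Plane planes[6], const AABB *box)
        {
            for (int i = 0; i < 6; ++i) {
                /* corner furthest along the plane normal */
                float px = planes[i].a >= 0 ? box->max[0] : box->min[0];
                float py = planes[i].b >= 0 ? box->max[1] : box->min[1];
                float pz = planes[i].c >= 0 ? box->max[2] : box->min[2];
                if (planes[i].a * px + planes[i].b * py + planes[i].c * pz + planes[i].d < 0)
                    return false;
            }
            return true;   /* inside or intersecting */
        }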

    Read the article

  • How to get this wavefront .obj data onto the frustum?

    - by NoobScratcher
    I've finally figured out how to get the data from a .obj file and store the vertex positions x, y, z in a structure called Point, whose members x, y, z are of type float. Now I want to know how to get this data onto the screen. Here is my attempt at doing so: //make a fileobject and store list and the index of that list in a c string ifstream file (list[index].c_str() ); std::vector<int>faces; std::vector<Point>points; points.push_back(Point()); Point p; int face[4]; while ( !file.eof() ) { char modelbuffer[10000]; //Get lines and store it in line string file.getline(modelbuffer, 10000); switch(modelbuffer[0]) { case 'v' : sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z); points.push_back(p); cout << "Getting Vertex Positions" << endl; cout << "v" << p.x << endl; cout << "v" << p.y << endl; cout << "v" << p.z << endl; break; case 'f': sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 ); cout << face[0] << endl; cout << face[1] << endl; cout << face[2] << endl; cout << face[3] << endl; faces.push_back(face[0]); faces.push_back(face[1]); faces.push_back(face[2]); faces.push_back(face[3]); } GLuint vertexbuffer; glGenBuffers(1, &vertexbuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW); //glBufferData(GL_ARRAY_BUFFER,sizeof(points), &(points[0]), GL_STATIC_DRAW); glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0); glVertexPointer(3, GL_FLOAT, sizeof(points),points.data()); glIndexPointer(GL_DOUBLE, 0, faces.data()); glDrawArrays(GL_QUADS, 0, points.size()); glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data()); } As you can see, I've clearly got the last part wrong, but I really don't know why it isn't rendering the data inside the frustum. Does anyone have a solution for this?
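
    A few things stand out: the draw calls run inside the parsing loop before the whole file has been read, glBufferData is given points.size() (an element count) rather than a byte size, and .obj face indices are 1-based while OpenGL expects 0-based. A minimal client-side-array sketch for after parsing has finished; it drops the dummy points.push_back(Point()) and shifts the indices instead, and assumes Point is three tightly packed floats:

    // Convert the 1-based OBJ indices to 0-based GL indices.
    std::vector<GLuint> indices;
    indices.reserve(faces.size());
    for (int f : faces)
        indices.push_back(static_cast<GLuint>(f - 1));

    glBindBuffer(GL_ARRAY_BUFFER, 0);                  // plain client-side pointers here, no VBO
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Point), points.data()); // stride is one Point, not sizeof the vector
    glDrawElements(GL_QUADS, (GLsizei)indices.size(), GL_UNSIGNED_INT, indices.data());
    glDisableClientState(GL_VERTEX_ARRAY);

    If you keep the VBO path instead, the size argument to glBufferData should be points.size() * sizeof(Point), and only one of glDrawArrays / glDrawElements is needed, not both.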

    Read the article

  • Modern OpenGL context failure [migrated]

    - by user209347
    OK, I managed to create an OpenGL context with wglcreatecontextattribARB with version 3.2 in my attrib struct (So I have initialized a 3.2 opengl context). It works, but the strange thing is, when I use glBindBuffer e,g. I still get unreferenced linker error, shouldn't a newer context prevent this? I'm on windows BTW, Linux doesn't have to deal with older and newer contexts (it directly supports the core of its version). The code: PIXELFORMATDESCRIPTOR pfd; HGLRC tmpRC; int iFormat; if (!(hDC = GetDC(hWnd))) { CMsgBox("Unable to create a device context. Program will now close.", "Error"); return false; } ZeroMemory(&pfd, sizeof(pfd)); pfd.nSize = sizeof(pfd); pfd.nVersion = 1; pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER; pfd.iPixelType = PFD_TYPE_RGBA; pfd.cColorBits = attribs->colorbits; pfd.cDepthBits = attribs->depthbits; pfd.iLayerType = PFD_MAIN_PLANE; if (!(iFormat = ChoosePixelFormat(hDC, &pfd))) { CMsgBox("Unable to find a suitable pixel format. Program will now close.", "Error"); return false; } if (!SetPixelFormat(hDC, iFormat, &pfd)) { CMsgBox("Unable to initialize the pixel formats. Program will now close.", "Error"); return false; } if (!(tmpRC=wglCreateContext(hDC))) { CMsgBox("Unable to create a rendering context. Program will now close.", "Error"); return false; } if (!wglMakeCurrent(hDC, tmpRC)) { CMsgBox("Unable to activate the rendering context. Program will now close.", "Error"); return false; } strncpy(vers, (char*)glGetString(GL_VERSION), 3); vers[3] = '\0'; if (sscanf(vers, "%i.%i", &glv, &glsubv) != 2) { CMsgBox("Unable to retrieve the OpenGL version. Program will now close.", "Error"); return false; } hRC = NULL; if (glv > 2) // Have OpenGL 3.+ support { if ((wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB"))) { int attribs[] = {WGL_CONTEXT_MAJOR_VERSION_ARB, glv, WGL_CONTEXT_MINOR_VERSION_ARB, glsubv,WGL_CONTEXT_FLAGS_ARB, 0,0}; hRC = wglCreateContextAttribsARB(hDC, 0, attribs); wglMakeCurrent(NULL, NULL); wglDeleteContext(tmpRC); if (!wglMakeCurrent(hDC, hRC)) { CMsgBox("Unable to activate the rendering context. Program will now close.", "Error"); return false; } moderncontext = true; } } if (hRC == NULL) { hRC = tmpRC; moderncontext = false; }
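
    The linker errors are expected: on Windows, opengl32.lib only exports the OpenGL 1.1 entry points, so creating a 3.2 context does not make glBindBuffer and friends link. They have to be fetched at runtime with wglGetProcAddress once a context is current, or pulled in through a loader such as GLEW or glad, which is usually the saner route. (For a core profile the attrib list would normally also include WGL_CONTEXT_PROFILE_MASK_ARB.) A hand-loading sketch for just three functions, assuming the PFNGL*PROC typedefs from GL/glext.h, to show the pattern:

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   // PFNGL*PROC typedefs

    // Module-level function pointers; only valid while a compatible context is current.
    PFNGLGENBUFFERSPROC glGenBuffers = nullptr;
    PFNGLBINDBUFFERPROC glBindBuffer = nullptr;
    PFNGLBUFFERDATAPROC glBufferData = nullptr;

    bool LoadBufferFunctions()
    {
        glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
        glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
        glBufferData = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData");
        return glGenBuffers && glBindBuffer && glBufferData;
    }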

    Read the article

  • iPod/iPhone OpenGL ES UIView flashes when updating

    - by Dave Viner
    I have a simple iPhone application which uses OpenGL ES (v1) to draw a line based on the touches of the user. In the XCode Simulator, the code works perfectly. However, when I install the app onto an iPod or iPhone, the OpenGL ES view "flashes" when drawing the line. If I disable the line drawing, the flash disappears. By "flash", I mean that the background image (which is an OpenGL texture) disappears momentarily, and then reappears. It appears as if the entire scene is completely erased and redrawn. The code which handles the line drawing is the following: renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end { static GLfloat* vertexBuffer = NULL; static NSUInteger vertexMax = 64; NSUInteger vertexCount = 0, count, i; //Allocate vertex array buffer if(vertexBuffer == NULL) vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat)); //Add points to the buffer so there are drawing points every X pixels count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1); for(i = 0; i < count; ++i) { if(vertexCount == vertexMax) { vertexMax = 2 * vertexMax; vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat)); } vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count); vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count); vertexCount += 1; } //Render the vertex array glVertexPointer(2, GL_FLOAT, 0, vertexBuffer); glDrawArrays(GL_POINTS, 0, vertexCount); //Display the buffer [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } (This function is based on the function of the same name from the GLPaint sample application.) For the life of me, I can not figure out why this causes the screen to flash. The line is drawn properly (both in the Simulator and in the iPod). But, the flash makes it unusable. Anyone have ideas on how to prevent the "flash"?
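
    The usual culprit on the device is that the drawable's renderbuffer contents are not preserved across presentRenderbuffer unless the layer's drawableProperties set kEAGLDrawablePropertyRetainedBacking to YES; the Simulator happens to keep them, which is why it looks fine there. Either turn retained backing on, or stop depending on the previous frame's contents: accumulate the strokes into a texture through a framebuffer object and redraw that texture every frame. A GL ES 1.1 sketch of the second option, using the OES_framebuffer_object calls the device exposes (CreateStrokeTarget is an illustrative name):

    #include <OpenGLES/ES1/gl.h>
    #include <OpenGLES/ES1/glext.h>

    GLuint strokeFBO = 0, strokeTex = 0;

    // One-time setup: a texture the size of the view, attached to an offscreen FBO.
    void CreateStrokeTarget(GLsizei w, GLsizei h)
    {
        glGenTextures(1, &strokeTex);
        glBindTexture(GL_TEXTURE_2D, strokeTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glGenFramebuffersOES(1, &strokeFBO);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, strokeFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, strokeTex, 0);
    }

    // Per stroke: bind strokeFBO and draw the new points into it (no clear).
    // Per frame: bind the on-screen framebuffer, clear, draw a full-screen quad
    // textured with strokeTex, then present. The screen buffer no longer has to
    // survive between frames, so nothing can flash.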

    Read the article

  • qsort on an array of pointers to Objective-C objects

    - by ElBueno
    I have an array of pointers to Objective-C objects. These objects have a sort key associated with them. I'm trying to use qsort to sort the array of pointers to these objects. However, the first time my comparator is called, the first argument points to the first element in my array, but the second argument points to garbage, giving me an EXC_BAD_ACCESS when I try to access its sort key. Here is my code (paraphrased): - (void)foo:(int)numThingies { Thingie **array; array = malloc(sizeof(deck[0])*numThingies); for(int i = 0; i < numThingies; i++) { array[i] = [[Thingie alloc] initWithSortKey:(float)random()/RAND_MAX]; } qsort(array[0], numThingies, sizeof(array[0]), thingieCmp); } int thingieCmp(const void *a, const void *b) { const Thingie *ia = (const Thingie *)a; const Thingie *ib = (const Thingie *)b; if (ia.sortKey > ib.sortKey) return 1; //ib point to garbage, so ib.sortKey produces the EXC_BAD_ACCESS else return -1; } Any ideas why this is happening?
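
    The crash comes from the base pointer: qsort is handed array[0] (the first Thingie object) instead of array, so it walks through that object's memory rather than the array of pointers, and the comparator's second argument then points at garbage. Pass array itself, and treat each comparator argument as a pointer to an element, i.e. a pointer to a Thingie pointer needing one extra dereference. A plain C-style sketch of the same call, with Thing standing in for Thingie:

    #include <cstdlib>

    struct Thing { float sortKey; };

    // a and b point at elements of the array, and each element is itself a Thing*.
    static int thingCmp(const void *a, const void *b)
    {
        const Thing *ia = *static_cast<const Thing * const *>(a);
        const Thing *ib = *static_cast<const Thing * const *>(b);
        if (ia->sortKey > ib->sortKey) return 1;
        if (ia->sortKey < ib->sortKey) return -1;
        return 0;   // also report equality, not just 1 / -1
    }

    // Usage sketch:
    //   Thing **array = ...;   // numThingies pointers
    //   qsort(array, numThingies, sizeof(array[0]), thingCmp);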

    Read the article

  • What is the equivalent of Application.ProcessMessages, Application.Handle and Application.Terminated

    - by DavidB
    Hi, I am new to writing Windows service apps and am having problems. Working in Delphi, I first wrote a normal Windows application to check and debug the major parts of the code, and now I have to convert it to an NT service. My code has to launch a Windows application, which I do using the following code. function Run_Program : boolean; var SEInfo : TShellExecuteInfo; ExitCode : DWORD; begin Result := false; FillChar(SEInfo, SizeOf(SEInfo),0); SEInfo.cbSize :=SizeOf(TShellExecuteInfo); With SEInfo do begin fMask := SEE_MASK_NOCLOSEPROCESS; Wnd := **Application.Handle**; lpFile := PChar(Exe_Prog); lpParameters := PChar(Exe_Param); nShow := SW_SHOWNORMAL; end; If ShellExecuteEx(@SEInfo) then begin repeat Application.ProcessMessages; GetExitCodeProcess(SEInfo.hProcess, ExitCode); until (ExitCode <> STILL_ACTIVE) OR Application.Terminated OR NOT q1.fieldbyName('Delay').AsBoolean; If ExitCode <> STILL_ACTIVE then Record_Event(Exe_Prog + ' completed ') else Record_Event(Exe_Prog + ' activated '); Result := true; end else Record_Event('Error Starting '+ Exe_Prog+ ' '); end; When this is put in the service app, the compiler fails with three undeclared-identifier errors: 1) Handle, 2) ProcessMessages and 3) Terminated. My question is: are there equivalent procedures that can be used in a service application, or should I approach the problem differently? Any help would be appreciated.
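
    In a service there is no VCL Application object, so there is nothing to pump: Wnd can simply be 0 for ShellExecuteEx, the ProcessMessages polling loop is better replaced by waiting on the process handle, and the role of Application.Terminated is played by a flag the service sets in its own stop handler. The Win32 calls involved are the same ones Delphi's Windows unit exposes, so the loop translates directly; a sketch of the pattern (g_stopRequested is an illustrative stand-in for that service flag, not a real VCL member):

    #include <windows.h>
    #include <atomic>

    std::atomic<bool> g_stopRequested{false};   // set from the service's stop handler

    // Wait for the launched process without a message loop; wake up twice a
    // second so a stop request is noticed promptly.
    DWORD WaitForChild(HANDLE process)
    {
        DWORD exitCode = STILL_ACTIVE;
        while (WaitForSingleObject(process, 500) == WAIT_TIMEOUT)
        {
            if (g_stopRequested.load())
                break;
        }
        GetExitCodeProcess(process, &exitCode);
        return exitCode;
    }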

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >