Search Results

Search found 3516 results on 141 pages for 'malloc history'.


  • C++ using cdb_read returns extra characters on some reads

    - by Moe Be
    Hi all, I am using the following function to loop through a couple of open CDB hash tables. Sometimes the value for a given key is returned along with an additional character (specifically a Ctrl-P, the DLE character, 0x10/0o020). I have checked the cdb key/value pairs with a couple of different utilities and none of them show any additional characters appended to the values. I get the character whether I use cdb_read() or cdb_getdata() (the commented-out code below). If I had to guess, I would say I am doing something wrong with the buffer I create to receive the result from the cdb functions. Any advice or assistance is greatly appreciated.

        char* HashReducer::getValueFromDb(const string &id, vector<struct cdb *> &myHashFiles)
        {
            unsigned char hex_value[BUFSIZ];
            size_t hex_len;
            // construct a real hex (not ascii-hex) value to use for database lookups
            atoh(id, hex_value, &hex_len);

            char *value = NULL;
            vector<struct cdb *>::iterator my_iter = myHashFiles.begin();
            vector<struct cdb *>::iterator my_end = myHashFiles.end();
            try
            {
                // while there are more databases to search and we have not found a match
                for (; my_iter != my_end && !value; my_iter++)
                {
                    //cerr << "\n looking for this MD5:" << id << " hex(" << hex_value << ") \n";
                    if (cdb_find(*my_iter, hex_value, hex_len))
                    {
                        //cerr << "\n\nI found the key " << id << " and it is " << cdb_datalen(*my_iter) << " long\n\n";
                        value = (char *)malloc(cdb_datalen(*my_iter));
                        cdb_read(*my_iter, value, cdb_datalen(*my_iter), cdb_datapos(*my_iter));
                        //value = (char *)cdb_getdata(*my_iter);
                        //cerr << "\n\nThe value is:" << value << " len is:" << strlen(value) << "\n\n";
                    }
                }
            }
            catch (...) {}
            return value;
        }
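
    The stray byte is consistent with a missing terminator: cdb values are raw bytes, not NUL-terminated C strings, so printing the buffer (or calling strlen on it) runs past the value into whatever happens to follow it in memory. A minimal sketch of the usual fix, allocating one extra byte and terminating (names taken from the question):

        unsigned len = cdb_datalen(*my_iter);
        value = (char *)malloc(len + 1);               // one extra byte for the terminator
        cdb_read(*my_iter, value, len, cdb_datapos(*my_iter));
        value[len] = '\0';                             // cdb values are not NUL-terminated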


  • How to resize an openGL window created with wglCreateContext?

    - by Nick
    Is it possible to resize an OpenGL window (or device context) created with wglCreateContext without disabling it? If so, how? Right now I have a function which resizes the DC, but the only way I could get it to work was to call DisableOpenGL and then re-enable. This causes any textures and other state changes to be lost. I would like to do this without the disable so that I do not have to go through the tedious task of recreating the OpenGL DC state.

        HWND hWnd;
        HDC hDC;

        void View_setSizeWin32(int width, int height)
        {
            // resize the window
            LPRECT rec = malloc(sizeof(RECT));
            GetWindowRect(hWnd, rec);
            SetWindowPos(hWnd, HWND_TOP, rec->left, rec->top,
                         rec->left+width, rec->left+height, SWP_NOMOVE);
            free(rec);

            // sad panda
            DisableOpenGL(hWnd, hDC, hRC);
            EnableOpenGL(hWnd, &hDC, &hRC);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-(width/2), width/2, -(height/2), height/2, -1.0, 1.0);
            // have fun recreating the openGL state....
        }

        void EnableOpenGL(HWND hWnd, HDC * hDC, HGLRC * hRC)
        {
            PIXELFORMATDESCRIPTOR pfd;
            int format;

            // get the device context (DC)
            *hDC = GetDC(hWnd);

            // set the pixel format for the DC
            ZeroMemory(&pfd, sizeof(pfd));
            pfd.nSize = sizeof(pfd);
            pfd.nVersion = 1;
            pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
            pfd.iPixelType = PFD_TYPE_RGBA;
            pfd.cColorBits = 24;
            pfd.cDepthBits = 16;
            pfd.iLayerType = PFD_MAIN_PLANE;
            format = ChoosePixelFormat(*hDC, &pfd);
            SetPixelFormat(*hDC, format, &pfd);

            // create and enable the render context (RC)
            *hRC = wglCreateContext(*hDC);
            wglMakeCurrent(*hDC, *hRC);
        }

        void DisableOpenGL(HWND hWnd, HDC hDC, HGLRC hRC)
        {
            wglMakeCurrent(NULL, NULL);
            wglDeleteContext(hRC);
            ReleaseDC(hWnd, hDC);
        }
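
    For what it's worth, a WGL rendering context is not tied to the window's size, so there is no need to destroy and recreate it on resize; updating the viewport and projection is enough. A minimal sketch of a resize handler that keeps the context alive (same hWnd global as above; note that SetWindowPos takes a width and height as its cx/cy arguments, not coordinates):

        void View_setSizeWin32(int width, int height)
        {
            SetWindowPos(hWnd, HWND_TOP, 0, 0, width, height,
                         SWP_NOMOVE | SWP_NOZORDER);   // cx/cy are a size, not a corner

            glViewport(0, 0, width, height);           // the context survives the resize
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-(width/2), width/2, -(height/2), height/2, -1.0, 1.0);
            glMatrixMode(GL_MODELVIEW);
        }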


  • Implementing Object Oriented: ansi-C approach

    - by No Money
    Hey there, I am an intermediate programmer in Java and know some of the basics of C++. I recently started to skim over the C language [please note that I emphasized C and want to stick with C, as I found it to be a perfect tool, so no need for suggestions about moving back to C++ or Java or any other language (e.g. C#)]. Moving on, I coded an object-oriented approach in C but got kind of scrambled by the pointers part. Please understand that I am just a noob trying to extend my knowledge beyond what I learned in high school. Here is my code:

        #include <stdio.h>

        typedef struct per {
            int privateint;
            char *privateString;
            struct per (*New) ();
            void (*deleteperOBJ) (struct t_person *);
            void (*setperNumber) ((struct*) t_person, int);
            void (*setperString) ((struct*) t_person, char *);
            void (*dumpperState) ((struct*) t_person);
        } t_person;

        void setperNumber(t_person *const per, int num) {
            if (per == NULL) return;
            per->privateint = num;
        }

        void setperString(t_person *const per, char *string) {
            if (per == NULL) return;
            per->privateString = string;
        }

        void dumpperState(t_person *const per) {
            if (per == NULL) return;
            printf("value of private int==%d\n", per->privateint);
            printf("value of private string==%s\n", per->privateString);
        }

        void deleteperOBJ(struct t_person *const per) {
            free((void*)t_person->per);
            t_person->per = NULL;
        }

        main() {
            t_person *const per = (struct*) malloc(sizeof(t_person));
            per = t_person -> struct per -> New();
            per -> setperNumber (t_person *per, 123);
            per -> setperString(t_person *per, "No money");
            dumpperState(t_person *per);
            deleteperOBJ(t_person *per);
        }

    Just to warn you, this program has several errors, and since I am a beginner I couldn't help but post this thread as a question. I am looking forward to assistance. Thanks in advance.
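
    Since the post explicitly asks for help, here is one way the same design can be made to compile; this is a sketch only, keeping the struct-plus-function-pointers approach from the question (method names are taken from the question, while the constructor new_person is a plain function rather than a stored pointer):

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct t_person {
            int  privateint;
            char *privateString;
            void (*setperNumber)(struct t_person *, int);
            void (*setperString)(struct t_person *, char *);
            void (*dumpperState)(struct t_person *);
        } t_person;

        void setperNumber(t_person *per, int num)      { if (per) per->privateint = num; }
        void setperString(t_person *per, char *string) { if (per) per->privateString = string; }
        void dumpperState(t_person *per)
        {
            if (!per) return;
            printf("value of private int==%d\n", per->privateint);
            printf("value of private string==%s\n", per->privateString);
        }

        t_person *new_person(void)
        {
            t_person *per = malloc(sizeof *per);   // allocate the struct, not a pointer's worth
            if (per) {
                per->setperNumber = setperNumber;  // wire up the "methods"
                per->setperString = setperString;
                per->dumpperState = dumpperState;
            }
            return per;
        }

        int main(void)
        {
            t_person *per = new_person();
            per->setperNumber(per, 123);           // in C the object is passed explicitly
            per->setperString(per, "No money");
            per->dumpperState(per);
            free(per);
            return 0;
        }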


  • Basic drawing with Quartz 2D on iPhone

    - by wwrob
    My goal is to make a program that will draw points whenever the screen is touched. This is what I have so far. The header file:

        #import <UIKit/UIKit.h>

        @interface ElSimView : UIView
        {
            CGPoint firstTouch;
            CGPoint lastTouch;
            UIColor *pointColor;
            CGRect *points;
            int npoints;
        }

        @property CGPoint firstTouch;
        @property CGPoint lastTouch;
        @property (nonatomic, retain) UIColor *pointColor;
        @property CGRect *points;
        @property int npoints;

        @end

    The implementation file:

        //@synths etc.

        - (id)initWithFrame:(CGRect)frame
        {
            return self;
        }

        - (id)initWithCoder:(NSCoder *)coder
        {
            if (self = [super initWithCoder:coder])
            {
                self.npoints = 0;
            }
            return self;
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            firstTouch = [touch locationInView:self];
            lastTouch = [touch locationInView:self];
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            lastTouch = [touch locationInView:self];
            points = (CGRect *)malloc(sizeof(CGRect) * ++npoints);
            points[npoints-1] = CGRectMake(lastTouch.x-15, lastTouch.y-15, 30, 30);
            [self setNeedsDisplay];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            lastTouch = [touch locationInView:self];
            [self setNeedsDisplay];
        }

        - (void)drawRect:(CGRect)rect
        {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(context, 2.0);
            CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
            CGContextSetFillColorWithColor(context, pointColor.CGColor);
            for (int i = 0; i < npoints; i++)
                CGContextAddEllipseInRect(context, points[i]);
            CGContextDrawPath(context, kCGPathFillStroke);
        }

        - (void)dealloc
        {
            free(points);
            [super dealloc];
        }

        @end

    When I load this and tap some points, it draws the first points normally, but then the next points are drawn along with random ellipses (not even circles). Also, I have another question: when exactly is drawRect executed?
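
    The garbage ellipses are consistent with the allocation in touchesEnded: each touch mallocs a brand-new, uninitialized array and only fills in the last element, so every earlier rect is read from uninitialized memory (and each previous buffer leaks). A sketch of the usual fix with realloc, which preserves the existing points while growing the array by one:

        points = (CGRect *)realloc(points, sizeof(CGRect) * ++npoints); // keeps old entries
        points[npoints - 1] = CGRectMake(lastTouch.x - 15, lastTouch.y - 15, 30, 30);

    As for the second question: drawRect: is not called immediately; setNeedsDisplay only marks the view as dirty, and the system invokes drawRect: on the next pass through the run loop. (Unrelated, but worth noting: the initWithFrame: above returns self without calling [super initWithFrame:frame].)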


  • Changing RGB color image to Grayscale image using Objective C

    - by user567167
    I was developing an application that changes a color image to a grayscale image. However, somehow the picture comes out wrong. I don't know what is wrong with the code; maybe a parameter that I pass in is wrong. Please help.

        UIImage *c = [UIImage imageNamed:@"downRed.png"];
        CGImageRef cRef = CGImageRetain(c.CGImage);
        NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(cRef));

        size_t w = CGImageGetWidth(cRef);
        size_t h = CGImageGetHeight(cRef);
        unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];
        unsigned char* greyPixelData = (unsigned char*) malloc(w*h);

        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                int iter = 4*(w*y+x);
                int red = pixelBytes[iter];
                int green = pixelBytes[iter+1];
                int blue = pixelBytes[iter+2];
                greyPixelData[w*y+x] = (unsigned char)(red*0.3 + green*0.59 + blue*0.11);
                int value = greyPixelData[w*y+x];
            }
        }

        CFDataRef imgData = CFDataCreate(NULL, greyPixelData, w*h);
        CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);

        size_t width = CGImageGetWidth(cRef);
        size_t height = CGImageGetHeight(cRef);
        size_t bitsPerComponent = 8;
        size_t bitsPerPixel = 8;
        size_t bytesPerRow = CGImageGetWidth(cRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGBitmapInfo info = kCGImageAlphaNone;
        CGFloat *decode = NULL;
        BOOL shouldInteroplate = NO;
        CGColorRenderingIntent intent = kCGRenderingIntentDefault;

        CGDataProviderRelease(imgDataProvider);

        CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
                                                  bytesPerRow, colorSpace, info, imgDataProvider,
                                                  decode, shouldInteroplate, intent);
        UIImage* newImage = [UIImage imageWithCGImage:throughCGImage];
        CGImageRelease(throughCGImage);
        newImageView.image = newImage;
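
    One bug stands out: the data provider is released before CGImageCreate uses it, which is a use-after-free. CGImageCreate retains the provider itself, so the release belongs after the image is created. A sketch of the corrected ordering (keeping the question's variable names, and also releasing the CFData and color space):

        CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel,
                                                  bytesPerRow, colorSpace, info, imgDataProvider,
                                                  decode, shouldInteroplate, intent);
        CGDataProviderRelease(imgDataProvider);   // release *after* the image is created
        CGColorSpaceRelease(colorSpace);
        CFRelease(imgData);

    A second thing to check: the indexing 4*(w*y+x) assumes the source bitmap is tightly packed 4-byte RGBA; if CGImageGetBytesPerRow(cRef) includes row padding, the output will come out skewed, so the row stride should be used instead of 4*w.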


  • Implications of trying to double free memory space in C

    - by SidNoob
    Here's my piece of code:

        #include <stdio.h>
        #include <stdlib.h>

        struct student {
            char *name;
        };

        int main()
        {
            struct student s;
            s.name = malloc(sizeof(char *)); // I hope this is the right way...
            printf("Name: ");
            scanf("%[^\n]", s.name);
            printf("You Entered: \n\n");
            printf("%s\n", s.name);
            free(s.name); // This will cause my code to break
        }

    All I know is that dynamic allocation on the 'heap' needs to be freed. My question is: when I run the program, sometimes the code runs successfully, i.e.

        ./struct
        Name: Thisis Myname
        You Entered:

        Thisis Myname

    From reading this I've concluded that I'm trying to double-free a piece of memory, i.e. I'm trying to free a piece of memory that is already free (hope I'm correct here; if yes, what could be the security implications of a double free?). While it fails at other times, as it is supposed to:

        ./struct
        Name: CrazyFishMotorhead Rider
        You Entered:

        CrazyFishMotorhead Rider
        *** glibc detected *** ./struct: free(): invalid next size (fast): 0x08adb008 ***
        ======= Backtrace: =========
        /lib/tls/i686/cmov/libc.so.6(+0x6b161)[0xb7612161]
        /lib/tls/i686/cmov/libc.so.6(+0x6c9b8)[0xb76139b8]
        /lib/tls/i686/cmov/libc.so.6(cfree+0x6d)[0xb7616a9d]
        ./struct[0x8048533]
        /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0xb75bdbd6]
        ./struct[0x8048441]
        ======= Memory map: ========
        08048000-08049000 r-xp 00000000 08:01 288098   /root/struct
        08049000-0804a000 r--p 00000000 08:01 288098   /root/struct
        0804a000-0804b000 rw-p 00001000 08:01 288098   /root/struct
        08adb000-08afc000 rw-p 00000000 00:00 0        [heap]
        b7400000-b7421000 rw-p 00000000 00:00 0
        b7421000-b7500000 ---p 00000000 00:00 0
        b7575000-b7592000 r-xp 00000000 08:01 788956   /lib/libgcc_s.so.1
        b7592000-b7593000 r--p 0001c000 08:01 788956   /lib/libgcc_s.so.1
        b7593000-b7594000 rw-p 0001d000 08:01 788956   /lib/libgcc_s.so.1
        b75a6000-b75a7000 rw-p 00000000 00:00 0
        b75a7000-b76fa000 r-xp 00000000 08:01 920678   /lib/tls/i686/cmov/libc-2.11.1.so
        b76fa000-b76fc000 r--p 00153000 08:01 920678   /lib/tls/i686/cmov/libc-2.11.1.so
        b76fc000-b76fd000 rw-p 00155000 08:01 920678   /lib/tls/i686/cmov/libc-2.11.1.so
        b76fd000-b7700000 rw-p 00000000 00:00 0
        b7710000-b7714000 rw-p 00000000 00:00 0
        b7714000-b7715000 r-xp 00000000 00:00 0        [vdso]
        b7715000-b7730000 r-xp 00000000 08:01 788898   /lib/ld-2.11.1.so
        b7730000-b7731000 r--p 0001a000 08:01 788898   /lib/ld-2.11.1.so
        b7731000-b7732000 rw-p 0001b000 08:01 788898   /lib/ld-2.11.1.so
        bffd5000-bfff6000 rw-p 00000000 00:00 0        [stack]
        Aborted

    So why is it that my code does work sometimes? That is, why can't the runtime always detect that I'm freeing corrupted memory? Does it have something to do with my stack/heap size?
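
    This is not actually a double free; the buffer is only freed once. s.name = malloc(sizeof(char *)) allocates just one pointer's worth of bytes (4 or 8), and the unbounded scanf writes the whole name into it, trampling the allocator's bookkeeping that follows the block. free() then aborts when it notices the corrupted heap metadata, and whether it notices depends on exactly what got overwritten, which is why short inputs sometimes appear to "work". A sketch of a bounded version:

        #include <stdio.h>
        #include <stdlib.h>

        struct student { char *name; };

        int main(void)
        {
            struct student s;
            s.name = malloc(100);                  // room for a real name, not just a pointer
            if (s.name == NULL) return 1;
            printf("Name: ");
            scanf("%99[^\n]", s.name);             // width limit keeps scanf inside the buffer
            printf("You Entered: \n\n%s\n", s.name);
            free(s.name);                          // exactly one free for one malloc
            return 0;
        }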


  • How to convert m4a file to aac adts file in Xcode?

    - by Bird Hsuie
    I have an .m4a file copied from the iPod library and saved to my Documents folder. For my next step, I need to convert it to .mp3 or .aac (ADTS type). I used this code and failed:

        -(IBAction)compressFile:(id)sender
        {
            NSLog(@"handleConvertToPCMTapped");

            // open an ExtAudioFile
            NSLog(@"opening %@", exportURL);
            ExtAudioFileRef inputFile;
            CheckResult(ExtAudioFileOpenURL((__bridge CFURLRef)exportURL, &inputFile),
                        "ExtAudioFileOpenURL failed");

            // prepare to convert to a plain ol' PCM format
            AudioStreamBasicDescription myPCMFormat;
            myPCMFormat.mSampleRate = 44100; // todo: or use source rate?
            myPCMFormat.mFormatID = kAudioFormatMPEGLayer3;
            myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
            myPCMFormat.mChannelsPerFrame = 2;
            myPCMFormat.mFramesPerPacket = 1;
            myPCMFormat.mBitsPerChannel = 16;
            myPCMFormat.mBytesPerPacket = 4;
            myPCMFormat.mBytesPerFrame = 4;
            CheckResult(ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat,
                                                sizeof(myPCMFormat), &myPCMFormat),
                        "ExtAudioFileSetProperty failed");

            // allocate a big buffer. size can be arbitrary for ExtAudioFile.
            // you have 64 KB to spare, right?
            UInt32 outputBufferSize = 0x10000;
            void* ioBuf = malloc(outputBufferSize);
            UInt32 sizePerPacket = myPCMFormat.mBytesPerPacket;
            UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

            // set up output file
            NSString *outputPath = [myDocumentsDirectory() stringByAppendingPathComponent:@"m_export.mp3"];
            NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
            NSLog(@"creating output file %@", outputURL);
            AudioFileID outputFile;
            CheckResult(AudioFileCreateWithURL((__bridge CFURLRef)outputURL, kAudioFileCAFType,
                                               &myPCMFormat, kAudioFileFlags_EraseFile, &outputFile),
                        "AudioFileCreateWithURL failed");

            // start convertin'
            UInt32 outputFilePacketPosition = 0; // in bytes
            while (true)
            {
                // wrap the destination buffer in an AudioBufferList
                AudioBufferList convertedData;
                convertedData.mNumberBuffers = 1;
                convertedData.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
                convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
                convertedData.mBuffers[0].mData = ioBuf;

                UInt32 frameCount = packetsPerBuffer;

                // read from the extaudiofile
                CheckResult(ExtAudioFileRead(inputFile, &frameCount, &convertedData),
                            "Couldn't read from input file");
                if (frameCount == 0)
                {
                    printf("done reading from file");
                    break;
                }

                // write the converted data to the output file
                CheckResult(AudioFileWritePackets(outputFile, false, frameCount, NULL,
                                                  outputFilePacketPosition / myPCMFormat.mBytesPerPacket,
                                                  &frameCount, convertedData.mBuffers[0].mData),
                            "Couldn't write packets to file");

                NSLog(@"Converted %ld bytes", outputFilePacketPosition);

                // advance the output file write location
                outputFilePacketPosition += (frameCount * myPCMFormat.mBytesPerPacket);
            }

            // clean up
            ExtAudioFileDispose(inputFile);
            AudioFileClose(outputFile);

            // show size in label
            NSLog(@"checking file at %@", outputPath);
            [self transMitFile:outputPath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath])
            {
                NSError *fileManagerError = nil;
                unsigned long long fileSize = [[[NSFileManager defaultManager]
                    attributesOfItemAtPath:outputPath error:&fileManagerError] fileSize];
            }

    Any suggestion? Thanks for your great help!
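
    One likely culprit: kExtAudioFileProperty_ClientDataFormat must describe linear PCM, since it is the uncompressed format ExtAudioFile hands you, not the target encoding, so setting it to kAudioFormatMPEGLayer3 will fail. Note also that the output file is created as a CAF (kAudioFileCAFType) even though the path says .mp3. A sketch of a client format that ExtAudioFile will accept (producing actual AAC/ADTS output additionally needs an output file of type kAudioFileAAC_ADTSType with an AAC stream description, which this sketch does not cover):

        AudioStreamBasicDescription myPCMFormat = {0};
        myPCMFormat.mSampleRate       = 44100;
        myPCMFormat.mFormatID         = kAudioFormatLinearPCM;   // client format must be PCM
        myPCMFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        myPCMFormat.mChannelsPerFrame = 2;
        myPCMFormat.mFramesPerPacket  = 1;
        myPCMFormat.mBitsPerChannel   = 16;
        myPCMFormat.mBytesPerFrame    = 4;   // 2 channels * 2 bytes
        myPCMFormat.mBytesPerPacket   = 4;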


  • Problem with pointers and getstring function

    - by volting
    I am trying to write a function to get a string from UART1. It's for an embedded system, so I don't want to use malloc. The pointer that is passed to the getstring function seems to point to garbage after gets_e_uart1() is called. I don't use pointers too often, so I'm sure it is something really stupid and trivial that I'm doing wrong. Regards, V

        int main()
        {
            char *ptr = 0;

            while (1)
            {
                gets_e_uart1(ptr, 100);
                puts_uart1(ptr);
            }

            return 0;
        } /* end main */

        //-------------------------------------------------------------------------
        // gets a string and echoes it
        // returns 0 if there is no error
        char getstring_e_uart1(char *stringPtr_, const int SIZE_)
        {
            char buffer_[SIZE_];
            stringPtr_ = buffer_;
            int start_ = 0, end_ = SIZE_ - 1;
            char errorflag = 0;

            /* keep getting chars until a newline char is received */
            while ((buffer_[start_++] = getchar_uart1()) != 0x0D)
            {
                putchar_uart1(buffer_[start_]); // echo it

                /* check for end of buffer, wrap around if necessary */
                if (start_ == end_)
                {
                    start_ = 0;
                    errorflag = 1;
                }
            }

            putchar_uart1('\n');
            putchar_uart1('\r');

            /* check for end of buffer, wrap around if necessary */
            if (start_ == end_)
            {
                buffer_[0] = '\0';
                errorflag = 1;
            }
            else
            {
                buffer_[start_++] = '\0';
            }

            return errorflag;
        }

    Update: I decided to go with the approach of passing a pointer to an array into the function. This works nicely; thanks to everyone for the informative answers. Updated code:

        //-------------------------------------------------------------------------
        // argument 1 should be a pointer to an array,
        // and the second argument should be the size of the array
        // gets a string and echoes it
        // returns 0 if there is no error
        char getstring_e_uart1(char *stringPtr_, const int SIZE_)
        {
            char *startPtr_ = stringPtr_;
            char *endPtr_ = startPtr_ + (SIZE_ - 1);
            char errorflag = 0;

            /* keep getting chars until a newline char is received */
            while ((*stringPtr_ = getchar_uart1()) != 0x0D)
            {
                putchar_uart1(*stringPtr_); // echo it
                stringPtr_++;

                /* check for end of buffer, wrap around if necessary */
                if (stringPtr_ == endPtr_)
                {
                    stringPtr_ = startPtr_;
                    errorflag = 1;
                }
            }

            putchar_uart1('\n');
            putchar_uart1('\r');

            /* check for end of buffer, wrap around if necessary */
            if (stringPtr_ == endPtr_)
            {
                stringPtr_ = startPtr_;
                *stringPtr_ = '\0';
                errorflag = 1;
            }
            else
            {
                *stringPtr_ = '\0';
            }

            return errorflag;
        }
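
    For completeness, the caller also has to own the storage: the original main passed a null pointer, so even a correct getstring would have had nowhere to write. A sketch of a call site that matches the updated function:

        int main(void)
        {
            char line[100];                            // caller owns the buffer

            while (1)
            {
                getstring_e_uart1(line, sizeof line);  // array decays to the pointer expected
                puts_uart1(line);
            }
        }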


  • overloading new/delete problem

    - by hidayat
    This is my scenario: I'm trying to overload new and delete globally. I have written my allocator class in a file called allocator.h, and what I am trying to achieve is that if a file includes this header, my version of new and delete should be used. So in the header file "allocator.h" I have declared the two functions:

        extern void* operator new(std::size_t size);
        extern void operator delete(void *p, std::size_t size);

    In the same header file I have a class that does all the allocator work:

        class SmallObjAllocator { ... };

    I want to call this class from the new and delete functions, and I would like the class to be static, so I have done this:

        template<unsigned dummy>
        struct My_SmallObjectAllocatorImpl
        {
            static SmallObjAllocator myAlloc;
        };

        template<unsigned dummy>
        SmallObjAllocator My_SmallObjectAllocatorImpl<dummy>::myAlloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE);

        typedef My_SmallObjectAllocatorImpl<0> My_SmallObjectAllocator;

    And in the cpp file (allocator.cc) it looks like this:

        void* operator new(std::size_t size)
        {
            std::cout << "using my new" << std::endl;

            if (size > MAX_OBJ_SIZE)
                return malloc(size);
            else
                return My_SmallObjectAllocator::myAlloc.allocate(size);
        }

        void operator delete(void *p, std::size_t size)
        {
            if (size > MAX_OBJ_SIZE)
                free(p);
            else
                My_SmallObjectAllocator::myAlloc.deallocate(p, size);
        }

    The problem occurs when the constructor for the static SmallObjAllocator object runs. For some reason the compiler calls my overloaded new while initializing it, which then tries to use My_SmallObjectAllocator::myAlloc.deallocate(p, size) before the object is defined, so the program crashes. So why is the compiler calling new when I define a static object? And how can I solve it?
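
    The crash pattern here is the classic static-initialization-order problem: the global operator new is live as soon as anything in the program allocates, but myAlloc is a static object whose constructor may not have run yet, and any allocation during its own construction re-enters the overloaded new. One common way out is construct-on-first-use, sketched below against the question's SmallObjAllocator (constructor arguments as in the question):

        SmallObjAllocator& allocatorInstance()
        {
            // Constructed on the first call, so it always exists before it is used.
            static SmallObjAllocator myAlloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE);
            return myAlloc;
        }

        void* operator new(std::size_t size)
        {
            if (size > MAX_OBJ_SIZE)
                return malloc(size);
            return allocatorInstance().allocate(size);
        }

        void operator delete(void *p, std::size_t size)
        {
            if (size > MAX_OBJ_SIZE)
                free(p);
            else
                allocatorInstance().deallocate(p, size);
        }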


  • Storing a NTFS Security Descriptor in C

    - by Doori Bar
    My goal is to store an NTFS security descriptor in its identical native state, so that I can restore it on demand. I managed to write the code for that purpose; would anybody mind validating a sample of it? (The for loop represents the way I store the native descriptor.) This sample only handles the "OWNER" flag, but my intention is to apply the same method to all of the security descriptor flags. I'm just a beginner and would appreciate the heads up. Thanks, Doori Bar

        #define _WIN32_WINNT 0x0501
        #define WINVER 0x0501

        #include <stdio.h>
        #include <windows.h>
        #include "accctrl.h"
        #include "aclapi.h"
        #include "sddl.h"

        int main(void)
        {
            DWORD lasterror;
            PSECURITY_DESCRIPTOR PSecurityD1, PSecurityD2;
            HANDLE hFile;
            PSID owner;
            LPTSTR ownerstr;
            BOOL ownerdefault;
            int ret = 0;
            unsigned int i;

            hFile = CreateFile("c:\\boot.ini", GENERIC_READ | ACCESS_SYSTEM_SECURITY,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_FLAG_BACKUP_SEMANTICS, NULL);
            if (hFile == INVALID_HANDLE_VALUE)
            {
                fprintf(stderr, "CreateFile() failed. Error: INVALID_HANDLE_VALUE\n");
                return 1;
            }

            lasterror = GetSecurityInfo(hFile, SE_FILE_OBJECT, OWNER_SECURITY_INFORMATION,
                                        &owner, NULL, NULL, NULL, &PSecurityD1);
            if (lasterror != ERROR_SUCCESS)
            {
                fprintf(stderr, "GetSecurityInfo() failed. Error: %lu;\n", lasterror);
                ret = 1;
                goto ret1;
            }

            ConvertSidToStringSid(owner, &ownerstr);
            printf("ownerstr of PSecurityD1: %s\n", ownerstr);

            /* The for loop represents the way I store the native descriptor */
            PSecurityD2 = malloc(GetSecurityDescriptorLength(PSecurityD1) * sizeof(unsigned char));
            for (i = 0; i < GetSecurityDescriptorLength(PSecurityD1); i++)
                ((unsigned char *) PSecurityD2)[i] = ((unsigned char *) PSecurityD1)[i];

            if (IsValidSecurityDescriptor(PSecurityD2) == 0)
            {
                fprintf(stderr, "IsValidSecurityDescriptor(PSecurityD2) failed.\n");
                ret = 2;
                goto ret2;
            }

            if (GetSecurityDescriptorOwner(PSecurityD2, &owner, &ownerdefault) == 0)
            {
                fprintf(stderr, "GetSecurityDescriptorOwner() failed.");
                ret = 2;
                goto ret2;
            }

            ConvertSidToStringSid(owner, &ownerstr);
            printf("ownerstr of PSecurityD2: %s\n", ownerstr);

        ret2:
            free(owner);
            free(ownerstr);
            free(PSecurityD1);
            free(PSecurityD2);
        ret1:
            CloseHandle(hFile);
            return ret;
        }
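
    The byte-by-byte loop works because the descriptor returned here is self-relative (one contiguous block), so it can also be copied in one shot. Two cleanup details matter, though: the owner SID returned by GetSecurityInfo points into the descriptor rather than being a separate allocation, and both the descriptor and the string from ConvertSidToStringSid are allocated with LocalAlloc, so they must go to LocalFree, not free. A sketch:

        DWORD sdLen = GetSecurityDescriptorLength(PSecurityD1);
        PSecurityD2 = (PSECURITY_DESCRIPTOR)malloc(sdLen);
        memcpy(PSecurityD2, PSecurityD1, sdLen);   // self-relative SD is one contiguous block

        /* ... use PSecurityD2 ... */

        LocalFree(ownerstr);     // ConvertSidToStringSid allocates with LocalAlloc
        LocalFree(PSecurityD1);  // GetSecurityInfo's descriptor; it owns the SID too
        free(PSecurityD2);       // only this one came from malloc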


  • No matter what, I can't get this stupid progress bar to update from a thread!

    - by Synthetix
    I have a Windows app written in C (using gcc/MinGW) that works pretty well except for a few UI problems. One: I simply cannot get the progress bar to update from a thread. In fact, I probably can't get ANY UI stuff to update. Basically, I have a spawned thread that does some processing, and from that thread I attempt to update the progress bar in the main thread. I tried this by using PostMessage() to the main hwnd, but no luck, even though I can do other things like open message boxes. However, it's unclear whether the message box is getting called within the thread or on the main thread. Here's some code:

        // in header/globally accessible
        HWND wnd;          // main application window
        HWND progress_bar; // progress bar

        typedef struct { // to pass to thread
            DWORD mainThreadId;
            HWND mainHwnd;
            char *filename;
        } THREADSTUFF;

        // callback function
        LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            switch (msg)
            {
                case WM_CREATE:
                {
                    // create progress bar
                    progress_bar = CreateWindowEx(0, PROGRESS_CLASS, (LPCTSTR)NULL,
                                                  WS_CHILD | WS_VISIBLE,
                                                  79, 164, 455, 15,
                                                  hwnd, (HMENU)20, NULL, NULL);
                    break;
                }
                case WM_COMMAND:
                {
                    if (LOWORD(wParam) == 2)
                    {
                        // do some processing in a thread;
                        // struct of stuff I need to pass to it
                        THREADSTUFF *threadStuff;
                        threadStuff = (THREADSTUFF*)malloc(sizeof(*threadStuff));
                        threadStuff->mainThreadId = GetCurrentThreadId();
                        threadStuff->mainHwnd = hwnd;
                        threadStuff->filename = (void*)&filename;
                        hThread1 = CreateThread(NULL, 0, convertFile, (LPVOID)threadStuff, 0, NULL);
                    }
                    else if (LOWORD(wParam) == 5)
                    {
                        // update progress bar
                        MessageBox(hwnd, "I got a message!", "Message",
                                   MB_OK | MB_ICONINFORMATION);
                        PostMessage(progress_bar, PBM_STEPIT, 0, CLR_DEFAULT);
                    }
                    break;
                }
            }
        }

    This all seems to work okay. The problem is in the thread:

        DWORD WINAPI convertFile(LPVOID params)
        {
            // get passed params; this works perfectly fine
            THREADSTUFF *tData = (THREADSTUFF*)params;
            MessageBox(tData->mainHwnd, tData->filename, "File name",
                       MB_OK | MB_ICONINFORMATION);                      // yep

            PostThreadMessage(tData->mainThreadId, WM_COMMAND, 5, 0);    // only shows message
            PostMessage(tData->mainHwnd, WM_COMMAND, 5, 0);              // only shows message
        }

    When I say "only shows message," I mean that the MessageBox() call in the window procedure runs, but the PostMessage() that should update the position of the progress bar does nothing. What am I missing?
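
    A couple of things worth ruling out, sketched below: PBM_STEPIT's lParam must be zero (CLR_DEFAULT belongs to PBM_SETBARCOLOR), the bar only moves in visible increments if its range and step suit the work being done, and messages sent with PostThreadMessage have no window, so they are never routed to a WndProc by a standard message loop. Setting the range and step explicitly after creating the control, and letting only the UI thread touch it, is the usual arrangement:

        // after CreateWindowEx, in WM_CREATE:
        SendMessage(progress_bar, PBM_SETRANGE, 0, MAKELPARAM(0, 100));
        SendMessage(progress_bar, PBM_SETSTEP, 1, 0);

        // from the worker thread: post to the main window only
        PostMessage(tData->mainHwnd, WM_COMMAND, 5, 0);

        // in WndProc, case 5: the UI thread advances the bar
        PostMessage(progress_bar, PBM_STEPIT, 0, 0);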


  • What is GC holes?

    - by tianyi
    I wrote a long-lived TCP connection socket server in C#. A memory spike happens in my server, so I used .NET Memory Profiler (a tool) to detect where the memory leaks. The profiler indicates the private heap is huge, and the memory looks something like this (the numbers are not real; what I want to show is that the GC0 and GC2 "Holes" are very, very large, while the data size is normal):

        Managed heaps - 1,500,000KB
          Normal heap - 1,400,000KB
            Generation #0 - 600,000KB
              Data    - 100,000KB
              "Holes" - 500,000KB
            Generation #1 - xxKB
              Data    - 0KB
              "Holes" - xKB
            Generation #2 - xxxxxxxxxxxxxKB
              Data    - 100,000KB
              "Holes" - 700,000KB
          Large heap - 131,072KB
            Large heap      - 83KB
            Overhead/unused - 130,989KB
          Overhead - 0KB

    However, what is a GC hole? I read an article about it: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html. The author said the code snippet below is the simplest way to introduce a GC hole into the system:

        //OBJECTREF is a typedef for Object*.
        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();
            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            OBJECTREF bObj = AllocateObjectMemory(pTBL);

            //WRONG!!! "aObj" may point to garbage if the second
            //"AllocateObjectMemory" triggered a GC.
            DoSomething(aObj, bObj);
        }

    All it does is allocate two managed objects and then do something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably "work." But this code will crash eventually. Why? If the second call to AllocateObjectMemory triggers a GC, that GC discards the object instance you just assigned to aObj. This code, like all C++ code inside the CLR, is compiled by a non-managed compiler, and the GC cannot know that aObj holds a root reference to an object you want kept live.

    I can't understand what he explained. Does the sample mean aObj becomes a wild pointer after the GC? Is it like

        {
            aObj = (object *)malloc(sizeof(object));
            free(aObj);
            function(aObj);
        }

    ? I hope somebody can explain it.
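
    For what it's worth, the quoted snippet is CLR-internal C++, not C#: AllocateObjectMemory can trigger a collection, the collector may move or reclaim the object aObj refers to, and since the C++ compiler reports nothing to the GC, aObj is left dangling, a wild pointer in exactly the sense the question guesses. Inside the CLR sources this is fixed by explicitly reporting the local as a root for the scope, along these lines (a sketch based on the CLR's GCPROTECT macros, which this snippet assumes):

        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();
            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            GCPROTECT_BEGIN(aObj);                        // report aObj as a GC root
            OBJECTREF bObj = AllocateObjectMemory(pTBL);  // may GC; aObj stays valid
            DoSomething(aObj, bObj);
            GCPROTECT_END();
        }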


  • FILE_NOT_FOUND when trying to open COM port C++

    - by Moutabreath
    I am trying to open a COM port for reading and writing using C++, but I can't seem to get past the first stage of actually opening it. I get INVALID_HANDLE_VALUE on the handle, with GetLastError reporting FILE_NOT_FOUND. I have searched around the web for a couple of days and I'm fresh out of ideas; I have searched through all the questions regarding COM on this website too. I have scanned through the existing ports (or so I believe) to get the name of the port right. I also tried combinations of _T("COM1") with the slashes, without the slashes, with a colon, without a colon, and without the _T. I'm using Windows 7 on a 64-bit machine. This is the code I've got; I'll be glad for any input on this.

        void SendToCom(char* data, int len)
        {
            DWORD cbNeeded = 0;
            DWORD dwPorts = 0;
            EnumPorts(NULL, 1, NULL, 0, &cbNeeded, &dwPorts);

            // What will be the return value
            BOOL bSuccess = FALSE;
            LPCSTR COM1;
            BYTE* pPorts = static_cast<BYTE*>(malloc(cbNeeded));
            bSuccess = EnumPorts(NULL, 1, pPorts, cbNeeded, &cbNeeded, &dwPorts);
            if (bSuccess)
            {
                PORT_INFO_1* pPortInfo = reinterpret_cast<PORT_INFO_1*>(pPorts);
                for (DWORD i = 0; i < dwPorts; i++)
                {
                    // If it looks like "COMX" then
                    size_t nLen = _tcslen(pPortInfo->pName);
                    if (nLen > 3)
                    {
                        if (_tcsnicmp(pPortInfo->pName, _T("COM"), 3) == 0)
                        {
                            COM1 = pPortInfo->pName;
                            //COM1 = "\\\\.\\COM1";
                            HANDLE m_hCommPort = CreateFile(
                                COM1,
                                GENERIC_READ | GENERIC_WRITE, // access (read and write)
                                0,                            // (share) 0: cannot share the COM port
                                NULL,                         // security (None)
                                OPEN_EXISTING,                // creation: open_existing
                                FILE_FLAG_OVERLAPPED,         // we want overlapped operation
                                NULL                          // no template file for COM port
                            );

                            if (m_hCommPort == INVALID_HANDLE_VALUE)
                            {
                                DWORD err = GetLastError();
                                if (err == ERROR_FILE_NOT_FOUND)
                                    MessageBox(hWnd, "ERROR_FILE_NOT_FOUND", NULL, MB_ABORTRETRYIGNORE);
                                else if (err == ERROR_INVALID_NAME)
                                    MessageBox(hWnd, "ERROR_INVALID_NAME", NULL, MB_ABORTRETRYIGNORE);
                                else
                                    MessageBox(hWnd, "unknown error", NULL, MB_ABORTRETRYIGNORE);
                            }
                            else
                            {
                                WriteAndReadPort(m_hCommPort, data);
                            }
                        }
                    }
                    pPortInfo++;
                }
            }
        }
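
    Two classic serial-port gotchas are worth checking here: EnumPorts enumerates spooler ports, which may include names with a trailing colon ("COM1:") or entries that do not correspond to an installed serial device at all (ERROR_FILE_NOT_FOUND usually means no such device exists), and anything above COM9 must be opened through the \\.\ device namespace. A sketch that normalizes the enumerated name before opening (buffer size is arbitrary):

        TCHAR device[32];
        _stprintf_s(device, _countof(device), _T("\\\\.\\%s"), pPortInfo->pName);

        size_t n = _tcslen(device);
        if (n && device[n - 1] == _T(':'))
            device[n - 1] = _T('\0');       // strip a trailing ':' if the enumerator added one

        HANDLE m_hCommPort = CreateFile(device,
                                        GENERIC_READ | GENERIC_WRITE,
                                        0, NULL, OPEN_EXISTING,
                                        FILE_FLAG_OVERLAPPED, NULL);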


  • Is there anything wrong with my texture loading method ?

    - by José Joel.
    I'm a noob at OpenGL and trying to learn as much as possible. I'm using this method to load my OpenGL textures, loading every .png as RGBA4444. Am I doing anything incorrectly?

        - (void)loadTexture:(NSString*)nombre
        {
            CGImageRef textureImage = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:nombre ofType:nil]].CGImage;
            if (textureImage == nil)
            {
                NSLog(@"Failed to load texture image");
                return;
            }

            textureWidth  = NextPowerOfTwo(CGImageGetWidth(textureImage));
            textureHeight = NextPowerOfTwo(CGImageGetHeight(textureImage));
            imageSizeX = CGImageGetWidth(textureImage);
            imageSizeY = CGImageGetHeight(textureImage);

            // times 4 because every pixel needs 4 bytes: RGBA
            GLubyte *textureData = (GLubyte *)calloc(1, textureWidth * textureHeight * 4);

            CGContextRef textureContext = CGBitmapContextCreate(textureData,
                textureWidth, textureHeight, 8, textureWidth * 4,
                CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
            CGContextDrawImage(textureContext,
                CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);

            // Convert "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
            void *tempData = malloc(textureWidth * textureHeight * 2);
            unsigned int* inPixel32 = (unsigned int*)textureData;
            unsigned short* outPixel16 = (unsigned short*)tempData;
            for (int i = 0; i < textureWidth * textureHeight; ++i, ++inPixel32)
                *outPixel16++ =
                    ((((*inPixel32 >> 0)  & 0xFF) >> 4) << 12) | // R
                    ((((*inPixel32 >> 8)  & 0xFF) >> 4) << 8)  | // G
                    ((((*inPixel32 >> 16) & 0xFF) >> 4) << 4)  | // B
                    ((((*inPixel32 >> 24) & 0xFF) >> 4) << 0);   // A

            free(textureData);
            textureData = tempData;
            CGContextRelease(textureContext);

            glGenTextures(1, &textures[0]);
            glBindTexture(GL_TEXTURE_2D, textures[0]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0,
                         GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, textureData);
            free(textureData);

            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        }

    And this is my dealloc method:

        - (void)dealloc
        {
            glDeleteTextures(1, textures);
            [super dealloc];
        }
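
    The conversion itself looks right for RGBA4444. One piece the post does not show is NextPowerOfTwo; a typical implementation looks like the sketch below (an assumption on my part, since the original helper may differ):

        static inline unsigned NextPowerOfTwo(unsigned v)
        {
            unsigned p = 1;
            while (p < v)
                p <<= 1;     // smallest power of two >= v
            return p;
        }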


  • Getting level values from PCM raw data using Core Audio

    - by John
    I am trying to extract level data from a PCM audio file using Core Audio. I have gotten as far as (I believe) getting the raw data into a byte array (UInt8), but it is 16-bit PCM data and I am having trouble reading the data out. The input is from the iPhone microphone, which I have set up as:

        [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
        [recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
        [recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
        [recordSetting setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
        [recordSetting setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

    which is obviously 16 bits. I am then trying to print out a few values for debugging purposes, to see if they look reasonable, and they do not (many 0's):

        ExtAudioFileRef inputFile = NULL;
        ExtAudioFileOpenURL(track.location, &inputFile);

        AudioStreamBasicDescription inputFileFormat;
        UInt32 dataSize = (UInt32)sizeof(inputFileFormat);
        ExtAudioFileGetProperty(inputFile, kExtAudioFileProperty_FileDataFormat,
                                &dataSize, &inputFileFormat);

        UInt8 *buffer = malloc(BUFFER_SIZE);
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = 1;
        bufferList.mBuffers[0].mData = buffer;              // pointer to buffer of audio data
        bufferList.mBuffers[0].mDataByteSize = BUFFER_SIZE; // number of bytes in the buffer

        while (true)
        {
            UInt32 frameCount = (bufferList.mBuffers[0].mDataByteSize / inputFileFormat.mBytesPerFrame);

            // Read a chunk of input
            OSStatus status = ExtAudioFileRead(inputFile, &frameCount, &bufferList);

            // If no frames were returned, conversion is finished
            if (0 == frameCount)
                break;

            NSLog(@"---");
            int16_t *bufferl = &buffer;
            for (int i = 0; i < 100; i++)
            {
                //const int16_t *bufferl = bufferl[i];
                NSLog(@"%d", bufferl[i]);
            }
        }

    Not sure what I am doing wrong; I think it has to do with reading the byte array. Sorry for the long code post...
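
    Two things here would produce garbage or zeros: without setting kExtAudioFileProperty_ClientDataFormat, the file's native format is what gets read, and &buffer takes the address of the pointer variable itself rather than the audio data it points to. A sketch of both fixes (client format fields chosen to match the recorder settings above; this is an assumption about the file's contents):

        AudioStreamBasicDescription clientFormat = {0};
        clientFormat.mSampleRate       = 44100.0;
        clientFormat.mFormatID         = kAudioFormatLinearPCM;
        clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        clientFormat.mChannelsPerFrame = 1;
        clientFormat.mFramesPerPacket  = 1;
        clientFormat.mBitsPerChannel   = 16;
        clientFormat.mBytesPerFrame    = 2;
        clientFormat.mBytesPerPacket   = 2;
        ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(clientFormat), &clientFormat);

        int16_t *samples = (int16_t *)buffer;   // point at the data, not at the pointer
        for (int i = 0; i < 100; i++)
            NSLog(@"%d", samples[i]);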


  • C problem, left of '->' must point to class/struct/union/generic type ??

    - by Patrick
    Hello! I'm trying to understand why this doesn't work. I keep getting the following error: "left of '->nextNode' must point to class/struct/union/generic type" (and the same for all the lines with a -> in the function new_math_struct).

    Header file:

        #ifndef MSTRUCT_H
        #define MSTRUCT_H

        #define PLUS 0
        #define MINUS 1
        #define DIVIDE 2
        #define MULTIPLY 3
        #define NUMBER 4

        typedef struct math_struct {
            int type_of_value;
            int value;
            int sum;
            int is_used;
            struct math_struct* nextNode;
        };

        typedef struct math_struct* math_struct_ptr;

        #endif

    C file:

        int get_input(math_struct_ptr* startNode)
        {
            /* character, input by the user */
            char input_ch;
            char* input_ptr;
            math_struct_ptr* ptr;
            math_struct_ptr* previousNode;
            input_ptr = &input_ch;
            previousNode = startNode;

            /* as long as input is not ok */
            while (1)
            {
                input_ch = get_input_character();
                if (input_ch == ',')      // Carriage return
                    return 1;
                else if (input_ch == '.') // Illegal character
                    return 0;

                if (input_ch == '+')
                    ptr = new_math_struct(PLUS, 0);
                else if (input_ch == '-')
                    ptr = new_math_struct(MINUS, 0);
                else if (input_ch == '/')
                    ptr = new_math_struct(DIVIDE, 0);
                else if (input_ch == '*')
                    ptr = new_math_struct(MULTIPLY, 0);
                else
                    ptr = new_math_struct(NUMBER, atoi(input_ptr));

                if (startNode == NULL)
                {
                    startNode = previousNode = ptr;
                }
                else
                {
                    previousNode->nextNode = ptr;
                    previousNode = ptr;
                }
            }
            return 0;
        }

        math_struct_ptr* new_math_struct(int symbol, int value)
        {
            math_struct_ptr* ptr;
            ptr = (math_struct_ptr*)malloc(sizeof(math_struct_ptr));
            ptr->type_of_value = symbol;
            ptr->value = value;
            ptr->sum = 0;
            ptr->is_used = 0;
            return ptr;
        }

        char get_input_character()
        {
            /* character, input by the user */
            char input_ch;

            /* get the character */
            scanf("%c", &input_ch);
            if (input_ch == '+' || input_ch == '-' || input_ch == '*' ||
                input_ch == '/' || input_ch == ')')
                return input_ch; // A special character
            else if (input_ch == '\n')
                return ',';      // A carriage return
            else if (input_ch < '0' || input_ch > '9')
                return '.';      // Not a number
            else
                return input_ch; // Number
        }

    The header for the C file just contains a reference to the struct header and the definitions of the functions. Language: C.
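
    The error comes from the pointer types: math_struct_ptr is already a pointer, so math_struct_ptr* is a pointer-to-pointer, and -> applied to it does not reach the struct; on top of that, malloc(sizeof(math_struct_ptr)) allocates only a pointer's worth of bytes rather than a whole node. A sketch of a constructor that matches how the functions are used (same names as the question):

        math_struct_ptr new_math_struct(int symbol, int value)
        {
            /* one level of indirection, and allocate the struct itself */
            math_struct_ptr ptr = malloc(sizeof *ptr);
            ptr->type_of_value = symbol;
            ptr->value = value;
            ptr->sum = 0;
            ptr->is_used = 0;
            ptr->nextNode = NULL;
            return ptr;
        }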


  • if non zero elements in same column count only once

    - by George
    I want to check the elements above the main diagonal, and if I find non-zero values, count one. If the non-zero values are found in the same column, they should be counted just once, not once per value. For example, the result should be count = 2, not 3, in this example, because 12 and 6 are in the same column:

        A = 1  11  12
            4   5   6
            0   7   0

        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>

        int main(int argc, const char* argv[])
        {
            int Rows = 3, Cols = 3;
            float *A = (float *) malloc(Rows * Cols * sizeof(float));

            A[0] = 1.0;  A[1] = 11.0; A[2] = 12.0;
            A[3] = 4.0;  A[4] = 5.0;  A[5] = 6.0;
            A[6] = 0.0;  A[7] = 7.0;  A[8] = 0.0;

            // print input matrix
            printf("\n Input matrix \n\n");
            for (int i = 0; i < Rows; i++)
                for (int j = 0; j < Cols; j++)
                {
                    printf("%f\t", A[i * Cols + j]);
                    if (j == Cols - 1)
                        printf("\n");
                }
            printf("\n");

            int count = 0;
            for (int j = 0; j < Cols; j++)
            {
                for (int i = (Rows - 1); i >= 0; i--)
                {
                    // check the elements above the main diagonal
                    if (j > i)
                    {
                        if (A[i * Cols + j] != 0)
                        {
                            printf("\n Above nonzero Elmts = %f\n", A[i * Cols + j]);
                            count++;
                        }
                    }
                }
            }
            printf("\ncount = %d\n", count);
            return 0;
        }
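
    To count each column at most once, it is enough to stop scanning a column after the first hit. A sketch of the counting loops with a break (same names and layout as above):

        int count = 0;
        for (int j = 0; j < Cols; j++)
        {
            for (int i = 0; i < Rows; i++)
            {
                if (j > i && A[i * Cols + j] != 0)
                {
                    count++;   // first nonzero above the diagonal in this column
                    break;     // ignore further nonzeros in the same column
                }
            }
        }

    With the example matrix, column 1 contributes once (the 11) and column 2 contributes once (the 12; the 6 is skipped), giving count = 2 as desired.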


  • How to check total cache size using a program

    - by user1888541
    I'm having some trouble creating a program to measure cache size in C. I understand the basic concept of going about this, but I'm still having trouble figuring out exactly what I am doing wrong. Basically, I create arrays of varying length (going by powers of 2) and access each element, assigning it to a dummy variable. I go through the array and do this around 1000 times to average out the "noise" that would otherwise occur if I only did it once, so I get an accurate measurement of access time. Then I look for the array size that causes a big jump in access time. Unfortunately, this is where my problem is: I don't see this jump with my code, so clearly I am doing something wrong. Another thing: I used /proc/cpuinfo to check the cache, and it said the size was 6114, which is not a power of 2. I was told to go by powers of 2 to figure out the cache; can anyone explain why that is? Here is the gist of my code; I will post the rest if need be:

        {
            struct timeval start;
            struct timeval end;

            // change this to test different sizes
            int n = 1;
            // I'm checking the time "manually" first, before making the program
            // loop over sizes by itself; this is why there is a separate "n"
            // variable to increase the size
            int array_size = 1048576 * n;

            char x = 0;
            int i = 0, j = 0;
            char *a;
            a = malloc(sizeof(char) * array_size);

            gettimeofday(&start, NULL);
            for (i = 0; i < 1000; i++)
            {
                for (j = 0; j < array_size; j += 1)
                {
                    x = a[j];
                }
            }
            gettimeofday(&end, NULL);

            int timeTaken = (end.tv_sec * 1000000 + end.tv_usec) -
                            (start.tv_sec * 1000000 + start.tv_usec);
            printf("Time Taken: %d \n", timeTaken);
            printf("Average: %f \n", (double)timeTaken / (double)array_size);
        }
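
    Two effects work against this approach: with optimization on, the x = a[j] loop is dead code the compiler may remove entirely, and a purely sequential scan is exactly the pattern the hardware prefetcher hides best, so no jump appears at the cache boundary. (On the size question: total cache sizes genuinely need not be powers of two; 6 MB = 6144 KB caches are common, and powers of two are just convenient test sizes that bracket the boundary.) A common alternative is to touch one byte per cache line through a volatile sink, sketched here:

        volatile char sink;   // volatile keeps the reads from being optimized away

        for (i = 0; i < 1000; i++)
            for (j = 0; j < array_size; j += 64)   // one access per 64-byte line
                sink = a[j];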


  • makecontext segfault?

    - by cdietschrun
    I am working on a homework assignment that will be due next semester. It requires us to implement our own context-switching/thread library using the ucontext API. The professor provides code that does it, but before a thread returns he manually does some cleanup work and calls an ISR that finds another thread to swapcontext to, or exits if none are left. The point of the assignment is to use the uc_link field of the context so that when a thread hits a return, the link takes care of that work. I've created a function (type void, void args) that just does the work the functions did before (clean up, then call the ISR); the professor said this is what he wanted. So all that's left is to do a makecontext somewhere along the way on the context in the uc_link field so that it runs my function, right? Well, when I do makecontext on seemingly any combination of ucontext_t's and functions, I get a segfault, and gdb provides no help. I can skip the makecontext, and then my program exits 'normally' when it hits a return in the threads I created, because (presumably) the uc_link field is not properly set up (which is what I'm trying to do). I also can't find anything on why makecontext would segfault. Can anyone help?

        stack2.ss_sp = (void *)(malloc(STACKSIZE));
        if (stack2.ss_sp == NULL)
        {
            printf("thread failed to get stack space\n");
            exit(8);
        }
        stack2.ss_size = STACKSIZE;
        stack2.ss_flags = 0;

        if (getcontext(&main_context) == -1)
        {
            perror("getcontext in t_init, rtn_env");
            exit(5);
        }

        //main_context.uc_stack = t_state[i].mystk;
        main_context.uc_stack = stack2;
        main_context.uc_link = 0;
        makecontext(&main_context, (void (*)(void))thread_rtn, 0);

    I've also tried just thread_rtn, &thread_rtn, and other things. thread_rtn is declared as void thread_rtn(void). Later, in each thread, run_env is of type ucontext_t:

        ...
        t_state[i].run_env.uc_link = &main_context;
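
    It may help to compare against a minimal, self-contained makecontext example; the usual crash causes are a context that was never initialized with getcontext before makecontext, or a stack buffer that is freed or goes out of scope while the context is live. This sketch runs a function on its own stack and returns to main through uc_link:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ucontext.h>

        #define STACKSIZE (64 * 1024)

        static ucontext_t main_ctx, func_ctx;

        static void thread_rtn(void)
        {
            puts("in thread_rtn");
            /* returning here resumes whatever uc_link points at */
        }

        int main(void)
        {
            getcontext(&func_ctx);                    /* must initialize before makecontext */
            func_ctx.uc_stack.ss_sp   = malloc(STACKSIZE);
            func_ctx.uc_stack.ss_size = STACKSIZE;
            func_ctx.uc_link          = &main_ctx;    /* where to go when thread_rtn returns */
            makecontext(&func_ctx, thread_rtn, 0);

            swapcontext(&main_ctx, &func_ctx);        /* run it; saves main's context */
            puts("back in main");
            return 0;
        }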


  • c permutation of a number

    - by Pkp
    I am writing a program for a magic box. As the only way around it is brute force, I wrote a program to compute the permutations of a given array, along the lines of http://programminggeeks.com/c-code-for-permutation/. It works for arrays of 3 and 4, but it does not work for an array of 8 numbers (1,2,3,4,5,6,7,8). I see that the combination 1 2 3 4 5 6 7 8 gets repeated a couple of times, other combinations get repeated as well, and certain combinations never get displayed at all. So could someone tell me what is wrong in the program below?

        #include <stdio.h>
        #include <stdlib.h>

        int len, numperm = 1, count = 0;

        display(int a[])
        {
            int i;
            for (i = 0; i < len; i++)
                printf("%d ", a[i]);
            printf("\n");
            count++;
        }

        swap(int *a, int *b)
        {
            int temp;
            temp = *a;
            *a = *b;
            *b = temp;
        }

        no_of_perm()
        {
            int x;
            for (x = 1; x <= len; x++)
                numperm = numperm * x;
        }

        perm(int a[])
        {
            int x, y;
            while (count < numperm)
            {
                for (x = 0; x < len - 1; x++)
                {
                    swap(&a[x], &a[x+1]);
                    display(a);
                }
                swap(&a[0], &a[1]);
                display(a);
                for (y = len - 1; y > 0; y--)
                {
                    swap(&a[y], &a[y-1]);
                    display(a);
                }
                swap(&a[len-1], &a[len-2]);
                display(a);
            }
        }

        main(int argc, char *argv[])
        {
            if (argc < 2)
            {
                printf("Error\n");
                exit(0);
            }
            int i, *a = malloc(sizeof(int) * atoi(argv[1]));
            len = atoi(argv[1]);
            for (i = 0; i < len; i++)
                a[i] = i + 1;
            no_of_perm();
            perm(a);
        }
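
    The swap pattern above is not a complete permutation scheme for larger n, which is why repeats appear and some orderings never show up: it cycles through a fixed sequence of adjacent swaps until the count is exhausted. A standard alternative is Heap's algorithm, which generates each of the n! orderings exactly once; a recursive sketch:

        #include <stdio.h>

        static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

        static void display(const int a[], int len)
        {
            for (int i = 0; i < len; i++)
                printf("%d ", a[i]);
            printf("\n");
        }

        /* Heap's algorithm: permutes a[0..k-1], printing each arrangement once */
        static void perm(int a[], int k, int len)
        {
            if (k == 1) { display(a, len); return; }
            for (int i = 0; i < k; i++)
            {
                perm(a, k - 1, len);
                /* even k: swap position i with the last; odd k: swap position 0 */
                swap(&a[(k % 2 == 0) ? i : 0], &a[k - 1]);
            }
        }

        int main(void)
        {
            int a[] = { 1, 2, 3, 4 };
            int len = sizeof a / sizeof a[0];
            perm(a, len, len);
            return 0;
        }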


  • safe dereferencing and deletion

    - by serejko
    Hi, I'm relatively new to C++ and OOP in general, and I'm currently trying to make a class that allows dereferencing and deleting a dead or invalid pointer without any risk of undefined behavior or a program fault as a result. I want to ask: is this a good idea, and is there something similar already implemented by someone else? Or maybe I'm doing something completely wrong? I've just started making it, and here is the code I currently have:

        template<class T>
        class SafeDeref
        {
        public:
            T& operator *()
            {
                hash_set<T*>::iterator it = theStore.find(reinterpret_cast<T*>(ptr));
                if (it != theStore.end())
                    return *this;
                return theDefaultObject;
            }

            T* operator ->()
            {
                hash_set<T*>::iterator it = theStore.find(reinterpret_cast<T*>(ptr));
                if (it != theStore.end())
                    return this;
                return &theDefaultObject;
            }

            void* operator new(size_t size)
            {
                void* ptr = malloc(size * sizeof(T));
                if (ptr != 0)
                    theStore.insert(reinterpret_cast<T*>(ptr));
                return ptr;
            }

            void operator delete(void* ptr)
            {
                hash_set<T*>::iterator it = theStore.find(reinterpret_cast<T*>(ptr));
                if (it != theStore.end())
                {
                    theStore.erase(it);
                    free(ptr);
                }
            }

        protected:
            static bool isInStore(T* ptr)
            {
                return theStore.find(ptr) != theStore.end();
            }

        private:
            static T theDefaultObject;
            static hash_set<T*> theStore;
        };

    The idea is that each class with the safe dereference should inherit from it like this:

        class Foo : public SafeDeref<Foo>
        {
            void doSomething();
        };

    So... any advice? Thanks in advance.

    P.S. If you're wondering why I need this: I'm creating a set of native functions for a scripting environment, and all of them use pointers to internally allocated objects as handles, and they're able to delete them as well (input data can be wrong). So this is kind of a protection against damaging the host application's memory. And I'm really sorry for my bad English.
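
    One detail worth flagging in the accessors as written: operator* promises a T& but returns *this (a SafeDeref&), operator-> returns this, and ptr is not declared anywhere in their scope. Since Foo derives from SafeDeref<Foo>, the object's own address is what should be looked up and returned; a sketch keeping the store-lookup idea:

        T& operator*()
        {
            T* self = static_cast<T*>(this);   // the derived object itself
            return isInStore(self) ? *self : theDefaultObject;
        }

        T* operator->()
        {
            T* self = static_cast<T*>(this);
            return isInStore(self) ? self : &theDefaultObject;
        }

    Note, though, that calling any member function (even these) through an already-freed pointer is itself undefined behavior, so a lookup table of live handles held outside the objects is the more robust design for the scripting-host use case.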


  • Which programming idiom to choose for this open source library?

    - by Walkman
    I have an interesting question about which programming idiom is easier for beginner developers writing concrete file-parsing classes. I'm developing an open source library, and one of its main functions is to parse plain text files and get structured information from them. All of the files contain the same kind of information, but they can be in different formats, like XML or plain text (each structured differently), etc. There is a common set of information pieces which is the same in all of them (e.g. player names, table names, some id numbers). There are formats which are very similar to each other, so it's possible to define a common base class for them to facilitate the concrete format parser implementations. So I can clearly define base classes like SplittablePlainTextFormat, XMLFormat, SeparateSummaryFormat, etc. Each of them hints at the kind of structure it aims to parse. All of the concrete classes should expose the same information pieces, no matter what. To be useful at all, this library needs to define at least 30-40 of these parsers. A couple of them are more important than others (obviously the more popular formats).

    Now my question is: which programming idiom is the best choice to facilitate the development of these concrete classes? Let me explain. I think imperative programming is easy to follow even for beginners, because the flow is fixed: the statements just come one after another. Right now, I have this:

        class SplittableBaseFormat:
            def parse(self):
                "Parses the body of the hand history, but first parse header if not yet parsed."
                if not self.header_parsed:
                    self.parse_header()
                self._parse_table()
                self._parse_players()
                self._parse_button()
                self._parse_hero()
                self._parse_preflop()
                self._parse_street('flop')
                self._parse_street('turn')
                self._parse_street('river')
                self._parse_showdown()
                self._parse_pot()
                self._parse_board()
                self._parse_winners()
                self._parse_extra()
                self.parsed = True

    So the concrete parser needs to define these methods, in this order, in any way its author wants. Easy to follow, but it takes longer to implement each individual concrete parser.

    So what about declarative? In this case the base classes (like SplittableFormat and XMLFormat) would do the heavy lifting based on regex and line/node number declarations in the concrete class, and the concrete classes would have no code at all, just line numbers and regexes, and maybe other kinds of rules. Like this:

        class SplittableFormat:
            def parse_table():
                "Parses TABLE_REGEX and get information"
                # set attributes here
            def parse_players():
                "parses PLAYER_REGEX and get information"
                # set attributes here

        class SpecificFormat1(SplittableFormat):
            TABLE_REGEX = re.compile('^(?P<table_name>.*) other info \d* etc')
            TABLE_LINE = 1
            PLAYER_REGEX = re.compile('^Player \d: (?P<player_name>.*) has (.*) in chips.')
            PLAYER_LINE = 16

        class SpecificFormat2(SplittableFormat):
            TABLE_REGEX = re.compile(r'^Tournament #(\d*) (?P<table_name>.*) other info2 \d* etc')
            TABLE_LINE = 2
            PLAYER_REGEX = re.compile(r'^Seat \d: (?P<player_name>.*) has a stack of (\d*)')
            PLAYER_LINE = 14

    So if I want to make it possible for non-developers to write these classes, the declarative style seems to be the way to go. However, I'm almost certain I can't eliminate the regex declarations, which clearly need (senior :D) programmers, so should I care about this at all? Do you think it matters which idiom I choose, or doesn't it matter at all? Maybe if somebody wants to work on this project, they will; if not, it won't matter which idiom I chose. Can I "convert" non-programmers to help develop these? What are your observations?

    Other considerations: Imperative will allow any kind of work; there is a simple flow which implementers can follow, but inside it they can do whatever they want. For the same reason, it would be harder to force a common interface with the imperative style, because of those arbitrary implementations. Declarative will be much more rigid, which is a bad thing, because formats might change over time without any notice. Declarative will also be harder for me to develop and take longer; the imperative version is already ready to release.

    I hope a nice discussion will happen in this thread about programming idioms: which to use when, which is better for open source projects with different scenarios, and which is better for a wide range of developer skills.

    TL;DR: Parsing different file formats (plain text, XML). They contain the same kind of information. Target audience: non-developers, beginners. Regex probably cannot be avoided. 30-40 concrete parser classes are needed. I want to facilitate coding these concrete classes. Which idiom is better?


  • Pigs in Socks?

    - by MightyZot
    My wonderful wife Annie surprised me with a cruise to Cozumel for my fortieth birthday. I love to travel. Every trip is ripe with adventure, crazy things to see and experience. For example, on the way to Mobile, Alabama to catch our boat, some dude hauling a mobile home lost a window and we drove through a cloud of busting glass going 80 miles per hour! The night before the cruise, we stayed in the Malaga Inn and I crawled UNDER the hotel to look at an old civil war bunker. WOAH! Then, on the way to and from Cozumel, the boat plowed through two beautiful and slightly violent storms. But the adventures you have while travelling often pale in comparison to the cult of personalities you meet along the way.  :)

    We met many cool people during our travels and we made some new friends. Todd and Andrea are in the publishing business (www.myneworleans.com) and teaching, respectively. Erika is a teacher too, and Matt has a pig on his foot. This story is about the pig. Without that pig on Matt's foot, we probably would have hit a buoy and drowned.

    Alright, so…this pig on Matt's foot…this is no henna tatt, this is a man's tattoo. Apparently, getting tattoos on your feet is very painful because there is very little muscle and fat and lots of nifty nerves to tell you that you might be doing something stupid. Pig and rooster tattoos carry special meaning for sailors of old. According to some sources, having a tattoo of a pig or rooster on one foot or the other will keep you from drowning. There are many great musings as to why a pig and a rooster might save your life. The most plausible, in my opinion, is that pigs and roosters were common livestock tagging along with the crew. Since they were shipped in wooden crates, pigs and roosters were often counted amongst the survivors when ships succumbed to Davy Jones' Locker.

    I didn't spend a whole lot of time researching the pig and the rooster, so take these musings with a grain of salt. I was not able to find a lot of what you might consider credible history regarding the tradition. What I did find was a comfort, or solace, in the maritime tradition. Seems like raw traditions like the pig and the rooster are in danger of getting lost in a sea of non-permanence. I mean, what traditions are we old programmers and techies leaving behind for future generations? Makes me wonder what Ward Christensen has tattooed on his left foot.  I guess my choice would have to be a Commodore 64.

    (I met Ward, by the way, in an elevator after he received his Dvorak awards in 1992. He was a very non-assuming individual sporting business casual, and was very much a "sailor" of an old-school programmer. I can't remember his exact words, but I think they were essentially that he felt it odd that he was getting an award for just doing his work. I'm sure that Ward doesn't know this…he couldn't have set a more positive example for a young 22-year-old programmer. Thanks, Ward!)


  • Dependency Replication with TFS 2010 Build

    - by Jakob Ehn
    Some time ago, I wrote a post about how to implement dependency replication using TFS 2008 Build. We use this for library builds, where we set up a build definition for a common library and have the build check the resulting assemblies back into source control. The folder is then branched to the applications that need to reference the common library. See the above post for more details.

    Of course, we have reimplemented this feature in TFS 2010 Build, which results in a much nicer experience for the developer who wants to set up a new library build. Here is how it looks:

    There is a separate build process template for library builds registered in all team projects. The following properties are used to configure the library build:

      • Deploy Folder in Source Control is the server path where the assemblies should be checked in.
      • DeploymentFiles is a list of files and/or extensions specifying which files to check in. The default here is *.dll;*.pdb, which means that all assemblies and debug symbols will be checked in. We can also type, for example, CommonLibrary.*;SomeOtherAssembly.dll in order to exclude other assemblies.

    You can also see that we are versioning the assemblies as part of the build. This is important, since the resulting assemblies will be deployed together with the referencing application.

    When the build executes, it will see if the matching assemblies exist in source control; if not, it will add the files automatically. After the build has finished, we can see in the history of the TestDeploy folder that the build service account has in fact checked in a new version. Nice!

    The implementation of the library build process template is not very complicated; it is a combination of customization of the build process template and some custom activities. We use the generic TFActivity (http://geekswithblogs.net/jakob/archive/2010/11/03/performing-checkins-in-tfs-2010-build.aspx) to check files in and out, but for the part that checks whether a file exists and adds it to source control, it was easier to do this in a custom activity:

        public sealed class AddFilesToSourceControl : BaseCodeActivity
        {
            // Files to add to source control
            [RequiredArgument]
            public InArgument<IEnumerable<string>> Files { get; set; }

            [RequiredArgument]
            public InArgument<Workspace> Workspace { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                foreach (var file in Files.Get(context))
                {
                    if (!File.Exists(file))
                    {
                        throw new ApplicationException("Could not locate " + file);
                    }
                    var ws = this.Workspace.Get(context);
                    string serverPath = ws.TryGetServerItemForLocalItem(file);
                    if (!String.IsNullOrEmpty(serverPath))
                    {
                        if (!ws.VersionControlServer.ServerItemExists(serverPath, ItemType.File))
                        {
                            TrackMessage(context, "Adding file " + file);
                            ws.PendAdd(file);
                        }
                        else
                        {
                            TrackMessage(context, "File " + file + " already exists in source control");
                        }
                    }
                    else
                    {
                        TrackMessage(context, "No server path for " + file);
                    }
                }
            }
        }

    This build template is a very nice tool that makes it easy to do dependency replication with TFS 2010. Next, I will add functionality for automatically merging the assemblies (using ILMerge) as part of the build; we do this to keep the number of references to a minimum.


  • SQL SERVER – Auto Complete and Format T-SQL Code – Devart SQL Complete

    - by pinaldave
    Some people call it laziness, some call it efficiency, and some think it is the right thing to do. At any rate, tools are meant to make a job easier, and I like to use various tools. If we consider the history of the world, if we had all wanted to keep traditional practices, we would never have invented the wheel. As time progressed, people wanted convenience and efficiency, which then led to laziness. Wanting a more efficient way to do something is not inherently lazy. That's how I see any efficiency tool.

    A few days ago I found Devart SQL Complete. It took less than a minute to install, and after installation it just worked without needing any tweaking. Once I started using it, I was impressed with how fast it formats SQL code: you can write down any terms or even copy and paste. You can start typing right away, and it will complete keywords, object names, and fragments.

    It completes statement expressions. How many times do we write insert, update, delete? Take this example: to alter a stored procedure, we don't remember the code written in it, so we have to write it over again or go back to SQL Server Management Studio, which is very tedious. With SQL Complete, you can write "alter stored procedure," and it will finish it for you, and you can modify it as needed.

    I love to write code, and I love well-written code. When I am working with clients and I find code that has not been written properly, I feel a little uncomfortable. It is difficult to deal with code that is in the wrong case, with no line breaks, no white space, improper indents, and no text wrapping. The worst thing to encounter is code that goes all the way to the right side, so you have to scroll a million times because there are no breaks or indents. SQL Complete will take care of this for you; if a developer is too lazy for proper formatting, then Devart's SQL formatter tool will make them better, not lazier.

    SQL Management Studio gives information about your code when you hover your mouse over it; however, SQL Complete goes further, into the work table and the current rate idea, too. It gives you more information about the parameters, and, last but not least, it will take you to the help file of code navigation. It will open Object Explorer in a document viewer, and you can start going through the various properties of your code, a very important thing to do.

    Here are some interesting IntelliSense examples:

    1) We are often too lazy to expand *; however, when using SQL Complete we can just mouse over the * and it will give us all the column names, so we can select the appropriate ones.

    2) We can put the cursor after * and it will give us the option to expand it to all the column names by pressing the Tab key.

    3) Here is one more IntelliSense feature I really liked. I always alias my tables, and I always select the alias with special logic. When I was using SQL Complete, I selected just a table name (without the schema name) and it autocompleted the schema and alias name (the way I needed it).

    I believe that using SQL Complete we can work faster. It supports all versions of SQL Server and handles SQL formatting. Many businesses perform code reviews and have code standards, so why not put an efficiency tool on everyone's computer and make sure the code is written correctly the first time? If you're interested in this tool, there are free editions available. If you like it, you can buy it. I bought it because it works. I love it, and I want to hear all your opinions on it, too. You can get the product for FREE.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

