Search Results

Search found 1466 results on 59 pages for 'sizeof'.

Page 16/59 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Static array in C++ forgets its size

    - by Karel Bílek
    In this small example, C++ forgets the size of an array passed to a
    constructor. I guess it is something simple, but I cannot see it. In
    classes.h, there is this code:

        #ifndef CLASSES_INC
        #define CLASSES_INC
        #include <iostream>

        class static_class {
        public:
            static_class(int array[]) {
                std::cout << sizeof(array) / sizeof(int) << "\n";
            }
        };

        class my_class {
        public:
            static static_class s;
            static int array[4];
        };
        #endif

    In classes.cpp, there is this code:

        #include "classes.h"
        int my_class::array[4] = {1, 2, 3, 4};
        static_class my_class::s = static_class(my_class::array);

    In main.cpp, there is only a simple:

        #include "classes.h"
        int main() {
            return 0;
        }

    Now, the desired output (from the constructor of static_class) is 4,
    but what I get is 1. Why is that?
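
    In a function parameter list, int array[] decays to int*, so
    sizeof(array)/sizeof(int) divides the size of a pointer by the size of
    an int (hence 1 on a 32-bit build). A minimal sketch of one way to keep
    the size, deducing it as a template parameter (a hypothetical rewrite
    of the constructor, not the asker's original):

        #include <cstddef>
        #include <iostream>

        class static_class {
        public:
            // N is deduced from the argument's array type, so the size
            // survives the call instead of decaying to a pointer
            template <std::size_t N>
            explicit static_class(int (&array)[N]) {
                std::cout << N << "\n";   // prints 4 for an int[4]
            }
        };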

  • Construct an LPCWSTR on WinCE in C++ (Zune/ZDK)

    - by James Cadd
    What's a good way to construct an LPCWSTR on WinCE 6? I'd like to find
    something similar to String.Format() in C#. My attempt is:

        OSVERSIONINFO vi;
        memset(&vi, 0, sizeof vi);
        vi.dwOSVersionInfoSize = sizeof vi;
        GetVersionEx(&vi);
        char buffer[50];
        int n = sprintf(buffer, "The OS version is: %d.%d",
                        vi.dwMajorVersion, vi.dwMinorVersion);
        ZDKSystem_ShowMessageBox(buffer, MESSAGEBOX_TYPE_OK);

    ZDKSystem_ShowMessageBox refers to the ZDK for hacked Zunes available
    at http://zunedevwiki.org. This line of code works well with the
    message box call:

        ZDKSystem_ShowMessageBox(L"Hello Zune", MESSAGEBOX_TYPE_OK);

    My basic goal is to look at the exact version of WinCE running on a
    Zune HD to see which features are available (i.e. is it R2 or
    earlier?). Also, I haven't seen any tags for the ZDK, so please edit if
    something is more fitting!
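
    sprintf produces narrow chars, while the message box call evidently
    wants a wide string. One option is to format into a WCHAR buffer
    directly. A sketch assuming the CE C runtime's _snwprintf is available
    (the ZDK call itself is quoted from the question, not verified):

        OSVERSIONINFO vi;
        memset(&vi, 0, sizeof vi);
        vi.dwOSVersionInfoSize = sizeof vi;
        GetVersionEx(&vi);

        WCHAR buffer[50];
        // format straight into a wide buffer; no char -> WCHAR conversion needed
        _snwprintf(buffer, 50, L"The OS version is: %lu.%lu",
                   vi.dwMajorVersion, vi.dwMinorVersion);
        ZDKSystem_ShowMessageBox(buffer, MESSAGEBOX_TYPE_OK);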

  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi Codegurus, I have a problem with the function
    AudioConverterConvertBuffer. Basically I want to convert from this
    format:

        _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        _streamFormat.mBitsPerChannel = 16;
        _streamFormat.mChannelsPerFrame = 2;
        _streamFormat.mBytesPerPacket = 4;
        _streamFormat.mBytesPerFrame = 4;
        _streamFormat.mFramesPerPacket = 1;
        _streamFormat.mSampleRate = 44100;
        _streamFormat.mReserved = 0;

    to this format:

        _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; // | kAudioFormatFlagIsNonInterleaved
        _streamFormatOutput.mBitsPerChannel = 16;
        _streamFormatOutput.mChannelsPerFrame = 1;
        _streamFormatOutput.mBytesPerPacket = 2;
        _streamFormatOutput.mBytesPerFrame = 2;
        _streamFormatOutput.mFramesPerPacket = 1;
        _streamFormatOutput.mSampleRate = 44100;
        _streamFormatOutput.mReserved = 0;

    What I want to do is extract one audio channel (left or right) from an
    LPCM buffer in the input format, making it mono in the output format.
    This sets the channel map for the PCM output file:

        SInt32 channelMap[1] = {0};
        status = AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                                           sizeof(channelMap), channelMap);

    and this converts the buffer in a while loop:

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL,
            &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            //frames = audioBuffer.mData;
            NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);
            numBytesIO = audioBuffer.mDataByteSize;
            convertedBuf = malloc(sizeof(char) * numBytesIO);
            status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize,
                                                 audioBuffer.mData, &numBytesIO, convertedBuf);
            NSLog(@"status audio converter convert %d", status);
            if (status != 0) {
                NSLog(@"Fail conversion");
                assert(0);
            }
            NSLog(@"Bytes converted %d", numBytesIO);
            status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
            NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
            free(convertedBuf);
            if (numBytesIO != audioBuffer.mDataByteSize) {
                NSLog(@"Something wrong in writing");
                assert(0);
            }
            countByteBuf = countByteBuf + numBytesIO;
        }

    But the insz error is there, so it can't convert. I would appreciate
    any input. Thanks in advance.
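
    'insz' is the four-character code for
    kAudioConverterErr_InvalidInputSize. One thing to check: converting
    2-channel 16-bit input to 1-channel 16-bit output produces half as many
    bytes as it consumes, so the in/out byte count should be sized for the
    output rather than set equal to the input size. A sketch of the buffer
    arithmetic, reusing the question's variable names:

        UInt32 inBytes  = audioBuffer.mDataByteSize;  // stereo 16-bit input
        UInt32 outBytes = inBytes / 2;                // mono 16-bit output capacity
        convertedBuf = malloc(outBytes);
        status = AudioConverterConvertBuffer(converter,
                                             inBytes, audioBuffer.mData,
                                             &outBytes, convertedBuf);
        // on return, outBytes holds the number of bytes actually produced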

  • union marshalling issue in C#

    - by senthil
    I have a union inside a structure, and the structure looks like this:

        struct tDeviceProperty {
            DWORD Tag;
            DWORD Size;
            union _DP value;
        };

        typedef union _DP {
            short int i;
            LONG l;
            ULONG ul;
            float flt;
            double dbl;
            BOOL b;
            double at;
            FILETIME ft;
            LPSTR lpszA;
            LPWSTR lpszW;
            LARGE_INTEGER li;
            struct tBinary bin;
            BYTE reserved[40];
        } __UDP;

        struct tBinary {
            ULONG size;
            BYTE *bin;
        };

    From the tBinary structure, bin has to be converted to tImage
    (structure given below):

        struct tImage {
            DWORD x;
            DWORD y;
            DWORD z;
            DWORD Resolution;
            DWORD type;
            DWORD ID;
            diccid_t SourceID;
            const void *buffer;
            const char *Info;
            const char *UserImageID;
        };

    To use the same in C#, I have done the marshaling, but it does not give
    proper values when converting the pointer back to the structure. The C#
    code is as follows:

        tBinary tBin = new tBinary();
        IntPtr tBinbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tBin));
        Marshal.StructureToPtr(tBin.bin, tBinbuffer, false);

        tDeviceProperty tDevice = new tDeviceProperty();
        tDevice.bin = tBinbuffer;
        IntPtr tDevicebuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tDevice));
        Marshal.StructureToPtr(tDevice.bin, tDevicebuffer, false);

        Battary tbatt = new Battary();
        tbatt.value = tDevicebuffer;
        IntPtr tbattbuffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(tbatt));
        Marshal.StructureToPtr(tbatt.value, tbattbuffer, false);

        result = GetDeviceProperty(ref tbattbuffer);

        Battary v = (Battary)Marshal.PtrToStructure(tbattbuffer, typeof(Battary));
        tDeviceProperty v2 = (tDeviceProperty)Marshal.PtrToStructure(tDevicebuffer, typeof(tDeviceProperty));
        tBinary v3 = (tBinary)Marshal.PtrToStructure(tBinbuffer, typeof(tBinary));

  • Problem with GCC calling static template functions in a templated parent class

    - by Adisak
    I have some code that compiles and runs on MSVC++ but will not compile
    on GCC. I have made a test snippet that follows. My goal was to move
    the static method from BFSMask to BFSMaskSized. Can someone explain
    what is going on with the errors (especially the weird 'operator<'
    error)? Thank you.

    The code compiles on GCC when both #defines are 0:

        #define DOESNT_COMPILE_WITH_GCC 0
        #define FUNCTION_IN_PARENT 0

    I get errors if I change either #define to 1. Here are the errors
    I see.

    With DOESNT_COMPILE_WITH_GCC 0 and FUNCTION_IN_PARENT 1:

        Test.cpp: In static member function 'static typename Snapper::BFSMask<T>::T_Parent::T_SINT Snapper::BFSMask<T>::Create_NEZ(TCMP)':
        Test.cpp(492): error: 'CreateMaskFromHighBitSized' was not declared in this scope

    With DOESNT_COMPILE_WITH_GCC 1 and FUNCTION_IN_PARENT 0:

        Test.cpp: In static member function 'static typename Snapper::BFSMask<T>::T_Parent::T_SINT Snapper::BFSMask<T>::Create_NEZ(TCMP) [with TCMP = int, T = int]':
        Test.cpp(500): instantiated from 'TVAL Snapper::BFWrappedInc(TVAL, TVAL, TVAL) [with TVAL = int]'
        Test.cpp(508): instantiated from here
        Test.cpp(490): error: invalid operands of types '<unresolved overloaded function type>' and 'unsigned int' to binary 'operator<'

    With DOESNT_COMPILE_WITH_GCC 1 and FUNCTION_IN_PARENT 1, I get the same
    'operator<' error as the previous case.

    Here is the code:

        namespace Snapper {

        #define DOESNT_COMPILE_WITH_GCC 0
        #define FUNCTION_IN_PARENT 0

        // MASK TYPES
        // NEZ - Not Equal to Zero
        #define BFSMASK_NEZ(A) ( ( A ) | ( 0 - A ) )
        #define BFSELECT_MASK(MASK,VTRUE,VFALSE) ( ((MASK)&(VTRUE)) | ((~(MASK))&(VFALSE)) )

        template<typename TVAL>
        TVAL BFSelect_MASK(TVAL MASK, TVAL VTRUE, TVAL VFALSE)
        {
            return(BFSELECT_MASK(MASK,VTRUE,VFALSE));
        }

        //-----------------------------------------------------------------
        // Branch Free Helpers

        template<int BYTESIZE> struct BFSMaskBase {};
        template<> struct BFSMaskBase<2> { typedef UINT16 T_UINT; typedef SINT16 T_SINT; };
        template<> struct BFSMaskBase<4> { typedef UINT32 T_UINT; typedef SINT32 T_SINT; };

        template<int BYTESIZE>
        struct BFSMaskSized : public BFSMaskBase<BYTESIZE>
        {
            static const int SizeBytes = BYTESIZE;
            static const int SizeBits = SizeBytes*8;
            static const int MaskShift = SizeBits-1;
            typedef typename BFSMaskBase<BYTESIZE>::T_UINT T_UINT;
            typedef typename BFSMaskBase<BYTESIZE>::T_SINT T_SINT;
        #if FUNCTION_IN_PARENT
            template<int N>
            static T_SINT CreateMaskFromHighBitSized(typename BFSMaskBase<N>::T_SINT inmask);
        #endif
        };

        template<typename T>
        struct BFSMask : public BFSMaskSized<sizeof(T)>
        {
            // BFSMask = -1 (all bits set)
            typedef BFSMask<T> T_This;
            // "Import" the parent class
            typedef BFSMaskSized<sizeof(T)> T_Parent;
            typedef typename T_Parent::T_SINT T_SINT;
        #if FUNCTION_IN_PARENT
            typedef T_Parent T_MaskGen;
        #else
            typedef T_This T_MaskGen;
            template<int N>
            static T_SINT CreateMaskFromHighBitSized(typename BFSMaskSized<N>::T_SINT inmask);
        #endif

            template<typename TCMP>
            static T_SINT Create_NEZ(TCMP A)
            {
                //ReDefineType(const typename BFSMask<TCMP>::T_SINT,SA,A);
                //const typename BFSMask<TCMP>::T_SINT cmpmask = BFSMASK_NEZ(SA);
                const typename BFSMask<TCMP>::T_SINT cmpmask = BFSMASK_NEZ(A);
        #if DOESNT_COMPILE_WITH_GCC
                return(T_MaskGen::CreateMaskFromHighBitSized<sizeof(TCMP)>(cmpmask));
        #else
                return(CreateMaskFromHighBitSized<sizeof(TCMP)>(cmpmask));
        #endif
            }
        };

        template<typename TVAL>
        TVAL BFWrappedInc(TVAL x, TVAL minval, TVAL maxval)
        {
            const TVAL diff = maxval-x;
            const TVAL mask = BFSMask<TVAL>::Create_NEZ(diff);
            const TVAL incx = x + 1;
            return(BFSelect_MASK(mask,incx,minval));
        }

        SINT32 currentsnap = 0;
        SINT32 SetSnapshot()
        {
            currentsnap = BFWrappedInc<SINT32>(currentsnap, 0, 20);
            return(currentsnap);
        }

        } // namespace Snapper
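
    Both errors look like GCC strictly applying two-phase name lookup,
    which MSVC historically did not. The 'operator<' error means the
    compiler does not yet know CreateMaskFromHighBitSized is a template, so
    the '<' parses as less-than; and names in a dependent base class are
    not found by unqualified lookup at all. A sketch of the likely fixes
    for the two branches:

        // DOESNT_COMPILE_WITH_GCC branch: the dependent member template
        // must be named with the `template` keyword
        return(T_MaskGen::template CreateMaskFromHighBitSized<sizeof(TCMP)>(cmpmask));

        // FUNCTION_IN_PARENT case: qualify the name so lookup reaches the
        // dependent base, and disambiguate the same way
        return(T_Parent::template CreateMaskFromHighBitSized<sizeof(TCMP)>(cmpmask));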

  • Same code... but a warning! Any ideas?

    - by FILIaS
    I have a question about a warning message that I get. For this line,
    using qsort:

        qsort(catalog, MAX, sizeof catalog, struct_cmp_by_amount);

    I get this warning:

        warning: passing argument 4 of 'qsort' makes pointer from integer without a cast

    struct_cmp_by_amount is another function in the program. But for
    another program with exactly the same code, with exactly the same
    struct_cmp_by_amount function, I don't get that warning for the 4th
    argument:

        qsort(structs, structs_len, sizeof(struct st_ex), struct_cmp_by_price);

    I'm wondering why that's happening. Do you have any idea?
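
    That warning is the classic sign that struct_cmp_by_amount had no
    declaration in scope at the call site: C89 then assumes an implicit
    function returning int, and passing that "int" where qsort expects a
    comparison-function pointer produces "makes pointer from integer". A
    sketch of the fix (note also that qsort's third argument is the element
    size, which sizeof catalog generally is not):

        /* declare (or #include a header declaring) the comparator
           before the call that uses it */
        int struct_cmp_by_amount(const void *a, const void *b);

        /* element size, not the size of the whole array or pointer */
        qsort(catalog, MAX, sizeof(catalog[0]), struct_cmp_by_amount);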

  • When do I need to deallocate memory?

    - by extintor
    I am using this code inside a class to make a WebBrowser control visit
    a website:

        void myClass::visitWeb(const char *url)
        {
            WCHAR buffer[MAX_LEN];
            ZeroMemory(buffer, sizeof(buffer));
            MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, url, strlen(url),
                                buffer, sizeof(buffer)-1);
            VARIANT vURL;
            vURL.vt = VT_BSTR;
            vURL.bstrVal = SysAllocString(buffer);
            // webbrowser navigate code...
            VariantClear(&vURL);
        }

    I call visitWeb from another void function that gets called in the
    app's handlemessage(). Do I need to do some memory deallocation here?
    I see vURL is being deallocated by VariantClear, but should I
    deallocate memory for buffer? I've been told that in another bool
    function I have in the same app I shouldn't deallocate anything,
    because everything is cleared out when the bool returns true/false; but
    what happens in this void function?
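
    A sketch of who owns what in that function, assuming the rest of the
    code works as intended: buffer is an automatic (stack) array and needs
    no explicit freeing, while the BSTR is heap-allocated but released by
    VariantClear. One separate bug worth noting: MultiByteToWideChar's
    sixth parameter counts WCHARs, not bytes.

        void myClass::visitWeb(const char *url)
        {
            WCHAR buffer[MAX_LEN];                    // stack storage: reclaimed on return
            ZeroMemory(buffer, sizeof(buffer));
            MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, url, strlen(url),
                                buffer, MAX_LEN - 1); // count in WCHARs, not sizeof(buffer)-1 bytes
            VARIANT vURL;
            VariantInit(&vURL);
            vURL.vt = VT_BSTR;
            vURL.bstrVal = SysAllocString(buffer);    // heap allocation...
            // webbrowser navigate code...
            VariantClear(&vURL);                      // ...freed here via SysFreeString
        }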

  • C++ Allocate Memory Without Activating Constructors

    - by schnozzinkobenstein
    I'm reading in values from a file which I will store in memory as I
    read them in. I've read on here that the correct way to handle memory
    in C++ is to always use new/delete, but if I do:

        DataType* foo = new DataType[sizeof(DataType) * numDataTypes];

    then that's going to call the default constructor for each instance
    created, and I don't want that. I was going to do this:

        DataType* foo;
        char* tempBuffer = new char[sizeof(DataType) * numDataTypes];
        foo = (DataType*) tempBuffer;

    but I figured that would be poo-poo'd for some kind of type-unsafeness.
    So what should I do? In researching this question I've now seen some
    people saying that arrays are bad and vectors are good. I had been
    trying to use arrays more, because I thought I was being a bad boy by
    filling my programs with (what I thought were) slower vectors. What
    should I be using?
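
    The usual tools for "allocate now, construct later" are operator new
    plus placement new. A compilable sketch with a made-up DataType (note
    that new DataType[n] already multiplies by sizeof, so the question's
    sizeof(DataType) * numDataTypes would over-allocate):

        #include <new>
        #include <cstddef>

        struct DataType {
            int value;
            explicit DataType(int v) : value(v) {}
        };

        int main()
        {
            const std::size_t n = 4;

            // raw bytes, no constructors run
            void *raw = operator new(n * sizeof(DataType));
            DataType *foo = static_cast<DataType *>(raw);

            // construct each element in place only when its data is ready
            for (std::size_t i = 0; i < n; ++i)
                new (&foo[i]) DataType(static_cast<int>(i));

            // tear down: explicit destructor calls, then release the bytes
            for (std::size_t i = n; i-- > 0; )
                foo[i].~DataType();
            operator delete(raw);
        }

    A std::vector with reserve() gives the same effect (allocate up front,
    construct on push_back) without the manual lifetime bookkeeping, and is
    rarely slower than a hand-rolled array in practice.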

  • Receiving multiple multicast feeds on the same port - C, Linux

    - by Gigi
    I have an application that is receiving data from multiple multicast
    sources on the same port. I am able to receive the data. However, I am
    trying to keep statistics per group (i.e. messages received, bytes
    received), and all the data is getting mixed up. Does anyone know how
    to solve this problem? If I look at the sender's address, it is not the
    multicast address but rather the IP of the sending machine. I am using
    the following socket options:

        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("224.1.2.3");
        mreq.imr_interface.s_addr = INADDR_ANY;
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    and also:

        setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &reuse, sizeof(reuse));

    I appreciate any help!
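
    One approach on Linux: enable IP_PKTINFO and read each datagram with
    recvmsg(), which reports the destination address of the packet -- the
    multicast group -- in ancillary data. A sketch (struct in_pktinfo
    needs _GNU_SOURCE on glibc):

        int on = 1;
        setsockopt(s, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));

        char buf[2048];
        char ctl[CMSG_SPACE(sizeof(struct in_pktinfo))];
        struct iovec iov = { buf, sizeof(buf) };
        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctl;
        msg.msg_controllen = sizeof(ctl);

        ssize_t n = recvmsg(s, &msg, 0);
        struct cmsghdr *c;
        for (c = CMSG_FIRSTHDR(&msg); c != NULL; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO) {
                struct in_pktinfo *pi = (struct in_pktinfo *) CMSG_DATA(c);
                /* pi->ipi_addr is the group this datagram was sent to,
                   so per-group counters can be keyed on it */
            }
        }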

  • Using netlink to obtain arp entries only returns stale entries

    - by Ben
    I'm currently trying to retrieve reachable neighbors from the ARP
    table in a user-space program written in C. I've looked through the
    source code of the "ip neigh" command (ipneigh.c), and it appears that
    I should use the flag NUD_REACHABLE:

        struct {
            struct nlmsghdr n;
            struct ndmsg r;
        } req;

        memset(&req, 0, sizeof(req));
        req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ndmsg));
        req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_ROOT;
        req.n.nlmsg_type = RTM_GETNEIGH;
        req.r.ndm_state = NUD_REACHABLE;

    However, when I look at the data returned from the kernel, I only have
    stale entries. In fact, no matter what I put in req.r.ndm_state, it
    seems to return only entries marked as stale by ip neigh. The remainder
    of my code is modeled after this example:
    http://linux-hacks.blogspot.com/2009/01/sample-code-to-learn-netlink.html
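
    As far as I can tell, for dump-style requests the kernel returns the
    whole neighbor table and does not honor the state field in the request,
    so the filtering has to happen in user space -- which is also what
    ip neigh itself does after receiving the dump. A sketch of filtering
    the replies, assuming nlh points at a received RTM_NEWNEIGH message:

        struct ndmsg *ndm = NLMSG_DATA(nlh);
        if (ndm->ndm_state & NUD_REACHABLE) {
            /* process only reachable neighbor entries */
        }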

  • Function From C to Delphi

    - by XBasic3000
    I created a function similar to the one below in Delphi code, but it
    won't work. What is the proper way to convert this?

        char* ReadSpeechFile(char* pFileName, int *nFileSize)
        {
            char *szBuf, *pLinearPCM;
            int nSize;
            FILE* fp;

            // read wave data
            fp = fopen(pFileName, "rb");
            if (fp == NULL)
                return NULL;
            fseek(fp, 0, SEEK_END);
            nSize = ftell(fp);

            // linear
            szBuf = (char *)calloc(nSize, sizeof(char));
            fseek(fp, 0, SEEK_SET);
            fread(szBuf, sizeof(char), nSize, fp);
            fclose(fp);

            *nFileSize = nSize;
            return szBuf;
        }

  • Another question about OpenGL ES rendering to texture

    - by ensoreus
    Hello, pros and gurus! Here is another question about rendering to
    texture. It is about preserving the texture state while passing an
    image through different filters. Most iPhone developers probably know
    Apple's OpenGL image-processing sample code, where the GL filter
    functions are always given the same source image. I need to edit an
    image by applying filters sequentially, saving the state of the edited
    image between passes. I am a real novice in OpenGL, so I have spent an
    incredible amount of time trying to solve this. I decided to create two
    FBOs and attach the source image and a temporary image as textures to
    render into. Here is my init routine:

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnable(GL_TEXTURE_2D);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, (GLint *)&SystemFBO);

        glImage = [self loadTexture:preparedImage]; // source image
        for (int i = 0; i < 4; i++) {
            fullquad[i].s *= glImage->s;
            fullquad[i].t *= glImage->t;
            flipquad[i].s *= glImage->s;
            flipquad[i].t *= glImage->t;
        }

        tmpImage = [self loadEmptyTexture]; // editing image
        glGenFramebuffersOES(1, &tmpImageFBO);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, tmpImageFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, tmpImage->texID, 0);
        GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
        if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
            NSLog(@"failed to make complete tmp framebuffer object %x", status);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);

        glGenRenderbuffersOES(1, &glImageFBO);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, glImage->texID, 0);
        status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
        if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
            NSLog(@"failed to make complete cur framebuffer object %x", status);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);

    When the user drags the slider, this routine is invoked to apply the
    change:

        -(void)setContrast:(CGFloat)value
        {
            contrast = value;
            if (flag != mfContrast) {
                NSLog(@"contrast: dumped");
                flag = mfContrast;
                glBindFramebufferOES(GL_FRAMEBUFFER_OES, glImageFBO);
                glClearColor(1, 1, 1, 1);
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                glOrthof(0, 512, 0, 512, -1, 1);
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
                glScalef(512, 512, 1);
                glBindTexture(GL_TEXTURE_2D, tmpImage->texID);
                glViewport(0, 0, 512, 512);
                glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].x);
                glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &fullquad[0].s);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
            }
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, tmpImageFBO);
            glClearColor(0, 0, 0, 1);
            glClear(GL_COLOR_BUFFER_BIT);
            glEnable(GL_TEXTURE_2D);
            glActiveTexture(GL_TEXTURE0);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(0, 512, 0, 512, -1, 1);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glScalef(512, 512, 1);
            glBindTexture(GL_TEXTURE_2D, glImage->texID);
            glViewport(0, 0, 512, 512);
            [self contrastProc:fullquad value:contrast];
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
            [self redraw];
        }

    There are two cases. If the same filter is still active (edit mode), I
    bind tmpImageFBO to draw into the tmpImage texture while reading from
    the glImage texture; contrastProc is taken straight from Apple's
    sample. If the mode has changed, I first save the edited image by
    drawing the tmpImage texture into the source texture glImage, bound
    through glImageFBO. After that I call redraw:

        glBindFramebufferOES(GL_FRAMEBUFFER_OES, SystemFBO);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(0, kTexWidth, 0, kTexHeight, -1, 1);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glScalef(kTexWidth, kTexHeight, 1);
        glBindTexture(GL_TEXTURE_2D, glImage->texID);
        glViewport(0, 0, kTexWidth, kTexHeight);
        glVertexPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].x);
        glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &flipquad[0].s);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);

    which binds the visible framebuffer and draws the glImage texture. The
    result is VERY aggressive filtering: increasing the contrast by just
    0.2 produces a result comparable to a 0.9 contrast in Apple's sample
    project. I am missing something obvious, I guess. Interestingly, if I
    disable the line

        glBindTexture(GL_TEXTURE_2D, glImage->texID);

    in the setContrast routine, it has no effect at all. If I replace
    tmpImageFBO with SystemFBO to draw glImage directly to the display (and
    disable the redraw call), everything works fine. Please, HELP ME!!! :(

  • Strange client address returned through the accept() function

    - by Negai
    Hi everyone, I'm a socket programming newbie. Here's a snippet:

        struct sockaddr_storage client_addr;
        ...
        client_addr_size = sizeof(client_addr);
        client_socket = accept(server_socket,
                               (struct sockaddr *)&client_addr,
                               &client_addr_size);
        ...
        result = inet_ntop(AF_INET,
                           &((struct sockaddr_in *)&client_addr)->sin_addr,
                           client_addr_str, sizeof(client_addr_str));

    Whenever a client connects, the address I get is 0.0.0.0, regardless of
    the host. Can anybody explain what I'm doing wrong? Thanks.
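
    One thing to rule out: a sockaddr_storage filled in by accept() may
    hold an IPv6 address (e.g. on a dual-stack listener), in which case
    decoding it through a sockaddr_in cast reads the wrong bytes. A sketch
    that checks the real family first, or sidesteps the issue with
    getnameinfo (declared in <netdb.h>):

        if (((struct sockaddr *)&client_addr)->sa_family == AF_INET) {
            inet_ntop(AF_INET,
                      &((struct sockaddr_in *)&client_addr)->sin_addr,
                      client_addr_str, sizeof(client_addr_str));
        }

        /* or, family-agnostic: */
        char host[NI_MAXHOST];
        getnameinfo((struct sockaddr *)&client_addr, client_addr_size,
                    host, sizeof(host), NULL, 0, NI_NUMERICHOST);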

  • Using WINAPI ReadConsole

    - by Jim Fell
    I am trying to use the WinAPI ReadConsole() to wait for any keypress at
    the end of my Win32 console application:

        CONSOLE_READCONSOLE_CONTROL tControl;
        char pStr[65536];
        DWORD dwBufLen = 1;
        DWORD dwCtl;

        tControl.nLength = sizeof(CONSOLE_READCONSOLE_CONTROL);
        tControl.nInitialChars = 0;
        tControl.dwControlKeyState = 0;
        tControl.dwCtrlWakeupMask = 0;
        pStr[0] = 0x00;

        do {
            ReadConsole(hConsole, pStr, dwBufLen * sizeof(TCHAR), &dwBufLen, &tControl);
        } while (pStr[0] == 0x00);

    The code executes without throwing an exception. However, when the
    ReadConsole() function executes, the error code ERROR_INVALID_HANDLE
    (0x06) is flagged. I have verified hConsole to be a valid handle. Does
    anyone have any insight into what I am doing wrong? I am using Visual
    C++ 2008 Express Edition. Thanks.
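
    ReadConsole reads from a console *input* buffer; a handle can be valid
    (e.g. a screen-buffer/output handle) and still be the wrong kind, which
    fails with ERROR_INVALID_HANDLE. A minimal wait-for-a-key sketch using
    the standard input handle:

        #include <windows.h>

        HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);  // input handle, not the screen buffer
        TCHAR ch;
        DWORD nRead = 0;
        // in the default line-input mode this returns after Enter is pressed
        ReadConsole(hIn, &ch, 1, &nRead, NULL);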

  • Write pointer to file in C

    - by Sergey
    I have a structure:

        typedef struct student {
            char *name;
            char *surname;
            int age;
        } Student;

    I need to write it to a binary file:

        Student *s = malloc(sizeof(*s));

    I fill my structure with data and then write it to the file:

        fwrite(s, sizeof(*s), 1, fp);

    The file does not contain the name and surname; it contains the char*
    addresses. How can I write the actual words to the file, not the
    addresses?
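
    fwrite copies the bytes of the struct itself, and for char* members
    those bytes are the pointer values. The strings must be written
    separately, e.g. length-prefixed so they can be read back. A sketch
    using the question's Student type:

        void write_student(FILE *fp, const Student *s)
        {
            size_t len = strlen(s->name) + 1;        // include the terminating '\0'
            fwrite(&len, sizeof(len), 1, fp);        // length prefix...
            fwrite(s->name, 1, len, fp);             // ...then the characters themselves

            len = strlen(s->surname) + 1;
            fwrite(&len, sizeof(len), 1, fp);
            fwrite(s->surname, 1, len, fp);

            fwrite(&s->age, sizeof(s->age), 1, fp);  // plain int can go as-is
        }

    Reading it back mirrors the same steps: read the length, malloc that
    many bytes, then fread the characters.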

  • C++ Memory Allocation & Linked List Implementation

    - by pws5068
    I'm writing software to simulate the "first-fit" memory allocation
    scheme. Basically, I allocate a large X-megabyte chunk of memory and
    subdivide it into blocks as chunks are requested, according to the
    scheme. I'm using a linked list called "node" as a header for each
    block of memory, so that we can find the next block without tediously
    looping through every address value:

        head_ptr = (char*) malloc(total_size + sizeof(node));
        if (head_ptr == NULL)
            return -1; // malloc error :-(

        node* head_node = new node; // build block header
        head_node->next = NULL;
        head_node->previous = NULL;
        // header points to next block (which doesn't exist yet)
        memset(head_ptr, head_node, sizeof(node));

    But this last line returns:

        error: invalid conversion from 'node*' to 'int'

    I understand why this is invalid, but how can I place my node into the
    pointer location of my newly allocated memory?
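
    memset's second argument is a byte value (an int), which is why the
    node* won't convert: memset fills memory, it doesn't copy structures.
    To place the header into the block, copy the node's bytes or construct
    it in place. A sketch of both options:

        // option 1: copy the already-built header into the block
        memcpy(head_ptr, head_node, sizeof(node));
        delete head_node;                 // the separate heap copy is no longer needed

        // option 2: skip the separate allocation and construct in place
        // (placement new, declared in <new>)
        node *hdr = new (head_ptr) node();
        hdr->next = NULL;
        hdr->previous = NULL;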

  • Objective-C global array of ints not working as expected

    - by Fran
    In my MyConstants.h file, I have:

        int abc[3];

    In my matching MyConstants.m file, I have:

        extern int abc[3] = {11, 22, 33};

    In each of my other *.m files, I have:

        #import "MyConstants.h"

    Inside one of my viewDidLoad methods, I have:

        extern int abc[];
        NSLog(@"abc = (%d) (%d)", abc[1], sizeof(abc)/sizeof(int));

    Why does it display "abc = (0) (3)" instead of "abc = (22) (3)"? How do
    I make this work as expected?
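
    The declaration and the definition are swapped here: the header's
    int abc[3]; is a (tentative) definition pulled into every .m file, and
    the linker can end up resolving references to one of those zero-filled
    copies instead of the initialized one. The usual pattern is to declare
    in the header and define exactly once. A sketch:

        /* MyConstants.h -- declaration only */
        extern int abc[3];

        /* MyConstants.m -- the one definition, with the initializer */
        int abc[3] = {11, 22, 33};

        /* any other .m file: just #import "MyConstants.h";
           no local extern redeclaration is needed */
        NSLog(@"abc = (%d) (%d)", abc[1], (int)(sizeof(abc)/sizeof(int)));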

  • Problem with FIFOs on Linux

    - by nunos
    I am having trouble debugging why n_bytes in the read_from_fifo
    function in server.c doesn't correspond to the value written to the
    FIFO. It should only read 25 bytes, but it tries to read a lot more
    (1836020505 bytes, to be exact). Any idea why this is happening?

    server.c:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <errno.h>
        #include <unistd.h>
        #include <fcntl.h>
        #include <sys/wait.h>
        #include <signal.h>
        #include <pthread.h>
        #include <sys/stat.h>

        typedef enum { false, true } bool;

        // first read the int with the number of bytes the data will have,
        // then read that number of bytes
        bool read_from_fifo(int fd, char* var)
        {
            int n_bytes;
            if (read(fd, &n_bytes, sizeof(int))) {
                printf("going to read %d bytes\n", n_bytes);
                if (read(fd, var, n_bytes))
                    printf("read var\n");
                else {
                    printf("error in read var. errno: %d\n", errno);
                    exit(-1);
                }
            }
            return true;
        }

        int main()
        {
            mkfifo("/tmp/foo", 0660);
            int fd = open("/tmp/foo", O_RDONLY);
            char var[100];
            read_from_fifo(fd, var);
            printf("var: %s\n", var);
            return 0;
        }

    client.c:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <errno.h>
        #include <unistd.h>
        #include <fcntl.h>

        typedef enum { false, true } bool;

        // first write to fd an int with the number of bytes that will be
        // written afterwards
        bool write_to_fifo(int fd, char* data)
        {
            int n_bytes = strlen(data) * sizeof(char);
            printf("going to write %d bytes\n", n_bytes);
            if (write(fd, &n_bytes, sizeof(int) != -1))
                if (write(fd, data, n_bytes) != -1)
                    return true;
            return false;
        }

        int main()
        {
            int fd = open("/tmp/foo", O_WRONLY);
            char data[] = "some random string abcdef";
            write_to_fifo(fd, data);
            return 0;
        }

    Help is greatly appreciated. Thanks in advance.
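
    One bug in write_to_fifo would explain the garbage length exactly: the
    closing parenthesis is misplaced, so sizeof(int) != -1 (which evaluates
    to 1) becomes write's size argument, and only one byte of the length
    ever reaches the FIFO. The reader then reads four bytes -- 0x19 (25)
    followed by 's', 'o', 'm' from the message text -- which is 0x6D6F7319,
    i.e. 1836020505 as a little-endian int. The intended lines:

        // was: if (write(fd, &n_bytes, sizeof(int) != -1))
        //      which writes (sizeof(int) != -1) == 1 byte of the length
        if (write(fd, &n_bytes, sizeof(int)) != -1)
            if (write(fd, data, n_bytes) != -1)
                return true;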

  • No-overflow cast on x64

    - by Cheeso
    I have an existing C codebase that works on x86. I'm now compiling it
    for x64. What I'd like to do is cast a size_t to a DWORD, and throw an
    exception if there's a loss of data.

    Q: Is there an idiom for this?

    Here's why I'm doing this: a bunch of Windows APIs accept DWORDs as
    arguments, and the code currently assumes sizeof(DWORD) ==
    sizeof(size_t). That assumption holds for x86 but not for x64, so when
    compiling for x64, passing a size_t in place of a DWORD argument
    generates a compile-time warning. In virtually all of these cases the
    actual size is not going to exceed 2^32, but I want to code it
    defensively and explicitly. This is my first x64 project, so... be
    gentle.
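
    A sketch of one common idiom: a small checked-narrowing template. As
    written it covers the unsigned-to-unsigned case in the question
    (size_t to DWORD); mixed signedness needs extra care. The extra parens
    around numeric_limits<To>::max dodge the max() macro from windows.h.

        #include <limits>
        #include <stdexcept>

        // a checked narrowing cast: throws instead of silently truncating
        template <typename To, typename From>
        To checked_cast(From value)
        {
            if (value > static_cast<From>((std::numeric_limits<To>::max)()))
                throw std::overflow_error("checked_cast: value out of range");
            return static_cast<To>(value);
        }

        // usage at a Windows API call site:
        // DWORD cb = checked_cast<DWORD>(byteCount);  // byteCount is a size_t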

  • Trying to copy a bitmap into the WMP renderer -> upside down!

    - by Roey
    Hi All. I'm writing a video DMO decoder and trying to return a bitmap
    to the WMP renderer for display... but WMP displays it upside down!
    This is the code:

        HBITMAP* hBmp = new HBITMAP();
        int result;
        m_pScrRenderer->CreateFrame(hBmp, &result); // returns the HBITMAP handle

        BITMAP bmStruct;
        memset(&bmStruct, 0, sizeof(BITMAP));
        GetObject(*hBmp, sizeof(BITMAP), &bmStruct);

        int size = bmStruct.bmWidthBytes * bmStruct.bmHeight;
        memcpy(pbOutData, bmStruct.bmBits, size); // pbOutData is WMP's renderer buffer

    This produces an upside-down image. What should I change in this code?
    Thank you! Roey.
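
    GDI DIBs are bottom-up by default (positive biHeight), so the first row
    in memory is the bottom scanline. Copying the rows in reverse order --
    or creating the frame with a negative height to get a top-down DIB --
    should fix the flip. A sketch using the question's variables:

        BYTE *src = (BYTE *) bmStruct.bmBits;
        BYTE *dst = (BYTE *) pbOutData;
        int stride = bmStruct.bmWidthBytes;
        for (int y = 0; y < bmStruct.bmHeight; ++y) {
            // read the source bottom-to-top, write the destination top-to-bottom
            memcpy(dst + y * stride,
                   src + (bmStruct.bmHeight - 1 - y) * stride,
                   stride);
        }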

  • Data loss between conversions

    - by Alex Brooks
    Why do I lose data in the conversions below, even though both types
    take up the same amount of space? If the conversion were done bitwise,
    it should be true that x == z unless data is being stripped during the
    conversion, right? Is there a way to do the two conversions without
    losing data (i.e. so that x == z)?

    main.cpp:

        #include <stdio.h>
        #include <stdint.h>

        int main()
        {
            double x = 5.5;
            uint64_t y = static_cast<uint64_t>(x);
            double z = static_cast<double>(y); // desire: z = 5.5
            printf("Size of double: %lu\nSize of uint64_t: %lu\n",
                   sizeof(double), sizeof(uint64_t));
            printf("%f\n%lu\n%f\n", x, y, z);
        }

    Results:

        Size of double: 8
        Size of uint64_t: 8
        5.500000
        5
        5.000000
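
    static_cast converts by value, not bitwise: 5.5 becomes the integer 5,
    and the fraction is gone before the cast back. A bit-for-bit round trip
    preserves all 8 bytes; memcpy is the well-defined way to do it. A
    sketch:

        #include <stdint.h>
        #include <string.h>

        double x = 5.5;
        uint64_t y;
        memcpy(&y, &x, sizeof(y));  // y now holds the raw IEEE-754 bit pattern of x
        double z;
        memcpy(&z, &y, sizeof(z));  // z == 5.5 again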

  • How to resize an OpenGL window created with wglCreateContext?

    - by Nick
    Is it possible to resize an OpenGL window (or device context) created
    with wglCreateContext without disabling it? If so, how? Right now I
    have a function which resizes the DC, but the only way I could get it
    to work was to call DisableOpenGL and then re-enable. This causes any
    textures and other state changes to be lost. I would like to do this
    without the disable, so that I do not have to go through the tedious
    task of recreating the OpenGL DC state.

        HWND hWnd;
        HDC hDC;

        void View_setSizeWin32(int width, int height)
        {
            // resize the window
            LPRECT rec = malloc(sizeof(RECT));
            GetWindowRect(hWnd, rec);
            SetWindowPos(hWnd, HWND_TOP, rec->left, rec->top,
                         rec->left+width, rec->left+height, SWP_NOMOVE);
            free(rec);

            // sad panda
            DisableOpenGL(hWnd, hDC, hRC);
            EnableOpenGL(hWnd, &hDC, &hRC);

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-(width/2), width/2, -(height/2), height/2, -1.0, 1.0);
            // have fun recreating the openGL state....
        }

        void EnableOpenGL(HWND hWnd, HDC * hDC, HGLRC * hRC)
        {
            PIXELFORMATDESCRIPTOR pfd;
            int format;

            // get the device context (DC)
            *hDC = GetDC(hWnd);

            // set the pixel format for the DC
            ZeroMemory(&pfd, sizeof(pfd));
            pfd.nSize = sizeof(pfd);
            pfd.nVersion = 1;
            pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
            pfd.iPixelType = PFD_TYPE_RGBA;
            pfd.cColorBits = 24;
            pfd.cDepthBits = 16;
            pfd.iLayerType = PFD_MAIN_PLANE;
            format = ChoosePixelFormat(*hDC, &pfd);
            SetPixelFormat(*hDC, format, &pfd);

            // create and enable the render context (RC)
            *hRC = wglCreateContext(*hDC);
            wglMakeCurrent(*hDC, *hRC);
        }

        void DisableOpenGL(HWND hWnd, HDC hDC, HGLRC hRC)
        {
            wglMakeCurrent(NULL, NULL);
            wglDeleteContext(hRC);
            ReleaseDC(hWnd, hDC);
        }
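
    The wgl context is tied to the window's DC, and neither is invalidated
    by a resize, so the disable/enable pair should be unnecessary: move the
    window, then refit the viewport and projection. A sketch (note that
    SetWindowPos takes a width and height in cx/cy, not right/bottom
    coordinates, and x/y are ignored with SWP_NOMOVE):

        void View_setSizeWin32(int width, int height)
        {
            SetWindowPos(hWnd, HWND_TOP, 0, 0, width, height, SWP_NOMOVE);

            // the context, textures, and other state all survive;
            // only the viewport and projection need updating
            glViewport(0, 0, width, height);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(-(width/2), width/2, -(height/2), height/2, -1.0, 1.0);
            glMatrixMode(GL_MODELVIEW);
        }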

  • Differences between matrix implementations in C

    - by tempy
    I created two 2D arrays (matrices) in C in two different ways. I don't
    understand the difference between the way they're represented in
    memory, or the reason why I can't refer to them in the same way:

        scanf("%d", &intMatrix1[i][j]);          // can't refer to it as &intMatrix1[(i * lines)+j]
        scanf("%d", &intMatrix2[(i * lines)+j]); // can't refer to it as &intMatrix2[i][j]

    What is the difference between the ways these two arrays are
    implemented, and why do I have to refer to them differently? How do I
    refer to an element of each array in the same way (the ?????? in my
    printMatrix function)?

        int main()
        {
            int **intMatrix1;
            int *intMatrix2;
            int i, j, lines, columns;
            lines = 3;
            columns = 2;

            /************************* intMatrix1 ****************************/
            intMatrix1 = (int **)malloc(lines * sizeof(int *));
            for (i = 0; i < lines; ++i)
                intMatrix1[i] = (int *)malloc(columns * sizeof(int));

            for (i = 0; i < lines; ++i) {
                for (j = 0; j < columns; ++j) {
                    printf("Type a number for intMatrix1[%d][%d]\t", i, j);
                    scanf("%d", &intMatrix1[i][j]);
                }
            }

            /************************* intMatrix2 ****************************/
            intMatrix2 = (int *)malloc(lines * columns * sizeof(int));
            for (i = 0; i < lines; ++i) {
                for (j = 0; j < columns; ++j) {
                    printf("Type a number for intMatrix2[%d][%d]\t", i, j);
                    scanf("%d", &intMatrix2[(i * lines)+j]);
                }
            }

            /************** printing intMatrix1 & intMatrix2 ****************/
            printf("intMatrix1:\n\n");
            printMatrix(*intMatrix1, lines, columns);
            printf("intMatrix2:\n\n");
            printMatrix(intMatrix2, lines, columns);
        }

        /************************* printMatrix ****************************/
        void printMatrix(int *ptArray, int h, int w)
        {
            int i, j;
            printf("Printing matrix...\n\n\n");
            for (i = 0; i < h; ++i)
                for (j = 0; j < w; ++j)
                    printf("array[%d][%d] ==============> %d\n", i, j, ??????);
        }
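
    intMatrix1 is an array of row pointers (each row a separate
    allocation), while intMatrix2 is one contiguous block, which is why
    they cannot share index syntax. Two details worth flagging: the flat
    index should multiply by the number of columns, not lines (i * lines
    with lines=3 and columns=2 runs past the 6-element buffer), and
    passing *intMatrix1 to printMatrix only hands over the first row,
    since the other rows live in separate allocations. A sketch of the
    corrected indexing:

        /* flat indexing: row * number-of-columns + column */
        scanf("%d", &intMatrix2[i * columns + j]);

        /* inside printMatrix, where w is the column count, the ?????? is: */
        printf("array[%d][%d] ==============> %d\n", i, j, ptArray[i * w + j]);

    To print intMatrix1 the same way, either print it row by row
    (printMatrix(intMatrix1[i], 1, columns) per row) or allocate it as one
    contiguous block in the first place.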

  • Running out of memory... How?

    - by maxdj
    I'm attempting to write a solver for a particular puzzle. It tries to
    find a solution by trying every possible move, one at a time, until it
    finds one. The first version tried to solve it depth-first by
    continually trying moves until it failed, then backtracking, but this
    turned out to be too slow. I have rewritten it to be breadth-first
    using a queue structure, but I'm having problems with memory
    management. Here are the relevant parts:

        int main(int argc, char *argv[])
        {
            ...
            int solved = 0;
            do {
                solved = solver(queue);
            } while (!solved && !pblListIsEmpty(queue));
            ...
        }

        int solver(PblList *queue)
        {
            state_t *state = (state_t *) pblListPoll(queue);

            if (is_solution(state->pucks)) {
                print_solution(state);
                return 1;
            }

            state_t *state_cp;
            puck new_location;
            for (int p = 0; p < puck_count; p++) {
                for (dir i = NORTH; i <= WEST; i++) {
                    if (!rules(state->pucks, p, i)) continue;
                    new_location = in_dir(state->pucks, p, i);
                    if (new_location.x != -1) {
                        state_cp = (state_t *) malloc(sizeof(state_t));
                        state_cp->move.from = state->pucks[p];
                        state_cp->move.direction = i;
                        state_cp->prev = state;
                        state_cp->pucks = (puck *) malloc(puck_count * sizeof(puck));
                        memcpy(state_cp->pucks, state->pucks, puck_count * sizeof(puck)); /* CRASH */
                        state_cp->pucks[p] = new_location;
                        pblListPush(queue, state_cp);
                    }
                }
            }
            return 0;
        }

    When I run it I get this error:

        ice(90175) malloc: *** mmap(size=2097152) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        Bus error

    The error happens around iteration 93,000. From what I can tell, the
    error message is from malloc failing, and the bus error is from the
    memcpy after it. I have a hard time believing that I'm running out of
    memory, since each game state is only ~400 bytes. Yet that does seem to
    be what's happening, seeing as Activity Monitor reports 3.99 GB in use
    before the crash. I'm using the queue structure (a linked list) from
    http://www.mission-base.com/peter/source/. Clearly I'm doing something
    dumb. Any suggestions?

  • Storing CLLocationCoordinate2D in NSMutableArray

    - by Amarsh
    After some searching, I arrived at the following solution (ref:
    http://stackoverflow.com/questions/1392909/nsmutablearray-addobject-with-mallocd-struct):

        CLLocationCoordinate2D* new_coordinate = malloc(sizeof(CLLocationCoordinate2D));
        new_coordinate->latitude = latitude;
        new_coordinate->longitude = longitude;
        [points addObject:[NSData dataWithBytes:(void *)new_coordinate
                                         length:sizeof(CLLocationCoordinate2D)]];
        free(new_coordinate);

    And I access it as:

        CLLocationCoordinate2D* c = (CLLocationCoordinate2D*) [[points objectAtIndex:0] bytes];

    However, someone claims that there is a memory leak here. Can anyone
    tell me where the leak is and how to fix it? Further, is there a better
    way of storing a list of CLLocationCoordinate2D in an NSMutableArray?
    Please give sample code, since I am an Objective-C newbie.
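
    For what it's worth, the snippet as quoted does not leak:
    dataWithBytes:length: copies the bytes, and the malloc'd struct is
    freed right after. The heap allocation is simply unnecessary -- a
    stack temporary works and is a common way to box small C structs. A
    sketch, reusing the question's own calls:

        CLLocationCoordinate2D c;
        c.latitude = latitude;
        c.longitude = longitude;
        [points addObject:[NSData dataWithBytes:&c length:sizeof(c)]];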
