Search Results

Search found 1466 results on 59 pages for 'sizeof'.


  • error C2440: 'initializing' : cannot convert from 'const wchar_t [9]' to 'LPCSTR'

    - by numerical25
    When I add the following to my code. // Define the input layout D3D10_INPUT_ELEMENT_DESC layout[] = { { L"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 }, }; UINT numElements = sizeof(layout)/sizeof(layout[0]); I get the following error 1>c:\users\numerical25\desktop\intro todirectx\msdntutorials\tutorial0\tutorial\tutorial\main.cpp(43) : error C2440: 'initializing' : cannot convert from 'const wchar_t [9]' to 'LPCSTR' The error points straight to that line of code. if i remove the code, everything compiles correctly.
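
    A likely fix, sketched below: the SemanticName member of D3D10_INPUT_ELEMENT_DESC is declared as LPCSTR (a narrow char string), so the wide literal L"POSITION" cannot initialize it. Dropping the L prefix should compile, with everything else unchanged from the question.

    ```cpp
    // SemanticName is an LPCSTR, so a narrow string literal is needed here.
    D3D10_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    };
    UINT numElements = sizeof(layout) / sizeof(layout[0]);
    ```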

    Read the article

  • How to send classes defined in .proto (protocol-buffers) over a socket

    - by make
    Hi, I am trying to send a proto over a socket, but i am getting segmentation error. Could someone please help and tell me what is wrong with this example? file.proto message data{ required string x1 = 1; required uint32 x2 = 2; required float x3 = 3; } client.cpp ... // class defined in proto data data_snd; data data_rec; char *y1 = "operation1"; uint32_t y2 = 123 ; float y3 = 3.14; // assigning data to send() data_snd.set_x1(y1); data_snd.set_x2(y2); data_snd.set_x3(y3); //sending data to the server if (send(socket, &data_snd, sizeof(data_snd), 0) < 0) { cerr << "send() failed" ; exit(1); } //receiving data from the client if (recv(socket, &data_rec, sizeof(data_rec), 0) < 0) { cerr << "recv() failed"; exit(1); } //printing received data cout << data_rec.x1() << "\n"; cout << data_rec.x2() << "\n"; cout << data_rec.x3() << "\n"; ... server.cpp ... //receiving data from the client if (recv(socket, &data_rec, sizeof(data_rec), 0) < 0) { cerr << "recv() failed"; exit(1); } //printing received data cout << data_rec.x1() << "\n"; cout << data_rec.x2() << "\n"; cout << data_rec.x3() << "\n"; // assigning data to send() data_snd.set_x1(data_rec.x1()); data_snd.set_x2(data_rec.x2()); data_snd.set_x3(data_rec.x3()); //sending data to the server if (send(socket, &data_snd, sizeof(data_snd), 0) < 0) { cerr << "send() failed" ; exit(1); } ... Thanks for help and replies-
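
    A plausible cause and a sketch: a generated protobuf message is a C++ object holding pointers, so calling send() with sizeof(data_snd) raw bytes ships the object's internals rather than its contents, and interpreting the received bytes as a message corrupts memory. The usual pattern is to serialize to a byte string and send that, with a length prefix so the receiver knows how many bytes to read (error handling omitted; socket is the connected descriptor from the question).

    ```cpp
    // sender: serialize, then send a 4-byte length prefix followed by the payload
    std::string wire;
    data_snd.SerializeToString(&wire);
    uint32_t len = htonl(static_cast<uint32_t>(wire.size()));
    send(socket, &len, sizeof(len), 0);
    send(socket, wire.data(), wire.size(), 0);

    // receiver: read the length prefix, then exactly that many bytes, then parse
    uint32_t netlen = 0;
    recv(socket, &netlen, sizeof(netlen), MSG_WAITALL);
    std::string buf(ntohl(netlen), '\0');
    recv(socket, &buf[0], buf.size(), MSG_WAITALL);
    data_rec.ParseFromString(buf);
    ```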

    Read the article

  • memcpy segmentation fault on Linux but not OS X

    - by Andre
    I'm working on implementing a log based file system for a file as a class project. I have a good amount of it working on my 64 bit OS X laptop, but when I try to run the code on the CS department's 32 bit linux machines, I get a seg fault. The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write as well as some metadata (which sector he wants to write to, the type of operation, etc). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0, and 0-15 on sector 1. The second record writes 16-512 on sector 1, and 0-31 on sector 2. The third record writes 32-512 on sector 2, and 0-47 on sector 3. ETC. So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, copy starting at record into buf1+the calculated offset for 512-offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record+DISK_SECTOR_SIZE-offset" in the below code segfaults, but only on the linux machine. Running some random tests, it gets more curious. The linux machine reports sizeof(Record) to be 528. Therefore, if I tried to memcpy from record+500 into buf for 1 byte, it shouldn't have a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record+254, 1) works, but memcpy(buf1, record+255, 1) segfaults. Does anyone know what I'm missing? Record *record = malloc(sizeof(Record)); record->tid = tid; record->opType = OP_WRITE; record->opArg = sector; int i; for (i = 0; i < DISK_SECTOR_SIZE; i++) { record->data[i] = buf[i]; // *buf is passed into this function } char* buf1 = malloc(DISK_SECTOR_SIZE); char* buf2 = malloc(DISK_SECTOR_SIZE); d_read(ad->disk, ad->curLogSector, buf1); d_read(ad->disk, ad->curLogSector+1, buf2); memcpy(buf1+offset, record, DISK_SECTOR_SIZE-offset); memcpy(buf2, record+DISK_SECTOR_SIZE-offset, offset+sizeof(Record)-sizeof(record->data));
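
    One likely culprit, sketched below: record is a Record *, so record + DISK_SECTOR_SIZE - offset advances by that many whole Records (528 bytes each), not bytes, which lands far outside the allocation; that it happens to survive on one machine is just luck of the heap layout. Casting to char * makes the arithmetic byte-wise (names as in the question).

    ```c
    memcpy(buf1 + offset, record, DISK_SECTOR_SIZE - offset);
    /* advance by bytes, not by whole Record structs */
    memcpy(buf2,
           (char *)record + (DISK_SECTOR_SIZE - offset),
           sizeof(Record) - (DISK_SECTOR_SIZE - offset));
    ```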

    Read the article

  • Bidirectional FIFO

    - by nunos
    I would like to implement a bidirectional fifo. The code below is functioning but it is not using bidirectional fifo. I have searched all over the internet, but haven't found any good example... How can I do that? Thanks, WRITER.c: #include <stdio.h> #include <unistd.h> #include <string.h> #include <sys/types.h> #include <sys/wait.h> #include <fcntl.h> #define MAXLINE 4096 #define READ 0 #define WRITE 1 int main (int argc, char** argv) { int a, b, fd; do { fd=open("/tmp/myfifo",O_WRONLY); if (fd==-1) sleep(1); } while (fd==-1); while (1) { scanf("%d", &a); scanf("%d", &b); write(fd,&a,sizeof(int)); write(fd,&b,sizeof(int)); if (a == 0 && b == 0) { break; } } close(fd); return 0; } READER.c: #include <stdio.h> #include <unistd.h> #include <string.h> #include <sys/types.h> #include <sys/wait.h> #include <fcntl.h> #include <sys/stat.h> #define MAXLINE 4096 #define READ 0 #define WRITE 1 int main(void) { int n1, n2; int fd; mkfifo("/tmp/myfifo",0660); fd=open("/tmp/myfifo",O_RDONLY); while(read(fd, &n1, sizeof(int) )) { read(fd, &n2, sizeof(int)); if (n1 == 0 && n2 == 0) { break; } printf("soma: %d\n",n1+n2); printf("diferenca: %d\n", n1-n2); printf("divisao: %f\n", n1/(double)n2); printf("multiplicacao: %d\n", n1*n2); } close(fd); return 0; }
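
    A sketch of the usual approach: a FIFO only carries data one way, so bidirectional traffic is normally done with two FIFOs, one per direction. The paths and helper names below are made up, and each side opens the two pipes in opposite order so that neither process blocks forever inside open().

    ```c
    #include <fcntl.h>
    #include <sys/stat.h>

    /* reader/server side (illustrative helper) */
    void server_setup(int *in_fd, int *out_fd)
    {
        mkfifo("/tmp/fifo_c2s", 0660);              /* client -> server */
        mkfifo("/tmp/fifo_s2c", 0660);              /* server -> client */
        *in_fd  = open("/tmp/fifo_c2s", O_RDONLY);  /* blocks until the client opens it for writing */
        *out_fd = open("/tmp/fifo_s2c", O_WRONLY);
    }

    /* writer/client side: note the reversed open order */
    void client_setup(int *in_fd, int *out_fd)
    {
        *out_fd = open("/tmp/fifo_c2s", O_WRONLY);
        *in_fd  = open("/tmp/fifo_s2c", O_RDONLY);
    }
    ```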

    Read the article

  • NSKeyedUnarchiver chokes when trying to unarchive more than one object

    - by ajduff574
    We've got a custom matrix class, and we're attempting to archive and unarchive an NSArray containing four of them. The first seems to get unarchived fine (we can see that initWithCoder is called once), but then the program simply hangs, using 100% CPU. It doesn't continue or output any errors. These are the relevant methods from the matrix class (rows, columns, and matrix are our only instance variables): -(void)encodeWithCoder:(NSCoder*) coder { float temp[rows * columns]; for(int i = 0; i < rows; i++) { for(int j = 0; j < columns; j++) { temp[columns * i + j] = matrix[i][j]; } } [coder encodeBytes:(const void *)temp length:rows*columns*sizeof(float) forKey:@"matrix"]; [coder encodeInteger:rows forKey:@"rows"]; [coder encodeInteger:columns forKey:@"columns"]; } -(id)initWithCoder:(NSCoder *) coder { if (self = [super init]) { rows = [coder decodeIntegerForKey:@"rows"]; columns = [coder decodeIntegerForKey:@"columns"]; NSUInteger * len; *len = (unsigned int)(rows * columns * sizeof(float)); float * temp = (float * )[coder decodeBytesForKey:@"matrix" returnedLength:len]; matrix = (float ** )calloc(rows, sizeof(float*)); for (int i = 0; i < rows; i++) { matrix[i] = (float*)calloc(columns, sizeof(float)); } for(int i = 0; i < rows *columns; i++) { matrix[i / columns][i % columns] = temp[i]; } } return self; } And this is really all we're trying to do: NSArray * weightMatrices = [NSArray arrayWithObjects:w1,w2,w3,w4,nil]; [NSKeyedArchiver archiveRootObject:weightMatrices toFile:@"weights.archive"]; NSArray * newWeights = [NSKeyedUnarchiver unarchiveObjectWithFile:@"weights.archive"]; What's driving us crazy is that we can archive and unarchive a single matrix just fine. We've done so (successfully) with a matrix many times larger than these four combined.

    Read the article

  • HttpSendRequest not getting latest file from server

    - by Doug Kavendek
    I am having an issue with my HTTP requests in my app, such that if the remote file is the same size as the local file (even though its modified time is different, as its contents have been changed), attempts to download it return quickly and the newer file is not downloaded. In short, the process I am following is: Setting up an HTTP connection with the INTERNET_FLAG_RESYNCHRONIZE flag and calling HttpSendRequest(); then checking the HTTP status code and finding it to be "200". If the remote file is updated, but remains the same size as the local copy: The local file is unchanged after running the app. If I call HttpQueryInfo() with HTTP_QUERY_LAST_MODIFIED after sending the request, it gives me the actual last modified time of the server's file, which I can see is different from the local file I am trying to have it overwrite. If the remote file is updated, and the file size becomes different from the local copy: It is downloaded and overwrites the local copy as expected. Here's a fairly abridged version of the code, to cut out helpers and error checking: // szAppName = our app name HINTERNET hInternetHandle = InternetOpen( szAppName, INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0 ); // szServerName = our server name hInternetHandle = InternetConnect( hInternetHandle, szServerName, INTERNET_DEFAULT_HTTP_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, NULL, 0 ); // szPath = the file to download LPCSTR aszDefault[2] = { "*/*", NULL }; DWORD dwFlags = 0 | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTP | INTERNET_FLAG_IGNORE_REDIRECT_TO_HTTPS | INTERNET_FLAG_KEEP_CONNECTION | INTERNET_FLAG_NO_AUTH | INTERNET_FLAG_NO_AUTO_REDIRECT | INTERNET_FLAG_NO_COOKIES | INTERNET_FLAG_NO_UI | INTERNET_FLAG_RESYNCHRONIZE; HINTERNET hHandle = HttpOpenRequest( hInternetHandle, "GET", szPath, NULL, NULL, aszDefault, dwFlags, 0 ); DWORD dwTimeOut = 10 * 1000; // In milliseconds InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_TIMEOUT, &dwTimeOut, sizeof( dwTimeOut ) ); InternetSetOption( hInternetHandle, INTERNET_OPTION_RECEIVE_TIMEOUT, &dwTimeOut, sizeof( dwTimeOut ) ); InternetSetOption( hInternetHandle, INTERNET_OPTION_SEND_TIMEOUT, &dwTimeOut, sizeof( dwTimeOut ) ); DWORD dwRetries = 5; InternetSetOption( hInternetHandle, INTERNET_OPTION_CONNECT_RETRIES, &dwRetries, sizeof( dwRetries ) ); HttpSendRequest( hInternetHandle, NULL, 0, NULL, 0 ); Since I have found I can query the remote file's last modified time, and find it to be accurate, I know it's actually getting to the server. I thought that specifying INTERNET_FLAG_RESYNCHRONIZE would force the file to resynch if it's out of date. Do I have it all wrong? Is this just how it's supposed to work?
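
    One thing to try, sketched below: INTERNET_FLAG_RESYNCHRONIZE only revalidates the cached copy, and the cache can consider an entry fresh even though the file's contents changed. Adding INTERNET_FLAG_RELOAD (and optionally INTERNET_FLAG_NO_CACHE_WRITE) to the flag set bypasses the cache entirely; the rest of the request stays as in the question.

    ```cpp
    DWORD dwFlags = 0
        | INTERNET_FLAG_RELOAD            // always fetch from the origin server, not the cache
        | INTERNET_FLAG_NO_CACHE_WRITE    // and do not write the response into the cache
        | INTERNET_FLAG_KEEP_CONNECTION
        | INTERNET_FLAG_NO_UI;            // the other flags from the question can stay as well

    HINTERNET hHandle = HttpOpenRequest( hInternetHandle, "GET", szPath,
                                         NULL, NULL, aszDefault, dwFlags, 0 );
    ```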

    Read the article

  • BST insert operation: don't insert a node if a duplicate already exists

    - by jeev
    the following code reads an input array, and constructs a BST from it. if the current arr[i] is a duplicate, of a node in the tree, then arr[i] is discarded. count in the struct node refers to the number of times a number appears in the array. fi refers to the first index of the element found in the array. after the insertion, i am doing a post-order traversal of the tree and printing the data, count and index (in this order). the output i am getting when i run this code is: 0 0 7 0 0 6 thank you for your help. Jeev struct node{ int data; struct node *left; struct node *right; int fi; int count; }; struct node* binSearchTree(int arr[], int size); int setdata(struct node**node, int data, int index); void insert(int data, struct node **root, int index); void sortOnCount(struct node* root); void main(){ int arr[] = {2,5,2,8,5,6,8,8}; int size = sizeof(arr)/sizeof(arr[0]); struct node* temp = binSearchTree(arr, size); sortOnCount(temp); } struct node* binSearchTree(int arr[], int size){ struct node* root = (struct node*)malloc(sizeof(struct node)); if(!setdata(&root, arr[0], 0)) fprintf(stderr, "root couldn't be initialized"); int i = 1; for(;i<size;i++){ insert(arr[i], &root, i); } return root; } int setdata(struct node** nod, int data, int index){ if(*nod!=NULL){ (*nod)->fi = index; (*nod)->left = NULL; (*nod)->right = NULL; return 1; } return 0; } void insert(int data, struct node **root, int index){ struct node* new = (struct node*)malloc(sizeof(struct node)); setdata(&new, data, index); struct node** temp = root; while(1){ if(data<=(*temp)->data){ if((*temp)->left!=NULL) *temp=(*temp)->left; else{ (*temp)->left = new; break; } } else if(data>(*temp)->data){ if((*temp)->right!=NULL) *temp=(*temp)->right; else{ (*temp)->right = new; break; } } else{ (*temp)->count++; free(new); break; } } } void sortOnCount(struct node* root){ if(root!=NULL){ sortOnCount(root->left); sortOnCount(root->right); printf("%d %d %d\n", (root)->data, (root)->count, (root)->fi); } }
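
    A few likely problems, sketched below: setdata() never stores the value or initialises count, so the tree compares and prints garbage; the data <= comparison sends duplicates down the left branch instead of reaching the count++ case; and insert() walks the tree through *temp, which is the caller's root pointer, so the root moves on every insertion. A corrected setdata() and an insert() using a local cursor could look like this (make_node() is a hypothetical helper that wraps malloc plus setdata):

    ```c
    int setdata(struct node **nod, int data, int index)
    {
        if (*nod == NULL)
            return 0;
        (*nod)->data  = data;    /* was never assigned in the original */
        (*nod)->count = 1;       /* first occurrence */
        (*nod)->fi    = index;
        (*nod)->left  = NULL;
        (*nod)->right = NULL;
        return 1;
    }

    void insert(int data, struct node **root, int index)
    {
        struct node *cur = *root;          /* local cursor; the caller's root stays put */
        while (1) {
            if (data < cur->data) {
                if (cur->left)  cur = cur->left;
                else { cur->left = make_node(data, index); break; }   /* make_node(): hypothetical malloc + setdata */
            } else if (data > cur->data) {
                if (cur->right) cur = cur->right;
                else { cur->right = make_node(data, index); break; }
            } else {                       /* duplicate: count it, don't insert */
                cur->count++;
                break;
            }
        }
    }
    ```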

    Read the article

  • realloc - converting int to char

    - by Mike
    I'm converting an array of integers into a char by iterating through the whole array, and then I'm adding the resulting string to ncurses's method new_item. For some reason I'm doing something wrong the way I reallocate memory, thus I get the the first column as: -4 Choice 1 0 Choice 1 4 Choice 2 1 Choice 1 4 Choice 3 - Instead of - 2 Choice 1 4 Choice 4 3 Choice 1 4 Exit 4 Choice 1 - #include <stdio.h> #include <stdlib.h> #include <curses.h> #include <menu.h> #define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0])) #define CTRLD 4 char *choices[] = { "Choice 1", "Choice 2", "Choice 3", "Choice 4", "Exit", }; int table[5]={0,1,2,3,4}; int main() { ITEM **my_items; int c; MENU *my_menu; int n_choices, i; ITEM *cur_item; initscr(); cbreak(); noecho(); keypad(stdscr, TRUE); n_choices = ARRAY_SIZE(choices); my_items = (ITEM **)calloc(n_choices + 1, sizeof(ITEM *)); // right here char *convert = NULL; for(i = 0; i < n_choices; ++i){ convert = (char *) realloc (convert, sizeof(char) * 4); sprintf(convert, "%i", table[i]); my_items[i] = new_item(convert, choices[i]); } my_items[n_choices] = (ITEM *)NULL; my_menu = new_menu((ITEM **)my_items); mvprintw(LINES - 2, 0, "F1 to Exit"); post_menu(my_menu); refresh(); while((c = getch()) != KEY_F(1)) { switch(c) { case KEY_DOWN: menu_driver(my_menu, REQ_DOWN_ITEM); break; case KEY_UP: menu_driver(my_menu, REQ_UP_ITEM); break; } } free(convert); unpost_menu(my_menu); free_item(my_items[0]); free_item(my_items[1]); free_item(my_items[2]); free_item(my_items[3]); free_item(my_items[4]); free_menu(my_menu); endwin(); }
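
    A sketch of one likely fix: new_item() does not copy the strings it is given, so reusing a single realloc'd buffer leaves every menu entry pointing at the same, eventually stale, text. Giving each item its own small buffer avoids that (labels is a local array introduced here; freeing it at the end is omitted, as in the original).

    ```c
    char *labels[ARRAY_SIZE(choices)];
    for (i = 0; i < n_choices; ++i) {
        labels[i] = malloc(12);              /* enough for a 32-bit int and the terminator */
        sprintf(labels[i], "%d", table[i]);
        my_items[i] = new_item(labels[i], choices[i]);
    }
    my_items[n_choices] = (ITEM *)NULL;
    ```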

    Read the article

  • OpenCL: Strange buffer or image behaviour with NVIDIA but not AMD

    - by Alex R.
    I have a big problem (on Linux): I create a buffer with defined data, then an OpenCL kernel takes this data and puts it into an image2d_t. When working on an AMD C50 (Fusion CPU/GPU) the program works as desired, but on my GeForce 9500 GT the given kernel computes the correct result very rarely. Sometimes the result is correct, but very often it is incorrect. Sometimes it depends on very strange changes like removing unused variable declarations or adding a newline. I realized that disabling the optimization will increase the probability to fail. I have the most actual display driver in both systems. Here is my reduced code: #include <CL/cl.h> #include <string> #include <iostream> #include <sstream> #include <cmath> void checkOpenCLErr(cl_int err, std::string name){ const char* errorString[] = { "CL_SUCCESS", "CL_DEVICE_NOT_FOUND", "CL_DEVICE_NOT_AVAILABLE", "CL_COMPILER_NOT_AVAILABLE", "CL_MEM_OBJECT_ALLOCATION_FAILURE", "CL_OUT_OF_RESOURCES", "CL_OUT_OF_HOST_MEMORY", "CL_PROFILING_INFO_NOT_AVAILABLE", "CL_MEM_COPY_OVERLAP", "CL_IMAGE_FORMAT_MISMATCH", "CL_IMAGE_FORMAT_NOT_SUPPORTED", "CL_BUILD_PROGRAM_FAILURE", "CL_MAP_FAILURE", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "CL_INVALID_VALUE", "CL_INVALID_DEVICE_TYPE", "CL_INVALID_PLATFORM", "CL_INVALID_DEVICE", "CL_INVALID_CONTEXT", "CL_INVALID_QUEUE_PROPERTIES", "CL_INVALID_COMMAND_QUEUE", "CL_INVALID_HOST_PTR", "CL_INVALID_MEM_OBJECT", "CL_INVALID_IMAGE_FORMAT_DESCRIPTOR", "CL_INVALID_IMAGE_SIZE", "CL_INVALID_SAMPLER", "CL_INVALID_BINARY", "CL_INVALID_BUILD_OPTIONS", "CL_INVALID_PROGRAM", "CL_INVALID_PROGRAM_EXECUTABLE", "CL_INVALID_KERNEL_NAME", "CL_INVALID_KERNEL_DEFINITION", "CL_INVALID_KERNEL", "CL_INVALID_ARG_INDEX", "CL_INVALID_ARG_VALUE", "CL_INVALID_ARG_SIZE", "CL_INVALID_KERNEL_ARGS", "CL_INVALID_WORK_DIMENSION", "CL_INVALID_WORK_GROUP_SIZE", "CL_INVALID_WORK_ITEM_SIZE", "CL_INVALID_GLOBAL_OFFSET", "CL_INVALID_EVENT_WAIT_LIST", "CL_INVALID_EVENT", "CL_INVALID_OPERATION", "CL_INVALID_GL_OBJECT", "CL_INVALID_BUFFER_SIZE", "CL_INVALID_MIP_LEVEL", "CL_INVALID_GLOBAL_WORK_SIZE", }; if (err != CL_SUCCESS) { std::stringstream str; str << errorString[-err] << " (" << err << ")"; throw std::string(name)+(str.str()); } } int main(){ try{ cl_context m_context; cl_platform_id* m_platforms; unsigned int m_numPlatforms; cl_command_queue m_queue; cl_device_id m_device; cl_int error = 0; // Used to handle error codes clGetPlatformIDs(0,NULL,&m_numPlatforms); m_platforms = new cl_platform_id[m_numPlatforms]; error = clGetPlatformIDs(m_numPlatforms,m_platforms,&m_numPlatforms); checkOpenCLErr(error, "getPlatformIDs"); // Device error = clGetDeviceIDs(m_platforms[0], CL_DEVICE_TYPE_GPU, 1, &m_device, NULL); checkOpenCLErr(error, "getDeviceIDs"); // Context cl_context_properties properties[] = { CL_CONTEXT_PLATFORM, (cl_context_properties)(m_platforms[0]), 0}; m_context = clCreateContextFromType(properties, CL_DEVICE_TYPE_GPU, NULL, NULL, NULL); // m_private->m_context = clCreateContext(properties, 1, &m_private->m_device, NULL, NULL, &error); checkOpenCLErr(error, "Create context"); // Command-queue m_queue = clCreateCommandQueue(m_context, m_device, 0, &error); checkOpenCLErr(error, "Create command queue"); //Build program and kernel const char* source = "#pragma OPENCL EXTENSION cl_khr_byte_addressable_store : enable\n" "\n" "__kernel void bufToImage(__global unsigned char* in, __write_only image2d_t out, const unsigned int offset_x, const unsigned int image_width , const unsigned int maxval ){\n" "\tint i = 
get_global_id(0);\n" "\tint j = get_global_id(1);\n" "\tint width = get_global_size(0);\n" "\tint height = get_global_size(1);\n" "\n" "\tint pos = j*image_width*3+(offset_x+i)*3;\n" "\tif( maxval < 256 ){\n" "\t\tfloat4 c = (float4)(in[pos],in[pos+1],in[pos+2],1.0f);\n" "\t\tc.x /= maxval;\n" "\t\tc.y /= maxval;\n" "\t\tc.z /= maxval;\n" "\t\twrite_imagef(out, (int2)(i,j), c);\n" "\t}else{\n" "\t\tfloat4 c = (float4)(255.0f*in[2*pos]+in[2*pos+1],255.0f*in[2*pos+2]+in[2*pos+3],255.0f*in[2*pos+4]+in[2*pos+5],1.0f);\n" "\t\tc.x /= maxval;\n" "\t\tc.y /= maxval;\n" "\t\tc.z /= maxval;\n" "\t\twrite_imagef(out, (int2)(i,j), c);\n" "\t}\n" "}\n" "\n" "__constant sampler_t imageSampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;\n" "\n" "__kernel void imageToBuf(__read_only image2d_t in, __global unsigned char* out, const unsigned int offset_x, const unsigned int image_width ){\n" "\tint i = get_global_id(0);\n" "\tint j = get_global_id(1);\n" "\tint pos = j*image_width*3+(offset_x+i)*3;\n" "\tfloat4 c = read_imagef(in, imageSampler, (int2)(i,j));\n" "\tif( c.x <= 1.0f && c.y <= 1.0f && c.z <= 1.0f ){\n" "\t\tout[pos] = c.x*255.0f;\n" "\t\tout[pos+1] = c.y*255.0f;\n" "\t\tout[pos+2] = c.z*255.0f;\n" "\t}else{\n" "\t\tout[pos] = 200.0f;\n" "\t\tout[pos+1] = 0.0f;\n" "\t\tout[pos+2] = 255.0f;\n" "\t}\n" "}\n"; cl_int err; cl_program prog = clCreateProgramWithSource(m_context,1,&source,NULL,&err); if( -err != CL_SUCCESS ) throw std::string("clCreateProgramWithSources"); err = clBuildProgram(prog,0,NULL,"-cl-opt-disable",NULL,NULL); if( -err != CL_SUCCESS ) throw std::string("clBuildProgram(fromSources)"); cl_kernel kernel = clCreateKernel(prog,"bufToImage",&err); checkOpenCLErr(err,"CreateKernel"); cl_uint imageWidth = 8; cl_uint imageHeight = 9; //Initialize datas cl_uint maxVal = 255; cl_uint offsetX = 0; int size = imageWidth*imageHeight*3; int resSize = imageWidth*imageHeight*4; cl_uchar* data = new cl_uchar[size]; cl_float* expectedData = new cl_float[resSize]; for( int i = 0,j=0; i < size; i++,j++ ){ data[i] = (cl_uchar)i; expectedData[j] = (cl_float)i/255.0f; if ( i%3 == 2 ){ j++; expectedData[j] = 1.0f; } } cl_mem inBuffer = clCreateBuffer(m_context,CL_MEM_READ_ONLY|CL_MEM_COPY_HOST_PTR,size*sizeof(cl_uchar),data,&err); checkOpenCLErr(err, "clCreateBuffer()"); clFinish(m_queue); cl_image_format imgFormat; imgFormat.image_channel_order = CL_RGBA; imgFormat.image_channel_data_type = CL_FLOAT; cl_mem outImg = clCreateImage2D( m_context, CL_MEM_READ_WRITE, &imgFormat, imageWidth, imageHeight, 0, NULL, &err ); checkOpenCLErr(err,"get2DImage()"); clFinish(m_queue); size_t kernelRegion[]={imageWidth,imageHeight}; size_t kernelWorkgroup[]={1,1}; //Fill kernel with data clSetKernelArg(kernel,0,sizeof(cl_mem),&inBuffer); clSetKernelArg(kernel,1,sizeof(cl_mem),&outImg); clSetKernelArg(kernel,2,sizeof(cl_uint),&offsetX); clSetKernelArg(kernel,3,sizeof(cl_uint),&imageWidth); clSetKernelArg(kernel,4,sizeof(cl_uint),&maxVal); //Run kernel err = clEnqueueNDRangeKernel(m_queue,kernel,2,NULL,kernelRegion,kernelWorkgroup,0,NULL,NULL); checkOpenCLErr(err,"RunKernel"); clFinish(m_queue); //Check resulting data for validty cl_float* computedData = new cl_float[resSize];; size_t region[]={imageWidth,imageHeight,1}; const size_t offset[] = {0,0,0}; err = clEnqueueReadImage(m_queue,outImg,CL_TRUE,offset,region,0,0,computedData,0,NULL,NULL); checkOpenCLErr(err, "readDataFromImage()"); clFinish(m_queue); for( int i = 0; i < resSize; i++ ){ if( 
fabs(expectedData[i]-computedData[i])>0.1 ){ std::cout << "Expected: \n"; for( int j = 0; j < resSize; j++ ){ std::cout << expectedData[j] << " "; } std::cout << "\nComputed: \n"; std::cout << "\n"; for( int j = 0; j < resSize; j++ ){ std::cout << computedData[j] << " "; } std::cout << "\n"; throw std::string("Error, computed and expected data are not the same!\n"); } } }catch(std::string& e){ std::cout << "\nCaught an exception: " << e << "\n"; return 1; } std::cout << "Works fine\n"; return 0; } I also uploaded the source code for you to make it easier to test it: http://www.file-upload.net/download-3513797/strangeOpenCLError.cpp.html Please can you tell me if I've done wrong anything? Is there any mistake in the code or is this a bug in my driver? Best reagards, Alex

    Read the article

  • Binary file reading problem

    - by ScReYm0
    Ok i have problem with my code for reading binary file... First i will show you my writing code: void book_saving(char *file_name, struct BOOK *current) { FILE *out; BOOK buf; out = fopen(file_name, "wb"); if(out != NULL) { printf_s("Writting to file..."); do { if(current != NULL) { strcpy(buf.catalog_number, current->catalog_number); strcpy(buf.author, current->author); buf.price = current->price; strcpy(buf.publisher, current->publisher); strcpy(buf.title, current->title); buf.price = current->year_published; fwrite(&buf, sizeof(BOOK), 1, out); } current = current->next; }while(current != NULL); printf_s("Done!\n"); fclose(out); } } and here is my "version" for reading it back: int book_open(struct BOOK *current, char *file_name) { FILE *in; BOOK buf; BOOK *vnext; int count; int i; in = fopen("west", "rb"); printf_s("Reading database from %s...", file_name); if(!in) { printf_s("\nERROR!"); return 1; } i = fread(&buf,sizeof(BOOK), 1, in); while(!feof(in)) { if(current != NULL) { current = malloc(sizeof(BOOK)); current->next = NULL; } strcpy(current->catalog_number, buf.catalog_number); strcpy(current->title, buf.title); strcpy(current->publisher, buf.publisher); current->price = buf.price; current->year_published = buf.year_published; fread(&buf, 1, sizeof(BOOK), in); while(current->next != NULL) current = current->next; fclose(in); } printf_s("Done!"); return 0; } I just need to save my linked list in binary file and to be able to read it back ... please help me. The program just don't read it or its crash every time different situation ...
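
    A sketch of a corrected read loop: the code opens a hard-coded name instead of file_name, only allocates a node when current is already non-NULL, never links new nodes into the list, and calls fclose() inside the loop. Reading record by record and appending at the tail is simpler; struct BOOK / BOOK are as in the question, and returning the head through a pointer-to-pointer is an assumption about how the caller wants the list back.

    ```c
    int book_open(BOOK **head, const char *file_name)
    {
        FILE *in = fopen(file_name, "rb");
        if (!in) {
            printf_s("\nERROR!");
            return 1;
        }

        BOOK buf;
        BOOK **link = head;                      /* where the next node gets hooked in */
        while (fread(&buf, sizeof(BOOK), 1, in) == 1) {
            BOOK *node = malloc(sizeof(BOOK));
            *node = buf;                         /* copy the record just read */
            node->next = NULL;
            *link = node;                        /* append at the tail */
            link = &node->next;
        }

        fclose(in);                              /* close once, after the whole file is read */
        return 0;
    }
    ```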

    Read the article

  • std::ostream interface to an OLE IStream

    - by PaulH
    I have a Visual Studio 2008 C++ application using IStreams. I would like to use the IStream connection in a std::ostream. Something like this: IStream* stream = /*create valid IStream instance...*/; IStreamBuf< WIN32_FIND_DATA > sb( stream ); std::ostream os( &sb ); WIN32_FIND_DATA d = { 0 }; // send the structure along the IStream os << d; To accomplish this, I've implemented the following code: template< class _CharT, class _Traits > inline std::basic_ostream< _CharT, _Traits >& operator<<( std::basic_ostream< _CharT, _Traits >& os, const WIN32_FIND_DATA& i ) { const _CharT* c = reinterpret_cast< const _CharT* >( &i ); const _CharT* const end = c + sizeof( WIN32_FIND_DATA ) / sizeof( _CharT ); for( c; c < end; ++c ) os << *c; return os; } template< typename T > class IStreamBuf : public std::streambuf { public: IStreamBuf( IStream* stream ) : stream_( stream ) { setp( reinterpret_cast< char* >( &buffer_ ), reinterpret_cast< char* >( &buffer_ ) + sizeof( buffer_ ) ); }; virtual ~IStreamBuf() { sync(); }; protected: traits_type::int_type FlushBuffer() { int bytes = std::min< int >( pptr() - pbase(), sizeof( buffer_ ) ); DWORD written = 0; HRESULT hr = stream_->Write( &buffer_, bytes, &written ); if( FAILED( hr ) ) { return traits_type::eof(); } pbump( -bytes ); return bytes; }; virtual int sync() { if( FlushBuffer() == traits_type::eof() ) return -1; return 0; }; traits_type::int_type overflow( traits_type::int_type ch ) { if( FlushBuffer() == traits_type::eof() ) return traits_type::eof(); if( ch != traits_type::eof() ) { *pptr() = ch; pbump( 1 ); } return ch; }; private: /// data queued up to be sent T buffer_; /// output stream IStream* stream_; }; // class IStreamBuf Yes, the code compiles and seems to work, but I've not had the pleasure of implementing a std::streambuf before. So, I'd just like to know if it's correct and complete. Thanks, PaulH

    Read the article

  • Bluetooth service problem

    - by hara
    hi I need to create a custom bluetooth service and I have to develop it using c++. I read a lot of examples but I didn't success in publishing a new service with a custom UUID. I need to specify a UUID in order to be able to connect to the service from an android app. This is what i wrote: GUID service_UUID = { /* 00000003-0000-1000-8000-00805F9B34FB */ 0x00000003, 0x0000, 0x1000, {0x80, 0x00, 0x00, 0x80, 0x5F, 0x9B, 0x34, 0xFB} }; SOCKET s, s2; SOCKADDR_BTH sab if (WSAStartup(MAKEWORD(2, 2), &wsd) != 0) return 1; printf("installing a new service\n"); s = socket(AF_BTH, SOCK_STREAM, BTHPROTO_RFCOMM); if (s == INVALID_SOCKET) { printf ("Socket creation failed, error %d\n", WSAGetLastError()); return 1; } memset (&sab, 0, sizeof(sab)); sab.addressFamily = AF_BTH; sab.port = BT_PORT_ANY; sab.serviceClassId = service_UUID; if (0 != bind(s, (SOCKADDR *) &sab, sizeof(sab))) { printf ("bind() failed with error code %d\n", WSAGetLastError()); closesocket (s); return 1; } int result=sizeof(sab); getsockname(s,(SOCKADDR *) &sab, &result ); printSOCKADDR_BTH(sab); if(listen (s, 5) == 0) printf("listen() is OK! Listening for connection... :)\n"); else printf("listen() failed with error code %d\n", WSAGetLastError()); printf("waiting connection"); for ( ; ; ) { int ilen = sizeof(sab2); s2 = accept (s, (SOCKADDR *)&sab2, &ilen); printf ("accepted"); } if(closesocket(s) == 0) printf("closesocket() pretty fine!\n"); if(WSACleanup () == 0) printf("WSACleanup() is OK!\n"); return 0; When i print the SOCKADDR_BTH structure retrieved with get getsockname i get an UUID that is not the mine. Furthermore if i use the UUID read from getsockname to connect the Android application the connection fails with this exception: java.io.IOException: Service discovery failed Could you help me?? Thanks!
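
    A sketch of what appears to be missing: binding to BT_PORT_ANY and listening creates the RFCOMM server, but the service and its UUID are never advertised in the local SDP database, so the Android client's service discovery cannot find them. Registering the socket with WSASetService() is the usual extra step; the service name below is illustrative and error checks are omitted.

    ```cpp
    // Advertise the listening socket under service_UUID so SDP discovery can find it.
    CSADDR_INFO addrInfo = {0};
    addrInfo.LocalAddr.lpSockaddr      = (LPSOCKADDR)&sab;   // the SOCKADDR_BTH filled by getsockname()
    addrInfo.LocalAddr.iSockaddrLength = sizeof(sab);
    addrInfo.iSocketType = SOCK_STREAM;
    addrInfo.iProtocol   = BTHPROTO_RFCOMM;

    WSAQUERYSET qs = {0};
    qs.dwSize                  = sizeof(qs);
    qs.lpszServiceInstanceName = (LPSTR)"MyRfcommService";   // illustrative name
    qs.lpServiceClassId        = &service_UUID;
    qs.dwNameSpace             = NS_BTH;
    qs.dwNumberOfCsAddrs       = 1;
    qs.lpcsaBuffer             = &addrInfo;

    if (WSASetService(&qs, RNRSERVICE_REGISTER, 0) == SOCKET_ERROR)
        printf("WSASetService() failed: %d\n", WSAGetLastError());
    ```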

    Read the article

  • C programming doubt!!!

    - by aks
    Hi, I am having a programming doubt? Please have a look at the below mentioned code snippet and tell me the difference? int main() { struct sockaddr_in serv_addr, cli_addr; /* Initialize socket structure */ bzero((char *) &serv_addr, sizeof(serv_addr)); } Now, what if i do something similar without typecasting (char *), then also i feel it will do the same thing? Can someone clarify? /* Initialize socket structure */ bzero( &serv_addr, sizeof(serv_addr));
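
    For what it's worth, a short illustration: bzero() takes a void *, so the (char *) cast changes nothing and the two calls behave identically; memset() is the more portable way of writing the same thing.

    ```c
    bzero(&serv_addr, sizeof(serv_addr));          /* same effect as the casted version */
    memset(&serv_addr, 0, sizeof(serv_addr));      /* portable equivalent */
    ```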

    Read the article

  • C++ object sizes

    - by anon
    Suppose I have: struct Foo: public Bar { .... } Foo introduces no new member varaibles. Foo only introduces a bunch of member functions & static functions. Does any part of the C++ standard now guarantee me that: sizeof(Foo) == sizeof(Bar) ? Thanks!
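
    A small illustrative check, not a guarantee from the standard: adding only non-virtual member functions and static members does not normally change an object's size, though adding a first virtual function would, since it introduces a vtable pointer. A compile-time assertion makes the assumption explicit:

    ```cpp
    static_assert(sizeof(Foo) == sizeof(Bar),
                  "Foo is expected to add no data members and no first virtual function");
    ```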

    Read the article

  • async_write/async_read problems while trying to implement question-answer logic

    - by Max
    Good day. I'm trying to implement a question - answer logic using boost::asio. On the Client I have: void Send_Message() { .... boost::asio::async_write(server_socket, boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Client::Handle_Write_Message, this, boost::asio::placeholders::error)); .... } void Handle_Write_Message(const boost::system::error_code& error) { .... std::cout << "Message was sent.\n"; .... boost::asio::async_read(server_socket_,boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Client::Handle_Read_Message, this, boost::asio::placeholders::error)); .... } void Handle_Read_Message(const boost::system::error_code& error) { .... std::cout << "I have a new message.\n"; .... } And on the Server i have the "same - logic" code: void Read_Message() { .... boost::asio::async_read(client_socket, boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Server::Handle_Read_Message, this, boost::asio::placeholders::error)); .... } void Handle_Read_Message(const boost::system::error_code& error) { .... std::cout << "I have a new message.\n"; .... boost::asio::async_write(client_socket_,boost::asio::buffer(&Message, sizeof(Message)), boost::bind(&Server::Handle_Write_Message, this, boost::asio::placeholders::error)); .... } void Handle_Write_Message(const boost::system::error_code& error) { .... std::cout << "Message was sent back.\n"; .... } Message it's just a structure. And the output on the Client is: Message was sent. Output on the Server is: I have a new message. And that's all. After this both programs are still working but nothing happens. I tried to implement code like: if (!error) { .... } else { // close sockets and etc. } But there are no errors in reading or writing. Both programs are just running normally, but doesn't interact with each other. This code is quite obvious but i can't understand why it's not working. Thanks in advance for any advice.

    Read the article

  • C Language - \n - creating virus

    - by sagar
    #include<stdio.h> #include<conio.h> union abc { int a; int x; float g; }; struct pqr { int a; int x; float g; } ; void main() { union abc b; struct pqr c; clrscr(); b.a=10; textbackground(2); textcolor(6); cprintf(" A = %d",b.a); printf("\nUnion = %d",sizeof(b)); printf("\nStructure = %d",sizeof(c)); getch(); } Now, Save this program as virus.cpp ( or any name that you like ) I am using Turbo C comiler to complie this program & run from trubo c. ( Ctrl + F9 ) I don't know weather to ask this question at stack over flow or at super user. I am using Windows 7 & I have installed Avira AntiVir virus system. I am not here for any kind of advertisement of microsoft or antivirus system. I am just here for solution of my query. When I tried to run above program - It creates a worm (DOS/Candy). I believe there is nothing wrong in program. Oke.. Now here is something special. Execute the same program with following difference. Here the only difference is space between \n #include<stdio.h> #include<conio.h> union abc { int a; int x; float g; }; struct pqr { int a; int x; float g; } ; void main() { union abc b; struct pqr c; clrscr(); b.a=10; textbackground(2); textcolor(6); cprintf(" A = %d",b.a); printf("\n Union = %d",sizeof(b)); printf("\n Structure = %d",sizeof(c)); getch(); } The difference is only \n and space. Question is "Why my simple program is detected as virus?? " Thanks in advance for sharing your knowledge. Sagar.

    Read the article

  • C segmentation fault before/during return statement

    - by wolfPack88
    I print the value that I'm returning right before my return statement, and tell my code to print the value that was returned right after the function call. However, I get a segmentation fault after my first print statement and before my second (also interesting to note, this always happens on the third time the function is called; never the first or the second, never fourth or later). I tried printing out all of the data that I'm working on to see if the rest of my code was doing something it maybe shouldn't, but my data up to that point looks fine. Here's the function: int findHydrogen(struct Amino* amino, int nPos, float* diff, int totRead) { struct Atom* atoms; int* bonds; int numBonds; int i; int retVal; int numAtoms; numAtoms = (*amino).numAtoms; atoms = (struct Atom *) malloc(sizeof(struct Atom) * numAtoms); atoms = (*amino).atoms; numBonds = atoms[nPos].numBonds; bonds = (int *) malloc(sizeof(int) * numBonds); bonds = atoms[nPos].bonds; for(i = 0; i < (*amino).numAtoms; i++) printf("ATOM\t\t%d %s\t0001\t%f\t%f\t%f\n", i + 1, atoms[i].type, atoms[i].x, atoms[i].y, atoms[i].z); for(i = 0; i < numBonds; i++) if(atoms[bonds[i] - totRead].type[0] == 'H') { diff[0] = atoms[bonds[i] - totRead].x - atoms[nPos].x; diff[1] = atoms[bonds[i] - totRead].y - atoms[nPos].y; diff[2] = atoms[bonds[i] - totRead].z - atoms[nPos].z; retVal = bonds[i] - totRead; bonds = (int *) malloc(sizeof(int)); free(bonds); atoms = (struct Atom *) malloc(sizeof(struct Atom)); free(atoms); printf("2 %d\n", retVal); return retVal; } } As I mentioned before, it works fine the first two times I run it, the third time it prints the correct value of retVal, then seg faults somewhere before it gets to where I called the function, which I do as: hPos = findHydrogen((&aminoAcid[i]), nPos, diff, totRead); printf("%d\n", hPos);

    Read the article

  • "Use of uninitialised value" despite of memset

    - by Framester
    Hi there, I allocate a 2d array and use memset to fill it with zeros. #include<stdio.h> #include<string.h> #include<stdlib.h> void main() { int m=10; int n =10; int **array_2d; array_2d = (int**) malloc(m*sizeof(int*)); if(array_2d==NULL) { printf("\n Could not malloc 2d array \n"); exit(1); } for(int i=0;i<m;i++) { ((array_2d)[i])=malloc(n*sizeof(int)); memset(((array_2d)[i]),0,sizeof(n*sizeof(int))); } for(int i=0; i<10;i++){ for(int j=0; j<10;j++){ printf("(%i,%i)=",i,j); fflush(stdout); printf("%i ", array_2d[i][j]); } printf("\n"); } } Afterwards I use valgrind [1] to check for memory errors. I get following error: Conditional jump or move depends on uninitialised value(s) for line 24 (printf("%i ", array_2d[i][j]);). I always thought memset is the function to initialize arrays. How can I get rid off this error? Thanks! Valgrind output: ==3485== Memcheck, a memory error detector ==3485== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==3485== Using Valgrind-3.5.0-Debian and LibVEX; rerun with -h for copyright info ==3485== Command: ./a.out ==3485== (0,0)=0 (0,1)===3485== Use of uninitialised value of size 4 ==3485== at 0x409E186: _itoa_word (_itoa.c:195) ==3485== by 0x40A1AD1: vfprintf (vfprintf.c:1613) ==3485== by 0x40A8FFF: printf (printf.c:35) ==3485== by 0x8048724: main (playing_with_valgrind.c:39) ==3485== ==3485== ==3485== ---- Attach to debugger ? --- [Return/N/n/Y/y/C/c] ---- ==3485== Conditional jump or move depends on uninitialised value(s) ==3485== at 0x409E18E: _itoa_word (_itoa.c:195) ==3485== by 0x40A1AD1: vfprintf (vfprintf.c:1613) ==3485== by 0x40A8FFF: printf (printf.c:35) ==3485== by 0x8048724: main (playing_with_valgrind.c:39) [1] valgrind --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes --db-attach=yes ./a.out [gcc-cmd] gcc -std=c99 -lm -Wall -g3 playing_with_valgrind.c
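
    The likely cause, sketched below: the third memset argument is sizeof(n*sizeof(int)), i.e. the size of that expression's type (typically 4 or 8 bytes), not n*sizeof(int) bytes, so only the first element or two of each row get zeroed. Dropping the outer sizeof, or letting calloc do the zeroing, fixes it.

    ```c
    for (int i = 0; i < m; i++) {
        array_2d[i] = malloc(n * sizeof(int));
        memset(array_2d[i], 0, n * sizeof(int));   /* zero all n ints, not just sizeof(size_t) bytes */
        /* or simply: array_2d[i] = calloc(n, sizeof(int)); */
    }
    ```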

    Read the article

  • How to tell where I am in an array with pointer arithmetic?

    - by klez
    In C, I have declared a memory area like this: int cells = 512; int* memory = (int*) malloc ((sizeof (int)) * cells); And I place myself more or less in the middle int* current_cell = memory + ((cells / 2) * sizeof (int)); My question is, while I increment *current_cell, how do I know if I reached the end of the allocated memory area?
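
    A sketch: pointer arithmetic on an int * is already in units of int, so multiplying by sizeof(int) overshoots the buffer. The middle, the one-past-the-end position, and the bounds check look like this:

    ```c
    int *current_cell = memory + cells / 2;   /* the middle element */
    int *end          = memory + cells;       /* one past the last element */

    while (current_cell < end) {
        /* ... use *current_cell ... */
        ++current_cell;                       /* stop as soon as current_cell == end */
    }
    ```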

    Read the article

  • C - How to use both aio_read() and aio_write().

    - by Slav
    I implement game server where I need to both read and write. So I accept incoming connection and start reading from it using aio_read() but when I need to send something, I stop reading using aio_cancel() and then use aio_write(). Within write's callback I resume reading. So, I do read all the time but when I need to send something - I pause reading. It works for ~20% of time - in other case call to aio_cancel() fails with "Operation now in progress" - and I cannot cancel it (even within permanent while cycle). So, my added write operation never happens. How to use these functions well? What did I missed? EDIT: Used under Linux 2.6.35. Ubuntu 10 - 32 bit. Example code: void handle_read(union sigval sigev_value) { /* handle data or disconnection */ } void handle_write(union sigval sigev_value) { /* free writing buffer memory */ } void start() { const int acceptorSocket = socket(AF_INET, SOCK_STREAM, 0); struct sockaddr_in addr; memset(&addr, 0, sizeof(struct sockaddr_in)); addr.sin_family = AF_INET; addr.sin_addr.s_addr = INADDR_ANY; addr.sin_port = htons(port); bind(acceptorSocket, (struct sockaddr*)&addr, sizeof(struct sockaddr_in)); listen(acceptorSocket, SOMAXCONN); struct sockaddr_in address; socklen_t addressLen = sizeof(struct sockaddr_in); for(;;) { const int incomingSocket = accept(acceptorSocket, (struct sockaddr*)&address, &addressLen); if(incomingSocket == -1) { /* handle error ... */} else { //say socket to append outcoming messages at writing: const int currentFlags = fcntl(incomingSocket, F_GETFL, 0); if(currentFlags < 0) { /* handle error ... */ } if(fcntl(incomingSocket, F_SETFL, currentFlags | O_APPEND) == -1) { /* handle another error ... */ } //start reading: struct aiocb* readingAiocb = new struct aiocb; memset(readingAiocb, 0, sizeof(struct aiocb)); readingAiocb->aio_nbytes = MY_SOME_BUFFER_SIZE; readingAiocb->aio_fildes = socketDesc; readingAiocb->aio_buf = mySomeReadBuffer; readingAiocb->aio_sigevent.sigev_notify = SIGEV_THREAD; readingAiocb->aio_sigevent.sigev_value.sival_ptr = (void*)mySomeData; readingAiocb->aio_sigevent.sigev_notify_function = handle_read; if(aio_read(readingAiocb) != 0) { /* handle error ... */ } } } } //called at any time from server side: send(void* data, const size_t dataLength) { //... some thread-safety precautions not needed here ... const int cancellingResult = aio_cancel(socketDesc, readingAiocb); if(cancellingResult != AIO_CANCELED) { //this one happens ~80% of the time - embracing previous call to permanent while cycle does not help: if(cancellingResult == AIO_NOTCANCELED) { puts(strerror(aio_return(readingAiocb))); // "Operation now in progress" /* don't know what to do... */ } } //otherwise it's okay to send: else { aio_write(...); } }

    Read the article

  • number of elements in a static array of a predefined size

    - by cppb
    I have an array like this: int a[100]; I am filling only the first 4 elements in this array: a[0] = 1; a[1] = 2; a[2] = 3; a[3] = 4; When I do sizeof(a)/sizeof(a[0]) it returns 100. Is there a way I can get number of elements to which I have assinged a value and thus filtering out the remaining 96 unassigned elements? thanks
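
    Short answer, illustrated below: the language does not record which elements have been assigned, and sizeof only reports the declared capacity, so the count has to be tracked alongside the array (or a sentinel value used).

    ```c
    int a[100];
    size_t used = 0;        /* tracked by hand */

    a[used++] = 1;
    a[used++] = 2;
    a[used++] = 3;
    a[used++] = 4;

    /* used == 4, while sizeof(a)/sizeof(a[0]) is still 100 */
    ```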

    Read the article

  • struct size is different from typedef version?

    - by samoz
    I have the following struct declaration and typedef in my code: struct blockHeaderStruct { bool allocated; unsigned int length; }; typedef struct blockHeaderStruct blockHeader; When I do sizeof(blockheader), I get the value of 4 bytes back, but when I do sizeof(blockHeaderStruct), I get 8 bytes. Why is this happening? Why am I not getting 5 back instead?
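
    For reference, a sketch of what the sizes actually are: with common alignment rules the struct occupies 8 bytes (1 for the bool, 3 of padding, 4 for the unsigned int), and the typedef names exactly the same type, so it cannot report a different size. A 4-byte result usually means sizeof was applied to something else; note the question measures "blockheader" with a lowercase h, which would have to be some other identifier, for example a pointer variable on a 32-bit build.

    ```cpp
    printf("%zu\n", sizeof(struct blockHeaderStruct));  // 8: 1 (bool) + 3 (padding) + 4 (unsigned int)
    printf("%zu\n", sizeof(blockHeader));               // 8: same type, same size
    printf("%zu\n", sizeof(blockHeader *));             // 4 on a 32-bit build (a common source of the confusion)
    ```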

    Read the article
