Search Results

Search found 1486 results on 60 pages for 'unsigned'.

Page 55/60

  • Why does Git.pm on cygwin complain about 'Out of memory during "large" request'?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in cygwin:

        Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes
        at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3.

    268439552 is 256MB. Cygwin's maximum memory size is set to 1024MB, so I'm guessing that it has a different maximum memory size for perl? How can I increase the maximum memory size that perl programs can use?

    Update: this is where the error occurs (in Git.pm):

        while (1) {
            my $bytesLeft = $size - $bytesRead;
            last unless $bytesLeft;
            my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024;
            my $read = read($in, $blob, $bytesToRead, $bytesRead); # line 898
            unless (defined($read)) {
                $self->_close_cat_blob();
                throw Error::Simple("in pipe went bad");
            }
            $bytesRead += $read;
        }

    I've added a print before line 898 to print out $bytesToRead and $bytesRead; the result was 1024 for $bytesToRead and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and has already read 128MB. Perl's 'read' function must be out of memory and is trying to request double its memory size... is there a way to specify how much memory to request? Or is that implementation dependent?

    Update 2: while testing memory allocation in cygwin, this C program's output was 1536MB:

        int main() {
            unsigned int bit = 0x40000000, sum = 0;
            char *x;
            while (bit > 4096) {
                x = malloc(bit);
                if (x) sum += bit;
                bit >>= 1;
            }
            printf("%08x bytes (%.1fMb)\n", sum, sum/1024.0/1024.0);
            return 0;
        }

    while this perl program crashed if the file size was greater than 384MB (but succeeded if the file size was less):

        open(F, "<400") or die("can't read\n");
        $size = -s "400";
        $read = read(F, $s, $size);

    The error is similar:

        Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.

  • c++ quick sort running time

    - by chnet
    I have a question about the quick sort algorithm. I implemented quick sort and experimented with it. The elements in the initial unsorted array are random numbers chosen from a certain range. I found that the range of the random numbers affects the running time. For example, the running time for 1,000,000 random numbers chosen from the range (1 - 2,000) is 40 seconds, while it takes 9 seconds if the 1,000,000 numbers are chosen from the range (1 - 10,000). But I do not know how to explain it. In class, we talked about how the pivot value can affect the depth of the recursion tree. For my implementation, the last value of the array is chosen as the pivot value. I do not use a randomized scheme to select the pivot value.

        int partition(vector<int> &vec, int p, int r) {
            int x = vec[r];
            int i = (p-1);
            int j = p;
            while(1) {
                if (vec[j] <= x){
                    i = (i+1);
                    int temp = vec[j];
                    vec[j] = vec[i];
                    vec[i] = temp;
                }
                j=j+1;
                if (j==r) break;
            }
            int temp = vec[i+1];
            vec[i+1] = vec[r];
            vec[r] = temp;
            return i+1;
        }

        void quicksort(vector<int> &vec, int p, int r) {
            if (p<r){
                int q = partition(vec, p, r);
                quicksort(vec, p, q-1);
                quicksort(vec, q+1, r);
            }
        }

        void random_generator(int num, int *array) {
            srand((unsigned)time(0));
            int random_integer;
            for(int index=0; index<num; index++){
                random_integer = (rand()%10000)+1;
                *(array+index) = random_integer;
            }
        }

        int main() {
            int array_size = 1000000;
            int input_array[array_size];
            random_generator(array_size, input_array);
            vector<int> vec(input_array, input_array+array_size);
            clock_t t1, t2;
            t1 = clock();
            quicksort(vec, 0, (array_size - 1)); // call quick sort
            int length = vec.size();
            t2 = clock();
            float diff = ((float)t2 - (float)t1);
            cout << diff << endl;
            cout << diff/CLOCKS_PER_SEC << endl;
        }
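
    A narrower range means many duplicate keys, and with this Lomuto-style partition every key equal to the pivot is swapped to the low side, so duplicate-heavy input produces badly unbalanced splits and deep recursion - consistent with 1-2,000 being much slower than 1-10,000. A hedged sketch of a three-way ("Dutch national flag") partition that groups keys equal to the pivot (the names and interface are illustrative, not from the original post):

        #include <utility>
        #include <vector>

        // Partition vec[p..r] around pivot vec[r]; afterwards vec[lt..gt] == pivot,
        // keys < pivot are left of lt, and keys > pivot are right of gt.
        static std::pair<int,int> partition3(std::vector<int> &vec, int p, int r) {
            int pivot = vec[r];
            int lt = p, gt = r, i = p;
            while (i <= gt) {
                if (vec[i] < pivot)      std::swap(vec[i++], vec[lt++]);
                else if (vec[i] > pivot) std::swap(vec[i], vec[gt--]);
                else                     ++i;
            }
            return std::make_pair(lt, gt);
        }

        static void quicksort3(std::vector<int> &vec, int p, int r) {
            if (p >= r) return;
            std::pair<int,int> m = partition3(vec, p, r);
            quicksort3(vec, p, m.first - 1);   // keys < pivot
            quicksort3(vec, m.second + 1, r);  // keys > pivot
        }

    With this scheme the whole block of duplicates is excluded from both recursive calls, so runs of equal keys no longer degrade the split.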

  • SWIG & Java Use of carrays.i and array_functions for C Array of Strings

    - by c12
    I have the below configuration where I'm trying to create a test C function that returns a pointer to an array of strings and then wrap that using SWIG's carrays.i and array_functions so that I can access the array elements in Java.

    Uncertainties:
    - %array_functions(char, SWIGArrayUtility); - not sure if char is correct
    - inline char *getCharArray() - not sure if the C function signature is correct
    - String result = getCharArray(); - a String return seems odd, but that's what is generated by SWIG

    SWIG.i:

        %module Test
        %{
        #include "test.h"
        %}
        %include <carrays.i>
        %array_functions(char, SWIGArrayUtility);
        %include "test.h"
        %pragma(java) modulecode=%{
          public static char[] getCharArrayImpl() {
            final int num = numFoo();
            char ret[] = new char[num];
            String result = getCharArray();
            for (int i = 0; i < num; ++i) {
              ret[i] = SWIGArrayUtility_getitem(result, i);
            }
            return ret;
          }
        %}

    Inline header C function:

        #ifndef TEST_H
        #define TEST_H
        inline static unsigned short numFoo() {
          return 3;
        }
        inline char *getCharArray(){
          static char* foo[3];
          foo[0]="ABC";
          foo[1]="5CDE";
          foo[2]="EEE6";
          return foo;
        }
        #endif

    Java main tester:

        public class TestMain {
          public static void main(String[] args) {
            System.loadLibrary("TestJni");
            char[] test = Test.getCharArrayImpl();
            System.out.println("length=" + test.length);
            for(int i=0; i < test.length; i++){
              System.out.println(test[i]);
            }
          }
        }

    Java main tester output:

        length=3
        ?
        ?
        ,

    SWIG generated Java APIs:

        public class Test {
          public static String new_SWIGArrayUtility(int nelements) {
            return TestJNI.new_SWIGArrayUtility(nelements);
          }

          public static void delete_SWIGArrayUtility(String ary) {
            TestJNI.delete_SWIGArrayUtility(ary);
          }

          public static char SWIGArrayUtility_getitem(String ary, int index) {
            return TestJNI.SWIGArrayUtility_getitem(ary, index);
          }

          public static void SWIGArrayUtility_setitem(String ary, int index, char value) {
            TestJNI.SWIGArrayUtility_setitem(ary, index, value);
          }

          public static int numFoo() {
            return TestJNI.numFoo();
          }

          public static String getCharArray() {
            return TestJNI.getCharArray();
          }

          public static char[] getCharArrayImpl() {
            final int num = numFoo();
            char ret[] = new char[num];
            String result = getCharArray();
            System.out.println("result=" + result);
            for (int i = 0; i < num; ++i) {
              ret[i] = SWIGArrayUtility_getitem(result, i);
              System.out.println("ret[" + i + "]=" + ret[i]);
            }
            return ret;
          }
        }

  • OpenGL, problems with GL_MODELVIEW GL_PROJECTION...

    - by Marcos Roriz
    Guys, I'm trying to finish up my homework but I'm having some problems with these models in OpenGL... any idea why my draw is not happening? One strange thing is that if I change to gluPerspective it works.

        #include <GL/glut.h>
        #include <stdlib.h>
        #include <stdio.h>

        static int shoulder = 0;
        static int elbow = 0;

        void init(void) {
            glClearColor(1.0, 1.0, 1.0, 0.0);
        }

        void display(void) {
            glClear(GL_COLOR_BUFFER_BIT);
            glPushMatrix();
            /* BASE */
            glRotatef((GLfloat) shoulder, 0.0, 0.0, 1.0);
            glTranslatef(1.0, 0.0, 0.0);
            glPushMatrix();
            //glScalef(2.0, 0.4, 1.0);
            glBegin(GL_QUADS);
            glColor3f(0, 0, 0);
            glVertex2f(0.0, 0.0);
            glVertex2f(0.0, 10.0);
            glVertex2f(10.0, 10.0);
            glVertex2f(10.0, 0.0);
            glEnd();
            glPopMatrix();
            glPopMatrix();
            glutSwapBuffers();
        }

        void reshape(int w, int h) {
            glViewport(0, 0, (GLsizei) w, (GLsizei) h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho((GLfloat)-w/2, (GLfloat)w/2, (GLfloat)-h/2, (GLfloat)h/2, -1.0, 1.0); // orthographic projection mode
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glTranslatef(0.0, 0.0, -5.0);
        }

        void keyboard(unsigned char key, int x, int y) {
            switch (key) {
            case 's':
                shoulder = (shoulder + 5) % 360;
                glutPostRedisplay();
                break;
            case 'S':
                shoulder = (shoulder - 5) % 360;
                glutPostRedisplay();
                break;
            case 'e':
                elbow = (elbow + 5) % 360;
                glutPostRedisplay();
                break;
            case 'E':
                elbow = (elbow - 5) % 360;
                glutPostRedisplay();
                break;
            case 27:
                exit(0);
                break;
            default:
                break;
            }
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(800, 400);
            glutInitWindowPosition(100, 100);
            glutCreateWindow(argv[0]);
            init();
            glutDisplayFunc(display);
            glutReshapeFunc(reshape);
            glutKeyboardFunc(keyboard);
            glutMainLoop();
            return 0;
        }
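
    One thing worth checking (an observation about the posted code, not a confirmed diagnosis): reshape() sets an orthographic volume whose z range is only [-1, 1] and then leaves glTranslatef(0.0, 0.0, -5.0) on the modelview stack, so the quad drawn at z = 0 ends up at z = -5 and is clipped away; gluPerspective "works" because z = -5 lies inside its frustum. A minimal sketch of a reshape that keeps the geometry inside the ortho volume:

        void reshape(int w, int h) {
            glViewport(0, 0, (GLsizei) w, (GLsizei) h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            /* widen the z range so geometry translated to z = -5 is not clipped */
            glOrtho(-w/2.0, w/2.0, -h/2.0, h/2.0, -10.0, 10.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity(); /* or simply drop the glTranslatef(0, 0, -5) */
        }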

  • Memory management with Objective-C Distributed Objects: my temporary instances live forever!

    - by jkp
    I'm playing with Objective-C Distributed Objects and I'm having some problems understanding how memory management works under the system. The example given below illustrates my problem.

    Protocol.h:

        #import <Foundation/Foundation.h>

        @protocol DOServer
        - (byref id)createTarget;
        @end

    Server.m:

        #import <Foundation/Foundation.h>
        #import "Protocol.h"

        @interface DOTarget : NSObject
        @end

        @interface DOServer : NSObject < DOServer >
        @end

        @implementation DOTarget
        - (id)init {
            if ((self = [super init])) {
                NSLog(@"Target created");
            }
            return self;
        }

        - (void)dealloc {
            NSLog(@"Target destroyed");
            [super dealloc];
        }
        @end

        @implementation DOServer
        - (byref id)createTarget {
            return [[[DOTarget alloc] init] autorelease];
        }
        @end

        int main() {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            DOServer *server = [[DOServer alloc] init];
            NSConnection *connection = [[NSConnection new] autorelease];
            [connection setRootObject:server];
            if ([connection registerName:@"test-server"] == NO) {
                NSLog(@"Failed to vend server object");
            }
            else [[NSRunLoop currentRunLoop] run];
            [pool drain];
            return 0;
        }

    Client.m:

        #import <Foundation/Foundation.h>
        #import "Protocol.h"

        int main() {
            unsigned i = 0;
            for (; i < 3; i ++) {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                id server = [NSConnection rootProxyForConnectionWithRegisteredName:@"test-server" host:nil];
                [server setProtocolForProxy:@protocol(DOServer)];
                NSLog(@"Created target: %@", [server createTarget]);
                [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
                [pool drain];
            }
            return 0;
        }

    The issue is that any remote objects created by the root proxy are not released when their proxy counterparts in the client go out of scope. According to the documentation:

        When an object's remote proxy is deallocated, a message is sent back to the receiver
        to notify it that the local object is no longer shared over the connection.

    I would therefore expect that as each DOTarget goes out of scope (each time around the loop) its remote counterpart would be deallocated, since there is no other reference to it being held on the remote side of the connection. In reality this does not happen: the temporary objects are only deallocated when the client application quits, or more accurately, when the connection is invalidated. I can force the temporary objects on the remote side to be deallocated by explicitly invalidating the NSConnection object I'm using each time around the loop and creating a new one, but somehow this just feels wrong. Is this the correct behaviour from DO? Should all temporary objects live as long as the connection that created them? Are connections therefore to be treated as temporary objects which should be opened and closed with each series of requests against the server? Any insights would be appreciated.

  • Any socket programmers out there? How can I obtain the IPv4 address of the client?

    - by Dr Dork
    Hello! I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server-side code set up to listen for incoming TCP connection requests from clients after the parent socket has been created and is set to listen...

        int sockfd, newfd;
        unsigned int len;
        socklen_t sin_size;
        char msg[]="Test message sent";
        char buf[MAXLEN];
        int st, rv;
        struct addrinfo hints, *serverinfo, *p;
        struct sockaddr_storage client;
        char ip[INET6_ADDRSTRLEN];
        .
        .
        //parent socket creation and listen code omitted for simplicity
        .
        //wait for connection requests from clients
        while(1)
        {
            //Returns the socketID and address of client connecting to socket
            if( ( newfd = accept(sockfd, (struct sockaddr *)&client, &len) ) == -1 ){
                perror("Accept");
                exit(-1);
            }
            if( (rv = recv(newfd, buf, MAXLEN-1, 0 )) == -1) {
                perror("Recv");
                exit(-1);
            }
            struct sockaddr_in *clientAddr = ( struct sockaddr_in *) get_in_addr((struct sockaddr *)&client);
            inet_ntop(client.ss_family, clientAddr, ip, sizeof ip);
            printf("Receive from %s: query type is %s\n", ip, buf);
            if( ( st = send(newfd, msg, strlen(msg), 0)) == -1 ) {
                perror("Send");
                exit(-1);
            }
            //ntohs is used to avoid big-endian and little endian compatibility issues
            printf("Send %d byte to port %d\n", ntohs(clientAddr->sin_port) );
            close(newfd);
        }
        }

    I found the get_in_addr function online and placed it at the top of my code, and use it to obtain the IP address of the client connecting...

        // get sockaddr, IPv4 or IPv6:
        void *get_in_addr(struct sockaddr *sa)
        {
            if (sa->sa_family == AF_INET) {
                return &(((struct sockaddr_in*)sa)->sin_addr);
            }
            return &(((struct sockaddr_in6*)sa)->sin6_addr);
        }

    but the function always returns the IPv6 IP address, since that's what the sa_family property is set as. My question is: is the IPv4 IP address stored anywhere in the data I'm using and, if so, how can I access it? Thanks so much in advance for all your help!
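
    If the client connected over IPv4, the sockaddr_storage really does hold a sockaddr_in, and ss_family tells you which structure to cast to; if the listening socket is IPv6, IPv4 clients usually show up as IPv4-mapped IPv6 addresses (::ffff:a.b.c.d), which IN6_IS_ADDR_V4MAPPED can detect. A hedged sketch of family-aware extraction (the helper name is illustrative):

        #include <arpa/inet.h>
        #include <sys/socket.h>

        /* Copy the client's printable address into ip[]; handles both families. */
        static void client_ip(const struct sockaddr_storage *client,
                              char *ip, size_t iplen)
        {
            if (client->ss_family == AF_INET) {              /* plain IPv4 */
                const struct sockaddr_in *a4 = (const struct sockaddr_in *)client;
                inet_ntop(AF_INET, &a4->sin_addr, ip, iplen);
            } else {                                         /* AF_INET6 */
                const struct sockaddr_in6 *a6 = (const struct sockaddr_in6 *)client;
                inet_ntop(AF_INET6, &a6->sin6_addr, ip, iplen);
            }
        }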

  • User-defined copy-ctor, and copy-ctors further down the chain - compiler bug? Programmer's brain bug?

    - by J.Colmsee
    Hi. I have a little problem, and I am not sure if it's a compiler bug or stupidity on my side. I have this struct:

        struct BulletFXData {
            int time_next_fx_counter;
            int next_fx_steps;
            Particle particles[2]; //this is the interesting one
            ParticleManager::ParticleId particle_id[2];
        };

    The member "Particle particles[2]" has a self-made kind of smart-ptr in it (a resource-counted texture class). This smart-pointer has a default constructor that initializes the ptr to 0 (but that is not important). I also have another struct, containing the BulletFXData struct:

        struct BulletFX {
            BulletFXData data;
            BulletFXRenderFunPtr render_fun_ptr;
            BulletFXUpdateFunPtr update_fun_ptr;
            BulletFXExplosionFunPtr explode_fun_ptr;
            BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr;

            BulletFX(
                BulletFXData data,
                BulletFXRenderFunPtr render_fun_ptr,
                BulletFXUpdateFunPtr update_fun_ptr,
                BulletFXExplosionFunPtr explode_fun_ptr,
                BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr)
                : data(data),
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }

            /*
            //USER DEFINED copy-ctor. if it's defined things go crazy
            BulletFX(const BulletFX& rhs)
                : data(data), //this line of code seems to do a plain memory-copy without calling the right ctors
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }
            */
        };

    If I use the user-defined copy-ctor, my smart-pointer class goes crazy, and it seems that the copy-ctor / assignment operator aren't called as they should be. So - does this all make sense? It seems as if my own copy-ctor of struct BulletFX should do exactly what the compiler-generated one would, but it seems to forget to call the right constructors down the chain. Compiler bug? Me being stupid? Sorry about the big code - some small example could have illustrated it too, but often you guys ask for the real code, so well, here it is :D

    EDIT - more info:

        typedef unsigned int ParticleId;

    Particle has no user-defined copy-ctor, but has a member of type:

        Particle {
            ....
            Resource<Texture> tex_res;
            ...
        }

    Resource is a smart-pointer class and has all ctors defined (also the assignment operator), and it seems that Resource is copied bitwise.

    EDIT: henrik solved it... data(data) is stupid of course! It should of course be rhs.data!!! Sorry for the huge amount of code with a very little bug in it!!! (Guess you shouldn't code at 1 in the morning :D)
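
    For reference, the fix described in the edit, spelled out: each member-initializer must pull from rhs; data(data) just initializes data from itself, reading uninitialized memory, which is why the smart pointers inside never see a proper copy:

        BulletFX(const BulletFX& rhs)
            : data(rhs.data), // was data(data)
              render_fun_ptr(rhs.render_fun_ptr),
              update_fun_ptr(rhs.update_fun_ptr),
              explode_fun_ptr(rhs.explode_fun_ptr),
              lifetime_over_fun_ptr(rhs.lifetime_over_fun_ptr)
        {
        }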

  • Symfony2 Entity to array

    - by Adriano Pedro
    I'm trying to migrate my flat PHP project to Symfony2, but it's turning out to be very hard. For instance, I have a table of product specifications that holds several specifications, distinguishable by the "cat" attribute in that Extraspecs DB table. Therefore I've created an Entity for that table and want to make an array of just the specifications with "cat" = 0... I suppose the code is this one, right?

        $typeavailable = $this->getDoctrine()
            ->getRepository('LabsCatalogBundle:ProductExtraspecsSpecs')
            ->findBy(array('cat' => '0'));

    Now how can I put this in an array to work with a form like this?

        $form = $this
            ->createFormBuilder($product)
            ->add('specs', 'choice', array('choices' => $typeavailableArray), 'multiple' => true)

    Thank you in advance :)

    Thank you all.. But now I've come across another problem.. In fact, I'm building a form from an existing object:

        $form = $this
            ->createFormBuilder($product)
            ->add('name', 'text')
            ->add('genspec', 'choice', array('choices' => array('0' => 'None', '1' => 'General', '2' => 'Specific')))
            ->add('isReg', 'choice', array('choices' => array('0' => 'Material', '1' => 'Reagent', '2' => 'Antibody', '3' => 'Growth Factors', '4' => 'Rodents', '5' => 'Lagomorphs')))

    So, in this case my current value is named "extraspecs", and I've added it like:

        ->add('extraspecs', 'entity', array(
            'label' => 'desc',
            'empty_value' => ' --- ',
            'class' => 'LabsCatalogBundle:ProductExtraspecsSpecs',
            'property' => 'specsid',
            'query_builder' => function(EntityRepository $er) {
                return $er
                    ->createQueryBuilder('e');

    But "extraspecs" comes from a oneToMany relationship where every product has several extraspecs... Here is the ORM:

        Labs\CatalogBundle\Entity\Product:
            type: entity
            table: orders__regmat
            id:
                id:
                    type: integer
                    generator: { strategy: AUTO }
            fields:
                name:
                    type: string
                    length: 100
                catnumber:
                    type: string
                    scale: 100
                brand:
                    type: integer
                    scale: 10
                company:
                    type: integer
                    scale: 10
                size:
                    type: decimal
                    scale: 10
                units:
                    type: integer
                    scale: 10
                price:
                    type: decimal
                    scale: 10
                reqcert:
                    type: integer
                    scale: 1
                isReg:
                    type: integer
                    scale: 1
                genspec:
                    type: integer
                    scale: 1
            oneToMany:
                extraspecs:
                    targetEntity: ProductExtraspecs
                    mappedBy: product

        Labs\CatalogBundle\Entity\ProductExtraspecs:
            type: entity
            table: orders__regmat__extraspecs
            fields:
                extraspecid:
                    id: true
                    type: integer
                    unsigned: false
                    nullable: false
                    generator:
                        strategy: IDENTITY
                regmatid:
                    type: integer
                    scale: 11
                spec:
                    type: integer
                    scale: 11
                attrib:
                    type: string
                    length: 20
                value:
                    type: string
                    length: 200
            lifecycleCallbacks: { }
            manyToOne:
                product:
                    targetEntity: Product
                    inversedBy: extraspecs
                    joinColumn:
                        name: regmatid
                        referencedColumnName: id

    How should I do this? Thank you!!!

  • How to access thread data outside a thread

    - by Quandary
    Question: I start the MS text-to-speech engine in a thread, in order to avoid a crash on DLL_attach. It starts fine, and the text-to-speech engine gets initialized, but I can't access ISpVoice outside the thread. How can I access ISpVoice outside the thread? It's a global variable after all...

        #include <windows.h>
        #include <sapi.h>
        #include "XPThreads.h"

        ISpVoice * pVoice = NULL;

        unsigned long init_engine_thread(void* param)
        {
            Sleep(5000);
            printf("lolthread\n");
            //HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
            HRESULT hr = CoInitialize(NULL);
            if(FAILED(hr) ) {
                MessageBox(NULL, TEXT("Failed To Initialize"), TEXT("Error"), 0);
                char buffer[2000];
                sprintf(buffer, "An error occured: 0x%08X.\n", hr);
                FILE * pFile = fopen ( "c:\\temp\\CoInitialize_dll.txt" , "w" );
                fwrite (buffer , 1 , strlen(buffer) , pFile );
                fclose (pFile);
            }
            else {
                printf("trying to create instance.\n");
                //HRESULT hr = CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL, IID_ISpVoice, (void **) &pVoice);
                //hr = CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL, IID_ISpVoice, (void **) &pVoice);
                //HRESULT hr = CoCreateInstance(__uuidof(ISpVoice), NULL, CLSCTX_INPROC_SERVER, IID_ISpVoice, (void **) &pVoice);
                HRESULT hr = CoCreateInstance(__uuidof(SpVoice), NULL, CLSCTX_ALL, IID_ISpVoice, (void **) &pVoice);
                if( SUCCEEDED( hr ) ) {
                    printf("Succeeded\n");
                    hr = pVoice->Speak(L"The text to speech engine has been successfully initialized.", 0, NULL);
                }
                else {
                    printf("failed\n");
                    MessageBox(NULL, TEXT("Failed To Create COM instance"), TEXT("Error"), 0);
                    char buffer[2000];
                    sprintf(buffer, "An error occured: 0x%08X.\n", hr);
                    FILE * pFile = fopen ( "c:\\temp\\CoCreateInstance_dll.txt" , "w" );
                    fwrite (buffer , 1 , strlen(buffer) , pFile );
                    fclose (pFile);
                }
            }
            if(pVoice != NULL) {
                pVoice->Release();
                pVoice = NULL;
            }
            CoUninitialize();
            return NULL;
        }

        XPThreads* ptrThread = new XPThreads(init_engine_thread);

        BOOL APIENTRY DllMain( HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
        {
            switch (ul_reason_for_call) {
            case DLL_PROCESS_ATTACH:
                //init_engine();
                LoadLibrary(TEXT("ole32.dll"));
                ptrThread->Run();
                break;
            case DLL_THREAD_ATTACH:
                break;
            case DLL_THREAD_DETACH:
                break;
            case DLL_PROCESS_DETACH:
                break;
            }
            return TRUE;
        }
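
    Two things stand out in the posted code (observations, not a definitive fix): the thread releases pVoice and sets it to NULL before it exits, so the global never outlives the thread; and a COM interface pointer created in one apartment generally cannot be used directly from another thread without marshaling. A minimal sketch of the usual marshaling pattern, assuming the voice is kept alive in the owning thread:

        // In the owning thread, after CoCreateInstance succeeds:
        IStream *stream = NULL;
        HRESULT hr = CoMarshalInterThreadInterfaceInStream(IID_ISpVoice, pVoice, &stream);
        // ...hand 'stream' to the other thread (and do NOT Release pVoice here)...

        // In the consuming thread (which has called CoInitialize itself):
        ISpVoice *voice = NULL;
        hr = CoGetInterfaceAndReleaseStream(stream, IID_ISpVoice, (void **)&voice);
        if (SUCCEEDED(hr)) {
            voice->Speak(L"Hello from another thread.", 0, NULL);
            voice->Release();
        }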

  • Python and C++ Sockets converting packet data

    - by yeus
    First of all, to clarify my goal: there exist two programs written in C in our laboratory. I am working on a (bidirectional) proxy server for them, which will also manipulate the data, and I want to write that proxy server in Python. It is important to know that I know close to nothing about these two programs; I only know the definition file of the packets. Now, assuming a packet definition in one of the C++ programs reads like this:

        unsigned char Packet[0x32]; // Packet[Length]
        int z=0;
        Packet[0]=0x00; // Spare
        Packet[1]=0x32; // Length
        Packet[2]=0x01; // Source
        Packet[3]=0x02; // Destination
        Packet[4]=0x01; // ID
        Packet[5]=0x00; // Spare
        for(z=0;z<=24;z+=8)
        {
            Packet[9-z/8]=((int)(720000+armcontrolpacket->dof0_rot*1000)/(int)pow((double)2,(double)z));
            Packet[13-z/8]=((int)(720000+armcontrolpacket->dof0_speed*1000)/(int)pow((double)2,(double)z));
            Packet[17-z/8]=((int)(720000+armcontrolpacket->dof1_rot*1000)/(int)pow((double)2,(double)z));
            Packet[21-z/8]=((int)(720000+armcontrolpacket->dof1_speed*1000)/(int)pow((double)2,(double)z));
            Packet[25-z/8]=((int)(720000+armcontrolpacket->dof2_rot*1000)/(int)pow((double)2,(double)z));
            Packet[29-z/8]=((int)(720000+armcontrolpacket->dof2_speed*1000)/(int)pow((double)2,(double)z));
            Packet[33-z/8]=((int)(720000+armcontrolpacket->dof3_rot*1000)/(int)pow((double)2,(double)z));
            Packet[37-z/8]=((int)(720000+armcontrolpacket->dof3_speed*1000)/(int)pow((double)2,(double)z));
            Packet[41-z/8]=((int)(720000+armcontrolpacket->dof4_rot*1000)/(int)pow((double)2,(double)z));
            Packet[45-z/8]=((int)(720000+armcontrolpacket->dof4_speed*1000)/(int)pow((double)2,(double)z));
            Packet[49-z/8]=((int)armcontrolpacket->timestamp/(int)pow(2.0,(double)z));
        }
        if(SendPacket(sock,(char*)&Packet,sizeof(Packet)))
            return 1;
        return 0;

    What would be the easiest way to receive that data, convert it into a readable Python format, manipulate it and send it forward to the receiver?

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use the glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% CPU time on a 3GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printf's ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;
            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;
            if ( WSAStartup ( 0x0202, &wsaData ) )
            {
                printf("WSAStartup failed!\n");
                exit(0); // :)
                WSACleanup();
            }
            sock = socket( AF_INET, SOCK_DGRAM, 0 );
            if (sock == INVALID_SOCKET)
            {
                printf("invalid socket!\n");
                exit(0);
            }
            if (bind(sock,(struct sockaddr*) &target, sizeof(struct sockaddr_in) ) == SOCKET_ERROR)
            {
                printf("failed to bind to port!\n");
                exit(0);
            }
            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN );

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode, and that it's "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's ok to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm dll.

  • How to calculate the maximum block length in C of a binary number

    - by user1664272
    I want to reiterate that I am not asking for direct code for my problem; rather, I want information on how to reach the solution. I asked a question earlier about counting specific integers in binary code. Now I would like to ask how one goes about counting the maximum block length within binary code. I honestly just want to know where to get started and what exactly the question means by writing code to figure out the "maximum block length" of an inputted integer's binary representation.

    Ex: Input 456
    Output: 111001000
    Number of 1's: 4
    Maximum Block Length: ?

    Here is my code so far for reference, if you need to see where I'm coming from:

        #include <stdio.h>

        int main(void)
        {
            int integer; // number to be entered by user
            int i, b, n;
            unsigned int ones;

            printf("Please type in a decimal integer\n"); // prompt
            fflush(stdout);
            scanf("%d", &integer); // read an integer
            if(integer < 0)
            {
                printf("Input value is negative!"); // if integer is less than
                fflush(stdout);                     // zero, print statement
                return;
            }
            else{
                printf("Binary Representation:\n", integer);
                fflush(stdout);} //if integer is greater than zero, print statement

            for(i = 31; i >= 0; --i) //code to convert inputted integer to binary form
            {
                b = integer >> i;
                if(b&1){
                    printf("1");
                    fflush(stdout);
                }
                else{
                    printf("0");
                    fflush(stdout);
                }
            }
            printf("\n");
            fflush(stdout);

            ones = 0; //empty value to store how many 1's are in binary code
            while(integer) //while loop to count number of 1's within binary code
            {
                ++ones;
                integer &= integer - 1;
            }
            printf("Number of 1's in Binary Representation: %d\n", ones); // prints number
            fflush(stdout); //of ones in binary code

            printf("Maximum Block Length: \n");
            fflush(stdout);
            printf("\n");
            fflush(stdout);
            return 0;
        } //end function main
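
    "Maximum block length" is most naturally read as the longest run of consecutive 1-bits, so for 111001000 it would be 3. A hedged sketch using the same shift-and-test loop the program already uses for printing; note it must run before the counting loop above, since that loop zeroes out integer:

        unsigned int run = 0, max_run = 0;
        for (i = 31; i >= 0; --i)
        {
            if ((integer >> i) & 1)
            {
                ++run; // extend the current block of 1's
                if (run > max_run)
                    max_run = run; // remember the longest block seen so far
            }
            else
                run = 0; // a 0 ends the current block
        }
        printf("Maximum Block Length: %u\n", max_run);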

  • C++ using cdb_read returns extra characters on some reads

    - by Moe Be
    Hi all, I am using the following function to loop through a couple of open CDB hash tables. Sometimes the value for a given key is returned along with an additional character (specifically a Ctrl-P: a DLE character, 0x10/0o020). I have checked the cdb key/value pairs with a couple of different utilities and none of them show any additional characters appended to the values. I get the character whether I use cdb_read() or cdb_getdata() (the commented-out code below). If I had to guess, I would say I am doing something wrong with the buffer I create to get the result from the cdb functions. Any advice or assistance is greatly appreciated.

        char* HashReducer::getValueFromDb(const string &id, vector <struct cdb *> &myHashFiles)
        {
            unsigned char hex_value[BUFSIZ];
            size_t hex_len;
            //construct a real hex (not ascii-hex) value to use for database lookups
            atoh(id, hex_value, &hex_len);
            char *value = NULL;
            vector <struct cdb *>::iterator my_iter = myHashFiles.begin();
            vector <struct cdb *>::iterator my_end = myHashFiles.end();
            try
            {
                //while there are more databases to search and we have not found a match
                for(; my_iter != my_end && !value ; my_iter++)
                {
                    //cerr << "\n looking for this MD5:" << id << " hex(" << hex_value << ") \n";
                    if (cdb_find(*my_iter, hex_value, hex_len)){
                        //cerr << "\n\nI found the key " << id << " and it is " << cdb_datalen(*my_iter) << " long\n\n";
                        value = (char *)malloc(cdb_datalen(*my_iter));
                        cdb_read(*my_iter,value,cdb_datalen(*my_iter),cdb_datapos(*my_iter));
                        //value = (char *)cdb_getdata(*my_iter);
                        //cerr << "\n\nThe value is:" << value << " len is:" << strlen(value)<< "\n\n";
                    };
                }
            }
            catch (...){}
            return value;
        }
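
    One detail worth noting (an observation, not a confirmed diagnosis): cdb_datalen() is the raw record length, and cdb_read() copies exactly that many bytes with no NUL terminator, so treating the buffer as a C string (strlen, streaming with <<) walks past the end into whatever byte happens to follow - which would look exactly like a stray extra character. A minimal sketch of the terminated version:

        value = (char *)malloc(cdb_datalen(*my_iter) + 1);   /* +1 for the NUL */
        if (value) {
            cdb_read(*my_iter, value, cdb_datalen(*my_iter), cdb_datapos(*my_iter));
            value[cdb_datalen(*my_iter)] = '\0';             /* terminate explicitly */
        }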

  • How to set background in OpenGL captured image from OpenCV

    - by user325487
    Hey all, I'm relatively new to ARToolKitPlus and OpenGL, and I'm having a tough time getting the image I capture through OpenCV to be set as the background image in OpenGL... I also cannot convert the image I take through the camera using OpenCV to be scaled to 320x280 from 640x480... I also have to save my image and load it for things to work... Here's my code:

        ////////////
        int findMarker()
        {
            IplImage* image = cvQueryFrame( capture );

            if( !capture ) {
                fprintf( stderr, "ERROR: capture is NULL \n" );
                getchar();
                return -1;
            }
            if( !image ) {
                fprintf( stderr, "ERROR: frame is null...\n" );
                getchar();
            }
            //cvShowImage( "Capture", frame );
            //image = cvCloneImage( frame );

            try{
                if(!cvSaveImage("immagineTmp.jpg",image))
                    printf("Could not save\n");
            }
            catch(void*) {}
            image = cvLoadImage("immagineTmp.jpg", 1);
            cvShowImage( "Image", image );

            glLoadIdentity();

            //////////////
            glDisable(GL_DEPTH_TEST);
            glOrtho(0,640,0,480,-1,1);
            glGenTextures(1, &bgid);
            glBindTexture(GL_TEXTURE_2D, bgid);
            // Create Linear Filtered Texture
            glBindTexture(GL_TEXTURE_2D, bgid);
            glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, 3, image->width, image->height, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, image->imageData);
            glBindTexture(GL_TEXTURE_2D, bgid);
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.2f, -1.0f, -2.0f);
            glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.2f, -1.0f, -2.0f);
            glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.2f,  1.0f, -2.0f);
            glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.2f,  1.0f, -2.0f);
            glEnd();
            glEnable(GL_DEPTH_TEST);
            glLoadIdentity();

            ////////////
            // do the OpenGL camera setup
            glMatrixMode(GL_PROJECTION);
            glLoadMatrixf(tracker->getProjectionMatrix());

            int markerId = tracker->calc((unsigned char *)(image->imageData));
            float conf = tracker->getConfidence();

            // use the result of calc() to setup the OpenGL transformation
            glMatrixMode(GL_MODELVIEW);
            glLoadMatrixf(tracker->getModelViewMatrix());

            if(markerId!=-1) {
                printf("\n\nFound marker %d (confidence %d%%)\n\nPose-Matrix:\n ", markerId, (int(conf*100.0f)));
                for(int i=0; i<16; i++)
                    printf("%.2f %s", tracker->getModelViewMatrix()[i], (i%4==3)?"\n " : "");
            }

            cvReleaseImage(&image);
            return 0;
        }
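
    For the scaling part, OpenCV can resize the frame in memory, which also removes the need to save the image and load it again (a hedged sketch; CV_INTER_LINEAR is just one reasonable interpolation choice, and 320x280 is taken from the question as-is):

        /* resize the captured 640x480 frame to 320x280 without touching disk */
        IplImage *scaled = cvCreateImage(cvSize(320, 280), image->depth, image->nChannels);
        cvResize(image, scaled, CV_INTER_LINEAR);
        /* upload scaled->imageData instead, then release the temporary */
        cvReleaseImage(&scaled);

    Note also that OpenCV frames are BGR, so uploading them as GL_RGB swaps red and blue; using GL_BGR (or GL_BGR_EXT) as the upload format avoids that.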

  • What's the C strategy to "imitate" a C++ template ?

    - by Andrei Ciobanu
    After reading some examples on stackoverflow, and following some of the answers to my previous questions (1), I've eventually come up with a "strategy" for this. I've come to this:

    1) Have a declare section in the .h file. Here I will define the data structure and the accessing interface. E.g.:

        /**
         * LIST DECLARATION. (DOUBLE LINKED LIST)
         */
        #define NM_TEMPLATE_DECLARE_LIST(type) \
        typedef struct nm_list_elem_##type##_s { \
            type data; \
            struct nm_list_elem_##type##_s *next; \
            struct nm_list_elem_##type##_s *prev; \
        } nm_list_elem_##type ; \
        typedef struct nm_list_##type##_s { \
            unsigned int size; \
            nm_list_elem_##type *head; \
            nm_list_elem_##type *tail; \
            int (*cmp)(const type e1, const type e2); \
        } nm_list_##type ; \
        \
        nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \
                                                         const type e2)); \
        \
        (...other functions...)

    2) Wrap the functions in the interface inside macros:

        /**
         * LIST INTERFACE
         */
        #define nm_list(type) \
            nm_list_##type
        #define nm_list_elem(type) \
            nm_list_elem_##type
        #define nm_list_new(type,cmp) \
            nm_list_new_##type##_(cmp)
        #define nm_list_delete(type, list, dst) \
            nm_list_delete_##type##_(list, dst)
        #define nm_list_ins_next(type,list, elem, data) \
            nm_list_ins_next_##type##_(list, elem, data)
        (...others...)

    3) Implement the functions:

        /**
         * LIST FUNCTION DEFINITIONS
         */
        #define NM_TEMPLATE_DEFINE_LIST(type) \
        nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \
                                                         const type e2)) \
        { \
            nm_list_##type *list = NULL; \
            list = nm_alloc(sizeof(*list)); \
            list->size = 0; \
            list->head = NULL; \
            list->tail = NULL; \
            list->cmp = cmp; \
        } \
        void nm_list_delete_##type##_(nm_list_##type *list, \
                 void (*destructor)(nm_list_elem_##type elem)) \
        { \
            type data; \
            while(nm_list_size(list)){ \
                data = nm_list_rem_##type(list, tail); \
                if(destructor){ \
                    destructor(data); \
                } \
            } \
            nm_free(list); \
        } \
        (...others...)

    In order to use those constructs, I have to create two files (let's call them templates.c and templates.h). In templates.h I will have to NM_TEMPLATE_DECLARE_LIST(int), NM_TEMPLATE_DECLARE_LIST(double), while in templates.c I will need NM_TEMPLATE_DEFINE_LIST(int), NM_TEMPLATE_DEFINE_LIST(double), in order to have the code behind a list of ints, doubles and so on generated. By following this strategy I will have to keep all my "template" declarations in two files, and at the same time I will need to include templates.h whenever I need the data structures. It's a very "centralized" solution. Do you know another strategy to "imitate" (at some point) templates in C++? Do you know a way to improve this strategy, to keep things more decentralized, so that I won't need the two files templates.c and templates.h?
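
    One decentralized alternative sometimes used for this (a sketch of the general pattern, not code from the question): put the generic code in an implementation header that expects a T macro, and let each translation unit instantiate exactly the types it needs, so no central templates.h/templates.c has to enumerate every type:

        /* nm_list_of.h -- deliberately no include guard; include once per type */
        #ifndef T
        #error "define T before including nm_list_of.h"
        #endif
        #define NM_CAT_(a, b) a##_##b
        #define NM_CAT(a, b) NM_CAT_(a, b)

        typedef struct NM_CAT(nm_list_elem, T) {
            T data;
            struct NM_CAT(nm_list_elem, T) *next;
            struct NM_CAT(nm_list_elem, T) *prev;
        } NM_CAT(nm_list_elem, T);

        /* ...declarations/definitions for the list functions, same idea... */

        #undef NM_CAT
        #undef NM_CAT_
        #undef T

    A user file then instantiates locally:

        #define T int
        #include "nm_list_of.h"
        #define T double
        #include "nm_list_of.h"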

  • Incorrect string encodings

    - by James
    Note: I have read all of the related PHP, UTF-8, character encoding articles that are usually suggested, but my question relates to data inserted before I applied such techniques. I am wishing to retrospectively fix all character encoding problems. Now all connections are set as utf8 using PDO:

        PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8'

    Unfortunately, a large amount of data was inserted that is of questionable encoding before I had implemented correct character encoding practices. As displayed by:

        $sql = "SELECT name FROM data LIMIT 3";
        foreach ($pdo->query($sql) as $row) {
            $name = $row['name'];
            echo $name . "\n";
            echo utf8_encode($name) . "\n";
            echo utf8_decode($name) . "\n";
            echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8') . "\n";
            echo htmlspecialchars(utf8_encode($name), ENT_QUOTES, 'UTF-8') . "\n";
            echo htmlspecialchars(utf8_decode($name), ENT_QUOTES, 'UTF-8') . "\n";
            echo '<hr/>';
        }

    Which produces:

        Antonín Dvořák
        AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák
        Anton??­n Dvo??????¡k
        Antonín Dvořák
        AntonÃÆín DvoÃâ¦Ãâ¢ÃÆák
        ----------
        Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶
        ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶
        ????? ??????????
        Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶
        ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶
        ----------
        Tiësto
        Tiësto
        Tiësto
        Tiësto
        Tiësto
        Tiësto
        ----------

    When removing 'SET NAMES utf8' with PDO it produces the data:

        Antonín DvoÅák
        Antonín DvoÃÂák
        Antonín Dvorák
        Antonín DvoÅák
        Antonín DvoÃÂák
        Antonín Dvorák
        ----------
        ???? ?????????
        Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶
        ???? ?????????
        ???? ?????????
        Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶
        ???? ?????????
        ----------
        Tiësto
        Tiësto
        Ti?sto
        Tiësto
        Tiësto
        ----------

    And here is a dump of the database rows concerned:

        DROP TABLE IF EXISTS `data`;
        CREATE TABLE IF NOT EXISTS `data` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(80) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `name` (`name`(10)),
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0;

        INSERT INTO `data` (`id`, `name`) VALUES
        (0, 'Antonín Dvořák'),
        (1, 'Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶'),
        (2, 'Tiësto');

    The 3rd and 6th lines of the 3rd row "Tiësto" are then correctly echoed. I'm just unsure what is the best way to correct encodings/detect the encodings of bad strings and correct them, etc.

  • OpenGL Coordinate system confusion

    - by user146780
    Maybe I set up GLUT wrong. Basically I want vertices to be relative to their size in pixels. E.g., right now if I create a hexagon, it takes up the whole screen even though the units are 6.

        #include <iostream>
        #include <stdlib.h> //Needed for "exit" function
        #include <cmath>

        //Include OpenGL header files, so that we can use OpenGL
        #ifdef __APPLE__
        #include <OpenGL/OpenGL.h>
        #include <GLUT/glut.h>
        #else
        #include <GL/glut.h>
        #endif

        using namespace std;

        //Called when a key is pressed
        void handleKeypress(unsigned char key, //The key that was pressed
                            int x, int y) {    //The current mouse coordinates
            switch (key) {
            case 27: //Escape key
                exit(0); //Exit the program
            }
        }

        //Initializes 3D rendering
        void initRendering() {
            //Makes 3D drawing work when something is in front of something else
            glEnable(GL_DEPTH_TEST);
        }

        //Called when the window is resized
        void handleResize(int w, int h) {
            //Tell OpenGL how to convert from coordinates to pixel values
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION); //Switch to setting the camera perspective
            //Set the camera perspective
            glLoadIdentity(); //Reset the camera
            gluPerspective(45.0,                  //The camera angle
                           (double)w / (double)h, //The width-to-height ratio
                           1.0,                   //The near z clipping coordinate
                           200.0);                //The far z clipping coordinate
        }

        //Draws the 3D scene
        void drawScene() {
            //Clear information from last draw
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity(); //Reset the drawing perspective
            glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
            glBegin(GL_POLYGON); //Begin quadrilateral coordinates
            //Trapezoid
            glColor3f(255,0,0);
            for(int i = 0; i < 6; ++i) {
                glVertex2d(sin(i/6.0*2* 3.1415), cos(i/6.0*2* 3.1415));
            }
            glEnd(); //End quadrilateral coordinates
            glutSwapBuffers(); //Send the 3D scene to the screen
        }

        int main(int argc, char** argv) {
            //Initialize GLUT
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
            glutInitWindowSize(400, 400); //Set the window size

            //Create the window
            glutCreateWindow("Basic Shapes - videotutorialsrock.com");
            initRendering(); //Initialize rendering

            //Set handler functions for drawing, keypresses, and window resizes
            glutDisplayFunc(drawScene);
            glutKeyboardFunc(handleKeypress);
            glutReshapeFunc(handleResize);

            glutMainLoop(); //Start the main loop. glutMainLoop doesn't return.
            return 0; //This line is never reached
        }

    How can I make it so that a polygon of 0,0 10,0 10,10 0,10 defines a polygon starting at the top left of the screen with a width and height of 10 pixels? Thanks
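
    Since the goal is vertices in pixel units: gluPerspective makes the on-screen size of a unit depend on depth, while an orthographic projection sized to the window makes one unit equal one pixel. A hedged sketch of a resize handler with a top-left origin, so the 0,0 / 10,0 / 10,10 / 0,10 polygon becomes a 10-pixel square in the top-left corner:

        void handleResize(int w, int h) {
            glViewport(0, 0, w, h);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            //left, right, bottom, top: putting h at "bottom" makes y grow downward
            gluOrtho2D(0.0, (GLdouble)w, (GLdouble)h, 0.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }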

  • multi-thread access MySQL error

    - by user188916
    I have written a simple multi-threaded C program to access MySQL. It works fine except when I add a usleep() or sleep() call in each thread function. I created the pthreads in the main method:

        int main(){
            mysql_library_init(0,NULL,NULL);
            printf("Hello world!\n");
            init_pool(&p,100);
            pthread_t producer;
            pthread_t consumer_1;
            pthread_t consumer_2;
            pthread_create(&producer,NULL,produce_fun,NULL);
            pthread_create(&consumer_1,NULL,consume_fun,NULL);
            pthread_create(&consumer_2,NULL,consume_fun,NULL);
            mysql_library_end();
        }

        void * produce_fun(void *arg){
            pthread_detach(pthread_self());
            //procedure
            while(1){
                usleep(500000);
                printf("producer...\n");
                produce(&p,cnt++);
            }
            pthread_exit(NULL);
        }

        void * consume_fun(void *arg){
            pthread_detach(pthread_self());
            MYSQL db;
            MYSQL *ptr_db=mysql_init(&db);
            mysql_real_connect();
            //procedure
            while(1){
                usleep(1000000);
                printf("consumer...");
                int item=consume(&p);
                addRecord_d(ptr_db,"test",item);
            }
            mysql_thread_end();
            pthread_exit(NULL);
        }

        void addRecord_d(MYSQL *ptr_db,const char *t_name,int item){
            char query_buffer[100];
            sprintf(query_buffer,"insert into %s values(0,%d)",t_name,item);
            //pthread_mutex_lock(&db_t_lock);
            int ret=mysql_query(ptr_db,query_buffer);
            if(ret){
                fprintf(stderr,"%s%s\n","cannot add record to ",t_name);
                return;
            }
            unsigned long long update_id=mysql_insert_id(ptr_db);
            //pthread_mutex_unlock(&db_t_lock);
            printf("add record (%llu,%d) ok.",update_id,item);
        }

    The program outputs errors like:

        [Thread debugging using libthread_db enabled]
        [New Thread 0xb7ae3b70 (LWP 7712)]
        Hello world!
        [New Thread 0xb72d6b70 (LWP 7713)]
        [New Thread 0xb6ad5b70 (LWP 7714)]
        [New Thread 0xb62d4b70 (LWP 7715)]
        [Thread 0xb7ae3b70 (LWP 7712) exited]
        producer...
        producer...
        consumer...consumer...add record (31441,0) ok.add record (31442,1) ok.producer...
        producer...
        consumer...consumer...add record (31443,2) ok.add record (31444,3) ok.producer...
        producer...
        consumer...consumer...add record (31445,4) ok.add record (31446,5) ok.producer...
        producer...
        consumer...consumer...add record (31447,6) ok.add record (31448,7) ok.producer...
        Error in my_thread_global_end(): 2 threads didn't exit
        [Thread 0xb72d6b70 (LWP 7713) exited]
        [Thread 0xb6ad5b70 (LWP 7714) exited]
        [Thread 0xb62d4b70 (LWP 7715) exited]
        Program exited normally.

    And when I add pthread_mutex_lock in the function addRecord_d, the error still exists. So what exactly is the problem?
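
    The "Error in my_thread_global_end(): 2 threads didn't exit" message lines up with what main() does in the posted code: it detaches the workers and then falls straight through to mysql_library_end() while they are still running. A hedged sketch of the usual shape - keep the threads joinable, wait for them, and call mysql_thread_init()/mysql_thread_end() in each thread that touches MySQL:

        void *consume_fun(void *arg){
            mysql_thread_init();              /* per-thread MySQL state */
            /* ... connect, consume loop, cleanup ... */
            mysql_thread_end();
            return NULL;
        }

        int main(void){
            mysql_library_init(0, NULL, NULL);
            pthread_t consumer_1, consumer_2;
            pthread_create(&consumer_1, NULL, consume_fun, NULL);
            pthread_create(&consumer_2, NULL, consume_fun, NULL);
            pthread_join(consumer_1, NULL);   /* wait instead of detaching */
            pthread_join(consumer_2, NULL);
            mysql_library_end();              /* only after all threads are done */
            return 0;
        }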

  • How to convert m4a file to aac adts file in Xcode?

    - by Bird Hsuie
    I have an mp4 file copied from the iPod library and saved to my Documents folder. For my next step, I need to convert it to .mp3 or .aac (ADTS type). I used this code and failed:

        -(IBAction)compressFile:(id)sender{
            NSLog (@"handleConvertToPCMTapped");
            // open an ExtAudioFile
            NSLog (@"opening %@", exportURL);
            ExtAudioFileRef inputFile;
            CheckResult (ExtAudioFileOpenURL((__bridge CFURLRef)exportURL, &inputFile),
                         "ExtAudioFileOpenURL failed");

            // prepare to convert to a plain ol' PCM format
            AudioStreamBasicDescription myPCMFormat;
            myPCMFormat.mSampleRate = 44100; // todo: or use source rate?
            myPCMFormat.mFormatID = kAudioFormatMPEGLayer3 ;
            myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
            myPCMFormat.mChannelsPerFrame = 2;
            myPCMFormat.mFramesPerPacket = 1;
            myPCMFormat.mBitsPerChannel = 16;
            myPCMFormat.mBytesPerPacket = 4;
            myPCMFormat.mBytesPerFrame = 4;

            CheckResult (ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat,
                                                 sizeof (myPCMFormat), &myPCMFormat),
                         "ExtAudioFileSetProperty failed");

            // allocate a big buffer. size can be arbitrary for ExtAudioFile.
            // you have 64 KB to spare, right?
            UInt32 outputBufferSize = 0x10000;
            void* ioBuf = malloc (outputBufferSize);
            UInt32 sizePerPacket = myPCMFormat.mBytesPerPacket;
            UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

            // set up output file
            NSString *outputPath = [myDocumentsDirectory() stringByAppendingPathComponent:@"m_export.mp3"];
            NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
            NSLog (@"creating output file %@", outputURL);
            AudioFileID outputFile;
            CheckResult(AudioFileCreateWithURL((__bridge CFURLRef)outputURL,
                                               kAudioFileCAFType,
                                               &myPCMFormat,
                                               kAudioFileFlags_EraseFile,
                                               &outputFile),
                        "AudioFileCreateWithURL failed");

            // start convertin'
            UInt32 outputFilePacketPosition = 0; //in bytes
            while (true) {
                // wrap the destination buffer in an AudioBufferList
                AudioBufferList convertedData;
                convertedData.mNumberBuffers = 1;
                convertedData.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
                convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
                convertedData.mBuffers[0].mData = ioBuf;
                UInt32 frameCount = packetsPerBuffer;

                // read from the extaudiofile
                CheckResult (ExtAudioFileRead(inputFile, &frameCount, &convertedData),
                             "Couldn't read from input file");
                if (frameCount == 0) {
                    printf ("done reading from file");
                    break;
                }

                // write the converted data to the output file
                CheckResult (AudioFileWritePackets(outputFile, false, frameCount, NULL,
                                                   outputFilePacketPosition / myPCMFormat.mBytesPerPacket,
                                                   &frameCount,
                                                   convertedData.mBuffers[0].mData),
                             "Couldn't write packets to file");
                NSLog (@"Converted %ld bytes", outputFilePacketPosition);

                // advance the output file write location
                outputFilePacketPosition += (frameCount * myPCMFormat.mBytesPerPacket);
            }

            // clean up
            ExtAudioFileDispose(inputFile);
            AudioFileClose(outputFile);

            // show size in label
            NSLog (@"checking file at %@", outputPath);
            [self transMitFile:outputPath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath]) {
                NSError *fileManagerError = nil;
                unsigned long long fileSize = [[[NSFileManager defaultManager]
                    attributesOfItemAtPath:outputPath error:&fileManagerError] fileSize];
            }

    Any suggestion? ... Thanks for your great help!

  • Download a file using cocoa

    - by dododedodonl
    Hi all, I want to download a file to the Downloads folder. I searched Google for this and found the NSURLDownload class. I've read the page in the dev center and created this code (with some copying and pasting):

        @implementation Downloader

        @synthesize downloadResponse;

        - (void)startDownloadingURL:(NSString*)downloadUrl destenation:(NSString*)destenation
        {
            // create the request
            NSURLRequest *theRequest=[NSURLRequest requestWithURL:[NSURL URLWithString:downloadUrl]
                                                      cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                  timeoutInterval:60.0];
            // create the connection with the request and start loading the data
            NSURLDownload *theDownload=[[NSURLDownload alloc] initWithRequest:theRequest delegate:self];
            if (!theDownload) {
                NSLog(@"Download could not be made...");
            }
        }

        - (void)download:(NSURLDownload *)download decideDestinationWithSuggestedFilename:(NSString *)filename
        {
            NSString *destinationFilename;
            NSString *homeDirectory=NSHomeDirectory();
            destinationFilename=[[homeDirectory stringByAppendingPathComponent:@"Desktop"]
                                     stringByAppendingPathComponent:filename];
            [download setDestination:destinationFilename allowOverwrite:NO];
        }

        - (void)download:(NSURLDownload *)download didFailWithError:(NSError *)error
        {
            // release the connection
            [download release];
            // inform the user
            NSLog(@"Download failed! Error - %@ %@",
                  [error localizedDescription],
                  [[error userInfo] objectForKey:NSErrorFailingURLStringKey]);
        }

        - (void)downloadDidFinish:(NSURLDownload *)download
        {
            // release the connection
            [download release];
            // do something with the data
            NSLog(@"downloadDidFinish");
        }

        - (void)setDownloadResponse:(NSURLResponse *)aDownloadResponse
        {
            [aDownloadResponse retain];
            [downloadResponse release];
            downloadResponse = aDownloadResponse;
        }

        - (void)download:(NSURLDownload *)download didReceiveResponse:(NSURLResponse *)response
        {
            // reset the progress, this might be called multiple times
            bytesReceived = 0;
            // retain the response to use later
            [self setDownloadResponse:response];
        }

        - (void)download:(NSURLDownload *)download didReceiveDataOfLength:(unsigned)length
        {
            long long expectedLength = [[self downloadResponse] expectedContentLength];
            bytesReceived = bytesReceived+length;
            if (expectedLength != NSURLResponseUnknownLength) {
                percentComplete = (bytesReceived/(float)expectedLength)*100.0;
                NSLog(@"Percent - %f",percentComplete);
            } else {
                NSLog(@"Bytes received - %d",bytesReceived);
            }
        }

        -(NSURLRequest *)download:(NSURLDownload *)download
                  willSendRequest:(NSURLRequest *)request
                 redirectResponse:(NSURLResponse *)redirectResponse
        {
            NSURLRequest *newRequest=request;
            if (redirectResponse) {
                newRequest=nil;
            }
            return newRequest;
        }

        @end

    But my problem now is that the file doesn't appear on the desktop as specified. And I want to put it in Downloads, not on the desktop... What do I have to do?

  • vc++ - static member is showing error

    - by prabhakaran
    I am using VC++ (2010). I am trying to create a class for a server-side socket. Here is the header file:

        #include<winsock.h>
        #include<string>
        #include<iostream>
        using namespace std;

        class AcceptSocket
        {
            // static SOCKET s;
        protected:
            SOCKET acceptSocket;
        public:
            AcceptSocket(){};
            void setSocket(SOCKET socket);
            static void EstablishConnection(int portNo,string&);
            static void closeConnection();
            static void StartAccepting();
            virtual void threadDeal();
            static DWORD WINAPI MyThreadFunction(LPVOID lpParam);
        };

        SOCKET AcceptSocket::s;

    and the corresponding source file:

        #include<NetWorking.h>
        #include<string>

        void AcceptSocket::setSocket(SOCKET s)
        {
            acceptSocket=s;
        }

        void AcceptSocket::EstablishConnection(int portno,string &failure)
        {
            WSAData w;
            int error = WSAStartup(0x0202,&w);
            if(error)
                failure=failure+"\nWSAStartupFailure";
            if(w.wVersion != 0x0202)
            {
                WSACleanup();
                failure=failure+"\nVersion is different";
            }
            SOCKADDR_IN addr;
            addr.sin_family=AF_INET;
            addr.sin_port=htons(portno);
            addr.sin_addr.s_addr=htonl(INADDR_ANY);
            AcceptSocket::s=socket(AF_INET,SOCK_STREAM,IPPROTO_TCP);
            if(AcceptSocket::s == INVALID_SOCKET)
                failure=failure+"\nsocket creating error";
            if(bind(AcceptSocket::s,(LPSOCKADDR) &addr,sizeof(addr)) == SOCKET_ERROR)
                failure=failure+"\nbinding error";
            listen(AcceptSocket::s,SOMAXCONN);
        }

        void AcceptSocket::closeConnection()
        {
            if(AcceptSocket::s)
                closesocket(AcceptSocket::s);
            WSACleanup();
        }

        void AcceptSocket::StartAccepting()
        {
            sockaddr_in addrNew;
            int size=sizeof(addrNew);
            while(1)
            {
                SOCKET temp=accept(AcceptSocket::s,(sockaddr *)&addrNew,&size);
                AcceptSocket * tempAcceptSocket=new AcceptSocket();
                tempAcceptSocket->setSocket(temp);
                DWORD threadId;
                HANDLE thread=CreateThread(NULL,0,MyThreadFunction,(LPVOID)tempAcceptSocket,0,&threadId);
            }
        }

        DWORD WINAPI AcceptSocket::MyThreadFunction(LPVOID lpParam)
        {
            AcceptSocket * acceptsocket=(AcceptSocket *) lpParam;
            acceptsocket->threadDeal();
            return 1;
        }

        void AcceptSocket::threadDeal()
        {
            "You didn't define threadDeal in the derived class";
        }

    Now main.cpp is:

        #include<Networking.h>

        int main()
        {
        }

    When I compile, the errors I get are:

        Error 1 error LNK2005: "private: static unsigned int AcceptSocket::s" (?s@AcceptSocket@@0IA) already defined in NetWorking.obj C:\Documents and Settings\prabhakaran\Desktop\check\check\main.obj check
        Error 2 error LNK1169: one or more multiply defined symbols found C:\Documents and Settings\prabhakaran\Desktop\check\Debug\check.exe 1 1 check

    Now, anybody, please enlighten me about this issue.
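
    The linker message itself points at the likely cause (an observation on the posted layout): the static member is defined at namespace scope in the header (SOCKET AcceptSocket::s;), so every translation unit that includes the header - main.cpp and the class's own source file - emits its own definition, and the linker sees it twice. The usual split, sketched:

        // NetWorking.h - inside the class: declaration only
        class AcceptSocket
        {
            static SOCKET s; // note: currently commented out in the posted header
            // ...
        };

        // NetWorking.cpp - exactly one definition, in exactly one .cpp file
        SOCKET AcceptSocket::s;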

  • How to fix OpenGL/SDL runtime error which is probably caused by adding textures [closed]

    - by Arturs Lapins
    Hello, I've recently worked with OpenGL and SDL, and when I was adding textures to my GL_QUADS I came across a runtime error. I've searched all over the internet for a fix but I couldn't find anything, so I guess I had one more option: asking here. So here is some of my code.

        int loadTexture(std::string fileName){
            SDL_Surface *image=IMG_Load(fileName.c_str());
            SDL_DisplayFormatAlpha(image);
            unsigned int id;
            glGenTextures(1,&id);
            glBindTexture(GL_TEXTURE_2D,&id);
            glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
            glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
            glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
            glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,image->w,image->h,0,GL_RGBA,GL_UNSIGNED_BYTE,image->pixels);
            SDL_FreeSurface(image);
            return id;
        }

    That's my loadTexture function.

        void init()
        {
            glClearColor(0.0, 0.0, 0.0, 1.0);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45.0, 800.0 / 600.0, 1.0, 500.0);
            glMatrixMode(GL_MODELVIEW);
            glEnable(GL_DEPTH_TEST);
            glEnable(GL_TEXTURE_2D);
            tex=loadTexture("test.png");
        }

    That's my init function for OpenGL. Btw, I have declared my tex variable.

        void render()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            glTranslatef(0.0, 0.0, -10.0);
            glRotatef(rotation, 1.0, 1.0, 1.0);
            glBindTexture(GL_TEXTURE_2D, tex);
            glBegin(GL_QUADS);
            glTexCoord2f(1.0, 0.0); glVertex3f(-2.0, 2.0, 0.0);
            glTexCoord2f(1.0, 1.0); glVertex3f(2.0, 2.0, 0.0);
            glTexCoord2f(1.0, 0.0); glVertex3f(2.0, -2.0, 0.0);
            glTexCoord2f(0.0, 0.0); glVertex3f(-2.0, -2.0, 0.0);
            glEnd();
        }

    That's my render function for all my OpenGL render stuff... The render function is called in the main function, which contains a game loop. Here is the runtime error when I run it with Visual C++:

        Unhandled exception at 0x004ffee9 in OpenGL Crap.exe: 0xC0000005: Access violation reading location 0x05c90000.

    So I have only had this error since I added textures... I found where the error occurred; it was at this line:

        glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,image->w,image->h,0,GL_RGBA,GL_UNSIGNED_BYTE,image->pixels);

    but I have totally no idea what it could be.

    Update: Only thanks to zero298
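
    A few things in loadTexture() are worth flagging (observations on the posted code, not a confirmed root cause): IMG_Load() returns NULL on failure and the result is used unchecked; the converted surface returned by SDL_DisplayFormatAlpha() is discarded rather than used; the upload assumes 4-byte RGBA pixels even if the file loaded as 24-bit, which can make glTexImage2D read past the end of the pixel buffer - an access violation like the one reported; and glBindTexture() expects the texture id itself, not its address. A hedged sketch of a more defensive load:

        SDL_Surface *raw = IMG_Load(fileName.c_str());
        if (!raw) {
            fprintf(stderr, "IMG_Load failed: %s\n", IMG_GetError());
            return 0;
        }
        SDL_Surface *image = SDL_DisplayFormatAlpha(raw); // keep the converted copy
        SDL_FreeSurface(raw);

        unsigned int id;
        glGenTextures(1, &id);
        glBindTexture(GL_TEXTURE_2D, id); // the id, not &id

        GLenum format = (image->format->BytesPerPixel == 4) ? GL_RGBA : GL_RGB;
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0,
                     format, GL_UNSIGNED_BYTE, image->pixels);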

  • Vector iterators in for loops, return statements, warning, c++

    - by Crystal
    Had 3 questions regarding a hw assignment for C++. The goal was to create a simple palindrome method. Here is my template for that:

        #ifndef PALINDROME_H
        #define PALINDROME_H

        #include <vector>
        #include <iostream>
        #include <cmath>

        template <class T>
        static bool palindrome(const std::vector<T> &input)
        {
            std::vector<T>::const_iterator it = input.begin();
            std::vector<T>::const_reverse_iterator rit = input.rbegin();
            for (int i = 0; i < input.size()/2; i++, it++, rit++) {
                if (!(*it == *rit)) {
                    return false;
                }
            }
            return true;
        }

        template <class T>
        static void showVector(const std::vector<T> &input)
        {
            for (std::vector<T>::const_iterator it = input.begin(); it != input.end(); it++) {
                std::cout << *it << " ";
            }
        }

        #endif

    Regarding the above code: can you have more than one iterator declared in the first part of the for loop? I tried defining both "it" and "rit" in the palindrome() method, and I kept getting an error about needing a "," before rit. But when I cut and paste them outside the for loop, no errors from the compiler. (I'm using VS 2008.)

    Second question - I pretty much just brain-farted on this one, but is the way I have my return statements in the palindrome() method ok? In my head, it works like this: once *it and *rit do not equal each other, the function returns false and the method exits at that point. Otherwise, if it goes all the way through the for loop, it returns true at the end. I totally brain-farted on how return statements work in if blocks, and I tried looking up a good example in my book but couldn't find one.

    Finally, I get this warning:

        \palindrome.h(14) : warning C4018: '<' : signed/unsigned mismatch

    Now is that because I run my for loop until (i < input.size()/2) and the compiler is telling me that input can be negative? Thanks!
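
    On the first question: the init-statement of a for loop is a single declaration, so two iterators of different types (a const_iterator and a const_reverse_iterator) cannot both be declared there - declaring them just before the loop, as the code already does, is the standard workaround. On the warning: C4018 is only about comparing the signed counter i with the unsigned value returned by size(); the compiler is not reasoning about negative input. A hedged sketch of the conventional fix:

        // size() returns an unsigned type, so make the loop counter unsigned too
        for (std::size_t i = 0; i < input.size() / 2; i++, it++, rit++) {
            if (!(*it == *rit)) {
                return false; // mismatch found: leave the function immediately
            }
        }
        return true; // loop finished without finding a mismatch

    The early-return logic itself is fine: return exits the whole function from inside the if, exactly as described.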

  • Special parameters for texture binding?

    - by user146780
    Do I have to set up my GL context in a certain way to bind textures? I'm following a tutorial. I start by doing:

        #define checkImageWidth 64
        #define checkImageHeight 64
        static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
        static GLuint texName;

        void makeCheckImage(void)
        {
            int i, j, c;
            for (i = 0; i < checkImageHeight; i++) {
                for (j = 0; j < checkImageWidth; j++) {
                    c = ((((i&0x8)==0)^((j&0x8))==0))*255;
                    checkImage[i][j][0] = (GLubyte) c;
                    checkImage[i][j][1] = (GLubyte) c;
                    checkImage[i][j][2] = (GLubyte) c;
                    checkImage[i][j][3] = (GLubyte) 255;
                }
            }
        }

        void initt(void)
        {
            glClearColor (0.0, 0.0, 0.0, 0.0);
            makeCheckImage();
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glGenTextures(1, &texName);
            glBindTexture(GL_TEXTURE_2D, texName);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight,
                         0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage);
            engineGL.current.tex = texName;
        }

    In my rendering I do:

        PolygonTesselator.Begin_Contour();
        glEnable(GL_TEXTURE_2D);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
        glBindTexture(GL_TEXTURE_2D, current.tex);
        if(layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size() > 0)
        {
            glColor4f( layer[currentlayer].Shapes[i].Color.r,
                       layer[currentlayer].Shapes[i].Color.g,
                       layer[currentlayer].Shapes[i].Color.b,
                       layer[currentlayer].Shapes[i].Color.a);
        }
        for(unsigned int j = 0; j < layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size(); ++j)
        {
            gluTessVertex(PolygonTesselator.tobj,
                          &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0],
                          &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0]);
        }
        PolygonTesselator.End_Contour();
        }
        glDisable(GL_TEXTURE_2D);
        }

    However it still renders the color and not the texture at all. I'd at least expect to see black or something, but it's as if the bind fails. Am I missing something? Thanks
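
    No special context setup is needed beyond glEnable(GL_TEXTURE_2D) and a successful upload, but one observation on the posted render path (hedged, not a confirmed diagnosis): the vertices fed through gluTessVertex() never get texture coordinates, so every fragment samples the texture at the same default coordinate, which shows up as a flat color rather than a visible pattern. One low-effort way to get coordinates for tessellated geometry is automatic generation (the plane scale factors here are illustrative):

        // map object-space x/y to texture s/t automatically
        static const GLfloat sPlane[] = { 0.01f, 0.0f, 0.0f, 0.0f };
        static const GLfloat tPlane[] = { 0.0f, 0.01f, 0.0f, 0.0f };
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
        glTexGenfv(GL_S, GL_OBJECT_PLANE, sPlane);
        glTexGenfv(GL_T, GL_OBJECT_PLANE, tPlane);
        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);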

  • What is wrong with this attempt of sending a break-signal?

    - by Jook
    I have quite a headache about this seemingly easy task: send a break signal to my device, like wxTerm (or any similar terminal application) does. This signal has to be 125ms long, according to my tests and the device's specification. It should result in a specific response, but what I get is a longer response than expected, and the transmitted data is false.

    E.g., what it should respond:

        08 00 81 00 00 01 07 00

    What it does respond:

        08 01 0A 0C 10 40 40 07 00 7F

    What really boggles me is that after I have used wxTerm to look at my available com-ports (without connecting or sending anything), my code starts to work! I can then send as many breaks as I like; I get my response right from then on. I have to reset my PC in order to try it again. What the heck is going on here?!

    Here is my code for a reset through a break-signal:

        minicom_client(boost::asio::io_service& io_service, unsigned int baud, const string& device)
            : active_(true),
              io_service_(io_service),
              serialPort(io_service, device)
        {
            if (!serialPort.is_open())
            {
                cerr << "Failed to open serial port\n";
                return;
            }
            boost::asio::serial_port_base::flow_control FLOW( boost::asio::serial_port_base::flow_control::hardware );
            boost::asio::serial_port_base::baud_rate baud_option(baud);
            serialPort.set_option(FLOW);
            serialPort.set_option(baud_option);
            read_start();

            std::cout << SetCommBreak(serialPort.native_handle()) << std::endl;
            std::cout << GetLastError() << std::endl;
            boost::posix_time::ptime mst1 = boost::posix_time::microsec_clock::local_time();
            boost::this_thread::sleep(boost::posix_time::millisec(125));
            boost::posix_time::ptime mst2 = boost::posix_time::microsec_clock::local_time();
            std::cout << ClearCommBreak(serialPort.native_handle()) << std::endl;
            std::cout << GetLastError() << std::endl;
            boost::posix_time::time_duration msdiff = mst2 - mst1;
            std::cout << msdiff.total_milliseconds() << std::endl;
        }

    Edit: It was only necessary to look at the combo-box selection of com-ports in wxTerm - no active connection needed to be established in order to make my code work. I am guessing that there is some sort of initialisation missing, which is done when wxTerm creates the list for the serial-port combo-box.
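
    One hypothesis consistent with the edit (hedged - enumerating ports typically opens each one briefly, which can change the state of the modem-control lines): the device may need DTR and/or RTS asserted before it honors the break, and wxTerm's port scan may leave them asserted. A sketch of forcing the line state explicitly before the break, using Win32 calls on the native handle just as the posted code already does:

        HANDLE h = serialPort.native_handle();
        EscapeCommFunction(h, SETDTR); // assert DTR
        EscapeCommFunction(h, SETRTS); // assert RTS
        SetCommBreak(h);
        boost::this_thread::sleep(boost::posix_time::millisec(125));
        ClearCommBreak(h);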
