Search Results

Search found 33496 results on 1340 pages for '32 vs 64 bit'.

  • Google Map API v3 — set bounds and center

    - by Michael Bradley
    Hi, I've recently switched to Google Maps API V3. I'm working of a simple example which plots markers from an array, however I do not know how to center and zoom automatically with respect to the markers. I've searched the net high and low, including Google's own documentation, but have not found a clear answer. I know I could simply take an average of the co-ordinates, but how would I set the zoom accordingly? Could somebody please point me in the right direction? Perhaps you know of a good tutorial. Many thanks in advance, Michael function initialize() { var myOptions = { zoom: 10, center: new google.maps.LatLng(-33.9, 151.2), mapTypeId: google.maps.MapTypeId.ROADMAP } var map = new google.maps.Map(document.getElementById("map_canvas"),myOptions); setMarkers(map, beaches); } var beaches = [ ['Bondi Beach', -33.890542, 151.274856, 4], ['Coogee Beach', -33.423036, 151.259052, 5], ['Cronulla Beach', -34.028249, 121.157507, 3], ['Manly Beach', -33.80010128657071, 151.28747820854187, 2], ['Maroubra Beach', -33.450198, 151.259302, 1] ]; function setMarkers(map, locations) { var image = new google.maps.MarkerImage('images/beachflag.png', new google.maps.Size(20, 32), new google.maps.Point(0,0), new google.maps.Point(0, 32)); var shadow = new google.maps.MarkerImage('images/beachflag_shadow.png', new google.maps.Size(37, 32), new google.maps.Point(0,0), new google.maps.Point(0, 32)); var lat = map.getCenter().lat(); var lng = map.getCenter().lng(); var shape = { coord: [1, 1, 1, 20, 18, 20, 18 , 1], type: 'poly' }; for (var i = 0; i < locations.length; i++) { var beach = locations[i]; var myLatLng = new google.maps.LatLng(beach[1], beach[2]); var marker = new google.maps.Marker({ position: myLatLng, map: map, shadow: shadow, icon: image, shape: shape, title: beach[0], zIndex: beach[3] }); } }
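
    A common way to get both the center and the zoom from the marker set (a sketch, not taken from the original post) is to extend a google.maps.LatLngBounds with every marker position and let map.fitBounds() choose them, rather than averaging coordinates:

        // Sketch: fit the map to all markers from a locations array shaped like beaches above.
        function fitMapToMarkers(map, locations) {
          var bounds = new google.maps.LatLngBounds();
          for (var i = 0; i < locations.length; i++) {
            // locations[i] = [name, lat, lng, zIndex]
            bounds.extend(new google.maps.LatLng(locations[i][1], locations[i][2]));
          }
          map.fitBounds(bounds); // sets center and zoom together
        }

        // e.g. call fitMapToMarkers(map, beaches); at the end of initialize()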

    Read the article

  • Scaling a CBitmap - what am I doing wrong?

    - by Smashery
    I've written the following code, which attempts to take a 32x32 bitmap (loaded through MFC's Resource system) and turn it into a 16x16 bitmap, so they can be used as the big and small CImageLists for a CListCtrl. However, when I open the CListCtrl, all the icons are black (in both small and large view). Before I started playing with resizing, everything worked perfectly in Large View. What am I doing wrong? // Create the CImageLists if (!m_imageListL.Create(32,32,ILC_COLOR24, 1, 1)) { throw std::exception("Failed to create CImageList"); } if (!m_imageListS.Create(16,16,ILC_COLOR24, 1, 1)) { throw std::exception("Failed to create CImageList"); } // Fill the CImageLists with items loaded from ResourceIDs int i = 0; for (std::vector<UINT>::iterator it = vec.begin(); it != vec.end(); it++, i++) { CBitmap* bmpBig = new CBitmap(); bmpBig->LoadBitmap(*it); CDC bigDC; bigDC.CreateCompatibleDC(m_itemList.GetDC()); bigDC.SelectObject(bmpBig); CBitmap* bmpSmall = new CBitmap(); bmpSmall->CreateBitmap(16, 16, 1, 24, 0); CDC smallDC; smallDC.CreateCompatibleDC(&bigDC); smallDC.SelectObject(bmpSmall); smallDC.StretchBlt(0, 0, 32, 32, &bigDC, 0, 0, 16, 16, SRCCOPY); m_imageListL.Add(bmpBig, RGB(0,0,0)); m_imageListS.Add(bmpSmall, RGB(0,0,0)); } m_itemList.SetImageList(&m_imageListS, LVSIL_SMALL); m_itemList.SetImageList(&m_imageListL, LVSIL_NORMAL);
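
    One thing worth checking (an observation about the code as posted plus a sketch of an alternative, not a confirmed diagnosis): the StretchBlt call passes the destination size as 32x32 and the source size as 16x16, which is backwards for shrinking, and both bitmaps are still selected into their DCs when CImageList::Add is called. A possible shape for the shrink step:

        // Sketch: shrink one 32x32 CBitmap (bmpBig) to 16x16 before adding to the image lists.
        CClientDC screenDC(&m_itemList);                  // screen-compatible reference DC
        CDC bigDC, smallDC;
        bigDC.CreateCompatibleDC(&screenDC);
        smallDC.CreateCompatibleDC(&screenDC);

        CBitmap bmpSmall;
        bmpSmall.CreateCompatibleBitmap(&screenDC, 16, 16);    // same color depth as the screen

        CBitmap* oldBig   = bigDC.SelectObject(bmpBig);
        CBitmap* oldSmall = smallDC.SelectObject(&bmpSmall);
        smallDC.SetStretchBltMode(HALFTONE);
        smallDC.StretchBlt(0, 0, 16, 16,                   // destination rectangle (16x16)
                           &bigDC, 0, 0, 32, 32, SRCCOPY); // source rectangle (32x32)

        bigDC.SelectObject(oldBig);                        // deselect before Add()
        smallDC.SelectObject(oldSmall);
        m_imageListL.Add(bmpBig, RGB(0, 0, 0));
        m_imageListS.Add(&bmpSmall, RGB(0, 0, 0));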

    Read the article

  • Cache Simulator in C

    - by DuffDuff
    Ok this is only my second question, and it's quite a doozy. It's for a school assignment, but no one (including the TAs) seems to be able to help me. It's kind of a tall order but I'm not sure where else to turn. Essentially the assignment was to make a cache simulator. This version is direct mapping and is actually only a small portion of the whole project, but if I can't even get this down I have no chance with other associativities. I'm posting my whole code because I don't want to make any assumptions about where the problem is. This is the test case: http://www.mediafire.com/?ty5dnihydnw And you run the following command: ./sims 512 direct 32 fifo wt pinatrace.out You're supposed to get: hits: 604037 misses 138349 writes: 239269 reads: 138349 But I get: Hits: 587148 Misses: 155222 Writes: 239261 Reads: 155222 If anyone could at least point me in the right direction it would be greatly appreciated. I've been stuck on this for about 12 hours. #include <stdio.h> #include <stdlib.h> #include <string.h> #include <math.h> struct myCache { int valid; char *tag; char *block; }; /* sim [-h] <cache size> <associativity> <block size> <replace alg> <write policy> <trace file> */ //God willing I come up with a better Hex to Bin convertion that maintains the beginning 0s... void hex2bin(char input[], char output[]) { int i; int a = 0; int b = 1; int c = 2; int d = 3; int x = 4; int size; size = strlen(input); for (i = 0; i < size; i++) { if (input[i] =='0') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='1') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='2') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='3') { output[i*x +a] = '0'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='x') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='5') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='6') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='7') { output[i*x +a] = '0'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='8') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='9') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='a') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='b') { output[i*x +a] = '1'; output[i*x +b] = '0'; output[i*x +c] = '1'; output[i*x +d] = '1'; } else if (input[i] =='c') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '0'; } else if (input[i] =='d') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '0'; output[i*x +d] = '1'; } else if (input[i] =='e') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '0'; } else if (input[i] =='f') { output[i*x +a] = '1'; output[i*x +b] = '1'; output[i*x +c] = '1'; output[i*x +d] = '1'; } } output[32] = '\0'; } int main(int argc, char* argv[]) { FILE *tracefile; char readwrite; int trash; int cachesize; int blocksize; int setnumber; int blockbytes; int setbits; int blockbits; int tagsize; int m; int count = 0; int 
count2 = 0; int count3 = 0; int i; int j; int xindex; int jindex; int kindex; int lindex; int setadd; int totalset; int writeMiss = 0; int writeHit = 0; int cacheMiss = 0; int cacheHit = 0; int read = 0; int write = 0; int size; int extra; char bbits[100]; char sbits[100]; char tbits[100]; char output[100]; char input[100]; char origtag[100]; if (argc != 7) { if (strcmp(argv[0], "-h")) { printf("./sim2 <cache size> <associativity> <block size> <replace alg> <write policy> <trace file>\n"); return 0; } else { fprintf(stderr, "Error: wrong number of parameters.\n"); return -1; } } tracefile = fopen(argv[6], "r"); if(tracefile == NULL) { fprintf(stderr, "Error: File is NULL.\n"); return -1; } //Determining size of sbits, bbits, and tag cachesize = atoi(argv[1]); blocksize = atoi(argv[3]); setnumber = (cachesize/blocksize); printf("setnumber: %d\n", setnumber); setbits = (round((log(setnumber))/(log(2)))); printf("sbits: %d\n", setbits); blockbits = log(blocksize)/log(2); printf("bbits: %d\n", blockbits); tagsize = 32 - (blockbits + setbits); printf("t: %d\n", tagsize); struct myCache newCache[setnumber]; //Allocating Space for Tag Bits, initiating tag and valid to 0s for(i=0;i<setnumber;i++) { newCache[i].tag = (char *)malloc(sizeof(char)*(tagsize+1)); for(j=0;j<tagsize;j++) { newCache[i].tag[j] = '0'; } newCache[i].valid = 0; } while(fgetc(tracefile)!='#') { setadd = 0; totalset = 0; //read in file fseek(tracefile,-1,SEEK_CUR); fscanf(tracefile, "%x: %c %s\n", &trash, &readwrite, origtag); //shift input Hex size = strlen(origtag); extra = (10 - size); for(i=0; i<extra; i++) input[i] = '0'; for(i=extra, j=0; i<(size-(2-extra)); j++, i++) input[i]=origtag[j+2]; input[8] = '\0'; // Convert Hex to Binary hex2bin(input, output); //Resolving the Address into tbits, sbits, bbits for (xindex=0, jindex=(32-blockbits); jindex<32; jindex++, xindex++) { bbits[xindex] = output[jindex]; } bbits[xindex]='\0'; for (xindex=0, kindex=(32-(blockbits+setbits)); kindex<32-(blockbits); kindex++, xindex++){ sbits[xindex] = output[kindex]; } sbits[xindex]='\0'; for (xindex=0, lindex=0; lindex<(32-(blockbits+setbits)); lindex++, xindex++){ tbits[xindex] = output[lindex]; } tbits[xindex]='\0'; //Convert set bits from char array into ints for(xindex = 0, kindex = (setbits -1); xindex < setbits; xindex ++, kindex--) { if (sbits[xindex] == '1') setadd = 1; if (sbits[xindex] == '0') setadd = 0; setadd = setadd * pow(2, kindex); totalset += setadd; } //Calculating Hits and Misses if (newCache[totalset].valid == 0) { newCache[totalset].valid = 1; strcpy(newCache[totalset].tag, tbits); } else if (newCache[totalset].valid == 1) { if(strcmp(newCache[totalset].tag, tbits) == 0) { if (readwrite == 'W') { cacheHit++; write++; } if (readwrite == 'R') cacheHit++; } else { if (readwrite == 'R') { cacheMiss++; read++; } if (readwrite == 'W') { cacheMiss++; read++; write++; } strcpy(newCache[totalset].tag, tbits); } } } printf("Hits: %d\n", cacheHit); printf("Misses: %d\n", cacheMiss); printf("Writes: %d\n", write); printf("Reads: %d\n", read); }
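
    Not a full diagnosis, but one concrete defect in the code as posted: the branch that should handle the digit '4' tests for 'x' instead, so every address containing a 4 converts incorrectly. A table-driven conversion avoids that class of mistake entirely; a sketch:

        /* Sketch: table-driven hex-to-binary that replaces the long if/else chain
           (and sidesteps the branch that tests 'x' where '4' was intended above). */
        #include <string.h>

        static void hex2bin(const char *input, char *output)
        {
            static const char *bits[16] = {
                "0000","0001","0010","0011","0100","0101","0110","0111",
                "1000","1001","1010","1011","1100","1101","1110","1111"
            };
            size_t i, n = strlen(input);
            for (i = 0; i < n; i++) {
                char c = input[i];
                int v = (c >= '0' && c <= '9') ? c - '0'
                      : (c >= 'a' && c <= 'f') ? c - 'a' + 10
                      : (c >= 'A' && c <= 'F') ? c - 'A' + 10 : 0;
                memcpy(output + 4 * i, bits[v], 4);
            }
            output[4 * n] = '\0';
        }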

    Read the article

  • Can i use a different parser for Axis 1.4?

    - by NishM
    The current SAX parser takes a lot of time (20 minutes) and heap memory(around 400mb) to deserialize the response coming from the soap server as per the logs. Our response XMLs are of average size 4 mb. A part of the log when it runs the applicaiton out of heap is below DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@112c22) named {}name DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element name DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, name) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16d22f1 DEBUG (org.apache.axis.utils.NSStack) NSPop (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Popped element stack to org.apache.axis.message.MessageElement:property DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::endElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::startElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(pushHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(newElem00) DEBUG (org.apache.axis.message.MessageElement) New MessageElement (org.apache.axis.message.MessageElement@1db74af) named {}value DEBUG (org.apache.axis.encoding.DeserializationContext) Pushing element value DEBUG (org.apache.axis.utils.NSStack) NSPush (32) DEBUG (org.apache.axis.encoding.DeserializationContext) Exit: DeserializationContext::startElement() DEBUG (org.apache.axis.encoding.DeserializationContext) Enter: DeserializationContext::endElement(, value) DEBUG (org.apache.axis.i18n.ProjectResourceBundle) org.apache.axis.i18n.resource::handleGetObject(popHandler00) DEBUG (org.apache.axis.encoding.DeserializationContext) Popping handler org.apache.axis.message.SOAPHandler@16880ba DEBUG (org.apache.axis.utils.NSStack) NSPop (32) I cannot use Axis2 because of technical reasons. I have tried using HTTP Commons client instead of HTTP client but the response time remains the same. How can i link a different parser(example xerces 2.10.0 or xstream 1.3.1?) to Axis 1.4 framework in this context so that memory management and response time is favorable?.

    Read the article

  • Boost Mersenne Twister: how to seed with more than one value?

    - by Eamon Nerbonne
    I'm using the boost mt19937 implementation for a simulation. The simulation needs to be reproducible, and that means storing and potentially reusing the RNG seeds later. I'm using the windows crypto api to generate the seed values because I need an external source for the seeds and not because of any particular guarantees of randomness. The output of any simulation run will have a note including the RNG seed - so the seed needs to be reasonably short. On the other hand, as part of the analysis of the simulation, I'll be comparing several runs - but to be sure that these runs are actually different, I'll need to use different seeds - so the seed needs to be long enough to avoid accidental collisions. I've determined that 64-bits of seeding should suffice; the chance of a collision will reach 50% after about 2^32 runs - that probability is low enough that the average error caused by it is negligible to me. Using just 32-bits of seed is tricky; the chance of a collision reaches 50% already after 2^16 runs; and that's a little too likely for my tastes. Unfortunately, the boost implementation either seeds with a full state vector - which is far, far too long - or a single 32-bit unsigned long - which isn't ideal. How can I seed the generator with more than 32-bits but less than a full state vector? I tried just padding the vector or repeating the seeds to fill the state vector, but even a cursory glance at the results shows that that generates poor results.
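
    One option (a sketch using the C++11 standard-library counterpart of boost::mt19937; recent versions of Boost.Random expose the same seed_seq mechanism) is to let a seed sequence stretch the 64-bit seed over the whole 624-word state instead of padding or repeating it by hand:

        #include <cstdint>
        #include <random>

        // Sketch: build a generator from a 64-bit seed split into two 32-bit words.
        std::mt19937 make_generator(std::uint64_t seed64)
        {
            std::seed_seq seq{ static_cast<std::uint32_t>(seed64 >> 32),
                               static_cast<std::uint32_t>(seed64 & 0xFFFFFFFFu) };
            return std::mt19937(seq);   // seed_seq fills the full state vector
        }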

    Read the article

  • OpenCV application crashes at runtime with error code 0xc0000142

    - by Tuan Anh
    I have OpenCV and MinGW installed with the Code::Blocks IDE, following the instructions here: http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw/ I tried the simple image-loading program from the article and the build went fine, but when I run the resulting program it crashes with the error message "The application was unable to start correctly (0xc0000142). Click OK to close the application." I used Dependency Walker to see whether the program failed to load a DLL module; here is its output screen: https://www.dropbox.com/s/f9iaftdt8atjwpl/Screenshot%202013-11-05%2022.21.45.png I am not used to Dependency Walker, but as far as I can tell from its output, some OpenCV DLLs failed to load and the Windows DLLs that were loaded are 64-bit rather than 32-bit (MinGW is 32-bit). I can't figure out why: I have already added OpenCV's bin directory to the Path environment variable, yet the app still cannot load the DLL modules. I also thought Windows would automatically pick the proper 32-bit DLLs when a 32-bit app runs, but the app still fails to load. Does anyone have any ideas?

    Read the article

  • Temperature anomaly calculation of time series data

    - by neel
    I have a time series like following: Data <- structure(list(Year = c(1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L), Month = c(8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L, 1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L, 4L, 4L, 4L, 5L, 5L, 5L, 6L, 6L, 6L, 7L, 7L, 7L, 8L, 8L, 8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L), Day = c(30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 28L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L), Hour = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), temperature = c(72.5, 64, 62.5, 64, 64, 53, 52, 52, 45.5, 49, 50, 50, 59, 63.5, 69.5, 61, 61, NaN, NaN, 39.5, 37, 45.5, 45, 39, 43.5, 52, 53, 56, 64, 66, 66.5, 73.5, 81, 85, 89.5, 87.5, 88.5, 83, 84.5, 74, 60.5, 59, 53, 60.5, 62.5, 64.5, 63, 62, 65.5)), .Names = c("Year", "Month", "Day", "Hour", "temperature"), class = "data.frame", row.names = c(NA, -49L)) and I have to calculate standardized anomaly. The steps to calculate the anomalies are following: Monthly premature departures from the long-term (1991-2007) average are obtained. Then standardized by dividing by the standard deviation of monthly temperature. The standardized monthly anomalies are then weighted by multiplying by the fraction of the average temperature for the given month. These weighted anomalies are then summed over 3 month time period. Can you please help me?

    Read the article

  • Epoll in EPOLLET mode returning 2 EPOLLIN before reading from the socket

    - by Arkaitz Jimenez
    The epoll manpage says that a fd registered with EPOLLET(edge triggered) shouldn't notify twice EPOLLIN if no read has been done. So after an EPOLLIN you need to empty the buffer before epoll_wait being able to return a new EPOLLIN on new data. However I'm experiencing problems with this approach as I'm seeing duplicated EPOLLIN events for untouched fds. This is the strace output, 0x200 is EPOLLRDHUP that is not defined yet in my glibc headers but defined in the kernel. 30285 epoll_ctl(3, EPOLL_CTL_ADD, 9, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP|EPOLLET|0x2000, {u32=9, u64=9}}) = 0 30285 epoll_wait(3, {{EPOLLIN, {u32=9, u64=9}}}, 10, -1) = 1 30285 epoll_wait(3, {{EPOLLIN, {u32=9, u64=9}}}, 10, -1) = 1 30285 epoll_wait(3, <unfinished ...> 30349 epoll_ctl(3, EPOLL_CTL_DEL, 9, NULL) = 0 30306 recv(9, "7u\0\0\10\345\241\312\t\20\f\32\r\10\27\20\2\30\200\10 \31(C0\17\32\r\10\27\20\2\30"..., 20000, 0) = 20000 30349 epoll_ctl(3, EPOLL_CTL_DEL, 9, NULL) = -1 ENOENT (No such file or directory) 30305 recv(9, " \31(C0\17\32\r\10\27\20\2\30\200\10 \31(C0\17\32\r\10\27\20\2\30\200\10 \31("..., 20000, 0) = 10011 So, after adding fd number 9 I do receive 2 consecutive EPOLLIN events before having recving the file descriptor, the syscall trace shows how I do delete the fd before reading but it should only happen once, one per event. So either I am not reading the manpage properly or something is now working here.
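
    For reference, the usual edge-triggered pattern (a sketch, together with an assumption that the duplicate events come from data arriving between the two epoll_wait calls or from the several threads visible in the trace) is to keep reading until recv() reports EAGAIN before waiting again; with multiple threads sharing one epoll fd, EPOLLONESHOT plus re-arming via EPOLL_CTL_MOD is the usual companion:

        /* Sketch: after an EPOLLIN under EPOLLET, drain the socket until EAGAIN
           so the next event can only be triggered by genuinely new data. */
        #include <errno.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        static void drain(int fd, char *buf, size_t buflen)
        {
            for (;;) {
                ssize_t n = recv(fd, buf, buflen, 0);
                if (n > 0)
                    continue;             /* consume and keep reading */
                if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                    break;                /* buffer empty: safe to wait again */
                break;                    /* 0 = peer closed, or a real error */
            }
        }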

    Read the article

  • Problem: Vectorizing Code with Intel Visual FORTRAN for X64

    - by user313209
    I'm compiling my Fortran 90 code with Intel Visual Fortran on Windows Server 2003 Enterprise x64 Edition. When I compile the code for the 32-bit architecture, using the automatic and manual vectorization options, the code compiles and is vectorized, and when I run it on an 8-core system it uses about 70% of the CPU, which shows me that the vectorization is working. But when I compile the code with the 64-bit compiler, it reports that the code is vectorized, yet at run time CPU usage is only about 12%, which is full usage of just one core out of 8. So while the compiler says the code is vectorized, the vectorization is not doing any good. This is strange to me, because it is an x64 edition of Windows and I expected the opposite result: I thought code compiled for the 64-bit architecture would run better on 64-bit Windows. Does anyone have any idea why the 64-bit build is not able to use the full power of multiple cores? Thanks in advance for your responses.

    Read the article

  • Strange behavior using getchar() and -O3

    - by Eduardo
    I have these two functions void set_dram_channel_width(int channel_width){ printf("one\n"); getchar(); } void set_dram_transaction_granularity(int cacheline_size){ printf("two\n"); getchar(); } //output: one f //my keyboard input two one f //keyboard input two one f //keyboard input //No more calls Then I change the functions to: void set_dram_channel_width(int channel_width){ printf("one\n"); } void set_dram_transaction_granularity(int cacheline_size){ printf("two\n"); getchar(); } //output one two f //keyboard input //No more calls Both functions are called by an external code, the code for both programs is the same, just changing the getchar() I get those two different outputs. Is this possible or there is something that is really wrong in my code? Thanks This is the output I get with GDB** For the first code (gdb) break mem-dram.c:374 Breakpoint 1 at 0x71c810: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 374. (gdb) break mem-dram.c:381 Breakpoint 2 at 0x71c7b0: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 381. (gdb) run -d ./tmp/MyBench2/ one f [Switching to Thread 47368811512112 (LWP 17507)] Breakpoint 1, set_dram_channel_width (channel_width=64) (gdb) c Continuing. two one f Breakpoint 2, set_dram_transaction_granularity (cacheline_size=64) (gdb) c Continuing. Breakpoint 1, set_dram_channel_width (channel_width=8) 374 void set_dram_channel_width(int channel_width){ (gdb) c Continuing. two one f For the second code (gdb) break mem-dram.c:374 Breakpoint 1 at 0x71c7b6: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 374. (gdb) break mem-dram.c:380 Breakpoint 2 at 0x71c7f0: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 380. (gdb) run one two f [Switching to Thread 46985688772912 (LWP 17801)] Breakpoint 1, set_dram_channel_width (channel_width=64) (gdb) c Continuing. Breakpoint 2, set_dram_transaction_granularity (cacheline_size=64) (gdb) c Continuing. Breakpoint 1, set_dram_channel_width (channel_width=8) (gdb) c Continuing.
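
    One thing to rule out first (an assumption about the cause, not a confirmed diagnosis): if stdout becomes block-buffered, for example because the external harness redirects it, the printf output can appear out of order relative to where the program is actually blocked in getchar(), which can mimic exactly this kind of call-order confusion. Flushing makes the interleaving deterministic:

        /* Sketch: make the printf/getchar interleaving deterministic. */
        #include <stdio.h>

        void set_dram_channel_width(int channel_width)
        {
            printf("one\n");
            fflush(stdout);   /* ensure "one" is visible before blocking in getchar() */
            getchar();
        }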

    Read the article

  • Filling a byte array in Java

    - by Corleone
    Hey all! For part of a project I'm working on I am implementing a RTPpacket where I have to fill the header array of byte with RTP header fields. //size of the RTP header: static int HEADER_SIZE = 12; // bytes //Fields that compose the RTP header public int Version; // 2 bits public int Padding; // 1 bit public int Extension; // 1 bit public int CC; // 4 bits public int Marker; // 1 bit public int PayloadType; // 7 bits public int SequenceNumber; // 16 bits public int TimeStamp; // 32 bits public int Ssrc; // 32 bits //Bitstream of the RTP header public byte[] header = new byte[ HEADER_SIZE ]; This was my approach: /* * bits 0-1: Version * bit 2: Padding * bit 3: Extension * bits 4-7: CC */ header[0] = new Integer( (Version << 6)|(Padding << 5)|(Extension << 6)|CC ).byteValue(); /* * bit 0: Marker * bits 1-7: PayloadType */ header[1] = new Integer( (Marker << 7)|PayloadType ).byteValue(); /* SequenceNumber takes 2 bytes = 16 bits */ header[2] = new Integer( SequenceNumber >> 8 ).byteValue(); header[3] = new Integer( SequenceNumber ).byteValue(); /* TimeStamp takes 4 bytes = 32 bits */ for ( int i = 0; i < 4; i++ ) header[7-i] = new Integer( TimeStamp >> (8*i) ).byteValue(); /* Ssrc takes 4 bytes = 32 bits */ for ( int i = 0; i < 4; i++ ) header[11-i] = new Integer( Ssrc >> (8*i) ).byteValue(); Any other, maybe 'better' ways to do this?
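
    A more compact alternative (a sketch, not from the original code) is to let java.nio.ByteBuffer do the byte packing; it is big-endian by default, which matches network order for RTP. Note the Extension shift of 4 rather than 6, which is what the bit layout in the field comments implies:

        import java.nio.ByteBuffer;

        // Sketch: build the same 12-byte RTP header with ByteBuffer.
        byte[] header = ByteBuffer.allocate(HEADER_SIZE)
                .put((byte) ((Version << 6) | (Padding << 5) | (Extension << 4) | CC))
                .put((byte) ((Marker << 7) | PayloadType))
                .putShort((short) SequenceNumber)
                .putInt(TimeStamp)
                .putInt(Ssrc)
                .array();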

    Read the article

  • UIImageView not updating after view transition

    - by Fabio Ricci
    Hi, I have one main view where I display an image in the viewDidLoad method: ballRect = CGRectMake(posBallX, 144, 32.0f, 32.0f); theBall = [[UIImageView alloc] initWithFrame:ballRect]; [theBall setImage:[UIImage imageNamed:@"ball.png"]]; [self.view addSubview:theBall]; [theBall release]; The value of posBallX is defined and then updated via a custom method, called many times in the same class: theBall.frame = CGRectMake(posBallX, 144, 32, 32); Everything works, but when I go to another view with [self presentModalViewController:viewTwo animated:YES]; and come back with [self presentModalViewController:viewOne animated:YES]; the image is displayed correctly after viewDidLoad is called (I retrieve the values with NSUserDefaults) but no longer from the update method. In NSLog I can even see the new posBallX updating correctly, but the image is simply no longer shown. The same happens with a label that should print the value of posBallX. So things stop working once I come back to viewOne from viewTwo. Any ideas? Thanks so much!

    Read the article

  • Linux: modpost does not build anything

    - by waffleman
    I am having problems getting any kernel modules to build on my machine. Whenever I build a module, modpost always says there are zero modules: MODPOST 0 modules To troubleshoot the problem, I wrote a test module (hello.c): #include <linux/module.h> /* Needed by all modules */ #include <linux/kernel.h> /* Needed for KERN_INFO */ #include <linux/init.h> /* Needed for the macros */ static int __init hello_start(void) { printk(KERN_INFO "Loading hello module...\n"); printk(KERN_INFO "Hello world\n"); return 0; } static void __exit hello_end(void) { printk(KERN_INFO "Goodbye Mr.\n"); } module_init(hello_start); module_exit(hello_end); Here is the Makefile for the module: obj-m = hello.o KVERSION = $(shell uname -r) all: make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) modules clean: make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) clean When I build it on my machine, I get the following output: make -C /lib/modules/2.6.32-27-generic/build M=/home/waffleman/tmp/mod-test modules make[1]: Entering directory `/usr/src/linux-headers-2.6.32-27-generic' CC [M] /home/waffleman/tmp/mod-test/hello.o Building modules, stage 2. MODPOST 0 modules make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-27-generic' When I make the module on another machine, it is successful: make -C /lib/modules/2.6.24-27-generic/build M=/home/somedude/tmp/mod-test modules make[1]: Entering directory `/usr/src/linux-headers-2.6.24-27-generic' CC [M] /home/somedude/tmp/mod-test/hello.o Building modules, stage 2. MODPOST 1 modules CC /home/somedude/tmp/mod-test/hello.mod.o LD [M] /home/somedude/tmp/mod-test/hello.ko make[1]: Leaving directory `/usr/src/linux-headers-2.6.24-27-generic' I looked for any relevant documentation about modpost, but found little. Anyone know how modpost decides what to build? Is there an environment that I am possibly missing? BTW here is what I am running: uname -a Linux waffleman-desktop 2.6.32-27-generic #49-Ubuntu SMP Wed Dec 1 23:52:12 UTC 2010 i686 GNU/Linux

    Read the article

  • [C#] Three System.Drawing methods cause slow drawing or flicker: solutions? Or other options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via the System.Drawing and im having a few problems. I'm holding data in a Queue and i'm drawing(graphing) out that data onto three picture boxes this method fills the picture box then scrolls the graph across. so not to draw on top of the previous drawings (and graduly looking messier) i found 2 solutions to draw the graph. Call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented] although this causes a flicker to appear from the time it takes to do the actual drawing loop. call plot.DrawLine(channelPen[5], j, 140, j, 0); just before each drawline [commented] although this causes the drawing to start ok then slow down very quickly to a crawl as if a wait command had been placed before the draw command. Here is the Code for reference: /*plotx.Clear(BACKGOUNDCOLOR) ploty.Clear(BACKGOUNDCOLOR) plotz.Clear(BACKGOUNDCOLOR)*/ for (int j = 1; j < 599; j++) { if (j > RealTimeBuffer.Count - 1) break; QueueEntity past = RealTimeBuffer.ElementAt(j - 1); QueueEntity current = RealTimeBuffer.ElementAt(j); if (j == 1) { //plotx.DrawLine(channelPen[5], 0, 140, 0, 0); //ploty.DrawLine(channelPen[5], 0, 140, 0, 0); //plotz.DrawLine(channelPen[5], 0, 140, 0, 0); } //plotx.DrawLine(channelPen[5], j, 140, j, 0); plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64), j, (((current.accdata.X - 0x7FFF) / 256) + 64)); //ploty.DrawLine(channelPen[5], j, 140, j, 0); ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64), j, (((current.accdata.Y - 0x7FFF) / 256) + 64)); //plotz.DrawLine(markerPen, j, 140, j, 0); plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94), j, (((current.accdata.Z - 0x7FFF) / 256) + 94)); } Is there any tricks to avoid these overheads? If not would there be any other/better solutions?
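
    One common way around both problems (a sketch under the assumption that plotx/ploty/plotz are Graphics objects obtained from PictureBox controls; the control name pictureBoxX below is made up) is to draw each frame into an off-screen Bitmap and swap the finished bitmap into the PictureBox, so the visible surface is never cleared mid-draw:

        // Sketch: double-buffered plotting for one of the three picture boxes.
        Bitmap frame = new Bitmap(pictureBoxX.Width, pictureBoxX.Height);
        using (Graphics g = Graphics.FromImage(frame))
        {
            g.Clear(BACKGOUNDCOLOR);              // cleared off-screen, so no visible flicker
            for (int j = 1; j < 599; j++)
            {
                // ... the same DrawLine calls as above, drawn on g instead of plotx ...
            }
        }
        Image old = pictureBoxX.Image;
        pictureBoxX.Image = frame;                // one visible update per frame
        if (old != null) old.Dispose();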

    Read the article

  • How can I generate a FindBugs report that shows me the bugs removed between two revisions in the bug database?

    - by David Deschenes
    I am attempting to execute a combination of the FindBugs commands filterBugs and convertXmlToText, against a bug database that I created, to generate a report that shows me all of the bugs removed between two revisions of the system that I am working on. Unfortunately, the resulting report does not show any bug details. It appears that convertXmlToText throws away all bugs that are dead (aka inactive), which is exactly the set of bugs that I'd like to see. Below is what I see when I pass the results of the filterBugs command to the mineBugHistory command:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./mineBugHistory
        seq version time          classes NCSS  added newCode fixed removed retained dead active
        0   r39764  1271169398000 438     74069 0     64      0     0       0        0    64
        1   r39921  1271186932000 441     74333 0     0       22    0       42       0    42
        2   r40149  1271185876000 449     74636 0     0       3     0       39       22   39
        3   r40344  1271180332000 452     74789 0     0       7     0       32       25   32
        4   r40558  1271179612000 452     74806 0     0       1     0       31       32   31
        5   r40793  1271178818000 464     75610 0     0       20    0       11       33   11
        6   r41016  1271176154000 467     75712 0     0       4     0       7        53   7
        7   r41303  1271175616000 481     76931 0     0       7     0       0        57   0
        8   r41558  1271175026000 486     77793 0     0       0     0       0        64   0

    What I'd like to see in the HTML report is the list of the 64 bugs that are shown as active in version r39764 (sequence # 0). Below is the command line that I am using to generate the HTML report:

        build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./convertXmlToText -html:fancy-hist.xsl > ../../../mmfg/bugDB-removed.html

    Read the article

  • Classify data (cell array) based on years in MATLAB

    - by user2991243
    Suppose that we have this cell array of data: a = {43 432 2006; 254 12 2008; 65 35 2000; 64 34 2000; 23 23 2006; 64 2 2010; 32 5 2006; 22 2 2010} The last column of this cell array holds years. I want to classify the data (rows) by year, like this: a_2006 = {43 432 2006; 23 23 2006; 32 5 2006} a_2008 = {254 12 2008} a_2000 = {65 35 2000; 64 34 2000} a_2010 = {64 2 2010; 22 2 2010} The years in column three differ from one cell array to another (this one is just a sample), so I want an automatic method that determines the years and groups the rows as a_yearA, a_yearB, etc. (or some other naming that lets me distinguish the years and access each year's data easily). How can I do this? Thanks.
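
    One way to do this without creating separately named workspace variables (a sketch; the a_2006-style field names are generated dynamically) is to group the rows into a struct keyed by year:

        % Sketch: split the cell array by the year in column 3.
        years = cell2mat(a(:,3));
        yrs   = unique(years);
        byYear = struct();
        for k = 1:numel(yrs)
            byYear.(sprintf('a_%d', yrs(k))) = a(years == yrs(k), :);
        end
        % byYear.a_2000, byYear.a_2006, byYear.a_2008, byYear.a_2010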

    Read the article

  • Django URL Conf Returns Incorrect "Current URL"

    - by natnit
    I have a django app that is mostly done, and the URLs work perfectly when I run it with the manage.py runserver command. However, I've recently tried to get it running via lighttpd, and many links have stopped working. For example: http://mysite.com/races/32 should work, but instead throws this error message. Page not found (404) Request Method: GET Request URL: http://mysite.com/races/32 Using the URLconf defined in racetrack.urls, Django tried these URL patterns, in this order: ^admin/ ^create/$ ^races/$ ^races/(?P<race_id>\d+)/$ ^races/(?P<race_id>\d+)/manage/$ ^races/(?P<text>\w+)/$ ^user/(?P<kol_id>\d+)/$ ^$ ^login/$ ^logout/$ The current URL, 32, didn't match any of these. The request URL is accurate, but the last line (which displays the current URL) is giving 32 instead of races/32 as expected. Here is my urlconf: from django.conf.urls.defaults import * from django.contrib import admin admin.autodiscover() urlpatterns = patterns('racetrack.races.views', (r'^admin/', include(admin.site.urls)), (r'^create/$', 'create'), (r'^races/$', 'index'), (r'^races/(?P<race_id>\d+)/$', 'detail'), (r'^races/(?P<race_id>\d+)/manage/$', 'manage'), (r'^races/(?P<text>\w+)/$', 'index'), (r'^user/(?P<kol_id>\d+)/$', 'user'), # temporary for index page replace with welcome page (r'^$', 'index'), ) urlpatterns += patterns('django.contrib.auth.views', (r'^login/$', 'login', {'template_name': 'races/login.html'}), (r'^logout/$', 'logout', {'next_page': '/'}), ) Thank you.
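
    One thing that is often behind this under lighttpd/FastCGI (an assumption here, not a confirmed diagnosis) is the SCRIPT_NAME/PATH_INFO split handing Django only the last path component ("32") to resolve. Pinning the script name in settings.py is a cheap first check:

        # Sketch: settings.py -- tell Django the app is mounted at the site root,
        # so the full path ("/races/32") is used for URL resolution.
        FORCE_SCRIPT_NAME = ''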

    Read the article

  • Is there a IDE/compiler PC benchmark I can use to compare my PCs performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC, also the benchmark could be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of by what factor the build times would be improved, and also it would be good if the benchmark could incorporate IDE performance (i.e. when editing, using intellisense, opening code files etc) into its result. I currently have an AMD 3800x2, with 2GB RAM on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600, 4GB RAM on Vista 64. And also with other processors, and other RAM sizes... also see whether hard disk performance is a big factor. EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM maximum. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall there is a large overhead in using the 32 bit VS 2008 on 64 bit OS.

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application that runs on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte of RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GByte each, and I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I have thought about the following options: use two big SQL Server 2012 machines to serve the 32 servers with data, with the 32 servers running IIS and providing REST services; denormalize the database and use Redis on each web server, with Booksleeve as the Redis client; use a combination of SQL Server and Redis; use SQL Server 2012 together with Hadoop; or use Hadoop without SQL Server. What is the best way, for a read-only database, to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially with Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table that all queries can be performed against, with all other information from the other tables reachable by a simple key lookup.

    Read the article

  • Ping broadcast on Win XP SP3

    - by PaulH
    I'm trying to ping the broadcast address 255.255.255.255 on WinXP SP3. If I use the command line, I get host error: C:\>ping 255.255.255.255 Ping request could not find host 255.255.255.255. Please check the name and try again. If I try a C++ program using the iphlpapi, IcmpSendEcho() fails and GetLastError returns 11010 IP_REQ_TIMED_OUT. HANDLE h = ::IcmpCreateFile(); IPAddr broadcast = inet_addr( "255.255.255.255" ); BYTE payload[ 32 ] = { 0 }; IP_OPTION_INFORMATION option = { 255, 0, 0, 0, 0 }; // a buffer with room for 32 replies each containing the full payload std::vector< BYTE > replies( 32 * ( sizeof( ICMP_ECHO_REPLY ) + 32 ) ); DWORD res = ::IcmpSendEcho( h, broadcast, payload, sizeof( payload ), &option, &replies[ 0 ], replies.size(), 1000 ); ::IcmpCloseHandle( h ); I can ping the local broadcast 192.168.0.255 with no problem. What do I need to do to ping the global broadcast? Thanks, PaulH
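
    For what it's worth (an assumption, not a verified fix): sending to 255.255.255.255 normally requires a socket with SO_BROADCAST enabled, which the IcmpSendEcho helper API does not expose; a raw ICMP socket lets you opt in, at the cost of building and checksumming the echo request yourself:

        // Sketch: opt a raw ICMP socket in to broadcast sends (WinSock, needs admin rights).
        #include <winsock2.h>

        SOCKET s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
        BOOL enable = TRUE;
        setsockopt(s, SOL_SOCKET, SO_BROADCAST,
                   reinterpret_cast<const char*>(&enable), sizeof(enable));
        // ... build the ICMP echo request, then sendto() the broadcast address ...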

    Read the article

  • Access array of c-structs using Python ctypes

    - by sadris
    I have a C-function that allocates memory at the address passed to and is accessed via Python. The pointer contents does contain an array of structs in the C code, but I am unable to get ctypes to access the array properly beyond the 0th element. How can I get the proper memory offset to be able to access the non-zero elements? Python's ctypes.memset is complaining about TypeErrors if I try to use their ctypes.memset function. typedef struct td_Group { unsigned int group_id; char groupname[256]; char date_created[32]; char date_modified[32]; unsigned int user_modified; unsigned int user_created; } Group; int getGroups(LIBmanager * handler, Group ** unallocatedPointer); ############# python code below: class Group(Structure): _fields_ = [("group_id", c_uint), ("groupname", c_char*256), ("date_created", c_char*32), ("date_modified", c_char*32), ("user_modified", c_uint), ("user_created", c_uint)] myGroups = c_void_p() count = libnativetest.getGroups( nativePointer, byref(myGroups) ) casted = cast( myGroups, POINTER(Group*count) ) for x in range(0,count): theGroup = cast( casted[x], POINTER(Group) ) # this only works for the first entry in the array: print "~~~~~~~~~~" + theGroup.contents.groupname Related: Access c_char_p_Array_256 in Python using ctypes
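
    A sketch of the indexing that usually works here: cast the returned pointer to POINTER(Group) (not POINTER(Group*count)) and index it directly; ctypes then applies the sizeof(Group) stride for each element, so no manual memory offsets are needed:

        # Sketch: walk the C-allocated array of Group structs.
        groups = cast(myGroups, POINTER(Group))
        for x in range(count):
            print(groups[x].groupname)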

    Read the article

  • How do you convert an unsigned int[16] of hexadecimal to an unsigned char array without losing any information?

    - by user1068636
    I have a unsigned int[16] array that when printed out looks like this: 4418703544ED3F688AC208F53343AA59 The code used to print it out is this: for (i = 0; i < 16; i++) printf("%X", CipherBlock[i] / 16), printf("%X",CipherBlock[i] % 16); printf("\n"); I need to pass this unsigned int array "CipherBlock" into a decrypt() method that only takes unsigned char *. How do correctly memcpy everything from the "CipherBlock" array into an unsigned char array without losing information? My understanding is an unsigned int is 4 bytes and unsigned char 1 byte. Since "CipherBlock" is 16 unsigned integers, the total size in bytes = 16 * 4 = 64 bytes. Does this mean my unsigned char[] array needs to be 64 in length? If so, would the following work? unsigned char arr[64] = { '\0' }; memcpy(arr,CipherBlock,64); This does not seem to work. For some reason it only copies the the first byte of "CipherBlock" into "arr". The rest of "arr" is '\0' thereafter.
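
    Judging by the printing loop above, each CipherBlock[i] already holds a single byte value (0-255), so a sketch of the conversion is an element-by-element narrowing rather than a memcpy of the 4-byte ints (a straight memcpy would interleave padding bytes and depend on endianness):

        /* Sketch: pack the 16 byte-sized values into 16 unsigned chars. */
        unsigned char arr[16];
        int i;
        for (i = 0; i < 16; i++)
            arr[i] = (unsigned char)CipherBlock[i];
        /* pass arr and its length (16) to decrypt() */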

    Read the article

  • Ways to access a 32bit DLL from a 64bit exe

    - by bufferz
    I have a project that must be compiled and run in 64 bit mode. Unfortunately, I am required to call upon a DLL that is only available in 32 bit mode, so there's no way I can house everything in a 1 Visual Studio project. I am working to find the best way to wrap the 32 bit DLL in its own exe/service and issue remote (although on the same machine) calls to that exe/service from my 64 bit app. My OS is Win7 Pro 64 bit. The required calls to this 32 bit process are several dozen per second, but low data volume. This is a realtime image analysis application so response time is critical despite low volume. Lots of sending/receiving single primitives. Ideally, I would host a WCF service to house this DLL, but in a 64 bit OS one cannot force the service to run as x86! Source. That is really unfortunate since I timed function calls to the WCF service to be only 4ms on my machine. I have experimented with named pipes is .net. I found them to be 40-50 times slower than WCF (unusable for me). Any other options or suggestions for the best way to approach my puzzle?
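
    Given those constraints, one workable shape (a sketch with made-up type names LegacyDllWrapper / ILegacyDllWrapper, not a verified recommendation) is to host the 32-bit wrapper in a plain console exe compiled for x86 rather than as a Windows service, and expose it to the 64-bit process over a named-pipe WCF binding, generally the fastest WCF transport for same-machine calls:

        // Sketch: x86 console host for the 32-bit DLL wrapper.
        using System;
        using System.ServiceModel;

        class Host
        {
            static void Main()
            {
                using (var host = new ServiceHost(typeof(LegacyDllWrapper),   // hypothetical service class
                           new Uri("net.pipe://localhost/legacy32")))
                {
                    host.AddServiceEndpoint(typeof(ILegacyDllWrapper),        // hypothetical contract
                        new NetNamedPipeBinding(), "");
                    host.Open();
                    Console.ReadLine();   // keep the 32-bit host alive
                }
            }
        }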

    Read the article

  • Cannot open database requested by the login. The login failed. Login failed for user

    - by Cipher
    Hi, I have copied a DB from one my computers and using it here. On trying to open the page which requires the fetching content from DB, on con.open I am getting this exception: Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\cakephp.mdf". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\DATA\cakephp_log.LDF". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Cannot open database "cakephp" requested by the login. The login failed. Login failed for user 'Sarin-PC\Sarin'. I have attached the database from Management Studio Express 2008 and I have also checcked the connection string. Here it is: <connectionStrings> <add name="cn" connectionString="server=.\sqlexpress;database=cakephp;integrated security=true;uid=sarin;pwd=******"/> </connectionStrings> In Visual Studio, when I test the connection, it says "Test connection succeeded". However, there is one strange thing going on. When I login to the Management Studio, there is no + sign with the newly attached database, as shown. If the full WebConfig is reqiured to be seen, I have pasted it here: http://pastebin.com/sVAuN0Ug

    Read the article
