Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • Why is FLD1 loading NaN instead?

    - by Bernd Jendrissek
    I have a one-liner C function that is just return value * pow(1.+rate, -delay); - it discounts a future value to a present value. The interesting part of the disassembly is

        0x080555b9 : neg    %eax
        0x080555bb : push   %eax
        0x080555bc : fildl  (%esp)
        0x080555bf : lea    0x4(%esp),%esp
        0x080555c3 : fldl   0xfffffff0(%ebp)
        0x080555c6 : fld1
        0x080555c8 : faddp  %st,%st(1)
        0x080555ca : fxch   %st(1)
        0x080555cc : fstpl  0x8(%esp)
        0x080555d0 : fstpl  (%esp)
        0x080555d3 : call   0x8051ce0
        0x080555d8 : fmull  0xfffffff8(%ebp)

    While single-stepping through this function, gdb says (rate is 0.02, delay is 2; you can see them on the stack):

        (gdb) si
        0x080555c6      30          return value * pow(1.+rate, -delay);
        (gdb) info float
          R7: Valid   0x4004a6c28f5c28f5c000 +41.68999999999999773
          R6: Valid   0x4004e15c28f5c28f6000 +56.34000000000000341
          R5: Valid   0x4004dceb851eb851e800 +55.22999999999999687
          R4: Valid   0xc0008000000000000000 -2
        =>R3: Valid   0x3ff9a3d70a3d70a3d800 +0.02000000000000000042
          R2: Valid   0x4004ff147ae147ae1800 +63.77000000000000313
          R1: Valid   0x4004e17ae147ae147800 +56.36999999999999744
          R0: Valid   0x4004efb851eb851eb800 +59.92999999999999972

        Status Word:         0x1861   IE PE SF
                               TOP: 3
        Control Word:        0x037f   IM DM ZM OM UM PM
                               PC: Extended Precision (64-bits)
                               RC: Round to nearest
        Tag Word:            0x0000
        Instruction Pointer: 0x73:0x080555c3
        Operand Pointer:     0x7b:0xbff41d78
        Opcode:              0xdd45

    And after the fld1:

        (gdb) si
        0x080555c8      30          return value * pow(1.+rate, -delay);
        (gdb) info float
          R7: Valid   0x4004a6c28f5c28f5c000 +41.68999999999999773
          R6: Valid   0x4004e15c28f5c28f6000 +56.34000000000000341
          R5: Valid   0x4004dceb851eb851e800 +55.22999999999999687
          R4: Valid   0xc0008000000000000000 -2
          R3: Valid   0x3ff9a3d70a3d70a3d800 +0.02000000000000000042
        =>R2: Special 0xffffc000000000000000 Real Indefinite (QNaN)
          R1: Valid   0x4004e17ae147ae147800 +56.36999999999999744
          R0: Valid   0x4004efb851eb851eb800 +59.92999999999999972

        Status Word:         0x1261   IE PE SF C1
                               TOP: 2
        Control Word:        0x037f   IM DM ZM OM UM PM
                               PC: Extended Precision (64-bits)
                               RC: Round to nearest
        Tag Word:            0x0020
        Instruction Pointer: 0x73:0x080555c6
        Operand Pointer:     0x7b:0xbff41d78
        Opcode:              0xd9e8

    After this, everything goes to hell. Things get grossly over- or undervalued, so even if there were no other bugs in my freeciv AI attempt, it would choose all the wrong strategies. Like sending the whole army to the arctic. (Sigh, if only I were getting that far.) I must be missing something obvious, or getting blinded by something, because I can't believe that fld1 should ever possibly fail. Even less that it should fail only after a handful of passes through this function. On earlier passes the FPU correctly loads 1 into ST(0). The bytes at 0x080555c6 definitely encode fld1 - checked with x/... on the running process. What gives?
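
    Reading the two dumps above: the status word already has IE and SF set and the tag word is 0x0000 (all eight registers marked Valid) before the fld1 executes, which points at an x87 register-stack overflow - earlier code has left values parked on the FPU stack, so this push has nowhere to go and produces the Real Indefinite QNaN. A minimal sketch for checking that from C between calls (GCC inline assembly on x86; the helper name is made up):

        #include <stdio.h>

        /* Read the x87 status word so the stack-fault (SF, bit 6) and
           invalid-operation (IE, bit 0) flags can be inspected between calls.
           If they are already set before the fld1, some earlier code left the
           register stack full and the next load overflows it. */
        static unsigned short x87_status(void)
        {
            unsigned short sw;
            __asm__ __volatile__("fnstsw %0" : "=m"(sw));
            return sw;
        }

        int main(void)
        {
            unsigned short sw = x87_status();
            printf("status word: %#06x  IE=%d SF=%d TOP=%d\n",
                   sw, sw & 1, (sw >> 6) & 1, (sw >> 11) & 7);
            return 0;
        }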

    Read the article

  • How to define and work with an array of bits in C?

    - by Eddy
    I want to create a very large array on which I write '0's and '1's. I'm trying to simulate a physical process called random sequential adsorption, where units of length 2, dimers, are deposited onto an n-dimensional lattice at a random location, without overlapping each other. The process stops when there is no more room left on the lattice for depositing more dimers (the lattice is jammed).

    Initially I start with a lattice of zeroes, and the dimers are represented by a pair of '1's. As each dimer is deposited, the site on the left of the dimer is blocked, due to the fact that the dimers cannot overlap. So I simulate this process by depositing a triple of '1's on the lattice. I need to repeat the entire simulation a large number of times and then work out the average coverage %.

    I've already done this using an array of chars for 1D and 2D lattices. At the moment I'm trying to make the code as efficient as possible, before working on the 3D problem and more complicated generalisations. This is basically what the code looks like in 1D, simplified:

        int main()
        {
            /* Define lattice */
            array = (char*)malloc(N * sizeof(char));

            total_c = 0;

            /* Carry out RSA multiple times */
            for (i = 0; i < 1000; i++)
                rand_seq_ads();

            /* Calculate average coverage efficiency at jamming */
            printf("coverage efficiency = %lf", total_c/1000);

            return 0;
        }

        void rand_seq_ads()
        {
            /* Initialise array, initial conditions */
            memset(a, 0, N * sizeof(char));
            available_sites = N;
            count = 0;

            /* While the lattice still has enough room... */
            while(available_sites != 0) {
                /* Generate random site location */
                x = rand();

                /* Deposit dimer (if site is available) */
                if(array[x] == 0) {
                    array[x] = 1;
                    array[x+1] = 1;
                    count += 1;
                    available_sites += -2;
                }

                /* Mark site left of dimer as unavailable (if it's empty) */
                if(array[x-1] == 0) {
                    array[x-1] = 1;
                    available_sites += -1;
                }
            }

            /* Calculate coverage %, and add to total */
            c = count/N;
            total_c += c;
        }

    The actual project I'm doing involves not just dimers but trimers, quadrimers, and all sorts of shapes and sizes (for 2D and 3D). I was hoping that I would be able to work with individual bits instead of bytes, but I've been reading around and as far as I can tell you can only change 1 byte at a time, so either I need to do some complicated indexing or there is a simpler way to do it? Thanks for your answers.
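
    As an illustration of the bit-level indexing the question asks about, here is a minimal sketch of a packed bit array in C, one bit per lattice site instead of one char; the helper names are made up:

        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define BITS_PER_WORD (CHAR_BIT * sizeof(unsigned long))

        /* Set, clear and test a single bit inside an array of words. */
        static void bit_set(unsigned long *w, size_t i)   { w[i / BITS_PER_WORD] |=  1UL << (i % BITS_PER_WORD); }
        static void bit_clear(unsigned long *w, size_t i) { w[i / BITS_PER_WORD] &= ~(1UL << (i % BITS_PER_WORD)); }
        static int  bit_get(const unsigned long *w, size_t i) { return (w[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1UL; }

        int main(void)
        {
            size_t n = 1000000;                                 /* lattice sites */
            size_t nwords = (n + BITS_PER_WORD - 1) / BITS_PER_WORD;
            unsigned long *lattice = calloc(nwords, sizeof *lattice);
            if (!lattice) return 1;

            bit_set(lattice, 42);                               /* deposit at site 42 */
            printf("site 42 = %d, site 43 = %d\n", bit_get(lattice, 42), bit_get(lattice, 43));

            bit_clear(lattice, 42);                             /* clear one site */
            memset(lattice, 0, nwords * sizeof *lattice);       /* or reset between runs */
            free(lattice);
            return 0;
        }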

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How do I load a binary file (.bin) of size 6 MB into a varbinary(MAX) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file, which I previously used to load a .bmp file:

        BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData)
        {
            //Open File
            CFile fileImage;
            CFileStatus fileStatus;
            fileImage.Open(strFilePath,CFile::modeRead);
            fileImage.GetStatus(fileStatus);

            //Allocating memory for data
            ULONG nBytes = (ULONG)fileStatus.m_size;
            HGLOBAL hGlobal = GlobalAlloc(GPTR,nBytes);
            LPVOID lpData = GlobalLock(hGlobal);

            //Putting data into file
            fileImage.Read(lpData,nBytes);

            HRESULT hr;
            _variant_t varChunk;
            long lngOffset = 0;
            UCHAR chData;
            SAFEARRAY FAR *psa = NULL;
            SAFEARRAYBOUND rgsabound[1];

            try
            {
                //Create a safe array to store the BYTES
                rgsabound[0].lLbound = 0;
                rgsabound[0].cElements = nBytes;
                psa = SafeArrayCreate(VT_UI1,1,rgsabound);

                while(lngOffset<(long)nBytes)
                {
                    chData = ((UCHAR*)lpData)[lngOffset];
                    hr = SafeArrayPutElement(psa,&lngOffset,&chData);
                    if(hr != S_OK)
                    {
                        return false;
                    }
                    lngOffset++;
                }
                lngOffset = 0;

                //Assign the safe array to a variant
                varChunk.vt = VT_ARRAY|VT_UI1;
                varChunk.parray = psa;

                hr = pFileData->AppendChunk(varChunk);
                if(hr != S_OK)
                {
                    return false;
                }
            }
            catch(_com_error &e)
            {
                //get info from _com_error
                _bstr_t bstrSource(e.Source());
                _bstr_t bstrDescription(e.Description());
                _bstr_t bstrErrorMessage(e.ErrorMessage());
                _bstr_t bstrErrorCode(e.Error());

                TRACE("Exception thrown for classes generated by #import");
                TRACE("\tCode= %08lx\n",(LPCSTR)bstrErrorCode);
                TRACE("\tCode Meaning = %s\n",(LPCSTR)bstrErrorMessage);
                TRACE("\tSource = %s\n",(LPCSTR)bstrSource);
                TRACE("\tDescription = %s\n",(LPCSTR)bstrDescription);
            }
            catch(...)
            {
                TRACE("***Unhandled Exception***");
            }

            //Free Memory
            GlobalUnlock(lpData);
            return true;
        }

    But when I read the same file back using the GetChunk function it gives me all 0s, although the size I get is the same as the one uploaded. Your help will be highly appreciated.

    Read the article

  • Debugging a basic OpenGL texture fail? (iphone)

    - by Ben
    Hey all, I have a very basic texture map problem in GL on iPhone, and I'm wondering what strategies there are for debugging this kind of thing. (Frankly, just staring at state machine calls and wondering if any of them is wrong or misordered is no way to live-- are there tools for this?)

    I have a 512x512 PNG file that I'm loading up from disk (not specially packed), creating a CGBitmapContext, then calling CGContextDrawImage to get bytes out of it. (This code is essentially stolen from an Apple sample.) I'm trying to map the texture to a "quad", with code that looks essentially like this-- all flat 2D stuff, nothing fancy:

        glEnable(GL_TEXTURE_2D);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        GLfloat vertices[8] = { viewRect.origin.x, viewRect.size.height,
                                viewRect.origin.x, viewRect.origin.y,
                                viewRect.size.width, viewRect.origin.y,
                                viewRect.size.width, viewRect.size.height };
        GLfloat texCoords[8] = { 0, 1.0, 0, 0, 1.0, 0, 1.0, 1.0 };

        glBindTexture(GL_TEXTURE_2D, myTextureRef); // This was previously bound to
        glVertexPointer(2, GL_FLOAT , 0, vertices);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisable(GL_TEXTURE_2D);

    My supposedly textured area comes out just black. I see no debug output from the CG calls to set up the texture. glGetError reports nothing. If I simplify this code block to just draw the verts, but set up a pure color, the quad area lights up exactly as expected. If I clear the whole context immediately beforehand to red, I don't see the red-- which means something is being rendered there, but not the contents of my PNG.

    What could I be doing wrong? And more importantly, what are the right tools and techniques for debugging this sort of thing, because running into this kind of problem and not being able to "step through it" in a debugger in any meaningful way is a bummer. Thanks!
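
    One low-tech technique that often narrows this kind of problem down is wrapping every GL call in an immediate glGetError check, so the first call that fails is named instead of being discovered later. A hedged C sketch of that idea (the macro and function are made up; textureData, width and height stand for the output of the CGBitmapContext step):

        #include <OpenGLES/ES1/gl.h>
        #include <stdio.h>

        /* Run a GL call and report its error code immediately, tagged with the
           call's source text. */
        #define GL_CHECK(call)                                        \
            do {                                                      \
                call;                                                 \
                GLenum e = glGetError();                              \
                if (e != GL_NO_ERROR)                                 \
                    printf("%s failed: 0x%x\n", #call, e);            \
            } while (0)

        GLuint create_checked_texture(const void *textureData, GLsizei width, GLsizei height)
        {
            GLuint tex = 0;
            GL_CHECK(glGenTextures(1, &tex));
            GL_CHECK(glBindTexture(GL_TEXTURE_2D, tex));
            /* Tightly packed RGBA bytes from the CGBitmapContext. */
            GL_CHECK(glPixelStorei(GL_UNPACK_ALIGNMENT, 1));
            GL_CHECK(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                                  GL_RGBA, GL_UNSIGNED_BYTE, textureData));
            GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
            GL_CHECK(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
            return tex;
        }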

    Read the article

  • Running out of memory.. How?

    - by maxdj
    I'm attempting to write a solver for a particular puzzle. It tries to find a solution by trying every possible move one at a time until it finds a solution. The first version tried to solve it depth-first by continually trying moves until it failed, then backtracking, but this turned out to be too slow. I have rewritten it to be breadth-first using a queue structure, but I'm having problems with memory management. Here are the relevant parts:

        int main(int argc, char *argv[])
        {
            ...
            int solved = 0;
            do {
                solved = solver(queue);
            } while (!solved && !pblListIsEmpty(queue));
            ...
        }

        int solver(PblList *queue) {
            state_t *state = (state_t *) pblListPoll(queue);
            if (is_solution(state->pucks)) {
                print_solution(state);
                return 1;
            }

            state_t *state_cp;
            puck new_location;
            for (int p = 0; p < puck_count; p++) {
                for (dir i = NORTH; i <= WEST; i++) {
                    if (!rules(state->pucks, p, i)) continue;
                    new_location = in_dir(state->pucks, p, i);
                    if (new_location.x != -1) {
                        state_cp = (state_t *) malloc(sizeof(state_t));
                        state_cp->move.from = state->pucks[p];
                        state_cp->move.direction = i;
                        state_cp->prev = state;
                        state_cp->pucks = (puck *) malloc (puck_count * sizeof(puck));
                        memcpy(state_cp->pucks, state->pucks, puck_count * sizeof(puck)); /*CRASH*/
                        state_cp->pucks[p] = new_location;
                        pblListPush(queue, state_cp);
                    }
                }
            }
            return 0;
        }

    When I run it I get the error:

        ice(90175) malloc: *** mmap(size=2097152) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        Bus error

    The error happens around iteration 93,000. From what I can tell, the error message is from malloc failing, and the bus error is from the memcpy after it. I have a hard time believing that I'm running out of memory, since each game state is only ~400 bytes. Yet that does seem to be what's happening, seeing as the activity monitor reports that it is using 3.99GB before it crashes. I'm using http://www.mission-base.com/peter/source/ for the queue structure (it's a linked list). Clearly I'm doing something dumb. Any suggestions?
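
    For what it's worth, 3.99 GB is roughly where a 32-bit process tops out, and a breadth-first queue that keeps every generated state (and never detects repeated positions) can reach tens of millions of ~400-byte states quickly. A small counting wrapper around malloc, sketched below, is one cheap way to confirm where the memory is going before redesigning anything; xmalloc and alloc_total are made-up names, and it is meant to replace the raw malloc calls in the solver above:

        #include <stdio.h>
        #include <stdlib.h>

        static size_t alloc_total = 0;   /* bytes handed out so far */

        /* malloc that counts what it gives out and reports the running total
           when an allocation finally fails. */
        static void *xmalloc(size_t n)
        {
            void *p = malloc(n);
            if (!p) {
                fprintf(stderr, "malloc(%zu) failed after %zu bytes allocated\n",
                        n, alloc_total);
                exit(1);
            }
            alloc_total += n;
            return p;
        }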

    Read the article

  • Jumping into argv?

    - by jth
    Hi, I'm experimenting with shellcode and stumbled upon the nop-slide technique. I wrote a little tool that takes the buffer size as a parameter and constructs a buffer like this: [ NOP | SC | RET ], with NOP taking half of the buffer, followed by the shellcode and the rest filled with the (guessed) return address. It's very similar to the tool aleph1 described in his famous paper. My vulnerable test-app is the same as in his paper:

        int main(int argc, char **argv) {
            char little_array[512];
            if(argc>1)
                strcpy(little_array,argv[1]);
            return 0;
        }

    I tested it and well, it works:

        jth@insecure:~/no_nx_no_aslr$ ./victim $(./exploit 604 0)
        $ exit

    But honestly, I have no idea why. Okay, the saved eip was overwritten as intended, but instead of jumping somewhere into the buffer, it jumped into argv, I think. gdb showed the following addresses before strcpy() was called:

        (gdb) i f
        Stack level 0, frame at 0xbffff1f0:
         eip = 0x80483ed in main (victim.c:7); saved eip 0x154b56
         source language c.
         Arglist at 0xbffff1e8, args: argc=2, argv=0xbffff294
         Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0
         Saved registers:
          ebp at 0xbffff1e8, eip at 0xbffff1ec

    Address of little_array:

        (gdb) print &little_array[0]
        $1 = 0xbfffefe8 "\020"

    After strcpy():

        (gdb) i f
        Stack level 0, frame at 0xbffff1f0:
         eip = 0x804840d in main (victim.c:10); saved eip 0xbffff458
         source language c.
         Arglist at 0xbffff1e8, args: argc=-1073744808, argv=0xbffff458
         Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0
         Saved registers:
          ebp at 0xbffff1e8, eip at 0xbffff1ec

    So, what happened here? I used a 604 byte buffer to overflow little_array, so it certainly overwrote the saved ebp, the saved eip, argc and also argv with the guessed address 0xbffff458. Then, after returning, EIP pointed at 0xbffff458. But little_array resides at 0xbfffefe8; that's a difference of 1136 bytes, so it certainly isn't executing little_array. I followed execution with the stepi command and, well, at 0xbffff458 and onwards it executes NOPs and reaches the shellcode. I'm not quite sure why this is happening. First of all, am I correct that it executes my shellcode in argv, not little_array? And where does the loader(?) place argv onto the stack? I thought it follows immediately after argc, but between argc and 0xbffff458 there is a gap of 620 bytes. How is it possible that it successfully "lands" in the NOP pad at address 0xbffff458, which is way above the saved eip at 0xbffff1ec? Can someone clarify this? I have actually no idea why this is working. My test machine is an Ubuntu 9.10 32-bit machine without ASLR. victim has an executable stack, set with execstack -s. Thanks in advance.
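
    A quick way to see what lives where is to print the relevant addresses from a small test program: on a 32-bit Linux stack the argv and environment strings are copied in near the very top of the stack, above main's frame, which is why a guessed return address higher than the saved eip can land in the kernel-made copy of the payload in argv[1] rather than in the local buffer. A minimal sketch:

        #include <stdio.h>

        /* Print where a local array, the argv vector and the argv[1] string
           actually live; on 32-bit Linux without ASLR the argv strings sit at
           higher addresses than main's locals. */
        int main(int argc, char **argv)
        {
            char local[512];
            printf("local buffer : %p\n", (void *)local);
            printf("argv         : %p\n", (void *)argv);
            if (argc > 1)
                printf("argv[1] data : %p\n", (void *)argv[1]);
            return 0;
        }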

    Read the article

  • How to deal with the position in a C# stream

    - by CapsicumDreams
    The (entire) documentation for the Position property on a stream says:

        When overridden in a derived class, gets or sets the position within the current stream.
        The Position property does not keep track of the number of bytes from the stream that have been consumed, skipped, or both.

    That's it. OK, so we're fairly clear on what it doesn't tell us, but I'd really like to know what it in fact does stand for. What is 'the position' for? Why would we want to alter or read it? If we change it - what happens?

    In a practical example, I have a stream that periodically gets written to, and I have a thread that attempts to read from it (ideally ASAP). From reading many SO issues, I reset the Position field to zero to start my reading. Once this is done: Does this affect where the writer to this stream is going to attempt to put the data? Do I need to keep track of the last write position myself? (i.e. if I set the position to zero to read, does the writer begin to overwrite everything from the first byte?) If so, do I need a semaphore/lock around this 'position' field (subclassing, perhaps?) due to my two threads accessing it? If I don't handle this property, does the writer just overflow the buffer?

    Perhaps I don't understand the Stream itself - I'm regarding it as a FIFO pipe: shove data in at one end, and suck it out at the other. If it's not like this, then do I have to keep copying the data past my last read (i.e. from position 0x84 on) back to the start of my buffer?

    I've seriously tried to research all of this for quite some time - but I'm new to .NET. Perhaps the Streams have a long, proud (undocumented) history that everyone else implicitly understands. But for a newcomer, it's like reading the manual to your car, and finding out:

        The accelerator pedal affects the volume of fuel and air sent to the fuel injectors. It does not affect the volume of the entertainment system, or the air pressure in any of the tires, if fitted.

    Technically true, but seriously, what we want to know is that if you mash it to the floor, the car goes faster.

    Read the article

  • Changing RGB color image to Grayscale image using Objective C

    - by user567167
    I was developing an application that changes a color image to a gray image. However, somehow the picture comes out wrong. I don't know what is wrong with the code; maybe one of the parameters I pass in is wrong. Please help.

        UIImage *c = [UIImage imageNamed:@"downRed.png"];
        CGImageRef cRef = CGImageRetain(c.CGImage);
        NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(cRef));

        size_t w = CGImageGetWidth(cRef);
        size_t h = CGImageGetHeight(cRef);
        unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];
        unsigned char* greyPixelData = (unsigned char*) malloc(w*h);

        for (int y = 0; y < h; y++) {
            for(int x = 0; x < w; x++){
                int iter = 4*(w*y+x);
                int red = pixelBytes[iter];
                int green = pixelBytes[iter+1];
                int blue = pixelBytes[iter+2];
                greyPixelData[w*y+x] = (unsigned char)(red*0.3 + green*0.59 + blue*0.11);
                int value = greyPixelData[w*y+x];
            }
        }

        CFDataRef imgData = CFDataCreate(NULL, greyPixelData, w*h);
        CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);

        size_t width = CGImageGetWidth(cRef);
        size_t height = CGImageGetHeight(cRef);
        size_t bitsPerComponent = 8;
        size_t bitsPerPixel = 8;
        size_t bytesPerRow = CGImageGetWidth(cRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGBitmapInfo info = kCGImageAlphaNone;
        CGFloat *decode = NULL;
        BOOL shouldInteroplate = NO;
        CGColorRenderingIntent intent = kCGRenderingIntentDefault;

        CGDataProviderRelease(imgDataProvider);
        CGImageRef throughCGImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, info, imgDataProvider, decode, shouldInteroplate, intent);
        UIImage* newImage = [UIImage imageWithCGImage:throughCGImage];
        CGImageRelease(throughCGImage);
        newImageView.image = newImage;

    Read the article

  • I need help converting a C# string from one character encoding to another?

    - by Handleman
    According to Spolsky I can't call myself a developer, so there is a lot of shame behind this question...

    Scenario: From a C# application, I would like to take a string value from a SQL db and use it as the name of a directory. I have a secure (SSL) FTP server on which I want to set the current directory using the string value from the DB.

    Problem: Everything is working fine until I hit a string value with a "special" character - I seem unable to encode the directory name correctly to satisfy the FTP server.

    The code example below:

      - uses the "special" character é as an example
      - uses WinSCP as an external application for the ftps comms
      - does not show all the code required to set up the Process "_winscp"
      - sends commands to the WinSCP exe by writing to the process standard input
      - for simplicity, does not get the info from the DB, but instead simply declares a string (but I did do a .Equals to confirm that the value from the DB is the same as the declared string)
      - makes three attempts to set the current directory on the FTP server using different string encodings - all of which fail
      - makes an attempt to set the directory using a string that was created from a hand-crafted byte array - which works

        Process _winscp = new Process();
        byte[] buffer;
        string nameFromString = "Sinéad O'Connor";

        _winscp.StandardInput.WriteLine("cd \"" + nameFromString + "\"");

        buffer = Encoding.UTF8.GetBytes(nameFromString);
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.UTF8.GetString(buffer) + "\"");

        buffer = Encoding.ASCII.GetBytes(nameFromString);
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.ASCII.GetString(buffer) + "\"");

        byte[] nameFromBytes = new byte[] { 83, 105, 110, 130, 97, 100, 32, 79, 39, 67, 111, 110, 110, 111, 114 };
        _winscp.StandardInput.WriteLine("cd \"" + Encoding.Default.GetString(nameFromBytes) + "\"");

    The UTF8 encoding changes é to 101 (decimal) but the FTP server doesn't like it. The ASCII encoding changes é to 63 (decimal) but the FTP server doesn't like it. When I represent é as the value 130 (decimal) the FTP server is happy, except I can't find a method that will do this for me (I had to manually construct the string from explicit bytes). Anyone know what I should do to my string to encode the é as 130 and make the FTP server happy, and finally elevate me to level 1 developer by explaining the only single thing a developer should understand?
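
    Byte 130 (0x82) is where 'é' sits in the old IBM/OEM code pages (437/850), which strongly suggests the FTP side expects OEM-encoded directory names rather than UTF-8 or ASCII; in .NET the corresponding lookup would be Encoding.GetEncoding(850). A small C sketch with iconv that checks the mapping (buffer sizes and names are only illustrative):

        #include <iconv.h>
        #include <stdio.h>

        int main(void)
        {
            /* "Sinéad O'Connor" in UTF-8; the string is split so the hex
               escape does not swallow the following letters. */
            char in[] = "Sin\xc3\xa9" "ad O'Connor";
            char out[64];
            char *pin = in, *pout = out;
            size_t inleft = sizeof in - 1, outleft = sizeof out;

            iconv_t cd = iconv_open("CP850", "UTF-8");
            if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }
            if (iconv(cd, &pin, &inleft, &pout, &outleft) == (size_t)-1)
                perror("iconv");
            iconv_close(cd);

            for (char *p = out; p < pout; ++p)      /* 'é' should print as 130 */
                printf("%u ", (unsigned char)*p);
            printf("\n");
            return 0;
        }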

    Read the article

  • Security review of an authenticated Diffie Hellman variant

    - by mtraut
    EDIT: I'm still hoping for some advice on this; I tried to clarify my intentions...

    When I came upon device pairing in my mobile communication framework I studied a lot of papers on this topic and also got some input from previous questions here. But I didn't find a ready-to-implement protocol solution, so I invented a derivative, and as I'm no crypto geek I'm not sure about the security caveats of the final solution. The main questions are:

      - Is SHA256 sufficient as a commit function?
      - Is the addition of the shared secret as an authentication info in the commit string safe?
      - What is the overall security of the 1024 bit group DH? I assume at most a 2^-24 probability of a successful MITM attack (because of the 24 bit challenge). Is this plausible?
      - What may be the most promising attack (besides ripping the device out of my numb, cold hands)?

    This is the algorithm sketch. For first-time pairing, a solution proposed in "Key agreement in peer-to-peer wireless networks" (DH-SC) is implemented. I based it on a commitment derived from:

      - a fixed "UUID" for the communicating entity/role (128 bit, sent at protocol start, before commitment)
      - the public DH key (192 bit private key, based on the 1024 bit Oakley group)
      - a 24 bit random challenge

    The commit is computed using SHA256:

        c = sha256( UUID || DH pub || Chall )

    Both parties exchange this commitment, then open it and transfer the plain content of the above values. The 24 bit random is displayed to the user for manual authentication. The DH session key (128 bytes, see above) is computed. When the user opts for persistent pairing, the session key is stored with the remote UUID as a shared secret.

    The next time the devices connect, the commit is computed by additionally hashing the previous DH session key before the random challenge. For sure it is not transferred when opening.

        c = sha256( UUID || DH pub || DH sess || Chall )

    Now the user is not bothered with authenticating when the local party can derive the same commitment using its own, stored previous DH session key. After a successful connection the new DH session key becomes the new shared secret.

    As this does not exactly fit the protocols I found so far (and as such their security proofs), I'd be very interested to get an opinion from some more crypto-enabled guys here. BTW, I did read about the "EKE" protocol, but I'm not sure what the extra security level is.
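
    For concreteness, here is a minimal C sketch (using OpenSSL's SHA256 routines) of the commitment described above, c = SHA256(UUID || DHpub || DHsess || Chall); all buffer names and lengths are placeholders, and the one point it tries to make explicit is that the concatenation order and lengths must be fixed and unambiguous on both sides:

        #include <openssl/sha.h>
        #include <stddef.h>

        /* Compute the pairing commitment; dh_sess is NULL on first-time
           pairing, non-NULL on re-pairing with a stored shared secret. */
        static void commit(const unsigned char *uuid, size_t uuid_len,
                           const unsigned char *dh_pub, size_t pub_len,
                           const unsigned char *dh_sess, size_t sess_len,
                           const unsigned char chall[3],
                           unsigned char out[SHA256_DIGEST_LENGTH])
        {
            SHA256_CTX ctx;
            SHA256_Init(&ctx);
            SHA256_Update(&ctx, uuid, uuid_len);
            SHA256_Update(&ctx, dh_pub, pub_len);
            if (dh_sess)
                SHA256_Update(&ctx, dh_sess, sess_len);
            SHA256_Update(&ctx, chall, 3);        /* 24-bit random challenge */
            SHA256_Final(out, &ctx);
        }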

    Read the article

  • BufferedReader no longer buffering after a while?

    - by BobTurbo
    Sorry I can't post code, but I have a BufferedReader with 50000000 bytes set as the buffer size. It works as you would expect for half an hour: the HDD light flashes every two minutes or so, reading in the big chunk of data, and then goes quiet again as the CPU processes it. But after about half an hour (this is a very big file), the HDD starts thrashing as if it is reading one byte at a time. It is still in the same loop and I think I checked free RAM to rule out swapping (heap size is default).

    Probably won't get any helpful answers, but worth a try. OK, I have changed the heap size to 768mb and still nothing. There is plenty of free memory and java.exe is only using about 300mb.

    Now I have profiled it and the heap stays at about 200MB, well below what is available. CPU stays at 50%. Yet the HDD starts thrashing like crazy. I have... no idea. I am going to rewrite the whole thing in C#, that is my solution. Here is the code (it is just a throw-away script, not pretty):

        BufferedReader s = null;
        HashMap<String, Integer> allWords = new HashMap<String, Integer>();
        HashSet<String> pageWords = new HashSet<String>();
        long[] pageCount = new long[78592];
        long pages = 0;

        Scanner wordFile = new Scanner(new BufferedReader(new FileReader("allWords.txt")));
        while (wordFile.hasNext()) {
            allWords.put(wordFile.next(), Integer.parseInt(wordFile.next()));
        }

        s = new BufferedReader(new FileReader("wikipedia/enwiki-latest-pages-articles.xml"), 50000000);

        StringBuilder words = new StringBuilder();
        String nextLine = null;
        while ((nextLine = s.readLine()) != null) {
            if (a.matcher(nextLine).matches()) {
                continue;
            } else if (b.matcher(nextLine).matches()) {
                continue;
            } else if (c.matcher(nextLine).matches()) {
                continue;
            } else if (d.matcher(nextLine).matches()) {
                nextLine = s.readLine();
                if (e.matcher(nextLine).matches()) {
                    if (f.matcher(s.readLine()).matches()) {
                        pageWords.addAll(Arrays.asList(words.toString().toLowerCase().split("[^a-zA-Z]")));
                        words.setLength(0);
                        pages++;
                        for (String word : pageWords) {
                            if (allWords.containsKey(word)) {
                                pageCount[allWords.get(word)]++;
                            } else if (!word.isEmpty() && allWords.containsKey(word.substring(0, word.length() - 1))) {
                                pageCount[allWords.get(word.substring(0, word.length() - 1))]++;
                            }
                        }
                        pageWords.clear();
                    }
                }
            } else if (g.matcher(nextLine).matches()) {
                continue;
            }
            words.append(nextLine);
            words.append(" ");
        }

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 Billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns.

    My naïve plan for a database schema is:

    runs table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?

    Read the article

  • Use JQuery to target unwrapped text inside a div

    - by Chris
    I'm trying to find a way to wrap just the inner text of an element; I don't want to target any other inner DOM elements. For example:

        <ul>
          <li class="this-one">
            this is my item
            <ul>
              <li>
                this is a sub element
              </li>
            </ul>
          </li>
        </ul>

    I want to use jQuery to turn this into:

        <ul>
          <li class="this-one">
            <div class="tree-item-text">this is my item</div>
            <ul>
              <li>
                <div class="tree-item-text">this is a sub element</div>
              </li>
            </ul>
          </li>
        </ul>

    A little background: I need to make an in-house tree structure UI element, so I'm using the UL structure to represent this. But I don't want developers to have to do any special formatting to use the widget.

    Update: I just wanted to add that the purpose of this is that I want to add a click listener to be able to expand the elements under the li. However, since those elements are within the li, the click listener will activate even when clicking on the children. So I want to attach it to the text instead; to do this the text needs to be targetable, which is why I want to wrap it in a div of its own.

    So far I've come up with wrapping all the inner elements of the li in a div and then moving all inner DOM elements back to the original parent. But this code is pretty heavy for something that might be much simpler and not require so much DOM manipulation.

    EDIT: I want to share the first pseudo alternative I came up with, but I think it is very tasking for what I want to accomplish.

        var innerTextThing = $("ul.tree ul").parents("li").wrapInner("<div class='tree-node-text'>");
        $(innerTextThing.find(".tree-node-text")).each(function(){
            $(this).after($(this).children("ul"));
        });

    Answered: I ended up doing the following. FYI, I only have to worry about FF and IE compatibility, so it's untested in other browsers.

        //this will wrap all li textNodes in a div so we can target them.
        $(that).find("li").contents()
            .filter(function () {
                return this.nodeType == 3;
            }).each(function () {
                if (
                    //these are for IE and FF compatibility
                    (this.textContent != undefined && this.textContent.trim() != "") ||
                    (this.innerText != undefined && this.innerText.trim() != "")
                ) {
                    $(this).wrap("<div class='tree-node-text'>");
                }
            });

    Read the article

  • How does System.TraceListener prepend message with process name?

    - by btlog
    I have been looking at using System.Diagnostics.Trace for doing logging in a very basic app. Generally it does all I need it to do. The downside is that if I call

        Trace.TraceInformation("Some info");

    the output is "SomeApp.Exe Information: 0: Some info". Initially this entertained me, but no longer. I would like to just output "Some info" to the console. So I thought writing a custom TraceListener, rather than using the inbuilt ConsoleTraceListener, would solve the problem. I can see a specific format, so I want all the text after the second colon. Here is my attempt to see if this would work:

        class LogTraceListener : TraceListener
        {
            public override void Write(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.Write(message);
            }

            public override void WriteLine(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.WriteLine(message);
            }
        }

    If I output the value of firstColon it is always -1. If I put a breakpoint in, the message is always just "Some info". Where does all the other information come from? So I had a look at the call stack at the point just before Console.WriteLine was called. The method that called my WriteLine method is:

        System.dll!System.Diagnostics.TraceListener.TraceEvent(System.Diagnostics.TraceEventCache eventCache, string source, System.Diagnostics.TraceEventType eventType, int id, string message) + 0x33 bytes

    When I use Reflector to look at this method it all seems pretty straightforward. I can't see any code that changes the value of the string after I have sent it to Console.WriteLine. The only method that could possibly change the underlying string value is a call to UnsafeNativeMethods.EventWriteString, which has a parameter that is a pointer to the message. Does anyone understand what is going on here and whether I can change the output to be just my message without the additional fluff? It seems like evil magic that I can pass a string "Some info" to Console.WriteLine (or any other method for that matter) and the string that is output is different.

    Read the article

  • Why does getting page content from a URL always return "no permission"?

    - by tiendv
    I have a method that returns the page content of a link, but when it runs it always returns "Do not Permission access URL". Please check it; here is the code that returns the page content string:

        public static String getPageContent(String targetURL) throws Exception {
            Hashtable contentHash = new Hashtable();
            URL url;
            URLConnection conn;
            // The data streams used to read from and write to the URL connection.
            DataOutputStream out;
            DataInputStream in;
            // String returned as the result.
            String returnString = "";

            // Create the URL object and make a connection to it.
            url = new URL(targetURL);
            conn = url.openConnection();

            // check out permission of access URL
            if (conn.getPermission() != null) {
                returnString = "Do not Permission access URL ";
            } else {
                // Set connection parameters. We need to perform input and output,
                // so set both as true.
                conn.setDoInput(true);
                conn.setDoOutput(true);
                // Disable use of caches.
                conn.setUseCaches(false);
                // Set the content type we are POSTing. We impersonate it as
                // encoded form data
                conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

                // get the output stream.
                out = new DataOutputStream(conn.getOutputStream());

                String content = "";
                // Create a single String value pairs for all the keys
                // in the Hashtable passed to us.
                Enumeration e = contentHash.keys();
                boolean first = true;
                while (e.hasMoreElements()) {
                    // For each key and value pair in the hashtable
                    Object key = e.nextElement();
                    Object value = contentHash.get(key);

                    // If this is not the first key-value pair in the hashtable,
                    // concantenate an "&" sign to the constructed String
                    if (!first)
                        content += "&";

                    // append to a single string. Encode the value portion
                    content += (String) key + "=" + URLEncoder.encode((String) value);
                    first = false;
                }

                // Write out the bytes of the content string to the stream.
                out.writeBytes(content);
                out.flush();
                out.close();

                // check if can't read from URL
                // Read input from the input stream.
                in = new DataInputStream(conn.getInputStream());
                String str;
                while (null != ((str = in.readLine()))) {
                    returnString += str + "\n";
                }
                in.close();
            }
            // return the string that was read.
            return returnString;
        }

    Read the article

  • Download a file using Cocoa

    - by dododedodonl
    Hi all, I want to download a file to the Downloads folder. I searched Google for this and found the NSURLDownload class. I've read the page in the dev center and created this code (with some copying and pasting):

        @implementation Downloader

        @synthesize downloadResponse;

        - (void)startDownloadingURL:(NSString*)downloadUrl destenation:(NSString*)destenation {
            // create the request
            NSURLRequest *theRequest=[NSURLRequest requestWithURL:[NSURL URLWithString:downloadUrl]
                                                      cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                  timeoutInterval:60.0];

            // create the connection with the request and start loading the data
            NSURLDownload *theDownload=[[NSURLDownload alloc] initWithRequest:theRequest delegate:self];
            if (!theDownload) {
                NSLog(@"Download could not be made...");
            }
        }

        - (void)download:(NSURLDownload *)download decideDestinationWithSuggestedFilename:(NSString *)filename {
            NSString *destinationFilename;
            NSString *homeDirectory=NSHomeDirectory();

            destinationFilename=[[homeDirectory stringByAppendingPathComponent:@"Desktop"]
                                    stringByAppendingPathComponent:filename];
            [download setDestination:destinationFilename allowOverwrite:NO];
        }

        - (void)download:(NSURLDownload *)download didFailWithError:(NSError *)error {
            // release the connection
            [download release];

            // inform the user
            NSLog(@"Download failed! Error - %@ %@",
                  [error localizedDescription],
                  [[error userInfo] objectForKey:NSErrorFailingURLStringKey]);
        }

        - (void)downloadDidFinish:(NSURLDownload *)download {
            // release the connection
            [download release];

            // do something with the data
            NSLog(@"downloadDidFinish");
        }

        - (void)setDownloadResponse:(NSURLResponse *)aDownloadResponse {
            [aDownloadResponse retain];
            [downloadResponse release];
            downloadResponse = aDownloadResponse;
        }

        - (void)download:(NSURLDownload *)download didReceiveResponse:(NSURLResponse *)response {
            // reset the progress, this might be called multiple times
            bytesReceived = 0;

            // retain the response to use later
            [self setDownloadResponse:response];
        }

        - (void)download:(NSURLDownload *)download didReceiveDataOfLength:(unsigned)length {
            long long expectedLength = [[self downloadResponse] expectedContentLength];

            bytesReceived = bytesReceived+length;

            if (expectedLength != NSURLResponseUnknownLength) {
                percentComplete = (bytesReceived/(float)expectedLength)*100.0;
                NSLog(@"Percent - %f",percentComplete);
            } else {
                NSLog(@"Bytes received - %d",bytesReceived);
            }
        }

        -(NSURLRequest *)download:(NSURLDownload *)download willSendRequest:(NSURLRequest *)request redirectResponse:(NSURLResponse *)redirectResponse {
            NSURLRequest *newRequest=request;
            if (redirectResponse) {
                newRequest=nil;
            }
            return newRequest;
        }

        @end

    But my problem now is that the file doesn't appear on the desktop as specified. And I want to put it in Downloads, not on the desktop... What do I have to do?

    Read the article

  • C# performance varying due to memory

    - by user1107474
    Hope this is a valid post here; it's a combination of C# issues and hardware.

    I am benchmarking our server because we have found problems with the performance of our quant library (written in C#). I have simulated the same performance issues with some simple C# code performing very heavy memory usage. The code below is in a function which is spawned from a threadpool, up to a maximum of 32 threads (because our server has 4 CPUs x 8 cores each). This is all on .Net 3.5.

    The problem is that we are getting wildly differing performance. I run the below function 1000 times. The average time taken for the code to run could be, say, 3.5s, but the fastest will only be 1.2s and the slowest will be 7s - for the exact same function! I have graphed the memory usage against the timings and there doesn't appear to be any correlation with the GC kicking in.

    One thing I did notice is that when running in a single thread the timings are identical and there is no wild deviation. I have also tested CPU-bound algorithms and the timings are identical too. This has made us wonder if the memory bus just cannot cope. I was wondering, could this be another .NET or C# problem, or is it something related to our hardware? Would this be the same experience if I had used C++, or Java? We are using 4x Intel X7550 with 32GB RAM. Is there any way around this problem in general?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        List<byte> list1 = new List<byte>();
        List<byte> list2 = new List<byte>();
        List<byte> list3 = new List<byte>();

        int Size1 = 10000000;
        int Size2 = 2 * Size1;
        int Size3 = Size1;

        for (int i = 0; i < Size1; i++)
        {
            list1.Add(57);
        }

        for (int i = 0; i < Size2; i = i + 2)
        {
            list2.Add(56);
        }

        for (int i = 0; i < Size3; i++)
        {
            byte temp = list1.ElementAt(i);
            byte temp2 = list2.ElementAt(i);
            list3.Add(temp);
            list2[i] = temp;
            list1[i] = temp2;
        }

        watch.Stop();

    (The code is just meant to stress the memory.) I would include the threadpool code, but we used a non-standard threadpool library.

    EDIT: I have reduced "Size1" to 100000, which basically doesn't use much memory, and I still get a lot of jitter. This suggests it's not the amount of memory being transferred, but the frequency of memory grabs?

    Read the article

  • How to access camera.java in an onClick event?

    - by Srikanth Naidu
    Hi, I am making an app which takes a photo on a button click. I have camera.java, which operates the camera and takes the photo. How do I call it in the event below?

        public void onClick(DialogInterface arg0, int arg1) {
            setContentView(R.layout.startcamera);
        }

    Camera.java:

        package neuro.com;

        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.io.IOException;

        import android.app.Activity;
        import android.hardware.Camera;
        import android.hardware.Camera.PictureCallback;
        import android.hardware.Camera.ShutterCallback;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.FrameLayout;

        public class CameraDemo extends Activity {

            private static final String TAG = "CameraDemo";
            Camera camera;
            Preview preview;
            Button buttonClick;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.startcamera);

                preview = new Preview(this);
                ((FrameLayout) findViewById(R.id.preview)).addView(preview);

                buttonClick = (Button) findViewById(R.id.buttonClick);
                buttonClick.setOnClickListener( new OnClickListener() {
                    public void onClick(View v) {
                        preview.camera.takePicture(shutterCallback, rawCallback, jpegCallback);
                    }
                });

                Log.d(TAG, "onCreate'd");
            }

            ShutterCallback shutterCallback = new ShutterCallback() {
                public void onShutter() {
                    Log.d(TAG, "onShutter'd");
                }
            };

            /** Handles data for raw picture */
            PictureCallback rawCallback = new PictureCallback() {
                public void onPictureTaken(byte[] data, Camera camera) {
                    Log.d(TAG, "onPictureTaken - raw");
                }
            };

            /** Handles data for jpeg picture */
            PictureCallback jpegCallback = new PictureCallback() {
                public void onPictureTaken(byte[] data, Camera camera) {
                    FileOutputStream outStream = null;
                    try {
                        // write to local sandbox file system
                        // outStream = CameraDemo.this.openFileOutput(String.format("%d.jpg", System.currentTimeMillis()), 0);
                        // Or write to sdcard
                        outStream = new FileOutputStream(String.format("/sdcard/%d.jpg", System.currentTimeMillis()));
                        outStream.write(data);
                        outStream.close();
                        Log.d(TAG, "onPictureTaken - wrote bytes: " + data.length);
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                    }
                    Log.d(TAG, "onPictureTaken - jpeg");
                }
            };
        }

    Read the article

  • non blocking TCP-acceptor not reading from socket

    - by Abruzzo Forte e Gentile
    I have the code below implementing a non-blocking TCP acceptor. Clients are able to connect without any problem and the writing seems to be occurring as well, but the acceptor doesn't read anything from the socket and the call to read() blocks indefinitely. Am I using some wrong setting for the acceptor?

    Kind Regards
    AFG

        int main(){
            create_programming_socket();
            poll_programming_connect();
            while(1){
                poll_programming_read();
            }
        }

        int create_programming_socket(){
            int cnt = 0;
            p_listen_socket = socket( AF_INET, SOCK_STREAM, 0 );
            if( p_listen_socket < 0 ){
                return 1;
            }
            int flags = fcntl( p_listen_socket, F_GETFL, 0 );
            if( fcntl( p_listen_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){
                return 1;
            }
            bzero( (char*)&p_serv_addr, sizeof(p_serv_addr) );
            p_serv_addr.sin_family = AF_INET;
            p_serv_addr.sin_addr.s_addr = INADDR_ANY;
            p_serv_addr.sin_port = htons( p_port );
            if( bind( p_listen_socket, (struct sockaddr*)&p_serv_addr , sizeof(p_serv_addr) ) < 0 ) {
                return 1;
            }
            listen( p_listen_socket, 5 );
            return 0;
        }

        int poll_programming_connect(){
            int retval = 0;
            static socklen_t p_clilen = sizeof(p_cli_addr);
            int res = accept( p_listen_socket, (struct sockaddr*)&p_cli_addr, &p_clilen );
            if( res > 0 ){
                p_conn_socket = res;
                int flags = fcntl( p_conn_socket, F_GETFL, 0 );
                if( fcntl( p_conn_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){
                    retval = 1;
                }else{
                    p_connected = true;
                }
            }else if( res == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) {
                //printf( "poll_sock(): accept(c_listen_socket) would block\n");
            }else{
                retval = 1;
            }
            return retval;
        }

        int poll_programming_read(){
            int retval = 0;
            bzero( p_buffer, 256 );
            int numbytes = read( p_conn_socket, p_buffer, 255 );
            if( numbytes > 0 ) {
                fprintf( stderr, "poll_sock(): read() read %d bytes\n", numbytes );
                pkt_struct2_t tx_buf;
                int fred;
                int i;
            }
            else if( numbytes == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) {
                //printf( "poll_sock(): read() would block\n");
            }
            else {
                close( p_conn_socket );
                p_connected = false;
                retval = 1;
            }
            return retval;
        }
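
    One thing that stands out in main() is that poll_programming_connect() runs exactly once, before any client has had a chance to connect, so p_conn_socket may never receive a valid descriptor and the later read() operates on whatever value it happened to hold. A hedged sketch of the usual shape of such a loop, accepting and reading only when poll() reports the sockets ready (reusing the globals and helpers above):

        #include <poll.h>
        #include <stdbool.h>
        #include <sys/socket.h>

        extern int p_listen_socket, p_conn_socket;
        extern bool p_connected;
        int poll_programming_connect(void);
        int poll_programming_read(void);

        void serve_loop(void)
        {
            for (;;) {
                struct pollfd fds[2];
                int n = 0;
                fds[n].fd = p_listen_socket; fds[n].events = POLLIN; n++;
                if (p_connected) { fds[n].fd = p_conn_socket; fds[n].events = POLLIN; n++; }

                if (poll(fds, n, 1000) <= 0)
                    continue;                       /* timeout or EINTR */
                if (fds[0].revents & POLLIN)
                    poll_programming_connect();     /* accept() will not block now */
                if (n > 1 && (fds[1].revents & POLLIN))
                    poll_programming_read();        /* data is waiting to be read */
            }
        }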

    Read the article

  • Is there anything wrong with my texture loading method?

    - by José Joel.
    I'm a noob in OpenGL and trying to learn as much as possible. I'm using this method to load my OpenGL textures, loading every .png as RGBA4444. Am I doing anything incorrect?

        - (void)loadTexture:(NSString*)nombre {
            CGImageRef textureImage =[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:nombre ofType:nil]].CGImage;
            if (textureImage == nil) {
                NSLog(@"Failed to load texture image");
                return;
            }

            textureWidth = NextPowerOfTwo(CGImageGetWidth(textureImage));
            textureHeight = NextPowerOfTwo(CGImageGetHeight(textureImage));
            imageSizeX= CGImageGetWidth(textureImage);
            imageSizeY= CGImageGetHeight(textureImage);

            // Times 4 because each pixel needs 4 bytes, RGBA
            GLubyte *textureData = (GLubyte *)calloc(1,textureWidth * textureHeight * 4);

            CGContextRef textureContext = CGBitmapContextCreate(textureData, textureWidth,textureHeight,8, textureWidth * 4,CGImageGetColorSpace(textureImage),kCGImageAlphaPremultipliedLast );
            CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);

            //Convert "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
            void *tempData = malloc(textureWidth * textureHeight * 2);
            unsigned int* inPixel32 = (unsigned int*)textureData;
            unsigned short* outPixel16 = (unsigned short*)tempData;
            for(int i = 0; i < textureWidth * textureHeight ; ++i, ++inPixel32)
                *outPixel16++ =
                    ((((*inPixel32 >> 0) & 0xFF) >> 4) << 12) | // R
                    ((((*inPixel32 >> 8) & 0xFF) >> 4) << 8) |  // G
                    ((((*inPixel32 >> 16) & 0xFF) >> 4) << 4) | // B
                    ((((*inPixel32 >> 24) & 0xFF) >> 4) << 0);  // A

            free(textureData);
            textureData = tempData;

            CGContextRelease(textureContext);

            glGenTextures(1, &textures[0]);
            glBindTexture(GL_TEXTURE_2D, textures[0]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4 , textureData);
            free(textureData);

            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        }

    And this is my dealloc method:

        - (void)dealloc {
            glDeleteTextures(1,textures);
            [super dealloc];
        }

    Read the article

  • XML Outputting - PHP vs JS vs Anything Else?

    - by itsphil
    Hi everyone, I am working on developing a travel website which uses XML APIs to get the data. However, I am relatively new to XML and outputting it. I have been experimenting with using PHP to output a test XML file, but currently the furthest I've got is to only output a few records. As the question states, I need to know which technology will be best for this project. Below I've included some points to take into consideration:

      - The website is going to be a large-sized, heavy-traffic site (expedia/lastminute size)
      - My skillset is PHP (intermediate/high skilled) & Javascript (intermediate/high skilled)

    Below is an example of the XML that the API is outputting:

        <?xml version="1.0"?>
        <response method="###" success="Y">
          <errors>
          </errors>
          <request>
            <auth password="test" username="test" />
            <method action="###" sitename="###" />
          </request>
          <results>
            <line id="6" logourl="###" name="Line 1" smalllogourl="###">
              <ships>
                <ship id="16" name="Ship 1" />
                <ship id="453" name="Ship 2" />
                <ship id="468" name="Ship 3" />
                <ship id="356" name="Ship 4" />
              </ships>
            </line>
            <line id="63" logourl="###" name="Line 2" smalllogourl="###">
              <ships>
                <ship id="492" name="Ship 1" />
                <ship id="454" name="Ship 2" />
                <ship id="455" name="Ship 3" />
                <ship id="421" name="Ship 4" />
                <ship id="401" name="Ship 5" />
                <ship id="404" name="Ship 6" />
                <ship id="405" name="Ship 7" />
                <ship id="406" name="Ship 8" />
                <ship id="407" name="Ship 9" />
                <ship id="408" name="Ship 10" />
              </ships>
            </line>
            <line id="41" logourl="###">
              <ships>
                <ship id="229" name="Ship 1" />
                <ship id="230" name="Ship 2" />
                <ship id="231" name="Ship 3" />
                <ship id="445" name="Ship 4" />
                <ship id="570" name="Ship 5" />
                <ship id="571" name="Ship 6" />
              </ships>
            </line>
          </results>
        </response>

    If possible, when suggesting which technology is best for this project, could you provide some getting-started guides or other information? It would be very much appreciated. Thank you for taking the time to read this.

    Read the article

  • Router Alert options on IGMPv2 packets

    - by Scakko
    I'm trying to forge an IGMPv2 Membership Request packet and send it on a RAW socket. RFC 3376 states:

        IGMP messages are encapsulated in IPv4 datagrams, with an IP protocol number of 2. Every IGMP message described in this document is sent with an IP Time-to-Live of 1, IP Precedence of Internetwork Control (e.g., Type of Service 0xc0), and carries an IP Router Alert option [RFC-2113] in its IP header

    So the IP_ROUTER_ALERT flag must be set. I'm trying to forge only the strict minimum of the packet (e.g. only the IGMP header & payload), so I'm using setsockopt to edit the IP options. Some useful variables:

        #define C_IP_MULTICAST_TTL 1
        #define C_IP_ROUTER_ALERT  1

        int sockfd = 0;
        int ecsockopt = 0;
        int bytes_num = 0;
        int ip_multicast_ttl = C_IP_MULTICAST_TTL;
        int ip_router_alert = C_IP_ROUTER_ALERT;

    Here's how I open the RAW socket:

        sock_domain = AF_INET;
        sock_type = SOCK_RAW;
        sock_proto = IPPROTO_IGMP;
        if ((ecsockopt = socket(sock_domain,sock_type,sock_proto)) < 0) {
            printf("Error %d: Can't open socket.\n", errno);
            return 1;
        } else {
            printf("** Socket opened.\n");
        }
        sockfd = ecsockopt;

    Then I set the TTL and Router Alert option:

        // Set the sent packets TTL
        if((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_MULTICAST_TTL, &ip_multicast_ttl, sizeof(ip_multicast_ttl))) < 0) {
            printf("Error %d: Can't set TTL.\n", ecsockopt);
            return 1;
        } else {
            printf("** TTL set.\n");
        }

        // Set the Router Alert
        if((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_ROUTER_ALERT, &ip_router_alert, sizeof(ip_router_alert))) < 0) {
            printf("Error %d: Can't set Router Alert.\n", ecsockopt);
            return 1;
        } else {
            printf("** Router Alert set.\n");
        }

    The setsockopt of IP_ROUTER_ALERT returns 0. After forging the packet, I send it with sendto in this way:

        // Send the packet
        if((bytes_num = sendto(sockfd, packet, packet_size, 0, (struct sockaddr*) &mgroup1_addr, sizeof(mgroup1_addr))) < 0) {
            printf("Error %d: Can't send Membership report message.\n", bytes_num);
            return 1;
        } else {
            printf("** Membership report message sent. (bytes=%d)\n",bytes_num);
        }

    The packet is sent, but the IP_ROUTER_ALERT option (checked with Wireshark) is missing. Am I doing something wrong? Are there other methods to set the IP_ROUTER_ALERT option? Thanks in advance.
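
    A note that may explain the Wireshark capture: on Linux, IP_ROUTER_ALERT on a raw socket asks the kernel to deliver passing RA-tagged datagrams to that socket; it does not add the option to outgoing headers. One way to get the option onto sent packets without building the whole IP header yourself (this is a sketch, not tested against your setup) is to install the raw option bytes from RFC 2113 with IP_OPTIONS:

        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>

        /* Attach the Router Alert IP option (0x94 0x04 0x00 0x00) to every
           datagram sent on this socket. */
        static int set_router_alert(int sockfd)
        {
            unsigned char ra[4] = { 0x94, 0x04, 0x00, 0x00 };
            if (setsockopt(sockfd, IPPROTO_IP, IP_OPTIONS, ra, sizeof ra) < 0) {
                perror("setsockopt(IP_OPTIONS)");
                return -1;
            }
            return 0;
        }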

    Read the article

  • C# - periodic data reading and Thread.Sleep()

    - by CaldonCZE
    Hello, my C# application reads data from a special USB device. The data are read as so-called "messages", each of them having 24 bytes. The number of messages that must be read per second may differ (the maximum frequency is quite high, about 700 messages per second), but the application must read them all. The only way to read the messages is by calling the function "ReadMessage", which returns one message read from the device. The function is from an external DLL and I cannot modify it.

    My solution: I've got a separate thread that runs all the time during the program run and its only job is to read the messages in a cycle. The received messages are then processed in the main application thread. The function executed in the "reading thread" is the following:

        private void ReadingThreadFunction() {
            int cycleCount;
            try {
                while (this.keepReceivingMessages) {
                    cycleCount++;
                    TRxMsg receivedMessage;
                    ReadMessage(devHandle, out receivedMessage);
                    //...do something with the message...
                }
            }
            catch {
                //... catch exception if reading failed...
            }
        }

    This solution works fine and all messages are correctly received. However, the application consumes too many resources; the CPU of my computer runs at more than 80%. Therefore I'd like to reduce it. Thanks to the "cycleCount" variable I know that the "cycling speed" of the thread is about 40,000 cycles per second. This is unnecessarily high, since I need to receive at most 700 messages/sec (and the device has a buffer for about 100 messages, so the cycle speed can be even a little lower).

    I tried to reduce the cycle speed by suspending the thread for 1 ms with the Thread.Sleep(1); command. Of course, this didn't work: the cycle speed became about 70 cycles/second, which was not enough to read all messages. I know that this attempt was silly, and that putting the thread to sleep and then waking it up takes much longer than 1 ms. However, I don't know what else to do: Is there some other way to slow the thread execution down (to reduce CPU consumption) other than Thread.Sleep? Or am I completely wrong, and should I use something different for this task instead of a Thread, maybe Threading.Timer or ThreadPool?

    Thanks a lot in advance for all suggestions. This is my first question here and I'm a beginner at using threads, so please excuse me if it's not clear enough.

    Read the article

  • sqlite eating up memory on iPhone when doing inserts

    - by kviksilver
    I am having a problem with inserting data into an sqlite database.

        char *update="INSERT OR REPLACE INTO ct_subject (id,id_parent, title, description, link, address, phone, pos_lat, pos_long, no_votes, avg_vote, photo, id_comerc, id_city, placement, type, timestamp, mail) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?);";
        sqlite3_stmt *stmt;

        if(sqlite3_prepare_v2(database, update, -1, &stmt, nil) == SQLITE_OK){
            sqlite3_bind_int(stmt, 1, [[[newCategories objectAtIndex:i] valueForKey:@"id"] intValue]);
            sqlite3_bind_int(stmt, 2, [[[newCategories objectAtIndex:i] valueForKey:@"id_parent"] intValue]);
            sqlite3_bind_text(stmt, 3, [[[newCategories objectAtIndex:i] valueForKey:@"title"] UTF8String], -1, NULL);
            sqlite3_bind_text(stmt, 4, [[[newCategories objectAtIndex:i] valueForKey:@"description"] UTF8String], -1, NULL);
            sqlite3_bind_text(stmt, 5, [[[newCategories objectAtIndex:i] valueForKey:@"link"] UTF8String], -1, NULL);
            sqlite3_bind_text(stmt, 6, [[[newCategories objectAtIndex:i] valueForKey:@"address"] UTF8String], -1, NULL);
            sqlite3_bind_text(stmt, 7, [[[newCategories objectAtIndex:i] valueForKey:@"phone"] UTF8String], -1, NULL);
            sqlite3_bind_text(stmt, 8, [[[newCategories objectAtIndex:i] valueForKey:@"pos_lat"] UTF8String], -1, NULL);
            sqlite3_bind_text(stmt, 9, [[[newCategories objectAtIndex:i] valueForKey:@"pos_long"] UTF8String], -1, NULL);
            sqlite3_bind_int(stmt, 10, [[[newCategories objectAtIndex:i] valueForKey:@"no_votes"] intValue]);
            sqlite3_bind_text(stmt, 11, [[[newCategories objectAtIndex:i] valueForKey:@"avg_vote"] UTF8String], -1, NULL);

            if ([[[newCategories objectAtIndex:i] valueForKey:@"photo"] length]!=0) {
                NSMutableString *webUrl = (NSMutableString *)[[NSMutableString alloc] initWithString:@"http://www.crotune.com/public/images/subjects/"];
                [webUrl appendString:[[newCategories objectAtIndex:i] valueForKey:@"photo"]];
                UIImage *myImage = [self getWebImage:webUrl];
                if(myImage != nil){
                    sqlite3_bind_blob(stmt, 12, [UIImagePNGRepresentation(myImage) bytes], [UIImagePNGRepresentation(myImage) length], NULL);
                } else {
                    sqlite3_bind_blob(stmt, 12, nil, -1, NULL);
                }
                [webUrl release];
                [myImage release];
            } else {
                sqlite3_bind_blob(stmt, 12, nil, -1, NULL);
                //NSLog(@" not adding an image");
            }

            sqlite3_bind_int(stmt, 13, [[[newCategories objectAtIndex:i] valueForKey:@"id_comerc"] intValue]);
            sqlite3_bind_int(stmt, 14, [[[newCategories objectAtIndex:i] valueForKey:@"id_city"] intValue]);
            sqlite3_bind_int(stmt, 15, [[[newCategories objectAtIndex:i] valueForKey:@"placement"] intValue]);
            sqlite3_bind_int(stmt, 16, [[[newCategories objectAtIndex:i] valueForKey:@"type"] intValue]);
            sqlite3_bind_int(stmt, 17, [[[newCategories objectAtIndex:i] valueForKey:@"timestamp"] intValue]);
            sqlite3_bind_text(stmt, 18, [[[newCategories objectAtIndex:i] valueForKey:@"mail"] UTF8String], -1, NULL);
        }

        if (sqlite3_step(stmt) != SQLITE_DONE) {
            NSLog(@"%s", sqlite3_errmsg(database));
            NSAssert1(0,@"can't update table %s", errorMsg);
        } else {
            NSLog(@"Inserted %d",i);
        }
        sqlite3_finalize(stmt);

    What happens is that it starts to eat memory until it finally quits... On a memory warning I close and open the database again, I have set the cache size to 50 as mentioned in some posts here, and tried putting the query into a statement - same result... It just gobbles memory, and the app quits after 300 inserts on iPhone or somewhere around 900 inserts on iPad... Any help would be appreciated.
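
    For reference, here is a minimal C sketch of the pattern that usually keeps a long sqlite3 insert loop flat in memory: prepare the statement once, bind blobs with SQLITE_TRANSIENT so sqlite copies the buffer and the caller can release it right away, and reset the statement every row (on the Objective-C side, a per-iteration autorelease pool around the UIImagePNGRepresentation work is also worth checking, since it is called twice per image and returns autoreleased data). The function and parameter names are placeholders:

        #include <sqlite3.h>
        #include <stdio.h>

        int insert_blobs(sqlite3 *db, unsigned char **blobs, int *blob_lens, int count)
        {
            sqlite3_stmt *stmt = NULL;
            if (sqlite3_prepare_v2(db,
                    "INSERT OR REPLACE INTO ct_subject (id, photo) VALUES (?, ?)",
                    -1, &stmt, NULL) != SQLITE_OK)
                return -1;

            for (int i = 0; i < count; i++) {
                sqlite3_bind_int(stmt, 1, i);
                /* SQLITE_TRANSIENT: sqlite makes its own copy of the blob. */
                sqlite3_bind_blob(stmt, 2, blobs[i], blob_lens[i], SQLITE_TRANSIENT);
                if (sqlite3_step(stmt) != SQLITE_DONE)
                    fprintf(stderr, "%s\n", sqlite3_errmsg(db));
                sqlite3_reset(stmt);            /* reuse the prepared statement */
                sqlite3_clear_bindings(stmt);
                /* the blob buffer for row i can be freed here */
            }
            sqlite3_finalize(stmt);
            return 0;
        }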

    Read the article

  • GET request, iOS

    - by phnmnn
    I need to make this GET request:

        http://api.testmy.co.il/api/sync?BID=1049&ClientCode=3847&Discount=2.34&Service=0&Items=[{"Name":"Tax","Price":"2.11","Quantity":"1","SerialID":"1","Remarks":"","Toppings":""}]&Payments=[]

    In a browser I get the response:

        { "Success":true, "Atava":[], "Pending":[], "CallWaiter":false }

    But in iOS it doesn't work. I tried:

        NSString *requestedURL=[NSString stringWithFormat:@"http://api.testmy.co.il/api/sync?BID=%i&ClientCode=%i&Discount=2.34&Service=0&Items=[{\"Name\":\"Tax\",\"Price\":\"2.11\",\"Quantity\":\"1\",\"SerialID\":\"1\",\"Remarks\":\"\",\"Toppings\":\"\"}]&Payments=[]",BID,num];
        NSURL *url = [NSURL URLWithString:requestedURL];
        NSURLResponse *response;
        NSData *GETReply = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:nil];
        NSString *theReply = [[NSString alloc] initWithBytes:[GETReply bytes] length:[GETReply length] encoding: NSASCIIStringEncoding];
        NSLog(@"Reply: %@", theReply);

    OR

        NSString *requestedURL=[NSString stringWithFormat:@"http://api.testmy.co.il/api/sync?BID=%i&ClientCode=%i&Discount=2.34&Service=0&Items=[{'Name':'Tax','Price':'2.11','Quantity':'1','SerialID':'1','Remarks':'','Toppings':''}]&Payments=[]",BID,num];

    OR

        NSMutableDictionary *params = [[NSMutableDictionary alloc] init];
        [params setObject:@"Tax" forKey:@"Name"];
        [params setObject:@"2.11" forKey:@"Price"];
        [params setObject:@"1" forKey:@"Quantity"];
        [params setObject:@"1" forKey:@"SerialID"];
        [params setObject:@"" forKey:@"Remarks"];
        [params setObject:@"" forKey:@"Toppings"];

        NSData *jsonData = nil;
        NSString *jsonString = nil;

        if([NSJSONSerialization isValidJSONObject:params])
        {
            jsonData = [NSJSONSerialization dataWithJSONObject:params options:0 error:nil];
            jsonString = [[NSString alloc]initWithData:jsonData encoding:NSUTF8StringEncoding];
            NSLog(@"%@",jsonString);
        }

        NSString *get=[NSString stringWithFormat: @"&Items=%@", jsonString];
        NSData *getData = [get dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];

        NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:url];
        [request setHTTPMethod:@"GET"];
        [request setTimeoutInterval:8];
        [request setHTTPBody:getData];
        [request setValue:@"application/json;charset=UTF-8" forHTTPHeaderField:@"Content-Type"];

    Nothing works. How can I fix it? Sorry for my bad English.
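
    A detail worth checking first: the query string contains characters ({, ", [) that are not legal in a URL, so the Items value has to be percent-encoded before the URL is built - with the unescaped string, [NSURL URLWithString:] can simply return nil. As an illustration of the encoding step only, here is a hedged C sketch using libcurl's curl_easy_escape (the endpoint is the one from the question; everything else is illustrative):

        #include <curl/curl.h>
        #include <stdio.h>

        int main(void)
        {
            CURL *curl = curl_easy_init();
            if (!curl) return 1;

            const char *items = "[{\"Name\":\"Tax\",\"Price\":\"2.11\",\"Quantity\":\"1\","
                                "\"SerialID\":\"1\",\"Remarks\":\"\",\"Toppings\":\"\"}]";
            char *escaped = curl_easy_escape(curl, items, 0);   /* percent-encode */
            if (escaped) {
                printf("http://api.testmy.co.il/api/sync?BID=1049&ClientCode=3847"
                       "&Discount=2.34&Service=0&Items=%s&Payments=%%5B%%5D\n", escaped);
                curl_free(escaped);
            }
            curl_easy_cleanup(curl);
            return 0;
        }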

    Read the article
