Search Results

Search found 4304 results on 173 pages for 'bytes'.

  • Is my method for avoiding dynamic_cast<> faster than dynamic_cast<> itself?

    - by ereOn
    Hi, I was answering a question a few minutes ago and it raised another one for me: In one of my projects, I do some network message parsing. The messages are in the form: [1 byte message type][2 bytes payload length][x bytes payload] The format and content of the payload are determined by the message type. I have a class hierarchy based on a common class Message. To instantiate my messages, I have a static parsing method which gives back a Message* depending on the message type byte. Something like: Message* parse(const char* frame) { // This is sample code, in real life I obviously check that the buffer // is not NULL, and the size, and so on. switch(frame[0]) { case 0x01: return new FooMessage(); case 0x02: return new BarMessage(); } // Throw an exception here because the message type is unknown. } I sometimes need to access the methods of the subclasses. Since my network message handling must be fast, I decided to avoid dynamic_cast<> and added a method to the base Message class that gives back the message type. Depending on this return value, I use a static_cast<> to the right child type instead. I did this mainly because I was told once that dynamic_cast<> was slow. However, I don't know exactly what it really does or how slow it is, so my method might be just as slow (or slower) but far more complicated. What do you think of this design? Is it common? Is it really faster than using dynamic_cast<>? Any detailed explanation of what happens under the hood when one uses dynamic_cast<> is welcome!
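    A minimal sketch of the two approaches being compared (the class names and the enum are illustrative, not taken from the post):

        enum MessageType { FOO = 0x01, BAR = 0x02 };

        class Message {
        public:
            virtual ~Message() {}
            virtual MessageType type() const = 0;   // hand-rolled type tag
        };

        class FooMessage : public Message {
        public:
            MessageType type() const { return FOO; }
            void fooOnly() { /* subclass-specific work */ }
        };

        void handle(Message* m)
        {
            // Tag + static_cast: no RTTI lookup, but only safe while type() is kept truthful.
            if (m->type() == FOO)
                static_cast<FooMessage*>(m)->fooOnly();

            // dynamic_cast: consults the RTTI data and returns NULL on a mismatch.
            if (FooMessage* f = dynamic_cast<FooMessage*>(m))
                f->fooOnly();
        }

    A third option worth weighing is a virtual function on Message that each subclass overrides, which removes the cast entirely.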

  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed but, without the proxy sending a message to the Silverlight application, it executes the receive completed handler with an empty buffer ('\0's). Is there something I'm doing wrong? It is causing a major memory leak. this._rawBuffer = new Byte[this.BUFFER_SIZE]; SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs(); receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length); receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete); this._client.ReceiveAsync(receiveArgs); if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive) { // Read the current bytes from the stream buffer int bytesRecieved = this._client.ReceiveBufferSize; // If there are bytes to process else the connection is lost if (bytesRecieved > 0) { try { //Find out what we just received string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0)); //Take out any trailing empty characters from the message messagePart = messagePart.Replace('\0'.ToString(), ""); //Concatenate our current message with any leftovers from previous receipts string fullMessage = _theRest + messagePart; int seperator; //While the index of the seperator (LINE_END defined & initiated as private member) while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0) { //Pull out the first message available (up to the seperator index string message = fullMessage.Substring(0, seperator); //Queue up our new message _messageQueue.Enqueue(message); //Take out our line end character fullMessage = fullMessage.Remove(0, seperator + 1); } //Save whatever was NOT a full message to the private variable used to store the rest _theRest = fullMessage; //Empty the queue of messages if there are any while (this._messageQueue.Count > 0) { ... } } catch (Exception e) { throw e; } // Wait for a new message if (this._isClosing != true) Receive(); } } Thanks in advance.

  • Valgrind 'noise', what does it mean?

    - by Chris Huang-Leaver
    When I used Valgrind to help debug an app I was working on, I noticed a huge amount of noise which seems to be complaining about the standard libraries. As a test I did this: echo 'int main() {return 0;}' | gcc -x c -o test - Then I did this: valgrind ./test ==1096== Use of uninitialised value of size 8 ==1096== at 0x400A202: _dl_new_object (in /lib64/ld-2.10.1.so) ==1096== by 0x400607F: _dl_map_object_from_fd (in /lib64/ld-2.10.1.so) ==1096== by 0x4007A2C: _dl_map_object (in /lib64/ld-2.10.1.so) ==1096== by 0x400199A: map_doit (in /lib64/ld-2.10.1.so) ==1096== by 0x400D495: _dl_catch_error (in /lib64/ld-2.10.1.so) ==1096== by 0x400189E: do_preload (in /lib64/ld-2.10.1.so) ==1096== by 0x4003CCD: dl_main (in /lib64/ld-2.10.1.so) ==1096== by 0x401404B: _dl_sysdep_start (in /lib64/ld-2.10.1.so) ==1096== by 0x4001471: _dl_start (in /lib64/ld-2.10.1.so) ==1096== by 0x4000BA7: (within /lib64/ld-2.10.1.so) * large block of similar snipped * ==1096== Use of uninitialised value of size 8 ==1096== at 0x4F35FDD: (within /lib64/libc-2.10.1.so) ==1096== by 0x4F35B11: (within /lib64/libc-2.10.1.so) ==1096== by 0x4A1E61C: _vgnU_freeres (vg_preloaded.c:60) ==1096== by 0x4E5F2E4: __run_exit_handlers (in /lib64/libc-2.10.1.so) ==1096== by 0x4E5F354: exit (in /lib64/libc-2.10.1.so) ==1096== by 0x4E48A2C: (below main) (in /lib64/libc-2.10.1.so) ==1096== ==1096== ERROR SUMMARY: 3819 errors from 298 contexts (suppressed: 876 from 4) ==1096== malloc/free: in use at exit: 0 bytes in 0 blocks. ==1096== malloc/free: 0 allocs, 0 frees, 0 bytes allocated. ==1096== For counts of detected errors, rerun with: -v ==1096== Use --track-origins=yes to see where uninitialised values come from ==1096== All heap blocks were freed -- no leaks are possible. You can see the full result here: http://pastebin.com/gcTN8xGp I have two questions: firstly, is there a way to suppress all the noise? --show-below-main is set to no by default, but there doesn't appear to be a --show-after-main equivalent.

  • how to send binary data within an xml string

    - by daemonkid
    I want to send a binary file to a .NET C# component in the following XML format: <BinaryFileString fileType='pdf'> <!--binary file data string here--> </BinaryFileString> In the component that is called, I will use the above XML string and convert the binary string received within the BinaryFileString tag into a file as specified by the fileType='' attribute. The file type could be doc/pdf/xls/rtf. I have the code in the calling application to get the bytes out of the file to be sent. How do I prepare it to be sent with XML tags wrapped around it? I want the application to send out a string to the component and not a byte stream. This is because there is no way I can decipher the file type [pdf/doc/xls] by just looking at the byte stream. Hence the XML string with the fileType attribute. Any ideas on this? The method for extracting the bytes is below: FileStream fs = new FileStream(_filePath, FileMode.Open, FileAccess.Read); using (Stream input = fs) { byte[] buffer = new byte[8192]; int bytesRead; while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0) { } } return buffer; Thanks.

  • uploading image & getting back from database

    - by Anup Prakash
    Here is a set of code that pushes an image to the database and fetches it back: <!-- <?php error_reporting(0); // Connect to database $errmsg = ""; if (! @mysql_connect("localhost","root","")) { $errmsg = "Cannot connect to database"; } @mysql_select_db("test"); $q = <<<CREATE create table image ( pid int primary key not null auto_increment, title text, imgdata longblob, friend text) CREATE; @mysql_query($q); // Insert any new image into database if (isset($_POST['submit'])) { move_uploaded_file($_FILES['imagefile']['tmp_name'],"latest.img"); $instr = fopen("latest.img","rb"); $image = addslashes(fread($instr,filesize("latest.img"))); if (strlen($instr) < 149000) { $image_query="insert into image (title, imgdata,friend) values (\"". $_REQUEST['title']. "\", \"". $image. "\",'".$_REQUEST['friend']."')"; mysql_query ($image_query) or die("query error"); } else { $errmsg = "Too large!"; } $resultbytes=''; // Find out about latest image $query = "select * from image where pid=1"; $result = @mysql_query("$query"); $resultrow = @mysql_fetch_assoc($result); $gotten = @mysql_query("select * from image order by pid desc limit 1"); if ($row = @mysql_fetch_assoc($gotten)) { $title = htmlspecialchars($row[title]); $bytes = $row[imgdata]; $resultbytes = $row[imgdata]; $friend=$row[friend]; } else { $errmsg = "There is no image in the database yet"; $title = "no database image available"; // Put up a picture of our training centre $instr = fopen("../wellimg/ctco.jpg","rb"); $bytes = fread($instr,filesize("../wellimg/ctco.jpg")); } if ($resultbytes!='') { echo $resultbytes; } } ?> <html> <head> <title>Upload an image to a database</title> </head> <body bgcolor="#FFFF66"> <form enctype="multipart/form-data" name="file_upload" method="post"> <center> <div id="image" align="center"> <h2>Heres the latest picture</h2> <font color=red><?php echo $errmsg; ?></font> <b><?php echo $title ?></center> </div> <hr> <h2>Please upload a new picture and title</h2> <table align="center"> <tr> <td>Select image to upload: </td> <td><input type="file" name="imagefile"></td> </tr> <tr> <td>Enter the title for picture: </td> <td><input type="text" name="title"></td> </tr> <tr> <td>Enter your friend's name:</td> <td><input type="text" name="friend"></td> </tr> <tr> <td><input type="submit" name="submit" value="submit"></td> <td></td> </tr> </table> </form> </body> </html> --> The above set of code has one problem: whenever I press the "submit" button, it just displays the image on a page but leaves out all the HTML code, even any new line or message after the // Printing image on browser echo $resultbytes; //************************// So for this I put this set of code inside the HTML. This is the other sample code: <!-- <?php error_reporting(0); // Connect to database $errmsg = ""; if (! @mysql_connect("localhost","root","")) { $errmsg = "Cannot connect to database"; } @mysql_select_db("test"); $q = <<<CREATE create table image ( pid int primary key not null auto_increment, title text, imgdata longblob, friend text) CREATE; @mysql_query($q); // Insert any new image into database if (isset($_POST['submit'])) { move_uploaded_file($_FILES['imagefile']['tmp_name'],"latest.img"); $instr = fopen("latest.img","rb"); $image = addslashes(fread($instr,filesize("latest.img"))); if (strlen($instr) < 149000) { $image_query="insert into image (title, imgdata,friend) values (\"". $_REQUEST['title']. "\", \"". $image. "\",'".$_REQUEST['friend']."')"; mysql_query ($image_query) or die("query error"); } else { $errmsg = "Too large!"; } $resultbytes=''; // Find out about latest image $query = "select * from image where pid=1"; $result = @mysql_query("$query"); $resultrow = @mysql_fetch_assoc($result); $gotten = @mysql_query("select * from image order by pid desc limit 1"); if ($row = @mysql_fetch_assoc($gotten)) { $title = htmlspecialchars($row[title]); $bytes = $row[imgdata]; $resultbytes = $row[imgdata]; $friend=$row[friend]; } else { $errmsg = "There is no image in the database yet"; $title = "no database image available"; // Put up a picture of our training centre $instr = fopen("../wellimg/ctco.jpg","rb"); $bytes = fread($instr,filesize("../wellimg/ctco.jpg")); } } ?> <html> <head> <title>Upload an image to a database</title> </head> <body bgcolor="#FFFF66"> <form enctype="multipart/form-data" name="file_upload" method="post"> <center> <div id="image" align="center"> <h2>Heres the latest picture</h2> <?php if ($resultbytes!='') { // Printing image on browser echo $resultbytes; } ?> <font color=red><?php echo $errmsg; ?></font> <b><?php echo $title ?></center> </div> <hr> <h2>Please upload a new picture and title</h2> <table align="center"> <tr> <td>Select image to upload: </td> <td><input type="file" name="imagefile"></td> </tr> <tr> <td>Enter the title for picture: </td> <td><input type="text" name="title"></td> </tr> <tr> <td>Enter your friend's name:</td> <td><input type="text" name="friend"></td> </tr> <tr> <td><input type="submit" name="submit" value="submit"></td> <td></td> </tr> </table> </form> </body> </html> --> But in this version it shows the image as special characters and digits. 1) So please help me print the image with some HTML code, so that I can display it in my form. 2) Is there any way to convert the database image into a real image, so that I can store it on my hard disk and reference it from an <img> tag? Please help me.

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe: >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm :ss') + $line) | Write-Host; } while ($line -ne $null)" 23:56:45time<1ms 23:56:45 23:56:46time<1ms 23:56:46 23:56:47time<1ms 23:56:47 23:56:47 Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above: Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data: Reply from ::1: time<1ms Reply from ::1: time<1ms Reply from ::1: time<1ms Reply from ::1: time<1ms Ping statistics for ::1: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms But using the same API from C# receives all the data sent to the process (excluding some newline differences). Code: namespace ConOutTime { class Program { static void Main (string[] args) { string s; while ((s = Console.ReadLine ()) != null) { if (s.Length > 0) // don't write time for empty lines Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s); } } } } Output: 00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data: 00:44:30 Reply from ::1: time<1ms 00:44:31 Reply from ::1: time<1ms 00:44:32 Reply from ::1: time<1ms 00:44:33 Reply from ::1: time<1ms 00:44:33 Ping statistics for ::1: 00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), 00:44:33 Approximate round trip times in milli-seconds: 00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms So, when calling the same API from PowerShell instead of C#, many parts of StdIn get 'eaten'. Is the PowerShell host reading strings from StdIn even though I didn't use 'PowerShell.exe -Command -'?

  • Converting an AnsiString to a Unicode String

    - by jrodenhi
    I'm converting a D2006 program to D2010. I have a value stored in a single byte per character string in my database and I need to load it into a control that has a LoadFromStream, so my plan was to write the string to a stream and use that with LoadFromStream. But it did not work. In studying the problem, I see an issue that tells me that I don't really understand how conversion from AnsiString to Unicode string works. Here is some code I am puzzling over: oStringStream := TStringStream.Create(sBuffer); sUnicodeStream := oPayGrid.sStream; //explicit conversion to unicode string iSize1 := StringElementSize(oPaygrid.sStream); iSize2 := StringElementSize(sUnicodeStream); oStringStream.WriteString(sUnicodeStream); When I get to the last line, iSize1 does equal 1 and iSize2 does equal 2, so that part is what I understood from my reading. But, on the last line, after I write the string to the stream, and look at the Bytes Property of the string, it shows this (the string starts as '16,159'): (49 {$31}, 54 {$36}, 44 {$2C}, 49 {$31}, 53 {$35}, 57 {$39} ... I was expecting that it might look something like (49 {$31}, 00 {$00}, 54 {$36}, 00 {$00}, 44 {$2C}, 00 {$00}, 49 {$31}, 00 {$00}, 53 {$35}, 00 {$00}, 57 {$39}, 00 {$00} ... I'm not getting the right results out of the LoadFromStream because it is reading from the stream two bytes at a time, but the data it is receiving is not arranged that way. What is it that I should do to give the LoadFromStream a well formed stream of data based on a unicode string? Thank you for your help.

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster: //Cluster is the starting cluster of the file //Size is the size (in bytes) of the file //BPC is the number of bytes per cluster //NumClust is the number of clusters in the file //EOF is the last cluster of the file’s FAT chain DWORD NumClust = ceil( (float)(Size / BPC) ) DWORD EOF = Cluster + NumClust; This algorithm works fine for everything except files whose size happens to be exactly a multiple of the cluster size, in which case they end up being one cluster too long. I thought about it for a while but am at a loss as to how to fix it. It seems like it should be simple but somehow it is surprisingly tricky. What formula would work for files of any size?
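    Note that in the snippet above, Size / BPC is already integer division before the cast, so ceil() has nothing left to round up. A common way to round up without floating point is integer ceiling division; a small sketch (the function names are mine, and it assumes the last cluster of an N-cluster chain starting at Cluster is Cluster + N - 1):

        #include <cstdint>

        // Clusters needed for a file of Size bytes with BPC bytes per cluster,
        // using integer ceiling division instead of ceil().
        std::uint32_t clustersFor(std::uint32_t Size, std::uint32_t BPC)
        {
            return (Size + BPC - 1) / BPC;   // Size == 0 yields 0; special-case if needed
        }

        // Last cluster of the chain: first cluster plus the count, minus one.
        std::uint32_t lastCluster(std::uint32_t Cluster, std::uint32_t NumClust)
        {
            return Cluster + NumClust - 1;
        }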

  • std::string.resize() and std::string.length()

    - by dreamlax
    I'm relatively new to C++ and I'm still getting to grips with the C++ Standard Library. To help transition from C, I want to format a std::string using printf-style formatters. I realise stringstream is a more type-safe approach, but I find myself finding printf-style much easier to read and deal with (at least, for the time being). This is my function: using namespace std; string formatStdString(const string &format, ...) { va_list va; string output; size_t needed; size_t used; va_start(va, format); needed = vsnprintf(&output[0], 0, format.c_str(), va); output.resize(needed + 1); // for null terminator?? used = vsnprintf(&output[0], output.capacity(), format.c_str(), va); // assert(used == needed); va_end(va); return output; } This works, kinda. A few things that I am not sure about are: Do I need to make room for a null terminator, or is this unnecessary? Is capacity() the right function to call here? I keep thinking length() would return 0 since the first character in the string is a '\0'. Occasionally while writing this string's contents to a socket (using its c_str() and length()), I have null bytes popping up on the receiving end, which is causing a bit of grief, but they seem to appear inconsistently. If I don't use this function at all, no null bytes appear.
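    For reference, a sketch of the usual two-pass vsnprintf approach (C++11; the signature takes const char* to keep the sketch short, and the key points are copying the va_list before reusing it and trimming the terminator off afterwards so length() reflects only the payload):

        #include <cstdarg>
        #include <cstdio>
        #include <string>

        std::string formatStdString(const char* format, ...)
        {
            va_list va, va2;
            va_start(va, format);
            va_copy(va2, va);                       // each vsnprintf call consumes a list

            int needed = std::vsnprintf(nullptr, 0, format, va);   // first pass: measure only
            std::string output;
            if (needed > 0) {
                output.assign(static_cast<std::size_t>(needed) + 1, '\0'); // room for the terminator
                std::vsnprintf(&output[0], output.size(), format, va2);    // second pass: write
                output.resize(needed);              // drop the terminator from length()
            }

            va_end(va2);
            va_end(va);
            return output;
        }

    One likely source of the stray '\0' bytes in the original is that resize(needed + 1) makes the terminator part of length(), so it gets written to the socket along with the payload.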

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file, and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any other number than 8192 when I test the transfer on my local machine, once I try it over a network, it seems like 8191 is the largest number I can use. When I try to use any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option nor the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
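    Not an explanation of the 8191 effect, but for the closing question, a minimal sketch of a chunked send loop over a non-blocking Winsock socket (it assumes WSAStartup has been called and the socket is connected, and it waits for writability rather than spinning on WSAEWOULDBLOCK):

        #include <winsock2.h>   // link with ws2_32.lib

        // Push len bytes from buf through a non-blocking socket.
        bool sendAll(SOCKET s, const char* buf, int len)
        {
            int sent = 0;
            while (sent < len) {
                int n = send(s, buf + sent, len - sent, 0);
                if (n != SOCKET_ERROR) { sent += n; continue; }
                if (WSAGetLastError() != WSAEWOULDBLOCK) return false;

                fd_set wfds;
                FD_ZERO(&wfds);
                FD_SET(s, &wfds);
                if (select(0, NULL, &wfds, NULL, NULL) == SOCKET_ERROR) return false;
            }
            return true;
        }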

  • Is the size of a struct required to be an exact multiple of the alignment of that struct?

    - by Steve314
    Once again, I'm questioning a longstanding belief. Until today, I believed that the alignment of the following struct would normally be 4 and the size would normally be 5... struct example { int m_Assume_32_Bits; char m_Assume_8_Bit_Bytes; }; Because of this assumption, I have data structure code that uses offsetof to determine the distance in bytes between two adjacent items in an array. Today, I spotted some old code that was using sizeof where it shouldn't, couldn't understand why I hadn't had bugs from it, coded up a unit test - and the test surprised me by passing. A bit of investigation showed that the sizeof the type I used for the test (similar to the struct above) was an exact multiple of the alignment - ie 8 bytes. It had padding after the final member. Here is an example of why I never expected this... struct example2 { example m_Example; char m_Why_Cant_This_Be_At_Offset_6_Bytes; }; A bit of Googling showed examples that make it clear that this padding after the final member is allowed - for example http://en.wikipedia.org/wiki/Data_structure_alignment#Data_structure_padding (the "or at the end of the structure" bit). This is a bit embarrassing, as I recently posted this comment - Use of struct padding (my first comment to that answer). What I can't seem to determine is whether this padding to an exact multiple of the alignment is guaranteed by the C++ standard, or whether it is just something that is permitted and that some (but maybe not all) compilers do. So - is the size of a struct required to be an exact multiple of the alignment of that struct according to the C++ standard? If the C standard makes different guarantees, I'm interested in that too, but the focus is on C++.
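    The usual argument is that arrays require it: element i+1 must start exactly sizeof(T) bytes after element i and still be correctly aligned. A small C++11 sketch that spot-checks this on a given compiler (a compile-time check, not a proof from the standard):

        struct example {
            int  m_Assume_32_Bits;
            char m_Assume_8_Bit_Bytes;
        };   // typically sizeof == 8 and alignof == 4, i.e. three bytes of tail padding

        static_assert(sizeof(example) % alignof(example) == 0,
                      "size is a whole number of alignment units");
        static_assert(sizeof(example[2]) == 2 * sizeof(example),
                      "array elements are exactly sizeof(example) apart");

        int main() { return 0; }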

  • JNI Stream binary data from C++ to Java

    - by Cliff
    I need help passing binary data into Java. I'm trying to use jbyteArray, but when the data gets into Java it appears corrupt. Can somebody give me a hand? Here's a snippet of some example code. First the native C++ side: printf("Building audio array copy\n"); jbyteArray rawAudioCopy = env->NewByteArray(10); jbyte toCopy[10]; printf("Filling audio array copy\n"); char theBytes[10] = {0,1,2,3,4,5,6,7,8,9}; for (int i = 0; i < sizeof(theBytes); i++) { toCopy[i] = theBytes[i]; } env->SetByteArrayRegion(rawAudioCopy,0,10,toCopy); printf("Finding object callback\n"); jmethodID aMethodId = env->GetMethodID(env->GetObjectClass(obj),"handleAudio","([B)V"); if(0==aMethodId) throw MyRuntimeException("Method not found error",99); printf("Invoking the callback\n"); env->CallVoidMethod(obj,aMethodId, &rawAudioCopy); and then the Java callback method: public void handleAudio(byte[] audio){ System.out.println("Audio supplied to Java [" + audio.length + "] bytes"); byte[] expectedAudio = {0,1,2,3,4,5,6,7,8,9}; for (int i = 0; i < audio.length; i++) { if(audio[i]!= expectedAudio[i]) System.err.println("Expected byte " + expectedAudio[i] + " at byte " + i + " but got byte " + audio[i]); else System.out.print('.'); } System.out.println("Audio passed back accordingly!"); } I get the following output when the callback is invoked: library loaded! Audio supplied to Java [-2019659176] bytes Audio passed back accordingly!
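    One detail that stands out in the snippet is that the array is passed as &rawAudioCopy rather than as the jbyteArray itself. A hedged sketch of the same call with the array passed by value (function and variable names are mine):

        #include <jni.h>

        // Hand a block of bytes to the Java callback handleAudio(byte[]).
        void sendAudio(JNIEnv* env, jobject obj, const jbyte* data, jsize len)
        {
            jbyteArray arr = env->NewByteArray(len);
            env->SetByteArrayRegion(arr, 0, len, data);

            jmethodID mid = env->GetMethodID(env->GetObjectClass(obj),
                                             "handleAudio", "([B)V");
            if (mid == 0)
                return;                            // method not found

            env->CallVoidMethod(obj, mid, arr);    // pass the jbyteArray itself, not its address
            env->DeleteLocalRef(arr);              // avoid piling up local references in long loops
        }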

  • How to encrypt/decrypt a file in Java?

    - by Petike
    Hello, I am writing a Java application which can "encrypt" and consequently "decrypt" any binary file. I am just a beginner in the "cryptography" area, so I would like to write a very simple application to begin with. For reading the original file, I would probably use the java.io.FileInputStream class to get the "array of bytes" byte originalBytes[] of the file. Then I would probably use some very simple cipher, for example "shift up every byte by 1", and then I would get the "encrypted" bytes byte encryptedBytes[], and let's say that I would also set a "password" for it, for example "123456789". Next, when somebody wants to "decrypt" that file, he has to enter the password ("123456789") first, and after that the file could be decrypted (thus "shift down every byte by 1") and consequently saved to the output file via java.io.FileOutputStream. I am just wondering how to "store" the password information in the encrypted file so that the decrypting application knows whether the entered password and the "real" password match. Probably it would be silly to add the password (for example the ASCII ordinal numbers of the password letters) to the beginning of the file (before the encrypted data). So my main question is how to store the password information in the encrypted file?

  • HTTP Negotiate Windows vs. Unix server implementation using python-kerberos

    - by ondra
    I tried to implement simple single sign-on in my Python web server. I have used the python-kerberos package, which works nicely. I have tested it from my Linux box (authenticating against Active Directory) and it worked without problems. However, when I tried to authenticate using Firefox from a Windows machine (no special setup, just having the user logged into the domain and adding my server to negotiate-auth.trusted-uris), it doesn't work. I have looked at what is sent and it doesn't even resemble the things the Linux machine sends. This Microsoft description of the process pretty much resembles the way my interaction from Linux works, but the Windows machine generally sends a very short string, which doesn't even resemble what the Microsoft documentation states, and when base64-decoded it is something like 12 zero bytes followed by 3 or 4 non-zero bytes (the GSS functions then return that they don't support such a scheme). Either there is something wrong with the client Firefox settings, or there is some protocol which I am supposed to follow for the Negotiate protocol, but for which I cannot find any reference anywhere. Any ideas what's wrong? Do you have any idea what protocol I should be trying to find? It doesn't look like SPNEGO, at least from the MS documentation.

  • JMeter CSV Data Set is corrupting Japanese strings stored as proper UTF-8, I get Question Marks instead

    - by Mark Bennett
    I read in search terms from a simple text file to send to a search engine. It works fine in English, but gives me ???? for any Japanese text. Text with mixed English and Japanese does show the English text, so I know it's reading it. What I'm seeing: Input text: Snow Leopard ??????????????? Turns into: Snow Leopard ??????????????? This is in the POST field of an HTTP request. If I set JMeter to encode the data, it just puts in the percent sequence for question marks. Interesting note: In the example above there are 15 Japanese characters, and then 15 question marks, so at some point it's being seen as full characters and not just bytes. About the Data: The CSV file is very simple in structure. There's only one field / one column, which I name TERM, and later use as ${TERM}. I don't really need full CSV because it's only one string per line. There are no commas or quotes. When I run the Unix "file" command on the file, it says UTF-8 text. I've also verified it in command line and graphical mode on two machines. JMeter CSV Dataset Config: Filename: japanese-searches.csv File encoding: UTF-8 (also tried without) Variable names: TERM Delimiter: , Allow Quoted Data: False (I also tried True, different, but still wrong) Recycle at EOF: True Stop at EOF: False Sharing mode: All threads A few things I've tried: Tried Allow quoted Data. It changed to other strange characters. -Dfile.encoding=UTF-8 Tried encoding the POST, but it just turned into a bunch of %nn for question marks. And I'm not sure how to "debug" just after each line of the CSV is read in. I think it's corrupted right away, but I'm not sure. If it's only mangled when I reference it, then instead of ${TERM} perhaps there's some other "to bytes" function call. I'll start checking into that. I haven't done anything with the JMeter functions yet.

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SSLStream to send a 'large' chunk of data (1 meg) to an (already authenticated) client, the packet fragmentation / disassembly I'm seeing is FAR greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly called with exactly the same size chunk of data up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long. Note: My receive buffer is much bigger than this and changing its size has no effect. I understand that SSL encrypts data in chunks no bigger than 18Kb, but since SSL sits on top of TCP, I wouldn't think that the number of SSL chunks would have any relevance to the TCP packet fragmentation? Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing? EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SSLStream is limited. Even if I use synchronous reads (i.e. SSLStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This would be the same behavior as if I were to limit the receive buffer to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.

  • Is it a good idea to use an integer column for storing US ZIP codes in a database?

    - by Yadyn
    From first glance, it would appear I have two basic choices for storing ZIP codes in a database table: Text (probably most common), i.e. char(5) or varchar(9) to support +4 extension Numeric, i.e. 32-bit integer Both would satisfy the requirements of the data, if we assume that there are no international concerns. In the past we've generally just gone the text route, but I was wondering if anyone does the opposite? Just from brief comparison it looks like the integer method has two clear advantages: It is, by means of its nature, automatically limited to numerics only (whereas without validation the text style could store letters and such which are not, to my knowledge, ever valid in a ZIP code). This doesn't mean we could/would/should forgo validating user input as normal, though! It takes less space, being 4 bytes (which should be plenty even for 9-digit ZIP codes) instead of 5 or 9 bytes. Also, it seems like it wouldn't hurt display output much. It is trivial to slap a ToString() on a numeric value, use simple string manipulation to insert a hyphen or space or whatever for the +4 extension, and use string formatting to restore leading zeroes. Is there anything that would discourage using int as a datatype for US-only ZIP codes?
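    On the display point, a small sketch of restoring leading zeroes from an integer column (C++ here, though the question is language-agnostic; it assumes the full 9-digit ZIP+4 is stored as one number, and a 5-digit-only column would use "%05d" instead):

        #include <cstdio>
        #include <string>

        // 123456789 -> "12345-6789"; 23456789 -> "02345-6789"
        std::string formatZip9(long zip)
        {
            char buf[16];
            std::snprintf(buf, sizeof buf, "%09ld", zip);
            std::string s(buf);
            s.insert(5, "-");
            return s;
        }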

  • Why does Android allocate more memory than needed when loading images

    - by Simon
    Folks, I don't think that this is a duplicate and is NOT one of those how do I avoid OOMs questions. This is a genuine quest for knowledge so hold off on those down votes please... Imagine I have a JPEG of 500x500 pixels. I load it as ARGB_8888 which is as "bad as it gets". I would expect Android to allocate 500x500x4 bytes = a little under 1MB however, look at a heap dump and you will see that Android allocates significantly more, often factors of 5-10 times greater. You frequently see questions on here about OOMS where the stack trace shows a heap request of say 15MB and it is ALWAYS much larger than is required simply to hold the bytes of the image. The OP usually catches some downvotes then is bombarded with stock answers and comments about using less memory (thanks Romain!) and in scaling. I think there is more than meets the eye here. Anybody know why this is? If there is no apparent answer, I will put together an SSCCE if it helps. PS. I assume that JPEG vs PNG etc is irrelevant since we're talking about the memory usage of the backing bitmap which is simply x times y times BPP - or am I being slow?

  • Real-Time Sound Capturing in J2ME

    - by Abdul jalil
    I am capturing sound in J2ME and sending these bytes to a remote system, where I then play them back. Five seconds of voice is captured and sent to the remote system, but I get the same sound repeated again. I am making a voice messenger; please help me find where I am going wrong. I am using the following code: String remoteTimeServerAddress="192.168.137.179"; sc = (SocketConnection) Connector.open("socket://"+remoteTimeServerAddress+":13"); p = Manager.createPlayer("capture://audio?encoding=pcm&rate=11025&bits=16&channels=1"); p.realize(); RecordControl rc = (RecordControl)p.getControl("RecordControl"); ByteArrayOutputStream output = new ByteArrayOutputStream(); OutputStream outstream =sc.openOutputStream(); rc.setRecordStream(output); rc.startRecord(); p.start(); int size=output.size(); int offset=0; while(true) { Thread.currentThread().sleep(5000); rc.commit(); output.flush(); size=output.size(); if(size>0) { recordedSoundArray=output.toByteArray(); outstream.write(recordedSoundArray,0,size); } output.reset(); rc.reset(); rc.setRecordStream(output); rc.startRecord(); }

  • What is the Pythonic way to implement a simple FSM?

    - by Vicky
    Yesterday I had to parse a very simple binary data file - the rule is, look for two bytes in a row that are both 0xAA, then the next byte will be a length byte, then skip 9 bytes and output the given amount of data from there. Repeat to the end of the file. My solution did work, and was very quick to put together (even though I am a C programmer at heart, I still think it was quicker for me to write this in Python than it would have been in C) - BUT, it is clearly not at all Pythonic and it reads like a C program (and not a very good one at that!) What would be a better / more Pythonic approach to this? Is a simple FSM like this even still the right choice in Python? My solution: #! /usr/bin/python import sys f = open(sys.argv[1], "rb") state = 0 if f: for byte in f.read(): a = ord(byte) if state == 0: if a == 0xAA: state = 1 elif state == 1: if a == 0xAA: state = 2 else: state = 0 elif state == 2: count = a; skip = 9 state = 3 elif state == 3: skip = skip -1 if skip == 0: state = 4 elif state == 4: print "%02x" %a count = count -1 if count == 0: state = 0 print "\r\n"

  • x86 Assembly: Before Making a System Call on Linux Should You Save All Registers?

    - by mudge
    I have the below code that opens up a file, reads it into a buffer and then closes the file. The close file system call requires that the file descriptor number be in the ebx register. The ebx register gets the file descriptor number before the read system call is made. My question is should I save the ebx register on the stack or somewhere before I make the read system call, (could int 80h trash the ebx register?). And then restore the ebx register for the close system call? Or is the code I have below fine and safe? I have run the below code and it works, I'm just not sure if it is generally considered good assembly practice or not because I don't save the ebx register before the int 80h read call. ;; open up the input file mov eax,5 ; open file system call number mov ebx,[esp+8] ; null terminated string file name, first command line parameter mov ecx,0o ; access type: O_RDONLY int 80h ; file handle or negative error number put in eax test eax,eax js Error ; test sign flag (SF) for negative number which signals error ;; read in the full input file mov ebx,eax ; assign input file descripter mov eax,3 ; read system call number mov ecx,InputBuff ; buffer to read into mov edx,INPUT_BUFF_LEN ; total bytes to read int 80h test eax,eax js Error ; if eax is negative then error jz Error ; if no bytes were read then error add eax,InputBuff ; add size of input to the begining of InputBuff location mov [InputEnd],eax ; assign address of end of input ;; close the input file ;; file descripter is already in ebx mov eax,6 ; close file system call number int 80h

  • Is this the right way to write a ProtocolDecoder in MINA?

    - by phpscriptcoder
    public class CustomProtocolDecoder extends CumulativeProtocolDecoder{ byte currentCmd = -1; int currentSize = -1; boolean isFirst = false; @Override protected boolean doDecode(IoSession is, ByteBuffer bb, ProtocolDecoderOutput pdo) throws Exception { if(currentCmd == -1) { currentCmd = bb.get(); currentSize = Packet.getSize(currentCmd); isFirst = true; } while(bb.remaining() > 0) { if(!isFirst) { currentCmd = bb.get(); currentSize = Packet.getSize(currentCmd); } else isFirst = false; //System.err.println(currentCmd + " " + bb.remaining() + " " + currentSize); if(bb.remaining() >= currentSize - 1) { Packet p = PacketDecoder.decodePacket(bb, currentCmd); pdo.write(p); } else { bb.flip(); return false; } } if(bb.remaining() == 0) return true; else return false; } } Anyone see anything wrong with this code? When a lot of packets are received at once, even when only one client is connected, one of them might get cut off at the end (12 bytes instead of 15 bytes, for example) which is obviously bad.

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason). If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out in time, and they will usually be happening close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic. An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document (I'm using Python and PyMongo, so I was looking at PyTables). I'd prefer to avoid this though if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

  • C++ struct size

    - by kiokko89
    struct CExample { int a; }; int main(int argc, char* argv[]) { CExample ce; CExample ce2; cout << "Size:" << sizeof(ce)<< " Address: "<< &ce<< endl; cout << "Size:" << sizeof(ce2)<< " Address: "<< &ce2 << endl; CExample ceArr[2]; cout << "Size:" << sizeof(ceArr[0])<< " Address: "<<&ceArr[0]<<endl; cout << "Size:" << sizeof(ceArr[1])<< " Address: "<<&ceArr[1]<<endl; return 0; } Excuse me, I'm just a beginner, but I'd like to know why, with this code, there is a difference of 12 bytes between the addresses of the first two objects (ce and ce2) (I thought about data alignment), but only a difference of 4 bytes between the two objects in the array. Sorry for my bad English...
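    A sketch that separates the two measurements: the 4-byte spacing inside the array is guaranteed, while the 12-byte gap between the separate locals is just how this particular compiler happened to lay out the stack:

        #include <iostream>

        struct CExample { int a; };        // sizeof(CExample) is typically 4

        int main()
        {
            CExample arr[2];
            // Array elements must be contiguous, so these addresses differ by exactly sizeof(CExample).
            std::cout << static_cast<void*>(&arr[0]) << ' '
                      << static_cast<void*>(&arr[1]) << '\n';

            CExample ce, ce2;
            // Independent locals carry no such guarantee: the compiler may insert
            // alignment padding, debug canaries, or other variables between them.
            std::cout << static_cast<void*>(&ce) << ' '
                      << static_cast<void*>(&ce2) << '\n';
            return 0;
        }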

  • Simple binary file I/O problem with cstdio (C++)

    - by Atilla Filiz
    The C++ program below fails to read the file. I know using cstdio is not good practice, but that's what I am used to, and it should work anyway. $ ls -l l.uyvy -rw-r--r-- 1 atilla atilla 614400 2010-04-24 18:11 l.uyvy $ ./a.out l.uyvy Read 0 bytes out of 614400, possibly wrong file code: #include<cstdio> int main(int argc, char* argv[]) { FILE *fp; if(argc<2) { printf("usage: %s <input>\n",argv[0]); return 1; } fp=fopen(argv[1],"rb"); if(!fp) { printf("erör, cannot open %s for reading\n",argv[1]); return -1; } int bytes_read=fread(imgdata,1,2*IMAGE_SIZE,fp); //2bytes per pixel fclose(fp); if(bytes_read < 2*IMAGE_SIZE) { printf("Read %d bytes out of %d, possibly wrong file\n", bytes_read, 2*IMAGE_SIZE); return -1; } return 0; }
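    For comparison, a self-contained variant of the snippet with the buffer actually declared (the posted code never declares imgdata or IMAGE_SIZE; 640x480 at 2 bytes per pixel is an assumption that happens to match the 614400-byte UYVY file):

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        int main(int argc, char* argv[])
        {
            if (argc < 2) { std::printf("usage: %s <input>\n", argv[0]); return 1; }

            std::FILE* fp = std::fopen(argv[1], "rb");
            if (!fp) { std::printf("cannot open %s for reading\n", argv[1]); return -1; }

            const std::size_t IMAGE_SIZE = 640 * 480;            // pixels (assumed)
            std::vector<unsigned char> imgdata(2 * IMAGE_SIZE);  // 2 bytes per pixel
            std::size_t bytes_read = std::fread(&imgdata[0], 1, imgdata.size(), fp);
            std::fclose(fp);

            if (bytes_read < imgdata.size()) {
                std::printf("Read %zu bytes out of %zu, possibly wrong file\n",
                            bytes_read, imgdata.size());
                return -1;
            }
            return 0;
        }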
