Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • Perl LWP::UserAgent mishandling UTF-8 response

    - by RedGrittyBrick
    When I use LWP::UserAgent to retrieve content encoded in UTF-8, it seems LWP::UserAgent doesn't handle the encoding correctly. Here's the output after setting the Command Prompt window to Unicode with the command chcp 65001. Note that this initially gives the appearance that all is well, but I think it's just the shell reassembling bytes and decoding UTF-8; from the other output you can see that Perl itself is not handling wide characters correctly.

        C:\> perl getutf8.pl
        ======================================================================
        HTTP/1.1 200 OK
        Connection: close
        Date: Fri, 31 Dec 2010 19:24:04 GMT
        Accept-Ranges: bytes
        Server: Apache/2.2.8 (Win32) PHP/5.2.6
        Content-Length: 75
        Content-Type: application/xml; charset=utf-8
        Last-Modified: Fri, 31 Dec 2010 19:20:18 GMT
        Client-Date: Fri, 31 Dec 2010 19:24:04 GMT
        Client-Peer: 127.0.0.1:80
        Client-Response-Num: 1

        <?xml version="1.0" encoding="UTF-8"?>
        <name>Budejovický Budvar</name>
        ======================================================================
        Response content length is 33

        ....v....1....v....2....v....3....v....4
        <name>Budejovický Budvar</name>
        . . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . .
        3c6e616d653e427564c49b6a6f7669636bc3bd204275647661723c2f6e616d653e
        < n a m e > B u d ? ? j o v i c k ? ?   B u d v a r < / n a m e >

    Above you can see the payload length is 31 characters, but Perl thinks it is 33. For confirmation, in the hex you can see that the UTF-8 sequences c49b and c3bd are being interpreted as four separate characters rather than as two Unicode characters. Here's the code:

        #!perl
        use strict;
        use warnings;
        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new();
        my $response = $ua->get('http://localhost/Bud.xml');
        if (! $response->is_success) { die $response->status_line; }
        print '='x70, "\n", $response->as_string(), '='x70, "\n";
        my $r = $response->decoded_content((charset => 'UTF-8'));
        $/ = "\x0d\x0a"; # seems to be \x0a otherwise!
        chomp($r);
        # Remove any xml prologue
        $r =~ s/^<\?.*\?>\x0d\x0a//;
        print "Response content length is ", length($r), "\n\n";
        print "....v....1....v....2....v....3....v....4\n";
        print $r, "\n";
        print ". . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . . \n";
        print unpack("H*", $r), "\n";
        print join(" ", split("", $r)), "\n";

    Note that Bud.xml is UTF-8 encoded without a BOM. How can I persuade LWP::UserAgent to do the right thing? P.S. Ultimately I want to translate the Unicode data into an ASCII encoding, even if it means replacing each non-ASCII character with one question mark or other marker. I have accepted Ysth's "upgrade" answer - because I know it is the right thing to do when possible. However, I am going to use a work-around (which may depress Tom further): $r = encode("cp437", decode("utf8", $r));
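
    A minimal Python sketch (not from the question) of the same byte math, taking the payload from the hex dump above: once the response is decoded as UTF-8, length counts 31 characters; counting raw bytes as characters is what yields 33.

        payload = "<name>Bud\u011bjovick\u00fd Budvar</name>"   # from the hex dump above
        raw = payload.encode("utf-8")
        print(len(raw))      # 33 -- bytes on the wire (two 2-byte UTF-8 sequences)
        print(len(payload))  # 31 -- characters after a correct decode
        # the asker's desired ASCII fallback: one '?' per non-ASCII character
        print(payload.encode("ascii", errors="replace").decode("ascii"))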

    Read the article

  • MySQL indexes: how do they work?

    - by bob-the-destroyer
    I'm a complete newbie with MySQL indexes. I have several MyISAM tables on MySQL 5.0x having utf8 charsets and collations, with 100k+ records each. The primary keys are generally integer. Many columns on each table may have duplicate values. I need to quickly count, sum, average, or otherwise perform custom calculations on any number of fields in each table, or joined on any number of others. I found this page giving an overview of MySQL index usage: http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html, but I'm still not sure I'm using indexes right. Just when I think I've made the perfect index out of a collection of fields I want to calculate against, I get the "index must be under 1000 bytes" error. Can anyone explain how to most efficiently create and use indexes to speed up queries? Caveat: upgrading MySQL is not possible in this case. Using Navicat Light for db administration, but this app isn't required.
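
    The 1000-byte error is just arithmetic: MyISAM caps a key at 1000 bytes, and utf8 columns are counted at 3 bytes per character. A quick worked example in Python (the column widths are hypothetical):

        # MyISAM key limit is 1000 bytes; utf8 costs 3 bytes per character
        columns = {"title": 255, "category": 100}             # hypothetical VARCHAR lengths
        print(sum(3 * width for width in columns.values()))   # 1065 -> over the cap
        # a prefix index such as INDEX(title(100), category(100)) stays under it
        print(3 * 100 + 3 * 100)                              # 600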

    Read the article

  • Error appearing in application after updating cakePHP library files from 1.3.0 to 1.3.1

    - by Gaurav Sharma
    Hi everyone, I have just updated my cakePHP library to the latest version, 1.3.1. Before this I was running v1.3.0 with no errors. After running the application I am given this error message: unserialize() [function.unserialize]: Error at offset 0 of 2574 bytes [CORE\cake\libs\cache\file.php, line 176]. I updated the libraries simply by replacing the existing cake files with the new ones downloaded from the net. Is this the correct way of updating applications? I didn't make any customizations to the core library of cakePHP. What is the problem? Please help. Thanks

    Read the article

  • Convert Stream to IEnumerable. If possible when "keeping laziness"

    - by Binary255
    Hi, I receive a Stream and need to pass an IEnumerable to another method.

        public static void streamPairSwitchCipher(Stream someStream)
        {
            ...
            someStreamAsIEnumerable = ...
            IEnumerable returned = anotherMethodWhichWantsAnIEnumerable(someStreamAsIEnumerable);
            ...
        }

    One way is to read the entire Stream, convert it to an array of bytes and pass that in, as Array implements IEnumerable. But it would be much nicer if I could pass it in such a way that I don't have to read the entire Stream first.

        public static IEnumerable<T> anotherMethodWhichWantsAnIEnumerable<T>(IEnumerable<T> p)
        {
            ... // Something uninteresting
        }
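
    The lazy version is exactly what an iterator block (yield return in C#) provides. A sketch of the idea in Python, where a generator plays the same role:

        from typing import BinaryIO, Iterator

        def stream_to_iterable(stream: BinaryIO, chunk_size: int = 4096) -> Iterator[int]:
            """Yield the stream byte by byte without buffering it all up front."""
            while True:
                chunk = stream.read(chunk_size)
                if not chunk:            # end of stream
                    return
                yield from chunk         # bytes iterate as ints, one chunk at a time

    Nothing is read until the consumer starts iterating, and only one chunk is ever held in memory.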

    Read the article

  • How to put/get INT to/from a WCHAR array ?

    - by nimo
    How can I put an INT-type variable into a WCHAR array? Thanks. EDIT: Sorry for the short question. Yes, we can cast an INT to a WCHAR array using WCHAR*, but when we are retrieving the result back (WCHAR[] to INT), I just realized that we need to read a size of 2 from the WCHAR array, since an INT is 4 bytes, which is equal to 2 WCHARs.

        WCHAR arData[20];
        INT iVal = 0;
        wmemcpy((WCHAR*)&iVal, arData, sizeof(INT)/2);

    Is this the safest way to retrieve the INT value back from the WCHAR array?
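
    The 4-bytes-equals-2-WCHARs bookkeeping is easy to check with Python's struct module (a sketch, assuming little-endian and 16-bit wide characters, as on Windows):

        import struct

        value = 0x12345678                                        # a 4-byte int
        wchars = struct.unpack("<2H", struct.pack("<i", value))   # two 16-bit units
        print([hex(w) for w in wchars])                           # ['0x5678', '0x1234']
        restored, = struct.unpack("<i", struct.pack("<2H", *wchars))
        assert restored == value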

    Read the article

  • Magick++ Read Image with ICC colorspace

    - by FlashFan
    Hi guys, I need to know how I can read an image which uses a separate ICC color profile. The image consists of 26'099'520 bytes, which is the result of 2480 (width) * 3508 (height) * 3 (components per pixel). I tried it with the following code:

        Image *image = new Image();
        Blob *blob = new Blob(imagedata.c_str(), imagedata.length());
        image->read(*blob, Geometry(2480, 3508), 8, "RGB");
        Blob *iccblob = new Blob(iccdata.c_str(), iccdata.length());
        image->iccColorProfile(*iccblob);
        image->write("result.jpg");

    But the colors are the same as when I don't set the ICC profile on the image, and the colors are wrong in both cases. Thanks for your help!
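
    Attaching a profile only tags the image; to change the pixel values you have to transform from the source profile into a destination profile. A hedged sketch of that distinction using Pillow's ImageCms (raw_rgb and icc_bytes are stand-ins for the buffers in the question):

        import io
        from PIL import Image, ImageCms

        img = Image.frombytes("RGB", (2480, 3508), raw_rgb)    # raw_rgb: the pixel buffer
        src = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))  # icc_bytes: the ICC profile
        dst = ImageCms.createProfile("sRGB")
        converted = ImageCms.profileToProfile(img, src, dst)   # actually remaps the colors
        converted.save("result.jpg")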

    Read the article

  • Why does DataInputStream not support integers?

    - by Jason
    I need to read in a list of numbers from a file, none of which are larger than 32767. Originally I was going to use the Scanner class to pull in the data; then I read about DataInputStream. This would work well for me, except that according to the API it supports all primitive variables EXCEPT ints! Listed are longs, shorts, bytes, chars, booleans, etc., but no ints. I have no need for double precision from the incoming data. Is this a deliberate or an unintentional oversight?

    Read the article

  • remove non-UTF-8 characters from xml with declared encoding=utf-8 - Java

    - by St Nietzke
    I have to handle this scenario in Java: I'm getting a request in XML form from a client with declared encoding=utf-8. Unfortunately it may contain non-UTF-8 characters, and there is a requirement to remove these characters from the XML on my side (legacy). Let's consider an example where this invalid XML contains £ (pound). 1) I get the XML as a Java String with £ in it (I don't have access to the interface right now, but I probably get the XML as a Java String). Can I use replaceAll("£", "") to get rid of this character? Any potential issues? 2) I get the XML as an array of bytes - how do I handle this operation safely in that case?
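
    For case 2, the usual trick is to decode with a lenient error policy and let the decoder drop anything that isn't valid UTF-8. A sketch in Python (the Java counterpart would be a CharsetDecoder configured with CodingErrorAction.IGNORE):

        def strip_invalid_utf8(raw: bytes) -> str:
            # bytes that do not form a valid UTF-8 sequence are silently dropped
            return raw.decode("utf-8", errors="ignore")

        # a Latin-1 pound sign (0xA3) embedded in otherwise valid UTF-8:
        print(strip_invalid_utf8(b"<amount>\xa3100</amount>"))   # <amount>100</amount>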

    Read the article

  • Copying a 14bit grayscale image (saved in long[]) to a pictureBox

    - by Itsik
    My camera gives me 14-bit grayscale images, but the API's function returns a long* to the image data (so I'm assuming 4 bytes for each pixel). My application is written in C++/CLI, and the pictureBox is of .NET type. I am currently using the BitmapData.LockBits() mechanism to gain pointer access to the image data, and using memcpy(bmpData.Scan0.ToPointer(), imageData, sizeof(long)*height*width) to copy the image data to the Bitmap. For now, the only PixelFormat that is working is 32-bit RGB, and the image appears in shades of blue with contours. Trying to initialize the Bitmap as 16bppGrayscale isn't working. I would ideally want to cast the array from long to word and use a 16-bit format (hoping that the 14-bit data will be displayed properly), but I'm not sure if this works. Also, I don't want to iterate over the image data, so finding the min/max and then histogram-stretching to [0..255] isn't an option for me (the display must be as efficient as possible). Thanks
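
    The 14-bit-to-display conversion is just a bit shift, which needs no explicit per-pixel loop. A sketch with NumPy (buffer, height and width stand in for the camera API's values):

        import numpy as np

        raw = np.frombuffer(buffer, dtype=np.uint32).reshape(height, width)
        gray16 = (raw << 2).astype(np.uint16)   # 14-bit -> full 16-bit range
        gray8  = (raw >> 6).astype(np.uint8)    # 14-bit -> 8-bit for display

    Shifting left by 2 maps the 14-bit maximum onto the 16-bit maximum; shifting right by 6 keeps the top 8 bits for an 8-bit display.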

    Read the article

  • How can you tell the source of the data when using the Stream.BeginRead Method?

    - by xarzu
    When using the Stream.BeginRead method, and you are reading from a stream into memory, how is it determined where you are reading the data from? See: http://msdn.microsoft.com/en-us/library/system.io.stream.beginread.aspx. In the list of parameters, I do not see one that tells where the data is being read from:

        buffer (System.Byte[]) - The buffer to read the data into.
        offset (System.Int32) - The byte offset in buffer at which to begin writing data read from the stream.
        count (System.Int32) - The maximum number of bytes to read.
        callback (System.AsyncCallback) - An optional asynchronous callback, to be called when the read is complete.
        state (System.Object) - A user-provided object that distinguishes this particular asynchronous read request from other requests.

    Read the article

  • How to set parameters in Python zlib module

    - by fagricipni
    I want to write a Python program that makes PNG files. My big problem is with generating the CRC and the data in the IDAT chunk. Python 2.6.4 does have a zlib module, but there are extra settings needed. The PNG specification REQUIRES the IDAT data to be compressed with zlib's deflate method with a window size of 32768 bytes, but I can't find how to set those parameters in the Python zlib module. As for the CRC for each chunk, the zlib module documentation indicates that it contains a CRC function. I believe that calling that CRC function as crc32(data,-1) will generate the CRC that I need, though if necessary I can translate the C code given in the PNG specification. Note that I can generate the rest of the PNG file and the data that is to be compressed for the IDAT chunk, I just don't know how to properly compress the image data for the IDAT chunk after implementing the initial filtering step.
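
    Both settings are available in Python's zlib. A sketch of an IDAT chunk writer (make_idat is a hypothetical helper name; written in Python 3 style, but the zlib calls exist in 2.6 as well):

        import struct
        import zlib

        def make_idat(filtered_scanlines: bytes) -> bytes:
            # wbits=15 selects the 32768-byte window PNG requires (also zlib's default)
            comp = zlib.compressobj(9, zlib.DEFLATED, 15)
            data = comp.compress(filtered_scanlines) + comp.flush()
            chunk = b"IDAT" + data
            # PNG's CRC-32 covers the chunk type plus data; zlib.crc32 with its
            # default starting value (0, not -1) computes the standard CRC-32 PNG uses
            crc = zlib.crc32(chunk) & 0xFFFFFFFF
            return struct.pack(">I", len(data)) + chunk + struct.pack(">I", crc)

    The length field counts only the data bytes, not the type or CRC, per the PNG chunk layout.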

    Read the article

  • Does C# have an equivalent to JavaScript's encodeURIComponent()?

    - by travis
    In JavaScript: encodeURIComponent("©√") == "%C2%A9%E2%88%9A". Is there an equivalent for C# applications? For escaping HTML characters I used:

        txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]",
            m => @"&#" + ((int)m.Value[0]).ToString() + ";");

    But I'm not sure how to convert the match to the correct hexadecimal format that JS uses. For example, this code:

        txtOut.Text = Regex.Replace(txtIn.Text, @"[\u0080-\uFFFF]",
            m => @"%" + String.Format("{0:x}", ((int)m.Value[0])));

    returns "%a9%221a" for "©√" instead of "%C2%A9%E2%88%9A". It looks like I need to split the string up into bytes or something. Edit: This is for a Windows app; the only items available in System.Web are AspNetHostingPermission, AspNetHostingPermissionAttribute, and AspNetHostingPermissionLevel.
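
    The JS output is the percent-encoded UTF-8 bytes of the string, not its UTF-16 code units, which is why "%a9%221a" comes out wrong. The same encoding shown in Python:

        from urllib.parse import quote

        # quote() percent-encodes the UTF-8 bytes: U+00A9 -> C2 A9, U+221A -> E2 88 9A
        print(quote("\u00a9\u221a", safe=""))   # %C2%A9%E2%88%9A

    In C# itself, Uri.EscapeDataString (in the System namespace, so no System.Web needed) produces the same UTF-8 percent-encoding.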

    Read the article

  • Opening Large (24 GB) File In C

    - by zacaj
    I'm trying to read in a 24 GB XML file in C, but it won't work. I'm printing out the current position using ftell() as I read it in, but once it gets to a big enough number, it goes back to a small number and starts over, never even getting 20% through the file. I assume this is a problem with the range of the variable used to store the position (long), which can go up to about 4,000,000,000 according to http://msdn.microsoft.com/en-us/library/s3f49ktz%28VS.80%29.aspx, while my file is 25,000,000,000 bytes in size. A long long should work, but how would I change what my compiler (Cygwin/mingw32) uses, or get it to have fopen64?

    Read the article

  • php gzip xml file (53MB) causes Out of memory error

    - by ntan
    Hi, I have a 53 MB XML file that I want to gzip. The code below gzips it:

        $gzFile = "my.gz";
        $data = implode("", file($filename));
        $gzdata = gzencode($data, 9);
        // open gz -- 'w9' is highest compression
        $fp = gzopen($gzFile, 'w9');
        // loop through array and write each line into the compressed file
        gzwrite($fp, $gzdata);
        // close the file
        gzclose($fp);

    This causes: PHP Fatal error: Out of memory (allocated 70516736) (tried to allocate 24 bytes). Does anyone have any suggestions? I have already increased the memory in php.ini.
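
    The out-of-memory comes from holding the whole file (plus a compressed copy) in memory at once; streaming it through the compressor in chunks avoids that. A sketch of the chunked approach in Python:

        import gzip
        import shutil

        # compress in 64 KB chunks instead of loading all 53 MB into memory
        with open("my.xml", "rb") as src, gzip.open("my.gz", "wb", compresslevel=9) as dst:
            shutil.copyfileobj(src, dst, length=64 * 1024)

    The same shape works in PHP: fread() a chunk at a time and gzwrite() it to the gzopen()'d handle, instead of gzencode()-ing the whole string first.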

    Read the article

  • Problem about socket communication

    - by Ahmet Altun
    I have two separate socket projects in VS.NET. One of them is the sender, the other one is the receiver. After starting the receiver, I send data from the sender. Although the send method returns 13 bytes as successfully transferred, the receiver receives 0 (zero). The receiver accepts the sender socket and listens to it, but it cannot receive data. Why? P.S.: If the sender code is put in the receiver project, the receiver can get data as well.

    Read the article

  • problem with NSInputStream on real iPhone

    - by ThamThang
    Hi guys, I have a problem with NSInputStream. Here is my code:

        case NSStreamEventHasBytesAvailable:
            printf("BYTE AVAILABLE\n");
            int len = 0;
            NSMutableData *data = [[NSMutableData alloc] init];
            uint8_t buffer[32768];
            if (stream == iStream) {
                printf("Receiving...\n");
                len = [iStream read:buffer maxLength:32768];
                [data appendBytes:buffer length:len];
            }
            [iStream close];

    I try to read small data and it works perfectly on the simulator and on a real iPhone. If I try to read large data (more than 4 kB or maybe 5 kB), the real iPhone can only read 2736 bytes and stops. Why is that? Please help! Thanks in advance!

    Read the article

  • vista bandwith reservation

    - by user185646
    I would like to write my own version of Microsoft Live Labs Pivot: http://www.getpivot.com/. For this I will use realtime texture streaming technology like John Carmack did for doom4. But I would like to use the Windows Vista SetFileBandwidthReservation API to get the best throughput possible. For example:

        // reserve bandwidth of 200 bytes/sec
        result = SetFileBandwidthReservation(
            hFile, 1000, 200, FALSE, &transferSize, &outstandingRequests);

    What I don't understand is the lpTransferSize and lpNumOutstandingRequests return parameters. How should I then read the file to get the most out of this? Should I do exactly lpNumOutstandingRequests requests of size lpTransferSize, or can I do one synchronous request bigger than lpTransferSize?

    Read the article

  • Differences in pictures between IE6 and IE7

    - by Para
    I've seen code like this in a CSS file:

        BACKGROUND-IMAGE: url(../images/feedback_trans_tab.png);
        _background-image: url(../images/feedback_tab_ie6.png);

    feedback_tab_ie6.png is 886 bytes and feedback_trans_tab.png is 1.64 KB. Both pics have transparent backgrounds so that you can apply your own background color, and both look the same. Yet in IE6, if I comment out the line _background-image: url(../images/feedback_tab_ie6.png), one result appears [screenshot], and if I don't, another appears [screenshot]. Does anyone have any idea why this is? How can I turn an IE7 picture into an IE6 picture from Photoshop? Thank you in advance

    Read the article

  • Are indivisible operations still indivisible on multiprocessor and multicore systems?

    - by Steve314
    As per the title, plus what are the limitations and gotchas. For example, on x86 processors, alignment for most data types is optional - an optimisation rather than a requirement. That means that a pointer may be stored at an unaligned address, which in turn means that pointer might be split over a cache page boundary. Obviously this could be done if you work hard enough on any processor (picking out particular bytes, etc.), but not in a way where you'd still expect the write operation to be indivisible. I seriously doubt that a multicore processor can ensure that other cores can guarantee a consistent all-before or all-after view of a written pointer in this unaligned-write-crossing-a-page-boundary situation. Am I right? And are there any similar gotchas I haven't thought of?

    Read the article

  • About enumerations in Delphi and c++ in 64-bit environments

    - by sum1stolemyname
    I recently had to work around the different default sizes used for enumerations in Delphi and C++, since I have to use a C++ DLL from a Delphi application. One function call returns an array of structs (or records in Delphi), the first element of which is an enum. To make this work, I use packed records (or aligned(1) structs). However, since Delphi selects the size of an enum variable dynamically by default and uses the smallest data type possible (it was a byte in my case), but C++ uses an int for enums, my data was not interpreted correctly. Delphi offers a compiler switch to work around this, so the declaration of the enum becomes:

        {$Z4}
        TTypeofLight = (
            V3d_AMBIENT,
            V3d_DIRECTIONAL,
            V3d_POSITIONAL,
            V3d_SPOT
        );
        {$Z1}

    My questions are: What will become of my structs when they are compiled on/for a 64-bit environment? Does the default C++ integer grow to 8 bytes? Are there other memory alignment / data type size modifications (other than pointers)?
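
    The size mismatch is easy to see with a worked layout. A sketch using Python's struct module and a hypothetical packed record of one enum field followed by a double:

        import struct

        # '<' disables padding, mimicking a packed record / aligned(1) struct
        print(struct.calcsize("<Bd"))   # 9  -- enum held in 1 byte  ({$Z1}, Delphi's default here)
        print(struct.calcsize("<Id"))   # 12 -- enum held in 4 bytes ({$Z4}, matching C++'s int)

    Every field after the enum shifts by three bytes unless both sides agree on the enum's size, which is exactly what {$Z4} enforces.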

    Read the article

  • How to analyse contents of binary serialization stream?

    - by Tao
    I'm using binary serialization (BinaryFormatter) as a temporary mechanism to store state information in a file for a relatively complex (game) object structure. The files are coming out much larger than I expect, and my data structure includes recursive references - so I'm wondering whether the BinaryFormatter is actually storing multiple copies of the same objects, or whether my basic "number of objects and values I should have" arithmetic is way off-base, or where else the excessive size is coming from. Searching on Stack Overflow I was able to find the specification for Microsoft's binary remoting format: http://msdn.microsoft.com/en-us/library/cc236844(PROT.10).aspx. What I can't find is any existing viewer that enables you to "peek" into the contents of a BinaryFormatter output file - get object counts and total bytes for different object types in the file, etc.; I feel like this must be my "google-fu" failing me (what little I have) - can anyone help? This must have been done before, right??

    Read the article

  • Inserting into a bitstream

    - by evilertoaster
    I'm looking for a way to efficiently insert bits into a bitstream and have it 'overflow', padding with 0's. So for example, if you had a byte array with 2 bytes, 231 and 109 (11100111 01101101), and did BitInsert(byteArray,4,00), it would insert two bits at bit offset 4, making 11100001 11011011 01000000 (225, 219, 64). It would be OK even if the method only allowed 1-bit insertions, e.g. BitInsert(byteArray,4,true) or BitInsert(byteArray,4,false). I have one method of doing it, but it has to walk the stream with a bitmask bit by bit, so I'm wondering if there's a simpler approach... Answers in assembly or a C derivative would be appreciated.
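
    One loop-free approach is to treat the whole buffer as a big integer, split it at the offset, and shift. A sketch in Python (BitInsert is the question's hypothetical name; bit_insert here is its stand-in):

        def bit_insert(data: bytes, offset: int, bits: str) -> bytes:
            """Insert `bits` (e.g. "00") at bit `offset`, zero-padding to a byte boundary."""
            total = len(data) * 8
            n = int.from_bytes(data, "big")
            head = n >> (total - offset)                  # bits before the insertion point
            tail = n & ((1 << (total - offset)) - 1)      # bits after it
            merged = ((head << (len(bits) + total - offset))
                      | (int(bits, 2) << (total - offset)) | tail)
            new_total = total + len(bits)
            padded = (new_total + 7) // 8 * 8             # round up to whole bytes
            return (merged << (padded - new_total)).to_bytes(padded // 8, "big")

        print(list(bit_insert(bytes([231, 109]), 4, "00")))   # [225, 219, 64]

    In C the same idea works word-at-a-time: shift the suffix right by the insert width and OR the pieces back together, touching each word once instead of each bit.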

    Read the article

  • Receiving multiple multicast feeds on the same port - C, Linux

    - by Gigi
    I have an application that is receiving data from multiple multicast sources on the same port. I am able to receive the data. However, I am trying to keep statistics for each group (i.e. msgs received, bytes received), and all the data is getting mixed up. Does anyone know how to solve this problem? If I try to look at the sender's address, it is not the multicast address but rather the IP of the sending machine. I am using the following socket options:

        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("224.1.2.3");
        mreq.imr_interface.s_addr = INADDR_ANY;
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    and also:

        setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &reuse, sizeof(reuse));

    I appreciate any help!!!
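
    On Linux, one common fix is one socket per group, each bound to its group address rather than INADDR_ANY, so each socket only sees its own group's traffic and can keep its own counters. A sketch in Python (the group list is hypothetical):

        import socket
        import struct

        def group_socket(group: str, port: int) -> socket.socket:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind((group, port))   # Linux: binding to the group address filters by group
            mreq = struct.pack("4s4s", socket.inet_aton(group),
                               socket.inet_aton("0.0.0.0"))
            s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            return s

        sockets = {g: group_socket(g, 5000) for g in ("224.1.2.3", "224.1.2.4")}

    The alternative is a single socket with IP_PKTINFO enabled and recvmsg(), which reports the destination (group) address of each datagram.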

    Read the article

  • How do you access byte level information in JavaScript?

    - by JustSmith
    The generally accepted answer is that you can't. However, there is mounting evidence that this is not true, based on the existence of projects that read in types of data that are not basic HTML types. Some projects that do this are the JavaScript version of ProtoBuf and Smokescreen. Smokescreen is a Flash interpreter written in JS, so if it is not possible to get at the bytes directly, how are these projects working around this? The source to Smokescreen can be found here. I have looked it over, but with JS not being my primary language right now, the solution eludes me.

    Read the article

  • Reading PGP key information

    - by calccrypto
    Can someone show a description of what a PGP key looks like if only the field descriptions were there but not the actual information? Something like (I don't remember if the values are correct): packet-type[4 bits], total length in bytes[16 bits], packet version type[4 bits], creation-time[32 bits], encryption-algorithm[8 bits], ..., etc., etc. I've tried to understand RFC 4880, but it's tedious and confusing. So far I think I have extracted the 4 I wrote above, but I can't seem to get the rest of the information out. Can anyone help? I know I can just find some PGP program, but the whole point of this is to learn how those programs work in the first place.
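
    For orientation, a sketch of parsing an RFC 4880 old-format packet header plus the start of a v4 public-key body in Python (assumes a one-octet body length; real keys also need the new-format and multi-octet length cases):

        import struct

        def parse_key_packet(data: bytes) -> dict:
            first = data[0]
            assert first & 0x80, "bit 7 is always set in a packet header"
            assert not (first & 0x40), "this sketch handles old-format packets only"
            tag = (first & 0x3C) >> 2          # packet type lives in bits 5-2
            body_len = data[1]                 # length-type 0: one-octet length
            # v4 key body: version (1), creation time (4), algorithm (1), then MPIs
            version, created, algorithm = struct.unpack(">BIB", data[2:8])
            return {"tag": tag, "length": body_len, "version": version,
                    "created": created, "algorithm": algorithm}

    Section 4.2 of RFC 4880 covers the header bits; section 5.5.2 covers the key-material fields that follow.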

    Read the article
