Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 123/173 | < Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >

  • Memcache won't flush or clear memory

    - by pedalpete
    I've been trying to clear my memcache as I'm noticing the storage taking up almost 30% of server memory when using ps aux. So I ran the following PHP code:

        $memcache = new Memcache;
        $memcache->connect("localhost", 11211);
        $memcache->flush();
        print_r($memcache->getStats());

    This results in the output of:

        ( [pid] => 4936 [uptime] => 27318915 [time] => 1255318611 [version] => 1.2.2
          [pointer_size] => 64 [rusage_user] => 9.659531 [rusage_system] => 49.770433
          [curr_items] => 57864 [total_items] => 128246 [bytes] => 1931734247
          [curr_connections] => 1 [total_connections] => 128488 [connection_structures] => 17
          [cmd_get] => 170288 [cmd_set] => 128246 [get_hits] => 45464 [get_misses] => 124824
          [evictions] => 1009 [bytes_read] => 5607431213 [bytes_written] => 1806543589
          [limit_maxbytes] => 2147483648 [threads] => 1 )

    This should be fairly basic, but clearly, I'm missing something.
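
    A note on interpreting those stats, with a minimal PHP sketch (the "canary" key is made up for illustration): flush() marks every item as expired, but memcached's slab allocator never returns memory to the OS, so ps will keep showing the resident size and getStats() can keep reporting stale-looking item and byte counts even after a successful flush. Checking that a known key is gone is a more direct test:

        $memcache = new Memcache;
        $memcache->connect("localhost", 11211);

        $memcache->set("canary", "alive");
        $memcache->flush();

        var_dump($memcache->get("canary")); // bool(false) after a successful flush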

  • Sphinx non-fulltext, integer only search

    - by James
    Hello guys, I've got a few tables that literally only hold integers, no "words", and for some reason Sphinx is unable to index this data. It just returns "0 bytes" errors for these indexes. Is it possible to do this? If so, how? Below is an example from my sphinx.conf for one that fails:

        source track
        {
            type      = mysql
            sql_host  = host
            sql_user  = user
            sql_pass  = pass
            sql_db    = db
            sql_port  = port
            sql_query = SELECT id, user, time FROM track;
            sql_attr_uint  = user
            sql_attr_uint  = time
            sql_query_info = SELECT * FROM track WHERE id=$id
        }

        index track
        {
            source       = track
            path         = /var/lib/sphinx/track
            docinfo      = extern
            charset_type = utf-8
            min_prefix_len = 1
            enable_star  = 1
        }
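
    If this is hitting Sphinx's requirement that every index contain at least one full-text field (columns declared as attributes don't count as fields), one hedged workaround is to synthesize a dummy text field in the query; a sketch assuming the schema above:

        source track
        {
            # ... connection settings as above ...
            # CONCAT gives Sphinx one indexable text field; user and time
            # stay integer attributes for filtering and sorting.
            sql_query = SELECT id, CONCAT('doc', id) AS content, user, time FROM track;
            sql_attr_uint = user
            sql_attr_uint = time
        }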

  • Reading files using Windows API

    - by Eli Polonsky
    Hi, I'm trying to write a console program that reads characters from a file. I want it to be able to read from a Unicode file as well as an ANSI one. How should I address this issue? Do I need to programmatically distinguish the type of file and read accordingly, or can I somehow use the Windows API data types like TCHAR and such? The only difference between reading the files is that in Unicode I have to read 2 bytes per character while in ANSI it's 1 byte? I'm a little lost with this Windows API. I would appreciate any help. Thanks
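
    There is no TCHAR magic that detects the file's encoding at runtime; TCHAR only switches your build between narrow and wide characters. One common heuristic is to sniff for a byte-order mark. A minimal C sketch, under the assumption that the "Unicode" (UTF-16LE) file was saved with a BOM - many ANSI files have no marker at all, so absence is only a guess:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE h = CreateFileA("input.txt", GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (h == INVALID_HANDLE_VALUE)
                return 1;

            unsigned char bom[2] = {0};
            DWORD got = 0;
            ReadFile(h, bom, 2, &got, NULL);

            if (got == 2 && bom[0] == 0xFF && bom[1] == 0xFE)
                printf("UTF-16LE (\"Unicode\") BOM found: read 2-byte WCHARs.\n");
            else
                printf("No UTF-16LE BOM: treat as ANSI, 1 byte per character.\n");

            CloseHandle(h);
            return 0;
        }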

  • How to use less memory while running a task in Symfony 1.4?

    - by Guillaume Flandre
    I'm using Symfony 1.4 and Doctrine. So far I've had no problem running tasks with Symfony, but now that I have to import a pretty big amount of data and save it to the database, I get the infamous "Fatal error: Allowed memory size of XXXX bytes exhausted". During this import I'm only creating new objects, setting a few fields and saving them. I'm pretty sure it has something to do with the number of objects I'm creating when saving data. Unsetting those objects doesn't do anything, though. Are there any best practices to limit memory usage in Symfony?
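
    unset() alone doesn't help because Doctrine 1's identity map and the record's internal references keep the objects alive. A hedged sketch of the usual batch-import hygiene (the $rows source and Item model are made up; free() and Doctrine_Connection::clear() exist in Doctrine 1.x):

        <?php
        foreach ($rows as $i => $row) {
            $item = new Item();
            $item->fromArray($row);
            $item->save();

            $item->free(true);  // break the record's internal reference cycles
            unset($item);

            if ($i % 500 === 0) {
                Doctrine_Manager::connection()->clear();  // empty the identity map
            }
        }

    Running the task in a non-debug environment also matters: with sf_debug on, the Doctrine profiler records every query executed, which grows without bound during a long import.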

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's the sort-of-universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short blobs (<1 kB each) of information about 100M+ URLs. There is (or should be; MySQL cannot seem to handle it) a very high volume of insert/update/retrieve but few complex queries - not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/relational model/etc. optional - anything will do as long as it's fast, networked, and language-independent.

  • Read the first 1 KB of a BLOB from Oracle

    - by Angus
    Hi, I wish to extract just the first 1024 bytes of a stored BLOB, not the whole file. The reason is that I want to extract the metadata from a file as quickly as possible, without having to select the whole BLOB. I understand the following:

        select dbms_lob.substr(file_blob, 16, 1)
        from file_upload
        where file_upload_id = 504;

    which returns the data as hex. How may I do this so it returns binary data, without selecting the whole BLOB? Thanks in advance.
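
    For what it's worth, dbms_lob.substr(lob, amount, offset) already reads only the requested bytes, so asking for 1024 doesn't pull the rest of the BLOB; a sketch:

        -- dbms_lob.substr on a BLOB returns RAW, which SQL*Plus displays as
        -- hex. Fetched through a client driver (JDBC, ODP.NET, ...), the
        -- same RAW arrives as a plain byte array - no whole-BLOB read needed.
        SELECT dbms_lob.substr(file_blob, 1024, 1)
        FROM   file_upload
        WHERE  file_upload_id = 504;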

  • Perl LWP::UserAgent mishandling UTF-8 response

    - by RedGrittyBrick
    When I use LWP::UserAgent to retrieve content encoded in UTF-8, it seems LWP::UserAgent doesn't handle the encoding correctly. Here's the output after setting the Command Prompt window to Unicode with the command chcp 65001. Note that this initially gives the appearance that all is well, but I think that's just the shell reassembling bytes and decoding UTF-8; from the other output you can see that Perl itself is not handling wide characters correctly.

        C:\> perl getutf8.pl
        ======================================================================
        HTTP/1.1 200 OK
        Connection: close
        Date: Fri, 31 Dec 2010 19:24:04 GMT
        Accept-Ranges: bytes
        Server: Apache/2.2.8 (Win32) PHP/5.2.6
        Content-Length: 75
        Content-Type: application/xml; charset=utf-8
        Last-Modified: Fri, 31 Dec 2010 19:20:18 GMT
        Client-Date: Fri, 31 Dec 2010 19:24:04 GMT
        Client-Peer: 127.0.0.1:80
        Client-Response-Num: 1

        <?xml version="1.0" encoding="UTF-8"?>
        <name>Budějovický Budvar</name>
        ======================================================================
        Response content length is 33

        ....v....1....v....2....v....3....v....4
        <name>Budějovický Budvar</name>
        . . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . .
        3c6e616d653e427564c49b6a6f7669636bc3bd204275647661723c2f6e616d653e
        < n a m e > B u d ? ? j o v i c k ? ? B u d v a r < / n a m e >

    Above you can see the payload is 31 characters long but Perl thinks it is 33. For confirmation, in the hex dump we can see that the UTF-8 sequences c49b and c3bd are being interpreted as four separate characters and not as two Unicode characters. Here's the code:

        #!perl
        use strict;
        use warnings;
        use LWP::UserAgent;

        my $ua = LWP::UserAgent->new();
        my $response = $ua->get('http://localhost/Bud.xml');
        if (!$response->is_success) { die $response->status_line; }

        print '=' x 70, "\n", $response->as_string(), '=' x 70, "\n";

        my $r = $response->decoded_content(charset => 'UTF-8');
        $/ = "\x0d\x0a";    # seems to be \x0a otherwise!
        chomp($r);

        # Remove any XML prologue
        $r =~ s/^<\?.*\?>\x0d\x0a//;

        print "Response content length is ", length($r), "\n\n";
        print "....v....1....v....2....v....3....v....4\n";
        print $r, "\n";
        print ". . . . v . . . . 1 . . . . v . . . . 2 . . . . v . . . . 3 . . . . \n";
        print unpack("H*", $r), "\n";
        print join(" ", split("", $r)), "\n";

    Note that Bud.xml is UTF-8 encoded without a BOM. How can I persuade LWP::UserAgent to do the right thing? P.S. Ultimately I want to translate the Unicode data into an ASCII encoding, even if it means replacing each non-ASCII character with one question mark or other marker. I have accepted Ysth's "upgrade" answer - because I know it is the right thing to do when possible. However, I am going to use a work-around (which may depress Tom further): $r = encode("cp437", decode("utf8", $r));
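
    For what it's worth, two output-side details usually matter on Windows consoles: declaring the encoding of STDOUT instead of printing wide characters raw, and doing the ASCII-ish fallback with Encode explicitly. A hedged sketch (assuming decoded_content() returns a character string, which it should with a current LWP):

        #!perl
        use strict;
        use warnings;
        use Encode qw(encode);
        use LWP::UserAgent;

        # Print wide characters as UTF-8 instead of Perl's raw internal bytes.
        binmode STDOUT, ':encoding(UTF-8)';

        my $ua = LWP::UserAgent->new();
        my $response = $ua->get('http://localhost/Bud.xml');
        die $response->status_line unless $response->is_success;

        my $chars = $response->decoded_content(charset => 'UTF-8');
        print length($chars), " characters\n";   # 31 if decoding worked
        print $chars, "\n";

        # cp437 bytes for the console; unmappable characters are substituted.
        my $bytes = encode("cp437", $chars);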

  • Can we represent bit fields in JSON/BSON?

    - by zubair
    We have a dozen simulators talking to each other over UDP. The interface definition is managed in a database. The simulators are written in different languages: mostly C++, some in Java and C#. Currently, when a systems engineer changes the interface definition database, simulator developers manually update the communication data structures in their code. The data is mostly 2-5 bytes, with bit fields for each signal. What I want to do is generate one file from the interface definition database describing the byte and bit field definitions, and let each developer add it to his simulator code with minimal fuss. I looked at JSON/BSON but couldn't find a way to represent bit fields in it. Thanks, Zubair
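
    JSON has no bit-field type, but nothing stops the generated file from describing the layout as plain data that each simulator's (de)serializer interprets. A sketch with made-up message and field names:

        {
          "message": "NavStatus",
          "size_bytes": 3,
          "fields": [
            { "name": "mode",    "byte": 0, "bit": 0, "width": 3 },
            { "name": "valid",   "byte": 0, "bit": 3, "width": 1 },
            { "name": "heading", "byte": 1, "bit": 0, "width": 12 }
          ]
        }

    Each language then needs one small, hand-written shift-and-mask routine driven by this description, instead of hand-edited structs per message.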

  • Android bluetooth socket error

    - by ashwini
    I am using the backport Bluetooth API on Android 1.6, with the Google Bluetooth Chat sample app for testing. The app works fine in normal scenarios. But when I try to connect to a paired device that is switched off, I get the following errors:

        01-04 09:00:11.629: ERROR/BluetoothEventLoop.cpp(84): onGetRemoteServiceChannelResult: D-Bus error: org.bluez.Error.ConnectionAttemptFailed (Host is down)
        01-04 09:00:11.729: DEBUG/dalvikvm(128): GC freed 4535 objects / 256008 bytes in 296ms
        01-04 09:00:21.880: ERROR/bluetooth_RfcommSocket.cpp(1433): connect error: Host is down (112)

    Yet the app still sets the state as connected, and it is unable to catch the exception. Why does this happen? Or is it a quirk of the backport API? Any help is appreciated, as I am struggling a lot to get things running fine.

  • MySQL indexes: how do they work?

    - by bob-the-destroyer
    I'm a complete newbie with MySQL indexes. I have several MyISAM tables on MySQL 5.0.x with utf8 charsets and collations, holding 100k+ records each. The primary keys are generally integers. Many columns in each table may have duplicate values. I need to quickly count, sum, average, or otherwise perform custom calculations on any number of fields in each table, or joined on any number of others. I found this page giving an overview of MySQL index usage: http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html, but I'm still not sure I'm using indexes right. Just when I think I've made the perfect index out of a collection of fields I want to calculate against, I get the "index must be under 1000 bytes" error. Can anyone explain how to most efficiently create and use indexes to speed up queries? Caveat: upgrading MySQL is not possible in this case. I'm using Navicat Lite for db administration, but this app isn't required.
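
    The 1000-byte limit bites quickly with utf8, where every character in a key counts as 3 bytes (a single VARCHAR(255) column already costs 765). Prefix indexes are the usual escape hatch; a sketch with hypothetical table and column names:

        -- Index only the first 50 characters of each long string column so
        -- the composite key stays under MyISAM's 1000-byte limit:
        -- 2 x (50 chars x 3 bytes) + an 8-byte DATETIME is well inside it.
        CREATE INDEX idx_sales_report
            ON sales (region(50), product_code(50), sold_at);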

  • Regarding XML parsing on the iPhone

    - by Prash.......
    Hi... I am developing an application in which I am doing XML parsing, and I get an error in the [xmlParser parse] method. The error is as follows:

        [NSCFString bytes]: unrecognized selector sent to instance 0x3df6310
        2010-04-30 00:09:46.302 SPCiphone2[4234:1003] void SendDelegateMessage(NSInvocation*):
        delegate () failed to return after waiting 10 seconds. main run loop mode: kCFRunLoopDefaultMode

    The code snippet for this is as follows:

        responseOfWebResultData = [[NSMutableString alloc] initWithData:responseData
                                                               encoding:NSUTF8StringEncoding];
        NSLog(@"result: %@", responseOfWebResultData);

        // starting the XML parsing
        if (responseOfWebResultData) {
            @try {
                xmlParser = [[NSXMLParser alloc] initWithData:responseOfWebResultData];
                [xmlParser setDelegate:self];
                [xmlParser setShouldResolveExternalEntities:YES];
                [xmlParser parse];
                [responseOfWebResultData release];
            }
            @catch (NSException *e) {
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Please "
                                                                message:[e reason]
                                                               delegate:nil
                                                      cancelButtonTitle:@"Ok"
                                                      otherButtonTitles:nil];
                [alert show];
                [alert release];
            }
        }
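
    The crash message is the clue: initWithData: expects an NSData, but this code hands it the NSMutableString, so the parser calls bytes on a string and dies. A sketch of the fix, reusing the original responseData and checking the parser's error instead of @catch (NSXMLParser reports failures via parserError, not Objective-C exceptions):

        NSXMLParser *xmlParser = [[NSXMLParser alloc] initWithData:responseData];
        [xmlParser setDelegate:self];
        [xmlParser setShouldResolveExternalEntities:YES];

        if (![xmlParser parse]) {
            NSLog(@"XML parse failed: %@", [xmlParser parserError]);
        }
        [xmlParser release];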

  • .NET: converting a byte array to double[]

    - by AJ
    Hello, I am working with a database from a legacy app which stores 24 floating-point values (doubles) as a byte array of length 192, so 8 bytes per value. This byte array is stored in a column of type image in a SQL Server 2005 database. In my .NET app I need to read this byte array and convert it to an array of type double[24]. I can access the field easily enough with reader.GetBytes(...), but how do I convert the returned byte array to double[24]? Any ideas? Thanks, AJ
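
    A sketch of the two usual conversions (the column name is made up; both assume the legacy app wrote little-endian doubles, which is what BitConverter expects on x86):

        byte[] raw = (byte[])reader["Values"];          // 192 bytes

        // Bulk copy: one raw memory copy into the double[] backing store.
        double[] values = new double[raw.Length / 8];   // 24 elements
        Buffer.BlockCopy(raw, 0, values, 0, raw.Length);

        // Element-by-element alternative:
        for (int i = 0; i < values.Length; i++)
        {
            values[i] = BitConverter.ToDouble(raw, i * 8);
        }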

  • Java I/O (Java SE 6) - help me understand the effects of my sample use of streams and writers...

    - by Daddy Warbox
        BufferedWriter out = new BufferedWriter(
            new OutputStreamWriter(
                new BufferedOutputStream(
                    new FileOutputStream("out.txt"))));

    So let me see if I understand this: A byte output stream is opened for file "out.txt". It is then fed to a buffered output stream to make file operations faster. The buffered stream is fed to an output stream writer to bridge from bytes to characters. Finally, this writer is fed to a buffered writer... which adds another layer of buffering? Hmm...
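
    That reading is right, and the double buffer is usually redundant: the BufferedWriter batches characters, and the OutputStreamWriter's encoder already hands reasonably large byte chunks downstream. A sketch of the common Java SE 6 spelling, with the byte-side buffer dropped and the charset made explicit:

        import java.io.*;

        public class WriteDemo {
            public static void main(String[] args) throws IOException {
                BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8"));
                try {
                    out.write("hello");
                    out.newLine();
                } finally {
                    out.close(); // flushes the character buffer, then the stream
                }
            }
        }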

  • How to read a txt file stored in the database (line by line)

    - by Ranjana
    I have stored a txt file in a SQL Server database, and I need to read the txt file line by line to get at its content. My code:

        DataTable dtDeleteFolderFile = new DataTable();
        dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName",
            new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0];

        foreach (DataRow dr in dtDeleteFolderFile.Rows)
        {
            name = dr["FileName"].ToString();
            records = Convert.ToInt32(dr["NoOfRecords"].ToString());
            bytes = (Byte[])dr["Data"];
        }

        FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open);
        StreamReader streamreader = new StreamReader(readfile);
        string line = "";
        line = streamreader.ReadLine();

    But here I have used a FileStream to read from a particular path, whereas I have saved the txt file in byte format into my database. How do I read the txt file content using the byte[] value, instead of using the path?
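
    A sketch: wrap the byte[] in a MemoryStream, so the StreamReader reads straight from memory and no file on disk is involved:

        using (StreamReader streamreader = new StreamReader(new MemoryStream(bytes)))
        {
            string line;
            while ((line = streamreader.ReadLine()) != null)
            {
                // process each line of the stored text file
            }
        }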

  • Unit testing UDP socket handling code

    - by JustJeff
    Are there any 'good' ways to cause a thread waiting on a recvfrom() call to become unblocked and return with an error? The motivation is writing unit tests for a system which includes a unit that reads UDP datagrams. One of the branches handles errors on the recvfrom call itself. The code isn't required to distinguish between different types of errors; it just has to set a flag. I've thought of closing the socket from another thread, or doing a shutdown on it, to cause recvfrom to return with an error, but this seems a bit heavy-handed. I've seen mention elsewhere that sending an over-sized packet would do it, and so set up an experiment where a 16K buffer was sent to a recvfrom waiting for just 4K, but that didn't result in an error: recvfrom just returned 4096, indicating it had gotten that many bytes.
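
    The over-sized-packet trick is platform-dependent: Windows fails the call with WSAEMSGSIZE, while POSIX recvfrom silently truncates the datagram, which matches the 4096 result above. A less invasive lever for tests is a receive timeout; a POSIX C sketch (on Windows, SO_RCVTIMEO takes a DWORD of milliseconds instead):

        #include <sys/socket.h>
        #include <sys/time.h>

        /* After this, a recvfrom() with nothing to read returns -1 with
         * errno set to EAGAIN/EWOULDBLOCK - a genuine error return that
         * the error-handling branch can be tested against. */
        static int set_recv_timeout(int sock, long seconds)
        {
            struct timeval tv;
            tv.tv_sec = seconds;
            tv.tv_usec = 0;
            return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
        }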

  • How do I handle partial write completions from overlapped I/O using I/O Completion Ports

    - by Poni
    On Windows I/O completion ports, say I do this:

        void function()
        {
            WSASend("1111"); // A
            WSASend("2222"); // B
            WSASend("3333"); // C
        }

    If I got a "write-complete" that says 3 bytes of WSASend() A were sent, is it possible that right after that I'll get a "write-complete" that tells me that some or all of B & C were sent, or will TCP hold them until I re-issue a WSASend() call with the rest of A's data? Or will TCP complete it automatically?

  • How to open a large text file in C#

    - by desmati
    I have a text file that contains about 100000 articles. The structure of the file is:

        BEGIN OF FILE
        .Document ID 42944-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 1
        .Document ID 42945-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 2
        END OF FILE

    I want to open this file in C# and process it line by line. I tried this code:

        String[] FileLines = File.ReadAllText(TB_SourceFile.Text)
                                 .Split(Environment.NewLine.ToCharArray());

    But it says: "Exception of type 'System.OutOfMemoryException' was thrown." The question is: how can I open this file and read it line by line? File size: 564 MB (591,886,626 bytes). File encoding: UTF-8. The file contains Unicode characters.
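
    A sketch using a StreamReader, so only one line lives in memory at a time (StreamReader defaults to UTF-8, so the Unicode characters survive):

        using (StreamReader reader = new StreamReader(TB_SourceFile.Text))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (line.StartsWith(".Document ID"))
                {
                    // a new article starts here
                }
                // process the current line
            }
        }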

  • Resizing Images with ASP.NET and saving to Database

    - by Ryan
    I need to take an uploaded image, resize it, and save it to the database. Simple enough, except I don't have access to save any temp files to the server. I'm taking the image, resizing it as a Bitmap, and need to save it to a database field as the original image type (JPG, for example). How can I get the file bytes in this situation? Before, I was using ImageUpload.FileBytes(), but now that I'm resizing I'm dealing with Images and Bitmaps instead of FileUploads, and can't seem to find anything that will give me the bytes. Thanks!
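
    A sketch: save the resized Bitmap into a MemoryStream and take the stream's bytes; nothing touches the server's disk (resizedBitmap stands in for the Bitmap produced by the resize step):

        byte[] jpegBytes;
        using (MemoryStream ms = new MemoryStream())
        {
            resizedBitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
            jpegBytes = ms.ToArray();   // store this in the database field
        }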

  • How to set parameters in Python zlib module

    - by fagricipni
    I want to write a Python program that makes PNG files. My big problem is generating the CRC and the data for the IDAT chunk. Python 2.6.4 does have a zlib module, but there are extra settings needed. The PNG specification REQUIRES the IDAT data to be compressed with zlib's deflate method with a window size of 32768 bytes, but I can't find how to set those parameters in the Python zlib module. As for the CRC for each chunk, the zlib module documentation indicates that it contains a CRC function. I believe that calling that CRC function as crc32(data, -1) will generate the CRC that I need, though if necessary I can translate the C code given in the PNG specification. Note that I can generate the rest of the PNG file and the data that is to be compressed for the IDAT chunk; I just don't know how to properly compress the image data for the IDAT chunk after implementing the initial filtering step.
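
    Two points worth pinning down, with a sketch: zlib's default window is already 2**15 = 32768 bytes (wbits=15), so plain zlib.compress() output is valid IDAT data; and PNG's CRC covers the chunk type plus chunk data using zlib.crc32's default starting value - passing -1 as the start value would give the wrong check.

        import struct
        import zlib

        def make_chunk(chunk_type, data):
            # CRC is over type + data; crc32's default start value is correct.
            crc = zlib.crc32(chunk_type + data) & 0xffffffff
            return (struct.pack(">I", len(data)) + chunk_type + data +
                    struct.pack(">I", crc))

        def compress_idat(filtered_scanlines):
            # Spelled out: level 9, deflate, 32768-byte window (all but the
            # level are already the defaults).
            co = zlib.compressobj(9, zlib.DEFLATED, 15)
            return co.compress(filtered_scanlines) + co.flush()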

  • Optimal password salt length

    - by Juliusz Gonera
    I tried to find the answer to this question on Stack Overflow without any success. Let's say I store passwords as SHA-1 hashes (so 160 bits), and let's assume that SHA-1 is enough for my application. How long should the salt used to generate the password's hash be? The only answer I found was that there's no point in making it longer than the hash itself (160 bits in this case), which sounds logical, but should I make it that long? E.g. Ubuntu uses an 8-byte salt with SHA-512 (I guess), so would 8 bytes be enough for SHA-1 too, or maybe that would be too much?
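
    Taking the question's premise that a single SHA-1 pass is acceptable: the salt only needs to be unique per password, and 8 random bytes (2^64 possible values) already makes precomputed tables and salt collisions impractical; 16 bytes costs nothing if in doubt. A Python sketch:

        import hashlib
        import os

        def hash_password(password, salt=None):
            if salt is None:
                salt = os.urandom(8)   # 8 random bytes of salt
            digest = hashlib.sha1(salt + password.encode("utf-8")).hexdigest()
            return salt, digest        # store both; the salt is not secret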

  • Storing uploaded content on a website

    - by Matt
    For the past 5 years, my typical solution for storing uploaded files (images, videos, documents, etc.) has been to throw everything into an "upload" folder and give each file a unique name. I'm looking to refine my methods for storing uploaded content, and I'm just wondering what other methods are used/preferred. I've considered storing each item in its own folder (the folder name being the Id in the db) so I can preserve the uploaded file name. I've also considered uploading all media to a locked folder, then using a file handler to which you pass the Id of the file you want to download in the querystring; it would then read the file and send the bytes to the user. This is handy for checking access and restricting bandwidth for users.

  • How can you tell the source of the data when using the Stream.BeginRead Method?

    - by xarzu
    When using the Stream.BeginRead method, and you are reading from a stream into memory, how is it determined where you are reading the data from? See: http://msdn.microsoft.com/en-us/library/system.io.stream.beginread.aspx - in the list of parameters, I do not see one that tells where the data is being read from:

        buffer   (System.Byte[])        The buffer to read the data into.
        offset   (System.Int32)         The byte offset in buffer at which to begin
                                        writing data read from the stream.
        count    (System.Int32)         The maximum number of bytes to read.
        callback (System.AsyncCallback) An optional asynchronous callback, to be
                                        called when the read is complete.
        state    (System.Object)        A user-provided object that distinguishes this
                                        particular asynchronous read request from
                                        other requests.
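
    The source isn't a parameter because it's the stream instance itself: BeginRead reads from whatever the Stream object represents (a file, a socket, ...), starting at its current Position for seekable streams. A sketch:

        byte[] buffer = new byte[4096];
        FileStream fs = new FileStream("data.bin", FileMode.Open, FileAccess.Read);

        // The data comes from fs - the object BeginRead is called on.
        fs.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            int bytesRead = fs.EndRead(ar);   // bytes actually delivered
            // ... use buffer[0 .. bytesRead) ...
            fs.Close();
        }, null);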

  • Available message types in JMS?

    - by Caylem
    This is based on a past exam question. The question asks you to describe the four types of message available in JMS. The problem is it says "the four", not just "four" - so it assumes there are exactly four, no more, no less. However, according to this site there seem to be five:

        streams
        maps
        text
        objects
        bytes

    Another book states that XML is a potential additional type in future versions of JMS. Is XML already available? Am I missing something, or is the question just wrong? Thanks.
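
    For reference, JMS 1.1 does define five body types, and no XML message type; an XML payload normally just rides in a text or bytes message. A Java sketch naming them (session is an open javax.jms.Session):

        import javax.jms.*;

        public class JmsBodyTypes {
            static void createAll(Session session) throws JMSException {
                TextMessage   text   = session.createTextMessage("hello");
                BytesMessage  bytes  = session.createBytesMessage();
                MapMessage    map    = session.createMapMessage();
                StreamMessage stream = session.createStreamMessage();
                ObjectMessage object = session.createObjectMessage(new java.util.Date());
            }
        }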

  • Will MyISAM type tables work better than InnoDB for large numbers of columns?

    - by Ethan
    I have a MySQL InnoDB table with 238 columns: 56 of them are of type TEXT, 27 are VARCHAR(255). I sometimes get MySQL error 139 when users insert data. After research, I found that I'm probably running into InnoDB row size/column size/column count limitations. (I'm putting it that way because the specific limits among those three things are interdependent.) The docs on InnoDB give an idea of the limits. If I switch this table to MyISAM, is it likely to solve the problem? I understand MyISAM's maximum row size is 65,535 bytes; I think I'm hitting InnoDB's additional ~8000-byte limit somehow. Switching to PostgreSQL is also a remote option, but would take much longer.
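
    MyISAM would likely dodge this particular error, since it stores TEXT contents outside the row with only a pointer inline. If staying on InnoDB is preferable, one hedged alternative (needs the Barracuda file format, i.e. the InnoDB plugin or MySQL 5.5+): the older Antelope format keeps a 768-byte prefix of every long column inside the row, which with 56 TEXT columns can overrun the ~8000-byte row limit behind error 139, while DYNAMIC rows store long columns fully off-page.

        -- Assumes innodb_file_format=Barracuda and innodb_file_per_table=1;
        -- wide_table is a stand-in for the real table name.
        ALTER TABLE wide_table ROW_FORMAT=DYNAMIC;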

  • Magick++ Read Image with ICC colorspace

    - by FlashFan
    Hi guys, I need to know how I can read an image which uses a separate ICC color profile. The image consists of 26,099,520 bytes, which is 2480 width * 3508 height * 3 components per pixel. I tried it with the following code:

        Image *image = new Image();
        Blob *blob = new Blob(imagedata.c_str(), imagedata.length());
        image->read(*blob, Geometry(2480, 3508), 8, "RGB");

        Blob *iccblob = new Blob(iccdata.c_str(), iccdata.length());
        image->iccColorProfile(*iccblob);

        image->write("result.jpg");

    But the colors are the same as when I don't set the ICC profile on the image, and the colors are wrong in both cases. Thanks for your help!
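
    A hedged sketch of one likely explanation: assigning a profile to an image that has none only tags the pixels, while ImageMagick converts colors when a second profile is assigned over an existing one. So declaring the source profile first and then applying a target profile (the target ICC data below is a made-up stand-in, e.g. an sRGB profile) should actually transform the data:

        #include <Magick++.h>

        Magick::Image convertWithProfiles(const std::string &pixels,
                                          const std::string &sourceIcc,
                                          const std::string &targetIcc)
        {
            Magick::Image image;
            Magick::Blob pixelBlob(pixels.data(), pixels.size());
            image.read(pixelBlob, Magick::Geometry(2480, 3508), 8, "RGB");

            // First assignment tags the raw pixels with their real space...
            image.profile("ICC", Magick::Blob(sourceIcc.data(), sourceIcc.size()));
            // ...second assignment converts them into the target space.
            image.profile("ICC", Magick::Blob(targetIcc.data(), targetIcc.size()));
            return image;
        }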
