Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 145/173

  • Keeping track of leading zeros with BitSet in Java

    - by Ryan
    So, according to this question there are two ways to look at the size of a BitSet. The first is size(), which is legacy and not really useful; I agree with this, since the size is 64 after doing BitSet b = new BitSet(8). The second is length(), which returns the index of the highest set bit. In the above example, length() will return 0. This is somewhat useful, but it doesn't accurately reflect the number of bits the BitSet is supposed to be representing when you have leading zeros. The information I'm dealing with rarely (if ever) falls evenly into 8-bit bytes, and the leading 0s are just as important to me as the 1s. I have some data fields that are 333 bits long, some that are 20, etc. Is there a better way to deal with bit-level details in Java that will keep track of leading zeros? Otherwise I'm going to have to 'roll my own', so to speak. I have a few ideas already, but I'd prefer not to reinvent the wheel if possible.
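
    A minimal sketch of one way to "roll your own" without giving up BitSet underneath (FixedBitSet is a made-up name, not a JDK class): keep the declared width alongside the bits, so leading zeros are preserved no matter where the highest set bit happens to be.

        // Sketch: a fixed-width bit vector that remembers its declared length,
        // so leading zeros survive independently of BitSet.length().
        import java.util.BitSet;

        public final class FixedBitSet {
            private final BitSet bits;
            private final int width;   // logical number of bits, e.g. 333 or 20

            public FixedBitSet(int width) {
                this.width = width;
                this.bits = new BitSet(width);
            }

            public void set(int index, boolean value) {
                if (index < 0 || index >= width) throw new IndexOutOfBoundsException();
                bits.set(index, value);
            }

            public boolean get(int index) {
                if (index < 0 || index >= width) throw new IndexOutOfBoundsException();
                return bits.get(index);
            }

            public int width() { return width; }   // stays 333 even if all bits are zero

            @Override
            public String toString() {
                StringBuilder sb = new StringBuilder(width);
                for (int i = width - 1; i >= 0; i--) sb.append(bits.get(i) ? '1' : '0');
                return sb.toString();               // leading zeros included
            }
        }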

    Read the article

  • Combining SQL Rows

    - by lumberjack4
    I've got a SQL Compact database that contains a table of IP packet headers. The table looks like this:

        Table: PacketHeaders
        ID  SrcAddress  SrcPort  DestAddress  DestPort  Bytes
        1   10.0.25.1   255      10.0.25.50   500       64
        2   10.0.25.50  500      10.0.25.1    255       80
        3   10.0.25.50  500      10.0.25.1    255       16
        4   75.48.0.25  387      74.26.9.40   198       72
        5   74.26.9.40  198      75.48.0.25   387       64
        6   10.0.25.1   255      10.0.25.50   500       48

    I need to perform a query to show the 'conversations' going on across a local network; packets going from A to B are part of the same conversation as packets going from B to A. Basically what I need is something that looks like this:

        Returned Query:
        SrcAddress  SrcPort  DestAddress  DestPort  TotalBytes  BytesA->B  BytesB->A
        10.0.25.1   255      10.0.25.50   500       208         112        96
        75.48.0.25  387      74.26.9.40   198       136         72         64

    As you can see, I need the query (or series of queries) to recognize that A-B is the same as B-A and break up the byte counts accordingly. I'm not a SQL guru by any means, but any help on this would be greatly appreciated.
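
    One possible shape for the query, offered only as an untested sketch (it assumes CASE and GROUP BY behave here as in full SQL Server, and it compares the address strings purely to pick a canonical direction for each pair, not to order IPs numerically):

        -- Sketch: normalize each row so the "smaller" endpoint is always side A,
        -- then aggregate per normalized pair.
        SELECT
            CASE WHEN SrcAddress < DestAddress THEN SrcAddress  ELSE DestAddress END AS AddrA,
            CASE WHEN SrcAddress < DestAddress THEN SrcPort     ELSE DestPort    END AS PortA,
            CASE WHEN SrcAddress < DestAddress THEN DestAddress ELSE SrcAddress  END AS AddrB,
            CASE WHEN SrcAddress < DestAddress THEN DestPort    ELSE SrcPort     END AS PortB,
            SUM(Bytes) AS TotalBytes,
            SUM(CASE WHEN SrcAddress < DestAddress THEN Bytes ELSE 0 END) AS BytesAtoB,
            SUM(CASE WHEN SrcAddress < DestAddress THEN 0 ELSE Bytes END) AS BytesBtoA
        FROM PacketHeaders
        GROUP BY
            CASE WHEN SrcAddress < DestAddress THEN SrcAddress  ELSE DestAddress END,
            CASE WHEN SrcAddress < DestAddress THEN SrcPort     ELSE DestPort    END,
            CASE WHEN SrcAddress < DestAddress THEN DestAddress ELSE SrcAddress  END,
            CASE WHEN SrcAddress < DestAddress THEN DestPort    ELSE SrcPort     END;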

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great. Then I put in a million similar records, all with keys in the form X.Y where X in (1..10) and Y in (1..100,000), and queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec). Finally I put ten million records in, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then, as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 gigs of RAM, so I don't think it's the machine. Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1+sRand.Next(9)) + "." + (1+sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights.

    Read the article

  • Implementing parts of rfc4226 (HOTP) in mysql

    - by Moose Morals
    Like the title says, I'm trying to implement the programmatic parts of RFC 4226 "HOTP: An HMAC-Based One-Time Password Algorithm" in SQL. I think I've got a version that works (in that for a small test sample it produces the same result as the Java version in the code), but it contains a nested pair of hex(unhex()) calls, which I feel can be done better. I am constrained by a) needing to do this algorithm, and b) needing to do it in MySQL; otherwise I'm happy to look at other ways of doing this. What I've got so far:

        -- From the inside out...
        -- Concatenate the user's secret and the number of times it's been used
        -- find the SHA1 hash of that string
        -- Turn a 40 byte hex encoding into a 20 byte binary string
        -- keep the first 4 bytes
        -- turn those back into a hex representation
        -- convert that into an integer
        -- Throw away the most-significant bit (solves signed/unsigned problems)
        -- Truncate to 6 digits
        -- store into otp
        -- from the otpsecrets table
        select (conv(hex(substr(unhex(sha1(concat(secret, uses))), 1, 4)), 16, 10)
                & 0x7fffffff) % 1000000
          into otp
          from otpsecrets;

    Is there a better (more efficient) way of doing this?
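
    One possible simplification, untested here but it should be equivalent: LEFT(SHA1(...), 8) is already the hex representation of the digest's first 4 bytes, so the nested unhex()/hex() pair can be dropped. Note this keeps the original behaviour of truncating the first 4 bytes rather than using RFC 4226's dynamic offset:

        -- Sketch: same first-4-bytes truncation without the unhex()/hex() round trip.
        select (conv(left(sha1(concat(secret, uses)), 8), 16, 10) & 0x7fffffff) % 1000000
          into otp
          from otpsecrets;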

    Read the article

  • Inserting an image into SQL Server gives an "operand type clash"

    - by Termedi
    I'm trying to save an image in a SQL Server 2000 database. The data type of the column is image. Here is the code.

    Image upload:

        <?php
        include('config.php');
        if(is_uploaded_file($_FILES['userfile']['tmp_name'])) {
            $fileName = $_FILES['userfile']['name'];
            $tmpName  = $_FILES['userfile']['tmp_name'];
            $fileSize = $_FILES['userfile']['size'];
            $fileType = $_FILES['userfile']['type'];
            $size = filesize($tmpName);
            set_magic_quotes_runtime(0); // deactivate PHP's default escaping of special characters in external files
            $img_binaire = base64_encode(fread(fopen(str_replace("'","''",$tmpName), "r"), $size));
            $query = "INSERT INTO test_image (image_name, image_content, image_size) ".
                     "VALUES ('{$fileName}','{$img_binaire}', '{$size}')";
            odbc_exec($conn, $query) or die('Error, query failed');
            echo "<br>File $fileName uploaded<br>";
            echo "<br>File Size: $fileSize <br>";
        }
        ?>

    Image show:

        <?php
        include('config.php');
        $sql = "select * from test_image where id = 2";
        $rsl = odbc_exec($conn, $sql);
        $image_info = odbc_fetch_array($rsl);
        //$count = sizeof($image_info['image_content']);
        //header('Accept-Ranges: bytes');
        //header('Content-Length: '.$image_info['image_size']);
        //header("Content-length: 17397");
        header('Content-Type: image/jpeg');
        echo base64_decode($image_info['image_content']);
        //echo bindec($image_info['image_content']);
        ?>

    It gives the following error:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][ODBC SQL Server Driver][SQL Server]Operand type clash: text is incompatible with image, SQL state 22005 in SQLExecDirect in C:\xampp\htdocs\test\upload.php on line 25
        Error, query failed

    What do I need to do differently?
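
    The clash happens because the quoted base64 value is treated as a text literal, and SQL Server will not implicitly convert text to image. One commonly suggested workaround, sketched below and untested against this exact setup (it reuses $tmpName, $fileName, $size and $conn from the script above), is to send the raw bytes as an unquoted binary literal (0x...); very large images may still exceed statement-length limits:

        <?php
        // Sketch: send the image as a binary literal instead of a quoted string,
        // so SQL Server does not try to coerce text into the image column.
        $data = file_get_contents($tmpName);   // raw bytes, no base64
        $hex  = '0x' . bin2hex($data);         // T-SQL binary literal

        $query = "INSERT INTO test_image (image_name, image_content, image_size) " .
                 "VALUES ('{$fileName}', {$hex}, '{$size}')";
        odbc_exec($conn, $query) or die('Error, query failed');
        ?>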

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g., as "data" : "<unique identifier>" in the JSON and with the first bytes of the data blob being that same <unique identifier>? CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data -- definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK -- the one I am using during development is here. Thanks

    Read the article

  • PHP FTP upload problem

    - by Autobyte
    Hi, I am trying to write a small PHP function that will upload files to an FTP server, and I keep getting the same error. I cannot find any fix by googling the problem, so I am hoping you guys can help me here. The error I get is:

        Warning: ftp_put() [function.ftp-put]: Unable to build data connection: No route to host in .

    The file is created on the FTP server, but it is zero bytes. Here is the code:

        <?php
        $source_file = "test.dat";
        $ftp_server = "ftp.server.com";
        $ftp_user = "myname";
        $ftp_pass = "mypass";
        $destination_file = "test.dat";

        $cid = ftp_connect($ftp_server);
        if (!$cid) {
            exit("Could not connect to server: $ftp_server\n");
        }

        $login_result = ftp_login($cid, $ftp_user, $ftp_pass);
        if (!$login_result) {
            echo "FTP connection has failed!";
            echo "Attempted to connect to $ftp_server for user $ftp_user";
            exit;
        } else {
            echo "Connected to $ftp_server, for user $ftp_user";
        }

        $upload = ftp_put($cid, $destination_file, $source_file, FTP_BINARY);
        if (!$upload) {
            echo "Failed upload for $source_file to $ftp_server as $destination_file<br>";
            echo "FTP upload has failed!";
        } else {
            echo "Uploaded $source_file to $ftp_server as $destination_file";
        }
        ftp_close($cid);
        ?>
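
    Not something stated in the post, but "Unable to build data connection" is the classic symptom of an active-mode FTP data connection being blocked by NAT or a firewall, which would also explain the zero-byte file (the control connection works, the data connection never opens). A sketch of the usual fix is to switch to passive mode before uploading:

        // Sketch: let the client open the data connection instead of the server.
        ftp_pasv($cid, true);
        $upload = ftp_put($cid, $destination_file, $source_file, FTP_BINARY);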

    Read the article

  • How the CPU finds the ISR and distinguishes between devices

    - by ripunjay-tripathi-gmail-com
    I should first share everything I know, and that is complete chaos. There are several different questions on the topic, so please don't get irritated :).

    1) To find an ISR, the CPU is provided with an interrupt number. In x86 machines (286/386 and above) there is an IVT with ISRs in it; each entry is 4 bytes in size, so we need to multiply the interrupt number by 4 to find the ISR. So the first bunch of questions is: I am completely confused about the mechanism by which the CPU receives the interrupt. To raise an interrupt, the device first signals an IRQ; then what? Does the interrupt number travel "on the IRQ" towards the CPU? I also read something about the device putting the ISR address on the data bus; what is that then? What is the concept of devices overriding the ISR? Can somebody tell me a few example devices where the CPU polls for interrupts, and where it finds the ISR for them?

    2) If two devices share an IRQ (which is very much possible), how does the CPU distinguish between them? What if both devices raise an interrupt of the same priority simultaneously? I gather there will be masking of same-type and lower-priority interrupts, but how does this communication happen between the CPU and the device controller? I studied the role of the PIC and APIC for this problem, but could not understand it. Thanks for reading, and thank you very much for answering.

    Read the article

  • EVP_PKEY from char buffer in x509 (PKCS7)

    - by sid
    Hi all, I have a DER certificate from which I am retrieving the public key into an unsigned char buffer as follows. Is this the right way to get it?

        pStoredPublicKey = X509_get_pubkey(x509);
        if(pStoredPublicKey == NULL) {
            printf(": publicKey is NULL\n");
        }
        if(pStoredPublicKey->type == EVP_PKEY_RSA) {
            RSA *x = pStoredPublicKey->pkey.rsa;
            bn = x->n;
        } else if(pStoredPublicKey->type == EVP_PKEY_DSA) {
        } else if(pStoredPublicKey->type == EVP_PKEY_EC) {
        } else {
            printf(" : Unknown publicKey\n");
        }

        // extract the bytes from the public key & convert into an unsigned char buffer
        buf_len = (size_t) BN_num_bytes(bn);
        key = (unsigned char *)malloc(buf_len);
        n = BN_bn2bin(bn, (unsigned char *) key);
        for (i = 0; i < n; i++) {
            printf("%02x\n", (unsigned char) key[i]);
        }
        keyLen = EVP_PKEY_size(pStoredPublicKey);
        EVP_PKEY_free(pStoredPublicKey);

    And with this unsigned char buffer, how do I get back the EVP_PKEY for RSA? Or can I use the following?

        EVP_PKEY *d2i_PublicKey(int type, EVP_PKEY **a, unsigned char **pp, long length);
        int i2d_PublicKey(EVP_PKEY *a, unsigned char **pp);
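
    Not from the original post, but worth sketching: the code above only exports the modulus n, and the modulus alone is not enough to rebuild the key; the public exponent e is needed as well. Assuming both were saved, an RSA EVP_PKEY can be reassembled roughly like this (OpenSSL 1.0.x-era API, matching the direct struct access used above):

        /* Sketch: rebuild an RSA public EVP_PKEY from raw modulus/exponent buffers. */
        #include <openssl/rsa.h>
        #include <openssl/evp.h>
        #include <openssl/bn.h>

        EVP_PKEY *rsa_pubkey_from_raw(const unsigned char *n_buf, int n_len,
                                      const unsigned char *e_buf, int e_len)
        {
            RSA *rsa = RSA_new();
            EVP_PKEY *pkey = EVP_PKEY_new();
            if (!rsa || !pkey) goto err;

            rsa->n = BN_bin2bn(n_buf, n_len, NULL);   /* modulus  */
            rsa->e = BN_bin2bn(e_buf, e_len, NULL);   /* exponent */
            if (!rsa->n || !rsa->e) goto err;

            if (!EVP_PKEY_assign_RSA(pkey, rsa))      /* pkey now owns rsa */
                goto err;
            return pkey;

        err:
            if (rsa) RSA_free(rsa);
            if (pkey) EVP_PKEY_free(pkey);
            return NULL;
        }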

    Read the article

  • Django upload failing on request data read error

    - by Jake
    Hi All, I've got a Django app that accepts uploads from jQuery uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below.

        exceptions.IOError
        request data read error

        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
            self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
            self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
            return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
            for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
            output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
            for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
            output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
            data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
            return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",) in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?

    Read the article

  • Python minidom and UTF-8 encoded XML with hash references

    - by Jakob Simon-Gaarde
    Hi, I am experiencing some difficulty in my home project where I need to parse a SOAP request. The SOAP is generated with gSOAP and involves string parameters with special characters like the Danish letters "æøå". gSOAP builds SOAP requests with UTF-8 encoding by default, but instead of sending the special characters in raw form (i.e. bytes C3 A6 for the special character "æ") it sends what I think are called character hash references (i.e. &#195;&#166;). I don't completely understand why gSOAP does it this way, as I can see that it has marked the incoming payload as being UTF-8 encoded anyway (Content-Type: text/xml; charset=utf-8), but this is beside the question (I think). Anyway, I guess gSOAP is probably obeying transport rules, or what? When I parse the request from gSOAP in Python with xml.dom.minidom.parseString() I get element values as unicode objects, which is fine, but the character hash references are not decoded as UTF-8 character codes. It unescapes the character hash references, but does not decode the string afterwards. In the end I have a unicode string object with UTF-8 encoding: so if the string "æble" is contained in the XML, it arrives like this in the request: "&#195;&#166;ble". After parsing the XML, the unicode string in the DOM Text Node's data member looks like this: u'\xc3\xa6ble'. I would expect it to look like this: u'\xe6ble'. What am I doing wrong? Should I unescape the SOAP XML before parsing it, or is it somewhere else I should be looking for the solution, maybe gSOAP? Thanks in advance. Best regards, Jakob Simon-Gaarde
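
    A side note, not from the original post: the string minidom hands back here is a unicode object whose code points are really the individual UTF-8 bytes, widened one-to-one (the character references named the bytes, not the characters). A quick Python 2 sketch of the usual repair, probably more useful as a diagnostic than as the final fix:

        # Sketch (Python 2): narrow the code points back to bytes 1:1,
        # then decode those bytes as UTF-8.
        s = u'\xc3\xa6ble'                       # what minidom hands back
        fixed = s.encode('latin-1').decode('utf-8')
        print repr(fixed)                        # u'\xe6ble'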

    Read the article

  • A UnicodeDecodeError that occurs with json in python on Windows, but not Mac.

    - by ventolin
    On Windows, I have the following problem:

        >>> string = "Don´t Forget To Breathe"
        >>> import json, os, codecs
        >>> f = codecs.open("C:\\temp.txt", "w", "UTF-8")
        >>> json.dump(string, f)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python26\lib\json\__init__.py", line 180, in dump
            for chunk in iterable:
          File "C:\Python26\lib\json\encoder.py", line 294, in _iterencode
            yield encoder(o)
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 3-5: invalid data

    (Notice the non-ASCII apostrophe in the string.) However, my friend, on his Mac (also using Python 2.6), can run through this like a breeze:

        > string = "Don´t Forget To Breathe"
        > import json, os, codecs
        > f = codecs.open("/tmp/temp.txt", "w", "UTF-8")
        > json.dump(string, f)
        > f.close(); open('/tmp/temp.txt').read()
        '"Don\\u00b4t Forget To Breathe"'

    Why is this? I've also tried using UTF-16 and UTF-32 with json and codecs, but to no avail.
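
    A sketch of what is probably going on, not taken from the thread: on Windows the literal is a cp1252-encoded byte string, and json assumes UTF-8 when it meets a str, hence the UnicodeDecodeError (the Mac console happens to produce UTF-8 bytes, so it slips through). Decoding to unicode first, or using a unicode literal, sidesteps it:

        # Sketch (Python 2): decode the byte string before dumping.
        import json, codecs

        string = "Don\xb4t Forget To Breathe"   # byte string; 0xb4 is the cp1252 acute accent
        f = codecs.open("C:\\temp.txt", "w", "UTF-8")
        json.dump(string.decode("cp1252"), f)    # or simply write u"Don´t Forget To Breathe"
        f.close()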

    Read the article

  • Creating a fixed length output string with sprintf containing floats

    - by Kungi
    Hi, I'm trying to create a file with the following structure: each line has 32 bytes, and each line follows the format string "%10i %3.7f %3.7f\n". My problem is the following: when I have negative floating point numbers, the line gets longer by one or even two characters, because the minus sign is not counted in the "%3.7f" field width. Is there any way to do this more nicely than this?

        if( node->lng > 0 && node->lat > 0 ) {
            sprintf( osm_node_repr, "%10i %3.7f %3.7f\n", node->id, node->lng, node->lat );
        } else if (node->lng > 0 && node->lat < 0) {
            sprintf( osm_node_repr, "%10i %3.7f %3.6f\n", node->id, node->lng, node->lat );
        } else if (node->lng < 0 && node->lat > 0) {
            sprintf( osm_node_repr, "%10i %3.6f %3.7f\n", node->id, node->lng, node->lat );
        } else if ( node->lng < 0 && node->lat < 0 ) {
            sprintf( osm_node_repr, "%10i %3.6f %3.6f\n", node->id, node->lng, node->lat );
        }

    Thanks for your answers, Andreas
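
    One alternative, sketched here rather than taken from the thread: instead of dropping a digit of precision for negative values, give the float conversions a minimum field width large enough to absorb the sign and let printf left-pad positive values with spaces. The width has to be chosen for the largest magnitude expected (longitudes can need one more integer digit than latitudes):

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            char line[64];
            int id = 42;
            double lng = -12.3456789, lat = 55.1234567;

            /* "%12.7f" always occupies 12 characters for values up to +/-999.9999999,
               with or without a minus sign, so the line length no longer depends on sign. */
            snprintf(line, sizeof line, "%10i %12.7f %12.7f\n", id, lng, lat);
            printf("%zu bytes: %s", strlen(line), line);
            return 0;
        }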

    Read the article

  • GCC - How to realign stack?

    - by psihodelia
    I'm trying to build an application which uses pthreads and the __m128 SSE type. According to the GCC manual, the default stack alignment is 16 bytes, and using __m128 requires 16-byte alignment. My target CPU supports SSE. I'm using a GCC version that doesn't support runtime stack realignment (e.g. -mstackrealign), and I cannot use any other GCC version. My test application looks like this:

        #include <xmmintrin.h>
        #include <pthread.h>

        void *f(void *x) {
            __m128 y;
            ...
        }

        int main(void) {
            pthread_t p;
            pthread_create(&p, NULL, f, NULL);
        }

    The application generates an exception and exits. After some simple debugging (printf "%p", &y), I found that the variable y is not 16-byte aligned. My question is: how can I realign the stack properly (16-byte) without using any GCC flags and attributes (they don't help)? Should I use GCC inline assembler within this thread function f()?
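
    One workaround that needs neither flags nor attributes, shown only as a sketch: don't let the compiler place the __m128 directly on the (possibly misaligned) thread stack; carve an aligned slot out of an oversized local buffer instead (heap allocation with an aligned allocator works the same way):

        /* Sketch: manually align the vector inside an over-sized local buffer. */
        #include <stdint.h>
        #include <stdio.h>
        #include <xmmintrin.h>
        #include <pthread.h>

        static void *f(void *arg) {
            (void)arg;
            char raw[sizeof(__m128) + 15];                          /* worst-case slack   */
            __m128 *y = (__m128 *)(((uintptr_t)raw + 15) & ~(uintptr_t)15);
            *y = _mm_set_ps(1.0f, 2.0f, 3.0f, 4.0f);                /* aligned store now  */
            printf("y is at %p\n", (void *)y);
            return NULL;
        }

        int main(void) {
            pthread_t p;
            pthread_create(&p, NULL, f, NULL);
            pthread_join(p, NULL);
            return 0;
        }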

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with the key "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets have already been downloaded and simply not requesting those (the request/response shapes are sketched below). My questions: Is this the right way to do this? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.) When sending the HTTP requests, should I specify the entire split size in the range, or maybe ask for smaller pieces, say 1024k per request? When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-wise? What if I have several downloads going simultaneously? If I'm not using a memory-mapped file, should I open the file per allowed connection, or simply seek when needing to write? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.) Note: I'll probably be using Qt, but this is a general question so I left code out of it.
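
    For illustration only (hypothetical host and sizes): a ranged request and the two responses that can come back. A server that supports ranges answers 206 Partial Content with a Content-Range header; one that does not simply answers 200 and streams the whole file, which is the case the resume logic has to detect:

        GET /big/file.bin HTTP/1.1
        Host: example.com
        Range: bytes=1048576-2097151

        HTTP/1.1 206 Partial Content
        Content-Range: bytes 1048576-2097151/4194304
        Content-Length: 1048576

        (or, if ranges are not supported)

        HTTP/1.1 200 OK
        Content-Length: 4194304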

    Read the article

  • Problems in getting data from CFStreamCreatePairWithSocketToHost

    - by gkedmi
    Hi, I'm building an iPhone app that opens a socket to a PC app, and I need to get an image from the PC app. It's my first time using CFStreamCreatePairWithSocketToHost. After I establish the socket inside an NSOperation I call:

        CFStreamClientContext streamContext = {0, self, NULL, NULL, NULL};
        BOOL success = CFReadStreamSetClient(myReadStream, kMyNetworkEvents, MyStreamCallBack, &streamContext);
        CFReadStreamScheduleWithRunLoop(myReadStream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

    then I call:

        CFWriteStreamWrite(myWriteStream, &writeBuffer, 3);

        // Open read stream.
        if (!CFReadStreamOpen(myReadStream)) {
            // Notify error
        }
        ...
        while (!cancelled && !finished) {
            SInt32 result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, NO);
            if (result == kCFRunLoopRunStopped || result == kCFRunLoopRunFinished) {
                break;
            }
            if (([NSDate timeIntervalSinceReferenceDate] - _lastRead) > MyConnectionTimeout) {
                // Call timed out
                cancelled = YES;
                break;
            }
            // Also handle stream status
            CFStreamStatus status = CFReadStreamGetStatus(myReadStream);
        }

    and then when I get kCFStreamEventHasBytesAvailable I use:

        while (CFReadStreamHasBytesAvailable(myReadStream)) {
            CFReadStreamRead(myReadStream, readBuffer, 1000);
            // and buffer the bytes
        }

    It's unpredictable: sometimes I get the whole picture, sometimes I get just part of it, and I can't understand what makes the difference. Can someone tell me what is wrong here? Thanks
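
    Not from the thread itself, but one thing worth checking in a sketch: CFReadStreamRead() returns the number of bytes actually read, which is often less than the buffer size, and the stream can legitimately run dry before the whole image has arrived. Accumulating exactly what each call reports, and knowing the total image size up front (e.g. by length-prefixing it on the PC side, an assumption here), makes the receive loop deterministic (receivedData is assumed to be an NSMutableData ivar):

        // Sketch: append exactly as many bytes as CFReadStreamRead() reports,
        // and keep accumulating across events until the expected total arrives.
        UInt8 readBuffer[1024];
        while (CFReadStreamHasBytesAvailable(myReadStream)) {
            CFIndex n = CFReadStreamRead(myReadStream, readBuffer, sizeof(readBuffer));
            if (n <= 0) break;                                      // 0 = end of stream, -1 = error
            [receivedData appendBytes:readBuffer length:(NSUInteger)n];
        }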

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to find out whether ZTable uses fetch techniques or not. In the future we may migrate our lesser system to PostgreSQL; it currently uses "Table" components (like the BDE, but with an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast, because the Locate/Lookup is started on the server and only these N records are refreshed, no matter how many records are in the lookup table. PostgreSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records (I also tried this with MySQL). The adapter is Zeos. Columns: ID, seconds to find, allocated memory in bytes on the client.

        MySQL
        500000    2.761   113 196 344
        1000000   3.214   225 471 232
        313800    0.437   225 471 232
        328066    0.468   225 471 232
        276374    0.390   225 471 232
        905984    1.264   225 471 232
        260253    0.359   225 471 232

        PGSQL
        500000    3.042   113 188 184
        1000000   3.744   225 463 064
        313800    0.436   225 463 064
        328066    0.452   225 463 064
        276374    0.375   225 463 064
        905984    1.295   225 463 064
        260253    0.359   225 463 064
        142023    0.203   225 463 064

    As you can see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow depending on where the record we must find is. I want to ask a few more things: a) Does PGDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds? b) Or can the PostgreSQL ODBC driver help with this problem via ADO (as far as I know ADO can use server-side cursors)? c) Does anybody have experience with lookup tables and performance? Is this a critical question or not (client memory usage included)? d) If there is no way to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup-field changes without a real lookup? Thanks for your help: dd

    Read the article

  • Unable to upload large files on FTP using Apache commons-net-3.1

    - by Nitin
    I am trying to upload a large file (more than 8 MB) using the storeFile(remote, local) method of FTPClient, but it returns false, and the file gets uploaded with some extra bytes. Following is the code with output:

        public class Main {
            public static void main(String[] args) {
                FTPClient client = new FTPClient();
                FileInputStream fis = null;
                try {
                    client.connect("208.106.181.143");
                    client.setFileTransferMode(client.BINARY_FILE_TYPE);
                    client.login("abc", "java");
                    int reply = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection:" + reply);
                    if (FTPReply.isPositiveCompletion(reply)) {
                        System.out.println("Connected Success");
                    }
                    client.changeWorkingDirectory("/" + "Everbest" + "/");
                    client.makeDirectory("ETPSupplyChain5.3-EvbstSP3");
                    client.changeWorkingDirectory("/" + "Everbest" + "/" + "ETPSupplyChain5.3-EvbstSP3" + "/");
                    FTPFile[] names = client.listFiles();
                    String filename = "E:\\Nitin\\D-Drive\\Installer.rar";
                    fis = new FileInputStream(filename);
                    boolean result = client.storeFile("Installer.rar", fis);
                    int replyAfterupload = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection replyAfterupload:" + replyAfterupload);
                    System.out.println("result:" + result);
                    for (FTPFile name : names) {
                        System.out.println("Name = " + name);
                    }
                    client.logout();
                    fis.close();
                    client.disconnect();
                } catch (SocketException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    Output:

        Received Reply from FTP Connection:230
        Connected Success
        32
        /Everbest/ETPSupplyChain5.3-EvbstSP3
        Received Reply from FTP Connection replyAfterupload:150
        result:false
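
    Not from the original post, but one detail stands out and is worth sketching: setFileTransferMode() expects a MODE_* constant, so passing BINARY_FILE_TYPE to it does not make the transfer binary; that is what setFileType(FTP.BINARY_FILE_TYPE) is for. An ASCII-mode transfer of a .rar would explain the extra bytes (line-ending translation). Roughly:

        // Sketch: request a binary transfer explicitly and check the final reply.
        // import org.apache.commons.net.ftp.FTP;
        client.setFileType(FTP.BINARY_FILE_TYPE);   // not setFileTransferMode(...)
        client.enterLocalPassiveMode();             // often needed behind NAT/firewalls

        FileInputStream in = new FileInputStream("E:\\Nitin\\D-Drive\\Installer.rar");
        boolean ok = client.storeFile("Installer.rar", in);
        in.close();
        System.out.println("storeFile: " + ok + ", reply: " + client.getReplyString());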

    Read the article

  • How to read a LARGE SQLite file to be copied into an Android emulator or device from the assets folder?

    - by Peter SHINe
    I guess many people have already read the article "Using your own SQLite database in Android applications": http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368 However, it keeps throwing an IOException at:

        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

    I'm trying to use a large DB file. It's as big as 8 MB. I built it using sqlite3 in Mac OS X, inserted UTF-8 encoded strings (for I am using Korean), and added an android_meta table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at length = myInput.read(buffer). I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code using a much smaller text file, and it worked fine. Can anyone help me out on this? I've searched many places, but no place gave me a clear answer or a good solution (good meaning efficient or easy). I will try BufferedInput(Output)Stream, but if the simpler one cannot work, I don't think this will work either. Can anyone explain the fundamental limits on file input/output in Android, and the right way around them, possibly? I will really appreciate anyone's considerate answer. Thank you. WITH MORE DETAIL:

        private void copyDataBase() throws IOException {
            // Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);

            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // transfer bytes from the input file to the output file
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }
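
    Not from the original article, but a frequent cause of exactly this IOException: on older Android versions an asset that the build compresses cannot be streamed once it is larger than about 1 MB. A common workaround, sketched here with a hypothetical naming scheme and reusing DB_PATH, DB_NAME and myContext from the code above, is to ship the database split into pieces under 1 MB and stitch them together at first run:

        // Sketch: copy the database from < 1 MB asset pieces named
        // DB_NAME + ".001", ".002", ... until no more pieces exist.
        private void copyDataBase() throws IOException {
            OutputStream out = new FileOutputStream(DB_PATH + DB_NAME);
            byte[] buffer = new byte[1024];
            for (int part = 1; ; part++) {
                String pieceName = String.format("%s.%03d", DB_NAME, part);
                InputStream in;
                try {
                    in = myContext.getAssets().open(pieceName);
                } catch (IOException e) {
                    break;                       // no more pieces
                }
                int length;
                while ((length = in.read(buffer)) > 0) {
                    out.write(buffer, 0, length);
                }
                in.close();
            }
            out.flush();
            out.close();
        }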

    Read the article

  • NSOperation and fwrite (iPhone)

    - by Sridhar
    Hi, I am having a problem with this code. Basically I want to execute the fwrite from a timer function asynchronously. Here is the code block in my timer function (it is called by the timer every 0.2 seconds):

        -(void)timerFunction
        {
            WriteFileOperation *operation = [WriteFileOperation writeFileWithBuffer:pFile buffer:readblePixels length:nBytes*15];
            [_queue addOperation:operation];   // Here it is waiting for the fwrite to complete
        }

    WriteFileOperation is an NSOperation subclass which has to write the passed buffer to a file. I added this code in WriteFileOperation's "start" method:

        - (void)start
        {
            if (![NSThread isMainThread])
            {
                [self performSelectorOnMainThread:@selector(start) withObject:nil waitUntilDone:NO];
                return;
            }

            [self willChangeValueForKey:@"isExecuting"];
            _isExecuting = YES;
            [self didChangeValueForKey:@"isExecuting"];

            NSLog(@"write bytes %d", fwrite(_buffer, 1, _nBytes, _file));
            free(_buffer);
            [self finish];
        }

    The problem here is that my timerFunction is blocked by the NSOperation until it writes the buffer to the file (I mean, blocked until the start method finishes its execution), and the performance seems the same as directly placing the fwrite in timerFunction. I want to just return to timerFunction without waiting for the start method's execution to complete. What am I doing wrong here? Thanks in advance, Raghu
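
    A sketch of one way out, not taken from the thread: the start override above bounces the work back onto the main thread, which is exactly what makes the timer wait. For a non-concurrent operation it is usually enough to override main instead and let the NSOperationQueue run it on a worker thread:

        // Sketch: -main is invoked by the queue on a background thread,
        // so timerFunction returns immediately after addOperation:.
        - (void)main
        {
            size_t written = fwrite(_buffer, 1, _nBytes, _file);   // off the main thread
            NSLog(@"wrote %lu bytes", (unsigned long)written);
            free(_buffer);
        }

    If several of these operations can be in flight at once and share the same FILE*, setting the queue's maxConcurrentOperationCount to 1 keeps the writes ordered.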

    Read the article

  • How is data stored at the bit level according to "endianness"?

    - by bakra
    I read about endianness and understood squat... so I wrote this:

        main()
        {
            int k = 0xA5B9BF9F;
            BYTE *b = (BYTE*)&k;    // value at *b is 9F
            b++;                    // value at *b is BF
            b++;                    // value at *b is B9
            b++;                    // value at *b is A5
        }

    k was equal to "A5 B9 BF 9F" and the (byte) pointer "walk" output was "9F BF B9 A5", so I get it: bytes are stored backwards... OK. So now I wondered how it is stored at the BIT level. I mean, is "9F" (1001 1111) stored as "F9" (1111 1001)? So I wrote this:

        int _tmain(int argc, _TCHAR* argv[])
        {
            int k = 0xA5B9BF9F;
            void *ptr = &k;
            bool temp = TRUE;
            cout << "ready or not here I come \n";
            for(int i = 0; i < 32; i++)
            {
                temp = *( (bool*)ptr + i );
                if( temp )  cout << "1 ";
                if( !temp ) cout << "0 ";
                if( i==7 || i==15 || i==23 ) cout << " - ";
            }
        }

    I get some random output; even for numbers like 32 I don't get anything sensible. Why?
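
    For illustration only (not from the thread): casting the pointer to bool* does not address individual bits; each increment still advances one whole byte, so the loop above effectively prints "is this byte non-zero" 32 times and walks past the end of k after the first four iterations. Endianness governs byte order only; to see the bits of each stored byte, read bytes and shift:

        /* Sketch: print the bits of each byte of k in memory order. */
        #include <stdio.h>

        int main(void) {
            unsigned int k = 0xA5B9BF9F;
            const unsigned char *p = (const unsigned char *)&k;

            for (size_t i = 0; i < sizeof k; i++) {
                for (int bit = 7; bit >= 0; bit--)
                    putchar((p[i] >> bit) & 1 ? '1' : '0');
                putchar(' ');
            }
            putchar('\n');   /* on x86 (little-endian): 10011111 10111111 10111001 10100101 */
            return 0;
        }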

    Read the article

  • Encrypting an id in an URL in ASP.NET MVC

    - by Chuck Conway
    I'm attempting to encode an encrypted id in the URL, like this: http://www.calemadr.com/Membership/Welcome/9xCnCLIwzxzBuPEjqJFxC6XJdAZqQsIDqNrRUJoW6229IIeeL4eXl5n1cnYapg+N However, it either doesn't encode correctly and I get slashes '/' in the encryption, or I receive an error from IIS: "The request filtering module is configured to deny a request that contains a double escape sequence." I've tried different encodings; each fails: HttpUtility.HtmlEncode, HttpUtility.UrlEncode, HttpUtility.UrlPathEncode, HttpUtility.UrlEncodeUnicode.

    Update: The problem was that when I encrypted a Guid and converted it to a base64 string, it would contain unsafe URL characters. Of course, when I tried to navigate to a URL containing unsafe characters, IIS (7.5 / Windows 7) would blow up. URL-encoding the base64 encrypted string would raise an error in IIS ("The request filtering module is configured to deny a request that contains a double escape sequence."). I'm not sure how it detects double-encoded strings, but it did. After trying the above methods to encode the base64 encrypted string, I decided to remove the base64 encoding. However, this leaves the encrypted text as a byte[]. I tried UrlEncoding the byte[] (it's one of the overloads hanging off the HttpUtility.UrlEncode method). Again, while it was URL-encoded, IIS did not like it and served up a "page not found." After digging around the net I came across a hex encoding/decoding class. Applying the hex encoding to the encrypted bytes did the trick: the output is URL-safe. On the other side, I haven't had any problems decoding and decrypting the hex strings.
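
    As an alternative to hex encoding, offered only as a sketch (not what the post ended up using): a "URL-safe Base64" variant keeps the shorter representation by swapping out the characters IIS objects to:

        // Sketch: URL-safe Base64 -- replace '+' and '/' and drop the '=' padding,
        // then restore them before decoding on the other side.
        public static string ToUrlSafeBase64(byte[] data)
        {
            return Convert.ToBase64String(data)
                .Replace('+', '-')
                .Replace('/', '_')
                .TrimEnd('=');
        }

        public static byte[] FromUrlSafeBase64(string s)
        {
            string padded = s.Replace('-', '+').Replace('_', '/');
            switch (padded.Length % 4)
            {
                case 2: padded += "=="; break;
                case 3: padded += "=";  break;
            }
            return Convert.FromBase64String(padded);
        }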

    Read the article

  • GC output clarification

    - by elec
    I'm running a Java application with the following settings:

        -XX:+CMSParallelRemarkEnabled -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
        -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
        -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution

    I'm not sure how to interpret the related GC logs (below). In particular: does "Heap after GC invocations=31 (full 3)" mean there were 31 minor GCs and 3 full GCs? What triggers the several consecutive lines of "Total time for which the application threads were stopped" and "Application time"? Is it possible to get the timestamps associated with each of these lines?

    GC logs:

        Total time for which application threads were stopped: 0.0046910 seconds
        Application time: 0.7946670 seconds
        Total time for which application threads were stopped: 0.0002900 seconds
        Application time: 1.0153640 seconds
        Total time for which application threads were stopped: 0.0002780 seconds
        Application time: 1.0161890 seconds
        Total time for which application threads were stopped: 0.0002760 seconds
        Application time: 1.0145990 seconds
        Total time for which application threads were stopped: 0.0002950 seconds
        Application time: 0.9999800 seconds
        Total time for which application threads were stopped: 0.0002770 seconds
        Application time: 1.0151640 seconds
        Total time for which application threads were stopped: 0.0002730 seconds
        Application time: 0.9996590 seconds
        Total time for which application threads were stopped: 0.0002880 seconds
        Application time: 0.9624290 seconds
        {Heap before GC invocations=30 (full 3):
         par new generation   total 131008K, used 130944K [0x00000000eac00000, 0x00000000f2c00000, 0x00000000f2c00000)
          eden space 130944K, 100% used [0x00000000eac00000, 0x00000000f2be0000, 0x00000000f2be0000)
          from space 64K,   0% used [0x00000000f2bf0000, 0x00000000f2bf0000, 0x00000000f2c00000)
          to   space 64K,   0% used [0x00000000f2be0000, 0x00000000f2be0000, 0x00000000f2bf0000)
         concurrent mark-sweep generation total 131072K, used 48348K [0x00000000f2c00000, 0x00000000fac00000, 0x00000000fac00000)
         concurrent-mark-sweep perm gen total 30000K, used 19518K [0x00000000fac00000, 0x00000000fc94c000, 0x0000000100000000)
        2010-05-11T09:30:13.888+0100: 384.955: [GC 384.955: [ParNew
        Desired survivor size 32768 bytes, new threshold 0 (max 0)
        : 130944K->0K(131008K), 0.0052470 secs] 179292K->48549K(262080K), 0.0053030 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
        Heap after GC invocations=31 (full 3):
         par new generation   total 131008K, used 0K [0x00000000eac00000, 0x00000000f2c00000, 0x00000000f2c00000)
          eden space 130944K,   0% used [0x00000000eac00000, 0x00000000eac00000, 0x00000000f2be0000)
          from space 64K,   0% used [0x00000000f2be0000, 0x00000000f2be0000, 0x00000000f2bf0000)
          to   space 64K,   0% used [0x00000000f2bf0000, 0x00000000f2bf0000, 0x00000000f2c00000)
         concurrent mark-sweep generation total 131072K, used 48549K [0x00000000f2c00000, 0x00000000fac00000, 0x00000000fac00000)
         concurrent-mark-sweep perm gen total 30000K, used 19518K [0x00000000fac00000, 0x00000000fc94c000, 0x0000000100000000)
        }
        Total time for which application threads were stopped: 0.0056410 seconds
        Application time: 0.0475220 seconds
        Total time for which application threads were stopped: 0.0001800 seconds
        Application time: 1.0174830 seconds
        Total time for which application threads were stopped: 0.0003820 seconds
        Application time: 1.0126350 seconds
        Total time for which application threads were stopped: 0.0002750 seconds
        Application time: 1.0155910 seconds
        Total time for which application threads were stopped: 0.0002680 seconds
        Application time: 1.0155580 seconds
        Total time for which application threads were stopped: 0.0002880 seconds
        Application time: 1.0155480 seconds
        Total time for which application threads were stopped: 0.0002970 seconds
        Application time: 0.9896810 seconds

    Read the article

  • XmlHttpRequest in a bookmarklet returns empty responseText on GET?

    - by David Eyk
    I'm trying to build a JavaScript bookmarklet for a special URL-shortening service we've built at http://esv.to for shortening scripture references (i.e. "Matthew 5" becomes "http://esv.to/Mt5"). The bookmarklet is supposed to do a GET request to http://api.esv.to/Matthew+5, which returns a text/plain response of http://esv.to/Mt5. The code for the bookmarklet itself looks like this (expanded for readability):

        var body = document.getElementsByTagName('body')[0],
            script = document.createElement('script');
        script.type = 'text/javascript';
        script.src = 'http://esv.to/media/js/bookmarklet.js';
        body.appendChild(script);
        void(0);

    The code from http://esv.to/media/js/bookmarklet.js looks like this:

        (function() {
            function shorten(ref, callback) {
                var url = "http://esv.to/api/" + escape(ref);
                var req = new XMLHttpRequest();
                req.onreadystatechange = function shortenIt() {
                    if ( this.readyState == 4 && this.status == 200 ) {
                        callback(req.responseText);
                    };
                };
                req.open( "GET", url );
                req.send();
            };

            function doBookmarklet() {
                var ref = prompt("Enter a scripture reference or keyword search to link to:", "");
                shorten(ref, function (short) {
                    prompt("Here is your shortened ESV URL:", short);
                });
            };

            doBookmarklet();
        })();

    When called from http://esv.to itself, the bookmarklet works correctly. But when used on another page, it does not. The strange thing is that when I watch the request from Firebug, the response is 200 OK and the browser downloads 17 bytes (the length of the returned string), but the response body is empty! No error is thrown, just an empty responseText on the XMLHttpRequest object. Now, according to http://stackoverflow.com/questions/664689/ajax-call-from-bookmarklet, GET shouldn't violate the same origin policy. Is this a bug? Is there a workaround?
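
    A sketch of one possible workaround, not an answer from the thread: the same-origin policy does not stop the GET from being sent, but it does stop a page on another origin from reading the response, which matches the "17 bytes downloaded, responseText empty" symptom. If the API could wrap its answer in a callback (the ?callback parameter below is hypothetical, not an existing esv.to feature), a script-tag approach avoids XMLHttpRequest entirely:

        // Sketch: JSONP-style script injection instead of XHR.
        // Assumes the API can be taught to respond with:  shortened("http://esv.to/Mt5");
        function shorten(ref) {
            var script = document.createElement('script');
            script.src = 'http://esv.to/api/' + encodeURIComponent(ref) + '?callback=shortened';
            document.getElementsByTagName('body')[0].appendChild(script);
        }

        function shortened(shortUrl) {            // called by the injected script
            prompt("Here is your shortened ESV URL:", shortUrl);
        }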

    Read the article

  • Visual Studio 2008 freeze after save

    - by Klay
    I recently added about a dozen classes from another solution into my current solution in Visual Studio. After adding these classes, Visual Studio started freezing for about 10 seconds whenever I Save. The cursor disappears and mouse clicks and keys do nothing. Some interesting points: Even after I removed the classes, the freezing behavior is still there. Freezing occurs whether I've made changes to the code or not. This behavior ONLY seems to affect this particular version of this solution. No other solutions exhibit this behavior. Older versions of this solution are not affected. In Sysinternals Process Explorer, whenever I save in Visual Studio, the I/O bytes graph jumps from 0 to 2MB for about 5 seconds, then drops to about 1 MB for a split second, then jumps back to 2MB for another 5 seconds. Processor use goes up to about 3-5% during this time. Here are the details of my setup: C# Silverlight project (maybe 20 classes), .NET version 3.5 SP1, Visual Studio 2008 v9.0.30729 SP1. EDIT: I edited this question extensively to reflect the more detailed information. I thought this might be preferable to starting a new question.

    Read the article
