Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 126/173 | < Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirement is fast inserts and fast lookups for all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, but the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many of the NoSQL data stores available, but can't find one that meets my needs as a robust, production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose and why?

    Read the article

  • SDCC and malloc() - allocating much less memory than is available

    - by Duncan Bayne
    When I compile this code with SDCC 3.1.0 and run it on an Amstrad CPC 464 (under emulation, with WinCPC 0.9.26 running on Wine):

        void _test_malloc()
        {
            long idx = 0;
            while (1) {
                if (malloc(5)) {
                    printf("%ld\r\n", ++idx);
                } else {
                    printf("done");
                    break;
                }
            }
        }

    ... it consistently taps out at 92 malloc()s. I make that 460 bytes, which leads me to a couple of questions: What is malloc() doing on this system? I was sort of hoping for an order of magnitude more storage, even on a 64kB system. The behaviour is consistent on 64kB systems and 128kB systems; do I have to perform some sort of magic to access the additional memory, like manual bank switching?

    Read the article

  • C++ char* returned by SWIG causes a problem in Python 3.0

    - by gpliu3
    Our C++ lib works fine with Python 2.4 using SWIG, returning a C++ char* back to a Python str. But this solution hits a problem in Python 3.0; the error is:

        Exception=(, UnicodeDecodeError('utf8', b"\xb6\x9d\xa.....", 0, 1, 'unexpected code byte')

    Our definition looks like this (it works fine in Python 2.4):

        void cGetPubModulus( void* pSslRsa, char* cMod, int* nLen );
        %include "cstring.i"
        %cstring_output_withsize( char* cMod, int* nLen );

    I suspect SWIG is doing a bytes-to-str conversion automatically. In Python 2.4 it can be implicit, but in Python 3.0 it's no longer allowed. Anyone got a good idea? Thanks.

    Read the article

  • Core Data data type for just the date - not including time

    - by Jason
    I am new to Core Data, and it seems like a great way to manage the data store. However, I am also very memory-conscious, due to the fact that the iPhone doesn't have that much of it. I was a little surprised to see that the data types are so limited - e.g. there is a Date type which also includes the time, but no Date type for just the date! All the time information takes up precious bytes of memory. If I just wanted an attribute with the date (e.g. 2/15/2010 rather than 2/15/2010 02:34:48), how could I do this? Is it possible?

    Read the article

  • Python: Unpack arbitrary-length bits for database storage

    - by sberry2A
    I have a binary data format consisting of 18,000+ packed int64s, ints, shorts, bytes and chars. The data is packed to minimize its size, so the fields don't always fall on byte boundaries. For example, a number whose min and max values are 31 and 32 respectively might be stored in a single bit, where the actual value is bitvalue + min, so 0 is 31 and 1 is 32. I am looking for the most efficient way to unpack all of these for subsequent processing and database storage. Right now I am able to read any value by using either struct.unpack or BitBuffer: I use struct.unpack for any data that starts on a bit where (bit-offset % 8 == 0 and data-length % 8 == 0), and I use BitBuffer for anything else. I know the offset and size of every packed piece of data, so what is going to be the fastest way to completely unpack them? Many thanks.
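
    For illustration, here is a minimal, language-neutral sketch of the bit arithmetic involved (written here in Java; the offsets, widths, minimum values and the MSB-first bit order are assumptions, not taken from the format above):

        public class BitUnpacker {
            // Extract `width` bits starting at absolute bit offset `bitOffset`
            // (most-significant-bit-first within each byte) and add `min`,
            // mirroring a "stored value = actual value - min" packing scheme.
            static long unpack(byte[] data, int bitOffset, int width, long min) {
                long value = 0;
                for (int i = 0; i < width; i++) {
                    int bit = bitOffset + i;
                    int b = data[bit / 8] & 0xFF;      // current byte, as unsigned
                    int shift = 7 - (bit % 8);         // MSB-first bit position
                    value = (value << 1) | ((b >> shift) & 1);
                }
                return value + min;
            }

            public static void main(String[] args) {
                // One byte 0b10000000: a 1-bit field at offset 0 with min 31 -> 32.
                byte[] packed = { (byte) 0x80 };
                System.out.println(unpack(packed, 0, 1, 31)); // prints 32
            }
        }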

    Read the article

  • Java - Need help with binary/code string manipulation

    - by ShrimpCrackers
    For a project, I have to convert a binary string into (an array of) bytes and write it out to a file in binary. Say that I have a sentence converted into a code string using a Huffman encoding. For example, if the sentence was "hello" and h = 00, e = 01, l = 10, o = 11, then the string representation would be "0001101011". How would I convert that into bytes? If that question doesn't make sense, it's because I know little about bits, bytes, bitwise shifting and all that has to do with manipulating 1's and 0's.
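
    One common approach is to pack the '0'/'1' characters into bytes eight at a time, most significant bit first. A minimal sketch in Java (the zero-padding of the final byte is an assumption; a real encoder would also need to record the bit count somewhere so the padding can be ignored when decoding):

        import java.io.FileOutputStream;
        import java.io.IOException;

        public class BitStringWriter {
            // Pack a string of '0'/'1' characters into bytes, MSB first.
            // The final byte is zero-padded on the right when the length
            // is not a multiple of 8.
            static byte[] toBytes(String bits) {
                byte[] out = new byte[(bits.length() + 7) / 8];
                for (int i = 0; i < bits.length(); i++) {
                    if (bits.charAt(i) == '1') {
                        out[i / 8] |= (byte) (1 << (7 - (i % 8)));
                    }
                }
                return out;
            }

            public static void main(String[] args) throws IOException {
                byte[] packed = toBytes("0001101011"); // "hello" with the codes above
                try (FileOutputStream fos = new FileOutputStream("encoded.bin")) {
                    fos.write(packed);
                }
                // packed[0] == 0x1A (00011010), packed[1] == 0xC0 (11000000)
            }
        }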

    Read the article

  • C++'s std::string pools, debug builds? std::string and valgrind problems

    - by Den.Jekk
    Hello, I have a problem with many valgrind warnings about possible memory leaks in std::string, like this one:

        120 bytes in 4 blocks are possibly lost in loss record 4,192 of 4,687
           at 0x4A06819: operator new(unsigned long) (vg_replace_malloc.c:230)
           by 0x383B89B8B0: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.8)
           by 0x383B89C3B4: (within /usr/lib64/libstdc++.so.6.0.8)
           by 0x383B89C4A9: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, unsigned long, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.8)

    I'm wondering: does std::string (GCC 4.1.2) use any memory pools? If so, is there any way to disable the pools (in the form of a debug build etc.)? Regards, Den

    Read the article

  • Treat a void function as a value

    - by Brendan Long
    I'm writing some terrible, terrible code, and I need a way to put a free() in the middle of a statement. The actual code is:

        int main(){
            return printf("%s", isPalindrome(fgets(malloc(1000), 1000, stdin))?"Yes!\n":"No!\n") >= 0; // leak 1000 bytes of memory
        }

    I was using alloca(), but I can't be sure that will actually work on my target computer. My problem is that free returns void, so my code produces this error message:

        error: void value not ignored as it ought to be

    The obvious idea I had was:

        int myfree(char *p){ free(p); return 0; }

    which is nice in that it makes the code even more unreadable, but I'd prefer not to add another function. I also briefly tried treating free() as a function pointer, but I don't know if that would work, and I don't know enough about C to do it properly. Note: I know this is a terrible idea. Don't try this at home, kids.

    Read the article

  • App Engine - Objectify - Storing a byte[]

    - by Spines
    I'm using the Objectify library for interfacing with the App Engine datastore. In my User class, I store the hashed password as a byte[]. When I put it in the datastore, it is correctly stored as a blob. When I try to load the User object back out, I get this error:

        java.lang.IllegalStateException: Cannot load non-collection value '<Blob: 40 bytes>' into private byte[]

    How do I fix this? Do I have to change my User class so that the hashed password is of type ShortBlob?
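
    If changing the field type turns out to be the route taken, here is a hedged sketch of what that could look like (ShortBlob is the low-level datastore type for small byte values; the entity annotations and the rest of the User class are omitted, and whether a given Objectify version persists ShortBlob fields natively is something to verify):

        import com.google.appengine.api.datastore.ShortBlob;

        // Sketch only: store the hash as a ShortBlob and convert at the
        // accessor boundary, so the rest of the code keeps working with byte[].
        public class User {
            private ShortBlob passwordHash;

            public byte[] getPasswordHash() {
                return passwordHash == null ? null : passwordHash.getBytes();
            }

            public void setPasswordHash(byte[] hash) {
                this.passwordHash = hash == null ? null : new ShortBlob(hash);
            }
        }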

    Read the article

  • Portable way to determine the platform's line separator

    - by Adrian McCarthy
    Different platforms use different line separator schemes (LF, CR-LF, CR, NEL, Unicode LINE SEPARATOR, etc.). C++ (and C) make a lot of this transparent to most programs by converting '\n' to and from the target platform's native newline encoding. But if your program needs to determine the actual byte sequence used, how could you do it portably? The best method I've come up with is: (1) write a temporary file in text mode with just '\n' in it, letting the run-time do the translation; (2) read the temporary file back in binary mode to see the actual bytes. That feels kludgy. Is there a way to do it without temporary files? I tried stringstreams instead, but the run-time doesn't actually translate '\n' in that context (which makes sense). Does the run-time expose this information in some other way?

    Read the article

  • Problem when switching between portrait and landscape in Android views

    - by vnshetty
    In my application I display a web page in a WebView. It works fine, but if I flip from landscape to portrait or vice versa, it exits and comes back to the main page. What is the problem? Logcat:

        03-10 13:35:47.123: INFO/WindowManager(69): Setting rotation to 1, animFlags=1
        03-10 13:35:47.242: INFO/ActivityManager(69): Config changed: { scale=1.0 imsi=310/260 loc=en_US touch=3 keys=2/1/1 nav=3/1 orien=2 layout=17 uiMode=17 seq=70}
        03-10 13:35:47.363: INFO/UsageStats(69): Unexpected resume of com.mireader while already resumed in com.mireader
        03-10 13:35:50.413: DEBUG/dalvikvm(69): GC_EXPLICIT freed 395 objects / 20424 bytes in 195ms
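
    The logcat shows a configuration change, and by default Android destroys and recreates the Activity on rotation, so any state that is not saved is lost. A minimal sketch of one common mitigation, saving and restoring the WebView state across the restart (the Activity class name and URL here are hypothetical, and this alone may not cover every way the view gets rebuilt):

        import android.app.Activity;
        import android.os.Bundle;
        import android.webkit.WebView;

        public class ReaderActivity extends Activity {
            private WebView webView;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                webView = new WebView(this);
                setContentView(webView);
                if (savedInstanceState == null) {
                    webView.loadUrl("http://example.com/"); // hypothetical URL
                }
            }

            @Override
            protected void onSaveInstanceState(Bundle outState) {
                super.onSaveInstanceState(outState);
                webView.saveState(outState);   // keep the page and back/forward list
            }

            @Override
            protected void onRestoreInstanceState(Bundle savedInstanceState) {
                super.onRestoreInstanceState(savedInstanceState);
                webView.restoreState(savedInstanceState);
            }
        }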

    Read the article

  • HTTP authentication in Xcode

    - by user313100
    I am trying to make Twitter work in my app, and everything works fine except that the code does not seem to recognize an error from Twitter. If the username/password are not valid, I get an error message through this function:

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            NSString* strData = [[[NSString alloc] initWithBytes:[data bytes] length:[data length] encoding:NSASCIIStringEncoding] autorelease];
            NSLog(@"Received data: %@", strData);
            return;
        }

    It prints: "Received data: Could not authenticate you." However, the app continues to the post-a-tweet view I have and ignores the error. Obviously, I do not have something set up right to detect such an error from Twitter, so my question is: how do I get Xcode to recognize an error like this? This uses basic HTTP auth, by the way, and please don't mention anything about OAuth... just trying to get this to work for now.

    Read the article

  • getResourceAsStream returns HttpInputStream not of the entire file

    - by khue
    Hi, I have a web application with an applet which copies a file packaged with the applet to the client machine. When I deploy it to the web server and use:

        InputStream in = getClass().getResourceAsStream("filename");

    in.available() always returns a size of 8192 bytes for every file I tried, which means the file is corrupted when it is copied to the client computer. The InputStream is of type HttpInputStream (sun.net.protocol.http.HttpUrlConnection$httpInputStream). But when I test the applet in the applet viewer, the files are copied fine, and the InputStream returned is of type BufferedInputStream, which reports the file's byte size. I guess that when getResourceAsStream reads from the file system a BufferedInputStream is used, and over HTTP an HttpInputStream is used. How can I copy the file completely? Is there a size limit for HttpInputStream? Thanks a lot.
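
    Note that available() only reports how many bytes can be read without blocking (often just one buffered chunk, e.g. 8192 bytes), not the total size of the resource, so a copy should read until end of stream instead. A minimal sketch (the resource name and output path are placeholders):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class ResourceCopier {
            // Copy a classpath resource to a local file by reading until EOF,
            // rather than trusting InputStream.available().
            public static void copyResource(String resource, String targetPath) throws IOException {
                InputStream in = ResourceCopier.class.getResourceAsStream(resource);
                if (in == null) {
                    throw new IOException("Resource not found: " + resource);
                }
                OutputStream out = new FileOutputStream(targetPath);
                try {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) { // -1 signals end of stream
                        out.write(buffer, 0, read);
                    }
                } finally {
                    in.close();
                    out.close();
                }
            }

            public static void main(String[] args) throws IOException {
                copyResource("/data/payload.bin", "payload.bin"); // hypothetical names
            }
        }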

    Read the article

  • How do I include extremely long literals in C++ source?

    - by BillyONeal
    Hello everyone :) I've got a bit of a problem. Essentially, I need to store a large list of whitelisted entries inside my program, and I'd like to include such a list directly -- I don't want to have to distribute other libraries and such, and I don't want to embed the strings into a Win32 resource, for a bunch of reasons I don't want to go into right now. I simply included my big whitelist in my .cpp file, and was presented with this error:

        1>ServicesWhitelist.cpp(2807): fatal error C1091: compiler limit: string exceeds 65535 bytes in length

    The string itself is about twice the limit allowed by VC++. What's the best way to include such a large literal in a program?

    Read the article

  • Google App Engine - About how much quota does a single datastore put use?

    - by Spines
    The latency for a datastore put is about 150ms (http://code.google.com/status/appengine/detail/datastore/2010/03/11#ae-trust-detail-datastore-put-latency). About how much CPUTime is used by a single datastore put with a data size of 100 bytes, into an entity that has only 2 columns and no indexes? I plan to do some testing with this later today to figure it out, but if anyone already knows, that would help me out :). Also, does anyone know roughly how much extra CPUTime overhead doing this datastore put through the task queue would add? Note: this is kind of a follow-up to this question: http://stackoverflow.com/questions/2421075/google-app-engine-how-reliable-are-the-logs.

    Read the article

  • How to show readable output from my code: os.urandom(64)

    - by zjm1126
    My code is:

        print os.urandom(64)

    which outputs:

        > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py"
        \xd0\xc8=<\xdbD' \xdf\xf0\xb3>\xfc\xf2\x99\x93 =S\xb2\xcd'\xdbD\x8d\xd0\\xbc{&YkD[\xdd\x8b\xbd\x82\x9e\xad\xd5\x90\x90\xdcD9\xbf9.\xeb\x9b>\xef#n\x84

    which isn't readable, so I tried this:

        print os.urandom(64).decode("utf-8")

    but then I get:

        > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py"
        Traceback (most recent call last):
          File "D:\zjm_code\a.py", line 17, in <module>
            print os.urandom(64).decode("utf-8")
          File "D:\Python25\lib\encodings\utf_8.py", line 16, in decode
            return codecs.utf_8_decode(input, errors, True)
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 0-3: invalid data

    What should I do to get human-readable output?

    Read the article

  • Column.DbType affecting runtime behavior

    - by leppie
    Hi, according to the MSDN docs, the DbType property/attribute of a Column type/element is only used for database creation. Yet, today, when trying to submit data to an IMAGE column on a SQLCE database (not sure if this is CE-only), I got an exception of 'Data truncated to 8000 bytes'. This was due to the DbType still being defined as VARBINARY(MAX), which SQLCE does not support. Changing the DbType to IMAGE fixes the issue. So what other surprises do Linq2SQL attributes hold in store? Is this a bug or intended? Should I report it to MS? UPDATE: After getting the answer from Guffa, I tested it, but it seems that for NVARCHAR(10), adding an 11-character string causes a SQL exception, not a Linq2SQL one:

        The data was truncated while converting from one data type to another. [ Name of function(if known) = ]
        A first chance exception of type 'System.Data.SqlServerCe.SqlCeException' occurred in System.Data.SqlServerCe.dll

    Read the article

  • printf not passing correct Hex Address to stack

    - by kriss
    I have a hiccup using printf. I am on Ubuntu 10.04. Basically I have a C program that asks for some input and then prints it back; that part works fine. I tried to insert a hex address onto the stack using the following command:

        printf "hello world!\x12\x23\x34" | ./input1

    But I don't know what the problem is. If I give only a string longer than 12 bytes, it overwrites the return address, but if I give a hex address (through printf), it doesn't overwrite the return address; instead it stores something else. Could anyone help? I can't proceed further because of this. Thanks in advance.

    Read the article

  • Can you solve my odd Sharepoint CSS cache / customising problem?

    - by Aidan
    I have a weird situation with my SharePoint CSS. It is deployed as part of a .wsp solution, and up until now everything has been fine. The farm it deploys to has a couple of web front ends and a single apps server and SQL box. The symptom is that if I deploy the solution and then use a web browser to view the page, it has no styles, and if I access the .css directly I see only the first 100 or so bytes of the .css. However, if I go into SharePoint Designer and look at the file, it looks fine, and if I check it out and publish it (customising the file but not actually changing anything in it), then the website works fine and the CSS downloads completely. There is some fairly complex caching on the servers (disk-based and object caches). As far as I can tell I have cleared these (and an iisreset should clear them anyway... shouldn't it?). I have also used this tool to clear the BLOB cache from the whole farm: http://blobcachefarmflush.codeplex.com/

    Read the article

  • aacplus frame alignment problems

    - by Daniel Mošmondor
    I have an application that rips aac+ audio streams, cutting them at a regular interval (e.g. every 10 minutes). Sometimes the files play back OK, but sometimes Windows Media Player just closes when trying to build the DirectShow graph. I am using the Orban aacPlus plugin, and it works under DirectShow. When I play such a file with Winamp or VLC, which have their own aacPlus decoding engines, it works fine. However, I need it to work under DirectShow. Anyway, a problematic file is here: http://www.videophill.com/files/00272-20100418100002.aac I know that there is a frame alignment error, and I confirmed my theory by filling the first 256 bytes with 0x00; when I tried to play it again, it worked. Is there any info on aacPlus frames available on the web, so I can try to find the beginning of a frame manually and cut the rest off?
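
    If the stream uses ADTS framing (common for streamed HE-AAC, though that is an assumption to verify for this source), every frame starts with a 12-bit syncword of all ones (0xFFF). A rough sketch of locating the first candidate frame boundary so the leading partial frame can be dropped:

        public class AdtsSync {
            // Return the offset of the first byte pair that looks like an ADTS
            // syncword (0xFFF in the top 12 bits), or -1 if none is found.
            // A real implementation should also check that the frame-length field
            // points at another syncword, to avoid false positives.
            static int findSyncword(byte[] data) {
                for (int i = 0; i + 1 < data.length; i++) {
                    int b0 = data[i] & 0xFF;
                    int b1 = data[i + 1] & 0xFF;
                    if (b0 == 0xFF && (b1 & 0xF0) == 0xF0) {
                        return i;
                    }
                }
                return -1;
            }

            public static void main(String[] args) {
                byte[] sample = { 0x12, 0x34, (byte) 0xFF, (byte) 0xF1, 0x50 };
                System.out.println(findSyncword(sample)); // prints 2
            }
        }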

    Read the article

  • A 4-byte Unsigned Int for Sql Server 2008?

    - by Jeff Meatball Yang
    I understand there are multiple questions about this on SO, but I have yet to find a definitive answer of "yes, here's how..." So here it is again: What are the possible ways to store an unsigned integer value (32-bit value or 32-bit bitmap) into a 4-byte field in SQL Server? Here are ideas I have seen:

    1) Use a -1*2^31 offset for all values. Disadvantages: need to perform math on the values before reading/writing/aggregating.
    2) Use 4 tinyint fields. Disadvantages: need to concatenate values to perform any operations.
    3) Use binary(4). Disadvantages: actually uses 4 + 2 bytes of space.
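
    For illustration of idea 1, here is a small sketch (in Java, but the same arithmetic applies in T-SQL or C#) of the round trip between an unsigned 32-bit value held in a wider type and the signed 32-bit value that fits the 4-byte column:

        public class UnsignedIntMapping {
            // Idea 1: shift the unsigned range [0, 2^32) into the signed range
            // [-2^31, 2^31) by subtracting 2^31. Ordering is preserved, so range
            // queries and MIN/MAX still behave on the stored column.
            static int toStored(long unsigned) {
                return (int) (unsigned - 2147483648L);
            }

            static long fromStored(int stored) {
                return (long) stored + 2147483648L;
            }

            public static void main(String[] args) {
                long original = 4000000000L;            // does not fit a signed int
                int stored = toStored(original);        // fits in 4 bytes
                System.out.println(stored);             // prints 1852516352
                System.out.println(fromStored(stored)); // prints 4000000000
            }
        }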

    Read the article

  • What is the best way to maintain an entity's original properties when they are not included in MVC binding from edit page?

    - by kingdango
    I have an ASP.NET MVC view for editing a model object. The edit page includes most of the properties of my object, but not all of them -- specifically, it does not include the CreatedOn and CreatedBy fields, since those are set upon creation (in my service layer) and shouldn't change in the future. Unless I include these properties as hidden fields, they will not be picked up during binding and are unavailable when I save the modified object in my EF 4 DB context. In actuality, upon save the original values would be overwritten by nulls (or some type-specific default). I don't want to drop these in as hidden fields because it is a waste of bytes and I don't want those values exposed to potential manipulation. Is there a "first class" way to handle this situation? Is it possible to specify that an EF model property is to be ignored unless explicitly set?

    Read the article

  • why can't I call .update on a MessageDigest instance

    - by Arthur Ulfeldt
    When I run this from the REPL:

        (def md (MessageDigest/getInstance "SHA-1"))
        (. md update (into-array [(byte 1) (byte 2) (byte 3)]))

    I get:

        No matching method found: update for class java.security.MessageDigest$Delegate

    The Java 6 docs for MessageDigest show: update(byte[] input) - "Updates the digest using the specified array of bytes." And the class of (class (into-array [(byte 1) (byte 2) (byte 3)])) is [Ljava.lang.Byte;. Am I missing something in the definition of update? Am I not creating the class I think I am? Not passing it the type I think I am?
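
    The mismatch is consistent with the argument being a boxed Byte[] ([Ljava.lang.Byte;) rather than a primitive byte[] ([B), which is the only array type the update overloads accept; in Clojure, (into-array Byte/TYPE ...) produces the primitive version. A minimal Java illustration of the signature being matched against:

        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class DigestExample {
            public static void main(String[] args) throws NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-1");

                byte[] primitive = { 1, 2, 3 };   // class is [B, which update() accepts
                md.update(primitive);             // compiles and runs

                Byte[] boxed = { 1, 2, 3 };       // class is [Ljava.lang.Byte;
                // md.update(boxed);              // would not compile: no update(Byte[]) overload

                System.out.println(md.digest().length); // 20 bytes for SHA-1
            }
        }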

    Read the article

  • java memory usage

    - by xdevel2000
    I know I have already posted a similar question about array memory usage, but now I want to pose the question more specifically. After I read this article: http://www.javamex.com/tutorials/memory/object_memory_usage.shtml I still didn't understand some things. Is the size of a data type always the same, even on different platforms (Linux / Windows, 32 / 64 bit), so that an int will always be 32 bits? When I compute the memory usage, must I also count the reference value itself? If I have a reference to an object of a class that has an int field, will its memory be 12 (object header) + 4 (reference) + 4 (the int field) + 4 (padding) = 24 bytes?
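
    As a rough illustration, here is the kind of back-of-envelope arithmetic involved (the header and reference sizes below assume a 32-bit HotSpot JVM and are implementation details, unlike primitive field sizes, which the JVM specification fixes):

        // Sketch only: figures assume a 32-bit HotSpot JVM.
        public class Holder {
            int value;  // 4 bytes on any platform: an int is always 32 bits

            // Approximate footprint of one Holder instance on 32-bit HotSpot:
            //   8 bytes object header
            // + 4 bytes int field
            // = 12 bytes, rounded up to a multiple of 8 -> 16 bytes.
            //
            // The 4-byte reference that points at the instance is charged to
            // whatever holds it (a field of another object, an array slot, or
            // a local variable), not to the instance itself.
        }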

    Read the article

  • Write an image file larger than 4096 bytes

    - by ntan
    Hi,

    EDIT: I am using ODBC and found that it cannot read more than 4096 bytes for a field. Any suggestions?

    I am reading an image from the database:

        $image = $row["image-contents"];

    Now I try to write the file to disk:

        $image_name = "test.jpg";
        $file = fopen( "images/".$image_name, "w" );
        fwrite( $file, $image );
        fclose( $file );

    The problem is that the file created is only 4096 bytes, and the image is corrupt because $image is larger than 4096 bytes. I know that fwrite writes in blocks, but I don't know how to deal with this. Help please!

    Read the article
