Search Results

Search found 4304 results on 173 pages for 'bytes'.

  • C++ char* returned by SWIG breaks in Python 3.0

    - by gpliu3
    Our C++ lib works fine with Python 2.4 using SWIG, returning a C++ char* back to a Python str. But this solution hits a problem in Python 3.0; the error is: Exception=(, UnicodeDecodeError('utf8', b"\xb6\x9d\xa.....", 0, 1, 'unexpected code byte') Our definition (which works fine in Python 2.4) is: void cGetPubModulus( void* pSslRsa, char* cMod, int* nLen ); %include "cstring.i" %cstring_output_withsize( char* cMod, int* nLen ); I suspect SWIG is doing a bytes-to-str conversion automatically. In Python 2.4 it can be implicit, but in Python 3.0 it's no longer allowed. Anyone got a good idea? Thanks

  • How to sanely read and dump structs to disk when some fields are pointers?

    - by bp
    Hello, I'm writing a FUSE plugin in C. I'm keeping track of data structures in the filesystem through structs like: typedef struct { block_number_t inode; filename_t filename; //char[SOME_SIZE] some_other_field_t other_field; } fs_directory_table_item_t; Obviously, I have to read (write) these structs from (to) disk at some point. I could treat the struct as a sequence of bytes and do something like this: read(disk_fd, directory_table_item, sizeof(fs_directory_table_item_t)); ...except that cannot possibly work as filename is actually a pointer to the char array. I'd really like to avoid having to write code like: read(disk_df, *directory_table_item.inode, sizeof(block_number_t)); read(disk_df, directory_table_item.filename, sizeof(filename_t)); read(disk_df, *directory_table_item.other_field, sizeof(some_other_field_t)); ...for each struct in the code, because I'd have to replicate code and changes in no less than three different places (definition, reading, writing). Any DRYer but still maintainable ideas?
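    One common way to keep the field list in a single place is an X-macro: list the fields once, then expand that list for the struct definition, the reader, and the writer. A minimal sketch, assuming value-type fields (a pointer field would instead need its pointee read into a caller-owned buffer); the typedefs are placeholders, the macro and helper names are mine, and error handling is omitted:

        #include <unistd.h>

        typedef unsigned int block_number_t;      /* placeholder typedefs, */
        typedef char filename_t[256];             /* for illustration only */
        typedef unsigned int some_other_field_t;

        /* The field list lives in exactly one place. */
        #define ITEM_FIELDS \
            X(block_number_t,     inode)    \
            X(filename_t,         filename) \
            X(some_other_field_t, other_field)

        typedef struct {
        #define X(type, name) type name;
            ITEM_FIELDS
        #undef X
        } fs_directory_table_item_t;

        /* Read and write field by field, so pointers and struct padding
           never touch the disk. */
        static void read_item(int fd, fs_directory_table_item_t *it) {
        #define X(type, name) read(fd, &it->name, sizeof(it->name));
            ITEM_FIELDS
        #undef X
        }

        static void write_item(int fd, const fs_directory_table_item_t *it) {
        #define X(type, name) write(fd, &it->name, sizeof(it->name));
            ITEM_FIELDS
        #undef X
        }

    Adding a field then means touching only the ITEM_FIELDS list.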

  • Storing CLLocationCoordinate2D in NSMutableArray

    - by Amarsh
    After some searching, I got the following solution, ref: http://stackoverflow.com/questions/1392909/nsmutablearray-addobject-with-mallocd-struct CLLocationCoordinate2D* new_coordinate = malloc(sizeof(CLLocationCoordinate2D)); new_coordinate->latitude = latitude; new_coordinate->longitude = longitude; [points addObject:[NSData dataWithBytes:(void *)new_coordinate length:sizeof(CLLocationCoordinate2D)]]; free(new_coordinate); And I access it as: CLLocationCoordinate2D* c = (CLLocationCoordinate2D*) [[points objectAtIndex:0] bytes]; However, someone claims there is a memory leak here. Can anyone tell me where the leak is and how to fix it? Further, is there a better way of storing a list of CLLocationCoordinate2D in an NSMutableArray? Please give sample code, since I am an Objective-C newbie.

  • Is there a concise way to create an InputSupplier for an InputStream in Google Guava?

    - by Fabian Steeg
    There are a few factory methods in Google Guava to create InputSuppliers, e.g. from a byte[]: ByteStreams.newInputStreamSupplier(bytes); Or from a File: Files.newInputStreamSupplier(file); Is there a similar way to create an InputSupplier for a given InputStream? That is, a way that's more concise than an anonymous class: new InputSupplier<InputStream>() { public InputStream getInput() throws IOException { return inputStream; } }; Background: I'd like to use InputStreams with e.g. Files.copy(...) or ByteStreams.equal(...).

  • Portable way to determine the platform's line separator

    - by Adrian McCarthy
    Different platforms use different line separator schemes (LF, CR-LF, CR, NEL, Unicode LINE SEPARATOR, etc.). C++ (and C) make a lot of this transparent to most programs, by converting '\n' to and from the target platform's native new line encoding. But if your program needs to determine the actual byte sequence used, how could you do it portably? The best method I've come up with is: Write a temporary file in text mode with just '\n' in it, letting the run-time do the translation. Read back the temporary file in binary mode to see the actual bytes. That feels kludgy. Is there a way to do it without temporary files? I tried stringstreams instead, but the run-time doesn't actually translate '\n' in that context (which makes sense). Does the run-time expose this information in some other way?
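    A sketch of exactly that temporary-file trick in C; the helper name and scratch-file name are assumptions (tmpfile() itself won't do here, since it opens its file in binary mode):

        #include <stdio.h>

        /* Writes the platform's line-separator bytes into buf and
           returns how many there are, or -1 on error. */
        static int native_line_separator(unsigned char *buf, int bufsize) {
            const char *name = "newline_probe.tmp"; /* assumed scratch name */
            int n = 0, c;
            FILE *f = fopen(name, "w");   /* text mode: '\n' is translated */
            if (!f) return -1;
            fputc('\n', f);
            fclose(f);
            f = fopen(name, "rb");        /* binary mode: raw bytes back */
            if (!f) return -1;
            while (n < bufsize && (c = fgetc(f)) != EOF)
                buf[n++] = (unsigned char)c;
            fclose(f);
            remove(name);
            return n;
        }

    On Windows this yields 0x0D 0x0A; on Unix-likes, a single 0x0A.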

  • Java - Need help with binary/code string manipulation

    - by ShrimpCrackers
    For a project, I have to convert a binary string into (an array of) bytes and write it out to a file in binary. Say I have a sentence converted into a code string using a Huffman encoding. For example, if the sentence was "hello" and h = 00, e = 01, l = 10, o = 11, then the string representation would be 0001101011. How would I convert that into bytes? If that question doesn't make sense, it's because I know little about bits, bytes, bitwise shifting, and everything else to do with manipulating 1s and 0s.
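    The underlying technique is to shift each '0'/'1' into an accumulator and flush a byte whenever 8 bits have accumulated, zero-padding the last byte. A minimal sketch of that idea in C with a hypothetical pack_bits helper (Java's <<, |, and (byte) cast behave the same way); note the decoder must also be told the total bit count so the padding bits aren't decoded as codes:

        #include <stdio.h>

        /* Pack a string of '0'/'1' characters into bytes, MSB first.
           The final byte is zero-padded on the right if the bit count
           is not a multiple of 8. Returns the number of bytes written. */
        static size_t pack_bits(const char *bits, unsigned char *out) {
            size_t nbytes = 0;
            unsigned char acc = 0;
            int nbits = 0;
            for (const char *p = bits; *p; ++p) {
                acc = (unsigned char)((acc << 1) | (*p == '1'));
                if (++nbits == 8) {          /* a full byte is ready */
                    out[nbytes++] = acc;
                    acc = 0;
                    nbits = 0;
                }
            }
            if (nbits > 0)                   /* pad the final partial byte */
                out[nbytes++] = (unsigned char)(acc << (8 - nbits));
            return nbytes;
        }

        int main(void) {
            unsigned char buf[8];
            size_t n = pack_bits("0001101011", buf);  /* "hello" above */
            for (size_t i = 0; i < n; ++i)
                printf("%02x ", buf[i]);              /* prints: 1a c0 */
            return 0;
        }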

  • SDCC and malloc() - allocating much less memory than is available

    - by Duncan Bayne
    When I compile this code with SDCC 3.1.0 and run it on an Amstrad CPC 464 (under emulation, with WinCPC 0.9.26 running on Wine): void _test_malloc() { long idx = 0; while (1) { if (malloc(5)) { printf("%ld\r\n", ++idx); } else { printf("done"); break; } } } ...it consistently taps out at 92 malloc()s. I make that 460 bytes, which leads me to a couple of questions: What is malloc() doing on this system? I was hoping for an order of magnitude more storage, even on a 64kB system. The behaviour is consistent on 64kB systems and 128kB systems; do I have to perform some sort of magic to access the additional memory, like manual bank switching?

  • Python: Unpack arbitrary-length bits for database storage

    - by sberry2A
    I have a binary data format consisting of 18,000+ packed int64s, ints, shorts, bytes and chars. The data is packed to minimize its size, so values don't always occupy byte-sized chunks. For example, a number whose min and max values are 31 and 32 respectively might be stored as a single bit, where the actual value is bit value + min, so 0 means 31 and 1 means 32. I am looking for the most efficient way to unpack all of these for subsequent processing and database storage. Right now I am able to read any value by using either struct.unpack or BitBuffer: struct.unpack for any data that starts on a bit where (bit-offset % 8 == 0 and data-length % 8 == 0), and BitBuffer for anything else. I know the offset and size of every packed piece of data, so what is going to be the fastest way to completely unpack them? Many thanks.
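    For reference, the bit arithmetic any such unpacker has to perform, sketched here in C with a hypothetical get_bits helper (struct.unpack and BitBuffer wrap the same logic): pull a field's bits starting at an absolute bit offset, MSB first, then add the field's minimum:

        #include <stdint.h>
        #include <stddef.h>

        /* Extract nbits (up to 64) starting at absolute bit offset `off`,
           MSB-first within each byte. The caller adds the field minimum:
           value = field_min + get_bits(buf, off, width). */
        static uint64_t get_bits(const uint8_t *buf, size_t off, unsigned nbits) {
            uint64_t v = 0;
            for (unsigned i = 0; i < nbits; ++i) {
                size_t bit = off + i;
                v = (v << 1) | ((uint64_t)(buf[bit >> 3] >> (7 - (bit & 7))) & 1u);
            }
            return v;
        }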

  • C++'s std::string pools, debug builds? std::string and valgrind problems

    - by Den.Jekk
    Hello, I have a problem with many valgrind warnings about possible memory leaks in std::string, like this one: 120 bytes in 4 blocks are possibly lost in loss record 4,192 of 4,687 at 0x4A06819: operator new(unsigned long) (vg_replace_malloc.c:230) by 0x383B89B8B0: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.8) by 0x383B89C3B4: (within /usr/lib64/libstdc++.so.6.0.8) by 0x383B89C4A9: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, unsigned long, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.8) I'm wondering: does std::string (GCC 4.1.2) use any memory pools? If so, is there any way to disable the pools (in the form of a debug build etc.)? Regards, Den

  • Core Data data type for just the date - not including time

    - by Jason
    I am new to Core Data, and it seems like a great way to manage the data store. However, I am also very memory-conscious, due to the fact that the iPhone doesn't have that much of it. I was a little surprised to see that the data types are so limited; e.g. there is a Date type that also includes the time, but no Date type for just the date! All the time information takes up precious bytes of memory. If I just wanted an attribute with the date (e.g. 2/15/2010 rather than 2/15/2010 02:34:48), how could I do this? Is it possible?

  • Google App Engine - About how much quota does a single datastore put use?

    - by Spines
    The latency for a datastore put is about 150ms (http://code.google.com/status/appengine/detail/datastore/2010/03/11#ae-trust-detail-datastore-put-latency). About how much CPUTime is used by a single datastore put with a data size of 100 bytes, into an entity that has only 2 columns and no indexes? I plan to do some testing with this later today to figure it out, but if anyone already knows, that would help me out :). Also, does anyone know about how much extra CPUTime overhead doing this datastore put through the task queue would add? Note: This is kind of a follow-up to this question: http://stackoverflow.com/questions/2421075/google-app-engine-how-reliable-are-the-logs.

  • Treat a void function as a value

    - by Brendan Long
    I'm writing some terrible, terrible code, and I need a way to put a free() in the middle of a statement. The actual code is: int main(){ return printf("%s", isPalindrome(fgets(malloc(1000), 1000, stdin))?"Yes!\n":"No!\n") >= 0; // leak 1000 bytes of memory } I was using alloca(), but I can't be sure that will actually work on my target computer. My problem is that free() returns void, so my code produces this error message: error: void value not ignored as it ought to be The obvious idea I had was: int myfree(char *p){ free(p); return 0; } which is nice in that it makes the code even more unreadable, but I'd prefer not to add another function. I also briefly tried treating free() as a function pointer, but I don't know if that would work, and I don't know enough about C to do it properly. Note: I know this is a terrible idea. Don't try this at home, kids.
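    For what it's worth, C's comma operator already lets a void call sit mid-expression: (free(p), 0) evaluates free(p), discards its void result, and yields 0 as the value of the whole expression. A sketch in the spirit of the question's code:

        #include <stdio.h>
        #include <stdlib.h>

        /* The comma operator evaluates its left operand (a void call
           is fine), then yields the right operand as the result. */
        #define FREE_AND(p, value) (free(p), (value))

        int main(void) {
            char *buf = malloc(1000);
            /* frees buf mid-statement and still passes 0 to printf */
            return printf("%d\n", FREE_AND(buf, 0)) < 0;
        }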

  • HTTP authentication in Xcode

    - by user313100
    I am trying to make Twitter work in my app, and everything works fine except that the code does not seem to recognize an error from Twitter. If the username/password are not valid, I get an error message through this function: - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { NSString* strData = [[[NSString alloc] initWithBytes:[data bytes] length:[data length] encoding:NSASCIIStringEncoding] autorelease]; NSLog(@"Received data: %@", strData ) ; return ; } It prints: Received data: Could not authenticate you. However, the app continues to my post-a-tweet view and ignores the error. Obviously, I do not have something set up right to detect such an error from Twitter, so my question is: how do I get my code to recognize an error like this? This uses basic HTTP auth, by the way; please don't mention OAuth, I'm just trying to get this to work for now.

  • PHP / javascript live chat using too much bandwidth

    - by David
    I am learning about JavaScript, so I am making a live chat system with PHP and JavaScript. Each message gets logged in a file on the server, and the JavaScript refreshes the log every second. I'm using Firebug to monitor resource usage, and under the Net tab I see each update, and the bytes add up really fast. I know I can make it update less often, but is there a way that when the user I'm talking to sends a message, it gets sent to the server and the server then alerts my client that the chat log needs to update? That way it only refreshes when the log has actually changed. Let me know, thanks.

  • how many color combinations in a 24-bit image

    - by numerical25
    I am reading a book and I am not sure if it's a mistake or I am misunderstanding the quote. It reads... Nowadays every PC you can buy has hardware that can render images with at least 16.7 million individual colors. Rather than have an array with thousands of color entries, the images instead contain explicit color values for each pixel. A 24-bit display, of course, uses 24 bits, or 3 bytes per pixel, for color information. This gives 1 byte, or 256 distinct values each, for red, green, and blue. This is generally called true color, because 256^3 is 16.7 million. He says 1 byte is equal to 256 distinct values. 1 byte = 8 bits, and 8^2 bits = 64 distinct colors, right? It's not adding up for me. I know it might be something simple to understand, but I don't understand.
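    For reference, the arithmetic at issue: n bits represent 2^n distinct values (each extra bit doubles the count), not n^2, so

        $$2^8 = 256 \text{ values per channel}, \qquad (2^8)^3 = 2^{24} = 16{,}777{,}216 \approx 16.7 \text{ million colors}$$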

  • printf not passing correct Hex Address to stack

    - by kriss
    I have a hiccup using printf. I am on Ubuntu 10.04. Basically, I have a C program asking for some input, which it then prints back; printing something after inputting works fine. I tried to insert some hex bytes onto the stack in the following format: printf "hello world!\x12\x23\x34" | ./input1 But I don't know what the problem is. If I give only a string longer than 12 bytes, it overwrites the return address, BUT if I give hex bytes (through printf), it doesn't overwrite the return address; instead something else gets stored. Could anyone help? I can't proceed further because of this. Thanks in advance

  • how to show readable output from os.urandom(64)

    - by zjm1126
    My code is: print os.urandom(64) which outputs: > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py" \xd0\xc8=<\xdbD' \xdf\xf0\xb3>\xfc\xf2\x99\x93 =S\xb2\xcd'\xdbD\x8d\xd0\\xbc{&YkD[\xdd\x8b\xbd\x82\x9e\xad\xd5\x90\x90\xdcD9\xbf9.\xeb\x9b>\xef#n\x84 which isn't readable, so I tried this: print os.urandom(64).decode("utf-8") but then I get: > "D:\Python25\pythonw.exe" "D:\zjm_code\a.py" Traceback (most recent call last): File "D:\zjm_code\a.py", line 17, in <module> print os.urandom(64).decode("utf-8") File "D:\Python25\lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode bytes in position 0-3: invalid data What should I do to get human-readable output?

  • getResourceAsStream returns an HttpInputStream, not the entire file

    - by khue
    Hi, I have a web application with an applet that copies a file, packed with the applet, to the client machine. When I deploy it to the web server and use: InputStream in = getClass().getResourceAsStream("filename"); the in.available() always returns a size of 8192 bytes for every file I have tried, which means the file is corrupted when it is copied to the client computer. The InputStream is of type HttpInputStream (sun.net.protocol.http.HttpUrlConnection$httpInputStream). But when I test the applet in the applet viewer, the files are copied fine, and the InputStream returned is of type BufferedInputStream, which reports the file's full byte size. I guess that when getResourceAsStream reads from the file system a BufferedInputStream is used, and over HTTP an HttpInputStream is used. How can I copy the file completely? Is there a size limit for HttpInputStream? Thanks a lot.

  • problem when switching between portrait and landscape in Android views

    - by vnshetty
    In my application I display a web page in a WebView. It works fine, but if I flip between landscape and portrait or vice versa, the app exits and comes back to the main page. What is the problem? Logcat: 03-10 13:35:47.123: INFO/WindowManager(69): Setting rotation to 1, animFlags=1 03-10 13:35:47.242: INFO/ActivityManager(69): Config changed: { scale=1.0 imsi=310/260 loc=en_US touch=3 keys=2/1/1 nav=3/1 orien=2 layout=17 uiMode=17 seq=70} 03-10 13:35:47.363: INFO/UsageStats(69): Unexpected resume of com.mireader while already resumed in com.mireader 03-10 13:35:50.413: DEBUG/dalvikvm(69): GC_EXPLICIT freed 395 objects / 20424 bytes in 195ms

  • What is the best way to maintain an entity's original properties when they are not included in MVC binding from edit page?

    - by kingdango
    I have an ASP.NET MVC view for editing a model object. The edit page includes most of the properties of my object but not all of them; specifically, it does not include the CreatedOn and CreatedBy fields, since those are set upon creation (in my service layer) and shouldn't change in the future. Unless I include these properties as hidden fields, they will not be picked up during binding and are unavailable when I save the modified object in my EF 4 DB context. In actuality, upon save the original values would be overwritten by nulls (or some type-specific default). I don't want to drop these in as hidden fields, because it is a waste of bytes and I don't want those values exposed to potential manipulation. Is there a "first class" way to handle this situation? Is it possible to specify that an EF model property is to be ignored unless explicitly set?

  • Linux, C++ audio capturing (just microphone) library

    - by TheOm3ga
    I'm developing a musical game; it's like SingStar, but instead of singing you have to play the recorder. It's called oFlute, and it's still in an early stage of development. In the game, I capture the microphone input, then run a simple FFT analysis and compare the results to the recorder's typical frequencies, thus getting the played note. At the beginning, the audio library I was using was RtAudio, but I don't remember why I switched to PortAudio, which is what I'm currently using. The problem is that, from time to time, it either crashes randomly or stops capturing, as if there were no sound coming from the microphone. My question is: what's the best option for capturing microphone input on Linux? I just need to open, read, and close a stream of bytes from the microphone. I've been reading this guide, and (un)surprisingly it says: "I don't think that PortAudio is very good API for Unix-like operating systems." So, what do you recommend?
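    For what it's worth, one road that avoids a third-party wrapper on Linux is talking to ALSA directly. A minimal capture sketch (compile with -lasound); the "default" device and the 16-bit mono 44.1 kHz format are assumptions:

        #include <alsa/asoundlib.h>
        #include <stdio.h>

        /* Capture ~1 second of audio from the default ALSA device. */
        int main(void) {
            snd_pcm_t *pcm;
            static short buf[44100];
            int err;
            if ((err = snd_pcm_open(&pcm, "default",
                                    SND_PCM_STREAM_CAPTURE, 0)) < 0) {
                fprintf(stderr, "open: %s\n", snd_strerror(err));
                return 1;
            }
            /* 16-bit LE, interleaved, 1 channel, 44.1 kHz,
               allow resampling, 500 ms max latency */
            if ((err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                          SND_PCM_ACCESS_RW_INTERLEAVED,
                                          1, 44100, 1, 500000)) < 0) {
                fprintf(stderr, "set_params: %s\n", snd_strerror(err));
                return 1;
            }
            snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 44100);
            if (n < 0)
                n = snd_pcm_recover(pcm, n, 0);  /* recover from overruns */
            printf("captured %ld frames\n", (long)n);
            snd_pcm_close(pcm);
            return 0;
        }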

  • How do you encrypt data between client and server running in Flash and Java?

    - by ArmlessJohn
    We have a multi-client system where the client is written in Flash and the server in Java. Currently, communication is done in Flash through flash.net.Socket, and the protocol is written in JSON. The server uses a custom port to receive connections and then proceeds to talk with each client. As expected, data is sent and received on both ends as raw bytes, which are then decoded as needed. We would like to encrypt the communication between the clients and the server. I have some basic understanding of public/private key encryption, but I do not know the best way to exchange keys or what libraries are available (in both languages) to do this. What would be the best strategy for attacking this problem, and where should I start looking for libraries/methods to implement this encryption?

  • How to receive packets on the MCU's serial port?

    - by itisravi
    Hello, consider this code running on my microcontroller unit (MCU): while(1){ do_stuff; if(packet_from_PC) send_data_via_gpio(new_packet); //send via general-purpose i/o pins else send_data_via_gpio(default_packet); do_other_stuff; } The MCU is also interfaced to a PC via a UART. Whenever the PC sends data to the MCU, the new_packet is sent; otherwise the default_packet is sent. Each packet can be 5 or more bytes, with a predefined packet structure. My question is: 1. Should I receive the entire packet from the PC inside the UART interrupt service routine (ISR)? In that case, I have to implement a state machine inside the ISR to assemble the packet (which can get lengthy, with if-else or switch-case blocks). 2. Or should I detect a REQUEST command (one byte) from the PC in my ISR, set a flag, disable the UART interrupt alone, and assemble the packet in my while(1) loop by polling the UART?
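    A common middle ground between the two options is to keep the ISR tiny (it only pushes each received byte into a ring buffer) and run the packet assembly from the while(1) loop. A minimal sketch; UART_DATA_REG (and its address) is a hypothetical device register, and the 5-byte packet length is an assumption:

        #include <stdint.h>
        #include <stdbool.h>

        #define UART_DATA_REG (*(volatile uint8_t *)0x40001000u) /* hypothetical */
        #define RB_SIZE 64           /* ring-buffer size, power of two */
        #define PACKET_LEN 5         /* assumed packet length */

        static volatile uint8_t rb[RB_SIZE];
        static volatile uint8_t rb_head, rb_tail;

        /* UART receive ISR: just store the byte; no parsing here. */
        void uart_rx_isr(void) {
            uint8_t byte = UART_DATA_REG;
            uint8_t next = (rb_head + 1) & (RB_SIZE - 1);
            if (next != rb_tail) {   /* drop the byte if the buffer is full */
                rb[rb_head] = byte;
                rb_head = next;
            }
        }

        /* Called from while(1): assemble packets outside interrupt context.
           Returns true once a complete packet has been copied into packet. */
        bool poll_packet(uint8_t *packet) {
            static uint8_t n;
            while (rb_tail != rb_head) {
                packet[n++] = rb[rb_tail];
                rb_tail = (rb_tail + 1) & (RB_SIZE - 1);
                if (n == PACKET_LEN) { n = 0; return true; }
            }
            return false;
        }

    The main loop then sends new_packet when poll_packet() returns true, and default_packet otherwise.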

  • aacplus frame alignment problems

    - by Daniel Mošmondor
    I have an application that rips aac+ audio streams, cutting them at a regular interval (e.g. every 10 minutes). Sometimes the files play OK, but sometimes Windows Media Player just closes when trying to build the DirectShow graph. I am using the Orban aacplus plugin, and it works under DirectShow. When I play a problematic file with Winamp or VLC, which have their own aacplus decoding engines, it works fine. However, I need it to work under DirectShow. Anyway, a problematic file is here: http://www.videophill.com/files/00272-20100418100002.aac I know there is a frame alignment error; I confirmed my theory by filling the first 256 bytes with 0x00, trying to play it again, and it worked. Is there any info on aacplus frames available on the web, so I can try to find the beginning of the frame manually and cut the rest off?
