Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.


  • The shortest way to convert infix expressions to postfix (RPN) in C

    - by kuszi
    The original formulation is given here (you can also test your program for correctness). Additional rules: 1. The program should read from standard input and write to standard output. 2. The program should return zero to the calling system/program. 3. The program should compile and run with gcc -O2 -lm -s -fomit-frame-pointer. The challenge has some history: the call for short implementations was announced on the Polish programming contest blog in September 2009. After the contest, the shortest code was 81 chars long. Later a second call was made for even shorter code, and a year later matix2267 published his solution in 78 bytes: main(c){read(0,&c,1)?c-41&&main(c-40&&(c%96<27||main(c),putchar(c))):exit(0);} Anyone to make it even shorter, or to prove this is impossible?
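
    For contrast with the golfed one-liner, here is a readable baseline rather than a contest entry: a minimal shunting-yard sketch for single-letter operands, the four basic (left-associative) operators and parentheses, reading standard input and writing standard output. It is my own illustration, not from the original challenge.

        #include <stdio.h>
        #include <ctype.h>

        /* precedence of the operators handled by this sketch */
        static int prec(int op) {
            switch (op) {
            case '*': case '/': return 2;
            case '+': case '-': return 1;
            default:            return 0;   /* '(' and anything else */
            }
        }

        int main(void) {
            int c, top = 0;
            char stack[1024];                        /* operator stack */
            while ((c = getchar()) != EOF) {
                if (isalnum(c)) {
                    putchar(c);                      /* operands go straight out */
                } else if (c == '(') {
                    stack[top++] = (char)c;
                } else if (c == ')') {
                    while (top > 0 && stack[top - 1] != '(')
                        putchar(stack[--top]);       /* pop back to the '(' */
                    if (top > 0) top--;              /* drop the '(' itself */
                } else if (prec(c) > 0) {
                    while (top > 0 && prec(stack[top - 1]) >= prec(c))
                        putchar(stack[--top]);       /* left-associative operators */
                    stack[top++] = (char)c;
                } else if (c == '\n') {
                    while (top > 0)
                        putchar(stack[--top]);       /* flush at end of expression */
                    putchar('\n');
                }
            }
            while (top > 0)
                putchar(stack[--top]);               /* flush if input ends without newline */
            return 0;
        }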

    Read the article

  • How to receive packets on the MCU's serial port?

    - by itisravi
    Consider this code running on my microcontroller unit (MCU): while(1){ do_stuff; if(packet_from_PC) send_data_via_gpio(new_packet); /* send via general-purpose I/O pins */ else send_data_via_gpio(default_packet); do_other_stuff; } The MCU is also interfaced to a PC via a UART. Whenever the PC sends data to the MCU, the *new_packet* is sent, otherwise the *default_packet* is sent. Each packet can be 5 or more bytes with a predefined packet structure. My question is: 1. Should I receive the entire packet from the PC inside the UART interrupt service routine (ISR)? In this case, I have to implement a state machine inside the ISR to assemble the packet (which can be lengthy, with if-else or switch-case blocks). 2. Or should I detect a REQUEST command (one byte) from the PC in my ISR, set a flag, disable the UART interrupt alone, and form the packet in my while(1) loop by polling the UART? A sketch of the second structure is shown below.
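
    The following is a hedged sketch of that second option (the register address, buffer size and function names are mine, not from any particular MCU vendor): the ISR only moves bytes into a ring buffer, and the packet assembly runs in the main loop, so the ISR stays short.

        #include <stdint.h>

        #define RX_BUF_SIZE 32                      /* power of two for cheap wrapping */
        #define UART_DATA_REG (*(volatile uint8_t *)0x4000C000u)  /* placeholder: use the real RX register */

        static volatile uint8_t rx_buf[RX_BUF_SIZE];
        static volatile uint8_t rx_head;            /* written only by the ISR */
        static volatile uint8_t rx_tail;            /* written only by the main loop */

        void uart_rx_isr(void)                      /* hook this to the real UART vector */
        {
            uint8_t byte = UART_DATA_REG;           /* read the received byte */
            rx_buf[rx_head & (RX_BUF_SIZE - 1)] = byte;
            rx_head++;                              /* nothing else: keep the ISR short */
        }

        int uart_read_byte(uint8_t *out)            /* poll this from the while(1) loop */
        {
            if (rx_head == rx_tail)
                return 0;                           /* no byte pending */
            *out = rx_buf[rx_tail & (RX_BUF_SIZE - 1)];
            rx_tail++;
            return 1;
        }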

    Read the article

  • Java: BufferedImage from raw BMP file format data

    - by Victor
    I've got a BMP file's raw pixel table in a byte[]; its structure is: (b g r) (b g r) ... (b g r) padding ... (b g r) (b g r) ... (b g r) padding where r, g and b are one byte each, and padding rounds the row length up to a multiple of 4 bytes. So, how can I create a new BufferedImage from this raw data without copying, just using this raw data? I took a look at creating a BufferedImage from a DataBuffer, but I just didn't get it. Unfortunately ImageIO is not allowed in my situation.

    Read the article

  • Playback audio data with GWT

    - by Henrik
    I am creating a GWT client application which interacts with a server, and I am getting all my response data from the server in JSON format. Among other things, there is wave data in the server's database which I would like to retrieve and then play back on the client. I am able to get the wave data as an array of bytes in the JSON format. My problem is, how do I play back the wave array data in a browser? Is it even possible, or do I have to find another solution? I've searched the web and found some GWT packages which are able to play back sound, but they all play back directly from a URL.

    Read the article

  • How to transfer large files from desktop to server (.NET)

    - by rahulchandran
    I am writing a .NET 2.0 based desktop client that will send large files (well, largish: under 2 GB) to a server. I need to develop the server as well, and the server can be on any technology. It should be secure, so an underlying SSL stream is needed. What are my options? Any obvious caveats etc. I should be aware of? To my mind the simplest solution is to open a TCP/IP connection over SSL to the server, send n packets each of size M bytes, have the server append the chunks to the file, and finally send an EOF packet as well. Is this horrible? Will the performance suck on the server with all these disk writes? What are other clever options? I am limited to .NET 2.0 on the client; if I did move to a WCF client, will it buy me something magical and cool for this scenario? Thanks

    Read the article

  • Java memory usage

    - by xdevel2000
    I know I have posted a similar question about array memory usage before, but now I want to make the question more specific. After I read this article: http://www.javamex.com/tutorials/memory/object_memory_usage.shtml I didn't understand some things: is the size of a data type always the same across platforms (Linux / Windows, 32 / 64 bit), so an int will always be 32 bits?; when I compute the memory usage, must I also count the reference value itself? If I have an object of a class that has an int field, will its memory be 12 (object header) + 4 (reference) + 4 (the int field) + 4 (padding) = 24 bytes??

    Read the article

  • Converting C pointer types

    - by bobbyb
    I have a C pointer to a structure type called uchar4, which looks like { uchar x; uchar y; uchar z; uchar w; } I also have data passed in as uint8*. I'd like to create a uchar4* pointing to the data at the uint8*, so I've tried doing this: uint8 *data_in; uchar4 *temp = (uchar4*)data_in; However, the first 8 bytes always seem to be wrong. Is there another way of doing this?
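
    For what it's worth, here is a hedged, self-contained check (the names and test data are illustrative): the cast itself works for a struct of four one-byte members, so if the first bytes come out wrong, the likely culprits are the source pointer starting at a header, or uint8 not actually being one byte on that platform.

        #include <stdio.h>

        typedef unsigned char uchar;
        typedef struct { uchar x, y, z, w; } uchar4;

        int main(void) {
            unsigned char raw[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };  /* stand-in for data_in */
            uchar4 *temp = (uchar4 *)raw;                       /* same cast as in the question */
            /* Expect "1 2 3 4" then "5 6 7 8"; shifted or garbage values would point
             * at the source data rather than the cast. */
            printf("%u %u %u %u\n", temp[0].x, temp[0].y, temp[0].z, temp[0].w);
            printf("%u %u %u %u\n", temp[1].x, temp[1].y, temp[1].z, temp[1].w);
            return 0;
        }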

    Read the article

  • Code sample with a buffer overflow (gets). Why does it not behave as expected?

    - by citronas
    This is an extract from a C program that should demonstrate a buffer overflow. void foo() { char arr[8]; printf(" enter bla bla bla"); gets(arr); printf(" you entered %s\n", arr); } The question was "How many input chars can a user enter at most without creating a buffer overflow?" My initial answer was 8, because the char array is 8 bytes long. Although I was pretty certain my answer was correct, I tried a higher number of chars and found that the limit of chars I can enter before I get a segmentation fault is 11. (I'm running this on Ubuntu in VirtualBox.) So my question is: why is it possible to enter 11 chars into that 8-byte array?
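
    As an aside, overflowing a few bytes past the array usually lands in stack padding and alignment slack before anything critical (saved registers, canary, return address) is corrupted, which is one plausible reason 11 characters happen to "work" here. A hedged sketch of the usual fix, which makes the guessing game moot, is to bound the read with fgets:

        #include <stdio.h>
        #include <string.h>

        void foo(void) {
            char arr[8];
            printf("enter bla bla bla: ");
            if (fgets(arr, sizeof arr, stdin) != NULL) {   /* never writes past arr */
                arr[strcspn(arr, "\n")] = '\0';            /* strip the newline, if present */
                printf("you entered %s\n", arr);
            }
        }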

    Read the article

  • Length of data returned from CGImageGetDataProvider is larger than expected

    - by jcoplan
    I'm loading a grayscale PNG image and I want to access the underlying pixel data. However, after I get the pixel data via CGImageGetDataProvider, the length of the data returned is longer than expected. CGDataProviderRef provider = CGDataProviderCreateWithFilename(cStr); CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault); mapWidth = CGImageGetWidth(image); mapHeight = CGImageGetHeight(image); lookupMap = CGDataProviderCopyData(CGImageGetDataProvider(image)); mapWidth comes out to 1804 and mapHeight comes out to 1005, the product of which is 1813020. When I call CFDataGetLength(lookupMap) the response is 1833120. Where are these extra 20100 bytes coming from? Any help here is much appreciated. Am I missing something about the underlying format of the image?
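
    A hedged observation (the helper below is mine, not from the question): the numbers are consistent with per-row padding, since 1833120 / 1005 = 1824 bytes per row, i.e. 20 padding bytes after each 1804-pixel row. Indexing with the image's reported row stride, rather than with the width, accounts for that:

        #include <CoreGraphics/CoreGraphics.h>   /* assuming an iOS/CoreGraphics build */

        /* Read one 8-bit grayscale pixel from the copied data, honouring the
         * row stride (which includes any padding bytes at the end of each row). */
        static UInt8 gray_pixel_at(CGImageRef image, CFDataRef pixelData,
                                   size_t x, size_t y) {
            size_t bytesPerRow = CGImageGetBytesPerRow(image);
            const UInt8 *bytes = CFDataGetBytePtr(pixelData);
            return bytes[y * bytesPerRow + x];
        }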

    Read the article

  • write() in sys/uio.h returns -1

    - by fredrik
    I'm using Ubuntu Server 9.10, an AMD Phenom II CPU, and g++ (Ubuntu 4.4.1-4ubuntu9) 4.4.1, trying to run the application pftp-shit v1.11. The following code in tcp.cc executes successfully: int outfile_fd = open(name, O_CREAT | O_TRUNC | O_RDWR | O_BINARY) which returns a file descriptor (in my case 6); name is a char array containing a valid path to my file, which is created successfully. Also running successfully: fchmod(outfile_fd, S_IRUSR | S_IWUSR); and access(name, W_OK). The issue occurs when running the function (from sys/uio.h) write(outfile_fd, this->control_buffer, read_length), which returns -1. -1 is returned if nothing was written; otherwise a non-negative integer equal to the number of bytes written is returned. Does anyone have a clue how I can get the write function to work?
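
    A hedged first step (the wrapper name is mine): when write() returns -1, the reason is left in errno, so printing it narrows things down considerably (EBADF, EFAULT and EINVAL would each point at a different bug).

        #include <stdio.h>
        #include <string.h>
        #include <errno.h>
        #include <unistd.h>

        /* Same call as in the question, but report why it failed. */
        ssize_t checked_write(int fd, const void *buf, size_t len) {
            ssize_t n = write(fd, buf, len);
            if (n < 0)
                fprintf(stderr, "write failed: %s (errno %d)\n", strerror(errno), errno);
            return n;
        }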

    Read the article

  • SQL Server 2005 - Understanding output of DBCC SHOWCONTIG

    - by user169743
    I'm seeing some slow performance on a SQL Server 2005 database. I've been doing some research regarding SQL Server performance, but I'm having difficulty fully understanding the output of DBCC SHOWCONTIG and would be very grateful if someone could have a look and offer some suggestions to improve performance.
        TABLE level scan performed.
        Pages Scanned................................: 19348
        Extents Scanned..............................: 2427
        Extent Switches..............................: 3829
        Avg. Pages per Extent........................: 8.0
        Scan Density [Best Count:Actual Count].......: 63.16% [2419:3830]
        Logical Scan Fragmentation ..................: 8.40%
        Extent Scan Fragmentation ...................: 35.15%
        Avg. Bytes Free per Page.....................: 938.1
        Avg. Page Density (full).....................: 88.41%

    Read the article

  • Python input UnicodeDecodeError

    - by The man on the Clapham omnibus
    Python 3.x:
        >>> a = input()
        hope
        >>> a
        'hope'
        >>> b = input()
        håpe
        >>> b
        'håpe'
        >>> c = input()
        (start typing hå..., delete using backspace, and change it to hope)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        UnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 1: invalid continuation byte
    The situation is not terrible and I am working around it, but I find it strange that the bytes get messed up when deleting. Has anyone else experienced this? The terminal history shows that what I entered looked like h?ope. Any ideas? In the script that uses this, I do import readline to give command-line history.

    Read the article

  • Printing a double in binary

    - by Happy Mittal
    In Thinking in C++ by Bruce Eckel, there is a program to print a double value in binary (Chapter 3, page 189).
        int main(int argc, char* argv[]) {
          if(argc != 2) {
            cout << "Must provide a number" << endl;
            exit(1);
          }
          double d = atof(argv[1]);
          unsigned char* cp = reinterpret_cast<unsigned char*>(&d);
          for(int i = sizeof(double); i > 0; i -= 2) {
            printBinary(cp[i-1]);
            printBinary(cp[i]);
          }
        }
    Here, when printing cp[i] with i = 8 (assuming a double is 8 bytes), wouldn't it be undefined behaviour? I mean, this code doesn't work correctly, as it never prints cp[0].
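
    For reference, a hedged rewrite in plain C of a loop that visits each byte exactly once and stays in bounds (the bit-printing helper is mine; the book's version uses its own printBinary):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        static void print_binary(unsigned char byte) {
            for (int bit = 7; bit >= 0; bit--)
                putchar(((byte >> bit) & 1) ? '1' : '0');
        }

        int main(int argc, char *argv[]) {
            if (argc != 2) {
                fprintf(stderr, "Must provide a number\n");
                return 1;
            }
            double d = atof(argv[1]);
            unsigned char bytes[sizeof(double)];
            memcpy(bytes, &d, sizeof(double));           /* sidestep aliasing questions */
            for (size_t i = sizeof(double); i > 0; i--)  /* indices 7..0: every byte, no overrun */
                print_binary(bytes[i - 1]);              /* highest index first: MSB first on little-endian */
            putchar('\n');
            return 0;
        }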

    Read the article

  • Seeking not working in HTML5 audio tag

    - by lord_wilmore
    I have a lighttpd server running locally. If I load a static file on the server (through an HTML5 audio tag), it plays and seeks fine. However, seeking doesn't work when running a dev server (web.py/CherryPy) or if I return the bytes via a defined action URL instead of as a static file. It won't load the duration either. According to the "HTTP byte range requests" section on this Opera page, it's something to do with support for byte range requests/partial content responses; the content is treated as streaming instead. What I don't understand is: 1. If the browser has the whole file downloaded, surely it can display the duration and surely it can seek. 2. What do I need to do on the web server to enable byte range requests (for non-static URLs)? Any advice would be most gratefully received.

    Read the article

  • Small string optimization for vector?

    - by BuschnicK
    I know several (all?) STL implementations implement a "small string" optimization where, instead of storing the usual 3 pointers for begin, end and capacity, a string will store the actual character data in the memory used for the pointers if sizeof(characters) <= sizeof(pointers). I am in a situation where I have lots of small vectors with an element size <= sizeof(pointer). I cannot use fixed-size arrays, since the vectors need to be able to resize dynamically and may potentially grow quite large. However, the median (not mean) size of the vectors will only be 4-12 bytes. So a "small string" optimization adapted to vectors would be quite useful to me. Does such a thing exist? I'm thinking about rolling my own by simply brute-force converting a vector to a string, i.e. providing a vector interface to a string. Good idea?

    Read the article

  • When will a TCP network packet be fragmented at the application layer?

    - by zooropa
    When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet limit of 1500 bytes (at the network layer). But that fragmentation will be transparent to the recipient at the application layer, since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
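
    Worth noting alongside the question: regardless of IP fragmentation, TCP itself presents a byte stream, so a logical "packet" handed to send() can arrive split across several reads or merged with the next one. A hedged sketch of a receive loop that accounts for this (the function and its fixed-size-message assumption are mine):

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <stddef.h>

        /* Keep calling recv() until a full fixed-size message has arrived.
         * Returns the message length, 0 if the peer closed, or -1 on error. */
        ssize_t recv_all(int sock, void *buf, size_t msg_len) {
            size_t got = 0;
            while (got < msg_len) {
                ssize_t n = recv(sock, (char *)buf + got, msg_len - got, 0);
                if (n <= 0)
                    return n;            /* closed connection or error */
                got += (size_t)n;        /* partial read: keep reading */
            }
            return (ssize_t)got;
        }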

    Read the article

  • MS SQL 2005 - Understanding output of DBCC SHOWCONTIG

    - by user169743
    I'm seeing some slow performance on an MS SQL 2005 database. I've been doing some research regarding MS SQL performance, but I'm having difficulty fully understanding the output of DBCC SHOWCONTIG and would be very grateful if someone could have a look and offer some suggestions to improve performance.
        TABLE level scan performed.
        Pages Scanned................................: 19348
        Extents Scanned..............................: 2427
        Extent Switches..............................: 3829
        Avg. Pages per Extent........................: 8.0
        Scan Density [Best Count:Actual Count].......: 63.16% [2419:3830]
        Logical Scan Fragmentation ..................: 8.40%
        Extent Scan Fragmentation ...................: 35.15%
        Avg. Bytes Free per Page.....................: 938.1
        Avg. Page Density (full).....................: 88.41%

    Read the article

  • How to write a floating point value accurately to a bin file

    - by user319873
    Hi, I am trying to dump floating point values from my program to a bin file. Since I can't use any stdlib function, I am thinking of writing them char by char to a big char array, which I then dump to a file in my test application. It's like: float a = 3132.000001; I will be dumping this to a char array in 4 bytes. A code example would be: if((a < 1.0) && (a > 1.0) || (a > -1.0 && a < 0.0)) a = a*1000000 // 6-digit fraction part. Can you please help me write this in a better way?
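
    One hedged alternative to scaling the value (a sketch, assuming the reader of the file uses the same endianness and IEEE-754 float format): copy the raw 4 bytes of the float into the char array with memcpy, which round-trips the value exactly.

        #include <string.h>

        /* Store the raw bytes of 'value' at 'dst' (4 bytes for a 32-bit float). */
        void put_float(unsigned char *dst, float value) {
            memcpy(dst, &value, sizeof value);
        }

        /* Read the float back from the same byte layout. */
        float get_float(const unsigned char *src) {
            float value;
            memcpy(&value, src, sizeof value);
            return value;
        }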

    Read the article

  • AES Key encoded byte[] to String and back to byte[]

    - by Tom Brito
    In the similar question "Conversion of byte[] into a String and then back to a byte[]" it is said not to do the byte[]-to-String-and-back conversion, which seems to apply to most cases, mainly when you don't know the encoding used. But in my case I'm trying to save the javax.crypto.SecretKey data to a DB and recover it afterwards. The interface provides a getEncoded() method which returns the key data encoded as a byte[], and with another class I can use this byte[] to recover the key. So, the question is: how do I write the key bytes as a String, and later get back the byte[] to regenerate the key?

    Read the article

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs; however, now I am working with MPEG-2 transport streams and the H.264 encoded video within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering, when I reference some of the memory, whether it is faster to read 4 bytes at once as an integer or 1 byte at a time as characters. I thought I read somewhere that x86 processors are optimized to a 4-byte granularity, but I'm not sure if this is true... Thanks!
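
    A hedged sketch of the 4-bytes-at-a-time approach (the offsets and field widths are made up for illustration): load a 32-bit word out of the mapped or chunk-read buffer with memcpy, which avoids unaligned-access concerns and typically compiles to a single load on x86, then do the bit-shifting in a register. Note that stream formats such as MPEG-2 TS define their fields big-endian, so a byte swap may still be needed before masking.

        #include <stdint.h>
        #include <string.h>
        #include <stddef.h>

        /* Load 32 bits from an arbitrary byte offset, in host byte order. */
        static uint32_t load_u32(const unsigned char *p) {
            uint32_t v;
            memcpy(&v, p, sizeof v);
            return v;
        }

        /* Illustrative only: extract a 13-bit field starting 3 bits into the word. */
        static uint32_t example_field(const unsigned char *buf, size_t offset) {
            uint32_t word = load_u32(buf + offset);
            return (word >> 3) & 0x1FFFu;
        }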

    Read the article

  • Perl: Fastest way to get directory (and subdirs) size on Unix - using stat() at the moment

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:
        sub getDirSize {
          my $dirSize = 0;
          my @dirContent = <*>;
          my $sizeOfFilesInDir = 0;
          foreach my $dirContent (@dirContent) {
            if (-f $dirContent) {
              my $size = (stat($dirContent))[7];
              $dirSize += $size;
            } elsif (-d $dirContent) {
              $dirSize += getDirSize($dirContent);
            }
          }
          return $dirSize;
        }
    The script executes for more than one hour and I want to make it faster. I tried the shell du command, but the output of du (converted to bytes) is not accurate, and it is also quite time consuming. I am working on HP-UX 11i v1.

    Read the article

  • How to receive a datastream from a device on your computer, in C#

    - by WebDevHobo
    I plan to build a small audio-recorder app in C#. My laptop has a built-in microphone that's always active, so I want to use that as an early-stage test. I would simply start recording, save the file as a .wav, or even use the LAME DLL to make it into an MP3. The problem is, I don't know how to access that microphone. Do I use a library that can detect the device, or do I just catch a stream of bytes from the port that the device is on? I don't have any experience with receiving data from connected devices. I suppose that I'll need to read all the data into a byte array and then serialize that into a WAV file, but I'm not sure. Can I get some pointers on this subject?

    Read the article

  • Handling of data truncation (short reads/writes) in FUSE

    - by Vi
    I expect that any good program should do all its reads and writes in a loop until all the data is written/read, without relying on write writing everything (even with regular files). Am I right? I implemented a simple FUSE filesystem which only allows reading and writing with small buffers, very often returning that fewer bytes were written than are in the buffer (using -o direct_io). Some programs work, some don't (notably mountlo). Are they buggy, or should programs not expect truncated writes and reads from regular files? In general, are seekable file descriptors expected to truncate data like sockets and pipes do?
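
    For reference, a hedged sketch of the write loop the question expects well-behaved programs to use (the wrapper is mine): keep calling write() until the whole buffer is out, handling both short writes and EINTR.

        #include <errno.h>
        #include <stddef.h>
        #include <unistd.h>

        int write_all(int fd, const char *buf, size_t len) {
            size_t done = 0;
            while (done < len) {
                ssize_t n = write(fd, buf + done, len - done);
                if (n < 0) {
                    if (errno == EINTR)
                        continue;        /* interrupted: retry the same write */
                    return -1;           /* real error; inspect errno */
                }
                done += (size_t)n;       /* short write: send the rest */
            }
            return 0;
        }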

    Read the article

  • C++ normalizing data sizes across systems

    - by Bocochoco
    I have a struct with three variables: two unsigned ints and an unsigned char. From my understanding, a C++ char is always 1 byte, regardless of what operating system it is on. The same can't be said for other datatypes. I am looking for a way to normalize PODs so that, when saved into a binary file, the resulting file is readable on any operating system the code is compiled for. I changed my struct to use 1-byte alignment by adding #pragma as follows: #pragma pack(push, 1) struct test { int a; }; #pragma pack(pop) but that doesn't necessarily mean that int a is exactly 4 bytes on every OS, I don't think? Is there a way to ensure that a file saved from my code will always be readable?
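
    A hedged sketch of the usual approach (the field names are illustrative, and byte order still has to be agreed on separately): use the fixed-width types from <stdint.h>/<cstdint>, which have the same size on every platform, and let the compiler verify the on-disk layout at build time.

        #include <stdint.h>
        #include <assert.h>      /* static_assert (C11) */

        #pragma pack(push, 1)
        struct test_record {
            uint32_t a;          /* exactly 4 bytes everywhere */
            uint32_t b;
            uint8_t  flag;       /* exactly 1 byte */
        };
        #pragma pack(pop)

        /* With 1-byte packing this should be 9 bytes on every compiler that
         * honours the pragma; fail the build if the layout ever differs. */
        static_assert(sizeof(struct test_record) == 9, "unexpected on-disk layout");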

    Read the article

  • Unable to sign an imported msi.dll assembly using tlbimp

    - by BigMoose
    This seems so trivial, yet I can't get it to work. I have an msi.dll wrapper (named Interop.WindowsInstaller.dll) which I need to sign. The way to do it is by signing it upon import (this specific case is even documented on MSDN: http://msdn.microsoft.com/en-us/library/zec56a0w.aspx). BUT - no matter how I do it (with or without a keyfile, with or without adding "/delaysign"), the generated assembly's size is always 36,864 bytes, and when viewing the DLL's properties there is no "Digital Signatures" tab (needless to say, the DLL is NOT signed). What am I missing here?? (... HELP!...)

    Read the article
