Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 128/173

  • write() in sys/uio.h returns -1

    - by fredrik
    I'm using Ubuntu Server 9.10 on an AMD Phenom II CPU with g++ (Ubuntu 4.4.1-4ubuntu9) 4.4.1, trying to run the application pftp-shit v1.11. The following code in tcp.cc executes successfully: int outfile_fd = open(name, O_CREAT | O_TRUNC | O_RDWR | O_BINARY), which returns a file descriptor int (in my case 6) - name is a char array containing a valid path to a file which I successfully created. fchmod(outfile_fd, S_IRUSR | S_IWUSR) and access(name, W_OK) also run successfully. The issue occurs when running the function (from sys/uio.h) write(outfile_fd, this->control_buffer, read_length), which returns -1. -1 is returned if nothing was written; otherwise a non-negative integer equal to the number of bytes written is returned. Anyone have a clue how I can get the write function to work?
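
    The first thing to check is errno right after the failed call, which usually pins down the cause (EBADF, EFAULT, ENOSPC, EINTR, ...). A minimal sketch, assuming the same descriptor setup; it also retries short writes, which a robust writer should do anyway:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Write the whole buffer, retrying short writes and EINTR.
           Returns 0 on success, -1 on error (errno is set by write()). */
        static int write_all(int fd, const void *buf, size_t len)
        {
            const char *p = (const char *)buf;
            while (len > 0) {
                ssize_t n = write(fd, p, len);
                if (n < 0) {
                    if (errno == EINTR)
                        continue;                  /* interrupted, try again */
                    fprintf(stderr, "write failed: %s\n", strerror(errno));
                    return -1;
                }
                p   += n;                          /* advance past bytes written */
                len -= (size_t)n;
            }
            return 0;
        }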

    Read the article

  • Perl: Fastest way to get directory (and subdirs) size on unix - using stat() at the moment

    - by ivicas
    I am using Perl's stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this:

        sub getDirSize {
          my $dirSize = 0;
          my @dirContent = <*>;
          my $sizeOfFilesInDir = 0;
          foreach my $dirContent (@dirContent) {
            if (-f $dirContent) {
              my $size = (stat($dirContent))[7];
              $dirSize += $size;
            } elsif (-d $dirContent) {
              $dirSize += getDirSize($dirContent);
            }
          }
          return $dirSize;
        }

    The script has been executing for more than an hour and I want to make it faster. I tried the shell du command, but its output (converted to bytes) is not accurate, and it is also quite time consuming. I am working on HP-UX 11i v1.
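
    A sketch of one alternative, assuming the goal is the apparent size (stat field 7) rather than disk blocks: File::Find walks each tree in a single pass, which avoids re-globbing every directory and usually cuts the traversal overhead:

        use strict;
        use warnings;
        use File::Find;

        # Sum the sizes of all plain files under a directory tree.
        sub get_dir_size {
            my ($dir) = @_;
            my $total = 0;
            find(
                sub {
                    return unless -f $_;        # skip dirs, links, specials
                    $total += (stat($_))[7];    # size in bytes
                },
                $dir
            );
            return $total;
        }

        print get_dir_size('/some/parent/dir'), "\n";   # hypothetical path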

    Read the article

  • c++ normalizing data sizes across systems

    - by Bocochoco
    I have a struct with three variables: two unsigned ints and an unsigned char. From my understanding, a C++ char is always 1 byte regardless of which operating system it is on. The same can't be said for other data types. I am looking for a way to normalize PODs so that when they are saved into a binary file, the resulting file is readable on any operating system the code is compiled for. I changed my struct to use 1-byte alignment by adding #pragma as follows:

        #pragma pack(push, 1)
        struct test
        {
            int a;
        };
        #pragma pack(pop)

    but that doesn't necessarily mean that int a is exactly 4 bytes on every OS, I don't think? Is there a way to ensure that a file saved from my code will always be readable?
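
    A sketch of one common approach, assuming a C++11 compiler with <cstdint>: combine fixed-width integer types with 1-byte packing so the record is the same size everywhere (byte order is a separate concern that still has to be handled):

        #include <cstdint>
        #include <fstream>

        #pragma pack(push, 1)
        struct TestRecord {
            std::uint32_t a;      // exactly 4 bytes on every platform
            std::uint32_t b;
            std::uint8_t  flag;   // exactly 1 byte
        };
        #pragma pack(pop)

        static_assert(sizeof(TestRecord) == 9, "unexpected padding");

        int main() {
            TestRecord r{1u, 2u, 3u};
            std::ofstream out("test.bin", std::ios::binary);
            out.write(reinterpret_cast<const char*>(&r), sizeof r);
        }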

    Read the article

  • Unable to sign an imported msi.dll assembly using tlbimp

    - by BigMoose
    This seems so trivial, yet I can't get it to work. I have an msi.dll wrapper (named Interop.WindowsInstaller.dll) which I need to sign. The way to do it is by signing it upon import (this specific case is even documented on MSDN: http://msdn.microsoft.com/en-us/library/zec56a0w.aspx). BUT - no matter how I do it (with or without a key file, with or without adding "/delaysign"), the generated assembly's size is always 36,864 bytes, and when viewing the DLL's properties there is no "Digital Signatures" tab (needless to say, the DLL is NOT signed). What am I missing here?

    Read the article

  • How much memory is reserved when I declare a string?

    - by Bhagya
    What exactly happens, in terms of memory, when I declare something like: char arr[4]; How many bytes are reserved for arr? How is the terminating null character accommodated when I strcpy a string of length 4 into arr? I was writing a socket program, and when I tried to put a NUL at arr[4] (i.e. the 5th memory location), I ended up overwriting the values of some other variables of the program (a buffer overflow) and got into a big mess. Any description of how compilers (gcc is what I used) manage memory?
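
    A small illustration of the sizing rule, assuming a typical gcc setup: a string of length 4 occupies 5 bytes because strcpy also copies the terminating '\0', so the destination array has to be one element larger than the string length:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *src = "abcd";   /* length 4, occupies 5 bytes      */
            char arr[5];                /* room for 4 characters plus '\0' */

            printf("sizeof(arr) = %zu\n", sizeof arr);    /* prints 5 */
            printf("strlen(src) = %zu\n", strlen(src));   /* prints 4 */

            strcpy(arr, src);           /* copies 5 bytes, fits exactly    */
            /* arr[4] holds the '\0'; writing to arr[5] would overflow     */
            printf("copied: %s\n", arr);
            return 0;
        }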

    Read the article

  • Handling of data truncation (short reads/writes) in FUSE

    - by Vi
    I expect any good program to do all its reads and writes in a loop until all data is written or read, without relying on write() writing everything in one call (even with regular files). Am I right? I implemented a simple FUSE filesystem which only allows reading and writing with small buffers, so it very often returns having written fewer bytes than are in the buffer (using -o direct_io). Some programs work, some do not (notably mountlo). Are they buggy, or should programs not expect truncated writes and reads from regular files? In general, are seekable file descriptors expected to truncate data the way sockets and pipes do?
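
    For reference, a sketch of the retry loop that well-behaved programs use when read() may return a short count (the behaviour this filesystem is effectively assuming of its clients); the write side is symmetric:

        #include <errno.h>
        #include <unistd.h>

        /* Read exactly len bytes unless EOF or an error occurs.
           Returns the number of bytes read, or -1 on error.     */
        static ssize_t read_full(int fd, void *buf, size_t len)
        {
            size_t done = 0;
            while (done < len) {
                ssize_t n = read(fd, (char *)buf + done, len - done);
                if (n < 0) {
                    if (errno == EINTR)
                        continue;      /* interrupted by a signal, retry */
                    return -1;
                }
                if (n == 0)
                    break;             /* end of file */
                done += (size_t)n;
            }
            return (ssize_t)done;
        }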

    Read the article

  • Understanding character encoding in typical Java web app

    - by Marcus
    Some pseudocode from a typical web app:

        String a = "A bunch of text";   // UTF-16
        saveTextInDb(a);                // write to Oracle VARCHAR(15) column
        String b = readTextFromDb();    // UTF-16
        out.write(b);                   // write to http response

    In the first line we create a Java String, which uses UTF-16. When you save to an Oracle VARCHAR(15), does Oracle also store this as UTF-16? Does the length of an Oracle VARCHAR refer to the number of Unicode characters (and not the number of bytes)? And then when we write b to the ServletResponse, is this being written as UTF-16, or are we by default converting to another encoding like UTF-8?
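
    On the response side, the encoding is whatever the container is told to use (often ISO-8859-1 by default); a sketch, assuming a servlet and that UTF-8 output is wanted:

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.http.HttpServletResponse;

        public class EncodingExample {
            // Write a Java String (UTF-16 in memory) to the response as UTF-8 bytes.
            static void writeUtf8(HttpServletResponse response, String text) throws IOException {
                response.setContentType("text/html");
                response.setCharacterEncoding("UTF-8");  // must be set before getWriter()
                PrintWriter out = response.getWriter();
                out.write(text);                         // chars are encoded to UTF-8 here
            }
        }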

    Read the article

  • Is there anything in the FTP protocol like the HTTP Range header?

    - by Cheeso
    Suppose I want to transfer just a portion of a file over FTP - is it possible using the standard FTP protocol? In HTTP I could use a Range header in the request to specify the data range of the remote resource. If it's a 1 MB file, I could ask for the bytes from 600k to 700k. Is there anything like that in FTP? I am reading the FTP RFC and don't see anything, but want to make sure I'm not missing something. There's a REST (restart) command in FTP - would that work?
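
    A sketch of the exchange this maps to, assuming the server supports REST in stream mode (standardized in RFC 3659): REST sets the starting offset for the next RETR, and the client simply closes the data connection once it has read as much as it wants - there is no built-in way to specify an end offset:

        TYPE I
        PASV
        REST 614400            (start 600 KB into the file)
        RETR bigfile.bin       (server begins sending at that offset)
        ... read ~100 KB, then ABOR / close the data connection ...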

    Read the article

  • How to receive a datastream from a device on your computer, in C#

    - by WebDevHobo
    I plan to build a small audio-recorder app in C#. My laptop has a built-in microphone that's always active, so I want to use that as an early-stage test. I would simply start recording, save the file as a .wav, or even use the LAME DLL to convert it to an MP3. The problem is, I don't know how to talk to that microphone. Do I use a library that can detect the device, or do I just catch a stream of bytes from the port that the device is on? I don't have any experience with receiving data from connected devices. I suppose I'll need to read all the data into a byte array and then serialize it into a WAV file, but I'm not sure. Can I get some pointers on this subject?

    Read the article

  • Simplest way to create a wrapper class around some strings for a WPF DataGrid?

    - by Joel
    I'm building a simple hex editor in C#, and I've decided to use each cell in a DataGrid to display a byte*. I know that DataGrid will take a list and display each object in the list as a row, and each of that object's properties as columns. I want to display rows of 16 bytes each, which would require a wrapper with 16 string properties. While doable, it's not the most elegant solution. Is there an easier way? I've already tried creating a wrapper around a public string array of size 16, but that doesn't seem to work. Thanks. *The rationale for this is that I can have spaces between each byte without having to strip them all out when I want to save my edited file. Also it seems like it'll be easier to label the rows and columns.
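
    One way to avoid hand-writing 16 properties is to expose an indexer and generate the columns in code, binding each column to an indexer path; a sketch, assuming the hypothetical names HexRow and BuildColumns:

        using System.Windows.Controls;
        using System.Windows.Data;

        public class HexRow
        {
            private readonly string[] cells = new string[16];   // one row = 16 byte strings

            public string this[int i]        // bindable as "[0]" .. "[15]"
            {
                get { return cells[i]; }
                set { cells[i] = value; }
            }
        }

        public static class HexGridSetup
        {
            // Create 16 text columns, each bound to the row's indexer.
            public static void BuildColumns(DataGrid grid)
            {
                grid.AutoGenerateColumns = false;
                for (int i = 0; i < 16; i++)
                {
                    grid.Columns.Add(new DataGridTextColumn
                    {
                        Header = i.ToString("X"),            // column label 0..F
                        Binding = new Binding("[" + i + "]") // binds to this[i]
                    });
                }
            }
        }

    The DataGrid's ItemsSource would then be a list of HexRow objects, one per 16-byte slice of the file.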

    Read the article

  • Would this union work if char had stricter alignment requirements than int?

    - by paxdiablo
    Recently I came across the following snippet, which is an attempt to ensure all bytes of i (and no more) are accessible as individual elements of c:

        union {
            int i;
            char c[sizeof(int)];
        };

    Now this seems a good idea, but I wonder if the standard allows for the case where the alignment requirements for char are more restrictive than those for int. In other words, is it possible to have a four-byte int which is required to be aligned on a four-byte boundary with a one-byte char (it is one byte, by definition, see below) required to be aligned on a sixteen-byte boundary? And would this stuff up the use of the union above? Two things to note. I'm talking specifically about what the standard allows here, not what a sane implementer/architecture would provide. I'm using the term "byte" in the ISO C sense, where it's the width of a char, not necessarily 8 bits.
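
    For what it's worth, a quick way to see what one particular implementation chooses (this shows a single implementation's behaviour, not what the standard permits); a sketch assuming a C11 compiler:

        #include <stdio.h>
        #include <stdalign.h>          /* C11: alignof */

        union intbytes {
            int  i;
            char c[sizeof(int)];
        };

        int main(void)
        {
            printf("alignof(char)           = %zu\n", alignof(char));
            printf("alignof(int)            = %zu\n", alignof(int));
            printf("alignof(union intbytes) = %zu\n", alignof(union intbytes));
            printf("sizeof(union intbytes)  = %zu\n", sizeof(union intbytes));
            return 0;
        }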

    Read the article

  • Getting list of all existing vtables.

    - by Patrick
    In my application I have quite a few void pointers (this is for historical reasons: the application was originally written in pure C). In one of my modules I know that the void pointers point to instances of classes that could inherit from a known base class, but I cannot be 100% sure of it. Therefore, doing a dynamic_cast on the void pointer might cause problems. Possibly, the void pointer even points to a plain struct (so no vptr in the struct). I would like to investigate the first 4 bytes of the memory the void pointer is pointing to, to see if this is the address of a valid vtable. I know this is platform-specific, maybe even compiler-version-specific, but it could help me in moving the application forward and getting rid of all the void pointers over a limited time period (let's say 3 years). Is there a way to get a list of all vtables in the application, or a way to check whether a pointer points to a valid vtable, and whether the instance pointing to that vtable inherits from a known base class?

    Read the article

  • PHP Force Download Causing 0 Byte Files

    - by Alex
    Hey, I'm trying to force download files from my web server using PHP. I'm not a pro in PHP, but I just can't seem to get around the problem of files downloading with a size of 0 bytes. CODE:

        $filename = "FILENAME...";
        header("Content-type: $type");
        header("Content-Disposition: attachment;filename=$filename");
        header("Content-Transfer-Encoding: binary");
        header('Pragma: no-cache');
        header('Expires: 0');
        set_time_limit(0);
        readfile($file);

    Can anybody help? Thanks.
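
    A sketch of a variant that usually avoids the 0-byte symptom (note that the snippet above builds $filename but passes a different variable, $file, to readfile()); the path used here is hypothetical, and it assumes nothing has been output before the headers:

        <?php
        $path = '/var/www/files/example.zip';   // hypothetical path on disk

        if (!is_readable($path)) {
            header('HTTP/1.0 404 Not Found');
            exit;
        }

        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($path) . '"');
        header('Content-Transfer-Encoding: binary');
        header('Content-Length: ' . filesize($path));
        header('Pragma: no-cache');
        header('Expires: 0');

        set_time_limit(0);
        if (ob_get_level()) {
            ob_end_clean();     // stray buffered output would corrupt the download
        }
        readfile($path);
        exit;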

    Read the article

  • Website Images not indexed by Google, Yahoo and Bing

    - by Nabil Kadimi
    Hi, my classifieds website has been online since 2006. The HTML pages are indexed and rank as expected, whereas a search on Google Images for site:example.com returns nothing, and on Yahoo or Bing it returns only a few image results, 8 to 10. Here is an example of the response HTTP headers as reported by Firebug:

        Date             Sat, 15 Jan 2011 20:38:21 GMT
        Server           Apache
        Cache-Control    max-age=34560000
        Expires          Sun, 19 Feb 2012 20:38:21 GMT
        Accept-Ranges    bytes
        Last-Modified    Fri, 14 Jan 2011 21:59:16 GMT
        Vary             Accept-Encoding
        Content-Encoding gzip
        Content-Length   21675
        Connection       close
        Content-Type     image/jpeg

    What should I do to tell search engines to index my website's images? Thanks in advance.

    Read the article

  • Zebra Label Printer with C#

    - by user3702654
    I'm having trouble printing a label on a ZDesigner GK420T using C# .NET. I converted the following ZPL string to bytes and passed it to the printer:

        ^XA
        ^FO3,3^AD^FDZEBRA^FS
        ^XZ

    The expected outcome was that the printer would print 'ZEBRA', but it didn't. My C# code:

        StringBuilder sb = new StringBuilder();
        if (frmPrintJob._type != 1)
        {
            sb.AppendLine("^XA");
            sb.AppendLine("^FO3,3^AD^FDZEBRA^FS");
            sb.AppendLine("^XZ");
        }

        int intTotalPrinted = 0;
        for (int i = 1; i <= NoOfCopies; i++)
        {
            if (RawPrinterHelper.SendStringToPrinter(PrinterName, sb.ToString()) == true)
                intTotalPrinted++;
        }

    What am I doing wrong here? Do I need any extra code?

    Read the article

  • Java blocking socket returning incomplete ByteBuffer

    - by evandro-carrenho
    I have a SocketChannel configured as blocking, but when reading 5 KB byte buffers from this socket, I sometimes get an incomplete buffer.

        ByteBuffer messageBody = ByteBuffer.allocate(5 * 1024);
        messageBody.mark();
        messageBody.order(ByteOrder.BIG_ENDIAN);
        int msgByteCount = channel.read(messageBody);

    Occasionally, messageBody is not completely filled and channel.read() does not return -1 or throw an exception, but the actual number of bytes read (which is less than 5 KB). Has anyone experienced a similar problem?
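
    This is expected: even on a blocking channel, read() only guarantees that at least one byte is transferred before it returns. A sketch of the usual fix - loop until the buffer has no space remaining (or the peer closes):

        import java.io.EOFException;
        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.SocketChannel;

        final class ChannelIo {
            // Fill the buffer completely, or fail if the peer closes first.
            static void readFully(SocketChannel channel, ByteBuffer buffer) throws IOException {
                while (buffer.hasRemaining()) {
                    int n = channel.read(buffer);
                    if (n == -1) {
                        throw new EOFException("peer closed before the buffer was filled");
                    }
                }
            }
        }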

    Read the article

  • Looking at the C++ new[] cookie. How portable is this code?

    - by carleeto
    I came up with this as a quick solution to a debugging problem - I have the pointer variable and its type, I know it points to an array of objects allocated on the heap, but I don't know how many. So I wrote this function to look at the cookie that stores the number of bytes when memory is allocated on the heap:

        template< typename T >
        int num_allocated_items( T *p )
        {
            return *((int*)p - 4) / sizeof(T);
        }

        // test
        #include <iostream>

        int main( int argc, char *argv[] )
        {
            using std::cout;
            using std::endl;

            typedef long double testtype;
            testtype *p = new testtype[ 45 ];

            // prints 45
            std::cout << "num allocated = " << num_allocated_items<testtype>(p) << std::endl;

            delete[] p;
            return 0;
        }

    I'd like to know just how portable this code is.

    Read the article

  • Get Python 2.7's 'json' to not throw an exception when it encounters random byte strings

    - by Chris Dutrow
    I'm trying to encode a dict object into JSON using Python 2.7's json module (i.e. import json). The object has some byte strings in it that are "pickled" data created with cPickle, so for json's purposes they are basically random byte strings. I was using django.utils's simplejson and this worked fine. But I recently switched to Python 2.7 on Google App Engine and simplejson doesn't seem to be available anymore. Now that I am using json, it throws an exception when it encounters bytes that aren't valid UTF-8. The error I'm getting is: UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte. It would be nice if it printed out a string of the character codes like a debugger might, e.g. \u0002]q\u0000U\u001201. But I really don't much care how it handles this data, as long as it doesn't throw an exception and continues serializing the information that it does recognize. How can I make this happen? Thanks!
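
    A sketch of one common workaround, assuming the pickled values can be wrapped before serialization: base64-encode the raw byte strings so the JSON payload is plain ASCII, and reverse the wrapping after json.loads:

        import base64
        import json
        import cPickle as pickle   # Python 2.7

        def encode_blobs(d):
            """Return a copy of d with raw byte-string values made JSON-safe."""
            safe = {}
            for key, value in d.items():
                if isinstance(value, str):                     # raw bytes in Python 2
                    safe[key] = {'__b64__': base64.b64encode(value)}
                else:
                    safe[key] = value
            return safe

        def decode_blobs(d):
            """Reverse encode_blobs after json.loads."""
            out = {}
            for key, value in d.items():
                if isinstance(value, dict) and '__b64__' in value:
                    out[key] = base64.b64decode(value['__b64__'])
                else:
                    out[key] = value
            return out

        record = {'name': u'example', 'blob': pickle.dumps({'x': 2}, 2)}
        text = json.dumps(encode_blobs(record))
        restored = decode_blobs(json.loads(text))
        assert pickle.loads(restored['blob']) == {'x': 2}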

    Read the article

  • How can I get an image too big from a server?

    - by Daniel Calderon Mori
    I'm currently developing for BlackBerry and just bumped into this problem as I was trying to download an image from a server. The servlet which the device communicates with is working correctly, as I have made a number of tests for it, but it gives me HTTP error 413 ("Request entity too large"). I figure I will just get the bytes portion by portion. How can I accomplish this? This is the code of the servlet (the doGet() method):

        try {
            ImageIcon imageIcon = new ImageIcon("c:\\Users\\dcalderon\\prueba.png");
            Image image = imageIcon.getImage();
            PngEncoder pngEncoder = new PngEncoder(image, true);
            output.write(pngEncoder.pngEncode());
        } finally {
            output.close();
        }

    Thanks. It's worth mentioning that I am developing both the client side and the server side.

    Read the article

  • message queue : selection and sizing

    - by user238591
    Hi, I have 20 messages/s, each 1 - 1.5 MB. I need high availability (2 to 4 servers minimum). I need low latency (high daily volume - keeping everything in RAM is preferred). I need a persistent poison-message queue. There are only a few clients (about 16), all local. I can have 12-16 GB of RAM per server (broker). Which JMS message queue / messaging product would you recommend, and on what configuration (CPU/RAM)? Can I propose optional NAS persistence (in case of final delivery failure)? Thanks

    Read the article

  • Is the "message" of an exception culturally independent?

    - by Ray Hayes
    In an application I'm developing, I need to handle a socket timeout differently from a general socket exception. The problem is that many different issues result in a SocketException, and I need to know what the cause was. There is no inner exception reported, so the only information I have to work with is the message: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond". This question has a general and a specific part: is it acceptable to write conditional logic based upon the textual representation of an exception? And is there a way to avoid needing this kind of exception handling? Example code below...

        try
        {
            IPEndPoint endPoint = null;
            client.Client.ReceiveTimeout = 1000;
            bytes = client.Receive(ref endPoint);
        }
        catch (SocketException se)
        {
            if (se.Message.Contains("did not properly respond after a period of time"))
            {
                // Handle timeout differently..
            }
        }
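
    A sketch of the culture-independent alternative, assuming .NET 2.0 or later: SocketException carries a numeric SocketErrorCode (and ErrorCode), so the timeout case can be detected without parsing the localized message:

        using System.Net;
        using System.Net.Sockets;

        static class UdpReceiver
        {
            // Distinguish a receive timeout from other socket failures
            // without inspecting the exception's localized message text.
            public static byte[] TryReceive(UdpClient client)
            {
                try
                {
                    IPEndPoint remote = null;
                    client.Client.ReceiveTimeout = 1000;
                    return client.Receive(ref remote);
                }
                catch (SocketException se)
                {
                    if (se.SocketErrorCode == SocketError.TimedOut)
                    {
                        return null;    // timeout: the caller can retry
                    }
                    throw;              // anything else is a real error
                }
            }
        }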

    Read the article

  • memory usage in iOS

    - by varun
    My app has a simple UI with simple buttons, a date picker, a picker view, a table view, an action sheet, a toolbar, alert boxes, etc. No images, no network access - just a plain, simple UI. It accesses an SQLite database a lot. The ARC option is enabled. I have a few questions. In .h files, I am defining IBOutlets like @property(nonatomic, retain) IBOutlet UIButton *bt; - where do I need to set bt = nil, in didReceiveMemoryWarning or viewDidLoad? Live Bytes in the Instruments tool is 4-5 MB - is that acceptable, or do I need to reduce memory usage, and if so, how? Please mention a few important points. Also, what needs to be added to the following methods: applicationDidReceiveMemoryWarning and UIApplicationDidReceiveMemoryWarningNotification?

    Read the article

  • Ignoring "Content is not allowed in trailing section" SAXException

    - by Paul J. Lucas
    I'm using Java's DocumentBuilder.parse(InputStream) to parse an XML document. Occasionally, I get malformed XML documents in that there is extra junk after the final > that causes a SAXException: "Content is not allowed in trailing section." (In the cases I've seen, the junk is simply one or more null bytes.) I don't care what's after the final >. Is there an easy way to parse an entire XML document in Java and have it ignore any trailing junk? Note that by "ignore" I don't simply mean to catch and discard the exception: I mean to ignore the trailing junk, throw no exception, and return the Document object, since the XML up to and including the final > is valid.
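
    A sketch of one workaround, assuming the junk really is trailing NUL bytes as described: buffer the stream, trim everything after the last '>', and hand the cleaned bytes to the parser:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.ParserConfigurationException;
        import org.w3c.dom.Document;
        import org.xml.sax.SAXException;

        final class LenientXml {
            // Parse an XML document, ignoring any bytes after the final '>'.
            static Document parse(InputStream in)
                    throws IOException, SAXException, ParserConfigurationException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[8192];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                byte[] data = buf.toByteArray();

                int end = data.length;
                while (end > 0 && data[end - 1] != '>') {
                    end--;                         // drop trailing NULs or other junk
                }

                DocumentBuilder builder =
                        DocumentBuilderFactory.newInstance().newDocumentBuilder();
                return builder.parse(new ByteArrayInputStream(data, 0, end));
            }
        }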

    Read the article

  • GetDiskFreeSpaceEx in winCE 5.0 emulator?

    - by vidhyarthi
    Hi, I am trying to use GetDiskFreeSpaceEx in the Windows CE 5.0 emulator. This is the code I have written:

        ULARGE_INTEGER notused, totalBytes, freeBytes;
        GetDiskFreeSpaceEx(_T("\\Windows"), &notused, &totalBytes, &freeBytes);
        printf(" Error in disk %d ", GetLastError());
        printf(" values = notused %d, totalBytes %d, freeBytes %d", notused, totalBytes, freeBytes);

    Output:

        14540 PID:3db620e TID:3e5c83e Error in disk 0
        14540 PID:3db620e TID:3e5c83e values = notused 25987296, totalBytes 0, freeBytes 26234880

    The total bytes value that I get is zero. Am I missing something, or is that expected in the emulator?
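
    One thing worth checking first, sketched below: ULARGE_INTEGER is a 64-bit structure, so passing it to printf with %d reads only part of it and misaligns the remaining arguments, which by itself can make the printed values look wrong. Printing the QuadPart members with a 64-bit format specifier (and checking the call's return value) gives a truer picture:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            ULARGE_INTEGER freeToCaller, totalBytes, freeBytes;

            if (!GetDiskFreeSpaceEx(TEXT("\\Windows"),
                                    &freeToCaller, &totalBytes, &freeBytes))
            {
                printf("GetDiskFreeSpaceEx failed, error %lu\n", GetLastError());
                return 1;
            }

            /* QuadPart is an unsigned 64-bit integer; %I64u is the
               format specifier understood by the Microsoft CRT.     */
            printf("available to caller: %I64u bytes\n", freeToCaller.QuadPart);
            printf("total:               %I64u bytes\n", totalBytes.QuadPart);
            printf("free:                %I64u bytes\n", freeBytes.QuadPart);
            return 0;
        }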

    Read the article

  • How do I build a python string from a raw (binary) ctype buffer?

    - by fcrazy
    I'm playing with Python and ctypes and I can't figure out how to resolve this problem. I call a C function which fills a raw binary buffer. My code looks like this:

        class Client():
            def __init__(self):
                self.__BUFSIZE = 1024*1024
                self.__buf = ctypes.create_string_buffer(self.__BUFSIZE)
                self.client = ctypes.cdll.LoadLibrary(r"I:\bin\client.dll")

            def do_something(self):
                len_written = self.client.fill_raw_buffer(self.__buf, self.__BUFSIZE)
                my_string = repr(self.__buf.value)
                print my_string

    The problem is that I'm receiving binary data (containing 0x00 bytes) and it's truncated when I try to build my_string. How can I build my_string if self.__buf contains null bytes (0x00)? Any idea is welcome. Thanks
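
    A sketch of the usual fix, assuming fill_raw_buffer returns the number of bytes it wrote: .value stops at the first NUL byte, whereas .raw (or ctypes.string_at) preserves embedded zeros, so slice the raw buffer by the returned length:

        import ctypes

        BUFSIZE = 1024 * 1024
        buf = ctypes.create_string_buffer(BUFSIZE)

        # len_written = client.fill_raw_buffer(buf, BUFSIZE)   # as in the question
        len_written = 16                    # stand-in value for illustration

        # .value truncates at the first NUL byte; .raw keeps every byte.
        my_string = buf.raw[:len_written]

        # Equivalent: copy len_written bytes starting at the buffer's address.
        same_string = ctypes.string_at(buf, len_written)

        assert my_string == same_string
        print repr(my_string)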

    Read the article
