Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

Page 102/238

  • x86 Assembly: Before Making a System Call on Linux Should You Save All Registers?

    - by mudge
    I have the below code that opens up a file, reads it into a buffer and then closes the file. The close file system call requires that the file descriptor number be in the ebx register. The ebx register gets the file descriptor number before the read system call is made. My question is: should I save the ebx register on the stack (or somewhere else) before I make the read system call, in case int 80h trashes the ebx register, and then restore it for the close system call? Or is the code I have below fine and safe? I have run the code and it works; I'm just not sure whether it is considered good assembly practice, because I don't save the ebx register before the int 80h read call.

        ;; open up the input file
        mov eax,5               ; open file system call number
        mov ebx,[esp+8]         ; null terminated string file name, first command line parameter
        mov ecx,0o              ; access type: O_RDONLY
        int 80h                 ; file handle or negative error number put in eax
        test eax,eax
        js Error                ; test sign flag (SF) for negative number which signals error

        ;; read in the full input file
        mov ebx,eax             ; assign input file descriptor
        mov eax,3               ; read system call number
        mov ecx,InputBuff       ; buffer to read into
        mov edx,INPUT_BUFF_LEN  ; total bytes to read
        int 80h
        test eax,eax
        js Error                ; if eax is negative then error
        jz Error                ; if no bytes were read then error
        add eax,InputBuff       ; add size of input to the beginning of InputBuff location
        mov [InputEnd],eax      ; assign address of end of input

        ;; close the input file
        ;; file descriptor is already in ebx
        mov eax,6               ; close file system call number
        int 80h

    Read the article

  • Is this the right way to write a ProtocolDecoder in MINA?

    - by phpscriptcoder
        public class CustomProtocolDecoder extends CumulativeProtocolDecoder {
            byte currentCmd = -1;
            int currentSize = -1;
            boolean isFirst = false;

            @Override
            protected boolean doDecode(IoSession is, ByteBuffer bb, ProtocolDecoderOutput pdo) throws Exception {
                if (currentCmd == -1) {
                    currentCmd = bb.get();
                    currentSize = Packet.getSize(currentCmd);
                    isFirst = true;
                }
                while (bb.remaining() > 0) {
                    if (!isFirst) {
                        currentCmd = bb.get();
                        currentSize = Packet.getSize(currentCmd);
                    } else
                        isFirst = false;
                    //System.err.println(currentCmd + " " + bb.remaining() + " " + currentSize);
                    if (bb.remaining() >= currentSize - 1) {
                        Packet p = PacketDecoder.decodePacket(bb, currentCmd);
                        pdo.write(p);
                    } else {
                        bb.flip();
                        return false;
                    }
                }
                if (bb.remaining() == 0)
                    return true;
                else
                    return false;
            }
        }

    Anyone see anything wrong with this code? When a lot of packets are received at once, even when only one client is connected, one of them might get cut off at the end (12 bytes instead of 15 bytes, for example), which is obviously bad.

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I had access somehow to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these out (in time) and they will usually be happening close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at PyTables.) I'd prefer to avoid this though if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • c++ struct size

    - by kiokko89
        #include <iostream>
        using namespace std;

        struct CExample {
            int a;
        };

        int main(int argc, char* argv[])
        {
            CExample ce;
            CExample ce2;
            cout << "Size:" << sizeof(ce)  << " Address: " << &ce  << endl;
            cout << "Size:" << sizeof(ce2) << " Address: " << &ce2 << endl;

            CExample ceArr[2];
            cout << "Size:" << sizeof(ceArr[0]) << " Address: " << &ceArr[0] << endl;
            cout << "Size:" << sizeof(ceArr[1]) << " Address: " << &ceArr[1] << endl;

            return 0;
        }

    Excuse me, I'm just a beginner, but I'd like to know why, with this code, there is a difference of 12 bytes between the addresses of the first two objects (ce and ce2) (I thought about data alignment), but only a difference of 4 bytes between the two objects in the array. Sorry for my bad English...

    Read the article

  • Simple binary File I/O problem with cstdio(c++)

    - by Atilla Filiz
    The C++ program below fails to read the file. I know using cstdio is not good practice, but that's what I am used to, and it should work anyway.

        $ ls -l l.uyvy
        -rw-r--r-- 1 atilla atilla 614400 2010-04-24 18:11 l.uyvy
        $ ./a.out l.uyvy
        Read 0 bytes out of 614400, possibly wrong file

    Code:

        #include <cstdio>

        int main(int argc, char* argv[])
        {
            FILE *fp;
            if (argc < 2) {
                printf("usage: %s <input>\n", argv[0]);
                return 1;
            }
            fp = fopen(argv[1], "rb");
            if (!fp) {
                printf("erör, cannot open %s for reading\n", argv[1]);
                return -1;
            }
            int bytes_read = fread(imgdata, 1, 2*IMAGE_SIZE, fp); // 2 bytes per pixel
            fclose(fp);
            if (bytes_read < 2*IMAGE_SIZE) {
                printf("Read %d bytes out of %d, possibly wrong file\n", bytes_read, 2*IMAGE_SIZE);
                return -1;
            }
            return 0;
        }

    Read the article

  • c programming malloc question

    - by user535256
    Hello guys, just a query regarding the C malloc() function. I am read()ing a number of bytes from a file to get the length of a filename, like this:

        read(file, &namelen, sizeof(unsigned char));

    The variable namelen is of type unsigned char and was written into the file as that type (1 byte). Now namelen holds the length of the filename, i.e. namelen = 8 if the file name was 'data.txt', plus an extra \0 at the end; that is working fine. I have a structure recording file info, i.e. filename, file length, content size etc.:

        struct fileinfo {
            char *name;
            ...... other variables like size etc
        };
        struct fileinfo *files;

    Question: I want to make that files.name variable the size of namelen, i.e. 8, so I can successfully write the filename into it, like:

        files[i].name = malloc(namelen)

    However, I don't want it to be malloc(sizeof(namelen)), as that would make it file.name[1], the size of its type unsigned char. I want it to be the value stored inside the variable namelen, i.e. 8, so file.name[8], and data.txt can be read() from the file as 8 bytes and written straight into file.name[8]. Is there a way to do this? My current code is this and returns 4, not 8:

        files[i].name = malloc(namelen);
        //strlen(files[i].name) - returns 4
        //perhaps something like malloc(sizeof(&namelen)) but does not work

    Thanks for any suggestions.

    I have tried the suggestions, guys, but I now get a segmentation fault error using:

        printf("\nsizeofnamelen=%x\n",namelen); //gives 8 for data.txt
        files[i].name = malloc(namelen + 1);
        read(file, &files[i].name, namelen);
        int len=strlen(files[i].name);
        printf("\nnamelen=%d",len);
        printf("\nname=%s\n",files[i].name);

    When I try to open() the file with that files[i].name variable it won't open, so the data does not appear to be getting written inside the read() &files[i].name, and strlen() causes a segmentation error, as does trying to print the filename.

    Read the article

  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function which gets an input buffer of n bytes, and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say that I'm using a vector which uses static preallocated memory. Imagine this is NOT an STL vector.) The usual approach is:

        void processData(vector<T> &vec) {
            vector<T> &aux = new vector<T>(vec.size()); // dynamically allocate memory
            // process data
        }
        // usage: processData(v)

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance. The buffer is allocated only once at startup. I want that whenever I allocate a vector, an auxiliary buffer for my processData function is allocated automatically. I can do something similar with a template function:

        static void _processData(vector<T> &vec, vector<T> &aux) {
            // process data
        }

        template<size_t sz>
        void processData(vector<T> &vec) {
            static aux_buffer[sz];
            vector aux(vec.size(), aux_buffer); // use aux_buffer for the vector
            _processData(vec, aux);
        }
        // usage: processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything since I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs around this problem?

    Read the article

  • What's the cleanest way to do byte-level manipulation?

    - by Jurily
    I have the following C struct from the source code of a server, and many similar:

        // preprocessing magic: 1-byte alignment
        typedef struct AUTH_LOGON_CHALLENGE_C
        {
            // 4 byte header
            uint8  cmd;
            uint8  error;
            uint16 size;

            // 30 bytes
            uint8  gamename[4];
            uint8  version1;
            uint8  version2;
            uint8  version3;
            uint16 build;
            uint8  platform[4];
            uint8  os[4];
            uint8  country[4];
            uint32 timezone_bias;
            uint32 ip;
            uint8  I_len;

            // I_len bytes
            uint8  I[1];
        } sAuthLogonChallenge_C;

        // usage (the actual code that will read my packets):
        sAuthLogonChallenge_C *ch = (sAuthLogonChallenge_C*)&buf[0]; // where buf is a raw byte array

    These are TCP packets, and I need to implement something that emits and reads them in C#. What's the cleanest way to do this? My current approach involves

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        unsafe struct foo { ... }

    and a lot of fixed statements to read and write it, but it feels really clunky, and since the packet itself is not fixed length, I don't feel comfortable using it. Also, it's a lot of work. However, it does describe the data structure nicely, and the protocol may change over time, so this may be ideal for maintenance. What are my options? Would it be easier to just write it in C++ and use some .NET magic to use that?
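    One commonly suggested alternative to unsafe/fixed code, sketched here under assumptions: keep a Pack = 1 StructLayout declaration for the fixed-size part of the packet and let Marshal.PtrToStructure map the raw bytes onto it, then pull out the variable-length I field separately using I_len. The struct and helper names below simply mirror the C declaration above; framing and endianness handling are assumed to happen elsewhere.

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        struct AuthLogonChallengeHeader
        {
            public byte cmd;
            public byte error;
            public ushort size;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)] public byte[] gamename;
            public byte version1;
            public byte version2;
            public byte version3;
            public ushort build;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)] public byte[] platform;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)] public byte[] os;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)] public byte[] country;
            public uint timezone_bias;
            public uint ip;
            public byte I_len;
        }

        static class ChallengeReader
        {
            // Maps the fixed-size header onto the struct, then copies the I_len-byte tail.
            public static AuthLogonChallengeHeader Parse(byte[] buf, out byte[] I)
            {
                int headerSize = Marshal.SizeOf(typeof(AuthLogonChallengeHeader));
                GCHandle handle = GCHandle.Alloc(buf, GCHandleType.Pinned);
                try
                {
                    var header = (AuthLogonChallengeHeader)Marshal.PtrToStructure(
                        handle.AddrOfPinnedObject(), typeof(AuthLogonChallengeHeader));
                    I = new byte[header.I_len];
                    Array.Copy(buf, headerSize, I, 0, header.I_len);
                    return header;
                }
                finally
                {
                    handle.Free();
                }
            }
        }

    Emitting a packet can follow the same pattern in reverse with Marshal.StructureToPtr into a pinned buffer, with the I bytes appended afterwards.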

    Read the article

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read 100s of articles about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(....). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget(Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();

            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];

            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }

            // clean up
            istream.close();
            connection.disconnect();

            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?

    Read the article

  • Make function declarations based on function definitions

    - by Clinton Blackmore
    I've written a .cpp file with a number of functions in it, and now need to declare them in the header file. It occurred to me that I could grep the file for the class name and get the declarations that way, and it would've worked well enough, too, had the complete function declaration before the definition (return type, name, and parameters, but not the function body) been on one line. It seems to me that this is something that would be generally useful, and must've been solved a number of times. I am happy to edit the output and not worried about edge cases; anything that gives me results that are right 95% of the time would be great. So, if, for example, my .cpp file had:

        i2cstatus_t NXTI2CDevice::writeRegisters(
            uint8_t  start_register,   // start of the register range
            uint8_t  bytes_to_write,   // number of bytes to write
            uint8_t* buffer = 0)       // optional user-supplied buffer
        {
            ...
        }

    and a number of other similar functions, getting this back:

        i2cstatus_t NXTI2CDevice::writeRegisters(
            uint8_t  start_register,   // start of the register range
            uint8_t  bytes_to_write,   // number of bytes to write
            uint8_t* buffer = 0)

    for inclusion in the header file, after a little editing, would be fine. Getting this back:

        i2cstatus_t writeRegisters(
            uint8_t  start_register,
            uint8_t  bytes_to_write,
            uint8_t* buffer);

    or this:

        i2cstatus_t writeRegisters(uint8_t start_register, uint8_t bytes_to_write, uint8_t* buffer);

    would be even better.

    Read the article

  • Generic that takes only numeric types (int, double, etc.)?

    - by brandon
    In a program I'm working on, I need to write a function to take any numeric type (int, short, long, etc.) and shove it into a byte array at a specific offset. There is a BitConverter.GetBytes() method that takes the numeric type and returns it as a byte array, and this method only takes numeric types. So far I have:

        private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct
        {
            Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd));
        }

    So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I would want a number that is small enough to be a short to really take up 4 bytes. On the flip side, there are times when I want an int to be crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads. If the where clause of a generic supported something like "where T : int || long || etc" I would be OK. (And no need to explain why they don't support that; the reason is fairly obvious.) Any help would be greatly appreciated!

    Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)
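    A hedged sketch of one way to avoid writing an overload per type: let the runtime pick the matching BitConverter.GetBytes overload through dynamic and copy however many bytes it produced. This keeps the single generic signature above, relies on the caller's explicit cast (as in the (short)10 example) to choose the width, and fails at runtime rather than compile time for unsupported types; the class name is just for illustration.

        using System;

        static class ByteArrayWriter
        {
            // Writes the binary form of any BitConverter-supported value type into
            // destination at the given offset; the width comes from the static type
            // of the argument (short -> 2 bytes, int -> 4 bytes, and so on).
            public static void AddToByteArray<T>(byte[] destination, int offset, T toAdd)
                where T : struct
            {
                byte[] bytes = BitConverter.GetBytes((dynamic)toAdd); // runtime overload resolution
                Buffer.BlockCopy(bytes, 0, destination, offset, bytes.Length);
            }
        }

        // Usage: fills array[3] and array[4] with the little-endian bytes of (short)10.
        // byte[] array = new byte[2048];
        // ByteArrayWriter.AddToByteArray(array, 3, (short)10);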

    Read the article

  • Binary serialization and deserialization without creating files (via strings)

    - by the_V
    Hi, I'm trying to create a class that will contain functions for serializing/deserializing objects to/from string. That's what it looks like now:

        public class BinarySerialization
        {
            public static string SerializeObject(object o)
            {
                string result = "";
                if ((o.GetType().Attributes & TypeAttributes.Serializable) == TypeAttributes.Serializable)
                {
                    BinaryFormatter f = new BinaryFormatter();
                    using (MemoryStream str = new MemoryStream())
                    {
                        f.Serialize(str, o);
                        str.Position = 0;
                        StreamReader reader = new StreamReader(str);
                        result = reader.ReadToEnd();
                    }
                }
                return result;
            }

            public static object DeserializeObject(string str)
            {
                object result = null;
                byte[] bytes = System.Text.Encoding.ASCII.GetBytes(str);
                using (MemoryStream stream = new MemoryStream(bytes))
                {
                    BinaryFormatter bf = new BinaryFormatter();
                    result = bf.Deserialize(stream);
                }
                return result;
            }
        }

    The SerializeObject method works well, but DeserializeObject does not. I always get an exception with the message "End of Stream encountered before parsing was completed". What may be wrong here?
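    The symptom above usually points at the string round trip rather than the formatter: BinaryFormatter output is arbitrary binary, so reading it back through a StreamReader and ASCII.GetBytes drops or mangles bytes. A hedged sketch of the usual fix, Base64-encoding the raw bytes so the string survives intact (illustrative class name, same BinaryFormatter approach as above):

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        static class Base64Serialization
        {
            // Serialize to a Base64 string so every byte survives the text round trip.
            public static string SerializeObject(object o)
            {
                var formatter = new BinaryFormatter();
                using (var stream = new MemoryStream())
                {
                    formatter.Serialize(stream, o);
                    return Convert.ToBase64String(stream.ToArray());
                }
            }

            // Decode the Base64 text back to the original bytes before deserializing.
            public static object DeserializeObject(string base64)
            {
                byte[] bytes = Convert.FromBase64String(base64);
                using (var stream = new MemoryStream(bytes))
                {
                    return new BinaryFormatter().Deserialize(stream);
                }
            }
        }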

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW, trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I made to look up what colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code.

    Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm calling at least 360K getters per frame, which I think is a lot of overhead. Each class contains 1D or 2D arrays containing custom enumerations, ints, Colors, Vector2Ds, or bytes. What if I combined all of the classes into just one, and accessed the contents of each array directly? The code would look a mess, and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to arrays will take place before any data is accessed from them.

    As for casting, I stated that I'm using custom enumerations, ints, Colors, Vector2Ds, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to use constant multiplication, addition, subtraction, and division per frame. :)
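    A hedged sketch of the flat-buffer idea discussed above: one preallocated Color[] framebuffer written with plain array indexing (no per-pixel getters/setters), a precomputed palette lookup, and a single Texture2D.SetData upload per frame. The resolution, class and field names are illustrative, not taken from the post.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class FrameBuffer
        {
            const int Width = 256;   // placeholder resolution (~65K pixels per frame)
            const int Height = 256;

            readonly Color[] pixels = new Color[Width * Height]; // allocated once, reused every frame
            readonly Color[] palette;                            // precomputed color lookup table
            readonly Texture2D texture;

            public FrameBuffer(GraphicsDevice device, Color[] palette)
            {
                this.palette = palette;
                texture = new Texture2D(device, Width, Height);
            }

            // Caller supplies a precomputed flat offset (y * Width + x), as the lookup tables already do.
            public void SetPixel(int offset, byte paletteIndex)
            {
                pixels[offset] = palette[paletteIndex]; // direct array access, no property call
            }

            // One bulk upload per frame; the texture is then drawn with a SpriteBatch.
            public Texture2D Commit()
            {
                texture.SetData(pixels);
                return texture;
            }
        }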

    Read the article

  • How to set buffer size in client-server app using sockets?

    - by nelly
    First of all, I am new to networking, so I may say dumb things here. Consider a client-server application using sockets (.NET with C#, if that matters). The client sends some data, the server processes it and sends back a string. The client sends some other data, the server processes it, queries the db and sends back several hundreds of items from the database. The client sends some other type of data and the server notifies some other clients.

    My question is how to set the buffer size correctly for read/write operations. Should I do something like this?

        byte[] buff = new byte[client.ReceiveBufferSize]

    I am thinking of something like this: the client sends data to the server (and the server will follow the same pattern):

        byte[] bytesToSend = new byte[2048] // 2048 to be standard for any command sent by the client
        // bytes 0..1    -> command type
        // bytes 1..2047 -> command parameters

        byte[] bytesToReceive = new byte[8] / byte[64] / byte[8192] // switch(command type)

    But... what happens when a client is notified by the server without sending data? What is the correct way to accomplish what I am trying to do? Thanks for reading.
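    A hedged sketch of the usual way around per-command buffer sizes: frame every message with a small header (here a 4-byte length plus a 1-byte command, purely as an example) and read exactly that many bytes, regardless of how the payload was split across TCP segments. Server-initiated notifications then work the same way: they are just another framed message the client reads whenever it arrives.

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class Framing
        {
            // Sends one message as [4-byte length][1-byte command][payload].
            public static void WriteMessage(NetworkStream stream, byte command, byte[] payload)
            {
                byte[] length = BitConverter.GetBytes(payload.Length + 1);
                stream.Write(length, 0, 4);
                stream.WriteByte(command);
                stream.Write(payload, 0, payload.Length);
            }

            // Reads one complete message back, however many reads that takes.
            public static byte ReadMessage(NetworkStream stream, out byte[] payload)
            {
                byte[] header = ReadExactly(stream, 4);
                int length = BitConverter.ToInt32(header, 0);
                byte[] body = ReadExactly(stream, length);
                payload = new byte[length - 1];
                Array.Copy(body, 1, payload, 0, payload.Length);
                return body[0]; // the command byte
            }

            static byte[] ReadExactly(NetworkStream stream, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = stream.Read(buffer, offset, count - offset);
                    if (read == 0) throw new IOException("connection closed mid-message");
                    offset += read;
                }
                return buffer;
            }
        }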

    Read the article

  • Question about CALL statement

    - by Bruce
    I have the following code in VC++:

        Func5() { StackWalk(); }
        Func4() { Func5(); }

    I am a beginner in x86 assembly language. I am trying to find out the starting address of Func5(). I get Func5()'s return address from its stack frame. Now, before this return address there should be a CALL instruction, so I extract the bytes before the return address. Sometimes it's a near call, like E8 ff ff ff d8; for that instruction I subtract the offset 0x28 from the function's return address to get Func5()'s base address (where it resides in memory).

    The problem is I don't know how to calculate this for an indirect NEAR call. I have been trying to find out how to do it for some time now. I have extracted the five bytes before the return address and they are:

        ff 75 08 ff d2

    I think this stands for CALL ECX (ff d2), but I am not sure. I will be very grateful if someone can tell me what kind of CALL instruction this is and how I can calculate the function's base address from this kind of call.

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses QTcpServer and QTcpSocket facilities (non-blocking). Going through articles on TCP, I understand the following: the original specification of TCP defined the negotiable window size as a 16-bit value, giving a maximum of 65535 bytes, but implementations often use the RFC window-scale extension, which allows the sliding window size to be scaled by bit-shifting to yield a maximum of 1 gigabyte. This is implementation defined.

    This could have resulted in majorly different window sizes on the receiver and sender ends, as the server uses Qt facilities without hardcoding any window size limit. The client first asks for all the information it can, based on the previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for data of several MBs. The server processes these and puts the results into the sender buffer. The client, however, is unable to handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window size. The messages thus get accumulated at the sender's end until the sender's buffer is full too, after which the TCP writes on the sender would block. This however does not happen, as the sender buffer is much larger. Hence this manifests as an increase in memory consumption on the sender's end.

    To prevent this from happening, I used Qt's socket waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. As I see from the behaviour, this blocks the thread writing TCP data until the data has actually been accepted by the receiver's window (which will happen when earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also please point out if any of my understanding is incorrect.

    Read the article

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody. In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works pretty slowly on big files, and as far as I can see there is no other choice (the iteration has to be done), so I'm trying to optimize file loading. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I could read something like 100 MB once and work with that buffer after that, until there is no element left in the buffer, then refill it. But I guess ifstream is already doing that; will it give me more performance, and is there any reason to do it myself? If fstream does buffer, maybe I can change the size of that buffer?

    Added: my current code looks like this (pseudocode):

        // this is done in a loop
        int i1 = input1.read_integer();
        int i2 = input2.read_integer();
        if (!input1.eof() && !input2.eof()) {
            if (i1 < i2) {
                output.write(i1);
                input2.seek_back(sizeof(int));
            } else {
                input1.seek_back(sizeof(int));
                output.write(i2);
            }
        } else {
            if (input1.eof())
                output.write(i2);
            else if (input2.eof())
                output.write(i1);
        }

    What I don't like here:

    - seek_back: I have to seek back to the previous position, as there is no way to peek 4 bytes
    - too much reading from the file
    - if one of the streams is at EOF, it still continues to check that stream instead of putting the contents of the other stream directly to the output, but this is not a big issue, because the chunk sizes are almost always equal

    Can you suggest an improvement for that? Thanks.

    Read the article

  • What is the fastest way to check if files are identical?

    - by ojblass
    If you have 1,000,000 source files, you suspect they are all the same, and you want to compare them, what is the current fastest method to compare those files? Assume they are Java files, and the platform where the comparison is done is not important. cksum is making me cry. When I say identical I mean ALL identical.

    Update: I know about generating checksums. diff is laughable... I want speed.

    Update: Don't get stuck on the fact that they are source files. Pretend, for example, that you took a million runs of a program with very regulated output. You want to prove all 1,000,000 versions of the output are the same.

    Update: read the number of blocks rather than bytes? Immediately throw out those? Is that faster than finding the number of bytes?

    Update: Is this ANY different than the fastest way to compare two files?
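    A hedged sketch of the cheap-first ordering hinted at in the updates: group by file size first (a metadata-only check that rejects mismatches without reading content), then confirm the survivors with a content hash. SHA-256 here is an arbitrary choice of digest; any collision-resistant hash serves the same purpose.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;

        static class DuplicateCheck
        {
            // True if every file in the list has identical contents.
            public static bool AllIdentical(IEnumerable<string> paths)
            {
                var files = paths.ToList();
                if (files.Count < 2) return true;

                // Cheap check first: identical files must have identical lengths.
                long firstLength = new FileInfo(files[0]).Length;
                if (files.Any(p => new FileInfo(p).Length != firstLength))
                    return false;

                // Only then pay for reading: hash each file once and compare digests.
                string firstHash = HashFile(files[0]);
                return files.Skip(1).All(p => HashFile(p) == firstHash);
            }

            static string HashFile(string path)
            {
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(path))
                {
                    return Convert.ToBase64String(sha.ComputeHash(stream));
                }
            }
        }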

    Read the article

  • Servlet File Upload Memory Consumption

    - by Scott
    Hi, I'm using a servlet to do a multi-file upload (using Apache Commons FileUpload to help). A portion of my code is posted below. My problem is that if I upload several files at once, the memory consumption of the app server jumps rather drastically. This would probably be OK if it lasted only until the file upload is finished, but the app server seems to hang on to the memory and never return it to the OS. I'm worried that when I put this into production I'll end up getting an out of memory exception on the server. Any ideas on why this is happening? I'm thinking the server may have started a session and will give the memory back after it expires, but I'm not 100% positive.

        if(ServletFileUpload.isMultipartContent(request))
        {
            ServletFileUpload upload = new ServletFileUpload();
            FileItemIterator iter = upload.getItemIterator(request);
            while(iter.hasNext())
            {
                FileItemStream license = iter.next();
                if(license.getFieldName().equals("upload_button") || license.getName().equals(""))
                    continue;

                //DataInputStream stream = new DataInputStream(license.openStream());
                InputStream stream = license.openStream();
                ArrayList<Integer> byteArray = new ArrayList<Integer>();
                int tempByte;
                do {
                    tempByte = stream.read();
                    byteArray.add(tempByte);
                } while(tempByte != -1);
                stream.close();
                byteArray.remove(byteArray.size()-1);
                byte[] bytes = new byte[byteArray.size()];
                int i = 0;
                for(Integer tByte : byteArray) {
                    bytes[i++] = tByte.byteValue();
                }

    Thanks in advance!!

    Read the article

  • html in do_GET() method of a simple Python webserver

    - by Meeri_Peeri
    I am relatively new to Python but have been doing a lot of different things with it recently and I am liking it a lot. However, I ran into trouble with the following code.

        import http.server
        import socketserver
        import glob
        import random

        class Server(http.server.SimpleHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200, 'OK')
                self.send_header('Content-type', 'html')
                self.end_headers()
                self.wfile.write(bytes("<html> <head><title> Hello World </title> </head> <body>", 'UTF-8'))
                images = glob.glob('*.jpg')
                rand = random.randint(0, len(images)-1)
                imagestring = "<img src = \"" + images[rand] + "\" height = 1028 width = 786 align = \"right\"/> </body> </html>"
                self.wfile.write(bytes(imagestring, 'UTF-8'))

            def serve_forever(port):
                socketserver.TCPServer(('', port), Server).serve_forever()

        if __name__ == "__main__":
            Server.serve_forever(8000)

    What I am trying to do here is grab a random image from multiple images in the directory and add it into the response to a web request. The code works fine, but when I access the server via a browser, the image is not displayed. The HTML of the page is as intended, though. The permissions on the files are 755. Also, I tried to create an index.html file in the do_GET method. That didn't work either. I mean, the index.html was generated fine, but the response in the browser this time did not show anything (not even the hello world in the title).

    Am I missing anything very simple here? I was thinking: should I overload the handle_request of the underlying SocketServer.BaseServer, as the documentation says you should never override BaseHTTPServer's handle() method and should rather override the corresponding do_* method?

    Read the article

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test it to try and download this file I get an out of memory exception. The only way I know to download the file is below. The reason I'm using the code below is because the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);

        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
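    A hedged sketch of the streaming alternative: instead of pulling the whole file into a byte array, copy it to the response in fixed-size chunks (or hand the path to HttpResponse.TransmitFile and let IIS do the streaming). The chunk size and headers below are illustrative and mirror the snippet above.

        using System.IO;
        using System.Web;

        static class DownloadHelper
        {
            // Streams a file to the client in 64 KB chunks so it never sits in memory whole.
            public static void StreamFile(HttpContext context, string url)
            {
                string path = context.Server.MapPath(url);
                string filename = Path.GetFileName(path);

                context.Response.Buffer = false; // don't accumulate the body server-side
                context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                context.Response.ContentType = "application/x-rar-compressed";
                context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
                context.Response.AddHeader("content-length", new FileInfo(path).Length.ToString());

                byte[] buffer = new byte[64 * 1024];
                using (FileStream fs = File.OpenRead(path))
                {
                    int read;
                    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0
                           && context.Response.IsClientConnected)
                    {
                        context.Response.OutputStream.Write(buffer, 0, read);
                        context.Response.Flush();
                    }
                }
                context.Response.End();
            }
        }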

    Read the article

  • Hashing 11 byte unique ID to 32 bits or less

    - by MoJo
    I am looking for a way to reduce an 11-byte unique ID to 32 bits or fewer. I am using an Atmel AVR microcontroller that has the ID number burned in at the factory, but because it has to be transmitted very often in a very low power system, I want to reduce the length down to 4 bytes or fewer. The ID is guaranteed unique for every microcontroller. It is made up of data from the manufacturing process, basically the coordinates of the silicon on the wafer and the production line that was used. They look like this:

        304A34393334-16-11001000
        314832383431-0F-09000C00

    Obviously the main danger is that by reducing these IDs they become non-unique. Unfortunately I don't have a large enough sample size to test how unique these numbers are. Having said that, because there will only be tens of thousands of devices in use and there is secondary information that can be used to help identify them (such as their approximate location, known at the time of communication), collisions might not be too much of an issue if they are few and far between.

    Is something like MD5 suitable for this? My concern is that the data being hashed is very short, just 11 bytes. Do hash functions work reliably on such short data?
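    A hedged sketch of the truncation approach being asked about: run the 11 ID bytes through a standard hash and keep the first 4 bytes as the short ID. Short inputs are not a problem for a cryptographic hash; the real constraint is the 32-bit output, where the birthday effect makes occasional collisions plausible at tens of thousands of devices, so the secondary information mentioned above remains useful. SHA-256 and the byte layout in the usage comment are illustrative choices, not requirements.

        using System;
        using System.Security.Cryptography;

        static class ShortId
        {
            // Reduces the 11-byte factory ID to a 32-bit value by hashing and truncating.
            public static uint From(byte[] factoryId)
            {
                using (var sha = SHA256.Create())
                {
                    byte[] digest = sha.ComputeHash(factoryId); // 32 well-mixed bytes
                    return BitConverter.ToUInt32(digest, 0);    // keep the first 4
                }
            }
        }

        // Usage, assuming the first example ID above is the raw byte sequence 30 4A 34 39 33 34 16 11 00 10 00:
        // uint shortId = ShortId.From(new byte[] { 0x30, 0x4A, 0x34, 0x39, 0x33, 0x34,
        //                                          0x16, 0x11, 0x00, 0x10, 0x00 });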

    Read the article

  • How to strip out 0x0a special char from utf8 file using c# and keep file as utf8?

    - by user1013388
    The following is a line from a UTF-8 file from which I am trying to remove the special char (0x0A), which shows up as a black diamond with a question mark below:

        2464577 ????? True s6620178 Unspecified <1?1009-672

    This is generated when SSIS reads a SQL table and then writes it out, using a flat file manager set to code page 65001. When I open the file up in Notepad++, it displays as 0x0A. I'm looking for some C# code to definitely strip that char out and replace it with either nothing or a blank space. Here's what I have tried:

        string fileLocation = "c:\\MyFile.txt";
        var content = string.Empty;

        using (StreamReader reader = new System.IO.StreamReader(fileLocation))
        {
            content = reader.ReadToEnd();
            reader.Close();
        }

        content = content.Replace('\u00A0', ' ');
        //also tried: content.Replace((char)0X0A, ' ');
        //also tried: content.Replace((char)0X0A, '');
        //also tried: content.Replace((char)0X0A, (char)'\0');

        Encoding encoding = Encoding.UTF8;
        using (FileStream stream = new FileStream(fileLocation, FileMode.Create))
        {
            using (BinaryWriter writer = new BinaryWriter(stream, encoding))
            {
                writer.Write(encoding.GetPreamble()); //This is for writing the BOM
                writer.Write(content);
            }
        }

    I also tried this code to get the actual string value:

        byte[] bytes = { 0x0A };
        string text = Encoding.UTF8.GetString(bytes);

    And it comes back as "\n". So in the code above I also tried replacing "\n" with " ", both in double quotes and single quotes, but still no change. At this point I'm out of ideas. Anyone got any advice? Thanks.
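    A hedged sketch of a simpler round trip for the same goal: read the file as UTF-8 text, replace the line feed (0x0A) via String.Replace, and write the result back as UTF-8 with a BOM (matching the GetPreamble call above). Two things worth noting: String.Replace returns a new string, so the result must be assigned back (one likely reason the commented-out attempts above appeared to change nothing), and BinaryWriter.Write(string) writes a length-prefixed string, which is usually not what a flat-file consumer expects.

        using System.IO;
        using System.Text;

        static class LineFeedStripper
        {
            // Replaces every 0x0A with a space and rewrites the file as UTF-8 with a BOM.
            public static void Clean(string path)
            {
                string content = File.ReadAllText(path, Encoding.UTF8);

                content = content.Replace("\n", " "); // Replace returns a new string; reassign it

                // UTF8Encoding(true) makes WriteAllText emit the byte-order mark.
                File.WriteAllText(path, content, new UTF8Encoding(true));
            }
        }

        // Usage (the path is just the example from the post):
        // LineFeedStripper.Clean("c:\\MyFile.txt");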

    Read the article

  • NULL pointer dereference in swiotlb_unmap_sg_attrs() on disk IO

    - by Inductiveload
    I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does. There is a stack trace to go with the problem. It tends to vary a bit in exact details, but it always crashes in swiotlb_unmap_sg_attrs().

    I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called; I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I then take the lock back up and return to request queue handling. The crash happens in blk_fetch_request() during the following:

        if (!__blk_end_request(req, res, bytes)) {
            printk(KERN_ERR "%s next request\n", DRIVER_NAME);
            req = blk_fetch_request(q);
        } else {
            printk(KERN_ERR "%s same request\n", DRIVER_NAME);
        }

    where bytes is updated by the request handler to be the total length of IO (summed length of each scatter-gather segment).

    Read the article

  • Issue changing innodb_log_file_size

    - by savageguy
    I haven't done much tweaking in the past, so this might be relatively easy; however, I am running into issues. This is what I do:

    1. Stop MySQL
    2. Edit my.cnf (changing innodb_log_file_size)
    3. Remove ib_logfile0/1
    4. Start MySQL

    It starts fine; however, all InnoDB tables give the ".frm file is invalid" error, and the status shows the InnoDB engine is disabled, so I obviously go back, remove the change, and everything works again. I was able to change every other variable I've tried, but I can't seem to find out why InnoDB fails to start even after removing the log files. Am I missing something? Thanks.

    Edit: Pasting the log below. It looks like it still finds the log file even though the files are not there?

    Shutdown:

        090813 10:00:14  InnoDB: Starting shutdown...
        090813 10:00:17  InnoDB: Shutdown completed; log sequence number 0 739268981
        090813 10:00:17 [Note] /usr/sbin/mysqld: Shutdown complete

    Startup after making the changes:

        InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 268435456 bytes!
        090813 11:00:18 [Warning] 'user' entry '[email protected]' ignored in --skip-name-resolve mode.
        090813 11:00:18 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.0.81-community-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Edition (GPL)
        090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'
        090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'
        090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'

    It's just a spam of the same error until I correct it. When it did start (after I reverted the change), it recreated the log files, so it must be looking in the same spot I am.

    Read the article
