Search Results

Search found 4304 results on 173 pages for 'bytes'.


  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to (I estimate) about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions: I've read somewhere that it's not good to update blobs, because it leads to reallocation, fragmentation, and thus bad performance. Is that true? Any reference on this? Initially the blobs get constructed by appending data chunks, each up to 16 kilobytes in size. Would it be more efficient to use a separate chunk table instead, for example with fields parent_id, position, chunk? Then, to get the entire blob, one would do something like

        SELECT GROUP_CONCAT(chunk ORDER BY position) FROM chunks WHERE parent_id = 187

    and use the result in a PHP script. Finally, is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?
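
    A minimal sketch of the chunk-table query (table and ID from the question). Two caveats for binary data: GROUP_CONCAT is capped by group_concat_max_len, which defaults to 1024 bytes and must be raised for ~100 KB blobs, and its default separator is a comma, so SEPARATOR '' is required to avoid corrupting the reassembled blob:

        SET SESSION group_concat_max_len = 1024 * 1024;

        SELECT GROUP_CONCAT(chunk ORDER BY position SEPARATOR '')
        FROM chunks
        WHERE parent_id = 187;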

    Read the article

  • Python - network buffer handling question...

    - by Patrick Moriarty
    Hi, I want to design a game server in Python. The game will mostly just be passing small packets filled with ints, strings, and bytes stuffed into one message. As I'm using a different language to write the game, a normal packet would be sent like so:

        Writebyte(buffer, 5);          // Delimit type of message
        Writestring(buffer, "Hello");
        Sendmessage(buffer, socket);

    As you can see, it writes the bytes to the buffer and sends the buffer. Is there any way to read something like this in Python? I am aware of the struct module, and I've used it to pack things, but I've never used it to actually read something with mixed types stuck into one message. Thanks for the help.
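
    A minimal sketch of the reading side, assuming the writer emits a 1-byte message type followed by a length-prefixed string. The exact wire layout is a guess (many Writestring implementations null-terminate instead), so the format characters would need to match the writer:

        import struct

        def read_message(data):
            # "!" = network byte order, "B" = unsigned byte (message type),
            # "H" = unsigned 16-bit string length
            msg_type, str_len = struct.unpack_from("!BH", data, 0)
            text = data[3:3 + str_len].decode("ascii")
            return msg_type, text

        print(read_message(b"\x05\x00\x05Hello"))  # (5, 'Hello')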

    Read the article

  • OpenGL texture shifted somewhat to the left when applied to a quad

    - by user308226
    I'm a bit new to OpenGL and I've been having a problem with using textures. The texture seems to load fine, but when I run the program, the texture displays shifted a couple of pixels to the left, with the section cut off by the shift appearing on the right side. I don't know if the problem here is in my TGA loader or if it's the way I'm applying the texture to the quad. Here is the loader:

        #include "texture.h"
        #include <iostream>

        GLubyte uncompressedheader[12] = {0,0, 2,0,0,0,0,0,0,0,0,0};
        GLubyte compressedheader[12]   = {0,0,10,0,0,0,0,0,0,0,0,0};

        TGA::TGA()
        {
        }

        // Private loading function called by loadTGA. Loads compressed TGA files.
        // Returns: TRUE on success, FALSE on failure
        bool TGA::LoadCompressedTGA(char *filename, ifstream &texturestream)
        {
            return false;
        }

        bool TGA::LoadUncompressedTGA(char *filename, ifstream &texturestream)
        {
            cout << "G position status:" << texturestream.tellg() << endl;
            texturestream.read((char*)header, sizeof(header));     // read 6 bytes into the file to get the tga header
            width  = (GLuint)header[1] * 256 + (GLuint)header[0];  // read and calculate width and save
            height = (GLuint)header[3] * 256 + (GLuint)header[2];  // read and calculate height and save
            bpp    = (GLuint)header[4];                            // read bpp and save
            cout << bpp << endl;
            if ((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp != 32)))  // make sure height, width, and bpp are valid
            {
                return false;
            }
            if (bpp == 24)
            {
                type = GL_RGB;
            }
            else
            {
                type = GL_RGBA;
            }
            imagesize = ((bpp/8) * width * height);                // size in bytes of the image
            cout << imagesize << endl;
            imagedata = new GLubyte[imagesize];                    // allocate memory for our imagedata variable
            texturestream.read((char*)imagedata, imagesize);       // read according to the size of the image and save into imagedata
            for (GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8))  // reverse the tga's BGR format to RGB
            {
                imagedata[cswap] ^= imagedata[cswap+2] ^=          // 1st byte XOR 3rd byte XOR 1st byte XOR 3rd byte
                imagedata[cswap] ^= imagedata[cswap+2];
            }
            texturestream.close();                                 // close ifstream because we're done with it
            cout << "image loaded" << endl;
            glGenTextures(1, &texID);                              // generate OpenGL texture IDs
            glBindTexture(GL_TEXTURE_2D, texID);                   // bind our texture
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // linear filtering
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata);
            delete[] imagedata;                                    // array form of delete, to match new[]
            return true;
        }

        // Public loading function for TGA images. Opens the TGA file and determines
        // its type, if any, then loads it by calling the appropriate function.
        // Returns: TRUE on success, FALSE on failure
        bool TGA::loadTGA(char *filename)
        {
            cout << width << endl;
            ifstream texturestream;
            texturestream.open(filename, ios::binary);
            texturestream.read((char*)header, sizeof(header));     // read 6 bytes into the file, it's the header
            // if it matches the uncompressed header's first 6 bytes, load it as uncompressed
            LoadUncompressedTGA(filename, texturestream);
            return true;
        }

        GLubyte* TGA::getImageData()
        {
            return imagedata;
        }

        GLuint& TGA::getTexID()
        {
            return texID;
        }

    And here's the quad:

        void Square::show()
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, texture.texID);

            // Move to offset
            glTranslatef(x, y, 0);

            // Start quad
            glBegin(GL_QUADS);

            // Set color to white
            glColor4f(1.0, 1.0, 1.0, 1.0);

            // Draw square
            glTexCoord2f(0.0f, 0.0f); glVertex3f(0, 0, 0);
            glTexCoord2f(1.0f, 0.0f); glVertex3f(SQUARE_WIDTH, 0, 0);
            glTexCoord2f(1.0f, 1.0f); glVertex3f(SQUARE_WIDTH, SQUARE_HEIGHT, 0);
            glTexCoord2f(0.0f, 1.0f); glVertex3f(0, SQUARE_HEIGHT, 0);

            // End quad
            glEnd();

            // Reset
            glLoadIdentity();
        }
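
    A plausible cause of this exact symptom (an educated guess, not confirmed in the question): OpenGL assumes each pixel row starts on a 4-byte boundary by default, so a 24-bit image whose width * 3 is not a multiple of 4 gets skewed row by row. Telling GL the rows are tightly packed before the upload rules this out:

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows are tightly packed, no padding
        glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata);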

    Read the article

  • PHP Fatal error on line number that doesn't exist

    - by alexantd
    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 523800 bytes) in /Library/WebServer/Documents/XMLDataStore.class.php on line 981

    The curious thing about this error is not the memory leak, which would be easy enough to troubleshoot. Rather, it is the fact that XMLDataStore.class.php is only 850 lines long, which I have verified in multiple text editors. This is with the PHP 5.3 bundled with Snow Leopard. I'm not using an opcode cache. Here is my php.ini:

        allow_url_fopen = Off
        error_reporting = -1
        display_errors = 1
        display_startup_errors = 1
        date.timezone = 'America/Los_Angeles'
        output_buffering = Off
        realpath_cache_size = 0k

    XMLDataStore.class.php has recently been refactored, and it used to be longer than 981 lines. It's almost as if PHP has cached a two-week-old version and is reading that. I'm positive that the current version at /Library/WebServer/Documents/XMLDataStore.class.php is only 850 lines long, though.

    Read the article

  • delete & new in C++

    - by singh
    Hi. This may be a very simple question, but please help me. I wanted to know what exactly happens when I call new and delete. For example, in the code below:

        char *ptr = new char[10];
        delete [] ptr;

    The call to new returns a memory address. Does it allocate exactly 10 bytes on the heap? Where is the information about the size stored? When I call delete on the same pointer, I see in the debugger that a lot of bytes get changed before and after the 10 bytes. Is there a header for each new that contains information about the number of bytes allocated by new? Thanks a lot.
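
    One way to see what the runtime actually requests is to replace the global array allocator; a minimal sketch follows. The bookkeeping below this level (the bytes you see change around the block) belongs to the allocator and is implementation-specific; for types with non-trivial destructors, the compiler may also request extra bytes to store the element count:

        #include <cstdio>
        #include <cstdlib>
        #include <new>

        // Log the size that each new[] expression asks the allocator for.
        void* operator new[](std::size_t n) {
            std::printf("operator new[] requested %zu bytes\n", n);
            if (void* p = std::malloc(n)) return p;
            throw std::bad_alloc();
        }

        void operator delete[](void* p) noexcept {
            std::free(p);
        }

        int main() {
            char *ptr = new char[10];   // logs 10: char needs no element-count header
            delete [] ptr;              // size tracking lives in malloc's own metadata
        }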

    Read the article

  • FtpWebResponse and StreamReader - specifying an offset

    - by AJ
    Hi, I am using the FtpWebRequest / FtpWebResponse objects in C# to download files from a server - so far, so good. I create a StreamReader object from the response stream and use a StreamWriter to create a local file. Now, the file I am reading happens to be in a very simple 'archive' format - there is a small TOC at the start of the file followed by the actual file data. I can therefore read the TOC and get a file offset and size of the data I want to download. My question is: Supposing the offset is 1024. I would use StreamReader.Read(buffer, 1024, length), but will .NET and the FTP protocol actually allow me to skip bytes 0-1023, or does the reader still go through the (relatively) slow process of downloading and discarding the bytes I don't need? This may make the difference between whether I want to use a single archive file, or a TOC file with the data files stored separately. As a bit of a secondary question, would my mileage vary using the Http classes instead of Ftp? Cheers, Adam
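
    A sketch of the server-side alternative, assuming the FTP server honors the REST command: FtpWebRequest.ContentOffset asks the server to start the transfer partway into the file, so the skipped bytes are never downloaded at all. The URL and file names here are placeholders:

        using System;
        using System.IO;
        using System.Net;

        class FtpOffsetDemo
        {
            static void Main()
            {
                // ContentOffset maps to FTP's REST (restart) command.
                var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/archive.dat");
                request.Method = WebRequestMethods.Ftp.DownloadFile;
                request.ContentOffset = 1024;   // skip the TOC without transferring it

                using (var response = (FtpWebResponse)request.GetResponse())
                using (var stream = response.GetResponseStream())
                using (var file = File.Create("data.part"))
                {
                    stream.CopyTo(file);        // .NET 4+; loop with Read on older runtimes
                }
            }
        }

    For HTTP, the rough equivalent is a Range request (HttpWebRequest.AddRange), which likewise avoids downloading the skipped bytes when the server supports it.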

    Read the article

  • FFmpeg + iPhone - Interesting (incorrect?) video encoding results

    - by jtrim
    I'm encoding some video on the iPhone by running the PNG image data through swscale to get YUV420P data, then encoding that frame using the MSMPEG4V1 codec. According to the API docs, avcodec_encode_video should return the number of bytes used from the output buffer by that encode operation. There are 234,000 bytes going into the encoder, but the result returned by avcodec_encode_video is simply "4". The result is exactly the same over 24 frames. Something seems fishy here... any insight? Here's a pastebin link to the code: http://pastebin.com/ht94FWva (sorry for the link away from SO, I just didn't want to have the code duplicated in several places) EDIT: Also, I've set up a custom log callback for ffmpeg to use and I have the log level set to "Verbose" (libavutil/log.h), so libavcodec should be logging any goofs to the console, but avcodec is quiet throughout the whole operation. (Note: I did test to make sure my log callback was working.)

    Read the article

  • Is it good practice to initialize array in C/C++?

    - by sand
    I recently encountered a case where I needed to compare two files (golden and expected) to verify test results, and even though the data written to both files was the same, the files did not match. On further investigation, I found that there is a structure which contains some integers and a char array of 64 bytes. In most cases, not all the bytes of the char array were being used, and the unused parts of the array contained random data, which was causing the mismatch. This brought me to ask the question: is it good practice to initialize arrays in C/C++ as well, as is done in Java?
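
    A minimal sketch of deterministic initialization (the struct layout here is invented for illustration; the question doesn't show the real one):

        #include <cstring>

        struct Record {
            int  id;
            int  flags;
            char name[64];
        };

        int main() {
            Record a = {};                 // value-initialization: all members zeroed
            Record b;
            std::memset(&b, 0, sizeof b);  // also clears padding bytes, which {} does not guarantee
        }

    With either form, the unused tail of name is all zeros, so byte-wise comparison of files written from these structs becomes stable.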

    Read the article

  • Which of FILE* or ifstream has better memory usage?

    - by Viet
    I need to read a fixed number of bytes from files whose sizes are around 50 MB. To be more precise, I read a frame from YUV 4:2:0 CIF/QCIF files (~25 KB to ~100 KB per frame). Not a very large amount, but I don't want the whole file to be in memory. I'm using C++; in such a case, which of FILE* or ifstream has better (less/minimal) memory usage? Please kindly advise. Thanks! EDIT: I read a fixed number of bytes: 25 KB or 100 KB (depending on QCIF/CIF format). The reading is in binary mode and forward-only. No seeking needed. No writing needed, only reading. EDIT: If identifying the better of the two is hard, which one does not require loading the whole file into memory?
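
    Neither API loads the whole file by itself; both read through a small internal buffer into whatever you hand them. A sketch of the ifstream variant with one reusable frame buffer (file name and frame size are placeholders):

        #include <fstream>
        #include <vector>

        int main() {
            const std::size_t frameSize = 25 * 1024;   // ~25 KB QCIF frame, per the question
            std::vector<char> frame(frameSize);        // the only sizable allocation

            std::ifstream in("video.yuv", std::ios::binary);
            while (in.read(frame.data(), frame.size())) {
                // process one frame; resident memory stays at frameSize
                // plus the stream's small internal buffer
            }
        }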

    Read the article

  • Do you bother to write a pretty error page?

    - by Chacha102
    So, everyone is really used to the errors that PHP gives you. They look kind of like this:

        Fatal error: Allowed memory size of 8388608 bytes exhausted (tried to allocate 2 bytes) in /path/to/file(437) on line 21

    My question is: do you put in the time to make your error pages more useful? I find that I am able to debug a lot faster using my own error page (screenshot omitted). I find it to be a lot better than the PHP errors because it gives me a stack trace, the usual error message, the actual location of the error, and more. Also, are there any downsides to creating your own development error pages? Obviously you wouldn't want a user to see this page, but what about during development?
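
    A minimal sketch of such a development-only page: convert engine errors to exceptions so they carry a backtrace, then render the trace yourself (markup and layout are illustrative):

        <?php
        // Turn PHP errors into exceptions so they carry file/line and a trace.
        set_error_handler(function ($severity, $message, $file, $line) {
            throw new ErrorException($message, 0, $severity, $file, $line);
        });

        // Render uncaught exceptions as a readable development error page.
        set_exception_handler(function ($e) {
            echo '<h1>', get_class($e), '</h1>';
            echo '<p>', htmlspecialchars($e->getMessage()), '</p>';
            echo '<p>', $e->getFile(), ' on line ', $e->getLine(), '</p>';
            echo '<pre>', htmlspecialchars($e->getTraceAsString()), '</pre>';
        });

    Note that a fatal memory-exhaustion error like the one above never reaches these handlers; a shutdown function that checks error_get_last() is the usual workaround.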

    Read the article

  • Change HttpContext.Request.InputStream

    - by user320478
    I am getting a lot of errors for HttpRequestValidationException in my event log. Is it possible to HtmlEncode all the inputs from an override of ProcessRequest on a web page? I have tried this, but it always gives context.Request.InputStream.CanWrite == false. Is there any way to HtmlEncode all the fields when a request is made?

        public override void ProcessRequest(HttpContext context)
        {
            if (context.Request.InputStream.CanRead)
            {
                IEnumerator en = HttpContext.Current.Request.Form.GetEnumerator();
                while (en.MoveNext())
                {
                    //Response.Write(Server.HtmlEncode(en.Current + " = " +
                    //    HttpContext.Current.Request.Form[(string)en.Current]));
                }
                long nLen = context.Request.InputStream.Length;
                if (nLen > 0)
                {
                    string strInputStream = string.Empty;
                    context.Request.InputStream.Position = 0;
                    byte[] bytes = new byte[nLen];
                    context.Request.InputStream.Read(bytes, 0, Convert.ToInt32(nLen));
                    strInputStream = Encoding.Default.GetString(bytes);
                    if (!string.IsNullOrEmpty(strInputStream))
                    {
                        List<string> stream = strInputStream.Split('&').ToList<string>();
                        Dictionary<int, string> data = new Dictionary<int, string>();
                        if (stream != null && stream.Count > 0)
                        {
                            int index = 0;
                            foreach (string str in stream)
                            {
                                if (str.Length > 3 && str.Substring(0, 3) == "txt")
                                {
                                    string textBoxData = str;
                                    string temp = Server.HtmlEncode(str);
                                    //stream[index] = temp;
                                    data.Add(index, temp);
                                    index++;
                                }
                            }
                            if (data.Count > 0)
                            {
                                List<string> streamNew = stream;
                                foreach (KeyValuePair<int, string> kvp in data)
                                {
                                    streamNew[kvp.Key] = kvp.Value;
                                }
                                string newStream = string.Join("", streamNew.ToArray());
                                byte[] bytesNew = Encoding.Default.GetBytes(newStream);
                                if (context.Request.InputStream.CanWrite)
                                {
                                    context.Request.InputStream.Flush();
                                    context.Request.InputStream.Position = 0;
                                    context.Request.InputStream.Write(bytesNew, 0, bytesNew.Length);
                                    //Request.InputStream.Close();
                                    //Request.InputStream.Dispose();
                                }
                            }
                        }
                    }
                }
            }
            base.ProcessRequest(context);
        }

    Read the article

  • Python len calculation

    - by n00bz0r
    I'm currently trying to build an RDP client in Python, and I came across the following issue with a length check. From http://msdn.microsoft.com/en-us/library/cc240836%28v=prot.10%29.aspx :

        "81 2a - ConnectData::connectPDU length = 298 bytes. Since the most significant bit of the first byte (0x81) is set to 1 and the following bit is set to 0, the length is given by the low six bits of the first byte and the second byte. Hence, the value is 0x12a, which is 298 bytes."

    This sounds weird. For normal length checks, I'm simply using:

        struct.pack("h", len(str(PacketLen)))

    but in this case, I really don't see how I can calculate the length as described above. Any help on this would be greatly appreciated!
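
    A sketch of that variable-length field (Python 3 byte indexing; the helper names are made up):

        import struct

        def decode_per_length(data):
            """Decode the length prefix: top bit set and next bit clear means
            the length is the low 6 bits of byte 0 followed by all of byte 1."""
            first = data[0]
            if first & 0x80 and not first & 0x40:
                return ((first & 0x3F) << 8) | data[1], 2   # (length, bytes consumed)
            return first & 0x7F, 1

        def encode_per_length(n):
            if n < 0x80:
                return struct.pack("B", n)
            return struct.pack(">H", 0x8000 | n)            # sets the 0x80 flag bit; n < 0x4000

        print(decode_per_length(b"\x81\x2a"))  # (298, 2)
        print(encode_per_length(298).hex())    # 812a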

    Read the article

  • What's causing "Unable to retrieve native address from ByteBuffer object"?

    - by r0u1i
    As a very novice Java programmer, I probably should not mess with this kind of thing. Unfortunately, I'm using a library which has a method that accepts a ByteBuffer object and throws when I try to use it:

        Exception in thread "main" java.lang.NullPointerException: Unable to retrieve native address from ByteBuffer object

    Is it because I'm using a non-direct buffer? edit: There's not a lot of my code here. The library I'm using is jNetPcap, and I'm trying to dump a packet to file. My code takes an existing packet and extracts a ByteBuffer out of it:

        byte[] bytes = m_packet.getByteArray(0, m_packet.size());
        ByteBuffer buffer = ByteBuffer.wrap(bytes);

    Then it calls one of the dump methods of jNetPcap that takes a ByteBuffer.
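
    ByteBuffer.wrap() creates a heap-backed (non-direct) buffer, which has no native address for a JNI layer to hand to libpcap; that matches the exception text. A likely fix, sketched below (m_packet comes from the question; whether jNetPcap's dump method truly requires a direct buffer is inferred from the error, not verified):

        byte[] bytes = m_packet.getByteArray(0, m_packet.size());
        ByteBuffer buffer = ByteBuffer.allocateDirect(bytes.length);  // direct = has a native address
        buffer.put(bytes);
        buffer.flip();  // rewind so the native side reads from position 0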

    Read the article

  • HttpSessionState Where, How, Advantages?

    - by blgnklc
    The code below shows how I used the session variable. Three questions:

    1- Where are session variables stored? (Server or client side?)
    2- Are they unique for each web page visitor?
    3- Can I remove one using Ajax or simple JS code when my job is done with it, or will it be removed automatically?

        sbyte[][] arrImages = svc.getImagesForFields(new String[] { "CustomerName", "CustomerSurName" });
        Dictionary<string, byte[]> smartImageData = new Dictionary<string, byte[]>();
        int i = 0;
        foreach (sbyte[] bytes in arrImages)
        {
            smartImageData.Add(fieldNames[i], ConvertToByte(bytes));
            i++;
        }
        Session.Add("SmartImageData", smartImageData);

    Read the article

  • Go - Concurrent method

    - by nevalu
    How do I make a method concurrent? In my case, the library would be called from a program to get a value for each argument str in the method Get(). When Get() is used, it fills a variable of type bytes.Buffer with the value to return. The returned values, when Get() is called concurrently, will be stored in a database or a file, and it doesn't matter if the output is not in FIFO order.

        type test struct {
            foo uint8
            bar uint8
        }

        func NewTest(arg1 string) (*test, os.Error) {...}

        func (self *test) Get(str string) ([]byte, os.Error) {
            var format bytes.Buffer
            ...
        }

    I think that all the code inside the method Get() should be put inside go func() {...}(), and then a channel should be used. Would there be a problem if another method were called from Get()? Or would it also have to be concurrent?
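
    A minimal sketch of the usual alternative: leave Get() sequential and run each call in its own goroutine. This is modern Go (error in place of the pre-Go 1 os.Error shown in the question), with Get's body stubbed out:

        package main

        import (
            "fmt"
            "sync"
        )

        type test struct{}

        // Assumed safe for concurrent use: no shared mutable state inside.
        func (t *test) Get(str string) ([]byte, error) {
            return []byte("processed: " + str), nil
        }

        func main() {
            t := &test{}
            inputs := []string{"a", "b", "c"}

            var wg sync.WaitGroup
            results := make(chan []byte, len(inputs))
            for _, s := range inputs {
                wg.Add(1)
                go func(s string) { // one goroutine per call; Get itself stays sequential
                    defer wg.Done()
                    if b, err := t.Get(s); err == nil {
                        results <- b
                    }
                }(s)
            }
            wg.Wait()
            close(results)

            for b := range results {
                fmt.Printf("%s\n", b) // arrival order is not FIFO, which the question allows
            }
        }

    Methods called from inside Get() need no changes; they simply run on whichever goroutine invoked them.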

    Read the article

  • Where's the rest of the space used in this table?

    - by Eric H.
    I'm using SQL Server 2005. I have a table whose row size should be 124 bytes. It's all ints or floats, with no NULL columns (so everything is fixed width). There is only one index, clustered. The fill factor is 0. After inserting a ton of data, sp_spaceused returns the following:

        name          rows       reserved     data         index_size  unused
        OHLC_Bar_Trl  117076054  29807664 KB  29711624 KB  92344 KB    3696 KB

    which shows a row size of approximately (29807664 * 1024) / 117076054 = 260 bytes/row. Where's the rest of the space? Is there some DBCC command I need to run to tighten up this table? (I could not insert the rows in correct index order, so maybe it's just internal fragmentation.)
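
    A sketch of how to check that theory on SQL Server 2005 (table name from the question). Out-of-order inserts into a clustered index cause page splits, and the resulting half-empty pages show up exactly like this:

        -- How full are the pages, and how fragmented is the clustered index?
        SELECT index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID('OHLC_Bar_Trl'), NULL, NULL, 'DETAILED');

        -- Rebuilding compacts the pages back toward the fill factor.
        ALTER INDEX ALL ON OHLC_Bar_Trl REBUILD;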

    Read the article

  • Table character encoding - exception in application

    - by zgnilec
    I have this code:

        CREATE TABLE IF NOT EXISTS Person
        (
            name varchar(24) ...
        )
        CHARACTER SET utf8 COLLATE utf8_polish_ci;

    This works OK in my application, but I read that if someone puts into the name field a string containing a character whose code is greater than 127, the database will use 2 bytes (or more) to store that character. So I thought I would change the character set to utf16:

        CHARACTER SET utf16 COLLATE utf16_polish_ci;

    But now when I run my application, an exception appears: KeyNotFoundException. It appears exactly at these instructions:

        MySqlCommand komenda = baza.Polaczenie.CreateCommand();
        komenda.CommandText = zapytanie;
        MySqlDataReader dr = komenda.ExecuteReader(); // HERE, at ExecuteReader
        if (dr.Read()) ...

    1) Has anyone had a similar problem? 2) Any idea how to always use 2 bytes/char in a database field?
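
    One likely explanation, offered as a hedged guess rather than a confirmed diagnosis: MySQL does not allow utf16 as a client/connection character set, and the utf16 column charset only exists from MySQL 5.5 on, so the connector may simply fail to map it. A sketch that keeps the connection in utf8 and applies a fixed two-byte encoding only to the column:

        CREATE TABLE IF NOT EXISTS Person
        (
            name VARCHAR(24) CHARACTER SET ucs2 COLLATE ucs2_polish_ci
        )
        CHARACTER SET utf8 COLLATE utf8_polish_ci;

    ucs2 stores every BMP character in exactly 2 bytes, which matches the "always 2 bytes/char" goal.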

    Read the article

  • Can I use part of MD5 hash for data identification?

    - by sharptooth
    I use the MD5 hash for identifying files of unknown origin. There is no attacker here, so I don't care that MD5 has been broken and that one can intentionally generate collisions. My problem is that I need to provide logging so that different problems are easier to diagnose. If I log every hash as a hex string, that's too long, inconvenient, and ugly, so I'd like to shorten the hash string. Now, I know that just taking a small part of a GUID is a very bad idea - GUIDs are designed to be unique, but parts of them are not. Is the same true for MD5 - can I take, say, the first 4 bytes of MD5 and assume that the collision probability only gets higher due to the reduced number of bytes compared to the original hash?
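
    A sketch of the truncation and its cost (Python; function name is illustrative):

        import hashlib

        def short_id(data: bytes, n: int = 4) -> str:
            """First n bytes of the MD5 digest, hex-encoded (8 chars for n = 4)."""
            return hashlib.md5(data).hexdigest()[:2 * n]

        # MD5 output is uniformly distributed, so an n-byte prefix is itself a
        # uniform 2**(8n)-value hash; by the birthday bound, collisions become
        # likely around 2**(4n) items -- roughly 65,000 files for n = 4.
        print(short_id(b"example data"))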

    Read the article

  • matrix = *((fxMatrix*)&d3dMatrix); //Evil?

    - by Xilliah
    I've been using matrix = *((fxMatrix*)&d3dMatrix); for quite a while. It worked fine until my screen turned black and a bucket of frustration arrived on my desk. fxMatrix contains 4 fxVectors. fxVector used to be 16 bytes, but now it was suddenly 20. This was because it inherited fxStreamable, which added the vtable pointer. So one solution is of course to not inherit fxStreamable, and to leave a comment saying that it must always be 16 bytes and never more. Another solution would be to write conversion functions and copy the matrix completely. This makes it more secure, but has an impact on performance. I suppose this is the best idea. Another solution is to not convert at all and stick to D3DXMATRIX, but this makes the engine inconsistent, and I personally really dislike this idea. What is your opinion?
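
    Two of those options sketched in code. The fxMatrix/fxVector member names are invented, since the question doesn't show them; D3DXMATRIX's operator()(row, col) accessor is real:

        // Option 1: keep the cast, but make the 16-byte assumption fail loudly
        // at compile time instead of silently at runtime (pre-C++11 trick).
        typedef char fxVector_must_be_16_bytes[sizeof(fxVector) == 16 ? 1 : -1];

        // Option 2: explicit copy conversion -- layout-independent, costs a copy.
        inline fxMatrix ToFxMatrix(const D3DXMATRIX& m)
        {
            fxMatrix out;
            for (int r = 0; r < 4; ++r)
                for (int c = 0; c < 4; ++c)
                    out.rows[r][c] = m(r, c);  // hypothetical element access on fxMatrix
            return out;
        }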

    Read the article

  • C# marshaling dynamic-length string

    - by mitsky
    I have a struct with a dynamic length:

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        struct PktAck
        {
            public Int32 code;
            [MarshalAs(UnmanagedType.LPStr)]
            public string text;
        }

    When I'm converting byte[] to the struct with:

        GCHandle handle = GCHandle.Alloc(value, GCHandleType.Pinned);
        stru = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
        handle.Free();

    I get an error, because the size of the struct is less than the size of the byte[], and "string text" is a pointer to a string... How can I use dynamic strings? Or can I only use this:

        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
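
    A sketch of unpacking by hand, which sidesteps the marshaler entirely; it assumes the wire format is a 4-byte code followed by ASCII text filling the rest of the packet (adjust the encoding to match the sender):

        static PktAck ReadPktAck(byte[] value)
        {
            var pkt = new PktAck();
            pkt.code = BitConverter.ToInt32(value, 0);                       // fixed 4-byte header
            pkt.text = Encoding.ASCII.GetString(value, 4, value.Length - 4); // variable-length tail
            return pkt;
        }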

    Read the article

  • memcpy() safety on adjacent memory regions

    - by JaredC
    I recently asked a question on using volatile and was directed to read some very informative articles from Intel and others discussing memory barriers and their uses. After reading these articles I have become quite paranoid though. I have a 64-bit machine. Is it safe to memcpy into adjacent, non-overlapping regions of memory from multiple threads? For example, say I have a buffer: char buff[10]; Is it always safe for one thread to memcpy into the first 5 bytes while a second thread copies into the last 5 bytes? My gut reaction (and some simple tests) indicate that this is completely safe, but I have been unable to find documentation anywhere that can completely convince me.
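
    A sketch of the pattern in question (C with pthreads). The C11/C++11 memory models formalize that distinct bytes are distinct memory locations, so these writes do not constitute a data race:

        #include <pthread.h>
        #include <string.h>

        static char buff[10];

        /* Each thread writes its own non-overlapping half of the buffer. */
        static void *copy_front(void *src) { memcpy(buff,     src, 5); return NULL; }
        static void *copy_back(void *src)  { memcpy(buff + 5, src, 5); return NULL; }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, copy_front, "AAAAA");
            pthread_create(&b, NULL, copy_back,  "BBBBB");
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            /* buff now holds AAAAABBBBB. Correctness is guaranteed, though the
               two halves likely share a cache line (false sharing), which can
               cost performance -- not correctness. */
            return 0;
        }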

    Read the article

  • How to research unmanaged memory leaks in .NET?

    - by Brandon
    I have a WCF service running over MSMQ. Memory gradually increases over time, indicating that there is some sort of memory leak. I ran the service locally and monitored some counters using PerfMon. The CLR's total managed heap bytes remain relatively constant, while the process's private bytes increase over time. This leads me to believe that there is some sort of unmanaged memory leak. Assuming that an unmanaged memory leak is the issue, how do I address it? Are there any tools I could use to give me hints as to what is causing the leak? Also, all my service does is read from the transactional queue and write to a database, all as part of a DTC transaction (handled under the hood by requiring a transaction on the service contract). I am not doing anything explicitly with COM or DllImports. Thanks in advance!

    Read the article

  • Size of a class with 'this' pointer

    - by psvaibhav
    The size of a class with no data members is returned as 1 byte, even though there is an implicit 'this' pointer. Shouldn't the size returned be 4 bytes (on a 32-bit machine)? I came across articles which indicated that the 'this' pointer is not counted when calculating the size of the object, but I am unable to understand the reason for this. Also, if any member function is declared virtual, the size of the class is now returned as 4 bytes. This means that the vptr is counted when calculating the size of the object. Why is the vptr considered and the 'this' pointer ignored when calculating the size of an object?
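
    A small demonstration of the distinction: 'this' is not stored in the object at all - it is passed to each member function as a hidden argument - whereas the vptr must live inside every object so virtual dispatch can find the right table at runtime:

        #include <iostream>

        struct Empty       { void f() {} };          // non-virtual member, no data
        struct WithVirtual { virtual void f() {} };  // adds a vptr to each object

        int main() {
            std::cout << sizeof(Empty)       << '\n';  // typically 1: objects need distinct addresses
            std::cout << sizeof(WithVirtual) << '\n';  // typically 4 on 32-bit: one vptr
        }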

    Read the article

  • How to measure the time taken by C# NetworkStream.Read?

    - by publicENEMY
    I want to measure the time taken for a client to receive data over TCP using C#. I'm using NetworkStream.Read to read 100 megabits of data that are sent using NetworkStream.Write. I set the buffer to the same size as the data, so there is no buffer underrun problem, etc. Generally it looks like this:

        Stopwatch sw = new Stopwatch();
        sw.Start();
        stream.Read(bytes, 0, bytes.Length);
        sw.Stop();

    The problem is that there is a possibility the sender hasn't actually sent the data yet while the stopwatch is already running. How can I accurately measure the time taken to receive the data? I did try to use the time lapse of the remote PC's stream.Write, but the time it takes to write is extremely small. By the way, is the Stopwatch the most accurate tool for this task?
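
    Two points worth folding into the measurement (a sketch; variable names from the question): a single Read on a TCP stream may return fewer bytes than requested, so the receive must loop, and starting the clock at the first byte received, rather than before Read, excludes the sender's startup delay:

        int total = 0;
        Stopwatch sw = null;
        while (total < bytes.Length)
        {
            int n = stream.Read(bytes, total, bytes.Length - total);
            if (n == 0) break;                         // remote side closed the connection
            if (sw == null) sw = Stopwatch.StartNew(); // start timing at the first byte
            total += n;
        }
        if (sw != null) sw.Stop();

    As for accuracy, Stopwatch is the right tool: it uses the system's high-resolution performance counter when one is available (check Stopwatch.IsHighResolution).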

    Read the article
