Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

Page 83/238 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • grdb not working variables

    - by stupid_idiot
    Hi, I know this is probably a silly question, but I just can't figure it out. I'm debugging this: xor eax,eax mov ah,[var1] mov al,[var2] call addition stop: jmp stop var1: db 5 var2: db 6 addition: add ah,al ret The values I find at the addresses of var1 and var2 are 0x0E and 0x07, not 5 and 6. The code isn't segmented, but that alone shouldn't cause this, because the call to addition works just fine. Could you please explain where my mistake is? I can see the problem, I just don't know how to fix it yet. For some reason the instruction pointer starts at 0x100 and all the segment registers at 0x1628; instructions are addressed via [cs:ip]. The offset to var1 is 0x10 (probably because it is the 0x10th byte from the beginning of the code). When I examine memory I get 8-byte rows at 1628:100, 1628:108, 1628:110, 1628:118 and so on, so [cs:var1] points somewhere other than my data - probably where a .data label would normally end up, addressed through ds. I don't know what is supposed to be at 1628:10. Update: I found out what caused this and cost me a whole day. The behaviour described above is actually correct and the code is fully functional. What I didn't know is that the grdb debugger loads the program at address 0x100. The solution is to insert the directive ORG 0x100 on the first line, and that's the whole fix. The code itself ran because the instruction pointer already had the right address for the first instruction and went on from there, but the assembler doesn't know at what address the program will be loaded, so without ORG everything stays relative to the first line of code, which means all the variables (if you aren't using a label for a data section) resolve as if the program started at 0x0 - which of course doesn't work under DOS, and grdb apparently emulates some DOS features. Sorry for the earlier language, and thanks everyone for the effort; I hope this spares someone's time if they hit the same problem. At least now I know why a .data section is worth using.

    Read the article

  • Analyzing Python Code: Modulus Operator

    - by Bhubhu Hbuhdbus
    I was looking at some code in Python (I know nothing about Python) and I came across this portion: def do_req(body): global host, req data = "" s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((host, 80)) s.sendall(req % (len(body), body)) tmpdata = s.recv(8192) while len(tmpdata) > 0: data += tmpdata tmpdata = s.recv(8192) s.close() return data This is then called later on with a body of huge size, as in over 500,000 bytes. It is sent to an Apache server that has the max request size left at the default of 8190 bytes. My question is: what is happening at the s.sendall() part? Obviously the entire body cannot be sent at once, and I'm guessing it is reduced by way of the modulus operator. I don't know how that works in Python, though. Can anyone explain? Thanks.
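
    For context, the % in s.sendall(req % (len(body), body)) is not the arithmetic modulus here: applied to a string, % is Python's old-style string formatting, so the call fills placeholders in a request template and sendall() then keeps writing until the whole formatted request has been handed to the kernel. The req template is not shown in the question; the one below is only an assumed example for illustration.

      # Hypothetical request template; the real 'req' is defined elsewhere
      # in the questioner's code and is not shown in the snippet.
      req = ("POST /target HTTP/1.1\r\n"
             "Host: example.com\r\n"
             "Content-Length: %d\r\n"
             "\r\n"
             "%s")

      body = "A" * 500000

      # On a string, % performs printf-style substitution: %d receives
      # len(body) and %s receives body itself. Nothing is "reduced".
      request = req % (len(body), body)

      print(request[:60])    # the filled-in request line and headers
      print(len(request))    # header size plus the 500,000-byte body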

    Read the article

  • Efficiently display file status when using background thread

    - by schmoopy
    How can I efficiently display the status of a file operation when using a background thread? For instance, let's say I have a 100MB file. When I run the code below on a thread (just as an example), it finishes in about a minute: foreach(byte b in file.bytes) { WriteByte(b, xxx); } But if I want to update the user, I have to use a delegate to update the UI from the main thread, and the code below takes forever - literally, I don't know how long; I started writing this post and it still isn't even 30% done. int total = file.length; int current = 0; foreach(byte b in file.bytes) { current++; UpdateCurrentFileStatus(current, total); WriteByte(b, xxx); } public delegate void UpdateCurrentFileStatus(int cur, int total); public void UpdateCurrentFileStatus(int cur, int total) { // Check if invoke is required; if so, create an instance of the delegate // and update the UI if(this.InvokeRequired) { } else { UpdateUI(...) } }

    Read the article

  • C socket programming: recv / select not seeing sent messages

    - by Fantastic Fourier
    Hey guys, I have some questions about socket programming for a client-server setup using TCP/IP. I am using select() with recv(), which works fine when the client send()s messages to the server, but not the other way around. send() on the server returns a positive (and reasonable) number of bytes, but I know that the number of bytes "sent" really means "written out of the socket", not "sent and received by the client." The select() call seems to work fine, so my guess is that it's the send() that is giving me the problem - probably the client address used for send() is not correct. But when I compared the address.sin_addr.s_addr member (an unsigned long int) of the struct sockaddr_in from the server's recv() and send(), they are identical. So I am kind of lost as to what could be wrong.
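
    The question is about C, but the client-side select()/recv() flow can be sketched in Python, since the pattern is the same: the client must itself call select()/recv() on its connected socket to see whatever the server send()s back. As an assumption rather than a diagnosis, it is also worth double-checking that the server send()s on the socket returned by accept(), not on the listening socket.

      import select
      import socket

      # Minimal client-side sketch; host, port and messages are placeholders.
      sock = socket.create_connection(("127.0.0.1", 5000))
      sock.sendall(b"hello from client\n")

      while True:
          # Wait up to 5 seconds for the server's reply to become readable.
          readable, _, _ = select.select([sock], [], [], 5.0)
          if not readable:
              print("no data from server within 5 seconds")
              break
          data = sock.recv(4096)
          if not data:            # empty read means the server closed the connection
              break
          print("server said:", data)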

    Read the article

  • Make Sphinx quiet (non-verbose)

    - by J. Pablo Fernández
    I'm using Sphinx through Thinking Sphinx in a Ruby on Rails project. When I create seed data (and pretty much all the time), it's quite verbose, printing this: using config file '/Users/pupeno/projectx/config/development.sphinx.conf'... indexing index 'user_delta'... collected 7 docs, 0.0 MB collected 0 attr values sorted 0.0 Mvalues, 100.0% done sorted 0.0 Mhits, 99.6% done total 7 docs, 159 bytes total 0.042 sec, 3749.29 bytes/sec, 165.06 docs/sec Sphinx 0.9.8.1-release (r1533) Copyright (c) 2001-2008, Andrew Aksyonoff for practically every record that is created. Is there a way to suppress that output?

    Read the article

  • Joomla article not showing in the frontend, but it's visible in the administrator backend

    - by user248674
    I have a very big article to put into our Joomla site. At first I was getting "Fatal error: Allowed memory size of 8388608 bytes exhausted (tried to allocate 285487 bytes) in...", but after increasing the memory_limit in php.ini I was able to create the article. Now I can view the article by logging into the backend, but it is not visible at all in the front end: if I click on the menu item pointing to that article, all I see is a blank page with nothing in it. All other articles on the site display properly. Any ideas? PS: I have set error reporting to maximum and also ran the site in debugging mode, but saw nothing unusual.

    Read the article

  • Python - network buffer handling question...

    - by Patrick Moriarty
    Hi, I want to design a game server in python. The game will mostly just be passing small packets filled with ints, strings, and bytes stuffed into one message. As I'm using a different language to write the game, a normal packet would be sent like so: Writebyte(buffer, 5); // Delimit type of message Writestring(buffer, "Hello"); Sendmessage(buffer, socket); As you can see, it writes the bytes to the buffer, and sends the buffer. Is there any way to read something like this in python? I am aware of the struct module, and I've used it to pack things, but I've never used it to actually read something with mixed types stuck into one message. Thanks for the help.
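
    A minimal sketch of reading such a mixed message with the struct module follows. The exact wire format depends on what Writebyte/Writestring actually emit; the sketch assumes the byte is a single unsigned octet and that the string is sent as a 2-byte big-endian length prefix followed by its raw bytes, which is a guess, not something stated in the question.

      import struct

      def recv_exact(sock, n):
          """Read exactly n bytes, or raise if the peer closes early."""
          buf = b""
          while len(buf) < n:
              chunk = sock.recv(n - len(buf))
              if not chunk:
                  raise ConnectionError("socket closed mid-message")
              buf += chunk
          return buf

      def read_message(sock):
          # One unsigned byte: the message type written by Writebyte().
          msg_type = struct.unpack("B", recv_exact(sock, 1))[0]
          # Assumed framing for Writestring(): 2-byte length prefix, then the text.
          (str_len,) = struct.unpack("!H", recv_exact(sock, 2))
          text = recv_exact(sock, str_len).decode("ascii")
          return msg_type, text

    The matching sender would build the same layout with struct.pack("B", 5) and struct.pack("!H", len(text)) followed by the raw string bytes.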

    Read the article

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to - I estimate - about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions: I've read somewhere that it's not good to update blobs, because it leads to reallocation, fragmentation, and thus bad performance. Is that true? Any reference on this? Initially the blobs get constructed by appending data chunks, each up to 16 kilobytes in size. Is it more efficient to use a separate chunk table instead, for example with fields as below? parent_id, position, chunk Then, to get the entire blob, one would do something like: SELECT GROUP_CONCAT(chunk ORDER BY position) FROM chunks WHERE parent_id = 187 The result would be used in a PHP script. Is there any difference between the BLOB types, aside from the size needed for metadata, which should be negligible?
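
    Purely to illustrate the chunk-table idea (not a recommendation either way), here is a minimal Python sketch that reassembles a blob using the exact query from the question. The driver, credentials and ID are placeholders, and note that GROUP_CONCAT is capped by group_concat_max_len, whose default of 1024 bytes would silently truncate a ~100 KB blob, so it has to be raised first.

      import MySQLdb  # any DB-API driver would do; MySQLdb is just an example

      conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
      cur = conn.cursor()

      # Raise the GROUP_CONCAT limit for this session so large blobs are not truncated.
      cur.execute("SET SESSION group_concat_max_len = 1048576")

      # Reassemble the blob from its chunks, as in the query above.
      cur.execute(
          "SELECT GROUP_CONCAT(chunk ORDER BY position SEPARATOR '') "
          "FROM chunks WHERE parent_id = %s",
          (187,),
      )
      blob = cur.fetchone()[0]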

    Read the article

  • OpenGL texture shifted somewhat to the left when applied to a quad

    - by user308226
    I'm a bit new to OpenGL and I've been having a problem with using textures. The texture seems to load fine, but when I run the program, the texture displays shifted a couple pixels to the left, with the section cut off by the shift appearing on the right side. I don't know if the problem here is in the my TGA loader or if it's the way I'm applying the texture to the quad. Here is the loader: #include "texture.h" #include <iostream> GLubyte uncompressedheader[12] = {0,0, 2,0,0,0,0,0,0,0,0,0}; GLubyte compressedheader[12] = {0,0,10,0,0,0,0,0,0,0,0,0}; TGA::TGA() { } //Private loading function called by LoadTGA. Loads uncompressed TGA files //Returns: TRUE on success, FALSE on failure bool TGA::LoadCompressedTGA(char *filename,ifstream &texturestream) { return false; } bool TGA::LoadUncompressedTGA(char *filename,ifstream &texturestream) { cout << "G position status:" << texturestream.tellg() << endl; texturestream.read((char*)header, sizeof(header)); //read 6 bytes into the file to get the tga header width = (GLuint)header[1] * 256 + (GLuint)header[0]; //read and calculate width and save height = (GLuint)header[3] * 256 + (GLuint)header[2]; //read and calculate height and save bpp = (GLuint)header[4]; //read bpp and save cout << bpp << endl; if((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp !=32))) //check to make sure the height, width, and bpp are valid { return false; } if(bpp == 24) { type = GL_RGB; } else { type = GL_RGBA; } imagesize = ((bpp/8) * width * height); //determine size in bytes of the image cout << imagesize << endl; imagedata = new GLubyte[imagesize]; //allocate memory for our imagedata variable texturestream.read((char*)imagedata,imagesize); //read according the the size of the image and save into imagedata for(GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8)) //loop through and reverse the tga's BGR format to RGB { imagedata[cswap] ^= imagedata[cswap+2] ^= //1st Byte XOR 3rd Byte XOR 1st Byte XOR 3rd Byte imagedata[cswap] ^= imagedata[cswap+2]; } texturestream.close(); //close ifstream because we're done with it cout << "image loaded" << endl; glGenTextures(1, &texID); // Generate OpenGL texture IDs glBindTexture(GL_TEXTURE_2D, texID); // Bind Our Texture glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear Filtered glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata); delete imagedata; return true; } //Public loading function for TGA images. Opens TGA file and determines //its type, if any, then loads it and calls the appropriate function. //Returns: TRUE on success, FALSE on failure bool TGA::loadTGA(char *filename) { cout << width << endl; ifstream texturestream; texturestream.open(filename,ios::binary); texturestream.read((char*)header,sizeof(header)); //read 6 bytes into the file, its the header. 
//if it matches the uncompressed header's first 6 bytes, load it as uncompressed LoadUncompressedTGA(filename,texturestream); return true; } GLubyte* TGA::getImageData() { return imagedata; } GLuint& TGA::getTexID() { return texID; } And here's the quad: void Square::show() { glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, texture.texID); //Move to offset glTranslatef( x, y, 0 ); //Start quad glBegin( GL_QUADS ); //Set color to white glColor4f( 1.0, 1.0, 1.0, 1.0 ); //Draw square glTexCoord2f(0.0f, 0.0f); glVertex3f( 0, 0, 0 ); glTexCoord2f(1.0f, 0.0f); glVertex3f( SQUARE_WIDTH, 0, 0 ); glTexCoord2f(1.0f, 1.0f); glVertex3f( SQUARE_WIDTH, SQUARE_HEIGHT, 0 ); glTexCoord2f(0.0f, 1.0f); glVertex3f( 0, SQUARE_HEIGHT, 0 ); //End quad glEnd(); //Reset glLoadIdentity(); }

    Read the article

  • FFmpeg + iPhone - Interesting (incorrect?) video encoding results

    - by jtrim
    I'm encoding some video on the iPhone by running the PNG image data through swscale to get YUV420P data, then encoding that frame using the MSMPEG4V1 codec. According to the API docs, avcodec_encode_video should return the number of bytes used from the output buffer by that encode operation. There are 234,000 bytes going into the encoder, but the result returned by avcodec_encode_video is simply "4". The result is exactly the same over 24 frames. Something seems fishy here... any insight? Here's a pastebin link to the code: http://pastebin.com/ht94FWva (sorry for the link away from SO, I just didn't want to have the code duplicated in several places) EDIT: Also, I've set up a custom log callback for ffmpeg to use and I have the log level set to "Verbose" (libavutil/log.h), so libavcodec should be logging any goofs to the console, but avcodec is quiet throughout the whole operation. (Note: I did test to make sure my log callback was working.)

    Read the article

  • FtpWebResponse and StreamReader - specifying an offset

    - by AJ
    Hi, I am using the FtpWebRequest / FtpWebResponse objects in C# to download files from a server - so far, so good. I create a StreamReader object from the response stream and use a StreamWriter to create a local file. Now, the file I am reading happens to be in a very simple 'archive' format - there is a small TOC at the start of the file followed by the actual file data. I can therefore read the TOC and get a file offset and size of the data I want to download. My question is: Supposing the offset is 1024. I would use StreamReader.Read(buffer, 1024, length), but will .NET and the FTP protocol actually allow me to skip bytes 0-1023, or does the reader still go through the (relatively) slow process of downloading and discarding the bytes I don't need? This may make the difference between whether I want to use a single archive file, or a TOC file with the data files stored separately. As a bit of a secondary question, would my mileage vary using the Http classes instead of Ftp? Cheers, Adam
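
    Two notes that may help frame this. First, the 1024 in StreamReader.Read(buffer, 1024, length) is an offset into the destination buffer, not into the remote file, so by itself it would not skip any network traffic. Second, the FTP protocol does support starting a transfer at an arbitrary offset via the REST command (if memory serves, FtpWebRequest exposes this as the ContentOffset property). A minimal Python ftplib sketch of the REST mechanism, with placeholder host, credentials and path:

      from ftplib import FTP

      ftp = FTP("ftp.example.com")
      ftp.login("user", "password")

      chunks = []
      # rest=1024 makes ftplib issue "REST 1024" before RETR, so the server
      # starts sending at byte 1024 instead of byte 0.
      ftp.retrbinary("RETR archive.bin", chunks.append, blocksize=8192, rest=1024)
      data = b"".join(chunks)
      ftp.quit()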

    Read the article

  • PHP Fatal error on line number that doesn't exist

    - by alexantd
    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 523800 bytes) in /Library/WebServer/Documents/XMLDataStore.class.php on line 981 The curious thing about this error is not the memory leak, which would be easy enough to troubleshoot. Rather, it is the fact that XMLDataStore.class.php is only 850 lines long, which I have verified in multiple text editors. This is with the PHP 5.3 bundled with Snow Leopard. I'm not using an opcode cache. Here is my php.ini: allow_url_fopen = Off error_reporting = -1 display_errors = 1 display_startup_errors = 1 date.timezone = 'America/Los_Angeles' output_buffering = Off realpath_cache_size = 0k XMLDataStore.class.php has recently been refactored and it used to be longer than 981 lines. It's almost as if PHP has cached a 2-week-old version and is reading that. I'm positive that the current version at /Library/WebServer/Documents/XMLDataStore.class.php is only 850 lines long, though.

    Read the article

  • delete & new in c++

    - by singh
    Hi, this may be a very simple question, but please help me. I want to know what exactly happens when I call new and delete. For example, in the code below: char * ptr=new char [10]; delete [] ptr; The call to new returns a memory address. Does it allocate exactly 10 bytes on the heap, and where is the information about the size stored? When I call delete on the same pointer, I can see in the debugger that quite a few bytes change before and after the 10 bytes. Is there a header for each new allocation that contains information about the number of bytes allocated by new? Thanks a lot.

    Read the article

  • Is it good practice to initialize array in C/C++?

    - by sand
    I recently encountered a case where I needed to compare two files (golden and expected) to verify test results, and even though the data written to both files was the same, the files did not match. On further investigation, I found a structure containing some integers and a 64-byte char array; in most cases not all bytes of the char array were used, and the unused part of the array contained random data, which caused the mismatch. This brought me to the question: is it good practice to initialize arrays in C/C++ as well, as is done in Java?

    Read the article

  • Which of FILE* or ifstream has better memory usage?

    - by Viet
    I need to read a fixed number of bytes at a time from files whose sizes are around 50MB - to be precise, a frame from YUV 4:2:0 CIF/QCIF files (~25KB to ~100KB per frame). Not a huge amount, but I don't want the whole file to be in memory. I'm using C++; in such a case, which of FILE* or ifstream has better (less/minimal) memory usage? Please kindly advise. Thanks! EDIT: I read a fixed number of bytes: 25KB or 100KB (depending on QCIF/CIF format). The reading is in binary mode and forward-only. No seeking needed. No writing needed, only reading. EDIT: If identifying the better of the two is hard, which one does not require loading the whole file into memory?

    Read the article

  • Change HttpContext.Request.InputStream

    - by user320478
    I am getting a lot of errors for HttpRequestValidationException in my event log. Is it possible to HTMLEncode all the inputs from an override of ProcessRequest on the web page? I have tried this, but it always gives context.Request.InputStream.CanWrite == false. Is there any way to HTMLEncode all the fields when a request is made? public override void ProcessRequest(HttpContext context) { if (context.Request.InputStream.CanRead) { IEnumerator en = HttpContext.Current.Request.Form.GetEnumerator(); while (en.MoveNext()) { //Response.Write(Server.HtmlEncode(en.Current + " = " + //HttpContext.Current.Request.Form[(string)en.Current])); } long nLen = context.Request.InputStream.Length; if (nLen > 0) { string strInputStream = string.Empty; context.Request.InputStream.Position = 0; byte[] bytes = new byte[nLen]; context.Request.InputStream.Read(bytes, 0, Convert.ToInt32(nLen)); strInputStream = Encoding.Default.GetString(bytes); if (!string.IsNullOrEmpty(strInputStream)) { List<string> stream = strInputStream.Split('&').ToList<string>(); Dictionary<int, string> data = new Dictionary<int, string>(); if (stream != null && stream.Count > 0) { int index = 0; foreach (string str in stream) { if (str.Length > 3 && str.Substring(0, 3) == "txt") { string textBoxData = str; string temp = Server.HtmlEncode(str); //stream[index] = temp; data.Add(index, temp); index++; } } if (data.Count > 0) { List<string> streamNew = stream; foreach (KeyValuePair<int, string> kvp in data) { streamNew[kvp.Key] = kvp.Value; } string newStream = string.Join("", streamNew.ToArray()); byte[] bytesNew = Encoding.Default.GetBytes(newStream); if (context.Request.InputStream.CanWrite) { context.Request.InputStream.Flush(); context.Request.InputStream.Position = 0; context.Request.InputStream.Write(bytesNew, 0, bytesNew.Length); //Request.InputStream.Close(); //Request.InputStream.Dispose(); } } } } } } base.ProcessRequest(context); }

    Read the article

  • Do you bother to write a pretty error page?

    - by Chacha102
    So, everyone is used to the errors that PHP gives you. They look kind of like this: Fatal error: Allowed memory size of 8388608 bytes exhausted (tried to allocate 2 bytes) in /path/to/file(437) on line 21 My question is: do you put in the time to make your error pages more useful? I find that I am able to debug a lot faster using my own error page, which I find much better than the default PHP errors because it gives me a stack trace, the usual error message, the actual location of the error, and more. Also, are there any downsides to creating your own development error pages? Obviously you wouldn't want a user to see such a page, but what about during development?

    Read the article

  • What's causing "Unable to retrieve native address from ByteBuffer object"?

    - by r0u1i
    As a very novice Java programmer, I probably shouldn't mess with this kind of thing. Unfortunately, I'm using a library which has a method that accepts a ByteBuffer object and throws when I try to use it: Exception in thread "main" java.lang.NullPointerException: Unable to retrieve native address from ByteBuffer object Is it because I'm not using a direct buffer? Edit: There's not a lot of my code involved. The library I'm using is jNetPcap, and I'm trying to dump a packet to a file. My code takes an existing packet and extracts a ByteBuffer out of it: byte[] bytes = m_packet.getByteArray(0, m_packet.size()); ByteBuffer buffer = ByteBuffer.wrap(bytes); Then it calls one of the dump methods of jNetPcap that takes a ByteBuffer.

    Read the article

  • HttpSessionState Where, How, Advantages?

    - by blgnklc
    The code below shows how I use the session variable. The three questions are: 1- Where are session variables stored? (Server or client side?) 2- Are they unique for each web page visitor? 3- Can I remove one using AJAX or simple JS code when I'm done with it, or will it be removed automatically? sbyte[][] arrImages = svc.getImagesForFields(new String[] { "CustomerName", "CustomerSurName" }); Dictionary<string, byte[]> smartImageData = new Dictionary<string, byte[]>(); int i = 0; foreach (sbyte[] bytes in arrImages) { smartImageData.Add(fieldNames[i], ConvertToByte(bytes)); i++; } Session.Add("SmartImageData", smartImageData);

    Read the article

  • python len calculation

    - by n00bz0r
    I'm currently trying to build a RDP client in python and I came across the following issue with a len check; From: http://msdn.microsoft.com/en-us/library/cc240836%28v=prot.10%29.aspx "81 2a - ConnectData::connectPDU length = 298 bytes Since the most significant bit of the first byte (0x81) is set to 1 and the following bit is set to 0, the length is given by the low six bits of the first byte and the second byte. Hence, the value is 0x12a, which is 298 bytes." This sounds weird. For normal len checks, I'm simply using : struct.pack("h",len(str(PacketLen))) but in this case, I really don't see how I can calculate the len as described above. Any help on this would be greatly appreciated !
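
    struct.pack alone will not produce this encoding, because the length field is variable-width. The rule quoted from MSDN (it matches the ASN.1 PER length determinant) can be sketched directly in Python; the example below handles only the one-byte and two-byte forms described in the quote.

      def decode_length(data):
          """Return (length, header_bytes) for the 1- and 2-byte forms."""
          first = data[0]
          if first & 0x80 == 0:                    # 0xxxxxxx: length fits in 7 bits
              return first, 1
          if first & 0xC0 == 0x80:                 # 10xxxxxx xxxxxxxx: 14-bit length
              return ((first & 0x3F) << 8) | data[1], 2
          raise ValueError("longer forms not handled in this sketch")

      def encode_length(n):
          """Inverse of decode_length for values up to 14 bits."""
          if n < 0x80:
              return bytes([n])
          return bytes([0x80 | (n >> 8), n & 0xFF])

      print(decode_length(bytes([0x81, 0x2A])))    # -> (298, 2)
      print(encode_length(298).hex())              # -> '812a'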

    Read the article

  • Table character encoding - exception in application

    - by zgnilec
    I have this code: CREATE TABLE IF NOT EXISTS Person ( name varchar(24) ... ) CHARACTER SET utf8 COLLATE utf8_polish_ci; This works OK in my application, but I read that if someone puts a string containing a character whose code is greater than 127 into the name field, the database will use 2 bytes (or more) to store that character. So I thought I would change the character set to utf16: CHARACTER SET utf16 COLLATE utf16_polish_ci; But now when I run my application, an exception appears: KeyNotFoundException. It appears exactly at these instructions: MySqlCommand komenda = baza.Polaczenie.CreateCommand (); komenda.CommandText = zapytanie; MySqlDataReader dr = komenda.ExecuteReader (); // HERE, at the ExecuteReader method if (dr.Read ()) ... 1) Has anyone had a similar problem? 2) Any idea how to always use 2 bytes/char in a database field?

    Read the article

  • Where's the rest of the space used in this table?

    - by Eric H.
    I'm using SQL Server 2005. I have a table whose row size should be 124 bytes. It's all ints or floats, no NULL columns (so everything is fixed width). There is only one index, clustered. The fill factor is 0. After inserting a ton of data, sp_spaceused returns the following name rows reserved data index_size unused OHLC_Bar_Trl 117076054 29807664 KB 29711624 KB 92344 KB 3696 KB which shows a rowsize of approx (29807664*1024)/117076054 = 260 bytes/row. Where's the rest of the space? Is there some DBCC command I need to run to tighten up this table (I could not insert the rows in correct index order, so maybe it's just internal fragmentation)?

    Read the article

  • Can I use part of MD5 hash for data identification?

    - by sharptooth
    I use an MD5 hash for identifying files of unknown origin. There's no attacker here, so I don't care that MD5 has been broken and that collisions can be generated intentionally. My problem is that I need to provide logging so that different problems are easier to diagnose. If I log every hash as a full hex string, it's too long, inconvenient, and looks ugly, so I'd like to shorten the hash string. Now, I know that just taking a small part of a GUID is a very bad idea - GUIDs are designed to be unique as a whole, but parts of them are not. Is the same true for MD5 - can I take, say, the first 4 bytes of the MD5 and assume that the only consequence is a higher collision probability due to the reduced number of bytes?
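
    For what it's worth, a truncated cryptographic hash behaves differently from a truncated GUID: for non-adversarial inputs every bit of an MD5 digest is essentially uniformly distributed, so keeping only the first 4 bytes gives a 32-bit identifier whose collision risk follows the usual birthday estimate. A small, purely illustrative sketch:

      import hashlib

      def short_id(data, nbytes=4):
          """First nbytes of the MD5 digest as hex (8 characters for 4 bytes)."""
          return hashlib.md5(data).hexdigest()[: nbytes * 2]

      print(short_id(b"example file contents"))

      # Rough birthday-bound estimate of seeing at least one collision among
      # n items when only 4 bytes (2**32 possible values) of the hash are kept.
      n = 10_000
      pairs = n * (n - 1) / 2
      p_collision = 1 - (1 - 1 / 2**32) ** pairs
      print(p_collision)   # roughly 0.012, i.e. about a 1% chance for 10,000 items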

    Read the article
