Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.


  • Double buffering C#

    - by LmSNe
    Hey, I'm trying to implement the following method:

        void Ball::DrawOn(Graphics g);

    The method should draw all previous locations of the ball (stored in a queue) and finally the current location. I don't know if it matters, but I draw the previous locations using g.DrawEllipse(...) and the current location using g.FillEllipse(...).

    The question is that, as you can imagine, there is a lot of drawing to be done, so the display starts to flicker badly. I searched for a way to double buffer, but all I could find were these two approaches:

        1) System.Windows.Forms.Control.DoubleBuffered = true;

        2) SetStyle(ControlStyles.DoubleBuffer | ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint, true);

    When I try the first, I get an error explaining that the DoubleBuffered property is inaccessible from this method due to its protection level, and I can't figure out how to call the SetStyle method from here. Is it possible to double buffer at all when the only access I have is to the Graphics object I get as input to the method? Thanks in advance,

    Read the article

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details:

    - I have root access
    - I am the only user accessing the server
    - Mostly interested in InnoDB performance
    - The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%')

    What I want to do is create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Until now I have been using SQL_NO_CACHE, but sometimes the results of such tests still show caching behaviour - taking much longer to execute on the first run and less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I believe it might be due to the file system cache and/or caching of the indexes used to execute the query, as this post explains. It is not clear to me when the buffer pool and key buffer get invalidated or how they might interfere with testing.

    So, short of restarting the MySQL server, how would you recommend setting up an environment that is reliable for determining whether one query performs better than another?

    Read the article

  • can I disable the "(Type e to repeat macro)" message in emacs?

    - by lindes
    Hi there,

    So, I've finally made the plunge, and have gotten to the state where I'm quite happy to have switched from vi and vim to emacs... I've been putting stuff in my .emacs file, learning how to evaluate things (not to mention becoming familiar with movement commands), etc. etc. etc. And now I have a problem with a require line in my .emacs file (a require statement*), which bombs out when I launch emacs (and generally fails to work).

    In the process of trying to debug this, one of the steps I took was to open the file I was trying to require and evaluate it bit by bit, using C-M-f and C-x C-e (and later just M-x eval-buffer), which all worked fine. But along the way of that section-by-section evaluation, I got tired of typing all those keys, so I recorded a keyboard macro... C-x ( C-M-f C-x C-e C-x ) and then C-x e... which gave me a message in the minibuffer (I think I'm using the right name) saying (Type e to repeat macro). Which meant I could no longer see the resultant value of the evaluation of each section of code... which, while not critical in this case, I was liking having.

    Which leads me to the actual question: is there a way to disable that message, and/or to cause the minibuffer to show multiple lines at once? I know about the *Messages* buffer, and that could have helped; I'm just wondering if there's a way to either disable that message or otherwise make it coexist with other messages. Any suggestions?

    Thanks!
    lindes

    * - the problem at hand, which is not really my question, is that (require 'ruby-mode/ruby-mode) fails, even though emacs is definitely and successfully (per system call tracing) opening and reading the ruby-mode.el file. I presume this is because the provide line says just 'ruby-mode. I've found a solution for this, but if anyone can point me to any "best practices", I'd appreciate it.

    Read the article

  • Can I use POSIX signals in my Perl program to create event-driven programming?

    - by Shiftbit
    Is there any POSIX signal I could use in my Perl program to create event-driven programming? Currently I have a multi-process program whose processes can communicate with each other, but the parent is only able to listen to one child at a time:

        foreach (@proc) {
            sysread(${$_}{'read'}, my $line, 100);   # problem here
            chomp($line);
            print "Parent hears: $line\n";
        }

    The problem is that the parent sits in a continual wait state until it receives a signal from the first child before it can continue on. I am relying on pipe for my intercommunication. My current solution is very similar to: http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl. If possible I would like to rely on a $SIG{...} event or any non-CPAN solution.

    Update: As Jonathan Leffler mentioned, kill can be used to send a signal:

        kill USR1 => $$;   # send myself a SIGUSR1

    My solution will be to have the child send a USR1 signal to the parent; this event tells the parent to listen to that particular child.

    child:

        kill USR1 => $parentPID if ($customEvent);
        syswrite($parentPipe, $msg, $buffer);
        # select $parentPipe; print $parentPipe $msg;

    parent:

        $SIG{USR1} = sub {
            # get child pid?
            sysread($array[$pid]{'childPipe'}, $msg, $buffer);
        };

    But how do I get the source/child pid that signaled the parent? Should I have the child identify itself in its message? And what happens if two children signal USR1 at the same time?

    Read the article

  • ERROR_MORE_DATA ---- Reading from Registry

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find more information on offreg.dll here: MSDN.

    Currently, while attempting to read a value from an open registry hive/key, I receive the following error: 234, or ERROR_MORE_DATA.

    Here is the .h declaration of ORGetValue:

        DWORD ORAPI ORGetValue (
            __in ORHKEY Handle,
            __in_opt PCWSTR lpSubKey,
            __in_opt PCWSTR lpValue,
            __out_opt PDWORD pdwType,
            __out_bcount_opt(*pcbData) PVOID pvData,
            __inout_opt PDWORD pcbData
            );

    Here is the code that I am using to pull the data:

        [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue",
                   SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue,
                                             out uint pdwType, out string pvData, out uint pcbData);

        IntPtr myHive;
        IntPtr myKey;
        string myValue;
        uint pdwtype;
        uint pcbdata;
        uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata);

    The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling... or a second call with an adjusted buffer... or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.
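
    For reference, against the native API the usual ERROR_MORE_DATA pattern is two calls: one to learn the required size, one with a buffer that large. A minimal C-style sketch, assuming ORGetValue follows the same convention as RegQueryValueEx (NULL data pointer on the first call) and that the value is a REG_SZ:

        DWORD type = 0;
        DWORD cbData = 0;
        // first call: pvData is NULL, so only the required byte count comes back
        DWORD rc = ORGetValue(myKey, NULL, L"DefaultUserName", &type, NULL, &cbData);
        if ((rc == ERROR_SUCCESS || rc == ERROR_MORE_DATA) && cbData > 0) {
            std::vector<BYTE> data(cbData);
            // second call: hand over a buffer of exactly cbData bytes
            rc = ORGetValue(myKey, NULL, L"DefaultUserName", &type, &data[0], &cbData);
            if (rc == ERROR_SUCCESS && type == REG_SZ) {
                // REG_SZ data is a NUL-terminated wide string
                std::wstring value(reinterpret_cast<const wchar_t*>(&data[0]));
            }
        }

    On the C# side, the usual equivalent is to declare pvData as a byte[] (or IntPtr) rather than out string, make the same two calls, and decode the bytes with Encoding.Unicode afterwards.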

    Read the article

  • Verifying the signature of an x509 certificate

    - by sid
    Hi All, While verifying the certificate I am getting EVP_F_EVP_PKEY_GET1_DH. My aim: verify the certificate signature. I have two certificates: 1. a CA certificate, 2. a certificate issued by that CA.

    I extracted the modulus of the 'RSA Public Key' (key) from the CA certificate using:

        pPublicKey = X509_get_pubkey(x509);
        buf_len = (size_t) BN_num_bytes (bn);
        key = (unsigned char *)malloc (buf_len);
        n = BN_bn2bin (bn, (unsigned char *) key);
        if (n != buf_len)
            LOG(ERROR," : key error\n");
        if (key[0] & 0x80)
            LOG(DEBUG, "00\n");

    Now I have the CA public key and CA key length, and I also have the certificate issued by the CA in a buffer, along with its length and public key. To verify the signature, I have the following code:

        int iRet1, iRet2, iRet3, iReason;
        iRet1 = EVP_VerifyInit(&md_ctx, EVP_sha1());
        iRet2 = EVP_VerifyUpdate(&md_ctx, buf, buflen);
        iRet3 = EVP_VerifyFinal(&md_ctx, (const unsigned char *)CAkey, CAkeyLen, pubkey);
        iReason = ERR_get_error();
        if(ERR_GET_REASON(iReason) == EVP_F_EVP_PKEY_GET1_DH) {
            LOG(ERROR, "EVP_F_EVP_PKEY_GET1_DH\n");
        }
        LOG(INFO,"EVP_VerifyInit returned %d : EVP_VerifyUpdate returned %d : EVP_VerifyFinal = %d \n", iRet1, iRet2, iRet3);
        EVP_MD_CTX_cleanup(&md_ctx);
        EVP_PKEY_free(pubkey);
        if (iRet3 != 1) {
            LOG(ERROR,"EVP_VerifyFinal() failed\n");
            ret = -1;
        }
        LOG(INFO,"signature is valid\n");

    I am unable to figure out what might have gone wrong. Has anybody faced the same issue? What does the EVP_F_EVP_PKEY_GET1_DH error mean? Thanks in advance - opensid
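
    A note on the arguments, in case it helps: EVP_VerifyFinal() expects the signature bytes as its second argument and the signer's key as an EVP_PKEY, not the raw CA modulus. If the goal is simply "does the CA's key validate this certificate's signature", OpenSSL can do the hashing and padding itself. A minimal sketch, assuming caCert and cert are the already-loaded X509* handles from the question and reusing its LOG macro:

        EVP_PKEY* caKey = X509_get_pubkey(caCert);     // the CA's public key as an EVP_PKEY
        if (caKey == NULL) {
            LOG(ERROR, "could not extract CA public key\n");
        } else {
            int ok = X509_verify(cert, caKey);         // hashes the TBS part and checks the signature
            if (ok == 1)
                LOG(INFO, "signature is valid\n");
            else
                LOG(ERROR, "signature check failed: %s\n",
                    ERR_error_string(ERR_get_error(), NULL));
            EVP_PKEY_free(caKey);
        }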

    Read the article

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison, and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like:

        import std.*;
        load( "some_script.hy" );
        eval( "foo = 123;" );
        println( foo );

    So, in my lexer I've implemented the function:

        void hyb_parse_string( const char *str ){
            extern int yyparse(void);
            YY_BUFFER_STATE prev, next;
            /* Save current buffer. */
            prev = YY_CURRENT_BUFFER;
            /* yy_scan_string will call yy_switch_to_buffer. */
            next = yy_scan_string( str );
            /* Do actual parsing (yyparse calls yylex). */
            yyparse();
            /* Restore previous buffer. */
            yy_switch_to_buffer(prev);
        }

    But it does not seem to work. Well, it does, but when the string (loaded from a file or evaluated directly) is finished, I get a SIGSEGV:

        Program received signal SIGSEGV, Segmentation fault.
        0xb7f2b801 in yylex () at src/lexer.cpp:1658
        1658    if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW )

    As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... any hints, or at least any example of how to implement these kinds of functions?

    PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (kind of like PHP's include/require directives).
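
    For comparison, the buffer-juggling pattern documented in the flex manual releases the string buffer with yy_delete_buffer() and guards against a missing previous buffer; whether that is the cause of this particular crash is a guess, but the shape of it looks roughly like this sketch:

        void hyb_parse_string( const char *str ){
            extern int yyparse(void);
            /* remember whatever buffer is current (may be NULL if nothing is being scanned) */
            YY_BUFFER_STATE prev = YY_CURRENT_BUFFER;
            /* yy_scan_string creates a new buffer and switches to it */
            YY_BUFFER_STATE next = yy_scan_string( str );
            yyparse();
            /* the string buffer must be freed explicitly */
            yy_delete_buffer( next );
            /* only switch back if there really was a previous buffer */
            if( prev )
                yy_switch_to_buffer( prev );
        }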

    Read the article

  • Asp.net membership salt?

    - by chobo2
    Hi, does anyone know how ASP.NET membership generates its salt key and how it then encodes it (i.e. is it salt + password or password + salt)? I am using SHA1 with my membership, but I would like to recreate the same salts so the built-in membership code hashes things the same way mine does. Thanks

    Edit 2: Never mind, I misread it and was thinking it said bytes, not bits. So I was passing in 128 bytes, not 128 bits.

    Edit: I've been trying to make it work, and this is what I have:

        public string EncodePassword(string password, string salt)
        {
            byte[] bytes = Encoding.Unicode.GetBytes(password);
            byte[] src = Encoding.Unicode.GetBytes(salt);
            byte[] dst = new byte[src.Length + bytes.Length];
            Buffer.BlockCopy(src, 0, dst, 0, src.Length);
            Buffer.BlockCopy(bytes, 0, dst, src.Length, bytes.Length);
            HashAlgorithm algorithm = HashAlgorithm.Create("SHA1");
            byte[] inArray = algorithm.ComputeHash(dst);
            return Convert.ToBase64String(inArray);
        }

        private byte[] createSalt(byte[] saltSize)
        {
            byte[] saltBytes = saltSize;
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            rng.GetNonZeroBytes(saltBytes);
            return saltBytes;
        }

    I have not yet tried whether the ASP.NET membership will recognize this, but the hashed password looks close. I just don't know how to convert the salt to base64. I did this:

        byte[] storeSalt = createSalt(new byte[128]);
        string salt = Encoding.Unicode.GetString(storeSalt);
        string base64Salt = Convert.ToBase64String(storeSalt);
        int test = base64Salt.Length;

    The test length is 172, which is well over 128 bits, so what am I doing wrong? This is what their salt looks like:

        vkNj4EvbEPbk1HHW+K8y/A==

    This is what my salt looks like:

        E9oEtqo0livLke9+csUkf2AOLzFsOvhkB/NocSQm33aySyNOphplx9yH2bgsHoEeR/aw/pMe4SkeDvNVfnemoB4PDNRUB9drFhzXOW5jypF9NQmBZaJDvJ+uK3mPXsWkEcxANn9mdRzYCEYCaVhgAZ5oQRnnT721mbFKpfc4kpI=

    Read the article

  • How do I install the OpenSSL C++ library on Ubuntu?

    - by Daryl Spitzer
    I'm trying to build some code on Ubuntu 10.04 LTS that uses OpenSSL 1.0.0. When I run make, it invokes g++ with the "-lssl" option. The source includes:

        #include <openssl/bio.h>
        #include <openssl/buffer.h>
        #include <openssl/des.h>
        #include <openssl/evp.h>
        #include <openssl/pem.h>
        #include <openssl/rsa.h>

    I ran:

        $ sudo apt-get install openssl
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        openssl is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.

    But I guess the openssl package doesn't include the library. I get these errors on make:

        foo.cpp:21:25: error: openssl/bio.h: No such file or directory
        foo.cpp:22:28: error: openssl/buffer.h: No such file or directory
        foo.cpp:23:25: error: openssl/des.h: No such file or directory
        foo.cpp:24:25: error: openssl/evp.h: No such file or directory
        foo.cpp:25:25: error: openssl/pem.h: No such file or directory
        foo.cpp:26:25: error: openssl/rsa.h: No such file or directory

    How do I install the OpenSSL C++ library on Ubuntu 10.04 LTS? I did a man g++ and (under "Options for Linking") the entry for the -l option states: "The linker searches a standard list of directories for the library..." and "The directories searched include several standard system directories..." What are those standard system directories?

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me - but it is true, I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so the teacher told me I have to do it like this.

    EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it.

    So here, shortly, is the pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created; I understood that with MapReduce it is difficult to write to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to actually doing it, obviously I do not know enough and I am STUCK! Please, please help.

    Read the article

  • Socket.Receive Failing When Multithreaded

    - by Qua
    The following piece of code runs fine when parallelized across 4-5 threads, but starts to fail as the number of threads increases somewhere beyond 10 concurrent threads:

        int totalRecieved = 0;
        int recieved;
        StringBuilder contentSB = new StringBuilder(4000);
        while ((recieved = socket.Receive(buffer, SocketFlags.None)) > 0)
        {
            contentSB.Append(Encoding.ASCII.GetString(buffer, 0, recieved));
            totalRecieved += recieved;
        }

    The Receive method returns with zero bytes read, and if I continue calling the Receive method then I eventually get an 'An established connection was aborted by the software in your host machine' exception. So I'm assuming that the host actually sent data and then closed the connection, but for some reason I never received it.

    I'm curious as to why this problem arises when there are a lot of threads. I'm thinking it must have something to do with the fact that each thread doesn't get as much execution time, and therefore there is some idle time for the threads, which causes this error. I just can't figure out why idle time would cause the socket not to receive any data.

    Read the article

  • x509 certificate verification in C

    - by sid
    Hi All, I have certificates in DER and PEM format. My goal is to retrieve the Issuer and Subject fields, and to verify the certificate with the CA public key while simultaneously verifying the CA certificate with the root public key. I am able to retrieve all the details of issuer and subject, but I am unable to verify the certificate. Please help. The APIs used:

        x509 = d2i_X509_fp (fp, &x509);                  // READING DER format
        x509 = PEM_read_X509 (fp, &x509, NULL, NULL);    // READING PEM format

        X509_NAME_oneline(X509_get_subject_name(x509), subject, sizeof (subject)); // to retrieve the Subject
        X509_NAME_oneline(X509_get_issuer_name(x509), issuer, sizeof (issuer));    // to retrieve the Issuer

        // to store the CA public key (in unsigned char *key) that will be used to verify
        // the certificate (in my case always sha1WithRSAEncryption)
        RSA *x = X509_get_pubkey(x509)->pkey.rsa;
        bn = x->n;
        // extract the bytes from the public key & convert into an unsigned char buffer
        buf_len = (size_t) BN_num_bytes (bn);
        stored_CA_pubKey = (unsigned char *)malloc (buf_len);
        i_n = BN_bn2bin (bn, (unsigned char *)stored_CA_pubKey);
        if (i_n != buf_len)
            LOG(ERROR," : key error\n");
        if (key[0] & 0x80)
            LOG(DEBUG, "00\n");
        stored_CA_pubKeyLen = EVP_PKEY_size(X509_get_pubkey(x509));

    For verification I went through different approaches but was unable to verify:

        a) i_x509_verify = X509_verify(cert_x509, ca_pubkey);

        b) /* verify the signature */
           int iRet1, iRet2, iReason;
           iRet1 = EVP_VerifyInit(&md_ctx, EVP_sha1());
           iRet2 = EVP_VerifyUpdate(&md_ctx, cert_code, cert_code_len);
           rv = EVP_VerifyFinal(&md_ctx, (const unsigned char *)stored_CA_pubKey, stored_CA_pubKeyLen, cert_pubkey);

    NOTE: cert_code & stored_CA_pubKey are unsigned char buffers. Thanks in advance
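
    Since the goal is to check the issued certificate against the CA and the CA against the root in one go, another option is to hand the whole problem to a certificate store and let X509_verify_cert() walk the chain. A minimal sketch, assuming rootCert, caCert and cert are already-loaded X509* handles and reusing the question's LOG macro:

        // build a store that contains the trusted root and intermediate CA
        X509_STORE* store = X509_STORE_new();
        X509_STORE_add_cert(store, rootCert);
        X509_STORE_add_cert(store, caCert);

        // verify the end-entity certificate against the store
        X509_STORE_CTX* ctx = X509_STORE_CTX_new();
        X509_STORE_CTX_init(ctx, store, cert, NULL);   // NULL: no extra untrusted chain

        if (X509_verify_cert(ctx) != 1)
            LOG(ERROR, "verify failed: %s\n",
                X509_verify_cert_error_string(X509_STORE_CTX_get_error(ctx)));
        else
            LOG(INFO, "certificate chain is valid\n");

        X509_STORE_CTX_free(ctx);
        X509_STORE_free(store);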

    Read the article

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet:

        int temp;
        vector<int> origins;
        vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array

        // original loop
        BOOST_FOREACH(string s, originTokens)
        {
            from_string(temp, s);
            origins.push_back(temp);
        }

        // I'd like to use this to replace the above loop
        std::transform(originTokens.begin(), originTokens.end(), origins.begin(),
                       boost::bind<int>(&FromString<int>, boost::ref(temp), _1));

    where the function in question is:

        // the third parameter should be one of std::hex, std::dec or std::oct
        template <class T>
        bool FromString(T& t, const std::string& s,
                        std::ios_base& (*f)(std::ios_base&) = std::dec)
        {
            std::istringstream iss(s);
            return !(iss >> f >> t).fail();
        }

    The error I get is:

        1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment)
        1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919)
        1>
        1>    return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]);
        1>    ^
        1>
        1>icl: error #10298: problem during post processing of parallel object compilation

    Google is being unusually unhelpful, so I hope that someone here can provide some insights.
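
    One thing that may be relevant, independent of the compiler's internal error: once &FromString<int> is taken, the default std::dec argument is no longer available (defaults are not part of the function type), so boost::bind would have to be given all three arguments explicitly - and the bound call would then return the bool status rather than the parsed value, which is not what origins should receive. A sketch of a simpler route, using a hypothetical helper that returns the value itself (std::back_inserter also avoids writing through origins.begin() on an empty vector):

        #include <algorithm>
        #include <iterator>
        #include <sstream>
        #include <string>
        #include <vector>
        #include <boost/bind.hpp>

        // hypothetical helper: reports the parsed value instead of a success flag
        int ToInt(const std::string& s)
        {
            int value = 0;
            std::istringstream iss(s);
            iss >> std::dec >> value;   // a failed parse simply leaves value at 0
            return value;
        }

        // ... then, in place of the loop:
        std::transform(originTokens.begin(), originTokens.end(),
                       std::back_inserter(origins),    // grows the vector as it goes
                       boost::bind(&ToInt, _1));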

    Read the article

  • Projective transformation

    - by mcwehner
    Given two image buffers (assume each is an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called "projective transformation".

    I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me".

    An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
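
    To make the idea concrete, here is a small, dependency-free C++ sketch of one common route: the closed-form "unit square to quadrilateral" projective mapping, used inverse-style - for every pixel of the square destination, compute the corresponding point inside the source quad and copy the nearest source pixel. The names and the corner ordering (q[0] maps to the square's (0,0) corner, then q[1]=(1,0), q[2]=(1,1), q[3]=(0,1)) are assumptions made for the example:

        #include <cstddef>
        #include <vector>

        struct Pt { double x, y; };

        // Projective map from the unit square onto the quad q[0..3].
        struct SquareToQuad {
            double a, b, c, d, e, f, g, h;

            explicit SquareToQuad(const Pt q[4]) {
                double dx1 = q[1].x - q[2].x, dx2 = q[3].x - q[2].x;
                double dy1 = q[1].y - q[2].y, dy2 = q[3].y - q[2].y;
                double sx  = q[0].x - q[1].x + q[2].x - q[3].x;
                double sy  = q[0].y - q[1].y + q[2].y - q[3].y;
                if (sx == 0.0 && sy == 0.0) {          // the quad is a parallelogram: affine case
                    g = h = 0.0;
                    a = q[1].x - q[0].x; b = q[2].x - q[1].x; c = q[0].x;
                    d = q[1].y - q[0].y; e = q[2].y - q[1].y; f = q[0].y;
                } else {                               // genuinely projective case
                    double den = dx1 * dy2 - dx2 * dy1;
                    g = (sx * dy2 - dx2 * sy) / den;
                    h = (dx1 * sy - sx * dy1) / den;
                    a = q[1].x - q[0].x + g * q[1].x;
                    b = q[3].x - q[0].x + h * q[3].x;
                    c = q[0].x;
                    d = q[1].y - q[0].y + g * q[1].y;
                    e = q[3].y - q[0].y + h * q[3].y;
                    f = q[0].y;
                }
            }

            Pt map(double u, double v) const {         // (u,v) in [0,1] x [0,1]
                double w = g * u + h * v + 1.0;
                Pt p = { (a * u + b * v + c) / w, (d * u + e * v + f) / w };
                return p;
            }
        };

        // Fills a dstSize x dstSize buffer by sampling the quad-shaped area of src.
        void warpQuadToSquare(const std::vector<int>& src, int srcW, int srcH,
                              const Pt quad[4],
                              std::vector<int>& dst, int dstSize) {
            SquareToQuad m(quad);
            dst.assign(static_cast<std::size_t>(dstSize) * dstSize, 0);
            for (int py = 0; py < dstSize; ++py) {
                for (int px = 0; px < dstSize; ++px) {
                    Pt s = m.map((px + 0.5) / dstSize, (py + 0.5) / dstSize);
                    int sx = static_cast<int>(s.x), sy = static_cast<int>(s.y);
                    if (sx >= 0 && sx < srcW && sy >= 0 && sy < srcH)
                        dst[static_cast<std::size_t>(py) * dstSize + px] =
                            src[static_cast<std::size_t>(sy) * srcW + sx];
                }
            }
        }

    Nearest-neighbour sampling keeps the sketch short; bilinearly interpolating the four surrounding source pixels gives smoother output.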

    Read the article

  • OpenAL not playing on Mac OS X 10.6

    - by Grimless
    I've been working on getting a basic audio engine running on my Mac using OpenAL. It seems relatively straightforward after working with OpenGL for a while. However, despite the fact that I believe I have everything in place, my sound will not play. Here is the order in which I do things:

        // Create a new device
        ALCdevice* device = alcOpenDevice(NULL);

        // Create a new context with the device
        ALCcontext* context = alcCreateContext(device, NULL);

        // Make that context current
        alcMakeContextCurrent(context);

        // Do lots of loading stuff to bring in an AIFF...
        voodooAIFF = myAIFFLoader("name");

        // Then use that data
        ALuint buf;
        alGenBuffers(1, &buf);
        // Check for errors, but none happen...

        // Bind buffer data.
        alBufferData(buf, voodooAIFF.format, voodooAIFF.data,
                     voodooAIFF.sizeInBytes, voodooAIFF.frequency);
        // Check for errors, none here either...

        // Create source
        ALuint src;
        alGenSources(1, &src);
        // Error check again, no errors.

        // Bind source to buffer
        alSourcei(src, AL_BUFFER, buf);

        // Set reference distance
        alSourcei(sourceID, AL_REFERENCE_DISTANCE, 1);

        // Set source attributes including gain and pitch to 1 (direction set to 0,0,0)
        // Check for errors, nothing...

        // Set up listener attributes.
        // Check for errors, no errors.

        // Begin playing.
        alSourcePlay(src);

    Observe silence... Any insight? What steps am I missing here?
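
    One thing worth ruling out that isn't visible in the listing: alSourcePlay() only starts playback asynchronously, so if the program (or whatever scope owns the device and context) tears everything down right after the call, nothing audible ever comes out. A hedged sketch of keeping the source alive until it finishes, and of the shutdown order (usleep comes from <unistd.h>):

        alSourcePlay(src);

        // poll until the source has actually finished playing
        ALint state = AL_PLAYING;
        while (state == AL_PLAYING) {
            alGetSourcei(src, AL_SOURCE_STATE, &state);
            usleep(10 * 1000);                  // don't spin flat out
        }

        // tear down in the reverse order of creation
        alDeleteSources(1, &src);
        alDeleteBuffers(1, &buf);
        alcMakeContextCurrent(NULL);
        alcDestroyContext(context);
        alcCloseDevice(device);

    If it still stays silent after that, the usual remaining suspects are the format/frequency values coming out of the loader and the source position relative to the listener.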

    Read the article

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just command+R on the test or spec file and another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could just continue working with the codebase since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file.

    Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file.

    The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green html results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window.

    This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ The script he has worked with a bit of hacking, but the tests still ran within MacVim and locked up the current window.

    Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!

    Read the article

  • How can I render an in-memory UIViewController's view Landscape?

    - by Aaron
    I'm trying to render an in-memory (but not yet in the hierarchy) UIViewController's view into an in-memory image buffer so I can do some interesting transition animations. However, when I render the UIViewController's view into that buffer, it always renders as though the controller is in portrait orientation, no matter the orientation of the rest of the app. How do I clue this controller in?

    My code in RootViewController looks like this:

        MyUIViewController* controller = [[MyUIViewController alloc] init];
        int width = self.view.frame.size.width;
        int height = self.view.frame.size.height;
        int bitmapBytesPerRow = width * 4;
        unsigned char *offscreenData = calloc(bitmapBytesPerRow * height, sizeof(unsigned char));
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef offscreenContext = CGBitmapContextCreate(offscreenData, width, height, 8,
                                                              bitmapBytesPerRow, colorSpace,
                                                              kCGImageAlphaPremultipliedLast);
        CGContextTranslateCTM(offscreenContext, 0.0f, height);
        CGContextScaleCTM(offscreenContext, 1.0f, -1.0f);
        [(CALayer*)[controller.view layer] renderInContext:offscreenContext];

    At that point, the offscreen memory buffer's contents are portrait-oriented, even when the window is in landscape orientation. Ideas?

    Read the article

  • How to do buffered intersection checks on an IPoint?

    - by Quigrim
    How would I buffer an IPoint to do an intersection check using IRelationalOperator? I have, for argument's sake:

        IPoint p1 = xxx;
        IPoint p2 = yyy;
        IRelationalOperator rel1 = (IRelationalOperator)p1;
        if (rel.Intersects (p2))
            // Do something

    But now I want to add a tolerance to my check, so I assume the right way to do that is by buffering either p1 or p2. Right? How do I add such a buffer?

    Note: the Intersects method I am using is an extension method I wrote to simplify my code. Here it is:

        /// <summary>
        /// Returns true if the IGeometry is intersected.
        /// This method negates the Disjoint method.
        /// </summary>
        /// <param name="relOp">The rel op.</param>
        /// <param name="other">The other.</param>
        /// <returns></returns>
        public static bool Intersects ( this IRelationalOperator relOp, IGeometry other)
        {
            return (!relOp.Disjoint (other));
        }

    Read the article

  • NASM - Load code from USB Drive

    - by new123456
    Hello, would any assembly gurus know the argument (register dl) that signifies the first USB drive? I'm working through a couple of NASM tutorials and would like to get a physical boot (I can get a clean one with qemu). This is the section of code that loads the "kernel" data from disk:

        loadkernel:
            mov si, LMSG     ;; 'Loading kernel',13,10,0
            call prints      ;; ex puts()

            mov dl, 0x00     ;; The disk to load from
            mov ah, 0x02     ;; Read operation
            mov al, 0x01     ;; Sectors to read
            mov ch, 0x00     ;; Track
            mov cl, 0x02     ;; Sector
            mov dh, 0x00     ;; Head
            mov bx, 0x2000   ;; Buffer end
            mov es, bx
            mov bx, 0x0000   ;; Buffer start
            int 0x13
            jc loadkernel
            mov ax, 0x2000
            mov ds, ax
            jmp 0x2000:0x00

    If it makes any difference, I'm running a stock Dell Inspiron 15 BIOS.

    Apparently, the correct value for me is 0x80. The BIOS loads the hard drives and labels them starting at 0x80, according to this answer. My particular BIOS decides to load the USB drive as the first one, for some reason, so I can boot from there.

    Read the article

  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ [it runs on Linux]. It is very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes.

    After executing the program for a few iterations, I noticed that I/O reads block for requests larger than 4K (the typical block size). In fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux seemed to take care of buffering output. So I issue a write(record) instead of maintaining a special buffer of writes and then flushing them out at once using write(records[]). But the performance of the application does not seem to be great.

    How could I improve the performance? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes that already provide this abstraction? (Something like BufferedInputStream in Java.)
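
    If the plan is to stick with ifstream, one knob worth experimenting with is giving the stream a much larger buffer than the default and pulling data in big chunks. A minimal sketch - the file name and the 1 MiB size are placeholders, and note that with libstdc++ the pubsetbuf() call only takes effect if it happens before open():

        #include <fstream>
        #include <vector>

        int main() {
            const std::size_t kBufSize = 1 << 20;              // 1 MiB, tune experimentally
            std::vector<char> streamBuf(kBufSize);

            std::ifstream in;
            in.rdbuf()->pubsetbuf(&streamBuf[0], streamBuf.size());  // must precede open()
            in.open("run_0.dat", std::ios::binary);

            std::vector<char> chunk(kBufSize);
            while (in.read(&chunk[0], chunk.size()) || in.gcount() > 0) {
                std::streamsize got = in.gcount();
                // ... split 'got' bytes into fixed-size records here ...
                (void)got;
            }
            return 0;
        }

    On Linux, posix_fadvise() with POSIX_FADV_SEQUENTIAL is another low-effort hint, though whether it helps depends on the access pattern.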

    Read the article

  • How do I control clipping with non-opaque graphics items in Qt?

    - by JJacobsson
    I have a bunch of QGraphicsSvgItems in a QGraphicsScene that are drawn connected by QGraphicsLineItems. This shows a graph of a tree structure. What I want to do is provide a feature where everything but a selected sub-tree becomes transparent - a kind of "highlight this sub-tree" feature. That part was easy, but the results are ugly because now the lines can be seen through the semi-transparent SVGs.

    I am looking for some way to still clip the other QGraphicsItems in the scene to the SVG items, giving the effect that the SVGs are semi-transparent windows onto the background. I know this code does not use SVGs, but I figure you can replace that yourself if you are so inclined:

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);
            QGraphicsScene scene;

            for( int i = 0; i < 10; ++i )
            {
                QGraphicsLineItem* line = new QGraphicsLineItem;
                line->setLine( i * 25.0 + 1.0, 0, i * 25.0 + 23.0, 0 );
                scene.addItem( line );
            }

            for( int i = 0; i < 11; ++i )
            {
                QGraphicsEllipseItem* ellipse = new QGraphicsEllipseItem;
                ellipse->setRect( (i * 25.0) - 9.0, -9.0, 18.0, 18.0f );
                ellipse->setBrush( QBrush( Qt::green, Qt::SolidPattern ) );
                ellipse->setOpacity( 0.5 );
                scene.addItem( ellipse );
            }

            QGraphicsView view( &scene );
            view.show();
            return app.exec();
        }

    I would like the lines not to be seen behind the circles. I have tried fiddling with the depth buffer and the stencil buffer using OpenGL rendering to no avail. How do I get the QGraphicsSvgItems (or QGraphicsEllipseItems in the example code) to still clip the lines even though they are semi-transparent?
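
    One workaround sketch, in case it is useful: keep the ellipse items fully opaque but give them a brush colour pre-blended toward the window background, so they still read as "faded" while genuinely occluding the lines underneath (the lighter() factor here is an arbitrary choice):

        for( int i = 0; i < 11; ++i )
        {
            QGraphicsEllipseItem* ellipse = new QGraphicsEllipseItem;
            ellipse->setRect( (i * 25.0) - 9.0, -9.0, 18.0, 18.0 );

            // fake 50% opacity by blending the fill colour toward white up front
            QColor faded = QColor( Qt::green ).lighter( 150 );
            ellipse->setBrush( QBrush( faded, Qt::SolidPattern ) );
            ellipse->setPen( QPen( faded ) );

            // note: no setOpacity() call, so the item stays opaque and hides the line
            scene.addItem( ellipse );
        }

    This sidesteps real transparency entirely; if true alpha over a non-uniform background is required, the lines themselves would need to be clipped or shortened so they stop at each circle's edge.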

    Read the article

  • Why Does My Vector<PEVENTLOGRECORD> Mysteriously Get Cleared?

    - by Eric
    Hello everyone, I am making a program that reads and stores data from Windows EventLog files (.evt) in C++. I am using the calls OpenBackupEventLog(ServerName, FileName) and ReadEventLog(...), and also PEVENTLOGRECORD. Anyway, without supplying all of the code, here is the basic idea:

    1. I get a handle to the .evt file using OpenBackupEventLog() and passing in a file name.
    2. I then use ReadEventLog() to fill up a buffer with an unknown number of EventLog messages.
    3. I traverse through the buffer and add each message to a vector.
    4. I keep filling up buffers (repeat steps 2 and 3) until I reach the end of the file.

    Here is my code for filling the vector:

        vector<PEVENTLOGRECORD> allRecords;
        while(_status == ERROR_SUCCESS)
        {
            if(!ReadEventLog(...))
                CheckStatus();
            else
                FillVectorFromBuffer(allRecords)
        }

        // Function FillVectorFromBuffer
        FillVectorFromBuffer(vector<PEVENTLOGRECORD> &allRecords)
        {
            int bytesExamined = 0;
            PBYTE pRecord = (PBYTE)_lpBuffer;     // This is one of the params in ReadEventLog()
            while(bytesExamined < _pnBytesRead)   // Another param from ReadEventLog
            {
                PEVENTLOGRECORD currentRecord = (PEVENTLOGRECORD)(pRecord);
                allRecords.push_back(currentRecord);
                pRecord += currentRecord->Length;
                bytesExamined += currentRecord->Length;
            }
        }

    Anyway, whenever I run this, it gets all the EventLogs in the file, and the vector has everything I want it to. But as soon as the line

        if(!ReadEventLog())

    gets called and returns true (aka ReadEventLog() returns false), every field in my vector gets set to zero. The vector still contains the correct number of elements; it's just that all of the fields in the PEVENTLOGRECORD struct are now zero. Anyone with better debugging experience have any ideas? Thanks.
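
    One pattern that would produce exactly this symptom, offered as a guess since the buffer handling isn't shown: the vector stores PEVENTLOGRECORD pointers that point into _lpBuffer, so every later ReadEventLog() call (including the final failing one) is free to rewrite the memory all of those pointers refer to. A sketch of copying each record's bytes out instead, so the stored records outlive the shared buffer:

        #include <windows.h>
        #include <vector>

        // each element owns a private copy of one EVENTLOGRECORD (header + variable data)
        typedef std::vector< std::vector<BYTE> > RecordList;

        void FillVectorFromBuffer(RecordList& allRecords, LPVOID lpBuffer, DWORD bytesRead)
        {
            PBYTE pRecord = static_cast<PBYTE>(lpBuffer);
            DWORD bytesExamined = 0;
            while (bytesExamined < bytesRead)
            {
                PEVENTLOGRECORD current = reinterpret_cast<PEVENTLOGRECORD>(pRecord);
                // copy the whole record, variable-length tail included
                allRecords.push_back(std::vector<BYTE>(pRecord, pRecord + current->Length));
                pRecord += current->Length;
                bytesExamined += current->Length;
            }
        }

        // later: reinterpret_cast<PEVENTLOGRECORD>(&allRecords[i][0]) to read a stored record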

    Read the article

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins that use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that for the non-file transports, each stream can have multiple readers. I.e., once a server socket is set up, multiple computers can connect and stream the data.

    I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, so some kind of shared memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes.

    I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead for pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to handle significant volumes of data (tens of mega-samples/sec), so performance is a must.
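
    For the in-process case, the "write pointer with per-reader cursors chasing it" idea can stay quite small. Here is a mutex-based sketch in modern C++ - the names are made up, and one lock per operation is the simplicity/throughput trade-off; a version aimed at tens of mega-samples per second would batch samples per call or go lock-free:

        #include <algorithm>
        #include <condition_variable>
        #include <cstddef>
        #include <mutex>
        #include <vector>

        class BroadcastRing {
        public:
            explicit BroadcastRing(std::size_t capacity) : buf_(capacity) {}

            // returns a reader id; each reader starts at the current write position
            std::size_t addReader() {
                std::lock_guard<std::mutex> lk(m_);
                cursors_.push_back(head_);
                return cursors_.size() - 1;
            }

            // blocks while the slowest reader still needs the slot about to be overwritten
            void push(float sample) {
                std::unique_lock<std::mutex> lk(m_);
                full_.wait(lk, [&] { return head_ - slowest() < buf_.size(); });
                buf_[head_ % buf_.size()] = sample;
                ++head_;
                ready_.notify_all();
            }

            // blocks until the writer has produced something this reader hasn't seen
            float pop(std::size_t reader) {
                std::unique_lock<std::mutex> lk(m_);
                ready_.wait(lk, [&] { return cursors_[reader] < head_; });
                float v = buf_[cursors_[reader] % buf_.size()];
                ++cursors_[reader];
                full_.notify_all();
                return v;
            }

        private:
            std::size_t slowest() const {
                std::size_t s = head_;
                for (std::size_t c : cursors_) s = std::min(s, c);
                return s;
            }
            std::vector<float> buf_;
            std::vector<std::size_t> cursors_;
            std::size_t head_ = 0;
            std::mutex m_;
            std::condition_variable ready_, full_;
        };

    Each consumer thread calls addReader() once and then loops on pop(id); batching many samples per push/pop call would amortise the locking overhead considerably.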

    Read the article

  • Java URL connection: wait for data being sent through the OutputStream

    - by Mateu
    I'm writing a Java class that tests the upload speed of a connection to a server. I want to check how much data can be sent in 5 seconds. I've written a class that creates a URL, creates a connection, and sends data through the OutputStream. There is a loop where I write data to the stream for 5 seconds. However, I'm not able to see when the data has actually been sent (I write data to the output stream, but the data is not sent yet). How can I wait until the data has really been sent to the server? Here goes my code (which does not work):

        URL u = new URL(url);
        HttpURLConnection uc = (HttpURLConnection) u.openConnection();
        uc.setDoOutput(true);
        uc.setDoInput(true);
        uc.setUseCaches(false);
        uc.setDefaultUseCaches(false);
        uc.setRequestMethod("POST");
        uc.setRequestProperty("Content-Type", "application/octet-stream");
        uc.connect();
        st.start();

        // Send the request
        OutputStream os = uc.getOutputStream();

        // This while loop is incorrect because it does not wait for data to be sent
        while (st.getElapsedTime() < miliSeconds) {
            os.write(buffer);
            os.flush();
            st.addSize(buffer.length);
        }
        os.close();

    Thanks

    Read the article

  • x86 Assembly: Before Making a System Call on Linux Should You Save All Registers?

    - by mudge
    I have the code below that opens a file, reads it into a buffer, and then closes the file. The close-file system call requires that the file descriptor number be in the ebx register, and ebx gets the file descriptor number before the read system call is made. My question is: should I save the ebx register on the stack or somewhere else before I make the read system call (could int 80h trash the ebx register?), and then restore ebx for the close system call? Or is the code below fine and safe? I have run it and it works; I'm just not sure whether it is generally considered good assembly practice, because I don't save the ebx register before the int 80h read call.

        ;; open up the input file
        mov eax,5               ; open file system call number
        mov ebx,[esp+8]         ; null terminated string file name, first command line parameter
        mov ecx,0o              ; access type: O_RDONLY
        int 80h                 ; file handle or negative error number put in eax
        test eax,eax
        js Error                ; test sign flag (SF) for negative number which signals error

        ;; read in the full input file
        mov ebx,eax             ; assign input file descriptor
        mov eax,3               ; read system call number
        mov ecx,InputBuff       ; buffer to read into
        mov edx,INPUT_BUFF_LEN  ; total bytes to read
        int 80h
        test eax,eax
        js Error                ; if eax is negative then error
        jz Error                ; if no bytes were read then error
        add eax,InputBuff       ; add size of input to the beginning of InputBuff location
        mov [InputEnd],eax      ; assign address of end of input

        ;; close the input file
        ;; file descriptor is already in ebx
        mov eax,6               ; close file system call number
        int 80h

    Read the article
