Search Results

Search found 4759 results on 191 pages for 'depth buffer'.

Page 86/191 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • ID3D10Device Memory Allocation Strategy and E_OUTOFMEMORY

    - by Buzz
    Hi guys, I want to know more details about the memory allocation strategy in ID3D10Device. Could you give me some help? First question: I know D3D10 has done some work on memory virtualization, which means the client doesn't need to consider where the buffer is reserved: GPU memory, AGP memory or process system memory. Is this correct? Second question: when I use ID3D10Device to call CreateBuffer repeatedly, no matter what the buffer desc type is, for example ID3D10Device::CreateBuffer( ... D3D10_USAGE_DEFAULT ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_IMMUTABLE ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_DYNAMIC ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_STAGING ... ); etc., and CreateBuffer returns the error code "E_OUTOFMEMORY", does that mean the process's virtual memory is exhausted? And at that point, would memory allocation on the process default heap also fail? Thanks in advance!

    Read the article

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
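
    A minimal sketch of the usual fix, assuming the goal is simply to mirror everything sent to std::cout into a file: install a custom std::streambuf that forwards each character and each flush to two underlying buffers, so no library internals are relied on and the 1600 existing "std::cout <<" call sites stay untouched.

    ```cpp
    #include <fstream>
    #include <iostream>
    #include <streambuf>

    // "Tee" stream buffer: forwards every character and every flush to two
    // underlying stream buffers (e.g. the console and a log file).
    class TeeBuf : public std::streambuf {
    public:
        TeeBuf(std::streambuf* first, std::streambuf* second)
            : first_(first), second_(second) {}
    protected:
        virtual int overflow(int c) {
            if (c == traits_type::eof()) return traits_type::not_eof(c);
            const int r1 = first_->sputc(traits_type::to_char_type(c));
            const int r2 = second_->sputc(traits_type::to_char_type(c));
            return (r1 == traits_type::eof() || r2 == traits_type::eof())
                       ? traits_type::eof() : c;
        }
        virtual int sync() {
            const int r1 = first_->pubsync();
            const int r2 = second_->pubsync();
            return (r1 == 0 && r2 == 0) ? 0 : -1;
        }
    private:
        std::streambuf* first_;
        std::streambuf* second_;
    };

    int main() {
        std::ofstream log("cout.log");                     // hypothetical log file name
        TeeBuf tee(std::cout.rdbuf(), log.rdbuf());
        std::streambuf* original = std::cout.rdbuf(&tee);  // redirect std::cout
        std::cout << "Goes to the console and the file" << std::endl;
        std::cout.rdbuf(original);                         // restore before tee/log are destroyed
        return 0;
    }
    ```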

    Read the article

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to setup a script that would test performance of queries on a development mysql server. Here are more details: I have root access I am the only user accessing the server Mostly interested in InnoDB performance The queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%') What I want to do is to create reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Till now I have been using SQL_NO_CACHE, but sometimes the results of such tests also show caching behaviour - taking much longer to execute on the first run and taking less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to file system cache and/or caching of indexes used to execute the query, as this post explains. It is not clear to me when Buffer Pool and Key Buffer get invalidated or how they might interfere with testing. So, short of restarting mysql server, how would you recommend to setup an environment that would be reliable in determining if one query performs better then the other?

    Read the article

  • C# UDP cannot receive any data

    - by StoneHeart
    Here is my code: Socket sck = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck.Bind(new IPEndPoint(IPAddress.Any, 0)); // Broadcast to find server string msg = "Imlookingforaserver:" + udp_listen_port; byte[] sendBytes4 = Encoding.ASCII.GetBytes(msg); IPEndPoint groupEP = new IPEndPoint(IPAddress.Parse("255.255.255.255"), server_port); sck.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); sck.SendTo(sendBytes4, groupEP); //Wait for response from server Socket sck2 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); sck2.Bind(new IPEndPoint(IPAddress.Any, udp_listen_port)); byte[] buffer = new byte[128]; EndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, udp_listen_port); sck2.ReceiveFrom(buffer, ref remoteEndPoint); //<<< I never get past this line I use the above code to try to find a server. First I broadcast a message and then I wait for a response from the server. In my test the server is written in C++ and runs on Windows Vista, and the client is written in C# and runs on the same machine as the server. The problem is: the server can receive the message the client broadcasts, but the client cannot receive anything from the server. I tried writing a client in C++ and it works like a charm, so I think my problem is in the C# client.
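
    A sketch of one likely fix, assuming the server replies to whatever endpoint it hears from: create and bind the listening socket before the broadcast goes out, or simply reuse one socket for both directions, so the reply cannot arrive at a port nobody is listening on yet and the server can answer the datagram's real source port. The port numbers below are hypothetical.

    ```csharp
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class DiscoveryClient
    {
        static void Main()
        {
            int serverPort = 9050;      // hypothetical server port
            int listenPort = 9051;      // hypothetical client listen port

            // One socket, bound up front, used for both the probe and the reply.
            using (UdpClient udp = new UdpClient(listenPort))
            {
                udp.EnableBroadcast = true;
                byte[] probe = Encoding.ASCII.GetBytes("Imlookingforaserver:" + listenPort);
                udp.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, serverPort));

                IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
                byte[] reply = udp.Receive(ref remote);   // blocks until the server answers
                Console.WriteLine("Server {0} said: {1}", remote, Encoding.ASCII.GetString(reply));
            }
        }
    }
    ```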

    Read the article

  • Can't get 'syntax on' to work in my gvim

    - by user198553
    (I'm new to Linux and Vim, and I'm trying to learn Vim, but I'm having some issues with it that I can't seem to fix.) I'm on a Linux installation (Ubuntu 8.04) that I can't update, using Vim 7.1.138. My vim installation is in /usr/share/vim/vim71/. My .vimrc file is in /home/user/.vimrc, as follows: fun! MySys() return "linux" endfun set runtimepath=~/.vim,$VIMRUTNTIME source ~/.vim/.vimrc And then, in my /home/user/.vim/.vimrc: " =============== GENERAL CONFIG ============== set nocompatible syntax on " =============== ENCODING AND FILE TYPES ===== set encoding=utf8 set ffs=unix,dos,mac " =============== INDENTING =================== set ai " Automatically set the indent of a new line (local to buffer) set si " smartindent (local to buffer) " =============== FONT ======================== " Set font according to system if MySys() == "mac" set gfn=Bitstream\ Vera\ Sans\ Mono:h13 set shell=/bin/bash elseif MySys() == "windows" set gfn=Bitstream\ Vera\ Sans\ Mono:h10 elseif MySys() == "linux" set gfn=Inconsolata\ 14 set shell=/bin/bash endif " =============== COLORS ====================== colorscheme molokai " ============== PLUGINS ====================== " -------------- NERDTree --------------------- :noremap ,n :NERDTreeToggle<CR> " =============== DIRECTORIES ================= set backupdir=~/.backup/vim set directory=~/.swap/vim ...the fact is that the command syntax on is not working, in either vim or gvim. And the strange thing is: if I try to turn on syntax using the gvim toolbar, it works. Then, in normal mode in gvim, after activating it using the toolbar, running :syntax off works, and right after doing that, trying :syntax on doesn't work!! What's going on? Am I missing something?

    Read the article

  • Double buffering C#

    - by LmSNe
    Hey, I'm trying to implement the following method: void Ball::DrawOn(Graphics g); The method should draw all previous locations of the ball (stored in a queue) and finally the current location. I don't know if it matters, but I draw the previous locations using g.DrawEllipse(...) and the current location using g.FillEllipse(...). The problem is that, as you can imagine, there is a lot of drawing to be done, and so the display flickers a lot. I searched for a way to double buffer, but all I could find were these 2 ways: 1) System.Windows.Forms.Control.DoubleBuffered = true; 2) SetStyle(ControlStyles.DoubleBuffer | ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint, true); While trying to use the first, I get an error explaining that from within this method the property DoubleBuffered is inaccessible due to its protection level, and I can't figure out how to use the SetStyle method. Is it possible to double buffer at all when all I have access to is the Graphics object I get as input to the method? Thanks in advance.
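
    For what it's worth, a sketch under the assumption that you also own (or can subclass) the control being painted: double buffering is a property of that control, not of the Graphics argument, so it has to be switched on from inside the control's own code, after which Ball::DrawOn can keep drawing on whatever Graphics it is handed. The control and method names below are hypothetical.

    ```csharp
    using System.Drawing;
    using System.Windows.Forms;

    // Double buffering is enabled on the control that owns the paint surface;
    // DoubleBuffered/SetStyle are protected, so this must happen inside the class.
    class BallPanel : Panel
    {
        public BallPanel()
        {
            SetStyle(ControlStyles.AllPaintingInWmPaint |
                     ControlStyles.UserPaint |
                     ControlStyles.OptimizedDoubleBuffer, true);
            UpdateStyles();
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            // ball.DrawOn(e.Graphics);  // hypothetical call into the Ball class
        }
    }
    ```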

    Read the article

  • Can I use POSIX signals in my Perl program to create event-driven programming?

    - by Shiftbit
    Are there any POSIX signals that I could use in my Perl program to create event-driven programming? Currently I have a multi-process program whose processes can communicate with each other, but my parent is only able to listen to one child at a time. foreach (@proc) { sysread(${$_}{'read'}, my $line, 100); #problem here chomp($line); print "Parent hears: $line\n"; } The problem is that the parent sits in a continual wait state until it receives something from the first child before it can continue on. I am relying on pipe for my inter-process communication. My current solution is very similar to: http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl If possible I would like to rely on a $SIG{...} handler or any non-CPAN solution. Update: As Jonathan Leffler mentioned, kill can be used to send a signal: kill USR1 => $$; # send myself a SIGUSR1 My solution will be to send a USR1 signal from my child process. This event tells the parent to listen to that particular child. child: kill USR1 => $parentPID if($customEvent); syswrite($parentPipe, $msg, $buffer); #select $parentPipe; print $parentPipe $msg; parent: $SIG{USR1} = { #get child pid? sysread($array[$pid]{'childPipe'}, $msg, $buffer); }; But how do I get the source/child pid that signaled the parent? Have the child identify itself in its message? And what happens if two children signal USR1 at the same time?

    Read the article

  • Can I disable the "(Type e to repeat macro)" message in emacs?

    - by lindes
    Hi there, So, I've finally made the plunge, and have gotten to the state where I'm quite happy to have switched from vi and vim to emacs... I've been putting stuff in my .emacs file, learning how to evaluate things (not to mention becoming familiar with movement commands), etc. etc. etc. And now I have a problem with a require line in my .emacs file (a require statement*), which bombs out when I launch emacs (and generally fails to work). So, this lead me to the following situation: In the process of trying to debug the above situation, one of the steps I did was to open the file I was trying to require, and evaluate it bit by bit, using C-M-f and C-x C-e (and later just M-x eval-buffer), which all worked fine. But along the way of the section-by-section, I got tired of typing all those, and so I recorded a keyboard macro... C-x ( C-M-f C-x C-e C-x ) and then C-x e... which gave me a message in the minibuffer (I think I'm using the right name), saying (Type e to repeat macro). Which meant I could no longer see the resultant value of the evaluation of each section of code... which, while not critical in this case, I was liking having. Which leads me to the actual question: Is there a way to disable that message, and/or to cause the minibuffer to show multiple lines at once? I know about the *Messages* buffer, and that could have helped, I'm just wondering if there's a way to either disable that message, or otherwise make it coexist with other messages. Any suggestions? Thanks! lindes * - the problem at hand, which is not really my question, is that (require 'ruby-mode/ruby-mode) fails, even though emacs is definitely and successfully (per system call tracing) opening and reading the ruby-mode.el file. I presume this is because the provide line says just 'ruby-mode. I've found a solution for this, but if anyone can point me to any "best practices", I'd appreciate it.

    Read the article

  • ERROR_MORE_DATA when reading from the registry

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the windows ddk 7 package. You can find out more information on the offreg.dll here: MSDN Currently, while attempting to read a value from an open registry hive / key I receive the following error: 234 or ERROR_MORE_DATA Here is the .h code that contains ORGetValue: DWORD ORAPI ORGetValue ( __in ORHKEY Handle, __in_opt PCWSTR lpSubKey, __in_opt PCWSTR lpValue, __out_opt PDWORD pdwType, __out_bcount_opt(*pcbData) PVOID pvData, __inout_opt PDWORD pcbData ); Here is the code that I am using to pull the data [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue", SetLastError = true, CallingConvention = CallingConvention.StdCall)] public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue, out uint pdwType, out string pvData, out uint pcbData); IntPtr myHive; IntPtr myKey; string myValue; uint pdwtype; uint pcbdata; uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata); The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling... or a second call with an adjusted buffer.. Or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.
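
    A sketch of the usual two-call pattern, assuming ORGetValue behaves like RegQueryValueEx (reporting the required byte count when the data pointer is NULL) and that the value being read is a Unicode REG_SZ. pcbData is an in/out byte count, so it is declared ref rather than out, and ERROR_MORE_DATA (234) just means the buffer passed in was too small, not that the call is broken.

    ```csharp
    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    static class OffReg
    {
        // pvData as a byte[] and pcbData as ref: the caller supplies the buffer size
        // on the way in and gets the number of bytes actually written on the way out.
        [DllImport("offreg.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr handle, string subKey, string value,
                                             out uint type, byte[] data, ref uint cbData);

        public static string GetString(IntPtr key, string subKey, string name)
        {
            uint type;
            uint cb = 0;
            // First call with a NULL buffer: assumed to report the required size.
            ORGetValue(key, subKey, name, out type, null, ref cb);

            byte[] buf = new byte[cb];
            uint err = ORGetValue(key, subKey, name, out type, buf, ref cb);
            if (err != 0)
                throw new System.ComponentModel.Win32Exception((int)err);

            return Encoding.Unicode.GetString(buf, 0, (int)cb).TrimEnd('\0');
        }
    }
    ```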

    Read the article

  • OpenSL ES decode 24bit FLAC

    - by yano
    I am trying to decode a FLAC file with 24bit sample format using OpenSL ES on Android. Originally, I had my SLDataFormat_PCM for the SLDataSink setup like this. _pcm.formatType = SL_DATAFORMAT_PCM; _pcm.numChannels = 2; _pcm.samplesPerSec = SL_SAMPLINGRATE_44_1; _pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16; _pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16; _pcm.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT; _pcm.endianness = SL_BYTEORDER_LITTLEENDIAN; This is working well for basically any data format. Luckily the samplesPerSec is not respected (I don't want resampling). Now I want to support the full bit-depth of a FLAC file with 24bit samples. When using this format, it apparently performs a bit-depth conversion, because once I load the file, and then check the ANDROID_KEY_PCMFORMAT_BITSPERSAMPLE info, it is 16. When I put bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_24; or SL_PCMSAMPLEFORMAT_FIXED_32, then OpenSL ES rejects it E/libOpenSLES(22706): pAudioSnk: bitsPerSample=32 W/libOpenSLES(22706): Leaving Engine::CreateAudioPlayer (SL_RESULT_CONTENT_UNSUPPORTED) Any idea how this is meant to work? Is Android currently restricted to 16 bit int only? I would also accept 32bit float, but I don't suppose that will work either.

    Read the article

  • How to functionally generate a tree breadth-first (with Haskell)

    - by Dennetik
    Say I have the following Haskell tree type, where "State" is a simple wrapper: data Tree a = Branch (State a) [Tree a] | Leaf (State a) deriving (Eq, Show) I also have a function "expand :: Tree a -> Tree a" which takes a leaf node and expands it into a branch, or takes a branch and returns it unaltered. This tree type represents an N-ary search-tree. Searching depth-first is a waste, as the search-space is obviously infinite: I can easily keep on expanding the search-space with the use of expand on all the tree's leaf nodes, and the chances of accidentally missing the goal-state are huge... thus the only solution is a breadth-first search, implemented pretty decently over here, which will find the solution if it's there. What I want to generate, though, is the tree traversed up to finding the solution. This is a problem because I only know how to do this depth-first, which could be done by simply calling the "expand" function again and again upon the first child node... until a goal-state is found. (This would really not generate anything other than a really uncomfortable list.) Could anyone give me any hints on how to do this (or an entire algorithm), or a verdict on whether or not it's possible with a decent complexity? (Or any sources on this, because I found rather few.)
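
    A sketch of the breadth-first expansion, assuming a goal predicate isGoal :: State a -> Bool exists (it is not in the question): keep an explicit FIFO of nodes still to visit, expand each node as it is dequeued, and stop at the first goal state. Recording the visited nodes along the way, to rebuild the traversed tree, is the remaining bookkeeping; the list-append queue below is quadratic, and Data.Sequence would give a proper O(1) queue.

    ```haskell
    -- Breadth-first search over the lazily expandable tree.
    -- Assumes: isGoal :: State a -> Bool (hypothetical goal test).
    bfs :: (State a -> Bool) -> Tree a -> Maybe (State a)
    bfs isGoal root = go [root]
      where
        go []         = Nothing
        go (t : rest)
          | isGoal (stateOf t) = Just (stateOf t)
          | otherwise          = go (rest ++ children (expand t))  -- FIFO: enqueue at the back

        stateOf (Leaf s)     = s
        stateOf (Branch s _) = s

        children (Branch _ cs) = cs
        children (Leaf _)      = []   -- expand could not grow this node any further
    ```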

    Read the article

  • How do I install the OpenSSL C++ library on Ubuntu?

    - by Daryl Spitzer
    I'm trying to build some code on Ubuntu 10.04 LTS that uses OpenSSL 1.0.0. When I run make, it invokes g++ with the "-lssl" option. The source includes: #include <openssl/bio.h> #include <openssl/buffer.h> #include <openssl/des.h> #include <openssl/evp.h> #include <openssl/pem.h> #include <openssl/rsa.h> I ran: $ sudo apt-get install openssl Reading package lists... Done Building dependency tree Reading state information... Done openssl is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded. But I guess the openssl package doesn't include the library. I get these errors on make: foo.cpp:21:25: error: openssl/bio.h: No such file or directory foo.cpp:22:28: error: openssl/buffer.h: No such file or directory foo.cpp:23:25: error: openssl/des.h: No such file or directory foo.cpp:24:25: error: openssl/evp.h: No such file or directory foo.cpp:25:25: error: openssl/pem.h: No such file or directory foo.cpp:26:25: error: openssl/rsa.h: No such file or directory How do I install the OpenSSL C++ library on Ubuntu 10.04 LTS? I did a man g++ and (under "Options for Linking") for the -l option it states: " The linker searches a standard list of directories for the library..." and "The directories searched include several standard system directories..." What are those standard system directories?
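
    If it helps: on Ubuntu the openssl package only ships the command-line tool; the headers and the library that -lssl links against come from the development package, so installing it (one-liner below, assuming the stock 10.04 repositories) puts bio.h and friends under /usr/include/openssl, which is one of the standard system include directories g++ already searches.

    ```bash
    sudo apt-get install libssl-dev
    ```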

    Read the article

  • Implementing eval and load functions inside a scripting engine with Flex and Bison.

    - by Simone Margaritelli
    Hi guys, I'm developing a scripting engine with flex and bison and now I'm implementing the eval and load functions for this language. Just to give you an example, the syntax is like: import std.*; load( "some_script.hy" ); eval( "foo = 123;" ); println( foo ); So, in my lexer I've implemented the function: void hyb_parse_string( const char *str ){ extern int yyparse(void); YY_BUFFER_STATE prev, next; /* * Save current buffer. */ prev = YY_CURRENT_BUFFER; /* * yy_scan_string will call yy_switch_to_buffer. */ next = yy_scan_string( str ); /* * Do actual parsing (yyparse calls yylex). */ yyparse(); /* * Restore previous buffer. */ yy_switch_to_buffer(prev); } But it does not seem to work. Well, it does, but when the string (loaded from a file or directly evaluated) is finished, I get a SIGSEGV: Program received signal SIGSEGV, Segmentation fault. 0xb7f2b801 in yylex () at src/lexer.cpp:1658 1658 if ( YY_CURRENT_BUFFER_LVALUE->yy_buffer_status == YY_BUFFER_NEW ) As you may notice, the SIGSEGV is generated by the flex/bison code, not mine... Any hints, or at least an example of how to implement these kinds of functions? PS: I've successfully implemented the include directive, but I need eval and load to work not at parsing time but at execution time (kind of like PHP's include/require directives).
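
    A sketch of a slightly safer version, assuming an ordinary non-reentrant flex scanner and that this function lives in the lexer source like the original: the crash pattern (YY_CURRENT_BUFFER_LVALUE dereferenced after the string is exhausted) usually means the temporary buffer was never deleted and/or the scanner was switched back to a buffer that does not exist, so delete the string buffer explicitly and only switch back when there really was a previous one.

    ```c
    /* Parse a string in-place, then clean up the temporary scan buffer.
     * Note: with a non-reentrant scanner, nesting this inside an active
     * yyparse() can still clobber global parser state. */
    void hyb_parse_string(const char *str)
    {
        extern int yyparse(void);
        YY_BUFFER_STATE prev = YY_CURRENT_BUFFER;      /* may be NULL on the first call */
        YY_BUFFER_STATE tmp  = yy_scan_string(str);    /* becomes the current buffer */

        yyparse();

        yy_delete_buffer(tmp);                         /* free the string buffer */
        if (prev != NULL)
            yy_switch_to_buffer(prev);                 /* resume the interrupted input */
    }
    ```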

    Read the article

  • Verifying the signature of an x509 certificate

    - by sid
    Hi all, while verifying the certificate I am getting EVP_F_EVP_PKEY_GET1_DH. My aim: verify the certificate signature. I have 2 certificates: 1. a CA certificate, 2. a certificate issued by the CA. I extracted the RSA public key modulus from the CA certificate using: pPublicKey = X509_get_pubkey(x509); buf_len = (size_t) BN_num_bytes (bn); key = (unsigned char *)malloc (buf_len); n = BN_bn2bin (bn, (unsigned char *) key); if (n != buf_len) LOG(ERROR," : key error\n"); if (key[0] & 0x80) LOG(DEBUG, "00\n"); Now I have the CA public key and CA key length, and I also have the certificate issued by the CA as a buffer, with its length and public key. To verify the signature, I have the following code: int iRet1, iRet2, iRet3, iReason; iRet1 = EVP_VerifyInit(&md_ctx, EVP_sha1()); iRet2 = EVP_VerifyUpdate(&md_ctx, buf, buflen); iRet3 = EVP_VerifyFinal(&md_ctx, (const unsigned char *)CAkey, CAkeyLen, pubkey); iReason = ERR_get_error(); if(ERR_GET_REASON(iReason) == EVP_F_EVP_PKEY_GET1_DH) { LOG(ERROR, "EVP_F_EVP_PKEY_GET1_DH\n"); } LOG(INFO,"EVP_VerifyInit returned %d : EVP_VerifyUpdate returned %d : EVP_VerifyFinal = %d \n", iRet1, iRet2, iRet3); EVP_MD_CTX_cleanup(&md_ctx); EVP_PKEY_free(pubkey); if (iRet3 != 1) { LOG(ERROR,"EVP_VerifyFinal() failed\n"); ret = -1; } LOG(INFO,"signature is valid\n"); I am unable to figure out what might have gone wrong??? Has anybody faced the same issue? What does the EVP_F_EVP_PKEY_GET1_DH error mean? Thanks in advance - opensid
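
    For comparison, a sketch of the more direct route, assuming both certificates are already loaded as X509 objects: EVP_VerifyFinal expects the signature bytes plus an EVP_PKEY (handing it raw modulus bytes is what tends to produce odd EVP_PKEY errors), whereas X509_verify takes the issuer's EVP_PKEY and does the digest-and-check itself.

    ```c
    #include <openssl/evp.h>
    #include <openssl/x509.h>

    /* Verify that 'cert' was signed by the key in 'ca_cert'.
     * Returns 1 for a good signature, 0 for a bad one, <0 on error. */
    int verify_issued_by(X509 *cert, X509 *ca_cert)
    {
        int ok = -1;
        EVP_PKEY *ca_key = X509_get_pubkey(ca_cert);   /* issuer's public key as EVP_PKEY */
        if (ca_key != NULL) {
            ok = X509_verify(cert, ca_key);
            EVP_PKEY_free(ca_key);
        }
        return ok;
    }
    ```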

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me, although it is true I really need a lot of help!!! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. EDIT (I did not clarify this in the previous version): The data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. That didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here is, shortly, the pseudo code of my algorithm: LIST termList (there is method that creates this list from lucene index) FOLDER topFolder INPUT topFolder IF it is folder and not empty list files (there are 30 sub folders inside) FOR EACH sub folder GET file "CheckedFile.txt" analyze(CheckedFile) ENDFOR END IF Method ANALYZE(CheckedFile) read CheckedFile WHILE CheckedFile has next line GET line FOR(loops through termList) GET third word from line IF third word = term from list append whole line to string buffer ENDIF ENDFOR END WHILE OUTPUT string buffer to file Also, as you can see, each time "analyze" is called a new file has to be created, and I understand that MapReduce has difficulty writing to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to actually doing it, obviously I do not know enough and I am STUCK! Please, please help.
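
    A sketch of what the map side could look like (new mapreduce API; distributing the term list through the job configuration is an assumption, it could equally come from the distributed cache): every map() call sees one line of one CheckedFile.txt, checks its third word against the term list, and emits the source file name as the key, so a reducer keyed on that name can collect all matching lines per input file instead of the per-folder output files of the sequential version.

    ```java
    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    public class TermMatchMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Set<String> terms = new HashSet<String>();

        @Override
        protected void setup(Context context) {
            // Hypothetical: the Lucene-derived term list is passed as a comma-separated
            // job property named "term.list".
            for (String t : context.getConfiguration().get("term.list", "").split(",")) {
                if (!t.isEmpty()) terms.add(t);
            }
        }

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String[] words = line.toString().split("\\s+");
            if (words.length >= 3 && terms.contains(words[2])) {
                // Key by the source file so the reducer can group matches per file.
                String file = ((FileSplit) context.getInputSplit()).getPath().getName();
                context.write(new Text(file), line);
            }
        }
    }
    ```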

    Read the article

  • Socket.Receive failing when multithreaded

    - by Qua
    The following piece of code runs fine when parallelized to 4-5 threads, but starts to fail as the number of threads increases somewhere beyond 10 concurrent threads: int totalRecieved = 0; int recieved; StringBuilder contentSB = new StringBuilder(4000); while ((recieved = socket.Receive(buffer, SocketFlags.None)) > 0) { contentSB.Append(Encoding.ASCII.GetString(buffer, 0, recieved)); totalRecieved += recieved; } The Receive method returns with zero bytes read, and if I continue calling the Receive method I eventually get an 'An established connection was aborted by the software in your host machine' exception. So I'm assuming that the host actually sent data and then closed the connection, but for some reason I never received it. I'm curious as to why this problem arises when there are a lot of threads. I'm thinking it must have something to do with the fact that each thread doesn't get as much execution time and therefore there is some idle time for the threads, which causes this error. I just can't figure out why idle time would cause the socket not to receive any data.

    Read the article

  • ASP.NET membership salt?

    - by chobo2
    Hi Does anyone know how Asp.net membership generates their salt key and then how they encode it(ie is it salt + password or password + salt)? I am using sha1 with my membership but I would like to recreate the same salts so the built in membership stuff could hash the stuff the same way as my stuff can. Thanks Edit 2 Never Mind I mis read it and was thinking it said bytes not bit. So I was passing in 128 bytes not 128bits. Edit I been trying to make it so this is what I have public string EncodePassword(string password, string salt) { byte[] bytes = Encoding.Unicode.GetBytes(password); byte[] src = Encoding.Unicode.GetBytes(salt); byte[] dst = new byte[src.Length + bytes.Length]; Buffer.BlockCopy(src, 0, dst, 0, src.Length); Buffer.BlockCopy(bytes, 0, dst, src.Length, bytes.Length); HashAlgorithm algorithm = HashAlgorithm.Create("SHA1"); byte[] inArray = algorithm.ComputeHash(dst); return Convert.ToBase64String(inArray); } private byte[] createSalt(byte[] saltSize) { byte[] saltBytes = saltSize; RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider(); rng.GetNonZeroBytes(saltBytes); return saltBytes; } So I have not tried to see if the asp.net membership will recognize this yet the hashed password looks close. I just don't know how to convert it to base64 for the salt. I did this byte[] storeSalt = createSalt(new byte[128]); string salt = Encoding.Unicode.GetString(storeSalt); string base64Salt = Convert.ToBase64String(storeSalt); int test = base64Salt.Length; Test length is 172 what is well over the 128bits so what am I doing wrong? This is what their salt looks like vkNj4EvbEPbk1HHW+K8y/A== This is what my salt looks like E9oEtqo0livLke9+csUkf2AOLzFsOvhkB/NocSQm33aySyNOphplx9yH2bgsHoEeR/aw/pMe4SkeDvNVfnemoB4PDNRUB9drFhzXOW5jypF9NQmBZaJDvJ+uK3mPXsWkEcxANn9mdRzYCEYCaVhgAZ5oQRnnT721mbFKpfc4kpI=

    Read the article

  • x509 certificate verification in C

    - by sid
    Hi all, I have certificates in DER and PEM format. My goal is to retrieve the Issuer and Subject fields, verify the certificate with the CA public key, and at the same time verify the CA certificate with the root public key. I am able to retrieve all the details of the issuer and subject, but I am unable to verify the certificate. Please help. The APIs used: x509 = d2i_X509_fp (fp, &x509); //READING DER format x509 = PEM_read_X509 (fp, &x509, NULL, NULL); //READING PEM format X509_NAME_oneline(X509_get_subject_name(x509), subject, sizeof (subject)); //to retrieve the Subject X509_NAME_oneline(X509_get_issuer_name(x509), issuer, sizeof (issuer)); //to retrieve the Issuer // to store the CA public key (in unsigned char *key) that will be used to verify the certificate (in my case always sha1WithRSAEncryption) RSA *x = X509_get_pubkey(x509)->pkey.rsa; bn = x->n; //extracts the bytes from the public key & converts them into an unsigned char buffer buf_len = (size_t) BN_num_bytes (bn); stored_CA_pubKey = (unsigned char *)malloc (buf_len); i_n = BN_bn2bin (bn, (unsigned char *)stored_CA_pubKey); if (i_n != buf_len) LOG(ERROR," : key error\n"); if (key[0] & 0x80) LOG(DEBUG, "00\n"); stored_CA_pubKeyLen = EVP_PKEY_size(X509_get_pubkey(x509)); For verification I went through different approaches but was unable to verify: a) i_x509_verify = X509_verify(cert_x509, ca_pubkey); b) /* verify the signature */ int iRet1, iRet2, iReason; iRet1 = EVP_VerifyInit(&md_ctx, EVP_sha1()); iRet2 = EVP_VerifyUpdate(&md_ctx, cert_code, cert_code_len); rv = EVP_VerifyFinal(&md_ctx, (const unsigned char *)stored_CA_pubKey, stored_CA_pubKeyLen, cert_pubkey); NOTE: cert_code & stored_CA_pubKey are unsigned char buffers. Thanks in advance
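
    Since the goal here is a two-level check (certificate against CA, CA against root), a sketch of the usual chain-verification route, assuming all three certificates are loaded as X509 objects: put the trust anchor in an X509_STORE, supply the intermediate as untrusted chain material, and let X509_verify_cert walk and check the whole chain instead of hand-rolling EVP_Verify* calls on extracted modulus bytes.

    ```c
    #include <openssl/x509.h>
    #include <openssl/x509_vfy.h>

    /* Verify leaf -> ca -> root. Returns 1 on success, 0 on failure. */
    int verify_chain(X509 *leaf, X509 *ca, X509 *root)
    {
        int ok = 0;
        X509_STORE *store = X509_STORE_new();
        X509_STORE_CTX *ctx = X509_STORE_CTX_new();
        STACK_OF(X509) *untrusted = sk_X509_new_null();

        X509_STORE_add_cert(store, root);        /* trust anchor */
        sk_X509_push(untrusted, ca);             /* intermediate supplied alongside the leaf */

        if (X509_STORE_CTX_init(ctx, store, leaf, untrusted) == 1)
            ok = X509_verify_cert(ctx);          /* walks and checks the whole chain */

        sk_X509_free(untrusted);                 /* frees the stack, not the certificates */
        X509_STORE_CTX_free(ctx);
        X509_STORE_free(store);
        return ok;
    }
    ```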

    Read the article

  • Boost binding a function taking a reference

    - by Jamie Cook
    Hi all, I am having problems compiling the following snippet int temp; vector<int> origins; vector<string> originTokens = OTUtils::tokenize(buffer, ","); // buffer is a char[] array // original loop BOOST_FOREACH(string s, originTokens) { from_string(temp, s); origins.push_back(temp); } // I'd like to use this to replace the above loop std::transform(originTokens.begin(), originTokens.end(), origins.begin(), boost::bind<int>(&FromString<int>, boost::ref(temp), _1)); where the function in question is // the third parameter should be one of std::hex, std::dec or std::oct template <class T> bool FromString(T& t, const std::string& s, std::ios_base& (*f)(std::ios_base&) = std::dec) { std::istringstream iss(s); return !(iss >> f >> t).fail(); } the error I get is 1>Compiling with Intel(R) C++ 11.0.074 [IA-32]... (Intel C++ Environment) 1>C:\projects\svn\bdk\Source\deps\boost_1_42_0\boost/bind/bind.hpp(303): internal error: assertion failed: copy_default_arg_expr: rout NULL, no error (shared/edgcpfe/il.c, line 13919) 1> 1> return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]); 1> ^ 1> 1>icl: error #10298: problem during post processing of parallel object compilation Google is being unusually unhelpful so I hope that some one here can provide some insights.
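
    Two things may be going on, sketched below: bind() knows nothing about default arguments, so the std::ios_base manipulator has to be passed explicitly (which may also sidestep the Intel internal error, since the message blames a default argument), and std::transform stores the functor's return value, so binding the bool-returning FromString would fill origins with 0/1 through origins.begin() on an empty vector, which is undefined. A small converting helper plus back_inserter keeps the intent of the original loop.

    ```cpp
    #include <algorithm>
    #include <iterator>
    #include <sstream>
    #include <string>
    #include <vector>

    // FromString as defined in the question, repeated here for a self-contained sketch.
    template <class T>
    bool FromString(T& t, const std::string& s,
                    std::ios_base& (*f)(std::ios_base&) = std::dec)
    {
        std::istringstream iss(s);
        return !(iss >> f >> t).fail();
    }

    // Convert one token to int; transform() stores this return value.
    // The bool success flag is ignored here, so 0 is kept when parsing fails.
    static int ToInt(const std::string& s)
    {
        int value = 0;
        FromString(value, s, std::dec);          // all three arguments supplied explicitly
        return value;
    }

    std::vector<int> ParseOrigins(const std::vector<std::string>& originTokens)
    {
        std::vector<int> origins;
        std::transform(originTokens.begin(), originTokens.end(),
                       std::back_inserter(origins), &ToInt);
        return origins;
    }

    // The original expression, with the defaulted argument made explicit:
    //   boost::bind<bool>(&FromString<int>, boost::ref(temp), _1, std::dec)
    ```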

    Read the article

  • Projective transformation

    - by mcwehner
    Given two image buffers (assume it's an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called "projective transformation". I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me". An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
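
    For the record, a sketch of the standard formulation (often called a homography): a 3x3 matrix H maps source pixels to destination pixels in homogeneous coordinates, and the four clicked corner correspondences give exactly the eight equations needed to solve for its eight unknowns. In practice one solves for the mapping from the square output to the quadrilateral and then, for each output pixel, samples the camera frame at the computed source position.

    ```latex
    % Homography H (h_{33} normalised to 1) mapping (x, y) to (x', y'):
    \begin{aligned}
    \begin{pmatrix} w x' \\ w y' \\ w \end{pmatrix}
      &= \begin{pmatrix}
           h_{11} & h_{12} & h_{13} \\
           h_{21} & h_{22} & h_{23} \\
           h_{31} & h_{32} & 1
         \end{pmatrix}
         \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
    \\[6pt]
    x' &= \frac{h_{11} x + h_{12} y + h_{13}}{h_{31} x + h_{32} y + 1},
    \qquad
    y' = \frac{h_{21} x + h_{22} y + h_{23}}{h_{31} x + h_{32} y + 1}.
    \end{aligned}
    % Each of the four point pairs (x_i, y_i) -> (x'_i, y'_i) contributes the two
    % rational equations above, giving an 8x8 linear system for h_{11}..h_{32}.
    ```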

    Read the article

  • OpenAL not playing on Mac OS X 10.6

    - by Grimless
    I've been working on getting a basic audio engine running on my Mac using OpenAL. It seems relatively straightforward after working with OpenGL for a while. However, despite the fact that I believe I have everything in place, my sound will not play. Here is the order of things I am doing: //Creating a new device ALCdevice* device = alcOpenDevice(NULL); //Create a new context with the device ALCcontext* context = alcCreateContext(device, NULL); //Make that context current alcMakeContextCurrent(context); //Do lots of loading stuff to bring in an AIFF... voodooAIFF = myAIFFLoader("name"); //Then use that data ALuint buf; alGenBuffers(1, &buf); //Check for errors, but none happen... //Bind buffer data. alBufferData(buf, voodooAIFF.format, voodooAIFF.data, voodooAIFF.sizeInBytes, voodooAIFF.frequency); //Check for errors, none here either... //Create Source ALuint src; alGenSources(1, &src); //Error check again, no errors. //Bind source to buffer alSourcei(src, AL_BUFFER, buf); //Set reference distance alSourcei(sourceID, AL_REFERENCE_DISTANCE, 1); //Set source attributes including gain and pitch to 1 (direction set to 0,0,0) //Check for errors, nothing... //Set up listener attributes. //Check for errors, no errors. //Begin playing. alSourcePlay(src); Observe silence... Any insight, what steps am I missing here?
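
    Two cheap checks worth adding, sketched below without any claim about the actual cause: drain alGetError() after each call (a single end-of-setup check can hide which step failed), and make sure the program does not return or exit before playback finishes, since alSourcePlay only starts an asynchronous voice. The snippet also assumes one source id is used throughout; the code above mixes src and sourceID.

    ```c
    #include <OpenAL/al.h>
    #include <OpenAL/alc.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Report any pending OpenAL error, tagged with the call site. */
    static void check(const char *where)
    {
        ALenum e = alGetError();
        if (e != AL_NO_ERROR)
            fprintf(stderr, "OpenAL error 0x%x after %s\n", e, where);
    }

    /* Start playback and block until the source has finished. */
    static void play_and_wait(ALuint src)
    {
        ALint state = AL_PLAYING;
        alSourcePlay(src);
        check("alSourcePlay");
        while (state == AL_PLAYING) {
            usleep(100 * 1000);                       /* poll every 100 ms */
            alGetSourcei(src, AL_SOURCE_STATE, &state);
        }
    }
    ```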

    Read the article

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just command+R on the test or spec file and another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could just continue working with the codebase since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file. Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green html results that TextMate displays (via --format pretty, I'm assuming) is a bit easier to scan than the split window. This guy came close about 18 mos ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ The script he has worked with a bit of hacking, but the tests still ran within MacVim and locked up the current window. Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!

    Read the article

  • How can I render an in-memory UIViewController's view Landscape?

    - by Aaron
    I'm trying to render an in-memory (but not in hierarchy, yet) UIViewController's view into an in-memory image buffer so I can do some interesting transition animations. However, when I render the UIViewController's view into that buffer, it is always rendering as though the controller is in Portrait orientation, no matter the orientation of the rest of the app. How do I clue this controller in? My code in RootViewController looks like this: MyUIViewController* controller = [[MyUIViewController alloc] init]; int width = self.view.frame.size.width; int height = self.view.frame.size.height; int bitmapBytesPerRow = width * 4; unsigned char *offscreenData = calloc(bitmapBytesPerRow * height, sizeof(unsigned char)); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef offscreenContext = CGBitmapContextCreate(offscreenData, width, height, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast); CGContextTranslateCTM(offscreenContext, 0.0f, height); CGContextScaleCTM(offscreenContext, 1.0f, -1.0f); [(CALayer*)[controller.view layer] renderInContext:offscreenContext]; At that point, the offscreen memory buffers contents are portrait-oriented, even when the window is in landscape orientation. Ideas?

    Read the article

  • Error when passing quotes to webservice by AJAX

    - by Radu
    I'm passing data using .ajax and here are my data and contentType attributes: data: '{ "UserInput" : "' + $('#txtInput').val() + '","Options" : { "Foo1":' + bar1 + ', "Foo2":' + Bar2 + ', "Flags":"' + flags + '", "Immunity":' + immunity + '}}', contentType: 'application/json; charset=utf-8', Server side my code looks like this: <WebMethod()> _ Public Shared Function ParseData(ByVal UserInput As String, ByVal Options As Options) As String The userinput is obvious but the Options structure is like the following: Public Structure Options Dim Foo1 As Boolean Dim Foo2 As Boolean Dim Flags As String Dim Immunity As Integer End Structure Everything works fine when $('#txtInput') contains no double-quotes but if they are present I get an error (for an input of asd"): {"Message":"Invalid object passed in, \u0027:\u0027 or \u0027}\u0027 expected. (22): { \"UserInput\" : \"asd\"\",\"Options\" : { \"Foo1\":false, \"Foo2\":false, \"Flags\":\"\", \"Immunity\":0}}","StackTrace":" at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)\r\n at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)","ExceptionType":"System.ArgumentException"} Any idea how I can avoid this error? Also, when I pass the same input with quotes directly it works fine.
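
    A sketch of the usual way around hand-built JSON, assuming a browser with native JSON support (or json2.js for older ones); the URL is hypothetical. Build a real object and let JSON.stringify do the escaping, so a double quote in #txtInput arrives as \" instead of terminating the UserInput string early.

    ```javascript
    // Let the serializer handle quoting instead of concatenating strings.
    var payload = {
        UserInput: $('#txtInput').val(),
        Options: { Foo1: bar1, Foo2: Bar2, Flags: flags, Immunity: immunity }
    };

    $.ajax({
        type: 'POST',
        url: 'MyPage.aspx/ParseData',            // hypothetical page-method route
        data: JSON.stringify(payload),
        contentType: 'application/json; charset=utf-8',
        dataType: 'json'
    });
    ```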

    Read the article

  • Enumerating pixel formats for adaptors and modes with OpenGL

    - by Robinson
    I'm trying to code an OpenGL path for my 3D engine. The D3D path enumerates all device adaptors, all modes (by mode I mean bit depth, dimensions, available windowed, and refresh rate) for each adaptor and then all pixel formats available for the given mode and adaptor, along side certain useful caps (shader version, filter types, etc.). So, I have broadly got the following protected functions in the class: // Enumerate all back/front buffer combinations. virtual void EnumerateBackFrontBufferCombinations(CComPtr<IDirect3D9>& d3d9); // Enumerate all depth/stencil formats. virtual void EnumerateDepthStencilFormats(CComPtr<IDirect3D9>& d3d9); // Enumerate all multi-sample formats. virtual void EnumerateMultiSampleTypes(CComPtr<IDirect3D9>& d3d9); // Enumerate all device formats, i.e. dynamic, static, render target, etc. virtual void EnumerateMapFormats(CComPtr<IDirect3D9>& d3d9); // Enumerate all capabilities. virtual void EnumerateCapabilities(CComPtr<IDirect3D9>& d3d9); The adaptors are enumerated with EnumDisplayDevices, the modes (resolutions and refresh rates) are enumerated with EnumDisplaySettings, so this can be done for either GL or D3D. The other functions I'm not so sure about with OpenGL. What are the equivalents to the IDirect3D9's CheckDeviceType, CheckDeviceFormat, CheckDeviceMultiSampleType, CheckDepthStencilMatch? I know I can use DescribePixelFormat, given a DC, but you kind-of need to have created the window before you can use a DC with it, but you can't create the window correctly until you know what formats you're going to use. Any tips you can give me? Thanks.
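
    A sketch of the usual way out of the chicken-and-egg problem, with the caveat that plain DescribePixelFormat only reports the basic fields (color/depth/stencil bits, double buffering); multisample and other modern attributes need the WGL_ARB_pixel_format extension instead. Create a throwaway hidden window, enumerate formats from its DC, destroy it, then create the real window with the format you picked.

    ```cpp
    #include <windows.h>
    #include <vector>

    // Enumerate the pixel formats of the current display device via a dummy window.
    std::vector<PIXELFORMATDESCRIPTOR> EnumeratePixelFormats()
    {
        std::vector<PIXELFORMATDESCRIPTOR> formats;
        HWND dummy = CreateWindowA("STATIC", "", WS_OVERLAPPED,
                                   0, 0, 1, 1, NULL, NULL,
                                   GetModuleHandle(NULL), NULL);
        HDC dc = GetDC(dummy);

        // With a NULL descriptor, DescribePixelFormat returns the highest format index.
        int count = DescribePixelFormat(dc, 1, sizeof(PIXELFORMATDESCRIPTOR), NULL);
        for (int i = 1; i <= count; ++i) {
            PIXELFORMATDESCRIPTOR pfd = { 0 };
            if (DescribePixelFormat(dc, i, sizeof(pfd), &pfd) != 0)
                formats.push_back(pfd);   // color/depth/stencil bits, double buffering, ...
        }

        ReleaseDC(dummy, dc);
        DestroyWindow(dummy);
        return formats;
    }
    ```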

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >