Search Results

Search found 24721 results on 989 pages for 'int tostring'.

  • How to use the `itemDoubleClicked(QTreeWidgetItem*,int)` signal in qtHaskell

    - by nano
    I want to use the itemDoubleClicked(QTreeWidgetItem*,int) signal in a Haskell program I'm writing, where I am using qtHaskell for the GUI. To connect a function, I have elsewhere done the following:

    ```haskell
    dummyWidget <- myQWidget
    connectSlot object signal dummyWidget "customSlot()" $ f
    ```

    where object is some QWidget, signal is a string naming the signal (e.g. "triggered()"), and f is the function I want called when the signal is sent. The API defines connectSlot as:

    ```haskell
    class Qcs x where
        connectSlot :: QObject a -> String -> QObject b -> String -> x -> IO ()
    ```

    where the instances of Qcs are:

    ```haskell
    Qcs ()
    Qcs (QObject c -> String -> IO ())
    Qcs (QObject c -> Object d -> IO ())
    Qcs (QObject c -> Bool -> IO ())
    Qcs (QObject c -> Int -> IO ())
    Qcs (QObject c -> IO ())
    Qcs (QObject c -> OpenGLVersionFlag -> IO ())
    ```

    The first argument passed is supposed to be the QObject whose signal I'm using. As you can see, there is no instance where f, the function to connect to the signal, takes two further arguments to receive both the QWidget and the integer sent by the signal. Is there a way to connect that signal to a custom function nevertheless?

  • map operator [] operands

    - by Jamie Cook
    Hi all, I have the following in a member function:

    ```cpp
    int tt = 6;
    vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt];
    set<int>& egressCandidateStops = temp.at(dest);
    ```

    and the following declaration of a member variable:

    ```cpp
    map<int, vector<set<int>>> m_egressCandidatesByDestAndOtMode;
    ```

    However, I get an error when compiling (Intel Compiler 11.0):

    ```
    1>C:\projects\svn\bdk\Source\ZenithAssignment\src\Iteration\PtBranchAndBoundIterationOriginRunner.cpp(85): error: no operator "[]" matches these operands
    1>    operand types are: const std::map<int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>, std::less<int>, std::allocator<std::pair<const int, std::vector<std::set<int, std::less<int>, std::allocator<int>>, std::allocator<std::set<int, std::less<int>, std::allocator<int>>>>>>> [ const int ]
    1>        vector<set<int>>& temp = m_egressCandidatesByDestAndOtMode[tt];
    1>                                 ^
    ```

    I know it's got to be something silly, but I can't see what I've done wrong.
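
    The error is the classic symptom of calling operator[] on a const std::map: operator[] inserts a default-constructed value when the key is missing, so it is only defined for non-const maps, and the map is const here (most likely because the enclosing member function is declared const). A minimal sketch of the const-safe alternative, with a hypothetical typedef and function name:

    ```cpp
    #include <map>
    #include <set>
    #include <stdexcept>
    #include <vector>

    typedef std::map<int, std::vector<std::set<int> > > CandidateMap;

    // find() never inserts, so it compiles against a const map
    // (e.g. inside a const member function)
    const std::vector<std::set<int> >& candidatesFor(const CandidateMap& m, int tt)
    {
        CandidateMap::const_iterator it = m.find(tt);
        if (it == m.end())
            throw std::out_of_range("no candidates for this key");
        return it->second;
    }
    ```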

  • EF 4.0 Guid or Int as a Primary Key

    - by bigb
    I am implementing custom ASP.NET Membership using EF 4.0. Is there any reason why I should use a Guid as the primary key in the user tables? As far as I know, an int PK performs better on SQL Server than a string, and an int is easier to iterate. Also, for security purposes, if I need to pass the id somewhere in a URL I can encrypt it and pass it as a string with no problem. And if I want to use a server-generated Guid with EF 4.0, I need this trick: http://leedumond.com/blog/using-a-guid-as-an-entitykey-in-entity-framework-4/. I can't see any case where I should use a Guid as PK, except maybe when a system is going to have millions and millions of users; but even then, theoretically, couldn't a Guid be duplicated at some point? Anyway, Int32 goes up to 2,147,483,647, which is plenty even for a very, very big system, and if that is still not enough I can go with Int64, which allows 9,223,372,036,854,775,807 rows. Pretty much, huh? On the other hand, Microsoft uses Guids as PKs in its ASP.NET Membership implementation ([aspnetdb].[aspnet_Users] - PK UserId, type uniqueidentifier), so there should be some reason or explanation for why they did it. Does anyone have ideas or experience with this?
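
    For reference, a hedged T-SQL sketch of the two designs side by side (table and column names are made up). The usual performance complaint about Guid keys is index fragmentation from random NEWID() values; NEWSEQUENTIALID() was added to mitigate exactly that:

    ```sql
    -- int identity: 4 bytes, naturally increasing, compact clustered index
    CREATE TABLE UsersInt (
        UserId INT IDENTITY(1,1) PRIMARY KEY,
        UserName NVARCHAR(256) NOT NULL
    );

    -- guid: 16 bytes; NEWSEQUENTIALID() keeps inserts roughly ordered,
    -- unlike NEWID(), which scatters them across the index
    CREATE TABLE UsersGuid (
        UserId UNIQUEIDENTIFIER DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
        UserName NVARCHAR(256) NOT NULL
    );
    ```

    The common argument for Guids is that keys can be generated on any tier, or on any replicated server, without a round-trip to the database; collisions are theoretically possible but astronomically unlikely.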

  • Can you help me get my head around openssl public key encryption with rsa.h in c++?

    - by Ben
    Hi there, I am trying to get my head around public key encryption using the OpenSSL implementation of RSA in C++. Can you help? So far these are my thoughts (please correct where necessary):

    - Alice is connected to Bob over a network
    - Alice and Bob want secure communications
    - Alice generates a public/private key pair and sends the public key to Bob
    - Bob receives the public key, encrypts a randomly generated symmetric cipher key (e.g. Blowfish) with it, and sends the result to Alice
    - Alice decrypts the ciphertext with the originally generated private key and obtains the symmetric Blowfish key
    - Alice and Bob now both know the symmetric Blowfish key and can establish a secure communication channel

    Now, I have looked at the openssl/rsa.h RSA implementation (I already have practical experience with openssl/blowfish.h), and I see these two functions:

    ```c
    int RSA_public_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);
    int RSA_private_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);
    ```

    If Alice is to generate *rsa, how does this yield the RSA key pair? Is there something like rsa_public and rsa_private derived from rsa? Does *rsa contain both the public and private key, with the above functions automatically extracting the part they need? Or should two unique *rsa pointers be generated, so that we actually have:

    ```c
    int RSA_public_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa_public, int padding);
    int RSA_private_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa_private, int padding);
    ```

    Secondly, in what format should the *rsa public key be sent to Bob? Must it be reinterpreted as a character array and sent the standard way? I've heard something about certificates -- are they anything to do with it? Sorry for all the questions. Best wishes, Ben.

    EDIT: Code I am currently employing:

    ```cpp
    /*
     * theEncryptor.cpp
     *
     * Created by ben on 14/01/2010.
     * Copyright 2010 __MyCompanyName__. All rights reserved.
     */

    #include "theEncryptor.h"
    #include <iostream>
    #include <sys/socket.h>
    #include <sstream>

    theEncryptor::theEncryptor() { }

    void theEncryptor::blowfish(unsigned char *data, int data_len, unsigned char *key, int enc)
    {
        // hash the key first!
        unsigned char obuf[20];
        bzero(obuf, 20);
        SHA1((const unsigned char*)key, 64, obuf);

        BF_KEY bfkey;
        int keySize = 16; // strlen((char*)key);
        BF_set_key(&bfkey, keySize, obuf);

        unsigned char ivec[16];
        memset(ivec, 0, 16);

        unsigned char *out = (unsigned char*)malloc(data_len);
        bzero(out, data_len);
        int num = 0;
        BF_cfb64_encrypt(data, out, data_len, &bfkey, ivec, &num, enc);
        memcpy(data, out, data_len);
        free(out);
    }

    void theEncryptor::generateRSAKeyPair(int bits)
    {
        rsa = RSA_generate_key(bits, 65537, NULL, NULL);
    }

    int theEncryptor::publicEncrypt(unsigned char *data, unsigned char *dataEncrypted, int dataLen)
    {
        return RSA_public_encrypt(dataLen, data, dataEncrypted, rsa, RSA_PKCS1_OAEP_PADDING);
    }

    int theEncryptor::privateDecrypt(unsigned char *dataEncrypted, unsigned char *dataDecrypted)
    {
        return RSA_private_decrypt(RSA_size(rsa), dataEncrypted, dataDecrypted, rsa, RSA_PKCS1_OAEP_PADDING);
    }

    void theEncryptor::receivePublicKeyAndSetRSA(int sock, int bits)
    {
        int max_hex_size = (bits / 4) + 1;
        char keybufA[max_hex_size];
        bzero(keybufA, max_hex_size);
        char keybufB[max_hex_size];
        bzero(keybufB, max_hex_size);

        int n = recv(sock, keybufA, max_hex_size, 0);
        n = send(sock, "OK", 2, 0);
        n = recv(sock, keybufB, max_hex_size, 0);
        n = send(sock, "OK", 2, 0);

        rsa = RSA_new();
        BN_hex2bn(&rsa->n, keybufA);
        BN_hex2bn(&rsa->e, keybufB);
    }

    void theEncryptor::transmitPublicKey(int sock, int bits)
    {
        const int max_hex_size = (bits / 4) + 1;
        long size = max_hex_size;
        char keyBufferA[size];
        char keyBufferB[size];
        bzero(keyBufferA, size);
        bzero(keyBufferB, size);

        sprintf(keyBufferA, "%s\r\n", BN_bn2hex(rsa->n));
        sprintf(keyBufferB, "%s\r\n", BN_bn2hex(rsa->e));

        int n = send(sock, keyBufferA, size, 0);
        char recBuf[2];
        n = recv(sock, recBuf, 2, 0);
        n = send(sock, keyBufferB, size, 0);
        n = recv(sock, recBuf, 2, 0);
    }

    void theEncryptor::generateRandomBlowfishKey(unsigned char *key, int bytes)
    {
        /*
        srand((unsigned)time(NULL));
        std::ostringstream stm;
        for(int i = 0; i < bytes; i++){
            int randomValue = 65 + rand() % 26;
            stm << (char)((int)randomValue);
        }
        std::string str(stm.str());
        const char* strs = str.c_str();
        for(int i = 0; bytes; i++) key[i] = strs[i];
        */
        int n = RAND_bytes(key, bytes);
        if (n == 0)
            std::cout << "Warning: key was generated with bad entropy. "
                      << "You should not consider communication to be secure"
                      << std::endl;
    }

    theEncryptor::~theEncryptor() { }
    ```
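
    On the *rsa questions: a single RSA* holds the entire key pair. RSA_generate_key fills in both the public parts (n, e) and the private parts (d, p, q, ...), and RSA_public_encrypt/RSA_private_decrypt simply read whichever half they need from the same structure, so separate rsa_public/rsa_private pointers are not required. For transport, the usual approach is to serialize only the public half with the library's own helpers rather than inventing a wire format. A minimal sketch against the 0.9.x/1.0.x-era API the code above uses:

    ```c
    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/rsa.h>

    int main(void)
    {
        /* one structure, both halves of the key pair */
        RSA *keypair = RSA_generate_key(2048, 65537, NULL, NULL);
        if (keypair == NULL)
            return 1;

        /* write only the public half; this PEM text is what gets sent to Bob,
           who parses it back with PEM_read_RSAPublicKey() */
        PEM_write_RSAPublicKey(stdout, keypair);

        RSA_free(keypair);
        return 0;
    }
    ```

    Certificates enter the picture because sending a bare public key is vulnerable to a man-in-the-middle swapping in his own key; a certificate binds the key to an identity via a trusted third party's signature.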

  • Lucene.NET - sorting by int

    - by Judah Himango
    In the latest version of Lucene (or Lucene.NET), what is the proper way to get search results back in sorted order? I have a document like this:

    ```csharp
    var document = new Lucene.Document();
    document.AddField("Text", "foobar");
    document.AddField("CreationDate", DateTime.Now.Ticks.ToString()); // store the date as an int
    lucene.AddDocument(document);
    ```

    Now I want to do a search and get my results back ordered by most recent. How can I do a search that orders results by CreationDate? All the documentation I see is for old Lucene versions that use now-deprecated APIs.
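
    One caveat first: Ticks.ToString() produces a plain string field, and string order only matches numeric order when every value has the same length, so it is safer to index the ticks as a numeric field. A hedged sketch against the Lucene.NET 2.9/3.x-era API (writer, searcher and query are assumed to already exist):

    ```csharp
    using System;
    using Lucene.Net.Documents;
    using Lucene.Net.Search;

    // index the date numerically instead of as a string
    var doc = new Document();
    doc.Add(new Field("Text", "foobar", Field.Store.YES, Field.Index.ANALYZED));
    doc.Add(new NumericField("CreationDate", Field.Store.YES, true)
                .SetLongValue(DateTime.Now.Ticks));
    writer.AddDocument(doc);

    // query time: sort on the numeric field; true = descending (newest first)
    var sort = new Sort(new SortField("CreationDate", SortField.LONG, true));
    TopDocs top = searcher.Search(query, null, 25, sort);
    ```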

  • How can I convert a string into a byte[] of unsigned int 32 in C#

    - by Miroo
    I have a string like "0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF" and I want to convert it into:

    ```csharp
    byte[] key = new byte[] { 0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF };
    ```

    I thought about splitting the string on ',' and then looping over it, setting each value into another byte[] at index i:

    ```csharp
    string Key = "0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF";
    string[] arr = Key.Split(',');
    byte[] keybyte = new byte[8];
    for (int i = 0; i < arr.Length; i++)
    {
        keybyte.SetValue(Int32.Parse(arr[i].ToString()), i);
    }
    ```

    but it doesn't seem to work; I get an error converting the string into an unsigned Int32 right at the start. Any help would be appreciated.
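
    The parse fails because Int32.Parse does not accept the "0x" prefix (and each token after the split keeps a leading space). Convert.ToByte(s, 16) tolerates the prefix directly; a minimal sketch:

    ```csharp
    using System;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            string key = "0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF";

            // base-16 parsing accepts the 0x prefix; Trim drops the spaces
            byte[] bytes = key.Split(',')
                              .Select(s => Convert.ToByte(s.Trim(), 16))
                              .ToArray();

            Console.WriteLine(BitConverter.ToString(bytes)); // 5D-50-68-BE-C9-B3-84-FF
        }
    }
    ```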

  • Concatenate int and string in LINQ

    - by Waheed
    Hi, I am using the following code:

    ```csharp
    from c in Country
    where c.IsActive.Equals(true)
    orderby c.CountryName
    select new
    {
        countryIDCode = c.CountryID + "|" + c.TwoDigitCode,
        countryName = c.CountryName
    }
    ```

    but I get the following error:

    ```
    Unable to cast the type 'System.Int32' to type 'System.Object'. LINQ to Entities only supports casting Entity Data Model primitive types.
    ```

    CountryID is an int and TwoDigitCode is a string. Can you please let me know how to concatenate them? Thanks.
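
    LINQ to Entities cannot translate the implicit int-to-string conversion into SQL, but EF 4 ships SqlFunctions.StringConvert, which maps to T-SQL's STR(). A sketch under that assumption (context stands in for the ObjectContext; Trim() removes the left-padding STR() adds):

    ```csharp
    using System.Data.Objects.SqlClient;

    var countries = from c in context.Country
                    where c.IsActive
                    orderby c.CountryName
                    select new
                    {
                        countryIDCode = SqlFunctions.StringConvert((double)c.CountryID).Trim()
                                        + "|" + c.TwoDigitCode,
                        countryName = c.CountryName
                    };
    ```

    (StringConvert only accepts double or decimal, hence the cast.)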

  • How do I convert an INT into HH:mm:ss using SSRS 2005

    - by user293249
    OK, I need to display the total talk time of an agent, which comes into SSRS 2005 from SQL 2005 as an INT (a count of seconds). For the life of me I cannot figure out what combination of expression editing or format editing I need. For the detail portion I can use:

    ```
    =DateAdd("s", Sum(Fields!Talk_Time.Value), CDate("00:00"))
    ```

    and it returns: 1/1/0001 12:00:14 AM. I can use:

    ```
    =Left(DateAdd("s", Sum(Fields!Talk_Time.Value), CDate("00:00")), 8)
    ```

    which returns: 12:00:14. But what I really need is: 00:00:14. Please help!
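
    The DateAdd trick is already most of the way there; wrapping it in Format with a 24-hour pattern gives the leading "00". A sketch in standard SSRS expression syntax:

    ```
    =Format(DateAdd("s", Sum(Fields!Talk_Time.Value), CDate("00:00:00")), "HH:mm:ss")
    ```

    Note this wraps around once talk time exceeds 24 hours; if that can happen, build the string with integer math instead:

    ```
    =Format(Sum(Fields!Talk_Time.Value) \ 3600, "00") & ":"
     & Format((Sum(Fields!Talk_Time.Value) Mod 3600) \ 60, "00") & ":"
     & Format(Sum(Fields!Talk_Time.Value) Mod 60, "00")
    ```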

  • 32 bit unsigned int php

    - by Zeta Two
    Hello! Does anyone know of a class/library/etc. that can simulate 32-bit unsigned integers on a 32-bit platform in PHP? I'm porting a C library to PHP, and it uses a lot of integers that are greater than the maximum 32-bit signed int.
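
    A hedged sketch of one common approach, assuming the GMP extension is available: do the arithmetic in arbitrary precision, then reduce modulo 2^32 to mimic C's unsigned wrap-around (u32_add is a made-up helper name):

    ```php
    <?php
    // uint32 addition via GMP: compute exactly, then wrap modulo 2^32
    function u32_add($a, $b) {
        $sum = gmp_add(gmp_init($a), gmp_init($b));
        return gmp_strval(gmp_mod($sum, gmp_pow(2, 32)));
    }

    echo u32_add('3000000000', '2000000000'); // "705032704", as C would give
    ```

    The BCMath extension works the same way if GMP is not compiled in.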

  • mysql enum performance: is enum slower than INT

    - by JP19
    Hi, is it better to have a field status enum('active', 'hidden', 'deleted'), or status tinyint(3) with a lookup table? Assume that status can take only one value at a time. In particular, I am interested in knowing: are operations on an enum significantly slower than operations on an int, or just as fast? There is a related question on SO (Mysql: enum confusion), but (i) it does not discuss performance at all, and (ii) there is very little explanation of WHY one approach is better than the other. Regards, JP
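
    For concreteness, a sketch of the two designs with made-up table names. MySQL stores an ENUM internally as a 1- or 2-byte integer, so raw comparison speed is close to TINYINT; the practical trade-off is that changing an ENUM's value list requires an ALTER TABLE, while the lookup table costs a join whenever the label is needed:

    ```sql
    -- option 1: enum column
    CREATE TABLE article_enum (
        id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        status ENUM('active', 'hidden', 'deleted') NOT NULL DEFAULT 'active'
    ) ENGINE=InnoDB;

    -- option 2: tinyint plus lookup table
    CREATE TABLE status_lookup (
        id TINYINT UNSIGNED PRIMARY KEY,
        name VARCHAR(16) NOT NULL
    ) ENGINE=InnoDB;

    CREATE TABLE article_int (
        id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        status_id TINYINT UNSIGNED NOT NULL,
        FOREIGN KEY (status_id) REFERENCES status_lookup (id)
    ) ENGINE=InnoDB;
    ```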

  • converting an int to char*

    - by Alexander
    This is a very basic question, and I know one way is to do the following:

    ```c
    char buffer[33];
    itoa(aq_width, buffer, 10);
    ```

    where aq_width is the int, but then I can't guarantee what buffer size I would need to do this. I can always allocate a very large buffer, but that wouldn't be very nice. Is there another pretty and simple way to do this?
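
    A sketch of the usual portable answer (C99): call snprintf with a NULL buffer first to measure, then allocate exactly. For int specifically, a fixed 12-byte buffer (sign, 10 digits, terminating NUL) is also always enough where int is 32 bits; int_to_string is a made-up helper name:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* returns a freshly malloc'd decimal string; the caller frees it */
    char *int_to_string(int value)
    {
        int needed = snprintf(NULL, 0, "%d", value);   /* length without NUL */
        char *buf = malloc((size_t)needed + 1);
        if (buf != NULL)
            snprintf(buf, (size_t)needed + 1, "%d", value);
        return buf;
    }
    ```

    In C++, std::ostringstream (or std::to_string since C++11) removes the sizing problem entirely.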

  • MIN() and MAX() in Swift and converting Int to CGFloat

    - by gotnull
    I'm getting some errors with the following methods:

    1) How do I return screenHeight / cellCount as a CGFloat in the first method?
    2) How do I use the equivalent of Objective-C's MIN() and MAX() in the second method?

    ```swift
    func tableView(tableView: UITableView!, heightForRowAtIndexPath indexPath: NSIndexPath!) -> CGFloat {
        var cellCount = Int(self.tableView.numberOfRowsInSection(indexPath.section))
        return screenHeight / cellCount as CGFloat
    }

    // #pragma mark - UIScrollViewDelegate
    func scrollViewDidScroll(scrollView: UIScrollView) {
        let height = CGFloat(scrollView.bounds.size.height)
        let position = CGFloat(MAX(scrollView.contentOffset.y, 0.0))
        let percent = CGFloat(MIN(position / height, 1.0))
        blurredImageView.alpha = percent
    }
    ```
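
    A sketch of both fixes in the question's early-Swift dialect (screenHeight and blurredImageView are assumed to exist on the surrounding class): Swift replaces the MIN()/MAX() macros with the free functions min(_:_:) and max(_:_:), and an Int cannot be turned into a CGFloat with as -- it needs the CGFloat() initializer:

    ```swift
    func tableView(tableView: UITableView!, heightForRowAtIndexPath indexPath: NSIndexPath!) -> CGFloat {
        let cellCount = tableView.numberOfRowsInSection(indexPath.section)
        return screenHeight / CGFloat(cellCount)   // convert first, then divide
    }

    func scrollViewDidScroll(scrollView: UIScrollView) {
        let height = scrollView.bounds.size.height      // already a CGFloat
        let position = max(scrollView.contentOffset.y, 0.0)
        let percent = min(position / height, 1.0)
        blurredImageView.alpha = percent
    }
    ```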
