Search Results

Search found 5202 results on 209 pages for 'char'.


  • When is ¦ not equal to ¦?

    - by Trey Jackson
    Background. I'm working with netlists, and in general people specify different hierarchies by using /. However, it's not illegal to actually use a / as part of an instance name. For example, X1/X2/X3/X4 might refer to instance X4 inside another instance named X1/X2/X3. Or it might refer to an instance named X3/X4 inside an instance named X2 inside an instance named X1. Got it? There's really no "regular" character that cannot be used as part of an instance name, so you resort to a non-printable one, or ... perhaps one outside of the standard 0..127 ASCII chars. I thought I'd try (decimal) 166, because for me it shows up as the pipe: ¦. So... I've got some C++ code which constructs the path name using ¦ as the hierarchical separator, so the path above looks like X1¦X2/X3¦X4. Now the GUI is written in Tcl/Tk, and to properly translate this into human-readable terms I need to do something like the following:

        set path [getPathFromC++]  ;# returns X1¦X2/X3¦X4
        set humanreadable [join [split $path ¦] /]

    Basically, replace the ¦ with / (I could also accomplish this with [string map]). Now, the problem is, the ¦ in the string I get from C++ doesn't match the ¦ I can create in Tcl. i.e. this fails:

        set path [getPathFromC++]  ;# returns X1¦X2/X3¦X4
        string match $path [format X1%cX2/X3%cX4 166 166]

    Visually, the two strings look identical, but string match fails. I even tried using scan to see if I'd mixed up the bit values. But

        set path [getPathFromC++]   ;# returns X1¦X2/X3¦X4
        set path2 [format X1%cX2/X3%cX4 166 166]
        for {set i 0} {$i < [string length $path]} {incr i} {
            set p [string range $path $i $i]
            set p2 [string range $path2 $i $i]
            scan $p %c c
            scan $p2 %c c2
            puts [list $p $c :::: $p2 $c2 equal? [string equal $c $c2]]
        }

    produces output which looks like everything should match, except [string equal] fails for the ¦ characters with a printed line:

        ¦ 166 :::: ¦ 166 equal? 0

    For what it's worth, the character in C++ is defined as:

        const char SEPARATOR = 166;

    Any ideas why a character outside the regular ASCII range would fail like this? When I changed the separator to (decimal) 28 (^\), things worked fine. I just don't want to get bitten by a similar problem on a different platform. (I'm currently using Red Hat Linux.)
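
    A plausible explanation (an assumption, not confirmed here): Tcl strings are UTF-8 internally, so [format %c 166] yields the two-byte encoding of U+00A6, while the raw char 166 arriving from the C++ side may be treated as a bare byte or as Latin-1 and end up as a different internal string. A minimal C++ sketch of emitting the separator as UTF-8 so both sides agree (joinPath and the C++/Tcl boundary itself are hypothetical):

        // Sketch: build paths with the UTF-8 encoding of U+00A6 (BROKEN BAR),
        // matching what Tcl's [format %c 166] produces internally. Assumes the
        // boundary passes byte strings through unmodified.
        #include <string>

        const std::string SEPARATOR = "\xC2\xA6";  // U+00A6 in UTF-8

        std::string joinPath(const std::string& outer, const std::string& inner) {
            return outer + SEPARATOR + inner;      // e.g. X1¦X2/X3¦X4
        }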


  • Reorganizing MySQL table to multiple rows by timestamp.

    - by Ben Burleson
    OK MySQL Wizards: I have a table of position data from multiple probes defined as follows:

        +----------+----------+------+-----+---------+-------+
        | Field    | Type     | Null | Key | Default | Extra |
        +----------+----------+------+-----+---------+-------+
        | time     | datetime | NO   |     | NULL    |       |
        | probe_id | char(3)  | NO   |     | NULL    |       |
        | position | float    | NO   |     | NULL    |       |
        +----------+----------+------+-----+---------+-------+

    A simple select outputs something like this:

        +---------------------+----------+----------+
        | time                | probe_id | position |
        +---------------------+----------+----------+
        | 2010-05-05 14:16:42 | 00A      | 0.0045   |
        | 2010-05-05 14:16:42 | 00B      | 0.0005   |
        | 2010-05-05 14:16:42 | 00C      | 0.002    |
        | 2010-05-05 14:16:42 | 01A      | 0        |
        | 2010-05-05 14:16:42 | 01B      | 0.001    |
        | 2010-05-05 14:16:42 | 01C      | 0.0025   |
        | 2010-05-05 14:16:43 | 00A      | 0.0045   |
        | 2010-05-05 14:16:43 | 00B      | 0.0005   |
        | 2010-05-05 14:16:43 | 00C      | 0.002    |
        | 2010-05-05 14:16:43 | 01A      | 0        |
        | .                   | .        | .        |
        | .                   | .        | .        |
        | .                   | .        | .        |
        +---------------------+----------+----------+

    However, I'd like to output something like this:

        +---------------------+--------+--------+-------+-----+-------+--------+
        | time                | 00A    | 00B    | 00C   | 01A | 01B   | 01C    |
        +---------------------+--------+--------+-------+-----+-------+--------+
        | 2010-05-05 14:16:42 | 0.0045 | 0.0005 | 0.002 | 0   | 0.001 | 0.0025 |
        | 2010-05-05 14:16:43 | 0.0045 | 0.0005 | 0.002 | 0   | 0.001 | 0.0025 |
        | 2010-05-05 14:16:44 | 0.0045 | 0.0005 | 0.002 | 0   | 0.001 | 0.0025 |
        | 2010-05-05 14:16:45 | 0.0045 | 0.0005 | 0.002 | 0   | 0.001 | 0.0025 |
        | 2010-05-05 14:16:46 | 0.0045 | 0.0005 | 0.002 | 0   | 0.001 | 0.0025 |
        | 2010-05-05 14:16:47 | 0.0045 | 0.0005 | 0.002 | 0   | 0.001 | 0.0025 |
        | .                   | .      | .      | .     | .   | .     | .      |
        | .                   | .      | .      | .     | .   | .     | .      |
        | .                   | .      | .      | .     | .   | .     | .      |
        +---------------------+--------+--------+-------+-----+-------+--------+

    Ideally, the different probe position columns are dynamically generated based on data in the table. Is this possible, or am I pulling my hair out for nothing? I've tried GROUP BY time with GROUP_CONCAT that roughly gets the data out, but I can't separate that output into probe_id columns.

        mysql> SELECT time, GROUP_CONCAT(probe_id), GROUP_CONCAT(position) FROM MG41 GROUP BY time LIMIT 10;
        +---------------------+-------------------------+------------------------------------+
        | time                | GROUP_CONCAT(probe_id) | GROUP_CONCAT(position)              |
        +---------------------+-------------------------+------------------------------------+
        | 2010-05-05 14:16:42 | 00A,00B,00C,01A,01B,01C | 0.0045,0.0005,0.002,0,0.001,0.0025 |
        | 2010-05-05 14:16:43 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:44 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:45 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:46 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:47 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:48 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:49 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:50 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        | 2010-05-05 14:16:51 | 01C,01B,01A,00C,00B,00A | 0.0025,0.001,0,0.002,0.0005,0.0045 |
        +---------------------+-------------------------+------------------------------------+


  • QTreeView memory consumption

    - by Eye of Hell
    Hello. I'm testing QTreeView functionality right now, and I was amazed by one thing. It seems that QTreeView memory consumption depends on the item count O_O. This is highly unusual, since model-view containers of this type only keep track of the items being displayed; the rest of the items live in the model. I have written the following code with a simple model that holds no data and just reports that it has 10 million items. With MFC, Windows API or .NET, a tree / list with such a model will take no memory, since it will display only the 10-20 visible elements and will request more from the model upon scrolling / expanding items. But with Qt, such a simple model results in ~300 MB memory consumption. Increasing the number of items increases memory consumption. Maybe anyone can hint me what I'm doing wrong? :)

        #include <QtGui/QApplication>
        #include <QTreeView>
        #include <QAbstractItemModel>

        class CModel : public QAbstractItemModel
        {
        public:
            QModelIndex index(int i_nRow, int i_nCol,
                              const QModelIndex& i_oParent = QModelIndex()) const
            {
                return createIndex(i_nRow, i_nCol, 0);
            }

            QModelIndex parent(const QModelIndex& i_oInex) const
            {
                return QModelIndex();
            }

            int rowCount(const QModelIndex& i_oParent = QModelIndex()) const
            {
                return i_oParent.isValid() ? 0 : 1000 * 1000 * 10;
            }

            int columnCount(const QModelIndex& i_oParent = QModelIndex()) const
            {
                return 1;
            }

            QVariant data(const QModelIndex& i_oIndex, int i_nRole = Qt::DisplayRole) const
            {
                return Qt::DisplayRole == i_nRole ? QVariant("1") : QVariant();
            }
        };

        int main(int argc, char *argv[])
        {
            QApplication a(argc, argv);
            QTreeView oWnd;
            CModel oModel;
            oWnd.setUniformRowHeights(true);
            oWnd.setModel(&oModel);
            oWnd.show();
            return a.exec();
        }


  • Interoperability between two AES algorithms

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try to understand the basics of it. I'm not trying to build the algorithms from scratch, but I'm trying to make two different AES-256 implementations talk to each other. I've got a database that was populated with this JavaScript implementation, stored in Base64. Now I'm trying to get an Objective-C method to decrypt its content, but I'm a little lost as to where the differences between the implementations are. I'm able to encrypt/decrypt in JavaScript and I'm able to encrypt/decrypt in Cocoa, but cannot make a string encrypted in JavaScript decrypt in Cocoa or vice versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation, or all of these, which, quite frankly, doesn't speak to me at the moment. Here's what I'm using in Objective-C, adapted mainly from this and this:

        @implementation NSString (Crypto)

        - (NSString *)encryptAES256:(NSString *)key {
            NSData *input = [self dataUsingEncoding:NSUTF8StringEncoding];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE];
            return [Base64 encode:output];
        }

        - (NSString *)decryptAES256:(NSString *)key {
            NSData *input = [Base64 decode:self];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE];
            return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease];
        }

        + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt {
            // 'key' should be 32 bytes for AES256, will be null-padded otherwise
            char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));     // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [input length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void* buffer = malloc(bufferSize);

            size_t numBytesCrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt,
                                                  kCCAlgorithmAES128,
                                                  kCCOptionECBMode | kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  nil,                        // initialization vector (optional)
                                                  [input bytes], dataLength,  // input
                                                  buffer, bufferSize,         // output
                                                  &numBytesCrypted);

            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted];
            }
            free(buffer); // free the buffer
            return nil;
        }

        @end

    Of course, the input is Base64-decoded beforehand. I see that each encryption with the same key and same content in JavaScript gives a different encrypted string, which is not the case with the Objective-C implementation, which always gives the same encrypted string. I've read the answers of this post and it makes me believe I'm right about something along the lines of vector initialization, but I'd need your help to pinpoint what's going on exactly. Thank you!


  • Using a map with set_intersection

    - by Robin Welch
    I've not used set_intersection before, but I believe it will work with maps. I wrote the following example code, but it doesn't give me what I'd expect:

        #include <map>
        #include <string>
        #include <iostream>
        #include <iterator>
        #include <algorithm>

        using namespace std;

        struct Money
        {
            double amount;
            string currency;

            bool operator< ( const Money& rhs ) const
            {
                if ( amount != rhs.amount )
                    return ( amount < rhs.amount );
                return ( currency < rhs.currency );
            }
        };

        int main( int argc, char* argv[] )
        {
            Money mn[] = { { 2.32, "USD" }, { 2.76, "USD" }, { 4.30, "GBP" },
                           { 1.21, "GBP" }, { 1.37, "GBP" }, { 6.74, "GBP" },
                           { 2.55, "EUR" } };

            typedef pair< int, Money > MoneyPair;
            typedef map< int, Money >  MoneyMap;

            MoneyMap map1;
            map1.insert( MoneyPair( 1, mn[1] ) );
            map1.insert( MoneyPair( 2, mn[2] ) );
            map1.insert( MoneyPair( 3, mn[3] ) );   // (3)
            map1.insert( MoneyPair( 4, mn[4] ) );   // (4)

            MoneyMap map2;
            map1.insert( MoneyPair( 3, mn[3] ) );   // (3)
            map1.insert( MoneyPair( 4, mn[4] ) );   // (4)
            map1.insert( MoneyPair( 5, mn[5] ) );
            map1.insert( MoneyPair( 6, mn[6] ) );
            map1.insert( MoneyPair( 7, mn[7] ) );

            MoneyMap out;
            MoneyMap::iterator out_itr( out.begin() );
            set_intersection( map1.begin(), map1.end(),
                              map2.begin(), map2.end(),
                              inserter( out, out_itr ) );

            cout << "intersection has " << out.size() << " elements." << endl;
            return 0;
        }

    Since the pairs labelled (3) and (4) appear in both maps, I was expecting to get 2 elements in the intersection, but no, I get:

        intersection has 0 elements.

    I'm sure this is something to do with the comparator on the map / pair but I can't figure it out.
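
    Two things stand out here. First, the second block of inserts targets map1, not map2 (and mn[7] indexes past the seven-element array), so map2 stays empty, which alone yields an empty intersection. Second, even with that fixed, set_intersection compares whole map value_type pairs, dragging Money's ordering into the match. To intersect on keys alone, supply a key-only comparator. A minimal sketch, with helper names that are mine rather than from the question:

        // Intersect two maps on key only, via a custom comparator.
        // set_intersection requires both ranges sorted by the same criterion;
        // map iterators already satisfy that for key order.
        #include <algorithm>
        #include <iterator>
        #include <map>

        struct KeyLess {
            template <typename Pair>
            bool operator()(const Pair& a, const Pair& b) const
            { return a.first < b.first; }
        };

        template <typename Map>
        Map intersectByKey(const Map& m1, const Map& m2)
        {
            Map out;
            std::set_intersection(m1.begin(), m1.end(),
                                  m2.begin(), m2.end(),
                                  std::inserter(out, out.begin()),
                                  KeyLess());
            return out;
        }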


  • Problem with executing MySQL stored procedure

    - by karthik
    The stored procedure builds without any problem. The purpose of this is to take a backup of selected tables to a script file; the SP returns the {Insert statements}. I am using the MySQL stored procedure below, created by SQLWAYS [a tool to convert MS SQL to MySQL]. The actual MS SQL SP is from http://www.codeproject.com/KB/database/InsertGeneratorPack.aspx When I execute the SP in MySQL Query Browser, it says "Unknown column 'tbl_users' in 'field list'". What would be the problem? There was no error when I built this converted MySQL SP. Help...

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `demo`.`InsertGenerator` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`(v_tableName VARCHAR(100))
        SWL_return:
        BEGIN
           -- SQLWAYS_EVAL# to retrieve column specific information
           -- SQLWAYS_EVAL# table
           DECLARE v_string NATIONAL VARCHAR(3000);     -- SQLWAYS_EVAL# first half
                                                        -- SQLWAYS_EVAL# tement
           DECLARE v_stringData NATIONAL VARCHAR(3000); -- SQLWAYS_EVAL# data
                                                        -- SQLWAYS_EVAL# statement
           DECLARE v_dataType NATIONAL VARCHAR(1000);   -- SQLWAYS_EVAL#
                                                        -- SQLWAYS_EVAL# columns
           DECLARE v_colName NATIONAL VARCHAR(50);
           DECLARE NO_DATA INT DEFAULT 0;
           DECLARE cursCol CURSOR FOR
              SELECT column_name, data_type FROM `columns` WHERE table_name = v_tableName;
           DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
           BEGIN
              SET NO_DATA = -2;
           END;
           DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;

           OPEN cursCol;
           SET v_string = CONCAT('INSERT ', v_tableName, '(');
           SET v_stringData = '';
           SET NO_DATA = 0;
           FETCH cursCol INTO v_colName, v_dataType;
           IF NO_DATA <> 0 then
              -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.')
              close cursCol;
              LEAVE SWL_return;
           end if;

           WHILE NO_DATA = 0 DO
              IF v_dataType in('varchar','char','nchar','nvarchar') then
                 SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+');
              ELSE
                 if v_dataType in('text','ntext') then
                    -- SQLWAYS_EVAL#
                    -- SQLWAYS_EVAL# else
                    SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+');
                 ELSE
                    IF v_dataType = 'money' then
                       -- SQLWAYS_EVAL# doesn't get converted
                       -- SQLWAYS_EVAL# implicitly
                       SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+');
                    ELSE
                       IF v_dataType = 'datetime' then
                          SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+''''''),''+');
                       ELSE
                          IF v_dataType = 'image' then
                             SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName,'SQLWAYS_EVAL# 6)),''0'')+'''''',''+');
                          ELSE
                             SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+');
                          end if;
                       end if;
                    end if;
                 end if;
              end if;
              SET v_string = CONCAT(v_string, v_colName, ',');
              SET NO_DATA = 0;
              FETCH cursCol INTO v_colName, v_dataType;
           END WHILE;
        END $$

        DELIMITER ;


  • C program using inotify to monitor multiple directories along with sub-directories?

    - by lakshmipathi
    I have a program which monitors a directory (/test) and notifies me. I want to improve it to monitor another directory (say /opt) as well. And also: how do I monitor its subdirectories? Currently I'll get notified if any changes are made to files under /test, but I'm not getting any notification if changes are made in a sub-directory of /test, that is, touch /test/sub-dir/files.txt. Here is my current code; hope this will help:

        /* Simple example for inotify in Linux.
           inotify has 3 main functions:
           inotify_init1 to initialize,
           inotify_add_watch to add a monitor,
           then inotify_??_watch to remove a monitor. You guess what replaces ??.
           Yes, the third one is inotify_rm_watch(). */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/inotify.h>

        int main(){
            int fd, wd, wd1, i = 0, len = 0;
            char pathname[100], buf[1024];
            struct inotify_event *event;

            fd = inotify_init1(IN_NONBLOCK);
            /* watch /test directory for any activity and report it back to me */
            wd = inotify_add_watch(fd, "/test", IN_ALL_EVENTS);
            while (1) {
                /* read 1024 bytes of events from fd into buf */
                i = 0;
                len = read(fd, buf, 1024);
                while (i < len) {
                    event = (struct inotify_event *) &buf[i];
                    /* check for changes */
                    if (event->mask & IN_OPEN)
                        printf("%s :was opened\n", event->name);
                    if (event->mask & IN_MODIFY)
                        printf("%s : modified\n", event->name);
                    if (event->mask & IN_ATTRIB)
                        printf("%s :meta data changed\n", event->name);
                    if (event->mask & IN_ACCESS)
                        printf("%s :was read\n", event->name);
                    if (event->mask & IN_CLOSE_WRITE)
                        printf("%s :file opened for writing was closed\n", event->name);
                    if (event->mask & IN_CLOSE_NOWRITE)
                        printf("%s :file opened not for writing was closed\n", event->name);
                    if (event->mask & IN_DELETE_SELF)
                        printf("%s :deleted\n", event->name);
                    if (event->mask & IN_DELETE)
                        printf("%s :deleted\n", event->name);
                    /* update index to start of next event */
                    i += sizeof(struct inotify_event) + event->len;
                }
            }
        }
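
    inotify watches are not recursive, so each subdirectory needs its own watch. One common approach is to walk the tree with nftw() and add a watch per directory, then also watch for IN_CREATE events with IN_ISDIR set to cover directories created later. A hedged sketch of the walk (the callback and global descriptor are illustrative, not part of the question's code):

        // Walk a tree and add an inotify watch to every directory found.
        #define _XOPEN_SOURCE 500
        #include <ftw.h>
        #include <sys/inotify.h>
        #include <stdio.h>

        static int g_fd; /* inotify descriptor, set before the walk */

        static int add_watch_cb(const char *path, const struct stat *sb,
                                int typeflag, struct FTW *ftwbuf)
        {
            if (typeflag == FTW_D) {            /* directories only */
                if (inotify_add_watch(g_fd, path, IN_ALL_EVENTS) < 0)
                    perror(path);
            }
            return 0;                           /* keep walking */
        }

        /* usage: g_fd = inotify_init1(IN_NONBLOCK);
                  nftw("/test", add_watch_cb, 16, 0);
                  nftw("/opt",  add_watch_cb, 16, 0); */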


  • Flash carousel XML parse HTML link

    - by Marvin
    Hello, I am trying to modify a carousel script I have in Flash. Its normal function is making some icons rotate; when clicked, they zoom in, all the others fade, and a little text is displayed. In that text I would like to have a link, like a "read more". If I use CDATA it won't display a thing; if I use alt chars like &#60;a href=&#34;www.google.com&#34;&#62; Read more + &#60;/a&#62; it just displays the text as: <a href="www.google.com"> Read more + </a>. The Flash dynamic text box won't render it as HTML. I don't know enough AS2 to figure out how to add this. My code:

        var xml:XML = new XML();
        xml.ignoreWhite = true;

        // definições do xml
        xml.onLoad = function() {
            var nodes = this.firstChild.childNodes;
            numOfItems = nodes.length;
            for (var i = 0; i < numOfItems; i++) {
                var t = home.attachMovie("item", "item" + i, i + 1);
                t.angle = i * ((Math.PI * 2) / numOfItems);
                t.onEnterFrame = mover;
                t.toolText = nodes[i].attributes.tooltip;
                t.content = nodes[i].attributes.content;
                t.icon.inner.loadMovie(nodes[i].attributes.image);
                t.r.inner.loadMovie(nodes[i].attributes.image);
                t.icon.onRollOver = over;
                t.icon.onRollOut = out;
                t.icon.onRelease = released;
            }
        }

    And the XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <icons>
            <icon image="images/product.swf" tooltip="Product"
                  content="Hello this is some random text &#60;a href=&#34;www.google.com&#62; Read More + &#60;/a&#62; "/>
        </icons>

    Any suggestions? Thanks.


  • C++ program Telephone Directory from a file

    - by Stacy Doyle
    I am writing a program for a phone directory. The user inputs a name and the program searches the file and either outputs the number or an error because the person's name is not in the file. The program should also ask the user if they would like to continue using the program and look up another number. So far it runs, asks for the name, and then prints the error message I put in place saying that the name is not in the database. I am guessing that I must not really be having my program look through the file, but I'm not sure what to do. I also don't know how to get the program to run again if the user chooses to continue.

        #include <iostream>
        #include <fstream>
        #include <string>
        #include <iomanip>
        using namespace std;

        char chr;

        int main()
        {
            string first;
            string last;
            string number;
            string firstfile;
            string lastfile;
            string numberfile;
            int cont;

            ifstream infile;
            infile.open("name and numbers.dat"); // opening the file
            infile >> firstfile >> lastfile >> numberfile;

            cout << "Enter a first and last name." << endl; // asking user for the input
            cin >> first >> last;                           // input the data
            {
                if (first == firstfile && last == lastfile) // if the entered information matches the information in the file
                    cout << first << " " << last << "'s number is " << numberfile << endl; // this is printed
                else
                    cout << "Sorry that is not in our database." << endl; // if the information doesn't match this is printed
            }
            cout << "Would you like to search for another name? Y or N" << endl; // user is asked if they would like to continue
            cin >> cont;

            infile.close(); // close file
            cin >> chr;
            return 0;
        }
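
    Two pieces look missing rather than wrong: the file is read only once instead of record by record, and nothing loops back when the user answers Y. A hedged sketch of both, keeping the question's file name and three-field record layout:

        // Scan every record in the file, and repeat the whole lookup in a
        // do/while driven by the user's Y/N answer.
        #include <fstream>
        #include <iostream>
        #include <string>
        using namespace std;

        int main() {
            char again = 'Y';
            do {
                string first, last, ffile, lfile, nfile;
                cout << "Enter a first and last name." << endl;
                cin >> first >> last;

                ifstream infile("name and numbers.dat");
                bool found = false;
                while (infile >> ffile >> lfile >> nfile) { // read every record
                    if (first == ffile && last == lfile) {
                        cout << first << " " << last << "'s number is " << nfile << endl;
                        found = true;
                        break;
                    }
                }
                if (!found)
                    cout << "Sorry that is not in our database." << endl;

                cout << "Would you like to search for another name? Y or N" << endl;
                cin >> again;
            } while (again == 'Y' || again == 'y');
            return 0;
        }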


  • Multiplying matrices: error: expected primary-expression before 'struct'

    - by justin
    I am trying to write a program that is supposed to multiply matrices using threads. I am supposed to fill the matrices using random numbers in a thread. I am compiling with g++ and using pthreads. I have also created a struct to pass the data from my command-line input to the thread so it can generate the matrix of random numbers. The sizes of the two matrices are also passed on the command line. I keep getting:

        main.cpp:7: error: expected primary-expression before 'struct'

    My code at line 7:

        struct a {
            int Arow;
            int Acol;
            int low;
            int high;
        };

    My inputs are: sizes of the two matrices (4 arguments), and the high and low range to generate the random numbers between. Complete code:

        [headers]
        using namespace std;

        void *matrixACreate(struct *);
        void *status;

        int main(int argc, char * argv[])
        {
            int Arow = atoi(argv[1]); // Matrix A
            int Acol = atoi(argv[2]); // WxX
            int Brow = atoi(argv[3]); // Matrix B
            int Bcol = atoi(argv[4]); // XxZ,
            int low  = atoi(argv[5]); // Range low
            int high = atoi(argv[6]);

            struct a {
                int Arow; // Matrix A
                int Acol; // WxX
                int low;  // Range low
                int high;
            };

            pthread_t matrixAthread;
            //pthread_t matrixBthread;
            pthread_t runner;
            int error, retValue;

            if (Acol != Brow) {
                cout << " This matrix cannot be multiplied. FAIL" << endl;
                return 0;
            }

            error = pthread_create(&matrixAthread, NULL, matrixACreate, struct *a);
            //error = pthread_create(&matrixAthread, NULL, matrixBCreate, sendB);
            retValue = pthread_join(matrixAthread, &status);
            //retValue = pthread_join(matrixBthread, &status);
            return 0;
        }

        void matrixACreate(struct * a)
        {
            struct a *data = (struct a *) malloc(sizeof(struct a));
            data->Arow = Arow;
            data->Acol = Acol;
            data->low = low;
            data->high = high;
            int range = ((high - low) + 1);
            cout << Arow << endl << Acol << endl;
        } // just trying to print to see if I am in the thread
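
    The usual pattern is to define the struct once at file scope (before any prototype that mentions it), fill an instance in main(), and pass its address through the void* parameter of pthread_create; the thread function then casts it back. A sketch with names mirroring the question's (the struct name MatrixArgs is mine):

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct MatrixArgs {
            int rows, cols, low, high;
        };

        void *matrixACreate(void *arg)
        {
            // recover the typed pointer from the void* thread argument
            struct MatrixArgs *a = (struct MatrixArgs *) arg;
            printf("%d x %d, range [%d, %d]\n", a->rows, a->cols, a->low, a->high);
            return NULL;
        }

        int main(int argc, char *argv[])
        {
            struct MatrixArgs args = { 4, 4, 1, 9 }; // would come from atoi(argv[...])
            pthread_t tid;
            if (pthread_create(&tid, NULL, matrixACreate, &args) != 0) {
                perror("pthread_create");
                return 1;
            }
            pthread_join(tid, NULL); // args stays alive until the thread is done
            return 0;
        }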


  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use a glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% CPU time on a 3 GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printf's ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if ( WSAStartup ( 0x0202, &wsaData ) )
            {
                printf("WSAStartup failed!\n");
                exit(0); // :)
                WSACleanup();
            }

            sock = socket( AF_INET, SOCK_DGRAM, 0 );
            if (sock == INVALID_SOCKET)
            {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock,(struct sockaddr*) &target, sizeof(struct sockaddr_in) ) == SOCKET_ERROR)
            {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN );

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode, and it's "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether or not I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful because the time is not spent in my program, but in some external glib/glibmm DLL.


  • Still failing a function, not sure why...ideas on test cases to run?

    - by igor
    I've been trying to get this Sudoku game working, and I am still failing some of the individual functions. All together the game works, but when I run it through an "autograder", some test cases fail. Currently I am stuck on the following function, placeValue, failing. I do have the output that I get vs. what the correct one should be, but am confused about what is going on. EDIT: I do not know what input/calls they make to the function. What happens is that "Invalid row" is outputted after every placeValue call, and I can't trace why. Here is the output (mine + correct one), if it's at all helpful: http://pastebin.com/Wd3P3nDA Here is placeValue, and following is getCoords, which placeValue calls:

        void placeValue(Square board[BOARD_SIZE][BOARD_SIZE])
        {
            int x, y, value;
            if (getCoords(x, y))
            {
                cin >> value;
                if (board[x][y].permanent)
                {
                    cout << endl << "That location cannot be changed";
                }
                else if (!(value >= 1 && value <= 9))
                {
                    cout << "Invalid number" << endl;
                    clearInput();
                }
                else if (validMove(board, x, y, value))
                {
                    board[x][y].number = value;
                }
            }
        }

        bool getCoords(int & x, int & y)
        {
            char row;
            y = 0;
            cin >> row >> y;
            x = static_cast<int>(toupper(row));
            if (isalpha(row) && (x >= 'A' && x <= 'I') && y >= 1 && y <= 9)
            {
                x = x - 'A'; // converts x from a letter to corresponding index in matrix
                y = y - 1;   // converts y to corresponding index in matrix
                return (true);
            }
            else if (!(x >= 'A' && x <= 'I'))
            {
                cout << "Invalid row" << endl;
                clearInput();
                return false;
            }
            else
            {
                cout << "Invalid column" << endl;
                clearInput();
                return false;
            }
        }


  • How can I prevent segmentation faults in my program?

    - by worlds-apart89
    I have a C assignment. It is a lot longer than the code shown below, and we are given the function prototypes and instructions only. I have done my best at writing code, but I am stuck with segmentation faults. When I compile and run the program below on Linux, at "735 NaN" it will terminate, indicating a segfault occurred. Why? What am I doing wrong? Basically, the program does not let me access table->list_array[735]->value and table->list_array[735]->key. This is of course the first segfault. There might be more following index 735.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;
        struct list_node {
            char *key;
            int value;
            list_node_t *next;
        };

        typedef struct count_table count_table_t;
        struct count_table {
            int size;
            list_node_t **list_array;
        };

        count_table_t* table_allocate(int size) {
            count_table_t *ptr = malloc(sizeof(count_table_t));
            ptr->size = size;
            list_node_t *nodes[size];
            int k;
            for (k = 0; k < size; k++) {
                nodes[k] = NULL;
            }
            ptr->list_array = nodes;
            return ptr;
        }

        void table_addvalue(count_table_t *table) {
            int i;
            for (i = 0; i < table->size; i++) {
                table->list_array[i] = malloc(sizeof(list_node_t));
                table->list_array[i]->value = i;
                table->list_array[i]->key = "NaN";
                table->list_array[i]->next = NULL;
            }
        }

        int main() {
            count_table_t *table = table_allocate(1000);
            table_addvalue(table);
            int i;
            for (i = 0; i < table->size; i++)
                printf("%d %s\n", table->list_array[i]->value, table->list_array[i]->key);
            return 0;
        }
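
    The crash is consistent with table_allocate() returning a pointer into dead stack memory: list_array is pointed at the local array nodes, which ceases to exist when the function returns, so later accesses read whatever has reused that stack space. A hedged sketch of a corrected allocator, reusing the question's types (heap allocation in place of the stack array; casts added so it also compiles as C++):

        #include <stdlib.h>

        count_table_t* table_allocate(int size)
        {
            count_table_t *ptr = (count_table_t *) malloc(sizeof(count_table_t));
            ptr->size = size;
            // heap storage survives the return, unlike the local nodes[] array
            ptr->list_array = (list_node_t **) malloc(size * sizeof(list_node_t *));
            for (int k = 0; k < size; k++)
                ptr->list_array[k] = NULL;
            return ptr;
        }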


  • A better UPDATE method in LINQ to SQL

    - by Refracted Paladin
    The below is a typical, for me, Update method in L2S. I am still fairly new to a lot of this (L2S & business app development) but this just FEELs wrong. Like there MUST be a smarter way of doing this. Unfortunately, I am having trouble visualizing it and am hoping someone can provide an example or point me in the right direction. To take a stab in the dark, would I have a Person object that has all these fields as properties? Then what, though? Is that redundant since L2S already mapped my Person table to a class? Is this just 'how it goes', that you eventually end up passing 30 parameters (or MORE) to an UPDATE statement at some point? For reference, this is a business app using C#, WinForms, .NET 3.5, and L2S over SQL 2005 Standard. Here is a typical Update call for me. This is in a file (BLLConnect.cs) with other CRUD methods. Connect is the name of the DB that holds tblPerson. When a user clicks Save() this is what is eventually called, with all of these fields having, potentially, been updated:

        public static void UpdatePerson(int personID, string userID, string titleID, string firstName,
            string middleName, string lastName, string suffixID, string ssn, char gender,
            DateTime? birthDate, DateTime? deathDate, string driversLicenseNumber,
            string driversLicenseStateID, string primaryRaceID, string secondaryRaceID,
            bool hispanicOrigin, bool citizenFlag, bool veteranFlag, short? residencyCountyID,
            short? responsibilityCountyID, string emailAddress, string maritalStatusID)
        {
            using (var context = ConnectDataContext.Create())
            {
                var personToUpdate =
                    (from person in context.tblPersons
                     where person.PersonID == personID
                     select person).Single();

                personToUpdate.TitleID = titleID;
                personToUpdate.FirstName = firstName;
                personToUpdate.MiddleName = middleName;
                personToUpdate.LastName = lastName;
                personToUpdate.SuffixID = suffixID;
                personToUpdate.SSN = ssn;
                personToUpdate.Gender = gender;
                personToUpdate.BirthDate = birthDate;
                personToUpdate.DeathDate = deathDate;
                personToUpdate.DriversLicenseNumber = driversLicenseNumber;
                personToUpdate.DriversLicenseStateID = driversLicenseStateID;
                personToUpdate.PrimaryRaceID = primaryRaceID;
                personToUpdate.SecondaryRaceID = secondaryRaceID;
                personToUpdate.HispanicOriginFlag = hispanicOrigin;
                personToUpdate.CitizenFlag = citizenFlag;
                personToUpdate.VeteranFlag = veteranFlag;
                personToUpdate.ResidencyCountyID = residencyCountyID;
                personToUpdate.ResponsibilityCountyID = responsibilityCountyID;
                personToUpdate.EmailAddress = emailAddress;
                personToUpdate.MaritalStatusID = maritalStatusID;
                personToUpdate.UpdateUserID = userID;
                personToUpdate.UpdateDateTime = DateTime.Now;

                context.SubmitChanges();
            }
        }


  • gcc optimization? bug? and its practical implication to project

    - by kumar_m_kiran
    Hi all, my questions are divided into three parts.

    Question 1. Consider the code below:

        #include <iostream>
        using namespace std;

        int main( int argc, char *argv[])
        {
            const int v = 50;
            int i = 0x7FFFFFFF;

            cout << (i + v) << endl;

            if ( i + v < i )
            {
                cout << "Number is negative" << endl;
            }
            else
            {
                cout << "Number is positive" << endl;
            }

            return 0;
        }

    No specific compiler optimisation options are used, nor is any -O flag used; the basic compilation command g++ -o test main.cpp is used to form the executable. This seemingly very simple code has odd behaviour on SUSE 64-bit OS, gcc version 4.1.2. The expected output is "Number is negative"; instead, only on SUSE 64-bit OS, the output is "Number is positive". After some amount of analysis and doing a 'disass' of the code, I find that the compiler optimises as follows: since i is the same on both sides of the comparison and cannot be changed in the same expression, remove 'i' from the equation. Now the comparison reduces to if ( v < 0 ), where v is a positive constant, so during compilation itself the else-part cout function address is added to the register. No cmp/jmp instructions can be found. I see this behaviour only in gcc 4.1.2 SUSE 10. When tried on AIX 5.1/5.3 and HP IA64, the result is as expected. Is the above optimisation valid? Or is using the overflow mechanism for int not a valid use case?

    Question 2. Now, when I change the conditional statement from if (i + v < i) to if ( (i + v) < i ), the behaviour is still the same. With this, at least, I would personally disagree: since additional braces are provided, I expect the compiler to create a temporary built-in-type variable and then compare, thus nullifying the optimisation.

    Question 3. Suppose I have a huge code base and I migrate my compiler version; such a bug/optimisation can cause havoc in my system behaviour. Of course, from a business perspective, it is very ineffective to test all lines of code again just because of a compiler upgrade. I think for all practical purposes, these kinds of errors are very difficult to catch (during upgrade) and will invariably be leaked to the production site. Can anyone suggest any possible way to ensure that these kinds of bugs/optimizations do not have any impact on my existing system/code base?

    PS: When the const for v is removed from the code, the optimization is not done by the compiler. I believe it is perfectly fine to use the overflow mechanism to find if the variable is above MAX - 50 (in my case).
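
    A note on Question 1: signed integer overflow is undefined behaviour in C++, so gcc may legally assume i + v never wraps and fold the comparison away; the extra parentheses in Question 2 change nothing, because the undefined overflow still happens before the comparison. A sketch of a well-defined test against the representable range instead (the helper name is mine):

        #include <climits>
        #include <iostream>

        // true if i + v would exceed INT_MAX; v assumed non-negative.
        // The subtraction cannot overflow, so no UB is involved in the test.
        bool additionWouldOverflow(int i, int v)
        {
            return i > INT_MAX - v;
        }

        int main()
        {
            const int v = 50;
            int i = 0x7FFFFFFF;
            if (additionWouldOverflow(i, v))
                std::cout << "Number would overflow" << std::endl;
            else
                std::cout << "Number is positive" << std::endl;
            return 0;
        }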


  • boost.asio error on read from socket.

    - by niXman
    The following is the code of the client:

        typedef boost::array<char, 10> header_packet;
        header_packet header;
        boost::system::error_code error;
        ...
        /** send header */
        boost::asio::write(
            _socket,
            boost::asio::buffer(header, header.size()),
            boost::asio::transfer_all(),
            error
        );
        /** send body */
        boost::asio::write(
            _socket,
            boost::asio::buffer(buffer, buffer.length()),
            boost::asio::transfer_all(),
            error
        );

    and of the server:

        struct header {
            boost::uint32_t header_length;
            boost::uint32_t id;
            boost::uint32_t body_length;
        };

        static header unpack_header(const header_packet& data) {
            header hdr;
            sscanf(data.data(), "%02d%04d%04d", &hdr.header_length, &hdr.id, &hdr.body_length);
            return hdr;
        }

        void connection::start() {
            boost::asio::async_read(
                _socket,
                boost::asio::buffer(_header, _header.size()),
                boost::bind(
                    &connection::read_header_handler,
                    shared_from_this(),
                    boost::asio::placeholders::error
                )
            );
        }

        /***************************************************************************/

        void connection::read_header_handler(const boost::system::error_code& e) {
            if ( !e ) {
                std::cout << "readed header: " << _header.c_array() << std::endl;
                std::cout << constants::unpack_header(_header);
                boost::asio::async_read(
                    _socket,
                    boost::asio::buffer(_body, constants::unpack_header(_header).body_length),
                    boost::bind(
                        &connection::read_body_handler,
                        shared_from_this(),
                        boost::asio::placeholders::error
                    )
                );
            } else {
                /** report error */
                std::cout << "read header finished with error: " << e.message() << std::endl;
            }
        }

        /***************************************************************************/

        void connection::read_body_handler(const boost::system::error_code& e) {
            if ( !e ) {
                std::cout << "readed body: " << _body.c_array() << std::endl;
                start();
            } else {
                /** report error */
                std::cout << "read body finished with error: " << e.message() << std::endl;
            }
        }

    On the server side the method read_header_handler() is called, but the method read_body_handler() is never called, though the client has written the data to the socket. The header is read and decoded successfully. What's the error?


  • Java: Preventing array going out of bounds.

    - by Troy
    I'm working on a game of checkers; if you want to read more about it, you can view it here: http://minnie.tuhs.org/I2P/Assessment/assig2.html When I am doing my test to see if the player is able to get to a certain square on the grid (i.e. +1 +1, +1 -1, etc.) from its current location, I get a java.lang.ArrayIndexOutOfBoundsException error. This is the code I am using to make the move:

        public static String makeMove(String move, int playerNumber) {
            // variables to contain the starting and destination coordinates,
            // subtracting 1 to match array size
            int colStart = move.charAt(1) - FIRSTCOLREF - 1;
            int rowStart = move.charAt(0) - FIRSTROWREF - 1;
            int colEnd   = move.charAt(4) - FIRSTCOLREF - 1;
            int rowEnd   = move.charAt(3) - FIRSTROWREF - 1;

            // variable to contain which player is which
            char player, enemy;
            if (playerNumber == 1) {
                player = WHITEPIECE;
                enemy  = BLACKPIECE;
            } else {
                player = BLACKPIECE;
                enemy  = WHITEPIECE;
            }

            // check that the starting square contains a player piece
            if (grid[colStart][rowStart] == player) {
                // check that the player is making a diagonal move
                if (grid[colEnd][rowEnd] == grid[(colStart++)][(rowEnd++)] &&
                    grid[colEnd][rowEnd] == grid[(colStart--)][(rowEnd++)] &&
                    grid[colEnd][rowEnd] == grid[(colStart++)][(rowEnd--)] &&
                    grid[colEnd][rowEnd] == grid[(colStart--)][(rowEnd--)]) {
                    // check that the destination square is free
                    if (grid[colEnd][rowEnd] == BLANK) {
                        grid[colStart][rowStart] = BLANK;
                        grid[colEnd][rowEnd] = player;
                    }
                }
                // check if player is jumping over a piece
                else if (grid[colEnd][rowEnd] == grid[(colStart+2)][(rowEnd+2)] &&
                         grid[colEnd][rowEnd] == grid[(colStart-2)][(rowEnd+2)] &&
                         grid[colEnd][rowEnd] == grid[(colStart+2)][(rowEnd-2)] &&
                         grid[colEnd][rowEnd] == grid[(colStart-2)][(rowEnd-2)]) {
                    // check that the piece in between contains an enemy
                    if ((grid[(colStart++)][(rowEnd++)] == enemy) &&
                        (grid[(colStart--)][(rowEnd++)] == enemy) &&
                        (grid[(colStart++)][(rowEnd--)] == enemy) &&
                        (grid[(colStart--)][(rowEnd--)] == enemy)) {
                        // check that the destination is free
                        if (grid[colEnd][rowEnd] == BLANK) {
                            grid[colStart][rowStart] = BLANK;
                            grid[colEnd][rowEnd] = player;
                        }
                    }
                }
            }

    I'm not sure how I can prevent the error from happening; what do you recommend?


  • bad file descriptor with close() socket (c++)

    - by user321246
    Hi everybody! I'm running out of file descriptors when my program can't connect to another host. The close() system call doesn't work; the number of open sockets increases. I can see it with cat /proc/sys/fs/file-nr. Output from the console:

        connect: No route to host
        close: Bad file descriptor
        connect: No route to host
        close: Bad file descriptor
        ..

    Code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netdb.h>
        #include <string.h>
        #include <iostream>

        using namespace std;

        #define PORT 1238
        #define MESSAGE "Yow!!! Are we having fun yet?!?"
        #define SERVERHOST "192.168.9.101"

        void write_to_server (int filedes)
        {
            int nbytes;
            nbytes = write (filedes, MESSAGE, strlen (MESSAGE) + 1);
            if (nbytes < 0)
            {
                perror ("write");
            }
        }

        void init_sockaddr (struct sockaddr_in *name, const char *hostname, uint16_t port)
        {
            struct hostent *hostinfo;
            name->sin_family = AF_INET;
            name->sin_port = htons (port);
            hostinfo = gethostbyname (hostname);
            if (hostinfo == NULL)
            {
                fprintf (stderr, "Unknown host %s.\n", hostname);
            }
            name->sin_addr = *(struct in_addr *) hostinfo->h_addr;
        }

        int main()
        {
            for (;;)
            {
                sleep(1);
                int sock;
                struct sockaddr_in servername;

                /* Create the socket. */
                sock = socket (PF_INET, SOCK_STREAM, 0);
                if (sock < 0)
                {
                    perror ("socket (client)");
                }

                /* Connect to the server. */
                init_sockaddr (&servername, SERVERHOST, PORT);
                if (0 > connect (sock, (struct sockaddr *) &servername, sizeof (servername)))
                {
                    perror ("connect");
                    sock = -1;
                }

                /* Send data to the server. */
                if (sock > -1)
                    write_to_server (sock);

                if (close (sock) != 0)
                    perror("close");
            }
            return 0;
        }
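
    The leak is visible in the loop itself: on a failed connect, sock is overwritten with -1 before close() runs, so close(-1) returns EBADF while the real descriptor stays open. A sketch of the corrected flow (the helper name and parameters are mine):

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/socket.h>

        void try_once(const struct sockaddr *addr, socklen_t len)
        {
            int sock = socket(PF_INET, SOCK_STREAM, 0);
            if (sock < 0) { perror("socket"); return; }

            if (connect(sock, addr, len) < 0) {
                perror("connect");
                // do NOT overwrite sock here; we still need it for close()
            } else {
                // write_to_server(sock);  // only on success
            }

            if (close(sock) != 0)  // sock still holds the real descriptor
                perror("close");
        }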


  • Running daemon through rsh

    - by Max
    I want to run a program as a daemon on a remote machine in Unix. I have an rsh connection and I want the program to keep running after disconnection. Suppose I have two programs: util.cpp and forker.cpp. util.cpp is some utility; for our purpose let it be just an infinite loop:

        // util.cpp
        int main()
        {
            while (true) {};
            return 0;
        }

    forker.cpp takes some program and runs it in a separate process through fork() and execve():

        // forker.cpp
        #include <stdio.h>
        #include <errno.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char** argv)
        {
            if (argc != 2) {
                printf("./a.out <program_to_fork>\n");
                exit(1);
            }

            pid_t pid;
            if ((pid = fork()) < 0) {
                perror("fork error.");
                exit(1);
            } else if (!pid) {
                // Child.
                if (execve(argv[1], &(argv[1]), NULL) == -1) {
                    perror("execve error.");
                    exit(1);
                }
            } else {
                // Parent: do nothing.
            }

            return 0;
        }

    If I run:

        ./forker util

    forker finishes very quickly, bash 'is not paused', and util is running as a daemon. But if I run:

        scp forker remote_server://some_path/
        scp program remote_server://some_path/
        rsh remote_server 'cd /some_path; ./forker program'

    then it is all the same (i.e. on the remote server forker finishes quickly, util is running), but my bash on the local machine is paused. It is waiting for util to stop (I checked it: if util.cpp returns, then it is OK), but I don't understand why. There are two questions: 1) Why is it paused when I run it through rsh? I am sure that I chose some stupid way to run a daemon. So 2) How do I run some program as a daemon in C/C++ on unix-like platforms? Thanks!
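
    On question 2, the conventional recipe is to detach with setsid() and redirect the inherited descriptors; rsh typically keeps the session open until the remote side's stdout/stderr close, and the forked child here inherits both. A hedged sketch of the usual daemonize steps (not a drop-in rewrite of forker.cpp):

        #include <fcntl.h>
        #include <stdlib.h>
        #include <unistd.h>

        void daemonize()
        {
            if (fork() > 0) exit(0);   // parent returns to the shell
            setsid();                  // new session, no controlling tty
            if (fork() > 0) exit(0);   // second fork: cannot reacquire a tty

            int devnull = open("/dev/null", O_RDWR);
            dup2(devnull, STDIN_FILENO);
            dup2(devnull, STDOUT_FILENO); // rsh stops waiting once these close
            dup2(devnull, STDERR_FILENO);
            if (devnull > STDERR_FILENO) close(devnull);
        }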


  • Making Global Struct in C++ Program

    - by mosg
    Hello world! I am trying to make a global structure which will be seen from any part of the source code. I need it for my big Qt project, where some global variables are needed. Here it is: 3 files (global.h, dialog.h & main.cpp). For compilation I use Visual Studio (Visual C++).

    global.h

        #ifndef GLOBAL_H_
        #define GLOBAL_H_

        typedef struct TNumber {
            int g_nNumber;
        } TNum;

        TNum Num;

        #endif

    dialog.h

        #ifndef DIALOG_H_
        #define DIALOG_H_

        #include <iostream>
        #include "global.h"

        using namespace std;

        class ClassB {
        public:
            ClassB() {};
            void showNumber() {
                Num.g_nNumber = 82;
                cout << "[ClassB][Change Number]: " << Num.g_nNumber << endl;
            }
        };

        #endif

    and main.cpp

        #include <iostream>
        #include "global.h"
        #include "dialog.h"

        using namespace std;

        class ClassA {
        public:
            ClassA() { cout << "Hello from class A!\n"; };
            void showNumber() {
                cout << "[ClassA]: " << Num.g_nNumber << endl;
            }
        };

        int main(int argc, char **argv)
        {
            ClassA ca;
            ClassB cb;
            ca.showNumber();
            cb.showNumber();
            ca.showNumber();
            cout << "Exit.\n";
            return 0;
        }

    When I'm trying to build this little application, compilation works fine, but the linker gives me back an error:

        1>dialog.obj : error LNK2005: "struct TNumber Num" (?Num@@3UTNumber@@A) already defined in main.obj

    Does any solution exist? Thanks.
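
    The linker error is the classic multiple-definition problem: global.h defines Num, so every .cpp that includes it gets its own copy. The usual fix is an extern declaration in the header plus exactly one definition in a source file. A sketch reusing the question's names (global.cpp is a new file introduced for the definition):

        // global.h
        #ifndef GLOBAL_H_
        #define GLOBAL_H_

        typedef struct TNumber {
            int g_nNumber;
        } TNum;

        extern TNum Num;   // declaration only; no storage allocated here

        #endif

        // global.cpp (new file, added to the project)
        #include "global.h"
        TNum Num;          // the single definition the linker will find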


  • Running out of memory... How?

    - by maxdj
    I'm attempting to write a solver for a particular puzzle. It tries to find a solution by trying every possible move, one at a time, until it finds a solution. The first version tried to solve it depth-first by continually trying moves until it failed, then backtracking, but this turned out to be too slow. I have rewritten it to be breadth-first using a queue structure, but I'm having problems with memory management. Here are the relevant parts:

        int main(int argc, char *argv[]) {
            ...
            int solved = 0;
            do {
                solved = solver(queue);
            } while (!solved && !pblListIsEmpty(queue));
            ...
        }

        int solver(PblList *queue) {
            state_t *state = (state_t *) pblListPoll(queue);
            if (is_solution(state->pucks)) {
                print_solution(state);
                return 1;
            }

            state_t *state_cp;
            puck new_location;
            for (int p = 0; p < puck_count; p++) {
                for (dir i = NORTH; i <= WEST; i++) {
                    if (!rules(state->pucks, p, i)) continue;
                    new_location = in_dir(state->pucks, p, i);
                    if (new_location.x != -1) {
                        state_cp = (state_t *) malloc(sizeof(state_t));
                        state_cp->move.from = state->pucks[p];
                        state_cp->move.direction = i;
                        state_cp->prev = state;
                        state_cp->pucks = (puck *) malloc (puck_count * sizeof(puck));
                        memcpy(state_cp->pucks, state->pucks, puck_count * sizeof(puck)); /*CRASH*/
                        state_cp->pucks[p] = new_location;
                        pblListPush(queue, state_cp);
                    }
                }
            }
            return 0;
        }

    When I run it I get the error:

        ice(90175) malloc: *** mmap(size=2097152) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        Bus error

    The error happens around iteration 93,000. From what I can tell, the error message is from malloc failing, and the bus error is from the memcpy after it. I have a hard time believing that I'm running out of memory, since each game state is only ~400 bytes. Yet that does seem to be what's happening, seeing as the activity monitor reports that it is using 3.99 GB before it crashes. I'm using http://www.mission-base.com/peter/source/ for the queue structure (it's a linked list). Clearly I'm doing something dumb. Any suggestions?


  • How to set background in OpenGL captured image from OpenCV

    - by user325487
    Hey all, I'm relatively new to ARToolKitPlus and OpenGL, and I'm having a tough time getting the image I capture through OpenCV to be set as the background image in OpenGL... I also cannot convert the image I take through the camera using OpenCV to be scaled to 320x280 from 640x480. I also have to save my image and load it for things to work. Here's my code:

        ////////////
        int findMarker()
        {
            IplImage* image = cvQueryFrame( capture );

            if ( !capture ) {
                fprintf( stderr, "ERROR: capture is NULL \n" );
                getchar();
                return -1;
            }
            if ( !image ) {
                fprintf( stderr, "ERROR: frame is null...\n" );
                getchar();
            }
            //cvShowImage( "Capture", frame );
            //image = cvCloneImage( frame );

            try {
                if (!cvSaveImage("immagineTmp.jpg", image))
                    printf("Could not save\n");
            }
            catch (void*) {}

            image = cvLoadImage("immagineTmp.jpg", 1);
            cvShowImage( "Image", image );

            glLoadIdentity();

            //////////////
            glDisable(GL_DEPTH_TEST);
            glOrtho(0, 640, 0, 480, -1, 1);

            glGenTextures(1, &bgid);
            glBindTexture(GL_TEXTURE_2D, bgid);

            // Create Linear Filtered Texture
            glBindTexture(GL_TEXTURE_2D, bgid);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, 3, image->width, image->height,
                         0, GL_RGB, GL_UNSIGNED_BYTE, image->imageData);

            glBindTexture(GL_TEXTURE_2D, bgid);
            glBegin(GL_QUADS);
                glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.2f, -1.0f, -2.0f);
                glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.2f, -1.0f, -2.0f);
                glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.2f,  1.0f, -2.0f);
                glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.2f,  1.0f, -2.0f);
            glEnd();

            glEnable(GL_DEPTH_TEST);
            glLoadIdentity();

            ////////////
            // do the OpenGL camera setup
            glMatrixMode(GL_PROJECTION);
            glLoadMatrixf(tracker->getProjectionMatrix());

            int markerId = tracker->calc((unsigned char *)(image->imageData));
            float conf = tracker->getConfidence();

            // use the result of calc() to setup the OpenGL transformation
            glMatrixMode(GL_MODELVIEW);
            glLoadMatrixf(tracker->getModelViewMatrix());

            if (markerId != -1) {
                printf("\n\nFound marker %d (confidence %d%%)\n\nPose-Matrix:\n ",
                       markerId, (int(conf * 100.0f)));
                for (int i = 0; i < 16; i++)
                    printf("%.2f %s", tracker->getModelViewMatrix()[i], (i % 4 == 3) ? "\n " : "");
            }

            cvReleaseImage(&image);
            return 0;
        }


  • Problem separating C++ code in header, inline functions and code.

    - by YuppieNetworking
    Hello all, I have the simplest code that I want to separate into three files:

    Header file: class and struct declarations. No implementations at all.
    Inline functions file: implementation of inline methods in the header.
    Code file: normal C++ code for more complicated implementations.

    When I was about to implement an operator[] method, I couldn't manage to compile it. Here is a minimal example that shows the same problem:

    Header (myclass.h):

        #ifndef _MYCLASS_H_
        #define _MYCLASS_H_

        class MyClass
        {
        public:
            MyClass(const int n);
            virtual ~MyClass();

            double& operator[](const int i);
            double operator[](const int i) const;

            void someBigMethod();

        private:
            double* arr;
        };

        #endif /* _MYCLASS_H_ */

    Inline functions (myclass-inl.h):

        #include "myclass.h"

        inline double& MyClass::operator[](const int i)
        {
            return arr[i];
        }

        inline double MyClass::operator[](const int i) const
        {
            return arr[i];
        }

    Code (myclass.cpp):

        #include "myclass.h"
        #include "myclass-inl.h"
        #include <iostream>

        inline MyClass::MyClass(const int n)
        {
            arr = new double[n];
        }

        inline MyClass::~MyClass()
        {
            delete[] arr;
        }

        void MyClass::someBigMethod()
        {
            std::cout << "Hello big method that is not inlined" << std::endl;
        }

    And finally, a main to test it all:

        #include "myclass.h"
        #include <iostream>

        using namespace std;

        int main(int argc, char *argv[])
        {
            MyClass m(123);
            double x = m[1];
            m[1] = 1234;
            cout << "m[1]=" << m[1] << endl;
            x = x + 1;
            return 0;
        }

        void nothing()
        {
            cout << "hello world" << endl;
        }

    When I compile it, it says:

        main.cpp:(.text+0x1b): undefined reference to 'MyClass::MyClass(int)'
        main.cpp:(.text+0x2f): undefined reference to 'MyClass::operator[](int)'
        main.cpp:(.text+0x49): undefined reference to 'MyClass::operator[](int)'
        main.cpp:(.text+0x65): undefined reference to 'MyClass::operator[](int)'

    However, when I move the main method to the MyClass.cpp file, it works. Could you guys help me spot the problem? Thank you.
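
    The undefined references follow from how inline works: an inline function must be defined in every translation unit that uses it, but here the inline bodies (including the constructor and destructor, marked inline inside myclass.cpp) are visible only to myclass.cpp, never to main.cpp. A sketch of the usual arrangement, reusing the question's files:

        // myclass.h (tail): pull the inline definitions into the header itself,
        // so every includer of myclass.h sees them.
        // ... class definition as before ...
        #include "myclass-inl.h"
        // #endif /* _MYCLASS_H_ */

        // myclass.cpp: out-of-line definitions drop the 'inline' keyword,
        // giving the constructor and destructor ordinary external linkage.
        #include "myclass.h"
        #include <iostream>

        MyClass::MyClass(const int n) { arr = new double[n]; }
        MyClass::~MyClass() { delete[] arr; }

        void MyClass::someBigMethod()
        {
            std::cout << "Hello big method that is not inlined" << std::endl;
        }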


  • Function returning MYSQL_ROW

    - by Gabe
    I'm working on a system using lots of MySQL queries and I'm running into some memory problems I'm pretty sure have to do with me not handling pointers right... Basically, I've got something like this:

        MYSQL_ROW function1()
        {
            string query = "SELECT * FROM table limit 1;";
            MYSQL_ROW return_row;

            mysql_init(&connection); // "connection" is a global variable
            if (mysql_real_connect(&connection, HOST, USER, PASS, DB, 0, NULL, 0)) {
                if (mysql_query(&connection, query.c_str()))
                    cout << "Error: " << mysql_error(&connection);
                else {
                    resp = mysql_store_result(&connection); // "resp" is also global
                    if (resp)
                        return_row = mysql_fetch_row(resp);
                    mysql_free_result(resp);
                }
                mysql_close(&connection);
            } else {
                cout << "connection failed\n";
                if (mysql_errno(&connection))
                    cout << "Error: " << mysql_errno(&connection) << " " << mysql_error(&connection);
            }
            return return_row;
        }

    And function2():

        MYSQL_ROW function2(MYSQL_ROW row)
        {
            string query = "select * from table2 where code = '" + string(row[2]) + "'";
            MYSQL_ROW return_row;

            mysql_init(&connection);
            if (mysql_real_connect(&connection, HOST, USER, PASS, DB, 0, NULL, 0)) {
                if (mysql_query(&connection, query.c_str()))
                    cout << "Error: " << mysql_error(&connection);
                else {
                    // My "debugging" shows me at this point `row[2]` is already fubar
                    resp = mysql_store_result(&connection);
                    if (resp)
                        return_row = mysql_fetch_row(resp);
                    mysql_free_result(resp);
                }
                mysql_close(&connection);
            } else {
                cout << "connection failed\n";
                if (mysql_errno(&connection))
                    cout << "Error : " << mysql_errno(&connection) << " " << mysql_error(&connection);
            }
            return return_row;
        }

    And main() is basically an infinite loop:

        int main( int argc, char* args[] )
        {
            MYSQL_ROW row = NULL;
            while (1) {
                row = function1();
                if (row != NULL)
                    function2(row);
            }
        }

    (variable and function names have been generalized to protect the innocent) But after the 3rd or 4th call to function2, row, which function2 only uses for reading, starts losing its value, coming to a segfault error... Anyone got any ideas why? I'm not sure the amount of global variables in this code is any good, but I didn't design it and have only got until tomorrow to fix and finish it, so workarounds are welcome! Thanks!
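
    A probable cause, consistent with the "fubar" observation: mysql_fetch_row() returns pointers into the result set's storage, and both mysql_free_result() and mysql_close() run before function1() returns, so the returned MYSQL_ROW dangles. A sketch of copying the row out while it is still valid (the function shape is mine, not from the original code):

        #include <mysql/mysql.h>
        #include <string>
        #include <vector>

        // Returns the first row of `query` as owned strings, or an empty
        // vector on error; nothing points into freed MySQL memory afterwards.
        std::vector<std::string> fetch_first_row(MYSQL* conn, const char* query)
        {
            std::vector<std::string> out;
            if (mysql_query(conn, query) != 0) return out;
            MYSQL_RES* res = mysql_store_result(conn);
            if (!res) return out;
            if (MYSQL_ROW row = mysql_fetch_row(res)) {
                unsigned int n = mysql_num_fields(res);
                for (unsigned int i = 0; i < n; ++i)
                    out.push_back(row[i] ? row[i] : ""); // copy while still valid
            }
            mysql_free_result(res); // safe: we no longer point into it
            return out;
        }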


  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated to elements, in order to take this elements sorted. This scores are doubles, even if many users actually sort by integers (for instance unix times). When the database is saved we need to write this doubles ok disk. This is what is used currently: snprintf((char*)buf+1,sizeof(buf)-1,"%.17g",val); Additionally infinity and not-a-number conditions are checked in order to also represent this in the final database file. Unfortunately converting a double into the string representation is pretty slow. While we have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check if a double could be casted into an integer without lost of data, and then using the function to turn the integer into a string if this is true. For this to provide a good speedup of course the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like that: double x = ... some value ... if (x == (double)((long long)x)) use_the_fast_integer_function((long long)x); else use_the_slow_snprintf(x); In my reasoning the double casting above converts the double into a long, and then back into an integer. If the range fits, and there is no decimal part, the number will survive the conversion and will be exactly the same as the initial number. As I wanted to make sure this will not break things in some system, I joined #c on freenode and I got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work in all the Posix systems that currently Redis targets? That is, archs where Linux / Mac OS X / *BSD / Solaris are running nowaday? What I can add in order to make the code saner is an explicit check for the range of the double before trying the cast at all. Thank you for any help.

