Search Results

Search found 6123 results on 245 pages for 'unsigned char'.


  • Effective Java hashCode implementation

    - by Scobal
    I was wondering if someone could explain in detail what (int)(l ^ (l >>> 32)); does in the following hashcode implementation (generated by eclipse, but the same as Effective Java): private int i; private char c; private boolean b; private short s; private long l; private double d; private float f; @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + i; result = prime * result + s; result = prime * result + (b ? 1231 : 1237); result = prime * result + c; long t = Double.doubleToLongBits(d); result = prime * result + (int) (t ^ (t >>> 32)); result = prime * result + Float.floatToIntBits(f); result = prime * result + (int) (l ^ (l >>> 32)); return result; } Thanks!
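
    What that expression does: it folds the 64-bit long into 32 bits by XOR-ing the high half into the low half, so both halves of the value influence the hash (the Double.doubleToLongBits result gets the same treatment). A small C++ sketch of the same arithmetic, shown only to make it concrete:

        #include <cstdint>
        #include <iostream>

        // Same arithmetic as Java's (int)(l ^ (l >>> 32)): XOR the high 32 bits
        // into the low 32 bits, then truncate, so the whole 64-bit value contributes.
        static std::uint32_t fold64(std::uint64_t l) {
            return static_cast<std::uint32_t>(l ^ (l >> 32));
        }

        int main() {
            std::cout << std::hex << fold64(0x1234567890ABCDEFULL) << '\n';  // prints 829f9b97
            return 0;
        }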

    Read the article

  • GDB not breaking on breakpoints set on object creation in C++

    - by Drew
    Hi all, I've got a c++ app, with the following main.cpp: 1: #include <stdio.h> 2: #include "HeatMap.h" 3: #include <iostream> 4: 5: int main (int argc, char * const argv[]) 6: { 7: HeatMap heatMap(); 8: printf("message"); 9: return 0; 10: } Everything compiles without errors, I'm using gdb (GNU gdb 6.3.50-20050815 (Apple version gdb-1346) (Fri Sep 18 20:40:51 UTC 2009)), and compiled the app with gcc (gcc version 4.2.1 (Apple Inc. build 5646) (dot 1)) with the commands "-c -g". When I add breakpoints to lines 7, 8, and 9, and run gdb, I get the following... (gdb) break main.cpp:7 Breakpoint 1 at 0x10000177f: file src/main.cpp, line 8. (gdb) break main.cpp:8 Note: breakpoint 1 also set at pc 0x10000177f. Breakpoint 2 at 0x10000177f: file src/main.cpp, line 8. (gdb) break main.cpp:9 Breakpoint 3 at 0x100001790: file src/main.cpp, line 9. (gdb) run Starting program: /DevProjects/DataManager/build/DataManager Reading symbols for shared libraries ++. done Breakpoint 1, main (argc=1, argv=0x7fff5fbff960) at src/main.cpp:8 8 printf("message"); (gdb) So, why of why, does anyone know, why my app does not break on the breakpoints for the object creation, but does break on the printf line? Drew J. Sonne.
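
    A likely explanation, judging only from the snippet: line 7, HeatMap heatMap();, is parsed as the declaration of a function named heatMap returning a HeatMap (the so-called most vexing parse), so no object is constructed, no code is generated for that line, and gdb slides the breakpoint to the next executable statement. A minimal sketch of the difference:

        #include <cstdio>

        struct HeatMap {
            HeatMap() { std::puts("HeatMap constructed"); }
        };

        int main() {
            HeatMap heatMap();   // declares a function; nothing to break on here
            HeatMap heatMap2;    // constructs an object; a breakpoint on this line is hit
            return 0;
        }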

    Read the article

  • Problems with OpenGL on Eclipse

    - by lego69
    I'm working on Windows XP I have portable version of Eclipse Galileo, but I didn't find there glut so I decided to add it using this link I made all steps and and now I'm trying to compile this code #include "GL/glut.h" #include "GL/gl.h" #include "GL/glu.h" /////////////////////////////////////////////////////////// // Called to draw scene void RenderScene(void) { // Clear the window with current clearing color glClear(GL_COLOR_BUFFER_BIT); // Flush drawing commands glFlush(); } /////////////////////////////////////////////////////////// // Setup the rendering state void SetupRC(void) { glClearColor(0.0f, 0.0f, 1.0f, 1.0f); } /////////////////////////////////////////////////////////// // Main program entry point void main(int argc, char* argv[]) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); glutInitWindowSize(800,600); glutCreateWindow("Simple"); glutDisplayFunc(RenderScene); SetupRC(); glutMainLoop(); } and I have this errors Simple.o: In function `RenderScene': C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:16: undefined reference to `_imp__glClear' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:20: undefined reference to `_imp__glFlush' Simple.o: In function `SetupRC': C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:27: undefined reference to `_imp__glClearColor' Simple.o: In function `main': C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:34: undefined reference to `glutInit' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:35: undefined reference to `glutInitDisplayMode' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:36: undefined reference to `glutInitWindowSize' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:37: undefined reference to `glutCreateWindow' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:38: undefined reference to `glutDisplayFunc' C:/Documents and Settings/Administrator/Desktop/workspace/open/Debug/../Simple.c:42: undefined reference to `glutMainLoop' collect2: ld returned 1 exit status please can somebody help me, thanks in advance

    Read the article

  • When is a > a true?

    - by Cricri
    Right, I think I really am living a dream. I have the following piece of code which I compile and run on an AIX machine: AIX 3 5 PowerPC_POWER5 processor type IBM XL C/C++ for AIX, V10.1 Version: 10.01.0000.0003 #include <stdio.h> #include <math.h> #define RADIAN(x) ((x) * acos(0.0) / 90.0) double nearest_distance(double radius,double lon1, double lat1, double lon2, double lat2){ double rlat1=RADIAN(lat1); double rlat2=RADIAN(lat2); double rlon1=lon1; double rlon2=lon2; double a=0,b=0,c=0; a = sin(rlat1)*sin(rlat2)+ cos(rlat1)*cos(rlat2)*cos(rlon2-rlon1); printf("%lf\n",a); if (a > 1) { printf("aaaaaaaaaaaaaaaa\n"); } b = acos(a); c = radius * b; return radius*(acos(sin(rlat1)*sin(rlat2)+ cos(rlat1)*cos(rlat2)*cos(rlon2-rlon1))); } int main(int argc, char** argv) { nearest_distance(6367.47,10,64,10,64); return 0; } Now, the value of 'a' after the calculation is reported as being '1'. And, on this AIX machine, it looks like 1 > 1 is true, as my 'if' is entered!!! And my acos of what I think is '1' returns NaNQ since 1 is bigger than 1. May I ask how that is even possible? I do not know what to think anymore! The code works just fine on other architectures where 'a' really takes the value of what I think is 1 and acos(a) is 0.
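
    An educated guess at what is happening (it depends on how XL C/C++ evaluates the expression): the sum is computed with extra precision, lands a hair above 1.0, prints as 1.000000 under %lf, and from then on a > 1 and acos(a) == NaNQ are perfectly consistent. A common defensive fix is to clamp the cosine into [-1, 1] before calling acos:

        #include <math.h>
        #include <stdio.h>

        /* Clamp before acos so tiny rounding excursions outside [-1, 1]
           cannot produce NaN. */
        double safe_acos(double a)
        {
            if (a > 1.0)  a = 1.0;
            if (a < -1.0) a = -1.0;
            return acos(a);
        }

        int main(void)
        {
            double a = 1.0 + 1e-15;                 /* prints as 1.000000, yet a > 1 */
            printf("%lf %d %lf\n", a, a > 1.0, safe_acos(a));
            return 0;
        }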

    Read the article

  • MFC Combo-Box Control is not showing the full list of items when I click the drop-down menu...

    - by shan23
    I'm coding an app in MSVS 2008, which has a ComboBox control which I initialize thru the code as below: static char* OptionString[4] = {"Opt1", "Opt2", "Opt3", "Opt4"}; BOOL CMyAppDlg::OnInitDialog() { CDialog::OnInitDialog(); // Set the icon for this dialog. The framework does this automatically // when the application's main window is not a dialog SetIcon(m_hIcon, TRUE); // Set big icon SetIcon(m_hIcon, FALSE); // Set small icon // TODO: Add extra initialization here m_Option.AddString(OptionString[0]); m_Option.AddString(OptionString[1]); m_Option.AddString(OptionString[2]); m_Option.AddString(OptionString[3]); m_Option.SetCurSel(0); return TRUE; // return TRUE unless you set the focus to a control } Now, when I build the app and click the down-arrow, the drop-down box shows the first option ONLY(since I've selected that thru my code). But, if i press down-arrow key on keyboard, it cycles thru the options in the order I've inserted, but never does it show more than 1 option in the box. So, In case an user wants to select option3, he has to cycle through options 1 and 2 !! Though once I select any option using the keyboard, the appropriate event handlers are fired, I'm miffed by this behaviour , as is understandable. I'm listing the properties of the combo-box control as well - only the properties that are true(rest are set to false): Type - Dropdown Vertical Scrollbar Visible Tabstop This has bugged me for weeks now. Can anyone pls enlighten me ?

    Read the article

  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it? ALTER PROCEDURE [dbo].[copyTable2Table] @sdb varchar(30), @stable varchar(30), @tdb varchar(30), @ttable varchar(30), @raiseerror bit = 1, @debug bit = 0 as begin set nocount on declare @source varchar(65) declare @target varchar(65) declare @dropstmt varchar(100) declare @insstmt varchar(100) declare @ErrMsg nvarchar(4000) declare @ErrSeverity int set @source = '[' + @sdb + '].[dbo].[' + @stable + ']' set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']' set @dropStmt = 'drop table ' + @target set @insStmt = 'select * into ' + @target + ' from ' + @source set @errMsg = '' set @errSeverity = 0 if @debug = 1 print('Drop:' + @dropStmt + ' Insert:' + @insStmt) -- drop the target table, copy the source table to the target begin try begin transaction exec(@dropStmt) exec(@insStmt) commit end try begin catch if @@trancount > 0 rollback select @errMsg = error_message(), @errSeverity = error_severity() end catch -- update the log table insert into HHG_system.dbo.copyaudit (copytime, copyuser, source, target, errmsg, errseverity) values( getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity) if @debug = 1 print ( 'Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity) ) -- handle errors, return value if @errMsg <> '' begin if @raiseError = 1 raiserror(@errMsg, @errSeverity, 1) return 1 end return 0 END Thanks!

    Read the article

  • operator<< cannot output std::endl -- Fix?

    - by dehmann
    The following code gives an error when it's supposed to output just std::endl: #include <iostream> #include <sstream> struct MyStream { std::ostream* out_; MyStream(std::ostream* out) : out_(out) {} std::ostream& operator<<(const std::string& s) { (*out_) << s; return *out_; } }; template<class OutputStream> struct Foo { OutputStream* out_; Foo(OutputStream* out) : out_(out) {} void test() { (*out_) << "OK" << std::endl; (*out_) << std::endl; // ERROR } }; int main(int argc, char** argv){ MyStream out(&std::cout); Foo<MyStream> foo(&out); foo.test(); return EXIT_SUCCESS; } The error is: stream1.cpp:19: error: no match for 'operator<<' in '*((Foo<MyStream>*)this)->Foo<MyStream>::out_ << std::endl' stream1.cpp:7: note: candidates are: std::ostream& MyStream::operator<<(const std::string&) So it can output a string (see line above the error), but not just the std::endl, presumably because std::endl is not a string, but the operator<< definition asks for a string. Templating the operator<< didn't help: template<class T> std::ostream& operator<<(const T& s) { ... } How can I make the code work? Thanks!
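
    std::endl is not a string; it is a function taking and returning std::ostream&, so MyStream needs an overload for that function-pointer type. A sketch of one way to do it (returning MyStream& rather than std::ostream& also keeps chained << calls inside MyStream):

        #include <iostream>
        #include <string>

        struct MyStream {
            std::ostream* out_;
            explicit MyStream(std::ostream* out) : out_(out) {}

            // strings (and anything else overloads are added for)
            MyStream& operator<<(const std::string& s) { (*out_) << s; return *this; }

            // manipulators such as std::endl and std::flush
            MyStream& operator<<(std::ostream& (*manip)(std::ostream&)) {
                (*out_) << manip;
                return *this;
            }
        };

        int main() {
            MyStream out(&std::cout);
            out << "OK" << std::endl;   // both overloads now resolve
            return 0;
        }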

    Read the article

  • OpenGL color quadrangle

    - by Tyzak
    Hello, I'm trying out OpenGL. I have a program that draws a white quad (quadrangle) on a black background. Now I want to give the corners of the quad a different color. I don't know where exactly to write the code, and I don't know much about glColor4f; I searched on Google but didn't get it. (Is there a good description somewhere?) #include <iostream> #include <GL/freeglut.h> void Init() { glColor4f(100,0,0,0); } void RenderScene() // drawing function { glLoadIdentity (); glBegin( GL_POLYGON ); glVertex3f( -0.5, -0.5, -0.5 ); glVertex3f( 0.5, -0.5, -0.5 ); glVertex3f( 0.5, 0.5, -0.5 ); glVertex3f( -0.5, 0.5, -0.5 ); glEnd(); glFlush(); } void Reshape(int width,int height) { } void Animate (int value) { std::cout << "value=" << value << std::endl; glutPostRedisplay(); glutTimerFunc(100, Animate, ++value); } int main(int argc, char **argv) { glutInit( &argc, argv ); // initialise GLUT glutInitDisplayMode( GLUT_RGB ); // window configuration glutInitWindowSize( 600, 600 ); glutCreateWindow( "inkrement screen; visual screen" ); // window creation glutDisplayFunc( RenderScene ); // register the drawing function glutReshapeFunc( Reshape ); glutTimerFunc( 10, Animate, 0); Init(); glutMainLoop(); return 0; }
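
    A sketch of one way to color the quad (assuming that is the goal): glColor* components are floats in the range 0.0 to 1.0, and the current color applies to the vertices issued after it, so the call belongs in RenderScene() rather than in Init(). glColor4f(100, 0, 0, 0) is clamped to red with an alpha of 0, which is probably not what was intended.

        // RenderScene() with per-vertex colors; component values are 0.0 .. 1.0.
        void RenderScene(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glLoadIdentity();
            glBegin(GL_POLYGON);
                glColor3f(1.0f, 0.0f, 0.0f);            // red for the first two corners
                glVertex3f(-0.5f, -0.5f, -0.5f);
                glVertex3f( 0.5f, -0.5f, -0.5f);
                glColor3f(0.0f, 0.0f, 1.0f);            // blue for the last two corners
                glVertex3f( 0.5f,  0.5f, -0.5f);
                glVertex3f(-0.5f,  0.5f, -0.5f);
            glEnd();
            glFlush();
        }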

    Read the article

  • Need help tuning an SQL query

    - by Viper
    Hello, i need some help to boost this SQL-Statement. The execution time is around 125ms. During the runtime of my program this sql (better: equally structured sqls for different tables) will be called 300.000 times. The average row count in the tables lies around 10.000.000 rows and new rows (updates/inserts) will be added with a timestamp each day. Data which are interesting for this particular export-program lies in the last 1-3 days. Maybe this is helpful for an index to create. The data i need is the current valid row for a given id and the forerunner datarow to get the updates (if exists). We use a Oracle 11g database and Dot.Net Framework 3.5 SQL-Statement to boost: select ID_SOMETHING, -- Number(12) ID_CONTRIBUTOR, -- Char(4 Byte) DATE_VALID_FROM, -- DATE DATE_VALID_TO -- DATE from TBL_SOMETHING XID where ID_SOMETHING = :ID_INSTRUMENT and ID_CONTRIBUTOR = :ID_CONTRIBUTOR and DATE_VALID_FROM <= :EXPORT_DATE and DATE_VALID_TO >= :EXPORT_DATE order by DATE_VALID_FROM asc; Here i uploaded the current Explain-Plan for this query. I'm not a database expert so i don't know which index-type would fit best for this requirement. I have seen that there are many different possible index-types which could be applied. Maybe Oracle Optimizer Hints are helpful, too. Does anyone has a good idea for tuning this sql or can point me in a right direction?

    Read the article

  • Creating and using a static lib in Xcode

    - by Alasdair Morrison
    I am trying to create a static library in xcode and link to that static library from another program. So as a test i have created a BSD static C library project and just added the following code: //Test.h int testFunction(); //Test.cpp #include "Test.h" int testFunction() { return 12; } This compiles fine and create a .a file (libTest.a). Now i want to use it in another program so I create a new xcode project (cocoa application) Have the following code: //main.cpp #include <iostream> #include "Testlib.h" int main (int argc, char * const argv[]) { // insert code here... std::cout << "Result:\n" <<testFunction(); return 0; } //Testlib.h extern int testFunction(); I right clicked on the project - add - existing framework - add other Selected the .a file and it added it into the project view. I always get this linker error: Build TestUselibrary of project TestUselibrary with configuration Debug Ld build/Debug/TestUselibrary normal x86_64 cd /Users/myname/location/TestUselibrary setenv MACOSX_DEPLOYMENT_TARGET 10.6 /Developer/usr/bin/g++-4.2 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk -L/Users/myname/location/TestUselibrary/build/Debug -L/Users/myname/location/TestUselibrary/../Test/build/Debug -F/Users/myname/location/TestUselibrary/build/Debug -filelist /Users/myname/location/TestUselibrary/build/TestUselibrary.build/Debug/TestUselibrary.build/Objects-normal/x86_64/TestUselibrary.LinkFileList -mmacosx-version-min=10.6 -lTest -o /Users/myname/location/TestUselibrary/build/Debug/TestUselibrary Undefined symbols: "testFunction()", referenced from: _main in main.o ld: symbol(s) not found collect2: ld returned 1 exit status I am new to macosx development and fairly new to c++. I am probably missing something fairly obvious, all my experience comes from creating dlls on the windows platform. I really appreciate any help.

    Read the article

  • String comparison

    - by muhammad-aslam
    I want to compare two user-input strings, but I am not able to do so... #include "stdafx.h" #include "iostream" #include "string" using namespace std; int _tmain(int argc, _TCHAR* argv0[]) { string my_string; string my_string2; cout<<"Enter string"<<endl; cin>>my_string; cout<<"Enter 2nd string"<<endl; cin>>my_string2; cout<<my_string<<" "<<my_string2; strcmp(my_string,my_string2); int result; result= strcmp(my_string,my_string2); cout<<result<<endl; return 0; } This is the error that appears: Error 1 error C2664: 'strcmp' : cannot convert parameter 1 from 'std::string' to 'const char *' c:\users\asad\documents\visual studio 2008\projects\string\string\string.cpp 23 String
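
    strcmp operates on C strings (const char*), while my_string is a std::string, hence the C2664. Either compare the std::string objects directly or pass c_str() to strcmp. A short sketch:

        #include <iostream>
        #include <string>
        using namespace std;

        int main()
        {
            string a, b;
            cout << "Enter string" << endl;
            cin >> a;
            cout << "Enter 2nd string" << endl;
            cin >> b;

            if (a == b)                       // std::string supports ==, <, > directly
                cout << "equal" << endl;
            else
                cout << "different" << endl;

            cout << a.compare(b) << endl;     // strcmp-style result: <0, 0, >0
            return 0;
        }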

    Read the article

  • Nokogiri, HTTParty, and XPath in Ruby on Rails

    - by Brian
    I am working with a mmorpg (Eve Online) request that returns xml. I am using httparty for the request and I am trying to use nokogiri to obtain attribute values for a specific element. Here's an example of the response: <eveapi version="2"><currentTime>2012-10-19 22:41:56</currentTime><result><rowset name="transactions" key="refID" columns="date,refID,refTypeID,ownerName1,ownerID1,ownerName2,ownerID2,argName1,argID1,amount,balance,reason,taxReceiverID,taxAmount"><row date="2012-10-18 23:41:50" refID="232323" refTypeID="9" ownerName1="University of Caille" ownerID1="32232" ownerName2="name" ownerID2="34343" argName1="" argID1="0" amount="5000.00" balance="5000.00" reason="Starter fund" taxReceiverID="" taxAmount=""/></rowset></result><cachedUntil>2012-10-19 23:03:40</cachedUntil></eveapi> I only need to access attributes for the element "row" and there can be many rows returned. I have read about xpath and from what I understand if I do the following it should return all rows: doc.xpath('row') however it does not return anything. Here's what I have so far: options = {:keyID => 111111, :vCode => 'fddfdfdfdf'} response = HTTParty.post('https://api.eveonline.com/char/WalletJournal.xml.aspx', :body => options) doc = Nokogiri::XML(response.body) doc.xpath('row').each do |r| end The loop is never executed. What am I doing wrong? I need to return all row elements and gain access to each of the row's attributes. Thanks.

    Read the article

  • Finding missing symbols in libstdc++ on Debian/squeeze

    - by Florian Le Goff
    I'm trying to use a pre-compiled library provided as a .so file. This file is dynamically linked against a few libraries: $ ldd /usr/local/test/lib/libtest.so linux-gate.so.1 => (0xb770d000) libstdc++-libc6.1-1.so.2 => not found libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb75e1000) libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7499000) /lib/ld-linux.so.2 (0xb770e000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb747c000) Unfortunately, in Debian/squeeze, there is no libstdc++-libc6.1-1.so.* file, only a libstdc++.so.* file provided by the libstdc++6 package. I tried to link (using ln -s) libstdc++-libc6.1-1.so.2 to the libstdc++.so.6 file. It does not work: a batch of symbols seems to be lacking when I try to ld my .o files with this lib. /usr/local/test/lib/libtest.so: undefined reference to `__builtin_vec_delete' /usr/local/test/lib/libtest.so: undefined reference to `istrstream::istrstream(int, char const *, int)' /usr/local/test/lib/libtest.so: undefined reference to `__rtti_user' /usr/local/test/lib/libtest.so: undefined reference to `__builtin_new' /usr/local/test/lib/libtest.so: undefined reference to `istream::ignore(int, int)' What would you do? How may I find in which lib those symbols are exported?

    Read the article

  • Invalid argument in sendfile() with two regular files

    - by Daniel Hershcovich
    I'm trying to test the sendfile() system call under Linux 2.6.32 to zero-copy data between two regular files. As far as I understand, it should work: ever since 2.6.22, sendfile() has been implemented using splice(), and both the input file and the output file can be either regular files or sockets. The following is the content of sendfile_test.c: #include <sys/sendfile.h> #include <fcntl.h> #include <stdio.h> int main(int argc, char **argv) { int result; int in_file; int out_file; in_file = open(argv[1], O_RDONLY); out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644); result = sendfile(out_file, in_file, NULL, 1); if (result == -1) perror("sendfile"); close(in_file); close(out_file); return 0; } And when I'm running the following commands: $ gcc sendfile_test.c $ ./a.out infile The output is sendfile: Bad file descriptor Which means that the system call resulted in errno = -EINVAL, I think. What am I doing wrong?
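
    One thing to rule out before blaming sendfile() itself, going only by the command shown: ./a.out was run with an input file but no output file, so argv[2] is NULL, the second open() fails, and sendfile() is handed -1, which is exactly what "Bad file descriptor" (EBADF) means. A sketch of the same program with the missing checks:

        #include <sys/sendfile.h>
        #include <fcntl.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            if (argc < 3) {
                fprintf(stderr, "usage: %s infile outfile\n", argv[0]);
                return 1;
            }

            int in_file = open(argv[1], O_RDONLY);
            if (in_file == -1) { perror("open infile"); return 1; }

            int out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (out_file == -1) { perror("open outfile"); return 1; }

            if (sendfile(out_file, in_file, NULL, 1) == -1)
                perror("sendfile");
            return 0;
        }

    If the checks pass and EINVAL really does come back, note that, if memory serves, kernels before 2.6.33 required the output descriptor of sendfile() to be a socket, so a regular file as out_fd on 2.6.32 may be rejected anyway.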

    Read the article

  • Use of unassigned local variable 'dictionary'

    - by codemonkie
    I got the error Use of unassigned local variable 'dictionary' despite I assigned the value in the following code: private static void UpdateJadProperties(Uri jadUri, Uri jarUri, Uri notifierUri) { Dictionary<String, String> dictionary; try { String[] jadFileContent; // Create an instance of StreamReader to read from a file. // The using statement also closes the StreamReader. using (StreamReader sr = new StreamReader(jadUri.AbsolutePath.ToString())) { Char[] delimiters = { '\r', '\n' }; jadFileContent = sr.ReadToEnd().Split(delimiters, System.StringSplitOptions.RemoveEmptyEntries); } // @@NOTE: Keys contain ": " suffix, values don't! dictionary = jadFileContent.ToDictionary(x => x.Substring(0, x.IndexOf(':') + 2), x => x.Substring(x.IndexOf(':') + 2)); } catch (Exception e) { // Let the user know what went wrong. Console.WriteLine("The file could not be read:"); Console.WriteLine(e.Message); } try { if (dictionary.ContainsKey("MIDlet-Jar-URL: ")) { // Change the value by Remove follow by Add } } catch (ArgumentNullException ane) { throw; } } The error is from the line: if (dictionary.ContainsKey("MIDlet-Jar-URL: ")) Can any one help me out here, pls? TIA

    Read the article

  • Instantiate a Python object within a C function called via ctypes

    - by gwk
    My embedded Python 3.3 program segfaults when I instantiate python objects from a c function called by ctypes. After setting up the interpreter, I can successfully instantiate a python Int (as well as a custom c extension type) from c main: #import <Python/Python.h> #define LOGPY(x) \ { fprintf(stderr, "%s: ", #x); PyObject_Print((PyObject*)(x), stderr, 0); fputc('\n', stderr); } // c function to be called from python script via ctypes. void instantiate() { PyObject* instance = PyObject_CallObject((PyObject*)&PyLong_Type, NULL); LOGPY(instance); } int main(int argc, char* argv[]) { Py_Initialize(); instantiate(); // works fine // run a script that calls instantiate() via ctypes. FILE* scriptFile = fopen("emb.py", "r"); if (!scriptFile) { fprintf(stderr, "ERROR: cannot open script file\n"); return 1; } PyRun_SimpleFileEx(scriptFile, scriptPath, 1); // close on completion return 0; } I then run a python script using PyRun_SimpleFileEx. It appears to run just fine, but when it calls instantiate() via ctypes, the program segfaults inside PyObject_CallObject: import ctypes as ct dy = ct.CDLL('./emb') dy.instantiate() # segfaults lldb output: instance: 0 Process 52068 stopped * thread #1: tid = 0x1c03, 0x000000010000d3f5 Python`PyObject_Call + 69, stop reason = EXC_BAD_ACCESS (code=1, address=0x18) frame #0: 0x000000010000d3f5 Python`PyObject_Call + 69 Python`PyObject_Call + 69: -> 0x10000d3f5: movl 24(%rax), %edx 0x10000d3f8: incl %edx 0x10000d3fa: movl %edx, 24(%rax) 0x10000d3fd: leaq 2069148(%rip), %rax ; _Py_CheckRecursionLimit (lldb) bt * thread #1: tid = 0x1c03, 0x000000010000d3f5 Python`PyObject_Call + 69, stop reason = EXC_BAD_ACCESS (code=1, address=0x18) frame #0: 0x000000010000d3f5 Python`PyObject_Call + 69 frame #1: 0x00000001000d5197 Python`PyEval_CallObjectWithKeywords + 87 frame #2: 0x0000000201100d8e emb`instantiate + 30 at emb.c:9 Why does the call to instantiate() fail from ctypes only? The function only crashes when it calls into the python lib, so perhaps some interpreter state is getting munged by the ctypes FFI call?
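
    Hard to diagnose from here; loading the running executable a second time through CDLL('./emb') is itself suspect, since it can hand ctypes a separate, uninitialised copy of the interpreter's state. One cheap thing to rule out first is the GIL: ctypes' CDLL releases the GIL around foreign calls, so C code that re-enters the C API should acquire a thread state before doing so. A hedged sketch of that guard, which may or may not be the actual crash here:

        #include <Python.h>   /* or <Python/Python.h> on the Mac framework layout used above */

        /* Wrap the C-API usage in PyGILState_Ensure/Release so the function is
           safe when invoked through ctypes, which drops the GIL around the call. */
        void instantiate(void)
        {
            PyGILState_STATE gstate = PyGILState_Ensure();

            PyObject* instance = PyObject_CallObject((PyObject*)&PyLong_Type, NULL);
            if (instance != NULL) {
                PyObject_Print(instance, stderr, 0);
                Py_DECREF(instance);
            }

            PyGILState_Release(gstate);
        }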

    Read the article

  • Objective-C autorelease pool not releasing object

    - by Chris
    Hi I am very new to Objective-C and was reading through memory management. I was trying to play around a bit with the NSAutoreleasePool but somehow it wont release my object. I have a class with a setter and getter which basically sets a NSString *name. After releasing the pool I tried to NSLog the object and it still works but I guess it should not? @interface TestClass : NSObject { NSString *name; } - (void) setName: (NSString *) string; - (NSString *) name; @end @implementation TestClass - (void) setName: (NSString *) string { name = string; } - (NSString *) name { return name; } @end int main (int argc, const char * argv[]) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; TestClass *var = [[TestClass alloc] init]; [var setName:@"Chris"]; [var autorelease]; [pool release]; // This should not be possible? NSLog(@"%@",[var name]); return 0; }

    Read the article

  • error: expected constructor, destructor, or type conversion before '(' token

    - by jonathanasdf
    include/TestBullet.h:12: error: expected constructor, destructor, or type conversion before '(' token I hate C++ error messages... lol ^^ Basically, I'm following what was written in this post to try to create a factory class for bullets so they can be instantiated from a string, which will be parsed from an xml file, because I don't want to have a function with a switch for all of the classes because that looks ugly. Here is my TestBullet.h: #pragma once #include "Bullet.h" #include "BulletFactory.h" class TestBullet : public Bullet { public: void init(BulletData& bulletData); void update(); }; REGISTER_BULLET(TestBullet); <-- line 12 And my BulletFactory.h: #pragma once #include <string> #include <map> #include "Bullet.h" #define REGISTER_BULLET(NAME) BulletFactory::reg<NAME>(#NAME) #define REGISTER_BULLET_ALT(NAME, CLASS) BulletFactory::reg<CLASS>(NAME) template<typename T> Bullet * create() { return new T; } struct BulletFactory { typedef std::map<std::string, Bullet*(*)()> bulletMapType; static bulletMapType map; static Bullet * createInstance(char* s) { std::string str(s); bulletMapType::iterator it = map.find(str); if(it == map.end()) return 0; return it->second(); } template<typename T> static void reg(std::string& s) { map.insert(std::make_pair(s, &create<T>)); } }; Thanks in advance.
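
    The error itself is mechanical: REGISTER_BULLET(TestBullet) expands to a plain function call, BulletFactory::reg<TestBullet>("TestBullet");, at namespace scope, and only declarations may appear there. The usual workaround is to have the macro define a static object whose constructor performs the registration. A sketch for BulletFactory.h (the registrar type is made up, it replaces the existing #define, and BulletFactory::map still needs a definition in exactly one .cpp file):

        // Goes below the BulletFactory definition in BulletFactory.h.
        // A registration helper that is legal at namespace scope: the macro
        // declares a static object, and its constructor inserts the creator.
        struct BulletRegistrar {
            BulletRegistrar(const std::string& name, Bullet* (*creator)()) {
                BulletFactory::map.insert(std::make_pair(name, creator));
            }
        };

        #define REGISTER_BULLET(NAME) \
            static BulletRegistrar bulletRegistrar_##NAME(#NAME, &create<NAME>)

        // In TestBullet.cpp (not the header, to avoid one registrar per include):
        REGISTER_BULLET(TestBullet);

    One caveat worth keeping in mind: the registrar objects and BulletFactory::map are both initialised during static initialisation, so hiding the map behind a function that returns a local static is a common extra precaution against initialisation-order surprises.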

    Read the article

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing wtih rifts between Visual-C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual-C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following: int main() { string a, b, c; //Do stuff with a and b. c = get_string(a,b); } string get_string(string a, string b) { const char * a_ch, b_ch; a_ch = strtok(a.c_str(),","); b_ch = strtok(b.c_str(),","); } strtok is infamous for being great at tokenizing, but equally great at destroying the original string to be tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator used was completely removed in the string. Here's an example in case I'm unclear; if I set a = "Jim,Bob,Mary" and b="Grace,Soo,Hyun", they would be defined as a="JimBobMary" and b="GraceSooHyun" instead of staying the same like I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory to the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: Why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts that I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez
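
    Neither compiler is wrong here: strtok() writes '\0' terminators into the buffer it is given, and the pointer returned by c_str() must not be written through, so the snippet has undefined behaviour that merely happens to look survivable under one toolchain. A sketch of a tokenizer that leaves the originals untouched, with no malloc/free involved:

        #include <sstream>
        #include <string>
        #include <vector>

        // Split without modifying the input; a and b keep "Jim,Bob,Mary" etc.
        std::vector<std::string> split(const std::string& s, char sep)
        {
            std::vector<std::string> out;
            std::istringstream in(s);
            std::string token;
            while (std::getline(in, token, sep))
                out.push_back(token);
            return out;
        }

        std::string get_string(const std::string& a, const std::string& b)
        {
            std::vector<std::string> a_parts = split(a, ',');
            std::vector<std::string> b_parts = split(b, ',');
            // ... comparisons on a_parts / b_parts go here ...
            return a_parts.empty() ? std::string() : a_parts[0];   // placeholder result
        }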

    Read the article

  • Issues with reversing bit shifts that roll over the maximum byte size?

    - by Terri
    I have a string of binary numbers that was originally a regular string and will return to a regular string after some bit manipulation. I'm trying to do a simple caesarian shift on the binary string, and it needs to be reversable. I've done this with this method.. public static String cShift(String ptxt, int addFactor) { String ascii = ""; for (int i = 0; i < ptxt.length(); i+=8) { int character = Integer.parseInt(ptxt.substring(i, i+8), 2); byte sum = (byte) (character + addFactor); ascii += (char)sum; } String returnToBinary = convertToBinary(ascii); return returnToBinary; } This works fine in some cases. However, I think when it rolls over being representable by one byte it's irreversable. On the test string "test!22*F ", with an addFactor of 12, the string becomes irreversible. Why is that and how can I stop it?
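
    The round trip most plausibly breaks when character + addFactor leaves the 0..255 range: byte is signed in Java, so casting the wrapped value back to char sign-extends anything above 127, and the reverse shift then starts from the wrong number. Keeping every step explicitly modulo 256 (masking with 0xFF in Java) makes the shift a bijection. The question is Java, but the idea is quickest to show in a few lines of C++, purely as an illustration:

        #include <cstddef>
        #include <string>

        // Shift every byte by k, modulo 256, so shift(shift(s, k), -k) == s
        // for all byte values, including those above 127.
        std::string shift(const std::string& bytes, int k)
        {
            std::string out = bytes;
            for (std::size_t i = 0; i < out.size(); ++i) {
                unsigned int v = static_cast<unsigned char>(out[i]);   // 0..255
                out[i] = static_cast<char>((v + (k % 256) + 256) % 256);
            }
            return out;
        }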

    Read the article

  • How to find unique values in jagged array

    - by David Liddle
    I would like to know how I can count the number of unique values in a jagged array. My domain object contains a string property that has space delimitered values. class MyObject { string MyProperty; //e.g = "v1 v2 v3" } Given a list of MyObject's how can I determine the number of unique values? The following linq code returns an array of jagged array values. A solution would be to store a temporary single array of items, looped through each jagged array and if values do not exist, to add them. Then a simple count would return the unique number of values. However, was wondering if there was a nicer solution. db.MyObjects.Where(t => !String.IsNullOrEmpty(t.MyProperty)) .Select(t => t.Categories.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)) .ToArray() Below is a more readable example: array[0] = { "v1", "v2", "v3" } array[1] = { "v1" } array[2] = { "v4", "v2" } array[3] = { "v1", "v5" } From all values the unique items are v1, v2, v3, v4, v5. The total number of unique items is 5. Is there a solution, possibly using linq, that returns either only the unique values or returns the number of unique values?

    Read the article

  • How to pass parameters to managed_shared_memory.construct() in Boost.Interprocess

    - by recipriversexclusion
    I've stared at the Boost.Interprocess documentation for hours but still haven't been able to figure this out. In the doc, they have an example of creating a vector in shared memory like so: //Define an STL compatible allocator of ints that allocates from the managed_shared_memory. //This allocator will allow placing containers in the segment typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator; //Alias a vector that uses the previous STL-like allocator so that allocates //its values from the segment typedef vector<int, ShmemAllocator> MyVector; int main(int argc, char *argv[]) { //Create a new segment with given name and size managed_shared_memory segment(create_only, "MySharedMemory", 65536); //Initialize shared memory STL-compatible allocator const ShmemAllocator alloc_inst (segment.get_segment_manager()); //Construct a vector named "MyVector" in shared memory with argument alloc_inst MyVector *myvector = segment.construct<MyVector>("MyVector")(alloc_inst); Now, I understand this. What I'm stuck is how to pass a second parameter to segment.construct() to specify the number of elements. The interprocess document gives the prototype for construct() as MyType *ptr = managed_memory_segment.construct<MyType>("Name") (par1, par2...); but when I try MyVector *myvector = segment.construct<MyVector>("MyVector")(100, alloc_inst); I get compilation errors. My questions are: Who actually gets passed the parameters par1, par2 from segment.construct, the constructor of the object, e.g. vector? My understanding is that the template allocator parameter is being passed. Is that correct? How can I add another parameter, in addition to alloc_inst that is required by the constructor of the object being created in shared memory? There's very little information other than the terse Boost docs on this.
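
    On the first question: the arguments in the trailing parentheses are forwarded to the constructor of the type being constructed, MyVector here, not to the allocator. So the extra argument has to match one of vector's constructors; assuming the usual (count, value, allocator) overload is available, something along these lines should build a vector of one hundred zero-initialised ints inside the segment:

        // Constructor arguments go to MyVector's constructor, so they must match
        // one of vector's overloads -- here (count, value, allocator):
        MyVector* myvector =
            segment.construct<MyVector>("MyVector")(100, 0, alloc_inst);

        // Equivalent two-step alternative if that overload is not available:
        MyVector* myvector2 = segment.construct<MyVector>("MyVector2")(alloc_inst);
        myvector2->resize(100);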

    Read the article

  • slot function not getting called

    - by chappar
    I am learning QT and trying out some examples. I am trying to make a dialog that disappears a label when a button is pressed and makes it appear when the same button is pressed again. Below is the code. #include <QApplication> #include <QPushButton> #include <QLabel> #include <QDialog> #include <QObject> #include <QHBoxLayout> int main(int argc, char ** argv) { QApplication app(argc, argv); QDialog *dialog = new QDialog; QPushButton *testButton = new QPushButton(QObject::tr("test")); QLabel * testLabel = new QLabel (QObject::tr("test")); QHBoxLayout * layout = new QHBoxLayout; layout->addWidget(testButton); layout->addWidget(testLabel); QObject::connect(testButton, SIGNAL(toggled(bool)), testLabel, SLOT(setVisible(bool))); dialog->setLayout(layout); dialog->show(); return app.exec(); } It is not working. Whenever i press the test button nothing happens. But if i change the signal slot connections as QObject::connect(testButton, SIGNAL(clicked(bool)), testLabel, SLOT(setVisible(bool))); it makes the label disappear. So, why it is not working with signal "toggled". What i am guessing is, it is not able to find that signal. Can you guys throw some light?
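
    toggled(bool) is only emitted by checkable buttons, and a QPushButton is not checkable by default, so the connected signal never fires. Making the button checkable (and connecting to setHidden(bool) if the label should disappear on the first press) gives the press-to-hide, press-again-to-show behaviour:

        testButton->setCheckable(true);   // without this, toggled(bool) is never emitted

        // checked -> hidden, unchecked -> shown again
        QObject::connect(testButton, SIGNAL(toggled(bool)),
                         testLabel, SLOT(setHidden(bool)));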

    Read the article

  • Constructor is being invoked twice

    - by Knowing me knowing you
    In code: LINT a = "12"; LINT b = 3; a = "3";//WHY THIS LINE INVOKES CTOR? std::string t = "1"; //LINT a = t;//Err NO SUITABLE CONV FROM STRING TO LINT. Shouldn't ctor do it? #pragma once #include "LINT_rep.h" class LINT { private: typedef LINT_rep value_type; const value_type* my_data_; template<class T> void init_(const T&); public: LINT(const char* = 0); LINT(const std::string&); LINT(const LINT&); LINT(const long_long&); LINT& operator=(const LINT&); virtual ~LINT(void); LINT operator+()const; //DONE LINT operator+(const LINT&)const;//DONE LINT operator-()const; //DONE LINT operator-(const LINT&)const;//DONE LINT operator*(const LINT&)const;//DONE LINT operator/(const LINT&)const;///WAITS FOR APPROVAL LINT& operator+=(const LINT&);//DONE LINT& operator-=(const LINT&);//DONE LINT& operator*=(const LINT&);//DONE LINT operator/=(const LINT&);///WAITS FOR APPROVAL }; in line number 3 instead of assignment optor ctor is invoked. Why? I'm willing to uppload entire solution on some server otherwise it's hard to put everything in here. I can also upload video file. Another thing is that when I implement this assignment optor I'm getting an error that this optor is already in obj file? What's going on?
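
    On the commented line the constructor runs because LINT has no operator=(const char*): the compiler first converts "3" into a temporary LINT through LINT(const char*), then calls operator=(const LINT&). That is normal; adding assignment operators for const char* (and std::string) avoids the temporary. A stripped-down model of the behaviour:

        #include <iostream>

        // Minimal model: with only operator=(const LINT&), assigning a string
        // literal must construct a temporary LINT first -- hence the ctor call.
        struct LINT {
            LINT(const char* s)            { std::cout << "ctor(const char*): " << s << "\n"; }
            LINT(const LINT&)              { std::cout << "copy ctor\n"; }
            LINT& operator=(const LINT&)   { std::cout << "operator=(const LINT&)\n"; return *this; }
            LINT& operator=(const char* s) { std::cout << "operator=(const char*): " << s << "\n"; return *this; }
        };

        int main() {
            LINT a = "12";   // ctor(const char*) (plus a possibly elided copy)
            a = "3";         // picks operator=(const char*); without it: ctor + operator=(const LINT&)
            return 0;
        }

    As for the "already in obj file" error: that usually means the operator was defined in a header included by more than one .cpp file without being marked inline, though without seeing the definition that is only a guess.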

    Read the article

  • Stringing multiple ShellExecute calls

    - by IVlad
    Consider the following code and its executable - runner.exe: #include <iostream> #include <string> #include <windows.h> using namespace std; int main(int argc, char *argv[]) { SHELLEXECUTEINFO shExecInfo; shExecInfo.cbSize = sizeof(SHELLEXECUTEINFO); shExecInfo.fMask = NULL; shExecInfo.hwnd = NULL; shExecInfo.lpVerb = "open"; shExecInfo.lpFile = argv[1]; string Params = ""; for ( int i = 2; i < argc; ++i ) Params += argv[i] + ' '; shExecInfo.lpParameters = Params.c_str(); shExecInfo.lpDirectory = NULL; shExecInfo.nShow = SW_SHOWNORMAL; shExecInfo.hInstApp = NULL; ShellExecuteEx(&shExecInfo); return 0; } These two batch files both do what they're supposed to, which is run notepad.exe and run notepad.exe and tell it to try to open test.txt: 1. runner.exe notepad.exe 2. runner.exe notepad.exe test.txt Now, consider this batch file: 3. runner.exe runner.exe notepad.exe This one should run runner.exe and send notepad.exe as one of its command line arguments, shouldn't it? Then, that second instance of runner.exe should run notepad.exe - which doesn't happen. If I print the argc argument, it's 14 for the second instance of runner.exe, and they are all weird stuff like Files\Microsoft, SQL, Files\Common and so on. I can't figure out why this happens. I want to be able to string as many runner.exe calls using command line arguments as possible, or at least 2. How can I do that? I am using Windows 7 if that makes a difference.
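
    The strange arguments ("Files\Microsoft", "SQL", and so on) point at the real problem: argv[i] is a char*, so argv[i] + ' ' is pointer arithmetic, advancing the pointer by 32 characters and appending whatever memory happens to follow, rather than concatenation. Building the parameter string explicitly fixes the chained call; a sketch (the rest of the SHELLEXECUTEINFO setup is unchanged, though zero-initialising the struct is also worthwhile, since several fields were left unset):

        // Build lpParameters with real string concatenation.
        std::string Params;
        for (int i = 2; i < argc; ++i) {
            Params += argv[i];      // std::string& += const char*  appends the text
            Params += ' ';
        }

        SHELLEXECUTEINFO shExecInfo = {0};     // zero everything not set explicitly
        shExecInfo.cbSize = sizeof(shExecInfo);
        shExecInfo.lpVerb = "open";
        shExecInfo.lpFile = argv[1];
        shExecInfo.lpParameters = Params.empty() ? NULL : Params.c_str();
        shExecInfo.nShow = SW_SHOWNORMAL;
        ShellExecuteEx(&shExecInfo);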

    Read the article
