Search Results

Search found 8397 results on 336 pages for 'implementation'.

Page 90/336

  • Problems with CGPoint/touches event

    - by Jason
    I'm having some problems storing variables from my touch events. The warning I get when I run this is that coord and icoord are unused, but I use them in the viewDidLoad implementation. Is there a reason why this does not work? Any suggestions?

        #import "iGameViewController.h"

        @implementation iGameViewController
        @synthesize player;

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [[event allTouches] anyObject];
            CGPoint icoord = [touch locationInView:touch.view];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [[event allTouches] anyObject];
            CGPoint coord = [touch locationInView:touch.view];
        }

        - (void)viewDidLoad {
            if (coord.x > icoord.x) {
                player.center = CGPointMake(player.center.x + 5, player.center.y);
            }
            if (coord.x < icoord.x) {
                player.center = CGPointMake(player.center.x - 5, player.center.y);
            }
            if (coord.y > icoord.y) {
                player.center = CGPointMake(player.center.x, player.center.y - 5);
            }
            if (coord.y < icoord.y) {
                player.center = CGPointMake(player.center.x, player.center.y + 5);
            }
        }

    Thanks.

  • Need help implementing this algorithm with map Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post the code so it doesn't look like I'm searching for someone to implement it for me, though it's true I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it this way. EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should produce a MapReduce implementation of it. I read about MapReduce and thought that if I first did the standard implementation, it would then be more or less easy to redo it with MapReduce. That didn't happen, since the algorithm is quite simple and nothing special, but MapReduce... I can't wrap my mind around it. So here, briefly, is the pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that it is difficult to write to many outputs from MapReduce. I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to actually doing it, I obviously do not know enough and I am stuck! Please help.

  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time libraries (RTL). I don't link to the DLL version or the static version; I don't even #include the header files. I have this working fine. For some code constructs, the compiler generates calls to memset(). For example:

        struct MyStruct {
            int foo;
            int bar;
        };

        MyStruct blah = {};  // calls memset()

    Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct:

        MyStruct blah;
        blah.foo = 0;
        blah.bar = 0;

    But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for those places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds intrinsic functions are enabled, and memset() is an intrinsic. I would love to use the intrinsic for memset() in my release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be in a catch-22: if I don't define memset(), the linker complains that it's undefined; if I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset) with and without declarations of memset, but no luck. Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
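
    A direction worth sketching (an assumption on my part, not verified against VS2008 specifically): MSVC's #pragma function() directive asks the compiler to treat a name as an ordinary function rather than an intrinsic, which makes a user-supplied definition legal even in Release builds with /Oi. It resolves the define-versus-intrinsic conflict, though it does not by itself give you the inline-expanded intrinsic at every call site:

        // Hedged sketch: make memset an ordinary function so defining it is legal
        // when the RTL is not linked in.
        #include <cstddef>   // for size_t only; adjust if you avoid all headers

        extern "C" void* memset(void* dest, int value, size_t count);
        #pragma function(memset)

        extern "C" void* memset(void* dest, int value, size_t count)
        {
            unsigned char* p = static_cast<unsigned char*>(dest);
            while (count--)
                *p++ = static_cast<unsigned char>(value);
            return dest;
        }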

  • VB6 Game Development

    - by CVS-2600Hertz-wordpress-com
    Hi All, I am developing a game in VB6 (please don't ask me why :) ). The storyboard is ready and a rough implementation is underway. I am following a "pure software rendering" approach (i.e. no DirectX, no OpenGL, etc.). Amongst many others, the following "serious" problems exist:

        1. 2D alpha transparency required to implement overlays.
        2. Parallax implementation to give a depth-of-field illusion.
        3. Capturing mouse-scroll events globally (as in FPSes; mapping them to changing weapon).
        4. Async sound play with absolute "near-zero lag".

    Any ideas, anyone? Please suggest any well-documented library/OCX or sample code, preferably solutions with good performance and as little overhead as possible. Also, anyone who has developed any games and would be open to sharing her/his code would be highly appreciated (any well-acknowledged VB games whose source code I can study?). UPDATE: Here is a screen shot of GearHead Garage. This picture ought to describe what I was attempting in words above... :) Thank you.

  • How to implement a really efficient bitvector sort in Python

    - by xiao
    Hello guys! Actually this is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here. What I am interested in is just how fast the implementation could be in Python. I have done a naive implementation with the module BitVector. The code is as follows:

        from BitVector import BitVector
        import timeit
        import random
        import time
        import sys

        def sort(input_li):
            return sorted(input_li)

        def vec_sort(input_li):
            bv = BitVector(size=len(input_li))
            for i in input_li:
                bv[i] = 1
            res_li = []
            for i in range(len(bv)):
                if bv[i]:
                    res_li.append(i)
            return res_li

        if __name__ == "__main__":
            test_data = range(int(sys.argv[1]))
            print 'test_data size is:', sys.argv[1]
            random.shuffle(test_data)

            start = time.time()
            sort(test_data)
            elapsed = (time.time() - start)
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = (time.time() - start)
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = (time.time() - start)
            print "vec_sort function takes " + str(elapsed)

    I have tested from array size 100 to 10,000,000 on my MacBook (2 GHz Intel Core 2 Duo, 2 GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348

        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841

        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392

        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878

        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?

  • Android's DateFormat replacement - missing the format() with FieldPosition

    - by user331244
    Hi, I need to split a date string into pieces, and I'm doing it using the public final StringBuffer format(Object object, StringBuffer buffer, FieldPosition field) method from the java.text.DateFormat class. However, the implementation of this function is really slow, which is why Android has its own implementation in android.text.format.DateFormat. BUT, in my case, I want to extract the different pieces of the date string (year, minute and so on). Since I need to be locale independent, I cannot use SimpleDateFormat and custom pattern strings. I do it as follows:

        Calendar c = ...

        // find out what field to extract
        int field = getField();

        // Create a date string
        Field calendarField = DateFormat.Field.ofCalendarField(field);
        FieldPosition fieldPosition = new FieldPosition(calendarField);
        StringBuffer label = new StringBuffer();
        label = getDateFormat().format(c.getTime(), label, fieldPosition);

        // Find the piece that we are looking for
        int beginIndex = fieldPosition.getBeginIndex();
        int endIndex = fieldPosition.getEndIndex();
        String asString = label.substring(beginIndex, endIndex);

    For some reason, the format() overload with the FieldPosition argument is not included in the Android platform. Any ideas of how to do this another way? Is there any easy way to tokenize the pattern string? Any other ideas?

  • C++ Multiple inheritance with interfaces?

    - by umanga
    Greetings all, I come from a Java background and I am having difficulty with multiple inheritance. I have an interface called IViewer which has an init() method. I want to derive a new class called PlaneViewer implementing the above interface and extending another class (QWidget). My implementation is as follows.

    IViewer.h (only a header file, no CPP file):

        #ifndef IVIEWER_H_
        #define IVIEWER_H_

        class IViewer {
        public:
            //IViewer();
            //virtual ~IViewer();
            virtual void init() = 0;
        };

        #endif /* IVIEWER_H_ */

    My derived class, PlaneViewer.h:

        #ifndef PLANEVIEWER_H
        #define PLANEVIEWER_H

        #include <QtGui/QWidget>
        #include "ui_planeviewer.h"
        #include "IViewer.h"

        class PlaneViewer : public QWidget, public IViewer {
            Q_OBJECT

        public:
            PlaneViewer(QWidget *parent = 0);
            ~PlaneViewer();
            void init();  // do I have to declare this here also?

        private:
            Ui::PlaneViewerClass ui;
        };

        #endif // PLANEVIEWER_H

    PlaneViewer.cpp:

        #include "planeviewer.h"

        PlaneViewer::PlaneViewer(QWidget *parent)
            : QWidget(parent)
        {
            ui.setupUi(this);
        }

        PlaneViewer::~PlaneViewer()
        {
        }

        void PlaneViewer::init() {
        }

    My questions are:

    1. Is it necessary to declare the method init() in PlaneViewer also, since it is already declared in IViewer?
    2. I cannot compile the above code; it gives the error:

        PlaneViewer]+0x28): undefined reference to `typeinfo for IViewer'
        collect2: ld returned 1 exit status

    Do I have to have an implementation for IViewer in a CPP file?
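
    On question 1, the stripped-down sketch below (my own example, no Qt involved, so it says nothing about the typeinfo link error) illustrates the usual pattern: the implementing class re-declares the pure virtual in its own header and supplies the definition, otherwise it remains abstract and cannot be instantiated:

        #include <iostream>

        class IViewer {
        public:
            virtual ~IViewer() {}        // interface bases usually get a virtual destructor
            virtual void init() = 0;     // pure virtual: declared here, defined by implementers
        };

        class SomeBase {
        public:
            void helper() { std::cout << "helper\n"; }
        };

        // Must re-declare init() and give it a body; otherwise this class stays abstract.
        class Viewer : public SomeBase, public IViewer {
        public:
            void init() { std::cout << "Viewer::init\n"; }
        };

        int main() {
            Viewer v;
            IViewer& iv = v;
            iv.init();        // dispatches to Viewer::init
            v.helper();
        }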

  • What is the PastryKit Framework?

    - by Kerrick
    I'm trying to find any information I can on the PastryKit JavaScript framework. It appears to be in use on the iPhone User Guide that is displayed on the iPhone itself in Mobile Safari, but I cannot find any documentation or API. If you want to see it in action, open Safari 4, set your user agent to iPhone 3 (in the Develop menu) and check out the guide. Overall, it seems to be a way to write an HTML/CSS/JavaScript application that acts like a native iPhone app. On the JavaScript side, I ran JS Beautifier on (what I assume to be) the framework file and it was over 3,400 lines! Beautified, (again, what I assume to be) their implementation of it was over 1,200 lines. On the CSS side, I ran Clean CSS on (again, what I assume to be) the framework CSS, and it came out to over 700 lines; their implementation was shy of 500. Does anybody have, or know where to find, any information, documentation, or APIs on PastryKit? Or can anybody figure out how to implement it?

  • How can I release this NSXMLParser without crashing my app?

    - by prendio2
    Below is the @interface for an MREntitiesConverter object I use to strip all HTML tags from a string using an NSXMLParser.

        @interface MREntitiesConverter : NSObject {
            NSMutableString* resultString;
            NSString* xmlStr;
            NSData *data;
            NSXMLParser* xmlParser;
        }
        @property (nonatomic, retain) NSMutableString* resultString;
        - (NSString*)convertEntitiesInString:(NSString*)s;
        @end

    And this is the implementation:

        @implementation MREntitiesConverter
        @synthesize resultString;

        - (id)init {
            if ([super init]) {
                self.resultString = [NSMutableString string];
            }
            return self;
        }

        - (NSString*)convertEntitiesInString:(NSString*)s {
            xmlStr = [NSString stringWithFormat:@"<data>%@</data>", s];
            data = [xmlStr dataUsingEncoding:NSUTF8StringEncoding allowLossyConversion:YES];
            xmlParser = [[NSXMLParser alloc] initWithData:data];
            [xmlParser setDelegate:self];
            [xmlParser parse];
            return [resultString autorelease];
        }

        - (void)dealloc {
            [data release];
            // I want to release xmlParser here but it crashes the app
            [super dealloc];
        }

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)s {
            [self.resultString appendString:s];
        }
        @end

    If I release xmlParser in the dealloc method it crashes my app, but without releasing it I am quite obviously leaking memory. I am new to Instruments and trying to get the hang of optimising this app. Any help you can offer on this particular issue will likely help me solve other memory issues in my app. Yours in frustrated anticipation :) Oisin

  • How to copy generically superclass instances to subclass instances?

    - by gerry
    Hi @all, I have a class hierarchy / inheritance like this:

        public class A {
            private String name; // with getters & setters

            public void doAWithName() { ... }
        }

        public class B extends A {
            public void doBWithName() {
                // a different implementation to what I do in class A
            }
        }

        public class C extends B {
            public void doCWithName() {
                // a different implementation to what I do in class A and B
            }
        }

    So at some point there is an instance of class A with the initialized field "name". Later I want this instance of A to get wrapped into an instance of B or C, so the superclass instance should get wrapped by a subclass. How can I do this most efficiently with respect to DRY? I've thought about a constructor that does some copying with the getters/setters, but in that case I have to repeat myself, which doesn't respect my initial requirement of DRY! So, how can I wrap A into B by just initializing B's new fields (with default values) and delegating the rest to a method in A (which knows better than B which fields of A should be accessed)? In the same way: if A should be wrapped into C, only a method in C should init C's "new" fields, delegate to B's wrap method (which in turn inits B's "new" fields in C), and at last B delegates to A, which copies its fields to the fields of C. So in the end I have a new instance of C which has the values of A wrapped (plus default init values for the fields the inheritance hierarchy has added).

  • How are declared private ivars different from synthesized ivars?

    - by lemnar
    I know that the modern Objective-C runtime can synthesize ivars. I thought that synthesized ivars behaved exactly like declared ivars that are marked @private, but they don't. As a result, some code compiles only under the modern runtime that I expected would work on either. For example, a superclass:

        @interface A : NSObject {
        #if !__OBJC2__
        @private
            NSString *_c;
        #endif
        }
        @property (nonatomic, copy) NSString *d;
        @end

        @implementation A
        @synthesize d=_c;

        - (void)dealloc {
            [_c release];
            [super dealloc];
        }
        @end

    and a subclass:

        @interface B : A {
        #if !__OBJC2__
        @private
            NSString *_c;
        #endif
        }
        @property (nonatomic, copy) NSString *e;
        @end

        @implementation B
        @synthesize e=_c;

        - (void)dealloc {
            [_c release];
            [super dealloc];
        }
        @end

    A subclass can't have a declared ivar with the same name as one of its superclass's declared ivars, even if the superclass's ivar is private. This seems to me like a violation of the meaning of @private, since the subclass is affected by the superclass's choice of something private. What I'm more concerned about, however, is how I should think about synthesized ivars. I thought they acted like declared private ivars, but without the fragile base class problem. Maybe that's right, and I just don't understand the fragile base class problem. Why does the above code compile only in the modern runtime? Does the fragile base class problem exist when all superclass instance variables are private?

  • How to call a function from another class file

    - by Guy Parker
    I am very familiar with writing VB-based applications but am new to Xcode (and Objective-C). I have gone through numerous tutorials on the web and understand the basics and how to interact with Interface Builder, etc. However, I am really struggling with some basic concepts of the C language and would be grateful for any help you can offer. Here's my problem: I have a simple iPhone app which has a view controller (FirstViewController) and a subview (SecondViewController) with associated header and class files. In FirstViewController.m I have a function defined:

        @implementation FirstViewController

        - (void)writeToServer:(const uint8_t *)buf {
            [oStream write:buf maxLength:strlen((char *)buf)];
        }

    It doesn't really matter what the function is. I want to use this function in my SecondViewController, so in SecondViewController.m I import FirstViewController.h:

        #import "SecondViewController.h"
        #import "FirstViewController.h"

        @implementation SecondViewController

        - (IBAction)SetButton:(id)sender {
            NSString *s = [@"Fill:" stringByAppendingString:FillLevelValue.text];
            NSString *strToSend = [s stringByAppendingString:@":"];
            const uint8_t *str = (uint8_t *)[strToSend cStringUsingEncoding:NSASCIIStringEncoding];
            FillLevelValue.text = strToSend;
            [FirstViewController writeToServer:str];
        }

    This last line is where my problem is. Xcode tells me that FirstViewController may not respond to writeToServer, and when I try to run the application it crashes when this function is called. I guess I don't fully understand how to share functions and, more importantly, the relationship between classes. In an ideal world I would create a global class to place my functions in and call them as required. Any advice gratefully received.

  • Abstract base class puzzle

    - by 0x80
    In my class design I ran into the following problem:

        class MyData {
            int foo;
        };

        class AbstraktA {
        public:
            virtual void A() = 0;
        };

        class AbstraktB : public AbstraktA {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : public AbstraktA {
        public:
            void A() { cout << "ImplA A()"; }
        };

        class ImplB : public ImplA<MyData>, public AbstraktB {
        public:
            void B() { cout << "ImplB B()"; }
        };

        void TestAbstrakt() {
            AbstraktB *b = (AbstraktB *) new ImplB;
            b->A();
            b->B();
        };

    The problem with the code above is that the compiler will complain that AbstraktA::A() is not defined. Interface A is shared by multiple objects, but the implementation of A depends on the template argument. Interface B is the one seen by the outside world and needs to be abstract. The reason I would like this is that it would allow me to define object C like this: define the interface C inheriting from abstract A, then define the implementation of C using a different datatype for template A. I hope I'm clear. Is there any way to do this, or do I need to rethink my design?
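
    One direction to consider, sketched below (my own reading, not from the post): ImplB ends up with two separate AbstraktA bases, one through ImplA<MyData> and one through AbstraktB, and the AbstraktB branch still carries an unimplemented pure virtual A(). Making the inheritance from AbstraktA virtual collapses the two into a single shared base, so ImplA's A() becomes the final overrider for the whole object:

        // Hedged sketch: virtual inheritance so both paths share one AbstraktA base.
        #include <iostream>

        struct MyData { int foo; };

        class AbstraktA {
        public:
            virtual void A() = 0;
            virtual ~AbstraktA() {}
        };

        class AbstraktB : public virtual AbstraktA {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : public virtual AbstraktA {
        public:
            void A() { std::cout << "ImplA A()"; }
        };

        class ImplB : public ImplA<MyData>, public AbstraktB {
        public:
            void B() { std::cout << "ImplB B()"; }
        };

        void TestAbstrakt() {
            AbstraktB *b = new ImplB;   // no cast needed now
            b->A();                     // dispatches to ImplA<MyData>::A()
            b->B();                     // dispatches to ImplB::B()
        }

    The trade-off is the usual virtual-base bookkeeping in each derived object; whether that cost is acceptable depends on how many of these objects exist.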

  • Calling a subclass method from a superclass

    - by Shaun
    Preface: This is in the context of a Rails application. The question, however, is specific to Ruby. Let's say I have a Media object:

        class Media < ActiveRecord::Base
        end

    I've extended it in a few subclasses:

        class Image < Media
          def show
            # logic
          end
        end

        class Video < Media
          def show
            # logic
          end
        end

    From within the Media class, I want to call the implementation of show from the proper subclass. So, from Media, if self is a Video, it would call Video's show method; if self is instead an Image, it would call Image's show method. Coming from a Java background, the first thing that popped into my head was "create an abstract method in the superclass". However, I've read in several places (including Stack Overflow) that abstract methods aren't the best way to deal with this in Ruby. With that in mind, I started researching typecasting and discovered that this is also a relic of Java thinking that I need to banish from my mind when dealing with Ruby. Defeated, I started coding something that looked like this:

        def superclass_method
          # logic
          this_media = self.type.constantize.find(self.id)
          this_media.show
        end

    I've been coding in Ruby/Rails for a while now, but since this was my first time trying out this behavior and existing resources didn't answer my question directly, I wanted to get feedback from more seasoned developers on how to accomplish my task. So, how can I call a subclass's implementation of a method from the superclass in Rails? Is there a better way than what I ended up (almost) implementing?

  • Android depth buffer issue: Advice for anyone experiencing problem

    - by Andrew Smith
    I've wasted around 30 hours this week writing and re-writing code, believing that I had misunderstood how the OpenGL depth buffer works. Everything I tried failed. I have now resolved my problem by finding what may be an error in the Android implementation of OpenGL. See this API entry: http://www.opengl.org/sdk/docs/man/xhtml/glClearDepth.xml

        void glClearDepth(GLclampd depth);

    Specifies the depth value used when the depth buffer is cleared. The initial value is 1.

    Android's implementation has two versions of this command:

        glClearDepthx, which takes an integer value, clamped 0-1
        glClearDepthf, which takes a floating-point value, clamped 0-1

    If you use glClearDepthf(1) then you get the results you would expect. If you use glClearDepthx(1), as I was doing, then you get different results. (Note that 1 is the default value, but calling the command with the argument 1 produces different results than not calling it at all.) Quite what is happening I do not know, but the depth buffer was being cleared to a value different from what I had specified.

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check whether they have the same sign, given that if either of the two values is zero (+0.0 or -0.0), they should be considered to have the same sign. For instance:

        SameSign(1.0, 2.0)   should return true
        SameSign(-1.0, -2.0) should return true
        SameSign(-1.0, 2.0)  should return false
        SameSign(0.0, 1.0)   should return true
        SameSign(0.0, -1.0)  should return true
        SameSign(-0.0, 1.0)  should return true
        SameSign(-0.0, -1.0) should return true

    A naive but correct implementation of SameSign in C++ would be:

        bool SameSign(float a, float b)
        {
            if (fabs(a) == 0.0f || fabs(b) == 0.0f)
                return true;
            return (a >= 0.0f) == (b >= 0.0f);
        }

    Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008):

        bool SameSign(float a, float b)
        {
            int ia = binary_cast<int>(a);
            int ib = binary_cast<int>(b);
            int az = (ia & 0x7FFFFFFF) == 0;
            int bz = (ib & 0x7FFFFFFF) == 0;
            int ab = (ia ^ ib) >= 0;
            return (az | bz | ab) != 0;
        }

    with binary_cast defined as follows:

        template <typename Target, typename Source>
        inline Target binary_cast(Source s)
        {
            union {
                Source m_source;
                Target m_target;
            } u;
            u.m_source = s;
            return u.m_target;
        }

    I'm looking for two things:

    1. A faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics.
    2. An efficient extension of SameSign to three values.
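
    For the three-value extension asked for in point 2, a baseline sketch (my addition, aiming at correctness rather than speed): under the zero-matches-anything convention, three values share a sign exactly when every pair does, so the two-value predicate can simply be applied pairwise:

        // Hedged sketch: correct but not necessarily optimal three-value extension.
        // A zero acts as a wildcard, so only disagreeing non-zero pairs return false.
        bool SameSign(float a, float b, float c)
        {
            return SameSign(a, b) && SameSign(a, c) && SameSign(b, c);
        }

    Using the branchless two-value variant for the pairs keeps this branch-light; whether a hand-fused version of the bit tests beats three pairwise calls is something to measure rather than assume.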

  • Jboss 6 Cluster Singleton Clustered

    - by DanC
    I am trying to set up JBoss 6 in a clustered environment, and use it to host clustered stateful singleton EJBs. So far we have successfully installed a singleton EJB within the cluster, where different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of static variables). We achieved this using the following configuration.

    Bean interface:

        @Remote
        public interface IUniverse {
            ...
        }

    Bean implementation:

        @Clustered
        @Stateful
        public class Universe implements IUniverse {
            private static Vector<String> messages = new Vector<String>();
            ...
        }

    jboss-beans.xml configuration:

        <deployment xmlns="urn:jboss:bean-deployer:2.0">
            <!-- This bean is an example of a clustered singleton -->
            <bean name="Universe" class="Universe">
            </bean>

            <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController">
                <property name="HAPartition"><inject bean="HAPartition"/></property>
                <property name="target"><inject bean="Universe"/></property>
                <property name="targetStartMethod">startSingleton</property>
                <property name="targetStopMethod">stopSingleton</property>
            </bean>
        </deployment>

    The main problem with this implementation is that, after the master node (the one that contains the state of the singleton EJB) shuts down gracefully, the singleton's state is lost and reset to the default. Please note that everything was constructed following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem or where to find JBoss 6 documentation on clustering is appreciated.

  • Loading Views dynamically

    - by Dann
    Case 1: I created a View-based sample application and tried to execute the code below. When I press the "Job List" button it should load another view with "Back Btn" on it. In the test function, if I use [self.navigationController pushViewController:jbc animated:YES]; nothing gets loaded, but if I use [self presentModalViewController:jbc animated:YES]; it loads the other view with "Back Btn" on it.

    Case 2: I created another Navigation-based application and used [self.navigationController pushViewController:jbc animated:YES]; and it worked as I wanted.

    Can someone please explain why it was not working in Case 1? Does it have something to do with the type of project that is selected?

        @interface MWViewController : UIViewController {
        }
        - (void)test;
        @end

        @interface JobViewCtrl : UIViewController {
        }
        @end

        @implementation MWViewController

        - (void)viewDidLoad {
            UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            btn.frame = CGRectMake(80, 170, 150, 35);
            [btn setTitle:@"Job List!" forState:UIControlStateNormal];
            [btn addTarget:self action:@selector(test)
                forControlEvents:UIControlEventTouchUpInside];
            [self.view addSubview:btn];
            [super viewDidLoad];
        }

        - (void)test {
            JobViewCtrl* jbc = [[JobViewCtrl alloc] init];
            [self.navigationController pushViewController:jbc animated:YES];
            //[self presentModalViewController:jbc animated:YES];
            [jbc release];
        }

        - (void)dealloc {
            [super dealloc];
        }
        @end

        @implementation JobViewCtrl

        - (void)loadView {
            self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
            self.view.backgroundColor = [UIColor grayColor];

            UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            btn.frame = CGRectMake(80, 170, 150, 35);
            [btn setTitle:@"Back Btn!" forState:UIControlStateNormal];
            [self.view addSubview:btn];
        }
        @end

  • Is there a pattern for initializing objects created with a DI container

    - by Igor Zevaka
    I am trying to get Unity to manage the creation of my objects, and I want to have some initialization parameters that are not known until run time. At the moment the only way I can think of to do it is to have an Initialize method on the interface:

        interface IMyIntf {
            void Initialize(string runTimeParam);
            string RunTimeParam { get; }
        }

    Then to use it (in Unity) I would do this:

        var IMyIntf = unityContainer.Resolve<IMyIntf>();
        IMyIntf.Initialize("somevalue");

    In this scenario the runTimeParam parameter is determined at run time based on user input. The trivial case here simply returns the value of runTimeParam, but in reality the parameter will be something like a file name, and the Initialize method will do something with the file. This creates a number of issues, namely that the Initialize method is available on the interface and can be called multiple times; setting a flag in the implementation and throwing an exception on a repeated call to Initialize seems clunky. At the point where I resolve my interface I don't want to know anything about the implementation of IMyIntf. What I do want, though, is the knowledge that this interface needs certain one-time initialization parameters. Is there a way to somehow annotate (attributes?) the interface with this information and pass those parameters to the framework when the object is created? Edit: described the interface a bit more.

  • CCNet exception during build of vs2010 project

    - by sonee
    We have two build machines. Lately we've migrated our projects from VS2005 to VS2010. The problem is that one of the machines produces an error during the build; the other machine works well, and only this one machine shows the error. The differences between the machines are the OS and the hardware spec: the machine that works is running Windows Server 2003 and the other Windows 7. The error message is:

        Unhandled exception: System.NullReferenceException:
            Microsoft.VisualStudio.Shell.ThreadHelper.InvokeOnUIThread(InvokableBase invokable)
            Microsoft.VisualStudio.Shell.ThreadHelper.Invoke(Action action)
            Microsoft.VisualStudio.Project.VS.Implementation.VSShellServices.InvokeOnUIThread(Action method)
            Microsoft.VisualStudio.Project.VisualC.VCProjectEngine.ApartmentMarshaler.Invoke(Action method)
            Microsoft.VisualStudio.Project.VisualC.VCProjectEngine.VCConfigBuildJob.BuildCompleted(BuildSubmission ar)
            Microsoft.VisualStudio.Project.Contracts.Implementation.BuildProjectBase.BuildCompletedCallbackManager.BuildCompleted(BuildSubmission buildSubmission)
            Microsoft.Build.Execution.BuildSubmission.<CheckForCompletion>b__0(Object state)
            System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
            System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
            System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
            System.Threading.ThreadPoolWorkQueue.Dispatch()
            System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    Curiously enough, when I build the project from the command line on the machine that produces the error, it works well. The machine only shows the error when the build is launched by CCNet. I've installed the latest version of CCNet on all machines. Has anybody faced a problem like this?

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If there is indeed copying going on, are there memory-usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it on the heap instead. It seems that in most memory-managed languages everything is a reference, and thus Objective-C is a somewhat unique situation in having to deal with copying what's on the stack.
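
    As a small, language-neutral illustration of the by-value "snapshot" described above (my own C++ example, not a statement about Objective-C blocks or C# closures): a capture-by-value lambda copies the variable when the closure is created, while a capture-by-reference lambda keeps referring to the live variable:

        #include <iostream>

        int main()
        {
            int x = 1;

            auto by_value     = [x]  { return x; };   // copies x now: a snapshot at creation time
            auto by_reference = [&x] { return x; };   // refers to the live variable

            x = 42;

            std::cout << by_value()     << "\n";  // prints 1: the value captured at creation
            std::cout << by_reference() << "\n";  // prints 42: sees the later modification
        }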

  • Google app engine: Poor Performance with JDO + Datastore

    - by Bosh
    I have a simple data model that includes:

        USERS: store basic information (key, name, phone #, etc.)
        RELATIONS: describe, e.g., a friendship between two users (supplying a relationship_type + two user keys)

    I'm getting very poor performance if, for instance, I try to print the first names of all of a user's friends. Say the user has 500 friends: I can fetch the list of friend user_ids very easily in a single query. But then, to pull out the first names, I have to do 500 back-and-forth trips to the Datastore, each of which seems to take on the order of 30 ms. If this were SQL, I'd just do a JOIN and get the answer out fast. I understand there are rudimentary facilities for performing joins across unowned relations in a relaxed implementation of JDO (as described at http://gae-java-persistence.blogspot.com), but they sound experimental and non-standard (e.g. my code won't work in any other JDO implementation). Is this really my best bet? Otherwise, how do people extract satisfactory performance from JDO/Datastore in this kind of (very common) situation? -Bosh

  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple and define up to 7 items in it, and then another tuple as its 8th item, and on and on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code for the compiler:

        var tuple = new Tuple<int, int, int, int, int, int, int, double>
            (1, 1, 1, 1, 1, 1, 1, 1d);

    Even though the IntelliSense documentation says that TRest must be a Tuple, you do not get any error when writing or building the code; the problem does not manifest until runtime, in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item. I just wonder why it was left out of the current implementation. Is it possibly a forward-compatibility issue where they could add more elements with a hypothetical C# 5? Short version of a rough implementation:

        interface IMyTuple { }

        class MyTuple<T1> : IMyTuple {
            public T1 Item1 { get; private set; }
            public MyTuple(T1 item1) { Item1 = item1; }
        }

        class MyTuple<T1, T2> : MyTuple<T1> {
            public T2 Item2 { get; private set; }
            public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; }
        }

        class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple {
            public TRest Rest { get; private set; }
            public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; }
        }

        ...

        var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal
        var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal

  • Indices instead of pointers in STL containers?

    - by zvrba
    Due to specific requirements [*], I need a singly-linked list implementation that uses integer indices instead of pointers to link nodes. The indices are always interpreted with respect to a vector containing the list nodes. I thought I might achieve this by defining my own allocator, but looking into gcc's implementation of <list>, it explicitly uses pointers for the link fields in the list nodes (i.e., it does not use the pointer type provided by the allocator):

        struct _List_node_base
        {
            _List_node_base* _M_next;  ///< Self-explanatory
            _List_node_base* _M_prev;  ///< Self-explanatory
            ...
        }

    (For this purpose, the allocator interface is also deficient in that it does not define a dereference function; "dereferencing" an integer index always needs a pointer to the underlying storage.) Do you know of a library of STL-like data structures (I mostly need singly- and doubly-linked lists) that uses indices (with respect to a base vector) instead of pointers to link nodes?

    [*] Saving space: the lists will contain many 32-bit integers. With two pointers per node (the STL list is doubly-linked), the overhead is 200%, or 400% on a 64-bit platform, not counting the overhead of the default allocator.
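
    To make the index-linking idea concrete, here is a minimal sketch (my own illustration, not a pointer to an existing library; the names IndexList and npos are invented for the example) of a singly-linked list whose nodes live in a std::vector and whose links are 32-bit indices, so the per-node link cost stays at 4 bytes regardless of pointer width:

        #include <cstdint>
        #include <vector>

        // Hedged sketch: nodes live in a vector; links are indices into that vector.
        struct IndexList {
            static const std::uint32_t npos = 0xFFFFFFFFu;   // "null" link

            struct Node {
                std::uint32_t value;
                std::uint32_t next;   // index of the next node, or npos
            };

            std::vector<Node> nodes;  // backing storage for all nodes
            std::uint32_t head;

            IndexList() : head(npos) {}

            // O(1) push at the front; the new node's index becomes the head.
            void push_front(std::uint32_t v) {
                Node n;
                n.value = v;
                n.next = head;
                nodes.push_back(n);
                head = static_cast<std::uint32_t>(nodes.size() - 1);
            }

            // Traversal always dereferences through the backing vector.
            std::uint32_t sum() const {
                std::uint32_t s = 0;
                for (std::uint32_t i = head; i != npos; i = nodes[i].next)
                    s += nodes[i].value;
                return s;
            }
        };

    The design point matches the footnote: every "dereference" goes through the base vector, which is exactly what the standard allocator interface has no hook for.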

  • stealing inside the move constructor

    - by FredOverflow
    During the implementation of the move constructor of a toy class, I noticed a pattern:

        array2D(array2D&& that)
        {
            data_ = that.data_;
            that.data_ = 0;

            height_ = that.height_;
            that.height_ = 0;

            width_ = that.width_;
            that.width_ = 0;

            size_ = that.size_;
            that.size_ = 0;
        }

    The pattern obviously being:

        member = that.member;
        that.member = 0;

    So I wrote a preprocessor macro to make stealing less verbose and error-prone:

        #define STEAL(member) member = that.member; that.member = 0;

    Now the implementation looks as follows:

        array2D(array2D&& that)
        {
            STEAL(data_);
            STEAL(height_);
            STEAL(width_);
            STEAL(size_);
        }

    Are there any downsides to this? Is there a cleaner solution that does not require the preprocessor?
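
    One macro-free sketch (my addition; std::exchange is C++14, so it postdates the original post): std::exchange stores the new value and returns the old one, which expresses exactly this steal-and-reset move directly in the initializer list. The class body below is invented scaffolding so the example compiles on its own:

        #include <cstddef>
        #include <utility>

        // Hedged sketch: the same "steal and zero" move, written with std::exchange.
        // Field names mirror the post; other members and constructors are omitted.
        class array2D {
            int*        data_;
            std::size_t height_, width_, size_;

        public:
            array2D(array2D&& that) noexcept
                : data_(std::exchange(that.data_, nullptr)),
                  height_(std::exchange(that.height_, 0)),
                  width_(std::exchange(that.width_, 0)),
                  size_(std::exchange(that.size_, 0))
            {}
        };

    A delegating default constructor plus a member swap is another common way to get the same effect without repeating the member list in the move constructor.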
