Search Results

Search found 8397 results on 336 pages for 'implementation'.

Page 267 of 336

  • How can I mock or test my deferred execution functionality?

    - by cottsak
    I have what could be seen as a bizarre hybrid of IQueryable<T> and IList<T> collections of domain objects passed up my application stack. I'm trying to maintain as much of the 'late querying' or 'lazy loading' as possible. I do this in two ways: by using a LinqToSql data layer and passing IQueryable<T>s through my repositories and on to my app layer, and then, after my app layer, passing IList<T>s where certain elements in the object/aggregate graph are 'chained' with delegates so as to defer their loading. Sometimes even the delegate contents rely on IQueryable<T> sources and the DataContext is injected. This works for me so far. What is blindingly difficult is proving that this design actually works, i.e. if I defeat the 'lazy' part somewhere and my execution happens early, then the whole thing is a waste of time. I'd like to be able to TDD this somehow. I don't know a lot about delegates or thread safety as it applies to delegates acting on the same source. I'd like to be able to mock the DataContext and somehow trace both deferral mechanisms (IQueryable<T>'s SQL and the delegates) so that I can have tests that prove that both are working at the different levels/layers of the app/stack. As it's crucial that the deferring works for the design to be of any value, I'd like to see tests fail when I break the design at a given level (separate from the live implementation). Is this possible?

    Read the article
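    For the deferred-execution question above, one way to prove laziness in a test without a database is to wrap the source sequence in a small spy that records whether anyone has enumerated it yet. The sketch below is illustrative only: TrackingEnumerable and the Where call are stand-ins (assumptions, not part of the poster's code) for the repository/app-layer chain under test, and Debug.Assert stands in for whatever test-framework assertion is available.

      using System;
      using System.Collections;
      using System.Collections.Generic;
      using System.Diagnostics;
      using System.Linq;

      // Hypothetical spy: records whether the underlying source has been enumerated.
      class TrackingEnumerable<T> : IEnumerable<T>
      {
          private readonly IEnumerable<T> _inner;
          public bool WasEnumerated { get; private set; }

          public TrackingEnumerable(IEnumerable<T> inner) { _inner = inner; }

          public IEnumerator<T> GetEnumerator()
          {
              WasEnumerated = true;               // flips only when execution really happens
              return _inner.GetEnumerator();
          }

          IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
      }

      static class DeferredExecutionTest
      {
          static void Main()
          {
              var source = new TrackingEnumerable<int>(new[] { 1, 2, 3 });
              var query = source.Where(x => x > 1);   // stand-in for the layer under test
              Debug.Assert(!source.WasEnumerated);    // still deferred at this point
              var list = query.ToList();              // materialize
              Debug.Assert(source.WasEnumerated);     // now execution has happened
              Console.WriteLine(list.Count);
          }
      }

    The same idea extends to the delegate-based deferral: hand the delegate a tracked source, assert the flag is still false after the object graph has been built, and only then force the load.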

  • How does one get started with WinForms-style applications on Win32?

    - by Billy ONeal
    EDIT: I'm extremely tired and frustrated at the moment -- please ignore that bit in this question -- I'll edit it in the morning to be better. Okay -- a bit of background: I'm a C++ programmer mostly, but the only GUI stuff I've ever done was on top of .NET's WinForms platform. I'm completely new to Windows GUI programming, and despite Petzold's excellent book, I'm extremely confused. Namely, it seems that most every reference on getting started with Win32 is all about drawing lines and curves and things -- a topic about which (at least at present time) I couldn't care less. I need a checked list box, a splitter, and a textbox -- something that would take less than 10 minutes to do in Winforms land. It has been recommended to me to use the WTL library, which provides an implementation of all three of these controls -- but I keep getting hung up on simple things, such as getting the damn controls to use the right font, and getting High DPI working correctly. I've spent two freaking days on this, and I can't help but think there has to be a better reference for these kinds of things than I've been able to find. Petzold's book is good, but it hasn't been updated since Windows 95 days, and there's been a LOT changed w.r.t. how applications should be correctly developed since it was published. I guess what I'm looking for is a modern Petzold book. Where can I find such a resource, if any?

    Read the article

  • Is it OK to write code after [super dealloc]? (Objective-C)

    - by Richard J. Ross III
    I have a situation in my code where I cannot clean up my class's objects without first calling [super dealloc]. It is something like this: // Baseclass.m @implementation Baseclass ... -(void) dealloc { [self _removeAllData]; [aVariableThatBelongsToMe release]; [anotherVariableThatBelongsToMe release]; [super dealloc]; } ... @end This works great. My problem is, when I went to subclass this huge and nasty class (over 2000 lines of gross code), I ran into a problem: when I released my objects before calling [super dealloc] I had zombies running through the code that were activated when I called the [self _removeAllData] method. // Subclass.m @implementation Subclass ... -(void) dealloc { [super dealloc]; [someObjectUsedInTheRemoveAllDataMethod release]; } ... @end This works great, and it didn't require me to refactor any code. My question is this: Is it safe for me to do this, or should I refactor my code? Or maybe autorelease the objects? I am programming for iPhone if that matters.

    Read the article
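    One common refactoring for the dealloc question above, sketched under manual reference counting: release and nil the subclass ivars before calling [super dealloc], so that anything the base class touches during its own teardown becomes a harmless message to nil rather than a zombie access. This is only a sketch of the idea; the ivar name is taken from the question and the base class is assumed to merely message that object.

      // Subclass.m -- a sketch, not the poster's production class
      @implementation Subclass

      - (void)dealloc {
          // Release and nil our own state first; if the base class's
          // -_removeAllData runs later (from [super dealloc]) and messages
          // this ivar, messaging nil is a safe no-op.
          [someObjectUsedInTheRemoveAllDataMethod release];
          someObjectUsedInTheRemoveAllDataMethod = nil;

          [super dealloc];   // always last: code after this line runs on a
                             // partially destroyed object and is not safe
      }

      @end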

  • JavaScript: why is `null == 0` false?

    - by lemonedo
    I had to write a routine that increments the value of a variable by 1 if it is a number, or assigns 0 to the variable if it is not a number. The variable can be incremented by the expression, or be assigned null. No other write access to the variable is allowed. So, the variable can be in three states: it is 0, a positive integer, or null. My first implementation was: v >= 0 ? v += 1 : v = 0 (Yes, I admit that v === null ? v = 0 : v += 1 is the exact solution, but I wanted to be concise then.) It failed since null >= 0 is true. I was confused, since if a value is not a number, a numeric expression involving it must always be false. Then I found that null is like 0, since null + 1 == 1, 1 / null == Infinity, Math.pow(2.718281828, null) == 1, ... Strangely enough, however, null == 0 evaluates to false. I guess null is the only value that makes the following expression false: (v == 0) === (v >= 0 && v <= 0) So why is null so special in JavaScript?

    Read the article
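    For the null == 0 question above, the short version is that the relational operators (>=, <=) convert null with ToNumber, while the abstract equality operator (==) follows its own table, in which null is loosely equal only to undefined and never to a number. A few lines to paste into a console:

      // Relational comparison converts null to a number, and Number(null) is 0:
      console.log(null >= 0);          // true  (compared as 0 >= 0)
      console.log(null > 0);           // false (compared as 0 > 0)
      console.log(Number(null));       // 0

      // Abstract equality never coerces null to a number:
      console.log(null == 0);          // false
      console.log(null == undefined);  // true
      console.log(null === null);      // true

      // The strict check the poster mentions avoids the coercion entirely:
      var v = null;
      v = (v === null) ? 0 : v + 1;
      console.log(v);                  // 0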

  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question... there seem to be other similar questions on here, but none really clear up my confusion. I'd be really grateful if someone could confirm or correct my understanding: Java Enterprise Edition is a set of APIs for building enterprise applications, which take away the burden of developing parts of the system that aren't actually features of the application you are trying to build (e.g. messaging, transactions, etc.). To do this, you can use an application server, which implements these APIs. So you could use JBoss, GlassFish, WebSphere, WebLogic, etc., which would provide your application with these enterprise services. However, there are many other implementations of these individual services available, such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB, etc. You can download these implementations as Java libraries and include them in your application, and use the services they provide in a similar way to using the services provided by an application server. So if my understanding is correct, my questions are: I've read in a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all? Are there any features that an application server provides which you cannot get from another source? Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ, and Hibernate? Is Spring a complete alternative to JEE? Does it ever require an application server, or always just a servlet container? Why would someone choose Spring over JEE? Any help would be much appreciated!

    Read the article

  • Concise description of how .h and .m files interact in Objective-C?

    - by RJ86
    I have just started learning Objective-C and am really confused about how the .h and .m files interact with each other. This simple program has 3 files: Fraction.h #import <Foundation/NSObject.h> @interface Fraction : NSObject { int numerator; int denominator; } - (void) print; - (void) setNumerator: (int) n; - (void) setDenominator: (int) d; - (int) numerator; - (int) denominator; @end Fraction.m #import "Fraction.h" #import <stdio.h> @implementation Fraction -(void) print { printf( "%i/%i", numerator, denominator ); } -(void) setNumerator: (int) n { numerator = n; } -(void) setDenominator: (int) d { denominator = d; } -(int) denominator { return denominator; } -(int) numerator { return numerator; } @end Main.m #import <stdio.h> #import "Fraction.h" int main(int argc, char *argv[]) { Fraction *frac = [[Fraction alloc] init]; [frac setNumerator: 1]; [frac setDenominator: 3]; printf( "The fraction is: " ); [frac print]; printf( "\n" ); [frac release]; return 0; } From what I understand, the program initially starts running the main.m file. I understand the basic C concepts, but this whole "class" and "instance" stuff is really confusing. In the Fraction.h file the @interface is defining numerator and denominator as integers, but what else is it doing below that with the (void) declarations, and what is the purpose of re-declaring them below? I am also quite confused as to what is happening with the (void) and (int) portions of Fraction.m and how all of this is brought together in the main.m file. I guess what I am trying to say is that this seems like a fairly easy program for learning how the different portions work with each other - could anyone explain it without the technical jargon?

    Read the article

  • Questions on Juval Lowy's IDesign C# Coding Standard

    - by Jan
    We are trying to use the IDesign C# Coding standard. Unfortunately, I found no comprehensive document to explain all the rules that it gives, and also his book does not always help. Here are the open questions that remain for me (from chapter 2, Coding Practices): No. 26: Avoid providing explicit values for enums unless they are integer powers of 2 No. 34: Always explicitly initialize an array of reference types using a for loop No. 50: Avoid events as interface members No. 52: Expose interfaces on class hierarchies No. 73: Do not define method-specific constraints in interfaces No. 74: Do not define constraints in delegates Here's what I think about those: I thought that providing explicit values would be especially useful when adding new enum members at a later point in time. If these members are added between other already existing members, I would provide explicit values to make sure the integer representation of existing members does not change. No idea why I would want to do this. I'd say this totally depends on the logic of my program. I see that there is alternative option of providing "Sink interfaces" (simply providing already all "OnXxxHappened" methods), but what is the reason to prefer one over the other? Unsure what he means here: Could this mean "When implementing an interface explicitly in a non-sealed class, consider providing the implementation in a protected virtual method that can be overridden"? (see Programming .NET Components 2nd Edition, end of chapter “Interfaces and Class Hierarchies”). I suppose this is about providing a "where" clause when using generics, but why is this bad on an interface? I suppose this is about providing a "where" clause when using generics, but why is this bad on a delegate?

    Read the article

  • Putting all methods in class definition

    - by Amnon
    When I use the pimpl idiom, is it a good idea to put all the method definitions inside the class definition? For example: // in A.h class A { class impl; boost::scoped_ptr<impl> pimpl; public: A(); int foo(); }; // in A.cpp class A::impl { // method defined in class int foo() { return 42; } // as opposed to only declaring the method, and defining elsewhere: float bar(); }; A::A() : pimpl(new impl) { } int A::foo() { return pimpl->foo(); } As far as I know, the only problems with putting a method definition inside a class definition are that (1) the implementation is visible in files that include the class definition, and (2) the compiler may make the method inline. These are not problems in this case since the class is defined in a private file, and inlining has no effect since the methods are called in only one place. The advantage of putting the definition inside the class is that you don't have to repeat the method signature. So, is this OK? Are there any other issues to be aware of?

    Read the article

  • Not all symbols of a DLL-exported class are exported (VS9)

    - by mandrake
    I'm building a DLL from a group of static libraries and I'm having a problem where only parts of classes are exported. What I'm doing is declaring all symbols I want to export with a preprocessor definition like: #if defined(MYPROJ_BUILD_DLL) //Build as a DLL # define MY_API __declspec(dllexport) #elif defined(MYPROJ_USE_DLL) //Use as a DLL # define MY_API __declspec(dllimport) #else //Build or use as a static lib # define MY_API #endif For example: class MY_API Foo{ ... } I then build the static library with MYPROJ_BUILD_DLL & MYPROJ_USE_DLL undefined, causing a static library to be built. In another build I create a DLL from these static libraries. So I define MYPROJ_BUILD_DLL, causing all symbols I want to export to be attributed with __declspec(dllexport) (this is done by including all static library headers in the DLL-project source file). Ok, so now to the problem. When I use this new DLL I get unresolved externals because not all symbols of a class are exported. For example, in a class like this: class MY_API Foo{ public: Foo(char const* ); int bar(); private: Foo( char const*, char const* ); }; only Foo::Foo( char const*, char const*); and int Foo::bar(); are exported. How can that be? I could understand if the entire class were missing, e.g. because I forgot to include the header in the DLL build. But it's only partially missing. Also, say Foo::Foo( char const*) were not implemented; then the DLL build would have unresolved external errors. But the build is fine (I also double-checked for declarations without implementation). Note: The combined size of the static libraries I'm combining is in the region of 30MB, and the resulting DLL is 1.2MB. I'm using Visual Studio 9.0 (2008) to build everything, and Depends to check for exported symbols.

    Read the article

  • Request for advice about class design, inheritance/aggregation

    - by Lasse V. Karlsen
    I have started writing my own WebDAV server class in .NET, and the first class I'm starting with is a WebDAVListener class, modelled after how the HttpListener class works. Since I don't want to reimplement the core HTTP protocol handling, I will use HttpListener for all it's worth, and thus I have a question. What would the suggested way be to handle this: implement all the methods and properties found inside HttpListener, just changing the class types where it matters (i.e. the GetContext + EndGetContext methods would return a different class for WebDAV contexts), storing and using a HttpListener object internally; construct WebDAVListener by passing it a HttpListener class to use; or create a wrapper for HttpListener with an interface, and construct WebDAVListener by passing it an object implementing this interface? If going the route of passing a HttpListener (disguised or otherwise) to the WebDAVListener, would you expose the underlying listener object through a property, or would you expect the program that used the class to keep a reference to the underlying HttpListener? Also, in this case, would you expose some of the methods of HttpListener through the WebDAVListener, like Start and Stop, or would you again expect the program that used it to keep the HttpListener reference around for all those things? My initial reaction tells me that I want a combination. For one thing, I would like my WebDAVListener class to look like a complete implementation, hiding the fact that there is a HttpListener object beneath it. On the other hand, I would like to build unit-tests without actually spinning up a networked server, so some kind of mocking ability would be nice to have as well, which suggests I would like the interface-wrapper way. One way I could solve this would be this: public WebDAVListener() : this(new HttpListenerWrapper()) { } public WebDAVListener(IHttpListenerWrapper listener) { } And then I would implement all the methods of HttpListener (at least all those that make sense) in my own class, by mostly just chaining the call to the underlying HttpListener object. What do you think? Final question: If I go the way of the interface, assuming the interface maps 1-to-1 onto the HttpListener class, and written just to add support for mocking, is such an interface called a wrapper or an adapter?

    Read the article

  • On XP, how do I get the tooltip to appear above a translucent form?

    - by Daniel Stutzbach
    I have a form with an Opacity less than 1.0. I have a tooltip associated with a label on the form. When I hover the mouse over the label, the tooltip shows up under the form instead of over the form. If I leave the Opacity at its default value of 1.0, the tooltip correctly appears over the form. However, my form is obviously no longer translucent. ;-) I have tried manually adjusting the position of the ToolTip with SetWindowPos() and creating a ToolTip "by hand" using CreateWindowEx(), but the problem remains. This makes me suspect it's a Win32 API problem, not a problem with the Windows Forms implementation that runs on top of Win32. Why does the tooltip appear under the form, and, more importantly, how can I get it to appear over the form where it should? Edit: this appears to be an XP-only problem. Vista and Windows 7 work correctly. I'd still like to find a workaround to get the tooltip to appear above the form on XP. Here is a minimal program to demonstrate the problem: using System; using System.Windows.Forms; public class Form1 : Form { private ToolTip toolTip1; private Label label1; [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } public Form1() { toolTip1 = new ToolTip(); label1 = new Label(); label1.Location = new System.Drawing.Point(105, 127); label1.Text = "Hover over me"; label1.AutoSize = true; toolTip1.SetToolTip(label1, "This is a moderately long string, " + "designed to be very long so that it will also be quite long."); ClientSize = new System.Drawing.Size(292, 268); Controls.Add(label1); Opacity = 0.8; } }

    Read the article

  • Prolog singleton variables in Python

    - by Rubens
    I'm working on a little set of scripts in Python, and I came to this: line = "a b c d e f g" a, b, c, d, e, f, g = line.split() I'm quite aware of the fact that these are decisions taken during implementation, but shouldn't (or does) Python offer something like: _, _, var_needed, _, _, another_var_needed, _ = line.split() just as Prolog does, in order to exclude the famous singleton variables? I'm not sure, but wouldn't it avoid unnecessary allocation? Or does creating references to the result of the split call not count as overhead? EDIT: Sorry, my point here is: in Prolog, as far as I'm concerned, in an expression like: test(L, N) :- test(L, 0, N). test([], N, N). test([_|T], M, N) :- V is M + 1, test(T, V, N). the variable represented by _ is not accessible, so I suppose the reference to the value that does exist in the list [_|T] is not even created. But, in Python, if I use _, I can use the last value assigned to _, and also, I suppose the assignment occurs for each of the variables _ -- which may be considered an overhead. My question here is whether there should be (or whether there is) a syntax to avoid such unnecessary assignments.

    Read the article
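    For the question above: Python has no true "don't bind" placeholder; the underscore is only a naming convention and the assignment still happens. A small sketch of two idiomatic ways to express the intent, using only the standard library:

      import operator

      line = "a b c d e f g"

      # Convention only: _ is a real name, and the values are still assigned to it.
      _, _, var_needed, _, _, another_var_needed, _ = line.split()

      # Pull out just the fields you care about and skip binding the rest:
      var_needed, another_var_needed = operator.itemgetter(2, 5)(line.split())

      print(var_needed, another_var_needed)   # c f

    Either way the list returned by split() is built in full, so the saving is at most a few name bindings, not the allocation of the values themselves.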

  • Cocos2d and MPMoviePlayerViewController - NSNotificationCenter not working

    - by digi_0315
    I'm using cocos2d with the MPMoviePlayerViewController class, but when I tried to catch the notification when the movie finishes I got this error: Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSCFString movieFinishedCallback]: unrecognized selector sent to instance 0x5d23730' My PlayVideoViewController.m is: @implementation PlayVideoViewController +(id) scene{ CCScene *scene = [CCScene node]; CCLayer *layer = [credits node]; [scene addChild: layer]; return scene; } -(id)initWithPath:(NSString *)moviePath{ if ((self = [super init])){ movieURL = [NSURL fileURLWithPath:moviePath]; [movieURL retain]; playerViewController = [[MPMoviePlayerViewController alloc] initWithContentURL:movieURL]; player = [playerViewController moviePlayer]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(movieFinishedCallback) name:MPMoviePlayerPlaybackDidFinishNotification object:player]; [[[CCDirector sharedDirector] openGLView] addSubview:playerViewController.view]; [player play]; } return self; } -(void)movieFinishedCallback{ CCLOG(@"video finished!!"); } in .h: #import <UIKit/UIKit.h> #import "cocos2d.h" #import <MediaPlayer/MediaPlayer.h> @interface PlayVideoViewController : CCLayer { NSURL *movieURL; MPMoviePlayerViewController *playerViewController; MPMoviePlayerController *player; } +(id) scene; @end and I call it in appDelegate.m: - (void) applicationDidFinishLaunching:(UIApplication*)application { CC_DIRECTOR_INIT(); CCDirector *director = [CCDirector sharedDirector]; [director setDeviceOrientation:kCCDeviceOrientationLandscapeLeft]; EAGLView *glView = [director openGLView]; [glView setMultipleTouchEnabled:YES]; [CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];//kEAGLColorFormatRGBA8 NSString *path = [[NSBundle mainBundle] pathForResource:@"intro" ofType:@"mov" inDirectory:nil]; viewController = [[[PlayVideoViewController alloc] initWithPath:path] autorelease]; } What am I doing wrong? Can anyone help me, please? I've been trying to solve this for hours but can't!

    Read the article
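    Regarding the crash above: an "unrecognized selector sent to instance" error naming an unrelated class (here NSCFString) is usually a sign that the observing object had already been deallocated and its memory reused by the time the notification fired; in the app delegate the controller is created with autorelease and does not appear to be retained anywhere. A hedged sketch of the usual fix, assuming viewController is a retained instance variable of the app delegate and that the observation is removed when the controller goes away:

      // appDelegate.m -- keep the observing controller alive (no autorelease here)
      viewController = [[PlayVideoViewController alloc] initWithPath:path];

      // PlayVideoViewController.m -- balance the observation and ownership
      - (void)dealloc {
          [[NSNotificationCenter defaultCenter] removeObserver:self];
          [playerViewController.view removeFromSuperview];
          [playerViewController release];
          [movieURL release];
          [super dealloc];
      }

      // Conventional handler form, registered as @selector(movieFinishedCallback:)
      - (void)movieFinishedCallback:(NSNotification *)notification {
          CCLOG(@"video finished!!");
      }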

  • Implementing a read/write field for an interface that only defines read

    - by PaulH
    I have a C# 2.0 application where a base interface allows read-only access to a value in a concrete class. But, within the concrete class, I'd like to have read/write access to that value. So, I have an implementation like this: public abstract class Base { public abstract DateTime StartTime { get; } } public class Foo : Base { DateTime start_time_; public override DateTime StartTime { get { return start_time_; } internal set { start_time_ = value; } } } But, this gives me the error: Foo.cs(200,22): error CS0546: 'Foo.StartTime.set': cannot override because 'Base.StartTime' does not have an overridable set accessor I don't want the base class to have write access. But, I do want the concrete class to provide read/write access. Is there a way to make this work? Thanks, PaulH Unfortunately, Base can't be changed to an interface as it contains non-abstract functionality also. Something I should have thought to put in the original problem description. public abstract class Base { public abstract DateTime StartTime { get; } public void Buzz() { // do something interesting... } } My solution is to do this: public class Foo : Base { DateTime start_time_; public override DateTime StartTime { get { return start_time_; } } internal void SetStartTime(DateTime value) { start_time_ = value; } } It's not as nice as I'd like, but it works.

    Read the article

  • Java: Tracking a user login session - Session EJBs vs HTTPSession

    - by bguiz
    If I want to keep track of a conversational state with each client using my web application, which is the better alternative to use - a Session Bean or an HTTP Session? Using HTTP Session: //request is a variable of the class javax.servlet.http.HttpServletRequest //UserState is a POJO HttpSession session = request.getSession(true); UserState state = (UserState)(session.getAttribute("UserState")); if (state == null) { //create default value .. } String uid = state.getUID(); //now do things with the user id Using Session EJB: In the implementation of ServletContextListener registered as a Web Application Listener in WEB-INF/web.xml: //UserState NOT a POJO this time, it is //the interface of the UserStateBean Stateful Session EJB @EJB private UserState userStateBean; public void contextInitialized(ServletContextEvent sce) { ServletContext servletContext = sce.getServletContext(); servletContext.setAttribute("UserState", userStateBean); ... In a JSP: public void jspInit() { UserState state = (UserState)(getServletContext().getAttribute("UserState")); ... } Elsewhere in the body of the same JSP: String uid = state.getUID(); //now do things with the user id It seems to me that they are almost the same, with the main difference being that the UserState instance is being transported in the HttpRequest.HttpSession in the former, and in a ServletContext in the case of the latter. Which of the two methods is more robust, and why?

    Read the article

  • Deleting elements from an STL set while iterating through it does not invalidate the iterators.

    - by pedromanoel
    I need to go through a set and remove elements that meet a predefined criterion. This is the test code I wrote: #include <iostream> #include <set> #include <algorithm> void printElement(int value) { std::cout << value << " "; } int main() { int initNum[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; std::set<int> numbers(initNum, initNum + 10); // print '0 1 2 3 4 5 6 7 8 9' std::for_each(numbers.begin(), numbers.end(), printElement); std::set<int>::iterator it = numbers.begin(); // iterate through the set and erase all even numbers for (; it != numbers.end(); ++it) { int n = *it; if (n % 2 == 0) { // wouldn't invalidate the iterator? numbers.erase(it); } } // print '1 3 5 7 9' std::for_each(numbers.begin(), numbers.end(), printElement); return 0; } At first, I thought that erasing an element from the set while iterating through it would invalidate the iterator, and the increment at the for loop would have undefined behavior. Even so, I executed this test code, all went well, and I can't explain why. My question: Is this the defined behavior for std sets or is this implementation specific? I am using gcc 4.3.3 on Ubuntu 10.04 (32-bit version), by the way. Thanks!

    Read the article
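    For the loop in the question above: std::set::erase invalidates only iterators to the erased element, so the usual idiom is to advance the iterator past the element before erasing it rather than incrementing in the for statement. A drop-in sketch for the loop over numbers:

      // C++03 idiom: the post-increment hands erase() a copy of the old
      // iterator while 'it' has already moved on to the next element.
      for (std::set<int>::iterator it = numbers.begin(); it != numbers.end(); /* no ++it here */) {
          if (*it % 2 == 0)
              numbers.erase(it++);
          else
              ++it;
      }

      // C++11 alternative: set::erase returns the iterator after the erased element.
      // for (auto it = numbers.begin(); it != numbers.end(); ) {
      //     if (*it % 2 == 0) it = numbers.erase(it);
      //     else              ++it;
      // }

    The original loop only appears to work; incrementing an iterator to an already-erased element is still undefined behavior.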

  • `enable_shared_from_this` has a non-virtual destructor

    - by Shtééf
    I have a pet project with which I experiment with new features of the upcoming C++0x standard. While I have experience with C, I'm fairly new to C++. To train myself into best practices, (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1): -std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors This has worked fine for me. Until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (error, in my case) when compiling my code (probably triggered by -Weffc++): base class ‘class std::enable_shared_from_this<Package>’ has a non-virtual destructor So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because: A destructor of a class that is intended for subclassing should always be virtual, IMHO. The destructor is empty, why have it at all? I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this. But I'm looking for ways to deal with this, so my question is really, is there a proper way to deal with this? And: am I correct in thinking that this destructor is bogus, or is there a real purpose to it?

    Read the article

  • Fastest way to clamp a real (fixed/floating point) value?

    - by Niklas
    Hi, is there a more efficient way to clamp real numbers than using if statements or ternary operators? I want to do this both for doubles and for a 32-bit fixpoint implementation (16.16). I'm not asking for code that can handle both cases; they will be handled in separate functions. Obviously, I can do something like: double clampedA; double a = calculate(); clampedA = a > MY_MAX ? MY_MAX : a; clampedA = clampedA < MY_MIN ? MY_MIN : clampedA; or double a = calculate(); double clampedA = a; if(clampedA > MY_MAX) clampedA = MY_MAX; else if(clampedA < MY_MIN) clampedA = MY_MIN; The fixpoint version would use functions/macros for comparisons. This is done in a performance-critical part of the code, so I'm looking for as efficient a way to do it as possible (which I suspect would involve bit manipulation). EDIT: It has to be standard/portable C, platform-specific functionality is not of any interest here. Also, MY_MIN and MY_MAX are the same type as the value I want clamped (doubles in the examples above).

    Read the article
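    For the clamping question above, a sketch of two plain, portable options; nothing here is guaranteed to beat the if/else version, so it is worth measuring on the target compiler (fmin/fmax require C99, and int32_t needs <stdint.h>):

      #include <math.h>
      #include <stdint.h>

      /* double case: many compilers lower fmin/fmax to single instructions
         (e.g. minsd/maxsd on x86-64), avoiding a visible branch. */
      static inline double clamp_double(double v, double lo, double hi)
      {
          return fmax(lo, fmin(v, hi));
      }

      /* 16.16 fixed point: the ternaries typically compile to conditional
         moves rather than branches, which is usually as fast as any
         bit-twiddling trick and far easier to read. */
      typedef int32_t fixed16_16;

      static inline fixed16_16 clamp_fixed(fixed16_16 v, fixed16_16 lo, fixed16_16 hi)
      {
          v = v < lo ? lo : v;
          v = v > hi ? hi : v;
          return v;
      }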

  • Problem with "moveable-only types" in VC++ 2010

    - by Luc Touraille
    I recently installed Visual Studio 2010 Professional RC to try it out and test the few C++0x features that are implemented in VC++ 2010. I instantiated a std::vector of std::unique_ptr, without any problems. However, when I try to populate it by passing temporaries to push_back, the compiler complains that the copy constructor of unique_ptr is private. I tried inserting an lvalue by moving it, and it works just fine. #include <memory> #include <utility> #include <vector> int main() { typedef std::unique_ptr<int> int_ptr; int_ptr pi(new int(1)); std::vector<int_ptr> vec; vec.push_back(std::move(pi)); // OK vec.push_back(int_ptr(new int(2))); // compiler error } As it turns out, the problem is neither unique_ptr nor vector::push_back but the way VC++ resolves overloads when dealing with rvalues, as demonstrated by the following code: struct MoveOnly { MoveOnly() {} MoveOnly(MoveOnly && other) {} private: MoveOnly(const MoveOnly & other); }; void acceptRValue(MoveOnly && mo) {} int main() { acceptRValue(MoveOnly()); // Compiler error } The compiler complains that the copy constructor is not accessible. If I make it public, the program compiles (even though the copy constructor is not defined). Did I misunderstand something about rvalue references, or is it a (possibly known) bug in VC++ 2010's implementation of this feature?

    Read the article

  • Questions about the Backpropagation Algorithm

    - by Colemangrill
    I have a few questions concerning backpropagation. I'm trying to learn the fundamentals behind neural network theory and wanted to start small, building a simple XOR classifier. I've read a lot of articles and skimmed multiple textbooks - but I can't seem to teach this thing the pattern for XOR. Firstly, I am unclear about the learning model for backpropagation. Here is some pseudo-code to represent how I am trying to train the network. [Let's assume my network is set up properly (i.e. multiple inputs connect to a hidden layer connect to an output layer and all wired up properly)]. SET guess = getNetworkOutput() // Note this is using a sigmoid activation function. SET error = desiredOutput - guess SET delta = learningConstant * error * sigmoidDerivative(guess) For Each Node in inputNodes For Each Weight in inputNodes[n] inputNodes[n].weight[j] += delta; // At this point, I am assuming the first layer has been trained. // Then I recurse a similar function over the hidden layer and output layer. // The prime difference being that it further divvies up the adjustment delta. I realize this is probably not enough to go off of, and I will gladly expound on any part of my implementation. Using the above algorithm, my neural network does get trained, kind of. But not properly. The output is always XOR 1 1 [smallest number] XOR 0 0 [largest number] XOR 1 0 [medium number] XOR 0 1 [medium number] I can never train the [1,1] [0,0] to be the same value. If you have any suggestions, additional resources, articles, blogs, etc. for me to look at, I am very interested in learning more about this topic. Thank you for your assistance, I appreciate it greatly!

    Read the article

  • Java/Hibernate using interfaces over the entities.

    - by Dennetik
    I am using annotated Hibernate, and I'm wondering whether the following is possible. I have to set up a series of interfaces representing the objects that can be persisted, and an interface for the main database class containing several operations for persisting these objects (... an API for the database). Below that, I have to implement these interfaces, and persist them with Hibernate. So I'll have, for example: public interface Data { public String getSomeString(); public void setSomeString(String someString); } @Entity public class HbnData implements Data, Serializable { @Column(name = "some_string") private String someString; public String getSomeString() { return this.someString; } public void setSomeString(String someString) { this.someString = someString; } } Now, this works fine, sort of. The trouble comes when I want nested entities. The interface of what I'd want is easy enough: public interface HasData { public Data getSomeData(); public void setSomeData(Data someData); } But when I implement the class, I can follow the interface, as below, and get an error from Hibernate saying it doesn't know the class "Data". @Entity public class HbnHasData implements HasData, Serializable { @OneToOne(cascade = CascadeType.ALL) private Data someData; public Data getSomeData() { return this.someData; } public void setSomeData(Data someData) { this.someData = someData; } } The simple change would be to change the type from "Data" to "HbnData", but that would obviously break the interface implementation, and thus make the abstraction impossible. Can anyone explain to me how to implement this in a way that will work with Hibernate?

    Read the article
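    One commonly used answer to the question above is to keep the interface type in the property and tell Hibernate which concrete class to map through the targetEntity attribute of the association. A sketch based on the classes in the question (the @Id that a real entity needs is omitted here, as it is in the question):

      import javax.persistence.CascadeType;
      import javax.persistence.Entity;
      import javax.persistence.OneToOne;

      @Entity
      public class HbnHasData implements HasData, java.io.Serializable {

          // targetEntity names the concrete mapped class; the field and the
          // accessors keep using the Data interface.
          @OneToOne(targetEntity = HbnData.class, cascade = CascadeType.ALL)
          private Data someData;

          public Data getSomeData() { return this.someData; }

          public void setSomeData(Data someData) { this.someData = someData; }
      }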

  • Is there a way to load an existing connection string for Linq to SQL from an app.config file?

    - by Brian Surowiec
    I'm running into a really annoying problem with my Linq to SQL project. When I add everything in under the web project, everything goes as expected and I can tell it to use my existing connection string stored in the web.config file, and the Linq code pulls directly from the ConfigurationManager. This all turns ugly once I move the code into its own project. I’ve created an app.config file and put the connection string in there as it was in the web.config, but when I try to add another table, the IDE keeps forcing me to either hardcode the connection string or create a Settings file and put it in there, which then adds a new entry into the app.config file with a new name. Is there a way to keep my Linq code in its own project yet still refer back to my config file without the IDE continuously hardcoding the connection string or creating the Settings file? I’m converting part of my DAL over to use Linq to SQL, so I’d like to use the existing connection string that our old code is using as well as keep the value in a common location, and one spot, instead of in a number of spots. Manually changing the mode to WebSettings instead of AppSettings works until I try to add a new table, then it goes back to hardcoding the value or recreating the Settings file. I also tried to switch the project type to be a web project and then rename my app.config to web.config, and then everything works as I’d like it to. I’m just not sure if there are any downfalls to keeping this as a web project since it really isn't one. The project only contains the Linq to SQL code and an implementation of my repository classes. My project layout looks like this Website -connectionString.config -web.config (refers to connectionString.config) Middle Tier -Business Logic -Repository Interfaces -etc. DAL -Linq to SQL code -Existing SPROC code -connectionString.config (linked from the web project) -app.config (refers to connectionString.config)

    Read the article

  • "'Objects' may not respond to 'functions'" warnings.

    - by Andrew
    Hello all, for the last couple of weeks I've finally gotten into Obj-C from regular C and have started my first app. I've watched tutorials and read through a book along with a lot of webpages, but I know I've only just begun. Anyway, for most of the night and this morning, I've been trying to get this code to work, and now that it will compile, I have a few warnings. I've searched and found similar problems with solutions, but still no dice. What I'm trying to do is put an array made from a txt document into the popup list in a combo box. AwesomeBoxList.h: #import <Cocoa/Cocoa.h> @interface AwesomeBoxList : NSObject { IBOutlet NSComboBox *ComboBoz; } -(NSArray *) getStringzFromTxtz; - (void) awesomeBoxList; @end AwesomeBoxList.m: #import "AwesomeBoxList.h" @implementation AwesomeBoxList -(NSArray *)getStringzFromTxtz { ... return combind; } - (void) awesomeBoxList { [ComboBoz setUsesDataSource:YES]; [ComboBoz setDataSource: [ComboBoz getStringzFromTxtz]: //'NSComboBox' may not respond to 'getStringzFromTxtz' [ComboBoz comboBox:(NSComboBox *)ComboBoz objectValueForItemAtIndex: [ComboBoz numberOfItemsInComboBox:(NSComboBox *)ComboBoz]]]; /*'NSComboBox' may not respond to '-numberOfItemsInComboBox:' 'NSComboBox' may not respond to '-comboBox:objectValueForItemAtIndex:' 'NSComboBox' may not respond to '-setDataSource:' */ } @end So, with all of these errors and my still shallow knowledge of Obj-C, I must be making some sort of n00b mistake. Thanks for the help.

    Read the article
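    Regarding the warnings above: the data-source methods are being sent to the NSComboBox itself, but they belong on a separate data-source object. The conventional arrangement is for AwesomeBoxList to implement the NSComboBoxDataSource methods and set itself as the combo box's dataSource. A sketch only (awakeFromNib is assumed as the hookup point since the outlet comes from a nib, and the array would normally be cached rather than re-read on every call):

      // AwesomeBoxList.m -- sketch: the controller acts as the data source
      - (void)awakeFromNib {
          [ComboBoz setUsesDataSource:YES];
          [ComboBoz setDataSource:self];   // self implements the two methods below
      }

      // NSComboBoxDataSource methods
      - (NSInteger)numberOfItemsInComboBox:(NSComboBox *)comboBox {
          return [[self getStringzFromTxtz] count];
      }

      - (id)comboBox:(NSComboBox *)comboBox objectValueForItemAtIndex:(NSInteger)index {
          return [[self getStringzFromTxtz] objectAtIndex:index];
      }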

  • Cocoa NSTextField Drag & Drop Requires Subclass... Really?

    - by ipmcc
    Until today, I've never had occasion to use anything other than an NSWindow itself as an NSDraggingDestination. When using a window as a one-size-fits-all drag destination, the NSWindow will pass those messages on to its delegate, allowing you to handle drops without subclassing NSWindow. The docs say: Although NSDraggingDestination is declared as an informal protocol, the NSWindow and NSView subclasses you create to adopt the protocol need only implement those methods that are pertinent. (The NSWindow and NSView classes provide private implementations for all of the methods.) Either a window object or its delegate may implement these methods; however, the delegate’s implementation takes precedence if there are implementations in both places. Today, I had a window with two NSTextFields on it, and I wanted them to have different drop behaviors, and I did not want to allow drops anywhere else in the window. The way I interpret the docs, it seems that I either have to subclass NSTextField, or make some giant spaghetti-conditional drop handlers on the window's delegate that hit-checks the draggingLocation against each view in order to select the different drop-area behaviors for each field. The centralized NSWindow-delegate-based drop handler approach seems like it would be onerous in any case where you had more than a small handful of drop destination views. Likewise, the subclassing approach seems onerous regardless of the case, because now the drop handling code lives in a view class, so once you accept the drop you've got to come up with some way to marshal the dropped data back to the model. The bindings docs warn you off of trying to drive bindings by setting the UI value programmatically. So now you're stuck working your way back around that too. So my question is: "Really!? Are those the only readily available options? Or am I missing something straightforward here?" Thanks.

    Read the article

  • Trying to write priority queue in Java but getting "Exception in thread "main" java.lang.ClassCastEx

    - by Dan
    For my data structure class, I am trying to write a program that simulates a car wash and I want to give fancy cars a higher priority than regular ones using a priority queue. The problem I am having has something to do with Java not being able to type cast "Object" as an "ArrayQueue" (a simple FIFO implementation). What am I doing wrong and how can I fix it? public class PriorityQueue<E> { private ArrayQueue<E>[] queues; private int highest=0; private int manyItems=0; public PriorityQueue(int h) { highest=h; queues = (ArrayQueue[]) new Object[highest+1]; <----problem is here } public void add(E item, int priority) { queues[priority].add(item); manyItems++; } public boolean isEmpty( ) { return (manyItems == 0); } public E remove() { E answer=null; int counter=0; do { if(!queues[highest-counter].isEmpty()) { answer = queues[highest-counter].remove(); counter=highest+1; } else counter++; }while(highest-counter>=0); return answer; } }

    Read the article
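    Regarding the ClassCastException above: an array created with new Object[...] is not an ArrayQueue[], so the cast fails at runtime. The usual fix is to allocate the array with the raw element type (arrays of a generic type cannot be created directly) and then construct each queue, since the original constructor also never filled the slots. A sketch, assuming ArrayQueue has a no-argument constructor:

      @SuppressWarnings("unchecked")
      public PriorityQueue(int h) {
          highest = h;
          manyItems = 0;
          // Raw-typed array creation is the standard workaround for generic arrays.
          queues = new ArrayQueue[highest + 1];
          for (int i = 0; i <= highest; i++) {
              queues[i] = new ArrayQueue<E>();   // without this, add() would throw NullPointerException
          }
      }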
