Search Results

Search found 5644 results on 226 pages for 'somewhat confused'.


  • template pass by const reference

    - by 7vies
    Hi, I've looked over a few similar questions, but I'm still confused. I'm trying to figure out how to explicitly (not by relying on compiler optimizations, etc.) avoid copying an object when passing it to a template function, in a C++03-compatible way. Here is my test code:

    ```cpp
    #include <iostream>
    using namespace std;

    struct C
    {
        C() { cout << "C()" << endl; }
        C(const C&) { cout << "C(C)" << endl; }
        ~C() { cout << "~C()" << endl; }
    };

    template<class T> void f(T) { cout << "f<T>" << endl; }

    template<> void f(C c) { cout << "f<C>" << endl; }         // (1)
    template<> void f(const C& c) { cout << "f<C&>" << endl; } // (2)

    int main()
    {
        C c;
        f(c);
        return 0;
    }
    ```

    (1) accepts the object of type C and makes a copy. Here is the output:

    ```
    C()
    C(C)
    f<C>
    ~C()
    ~C()
    ```

    So I tried to specialize with a const C& parameter (2) to avoid this, but that simply doesn't work (apparently the reason is explained in this question). Well, I could "pass by pointer", but that's kind of ugly. So is there some trick that would allow me to do this nicely?

    EDIT: Oh, I probably wasn't clear. I already have a templated function:

    ```cpp
    template<class T> void f(T) { ... }
    ```

    But now I want to specialize this function to accept a const& to another object:

    ```cpp
    template<> void f(const SpecificObject&) { ... }
    ```

    But it only gets called if I define it as:

    ```cpp
    template<> void f(SpecificObject) { ... }
    ```
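
    A common way to get this behavior (a sketch, not from the original post) is to overload the function template on const C& rather than specialize it. Plain-function overloads participate in overload resolution, while function-template specializations do not, which is why (2) never gets picked:

    ```cpp
    template<class T> void f(T) { cout << "f<T>" << endl; }

    // An overload, not a specialization: f(c) now binds the argument to
    // const C& and no copy is made.
    void f(const C& c) { cout << "f(const C&)" << endl; }
    ```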

    Read the article

  • Tab Bar and Nav Controller: Where did I go wrong in my Interface Builder wiring?

    - by editor
    Even if you don't know how I've shot myself in the foot (a story I've tried to lay out below), if you think I've done a good job showing the parameters of my problem, I'd appreciate an upvote so that I might be able to grab some attention for my question.

    I've been working on an iPhone application in Xcode and Interface Builder, of the Tab Bar project type. After getting a table view of topics (business sectors) working fine, I realized that I would need to add a Navigation Controller to allow the user to drill into a subtopics (subsectors) table. As a green Objective-C developer, that was confusing, but I managed to get it working by reading various documentation and trying out a few different IB options.

    My current setup is a Tab Bar Controller with tab 1 as a Navigation Controller and tab 2 a plain view with a Table View placed into it. The wiring works: I can log when table rows are selected, and I'm ready to push a new View Controller onto the stack so that I can display the subtopics Table View.

    My problem: for some reason the first tab's Table View is a delegate and dataSource of the second tab. It doesn't make sense to me, and I can't figure out why that's the only setup that works. Here is the wiring:

    - Navigation Controller (Sectors) is a delegate of Tab Bar
    - Navigation Bar is a delegate of Navigation Controller (Sectors)
    - View Controller (Sectors) has a view of Table View
    - Table View (in Navigation Controller (Sectors)) is a delegate of First View Controller (Companies)
    - Table View (in Navigation Controller (Sectors)) is a dataSource outlet of First View Controller (Companies)
    - First View Controller (Companies) has a view of Table View
    - Table View (in First View Controller (Companies)) is not hooked up to a dataSource outlet and is not a delegate

    When I click the tab buttons and look at the Inspector, I see that the first tab is correctly hooked up to my MainWindow.xib and the second tab has selected a nib called SecondView.xib. It's in the File's Owner of MainWindow.xib where I inherit UITableViewDataSource and UITableViewDelegate (and also UITabBarControllerDelegate) in the .h, and in the .m where I implement the delegate methods.

    Why does this setup only work when the Table View in my first tab (View Controller (Sectors)) is a delegate and dataSource of the second tab? I'm confused: why wouldn't it need to be hooked up to the Navigation Controller-enabled tab in which the Table View is seen (Navigation Controller (Sectors))? The Table View seen on the second tab has no dataSource and is not a delegate.

    I'm having trouble getting pushViewController to fire (self.navigationController is not nil, but the new View Controller still doesn't load), and I suspect that I need to work out this IB wiring issue before I can troubleshoot why the Nav Controller won't push a new View Controller onto the stack.

    ```objc
    if (nil == self.navigationController) {
        NSLog(@"self.navigationController is nil.");
    } else {
        NSLog(@"self.navigationController is not nil.");
        SectorList *subsectorViewController = [[SectorList alloc] initWithNibName:@"SectorList" bundle:nil];
        subsectorViewController.title = @"Subsectors";
        [[self navigationController] pushViewController:subsectorViewController animated:YES];
        [subsectorViewController release];
    }
    ```

    Read the article

  • Separation of presentation and business logic in PHP

    - by Markus Ossi
    I am programming my first real PHP website and am wondering how to make my code more readable to myself. The reference book I am using is PHP and MySQL Web Development, 4th ed. The aforementioned book gives three approaches to separating logic and content:

    - include files
    - a function or class API
    - a template system

    I haven't chosen any of these yet, as wrapping my brain around these concepts is taking some time. However, my code has become some hybrid of the first two, as I am just copy-pasting away here and modifying as I go.

    On the presentation side, all of my pages have these common elements: header, top navigation, sidebar navigation, content, right sidebar and footer. The function-based examples in the book suggest that I could have display functions that handle all the presentation, so my page code would look like this:

    ```php
    display_header();
    display_navigation();
    display_content();
    display_footer();
    ```

    However, I don't like this, because the examples in the book have print statements with HTML and PHP mixed up like this:

    ```php
    echo "<tr bgcolor=\"".$color."\"><td><a href=\"".$url."\">" ...
    ```

    I would rather have HTML with some PHP in the middle, not the other way round. I am thinking of making my pages so that at the beginning of each page I fetch all the data from the database and put it in arrays. I will also get the data for variables. If there are any errors in any of these processes, I will put them into error strings. Then, in the HTML code, I will loop through these arrays using foreach and display the content. In some cases there will be some variables to show. If an error variable is set, I will display it at the proper position.

    (As a side note, the thing I do not understand is that in most example code, if some database query or whatnot fails, there is always `else echo 'Error';`. This baffles me, because when the example code gives an error, it is sometimes echoed out even before the HTML has started...)

    For people who have used ASP.NET: I have gotten somewhat used to code-behind files and lblError, and I am trying to do something similar here. The thing I haven't figured out is how I could do this "do logic first, then presentation" thing so that I would not have to replicate, for example, the navigation logic and navigation presentation on every page. Should I use include files, or could I use functions here but a little differently?

    Are there any good articles where these "styles" of separating presentation and logic are explained a little more thoroughly? The book I have has only one paragraph about this stuff. What I am thinking is that I am talking about some concepts or ways of doing PHP programming here, but I just don't know the terms for them yet. I know this isn't a straightforward question; I just need some help organizing my thoughts.
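
    A minimal sketch of the "logic first, presentation after" split the poster describes (file and function names here are hypothetical, not from the book):

    ```php
    <?php
    // news.php — all logic up front: queries, validation, error collection
    $errors = array();
    $rows = fetch_news_items();          // assumption: wraps the actual DB query
    if ($rows === false) {
        $errors[] = 'Could not load news.';
        $rows = array();
    }
    // presentation last: a template that only echoes prepared data
    include 'templates/news.tpl.php';
    ?>
    ```

    ```php
    <!-- templates/news.tpl.php — HTML first, with small PHP islands -->
    <?php if ($errors): ?><p class="error"><?php echo $errors[0]; ?></p><?php endif; ?>
    <ul>
    <?php foreach ($rows as $row): ?>
      <li><?php echo htmlspecialchars($row['title']); ?></li>
    <?php endforeach; ?>
    </ul>
    ```

    Because the template is a plain include, the shared pieces (header, navigation, footer) can each get their own template file included the same way on every page, which avoids replicating the navigation logic.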

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work??

    - by themoondothshine
    Hey all, I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context:

    - I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so.
    - An application is linked against libsome1.so.
    - This application uses libdl.so to dynamically load another module, say libmagic.so.
    - Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION.
    - So next I try to compile and link libmagic.so with a linker script which hides all symbols except three which are defined in libmagic.so and exported by it. This works... or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1).
    - However, when some data structures are serialized to disk, I noticed some corruption. If, in the application's directory, I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the same corruption does not happen.

    I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command `nm -CD libsome2.so` still lists symbols as symbol and not symbol@@VER_2)... Nothing seems to work!

    Edit: I should have mentioned it earlier, but the app in question is Firefox, and libsome1.so is the libsqlite3.so shipped with it. I don't quite have the option of recompiling them. Also, using version scripts to hide symbols seems to be the only solution right now.

    So what really happens when symbols are hidden? Do they become 'local' to the SO? Does rtld have no knowledge of their existence? What happens when an exported function refers to a hidden symbol?
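
    For reference, a minimal GNU ld version script of the kind described (the symbol names here are hypothetical) marks everything local except the exported entry points; local symbols are bound within the shared object at link time, and the runtime linker no longer considers them for cross-object resolution:

    ```
    /* magic.map — hide everything except three exports */
    {
      global:
        magic_init;
        magic_process;
        magic_shutdown;
      local:
        *;          /* all other symbols become local to the .so */
    };
    ```

    It would typically be applied when linking, along the lines of `gcc -shared -o libmagic.so ... -Wl,--version-script=magic.map`.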

    Read the article

  • c++ global operator not playing well with template class

    - by John
    OK, I found some similar posts on Stack Overflow, but I couldn't find any that pertained to my exact situation, and I was confused by some of the answers given. So here is my problem: I have a template matrix class as follows:

    ```cpp
    template <typename T, size_t ROWS, size_t COLS>
    class Matrix
    {
    public:
        template<typename, size_t, size_t> friend class Matrix;

        Matrix( T init = T() ) : _matrix(ROWS, vector<T>(COLS, init))
        {
            /*for( int i = 0; i < ROWS; i++ )
            {
                _matrix[i] = new vector<T>( COLS, init );
            }*/
        }

        Matrix<T, ROWS, COLS> & operator+=( const T & value )
        {
            for( vector<T>::size_type i = 0; i < this->_matrix.size(); i++ )
            {
                for( vector<T>::size_type j = 0; j < this->_matrix[i].size(); j++ )
                {
                    this->_matrix[i][j] += value;
                }
            }
            return *this;
        }

    private:
        vector< vector<T> > _matrix;
    };
    ```

    and I have the following global function template:

    ```cpp
    template<typename T, size_t ROWS, size_t COLS>
    Matrix<T, ROWS, COLS> operator+( const Matrix<T, ROWS, COLS> & lhs,
                                     const Matrix<T, ROWS, COLS> & rhs )
    {
        Matrix<T, ROWS, COLS> returnValue = lhs;
        return returnValue += lhs;
    }
    ```

    To me, this seems right. However, when I try to compile the code, I get the following error (thrown from the operator+ function):

    ```
    binary '+=' : no operator found which takes a right-hand operand of type
    'const matrix::Matrix<T,ROWS,COLS>' (or there is no acceptable conversion)
    ```

    I can't figure out what to make of this. Any help is greatly appreciated!
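
    A sketch of what the compiler is pointing at (this fix is not from the post): the class only defines `operator+=` for a single element (`const T &`), while `operator+` hands it a whole Matrix, so a member overload taking a Matrix is what the error is asking for. As an aside, the last line of `operator+` presumably means `returnValue += rhs`, not `lhs`.

    ```cpp
    // Hedged sketch of the missing overload, matching the class's style:
    Matrix<T, ROWS, COLS> & operator+=( const Matrix<T, ROWS, COLS> & rhs )
    {
        for( typename vector<T>::size_type i = 0; i < _matrix.size(); i++ )
            for( typename vector<T>::size_type j = 0; j < _matrix[i].size(); j++ )
                _matrix[i][j] += rhs._matrix[i][j];   // element-wise add
        return *this;
    }
    ```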

    Read the article

  • Chinese Characters in email sent via PHP not showing up.

    - by rye
    Hi all, a funny problem. I send mail via PHP from my testing server with Chinese characters in it, and it sends perfectly. Encoding is UTF-8. When I upload the same PHP file to another server and try to send from there, the e-mail will look 90% fine in one mail client (web-based mail actually, Gmail), but in my mail client (Apple Mail) it's all gibberish, even when I try changing the encoding in the mail client.

    I'm stuck here, because everything works fine on one server but not on another, so I'm not sure where to start looking for solutions. What's even more puzzling is that on the production server the mail looks somewhat OK in some clients (strange case of some characters not showing), but in other mail apps it looks like garbage. Any idea where I can start looking to solve this? Thanks for any help here! Regards.

    PHP script:

    ```php
    $books = json_decode($_POST['books']);
    $body = ' ?? ' . $_POST['name'] . ',?????????,????????,???????? ';
    $iLen = count($books);
    for ($i = 0; $i < $iLen; $i++) {
        $body .= '' . $book->title . '' . $book->author . '';
        $body .= '??: ' . $book->synopsis . '';
        $body .= '???: ' . $book->age . '';
        $body .= '??: ' . $book->setting . '';
        $body .= '??: ' . $book->purpose . '';
        $body .= '???: ' . $book->call . '';
        $body .= '???: ' . $book->publisher . '';
    }
    $body .= ' ????,Name ';
    $headers = 'MIME-Version: 1.0' . "\r\n";
    $headers .= 'Content-type: text/html; charset=utf-8' . "\r\n";
    $headers .= 'From: Name ' . "\r\n";
    $ok = mail($_POST['email'], '???????:???????????', $body, $headers);
    ```

    Result:

    ```
    ä? å¥? ryan, 以丗æ?¯ä? ä»/å–œä’ ç?Œç«?,ç»?å–©å–?è®”æ??亗è¯=稗,昕蜙æ±?ç°=䒜籟å?ŸåŸ? ç‘?ç‘?ævŒæ?˜å¤°çv±ä? 麜å?—å¸8é?·å°p, å±±å§? Synopsis: ç?—å?¯çv±ç°=å°?å?‰å®?å®?æ•/ä’v牨å®8痬瘒ç°=戒åp?å‚‘å?‰åœvåœv说å®8æ?˜å¤°çv±å®8ã•? Age Group: 4 - 6 å”™ Setting: ç=¤ä?„ Purpose: ä»·å•pè§?å‚‘ä¿8è¿?五å–?ç°=æ=ƒæ8? Call no: JP MAC Publisher: 麜å?—å¸8é?·å°p, å±±å§?. ç‘?ç‘?ævŒæ?˜å¤°çv±ä? .丅海 : 尌咴å=¿ç«¥åOºç˜vç¤=, 2005.
    ```
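
    One commonly suggested direction (a sketch, not from the post) is to make the encoding explicit everywhere the mail leaves PHP: MIME-encode the non-ASCII subject and declare a transfer encoding for the body, so the receiving client doesn't have to guess:

    ```php
    // Assumptions: $body and the subject are UTF-8 strings as in the post.
    mb_internal_encoding('UTF-8');
    $subject = mb_encode_mimeheader('???????:???????????', 'UTF-8', 'B');

    $headers  = 'MIME-Version: 1.0' . "\r\n";
    $headers .= 'Content-Type: text/html; charset=UTF-8' . "\r\n";
    $headers .= 'Content-Transfer-Encoding: base64' . "\r\n";

    $ok = mail($_POST['email'], $subject, chunk_split(base64_encode($body)), $headers);
    ```

    If one server still garbles the mail, comparing the raw message source (headers included) from both servers usually shows which hop rewrote or dropped the charset.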

    Read the article

  • How to do left joins with least-n-per-group query?

    - by Nate
    I'm trying to get a somewhat complicated query working and am not having any luck whatsoever. Suppose I have the following tables:

    cart_items:

    ```
    +---------+---------+------------+----------+
    | item_id | cart_id | movie_name | quantity |
    +---------+---------+------------+----------+
    | 0       | 0       | braveheart | 4        |
    | 1       | 0       | braveheart | 9        |
    | .       | .       | .          | .        |
    +---------+---------+------------+----------+
    ```

    movies:

    ```
    +----------+------------+-----+
    | movie_id | movie_name | ... |
    +----------+------------+-----+
    | 0        | braveheart | .   |
    | .        | .          | .   |
    +----------+------------+-----+
    ```

    pricing:

    ```
    +----+------------+----------+-----------+
    | id | movie_name | quantity | price_per |
    +----+------------+----------+-----------+
    | 0  | braveheart | 1        | 1.99      |
    | 1  | braveheart | 2        | 1.50      |
    | 2  | braveheart | 4        | 1.25      |
    | 3  | braveheart | 8        | 1.00      |
    | .  | .          | .        | .         |
    +----+------------+----------+-----------+
    ```

    I need to join the data from the tables, with the added complexity that I need to get the appropriate price_per from the pricing table. Only one price should be returned for each cart item, and it should be the lowest price from the pricing table where the quantity for the cart item is at least the quantity in the pricing table. So the query should return the following for each item in cart_items:

    ```
    +---------+------------+----------+-----------+
    | item_id | movie_name | quantity | price_per |
    +---------+------------+----------+-----------+
    ```

    Example 1: with cart_id = 0 passed to the query, the result should be:

    ```
    +---------+------------+----------+-----------+
    | item_id | movie_name | quantity | price_per |
    +---------+------------+----------+-----------+
    | 0       | braveheart | 4        | 1.25      |
    | 1       | braveheart | 9        | 1.00      |
    +---------+------------+----------+-----------+
    ```

    Note that this is a minimal example and additional data will be pulled from the tables mentioned (particularly the movies table). How could this query be composed? I have tried using left joins and subqueries, but the difficult part is getting the price, and nothing I have tried has worked. Thanks for your help.

    EDIT: I think this is similar to what I have working with my "real" tables:

    ```sql
    SELECT t1.item_id, t2.movie_name, t1.quantity
    FROM cart_items t1
    LEFT JOIN movies t2 ON t2.movie_name = t1.movie_name
    WHERE t1.cart_id = 0
    ```

    Assuming I wrote that correctly (I quickly tried to "port over" my real query), the output would currently be:

    ```
    +---------+------------+----------+
    | item_id | movie_name | quantity |
    +---------+------------+----------+
    | 0       | braveheart | 4        |
    | 1       | braveheart | 9        |
    +---------+------------+----------+
    ```

    The trouble I'm having is joining the price at a certain quantity for a movie. I simply cannot figure out how to do it.
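
    One way to express the "largest tier not exceeding the cart quantity" rule (a sketch against the sample tables, not from the post) is a correlated subquery ordered by tier size:

    ```sql
    SELECT ci.item_id,
           m.movie_name,
           ci.quantity,
           (SELECT p.price_per
              FROM pricing p
             WHERE p.movie_name = ci.movie_name
               AND p.quantity <= ci.quantity
             ORDER BY p.quantity DESC
             LIMIT 1) AS price_per
      FROM cart_items ci
      LEFT JOIN movies m ON m.movie_name = ci.movie_name
     WHERE ci.cart_id = 0;
    ```

    For the sample data this picks the 4-unit tier (1.25) for quantity 4 and the 8-unit tier (1.00) for quantity 9, matching the expected output.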

    Read the article

  • jQuery multiple class selection

    - by morpheous
    I am a bit confused with this: I have a page with a set of buttons (i.e. elements with the class 'button'). The buttons belong to one of two classes (grp1 and grp2). These are my requirements:

    - For buttons with the class 'enabled', when the mouse hovers over them, a 'button-hover' class is added to them (i.e. to the element the mouse is hovering over). Otherwise, the hover event is ignored.
    - When one of the buttons with class grp2 is clicked on (it has to be 'enabled' first), I disable all elements with the class 'enabled', i.e. remove the 'enabled' class from them. (I should probably be selecting for elements with class 'button' AND 'enabled', but I am having enough problems as it is, so I need to keep things simple for now.)

    This is what my page looks like:

    ```html
    <html>
    <head>
    <title>Demo</title>
    <style type="text/css">
    .button {border: 1px solid gray; color: gray}
    .enabled {border: 1px solid red; color: red}
    .button-hover {background-color: blue; }
    </style>
    <script type="text/javascript" src="jquery.js"></script>
    </head>
    <body>
    <div class="btn-cntnr">
    <span class="grp1 button enabled">button 1</span>
    <span class="grp2 button enabled">button 2</span>
    <span class="grp2 button enabled">button 3</span>
    <span class="grp2 button enabled">button 4</span>
    </div>
    <script type="text/javascript">
    /* <![CDATA[ */
    $(document).ready(function(){
        $(".button.enabled").hover(function(){
            $(this).toggleClass('button-hover');
        }, function() {
            $(this).toggleClass('button-hover');
        });

        $('.grp2.enabled').click(function(){
            $(".grp2").removeClass('enabled');
        });
    });
    /* ]]> */
    </script>
    </body>
    </html>
    ```

    Currently, when a button with class 'grp2' is clicked on, the other elements with class 'grp2' have the 'enabled' class removed (which works correctly). HOWEVER, I notice that even though the elements no longer have the 'enabled' class, SOMEHOW the hover behaviour is still applied to them (WRONG). Once an element has been 'disabled', I no longer want it to respond to the hover event. How may I implement this behavior, and what is wrong with the code above, i.e. why is it not working? (I am still learning jQuery.)
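
    The likely explanation (with a sketch of a fix, not from the post): `$(".button.enabled")` selects the elements once, at ready time, and `.hover()` binds handlers directly to those elements; removing the 'enabled' class later does not unbind them. One version-agnostic fix is to bind to all buttons and re-check the class when the event actually fires:

    ```javascript
    $(".button").hover(function () {
        // decide at event time, not at bind time
        if ($(this).hasClass('enabled')) {
            $(this).toggleClass('button-hover');
        }
    }, function () {
        $(this).removeClass('button-hover');   // clearing is always safe
    });

    $('.grp2').click(function () {
        if (!$(this).hasClass('enabled')) return;   // ignore disabled buttons
        $('.grp2').removeClass('enabled');
    });
    ```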

    Read the article

  • Multi-threaded random_r is slower than the single-threaded version.

    - by Nixuz
    The following program is essentially the same as the one described here. When I compile and run the program using two threads (NTHREADS == 2), I get the following run times:

    ```
    real    0m14.120s
    user    0m25.570s
    sys     0m0.050s
    ```

    When it is run with just one thread (NTHREADS == 1), I get significantly better run times, even though it is only using one core:

    ```
    real    0m4.705s
    user    0m4.660s
    sys     0m0.010s
    ```

    My system is dual core. I know random_r is thread safe, and I am pretty sure it is non-blocking. When the same program is run without random_r, with a calculation of cosines and sines used as a replacement, the dual-threaded version runs in about half the time, as expected.

    ```c
    #include <pthread.h>
    #include <stdlib.h>
    #include <stdio.h>

    #define NTHREADS 2
    #define PRNG_BUFSZ 8
    #define ITERATIONS 1000000000

    void* thread_run(void* arg) {
        int r1, i, totalIterations = ITERATIONS / NTHREADS;
        for (i = 0; i < totalIterations; i++){
            random_r((struct random_data*)arg, &r1);
        }
        printf("%i\n", r1);
    }

    int main(int argc, char** argv) {
        struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data));
        char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ);
        pthread_t* thread_ids;
        int t = 0;
        thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t));
        /* create threads */
        for (t = 0; t < NTHREADS; t++) {
            initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]);
            pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]);
        }
        for (t = 0; t < NTHREADS; t++) {
            pthread_join(thread_ids[t], NULL);
        }
        free(thread_ids);
        free(rand_states);
        free(rand_statebufs);
    }
    ```

    I am confused as to why, when generating random numbers, the two-threaded version performs much worse than the single-threaded version, considering random_r is meant to be used in multi-threaded applications.
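
    A plausible culprit (offered as a hypothesis, not from the post) is false sharing: both `random_data` states are calloc'd back to back, so they land on the same cache line, and every `random_r` call by one thread invalidates that line for the other. A sketch of the usual remedy is to pad each per-thread state out to a cache line:

    ```c
    /* assumption: 64-byte cache lines, typical for x86 */
    #define CACHE_LINE 64

    struct padded_state {
        struct random_data rd;
        char pad[CACHE_LINE];     /* keeps the next thread's state off this line */
    };

    /* in main(), one padded state per thread instead of a packed array: */
    struct padded_state* states = calloc(NTHREADS, sizeof(struct padded_state));
    /* ...then pass &states[t].rd to initstate_r and thread_run */
    ```

    The same padding idea applies to `rand_statebufs`, which is also shared at only 8 bytes per thread.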

    Read the article

  • Weirdness debugging Visual Studio C++ 2008

    - by Jeff Dege
    I have a legacy C++ app that, in its most recent incarnation, we've been building with makefiles and VS2003's command-line tool. I'm trying to get it to build using VS2008 and MSBuild. The build is working OK, but I'm getting errors where I'd never seen errors before, and stepping through in VS2008's debugger only confuses me.

    The app links a number of static libraries, which fall into two categories: those that are part of the same application suite, and those that are shared between a number of application suites. Originally, I had a project file for each static library and two .sln files: one for the application suite (including the suite-specific libraries) and one for the non-suite-specific shared libraries. The shared libraries were included in the link, but their projects were not included in the application suite .sln.

    The application instantiates an object from a class that is defined in one of the shared libraries. The class has a member object of a class that wraps a linked list. The constructor of the linked-list class sets its "head" pointer to null. When I run the app and try to add an element to the linked list, I get an error: the head pointer contains the value 0xCCCCCCCC.

    So I step through with the debugger, and see weirdness. When the current line in the debugger is in a source file belonging to the static library, the head pointer contains 0x00000000. When I step into the constructor, I can see the pointer being set to that value, and when I step into any other method of the class, I can see that the head pointer still contains 0x00000000. But when I step out into methods that are defined in the application suite .sln, it contains 0xCCCCCCCC. It's not as if it's being overwritten: it changes back and forth depending upon which source file I am currently debugging.

    So I included the shared library's project in the application suite .sln, and now I see the head pointer containing 0xCCCCCCCC all the time. It looks like the constructor of the linked-list class is not being called. So now I'm entirely confused. Anyone have any ideas?

    Read the article

  • WCF App using Peer Chat app as example does not work.

    - by splate
    I converted a VB.NET 3.5 app to use peer-to-peer WCF, using Microsoft's available example of the Chat app. I made sure that I copied the app.config file from the sample (modifying the names for my app) and added the appropriate references. I followed all the tutorials and added the appropriate tags and structure in my app code. Everything runs without any errors, but the clients only get messages from themselves and not from the other clients. The sample chat application does run just fine with multiple clients.

    The only difference I could find is that the server in the sample is targeting the 2.0 framework, but I assume that is wrong and it is building in at least 3.0, or the System.ServiceModel reference would break. Is there something that has to be registered that the sample is doing behind the scenes, or is the sample a special project type? I am confused. My next step is to copy all my classes and logic from my app to the sample app, but that is likely a lot of work.

    Here is my client app.config:

    ```xml
    <client>
      <endpoint name="thldmEndPoint"
                address="net.p2p://thldmMesh/thldmServer"
                binding="netPeerTcpBinding"
                bindingConfiguration="PeerTcpConfig"
                contract="THLDM_Client.IGameService"></endpoint>
    </client>
    <bindings>
      <netPeerTcpBinding>
        <binding name="PeerTcpConfig" port="0">
          <security mode="None"></security>
          <resolver mode="Custom">
            <custom address="net.tcp://localhost/thldmServer"
                    binding="netTcpBinding"
                    bindingConfiguration="TcpConfig"></custom>
          </resolver>
        </binding>
      </netPeerTcpBinding>
      <netTcpBinding>
        <binding name="TcpConfig">
          <security mode="None"></security>
        </binding>
      </netTcpBinding>
    </bindings>
    ```

    Here is my server app.config:

    ```xml
    <services>
      <service name="System.ServiceModel.PeerResolvers.CustomPeerResolverService">
        <host>
          <baseAddresses>
            <add baseAddress="net.tcp://localhost/thldmServer"/>
          </baseAddresses>
        </host>
        <endpoint address="net.tcp://localhost/thldmServer"
                  binding="netTcpBinding"
                  bindingConfiguration="TcpConfig"
                  contract="System.ServiceModel.PeerResolvers.IPeerResolverContract"></endpoint>
      </service>
    </services>
    <bindings>
      <netTcpBinding>
        <binding name="TcpConfig">
          <security mode="None"></security>
        </binding>
      </netTcpBinding>
    </bindings>
    ```

    Thanks ahead of time.
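
    One detail worth checking (an observation, not from the sample): both configs point the custom peer resolver at `net.tcp://localhost/thldmServer`, so each client resolves the mesh against its own machine. If the clients run on different machines, they never find each other. A sketch of the client-side change, assuming the resolver service runs on a host named `gameserver` (a hypothetical name):

    ```xml
    <resolver mode="Custom">
      <custom address="net.tcp://gameserver/thldmServer"
              binding="netTcpBinding"
              bindingConfiguration="TcpConfig"></custom>
    </resolver>
    ```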

    Read the article

  • Visual studio 2010 colourizers, intellisense and the rest. Where to start!!

    - by Owen
    OK, before I begin, I realize that there is a lot of documentation on this subject, but I have thus far failed to get even basic colourization working for VS2010. My goal is simply to get to a point where I can open a document and everything is coloured red; from there I can implement the relevant parsing logic.

    Here's what I have tried/found:

    1. Downloaded all the relevant SDKs and such. Found the Ook sample (http://code.msdn.microsoft.com/ookLanguage) -- didn't build, didn't work.
    2. Knowing almost nothing about MEF, read through "Implementing a Language Service By Using the Managed Package Framework" -- http://msdn.microsoft.com/en-us/library/bb166533(v=VS.100).aspx. This was pretty much a copy and paste of all the basic stuff here, and also updating some references which were out of date with the sample, see: http://social.msdn.microsoft.com/Forums/en-US/vsx/thread/a310fe67-afd2-4592-b295-3fc86fec7996

    Now, I have got to a point where, when running the package, MEF appears to have hooked up correctly (I know this because with the debugger open I can see that the package's initialize and FDoIdle methods are being hit). When I open a file of the extension I have registered with the ProvideLanguageExtensionAttribute, everything dies as if in an endless loop, yet no debug symbols are hit (though they are loaded).

    Looking at the Ook sample and the MEF examples, they seem to be totally different approaches to the same problem. In the Ook sample there are notions of classifications and completion controllers which aren't mentioned in the MEF example. Also, they don't seem to create a Package or Language service, so I have no idea how it should work. With the MEF example, my assumption is that I need to hook into "IScanner.ScanTokenAndProvideInfoAboutIt" to provide syntax highlighting? Which would be fine if I could ever hit this method.

    So my first question, I guess, is which approach should I be taking here? Or do they both somehow tie together?

    My second question is: where can I find a basic, fully working project that implements bog-standard basic syntax highlighting and IntelliSense for VS2010?

    Thirdly, in the MEF example, when I created a Package there were a bunch of test projects created for me. It appears that the integration tests launch the VS2010 test rig somehow, but the test fails. It would be good to write my service with tests, but I have no idea what/how I can test each interaction, so any references to testing language services would be helpful.

    Finally, please throw any resource/book links my way that I may find useful.

    Cheers, Chris.

    N.B. Sorry, I realize this is part question, part rant, but I have never been so confused.
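
    For the narrow "colour everything red" goal, the editor-MEF route needs no package or language service at all; a classifier alone should be enough. A minimal sketch (assumptions: a content type "ook" registered elsewhere, and a classification type/format named "ook-everything" exported elsewhere with a red foreground -- both names are hypothetical):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using Microsoft.VisualStudio.Text;
    using Microsoft.VisualStudio.Text.Classification;
    using Microsoft.VisualStudio.Utilities;

    [Export(typeof(IClassifierProvider))]
    [ContentType("ook")]
    internal sealed class RedAllOverProvider : IClassifierProvider
    {
        [Import]
        internal IClassificationTypeRegistryService Registry = null;

        public IClassifier GetClassifier(ITextBuffer buffer)
        {
            return new RedAllOverClassifier(Registry.GetClassificationType("ook-everything"));
        }
    }

    internal sealed class RedAllOverClassifier : IClassifier
    {
        private readonly IClassificationType _type;
        public RedAllOverClassifier(IClassificationType type) { _type = type; }

        public event EventHandler<ClassificationChangedEventArgs> ClassificationChanged;

        public IList<ClassificationSpan> GetClassificationSpans(SnapshotSpan span)
        {
            // classify every requested span with the single "red" type
            return new List<ClassificationSpan> { new ClassificationSpan(span, _type) };
        }
    }
    ```

    The IScanner/ScanTokenAndProvideInfoAboutIt route belongs to the older MPF language-service world; the two approaches don't need to be combined for colourization.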

    Read the article

  • iPhone memory management (with specific examples/questions)

    - by donkim
    Hey all. I know this question's been asked but I still don't have a clear picture of memory management in Objective-C. I feel like I have a pretty good grasp of it, but I'd still like some correct answers for the following code. I have a series of examples that I'd love for someone(s) to clarify.

    Setting a value for an instance variable. Say I have an NSMutableArray variable. In my class, when I initialize it, do I need to call a retain on it? Do I do

    ```objc
    fooArray = [[[NSMutableArray alloc] init] retain];
    ```

    or

    ```objc
    fooArray = [[NSMutableArray alloc] init];
    ```

    Does doing [[NSMutableArray alloc] init] already set the retain count to 1, so I wouldn't need to call retain on it? On the other hand, if I called a method that I know returns an autoreleased object, I would for sure have to call retain on it, right? Like so:

    ```objc
    fooString = [[NSString stringWithFormat:@"%d items", someInt] retain];
    ```

    Properties. I ask about the retain because I'm a bit confused about how @property's automatic setter works. If I had set fooArray to be a @property with retain set, Objective-C will automatically create the following setter, right?

    ```objc
    - (void)setFooArray:(NSMutableArray *)anArray {
        [fooArray release];
        fooArray = [anArray retain];
    }
    ```

    So, if I had code like this: `self.fooArray = [[NSMutableArray alloc] init];` (which I believe is valid code), Objective-C creates a setter method that calls retain on the value assigned to fooArray. In this case, will the retain count actually be 2?

    Correct way of setting a value of a property. I know there are questions on this and (possibly) debates, but which is the right way to set a @property? This?

    ```objc
    self.fooArray = [[NSMutableArray alloc] init];
    ```

    Or this?

    ```objc
    NSMutableArray *anArray = [[NSMutableArray alloc] init];
    self.fooArray = anArray;
    [anArray release];
    ```

    I'd love to get some clarification on these examples. Thanks!
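
    For reference, the standard Cocoa ownership rules answer each case (a summary sketch under manual reference counting, pre-ARC):

    ```objc
    // alloc/init already returns an owned object (retain count 1):
    fooArray = [[NSMutableArray alloc] init];        // correct: no extra retain

    // convenience constructors return autoreleased objects, so an ivar
    // you keep must indeed be retained:
    fooString = [[NSString stringWithFormat:@"%d items", someInt] retain];

    // assigning an alloc/init'd object through a retain property leaves
    // the count at 2 (alloc + the setter's retain) and leaks one reference;
    // the release-the-temporary form is the correct one:
    NSMutableArray *anArray = [[NSMutableArray alloc] init];
    self.fooArray = anArray;
    [anArray release];
    ```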

    Read the article

  • How do I remove descendant elements from a jQuery wrapped set?

    - by Bungle
    I'm a little confused about which jQuery method and/or selectors to use when trying to select an element, and then remove certain descendant elements from the wrapped set. For example, given the following HTML:

    ```html
    <div id="article">
      <div id="inset">
        <ul>
          <li>This is bullet point #1.</li>
          <li>This is bullet point #2.</li>
          <li>This is bullet point #3.</li>
        </ul>
      </div>
      <p>This is the first paragraph of the article</p>
      <p>This is the second paragraph of the article</p>
      <p>This is the third paragraph of the article</p>
    </div>
    ```

    I want to select the article:

    ```javascript
    var $article = $('#article');
    ```

    but then remove `<div id="inset"></div>` and its descendants from the wrapped set. I tried the following:

    ```javascript
    var $article = $('#article').not('#inset');
    ```

    but that didn't work, and in retrospect, I think I can see why. I also tried using remove() unsuccessfully. What would be the correct way to do this?

    Ultimately, I need to set this up in such a way that I can define a configuration array, such as:

    ```javascript
    var selectors = [
        { select: '#article', exclude: ['#inset'] }
    ];
    ```

    where select defines a single element that contains text content, and exclude is an optional array that defines one or more selectors to disregard text content from. Given the final wrapped set with the excluded elements removed, I would like to be able to call jQuery's text() method to end up with the following text:

    ```
    This is the first paragraph of the article.This is the second paragraph of the article.This is the third paragraph of the article.
    ```

    The configuration array doesn't need to work exactly like that, but it should provide roughly equivalent configuration potential. Thanks for any help you can provide!
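
    One common pattern for this (a sketch, not from the post; note that .not() filters the top-level set rather than descendants, which is why it had no effect) is to clone the container, delete the excluded subtrees from the clone, and read the text from what's left, leaving the live DOM untouched:

    ```javascript
    function textWithout(select, exclude) {
        var $copy = $(select).clone();           // detached copy of the container
        $.each(exclude || [], function (i, sel) {
            $copy.find(sel).remove();            // drop each excluded subtree
        });
        return $copy.text();
    }

    // usage with the post's configuration shape:
    var selectors = [{ select: '#article', exclude: ['#inset'] }];
    var text = textWithout(selectors[0].select, selectors[0].exclude);
    ```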

    Read the article

  • javascript error: variable is not defined

    - by Hristo
    ```
    to is not defined
    [Break on this error] setTimeout('updateChat(from, to)', 1);
    ```

    I'm getting this error... I'm using Firebug to test, and this comes up in the Console. The error corresponds to line 71 of chat.js, and the whole function that wraps this line is:

    ```javascript
    function updateChat(from, to) {
        $.ajax({
            type: "POST",
            url: "process.php",
            data: {
                'function': 'getFromDB',
                'from': from,
                'to': to
            },
            dataType: "json",
            cache: false,
            success: function(data) {
                if (data.text != null) {
                    for (var i = 0; i < data.text.length; i++) {
                        $('#chat-box').append($("<p>"+ data.text[i] +"</p>"));
                    }
                    document.getElementById('chat-box').scrollTop = document.getElementById('chat-box').scrollHeight;
                }
                instanse = false;
                state = data.state;
                setTimeout('updateChat(from, to)', 1); // gives error
            },
        });
    }
    ```

    This links to process.php with the function call getFromDB, and the code for that is:

    ```php
    case ('getFromDB'):
        // get the sender and receiver user IDs from their user names
        $from = mysql_real_escape_string($_POST['from']);
        $query = "SELECT `user_id` FROM `Users` WHERE `user_name` = '$from' LIMIT 1";
        $result = mysql_query($query) or die(mysql_error());
        $row = mysql_fetch_assoc($result);
        $fromID = $row['user_id'];

        $to = mysql_real_escape_string($_POST['to']);
        $query = "SELECT `user_id` FROM `Users` WHERE `user_name` = '$to' LIMIT 1";
        $result = mysql_query($query) or die(mysql_error());
        $row = mysql_fetch_assoc($result);
        $toID = $row['user_id'];

        $query = "SELECT * FROM `Messages` WHERE `from_id` = '$fromID' AND `to_id` = '$toID' LIMIT 1";
        $result = mysql_query($query);
        while($row = mysql_fetch_assoc($result)) {
            $text[] = $line = $row['message'];
            $log['text'] = $text;
        }
        break;
    ```

    So I'm confused by the line that is giving the error. In `setTimeout('updateChat(from, to)', 1);`, aren't the parameters to updateChat the same parameters that came into the function? Or are they being pulled in from somewhere else, so I have to define to and from elsewhere? Any ideas how to fix this error?

    Thanks, Hristo
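
    The usual explanation (with a sketch of the fix, not from the post): a string handed to setTimeout is evaluated later in global scope, where `from` and `to` don't exist; passing a function instead closes over the local parameters:

    ```javascript
    setTimeout(function () {
        updateChat(from, to);   // from/to captured from the enclosing call
    }, 1000);                   // note: the original 1 means 1 ms, not 1 s
    ```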

    Read the article

  • Java compiler rejects variable declaration with parameterized inner class

    - by Johansensen
    I have some Groovy code which works fine in the Groovy bytecode compiler, but the Java stub generated by it causes an error in the Java compiler. I think this is probably yet another bug in the Groovy stub generator, but I really can't figure out why the Java compiler doesn't like the generated code. Here's a truncated version of the generated Java class (please excuse the ugly formatting):

    ```java
    @groovy.util.logging.Log4j()
    public abstract class AbstractProcessingQueue<T> extends nz.ac.auckland.digitizer.AbstractAgent implements groovy.lang.GroovyObject {

        protected int retryFrequency;
        protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items;

        public AbstractProcessingQueue(int processFrequency, int timeout, int retryFrequency) {
            super((int)0, (int)0);
        }

        private enum ProcessState implements groovy.lang.GroovyObject {
            NEW, FAILED, FINISHED;
        }

        private class ProcessingQueueMember<E> extends java.lang.Object implements groovy.lang.GroovyObject {
            public ProcessingQueueMember(E object) {}
        }
    }
    ```

    The offending line in the generated code is this:

    ```java
    protected java.util.Queue<nz.ac.auckland.digitizer.AbstractProcessingQueue.ProcessingQueueMember<T>> items;
    ```

    which produces the following compile error:

    ```
    [ERROR] C:\Documents and Settings\Administrator\digitizer\target\generated-sources\groovy-stubs\main\nz\ac\auckland\digitizer\AbstractProcessingQueue.java:[14,96] error: improperly formed type, type arguments given on a raw type
    ```

    The column index of 96 in the compile error points to the `<T>` parameterization of the ProcessingQueueMember type. But ProcessingQueueMember is not a raw type as the compiler claims; it is a generic type:

    ```java
    private class ProcessingQueueMember<E> extends java.lang.Object implements groovy.lang.GroovyObject { ...
    ```

    I am very confused as to why the compiler thinks that the type `Queue<ProcessingQueueMember<T>>` is invalid. The Groovy source compiles fine, and the generated Java code looks perfectly correct to me too. What am I missing here? Is it something to do with the fact that the type in question is a nested class?

    (In case anyone is interested, I have filed this bug report relating to the issue in this question.)

    Edit: Turns out this was indeed a stub compiler bug -- this issue is now fixed in 1.8.9, 2.0.4 and 2.1, so if you're still having this issue, just upgrade to one of those versions. :)
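
    The reason javac complains (a note with an illustrative line, not from the post): ProcessingQueueMember is an inner class of the generic AbstractProcessingQueue, and Java forbids giving type arguments to an inner class while its enclosing type is written raw. The stub qualifies the inner class with the raw outer name, which is exactly the "type arguments given on a raw type" case; a correctly generated stub would parameterize the outer type too:

    ```java
    // what the stub would have to emit for javac to accept it:
    protected java.util.Queue<AbstractProcessingQueue<T>.ProcessingQueueMember<T>> items;
    ```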

    Read the article

  • C++ Optimize if/else condition

    - by Heye
    I have a single line of code that consumes 25% - 30% of the runtime of my application. It is a less-than comparator for an std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.

    ```cpp
    struct Entry {
        const float _cost;
        const long _id;
        // some other vars

        Entry(float cost, float id) : _cost(cost), _id(id) {
        }
    };

    template<class T>
    struct lt_entry : public binary_function<T, T, bool>
    {
        bool operator()(const T &l, const T &r) const
        {
            // Most readable shape
            if(l._cost != r._cost) {
                return r._cost < l._cost;
            } else {
                return l._id < r._id;
            }
        }
    };
    ```

    The entries should be sorted by cost and, if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n. So I think it's okay to stick to the set.

    To improve performance I tried to express it in different shapes:

    ```cpp
    return l._cost < r._cost || r._cost > l._cost || l._id < r._id;

    return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);
    ```

    Even this:

    ```cpp
    typedef union {
        float _f;
        int _i;
    } flint;

    //...

    flint diff;
    diff._f = (l._cost - r._cost);
    return (diff._i && diff._i >> 31) || l._id < r._id;
    ```

    But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable for SSE...

    The assembly looks somewhat like this:

    ```
    movss   (%rbx),%xmm1
    mov     $0x1,%r8d
    movss   0x20(%rdx),%xmm0
    ucomiss %xmm1,%xmm0
    ja      0x410600 <_ZNSt8_Rb_tree[..]+96>
    ucomiss %xmm0,%xmm1
    jp      0x4105fd <_ZNSt8_Rb_[..]_+93>
    jne     0x4105fd <_ZNSt8_Rb_[..]_+93>
    mov     0x28(%rdx),%rax
    cmp     %rax,0x8(%rbx)
    jb      0x410600 <_ZNSt8_Rb_[..]_+96>
    xor     %r8d,%r8d
    ```

    I have a very tiny bit of experience with assembly language, but not really much. I thought it would be the best (only?) point to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles?

    The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are boost, openmp and tbb. I am really stuck on this one; it seems so simple, but takes that much time. I have been crunching my head for days thinking how I could improve this line... Can you give me a suggestion how to improve this part, or is it already at its best?
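
    One compact reformulation worth measuring (a sketch, not from the post; available with -std=c++0x) is std::tie, which builds tuples of references and compares them lexicographically. Descending cost is expressed by swapping the cost operands; ascending id by keeping them in order:

    ```cpp
    #include <tuple>

    bool operator()(const T &l, const T &r) const
    {
        // compares r._cost < l._cost first, then l._id < r._id on ties
        return std::tie(r._cost, l._id) < std::tie(l._cost, r._id);
    }
    ```

    This is semantically identical to the original comparator; whether it is any faster is something only the profiler can say, since the generated code is often the same.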

    Read the article

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it.

    My question isn't really about getting it working. I'm more confused about the certificates and where they need to go. Here are my two programs that do work:

    Server:

    ```python
    #!/bin/env python
    from OpenSSL import SSL
    import socket
    import pickle

    def verify_cb(conn, cert, errnum, depth, ok):
        print('Got cert: %s' % cert.get_subject())
        return ok

    ctx = SSL.Context(SSL.TLSv1_METHOD)
    ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)  # ??????
    ctx.use_privatekey_file('./Dmgr-key.pem')
    ctx.use_certificate_file('Dmgr-cert.pem')  # ??????
    ctx.load_verify_locations('./CAcert.pem')

    server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    server.bind(('', 50000))
    server.listen(3)
    a, b = server.accept()
    c = a.recv(1024)
    print(c)
    ```

    Client:

    ```python
    from OpenSSL import SSL
    import socket
    import pickle

    def verify_cb(conn, cert, errnum, depth, ok):
        print('Got cert: %s' % cert.get_subject())
        return ok

    ctx = SSL.Context(SSL.TLSv1_METHOD)
    ctx.set_verify(SSL.VERIFY_PEER, verify_cb)  # ??????????
    ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
    ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')  # ?????????
    ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

    sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    sock.connect(('10.0.0.3', 50000))
    a = Tester(2, 2)
    b = pickle.dumps(a)
    sock.send("Hello, world")
    sock.flush()
    sock.send(b)
    sock.shutdown()
    sock.close()
    ```

    I found this information in ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections marked "# ??????". I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right?

    I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

    ```
    OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'),
                        ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]
    ```

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one...

    Thanks for the help!
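
    For what it's worth (an explanatory sketch, not from the post): in plain server-authenticated TLS, only the server needs its own certificate and private key, and clients need just the CA certificate to verify it. The handshake failure on removing the client's cert/key happens because this server passes VERIFY_FAIL_IF_NO_PEER_CERT, i.e. it explicitly demands mutual (client-certificate) authentication. A client context for server-auth-only would look like:

    ```python
    # client side, server-authentication only (no client cert/key needed)
    ctx = SSL.Context(SSL.TLSv1_METHOD)
    ctx.set_verify(SSL.VERIFY_PEER, verify_cb)   # check the server's cert
    ctx.load_verify_locations('./CAcert.pem')    # CA cert is public: safe to ship
    ```

    with the server keeping its use_privatekey_file/use_certificate_file lines and dropping VERIFY_FAIL_IF_NO_PEER_CERT. The private key then never leaves the machine that owns it.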

    Read the article

  • When mocking a class with Moq, how can I CallBase for just specific methods?

    - by Daryn
    I really appreciate Moq's loose mocking behaviour that returns default values when no expectations are set. It's convenient and saves me code, and it also acts as a safety measure: dependencies won't get unintentionally called during the unit test (as long as they are virtual).

    However, I'm confused about how to keep these benefits when the method under test happens to be virtual. In this case I do want to call the real code for that one method, while still having the rest of the class loosely mocked.

    All I have found in my searching is that I could set mock.CallBase = true to ensure that the method gets called. However, that affects the whole class. I don't want to do that, because it puts me in a dilemma about all the other properties and methods in the class that hide call dependencies: if CallBase is true then I have to either

    - Setup stubs for all of the properties and methods that hide dependencies -- even though my test doesn't think it needs to care about those dependencies, or
    - hope that I don't forget to Setup any stubs (and that no new dependencies get added to the code in the future) -- and risk unit tests hitting a real dependency.

    Q: With Moq, is there any way to test a virtual method, when I mocked the class to stub just a few dependencies? I.e. without resorting to CallBase=true and having to stub all of the dependencies?

    Example code to illustrate (uses MSTest, InternalsVisibleTo DynamicProxyGenAssembly2). In the following example, TestNonVirtualMethod passes, but TestVirtualMethod fails -- it returns null.

    ```csharp
    public class Foo
    {
        public string NonVirtualMethod() { return GetDependencyA(); }
        public virtual string VirtualMethod() { return GetDependencyA(); }

        internal virtual string GetDependencyA() { return "! Hit REAL Dependency A !"; }

        // [... Possibly many other dependencies ...]
        internal virtual string GetDependencyN() { return "! Hit REAL Dependency N !"; }
    }

    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestNonVirtualMethod()
        {
            var mockFoo = new Mock<Foo>();
            mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);

            string result = mockFoo.Object.NonVirtualMethod();

            Assert.AreEqual(expectedResultString, result);
        }

        [TestMethod]
        public void TestVirtualMethod() // Fails
        {
            var mockFoo = new Mock<Foo>();
            mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
            // (I don't want to setup GetDependencyB ... GetDependencyN here)

            string result = mockFoo.Object.VirtualMethod();

            Assert.AreEqual(expectedResultString, result);
        }

        string expectedResultString = "Hit mock dependency A - OK";
    }
    ```
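
    One direction worth checking (a sketch, hedged: per-setup CallBase requires a reasonably recent Moq 4.x release): the mock can stay loose overall while a single setup routes just the method under test to its real implementation:

    ```csharp
    var mockFoo = new Mock<Foo>();                        // CallBase stays false
    mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
    mockFoo.Setup(m => m.VirtualMethod()).CallBase();     // real body, mocked deps

    string result = mockFoo.Object.VirtualMethod();       // hits only the mock of A
    ```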

    Read the article

  • C++ vector of strings, pointers to functions, and the resulting frustration.

    - by Kyle
    So I am a first-year computer science student, and for one of my final projects I need to write a program that takes a vector of strings and applies various functions to it. Unfortunately, I am really confused about how to use pointers to pass the vector from function to function. Below is some sample code to give an idea of what I am talking about. I also get an error message when I try to dereference any pointer. Thanks.

    ```cpp
    #include <iostream>
    #include <cstdlib>
    #include <vector>
    #include <string>
    using namespace std;

    vector<string>::pointer function_1(vector<string>::pointer ptr);
    void function_2(vector<string>::pointer ptr);

    int main()
    {
        vector<string>::pointer ptr;
        vector<string> svector;

        ptr = &svector[0];

        function_1(ptr);
        function_2(ptr);
    }

    vector<string>::pointer function_1(vector<string>::pointer ptr)
    {
        string line;
        for(int i = 0; i < 10; i++)
        {
            cout << "enter some input ! \n";  // i need to be able to pass a reference of the vector
            getline(cin, line);               // through various functions, and have the results
            *ptr.pushback(line);              // reflected in main(). But I cannot use member functions
        }                                     // of vector with a dereferenced pointer.
        return(ptr);
    }

    void function_2(vector<string>::pointer ptr)
    {
        for(int i = 0; i < 10; i++)
        {
            cout << *ptr[i] << endl;
        }
    }
    ```
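
    A sketch of the usual fix (not from the post): vector<string>::pointer is a pointer to a string element, not to the vector itself, and &svector[0] on an empty vector is already invalid. Passing the vector by reference sidesteps all of it:

    ```cpp
    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;

    void function_1(vector<string> &v)        // reference: changes show in main()
    {
        string line;
        for (int i = 0; i < 10; i++) {
            cout << "enter some input!\n";
            getline(cin, line);
            v.push_back(line);                // member functions work directly
        }
    }

    void function_2(const vector<string> &v)  // const: read-only access
    {
        for (vector<string>::size_type i = 0; i < v.size(); i++)
            cout << v[i] << endl;
    }

    int main()
    {
        vector<string> svector;
        function_1(svector);
        function_2(svector);
    }
    ```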

    Read the article

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that entering a catch block has some significant cost when executing a program; however, I was wondering if entering a try{} block also had any impact, so I started looking for an answer in Google. There are many opinions, but no benchmarking at all. Some answers I found:

    - Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?
    - Try Catch Performance Java
    - Java try catch blocks

    However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a csv file with this format:

    ```
    host;ip;number;date;status;email;uid;name;lastname;promo_code;
    ```

    where everything after status is optional and will not even have the corresponding `;`, so when parsing, a validation has to be done to see if the value is there. That's where the try/catch issue came to my mind. The current code that I inherited at my company does this:

    ```java
    StringTokenizer st = new StringTokenizer(line, ";");
    String host = st.nextToken();
    String ip = st.nextToken();
    String number = st.nextToken();
    String date = st.nextToken();
    String status = st.nextToken();
    String email = "";
    try {
        email = st.nextToken();
    } catch (NoSuchElementException e) {
        email = "";
    }
    ```

    and it repeats what is done for email with uid, name, lastname and promo_code. I changed everything to:

    ```java
    if (st.hasMoreTokens()) {
        email = st.nextToken();
    }
    ```

    and in fact it performs faster. When parsing a file that doesn't have the optional columns, here are the average times:

    ```
    --- Trying:   122 milliseconds
    --- Checking:  33 milliseconds
    ```

    However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version, so my question is: does the try block really not have any performance impact on my code? The average times for this example are:

    ```
    --- Trying:   105 milliseconds
    --- Checking:  43 milliseconds
    ```

    Can somebody explain what's going on here? Thanks a lot.

    Read the article

  • Representation of a DateTime as local time for a remote user

    - by TwoSecondsBefore
    Hello! I am confused by the problem of time zones. I am writing a web application that will contain some news items with dates of publication, and I want the client to see the date of publication of each news item in the corresponding local time. However, I do not know which time zone the client is located in. I have three questions.

    I have to ask just in case: does DateTimeOffset.UtcNow always return the correct UTC date and time, regardless of whether the server is subject to daylight saving time? For example, if the first time I get the value of this property is two minutes before the daylight saving transition (or before the transition from daylight saving time back) and the second time is 2 minutes after the transition, will the values differ by only 4 minutes? Or is further logic required here? (Question #1)

    Please see the following example and tell me what you think. I post news on the site. I assume that DateTimeOffset.UtcNow takes into account the time zone of the server and daylight saving time, so I immediately get the correct UTC server time when the "Submit" button is pressed. I write this value to an MS SQL database in a field of type datetime2(0). Then a user opens the page with the news, no matter how long after publication; this may be even many years later. I do not ask him to enter his time zone. Instead, I get the offset of his current local time from UTC using the following javascript function:

    ```javascript
    function GetUserTimezoneOffset() {
        var offset = new Date().getTimezoneOffset();
        return offset;
    }
    ```

    Next I calculate the date and time of publication to show the user:

    ```csharp
    public static DateTime Get_Publication_Date_In_User_Local_DateTime(
        DateTime Publication_Utc_Date_Time_From_DataBase,
        int User_Time_Zone_Offset_Returned_by_Javascript)
    {
        int userTimezoneOffset = User_Time_Zone_Offset_Returned_by_Javascript;
        // For example, Javascript returns a value equal to -300, which means the
        // current user's time differs from UTC by 300 minutes. I.e. the offset
        // is UTC +6. In this case, it may be the time zone UTC +5 which
        // currently operates summer time or UTC +6 which currently operates
        // standard time.
        // Right? (Question #2)

        // get an instance of type DateTimeOffset for the date and time of
        // publication for further calculations
        DateTimeOffset utcPublicationDateTime =
            new DateTimeOffset(Publication_Utc_Date_Time_From_DataBase, new TimeSpan(0));

        DateTimeOffset publication_DateTime_In_User_Local_DateTime =
            utcPublicationDateTime.ToOffset(new TimeSpan(0, -userTimezoneOffset, 0));

        return publication_DateTime_In_User_Local_DateTime.DateTime; // return to user
    }
    ```

    Is the value obtained correct? Is this the right approach to solving this problem? (Question #3)
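
    A small usage sketch of the posted method (not from the post; note that getTimezoneOffset() returns UTC minus local time, so -300 means the user's clock is currently at UTC+5 -- whether that comes from a UTC+4 zone on summer time or a UTC+5 zone on standard time):

    ```csharp
    // a publication stored as 12:00 UTC, viewed by a user whose browser
    // reported an offset of -300 minutes (local = UTC+5):
    DateTime pubUtc = new DateTime(2012, 6, 1, 12, 0, 0, DateTimeKind.Utc);
    DateTime shown = Get_Publication_Date_In_User_Local_DateTime(pubUtc, -300);
    Console.WriteLine(shown);   // 2012-06-01 17:00:00, the expected local time
    ```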

    Read the article

  • How to create multiple Repository object inside a Repository class using Unit Of Work?

    - by Santosh
    I am a newbie to MVC3 application development. Currently, we need the following application technologies as requirements:

    - the MVC3 framework
    - an IoC framework (Autofac) to manage object creation dynamically
    - Moq for unit testing
    - Entity Framework
    - the Repository and Unit of Work patterns for the model classes

    I have gone through many articles to get a basic idea about the above points, but I am still a little bit confused about the "Repository and Unit of Work pattern". Basically, what I understand is that Unit of Work is a pattern which is followed along with the Repository pattern in order to share a single DB context among all repository objects. So here is my design:

    IUnitOfWork.cs:

    ```csharp
    public interface IUnitOfWork : IDisposable
    {
        IPermitRepository Permit_Repository { get; }
        IRebateRepository Rebate_Repository { get; }
        IBuildingTypeRepository BuildingType_Repository { get; }
        IEEProjectRepository EEProject_Repository { get; }
        IRebateLookupRepository RebateLookup_Repository { get; }
        IEEProjectTypeRepository EEProjectType_Repository { get; }
        void Save();
    }
    ```

    UnitOfWork.cs:

    ```csharp
    public class UnitOfWork : IUnitOfWork
    {
        #region Private Members

        private readonly CEEPMSEntities context = new CEEPMSEntities();
        private IPermitRepository permit_Repository;
        private IRebateRepository rebate_Repository;
        private IBuildingTypeRepository buildingType_Repository;
        private IEEProjectRepository eeProject_Repository;
        private IRebateLookupRepository rebateLookup_Repository;
        private IEEProjectTypeRepository eeProjectType_Repository;

        #endregion

        #region IUnitOfWork Implementation

        public IPermitRepository Permit_Repository
        {
            get
            {
                if (this.permit_Repository == null)
                {
                    this.permit_Repository = new PermitRepository(context);
                }
                return permit_Repository;
            }
        }

        public IRebateRepository Rebate_Repository
        {
            get
            {
                if (this.rebate_Repository == null)
                {
                    this.rebate_Repository = new RebateRepository(context);
                }
                return rebate_Repository;
            }
        }

        #endregion
    }
    ```

    PermitRepository.cs:

    ```csharp
    public class PermitRepository : IPermitRepository
    {
        #region Private Members

        private CEEPMSEntities objectContext = null;
        private IObjectSet<Permit> objectSet = null;

        #endregion

        #region Constructors

        public PermitRepository()
        {
        }

        public PermitRepository(CEEPMSEntities _objectContext)
        {
            this.objectContext = _objectContext;
            this.objectSet = objectContext.CreateObjectSet<Permit>();
        }

        #endregion

        public IEnumerable<RebateViewModel> GetRebatesByPermitId(int _permitId)
        {
            // need to implement
        }
    }
    ```

    PermitController.cs:

    ```csharp
    public class PermitController : Controller
    {
        #region Private Members

        IUnitOfWork CEEPMSContext = null;

        #endregion

        #region Constructors

        public PermitController(IUnitOfWork _CEEPMSContext)
        {
            if (_CEEPMSContext == null)
            {
                throw new ArgumentNullException("Object can not be null");
            }
            CEEPMSContext = _CEEPMSContext;
        }

        #endregion
    }
    ```

    So here I am wondering how to generate a new repository, for example "TestRepository.cs", using the same pattern, in a place where I can create more than one repository object, like:

    ```csharp
    RebateRepository rebateRepo = new RebateRepository();
    AddressRepository addressRepo = new AddressRepository();
    ```

    because whatever repository object I want to create, I need an object of UnitOfWork first, as implemented in the PermitController class. If I were to do the same in each individual repository class, that would again break the principle of Unit of Work and create multiple instances of the object context. Any idea or suggestion will be highly appreciated. Thank you.
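
    One way to avoid adding a property per repository (a sketch with hypothetical names, not from the post) is a single generic accessor inside UnitOfWork that hands every repository the same shared context:

    ```csharp
    private readonly Dictionary<Type, object> repositories = new Dictionary<Type, object>();

    public TRepo GetRepository<TRepo>() where TRepo : class
    {
        if (!repositories.ContainsKey(typeof(TRepo)))
        {
            // every repository is built around the one shared context
            repositories[typeof(TRepo)] = Activator.CreateInstance(typeof(TRepo), context);
        }
        return (TRepo)repositories[typeof(TRepo)];
    }

    // usage in a controller, still through the injected IUnitOfWork:
    // var rebateRepo = CEEPMSContext.GetRepository<RebateRepository>();
    ```

    A new TestRepository then only needs the context-taking constructor; no change to IUnitOfWork is required.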

    Read the article

  • Returning different data types C#

    - by user1810659
    I have created a class library (DLL) with many different methods, and they return different types of data (string, string[], double, double[]). Therefore I created one class, which I called CustomDataType, for all the methods, containing the different data types, so each method in the library can return an object of the custom class and in this way be able to return multiple data types. I have done it like this:

    ```csharp
    public class CustomDataType
    {
        public double Value;
        public string Timestamp;
        public string Description;
        public string Unit;

        // special for GetParameterInfo
        public string OpcItemUrl;
        public string Source;
        public double Gain;
        public double Offset;
        public string ParameterName;
        public int ParameterID;

        public double[] arrayOfValue;
        public string[] arrayOfTimestamp;
        // public string[] arrayOfParameterName;
        public string[] arrayOfUnit;
        public string[] arrayOfDescription;
        public int[] arrayOfParameterID;
        public string[] arrayOfItemUrl;
        public string[] arrayOfSource;
        public string[] arrayOfModBusRegister;
        public string[] arrayOfGain;
        public string[] arrayOfOffset;
    }
    ```

    The library contains methods like these:

    ```csharp
    public CustomDataType GetDeviceParameters(string deviceName)
    {
        // ...................... code
        getDeviceParametersObj.arrayOfParameterName;
        return getDeviceParametersObj;
    }

    public CustomDataType GetMaxMin(string parameterName, string period, string maxMin)
    {
        // ..................................... code
        getMaxMingObj.Value = (double)reader["MaxMinValue"];
        getMaxMingObj.Timestamp = reader["MeasurementDateTime"].ToString();
        getMaxMingObj.Unit = reader["Unit"].ToString();
        getMaxMingObj.Description = reader["Description"].ToString();
        return getMaxMingObj;
    }

    public CustomDataType GetSelectedMaxMinData(string[] parameterName, string period, string mode)
    {
        // ................................ code
        selectedMaxMinObj.arrayOfValue = MaxMinvalueList.ToArray();
        selectedMaxMinObj.arrayOfTimestamp = MaxMintimeStampList.ToArray();
        selectedMaxMinObj.arrayOfDescription = MaxMindescriptionList.ToArray();
        selectedMaxMinObj.arrayOfUnit = MaxMinunitList.ToArray();
        return selectedMaxMinObj;
    }
    ```

    As illustrated, the different methods return different data, and it works fine for me. But when I import the DLL and want to use the methods, Visual Studio shows all the data types in the CustomDataType class as suggestions for all the methods, even though they return different data. This is illustrated in the picture below.

    As we can see from the picture, with all the different return fields suggested, the user can get confused and choose the wrong return data for some of the methods. So my question is how I can improve this, so that Visual Studio suggests only the relevant return data for each method.
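
    One common refactoring (a sketch with hypothetical names, not from the library) is a small result type per method instead of one grab-bag class, so IntelliSense can only offer the fields that the call actually fills in:

    ```csharp
    public class MaxMinResult
    {
        public double Value { get; set; }
        public string Timestamp { get; set; }
        public string Unit { get; set; }
        public string Description { get; set; }
    }

    public MaxMinResult GetMaxMin(string parameterName, string period, string maxMin)
    {
        // ... same query as before; only the four relevant fields exist here
        return new MaxMinResult
        {
            Value = (double)reader["MaxMinValue"],
            Timestamp = reader["MeasurementDateTime"].ToString(),
            Unit = reader["Unit"].ToString(),
            Description = reader["Description"].ToString()
        };
    }
    ```

    Methods that return lists would return MaxMinResult[] (or List<MaxMinResult>) rather than parallel arrayOfXxx arrays, which also keeps the values for one measurement together.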

    Read the article

  • Please clarify how create/update happens against child entities of an aggregate root

    - by christian
    After much reading and thinking as I begin to get my head wrapped around DDD, I am a bit confused about the best practices for dealing with complex hierarchies under an aggregate root. I think this is a FAQ, but after reading countless examples and discussions, no one is quite talking about the issue I'm seeing.

    If I am aligned with the DDD thinking, entities below the aggregate root should be immutable. This is the crux of my trouble, so if that isn't correct, that is why I'm lost.

    Here is a fabricated example... hope it holds enough water to discuss. Consider an automobile insurance policy (I'm not in insurance, but this matches the language I hear when on the phone with my insurance company). Policy is clearly an entity. Within the policy, let's say we have Auto. Auto, for the sake of this example, only exists within a policy (maybe you could transfer an Auto to another policy, so this has potential to be an aggregate as well, which changes Policy... but assume it's simpler than that for now). Since an Auto cannot exist without a Policy, I think it should be an Entity but not a root. So Policy in this case is an aggregate root.

    Now, to create a Policy, let's assume it has to have at least one auto. This is where I get frustrated. Assume Auto is fairly complex, including many fields and maybe a child for where it is garaged (a Location). If I understand correctly, a "create Policy" constructor/factory would have to take an Auto as input, or be restricted via a builder so it cannot be created without this Auto. And the Auto's creation, since it is an entity, can't be done beforehand (because it is immutable? maybe this is just an incorrect interpretation). So you don't get to say new Auto and then setX, setY, add(Z). If Auto is more than somewhat trivial, you end up having to build a huge hierarchy of builders and such to try to manage creating an Auto within the context of the Policy.

    One more twist to this: later, after the Policy is created, one wishes to add another Auto... or update an existing Auto. Clearly, the Policy controls this... fine... but Policy.addAuto() won't quite fly, because one can't just pass in a new Auto (right!?). Examples say things like Policy.addAuto(VIN, make, model, etc.), but they are all so simple that that looks reasonable. If this factory-method approach falls apart with too many parameters (the entire Auto interface, conceivably), I need a solution.

    From that point in my thinking, I'm realizing that having a transient reference to an entity is OK. So maybe it is fine to have an entity created outside of its parent within the aggregate in a transient environment, so maybe it is OK to say something like:

    ```java
    auto = AutoFactory.createAuto();
    auto.setX
    auto.setY
    ```

    or, if sticking to immutability:

    ```java
    AutoBuilder.new().setX().setY().build()
    ```

    and then have it get sorted out when you say Policy.addAuto(auto).

    This insurance example gets more interesting if you add Events, such as an Accident with its PolicyReports or RepairEstimates... some value objects, but mostly entities that are all really meaningless outside the policy... at least for my simple example. The lifecycle of Policy with its growing hierarchy over time seems the fundamental picture I must draw before really starting to dig in... and it is more the factory concept, or how the child entities get built/attached to an aggregate root, that I haven't seen a solid example of.

    I think I'm close. Hope this is clear and not just a repeat FAQ that has answers all over the place.
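
    One compromise that keeps construction of the data outside the root but creation of the entity inside it (a sketch with hypothetical names, not from the post) is to pass a value object describing the Auto and let Policy build and attach the entity itself:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // a value object: plain data, no identity, freely buildable anywhere
    final class AutoSpec {
        final String vin;
        final String make;
        final String model;
        AutoSpec(String vin, String make, String model) {
            this.vin = vin; this.make = make; this.model = model;
        }
    }

    class Auto {                       // entity: only Policy creates it
        private final AutoSpec spec;
        Auto(AutoSpec spec) { this.spec = spec; }
    }

    class Policy {
        private final List<Auto> autos = new ArrayList<Auto>();

        Auto addAuto(AutoSpec spec) {  // one parameter, however rich Auto is
            Auto auto = new Auto(spec);
            autos.add(auto);           // invariants stay under Policy's control
            return auto;
        }
    }
    ```

    The spec (or a builder producing it) can grow as complex as needed without widening Policy's interface, and the same shape works for Accident, PolicyReport, and the other children.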

    Read the article
