Search Results

Search found 25564 results on 1023 pages for 'design studio'.


  • Unexpected output from Bubblesort program with MSVC vs TCC

    - by Sujith S Pillai
    One of my friends sent this code to me, saying it doesn't work as expected:

        #include <stdio.h>

        void main()
        {
            int a[10] = {23, 100, 20, 30, 25, 45, 40, 55, 43, 42};
            int sizeOfInput = sizeof(a) / sizeof(int);
            int b, outer, inner, c;

            printf("Size is : %d \n", sizeOfInput);
            printf("Values before bubble sort are : \n");
            for (b = 0; b < sizeOfInput; b++)
                printf("%d\n", a[b]);
            printf("End of values before bubble sort... \n");

            for (outer = sizeOfInput; outer > 0; outer--) {
                for (inner = 0; inner < outer; inner++) {
                    printf("Comparing positions: %d and %d\n", inner, inner + 1);
                    if (a[inner] > a[inner + 1]) {
                        int tmp = a[inner];
                        a[inner] = a[inner + 1];
                        a[inner + 1] = tmp;
                    }
                }
                printf("Bubble sort total array size after inner loop is %d :\n", sizeOfInput);
                printf("Bubble sort sizeOfInput after inner loop is %d :\n", sizeOfInput);
            }

            printf("Bubble sort total array size at the end is %d :\n", sizeOfInput);
            for (c = 0; c < sizeOfInput; c++)
                printf("Element: %d\n", a[c]);
        }

    I am using the Microsoft Visual Studio command-line tools to compile this on a Windows XP machine:

        cl /EHsc bubblesort01.c

    My friend gets the correct output on a dinosaur machine (the code is compiled with TCC there). My output is unexpected: the array mysteriously grows in size in between. If you rename the variable sizeOfInput to sizeOfInputt, the program gives the expected results! A search done at the Microsoft Visual C++ Developer Center doesn't give any results for "sizeOfInput". I am not a C/C++ expert, and am curious to find out why this happens - any C/C++ experts who can "shed some light" on this?

    Unrelated note: I seriously thought of rewriting the whole code to use quicksort or merge sort before posting it here. But, after all, it is not Stooge sort...

    Edit: I know the code is not correct (it reads beyond the last element), but I am curious why the variable name makes a difference.


  • How can I convert this to a factory/abstract factory?

    - by Amitd
    I'm using MigraDoc to create a PDF document. I have business entities similar to those used in MigraDoc:

        public class Page
        {
            public List<PageContent> Content { get; set; }
        }

        public abstract class PageContent
        {
            public int Width { get; set; }
            public int Height { get; set; }
            public Margin Margin { get; set; }
        }

        public class Paragraph : PageContent
        {
            public string Text { get; set; }
        }

        public class Table : PageContent
        {
            public int Rows { get; set; }
            public int Columns { get; set; }
            //.... more
        }

    In my business logic, there are rendering classes for each type:

        public interface IPdfRenderer<T>
        {
            T Render(MigraDoc.DocumentObjectModel.Section s);
        }

        class ParagraphRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Paragraph>
        {
            BusinessEntities.PDF.Paragraph paragraph;

            public ParagraphRenderer(BusinessEntities.PDF.Paragraph p)
            {
                paragraph = p;
            }

            public MigraDoc.DocumentObjectModel.Paragraph Render(MigraDoc.DocumentObjectModel.Section s)
            {
                var paragraph = s.AddParagraph();
                // add text from paragraph etc
                return paragraph;
            }
        }

        public class TableRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Tables.Table>
        {
            BusinessEntities.PDF.Table table;

            public TableRenderer(BusinessEntities.PDF.Table t)
            {
                table = t;
            }

            public MigraDoc.DocumentObjectModel.Tables.Table Render(Section obj)
            {
                var table = obj.AddTable();
                // fill table based on table
                return table;
            }
        }

    I want to create a PDF page like this:

        var document = new Document();
        var section = document.AddSection(); // a section is a page in the pdf
        var page = GetPage(1);               // get a page from the business classes

        foreach (var content in page.Content)
        {
            // var renderer = CreateRenderer(content); // get a renderer based on the business type??
            // renderer.Render(section);
        }

    For CreateRenderer() I could use a switch on the type, or a dictionary keyed by type. How can I get/create the renderer generically based on type? How can I use a factory or abstract factory here? Or which design pattern better suits this problem?
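    A dictionary-keyed factory is one concrete shape for the commented-out CreateRenderer() call. The sketch below is an illustration under two assumptions not in the original code: a non-generic IPdfRenderer base interface (instances of the generic IPdfRenderer<T> cannot share one collection), and that ParagraphRenderer/TableRenderer are extended to implement it:

        // Hypothetical non-generic base so renderers for different types fit in one map.
        public interface IPdfRenderer
        {
            void Render(MigraDoc.DocumentObjectModel.Section s);
        }

        public static class RendererFactory
        {
            // Each business type maps to a delegate that builds its renderer.
            private static readonly Dictionary<Type, Func<PageContent, IPdfRenderer>> creators =
                new Dictionary<Type, Func<PageContent, IPdfRenderer>>
                {
                    { typeof(Paragraph), c => new ParagraphRenderer((Paragraph)c) },
                    { typeof(Table),     c => new TableRenderer((Table)c) }
                };

            public static IPdfRenderer Create(PageContent content)
            {
                Func<PageContent, IPdfRenderer> creator;
                if (!creators.TryGetValue(content.GetType(), out creator))
                    throw new NotSupportedException(content.GetType().Name);
                return creator(content);
            }
        }

    With something like this, supporting a new content type means adding one dictionary entry rather than growing a switch, and the loop becomes RendererFactory.Create(content).Render(section).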


  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question; it's a design question that has to do with avoiding mutable state, functional thinking and that sort of thing. It just happens that I'm using Scala. Given this set of requirements:

    - Input comes from an essentially infinite stream of random numbers between 1 and 10.
    - Final output is either SUCCEED or FAIL.
    - There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times, so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself.

    Pseudocode:

        if (first number == 1) SUCCEED
        else if (first number >= 9) FAIL
        else {
            first = first number
            rest = rest of stream
            for each (n in rest) {
                if (n == 1) FAIL
                else if (n == first) SUCCEED
                else continue
            }
        }

    Here is a possible mutable implementation:

        sealed trait Result
        case object Fail extends Result
        case object Succeed extends Result
        case object NoResult extends Result

        class StreamListener {
          private var target: Option[Int] = None

          def evaluate(n: Int): Result = target match {
            case None =>
              if (n == 1) Succeed
              else if (n >= 9) Fail
              else {
                target = Some(n)
                NoResult
              }
            case Some(t) =>
              if (n == t) Succeed
              else if (n == 1) Fail
              else NoResult
          }
        }

    This will work, but it smells to me. StreamListener.evaluate is not referentially transparent, and the use of the NoResult token just doesn't feel right. It does have the advantage, though, of being clear and easy to use/code. Besides, there has to be a functional solution to this, right? I've come up with 2 other possible options:

    1. Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener, which doesn't feel right.

    2. Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen, since each listener is forcing evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation.

    Is there any standard Scala/FP idiom I'm overlooking here?


  • How to enable UAC prompt through programming

    - by peter
    I want to implement a UAC prompt for an application in Visual C++ through a manifest. The operating system is Windows Server 2008, 32-bit, on an X7460 (2 processors), and the exe is myproject.exe. For testing I will build the application on a Windows XP machine, copy the exe to the Windows Server 2008 machine, and replace it there.

    So what I did is add a manifest named myproject.exe.manifest. My project has 3 folders (Header Files, Resource Files and Source Files); I added this manifest to the Source Files folder containing the other .cpp and .c code:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="11.1.4.0" processorArchitecture="X7460" name="myproject" type="win32"/>
          <description>myproject Problem</description>
          <!-- Identify the application security requirements. -->
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    Then I added these lines for the resource file (.rc). Specifically, there is a header file (Myproject.h), and I added them there:

        #define MANIFEST_RESOURCE_ID 1
        MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest"

    Finally I did the following steps:

    1. Open your project in Microsoft Visual Studio 2005.
    2. Under Project, select Properties.
    3. In Properties, select Manifest Tool, and then select Input and Output.
    4. Add in the name of your application manifest file under Additional manifest files.
    5. Rebuild your application.

    But I am getting a lot of syntax errors. Is there any problem in the way I did this? If I comment out the two lines added to Myproject.h, there is no error other than this general one:

        c1010070: Failed to load and parse the manifest. The system cannot find the file specified. .\myproject.exe.manifest

    How do I enable a UAC prompt through programming?
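    A guess worth checking (not something the post confirms): the line MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest" is resource-script syntax, not C, so if Myproject.h is #included by the .cpp/.c sources, the C++ compiler will choke on it - which would explain the flood of syntax errors. Keeping the #define in the header but moving the resource statement into the .rc file itself would look roughly like this:

        // Myproject.h - safe to include from C/C++ sources
        #define MANIFEST_RESOURCE_ID 1

        // myproject.rc - compiled only by the resource compiler
        #include "Myproject.h"
        MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest"

    Note that embedding the manifest via the .rc and also listing it under "Additional manifest files" can conflict, and the path in the general error suggests the manifest file also needs to sit where the tool expects it (next to the project/.rc file).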


  • Translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain-driven design, a specification is used to check if some object, also known as the candidate, is compliant with a (domain-specific) requirement. For example, the specification 'IsTaskDone' goes like:

        class IsTaskDone extends Specification<Task> {
            boolean isSatisfiedBy(Task candidate) {
                return candidate.isDone();
            }
        }

    The above specification can be used for many purposes, e.g. it can be used to validate if a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this nice, domain-related specification to query a database. Of course, the easiest solution would be to retrieve all entities of our desired type from the database, and filter that list in-memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our db increases.

    Proposal: my idea is to create a 'ConversionManager' that translates my specification into a persistence-technique-specific criteria; think of the JPA Predicate class. The service looks as follows:

        public interface JpaSpecificationConversionManager {

            <T> Predicate getPredicateFor(Specification<T> specification,
                                          Root<T> root,
                                          CriteriaQuery<?> cq,
                                          CriteriaBuilder cb);

            JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter);
        }

    By using our manager, the users can register their own conversion logic, isolating the domain-related specification from persistence-specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to automatically register those converters. JPA repository implementations could then use my manager, via dependency injection, to offer a find-by-specification method. Providing a find-by-specification should drastically reduce the number of methods on our repository interface.

    In theory, this all sounds decent, but I feel like I'm missing something critical. What do you think of my proposal? Does it comply with the DDD way of thinking? Or is there already a framework that does something identical to what I just described?


  • Exporting classes containing std:: objects (vector, map, etc.) from a DLL

    - by RnR
    I'm trying to export classes from a DLL that contain objects such as std::vectors and std::strings - the whole class is declared as DLL export through:

        class DLL_EXPORT FontManager {

    The problem is that for members of the complex types I get this warning:

        warning C4251: 'FontManager::m__fonts' : class 'std::map<_Kty,_Ty>' needs to have dll-interface to be used by clients of class 'FontManager'
                with
                [
                    _Kty=std::string,
                    _Ty=tFontInfoRef
                ]

    I'm able to remove some of the warnings by putting the following explicit instantiations before the members, even though I'm not changing the type of the member variables themselves:

        template class DLL_EXPORT std::allocator<tCharGlyphProviderRef>;
        template class DLL_EXPORT std::vector<tCharGlyphProviderRef, std::allocator<tCharGlyphProviderRef> >;

        std::vector<tCharGlyphProviderRef> m_glyphProviders;

    It looks like the declaration "injects" DLL_EXPORT for when the member is compiled, but is it safe? Does it really change anything when the client compiles this header and uses the std container on his side? Will it make all future uses of such a container DLL_EXPORT (and possibly not inline)? And does it really solve the problem that the warning tries to warn about? Is this warning anything I should be worried about, or would it be best to disable it in the scope of these constructs? The clients and the DLL will always be built using the same set of libraries and compilers, and those are header-only classes... I'm using Visual Studio 2003 with the standard STD library.

    Update: I'd like to focus the question more, as I see the answers are general, and here we're talking about std containers and types (such as std::string). Maybe the question really is: can we disable the warning for standard containers and types available to both the client and the DLL through the same library headers, and treat them just as we'd treat an int or any other built-in type? (It does seem to work correctly on my side.) If so, what should be the conditions under which we can do this? Or should using such containers be prohibited, or at least ultra care taken to make sure no assignment operators, copy constructors etc. get inlined into the DLL client? In general I'd like to know whether you feel designing a DLL interface with such objects (and, for example, using them to return stuff to the client as return-value types) is a good idea or not, and why - I'd like to have a "high level" interface to this functionality... Maybe the best solution is what Neil Butterworth suggested - creating a static library?


  • Foreign key pointing to different tables

    - by Álvaro G. Vicario
    I'm implementing the table-per-subclass design I discussed in a previous question. It's a product database where products can have very different attributes depending on their type, but attributes are fixed for each type and types are not manageable at all. I have a master table that holds common attributes:

        product_type
        ============
        product_type_id   INT
        product_type_name VARCHAR

    E.g.:

        1 'Magazine'
        2 'Web site'

        product
        =======
        product_id      INT
        product_name    VARCHAR
        product_type_id INT -> Foreign key to product_type.product_type_id
        valid_since     DATETIME
        valid_to        DATETIME

    E.g.:

        1 'Foo Magazine'      1 '1998-12-01' NULL
        2 'Bar Weekly Review' 1 '2005-01-01' NULL
        3 'E-commerce App'    2 '2009-10-15' NULL
        4 'CMS'               2 '2010-02-01' NULL

    ... and one subtable for each product type:

        item_magazine
        =============
        item_magazine_id INT
        title            VARCHAR
        product_id       INT -> Foreign key to product.product_id
        issue_number     INT
        pages            INT
        copies           INT
        close_date       DATETIME
        release_date     DATETIME

    E.g.:

        1 'Foo Magazine Regular Issue'      1 89 52 150000 '2010-06-25' '2010-06-31'
        2 'Foo Magazine Summer Special'     1 90 60 175000 '2010-07-25' '2010-07-31'
        3 'Bar Weekly Review Regular Issue' 2 12 16  20000 '2010-06-01' '2010-06-02'

        item_web_site
        =============
        item_web_site_id INT
        name             VARCHAR
        product_id       INT -> Foreign key to product.product_id
        bandwidth        INT
        hits             INT
        date_from        DATETIME
        date_to          DATETIME

    E.g.:

        1 'The Carpet Store'        3 10  90000 '2010-06-01' NULL
        2 'Penauts R Us'            3 20 180000 '2010-08-01' NULL
        3 'Springfield Cattle Fair' 4 15 150000 '2010-05-01' '2010-10-31'

    Now I want to add some fees that relate to one specific item. Since there are very few subtypes, it's feasible to do this:

        fee
        ===
        fee_id           INT
        fee_description  VARCHAR
        item_magazine_id INT -> Foreign key to item_magazine.item_magazine_id
        item_web_site_id INT -> Foreign key to item_web_site.item_web_site_id
        net_price        DECIMAL

    E.g.:

        1 'Front cover'   2    NULL 1999.99
        2 'Half page'     2    NULL  500.00
        3 'Square banner' NULL 3     790.50
        4 'Animation'     NULL 3    2000.00

    I have tight foreign keys to handle cascading updates, and I presume I can add a constraint so only one of the IDs is NOT NULL. However, my intuition suggests that it would be cleaner to get rid of the item_WHATEVER_id columns and keep a separate table:

        fee_to_item
        ===========
        fee_id     INT -> Foreign key to fee.fee_id
        product_id INT -> Foreign key to product.product_id
        item_id    INT -> ???

    But I can't figure out how to create foreign keys on item_id, since the source table varies depending on product_id. Should I stick to my original idea?
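    For the record, the "only one of the IDs is NOT NULL" constraint from the first design can be written as a plain CHECK - a sketch in generic SQL, with a made-up constraint name:

        ALTER TABLE fee
          ADD CONSTRAINT ck_fee_exactly_one_item CHECK (
            (item_magazine_id IS NOT NULL AND item_web_site_id IS NULL)
            OR (item_magazine_id IS NULL AND item_web_site_id IS NOT NULL)
          );

    This forces exactly one of the two columns to be non-NULL, so each fee points at one item of one known type while each item_*_id column keeps its own real foreign key - something the generic fee_to_item.item_id cannot offer.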


  • Is it OK to dynamic_cast "this" as a return value?

    - by Panayiotis Karabassis
    This is more of a design question. I have a template class, and I want to add extra methods to it depending on the template type. To practice the DRY principle, I have come up with this pattern (definitions intentionally omitted):

        template <class T>
        class BaseVector : public boost::array<T, 3>
        {
        protected:
            BaseVector<T>(const T x, const T y, const T z);

        public:
            bool operator == (const Vector<T> &other) const;
            Vector<T> operator + (const Vector<T> &other) const;
            Vector<T> operator - (const Vector<T> &other) const;

            Vector<T> &operator += (const Vector<T> &other)
            {
                (*this)[0] += other[0];
                (*this)[1] += other[1];
                (*this)[2] += other[2];
                return *dynamic_cast<Vector<T> * const>(this);
            }
        };

        template <class T>
        class Vector : public BaseVector<T>
        {
        public:
            Vector<T>(const T x, const T y, const T z)
                : BaseVector<T>(x, y, z)
            {
            }
        };

        template <>
        class Vector<double> : public BaseVector<double>
        {
        public:
            Vector<double>(const double x, const double y, const double z);
            Vector<double>(const Vector<int> &other);
            double norm() const;
        };

    I intend BaseVector to be nothing more than an implementation detail. This works, but I am concerned about operator+=. My question is: is the dynamic cast of the this pointer a code smell? Is there a better way to achieve what I am trying to do (avoid code duplication and unnecessary casts in the user code)? Or am I safe, since the BaseVector constructor is private?


  • Grid sorting with persistent master sort

    - by MikeWyatt
    I have a UI with a grid. Each record in the grid is sorted by a "master" sort column; let's call it a page number. Each record is a story in a magazine. I want the user to be able to drag and drop a record to a new position in the grid and automatically update the page number field to reflect the updated position. Easy enough, right?

    Now imagine that I also want to have the grid sortable by any other column (story title, section, author name, etc.). How does the drag and drop operation work now?

    1. Revert to the page number sort during or after the drag and drop operation? This could confuse the user (why did my sort just change?). It would also result in arbitrary row positioning. Would the story now be before the row that was after it when the user dropped it? Or would it be after the row that was before it? Those rows may now be widely separated after the master-order sort.

    2. Disable the drag and drop feature if the grid isn't currently sorted by the page number? This would be easy, but the user might wonder why he can't drag and drop at certain times. Knowing to first sort by page number may not be very intuitive.

    3. Let the user rearrange his rows, but not make any changes to the page number?

    4. Require the user to enter an "Arrange Stories" mode, in which the grid sort is temporarily switched to page number and drag and drop is enabled? They would then exit the mode, and the previous sort would be reapplied. The big difference between this and the second option is that it would be more explicit than simply clicking on a column header.

    Any other ideas, or reasons why one of the above is the way to go?

    EDIT: I'd like to point out that any of the above is technically possible, and easy to implement. My question is design-related. What is the most intuitive way to solve this problem, from the user's perspective?


  • Making a Visual C++ DLL from a C++ class

    - by prosseek
    I have the following C++ code to make a DLL (Visual Studio 2010):

        class Shape {
        public:
            Shape() { nshapes++; }
            virtual ~Shape() { nshapes--; }
            double x, y;
            void move(double dx, double dy);
            virtual double area(void) = 0;
            virtual double perimeter(void) = 0;
            static int nshapes;
        };

        class __declspec(dllexport) Circle : public Shape {
        private:
            double radius;
        public:
            Circle(double r) : radius(r) { }
            virtual double area(void);
            virtual double perimeter(void);
        };

        class __declspec(dllexport) Square : public Shape {
        private:
            double width;
        public:
            Square(double w) : width(w) { }
            virtual double area(void);
            virtual double perimeter(void);
        };

    Note the __declspec on the exported classes, as in class __declspec(dllexport) Circle. I could build a DLL with the following commands:

        cl.exe /c example.cxx
        link.exe /OUT:"example.dll" /DLL example.obj

    When I tried to use the library:

        Square* square;
        square->area();

    I got error messages like the one below. What's wrong or missing?

        example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall ... Square::area(void)" (?area@Square@@UAENXZ)

    ADDED: Following wengseng's answer, I modified the header file, and for the DLL C++ code I added:

        #define XYZLIBRARY_EXPORT

    However, I still got errors:

        example_unittest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall Circle::Circle(double)" (__imp_??0Circle@@QAE@N@Z) referenced in function "protected: virtual void __thiscall TestOne::SetUp(void)" (?SetUp@TestOne@@MAEXXZ)
        example_unittest.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall Square::Square(double)" (__imp_??0Square@@QAE@N@Z) referenced in function "protected: virtual void __thiscall TestOne::SetUp(void)" (?SetUp@TestOne@@MAEXXZ)
        example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Square::area(void)" (?area@Square@@UAENXZ)
        example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Square::perimeter(void)" (?perimeter@Square@@UAENXZ)
        example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Circle::area(void)" (?area@Circle@@UAENXZ)
        example_unittest.obj : error LNK2001: unresolved external symbol "public: virtual double __thiscall Circle::perimeter(void)" (?perimeter@Circle@@UAENXZ)


  • What block is not being tested in my test method? (VS08 Test Framework)

    - by daft
    I have the following code:

        private void SetControlNumbers()
        {
            string controlString = "";
            int numberLength = PersonNummer.Length;

            switch (numberLength)
            {
                case (10): controlString = PersonNummer.Substring(6, 4); break;
                case (11): controlString = PersonNummer.Substring(7, 4); break;
                case (12): controlString = PersonNummer.Substring(8, 4); break;
                case (13): controlString = PersonNummer.Substring(9, 4); break;
            }

            ControlNumbers = Convert.ToInt32(controlString);
        }

    It is tested using the following test methods:

        [TestMethod()]
        public void SetControlNumbers_Length10()
        {
            string pNummer = "9999999999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

        [TestMethod()]
        public void SetControlNumbers_Length11()
        {
            string pNummer = "999999-9999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

        [TestMethod()]
        public void SetControlNumbers_Length12()
        {
            string pNummer = "199999999999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

        [TestMethod()]
        public void SetControlNumbers_Length13()
        {
            string pNummer = "1999999-9999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

    For some reason Visual Studio says that I have 1 block that is not tested, despite showing all the code in the method under test in blue (i.e. the code is covered by my unit tests). Is this because I don't have a default case defined in the switch? When the SetControlNumbers() method is called, the string on which it operates has already been validated and checked to see that it conforms to the specification and that the various Substring calls in the switch will generate a string containing 4 chars. I'm just curious as to why it says there is 1 untested block. I'm no unit test guru at all, so I'd love some feedback on this. Also, how can I improve the conversion after the switch to make it safer, other than adding a try-catch block and checking for FormatExceptions and OverflowExceptions?
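    The untested block is most likely the implicit fall-through path of the switch (no case matches), which none of the four tests exercise. A sketch of one way to both cover that path and make the conversion safer without try-catch - the explicit default and int.TryParse are additions, not part of the original code:

        private void SetControlNumbers()
        {
            int numberLength = PersonNummer.Length;
            string controlString;

            switch (numberLength)
            {
                case 10: controlString = PersonNummer.Substring(6, 4); break;
                case 11: controlString = PersonNummer.Substring(7, 4); break;
                case 12: controlString = PersonNummer.Substring(8, 4); break;
                case 13: controlString = PersonNummer.Substring(9, 4); break;
                default:
                    // Fails loudly instead of letting Convert.ToInt32("") throw later.
                    throw new ArgumentException("Unexpected personnummer length: " + numberLength);
            }

            int controlNumbers;
            if (!int.TryParse(controlString, out controlNumbers))
                throw new ArgumentException("Control digits are not numeric: " + controlString);
            ControlNumbers = controlNumbers;
        }

    A fifth test passing a string of an unexpected length (and asserting the exception) would then cover the remaining block.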


  • C++ include statement required if defining a map in a header file

    - by Justin
    I was doing a project for a computer course on programming concepts. This project was to be completed in C++ using the object-oriented designs we learned throughout the course. Anyhow, I have two files, symboltable.h and symboltable.cpp. I want to use a map as the data structure, so I define it in the private section of the header file. I #include <map> in the cpp file before I #include "symboltable.h". I get several errors from the compiler (MS VS 2008 Pro) when I go to debug/run the program, the first of which is:

        Error 1 error C2146: syntax error : missing ';' before identifier 'table' c:\users\jsmith\documents\visual studio 2008\projects\project2\project2\symboltable.h 22 Project2

    To fix this I had to #include <map> in the header file, which to me seems strange. Here are the relevant code files:

        // symboltable.h
        #include <map>

        class SymbolTable {
        public:
            SymbolTable() {}
            void insert(string variable, double value);
            double lookUp(string variable);
            void init(); // Added as part of the spec given in the conference area.
        private:
            map<string, double> table; // Our container for variables and their values.
        };

    and

        // symboltable.cpp
        #include <map>
        #include <string>
        #include <iostream>
        using namespace std;

        #include "symboltable.h"

        void SymbolTable::insert(string variable, double value)
        {
            // Creates a new map entry; if the variable name already exists it overwrites the last value.
            table[variable] = value;
        }

        double SymbolTable::lookUp(string variable)
        {
            // Search for the variable; find() returns a position, and if that's the end then we didn't find it.
            if (table.find(variable) == table.end())
                throw exception("Error: Uninitialized variable");
            else
                return table[variable];
        }

        void SymbolTable::init()
        {
            table.clear(); // Clears the map, removes all elements.
        }


  • Are there any modern GUI toolkits which implement a hierarchical menu buffer zone?

    - by scomar
    In Bruce Tognazzini's quiz on Fitts's Law, the question discussing the bottleneck in the hierarchical menu (as used in almost every modern desktop UI) talks about his design for the original Mac:

    "The bottleneck is the passage between the first-level menu and the second-level menu. Users first slide the mouse pointer down to the category menu item. Then, they must carefully slide the mouse directly across (horizontally) in order to move the pointer into the secondary menu. The engineer who originally designed hierarchicals apparently had his forearm mounted on a track so that he could move it perfectly in a horizontal direction without any vertical component. Most of us, however, have our forearms mounted on a pivot we like to call our elbow. That means that moving our hand describes an arc, rather than a straight line. Demanding that pivoted people move a mouse pointer along in a straight line horizontally is just wrong. We are naturally going to slip downward even as we try to slide sideways. When we are not allowed to slip downward, the menu we're after is going to slam shut just before we get there.

    "The Windows folks tried to overcome the pivot problem with a hack: If they see the user move down into range of the next item on the primary menu, they don't instantly close the second-level menu. Instead, they leave it open for around a half second, so, if users are really quick, they can be inaccurate but still get into the second-level menu before it slams shut. Unfortunately, people's reaction to a heightened chance of error is to slow down, rather than speed up, a well-established phenomenon. Therefore, few users will ever figure out that moving faster could solve their problem. Microsoft's solution is exactly wrong. When I specified the Mac hierarchical menu algorithm in the mid-'80s, I called for a buffer zone shaped like a <, so that users could make an increasingly greater error as they neared the hierarchical without fear of jumping to an unwanted menu. As long as the user's pointer was moving a few pixels over for every one down, on average, the menu stayed open, no matter how slowly they moved. (Cancelling was still really easy: just deliberately move up or down.)"

    This just blew me away! Such a simple idea which would result in a huge improvement in usability. I'm sure I'm not the only one who regularly has the next level of a menu slam shut because I don't move the mouse pointer in a perfectly horizontal line.

    So my question is: are there any modern UI toolkits which implement this brilliant idea of a <-shaped buffer zone in hierarchical menus? And if not, why not?!


  • Suggest an alternative way to organize/build a database solution.

    - by Hamish Grubijan
    We are using Visual Studio 2010, but this was first conceived with VS2003. I will forward the best suggestions to my team. The current setup almost makes me vomit. It is a C# solution with most projects containing .sql files. Because we support Microsoft, Oracle, and Sybase, we home-brewed a pre-processor, much like the C preprocessor, except that substitutions are performed by a home-brewed C# program without using yacc and tools like that. #ifdefs are used for conditional macro definitions, and yeah - macros are the way this is done. A macro can expand to another macro or two, but this should eventually terminate. Only macros have #ifdef in them - the rest of the SQL-like code just uses these macros.

    Now, the various configurations: Debug, MNDebug, MNRelease, Release, SQL_APPLY_ALL, SQL_APPLY_MSFT, SQL_APPLY_ORACLE, SQL_APPLY_SYBASE, SQL_BUILD_OUTPUT_ALL, SQL_COMPILE, as well as 2 more. Also: Any CPU, Mixed Platforms, Win32.

    What drives me nuts is having to configure it correctly, choosing the right one out of 12 x 3 = 36 configurations, and having to substitute the database name depending on the type of database: config, main, or gateway. I am thinking that the configurations should be reduced to just Debug, Release, and SQL_APPLY. Also, using 0, 1, and 2 seems so '80s...

    Finally, I think my intention to build or not to build 3 types of databases for 3 types of vendors should be configured with just a tic-tac-toe board like:

        XOX
        OOX
        XXX

    In this case it would mean build MSFT+CONFIG, all SYBASE, and all GATEWAY. Still, the overall thing, which uses a text file and a pre-processor and many configurations, seems incredibly clunky. It is the year 2010 now, and someone out there is bound to have a very clean and/or creative tool/solution. The only pro is that the existing collection of macros has been well tested.

    Have you ever had to write SQL that would work for several vendors? How did you do it?

    SqlVars.txt (every one of 30 users makes a copy of a template and modifies it to suit their needs):

        // This is the default parameters file and should not be changed.
        // You can overwrite any of these parameters by copying the appropriate
        // section to override into SqlVars.txt and providing your own information.

        // Build types are 0-Config, 1-Main, 2-Gateway
        BUILD_TYPE=1
        REMOVE_COMMENTS=1

        // Login information used when applying to a Microsoft SQL server database
        SQL_APPLY_MSFT_version=SQL2005
        SQL_APPLY_MSFT_database=msftdb
        SQL_APPLY_MSFT_server=ABC
        SQL_APPLY_MSFT_user=msftusr
        SQL_APPLY_MSFT_password=msftpwd

        // Login information used when applying to an Oracle database
        SQL_APPLY_ORACLE_version=ORACLE10g
        SQL_APPLY_ORACLE_server=oradb
        SQL_APPLY_ORACLE_user=orausr
        SQL_APPLY_ORACLE_password=orapwd

        // Login information used when applying to a Sybase database
        SQL_APPLY_SYBASE_version=SYBASE125
        SQL_APPLY_SYBASE_database=sybdb
        SQL_APPLY_SYBASE_server=sybdb
        SQL_APPLY_SYBASE_user=sybusr
        SQL_APPLY_SYBASE_password=sybpwd

        ... (THIS GOES ON)


  • When does logic belong in the Business Object/Entity, and when does it belong in a Service?

    - by Casey
    In trying to understand Domain-Driven Design, I keep returning to a question that I can't seem to definitively answer: how do you determine what logic belongs to a domain entity, and what logic belongs to a domain service?

    Example: we have an Order class for an online store. This class is an entity and an aggregate root (it contains OrderItems).

        public class Order : IOrder
        {
            private List<IOrderItem> orderItems;

            public Order(List<IOrderItem> items)
            {
                orderItems = items;
            }

            // This logic seems to belong in the entity.
            public decimal CalculateTotalItemWeight()
            {
                decimal totalWeight = 0;
                foreach (IOrderItem orderItem in orderItems)
                {
                    totalWeight += orderItem.Weight;
                }
                return totalWeight;
            }
        }

    I think most people would agree that CalculateTotalItemWeight belongs on the entity. However, at some point we have to ship this order to the customer. To accomplish this we need to do two things:

    1) Determine the postage rate necessary to ship this order.
    2) Print a shipping label after determining the postage rate.

    Both of these actions will require dependencies that are outside the Order entity, such as an external web service to retrieve postage rates. How should we accomplish these two things? I see a few options:

    1) Code the logic directly in the domain entity, like CalculateTotalItemWeight. We then call:

        Order.GetPostageRate
        Order.PrintLabel

    2) Put the logic in a service that accepts IOrder. We then call:

        PostageService.GetPostageRate(Order)
        PrintService.PrintLabel(Order)

    3) Create a class for each action that operates on an Order, and pass an instance of that class to the Order through constructor injection (this is a variation of option 1, but allows reuse of the RateRetriever and LabelPrinter classes):

        public class Order : IOrder
        {
            private List<IOrderItem> orderItems;
            private RateRetriever _retriever;
            private LabelPrinter _printer;

            public Order(List<IOrderItem> items, RateRetriever retriever, LabelPrinter printer)
            {
                orderItems = items;
                _retriever = retriever;
                _printer = printer;
            }

            public decimal GetPostageRate()
            {
                return _retriever.GetPostageRate(this);
            }

            public void PrintLabel()
            {
                _printer.PrintLabel(this);
            }
        }

    Which one of these methods do you choose for this logic, if any? What is the reasoning behind your choice? Most importantly, is there a set of guidelines that led you to your choice?
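    For what it's worth, option 2 is often expressed as a domain service whose infrastructure dependencies are injected into the service rather than into the entity. A sketch under stated assumptions: IPostageRateProvider is a hypothetical gateway to the external web service, and IOrder is assumed to expose CalculateTotalItemWeight:

        // Hypothetical gateway to the external postage web service.
        public interface IPostageRateProvider
        {
            decimal GetRate(decimal weight);
        }

        // A domain service: stateless, named after a domain activity, and free to
        // depend on infrastructure the Order entity shouldn't know about.
        public class ShippingService
        {
            private readonly IPostageRateProvider rateProvider;

            public ShippingService(IPostageRateProvider rateProvider)
            {
                this.rateProvider = rateProvider;
            }

            public decimal GetPostageRate(IOrder order)
            {
                // The entity still owns the pure calculation; the service only
                // adds what the entity can't do alone.
                return rateProvider.GetRate(order.CalculateTotalItemWeight());
            }
        }

    This keeps the Order free of infrastructure while the pure weight calculation stays on the entity.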


  • Code to generate random numbers in C++

    - by user1678927
    Basically I have to write a program to generate random numbers to simulate the rolling of a pair of dice. This program should be constructed in multiple files: the main function should be in one file, the other functions should be in a second source file, and their prototypes should be in a header file.

    First I write a short function that returns a random value between 1 and 6 to simulate the rolling of a single 6-sided die. Second, I write a function that pretends to roll a pair of dice by calling this function twice. My program starts by asking the user how many rolls should be made. Then I write a function to simulate rolling the dice this many times, keeping a count in an array of exactly how many times the values 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 (each number is the sum of a pair of dice) occur. Later I write a function to display a small bar chart using these counts, where the number of asterisks printed corresponds to the count; for a sample of 144 rolls it would ideally look like:

        2  3  4  5  6  7  8  9  10 11 12
        *  *  *  *  *  *  *  *  *  *  *
        (... a column of asterisks under each label, one per occurrence of that sum ...)

    Next, to see how well the random number generator is doing, I write a function to compute the average value rolled and compare this to the ideal average of 7. Also, I print out a small table showing, in separate columns, the counts of each roll made by the program, the ideal count based on the frequencies above given the total number of rolls, and the difference between these values.

    This is my incomplete code so far (compiler: Visual Studio 2010):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int rolling(void)
        {   /* returns a random value between 1 and 6 */
            int dice = 1 + (rand() % 6);   /* srand() is called once in main, not here */
            return dice;
        }

        void roll_dice(int *num1, int *num2)
        {   /* calls 'rolling' twice; results come back through the pointers */
            *num1 = rolling();
            *num2 = rolling();
        }

        int main(void)
        {
            int times, i, sum, n1, n2;
            int c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11; /* counters for each sum */
            srand((unsigned)time(NULL)); /* seed the generator once */
            printf("Please enter how many times you want to roll the dice.\n");
            scanf_s("%i", &times);
            /* example usage: roll_dice(&n1, &n2); sum = n1 + n2; */

    I intend to use counters to count each sum and store the number (the count) in an array. I know I need a loop (for) and some conditional statements (if), but my main problem was getting the values out of roll_dice and storing them in n1 and n2, so that I can sum them up and store the sum in 'sum' - the pointer parameters above (a fix to my original version, which tried "return result1, result2") are meant to solve that.


  • Single entity with single view, or two views, in MVC3/VS2010?

    - by user2905798
    I have the following entity model:

        public class Employee
        {
            public int EmployeeID { get; set; }
            public string EmployeeName { get; set; }
            public DateTime EmployeeDob { get; set; }
            public DateTime? EmployeeDateOfJoin { get; set; }
            public string EmpFamilyName { get; set; }
            public DateTime EmpFamilyDob { get; set; }
        }

    Here I have to design a view for collecting employee information and employee family information. I am working with already-available data, in which EmpFamilyDob was not mandatory; now it is being made mandatory, but the previous data doesn't contain it. So I have added this new property EmpFamilyDob to the model and made it required through DataAnnotations.

    Now there are two sets of views to be developed:

    1. A view which simply collects the employee information without the employee family information (EmpFamilyName and EmpFamilyDob). This view is used by the HR section to insert employee details.
    2. Since EmpFamilyName and EmpFamilyDob are now mandatory, some other section will edit the data and update them as and when the information about the employee's family details is received.

    I have controller actions for CreateNew and Edit, generated from the default model. There are two user actions being performed: when the user clicks Create New, he should be able to enter only the employee information; and as and when the other section receives the employee family details, they update the family name and family date of birth. Since I have a single view with most of these fields required and not allowing null, how can I achieve this in a single view? I have re-corrected the model like this:

        public class Employee
        {
            public int EmployeeID { get; set; }
            public string EmployeeName { get; set; }
            public DateTime EmployeeDob { get; set; }
            public DateTime? EmployeeDateOfJoin { get; set; }
            public string EmpFamilyName { get; set; }
            public DateTime? EmpFamilyDob { get; set; }
        }

    Now by default I hope the CreateNew action would insert null values for EmpFamilyName (string datatype) and EmpFamilyDob. In the Edit action the user should be made to enter EmpFamilyName and EmpFamilyDob (mandatory). As there is every chance that the user might edit other information about the employee (like EmployeeDob), I don't want to go for partial views. Can you help me out with some illustration? Thanks in advance.
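    A common way out (a sketch, not the only answer) is to stop binding both screens to the Employee entity and give each screen its own view model, so the Required rules live only where they apply; the class names below are invented, while the property names mirror the question:

        using System.ComponentModel.DataAnnotations;

        // Used by the HR "Create New" screen: no family fields at all.
        public class CreateEmployeeViewModel
        {
            [Required]
            public string EmployeeName { get; set; }

            public DateTime EmployeeDob { get; set; }

            public DateTime? EmployeeDateOfJoin { get; set; }
        }

        // Used by the Edit screen: family details are mandatory here only.
        public class EditEmployeeViewModel
        {
            public int EmployeeID { get; set; }

            [Required]
            public string EmployeeName { get; set; }

            public DateTime EmployeeDob { get; set; }

            public DateTime? EmployeeDateOfJoin { get; set; }

            [Required]
            public string EmpFamilyName { get; set; }

            [Required]
            public DateTime? EmpFamilyDob { get; set; }
        }

    Each controller action then maps its view model to the Employee entity, and the entity itself can keep the nullable EmpFamilyDob so the legacy rows stay valid.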


  • Add Service Reference is generating Message Contracts

    - by JohnIdol
    OK, this has been haunting me for a while; I can't find much on Google and I am starting to lose hope, so I am turning to the SO community.

    When I import a given service using "Add Service Reference" in Visual Studio 2008 (SP1), all the request/response messages are being unnecessarily wrapped into message contracts (named as "operationName" + "Request"/"Response" + "1" at the end). The code generator says:

        // CODEGEN: Generating message contract since the operation XXX is neither RPC nor document wrapped.

    The guys who are generating the WSDL from a Java service say they are specifying DOCUMENT-LITERAL/WRAPPED. Any help/pointer/clue would be highly appreciated.

    Update: this is a sample of my WSDL for one of the operations that look suspicious. Note the mismatch on the message element attribute for the request, compared to the response.

        <!-- imports namespaces and defines elements -->
        <wsdl:types>
          <xsd:schema targetNamespace="http://WHATEVER/" xmlns:xsd_1="http://WHATEVER_1/" xmlns:xsd_2="http://WHATEVER_2/">
            <xsd:import namespace="http://WHATEVER_1/" schemaLocation="WHATEVER_1.xsd"/>
            <xsd:import namespace="http://WHATEVER_2/" schemaLocation="WHATEVER_2.xsd"/>
            <xsd:element name="myOperationResponse" type="xsd_1:MyOperationResponse"/>
            <xsd:element name="myOperation" type="xsd_1:MyOperationRequest"/>
          </xsd:schema>
        </wsdl:types>

        <!-- declares messages - NOTE the mismatch on the request element attribute compared to response -->
        <wsdl:message name="myOperationRequest">
          <wsdl:part element="tns:myOperation" name="request"/>
        </wsdl:message>
        <wsdl:message name="myOperationResponse">
          <wsdl:part element="tns:myOperationResponse" name="response"/>
        </wsdl:message>

        <!-- operations -->
        <wsdl:portType name="MyService">
          <wsdl:operation name="myOperation">
            <wsdl:input message="tns:myOperationRequest"/>
            <wsdl:output message="tns:myOperationResponse"/>
            <wsdl:fault message="tns:myOperationFault" name="myOperationFault"/>
            <wsdl:fault message="tns:myOperationFault1" name="myOperationFault1"/>
          </wsdl:operation>
        </wsdl:portType>

    Update 2: I pulled all the types that I had in my imported namespace (they were in a separate XSD) into the WSDL, as I suspected the import could be triggering the message contract generation. To my surprise it was not the case, and having all the types defined in the WSDL did not change anything. I then (out of desperation) started constructing WSDLs from scratch, and by playing with the maxOccurs attributes of elements contained in a sequence I was able to reproduce the undesired message contract generation behavior. Here's a sample of such an element:

        <xsd:element name="myElement">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element minOccurs="0" maxOccurs="1" name="arg1" type="xsd:string"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>

    Playing with maxOccurs on elements that are used as messages (all requests and responses, basically), the following happens:

        maxOccurs = "1" does not trigger the wrapping
        maxOccurs > 1 triggers the wrapping
        maxOccurs = "unbounded" triggers the wrapping

    I was not able to reproduce this on my production WSDL yet, because the nesting of the types goes very deep and it's gonna take me time to inspect it thoroughly. In the meanwhile I am hoping it might ring a bell - any help highly appreciated.
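    One thing that may be worth checking (an educated guess, not something the post establishes): among its heuristics, svcutil/Add Service Reference only recognizes an operation as document-wrapped when the wsdl:part of each message is named parameters and its element matches the operation name. The sample above names the parts request and response, so a variant like the following might avoid the generated message contracts:

        <wsdl:message name="myOperationRequest">
          <wsdl:part element="tns:myOperation" name="parameters"/>
        </wsdl:message>
        <wsdl:message name="myOperationResponse">
          <wsdl:part element="tns:myOperationResponse" name="parameters"/>
        </wsdl:message>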


  • VS2010 Assembly Load Error

    - by Nate
    I am getting the following error when I try to build an ASP.NET 4 project in Visual Studio 2010:

        Could not load file or assembly 'file:///C:\Dev\project\trunk\bin\Elmah.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)

    I have verified that the DLL does, in fact, exist and is getting copied to the bin folder correctly. I have also tried removing and then re-adding the reference to the project. The build only fails when I switch the solution configuration to Release; it does not fail when the configuration is set to Debug. The only difference between the two configurations (that I know of) is shown in the following web.config transform, Web.Release.config:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="SqlServer" connectionString="" providerName="System.Data.SqlClient"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
          </connectionStrings>
          <system.web>
            <compilation xdt:Transform="RemoveAttributes(debug)" />
            <customErrors mode="On" xdt:Transform="Replace">
              <error statusCode="404" redirect="lost.htm" />
              <error statusCode="500" redirect="uhoh.htm" />
            </customErrors>
          </system.web>
        </configuration>

    I have tried using Fusion Log Viewer to track down the assembly binding issue, but it looks like it is finding and loading the assembly correctly. Here is the log:

        *** Assembly Binder Log Entry (6/8/2010 @ 10:01:54 AM) ***
        The operation was successful.
        Bind result: hr = 0x0. The operation completed successfully.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll
        Running under executable c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools\sgen.exe
        --- A detailed error log follows.
        === Pre-bind state information ===
        LOG: User = User
        LOG: Where-ref bind. Location = C:\Dev\project\trunk\bin\Elmah.dll
        LOG: Appbase = file:///c:/Program Files (x86)/Microsoft SDKs/Windows/v7.0A/bin/NETFX 4.0 Tools/
        LOG: Initial PrivatePath = NULL
        LOG: Dynamic Base = NULL
        LOG: Cache Base = NULL
        LOG: AppName = sgen.exe
        Calling assembly : (Unknown).
        ===
        LOG: This bind starts in LoadFrom load context.
        WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
        LOG: No application configuration file found.
        LOG: Using host configuration file:
        LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config.
        LOG: Attempting download of new URL file:///C:/Dev/project/trunk/bin/Elmah.dll.
        LOG: Assembly download was successful. Attempting setup of file: C:\Dev\project\trunk\bin\Elmah.dll
        LOG: Entering run-from-source setup phase.
        LOG: Assembly Name is: Elmah, Version=1.1.11517.0, Culture=neutral, PublicKeyToken=null
        LOG: Re-apply policy for where-ref bind.
        LOG: Where-ref bind Codebase does not match what is found in default context. Keep the result in LoadFrom context.
        LOG: Binding succeeds. Returns assembly from C:\Dev\project\trunk\bin\Elmah.dll.
        LOG: Assembly is loaded in LoadFrom load context.

    I feel like there is a fundamental lack of understanding on my part as to what exactly is going on here. Any explanation/help is much appreciated!
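    Two clues in the log suggest a possible (unconfirmed) explanation. First, the failing executable is sgen.exe, the XML serializer generator, which with the default "Generate serialization assembly = Auto" project setting only runs for Release builds; that would explain Debug building fine. Second, HRESULT 0x80131515 (NotSupportedException) on a LoadFrom bind under .NET 4 is the classic symptom of a file flagged as coming from another machine, e.g. an Elmah.dll downloaded from the web and still carrying its Zone.Identifier stream. If that's the case, unblocking the DLL (file Properties > Unblock) should fix the build, or sgen can be told to allow such loads via an sgen.exe.config placed next to sgen.exe:

        <?xml version="1.0"?>
        <configuration>
          <runtime>
            <!-- assumption: permit sgen to load assemblies flagged as remote -->
            <loadFromRemoteSources enabled="true" />
          </runtime>
        </configuration>

    Turning off serialization assembly generation for the Release configuration would sidestep the problem as well.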


  • Constructor parameters for dependent classes with the Unity Framework

    - by Onisemus
    I just started using the Unity Application Block to try to decouple my classes and make unit testing easier. I ran into a problem, though, that I'm not sure how to get around; I've looked through the documentation and done some Googling, but I'm coming up dry. Here's the situation:

    I have a facade-type class which is a chat bot. It is a singleton class which handles all sorts of secondary classes and provides a central place to launch and configure the bot. I also have a class called AccessManager which, well, manages access to bot commands and resources. Boiled down to the essence, the classes are set up like so:

        public class Bot
        {
            public string Owner { get; private set; }
            public string WorkingDirectory { get; private set; }

            private IAccessManager AccessManager;

            private Bot()
            {
                // do some setup
                // LoadConfig sets the Owner & WorkingDirectory variables
                LoadConfig();

                // init the access manager
                AccessManager = new MyAccessManager(this);
            }

            public static Bot Instance()
            {
                // singleton code
            }
        }

    And the AccessManager class:

        public class MyAccessManager : IAccessManager
        {
            private Bot botReference;

            public MyAccessManager(Bot botReference)
            {
                this.botReference = botReference;
                SetOwnerAccess(botReference.Owner);
            }

            private void LoadConfig()
            {
                string configPath = Path.Combine(
                    botReference.WorkingDirectory,
                    "access.config");

                // do stuff to read from config file
            }
        }

    I would like to change this design to use the Unity Application Block. I'd like to use Unity to generate the Bot singleton and to load the AccessManager interface in some sort of bootstrapping method that runs before anything else does:

        public static void BootStrapSystem()
        {
            IUnityContainer container = new UnityContainer();

            // create new bot instance
            Bot newBot = Bot.Instance();

            // register bot instance
            container.RegisterInstance<Bot>(newBot);

            // register access manager
            container.RegisterType<IAccessManager, MyAccessManager>(newBot);
        }

    And when I want to get a reference to the Access Manager inside the Bot constructor, I can just do:

        IAccessManager accessManager = container.Resolve<IAccessManager>();

    And elsewhere in the system, to get a reference to the Bot singleton:

        // do this
        Bot botInstance = container.Resolve<Bot>();

        // instead of this
        Bot botInstance = Bot.Instance();

    The problem is that the method BootStrapSystem() is going to blow up. When I create a bot instance it's going to try to resolve IAccessManager, but it won't be able to, because I haven't registered the types yet (that's the next line). But I can't move the registration in front of the Bot creation, because as part of the registration I need to pass the Bot as a parameter! Circular dependencies!! Gah!!! This indicates to me I have a flaw in the way I have this structured. But how do I fix it? Help!!
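    One way to untangle this (a sketch, and it assumes Bot gains a settable AccessManager property instead of creating its manager in the constructor): register the Bot first, let Unity build MyAccessManager - the container can satisfy its Bot constructor parameter from the registered instance - and wire the result back afterwards:

        public static IUnityContainer BootStrapSystem()
        {
            IUnityContainer container = new UnityContainer();

            // Bot no longer news up an AccessManager inside its constructor.
            Bot newBot = Bot.Instance();
            container.RegisterInstance<Bot>(newBot);

            // Unity resolves MyAccessManager's Bot parameter from the instance above.
            container.RegisterType<IAccessManager, MyAccessManager>(
                new ContainerControlledLifetimeManager());

            // Close the loop with property-style injection after both exist.
            newBot.AccessManager = container.Resolve<IAccessManager>();
            return container;
        }

    The circularity disappears because neither constructor resolves the other side any more; the cycle is closed by a post-construction assignment.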


  • ASP.NET MVC application deployment/security issues

    - by WestDiscGolf
    I'll start with apologies; I wasn't sure if this was best posted here or on Server Fault, so if it's in the wrong place then please move it. :-)

    Basic information: I have written the first module of a new application at work. This is written using Visual Studio 2010, targeting .NET 3.5 (at the moment) and ASP.NET MVC 2. It has been working fine during development, running on the built-in development server from VS, but does not work once deployed to IIS 7/7.5. To deploy the application, I have built it in Release mode and created a deployment package by right-clicking on the project in the Solution Explorer (this will be done with an automated build in TFS once we upgrade from the beta). This has then been imported into IIS on the server. The application is using Windows/domain authentication.

    Issue #1: I can fire up Internet Explorer and browse to the application from a client computer as well as in a remote desktop connection. I can execute the code which reads/stores data in Session fine from the IE instance on the remote desktop, but if I browse to it from the client PC it seems to lose the session state: I click the form submit, the page refreshes, and it doesn't execute the required code. I've tried setting the session state to InProc, SQLServer and StateServer, but with no luck. :-(

    Issue #2: As part of the application, it views PDF and TIFF documents on the fly which live on a network share on the office network, creating thumbnails if a document hasn't been viewed before. This works when running on the machine the application is deployed to; however, when browsing from a client PC I get an error saying:

        Access to the path '\\fileserver\folder\file.tif' is denied

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.UnauthorizedAccessException: Access to the path '\\fileserver\folder\file.TIF' is denied.

        ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity. ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating. If the application is impersonating via <identity impersonate="true"/>, the identity will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated request user.

    As this is on a different server, the user is not accessible. To get round this I have tried:

    1. Setting the application pool to run as domain administrator (I know this is a security risk; I'm just trying to get it to work at the moment!).
    2. Setting the log-on account for the World Wide Web Publishing service to be the domain admin. When trying to restart the service I get:

        Windows could not start the World Wide Web Publishing Service service on the Local Computer. Error 1079: The account specified for this service is different from the account specified for the other services running in the same process.

    Any pointers/help would be much appreciated, as I'm pulling my hair out (of what little I have left).

    Update: I've been using this funky little tool I found - DelegConfig v2 beta (Delegation / Kerberos Configuration Tool). It has been really useful. So I've got the accessing of the file share working (there is a test page which will read the files), and now I've just got the issue of passing the user's credentials through to the SQL Server (wasn't my choice to do it this way!!) to execute the queries etc. But I can't get it to log on as the user: it tries to access it as "NT AUTHORITY\NETWORK SERVICE", which doesn't have a SQL login (it should be the logged-on user). My connection string is:

        <add name="User" connectionString="Data Source=.;Integrated Security=True" providerName="System.Data.SqlClient" />

    No initial catalog is specified, as the system spans multiple DBs (also wasn't my choice!!). I really appreciate all the help so far! :-) Any further hints?!
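    Given the Kerberos/DelegConfig route already taken, one plausible missing piece (an assumption, not something the post verifies) is that the application itself is not impersonating, so requests reach SQL Server as the application pool identity (NETWORK SERVICE) instead of the caller. With Windows authentication plus impersonation enabled in web.config, the Integrated Security connection would go out under the logged-on user - and because the web server and SQL Server are different machines, the second hop still requires the delegation that DelegConfig checks:

        <!-- web.config sketch: run requests under the authenticated caller's identity -->
        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>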


  • How can I get my Web API app to run again after upgrading to MVC 5 and Web API 2?

    - by Clay Shannon
I upgraded my Web API app to the funkelnagelneu versions using this guidance: http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2 However, after going through the steps (it seems all this should be automated, anyway), I tried to run it and got, "A project with an Output Type of Class Library cannot be started directly". What in Sam Hills Brothers Coffee is going on here? Who said this was a class library? So I opened Project Properties and changed it (it was marked as "Class Library" for some reason - it either wasn't yesterday, or was and worked fine) to an Output Type of "Windows Application" ("Console Application" and "Class Library" being the only other options). Now it won't compile, complaining: "*Program 'c:\Platypus_Server_WebAPI\PlatypusServerWebAPI\PlatypusServerWebAPI\obj\Debug\PlatypusServerWebAPI.exe' does not contain a static 'Main' method suitable for an entry point...*" How can I get my Web API app back up and running in view of this quandary?

UPDATE

Looking in packages.config, two entries seem chin-scratch-worthy: <package id="Microsoft.AspNet.Providers" version="1.2" targetFramework="net40" /> <package id="Microsoft.Web.Infrastructure" version="1.0.0.0" targetFramework="net40" /> All the others target net451. Could this be the problem? Should I remove these packages?

UPDATE 2

I tried to uninstall the Microsoft.Web.Infrastructure package (its description leads me to believe I don't need it; also, it has no dependencies) via the NuGet package manager, but it tells me, "NuGet failed to install or uninstall the selected package in the following project(s). [mine]"

UPDATE 3

I went through the steps in the tutorial again and found that I had missed one. I had to change this entry in the application web.config file: <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-5.0.0.0" newVersion="5.0.0.0" /> </dependentAssembly> (from "4.0.0.0" to "5.0.0.0") ...but I still get the same result - it rebuilds/compiles, but won't run, with "A project with an Output Type of Class Library cannot be started directly"

UPDATE 4

Thinking about the err msg, that it can't directly open a class library, I thought, "Sure you can't/won't -- this is a web app, not a project." So I followed a hunch, closed the project, and reopened it as a website (instead of reopening a project). That has gotten me further, at least; now I see a YSOD: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

UPDATE 5

Note: The project is now (after being opened as a web site) named "localhost_48614". And...there is no longer a "References" folder...?!?!? As to that YSOD I'm getting, the official instructions (http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2) said to do this, and I quote: "Update all elements that contain "System.Web.WebPages.Razor" from version "2.0.0.0" to version "3.0.0.0"."

UPDATE 6

When I select Tools > Library Package Manager > Manage NuGet Packages for Solution now, I get, "Operation failed. Unable to locate the solution directory. Please ensure that the solution has been saved." So I save it, and it saves it with this funky new name (C:\Users\clay\Documents\Visual Studio 2013\Projects\localhost_48614\localhost_48614.sln). I get the Yellow Strip of Enlightenment across the top of the NuGet Package Manager telling me, "Some NuGet packages are missing from this solution. Click to restore from your online package sources." I do (click the "Restore" button, that is), and it downloads the missing packages; I end up with the 30 packages. I try to run the app/site again, and ... the erstwhile YSOD becomes a compilation error: The pre-application start initialization method Start on type System.Web.Mvc.PreApplicationStartCode threw an exception with the following error message: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.. Argghhhh!!! (and it's not even talk-like-a-pirate day).
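
For what it's worth, that particular Razor load failure is usually resolved by confirming the Microsoft.AspNet.WebPages 3.x package is actually installed and that web.config redirects the old version range to it. A sketch of the usual runtime section (the version and token below match those quoted in the error):

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <!-- redirect any old WebPages.Razor reference to the 3.0 assembly -->
      <assemblyIdentity name="System.Web.WebPages.Razor"
                        publicKeyToken="31bf3856ad364e35" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```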

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion.

The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhiteSpace(email)) throw new DomainException("..."); if(!email.IsEmail()) throw new DomainException(); if(email.Contains("@mailinator.com")) throw new DomainException(); } } I actually do not like this validation because even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, and hard to maintain - but the real reason I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example, but in real life the validation could be more complex.

So, following the philosophy of Udi Dahan (making roles explicit) and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this: class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realized that in order to follow this approach I had to mutate my entities first in order to pass the value being validated, in this case the email - and mutating them would cause my domain events to fire, which I wouldn't want to happen until the new email is valid.

So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException("..."); } } Well, that's the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are considered injectable objects they could have external dependencies injected if the validation requires it.

Now the dilemma. I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants explicitly expressed using the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be composed to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue a code smell - Anemic Domain), I think the trade-off is acceptable.

But there is one thing that I have not figured out how to implement in a clean way: how should I use these components? Since they will be injected, they won't fit naturally inside my domain entities, so basically I see two options: pass the validators to each method of my entity, or validate my objects externally (from the command handler). I am not happy with option 1, so I will explain how I would do it with option 2: class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { private IEnumerable<IDomainInvariantValidator> validators; // the validators required for this command would be injected here public void Execute(ChangeEmailCommand command) { using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x => x.IsValid(customer, command)); // here I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } }

Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation?

EDIT: I think it is not clear from my question, but the real problem is: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of the app. Hence, implementing them with this in mind lets us extend them easily. Now imagine that in the future a rules engine is implemented: if the rules are encapsulated outside of the domain entities, this change would be easier to implement.
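
One way to keep option 2 while pulling the validation loop out of every handler is a generic decorator around the handlers. A minimal sketch, not from the question - ICommandValidator<TCommand> here is an invented, command-only variant of the question's IDomainInvariantValidator, and the other names are assumptions:

```csharp
using System.Collections.Generic;

// invented, command-only variants of the question's interfaces
public interface ICommandHandler<TCommand> { void Execute(TCommand command); }
public interface ICommandValidator<TCommand> { void Validate(TCommand command); } // throws on failure

// a decorator that runs every registered validator before the real handler,
// so individual handlers stay free of validation plumbing
public class ValidatingCommandHandler<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> inner;
    private readonly IEnumerable<ICommandValidator<TCommand>> validators;

    public ValidatingCommandHandler(ICommandHandler<TCommand> inner,
                                    IEnumerable<ICommandValidator<TCommand>> validators)
    {
        this.inner = inner;
        this.validators = validators;
    }

    public void Execute(TCommand command)
    {
        // fail fast: a DomainException from any validator aborts the command
        foreach (var validator in validators)
            validator.Validate(command);

        inner.Execute(command);
    }
}
```

Registering the decorator once in the container means a new rule is just a new validator implementation, which also keeps the door open for the rules engine mentioned in the edit.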

    Read the article

  • initializing structs using user-input information

    - by johnny boy
I am trying to make a program that works with poker (Texas Hold'em) starting hands; each hand has a value from 1 to 169, and I want to be able to input each card and whether they are suited or not, and have those values correspond to a series of structs. Here is the code so far; I can't seem to get it to work (I'm a beginning programmer). Oh, and I'm using Visual Studio 2005, by the way. #include "stdafx.h" #include <iostream> int main() { using namespace std; struct FirstCard { struct SecondCard { int s; //suited int n; //non-suited }; SecondCard s14; SecondCard s13; SecondCard s12; SecondCard s11; SecondCard s10; SecondCard s9; SecondCard s8; SecondCard s7; SecondCard s6; SecondCard s5; SecondCard s4; SecondCard s3; SecondCard s2; }; FirstCard s14; //ace FirstCard s13; //king FirstCard s12; //queen FirstCard s11; //jack FirstCard s10; FirstCard s9; FirstCard s8; FirstCard s7; FirstCard s6; FirstCard s5; FirstCard s4; FirstCard s3; FirstCard s2; s14.s14.n = 169; // these are the values that each combination s13.s13.n = 168; // will evaluate to, would eventually have s12.s12.n = 167; // hand combinations all the way down to 1 s11.s11.n = 166; s14.s13.s = 165; s14.s13.s = 164; s10.s10.n = 163; //10, 10, nonsuited s14.s13.n = 162; s14.s11.s = 161; s13.s12.s = 160; // king, queen, suited s9.s9.n = 159; s14.s10.s = 158; s14.s12.n = 157; s13.s11.s = 156; s8.s8.n = 155; s12.s11.s = 154; s13.s10.s = 153; s14.s9.s = 152; s14.s11.n = 151; cout << "enter first card: " << endl; cin >> something? //no idea what to put here, but this would somehow //read out the user input (a number from 2 to 14) //and assign it to the corresponding struct cout << firstcard.secondcard.suited_or_not << endl; //this would change depending //on what the user inputs system("Pause"); }
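
One way around the wall of struct members is a lookup table indexed by the two ranks and a suited flag, which also makes the cin part straightforward. A minimal sketch, not the asker's code - the few hand values filled in are illustrative only:

```cpp
#include <iostream>

int main()
{
    // handValue[high][low][suited]: ranks run 2..14 (14 = ace), suited is 0 or 1
    static int handValue[15][15][2] = {};

    handValue[14][14][0] = 169; // A-A (pairs are never suited)
    handValue[13][13][0] = 168; // K-K
    handValue[14][13][1] = 165; // A-K suited  -- illustrative value
    handValue[14][13][0] = 162; // A-K offsuit -- illustrative value

    int first = 0, second = 0, suited = 0;
    std::cout << "enter first card (2-14): ";
    std::cin >> first;
    std::cout << "enter second card (2-14): ";
    std::cin >> second;
    std::cout << "suited? (1 = yes, 0 = no): ";
    std::cin >> suited;

    // normalize order so K-A and A-K hit the same table entry
    // (input range checking omitted for brevity)
    if (first < second) { int tmp = first; first = second; second = tmp; }

    std::cout << "hand value: " << handValue[first][second][suited] << std::endl;
    return 0;
}
```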

    Read the article

  • MSBuild condition across projects

    - by Billy Talented
I have tried several solutions for this problem - enough to know I do not know enough about MSBuild to do this elegantly - but I feel like there should be a method. I have a set of libraries for working with .NET projects. A few of the projects utilize System.Web.Mvc - which recently released Version 2 - and we are looking forward to the upgrade. Currently, sites which reference this library reference it directly by the project (csproj) on the developer's computer - not a built version of the library - so that changes and source code can easily be viewed when dealing with code from this library. This works quite well, and I would prefer not to have to switch to binary references (but will if this is the only solution).

The problem I have is that because of some of the functionality that was added onto the MVC1-based library (view engines, model binders, etc.), several of the sites reliant on these libraries need to stay on MVC1 until we have fully evaluated and tested them on MVC2. I would prefer not to have to fork or have two copies on each dev machine.

So what I would like to be able to do is set a property group value in the referencing web application and have this read by the above-mentioned library, with the caveat that when working directly on the library via its containing solution I would like to be able to control this via Configuration Manager by selecting a build type and having that property override the build behavior of the solution (i.e. 'Debug - MVC1' vs 'Debug - MVC2'). I have this working via: <Choose> <When Condition=" '$(Configuration)|$(Platform)' == 'Release - MVC2|AnyCPU' Or '$(Configuration)|$(Platform)' == 'Debug - MVC2|AnyCPU'"> <ItemGroup> <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL"> </Reference> <Reference Include="Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL"> <SpecificVersion>False</SpecificVersion> <HintPath>..\Dependancies\Web\MVC2\Microsoft.Web.Mvc.dll</HintPath> </Reference> </ItemGroup> </When> <Otherwise> <ItemGroup> <Reference Include="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL"> <SpecificVersion>False</SpecificVersion> <HintPath>..\Dependancies\Web\MVC\System.Web.Mvc.dll</HintPath> </Reference> </ItemGroup> </Otherwise>

The item that I am struggling with is the cross-solution issue (solution TheWebsite references this project and needs to control which build property to use); I have not found a way to handle this that I think is a solid solution and that keeps the build within Visual Studio working as it has to date.

Other bits: we are using VS2008, ReSharper, TeamCity for CI, and SVN for source control.
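
One pattern that might fit - a sketch with invented file and property names, and with the caveat that project references inside Visual Studio do not pass properties across projects, so the setting has to ride in via an imported file keyed off $(SolutionDir):

```xml
<!-- MvcVersion.props (invented name) - one copy lives next to each solution
     that consumes the library, so each solution pins its own MVC version -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <MvcVersion>MVC2</MvcVersion>
  </PropertyGroup>
</Project>
```

The library's csproj would then import the opening solution's choice near the top, falling back to MVC1 when the project is built on its own:

```xml
<!-- $(SolutionDir) points at whichever solution opened the project,
     so TheWebsite's copy of the props file wins when building that site -->
<Import Project="$(SolutionDir)MvcVersion.props"
        Condition="Exists('$(SolutionDir)MvcVersion.props')" />
<PropertyGroup>
  <MvcVersion Condition=" '$(MvcVersion)' == '' ">MVC1</MvcVersion>
</PropertyGroup>
```

The <When> conditions above would then test '$(MvcVersion)' == 'MVC2' instead of $(Configuration), and the library solution's 'Debug - MVC2' configurations would simply set <MvcVersion>MVC2</MvcVersion> in a configuration-guarded PropertyGroup.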

    Read the article

< Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >