Search Results

Search found 14545 results on 582 pages for 'design patterns'.

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in doubt about how to separate the models and how to design them. There are several points I'd like suggestions for:
    - which models to define
    - how to define those models

    Currently my idea is to define:
    - a core (domain) model
    - repositories to get data into that domain model from a database or other store
    - a business logic model that would contain business logic, validation logic and more specific forms of data retrieval methods
    - view models prepared for specifically formatted data output that would be parsed by views of different kinds (web, Silverlight, etc.)

    For the first model I'm puzzled about what to use and how to define it. Should the model entities contain collections, and in what form: IList, IEnumerable or IQueryable collections? I'm thinking of immutable collections, which IEnumerable is, but I'd like to avoid huge data collections and to offer my business logic layer access with LINQ expressions, so that query trees get executed at the data level and retrieve only the data that is really required - for situations like retrieving a very specific subset of elements amongst thousands or hundreds of thousands. What if I have an item with several thousands of bids? I can't just make an IEnumerable collection of those on the model and then retrieve an item list in some repository method or even business model method. Should it be IQueryable, so that I actually pass my queries to the repository all the way from the business logic layer? Should I just avoid collections in my domain model? Should I avoid only some collections?

    Should I separate the domain model and the business logic model, or integrate them? Data would be dealt with through repositories which would use domain model classes. Should repositories be used directly, using the classes from the domain model only as data containers?

    This is an example of what I had in mind. My domain objects would look like (e.g.):

        using System.Collections.Generic;
        using System.Linq;

        public class Item
        {
            public string ItemName { get; set; }
            public int Price { get; set; }
            public bool Available { get; set; }

            private IList<Bid> _bids;

            public IQueryable<Bid> Bids
            {
                get { return _bids.AsQueryable(); }
                private set { _bids = value.ToList(); }
            }

            public void AddNewBid(Bid newBid)
            {
                _bids.Add(new Bid { /* ... */ });
            }
        }

    where Bid would be defined as a normal class. Repositories would be defined as data-retrieval factories and used to get data into another (business logic) model, which would again be used to get data to view models, which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections to get flexibility and minimize the data retrieved from the real data store. Or should I make the domain model "anemic", with pure data-store entities, and define all collections for the business logic model?

    One of the most important questions is where to have IQueryable-typed collections: all the way from the repositories to the business model, or not at all - exposing only solid IList and IEnumerable from the repositories, dealing with more specific queries inside the business model, and having finer-grained methods for data retrieval within the repositories? So, what do you think? Have any suggestions?
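
    A minimal sketch of the IQueryable-vs-IEnumerable trade-off discussed above, in C#. The names (Bid, IBidRepository, BidQueries, Amount) are invented for illustration, not taken from the post: the repository exposes IQueryable so filters compose into one query, while the business layer returns only materialized results.

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative value object; the Amount/ItemName members are assumptions.
        public class Bid
        {
            public string ItemName { get; set; }
            public int Amount { get; set; }
        }

        public interface IBidRepository
        {
            // Exposing IQueryable lets callers compose predicates that the data
            // provider can translate into a single query executed at the data level.
            IQueryable<Bid> Bids { get; }
        }

        public class BidQueries
        {
            private readonly IBidRepository _repository;

            public BidQueries(IBidRepository repository) { _repository = repository; }

            // The business layer composes the query but returns a materialized,
            // read-only list, so IQueryable never leaks above this layer.
            public IReadOnlyList<Bid> ExpensiveBidsFor(string itemName, int minimumAmount)
            {
                return _repository.Bids
                    .Where(b => b.ItemName == itemName && b.Amount >= minimumAmount)
                    .OrderByDescending(b => b.Amount)
                    .ToList(); // the query executes here; only matching rows are retrieved
            }
        }

    With this split, the "how much data crosses the boundary" decision is made once, in the business-layer method, rather than in every consumer of the domain model.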

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, add up to the whole whitepaper.

    Abstract

    With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for the customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction

    Fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS.

    Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, a probability between 0 and 1, that serves as a score for evaluating whether the transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

    There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (directed) and unsupervised (undirected). Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers do report frauds at some point in time, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions.
    Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore, it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
    SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigate the occasional data cleansing process by maintaining a knowledge base. Master Data Services (MDS) is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable for Excel 2010 and is already included in Excel 2013, they bring OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.

    Read the article

  • Distinguishing between UI command & domain commands

    - by SonOfPirate
    I am building a WPF client application using the MVVM pattern that provides an interface on top of an existing set of business logic residing in a library which is shared with other applications. The business library follows a domain-driven architecture using CQRS to separate the read and write models (no event sourcing). The combination of technologies and patterns has brought up an interesting conundrum:
    - The MVVM pattern uses the command pattern for handling user interaction with the view models. .NET provides an ICommand interface which is implemented by most MVVM frameworks, like MVVM Light's RelayCommand and Prism's DelegateCommand. For example, the view model would expose a number of command objects as properties that are bound to the UI and respond when the user performs actions like clicking buttons.
    - Many implementations of CQRS use the command pattern to isolate and encapsulate individual behaviors. In my business library, we have implemented the write model as command / command-handler pairs. As such, when we want to do some work, such as create a new order, we 'issue' a command (CreateOrderCommand) which is routed to the command handler responsible for executing the command. This is great, clearly explained in many sources, and I am good with it.

    However, take this scenario: I have a ToolbarViewModel which exposes a CreateNewOrderCommand property. This ICommand object is bound to a button in the UI. When clicked, the UI command creates and issues a new CreateOrderCommand object to the domain, which is handled by the CreateOrderCommandHandler. This is difficult to explain to other developers and I am finding myself getting tongue-tied because everything is a command. I'm sure I'm not the first developer to have patterns overlap like this where the naming/terminology also overlaps. How have you approached distinguishing your commands used in the UI from those used in the domain? (Edit: I should mention that the business library is UI-agnostic, i.e. no UI technology-specific code exists, or will exist, in this library.)
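
    One way to keep the two concepts apart in code is shown in the hedged C# sketch below; the names (CreateOrder, IDomainCommandDispatcher, CreateNewOrderUiCommand) are invented for illustration, not taken from the question. The UI-side object is the only thing implementing WPF's ICommand, while the domain message stays a plain DTO with no "Command" machinery from the UI framework.

        using System;
        using System.Windows.Input;

        // Domain-side message (CQRS write model): a plain, immutable DTO.
        public sealed class CreateOrder
        {
            public CreateOrder(Guid customerId) { CustomerId = customerId; }
            public Guid CustomerId { get; }
        }

        // Hypothetical dispatcher that routes domain messages to their handlers.
        public interface IDomainCommandDispatcher
        {
            void Send(object domainCommand);
        }

        // UI-side command: implements WPF's ICommand and only translates a
        // button click into a domain message.
        public sealed class CreateNewOrderUiCommand : ICommand
        {
            private readonly IDomainCommandDispatcher _dispatcher;
            private readonly Guid _customerId;

            public CreateNewOrderUiCommand(IDomainCommandDispatcher dispatcher, Guid customerId)
            {
                _dispatcher = dispatcher;
                _customerId = customerId;
            }

            public event EventHandler CanExecuteChanged { add { } remove { } }

            public bool CanExecute(object parameter) => true;

            public void Execute(object parameter)
            {
                _dispatcher.Send(new CreateOrder(_customerId));
            }
        }

    A naming convention like this (plain imperative names for domain messages, a "UiCommand" or similar suffix on ICommand implementations) is one option teams use to talk about the two kinds of command without confusion; it is a convention, not a requirement of either pattern.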

    Read the article

  • What are some ways to separate game logic from animations and the draw loop?

    - by TMV
    I have only previously made Flash games, using MovieClips and such to separate my animations from my game logic. Now I am trying my hand at making a game for Android, but the game programming theory around separating these things still confuses me. I come from a background of developing non-game web applications, so I am versed in MVC-like patterns and am stuck in that mindset as I approach game programming. I want to abstract my game by having, for example, a game board class that contains the data for a grid of tiles, with instances of a tile class that each contain properties. I can give my draw loop access to this and have it draw the game board based on the properties of each tile on the game board, but I don't understand where exactly animation should go. As far as I can tell, animation sort of sits between the abstracted game logic (model) and the draw loop (view). With my MVC mindset, it's frustrating trying to decide where animation is actually supposed to go. It would have quite a bit of data associated with it, like a model, but it seemingly needs to be very closely coupled with the draw loop in order to have things like frame-independent animation. How can I break out of this mindset and start thinking about patterns that make more sense for games?
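
    The question targets Android, but as a language-neutral sketch of one common split (written here in C#; Tile, TileAnimation and BoardRenderer are invented names, not part of the question), animation state can live in its own objects that map the model's logical positions to screen positions, so the model never knows about pixels and the draw loop never touches game rules:

        using System.Collections.Generic;

        // Model: pure game state, no drawing or timing concerns.
        public sealed class Tile
        {
            public int Column, Row;        // logical position on the board
            public string Kind = "grass";  // example property
        }

        // Animation state sits between model and renderer: it remembers where a
        // tile was and where it is going, so drawing can interpolate.
        public sealed class TileAnimation
        {
            public Tile Tile;
            public float FromX, FromY, ToX, ToY;
            public float Elapsed, Duration;

            // Frame-independent position: advance by the frame's delta time and
            // interpolate between the start and end positions.
            public (float X, float Y) Advance(float deltaSeconds)
            {
                Elapsed += deltaSeconds;
                float t = Duration <= 0 ? 1f : System.Math.Min(1f, Elapsed / Duration);
                return (FromX + (ToX - FromX) * t, FromY + (ToY - FromY) * t);
            }
        }

        // View / draw loop: asks each animation for its current position and draws.
        public sealed class BoardRenderer
        {
            public void Draw(IEnumerable<TileAnimation> animations, float deltaSeconds)
            {
                foreach (var a in animations)
                {
                    var (x, y) = a.Advance(deltaSeconds);
                    // draw a.Tile at (x, y) with whatever graphics API is in use
                }
            }
        }

    Game logic creates or retargets TileAnimation objects when the model changes; the draw loop only advances and reads them, which keeps the coupling one-directional.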

    Read the article

  • Signals and threads - good or bad design decision?

    - by Jens
    I have to write a program that performs highly computationally intensive calculations. The program might run for several days. The calculation can easily be separated into different threads without the need for shared data. I want a GUI or a web service that informs me of the current status. My current design uses BOOST::signals2 and BOOST::thread. It compiles and so far works as expected. If a thread finishes one iteration and new data is available, it emits a signal which is connected to a slot in the GUI class. My question(s):
    - Is this combination of signals and threads a wise idea? In another forum somebody advised someone else not to "go down this road".
    - Are there potential deadly pitfalls nearby that I failed to see?
    - Is my expectation realistic that it will be "easy" to use my GUI class to provide a web interface or a QT, a VTK or a whatever window?
    - Is there a more clever alternative (like other Boost libs) that I overlooked?

    The following code compiles with g++ -Wall -o main -lboost_thread-mt <filename>.cpp:

        #include <boost/signals2.hpp>
        #include <boost/thread.hpp>
        #include <boost/bind.hpp>
        #include <iostream>
        #include <sstream>   // added: std::stringstream is used below
        #include <string>
        #include <vector>    // added: std::vector is used below

        using std::cout;
        using std::cerr;
        using std::string;

        /**
         * Called when a CalcThread finished a new bunch of data.
         */
        boost::signals2::signal<void(string)> signal_new_data;

        /**
         * The whole data will be stored here.
         */
        class DataCollector {
            typedef boost::mutex::scoped_lock scoped_lock;
            boost::mutex mutex;
        public:
            /**
             * Called by CalcThreads to store their data.
             */
            void push(const string &s, const string &caller_name) {
                scoped_lock lock(mutex);
                _data.push_back(s);
                signal_new_data(caller_name);
            }

            /**
             * Output everything collected so far to std::out.
             */
            void out() {
                typedef std::vector<string>::const_iterator iter;
                for (iter i = _data.begin(); i != _data.end(); ++i)
                    cout << " " << *i << "\n";
            }

        private:
            std::vector<string> _data;
        };

        /**
         * Several of those can calculate stuff.
         * No data sharing needed.
         */
        struct CalcThread {
            CalcThread(string name, DataCollector &datcol)
                : _name(name), _datcol(datcol) { }

            /**
             * Expensive algorithms will be implemented here.
             * @param num_results how many data sets are to be calculated by this thread.
             */
            void operator()(int num_results) {
                for (int i = 1; i <= num_results; ++i) {
                    std::stringstream s;
                    s << "[";
                    if (i == num_results)
                        s << "LAST ";
                    s << "DATA " << i << " from thread " << _name << "]";
                    _datcol.push(s.str(), _name);
                }
            }

        private:
            string _name;
            DataCollector &_datcol;
        };

        /**
         * Maybe some VTK or QT or both will be used someday.
         */
        class GuiClass {
        public:
            GuiClass(DataCollector &datcol) : _datcol(datcol) { }

            /**
             * If the GUI wants to present or at least count the data collected so far.
             * @param caller_name is the name of the thread whose data is new.
             */
            void slot_data_changed(string caller_name) const {
                cout << "GuiClass knows: new data from " << caller_name << std::endl;
            }

        private:
            DataCollector &_datcol;
        };

        int main() {
            DataCollector datcol;
            GuiClass mc(datcol);
            signal_new_data.connect(boost::bind(&GuiClass::slot_data_changed, &mc, _1));

            CalcThread r1("A", datcol), r2("B", datcol), r3("C", datcol),
                       r4("D", datcol), r5("E", datcol);

            boost::thread t1(r1, 3);
            boost::thread t2(r2, 1);
            boost::thread t3(r3, 2);
            boost::thread t4(r4, 2);
            boost::thread t5(r5, 3);

            t1.join();
            t2.join();
            t3.join();
            t4.join();
            t5.join();

            datcol.out();
            cout << "\nDone" << std::endl;
            return 0;
        }

    Read the article

  • Is there a way to put relationships/constraints into CSS?

    - by hekevintran
    In every design tool or art principle I've heard of, relationships are a central theme. By relationships I mean the thing you can do in Adobe Illustrator to specify that the height of one shape is equal to half the height of another. You cannot express this information in CSS. CSS hard-codes all values. Using a language like LESS that allows variables and arithmetic you can get closer to relationships, but it's still a CSS variant. This inability is, in my mind, the biggest problem with CSS. CSS is supposed to be a language that describes the visual component of a Web page, but it ignores relationships and constraints, ideas that are at the core of art. How possible is it to imagine a new Web design language that can express relationships and constraints, and that can be implemented in JavaScript using the current CSS properties?

    Read the article

  • Given the presentation model pattern, is the view, presentation model, or model responsible for adding child views to an existing view at runtime?

    - by Ryan Taylor
    I am building a Flex 4 based application using the presentation model design pattern. This application will have several different components to it, as shown in the image below. The MainView and DashboardView will always be visible, and they each have corresponding presentation models and models as necessary. These views are easily created by declaring their MXML in the application root:

        <s:HGroup width="100%" height="100%">
            <MainView width="75%" height="100%"/>
            <DashboardView width="25%" height="100%"/>
        </s:HGroup>

    There will also be many WidgetViewN views that can be added to the DashboardView by the user at runtime through a simple drop-down list. This will need to be accomplished via ActionScript. The drop-down list should always show which WidgetViewN views have already been added to the DashboardView, so some state about which WidgetViewN's have been created needs to be stored. Since the list of available WidgetViewN views, and which ones are added to the DashboardView, also needs to be accessible from other components in the system, I think this needs to be stored in a Model object.

    My understanding of the presentation model design pattern is that the view is very lean. It contains as close to zero logic as is practical. The view communicates/binds to the presentation model, which contains all the necessary view logic. The presentation model is effectively an abstract representation of the view, which supports low coupling and eases testability. The presentation model may have one or more models injected in order to display the necessary information. The models themselves contain no view logic whatsoever.

    So I have several questions around this design:
    - Who should be responsible for creating the WidgetViewN components and adding these to the DashboardView? Is this the responsibility of the DashboardView, DashboardPresentationModel, DashboardModel or something else entirely?
    - It seems like the DashboardPresentationModel would be responsible for creating/adding/removing any child views from its display, but how do you do this without passing the DashboardView in to the DashboardPresentationModel?
    - The list of available and visible WidgetViewN components needs to be accessible to a few other components as well. Is it okay for a reference to a WidgetViewN to be stored/referenced in a model?
    - Are there any good examples of the presentation model pattern online in Flex that also include creating child views at runtime?

    Read the article

  • Java regex patterns - compile time constants or instance members?

    - by KepaniHaole
    Currently, I have a couple of singleton objects where I'm doing matching on regular expressions, and my Patterns are defined like so:

        class Foobar {
            private final Pattern firstPattern = Pattern.compile("some regex");
            private final Pattern secondPattern = Pattern.compile("some other regex");
            // more Patterns, etc.

            private Foobar() {}

            public static Foobar create() { /* singleton stuff */ }
        }

    But I was told by someone the other day that this is bad style, and Patterns should always be defined at the class level, and look something like this instead:

        class Foobar {
            private static final Pattern FIRST_PATTERN = Pattern.compile("some regex");
            private static final Pattern SECOND_PATTERN = Pattern.compile("some other regex");
            // more Patterns, etc.

            private Foobar() {}

            public static Foobar create() { /* singleton stuff */ }
        }

    The lifetime of this particular object isn't that long, and my main reason for using the first approach is because it doesn't make sense to me to hold on to the Patterns once the object gets GC'd. Any suggestions / thoughts?

    Read the article

  • Opensource showcase for MVC in Java Swing

    - by Regular John
    I've already created small desktop CRUD applications using Java/Swing. In hindsight I'm not quite sure if the overall design of these applications is good. I've also done some reading on MVC and looked at different Swing tutorials. My problem is that I've got very theoretical knowledge of MVC, while on the other hand most Swing resources don't implement the MVC pattern. Now I would like to get my hands dirty and see how MVC is implemented in Swing in a real-world application. Are there any open-source projects you could recommend? It would also be interesting to have more than one project, to see different approaches. The best fit would be software that uses a relational database in the backend, so I can see an overall design that I can compare to my former applications.

    Read the article

  • Patterns for non-layered applications

    - by Paul Stovell
    In Patterns of Enterprise Application Architecture, Martin Fowler writes: "This book is thus about how you decompose an enterprise application into layers and how those layers work together. Most nontrivial enterprise applications use a layered architecture of some form, but in some situations other approaches, such as pipes and filters, are valuable. I don't go into those situations, focussing instead on the context of a layered architecture because it's the most widely useful." What patterns exist for building non-layered applications, or non-layered parts of an application? Take a statistical modelling engine for a financial institution. There might be a layer for data access, but I expect that most of the code would be in a single layer. Would you still expect to see Gang of Four patterns in such a layer? How about a domain model? Would you use OO at all, or would it be purely functional? The quote mentions pipes and filters as alternate models to layers. I can easily imagine such an engine using pipes as a way to break down the data processing. What other patterns exist? Are there common patterns for areas like task scheduling, results aggregation, or work distribution? What are some alternatives to MapReduce?
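
    As a minimal illustration of the pipes-and-filters alternative mentioned above (a C# sketch; Trade, RemoveInvalidTrades and the other names are invented, not taken from the question), each filter is a pure transformation over a stream of records, and a pipeline is just their composition:

        using System.Collections.Generic;
        using System.Linq;

        public interface IFilter<TIn, TOut>
        {
            IEnumerable<TOut> Process(IEnumerable<TIn> input);
        }

        public sealed class Trade
        {
            public string Symbol;
            public decimal Price;
        }

        // Filter 1: drop obviously bad records.
        public sealed class RemoveInvalidTrades : IFilter<Trade, Trade>
        {
            public IEnumerable<Trade> Process(IEnumerable<Trade> input) =>
                input.Where(t => t.Price > 0 && !string.IsNullOrEmpty(t.Symbol));
        }

        // Filter 2: aggregate into per-symbol averages.
        public sealed class AverageBySymbol : IFilter<Trade, KeyValuePair<string, decimal>>
        {
            public IEnumerable<KeyValuePair<string, decimal>> Process(IEnumerable<Trade> input) =>
                input.GroupBy(t => t.Symbol)
                     .Select(g => new KeyValuePair<string, decimal>(g.Key, g.Average(t => t.Price)));
        }

        public static class Pipeline
        {
            // Wiring the pipe: the output of one filter feeds the next, lazily.
            public static IEnumerable<KeyValuePair<string, decimal>> Run(IEnumerable<Trade> trades) =>
                new AverageBySymbol().Process(new RemoveInvalidTrades().Process(trades));
        }

    The structure here is flat rather than layered: there is no "business layer" calling a "data layer", only stages that can be reordered, parallelized, or distributed independently.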

    Read the article

  • how to avoid returning mocks from a mocked object list

    - by koen
    I'm trying out mock/responsibility-driven design. I seem to have problems avoiding returning mocks from mocks in the case of finder objects. An example could be an object that checks whether the bills from last month are paid. It needs a service that retrieves a list of bills, so I need to mock that service. At the same time I need that mock to return mocked Bills (since I don't want my test to rely on the correctness of the Bill implementation). Is my design flawed? Is there a better way to test this? Or is this the way it will need to be when using finder objects (the finding of the bills in this case)?
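
    One alternative often used in this situation is sketched below in C# (Bill, IBillService, BillChecker and StubBillService are invented names, not from the question): if Bill is a simple value object, the test can construct real Bills and only stub the finder, so no mock ever has to hand out another mock.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // A plain value object: cheap to create directly in tests.
        public sealed class Bill
        {
            public Bill(DateTime dueDate, bool isPaid) { DueDate = dueDate; IsPaid = isPaid; }
            public DateTime DueDate { get; }
            public bool IsPaid { get; }
        }

        // The finder/service that the unit under test depends on.
        public interface IBillService
        {
            IReadOnlyList<Bill> GetBillsForMonth(int year, int month);
        }

        // The unit under test.
        public sealed class BillChecker
        {
            private readonly IBillService _bills;
            public BillChecker(IBillService bills) { _bills = bills; }

            public bool AllPaid(int year, int month) =>
                _bills.GetBillsForMonth(year, month).All(b => b.IsPaid);
        }

        // Test double: a trivial stub that returns whatever bills the test created.
        public sealed class StubBillService : IBillService
        {
            private readonly IReadOnlyList<Bill> _bills;
            public StubBillService(params Bill[] bills) { _bills = bills; }
            public IReadOnlyList<Bill> GetBillsForMonth(int year, int month) => _bills;
        }

    A test would then build a StubBillService with one unpaid Bill, pass it to BillChecker, and assert that AllPaid returns false; only the finder is faked, the data it returns is real.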

    Read the article

  • Why implement DB connection pointer object as a reference counting pointer? (C++)

    - by DVK
    At our company, one of the core C++ classes (the database connection pointer) is implemented as a reference-counting pointer. To be clear, the objects are NOT DB connections themselves, but pointers to a DB connection object. The library is very old, and nobody who designed it is around anymore. So far, neither I nor any of the C++ experts in the company that I asked have come up with a good reason why this particular design was chosen. Any ideas? It is introducing some problems (partially due to the awful reference-counting pointer implementation used), and I'm trying to understand whether this design actually has some deep underlying reasons. The usage pattern these days seems to be that the DB connection pointer object is returned by a DB connection manager class, and it's somewhat unclear whether DB connection pointers were designed to be usable independently of the DB connection manager.

    Read the article

  • Plan variable and call dependencies

    - by Gerenuk
    I'd like to write down the design of my program to understand the dependencies and calls better. I know there are class diagrams which show inheritance and attribute variables. However, I'd also like to document the input parameters to methods and, in particular, which calls a method executes internally (e.g. on its input parameters). Also, sometimes it might be useful to show how the actual objects are connected (if there is a standard structure). This way I can have a better understanding of the modules and design before starting to program. Can you suggest a method for doing this kind of software design? It should map one-to-one to the programming code structure, so that I really notice all quirks beforehand (instead of a high-level design where things are hard to implement without further work). Maybe some special diagram or tool, or a combination? It is about static dependency and call design rather than time-dependent execution monitoring. (I use Python, if you have any specialized recommendations.)

    Read the article

  • How can data-oriented programming be applied to a GUI system?

    - by Miro
    I've just learned the basics of data-oriented design, but I'm not very familiar with it yet. I've also read Pitfalls of Object Oriented Programming (GCAP 09). It seems that data-oriented programming is a much better idea for games than OOP. I'm just creating my own GUI system, and it's completely OOP. I'm wondering whether data-oriented design is applicable to structured things like a GUI. The main problem I see is that every type of widget has different data, so I can hardly group them into arrays. Also, every type of widget renders differently, so I still need to call virtual functions.
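
    A rough sketch of one data-oriented approach to the "different data per widget type" problem (written in C#; WidgetStore, LayoutPass and all member names are invented assumptions): keep the data every widget shares in flat, parallel arrays that bulk passes iterate over, and put per-type data in its own table keyed by the same widget index.

        using System.Collections.Generic;

        public struct Rect { public float X, Y, Width, Height; }

        public sealed class WidgetStore
        {
            // "Hot" data shared by every widget, laid out contiguously.
            public readonly List<Rect> Bounds = new List<Rect>();
            public readonly List<bool> Visible = new List<bool>();

            // Per-type "cold" data lives in its own table, indexed by widget id.
            public readonly Dictionary<int, string> LabelText = new Dictionary<int, string>();
            public readonly Dictionary<int, float> SliderValue = new Dictionary<int, float>();

            public int Add(Rect bounds)
            {
                Bounds.Add(bounds);
                Visible.Add(true);
                return Bounds.Count - 1; // the widget id is just the index
            }
        }

        public static class LayoutPass
        {
            // Bulk operations touch only the arrays they need: a culling pass
            // reads bounds and visibility without caring about widget types.
            public static int CountVisibleIn(WidgetStore store, Rect viewport)
            {
                int visible = 0;
                for (int i = 0; i < store.Bounds.Count; i++)
                {
                    var b = store.Bounds[i];
                    bool inside = b.X < viewport.X + viewport.Width &&
                                  b.X + b.Width > viewport.X &&
                                  b.Y < viewport.Y + viewport.Height &&
                                  b.Y + b.Height > viewport.Y;
                    if (store.Visible[i] && inside) visible++;
                }
                return visible;
            }
        }

    Type-specific rendering can still dispatch per type (virtually or by switching on a type tag per bucket), but the common passes over layout and visibility stay branch-light and cache-friendly.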

    Read the article

  • How do I reuse a state machine in a slightly different way?

    - by JoJo
    Problem

    I have a big state machine. The design requirements of the project have changed such that I need to re-use this state machine in another place. All the states remain the same in this new place, but a few states run slightly different stuff. What design pattern allows me to reuse this state machine?

    Motivation

    I am building a video player. It is modeled by a state machine with these states: stopped, loading, playing, paused, crashed, and some more... This video player needs to be used on two web pages. When the player crashes on the first page, it should show an error message below. If the player crashes on the second page, the error message should appear in the center of the video and pulsate a few times.
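
    One common option is to keep the states and transitions identical and inject the page-specific behaviour as a policy object. The C# sketch below illustrates the idea with invented names (ICrashPresenter, VideoPlayer); it is not the questioner's design, just one way the same machine can be reused on both pages.

        using System;

        public enum PlayerState { Stopped, Loading, Playing, Paused, Crashed }

        // The varying behaviour, extracted behind an interface.
        public interface ICrashPresenter
        {
            void ShowError(string message);
        }

        public sealed class VideoPlayer
        {
            private readonly ICrashPresenter _crashPresenter;
            public PlayerState State { get; private set; } = PlayerState.Stopped;

            public VideoPlayer(ICrashPresenter crashPresenter)
            {
                _crashPresenter = crashPresenter;
            }

            public void Load()  { State = PlayerState.Loading; }
            public void Play()  { State = PlayerState.Playing; }
            public void Pause() { State = PlayerState.Paused; }

            // The transition logic is shared; only the presentation differs.
            public void Crash(string reason)
            {
                State = PlayerState.Crashed;
                _crashPresenter.ShowError(reason);
            }
        }

        // Page 1: message below the player.
        public sealed class BelowPlayerCrashPresenter : ICrashPresenter
        {
            public void ShowError(string message) =>
                Console.WriteLine("Show '" + message + "' below the video");
        }

        // Page 2: pulsating overlay in the center of the video.
        public sealed class OverlayCrashPresenter : ICrashPresenter
        {
            public void ShowError(string message) =>
                Console.WriteLine("Pulse '" + message + "' over the video");
        }

    Each page constructs the same VideoPlayer with its own presenter, so the state machine itself never needs to know which page it is on.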

    Read the article

  • Is it okay to have many Abstract classes in your application?

    - by JoseK
    We initially wanted to implement a Strategy pattern with varying implementations of the methods in a common interface. These would get picked up at runtime based on user inputs. As it's turned out, we have abstract classes implementing 3-5 common methods, with only one method left for a varying implementation, i.e. the Strategy. Update: by many abstract classes I mean there are 6 different high-level functionalities, i.e. 6 packages, and each has its Interface + AbstractImpl + (series of actual Impls). Is this a bad design in any way? Any negative views in terms of later extensibility? I'm preparing for a code/design review with seniors.
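
    For reference, the structure described above roughly corresponds to the sketch below (C#; the export-themed names are invented placeholders): an abstract base implements the shared steps once, and each concrete strategy overrides only the single varying step.

        using System;

        public interface IExportStrategy
        {
            void Export(string payload);
        }

        public abstract class ExportStrategyBase : IExportStrategy
        {
            public void Export(string payload)
            {
                var validated = Validate(payload);   // shared
                var formatted = Format(validated);   // shared
                Write(formatted);                    // the one varying step
                Log(formatted);                      // shared
            }

            protected virtual string Validate(string payload) =>
                string.IsNullOrWhiteSpace(payload)
                    ? throw new ArgumentException("empty payload")
                    : payload;

            protected virtual string Format(string payload) => payload.Trim();

            protected virtual void Log(string payload) =>
                Console.WriteLine("exported {0} characters", payload.Length);

            protected abstract void Write(string payload);
        }

        public sealed class CsvExportStrategy : ExportStrategyBase
        {
            protected override void Write(string payload) =>
                Console.WriteLine("CSV: " + payload);
        }

        public sealed class XmlExportStrategy : ExportStrategyBase
        {
            protected override void Write(string payload) =>
                Console.WriteLine("<export>" + payload + "</export>");
        }

    Having one such interface + abstract base per package is essentially the Template Method pattern layered under Strategy; the maintenance cost is one extra type per package, which is usually accepted as the price of not duplicating the shared steps.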

    Read the article

  • How can I implement the Null Object design pattern in a generic form?

    - by Colour Blend
    Is there a way to implement the Null Object design pattern in a generic form, so that I don't need to implement it for every business object? For me, there are two high-level classes you'll need for every business class: one for a single record and another for a list. So I think there should be a way to implement the Null Object design pattern at a high level and not have to implement it for every class. Is there a way, and if so, how?
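
    A rough sketch of one generic approach in C# (NullObject, Customer and NullCustomer are invented names, and this is an illustration rather than a drop-in framework): register a single null instance per business type once, and ask a shared helper for it instead of returning null.

        using System;
        using System.Collections.Generic;

        public static class NullObject
        {
            private static readonly Dictionary<Type, object> Registry = new Dictionary<Type, object>();

            public static void Register<T>(T nullInstance) where T : class =>
                Registry[typeof(T)] = nullInstance;

            public static T For<T>() where T : class =>
                Registry.TryGetValue(typeof(T), out var instance)
                    ? (T)instance
                    : throw new InvalidOperationException("No null object registered for " + typeof(T).Name);

            // Convenience: turn a possibly-null reference into the null object.
            public static T OrNullObject<T>(this T value) where T : class =>
                value ?? For<T>();
        }

        // Example business type with a do-nothing null implementation.
        public class Customer
        {
            public virtual string Name => "n/a";
            public virtual void Charge(decimal amount) { /* real billing here */ }
        }

        public sealed class NullCustomer : Customer
        {
            public override string Name => "(no customer)";
            public override void Charge(decimal amount) { /* intentionally does nothing */ }
        }

    Usage would be something like NullObject.Register<Customer>(new NullCustomer()) at startup, after which repository code can return foundCustomer.OrNullObject(). The per-type null class still has to exist (it is where the "do nothing" behaviour lives), but the plumbing around it is written once; for the list case, an empty List<T> already serves as a perfectly good null object.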

    Read the article

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to Unit Testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project that I want to build tests for from the start. I'm trying to figure out general strategies and patterns for building test suites. When you look at a class, many tests come to you obviously due to the nature of the class. Say for a "user account" class with basic CRUD operations, being related to a database table, we will want to test - well, the CRUD:
    - creating an object and seeing whether it exists
    - querying its properties
    - changing some properties
    - changing some properties to incorrect values
    - deleting it again

    As for how to break things, there are "fail" tests common to most CRUD classes, like:
    - invalid input data types
    - a number as the ID key that exceeds the range of the chosen data type
    - input in an incorrect character encoding
    - input that is too long
    - and so on and so on

    For a unit test concerned with file operations, the list of "breaking things" could be:
    - invalid characters in the file name
    - file name too long
    - file name uses an incorrect protocol or path

    I'm pretty sure similar patterns - applicable beyond the unit test one is currently working on - can be found for most units that are being tested. Now my question is: am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about unit testing, and if I did it right, this wouldn't be an issue at all? Is unit testing as a process of finding as many ways to break the unit as possible the right way to go? If I am correct:
    - Are there existing definitions, lists, or cheat sheets for such patterns?
    - Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns?
    - Is there any assistance - in the form of check lists, or software - to aid in writing complete tests?
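
    The question is about PHPUnit, but as a language-neutral illustration of turning such a "breaking pattern" into a reusable table of cases, here is a sketch in C# with NUnit (the UserAccount class and its rules are invented for the example; PHPUnit's data providers play the same role as the TestCase attributes here):

        using System;
        using NUnit.Framework;

        // Invented system under test: rejects bad identifiers and over-long names.
        public sealed class UserAccount
        {
            public UserAccount(long id, string name)
            {
                if (id <= 0 || id > int.MaxValue)
                    throw new ArgumentOutOfRangeException(nameof(id));
                if (string.IsNullOrEmpty(name) || name.Length > 64)
                    throw new ArgumentException("invalid name", nameof(name));
                Id = (int)id;
                Name = name;
            }

            public int Id { get; }
            public string Name { get; }
        }

        [TestFixture]
        public class UserAccountBreakingPatterns
        {
            // The "breaking pattern" list becomes a table of cases that can be
            // reused across similar CRUD classes.
            [TestCase(0L)]
            [TestCase(-1L)]
            [TestCase((long)int.MaxValue + 1)]   // exceeds the range of the chosen data type
            public void Rejects_invalid_ids(long badId) =>
                Assert.Throws<ArgumentOutOfRangeException>(() => new UserAccount(badId, "ok"));

            [TestCase("")]
            [TestCase(null)]
            public void Rejects_invalid_names(string badName) =>
                Assert.Throws<ArgumentException>(() => new UserAccount(1, badName));

            [Test]
            public void Rejects_name_that_is_too_long() =>
                Assert.Throws<ArgumentException>(() => new UserAccount(1, new string('x', 65)));
        }

    Keeping the case tables in one place per pattern (invalid IDs, bad encodings, over-long input) is what makes the "breaking pattern" reusable rather than rewritten for every class.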

    Read the article

  • iOS app with a lot of text

    - by rdurand
    I just asked a question on StackOverflow, but I'm thinking that a part of it belongs here, as questions about design patterns are welcomed by the FAQ. Here is my situation. I have almost completely developed a native iOS app. The last section I need to implement is all the rules of a sport, so that's a lot of text. It has one main level of sections, divided into subsections, containing a lot of structured text (paragraphs, a few pictures, bulleted/numbered lists, tables). I have absolutely no problem with the coding; I'm just looking for advice to improve and make the best design possible for my app.

    My first shot (the last one so far) was a UITableViewController containing the sections, sending the user to another UITableViewController with the subsections of the selected section, and then one strange last UITableViewController where the cells contain UITextViews, section headers help structure the content, etc. What I would like is your advice on how to improve the structure of this section. I'm perfectly ready to destroy/rebuild the whole thing; I'm really lost in my design here. As I said on SO, I've begun to implement a UIWebView in a UIViewController, showing an HTML page with jQuery Mobile to display the content, and it's fine. My question is more about the two views taking the user to that content. I used UITableViewControllers because that's what seemed the most appropriate for a structured hierarchy like this one, but that doesn't seem like the best solution in terms of user experience. What structure / "view-flow" / kind of presentation would you try to implement in my situation? As always, any help would be greatly appreciated!

    Just so you can understand the hierarchy better, here is a simple example:

        UINavigationController
        -> Sections list (1 UITableViewController, 3 rows)
           -> Section 1 (UITableViewController, 3 rows)
              -> SubSection 1.1 -> Content
              -> SubSection 1.2 -> Content
              -> SubSection 1.3 -> Content
           -> Section 2 (UITableViewController, 5 rows)
              -> SubSection 2.1 -> Content
              -> SubSection 2.2 -> Content
              -> SubSection 2.3 -> Content
              -> SubSection 2.4 -> Content
              -> SubSection 2.5 -> Content
           -> Section 3 (UITableViewController, 2 rows)
              -> SubSection 3.1 -> Content
              -> SubSection 3.2 -> Content

        In total: 1 UITableViewController (3 rows), 3 UITableViewControllers (with different
        numbers of rows), and 10 UIViewControllers with a UIWebView for the Content screens.

    Read the article

  • Passing a file with multiple patterns to grep

    - by Michael Goldshteyn
    Let's say we have two files.

    match.txt, a file containing patterns to match:

        fed ghi
        tsr qpo

    data.txt, a file containing lines of text:

        abc fed ghi jkl
        mno pqr stu vwx
        zyx wvu tsr qpo

    Now, I want to issue a grep command that should return the first and third lines from data.txt:

        abc fed ghi jkl
        zyx wvu tsr qpo

    ...because each of these two lines matches one of the patterns in match.txt. I have tried:

        grep -F -f match.txt data.txt

    but that returns no results.

    grep info: GNU grep 2.6.3 (cygwin)
    OS info: Windows 2008 R2

    Update: It seems that grep is confused by the space in the search pattern lines, but with the -F flag, it should be treating each line in match.txt as an individual match pattern.

    Read the article

  • DATE function does not support all the dates in DAX by design #powerpivot #tabular #dax

    - by Marco Russo (SQLBI)
    The DATE function in DAX has this simple syntax:

        DATE( <year>, <month>, <day> )

    If you are like me, you never read the BOL note that says clearly that it supports dates beginning with March 1, 1900. In fact, I was wrongly assuming that it would support any date that can be represented in a Date data type in Data Models, i.e. all dates beginning with January 1, 1900. The funny thing is that in some of the BOL documentation you will find that the Date data type supports dates after March 1, 1900 (which seems not to include that date, but this is a detail…). But we should not digress. The real issue is that if you try to call the DATE function passing values between January 1 and February 28, 1900, you will see a different day as a result.

        evaluate row ( "x", DATE( 1900, 1, 1 ) )
        -- returns the WRONG result
        -- [x]  12/31/1899 12:00:00 AM

        evaluate row ( "x", DATE( 1900, 2, 29 ) )
        -- returns the WRONG result
        -- [x]  2/28/1900 12:00:00 AM

        evaluate row ( "x", DATE( 1900, 3, 1 ) )
        -- returns the CORRECT result
        -- [x]  3/1/1900 12:00:00 AM

    As usual, this is not a bug. It is "by design". The DATE function works this way in Excel, and in Excel, too, it was "by design". In this case the design is having the same bug as Lotus 1-2-3, which handled 1900 as a leap year, even though it isn't. The first release of Lotus 1-2-3 is dated 1983. I hope many of my readers are younger than that. I tried to open a bug on Connect. Please vote for it. I would like it if Microsoft changed this type of item from "by design" (as we can expect) to "by genetic disease". Or to "historical respect", in order to be more politically correct.

    Read the article

  • design in agile process

    - by ying
    Recently I had an interview with the dev team in a company. The team uses agile + TDD. The code exercise implements a video rental store which generates a statement to calculate the total rental fee for each type of video (new release, children, etc.) for a customer. The existing code uses objects like:
    - Statement, which generates the statement and calculates the fee; a big switch statement uses an enum to determine how to calculate the rental fee
    - Customer, which holds a list of rentals
    - Movie, a base class with a derived class for each type of movie (NEW, CHILDREN, ACTION, etc.)

    The code originally doesn't compile, as the owner was assumed to have been hit by a bus. So here is what I did:
    - outlined improvements to the object model to give each class better responsibilities
    - used the strategy pattern to replace the switch statement and wired the strategies up in config

    But the team says it's a waste of time because there is no requirement for it; the UAT test suite works, and that is the only guideline that goes into the architecture decision. The underlying story is just to get the pricing feature out and says nothing about how to do it. So the discussion focused on why time should be spent on refactoring the switch statement. In my understanding, the agile methodology doesn't mean zero design upfront, and such a code smell should be avoided from the beginning. Also, no unit/UAT test suite will detect such a code smell; otherwise Sonar and FindBugs wouldn't exist.

    Here is what I want to ask:
    - Is there such a thing as agile design in the agile methodology, just like agile documentation?
    - How do you define agile design upfront? How do you know when enough is enough? In my understanding, a ballpark architecture and the data contracts among components should be defined before/when starting the project, not the details. Am I right?
    - Can anyone explain what the team is really looking for in this kind of setup? Is it a design aspect or an agile aspect?
    - How do you implement the minimum viable product concept in the agile process in a real-world project? Is it a must that you feel embarrassed by the MVP?

    Read the article
