Search Results

Search found 63182 results on 2528 pages for 'data driven tests'.


  • Explicitly pass context object versus injecting with IoC

    - by SonOfPirate
    I have a layered service application where the service layer delegates operations into the domain layer for execution. Many of these operations need to know the context under which they are operating. (The context includes the identity of the current user, culture information, etc. received from the caller.) For example, I have an API method that returns a list of announcements. The list is based on the current user's role, and each announcement is localized to their culture. The API is a thin facade that delegates to an Application Service in my domain layer. The Application Service method obviously needs to know the context of the current request/operation, since another call to the same API from another user should result in a different list. Within this method we also have logging that uses some of the context information, so we have a clear picture of the context in which the operation was performed (this is especially useful if something goes wrong). While this is a contrived example, in the real world my Application Services will coordinate operations with many collaborating components, any number of which also need the context information. My choice is either to pass the context to the Application Service, which would then pass it with any calls to collaborators, or to have the IoC container satisfy the dependency that the Application Service and any collaborators have on the context. I am wondering whether it is considered good/bad practice, or a code smell, to pass the context object as a parameter to the domain methods, versus injecting the context via an IoC container. (EDIT: I should mention that the context object is instantiated per-request.)
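
    A minimal sketch of the two approaches under discussion, in C# (all type and member names here are hypothetical, not from the original post): explicit parameter passing versus constructor injection of a per-request context.

        using System.Collections.Generic;

        // Hypothetical context type; in an IoC container it would be
        // registered with a per-request lifetime.
        public interface IOperationContext
        {
            string UserId { get; }
            string Culture { get; }
        }

        // Option 1: the context travels explicitly through every call.
        public class AnnouncementServiceExplicit
        {
            public IList<string> GetAnnouncements(IOperationContext context)
            {
                // context is handed onward to every collaborator that needs it
                return LoadForRole(context.UserId, context.Culture);
            }

            private IList<string> LoadForRole(string userId, string culture)
                => new List<string>();
        }

        // Option 2: the container injects the same per-request context
        // into the service and each collaborator independently.
        public class AnnouncementServiceInjected
        {
            private readonly IOperationContext _context;

            public AnnouncementServiceInjected(IOperationContext context)
            {
                _context = context;
            }

            public IList<string> GetAnnouncements()
                => new List<string>(); // _context.UserId / _context.Culture used here
        }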

    Read the article

  • DataSets and XML - The Simplistic Approach

    One of the first ways I learned to read XML data from external data sources was the DataSet's ReadXml function. This function takes a file path (or URL) for an XML document and loads it into a DataSet. This functionality is great when you need a simple way to process an XML document. In addition, the DataSet object also offers a simple way to save data in an XML format via the WriteXml function, which saves the current data in the DataSet to an XML file for later use.

        DataSet ds = new DataSet();
        string filePath = "http://www.yourdomain.com/someData.xml";
        string fileSavePath = @"C:\Temp\Test.xml";
        // Read the file from this location
        ds.ReadXml(filePath);
        // Save the file to this location
        ds.WriteXml(fileSavePath);

    I have used the ReadXml function when consuming data from external RSS feeds to display on one of my sites. It lets me quickly pull in data from external sites with little to no processing. Example site: MyCreditTech.com

    Read the article

  • Advanced Analytics Oracle Data Mining - NEW 2-Day Training Course

    - by Mike.Hallett(at)Oracle-BI&EPM
    A NEW 2-Day Oracle University (OU) Instructor Led Course on Oracle Data Mining has been developed for partners and customers to learn more about data mining, predictive analytics and knowledge discovery inside the Oracle Database. Oracle Data Mining provides data mining algorithms that run natively in the database for high-performance model building and model deployment. This OU course is a great way to learn the advantages and benefits of "big data analytics": mining data and building and deploying "predictive analytics" all inside the Oracle Database, and how it works with OBI. To register for a class, click here, then click on View Schedule to see the latest scheduled classes and/or submit your information expressing interest in attending a class.

    Read the article

  • How to reduce tight coupling between two data sources

    - by fstuijt
    I'm having some trouble finding a proper solution to the following architecture problem. In our setting (originally shown in a diagram) we have two data sources, where data source A is the primary source for items of type Foo. A secondary data source, B, can be used to retrieve additional information on a Foo; however, this information does not always exist. Furthermore, data source A can be used to retrieve items of type Bar, and each Bar refers to a Foo. The difficulty is that each Bar should refer to a Foo which, if available, also contains the information as augmented by data source B. My question is: how do I remove the tight coupling between data sources A and B?
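
    One common way to break this kind of coupling (a sketch only; Foo, the source interface, and the lookup are hypothetical stand-ins for the poster's types) is to hide source B behind an enriching decorator, so that consumers and source A never reference B directly:

        using System;

        public class Foo { public int Id; public string Extra; }

        public interface IFooSource { Foo GetFoo(int id); }

        // Primary source A knows nothing about B.
        public class SourceA : IFooSource
        {
            public Foo GetFoo(int id) => new Foo { Id = id };
        }

        // Decorator that augments a Foo with B's data when it exists.
        public class EnrichingFooSource : IFooSource
        {
            private readonly IFooSource _inner;          // source A
            private readonly Func<int, string> _lookupB; // source B, optional data

            public EnrichingFooSource(IFooSource inner, Func<int, string> lookupB)
            {
                _inner = inner;
                _lookupB = lookupB;
            }

            public Foo GetFoo(int id)
            {
                var foo = _inner.GetFoo(id);
                foo.Extra = _lookupB(id) ?? foo.Extra; // augment only if B has data
                return foo;
            }
        }

    Bars that need enriched Foos are then composed against IFooSource, so neither source A nor the Bar code ever takes a direct dependency on B.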

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu selections and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open-platform, and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems.

    Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed, using JDeveloper, Oracle BPM Composer, and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site.

    Combining Oracle BPM with these open source tools provides a comprehensive, simple, and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can be integrated using open data formats, either XML or JSON, with CRUD access via the Open-XDX Java component. Open-XDX is a code-free approach in which data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and the appropriate SQL statements are created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library; again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset.

    The developer-level video is designed as a tutorial with segments, hands-on demonstrations, and reviews, allowing developers to learn the techniques and approaches in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven, while Java coding is limited to configuring values and parameters and performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources on Oracle BPM and WLS cover other normal delivery aspects such as user management and application deployment.

    Read the article

  • Ubiquitous Language and Custom types

    - by EdvRusj
    Note that my question refers to those attributes that on their own already represent a concept (i.e. on their own provide a cohesive meaning). Such an attribute needs no additional functional support and as such is self-contained. I'm also well aware that even with self-contained attributes, custom types may prove beneficial (for example, they give the ability to add new behavior later, when business requirements change). Thus, my question focuses only on whether custom types for self-contained attributes really enrich the Ubiquitous Language (UL). a) I've read that in most cases even simple, self-contained attributes should have custom, more descriptive types rather than basic value types (double, string, ...), because, among other things, descriptive types add to the UL, while the use of basic types weakens the language. I understand the importance of the UL, but how does having a basic type for a self-contained attribute weaken the language, when with self-contained attributes the name of the attribute already adequately describes the concept and thus contributes to the UL vocabulary? For example, the term person_age already adequately expresses the concept of the number of years a person has lived: class Person { string person_age; } So what could we possibly gain by also introducing the term ThingAge to the UL: class Person { ThingAge person_age; } Thanks
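
    For contrast, a minimal sketch (hypothetical names, not from the original question) of what a descriptive value type can carry beyond the attribute name alone, such as validation and type safety at call sites:

        using System;

        // A value type named after the concept itself.
        public readonly struct PersonAge
        {
            public int Years { get; }

            public PersonAge(int years)
            {
                // The invariant lives with the concept, not with every caller.
                if (years < 0) throw new ArgumentOutOfRangeException(nameof(years));
                Years = years;
            }
        }

        public class Person
        {
            // The type now repeats the concept the attribute name expresses;
            // the compiler can also stop a plain int (e.g., a height) being
            // passed where an age is expected.
            public PersonAge Age { get; set; }
        }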

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • Can the following Domain Entity contain logic for creating/deleting other entities?

    - by user702769
    a) As far as I understand it, in most cases the Domain Model (DM) doesn't contain code for creating/deleting domain entities; instead, it is the job of the layers on top of the DM (i.e. the service layer or UI layer) to create/delete domain entities? b) Domain entities are modelled after real-world entities. Assuming the particular real-world entity being abstracted does have the ability to create/delete other real-world entities, then I assume the domain entity abstracting it could also contain logic for creating/deleting other entities?

        class RobotDestroyerCreator
        {
            // ...
            void heavyThinking()
            {
                // ...
                if (...)
                    unitOfWork.registerDelete(robot);
                // ...
                if (...)
                {
                    var robotNew = new Robot(...);
                    unitOfWork.registerNew(robotNew);
                }
                // ...
            }
        }

    Thank you

    Read the article

  • Applying DDD principles in a RESTish web service

    - by Andy
    I am developing a RESTish web service. I think I got the idea of the difference between aggregation and composition. Aggregation does not enforce lifecycle/scope on the objects it references. Composition does enforce lifecycle/scope on the objects it contains/owns. If I delete a composite object then all the objects it contains/owns are deleted as well, while deleting an aggregate root does not delete referenced objects. 1) If it is true that deleting aggregate roots does not necessarily delete referenced objects, what sense does it make to not have a repository for the referenced objects? Or is "aggregate root" as a term referring to what is known as a composite object? 2) When you create a web service you will have multiple endpoints; in my case I have one entity Book and another named Comment. It does not make sense to keep the comments in my application if the book is deleted. Therefore, Book is a composite object. I guess I should not have a repository for comments, since that would break the enforcement of lifecycle and rules that the Book class may have. However, I have URLs such as (examples only): GET /books/1/comments POST /books/1/comments Now, if I do not have a repository for comments, does that mean I have to load the book object and then return the referenced comments? Am I allowed to return a list of Comment entities from the BookRepository; does that make sense? The repository for Book may eventually become rather big with all sorts of methods. Am I allowed to write JPQL (JPA queries) that target comments and not books inside the repository? What about pagination and filtering of comments? When adding a new comment triggered by the POST endpoint, do I need to load the book, add the comment to the book, and then update the whole book object? What I am currently doing is having my own CommentRepository, even though the comments are deleted with the book. I could use some direction on how to do this correctly. Since you are exposing not only root objects in RESTish services, I wonder how to handle this at the backend. I am using Hibernate and Spring.
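
    One way this is often handled (a sketch only, not the poster's code; shown in C# for consistency with the other examples on this page, though the question is Java/Spring, and all names are hypothetical) is to keep Comment inside the Book aggregate for writes, while letting the Book repository expose a read-only, paged query for the nested resource:

        using System.Collections.Generic;

        public class Comment { public int Id; public string Text; }

        public class Book
        {
            private readonly List<Comment> _comments = new List<Comment>();
            public int Id;

            // Writes go through the aggregate root, so Book's rules apply.
            public void AddComment(string text) =>
                _comments.Add(new Comment { Text = text });

            public IReadOnlyList<Comment> Comments => _comments;
        }

        public interface IBookRepository
        {
            Book FindById(int id);
            void Save(Book book);

            // A read-only, paged query for the nested resource; it returns
            // comments for display without loading rules onto the caller.
            IReadOnlyList<Comment> FindComments(int bookId, int page, int pageSize);
        }

    Under this split, POST /books/1/comments loads the Book, calls AddComment, and saves the aggregate, while GET /books/1/comments can use the paged query directly.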

    Read the article

  • Video: Analyzing Big Data using Oracle R Enterprise

    - by Sherry LaMonica
    Learn how Oracle R Enterprise is used to generate new insight and new value to business, answering not only what happened, but why it happened. View this YouTube Oracle Channel video overview describing how analyzing big data using Oracle R Enterprise differs from other analytics tools at Oracle. Oracle R Enterprise (ORE), a component of the Oracle Advanced Analytics Option, couples the wealth of analytics packages in R with the performance, scalability, and security of Oracle Database. ORE executes base R functions transparently on database data without having to pull data from Oracle Database. As an embedded component of the database, Oracle R Enterprise can run your R scripts and open source packages via embedded R execution, where the database manages the data served to the R engine and provides user-controlled data parallelism. The result is faster and more secure access to data. ORE also works with the full suite of in-database analytics, providing integrated results to the analyst.

    Read the article

  • Updating query results

    - by Francisco Garcia
    Within a DDD and CQRS context, a query result is displayed as table rows. Whenever new rows are inserted or deleted, their positions must be calculated by comparing the previous query result with the most recent one. This is needed to animate the insertion and removal of rows. The model of my view contains an array of the displayed query results, but I need a place to compare its contents against the latest query. Right now I consider my view model part of my application layer, but the comparison of two query result sets seems like something that must be done within the domain layer. Which component should cache a query result, and which one should compare them? Are view models (and their cached contents) supposed to be in the application layer?
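
    For the comparison step itself, a minimal sketch (hypothetical types; this says nothing about which layer should own it) of diffing two result sets by key to find inserted and deleted rows:

        using System.Collections.Generic;
        using System.Linq;

        public class RowDiff
        {
            public List<int> InsertedIds;
            public List<int> DeletedIds;

            // Compares the previously displayed result with the latest one.
            public static RowDiff Compare(IEnumerable<int> previousIds,
                                          IEnumerable<int> latestIds)
            {
                var prev = new HashSet<int>(previousIds);
                var next = new HashSet<int>(latestIds);
                return new RowDiff
                {
                    InsertedIds = next.Where(id => !prev.Contains(id)).ToList(),
                    DeletedIds  = prev.Where(id => !next.Contains(id)).ToList(),
                };
            }
        }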

    Read the article

  • Where should we put validation for a domain model?

    - by adisembiring
    I am still looking for best practices for domain model validation. Is it good to put the validation in the constructor of the domain model? My domain model validation example is as follows:

        public class Order
        {
            private readonly List<OrderLine> _lineItems;

            public virtual Customer Customer { get; private set; }
            public virtual DateTime OrderDate { get; private set; }
            public virtual decimal OrderTotal { get; private set; }

            public Order(Customer customer)
            {
                if (customer == null)
                    throw new ArgumentException("Customer must be defined");
                Customer = customer;
                OrderDate = DateTime.Now;
                _lineItems = new List<OrderLine>();
            }

            public void AddOrderLine // ....

            public IEnumerable<OrderLine> OrderLines
            {
                get { return _lineItems; }
            }
        }

        public class OrderLine
        {
            public virtual Order Order { get; set; }
            public virtual Product Product { get; set; }
            public virtual int Quantity { get; set; }
            public virtual decimal UnitPrice { get; set; }

            public OrderLine(Order order, int quantity, Product product)
            {
                if (order == null)
                    throw new ArgumentException("Order must be defined");
                if (quantity <= 0)
                    throw new ArgumentException("Quantity must be greater than zero");
                if (product == null)
                    throw new ArgumentException("Product must be defined");
                Order = order;
                Quantity = quantity;
                Product = product;
            }
        }

    Thanks for all of your suggestions.

    Read the article

  • How to model an address type in DDD?

    - by Songo
    I have a User entity that has a Set of Address, where Address is a value object:

        class User {
            // ...
            private Set<Address> addresses;
            // ...
            public void setAddresses(Set<Address> addresses) {
                // set all addresses as a batch
            }
            // ...
        }

    A User can have a home address and a work address, so I should have something that acts as a lookup in the database:

        tbl_address_type
        ----------------------------------
        | address_type_id | address_type |
        ----------------------------------
        | 1               | work         |
        | 2               | home         |
        ----------------------------------

    and correspondingly:

        tbl_address
        -----------------------------------------------------------------
        | address_id | address_description | address_type_id | user_id   |
        -----------------------------------------------------------------
        | 1          | 123 main street     | 1               | 100       |
        | 2          | 456 another street  | 1               | 100       |
        | 3          | 789 long street     | 2               | 200       |
        | 4          | 023 short street    | 2               | 200       |
        -----------------------------------------------------------------

    1. Should the address type be modeled as an Entity or a Value type, and why?
    2. Is it OK for the Address value object to hold a reference to the entity AddressType (in case it was modeled as an entity)? Is this feasible using Hibernate/NHibernate?
    3. If a user can change his home address, should I expose a User.updateHomeAddress(Address homeAddress) function on the User entity itself? How can I enforce that the client passes a home address and not a work address in this case? (a sample implementation is most welcome)
    4. If I want to get the user's home address via a User.getHomeAddress() function, must I load the whole addresses set, then loop over it and check each entry for its type until I find the correct one? Is there a more efficient way?
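
    Since the post asks for a sample, here is one hedged sketch (in C# for consistency with the other examples on this page; all names hypothetical) of keying the addresses by an explicit type, which addresses both the "wrong type passed" and the "loop to find it" concerns:

        using System.Collections.Generic;

        public enum AddressType { Home, Work }

        public sealed class Address
        {
            public string Description { get; }
            public Address(string description) { Description = description; }
        }

        public class User
        {
            // Keying by type makes lookup O(1) and makes "home vs work"
            // explicit at the call site instead of a runtime check.
            private readonly Dictionary<AddressType, Address> _addresses =
                new Dictionary<AddressType, Address>();

            public void UpdateHomeAddress(Address address) =>
                _addresses[AddressType.Home] = address;

            public Address GetHomeAddress() =>
                _addresses.TryGetValue(AddressType.Home, out var a) ? a : null;
        }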

    Read the article

  • Can a domain specific language be used to represent the Open SRD?

    - by NeoModulus
    I am in the early stages of creating an open source C# library that would allow developers to drop the open SRD (http://www.d20srd.org/) into an existing project. Abstractly, it is a complex set of tightly coupled business rules. Having previously worked on an adaptive object model project for health care risk management, I began with that pattern in mind. Due to the high coupling of the rules, it is becoming apparent that the project may require some kind of scripting. Having started researching DSL implementation, I am now considering scrapping the adaptive object model for a domain specific language. I have not worked with domain specific languages, so my question is: is it reasonable to assume a domain specific language can be used to represent the open SRD?

    Read the article

  • Domain Model and Querying

    - by Tyrsius
    I am new to DDD, having worked only on transaction-script apps with an anemic model, or just Big Balls of Mud, so please forgive any terminology I abuse. I am trying to understand the proper separation between the domain model and the repository. What is the proper way to construct a domain object that is coming from a database, assuming the (incredibly simplified) need to query for objects by status (returns an enumerable) or by ID? Should a factory build the objects, exposing methods for GetByStatus() and GetByID() and using an injected repository? Should a repository be called directly, knowing how to build a domain model from the DTO? Should the domain model have a constructor for get-by-ID, using an injected repository to load the initial state, and some other (?) method for the list? I am not really sure which would be the best way, and this question has an answer advocating each one (and these are certainly mutually exclusive).
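
    As a point of reference, a minimal sketch (hypothetical names; this shows just the second of the three options the post lists) where a repository maps DTOs to domain objects and the domain type stays persistence-ignorant:

        using System.Collections.Generic;
        using System.Linq;

        public class OrderDto { public int Id; public string Status; }

        public class Order
        {
            public int Id { get; }
            public string Status { get; }
            public Order(int id, string status) { Id = id; Status = status; }
        }

        public class OrderRepository
        {
            private readonly IEnumerable<OrderDto> _table; // stands in for the DB

            public OrderRepository(IEnumerable<OrderDto> table) { _table = table; }

            public Order GetById(int id) =>
                Map(_table.First(d => d.Id == id));

            public IEnumerable<Order> GetByStatus(string status) =>
                _table.Where(d => d.Status == status).Select(Map);

            // The mapping from persistence shape to domain object lives here,
            // so the domain type has no knowledge of the database.
            private static Order Map(OrderDto d) => new Order(d.Id, d.Status);
        }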

    Read the article

  • How to make this design closer to proper DDD?

    - by Seralize
    I've read about DDD for days now and need help with this sample design. All the rules of DDD make me very confused as to how I'm supposed to build anything at all, when domain objects are not allowed to show methods to the application layer; where else do I orchestrate behaviour? Repositories are not allowed to be injected into entities, and entities themselves must thus work on state. Then an entity needs to know something else from the domain, but other entity objects are not allowed to be injected either? Some of these things make sense to me but some don't. I've yet to find good examples of how to build a whole feature, as every example is about Orders and Products, repeating the other examples over and over. I learn best by reading examples, and have tried to build a feature using the information I've gained about DDD so far. I need your help to point out what I do wrong and how to fix it, most preferably with code, as "I would not recommend doing X and Y" is very hard to understand in a context where everything is just vaguely defined already. If I can't inject an entity into another, it would be easier to see how to do it properly. In my example there are users and moderators. A moderator can ban users, but with a business rule: only 3 per day. I made an attempt at setting up a class diagram to show the relationships (code below):

        interface iUser {
            public function getUserId();
            public function getUsername();
        }

        class User implements iUser {
            protected $_id;
            protected $_username;

            public function __construct(UserId $user_id, Username $username) {
                $this->_id = $user_id;
                $this->_username = $username;
            }

            public function getUserId() {
                return $this->_id;
            }

            public function getUsername() {
                return $this->_username;
            }
        }

        class Moderator extends User {
            protected $_ban_count;
            protected $_last_ban_date;

            public function __construct(UserBanCount $ban_count, SimpleDate $last_ban_date) {
                $this->_ban_count = $ban_count;
                $this->_last_ban_date = $last_ban_date;
            }

            public function banUser(iUser &$user, iBannedUser &$banned_user) {
                if (!$this->_isAllowedToBan()) {
                    throw new DomainException('You are not allowed to ban more users today.');
                }
                if (date('d.m.Y') != $this->_last_ban_date->getValue()) {
                    $this->_ban_count = 0;
                }
                $this->_ban_count++;
                $date_banned = date('d.m.Y');
                $expiration_date = date('d.m.Y', strtotime('+1 week'));
                $banned_user->add($user->getUserId(), new SimpleDate($date_banned), new SimpleDate($expiration_date));
            }

            protected function _isAllowedToBan() {
                if ($this->_ban_count >= 3 AND date('d.m.Y') == $this->_last_ban_date->getValue()) {
                    return false;
                }
                return true;
            }
        }

        interface iBannedUser {
            public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date);
            public function remove();
        }

        class BannedUser implements iBannedUser {
            protected $_user_id;
            protected $_date_banned;
            protected $_expiration_date;

            public function __construct(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date) {
                $this->_user_id = $user_id;
                $this->_date_banned = $date_banned;
                $this->_expiration_date = $expiration_date;
            }

            public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date) {
                $this->_user_id = $user_id;
                $this->_date_banned = $date_banned;
                $this->_expiration_date = $expiration_date;
            }

            public function remove() {
                $this->_user_id = '';
                $this->_date_banned = '';
                $this->_expiration_date = '';
            }
        }

        // Gathers objects
        $user_repo = new UserRepository();
        $evil_user = $user_repo->findById(123);

        $moderator_repo = new ModeratorRepository();
        $moderator = $moderator_repo->findById(1337);

        $banned_user_factory = new BannedUserFactory();
        $banned_user = $banned_user_factory->build();

        // Performs ban
        $moderator->banUser($evil_user, $banned_user);

        // Saves objects to database
        $user_repo->store($evil_user);
        $moderator_repo->store($moderator);

        $banned_user_repo = new BannedUserRepository();
        $banned_user_repo->store($banned_user);

    Should the User entity have an 'is_banned' field which can be checked with $user->isBanned();? How do I remove a ban? I have no idea.

    Read the article

  • Display large amount of data to client through pagination

    - by ebram tharwat
    I have a web application in which I need to show a large number of data records to clients. I'll use pagination, but I was wondering whether I should: Load all the data at once, so pagination, sorting and searching will be easy, but the initial load takes a long time (against a local DB it takes up to 9 seconds). Or, each time I show a new page (from the pagination), make a new request to the server and then a new request to the DB to get the next records. But then what if the client clicks the Prev button? I'd make a new request for data that I had loaded previously. Should I cache data that was loaded before, and if so, how, if that's a good technique? So: load all data at once, or make a new request every time I need data that may have been loaded before? I'm using ASP.NET MVC SPA with durandaljs and knockoutjs.
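
    For the server-side option, a minimal sketch (hypothetical names, not the poster's code) of a paged query using skip/take, which is the usual shape regardless of the data access library:

        using System.Collections.Generic;
        using System.Linq;

        public class PagedResult<T>
        {
            public IReadOnlyList<T> Items;
            public int TotalCount; // lets the client render page links
        }

        public static class Paging
        {
            // page is 1-based; the client fetches only the rows it will show.
            // source must already be ordered, or the page contents are undefined.
            public static PagedResult<T> GetPage<T>(IQueryable<T> source,
                                                    int page, int pageSize)
            {
                return new PagedResult<T>
                {
                    TotalCount = source.Count(),
                    Items = source.Skip((page - 1) * pageSize)
                                  .Take(pageSize)
                                  .ToList(),
                };
            }
        }

    A small client-side cache of recently fetched pages, keyed by page number, covers the Prev-button case without re-querying.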

    Read the article

  • 11g New Feature: Active Data Guard

    - by JaneZhang
    Before Oracle 11g, a physical standby database had to be in the mount state while applying redo. Starting with 11g, a physical standby can be open in read-only mode while redo apply is running; this capability is called Active Data Guard. With Active Data Guard, read-only queries and reporting can be offloaded to the standby while it stays current with the primary by continuously applying redo, so the standby delivers value instead of sitting idle. Oracle Active Data Guard is a separately licensed option of Oracle Database Enterprise Edition.

    To enable Active Data Guard, open the standby database read-only and then start redo apply with ALTER DATABASE RECOVER MANAGED STANDBY DATABASE. Note the prerequisite: the COMPATIBLE initialization parameter must be set to at least 11.0.0.

    To verify that Active Data Guard is enabled, query V$DATABASE; OPEN_MODE should show "READ ONLY WITH APPLY":

        SQL> SELECT open_mode FROM V$DATABASE;

        OPEN_MODE
        --------------------
        READ ONLY WITH APPLY

    To apply redo as soon as it is received, use real-time apply:

        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

    The following operations are allowed on a read-only standby:
    • Issue SELECT statements, including queries that require multiple sorts that leverage TEMP segments
    • Use ALTER SESSION and ALTER SYSTEM statements
    • Use SET ROLE
    • Call stored procedures
    • Use database links (dblinks) to write to remote databases
    • Use stored procedures to call remote procedures via dblinks
    • Use SET TRANSACTION READ ONLY for transaction-level read consistency
    • Issue complex queries (such as grouping SET queries and WITH CLAUSE queries)

    The following operations are not allowed on a read-only standby:
    • Any DMLs (excluding simple SELECT statements) or DDLs
    • Queries accessing local sequences
    • DMLs to local temporary tables

    Active Data Guard can be used in these configurations:
    • Single-instance primary with a single-instance standby
    • Oracle Real Application Clusters (Oracle RAC) primary with a single-instance standby
    • RAC primary with a RAC standby

    For the corresponding Oracle Data Guard configurations, refer to:

    * Creating a physical standby database:
      http://docs.oracle.com/cd/B28359_01/server.111/b28294/create_ps.htm

    * Oracle RAC primary with a single-instance standby:
      http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimarysingleinstance-131970.pdf

    * RAC primary with a RAC standby:
      http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimaryracphysicalsta-131940.pdf

    For more detail on Active Data Guard, see:
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr1-activedataguard-1-128199.pdf

    For Oracle Maximum Availability Architecture best practices, see:
    http://www.oracle.com/goto/maa

    Read the article

  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data-driven testing. I have tried to reduce this to the simplest example. I am using Visual Studio 2012. I created a new unit test project and am referencing System.Data. My code looks like this:

        namespace UnitTestProject1
        {
            [TestClass]
            public class UnitTest1
            {
                [DeploymentItem(@"OrderService.csv")]
                [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                            "OrderService.csv", "OrderService#csv",
                            DataAccessMethod.Sequential)]
                [TestMethod]
                public void TestMethod1()
                {
                    try
                    {
                        Debug.WriteLine(TestContext.DataRow["ID"]);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail();
                    }
                }

                public TestContext TestContext { get; set; }
            }
        }

    I have a very small CSV file whose Build Action is set to 'Content' with 'Copy Always'. I have added a .testsettings file to the solution, enabled deployment, and added the CSV file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the CSV is at the project root level, same as the .cs test code file. I have tried variations with XML as well as CSV. TestContext is not null, but DataRow always is. I have not gotten this to work despite a lot of fiddling, and I'm not sure what I'm doing wrong. Does MSTest create a log anywhere that would tell me if it is failing to find the CSV file, or what specific error might be causing DataRow to fail to populate? I have tried the following CSV files:

        ID
        1
        2
        3
        4

    and

        ID, Whatever
        1,0
        2,1
        3,2
        4,3

    So far, no dice.

    Read the article

  • Basics of automated tests?

    - by Muhammad Yasir
    Hi, till now I've been testing my web applications (usually written in PHP) as well as my desktop applications (normally Java or C#) manually. Now I've read on the net about automated tests. I tried searching for the details, but almost all searches end up at things like PHPUnit. Could someone please shed some light on the theory behind automated tests? How can software be tested automatically? Are there any limitations? Or maybe you can point me to a place where I can read about this. Regards
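
    The core idea is small and mechanical: a test is just code that calls your code and asserts on the result, and a test runner executes every such test and reports failures. A minimal sketch (hypothetical names, in C# with the same MSTest attributes shown in the data-driven MSTest question above):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class Calculator
        {
            public static int Add(int a, int b) => a + b;
        }

        [TestClass]
        public class CalculatorTests
        {
            // The test runner discovers this method via its attributes,
            // runs it, and reports a pass or failure with no human input.
            [TestMethod]
            public void Add_ReturnsSumOfArguments()
            {
                Assert.AreEqual(5, Calculator.Add(2, 3));
            }
        }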

    Read the article

  • Rack rSpec Controller Tests with Rack Middleware issue

    - by Roman Gonzalez
    Howdy, I'm having big trouble testing with rSpec's controller API. Right now I'm using a middleware authentication solution (Warden), and when I run the specs, the proxy added by the middleware is not there, and all the authentication tests are throwing NilPointerExceptions all over the place. It seems rSpec does not add the middleware to the final app on purpose, and I would like to know if there is a way to monkey patch rSpec in order to make that work. I already tested the whole thing with Cucumber; however, this is a refactoring of an old authentication version, and there are several controller tests that depend on authentication logic in order to work. Thanks in advance.

    Read the article

  • Cross compiling unit tests with CppUnit or similar

    - by Mark
    Has anyone used a package like CppUnit to cross-compile C++ unit tests to run on an embedded platform? I'm using G++ on a Linux box to compile executables that must run on a LynxOS board. I can't seem to get any of the common unit test packages to configure and build something that will create unit tests. I see a lot of unit test packages (CppUnit, UnitTest++, GTest, CppUTest, etc.), but very little about using these packages in a cross-compiler scenario. The ones with a "configure" script imply that this is possible, but I can't seem to get them to configure and build.

    Read the article

  • Why PartCover report shows 0% when mstest runs successfully and all tests pass

    - by SvetlanaM
    Hello, I'm trying to get code coverage with MSTest tests, using PartCover 2.2.0.36424. The problem is that with my real assemblies I get 0% code coverage (note: all tests pass). On a demo test for demo source that I created, it worked fine (the report makes sense). I noticed in the log file that for the demo files, after the line "Assembly AAAAAA loaded (MyTestesAssemblyName)", there is a line "Class NNNNNN loaded (MyTestesAssemblyName.MyClassname)"; for the real files there is no second (class) line after the assembly line. Any ideas what is different about our assemblies? (Note: they are not signed.) 10x.

    Read the article

  • unittest tests reuse for family of classes

    - by zaharpopov
    I have a problem organizing my unittest-based tests for a family of classes. For example, assume I implement a "dictionary" interface and have 5 different implementations I want to test. I write one test class that tests the dictionary interface, but how can I nicely reuse it to test all my classes? So far I do something ugly: DictType = hashtable.HashDict at the top of the file, and then use DictType in the test class. To test another class I manually change DictType to something else. How can I do this differently? You can't pass arguments to unittest classes, so is there a nicer way?

    Read the article

  • organize using directives, re-run tests?

    - by Sarah Vessels
    Before making a commit, I prefer to run all hundred-something unit tests in my C# Solution, since they only take a couple minutes to run. However, if I've already run them all, all is well, and then I decide to organize the using directives in my Solution, is it really necessary to re-run the unit tests? I have a macro that goes through all files in the Solution and runs Visual Studio's 'Remove and Sort' command on each. In my understanding, so long as all projects still build after using directives are changed around, things should be fine at runtime, too. Is this correct thinking?

    Read the article
