Search Results

Search found 67262 results on 2691 pages for 'data driven testing'.

  • C# InternalsVisibleTo() attribute for VB.NET 2.0 while testing?

    - by Will Marcouiller
    I'm building an Active Directory wrapper in VB.NET 2.0 (can't use a later .NET version) in which I have the following interfaces: IUtilisateur, IGroupe, IUniteOrganisation. These interfaces are implemented in internal classes (Friend in VB.NET), so I want to implement a façade in order to instantiate each of the interfaces with their internal classes. This gives the architecture better flexibility, etc. Now, I want to test these classes (Utilisateur, Groupe, UniteOrganisation) in a different project within the same solution. However, these classes are internal. I would like to be able to instantiate them without going through my façade, but only for these tests, nothing more. Here's a piece of code to illustrate it:

        public static class DirectoryFacade {
            public static IGroupe CreerGroupe() {
                return new Groupe();
            }
        }

        // Then in code, I would write something like:
        public partial class MainForm : Form {
            public MainForm() {
                IGroupe g = DirectoryFacade.CreerGroupe();
                // Doing stuff with instance here...
            }
        }

        // My sample interface:
        public interface IGroupe {
            string Domaine { get; set; }
            IList<IUtilisateur> Membres { get; }
        }

        internal class Groupe : IGroupe {
            private IList<IUtilisateur> _membres;

            internal Groupe() {
                _membres = new List<IUtilisateur>();
            }

            public string Domaine { get; set; }
            public IList<IUtilisateur> Membres {
                get { return _membres; }
            }
        }

    I heard of the InternalsVisibleTo() attribute recently. I was wondering whether it is available in VB.NET 2.0/VS2005 so that I could access the assembly's internal classes from my tests? Otherwise, how could I achieve this?
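
    For reference, the attribute itself lives in System.Runtime.CompilerServices and has been available since .NET 2.0, so it can be used from VS2005 for both C# and VB.NET projects. A minimal sketch of how it is typically declared (the test assembly name below is a placeholder; a strong-named assembly would also need the full public key appended):

        // In the wrapper project's AssemblyInfo.cs
        // (<Assembly: InternalsVisibleTo("...")> in an AssemblyInfo.vb)
        using System.Runtime.CompilerServices;

        // Grants the named test assembly access to this assembly's
        // internal (Friend) types, such as Groupe and Utilisateur.
        [assembly: InternalsVisibleTo("ActiveDirectoryWrapper.Tests")]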

  • Android Service Testing with messages

    - by Sandeep Dhull
    I have a service which does its work (performing network operations) depending upon the type of message (the message.what property). It then returns the response, also as a message, to the requesting component (via message.replyTo). So I am trying to write the test cases, but how? The architecture of my service is like this:

    1) A component (e.g. an Activity) binds to the service.
    2) The component sends a message to the service (using a Messenger).
    3) The service has a nested class that handles the messages, executes the network call and returns a response message to the sender (using the replyTo property of the message it initially received).

    Now to test this, I am using JUnit test cases. In those:

    1) In setUp() I am binding to the service.
    2) In testBusinessLogic() I am sending the message to the service.

    Now the problem is where to get the response message.

  • Do You Know How OUM defines the four basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
    Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing, which are sometimes collectively referred to as business system testing. The basic functional testing types referenced by OUM are:

    1. Unit Testing
    2. Integration Testing
    3. System Testing
    4. Systems Integration Testing

    See if you can match the following definitions with the appropriate type above.

    A. This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. the System under Discussion (SuD)) and external systems function as expected.

    B. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed.

    C. This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. the System under Discussion (SuD)) functions as expected. This includes out-of-the-box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as any custom components developed to address gaps in functionality.

    D. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the individual custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) function as designed.

    Check your answers below:

    1 - D, 2 - B, 3 - C, 4 - A

    If you matched all of the functional testing types to their definitions correctly, then congratulations! If not, you can find more information in the Testing Process Overview and Testing Task Overviews in the OUM Method Pack.

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) have been released and share the work with you. You can access it here: http://code.google.com/p/ndddsample. From a functionality perspective, NDDDSample matches DDDSample 1.1.0, which is based on Java and on a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. But because NDDDSample is based on .NET technologies, the two implementations cannot be matched directly. However, concepts, practices, values and patterns - especially DDD - are cross-language and cross-platform :). Implementing the .NET version of the application was an interesting journey, because as a .NET developer I now better understand the differences, positive and negative, between the two platforms. Even though those differences exist, they can be overcome; in many cases it was not hard to match a Java lib/framework with a .NET one during the implementation. Here is the technology stack:

    1. .NET 3.5 - framework
    2. VS.NET 2008 - IDE
    3. ASP.NET MVC 2.0 - for administration and tracking UI
    4. WCF - communication mechanism
    5. NHibernate - ORM
    6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
    7. SQLite - database
    8. Windsor - inversion of control container
    9. Windsor WCF facility - for better integration with NHibernate
    10. MvcContrib - and in particular its Castle WindsorControllerFactory, in order to enable IoC for controllers
    11. WPF - for the incident logging application
    12. Moq - mocking lib used for unit tests
    13. NUnit - unit testing framework
    14. Log4net - logging framework
    15. Cloud based on the Azure SDK

    These are not the latest technologies, tools and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the latest technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues. Any review and feedback are welcome! Thank you, Artur Trosin

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we’re talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or arrives too fast for us to effectively process in real time. Collectively, we call these the 4 big data V’s: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by conventional Relational Database Management Systems (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these NoSQL tools combined with traditional business systems such as SQL-based relational databases and warehouses, and other business intelligence tools.

    Drivers

    - Petabyte scale data collection and storage
    - Business intelligence and insight

    Solution

    The sketch below shows one of many big data solutions using Hadoop’s unique highly scalable storage and parallel processing capabilities combined with Microsoft Office’s Business Intelligence Components to access the data in the cluster.

    Ingredients

    - Hadoop – this big data industry heavyweight provides both large scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
      - Pig – a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
      - Mahout – a machine learning library with algorithms for clustering, classification and batch based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
      - Hive – data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
      - Pegasus – a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
      - Sqoop – a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
      - Flume – a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    - Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop; data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    - Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training

    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.

    - Hadoop Learning Resources (20+ tutorials and labs) – a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
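
    As a hedged illustration of the Hive ODBC access point mentioned above, here is a minimal sketch of querying Hive from a .NET client over ODBC (the DSN name, table, and columns are placeholder assumptions, and the sketch presumes the Hive ODBC driver is installed and configured):

        using System;
        using System.Data.Odbc;

        class HiveOdbcSample
        {
            static void Main()
            {
                // "HiveDSN" is a placeholder for a system DSN pointing at
                // the cluster's Hive ODBC driver.
                using (var connection = new OdbcConnection("DSN=HiveDSN"))
                {
                    connection.Open();
                    var command = new OdbcCommand(
                        "SELECT page, COUNT(*) AS hits FROM weblogs GROUP BY page",
                        connection);
                    using (OdbcDataReader reader = command.ExecuteReader())
                    {
                        // Stream the aggregated rows back from the cluster.
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["page"], reader["hits"]);
                    }
                }
            }
        }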

  • Using NBuilder to mock up a data driven UI - Part 1

    In this article we will take a look at a fairly new open source project called NBuilder (http://www.nbuilder.org and http://code.google.com/p/nbuilder/) and how it can be used to provide us with fake data out of the gate. NBuilder allows you to quickly stand up generated objects based on standard .NET types in an easy, fluent manner. And that is just the start!
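
    To give a flavor of the fluent API described above, here is a minimal sketch assuming a hypothetical Product POCO (by default NBuilder populates public properties with sequential values):

        using System.Collections.Generic;
        using FizzWare.NBuilder;

        // Hypothetical POCO used only for illustration.
        public class Product
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }
        }

        public class FakeCatalogBuilder
        {
            public IList<Product> Build()
            {
                // Ten products with auto-generated property values;
                // the first two get an explicit price via the fluent API.
                return Builder<Product>.CreateListOfSize(10)
                    .TheFirst(2)
                        .With(p => p.Price = 9.99m)
                    .Build();
            }
        }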

  • ISO 12207: Verification of integration and Unit test validation

    - by user970696
    I have received comments from the supervisor reviewing my thesis. He asked two questions I cannot answer right now:

    1. If ISO 12207 says under "Integration verification" that it "checks that components are correctly and completely integrated into a system", how can this be verified without testing, if all testing is validation? How, without testing, can I know that the system is integrated correctly and fully?

    2. If unit testing is validation, how does it match the ISO definition of validation ("that requirements for intended use were fulfilled") if it is so low level?

  • Master Data Management

    - by Logicalj
    I am looking for a very flexible, easy-to-integrate and dynamic application with as many features as possible for Master Data Management. As Master Data Management is used to manage operational data, analytical data and master data, I want guidance on what exactly is expected from Master Data Management, and what the basic and challenging scenarios are that it needs to cover or resolve. Please guide me through all the possible aspects of Master Data Management, such as data cleansing, data management and data analysis, etc.

  • What is the architectural name for the set of data that enables UI choices?

    - by Richard Collette
    I have separate service methods that fetch business object data and the data for UI selection input such as radio buttons, check-boxes, combo-boxes, etc. I want to name my service methods that fetch the selection data appropriately. I am assuming that Model and ViewModel would not be part of the name because the selection data is but a portion of the Model or ViewModel. What might this set of data be named such that I can name my service method?
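
    Purely as an illustration of the shape such a service might take, here is a hypothetical sketch using "lookup" as a placeholder term (every name below is invented, since the naming itself is the open question):

        using System.Collections.Generic;

        // A single selectable choice: the stored value plus display text.
        public class LookupItem
        {
            public string Value { get; set; }
            public string Text { get; set; }
        }

        // Hypothetical contract that keeps selection-data fetches separate
        // from the business-object fetch methods.
        public interface ILookupDataService
        {
            IList<LookupItem> GetCountryLookups();
            IList<LookupItem> GetOrderStatusLookups();
        }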

  • Mocking with Boost::Test

    - by Billy ONeal
    Hello everyone :) I'm using the Boost::Test library for unit testing, and I've in general been hacking up my own mocking solutions that look something like this:

        // In header for clients
        struct RealFindFirstFile {
            static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) {
                return FindFirstFile(lpFileName, lpFindFileData);
            }
        };

        template <typename FirstFile_T = RealFindFirstFile>
        class DirectoryIterator {
            //.. Implementation
        };

        // In unit tests (cpp)
        #define THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING 42

        struct FakeFindFirstFile {
            static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) {
                return (HANDLE)THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING;
            }
        };

        BOOST_AUTO_TEST_CASE( MyTest )
        {
            DirectoryIterator<FakeFindFirstFile> LookMaImMocked;
            //Test
        }

    I've grown frustrated with this because it requires that I implement almost everything as a template, and it is a lot of boilerplate code to achieve what I'm looking for. Is there a good method of mocking up code using Boost::Test that improves on my ad-hoc method? I've seen several people recommend Google Mock, but it requires a lot of ugly hacks if your functions are not virtual, which I would like to avoid. Oh, one last thing: I don't need assertions that a particular piece of code was called. I simply need to be able to inject data that would normally be returned by Windows API functions.

  • How Can I Point My Local Testing Server at My GitHub Repository?

    - by Goober
    Up until a few days ago, I had a setup that was as follows: using SVN, all of the websites that I developed were committed to a source control dropbox on a local testing server. Then, using IIS, a new website was set up to point at the last revision of each particular website I developed and display it to the outside world using a specific URL. I have just moved over to using Git and GitHub, meaning all of my source-controlled code is no longer stored on a local testing server. As a result, I am not sure how to do a similar thing to what I did with the SVN setup; however, I need to be able to essentially have that same setup again, just using Git. So basically, how can I get my local testing server to point at the GitHub repository for that site? Help greatly appreciated.

  • Test ethernet port for data loss

    - by Manoj
    We are trying to test the Ethernet PHY on our Linux box for data loss. As of now we just establish a TFTP connection to a server to upload and download a file. Whenever a mismatch occurs, it is reported as a failure. This is not a very good test, as any mismatch might have been caused by the network itself and not a PHY problem. Can you suggest a better way to test the Ethernet PHY for data loss? Thanks...

  • www-data can upload a file but can't move it after the upload action

    - by user70058
    I am currently running Apache and PHP on Ubuntu. I have a page where a user is supposed to upload a profile image. The action on the backend is supposed to work like this:

    1. Upload the file to the user directory -- WORKS!
    2. Refer to the uploaded file and create a thumbnail in the directory thumbs -- DOES NOT WORK

    www-data has write access to the directory thumbs. My guess is that www-data for some reason does not have proper access to the file that was uploaded.

    Uploaded file permissions:
    -rw-r--r-- 1 www-data www-data 47057 Feb 8 23:24 0181c6e0973eb19cb0d98521a6fe1d9e71cd6daa.jpg

    Thumbs directory permissions:
    drwxr-sr-x 2 www-data www-data 4096 Feb 8 23:23 thumbs

    I'm at a loss here. I'm new to Ubuntu as well. Any help would be greatly appreciated!

  • What is the Optimal Server Configuration for Split-Path Testing?

    - by doug
    I am far from an expert on Apache or any server for that matter, so I apologize if this question is poorly worded, which it likely is. We have always relied on a vendor for split-path testing (aka "A/B testing"). If you're not familiar with that term, it's a form of marketing research in which you slightly modify one of your web pages (usually one nearest the point of conversion), say, for instance, by changing the position of the "Buy Now" button or its color/contrast/texture, and then serve one of those two pages to a given user based on random selection. By doing split-path testing ourselves, I suspect we can do it far more cheaply and improve cycle times as well. What is the optimal setup for these tests? "Optimal" is based on the following criteria: how quickly/easily new tests can be set up and put online, and minimal disruption to overall site performance.
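
    Whichever server setup is chosen, the core mechanic is a stable assignment of each visitor to page A or page B. A minimal sketch of deterministic bucketing, assuming some persistent visitor identifier such as a cookie value (so a returning visitor always sees the same variant):

        using System;

        public static class SplitPathBucketer
        {
            // Deterministically maps a visitor to variant "A" or "B".
            public static string AssignVariant(string visitorId)
            {
                if (string.IsNullOrEmpty(visitorId))
                    throw new ArgumentException("visitorId is required");

                // Simple stable hash; string.GetHashCode() is avoided
                // because it can differ across processes/runtimes.
                int hash = 0;
                foreach (char c in visitorId)
                    hash = unchecked(hash * 31 + c);

                return (hash & 1) == 0 ? "A" : "B";
            }
        }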

  • Oracle President Mark Hurd Highlights How Data-driven HR Decisions Help Maximize Business Performance

    - by Scott Ewart
    HR Intelligence Can Help Companies Win the Race for Talent Today during a keynote at Taleo World 2012, Oracle President Mark Hurd outlined the ways that executives can use HR intelligence to help them make better business decisions, shape the future of their organizations and improve the bottom line. He highlighted that talent management is one of the top three focus areas for CEOs, and explained how HR intelligence can help drive decisions to meet business objectives. Hurd urged HR leaders to use data to make fact-based decisions about hiring, talent management and succession to drive strategic growth. To win the race for talent, Hurd explained that organizations need powerful technology that provides the fact-based, valuable insight needed to proactively manage talent, drive strategic initiatives that promote innovation, and enhance business performance. The full story and press release are available online.

  • Syncing Data with a Server using Silverlight and HTTP Polling Duplex

    - by dwahlin
    Many applications have the need to stay in-sync with data provided by a service. Although web applications typically rely on standard polling techniques to check if data has changed, Silverlight provides several interesting options for keeping an application in-sync that rely on server "push" technologies. A few years back I wrote several blog posts covering different "push" technologies available in Silverlight that rely on sockets or HTTP Polling Duplex. We recently had a project that looked like it could benefit from pushing data from a server to one or more clients, so I thought I'd revisit the subject and provide some updates to the original code posted.

    If you've worked with AJAX before in web applications, then you know that until browsers fully support web sockets or other duplex (bi-directional communication) technologies, it's difficult to keep applications in-sync with a server without relying on polling. The problem with polling is that you have to check for changes on the server on a timed basis, which can often be wasteful and take up unnecessary resources. With server "push" technologies, data can be pushed from the server to the client as it changes. Once the data is received, the client can update the user interface as appropriate. Using "push" technologies allows the client to listen for changes from the data but stay 100% focused on client activities, as opposed to worrying about polling and asking the server if anything has changed.

    Silverlight provides several options for pushing data from a server to a client, including sockets, TCP bindings and HTTP Polling Duplex. Each has its own strengths and weaknesses as far as performance and setup work, with HTTP Polling Duplex arguably being the easiest to set up and get going. In this article I'll demonstrate how HTTP Polling Duplex can be used in Silverlight 4 applications to push data, and show how you can create a WCF server that provides an HTTP Polling Duplex binding that a Silverlight client can consume.

    What is HTTP Polling Duplex?

    Technologies that allow data to be pushed from a server to a client rely on duplex functionality. Duplex (or bi-directional) communication allows data to be passed in both directions. A client can call a service and the server can call the client. HTTP Polling Duplex (as its name implies) allows a server to communicate with a client without forcing the client to constantly poll the server. It has the benefit of being able to run on port 80, making setup a breeze compared to the other options, which require specific ports to be used and cross-domain policy files to be exposed on port 943 (as with sockets and TCP bindings). Having said that, if you're looking for the best speed possible then sockets and TCP bindings are the way to go. But they're not the only game in town when it comes to duplex communication.

    The first time I heard about HTTP Polling Duplex (initially available in Silverlight 2) I wasn't exactly sure how it was any better than standard polling used in AJAX applications. I read the Silverlight SDK, looked at various resources and generally found the following definition unhelpful as far as understanding the actual benefits that HTTP Polling Duplex provided:

    "The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service."
    Although the previous definition explained the overall process, it sounded as if standard polling was used. Fortunately, Microsoft's Scott Guthrie provided me with a clearer definition several years back that explains the benefits provided by HTTP Polling Duplex quite well (used with his permission):

    "The [HTTP Polling Duplex] duplex support does use polling in the background to implement notifications – although the way it does it is different than manual polling. It initiates a network request, and then the request is effectively "put to sleep" waiting for the server to respond (it doesn't come back immediately). The server then keeps the connection open but not active until it has something to send back (or the connection times out after 90 seconds – at which point the duplex client will connect again and wait). This way you are avoiding hitting the server repeatedly – but still get an immediate response when there is data to send."

    After hearing Scott's definition the light bulb went on and it all made sense. A client makes a request to a server to check for changes, but instead of the request returning immediately, it parks itself on the server and waits for data. It's kind of like waiting to pick up a pizza at the store. Instead of calling the store over and over to check the status, you sit in the store and wait until the pizza (the request data) is ready. Once it's ready you take it back home (to the client). This technique provides a lot of efficiency gains over standard polling techniques, even though it does use some polling of its own, as a request is initially made from a client to a server.

    So how do you implement HTTP Polling Duplex in your Silverlight applications? Let's take a look at the process by starting with the server.

    Creating an HTTP Polling Duplex WCF Service

    Creating a WCF service that exposes an HTTP Polling Duplex binding is straightforward as far as coding goes. Add some one-way operations into an interface, create a client callback interface and you're ready to go. The most challenging part comes into play when configuring the service to properly support the necessary binding, and that's more of a cut-and-paste operation once you know the configuration code to use.

    To create an HTTP Polling Duplex service you'll need to expose server-side and client-side interfaces and reference the System.ServiceModel.PollingDuplex assembly (located at C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Server on my machine) in the server project. For the demo application I upgraded a basketball simulation service to support the latest polling duplex assemblies. The service simulates a simple basketball game using a Game class and pushes information about the game such as score, fouls, shots and more to the client as the game changes over time.

    Before jumping too far into the game push service, it's important to discuss the two interfaces used by the service to communicate in a bi-directional manner. The first is called IGameStreamService and defines the methods/operations that the client can call on the server (see Listing 1). The second is IGameStreamClient, which defines the callback methods that a server can use to communicate with a client (see Listing 2).

        [ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))]
        public interface IGameStreamService
        {
            [OperationContract(IsOneWay = true)]
            void GetTeamData();
        }

    Listing 1. The IGameStreamService interface defines the server operations that a client can call.
        [ServiceContract]
        public interface IGameStreamClient
        {
            [OperationContract(IsOneWay = true)]
            void ReceiveTeamData(List<Team> teamData);

            [OperationContract(IsOneWay = true, AsyncPattern = true)]
            IAsyncResult BeginReceiveGameData(GameData gameData, AsyncCallback callback, object state);

            void EndReceiveGameData(IAsyncResult result);
        }

    Listing 2. The IGameStreamClient interface defines the client operations that a server can call.

    The IGameStreamService interface is decorated with the standard ServiceContract attribute but also contains a value for the CallbackContract property. This property is used to define the interface that the client will expose (IGameStreamClient in this example) and use to receive data pushed from the service. Notice that each OperationContract attribute in both interfaces sets the IsOneWay property to true. This means that the operation can be called and passed data as appropriate; however, no data will be passed back. Instead, data will be pushed back to the client as it's available. Looking through the IGameStreamService interface you can see that the client can request team data, whereas the IGameStreamClient interface allows team and game data to be received by the client.

    One interesting point about the IGameStreamClient interface is the inclusion of the AsyncPattern property on the BeginReceiveGameData operation. I initially created this operation as a standard one-way operation and it worked most of the time. However, as I disconnected clients and reconnected new ones, game data wasn't being passed properly. After researching the problem more I realized that because the service could take up to 7 seconds to return game data, things were getting hung up. By setting the AsyncPattern property to true on the BeginReceiveGameData operation and providing a corresponding EndReceiveGameData operation I was able to get around this problem and get everything running properly. I'll provide more details on the implementation of these two methods later in this post.

    Once the interfaces were created I moved on to the game service class. The first order of business was to create a class that implemented the IGameStreamService interface. Since the service can be used by multiple clients wanting game data, I added the ServiceBehavior attribute to the class definition so that I could set its InstanceContextMode to InstanceContextMode.Single (in effect creating a Singleton service object). Listing 3 shows the game service class as well as its fields and constructor.

        [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.Single)]
        public class GameStreamService : IGameStreamService
        {
            object _Key = new object();
            Game _Game = null;
            Timer _Timer = null;
            Random _Random = null;
            Dictionary<string, IGameStreamClient> _ClientCallbacks = new Dictionary<string, IGameStreamClient>();
            static AsyncCallback _ReceiveGameDataCompleted = new AsyncCallback(ReceiveGameDataCompleted);

            public GameStreamService()
            {
                _Game = new Game();
                _Timer = new Timer
                {
                    Enabled = false,
                    Interval = 2000,
                    AutoReset = true
                };
                _Timer.Elapsed += new ElapsedEventHandler(_Timer_Elapsed);
                _Timer.Start();
                _Random = new Random();
            }
        }

    Listing 3. The GameStreamService implements the IGameStreamService interface, which defines a callback contract that allows the service class to push data back to the client.
    By implementing the IGameStreamService interface, GameStreamService must supply a GetTeamData() method, which is responsible for supplying information about the teams that are playing as well as individual players. GetTeamData() also acts as a client subscription method that tracks clients wanting to receive game data. Listing 4 shows the GetTeamData() method.

        public void GetTeamData()
        {
            //Get client callback channel
            var context = OperationContext.Current;
            var sessionID = context.SessionId;
            var currClient = context.GetCallbackChannel<IGameStreamClient>();
            context.Channel.Faulted += Disconnect;
            context.Channel.Closed += Disconnect;

            IGameStreamClient client;
            if (!_ClientCallbacks.TryGetValue(sessionID, out client))
            {
                lock (_Key)
                {
                    _ClientCallbacks[sessionID] = currClient;
                }
            }
            currClient.ReceiveTeamData(_Game.GetTeamData());

            //Start timer which when fired sends updated score information to client
            if (!_Timer.Enabled)
            {
                _Timer.Enabled = true;
            }
        }

    Listing 4. The GetTeamData() method subscribes a given client to the game service and returns team data.

    The key line of code in the GetTeamData() method is the call to GetCallbackChannel<IGameStreamClient>(). This method is responsible for accessing the calling client's callback channel. The callback channel is defined by the IGameStreamClient interface shown earlier in Listing 2 and used by the server to communicate with the client. Before passing team data back to the client, GetTeamData() grabs the client's session ID and checks if it already exists in the _ClientCallbacks dictionary object used to track clients wanting callbacks from the server. If the client doesn't exist, it adds it into the collection. It then pushes team data from the Game class back to the client by calling ReceiveTeamData(). Since the service simulates a basketball game, a timer is then started if it's not already enabled, which is then used to randomly send data to the client. When the timer fires, game data is pushed down to the client. Listing 5 shows the _Timer_Elapsed() method that is called when the timer fires, as well as the SendGameData() method used to send data to the client.

        void _Timer_Elapsed(object sender, ElapsedEventArgs e)
        {
            int interval = _Random.Next(3000, 7000);
            lock (_Key)
            {
                _Timer.Interval = interval;
                _Timer.Enabled = false;
            }
            SendGameData(_Game.GetGameData());
        }

        private void SendGameData(GameData gameData)
        {
            var cbs = _ClientCallbacks.Where(cb => ((IContextChannel)cb.Value).State == CommunicationState.Opened);
            for (int i = 0; i < cbs.Count(); i++)
            {
                var cb = cbs.ElementAt(i).Value;
                try
                {
                    cb.BeginReceiveGameData(gameData, _ReceiveGameDataCompleted, cb);
                }
                catch (TimeoutException texp)
                {
                    //Log timeout error
                }
                catch (CommunicationException cexp)
                {
                    //Log communication error
                }
            }
            lock (_Key) _Timer.Enabled = true;
        }

        private static void ReceiveGameDataCompleted(IAsyncResult result)
        {
            try
            {
                ((IGameStreamClient)(result.AsyncState)).EndReceiveGameData(result);
            }
            catch (CommunicationException)
            {
                // empty
            }
            catch (TimeoutException)
            {
                // empty
            }
        }

    Listing 5. _Timer_Elapsed() is used to simulate time in a basketball game.
    Notice that BeginReceiveGameData() accepts _ReceiveGameDataCompleted as its second parameter (an AsyncCallback delegate defined in the service class) and passes the client callback as the third parameter. The initial version of the sample application had a standard ReceiveGameData() method in the client callback interface. However, sometimes the client callbacks would work properly and sometimes they wouldn't, which was a little baffling at first glance. After some investigation I realized that I needed to implement an asynchronous pattern for client callbacks to work properly, since 3–7 second delays occur as a result of the timer. Once I added the BeginReceiveGameData() and ReceiveGameDataCompleted() methods everything worked properly, since each call was handled in an asynchronous manner.

    The final task that had to be completed to get the server working properly with HTTP Polling Duplex was adding configuration code into web.config. In the interest of brevity I won't post all of the code here, since the sample application includes everything you need. However, Listing 6 shows the key configuration code to handle creating a custom binding named pollingDuplexBinding and associating it with the service's endpoint.

        <bindings>
          <customBinding>
            <binding name="pollingDuplexBinding">
              <binaryMessageEncoding />
              <pollingDuplex maxPendingSessions="2147483647"
                             maxPendingMessagesPerSession="2147483647"
                             inactivityTimeout="02:00:00"
                             serverPollTimeout="00:05:00" />
              <httpTransport />
            </binding>
          </customBinding>
        </bindings>
        <services>
          <service name="GameService.GameStreamService" behaviorConfiguration="GameStreamServiceBehavior">
            <endpoint address="" binding="customBinding" bindingConfiguration="pollingDuplexBinding"
                      contract="GameService.IGameStreamService" />
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
          </service>
        </services>

    Listing 6. Configuring an HTTP Polling Duplex binding in web.config and associating an endpoint with it.

    Calling the Service and Receiving "Pushed" Data

    Calling the service and handling data that is pushed from the server is a simple and straightforward process in Silverlight. Since the service is configured with a MEX endpoint and exposes a WSDL file, you can right-click on the Silverlight project and select the standard Add Service Reference item. After the web service proxy is created you may notice that the ServiceReferences.ClientConfig file only contains an empty configuration element instead of the normal configuration elements created when creating a standard WCF proxy. You can certainly update the file if you want to read from it at runtime, but for the sample application I fed the service URI directly to the service proxy as shown next:

        var address = new EndpointAddress("http://localhost.:5661/GameStreamService.svc");
        var binding = new PollingDuplexHttpBinding();
        _Proxy = new GameStreamServiceClient(binding, address);
        _Proxy.ReceiveTeamDataReceived += _Proxy_ReceiveTeamDataReceived;
        _Proxy.ReceiveGameDataReceived += _Proxy_ReceiveGameDataReceived;
        _Proxy.GetTeamDataAsync();

    This code creates the proxy and passes the endpoint address and binding to use to its constructor. It then wires the different receive events to callback methods and calls GetTeamDataAsync(). Calling GetTeamDataAsync() causes the server to store the client in the server-side dictionary collection mentioned earlier so that it can receive data that is pushed.
    As the server-side timer fires and game data is pushed to the client, the user interface is updated as shown in Listing 7. Listing 8 shows the _Proxy_ReceiveGameDataReceived() method responsible for handling the data and calling UpdateGameData() to process it.

    Listing 7. The Silverlight interface. Game data is pushed from the server to the client using HTTP Polling Duplex.

        void _Proxy_ReceiveGameDataReceived(object sender, ReceiveGameDataReceivedEventArgs e)
        {
            UpdateGameData(e.gameData);
        }

        private void UpdateGameData(GameData gameData)
        {
            //Update Score
            this.tbTeam1Score.Text = gameData.Team1Score.ToString();
            this.tbTeam2Score.Text = gameData.Team2Score.ToString();

            //Update ball visibility
            if (gameData.Action != ActionsEnum.Foul)
            {
                if (tbTeam1.Text == gameData.TeamOnOffense)
                {
                    AnimateBall(this.BB1, this.BB2);
                }
                else //Team 2
                {
                    AnimateBall(this.BB2, this.BB1);
                }
            }

            if (this.lbActions.Items.Count > 9)
                this.lbActions.Items.Clear();
            this.lbActions.Items.Add(gameData.LastAction);
            if (this.lbActions.Visibility == Visibility.Collapsed)
                this.lbActions.Visibility = Visibility.Visible;
        }

        private void AnimateBall(Image onBall, Image offBall)
        {
            this.FadeIn.Stop();
            Storyboard.SetTarget(this.FadeInAnimation, onBall);
            Storyboard.SetTarget(this.FadeOutAnimation, offBall);
            this.FadeIn.Begin();
        }

    Listing 8. As the server pushes game data, the client's _Proxy_ReceiveGameDataReceived() method is called to process the data.

    In a real-life application I'd go with a ViewModel class to handle retrieving team data, setting up data bindings and handling data that is pushed from the server. However, for the sample application I wanted to focus on HTTP Polling Duplex and keep things as simple as possible.

    Summary

    Silverlight supports three options when duplex communication is required in an application: TCP bindings, sockets and HTTP Polling Duplex. In this post you've seen how HTTP Polling Duplex interfaces can be created and implemented on the server, as well as how they can be consumed by a Silverlight client. HTTP Polling Duplex provides a nice way to "push" data from a server while still allowing the data to flow over port 80 or another port of your choice.

    Sample Application Download

  • The art of Unit Testing with Examples in .NET

    - by outcoldman
    The first time I became familiar with unit testing was 5 or 6 years ago. It was the start of my development career. I remember somebody telling me about code coverage. At that time I didn't write any unit tests. The guy who was my team lead told me: "Do you see that if statement with three conditions? You should check all of these conditions." So after I had written some code, I would go to the interface and try to invoke all the code I wrote from the user interface. Nice? Now I know a little more about tests and unit testing. I have not participated in projects designed with Test Driven Development (TDD). The basics of my knowledge come from spying on my colleagues' code, some articles and screencasts. I decided that I should know much more and become a real professional of unit testing; this is why I started to read the book The Art of Unit Testing with Examples in .NET. More than that, in my current job it looks like I'm the only one writing unit tests for my code, so I should show good examples of my tests. Read more: http://outcoldman.ru/en/blog/show/267

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors. Product, customer - you name the data domain and there's data quality associated with it. Address verification and its data quality are a little different, in that there is a tremendous amount of variation as well as nuance attached to them. Specifically, what makes address verification challenging is that more often than not, addresses are incomplete, riddled with misspellings, have incorrect postal codes assigned to locations, or contain non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: Customer Relationship Management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing and validation, as well as specialized location information, for over 240 countries - all populated countries on Earth. Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality; it performs advanced data profiling to identify and measure poor quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data due to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server visit: http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

  • Can you review my Perl rewrite of Cucumber?

    - by Evgeny
    There is a team working on acceptance testing of an X11 GUI application in our company, and they have created a monstrous acceptance testing framework that drives the GUI as well as running scenarios. The framework is written in Perl 5, and the scenario files look more like very complex Perl programs (thousands of lines long, in a procedural-programming style) than acceptance tests. I recently learned Ruby's Cucumber, and have generally been using Ruby for quite a long time. But unfortunately I can't just shove Ruby in to replace Perl, because the people writing all of this don't know Ruby, and it's quite certain they won't want "this" kind of interruption. So, to bring Cucumber a bit closer to their work, I rewrote it in Perl 5. Unfortunately I am really not a Perl programmer, and would love to get a code review and hear suggestions from people who know both Perl and Cucumber. Hi Perl/Cucumber Stack Overflow users - please help me create this "open source" attempt to re-create Cucumber for Perl! I would love to hear your comments and will accept any acceptable help. The minimal source code is here: http://github.com/kesor/p5-cucumber. Thank you for your attention. For those not familiar with Cucumber, please take just one small moment to look at this one small little page: http://wiki.github.com/aslakhellesoy/cucumber

  • .NET Test Harness - what should it have?

    - by Conor
    Hi folks, we have a software house developing code for us on a project - a .NET web service (WCF) - and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company, am reviewing what we are getting from the software house, and wanted to know what you folks in industry think about it. Basically, what we got was a WinForm that calls the web service; it has an input area (Web Service Request) to drop our XML into, a Submit button, and a response area for the result of the web response - and that's it... Our internal BA created all the XML request documents, so there was no logic put into the harness around this. Looking on the net for a definition of a test harness, I got this: http://en.wikipedia.org/wiki/Test_harness. It states a harness should do these three things:

    1. Automate the testing process.
    2. Execute test suites of test cases.
    3. Generate associated test reports.

    Clearly we have got none of this, apart from a partial "automate the testing process" via a WinForm. From my development background, I would have expected someone to produce a WinForm as a test harness five years ago; today I would really expect some sort of tooling around this. I explicitly told the software house I expected such tooling (NUnit, NBUnit, SoapUI) so we could create a regression test pack for future use. [Didn't get it, but I asked for this after the requirements were signed off, as I wasn't employed then :)] Would someone be able to clarify whether my requirement for this is unrealistic? I know if I did this myself, I would use NUnit and TDD and then reuse the test harness as a regression test pack in the future (see the sketch below). I am interested to see what the community thinks. Cheers
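
    For comparison, a minimal sketch of the kind of NUnit-based harness described above - a fixture that posts one of the BA's canned XML requests to the service and asserts on the response (the endpoint URL, file path, and expected fragment are all hypothetical):

        using System.IO;
        using System.Net;
        using System.Text;
        using NUnit.Framework;

        [TestFixture]
        public class OrderServiceRegressionTests
        {
            // Hypothetical test endpoint and canned request file.
            private const string ServiceUrl =
                "http://testserver/OrderService.svc/submit";

            [Test]
            public void SubmitOrder_ValidRequest_ReturnsSuccessStatus()
            {
                string requestXml = File.ReadAllText(@"Requests\ValidOrder.xml");
                string responseXml = PostXml(ServiceUrl, requestXml);
                StringAssert.Contains("<Status>Success</Status>", responseXml);
            }

            // Posts the XML body and returns the raw response text.
            private static string PostXml(string url, string xml)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentType = "text/xml";
                byte[] body = Encoding.UTF8.GetBytes(xml);
                request.ContentLength = body.Length;
                using (Stream stream = request.GetRequestStream())
                    stream.Write(body, 0, body.Length);
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd();
            }
        }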

  • Making Spring Data JPA work with DataNucleus (GAE) (Spring Boot)

    - by xybrek
    There are several hints that Spring Data works with Google App Engine, like:

    http://tommysiu.blogspot.com/2014/01/spring-data-on-gae-part-1.html
    http://blog.eisele.net/2009/07/spring-300m3-on-google-appengine-with.html

    Most of the examples are not "Spring Boot", so I've been trying to retrofit things with it. However, I've been stuck with this error for days and days:

        [INFO] Caused by: java.lang.NullPointerException
        [INFO] at org.datanucleus.api.jpa.metamodel.SingularAttributeImpl.isVersion(SingularAttributeImpl.java:79)
        [INFO] at org.springframework.data.jpa.repository.support.JpaMetamodelEntityInformation.findVersionAttribute(JpaMetamodelEntityInformation.java:102)
        [INFO] at org.springframework.data.jpa.repository.support.JpaMetamodelEntityInformation.<init>(JpaMetamodelEntityInformation.java:79)
        [INFO] at org.springframework.data.jpa.repository.support.JpaEntityInformationSupport.getMetadata(JpaEntityInformationSupport.java:65)
        [INFO] at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getEntityInformation(JpaRepositoryFactory.java:149)
        [INFO] at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getTargetRepository(JpaRepositoryFactory.java:88)
        [INFO] at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getTargetRepository(JpaRepositoryFactory.java:68)
        [INFO] at org.springframework.data.repository.core.support.RepositoryFactorySupport.getRepository(RepositoryFactorySupport.java:158)
        [INFO] at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.initAndReturn(RepositoryFactoryBeanSupport.java:224)
        [INFO] at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.afterPropertiesSet(RepositoryFactoryBeanSupport.java:210)
        [INFO] at org.springframework.data.jpa.repository.support.JpaRepositoryFactoryBean.afterPropertiesSet(JpaRepositoryFactoryBean.java:92)
        [INFO] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$6.run(AbstractAutowireCapableBeanFactory.java:1602)
        [INFO] at java.security.AccessController.doPrivileged(Native Method)
        [INFO] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1599)
        [INFO] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1549)
        [INFO] ... 40 more

    This is where I'm trying to use Spring Data JPA with DataNucleus/AppEngine:

        @Configuration
        @ComponentScan
        @EnableJpaRepositories
        @EnableTransactionManagement
        class JpaApplicationConfig {

            private static final Logger logger =
                Logger.getLogger(JpaApplicationConfig.class.getName());

            @Bean
            public EntityManagerFactory entityManagerFactory() {
                logger.info("Loading Entity Manager...");
                return Persistence.createEntityManagerFactory("transactions-optional");
            }

            @Bean
            public PlatformTransactionManager transactionManager() {
                logger.info("Loading Transaction Manager...");
                final JpaTransactionManager txManager = new JpaTransactionManager();
                txManager.setEntityManagerFactory(entityManagerFactory());
                return txManager;
            }
        }

    I've tested Persistence.createEntityManagerFactory("transactions-optional") to see if the app can persist using this EMF; it does, so I am sure that this EMF works fine. The problem is the "wiring up" with Spring Data JPA - can anybody help?

  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focusing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; I want an automated safety net to work above, essentially. A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies go. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn rely on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough. Back to the thought, though. Is there any value in semi-integration tests, where a test fixture asserts against a unit of concrete implementations that are wired together, and we test its internal behaviour against mocked dependencies? (See the sketch below.) I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing this, and reaped benefits that outweigh the costs?
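
    As a hedged sketch of that "semi-integration" idea - two concrete classes wired together, with only the outermost dependency mocked (all types below are invented; the syntax assumes xUnit.net plus Rhino Mocks' AAA API):

        using Rhino.Mocks;
        using Xunit;

        // Hypothetical boundary dependency and collaborators.
        public interface IPriceFeed { decimal GetPrice(string symbol); }

        public class PriceCalculator
        {
            private readonly IPriceFeed _feed;
            public PriceCalculator(IPriceFeed feed) { _feed = feed; }
            public decimal PriceWithTax(string symbol)
            {
                return _feed.GetPrice(symbol) * 1.2m;
            }
        }

        public class QuoteService
        {
            // Concrete collaborator - deliberately not mocked.
            private readonly PriceCalculator _calculator;
            public QuoteService(PriceCalculator calculator) { _calculator = calculator; }
            public string Quote(string symbol)
            {
                return symbol + ": " + _calculator.PriceWithTax(symbol);
            }
        }

        public class QuoteServiceSemiIntegrationTests
        {
            [Fact]
            public void Quote_ExercisesRealCalculatorAgainstMockedFeed()
            {
                // Only the boundary dependency is stubbed; the two concrete
                // classes are exercised together as one unit.
                var feed = MockRepository.GenerateStub<IPriceFeed>();
                feed.Stub(f => f.GetPrice("ACME")).Return(100m);

                var service = new QuoteService(new PriceCalculator(feed));

                Assert.Equal("ACME: 120.0", service.Quote("ACME"));
            }
        }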

  • Parse and read data frame in C?

    - by user253656
    I am writing a program that reads data from the serial port on Linux. The data is sent by another device with the following frame format:

        | Start | Command | Data           | CRC | End  |
        | 0x02  | 0x41    | (0-127 octets) |     | 0x03 |

    The Data field contains up to 127 octets as shown; octets 1-2 contain one type of data and octets 3-4 contain another. I need to get these data. I know how to write and read data to and from a serial port in Linux, but only for a simple string (like "ABD"). My issue is that I do not know how to parse a data frame formatted as above so that I can:

    1. get the data in octets 1-2 of the Data field
    2. get the data in octets 3-4 of the Data field
    3. get the value of the CRC field to check the consistency of the data

    Here is the sample code that reads and writes a simple string from and to a serial port in Linux:

        int writeport(int fd, char *chars) {
            int len = strlen(chars);
            chars[len] = 0x0d;      // stick a <CR> after the command
            chars[len+1] = 0x00;    // terminate the string properly
            int n = write(fd, chars, strlen(chars));
            if (n < 0) {
                fputs("write failed!\n", stderr);
                return 0;
            }
            return 1;
        }

        int readport(int fd, char *result) {
            int iIn = read(fd, result, 254);
            if (iIn < 0) {
                if (errno == EAGAIN) {
                    printf("SERIAL EAGAIN ERROR\n");
                    return 0;
                } else {
                    printf("SERIAL read error %d %s\n", errno, strerror(errno));
                    return 0;
                }
            }
            result[iIn-1] = 0x00;   // terminate only after a successful read, so iIn is valid
            return 1;
        }

    Does anyone have some ideas? Thanks, all.

  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load up my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints. I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:

    Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
    Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll

    I'm using the console version there, but have tried calling the GUI as well. Both give me the same error when I try to start debugging:

    Cannot start test project 'TestDSP' because the project does not contain any tests.

    Is this because I normally load \DSP.nunit into the NUnit GUI and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework, and that's why it's failing to find the NUnit tests.

    [Edit] To those asking about test fixtures: one of my .cs files in the TestDSP project looks roughly like this:

        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    ...some tests...
                }
            }
        }

    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-)

    [FINAL SOLUTION] The big problem was the project type I'd used. If you pick:

    Other Languages -> Visual C# -> Test -> Test Project

    ...when you're choosing the project type, Visual Studio will try to use its own testing framework as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.
