Search Results

Search found 1321 results on 53 pages for 'er diagram'.


  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post.

    Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    Shaping the feed

    The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice and UnitsInStock. We also want to return only products that are not discontinued.

    Splitting the Entity

    To shape our data according to the requirements above, we are going to split our Product entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other entity in our query interceptor to pre-filter the data so that discontinued products are not returned.

    Go to the design surface for the entity model and make a copy of the Product entity. A "Product1" entity gets created. Rename Product1 to ProductDetail. Right click on the Product entity, select "Add Association" and create a one to one association between Product and ProductDetail.

    Keep only the properties we wish to expose on the Product entity and delete all the others (you delete a property on an entity by right clicking on it and selecting "Delete"). Keep the ProductID on ProductDetail, and delete any other property on the ProductDetail entity that is already present in the Product entity.

    Mapping Entity to Database Tables

    Right click on "ProductDetail", go to "Table Mapping" and add a mapping to the "Products" table in the Mapping Details.

    Adding a referential constraint

    Let's add a referential constraint, which is similar to a referential integrity constraint in SQL. Double click on the association between the entities and add the constraint with "Principal" set to "Product".

    Let us review what we did so far:

    - We made a copy of the Product entity and called it ProductDetail.
    - We created a one to one association between these entities.
    - Excluding the ProductID, we made sure properties were not duplicated between these entities.
    - We added a ProductDetail entity to Products table mapping (entity to database).
    - We added a referential constraint between the entities.

    Let's build our project. We get the following error:

        "'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …"

    This error occurs because our Product entity no longer has a "Discontinued" property; we "moved" it to the ProductDetail entity, since we want our Product entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our query interceptor like so:

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.ProductDetail.Discontinued == false;
        }

    Similarly, all "hidden" properties of the Products table are available to us internally (through the ProductDetail entity) for any additional logic we wish to implement. Compile the project and view the feed. The feed now returns only the properties that were part of the requirement.
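    For context, here is a minimal sketch of what the complete service class could look like with this interceptor in place. The class name and the NorthwindEntities context type are assumptions for illustration and are not taken from the attached sample:

        using System;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq.Expressions;

        public class ProductsDataService : DataService<NorthwindEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Expose only the Products entity set, read-only.
                config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }

            // Pre-filter: discontinued products never leave the service.
            [QueryInterceptor("Products")]
            public Expression<Func<Product, bool>> OnReadProducts()
            {
                return o => o.ProductDetail.Discontinued == false;
            }
        }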
    To see the data in JSON format, you have to create a request with the following request header (easy to do in jQuery):

        Accept: application/json, text/javascript, */*

    The result should look like this:

        {
          "d" : {
            "results": [
              {
                "__metadata": {
                  "uri": "http://localhost.:2576/DataService.svc/Products(1)",
                  "type": "NorthwindModel.Product"
                },
                "ProductID": 1,
                "ProductName": "Chai",
                "QuantityPerUnit": "10 boxes x 20 bags",
                "UnitPrice": "18.0000",
                "UnitsInStock": 39
              },
              {
                "__metadata": {
                  "uri": "http://localhost.:2576/DataService.svc/Products(2)",
                  "type": "NorthwindModel.Product"
                },
                "ProductID": 2,
                "ProductName": "Chang",
                "QuantityPerUnit": "24 - 12 oz bottles",
                "UnitPrice": "19.0000",
                "UnitsInStock": 17
              },
              ...

    If anyone has the $format option working, please post a comment. It was not working for me at the time of writing this.

    We have successfully pre-filtered our data to expose only products that have not been discontinued, and shaped our data so that only certain properties of the entity are exposed. Note that there are several other ways you could implement this, such as creating a QueryView, a stored procedure or a DefiningQuery.

    You have seen how easy it is to create an OData feed, shape the data and pre-filter it while hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro – the architect of Astoria (WCF Data Services). Watch his MIX 2010 presentation titled "OData: There's a Feed for That" here.

    Download Sample Project for VS 2010 RTM: NortwindODataFeed.zip

    Read the article

  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is a subject that everyone wants to master. In the beginning everybody is at a novice level and spends lots of time learning how to master the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there should be no need for ego in the technology field. There are always better solutions and better ideas out there, and we should not resist them. Instead of resisting change and the new wave, I personally adopt it.

    Here is a similar example. As I progress to the master level of performance tuning, I find it getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions the next time I face a similar situation.

    A few days ago I received a query where the user wanted to tune it further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database:

        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID;

    The user had a query similar to the one above in a very critical report and wanted to get the best out of it. When I looked at the query, here were my initial thoughts:

    - Select only the columns the application actually needs, rather than SELECT *.
    - Look at the query pattern and the data workload, and find the optimal index for them.

    Before I could give further solutions, I was told by the user that they need all the columns from all the tables, and that creating an index was not allowed on their system; they could only re-write the query or use hints to tune it further. Now I was in a constraint box. I believe * is not a great idea, but if they want all the columns we can't do much besides using *. Additionally, since I could not create an index, I had to come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases where hints work out magically and give optimal solutions.

    Finally, I decided to use Embarcadero's DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here.

    First open DB Optimizer and create a tuning job from File >> New >> Tuning Job. Once the tuning job is open, follow the steps indicated in the following diagram. Essentially, we take our original script and paste it into Step 1: New SQL Text; right after that we enable Step 2 for generating various cases, Step 3 for detailed analysis and Step 4 for executing each generated case. Finally we click Analysis in Step 5, which generates the detailed analysis report in the result pane.

    The tool generates various cases of T-SQL based on the original query. It applies the various available hints to the query, generates an execution plan for each case and displays them in the results. You can clearly notice that the original query had a cost of 0.0841 and about 607 logical reads, whereas the various options that follow it have different execution costs and logical reads. There are a few cases with higher logical reads, and a few cases with very low logical reads.
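    If you want to sanity check these logical read numbers yourself outside the tool, a quick way is to turn on the I/O and time statistics before running the original query and any generated variation. This is my own sketch, not output from DB Optimizer:

        -- Report per-table logical reads and elapsed time for each statement
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID;

        -- The Messages tab now shows logical reads per table; run the same
        -- statement with a candidate hint to compare the two sets of numbers.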
    If we pay attention, the very next row after the original query has Merge_Join_Query in its description, the lowest execution cost value of 0.044 and the lowest logical reads at 29. This row contains the query which is the most optimal re-write of the original query. Let us double click on it. Here is the query:

        SELECT *
        FROM HumanResources.Employee e
        INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
        INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID
        OPTION (MERGE JOIN);

    Notice that the query above has the additional Merge Join hint. With the help of this query hint, the query now performs much better than before. The entire process takes less than 60 seconds.

    Please note that the Merge Join hint was optimal for this query, but it is not necessarily true that the same hint will be helpful for all queries. Additionally, if the workload or data pattern changes, the Merge Join hint may no longer be the optimal choice. In that case, we will have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries, and I discourage all of my users from doing the same.

    However, if you look at this example, it is a great case where hints optimize the performance of the query. It is humanly not possible to test the various query hints and index options against a query to figure out the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide us and to select the best option from the suggestions provided.

    Let me know what you think of this article, as well as your experience with DB Optimizer. Please leave a comment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Searching for tasks with code – Executables and Event Handlers

    Searching packages or just enumerating through all tasks is not quite as straightforward as it may first appear, mainly because of the way you can nest tasks within other containers. You can see this illustrated in the sample package below, where I have used several sequence containers and loops. To complicate this further, all container types, including packages and tasks, can have event handlers, which can then support the full range of nested containers again. Towards the lower right, the task called SQL In FEL also has an event handler (not shown), within which is another Execute SQL Task, so that makes a total of 6 Execute SQL Tasks spread across the package.

    In my previous post about tasks, such as adding a property expression, I kept it simple and just looked at tasks at the package level, but what if you wanted to find any or all tasks in a package? For this post I've written a console program that will search a package, looking at all tasks no matter how deeply nested, and check to see if the name starts with "SQL". When it finds a matching task it writes out the hierarchy by name for that task, starting with the package and working down to the task itself. The output for our sample package is shown below; note it has found all 6 tasks, including the one on the OnPreExecute event of the SQL In FEL task.

        TaskSearch v1.0.0.0 (1.0.0.0)
        Copyright (C) 2009 Konesans Ltd

        Processing File - C:\Projects\Alpha\Packages\MyPackage.dtsx

        MyPackage\FOR Counter Loop\SQL In Counter Loop
        MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL
        MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL\OnPreExecute\SQL On Pre Execute for FEL SQL Task
        MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SEQ Nested Lvl 2\SQL In Nested Lvl 2
        MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #1
        MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #2

        6 matching tasks found in package.

    The full project and code is available for download below, but first we can walk through the project to highlight the most important sections of code. The code has been abbreviated for this description, but is complete in the download. First of all we load the package, and then start by looking at the executables for the package.

        // Load the package file
        Application application = new Application();
        using (Package package = application.LoadPackage(filename, null))
        {
            int matchCount = 0;

            // Look in the package's executables
            ProcessExecutables(package.Executables, ref matchCount);

            ...

            // Write out final count
            Console.WriteLine("{0} matching tasks found in package.", matchCount);
        }

    The ProcessExecutables method is a key method, as an executable could be described as the highest level of a working functionality or container. There are several types of executables, such as tasks, sequence containers and loops. To know what to do next we need to work out what type of executable we are dealing with, as the abbreviated version of the method below shows.

        private static void ProcessExecutables(Executables executables, ref int matchCount)
        {
            foreach (Executable executable in executables)
            {
                TaskHost taskHost = executable as TaskHost;
                if (taskHost != null)
                {
                    ProcessTaskHost(taskHost, ref matchCount);
                    ProcessEventHandlers(taskHost.EventHandlers, ref matchCount);
                    continue;
                }

                ...

                ForEachLoop forEachLoop = executable as ForEachLoop;
                if (forEachLoop != null)
                {
                    ProcessExecutables(forEachLoop.Executables, ref matchCount);
                    ProcessEventHandlers(forEachLoop.EventHandlers, ref matchCount);
                    continue;
                }
            }
        }

    As you can see, if the executable we find is a task we then call out to our ProcessTaskHost method. As with all of our executables, a task can have event handlers which themselves contain more executables, such as tasks and loops, so we also make a call out to our ProcessEventHandlers method. The other types of executables, such as loops, can also have event handlers as well as executables. As shown with the example for the ForEachLoop, we call the same ProcessExecutables and ProcessEventHandlers methods again to drill down into the hierarchy of objects that the package may contain. This code needs to explicitly check for each type of executable (TaskHost, Sequence, ForLoop and ForEachLoop) because, whilst they all have an Executables property, it is not from a common base class or interface.

    This example just finds a task by its name, so ProcessTaskHost really just does that. We also build the hierarchy of objects so we can write it out for information; obviously you can adapt this method to do something more interesting, such as adding a property expression.

        private static void ProcessTaskHost(TaskHost taskHost, ref int matchCount)
        {
            if (taskHost == null)
            {
                return;
            }

            // Check if the task matches our match name
            if (taskHost.Name.StartsWith(TaskNameFilter, StringComparison.OrdinalIgnoreCase))
            {
                // Build up the full object hierarchy of the task
                // so we can write it out for information
                StringBuilder path = new StringBuilder();
                DtsContainer container = taskHost;
                while (container != null)
                {
                    path.Insert(0, container.Name);
                    container = container.Parent;
                    if (container != null)
                    {
                        path.Insert(0, "\\");
                    }
                }

                // Write the task path
                // e.g. Package\Container\Event\Task
                Console.WriteLine(path);
                Console.WriteLine();

                // Increment match counter for info
                matchCount++;
            }
        }

    Just for completeness, the other processing method we covered above is for event handlers, but really that just calls back to the executables. This same method is called in our main package method, but it was omitted for brevity here.

        private static void ProcessEventHandlers(DtsEventHandlers eventHandlers, ref int matchCount)
        {
            foreach (DtsEventHandler eventHandler in eventHandlers)
            {
                ProcessExecutables(eventHandler.Executables, ref matchCount);
            }
        }

    As hopefully the code demonstrates, executables (Microsoft.SqlServer.Dts.Runtime.Executable) are the workers, but within them you can nest more executables (except for tasks). Executables themselves can have event handlers, which can in turn hold more executables. I have tried to illustrate these relationships in the following diagram.

    Download Sample code project TaskSearch.zip (11KB)
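    As a postscript, the Sequence and ForLoop branches abbreviated out of ProcessExecutables above would, assuming they follow the same pattern as the TaskHost and ForEachLoop branches described, look like this (my reconstruction; the definitive version is in the download):

        // Reconstructed from the described pattern, not copied from the download.
        Sequence sequence = executable as Sequence;
        if (sequence != null)
        {
            ProcessExecutables(sequence.Executables, ref matchCount);
            ProcessEventHandlers(sequence.EventHandlers, ref matchCount);
            continue;
        }

        ForLoop forLoop = executable as ForLoop;
        if (forLoop != null)
        {
            ProcessExecutables(forLoop.Executables, ref matchCount);
            ProcessEventHandlers(forLoop.EventHandlers, ref matchCount);
            continue;
        }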

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn't quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony: "I'd consider the current platform to be a 'success'; it cost $10K, has lasted for 6 years, was finished, end to end, in 6 months, and although we moan about it has got us quite a long way." On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors who wouldn't blog because our blogging platform, Community Server, was too painful for them to use.

    Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they'd choose, the odds are they'd say WordPress. Regardless of its technical merits, it's probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL, quite a difference from our Microsoft technology stack. We certainly didn't want to rewrite the entire site; we just wanted a better blogging platform, with the rest of the existing, legacy site left as is.

    At a very high level, Simple-Talk's technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, but with WordPress running the blogs, a different design is called for. We now use nginx as a reverse proxy, which can then delegate requests to the appropriate application. So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you're trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress.

    Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I've already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles.

    When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: we had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that every other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities for any changes to the platform.
    Specifically:

    - Maintainability: the tight coupling between the .NET applications made it difficult to update any one application without also having to make changes elsewhere.
    - Replaceability: the tight coupling also meant that replacing one component wouldn't be straightforward, especially if it wasn't on a similar Microsoft stack. We'd like to be able to replace different parts without having to modify the existing codebase extensively.
    - Reusability: we'd like to be able to combine the different pieces of the system in different ways for different sites.
    - Repeatable deployments: rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily, whether on production, staging servers, test servers or your own local machine.
    - Testability: if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development.

    In the next part, I'll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team.

    In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controller highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites, and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in a HA environment and some of the new commands.

    What is EC HA?

    Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node.

    What are the prerequisites?

    To install EC in a HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured, which we will not discuss in this post. However, best practices should be applied when installing and configuring them, and I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn:

    - Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory and network cards, for example.
    - Operating System: We can use Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC.
    - Network: There are a number of requirements for network cards in Clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public/private and virtual IPs (VIPs).
    - Storage: Shared storage is required for the cluster voting disks, the Oracle Cluster Registry (OCR) and the EC's libraries.
    - Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
    - Remote Database: Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242

    For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII

    For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

    The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes.
    The standby node then becomes active and all Ops Center services are resumed.

    Connecting to the Enterprise Controller from your favourite browser

    Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/; this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these represent that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node.

    Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Optimizing collision engine bottleneck

    - by Vittorio Romeo
    Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. I, however, for fun and educational purposes, would love to find a way to make the engine even faster.

    I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed. Here's a very basic diagram of its architecture:

    Basically, the main class is World, which owns (manages the memory of) a ResolverBase*, a SpatialBase* and a vector<Body*>. SpatialBase is a pure virtual class which deals with broad-phase collision detection. ResolverBase is a pure virtual class which deals with collision resolution. The bodies communicate with the World::SpatialBase* through SpatialInfo objects, owned by the bodies themselves.

    There currently is one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has its own info class, GridInfo : SpatialInfo. Here's how its architecture looks:

    The Grid class owns a 2D array of Cell*. The Cell class contains two collections of (not owned) Body*: a vector<Body*> which contains all the bodies that are in the cell, and a map<int, vector<Body*>> which contains all the bodies that are in the cell, divided into groups. Bodies, in fact, have a groupId int that is used for collision groups. GridInfo objects also contain non-owning pointers to the cells the body is in.

    As I previously said, the engine is based on groups. Body::getGroups() returns a vector<int> of all the groups the body is part of. Body::getGroupsToCheck() returns a vector<int> of all the groups the body has to check collision against. Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells.

    After the bodies move, collision detection happens. We assume that all bodies are axis-aligned bounding boxes. Here is how broad-phase collision detection works.

    Part 1: spatial info update. For each Body body:

    - The top-leftmost and bottom-rightmost occupied cells are calculated. If they differ from the previous cells, body.gridInfo.cells is cleared and filled with all the cells the body occupies (a 2D for loop from the top-leftmost cell to the bottom-rightmost cell).
    - body is now guaranteed to know what cells it occupies.
    - For a performance boost, it stores a pointer to every map<int, vector<Body*>> of every cell it occupies where the int is a group of body->getGroupsToCheck(). These pointers get stored in gridInfo->queries, which is simply a vector<map<int, vector<Body*>>*>.
    - body is now guaranteed to have a pointer to every vector<Body*> of bodies of the groups it needs to check collision against. These pointers are stored in gridInfo->queries.

    Part 2: actual collision checks. For each Body body, body clears and fills a vector<Body*> bodiesToCheck, which contains all the bodies it needs to check against. Duplicates are avoided (bodies can belong to more than one group) by checking if bodiesToCheck already contains the body we're trying to add.

        const vector<Body*>& GridInfo::getBodiesToCheck()
        {
            bodiesToCheck.clear();
            for(const auto& q : queries)
                for(const auto& b : *q)
                    if(!contains(bodiesToCheck, b))
                        bodiesToCheck.push_back(b);
            return bodiesToCheck;
        }

    The GridInfo::getBodiesToCheck() method IS THE BOTTLENECK. The bodiesToCheck vector must be filled on every body update because bodies could have moved in the meantime. It also needs to prevent duplicate collision checks. The contains function simply checks if the vector already contains a body with std::find.
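    For reference, a contains helper like the one described would presumably be a thin std::find wrapper along these lines (my sketch, not code from the linked repository):

        #include <algorithm>
        #include <iterator>

        // Linear membership test; this O(n) scan inside the nested loops
        // above is what makes the duplicate check expensive.
        template<typename Container, typename Value>
        bool contains(const Container& container, const Value& value)
        {
            return std::find(std::begin(container), std::end(container), value)
                   != std::end(container);
        }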
    Collision is then checked and resolved for every body in bodiesToCheck. That's it.

    So, I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something other than the current architecture/setup, something doesn't go as planned, or I make assumptions about the simulation that are later proven to be false.

    My question is: how can I optimize the broad-phase of my collision engine while maintaining the grouped-bodies approach? Is there some kind of magic C++ optimization that can be applied here? Can the architecture be redesigned in order to allow for more performance?

    Actual implementation: SSVSCollision - Body.h, Body.cpp; World.h, World.cpp; Grid.h, Grid.cpp; Cell.h, Cell.cpp; GridInfo.h, GridInfo.cpp

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP's streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

    For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.

    Application Input

    The stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache of all credit card transactions. The cache is configured in the EPN as shown below:

        <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
        <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                     key-properties="cardID, transactionTime"
                     caching-system="CohCacheSystem">
        </wlevs:cache>

    Application Output

    The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. The source for the cardHistory property can be seen here.

        <wlevs:event-type type-name="FraudWarningEvent">
          <wlevs:properties type="tuple">
            <wlevs:property name="cardID" type="CHAR"/>
            <wlevs:property name="transactionTime" type="BIGINT"/>
            <wlevs:property name="transactionAmount" type="DOUBLE"/>
            <wlevs:property name="cardHistory" type="OBJECT"/>
          </wlevs:properties>
        </wlevs:event-type>

    Cache Data Aggregation using a Java Cartridge

    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence's query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below:

        <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
          <property name="cache" ref="CCTransactionsCache"/>
        </bean>

    This is used by the Java class to set a static property:

        public void setCache(Map cache)
        {
            s_cache = (NamedCache) cache;
        }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more about using Coherence's API to query a cache, please refer to the Coherence Developer's Guide.
        public static CardHistoryData execute(String cardID)
        {
            ...
            Filter filter = QueryHelper.createFilter(
                "cardID = :cardID and transactionTime >= :transactionTime", map);

            CardHistoryData history = new CardHistoryData();
            Double sum = (Double) s_cache.aggregate(filter,
                new DoubleSum("getTransactionAmount"));
            history.setTotalAmount(sum);
            ...
            return history;
        }

    The Java cartridge method is used from CQL as seen below:

        select cardID, transactionTime, transactionAmount,
               CCTransactionsAggregator.execute(cardID) as cardHistory
        from inputChannel
        where transactionAmount > 1000

    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.
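    As an aside, the remaining CardHistoryData fields (maximum amount and transaction count) could presumably be computed with Coherence's other built-in aggregators (com.tangosol.util.aggregator.DoubleMax and Count), following the same pattern as the DoubleSum call above. This is a sketch based on that assumption; the setter names are hypothetical, since the real CardHistoryData source is only in the linked download:

        // Hypothetical continuation of execute(); setter names are assumed.
        Double max = (Double) s_cache.aggregate(filter,
            new DoubleMax("getTransactionAmount"));
        history.setMaxAmount(max);

        Integer count = (Integer) s_cache.aggregate(filter, new Count());
        history.setTransactionCount(count);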

    Read the article

  • Monitoring almost anything with BizTalk 360

    - by Michael Stephenson
    When you work in an integration environment, it is common to find yourself in a situation where you integrate with some unusual applications or have some unusual dependencies. That is the nature of integration. When you work with BizTalk, one of the common problems is that BizTalk is often the place where problems with the applications you integrate with are highlighted, and these external applications may have poor monitoring solutions.

    Fortunately, if you are working with a customer who uses BizTalk 360, it contains a feature called the "Web Endpoint Manager". Typically the Web Endpoint Manager is used to monitor web services that you integrate with, pinging them at appropriate times to make sure they return the expected HTTP status code. When you have an unusual situation where you want to monitor something which is key to the success of your solution, and you find yourself having to consider a significant custom solution to monitor the external dependency, the Web Endpoint Manager could be your friend.

    The endpoint manager monitors a URL and checks for a certain status code. This means that you can create your own ASPX web page and then have BizTalk 360 monitor that page. Behind the web page you could write any code you wish. An example of this architecture is shown in the diagram below.

    In the custom web page you would implement some custom code to do whatever it is that you want to monitor. In the code snippet below you can see how the Page_Load default method does some kind of check and then, depending on the result of the check, returns a certain HTTP status code:

        protected void Page_Load(object sender, EventArgs e)
        {
            var result = CheckSomething();

            if (result == "Success")
                Response.StatusCode = 202;
            else if (result == "DatabaseError")
                Response.StatusCode = 510;
            else if (result == "SystemError")
                Response.StatusCode = 512;
            else
                Response.StatusCode = 513;
        }

    In BizTalk 360 you would go to the Monitor and Notify tab and then to BizTalk Environment, which gives you access to the Web Endpoint Manager. You need an alarm set up which configures how the endpoint will be checked. I'm not going to go through the details of creating the alarm, as this is already documented in the BizTalk 360 documentation. One point to note is that in this example I set up a threshold alarm, which means that the URL is checked about every minute and, if an error persists for a period of time, the alarm raises the alert notification. In my example I configured the alarm to fire if the error persisted for 3 minutes. The picture below shows accessing the endpoint manager.

    In the Web Endpoint Manager you would then configure the endpoint to monitor and the HTTP response code which indicates all is working fine, as the picture below shows. I now have my endpoint monitoring set up, and BizTalk 360 should be checking my custom endpoint to see that it is available. If I want to manually sanity check that the endpoints I have registered are working fine, clicking the Refresh button will show whether they are all good or not.

    If the custom ASP.NET page which is checking my dependency hits a problem, you will see in the endpoint manager that the status code does not match the expected return code; your endpoints will display in red and you can see the problem, as the picture below shows. If I use specific HTTP response codes for the errors the custom ASP.NET page might encounter, I can easily interpret these to know what the problem is.
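    To make this concrete, CheckSomething() could probe whatever the dependency is. Here is a minimal hypothetical sketch; the method body, the connection string name and the choice of a database probe are illustrative assumptions, not part of BizTalk 360:

        // Requires System.Configuration and System.Data.SqlClient.
        // Probes a database dependency and maps the outcome to the result
        // strings used by Page_Load above.
        private string CheckSomething()
        {
            try
            {
                var connectionString = ConfigurationManager
                    .ConnectionStrings["DependencyDb"].ConnectionString; // assumed name
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open(); // the dependency check itself
                }
                return "Success";
            }
            catch (SqlException)
            {
                return "DatabaseError";
            }
            catch (Exception)
            {
                return "SystemError";
            }
        }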
    Using the alarms and notifications in BizTalk 360, when your endpoint goes into an error state you can easily configure email or SMS notifications to tell you that your endpoint is having problems, and you can use BizTalk 360 to help correlate what the problem is to allow you to investigate further. Below you can see the email which tells me my endpoint is not working.

    When everything returns to normal you will see that the status is fixed, and you will see a situation like the one below, where the WebEndpoints are now green and the return code matches what is expected.

    Conclusion

    As you can see, it is really easy to plug your own custom ASP.NET page into the BizTalk 360 web endpoint monitoring feature. This extension gives you the power to extend the monitoring to almost anything you want, as long as you can write some .NET code to check that the dependency is available and working. It would be interesting to hear of any ideas people have around things they would monitor with this extension. More details on the endpoint monitor can be found at the following link: http://www.biztalk360.com/tour/monitoring_notifications

    Read the article

  • Integration with Multiple Versions of BizTalk HL7 Accelerator Schemas

    - by Paul Petrov
    Microsoft BizTalk Accelerator for HL7 comes with multiple versions of the HL7 implementation. One of the typical integration tasks is to receive one format and transmit another. For example, system A works with HL7 v2.4 messages, system B with v2.3, and system C with v2.2, and system A exchanges messages with B and C. The logical solution is to create schemas in separate namespaces for each system and assign maps on the send ports. A schematic diagram of the messaging solution is shown below.

    Nothing is conceptually complex about that. On the implementation level, though, things can get nasty because of the elaborate nature of HL7 schemas and the sheer number of message types involved. If you try to implement the maps directly in BizTalk Map Editor, you quickly get buried by thousands of links between subfields of HL7 segments. Since the task is repetitive, because HL7 segments are reused between message types, it's natural to take advantage of that modular structure and reduce the amount of work through reuse. Here's where it makes sense to switch from the visual map editor to plain old XSLT.

    The implementation is done in three steps. First, create XSL templates to map the segments of one version to another. This can be done using BizTalk Map Editor, subsequently copying and modifying the generated XSL code to create one xsl:template per segment. Group all segments for a format mapping in one XSL file (we call it SegmentTemplates.xsl). Here's how the template for the PID segment (Patient Identification) looks:

        <xsl:template name="PID">
          <PID_PatientIdentification>
            <xsl:if test="PID_PatientIdentification/PID_1_SetIdPatientId">
              <PID_1_SetIdPid>
                <xsl:value-of select="PID_PatientIdentification/PID_1_SetIdPatientId/text()" />
              </PID_1_SetIdPid>
            </xsl:if>
            <xsl:for-each select="PID_PatientIdentification/PID_2_PatientIdExternalId">
              <PID_2_PatientId>
                <xsl:if test="CX_0_Id">
                  <CX_0_Id>
                    <xsl:value-of select="CX_0_Id/text()" />
                  </CX_0_Id>
                </xsl:if>
                <xsl:if test="CX_1_CheckDigit">
                  <CX_1_CheckDigitSt>
                    <xsl:value-of select="CX_1_CheckDigit/text()" />
                  </CX_1_CheckDigitSt>
                </xsl:if>
                <xsl:if test="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed">
                  <CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
                    <xsl:value-of select="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed/text()" />
                  </CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
        . . . // skipped for brevity

    This is the most tedious and time consuming part. Templates need to be created only for those segments that are used in the message interchange. Once this is done, the rest goes much more easily. The next step is to create message type specific XSL that references (imports) the segment templates XSL file. Inside this file, simply call the segment templates in the appropriate places.
    For example, the beginning of the mapping XSL for the ADT_A01 message looks like this:

        <xsl:import href="SegmentTemplates_23_to_24.xslt" />
        <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />

        <xsl:template match="/">
          <xsl:apply-templates select="s0:ADT_A01_23_GLO_DEF" />
        </xsl:template>

        <xsl:template match="s0:ADT_A01_23_GLO_DEF">
          <ns0:ADT_A01_24_GLO_DEF>
            <xsl:call-template name="EVN" />
            <xsl:call-template name="PID" />
            <xsl:for-each select="PD1_PatientDemographic">
              <xsl:call-template name="PD1" />
            </xsl:for-each>
            <xsl:call-template name="PV1" />
            <xsl:for-each select="PV2_PatientVisitAdditionalInformation">
              <xsl:call-template name="PV2" />
            </xsl:for-each>

    This code simply calls each segment template directly for required singular elements, and inside a for-each loop for optional or repeating elements. Lastly, create a BizTalk map (btm) that references the message type specific XSL. It is an essentially empty map with the Custom XSL Path set to the appropriate XSL file.

    In the end, you have one segment templates file that is referenced by many message type specific XSL files, which in turn are used by BizTalk maps. Once all the segment maps are created they are widely reusable, and the rest of the work is very simple and clean.

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application, I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results.

    The sample

    We begin by consuming exactly the same web service, which is sitting on a remote server. This time I have created a .NET 3.5 application which will consume the web service using the basicHttpBinding. The configuration for the consumption of this web service is shown in the diagram below. As before, we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods), and the application contains code which will asynchronously make the web service calls and handle the responses on a callback method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.

    Test 1 – WCF with Default Timeouts

    In this test I set about recreating the same scenario as before, but this time using WCF as the messaging component. For the first test I used the default configuration settings which WCF set up when we added a reference to the web service. The timeout values for this test are:

        closeTimeout="00:01:00"
        openTimeout="00:01:00"
        receiveTimeout="00:10:00"
        sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: Observations from the client-side and server-side traces:

    - The timeouts happened more quickly than in the previous tests, because some calls were timing out before they even attempted to connect to the server.
    - The first few calls that timed out did actually connect to the server and did execute successfully on the server.

    Test 2 – Increase Open Connection Timeout & Send Timeout

    In this test I wanted to increase both the send and open timeout values to try to give everything a chance to go through. The timeout values for this test are:

        closeTimeout="00:01:00"
        openTimeout="00:10:00"
        receiveTimeout="00:10:00"
        sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: This test proved that if the timeouts are high enough, everything just goes through.

    Test 3 – Increase just the Send Timeout

    In this test we wanted to increase just the send timeout. The timeout values for this test are:

        closeTimeout="00:01:00"
        openTimeout="00:01:00"
        receiveTimeout="00:10:00"
        sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: Observations from the client-side and server-side traces:

    - From both the client and server perspective, everything ran through fine.
    - The open connection timeout did not seem to have any effect.

    Test 4 – Increase Just the Open Connection Timeout

    In this test I wanted to validate the change to the open connection setting by increasing just this on its own.
    The timeout values for this test are:

        closeTimeout="00:01:00"
        openTimeout="00:10:00"
        receiveTimeout="00:10:00"
        sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: Observations from the client-side and server-side traces:

    - The increase to the open connection timeout (which relates to opening the channel) was not what stopped the calls timing out; it is the send of data which times out.
    - On the server you can see that the few successful calls were fine, but there were also a few calls which hit the server yet timed out on the client.
    - Not all calls hit the server, which was one of the problems with the WSE and ASMX options.

    Test 5 – Smaller Increase in Send Timeout

    In this test I wanted to make a smaller increase to the send timeout than before, just to prove that it was the key setting controlling what was timing out. The timeout values for this test are:

        openTimeout="00:01:00"
        receiveTimeout="00:10:00"
        sendTimeout="00:02:30"

    The Test: We simulated 21 calls to the web service.

    Test Results: Observations from the client-side and server-side traces:

    - Most of the calls got through fine.
    - On the client you can see that call 20 timed out, but it still hit the server and executed fine.

    Summary

    At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks:

    ASMX: The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection.

    WSE: The timeout value includes both the time to obtain a connection and the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40 second timeout setting may not throw the error until 60 seconds, when the connection to the server is made. If the connection to the server is made, you should be aware that your message will be processed, and you should design for this.

    WCF: The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, this setting's counter includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed from making your call in user code, regardless of whether we are waiting for a connection or have an open connection to the server. To a user, this may appear to give better latency in getting an error response compared to WSE or ASMX.
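    For reference, the four timeout values used throughout these tests all live on the client's binding configuration. A minimal sketch of what that section of app.config might look like (the binding name is an assumption for illustration):

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="RemoteServiceBinding"
                       closeTimeout="00:01:00"
                       openTimeout="00:01:00"
                       receiveTimeout="00:10:00"
                       sendTimeout="00:01:00" />
            </basicHttpBinding>
          </bindings>
        </system.serviceModel>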

    Read the article

  • Memory Efficient Windows SOA Server

    - by Antony Reynolds
    Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server

    Well, 11.1.1.6 is now available for download, so I thought I would build a Windows Server environment to run it. I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain.

    Required Software

    - 64-bit JDK
    - SOA Suite

    If you want 64-bit then choose "Generic" rather than "Microsoft Windows 32bit JVM" or "Linux 32bit JVM". The SOA Suite download page has links to all the required software. If you choose "Generic" then the Repository Creation Utility link does not show; you still need this, so change the platform to "Microsoft Windows 32bit JVM" or "Linux 32bit JVM" to get the software. Similarly, if you need a database, you need to change the platform to get the link to XE for Windows or Linux.

    If possible I recommend installing a 64-bit JDK, as this allows you to assign more memory to individual JVMs. Windows XE will work, but it is better if you can use a full Oracle database, because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments.

    Installation Steps

    The following flow chart outlines the steps required in installing and configuring SOA Suite. The steps in the diagram are explained below.

    64-bit? Is a 64-bit installation required? The Windows and Linux installers will install 32-bit versions of the Sun JDK and JRockit. A separate JDK must be installed for 64-bit.

    Install 64-bit JDK: The 64-bit JDK can be either HotSpot or JRockit. You can choose either JDK 1.7 or 1.6.

    Install WebLogic: If you are using 64-bit then install WebLogic using "java –jar wls1036_generic.jar". Make sure you include Coherence in the installation; the easiest way to do this is to accept the "Typical" installation.

    SOA Suite Required? If you are not installing SOA Suite then you can jump straight ahead and create a WebLogic domain.

    Install SOA Suite: Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic. Note that to run the SOA installer on Windows the user must have admin privileges. I also found that on Windows Server 2008 R2 I had to start the installer from a command prompt with administrative privileges; granting it privileges when it ran caused it to ignore the jreLoc parameter.

    Database Available? Do you have access to a database into which you can install the SOA schema? SOA Suite requires access to an Oracle database (it is supported on other databases, but I would always use an Oracle database).

    Install Database: I use an 11gR2 Oracle database to avoid the XE limitations. Make sure that you set the database character set to Unicode (AL32UTF8). I also disabled the new security settings, because they get in the way for a developer database. Don't forget to check that the number of processes is at least 150 and that the number of sessions is not set, or is set to at least 200 (in the DB init parameters).

    Run RCU: The SOA Suite database schemas are created by running the Repository Creation Utility. Install the "SOA and BPM Infrastructure" component to support SOA Suite. If you keep the schema prefix as "DEV" then the config wizard is easier to complete.

    Run Config Wizard: The config wizard creates the domain which hosts the WebLogic server instances. To get a minimum footprint SOA installation, choose the "Oracle Enterprise Manager" and "Oracle SOA Suite for developers" products. All other required products will be automatically selected.
    The "for developers" installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them. This reduces the number of JVMs required to run the system, and hence the amount of memory required. This is not suitable for anything other than a developer environment, as it mixes the admin and runtime functions together in a single server. It also takes a long time to load all the required modules, making start up a slow process.

    If it exists, I would recommend running the config wizard found in the "oracle_common/common/bin" directory under the Middleware Home. This should have access to all the templates, including SOA.

    If you also want to run BAM in the same JVM as everything else, then you need to "Select Optional Configuration" for "Managed Servers, Clusters and Machines". To target BAM at the AdminServer, delete the "bam_server1" managed server that is created by default. This will result in BAM being targeted at the AdminServer.

    Installation Issues

    I had a few problems when I came to test everything in my mega-JVM. The following applications were not targeted, and so I needed to target them at the AdminServer:

    - b2bui
    - composer
    - Healthcare UI
    - FMW Welcome Page Application (11.1.0.0.0)

    How Memory Efficient is It?

    On a Windows 2008 R2 Server running under VirtualBox, I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3G of memory. I allocated a minimum of 512M to the PermGen and a minimum of 1.5G for the heap. The settings from setSOADomainEnv are shown below:

        set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m
        set PORT_MEM_ARGS=-Xms1536m -Xmx2048m
        set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m
        set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m

    I arrived at these numbers by monitoring JVM memory usage in JConsole. Task Manager showed total system memory usage at 2.9G – just below the 3G I allocated to the VM. Performance is not stellar, but it runs, and I could run JDeveloper alongside it on my 8G laptop, so in that sense it was a result!

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones (Blog | Twitter) – "What the Business Says Is Not What the Business Wants." Steve posed the question, "What issues have you had in interacting with the business to get your job done?"

    My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants, because the business doesn't have the words to describe what it wants accurately enough for IT. Not only do technology-savviness barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope?

    The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture:

    When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right.

    When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh.

    I prefer to take a right-to-left approach, preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we're back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it's a painful project for everyone – not because of me, but because the approach just doesn't get to what the business wants in the most effective way.)

    By using a right-to-left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explain the decision-making process that relates to these reports. Sometimes I have them explain their business processes to me, or better yet show me their business processes in action, because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above.

    My next step is to start moving leftward by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report, by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube.

    Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at, and without using those terms. It's not helpful to use sample data like Adventure Works, or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine-tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build.

    So right-to-left design is not truly moving from the right to the left. It starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that's how I get past what the business says to what the business wants.

    Read the article

  • Is this proper OO design for C++?

    - by user121917
    I recently took a software processes course and this is my first time attempting OO design on my own. I am trying to follow OO design principles and C++ conventions. I attempted and gave up on MVC for this application, but I am trying to "decouple" my classes such that they can be easily unit-tested and so that I can easily change the GUI library used and/or the target OS. At this time, I have finished designing classes but have not yet started implementing methods. The function of the software is to log all packets sent and received, and display them on the screen (like WireShark, but for one local process only). The software accomplishes this by hooking the send() and recv() functions in winsock32.dll, or some other pair of analogous functions depending on what the intended Target is. The hooks add packets to SendPacketList/RecvPacketList. The GuiLogic class starts a thread which checks for new packets. When new packets are found, it utilizes the PacketFilter class to determine the formatting for the new packet, and then sends it to MainWindow, a native win32 window (with intent to later port to Qt). Full size image of UML class diagram. Here are my classes in skeleton/header form (this is my actual code):

    class PacketModel
    {
    protected:
        std::vector<byte> data;
        int id;

    public:
        PacketModel();
        PacketModel(byte* data, unsigned int size);
        PacketModel(int id, byte* data, unsigned int size);

        int GetLen();
        bool IsValid(); //len >= sizeof(opcode_t)

        opcode_t GetOpcode();
        byte* GetData(); //returns &(data[0])
        bool GetData(byte* outdata, int maxlen);
        void SetData(byte* pdata, int len);

        int GetId();
        void SetId(int id);

        bool ParseData(char* instr);
        bool StringRepr(char* outstr);

        byte& operator[] (const int index);
    };

    class SendPacket : public PacketModel
    {
    protected:
        byte* returnAddy;
    public:
        byte* GetReturnAddy();
        void SetReturnAddy(byte* addy);
    };

    class RecvPacket : public PacketModel
    {
    protected:
        byte* callAddy;
    public:
        byte* GetCallAddy();
        void SetCallAddy(byte* addy);
    };

    //problem: packets may be added to list at any time by any number of threads
    //solution: critical section associated with each packet list
    class Synch
    {
    public:
        void Enter();
        void Leave();
    };

    template<class PacketType>
    class PacketList
    {
    private:
        static const int MAX_STORED_PACKETS = 1000;
    public:
        static const int DEFAULT_SHOWN_PACKETS = 100;
    private:
        vector<PacketType> list;
        Synch synch; //wrapper for critical section
    public:
        void AddPacket(PacketType* packet);
        PacketType* GetPacket(int id);
        int TotalPackets();
    };

    class SendPacketList : PacketList<SendPacket>
    {
    };

    class RecvPacketList : PacketList<RecvPacket>
    {
    };

    class Target //one socket
    {
        bool Send(SendPacket* packet);
        bool Inject(RecvPacket* packet);
        bool InitSendHook(SendPacketList* sendList);
        bool InitRecvHook(RecvPacketList* recvList);
    };

    class FilterModel
    {
    private:
        opcode_t opcode;
        int colorID;
        bool bFilter;
        char name[41];
    };

    class FilterFile
    {
    private:
        FilterModel filter;
    public:
        void Save();
        void Load();
        FilterModel* GetFilter(opcode_t opcode);
    };

    class PacketFilter
    {
    private:
        FilterFile filters;
    public:
        bool IsFiltered(opcode_t opcode);
        bool GetName(opcode_t opcode, char* namestr); //return false if name does not exist
        COLORREF GetColor(opcode_t opcode); //return default color if no custom color
    };

    class GuiLogic
    {
    private:
        SendPacketList sendList;
        RecvPacketList recvList;
        PacketFilter packetFilter;

        void GetPacketRepr(PacketModel* packet);
        void ReadNew();
        void AddToWindow();

    public:
        void Refresh(); //called from thread
        void GetPacketInfo(int id); //called from MainWindow
    };

    I'm looking for a general review of my OO design, use of UML, and use of C++ features. I especially just want to know if I'm doing anything considerably wrong. From what I've read, design review is on-topic for this site (and off-topic for the Code Review site). Any sort of feedback is greatly appreciated. Thanks for reading this.
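    For readers wondering what the Synch wrapper above might actually wrap, here is a minimal sketch assuming the Win32 CRITICAL_SECTION API (only an illustration of the idea, not part of the question's design):

    #include <windows.h>

    // Thin owner of a Win32 critical section; PacketList calls
    // Enter()/Leave() around access to its vector.
    class Synch
    {
    private:
        CRITICAL_SECTION cs;
    public:
        Synch()  { InitializeCriticalSection(&cs); }
        ~Synch() { DeleteCriticalSection(&cs); }
        void Enter() { EnterCriticalSection(&cs); }
        void Leave() { LeaveCriticalSection(&cs); }
        // Copying is deliberately not supported; a real version
        // would also disable the copy constructor and assignment.
    };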

    Read the article

  • Great event : Microsoft Visual Studio 2010 Launch @ Microsoft TechEd Blore

    - by sathya
    I was really excited to attend day 1 of Microsoft TechEd 2010 in Bangalore. This is the first TechEd that I am attending. The event was really fun, filled with lots of knowledge-sharing sessions and lots of goodies and gifts from the partners. Initially the event started with Murthy's session. He explained about developers relating to the 5 elements of nature (Pancha Boothaas):
    1. Fire - Passion.
    2. Wave (Water) - Catch the right wave that we need to apply.
    3. Earth - Connections and lots of opportunities around the world.
    4. Air - It's whatever we breathe. Developers: without them nothing is possible; they are like the air.
    5. Sky - Cloud-based applications.
    Next came the keynote and the announcement of Visual Studio by SomaSegar. The list of things he delivered his speech on:
    1. Announcement of Visual Studio 2010.
    2. Announcement of .NET 4.0.
    3. Announcement of Silverlight later this week.
    4. What is the current trend? Microsoft has done research with many developers across the globe and got the following feedback:
    Getting Lost (interrupted) - When we are doing some work and somebody calls or interrupts us in some other way, we lose track of what we were doing and need to start over from the beginning.
    Falling Behind - Technology gets updated phenomenally over time, and developers always feel they are not working with state-of-the-art technology and doubt whether they are staying updated.
    Lack of Collaboration - When a manager asks what the team members have done, some work might be done and some might not be, and finally everyone ends up in a state of not knowing where the project stands.
    They have addressed these 3 points in VS 2010 with the following features:
    Getting Lost - Some cool features to overcome this: a graphical interface which shows what we have done and where we are, and zoom features at the code level.
    Falling Behind - Everything is based on the .NET language base. 2010 has been built in such a way that if developers know the native language, that's enough for building good applications.
    Lack of Collaboration - Dashboard features which show exactly where the project is, with a graphical user interface that, on clicking, drills down even to the code level.
    5. An overview of all the new features in VS 2010.
    6. Some good demos of new features in VS 2010 by Polita and another presenter.
    Some of the new features included:
    1. Team Explorer
    2. Zoom in code
    3. Ribbon development
    4. Development on a single platform for Windows Phone, Xbox, Zune, Azure, web-based and Windows-based applications
    5. Sequence diagram generation directly from code
    6. Dashboards to show project status
    7. JavaScript and jQuery IntelliSense
    8. Native support for jQuery
    9. Packaging feature for deployment
    10. Generation of different versions of web.config, like Web.Config.Production, Web.Config.Staging, etc.
    11. IntelliTrace - eliminating the "Not Reproducible" statement
    12. Automated user interface testing
    At the close of the day we had a great event called Demo Extravaganza, where lots of cool projects launched by Microsoft, as well as projects under research, were shown. I got a lot of info about BING today. BING really rocks!! It has the following:
    1. Visual search.
    2. Product-based search. For each product, different menu filters are provided to make an advanced search.
    3. BING Maps was awesome!! It zoomed down to street level, and we can imagine we are the persons walking or running on the road, seeing real objects like buildings moving by our side.
    4. PhotoSynth is used in BING to show all the images taken around the globe in a 3D format.
    5. Formula - if we give some formula, it automatically gives the value for the variable or the derivation of the expression.
    Also some info about some cool touch apps which handle authentication and computation of TechEd attendees' points scored and sessions attended. One guy won an XBOX in a lucky draw as a gift. There were lots of partner stalls, like Accenture, Intel, Citrix, MicroFocus, Telerik, Infragistics, Sapient, etc. Some offers were provided for us, like 50% off on certifications, 1 free e-learning course, etc. Stay tuned!! Will update you on other events too.

    Read the article

  • C# async and actors

    - by Alex.Davies
    If you read my last post about async, you might be wondering what drove me to write such odd code in the first place. The short answer is that .NET Demon is written using NAct Actors. Actors are an old idea, which I believe deserve a renaissance under C# 5. The idea is to isolate each stateful object so that only one thread has access to its state at any point in time. That much should be familiar; it's equivalent to traditional lock-based synchronization. The different part is that actors pass "messages" to each other rather than calling a method and waiting for it to return. By doing that, each thread can only ever be holding one lock. This completely eliminates deadlocks, my least favourite concurrency problem. Most people who use actors take this quite literally, and there are plenty of frameworks which help you to create message classes and loops which can receive the messages, inspect what type of message they are, and process them accordingly. But I write C# for a reason. Do I really have to choose between using actors and everything I love about object orientation in C#?
    - Type safety
    - Interfaces
    - Inheritance
    - Generics
    As it turns out, no. You don't need to choose between messages and method calls. A method call makes a perfectly good message, as long as you don't wait for it to return. This is where asynchronous methods come in. I have used NAct for a while to wrap my objects in a proxy layer. As long as I followed the rule that methods must always return void, NAct queued up the call for later, and immediately released my thread. When I needed to get information out of other actors, I could use EventHandlers and callbacks (continuation passing style, for any CS geeks reading), and NAct would call me back in my isolated thread without blocking the actor that raised the event. Using callbacks looks horrible though. To remind you:

    m_BuildControl.FilterEnabledForBuilding(
        projects,
        enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
            enabledProjects,
            newDirtyProjects =>
            {
                .......

    Which is why I'm really happy that NAct now supports async methods. Now, methods are allowed to return Task rather than just void. I can await those methods, and C# 5 will turn the rest of my method into a continuation for me. NAct will run the other method in the other actor's context, but will make sure that when my method resumes, we're back in my context. Neither actor was ever blocked waiting for the other one. Apart from when they were actually busy doing something, they were responsive to concurrent messages from other sources. To be fair, you could use async methods with lock statements to achieve exactly the same thing, but it's ugly. Here's a realistic example of an object that has a queue of data that gets passed to another object to be processed:

    class QueueProcessor
    {
        private readonly ItemProcessor m_ItemProcessor = ...

        private readonly object m_Sync = new object();
        private Queue<object> m_DataQueue = ...
        private List<object> m_Results = ...

        public async Task ProcessOne()
        {
            object data = null;
            lock (m_Sync)
            {
                data = m_DataQueue.Dequeue();
            }
            var processedData = await m_ItemProcessor.ProcessData(data);
            lock (m_Sync)
            {
                m_Results.Add(processedData);
            }
        }
    }

    We needed to write two lock blocks, one to get the data to process, one to store the result. The worrying part is how easily we could have forgotten one of the locks.
    Compare that to the version using NAct:

    class QueueProcessorActor : IActor
    {
        private readonly ItemProcessor m_ItemProcessor = ...
        private Queue<object> m_DataQueue = ...
        private List<object> m_Results = ...

        public async Task ProcessOne()
        {
            // We are an actor, it's always thread-safe to access our private fields
            var data = m_DataQueue.Dequeue();
            var processedData = await m_ItemProcessor.ProcessData(data);
            m_Results.Add(processedData);
        }
    }

    You don't have to explicitly lock anywhere; NAct ensures that your code will only ever run on one thread, because it's an actor. Either way, async is definitely better than traditional synchronous code. Here's a diagram of what a typical synchronous implementation might do: the left side shows what is running on the thread that has the lock required to access the QueueProcessor's data. The red section is where that lock is held, but doesn't need to be. Contrast that with the async version we wrote above: here, the lock is released in the middle. The QueueProcessor is free to do something else. Most importantly, even if the ItemProcessor sometimes calls the QueueProcessor, they can never deadlock waiting for each other. So I thoroughly recommend you use async for all code that has to wait a while for things. And if you find yourself writing lots of lock statements, think about using actors as well. Using actors and async together really takes the misery out of concurrent programming.

    Read the article

  • Actionscript 3.0 - Enemies do not move right in my platformer game

    - by Christian Basar
    I am making a side-scrolling platformer game in Flash (ActionScript 3.0). I have made lots of progress lately, but I have come across a new problem. I will give some background first. My game level's terrain (or 'floor') is referenced by a MovieClip variable called 'floor.' My desire is to have the Player and enemy characters walk along the terrain. I have gotten the Player character to move on the terrain just fine; he walks up/down hills and falls whenever there is no ground beneath him. Here is the code I created to allow the Player to follow the terrain correctly. Much more code is used to control the Player, but only this code deals with the Player character's following of the terrain and gravity.

    // If the Player's not on the ground (not touching the 'floor' MovieClip)...
    if (!onGround)
    {
        // Disable ducking
        downKeyPressed = false;
        // Increase the Player's 'y' position by his 'y' velocity
        player.y += playerYVel;
    }

    // Increase the 'playerYVel' variable so that the Player will fall
    // progressively faster down the screen. This code technically
    // runs "all the time" but in reality it only affects the player
    // when he's off the ground.
    playerYVel += gravity;

    // Give the Player a terminal velocity of 15 px/frame
    if (playerYVel > 15)
    {
        playerYVel = 15;
    }

    // If the Player has not hit the 'floor,' increase his falling speed
    if (! floor.hitTestPoint(player.x, player.y, true))
    {
        player.y += playerYVel;
        // The Player is not on the ground when he's not touching it
        onGround = false;
    }

    Since getting this code to work for the Player, I have created a 'SkullDemon' class, which is one of the planned enemies for my game. I want the 'SkullDemon' objects to move along the terrain like the Player does. With lots of great help, I have already coded the EventListeners, etc. necessary for the 'SkullDemons' to move. Unfortunately, I am having trouble getting them to move along the terrain. In fact, they do not touch the terrain at all; they move along the top of the boundary of the 'floor' MovieClip! I had a simple text diagram showing what I mean, but unfortunately Stack Overflow does not format it correctly. I hope my problem is clear from my description. Strangely enough, my code for the Player's movement and the 'SkullDemon's' movement is almost exactly the same, yet the 'SkullDemons' do not move like the Player does. Here is my code for the SkullDemon movement:

    // Move all of the Skull Demons using this method
    protected function moveSkullDemons():void
    {
        // Go through the whole 'skullDemonContainer'
        for (var skullDi:int = 0; skullDi < skullDemonContainer.numChildren; skullDi++)
        {
            // Set the SkullDemon 'instance' variable to equal the current SkullDemon
            skullDIns = SkullDemon(skullDemonContainer.getChildAt(skullDi));

            // For now, just move the Skull Demons left at 5 units per second
            skullDIns.x -= 5;

            // If the Skull Demon has not hit the 'floor,' increase his falling speed
            if (! floor.hitTestPoint(skullDIns.x, skullDIns.y, true))
            {
                // Increase the Skull Demon's 'y' position by his 'y' velocity
                skullDIns.y += skullDIns.sdYVel;
                // The Skull Demon is not on the ground when he's not touching it
                skullDIns.sdOnGround = false;
            }

            // Increase the 'sdYVel' variable so that the Skull Demon will fall
            // progressively faster down the screen. This code technically
            // runs "all the time" but in reality it only affects the Skull Demon
            // when he's off the ground.
            if (! skullDIns.sdOnGround)
            {
                skullDIns.sdYVel += skullDIns.sdGravity;

                // Give the Skull Demon a terminal velocity of 15 px/frame
                if (skullDIns.sdYVel > 15)
                {
                    skullDIns.sdYVel = 15;
                }
            }

            // What happens when the Skull Demon lands on the ground after a fall?
            for (var i:int = 0; i < 10; i++)
            {
                // The Skull Demon is only on the ground ('onGround == true') when
                // the ground is touching the Skull Demon MovieClip's origin point,
                // which is at the Skull Demon's bottom centre
                if (floor.hitTestPoint(skullDIns.x, skullDIns.y, true))
                {
                    skullDIns.y = skullDIns.y;
                    // Set the Skull Demon's y-axis speed to 0
                    skullDIns.sdYVel = 0;
                    // The Skull Demon is on the ground again
                    skullDIns.sdOnGround = true;
                }
            }
        }
    } // End of 'moveSkullDemons()' function

    It is almost like the 'SkullDemons' are interacting with the 'floor' MovieClip using the hitTestObject() function, and not the hitTestPoint() function which is what I want, and which works for the Player character. I am confused about this problem and would appreciate any help you could give me. Thanks!

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – HDFS. What is HDFS? HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high-performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are very common. For that reason HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to a reduced risk of failure. HDFS creates smaller pieces of the big data and distributes them on different nodes. It also copies each smaller piece multiple times to different nodes. Hence, when any node with the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system.
    Architecture of HDFS: The architecture of HDFS is a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server and it manages the file system as well as regulates access to various files. In addition to the NameNode there are multiple DataNodes. There is always one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of the data based on instructions from the NameNode. In reality, NameNode and DataNode are software designed to run on commodity machines, built in the Java language.
    Visual Representation of HDFS Architecture: Let us understand how HDFS works with the help of the diagram. The Client App or HDFS client connects to the NameNode as well as the DataNodes. Client App access to the DataNodes is regulated by the NameNode, which allows the client to connect to the DataNode directly. A big data file is divided into multiple data blocks (let us assume that those data blocks are A, B, C and D). The Client App will later on write data blocks directly to the DataNode. The Client App does not have to write directly to all the nodes. It just has to write to any one of the nodes, and the NameNode will decide on which other DataNodes it will have to replicate the data. In our example the Client App directly writes to DataNode 1 and DataNode 3. However, data chunks are automatically replicated to other nodes. All the information about which data block is placed on which DataNode is written back to the NameNode.
    High Availability During Disaster: Now, as multiple DataNodes have the same data blocks, in the case of any DataNode facing a disaster the entire process will continue, as another DataNode will assume the role of serving the specific data blocks which were on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack.
    Though that replicated node is not operational in the architecture, it has all the necessary data to perform the tasks of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple concept that data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to overcome the limitation of non-functioning hardware. Tomorrow: In tomorrow's blog post we will discuss the importance of the relational database in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
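    To make the client write path described above concrete, here is a minimal sketch using the HDFS Java API; the NameNode address and file path are hypothetical placeholders, and it assumes the Hadoop client libraries and configuration are on the classpath. The client only talks to the NameNode for metadata; the byte stream goes to the DataNodes.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; the client asks the NameNode
            // where blocks should live, then streams data to DataNodes,
            // and the NameNode arranges replication to other DataNodes.
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/data/sample.txt");
            try (FSDataOutputStream out = fs.create(file)) {
                out.writeUTF("hello HDFS");
            }
            fs.close();
        }
    }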

    Read the article

  • Context is Everything

    - by Angus Graham
    How many times have you asked a question only to hear an answer like "Well, it depends. What exactly are you trying to do?" There are times that raw information can't tell us what we need to know without putting it in a larger context. Let's take a real-world example. If I'm a maintenance planner trying to figure out which assets should be replaced during my next maintenance window, I'm going to go to my Asset Management System. I can get it to spit out a list of assets that have failed several times over the last year. But what are these assets connected to? Are there any safety consequences to shutting off this pipeline to do the work? Is some other work that's planned going to conflict with replacing this asset? Several of these questions can't be answered by simply spitting out a list of asset IDs. The maintenance planner will have to reference a diagram of the plant to answer several of these questions. This is precisely the idea behind Augmented Business Visualization. An Augmented Business Visualization (ABV) solution is one where your structured data (enterprise application data) and your unstructured data (documents, contracts, floor plans, designs, etc.) come together to allow you to make better decisions. Essentially, we're putting your business data into its context. AutoVue allows you to create ABV solutions by integrating your enterprise application with AutoVue's hotspot framework. Hotspots can be defined for your document. Users can click these hotspots to trigger actions in your enterprise app. Similarly, the enterprise app can highlight the hotspots in your document based on its business data, creating a visual dashboard of your business data in the context of your document. ABV is not new. We introduced the hotspot framework in AutoVue 20.1 with text hotspots: any text in a PDF or 2D CAD drawing could be turned into a hotspot. In 20.2 we have enhanced this to include two new types of hotspots: 3D and regional hotspots. 3D hotspots allow you to turn 3D parts into hotspots. Hotspots can be defined based on the attributes of the part, so you can create hotspots based on part numbers, material, date of delivery, etc. Regional hotspots allow an administrator to define rectangular regions on any PDF, image, or 2D CAD drawing. This is perfect for cases where the document you're using either doesn't have text in it (a JPG or TIFF for example) or if you want to define hotspots that don't correspond to the text in the document. There are lots of possible uses for AutoVue hotspots.
    A great demonstration of how our hotspot capabilities can help add context to enterprise data in the Energy sector can be found in the following AutoVue movies:
    Maintenance Planning in the Energy Sector - Watch it Now
    Capital Construction Project Management in the Energy Sector - Watch it Now
    Commissioning and Handover Process for the Energy Sector - Watch it Now

    Read the article

  • Key Windows Phone Development Concepts

    - by Tim Murphy
    As I am doing more development in and out of the enterprise arena for Windows Phone, I decided I would study for the 70-599 test. I generally take certification tests as a way to force me to dig deeper into a technology. Between the development and studying, I decided it would be good to put together a post on key development features of the Windows Phone 7 environment. Contrary to popular belief, the launch of Windows Phone 8 will not obsolete Windows Phone 7 development. With the launch of 7.8 coming shortly, and people who will remain on 7.X for the foreseeable future, there are still consumers needing these apps, so don't throw out the baby with the bath water.
    PhoneApplicationService: This is a class that every Windows Phone developer needs to become familiar with. When it comes to application state this is your go-to repository. It also contains events that help with management of your application's lifecycle. You can access it like the following code sample.

    PhoneApplicationService.Current.State["ValidUser"] = userResult;

    DeviceNetworkInformation: This class allows you to determine the connectivity of the device and be notified when something changes with that connectivity. If you are making web service calls you will want to check here before firing off. I have found that this class doesn't actually work very well for determining if you have internet access. You are better off using the following code, where IsConnectedToInternet is an App-level property.

    private void Application_Launching(object sender, LaunchingEventArgs e)
    {
        // Validate user access
        if (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None)
        {
            IsConnectedToInternet = true;
        }
        else
        {
            IsConnectedToInternet = false;
        }
        NetworkChange.NetworkAddressChanged += new NetworkAddressChangedEventHandler(NetworkChange_NetworkAddressChanged);
    }

    void NetworkChange_NetworkAddressChanged(object sender, EventArgs e)
    {
        IsConnectedToInternet = (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None);
    }

    Push Notification: Push notification allows your application to receive notifications in a way that reduces the application's power needs. This MSDN article is a good place to get the basics of push notification, but you can see the essential concept in the diagram below. There are three types of push notification: toast, Tile and raw. The first two work regardless of the state of the application, whereas raw messages are discarded if your application is not running.
    Live Tiles: Live Tiles are one of the main differentiators of the Windows Phone platform. They allow users to find information at a glance from their start screen without navigating into individual apps. Knowing how to implement them can be a great boost to the attractiveness of your application. The simplest step-by-step explanation for creating Live Tiles is here.
    Local Database: While your application really only has Isolated Storage as a data store, there are some ways of getting database functionality to develop against. There are a number of open source ORM-style solutions. Probably the best and most native way I have found is to use LINQ to SQL. It does take a significant amount of setup, but the ease of use once it is configured is worth the cost. Rather than repeat the full concepts here I will point you to a post that I wrote previously.
    Tasks (Bing, Email): Leveraging built-in features of the Windows Phone platform is an easy way to add functionality that would be expensive to develop on your own. The classes that you need to make yourself familiar with are BingMapsDirectionsTask and EmailComposeTask (see the sketch at the end of this post). These will allow your application to supply directions and give the user an email path to relay information to friends and associates.
    Event Model: The ability for users to switch quickly to other apps or the home screen is just one reason why knowing the Windows Phone event model is important. You need to be able to save data so that if a user gets a phone call they can come back to exactly where they were in your application. This means that you will need to handle such events as Launching, Activated, Deactivated and Closing at an application level. You will probably also want to get familiar with the OnNavigatedTo and OnNavigatedFrom events at the page level. These will give you an opportunity to save data as a user navigates through your app.
    Summary: This is just a small portion of the concepts that you will use while building Windows Phone apps, but these are some of the most critical. With the launch of Windows Phone 8 this list will probably expand. Take the time to investigate these topics further and try them out in your apps. del.icio.us Tags: Windows Phone 7,Windows Phone,WP7,Software Development,70-599
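    As a quick illustration of the two launchers mentioned in the Tasks section above, here is a minimal sketch; the address, subject, body, and destination label are hypothetical placeholders.

    using Microsoft.Phone.Tasks;

    // Compose an email for the user to review and send.
    EmailComposeTask email = new EmailComposeTask();
    email.To = "friend@example.com";      // hypothetical address
    email.Subject = "Meeting";
    email.Body = "See you at the office.";
    email.Show();

    // Launch Bing Maps directions; leaving Start unset uses the user's
    // current location, and a null coordinate lets Maps geocode the label.
    BingMapsDirectionsTask directions = new BingMapsDirectionsTask();
    directions.End = new LabeledMapLocation("Contoso Office", null); // hypothetical label
    directions.Show();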

    Read the article

  • How to make this design closer to proper DDD?

    - by Seralize
    I've read about DDD for days now and need help with this sample design. All the rules of DDD make me very confused as to how I'm supposed to build anything at all when domain objects are not allowed to show methods to the application layer; where else do I orchestrate behaviour? Repositories are not allowed to be injected into entities, and entities themselves must thus work on state. Then an entity needs to know something else from the domain, but other entity objects are not allowed to be injected either? Some of these things make sense to me but some don't. I've yet to find good examples of how to build a whole feature, as every example is about Orders and Products, repeating the other examples over and over. I learn best by reading examples and have tried to build a feature using the information I've gained about DDD this far. I need your help to point out what I do wrong and how to fix it, most preferably with code, as "I would not recommend doing X and Y" is very hard to understand in a context where everything is just vaguely defined already. If I can't inject an entity into another it would be easier to see how to do it properly. In my example there are users and moderators. A moderator can ban users, but with a business rule: only 3 per day. I did an attempt at setting up a class diagram to show the relationships (code below):

    interface iUser
    {
        public function getUserId();
        public function getUsername();
    }

    class User implements iUser
    {
        protected $_id;
        protected $_username;

        public function __construct(UserId $user_id, Username $username)
        {
            $this->_id = $user_id;
            $this->_username = $username;
        }

        public function getUserId()
        {
            return $this->_id;
        }

        public function getUsername()
        {
            return $this->_username;
        }
    }

    class Moderator extends User
    {
        protected $_ban_count;
        protected $_last_ban_date;

        public function __construct(UserBanCount $ban_count, SimpleDate $last_ban_date)
        {
            $this->_ban_count = $ban_count;
            $this->_last_ban_date = $last_ban_date;
        }

        public function banUser(iUser &$user, iBannedUser &$banned_user)
        {
            if (! $this->_isAllowedToBan()) {
                throw new DomainException('You are not allowed to ban more users today.');
            }

            if (date('d.m.Y') != $this->_last_ban_date->getValue()) {
                $this->_ban_count = 0;
            }
            $this->_ban_count++;

            $date_banned = date('d.m.Y');
            $expiration_date = date('d.m.Y', strtotime('+1 week'));
            $banned_user->add($user->getUserId(), new SimpleDate($date_banned), new SimpleDate($expiration_date));
        }

        protected function _isAllowedToBan()
        {
            if ($this->_ban_count >= 3 AND date('d.m.Y') == $this->_last_ban_date->getValue()) {
                return false;
            }
            return true;
        }
    }

    interface iBannedUser
    {
        public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date);
        public function remove();
    }

    class BannedUser implements iBannedUser
    {
        protected $_user_id;
        protected $_date_banned;
        protected $_expiration_date;

        public function __construct(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date)
        {
            $this->_user_id = $user_id;
            $this->_date_banned = $date_banned;
            $this->_expiration_date = $expiration_date;
        }

        public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date)
        {
            $this->_user_id = $user_id;
            $this->_date_banned = $date_banned;
            $this->_expiration_date = $expiration_date;
        }

        public function remove()
        {
            $this->_user_id = '';
            $this->_date_banned = '';
            $this->_expiration_date = '';
        }
    }

    // Gathers objects
    $user_repo = new UserRepository();
    $evil_user = $user_repo->findById(123);

    $moderator_repo = new ModeratorRepository();
    $moderator = $moderator_repo->findById(1337);

    $banned_user_factory = new BannedUserFactory();
    $banned_user = $banned_user_factory->build();

    // Performs ban
    $moderator->banUser($evil_user, $banned_user);

    // Saves objects to database
    $user_repo->store($evil_user);
    $moderator_repo->store($moderator);

    $banned_user_repo = new BannedUserRepository();
    $banned_user_repo->store($banned_user);

    Should the User entity have an 'is_banned' field which can be checked with $user->isBanned()? How to remove a ban? I have no idea.

    Read the article

  • Web optimization

    - by hmloo
    1. CSS Optimization
    Organize your CSS code: Good CSS organization helps with future maintainability of the site; it helps you and your team members understand the CSS more quickly and jump to specific styles.
    Structure CSS code: For a small project, you can break your CSS code into separate blocks according to the structure of the page or the page content. For example, you can break your CSS document according to the content of your web page (e.g. Header, Main Content, Footer).
    Structure CSS files: For a large project, you may feel you have too much CSS code in one place, so it's best to structure your CSS into more CSS files and use a master style sheet to import them. This solution not only organizes the style structure, but also reduces server requests.

    /*-------------- Master style sheet --------------*/
    @import "Reset.css";
    @import "Structure.css";
    @import "Typography.css";
    @import "Forms.css";

    Create an index for your CSS: Another important thing is to create an index at the beginning of your CSS file; an index helps you quickly understand the whole CSS structure.

    /*----------------------------------------
    1. Header
    2. Navigation
    3. Main Content
    4. Sidebar
    5. Footer
    ------------------------------------------*/

    Writing efficient CSS selectors: Keep in mind that browsers match CSS selectors from right to left. The order of efficiency for selectors is:
    1. id (#myid)
    2. class (.myclass)
    3. tag (div, h1, p)
    4. adjacent sibling (h1 + p)
    5. child (ul > li)
    6. descendent (li a)
    7. universal (*)
    8. attribute (a[rel="external"])
    9. pseudo-class and pseudo-element (a:hover, li:first)
    The rightmost selector is called the "key selector", so when you write your CSS code, you should choose a more efficient key selector. Here are some best practices:
    Don't tag-qualify. Never do this:
    div#myid
    div.myclass
    IDs are unique, and classes are more specific than a tag, so they don't need a tag. Doing so makes the selector less efficient.
    Avoid overqualifying selectors. For example, #nav a is more efficient than ul#nav li a.
    Don't repeat declarations. Example:
    body { font-size: 12px; }
    h1 { font-size: 12px; font-weight: bold; }
    Since h1 already inherits font-size from body, you don't need to repeat the attribute.
    Use 0 instead of 0px. Always use #selector { margin: 0; }. There's no need to include the px after 0; removing all those superfluous px can reduce the size of your CSS file.
    Group declarations. Example:
    h1 { font-size: 16pt; }
    h1 { color: #fff; }
    h1 { font-family: Arial, sans-serif; }
    It's much better to combine them:
    h1 { font-size: 16pt; color: #fff; font-family: Arial, sans-serif; }
    Group selectors. Example:
    h1 { color: #fff; font-family: Arial, sans-serif; }
    h2 { color: #fff; font-family: Arial, sans-serif; }
    It would be much better set up as:
    h1, h2 { color: #fff; font-family: Arial, sans-serif; }
    Group attributes. Example:
    h1 { color: #fff; font-family: Arial, sans-serif; }
    h2 { color: #fff; font-family: Arial, sans-serif; font-size: 16pt; }
    You can set different rules for specific elements after setting a rule for a group:
    h1, h2 { color: #fff; font-family: Arial, sans-serif; }
    h2 { font-size: 16pt; }
    Use shorthand properties. Example:
    #selector { margin-top: 8px; margin-right: 4px; margin-bottom: 8px; margin-left: 4px; }
    Better:
    #selector { margin: 8px 4px 8px 4px; }
    Best:
    #selector { margin: 8px 4px; }
    A good diagram illustrates how shorthand declarations are interpreted depending on how many values are specified for the margin and padding properties.
    Instead of using:
    #selector { background-image: url("logo.png"); background-position: top left; background-repeat: no-repeat; }
    use:
    #selector { background: url(logo.png) no-repeat top left; }

    2. Image Optimization
    Image Optimizer: Image Optimizer is a free Visual Studio 2010 extension that optimizes PNG, GIF and JPG file sizes without quality loss. It uses SmushIt and PunyPNG for the optimization. Just right-click on any folder or image in Solution Explorer and choose Optimize Images; it will automatically optimize all PNG, GIF and JPEG files in that folder.
    CSS Image Sprites: CSS image sprites are a way to combine a collection of images into a single image, then use the CSS background-position property to shift the visible area to show the required image. Many images can take a long time to load and generate multiple server requests, so an image sprite can reduce the number of server requests and improve site performance. You can use many online tools to generate your image sprite and CSS, and you can also try the Sprite and Image Optimization framework released by the ASP.NET team.
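    As a minimal sketch of the sprite technique just described (the file name and pixel offsets below are hypothetical):

    /* One combined image holds both icons; background-position shifts the
       visible window instead of requesting a separate file per icon. */
    .icon {
        background-image: url(sprite.png); /* hypothetical sprite sheet */
        background-repeat: no-repeat;
        width: 16px;
        height: 16px;
    }
    .icon-home   { background-position: 0 0; }     /* first 16x16 tile */
    .icon-search { background-position: -16px 0; } /* second tile, shifted left */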

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago. Explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2-second mobile app startup. An app with a poor mobile experience results in users not buying stuff, going to a competitor, not liking your company. Battery life. A bad mobile app is worse than no app at all because it turns people away from the brand, etc. Apps didn't exist 10 years ago; 72 billion dollars a year in 2013, 151 billion in 2017.
    Testing performance. Mobile is different from a regular app. Need to fix issues before customers discover them. ARO is a free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different -- radio resource control state machine. Radio resource control -- radio goes from idle to continuous reception -- drains battery, sends data, packets coming through; after packets come through the radio is still on, which is tail time; after 10 seconds of no data coming through, the radio goes off. For example, YouTube, e.g., 10 to 15 seconds after every connection, can be a huge drain on battery; app traffic triggers the RRC state. Goal: balance fast network connectivity against battery usage. ARO is free and open source, can test any platform, and has won awards.
    How do I test my app? pcap or tcpdump network capture. Native collector: Android and iOS. A rooted Android device is needed. Test the app on the phone, background data, idle for ads and analytics. Graded against 25 best practices. See all the processes, all network traffic mapped to processes, stats about the trace; can look just at your app, exclude Facebook, etc. Many tests conducted, e.g., file download, HTML (wrapped applications, e.g., Cordova).
    Best practices. Make stuff smaller. GZIP, smaller files, download faster, best for files larger than 800 bytes; minification -- remove tabs and comments -- the browser doesn't need that, just give the processor what it needs, separate the wheat from the chaff. Images -- make images smaller; a 1024x1024 image for a checkmark, smush it, make it 33% smaller; ARO records the screen, it could probably be 9 times smaller.
    Download less stuff. 17% of HTTP content on mobile is duplicate data because of caching; reloading from cache is 75% to 99% faster than downloading again; 75% possible savings, which means the app will start up faster because it is using the cache -- everyone wants the app starting up in 2 seconds.
    Make fewer HTTP requests. Inlining and combining CSS and JS when possible reduces the number of requests; sprite images used often.
    Fewer connections. Faster and uses less battery. For example, download an image every 60 seconds, download an ad every 60 seconds, send analytics every 60 seconds -- instead of that, use a transaction manager, download everything at once, reduce the amount of time connected to the network by 40%. Also -- 80% of applications do NOT close connections when they are finished; e.g., download a picture, 10 seconds later the radio turns off; if you do not explicitly close, eventually the server closes: 38% more tail time, 40% less energy if you close the connection right away. Background data traffic is 27% of data and 55% of network time; this kills the battery.
    Look at redirection. Adds 200 to 600 ms on each connection; waterfall diagram of all the requests -- e.g., xyz.com redirects to www.xyz.com redirects to xyz.mobi to www.xyz.com; waterfall visualization of packets; minimize redirects, but redirects are fine.
    HTML best practices. Order matters, and watch for hidden code: JS downloading blocks rendering (always put CSS before JS, or load JS asynchronously); CSS 'display:none' hides images from the user, but the browser downloads them anyway, which adds latency to the application. Some apps turn on GPS for no reason. Tell the network when you are done, but maybe some other app is using the radio at the same time. It's all about knowing best practices: everyone wins with ARO (carriers, e.g., AT&T, developers, customers). Faster apps, better battery usage, better network traffic, better app reviews, happier customers. MBTA app referenced as an example. ARO is free, open source, can test all platforms.
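    As a minimal sketch of the "close connections when finished" advice in these notes, assuming a plain HttpURLConnection on Android (the URL is a hypothetical placeholder):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class PromptClose {
        static String fetch() throws Exception {
            // Hypothetical endpoint; batch requests together where possible
            // so the radio powers up once instead of once per call.
            URL url = new URL("https://api.example.com/data");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                return body.toString();
            } finally {
                // Release the connection as soon as the response is consumed,
                // rather than letting the server time it out (extra tail time).
                conn.disconnect();
            }
        }
    }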

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and HTTP networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread, and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge-case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix them are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include the following:
    Common mistake #1 - Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example:

    void Foo::OnHttpRequestComplete(statuscode status)
    {
        m_pBar->DoSomethingImportant(status);
    }

    void Foo::Shutdown()
    {
        m_pBar->Cleanup();
        delete m_pBar;
        m_pBar = nullptr;
    }

    So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is executing. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this:

    void Foo::OnHttpRequestComplete(statuscode status)
    {
        AutoLock lock(m_cs);
        m_pBar->DoSomethingImportant(status);
    }

    void Foo::Shutdown()
    {
        AutoLock lock(m_cs);
        m_pBar->Cleanup();
        delete m_pBar;
        m_pBar = nullptr;
    }

    The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process.
    Common mistake #2 - Fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case that the object just got updated by the other thread!
    Common mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer but forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, then spurious or late callbacks are invoked on an object in a state not expecting any more calls.
    There are other edge cases, but the bottom line is this: multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer and working out a more appropriate fix. But I suspect they are often confused on how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this.
    For each significant platform feature, have a dedicated single thread where all events and network callbacks get marshalled onto, similar to COM apartment threading in Windows with the use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following:
    - Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly.
    - Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions. (That is, a UML equivalent for discussing threads across objects and classes.)
    - Educating your development team on the issues with multithreaded code.
    What would you do?
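    As a minimal sketch of the dedicated-thread idea described above, assuming C++11 (the ComponentThread name is hypothetical, and a real refactoring would still need shutdown-ordering rules on top of this):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // All work for one component is marshalled onto one thread, so the
    // component's state never needs its own locks.
    class ComponentThread {
    public:
        ComponentThread() : m_stop(false), m_worker([this] { Run(); }) {}

        ~ComponentThread() {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_stop = true;
            }
            m_wake.notify_one();
            m_worker.join(); // drain remaining work before destruction
        }

        // Called from any thread: enqueue work instead of touching state directly.
        void Post(std::function<void()> task) {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_tasks.push(std::move(task));
            }
            m_wake.notify_one();
        }

    private:
        void Run() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(m_mutex);
                    m_wake.wait(lock, [this] { return m_stop || !m_tasks.empty(); });
                    if (m_tasks.empty()) return; // stopped and drained
                    task = std::move(m_tasks.front());
                    m_tasks.pop();
                }
                task(); // runs on the component's single thread
            }
        }

        std::mutex m_mutex;
        std::condition_variable m_wake;
        std::queue<std::function<void()>> m_tasks;
        bool m_stop;
        std::thread m_worker;
    };

    A component built on this would Post() its completion callbacks here rather than invoking them from pool threads, which is the moral equivalent of the COM single-threaded apartment mentioned above.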

    Read the article

  • How to run node.js app on port 80? Are processes blocking my port?

    - by Lucas
    I believe port 80 on my remote instance is blocked, and I am trying to run a node.js app on port 80. I have experimented with ports 3000 and 3002, and both ports work fine, but I get an error when running on port 80. I suspect port 80 is blocked from my output of netstat -an below, but how can I find the process IDs of the addresses that are blocking port 80 below?

    [lucas@ecoinstance]~/node/nodetest1$ netstat -an
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State
    tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
    tcp        0      0 0.0.0.0:3002            0.0.0.0:*               LISTEN
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51108         ESTABLISHED
    tcp        0      0 127.0.0.1:51106         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51106         ESTABLISHED
    tcp        0      0 127.0.0.1:51107         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 10.240.241.116:3002     174.61.171.61:36583     TIME_WAIT
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51109         ESTABLISHED
    tcp        0      0 10.240.241.116:42423    169.254.169.254:80      ESTABLISHED
    tcp        0      0 127.0.0.1:51108         127.0.0.1:27017         ESTABLISHED
    tcp        0    532 10.240.241.116:22       174.61.171.61:56824     ESTABLISHED
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51107         ESTABLISHED
    tcp        0      0 10.240.241.116:42412    169.254.169.254:80      ESTABLISHED
    tcp        0      0 127.0.0.1:51109         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 127.0.0.1:51105         127.0.0.1:27017         ESTABLISHED
    tcp        0      0 10.240.241.116:42422    169.254.169.254:80      TIME_WAIT
    tcp        0      0 127.0.0.1:27017         127.0.0.1:51105         ESTABLISHED
    tcp6       0      0 :::22                   :::*                    LISTEN
    udp        0      0 0.0.0.0:49948           0.0.0.0:*
    udp        0      0 0.0.0.0:68              0.0.0.0:*
    udp        0      0 10.240.241.116:123      0.0.0.0:*
    udp        0      0 127.0.0.1:123           0.0.0.0:*
    udp        0      0 0.0.0.0:123             0.0.0.0:*
    udp6       0      0 :::12151                :::*
    udp6       0      0 :::123                  :::*
    Active UNIX domain sockets (servers and established)
    Proto RefCnt Flags       Type       State         I-Node   Path
    unix  2      [ ACC ]     STREAM     LISTENING     405680   /tmp/ssh-KdkxJfFLpKTC/agent.22813
    unix  2      [ ACC ]     STREAM     LISTENING     408230   /tmp/ssh-ofUeNNEwAqtP/agent.22243
    unix  2      [ ACC ]     STREAM     LISTENING     416227   /tmp/mongodb-27017.sock
    unix  2      [ ACC ]     SEQPACKET  LISTENING     3692     /run/udev/control
    unix  7      [ ]         DGRAM                    5286     /dev/log
    unix  2      [ ACC ]     STREAM     LISTENING     5318     /var/run/acpid.socket
    unix  2      [ ACC ]     STREAM     LISTENING     16170    /tmp//tmux-1000/default
    unix  2      [ ACC ]     STREAM     LISTENING     414450   /var/run/dbus/system_bus_socket

    And here is the log when trying to run on port 80 with node.js:

    [lucas@ecoinstance]~/node/nodetest1$ npm start

    > [email protected] start /home/lucas/node/nodetest1
    > node ./bin/www

    events.js:72
            throw er; // Unhandled 'error' event
                  ^
    Error: listen EACCES
        at errnoException (net.js:904:11)
        at Server._listen2 (net.js:1023:19)
        at listen (net.js:1064:10)
        at Server.listen (net.js:1138:5)
        at Function.app.listen (/home/lucas/node/nodetest1/node_modules/express/lib/application.js:532:24)
        at Object.<anonymous> (/home/lucas/node/nodetest1/bin/www:7:18)
        at Module._compile (module.js:456:26)
        at Object.Module._extensions..js (module.js:474:10)
        at Module.load (module.js:356:32)
        at Function.Module._load (module.js:312:12)
    npm ERR! [email protected] start: `node ./bin/www`
    npm ERR! Exit status 8
    npm ERR!
    npm ERR! Failed at the [email protected] start script.
    npm ERR! This is most likely a problem with the nodetest1 package,
    npm ERR! not with npm itself.
    npm ERR! Tell the author that this fails on your system:
    npm ERR!     node ./bin/www
    npm ERR! You can get their info via:
    npm ERR!     npm owner ls nodetest1
    npm ERR! There is likely additional logging output above.
    npm ERR! System Linux 3.13-0.bpo.1-amd64
    npm ERR! command "/usr/local/bin/node" "/usr/local/bin/npm" "start"
    npm ERR! cwd /home/lucas/node/nodetest1
    npm ERR! node -v v0.10.28
    npm ERR! npm -v 1.4.9
    npm ERR! code ELIFECYCLE
    npm ERR!
    npm ERR! Additional logging details can be found in:
    npm ERR!     /home/lucas/node/nodetest1/npm-debug.log
    npm ERR! not ok code 0

    And sudo netstat -lnp does not return any matching port 80's:

    [lucas@ecoinstance]~/node/nodetest1$ sudo netstat -lnp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      29160/mongod
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1976/sshd
    tcp6       0      0 :::22                   :::*                    LISTEN      1976/sshd
    udp        0      0 0.0.0.0:49948           0.0.0.0:*                           1604/dhclient
    udp        0      0 0.0.0.0:68              0.0.0.0:*                           1604/dhclient
    udp        0      0 10.240.241.116:123      0.0.0.0:*                           2076/ntpd
    udp        0      0 127.0.0.1:123           0.0.0.0:*                           2076/ntpd
    udp        0      0 0.0.0.0:123             0.0.0.0:*                           2076/ntpd
    udp6       0      0 :::12151                :::*                                1604/dhclient
    udp6       0      0 :::123                  :::*                                2076/ntpd
    Active UNIX domain sockets (only servers)
    Proto RefCnt Flags       Type       State         I-Node   PID/Program name    Path
    unix  2      [ ACC ]     STREAM     LISTENING     405680   22814/ssh-agent     /tmp/ssh-KdkxJfFLpKTC/agent.22813
    unix  2      [ ACC ]     STREAM     LISTENING     408230   24049/ssh-agent     /tmp/ssh-ofUeNNEwAqtP/agent.22243
    unix  2      [ ACC ]     STREAM     LISTENING     416227   29160/mongod        /tmp/mongodb-27017.sock
    unix  2      [ ACC ]     SEQPACKET  LISTENING     3692     284/udevd           /run/udev/control
    unix  2      [ ACC ]     STREAM     LISTENING     5318     1798/acpid          /var/run/acpid.socket
    unix  2      [ ACC ]     STREAM     LISTENING     16170    5177/tmux           /tmp//tmux-1000/default
    unix  2      [ ACC ]     STREAM     LISTENING     414450   28213/dbus-daemon   /var/run/dbus/system_bus_socket
    unix  2      [ ACC ]     STREAM     LISTENING     404225   22324/1             /tmp/ssh-9TlDmu4bjl/agent.22324

    Read the article

  • Getting java.lang.ClassNotFoundException: com.mysql.jdbc.Driver Exception

    - by Yashwant Chavan
    Hi, I am getting the following exception while configuring the connection pool in Tomcat. This is Context.xml:

    <Context path="/DBTest" docBase="DBTest" debug="5" reloadable="true" crossContext="true">
        <!-- maxActive: Maximum number of dB connections in pool. Make sure you
             configure your mysqld max_connections large enough to handle all
             of your db connections. Set to -1 for no limit. -->
        <!-- maxIdle: Maximum number of idle dB connections to retain in pool.
             Set to -1 for no limit. See also the DBCP documentation on this
             and the minEvictableIdleTimeMillis configuration parameter. -->
        <!-- maxWait: Maximum time to wait for a dB connection to become available
             in ms, in this example 10 seconds. An Exception is thrown if this
             timeout is exceeded. Set to -1 to wait indefinitely. -->
        <!-- username and password: MySQL dB username and password for dB connections -->
        <!-- driverClassName: Class name for the old mm.mysql JDBC driver is
             org.gjt.mm.mysql.Driver - we recommend using Connector/J though.
             Class name for the official MySQL Connector/J driver is com.mysql.jdbc.Driver. -->
        <!-- url: The JDBC connection url for connecting to your MySQL dB. -->
        <Resource name="jdbc/TestDB" auth="Container" type="javax.sql.DataSource"
                  maxActive="100" maxIdle="30" maxWait="10000"
                  username="root" password="password"
                  driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql:///BUSINESS"/>
    </Context>

    This is the bean entry:

    <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiName" value="jdbc/TestDB"></property>
        <property name="resourceRef" value="true"></property>
    </bean>

    org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
        at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:82)
        at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:382)
        at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:458)
        at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:466)
        at com.businesscaliber.dao.Dao.getQueryForListMap(Dao.java:66)
        at com.businesscaliber.dao.MiscellaneousDao.getDefaultSucessStory(MiscellaneousDao.java:109)
        at com.businesscaliber.listeners.BusinessContextLoader.contextInitialized(BusinessContextLoader.java:40)
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3795)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4252)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:740)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:831)
        at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:720)
        at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490)
        at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1150)
        at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:736)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
        at org.apache.catalina.core.StandardService.start(StandardService.java:448)
        at org.apache.catalina.core.StandardServer.start(StandardServer.java:700)
        at org.apache.catalina.startup.Catalina.start(Catalina.java:552)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433)
    Caused by: org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
        at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1136)
        at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
        at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:113)
        at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:79)
        ... 30 more
    Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:164)
        at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1130)
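    For readers hitting the same trace: the root ClassNotFoundException above means the MySQL Connector/J jar is not on Tomcat's classpath (it generally belongs in Tomcat's lib directory, or common/lib on older versions). A minimal check, as a sketch (the jar name is a hypothetical placeholder):

    public class DriverCheck {
        public static void main(String[] args) {
            try {
                // Succeeds only when Connector/J is on the classpath.
                Class.forName("com.mysql.jdbc.Driver");
                System.out.println("Driver found");
            } catch (ClassNotFoundException e) {
                System.out.println("Driver missing: put mysql-connector-java-<version>.jar on the classpath");
            }
        }
    }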

    Read the article

< Previous Page | 44 45 46 47 48 49 50 51 52 53  | Next Page >