Search Results


  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! And you could be there, courtesy of the UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must-see seminar. You can also register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

    More information about the seminar

    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance.

    The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:

    - Debunk many of the ingrained misconceptions around SQL Server's behaviour
    - Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    - Explain how a common application design pattern can wreak havoc in the database
    - Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!

    Please Note: Agenda may be subject to change

    Sessions Abstracts

    KEYNOTE: Bridging the Gap Between Development and Production
    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA who can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong.

    SESSION ONE: SQL Server Mythbusters
    It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of the many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.

    SESSION TWO: Database Recovery Techniques Demo-Fest
    Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!

    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward
    Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - they simplify the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes GUIDs are even chosen for distributed databases and/or security. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!

    SESSION FOUR: Essential Database Maintenance
    In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.

    Speaker Biographies

    Paul S. Randal and Kimberly L. Tripp
    Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and was ultimately responsible for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL Server MCM certification for Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Read the article

  • How to apply programmatic changes to the Terrain SplatPrototype

    - by Shivan Dragon
    I have a script to which I add a Terrain object (I drag and drop the terrain in the public Terrain field). The Terrain is already set up in Unity to have 2 PaintTextures: the first is a square (set up with a tile size so that it forms a checkered pattern) and the second one is a grass image. Also, the Target Strength of the first PaintTexture is lowered so that the checkered pattern also reveals some of the grass underneath.

    Now I want, at run time, to change the Tile Size of the first PaintTexture, i.e. have more or fewer checkers depending on various run-time conditions. I've looked through Unity's documentation and I've seen you have the Terrain.terrainData.splatPrototypes array which allows you to change this. Also there's a RefreshPrototypes() method on the terrainData object and a Flush() method on the Terrain object. So I made a script like this:

        public class AStarTerrain : MonoBehaviour
        {
            public int aStarCellColumns, aStarCellRows;
            public GameObject aStarCellHighlightPrefab;
            public GameObject aStarPathMarkerPrefab;
            public GameObject utilityRobotPrefab;
            public Terrain aStarTerrain;

            void Start()
            {
                // I've also tried NOT drag and dropping the Terrain on the public field
                // and instead just using the commented line below, but I get the same results
                //aStarTerrain = this.GetComponents<Terrain>()[0];
                Debug.Log("Got terrain " + aStarTerrain.name);

                SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes;
                Debug.Log("Terrain has " + splatPrototypes.Length + " splat prototypes");

                SplatPrototype aStarCellSplat = splatPrototypes[0];
                Debug.Log("Re-tyling splat prototype " + aStarCellSplat.texture.name);
                aStarCellSplat.tileSize = new Vector2(2000, 2000);
                Debug.Log("Tyling is now " + aStarCellSplat.tileSize.x + "/" + aStarCellSplat.tileSize.y);

                aStarTerrain.terrainData.RefreshPrototypes();
                aStarTerrain.Flush();
            }
            //...

    The problem is, nothing gets changed; the checker map is not re-tiled. The console outputs correctly tell me that I've got the Terrain object with the right name, that it has the right number of splat prototypes, and that I'm modifying the tileSize on the SplatPrototype object corresponding to the right texture. It also tells me the value has changed. But nothing gets updated in the actual graphical view. So please, what am I missing?
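    One likely cause, sketched below rather than a verified fix: in Unity, the terrainData.splatPrototypes property getter returns a copy of the array, so mutating an element of that copy never touches the terrain itself. If that is what is happening here, the modified array has to be assigned back through the property setter before refreshing:

        // Hypothetical fix: write the modified array back through the setter,
        // since splatPrototypes hands out a copy rather than a live reference.
        SplatPrototype[] splatPrototypes = aStarTerrain.terrainData.splatPrototypes;
        splatPrototypes[0].tileSize = new Vector2(2000, 2000);
        aStarTerrain.terrainData.splatPrototypes = splatPrototypes; // the write-back
        aStarTerrain.terrainData.RefreshPrototypes();
        aStarTerrain.Flush();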

    Read the article

  • SQL Server MasterClass winner

    - by Testas
    The winner of the SQL Server MasterClass competition, courtesy of the UK SQL Server User Group and SQL Server Magazine, is:

    Steve Hindmarsh

    There is still time to register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

    More information about the seminar

    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    The one-day MasterClass details, session abstracts, and speaker biographies are as given in the competition announcement post above.

    Speaker Testimonials

    "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process."

    "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional."

    "When it comes to the instructors themselves, Kimberly and Paul simply have no equal. Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot-on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can't recall one that didn't go the way it was supposed to."

    "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient."

    "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone."

    "Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website's homegrown CMS and we are experiencing a significant performance increase. WOW... amazing tips delivered in an exciting way! Thanks again"

    Read the article

  • Subterranean IL: Filter exception handlers

    - by Simon Cooper
    Filter handlers are the second type of exception handler that isn't accessible from C#. Unlike the other handler types, which have defined conditions for when the handlers execute, filter lets you use custom logic to determine whether the handler should be run. However, similar to a catch block, the filter block does not get run if control flow exits the protected block without throwing an exception.

    Introducing filter blocks

    An example of a filter block in IL is the following:

        .try {
            // try block
        } filter {
            // filter block
            endfilter
        } {
            // filter handler
        }

    or, in v1 syntax:

        TryStart:
            // try block
        TryEnd:
        FilterStart:
            // filter block
        HandlerStart:
            // filter handler
        HandlerEnd:
        .try TryStart to TryEnd filter FilterStart handler HandlerStart to HandlerEnd

    In the v1 syntax there is no end label specified for the filter block. This is because the filter block must come immediately before the filter handler; the end of the filter block is the start of the filter handler.

    The filter block indicates to the CLR whether the filter handler should be executed using a boolean value on the stack when the endfilter instruction is run: true/non-zero if it is to be executed, false/zero if it isn't. At the start of the filter block, and the corresponding filter handler, a reference to the exception thrown is pushed onto the stack as a raw object (you have to manually cast to System.Exception). The allowed IL inside a filter block is tightly controlled; you aren't allowed branches outside the block, rethrow instructions, or other exception handling clauses. You can, however, use call and callvirt instructions to call other methods.

    Filter block logic

    To demonstrate filter block logic, in this example I'm filtering on whether there's a particular key in the Data dictionary of the thrown exception:

        .try {
            // try block
        } filter {
            // Filter starts with exception object on stack
            // C# code: ((Exception)e).Data.Contains("MyExceptionDataKey")
            // only execute handler if Contains returns true
            castclass [mscorlib]System.Exception
            callvirt instance class [mscorlib]System.Collections.IDictionary [mscorlib]System.Exception::get_Data()
            ldstr "MyExceptionDataKey"
            callvirt instance bool [mscorlib]System.Collections.IDictionary::Contains(object)
            endfilter
        } {
            // filter handler
            // Also starts off with exception object on stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
        }

    Conclusion

    Filter exception handlers are another exception handler type that isn't accessible from C#; however, just like fault handlers, the behaviour can be replicated using a normal catch block:

        try {
            // try block
        } catch (Exception e) {
            if (!FilterLogic(e)) throw;
            // handler logic
        }

    So it's not that great a loss, but it's still annoying that this functionality isn't directly accessible. Well, every feature starts off with minus 100 points, so it's understandable why something like this didn't make it into the C# compiler ahead of a different feature.
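    To make the catch-based replication concrete, here is a small self-contained C# sketch of the same Data-dictionary filter shown in the IL example above (FilterLogic is my illustrative name, not something from the original IL):

        using System;

        static class FilterDemo
        {
            // Replicates the IL filter block: handle only when the key is present.
            static bool FilterLogic(Exception e)
            {
                return e.Data.Contains("MyExceptionDataKey");
            }

            static void Main()
            {
                try
                {
                    var ex = new InvalidOperationException("boom");
                    ex.Data["MyExceptionDataKey"] = true;
                    throw ex;
                }
                catch (Exception e)
                {
                    if (!FilterLogic(e)) throw;      // rethrow when the "filter" says no
                    Console.WriteLine(e.ToString()); // the "filter handler" logic
                }
            }
        }

    One real semantic difference the catch/rethrow version cannot replicate: a genuine IL filter runs before the stack unwinds, whereas the rethrow approach unwinds to the catch block first.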

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 21 (sys.dm_db_partition_stats)

    - by Tamarick Hill
    The sys.dm_db_partition_stats DMV returns page count and row count information for each table or index within your database. Let's have a quick look at this DMV so we can review some of the results.

    **NOTE: I am going to create an 'ObjectName' column in our result set so that we can more easily identify tables.

        SELECT object_name(object_id) ObjectName, * FROM sys.dm_db_partition_stats

    As stated above, the first column in our result set is an object name based on the object_id column of this result set. The partition_id column refers to the partition_id of the index in question. Each index will have at least 1 unique partition_id, and will have more depending on whether the object has been partitioned. The index_id column relates back to the sys.indexes table and uniquely identifies an index on a given object. A value of 0 (zero) in this column indicates the object is a HEAP, and a value of 1 (one) signifies the clustered index. Next is the partition_number, which signifies the number of the partition for a particular object_id. Since none of the tables in my result set have been partitioned, they all display 1 for the partition_number.

    Next we have the in_row_data_page_count, which tells us the number of data pages used to store in-row data for a given index. The in_row_used_page_count is the number of pages used to store and manage the in-row data. If we look at the first row in the result set, we will see we have 700 for this column and 680 for the previous one. This means that just managing the data (not storing it) requires 20 pages. The next column, in_row_reserved_page_count, is how many pages have been reserved, regardless of whether they are being used or not. The next 2 columns are used for storing LOB (Large Object) data, which could be text, image, varchar(max), or varbinary(max) columns. The two row_overflow columns after that represent pages used for data that exceeds the 8,060-byte row size limit for in-row data pages. The used_page_count and reserved_page_count columns represent the sum of the in_row, lob, and row_overflow columns discussed earlier. Lastly is a row_count column, which displays the number of rows that are in a particular index.

    This DMV is a very powerful resource for identifying page and row count information. By knowing the page counts for indexes within your database, you are able to easily calculate the size of indexes: since each SQL Server page is 8 KB, an index with 700 used pages, for example, occupies 700 x 8 KB = 5,600 KB, or roughly 5.5 MB (see the sketch following this post). For more information on this DMV, please see the Books Online link below:

    http://msdn.microsoft.com/en-us/library/ms187737.aspx

    Follow me on Twitter @PrimeTimeDBA
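    As an illustration of that size calculation (a sketch only: the connection string is hypothetical and the 8 KB-per-page arithmetic in the query is my addition, not part of the original post), the DMV can be queried from client code like any other view:

        using System;
        using System.Data.SqlClient;

        class IndexSizes
        {
            static void Main()
            {
                // Hypothetical connection string; point it at the database to measure.
                const string connStr = "Server=.;Database=MyDatabase;Integrated Security=true";
                const string sql =
                    @"SELECT object_name(object_id) AS ObjectName,
                             index_id,
                             used_page_count * 8 / 1024.0 AS UsedMB  -- 8 KB pages -> MB
                      FROM sys.dm_db_partition_stats";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0} (index {1}): {2:F2} MB",
                                reader["ObjectName"], reader["index_id"], reader["UsedMB"]);
                        }
                    }
                }
            }
        }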

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Back in one of my three original "Little Wonders" Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET. Later, someone posted a comment saying that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates. For this week, I'll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes.

    Quick Delegate Recap

    Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method. They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate.

    Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or lambda expression. They can also be null (referring to no method), so care should be taken to make sure that the delegate is not null before you invoke it.

    Delegates are defined using the keyword delegate, where the delegate's type name is placed where you would typically place the method name:

        // This delegate matches any method that takes string, returns nothing
        public delegate void Log(string message);

    This defines a delegate type named Log that can be used to store references to any method(s) that satisfy its signature (whether instance, static, lambda expression, etc.).

    Delegate instances can then be assigned zero (null) or more methods using the operator =, which replaces the existing delegate chain, or the operator +=, which adds a method to the end of a delegate chain:

        // creates a delegate instance named currentLogger defaulted to Console.WriteLine (static method)
        Log currentLogger = Console.Out.WriteLine;

        // invokes the delegate, which writes to the console out
        currentLogger("Hi Standard Out!");

        // append a delegate to Console.Error.WriteLine to go to std error
        currentLogger += Console.Error.WriteLine;

        // invokes the delegate chain and writes message to std out and std err
        currentLogger("Hi Standard Out and Error!");

    While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly; for this purpose, the generic delegates were introduced in various stages in .NET. These support various method types with particular signatures.

    Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contain ref or out parameters. If you want a delegate to represent methods that take ref or out parameters, you will need to create a custom delegate.
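    For instance, here is a minimal sketch of such a custom delegate (this example is mine, not from the original post) that can hold a method with an out parameter, such as int.TryParse:

        using System;

        class CustomDelegateDemo
        {
            // Func<string, int, bool> cannot represent this shape because of the
            // out parameter, so we declare a custom delegate type for it.
            public delegate bool TryParser<T>(string input, out T result);

            static void Main()
            {
                TryParser<int> parser = int.TryParse;

                int value;
                if (parser("42", out value))
                {
                    Console.WriteLine("Parsed: " + value);
                }
            }
        }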
    We've got the Func… delegates

    Just like its cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic. Using them keeps us from having to define a new delegate type when we need to make a class or algorithm generic.

    Remember that the point of the Action delegate family was to be able to perform an "action" on an item, with no return results. Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void. The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result. Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, but Funcs return a result.

    The Func family of delegates have signatures as follows:

    - Func<TResult> – matches a method that takes no arguments, and returns a value of type TResult.
    - Func<T, TResult> – matches a method that takes an argument of type T, and returns a value of type TResult.
    - Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2, and returns a value of type TResult.
    - Func<T1, T2, …, TResult> – and so on up to 16 arguments, returning a value of type TResult.

    These are handy because they quickly allow you to specify that a method or class you design will perform a function to produce a result, as long as the method you specify meets the signature.

    For example, let's say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc…). To do this, we would ask the user of our class to pass in a method that would take the current total and the next value, and produce a new total. A class like this could look like:

        public sealed class Aggregator<TValue, TResult>
        {
            // holds method that takes previous result, combines with next value, creates new result
            private Func<TResult, TValue, TResult> _aggregationMethod;

            // gets or sets the current result of aggregation
            public TResult Result { get; private set; }

            // construct the aggregator given the method to use to aggregate values
            public Aggregator(Func<TResult, TValue, TResult> aggregationMethod)
            {
                if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod");

                _aggregationMethod = aggregationMethod;
            }

            // method to add next value
            public void Aggregate(TValue nextValue)
            {
                // performs the aggregation method function on the current result and next and sets to current result
                Result = _aggregationMethod(Result, nextValue);
            }
        }

    Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of a max response time for a service).
    We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things):

        // creates an aggregator that adds the next to the total to sum the values
        var sumAggregator = new Aggregator<int, int>((total, next) => total + next);

        // creates an aggregator (using static method) that returns the max of previous result and next
        var maxAggregator = new Aggregator<int, int>(Math.Max);

    So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method:

        // total will be 13 and max 13
        int responseTime = 13;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 20 and max still 13
        responseTime = 7;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

        // total will be 40 and max now 20
        responseTime = 20;
        sumAggregator.Aggregate(responseTime);
        maxAggregator.Aggregate(responseTime);

    The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of the method or user of the class to specify a function to be performed in order to generate a result.

    What is the result of a Func delegate chain?

    If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them. So how does this affect delegates such as Func that return a value, when applied to something like the code below?

        Func<int, int, int> combo = null;

        // What if we wanted to aggregate the sum and max together?
        combo += (total, next) => total + next;
        combo += Math.Max;

        // what is the result?
        var comboAggregator = new Aggregator<int, int>(combo);

    Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain. Thus, this aggregator would always produce the Math.Max() result. The other chained method (the sum) gets executed first, but its result is thrown away:

        // result is 13
        int responseTime = 13;
        comboAggregator.Aggregate(responseTime);

        // result is still 13
        responseTime = 7;
        comboAggregator.Aggregate(responseTime);

        // result is now 20
        responseTime = 20;
        comboAggregator.Aggregate(responseTime);

    So remember, you can chain multiple Func delegates (or other delegates that return values) together, but if you do so you will only get the last executed result.

    Func delegates and co-variance/contra-variance in .NET 4.0

    Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments. In addition, it is co-variant on its return type. To support this, in .NET 4.0 the signatures of the Func delegates changed to:

    - Func<out TResult> – matches a method that takes no arguments, and returns a value of type TResult (or a more derived type).
    - Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns a value of type TResult (or a more derived type).
    - Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns a value of type TResult (or a more derived type).
    - Func<in T1, in T2, …, out TResult> – and so on up to 16 arguments, returning a value of type TResult (or a more derived type).

    Notice the addition of the in and out keywords before each of the generic type placeholders. As we saw last week, the in keyword is used to specify that a generic type can be contra-variant: it can match the given type or a type that is less derived. The out keyword is used to specify that a generic type can be co-variant: it can match the given type or a type that is more derived.

    On contra-variance, if you say you need a function that will accept a string, you can just as easily give it a function that accepts an object. In other words, if you say "give me a function that will process dogs", I could pass you a method that will process any animal, because all dogs are animals. On the co-variance side, if you say you need a function that returns an object, you can just as easily pass it a function that returns a string, because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived. Once again, in other words, if you say "give me a method that creates an animal", I can pass you a method that will create a dog, because all dogs are animals.

    It really all makes sense: you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result. In other words, pay attention to the direction the item travels (parameters go in, results come out). Keeping that in mind, you can always pass more specific things in and return more specific things out.

    For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well:

        // since Func<object> is co-variant, this will accept Func<string>, etc...
        public static string Sequence(int count, Func<object> generator)
        {
            var builder = new StringBuilder();

            for (int i = 0; i < count; i++)
            {
                object value = generator();
                builder.Append(value);
            }

            return builder.ToString();
        }

    Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well:

        // delegate that's typed to return string.
        Func<string> stringGenerator = () => DateTime.Now.ToString();

        // This will work in .NET 4.0, but not in previous versions
        Sequence(100, stringGenerator);

    Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign a Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived than (or the same as) Y, and BResult is more derived than (or the same as) ZResult.

    Sidebar: The Func and the Predicate

    A method that takes one argument and returns a bool is generally thought of as a predicate. Predicates are used to examine an item and determine whether that item satisfies a particular condition. Predicates are typically unary, but you may also have binary and other predicates as well.
    Predicates are often used to filter results, such as in the LINQ Where() extension method:

        var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 };

        // call Where() using a predicate which determines if the number is even
        var evens = numbers.Where(num => num % 2 == 0);

    As of .NET 3.5, predicates are typically represented as Func<T, bool>, where T is the type of the item to examine. Prior to .NET 3.5, there was a Predicate<T> type that tended to be used (which we'll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion with overloads that accept unary predicates and binary predicates, etc.:

        // this seems more confusing as an overload set, because of Predicate vs Func
        public static void SomeMethod(Predicate<int> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

        // this seems more consistent as an overload set, since it just uses Func
        public static void SomeMethod(Func<int, bool> unaryPredicate) { }
        public static void SomeMethod(Func<int, int, bool> binaryPredicate) { }

    Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types! Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa:

        // the same method, lambda expression, etc. can be assigned to both
        Predicate<int> isEven = i => (i % 2) == 0;
        Func<int, bool> alsoIsEven = i => (i % 2) == 0;

        // but the delegate instances cannot be directly assigned, strongly typed!
        // ERROR: cannot convert type...
        // isEven = alsoIsEven;

        // however, you can assign by wrapping in a new instance:
        isEven = new Predicate<int>(alsoIsEven);
        alsoIsEven = new Func<int, bool>(isEven);

    So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above.

    Sidebar: Func as a Generator for Unit Testing

    One area of difficulty in unit testing can be code that is based on time of day. We'd still want to unit test our code to make sure the logic is accurate, but we don't want the results of our unit tests to be dependent on the time they are run.

    One way (of many) around this is to create an internal generator that will produce the "current" time of day. This would default to returning the result from DateTime.Now (or some other method), but we could inject specific times for our unit testing. Generators are typically methods that return (generate) a value for use in a class/method.

    For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if its age is more than 30 seconds.
    Such a class could look like:

        // responsible for maintaining an item of type T in the cache
        public sealed class CacheItem<T>
        {
            // helper method that returns the current time
            private static Func<DateTime> _timeGenerator = () => DateTime.Now;

            // allows internal access to the time generator
            internal static Func<DateTime> TimeGenerator
            {
                get { return _timeGenerator; }
                set { _timeGenerator = value; }
            }

            // time the item was cached
            public DateTime CachedTime { get; private set; }

            // the item cached
            public T Value { get; private set; }

            // item is expired if older than 30 seconds
            public bool IsExpired
            {
                get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); }
            }

            // creates the new cached item, setting cached time to "current" time
            public CacheItem(T value)
            {
                Value = value;
                CachedTime = _timeGenerator();
            }
        }

    Then, we can use this construct to unit test our CacheItem<T> without any time dependencies:

        var baseTime = DateTime.Now;

        // start with current time stored above (so doesn't drift)
        CacheItem<int>.TimeGenerator = () => baseTime;

        var target = new CacheItem<int>(13);

        // now add 15 seconds, should still be non-expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15);

        Assert.IsFalse(target.IsExpired);

        // now add 31 seconds, should now be expired
        CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31);

        Assert.IsTrue(target.IsExpired);

    Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc. Func delegates can be a handy tool for this type of value generation to support more testable code.

    Summary

    Generic delegates give us a lot of power to make truly generic algorithms and classes. The Func family of delegates is a great way to specify functions that calculate a result based on 0-16 arguments. Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!

    Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • View Link inConsistency

    - by Abhishek Dwivedi
    What is View Link Consistency?

    When multiple instances (say VO1, VO2, VO3, etc.) of an EO-based VO are based on the same underlying EO, a new row created in one of these VO instances (say VO1) can be automatically added (without re-query) to the row sets of the others (VO2, VO3, etc.). This capability is known as view link consistency. This feature works for any VO for which it is enabled, regardless of whether it is involved in a view link or not.

    What causes View Link inConsistency?

    Unless jbo.viewlink.consistent is disabled for the VO (or globally), or setAssociationConsistent(false) is applied, any of the following can cause view link inconsistency:

    1. setWhereClause
    2. Unreferenced secondary EO
    3. findByViewCriteria()
    4. Using a view link accessor row set

    Why does this happen?

    There can be one of the following reasons.

    a. In cases 1 and 2, the view link consistency flag is disabled on that view object.

    b. As far as 3 is concerned, findByViewCriteria is used to retrieve a new row set to process programmatically without changing the contents of the default row set. In this case, unlike the previous cases, the view link consistency flag is not disabled, meaning that changes in the default row set will be reflected in the new row set. However, the opposite doesn't hold true. For instance, if a row is deleted from this new row set, the corresponding row in the default row set does not get deleted. In one of my features, which involved deletion of row(s), I resolved the view link inconsistency issue by replacing findByViewCriteria with applyViewCriteria.

    c. For 4, it's similar to 3 - whenever a view link accessor row set is retrieved, a new row set is created. Now, creating a new row set does not mean re-executing the query each time, only creating a new instance of a RowSet object with its default iterator reset to the "slot" before the first row. Also, please note that this new row set always originates from an internally created view object instance, not one that you added to the data model. This internal view object instance is created as needed and added with a system-defined name to the root application module. The very reason a distinct, internally-created view object instance is used is to guarantee that it remains unaffected by developer-related changes to their own view object instances in the data model.

    Read the article

  • Creating predefined camera views - How do I move the camera to make sense while using a controller?

    - by Deukalion
    I'm trying to understand 3D, but the one thing I can't seem to understand is the camera. Right now I'm rendering four 3D cubes with textures, and I set the projection matrix:

        public BasicCamera3D(float fieldOfView, float aspectRatio, float clipStart, float clipEnd,
            Vector3 cameraPosition, Vector3 cameraLookAt)
        {
            projection_fieldOfView = MathHelper.ToRadians(fieldOfView);
            projection_aspectRatio = aspectRatio;
            projection_clipstart = clipStart;
            projection_clipend = clipEnd;
            matrix_projection = Matrix.CreatePerspectiveFieldOfView(projection_fieldOfView,
                aspectRatio, clipStart, clipEnd);

            view_cameraposition = cameraPosition;
            view_cameralookat = cameraLookAt;
            matrix_view = Matrix.CreateLookAt(cameraPosition, cameraLookAt, Vector3.Up);
        }

        BasicCamera3D gameCamera = new BasicCamera3D(45f, GraphicsDevice.Viewport.AspectRatio,
            1.0f, 1000f, new Vector3(0, 0, 8), new Vector3(0, 0, 0));

    This creates a sort of "top-down" camera, 8 units back (I still don't get the unit type here - it's not pixels, I guess?). But if I try to position the camera at the side to make a "side view" or "reverse side view" camera, the camera rotates too much, until it has turned around completely a couple of times.

    I render the boxes at:

        new Vector3(-1, 0, 0)
        new Vector3(0, 0, 0)
        new Vector3(1, 0, 0)
        new Vector3(1, 0, 1)

    With the top-down camera they show up fine, but I don't get how I can make the camera show the side, or 45 degrees from the top (like 3rd-person action games), because the logic doesn't make sense to me. Also, since every object you render needs a new BasicEffect with a new projection/view/world - can you still use the "same" camera always, so you don't have to create a new view matrix and such for each object? It seems weird.

    If someone could help me get the camera to navigate around my objects "naturally", so that I'm able to set a few predetermined views to choose from, it would be really helpful. Is there some sort of algorithm to calculate the view for this, perhaps not simply one value?

    Examples:

    Top-down view: I have an object at 0, 0, 0. When I turn the right stick on the Xbox 360 controller, the camera should rotate around that object - not flip and turn upside down, disappear, and then magically appear as a tiny dot somewhere I have no clue where, like it feels like it does now.

    Side view: I have an object at 0, 0, 0. When I rotate to the sides or up and down, the camera should be able to show a little more of the periphery on each side (depending on which you look at), and the same while moving up or down.
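    One common way to get this behaviour (a sketch under assumptions: the yaw/pitch fields, gamepad wiring, speeds, and clamp values are mine, not from the question) is to stop storing a fixed camera position and instead derive it every frame from two angles orbiting the target. The right stick then only ever changes angles, and clamping pitch stops the camera from flipping over the top:

        // Orbit-camera sketch for XNA: position derived from yaw/pitch around a target.
        float yaw = 0f;                     // rotation around the target's Y axis
        float pitch = -MathHelper.PiOver4;  // 45 degrees down, 3rd-person style
        float distance = 8f;                // same world units as the cube positions
        Vector3 target = Vector3.Zero;      // the object being orbited

        void UpdateCamera(GamePadState pad, float elapsedSeconds)
        {
            yaw   -= pad.ThumbSticks.Right.X * elapsedSeconds * 2f;
            pitch += pad.ThumbSticks.Right.Y * elapsedSeconds * 2f;

            // clamp pitch so the camera never goes over the top and flips
            pitch = MathHelper.Clamp(pitch, -MathHelper.PiOver2 + 0.1f, -0.1f);

            // start from a point 'distance' behind the target, then rotate it
            Vector3 offset = Vector3.Transform(
                new Vector3(0, 0, distance),
                Matrix.CreateRotationX(pitch) * Matrix.CreateRotationY(yaw));

            Matrix view = Matrix.CreateLookAt(target + offset, target, Vector3.Up);
            // hand 'view' to the BasicEffect of every object; one camera serves them all
        }

    Predetermined views then just become preset (yaw, pitch) pairs, e.g. near-top-down versus side-on. And on the BasicEffect question: the same view and projection matrices can be shared across every effect; only the world matrix needs to change per object.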

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx

    Kent Beck's XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you'll see from the above link, the list has slightly evolved over time. I find today they are usually listed as:

    1. All Tests Pass
    2. Don't Repeat Yourself (DRY)
    3. Express Intent
    4. Minimalistic

    These are prioritized. If your code doesn't work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else.

    Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I've debated it myself. This recent post got me thinking about this again, hence this post.

    I don't think it is fair to compare "Expressing Intent" against "DRY". This is a comparison of apples to oranges. "Expressing Intent" is a principle of code quality. "Repeating Yourself" is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred.

    For example, "using nouns for method names", "using verbs for property names", or "using Booleans for parameters" are all code smells that indicate that code probably isn't doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good of an indicator is it?

    Duplication in the code base is bad for a couple of reasons. If a change needs to be made in a number of locations, it is difficult to know whether you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication, you are left with only one place to change, thereby eliminating this problem.

    With most projects the code becomes the single source of truth for a project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared the current reality (or truth). Requirements or design documents at this age in a project life cycle are usually of little value.

    Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

    The code smell of Duplication points to a violation of the "Single Source of Truth" principle. Let me define that as:

    A stakeholder's requirement for a software change should never cause more than one class to change.

    Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle.

    To illustrate this, let's look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.
    Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, and so on. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

    Then come the divergent requirements. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer's privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer's loyalty program point total.

    After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle:

    There should never be more than one reason for a class to change.

    In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That's a problem because it can often lead the project owner to pit the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn't exist. There are two completely valid truths that the developers need to support. How is this to be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy (see the sketch after this post).

    The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility; other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

    In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you'll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY, or that the code is bad.
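    To make the retail example concrete, here is a minimal C# sketch (the type and member names are mine, invented for illustration) of the end state described above: rather than one shared data access class serving two masters, each stakeholder owns a deliberately duplicated model that can evolve independently:

        // Owned by the bank-transaction stakeholder: captures security-driven fields.
        class BankTransactionRecord
        {
            public decimal Amount { get; set; }
            public string AccountNumber { get; set; }   // full number, required by the bank
            public byte[] CustomerPhoto { get; set; }   // enhanced-security requirement
            public string SignatureOrPin { get; set; }
        }

        // Owned by the receipt stakeholder: same transaction, different concerns.
        class ReceiptRecord
        {
            public decimal Amount { get; set; }
            public string MaskedAccountNumber { get; set; } // privacy requirement: "****1234"
            public int LoyaltyPointTotal { get; set; }      // pulled from a separate system
        }

    The overlap in Amount (and similar fields) is duplication, but it is deliberate: each class now has exactly one reason to change, so neither stakeholder's requirements can break the other's feature.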

    Read the article

  • Translating Your Customizations

    - by Richard Bingham
    This blog post explains the basics of translating the customizations you can make to Fusion Applications products, covering both composer-based customizations and the generic design-time customizations done via JDeveloper.

    Introduction

    Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language; in Release 7 this can be supplemented with up to 22 additional language packs (in Oracle Cloud production environments, languages are pre-installed already). As such, many organizations offer their users the option of working in their local language, and logically that should also apply to any customizations as well.

    Composer-based UI Customizations

    Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and store the customization in the MDS repository accordingly. As such, the new or changed values will only apply for the same language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the "Select Text Resource" menu option while editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below that the "What's New" custom value I have already defined using the 'Select Text Resource' feature is internally using the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle.

    Figure 1 – Page Composer showing the override bundle being used.

    Business Objects

    Customizing the Business Objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating the values for these fields into other installed languages requires loading additional resource bundles, again as described below.

    Reports and Analytics

    Most customizations to Reports and BI Analytics are essentially reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the user's session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand new reports and analytics requires another method of loading the translated strings, as part of what is known as 'localizing' the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrators console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined with one template per locale, and in addition provide an extra process for getting the data for translation and reloading. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel, which shows the screen for both setting the template locale and running an export for translation.
    Fusion Applications Menus

    Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the change or new menu item will apply universally, not per language. This is set to change in a future release with the new UI Text Editor feature described below.

    More on Resource Bundles

    As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an industry open standard (OASIS) format XML file, with the extension .xliff, which stores translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again. This needs to be done by an administrator, either via WLST commands or through Enterprise Manager, as per the screenshot below. It is detailed out in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist.

    Figure 2 – Enterprise Manager's MDS export used for getting resource bundles for manual translation, and re-importing them on the same screen.

    All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such, each language bundle can be easily identified and updated. Similarly, if you used JDeveloper to create your own applications as extensions to Fusion Applications, you would use the native support for resource bundles and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see the chapters on "Using Automatic Resource Bundle Integration in JDeveloper" and "Manually Defining Resource Bundles and Locales" in the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.

    FND Messages and Look-ups

    FND Messages, as defined here, are not used for UI labels (those are known as 'strings'), but are the responses back to users as a result of an action, such as from a page submit. Each message is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs; however, if you customize the messages via the "Manage Messages" task in Functional Setup Manager, or add new ones, then currently (in Release 7) you'll need to repeat the customization for each language.

    Figure 3 – An FND Message defined in an English user session.

    Similarly, Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the user's session language and making the change in the Setup and Maintenance task entitled "Manage [Standard|Common] Look-ups".

    Online Help

    Yes, in fact all the seeded help is applied as part of each language pack install during the post-install provisioning process. If you are editing or adding custom online help, the Create Help screen provides a drop-down for which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you'll need to do it for each language in use.

    What is Coming for Translations?
What is Coming for Translations?
Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Applications. It will provide a search based on a particular term or word, say “Worker”, and will allow it to be adjusted, say to “Employee”, which then updates all the Resource Bundles that contain it. In multi-language environments, it will use the user's session language (locale) to know which Resource Bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved. It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to overwrite the global changes done via the UI Text Editor, allowing special context-sensitive values to still be used.

Further Reading and Resources
The following short list provides the main resources for digging into more detail on translation support for both Composer and JDeveloper customization projects.
- There is a dedicated chapter entitled “Translating Custom Text” in the Fusion Applications Extensibility Guide. It has good examples and steps for many tasks, especially administering resource bundles.
- Using localization formatting (numbers, dates, etc.) for design-time changes is well documented in the Fusion Applications Developer Guide.
- For more guidelines on general design-time globalization, see either the ‘Internationalizing and Localizing Pages’ chapter in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide.
- The Oracle Architecture ‘A-Team’ published a recent post on customizing the user session timeout popup, using design-time changes to resource bundles. It has detailed step-by-step examples which can be a useful illustration.

    Read the article

  • Using ASP.NET Membership Provider with an ACL

    - by geekrutherford
    Up until recently one of my applications used the membership provider within ASP.NET exclusively. However, it has been proposed that while the currently defined roles are beneficial, security needs to be more granular to restrict both access to certain pages and functionality present within a given page.

Unfortunately, the role-based security ASP.NET gives you out of the box falls down in this area. This is not due to a lack of foresight by Microsoft; rather, it was simply not designed for implementing both role-based security and any inherent ACL you may define within those roles. Mind you, some would say an ACL is independent of the role to which a user belongs and is assigned to the user directly.

The application mentioned here has its own User object (which encapsulates the membership provider user object as a property) and a SQL Server table to store extended information not present in the aspnet_users table. While I could have modified the aspnet membership schema to suit the application's needs, it seemed smarter to simply create a separate table with a foreign key back to the aspnet_users table.

Since I have a separate object to store extended user information, I simply created an ACL object and exposed it as a property of my user object.

This is all well and good, but it does not help in regards to the SiteMapProvider and restricting access at the page level based on the user's ACL.

The straightforward answer would be to develop some code within the databound event for the menu that checks the page title and has hardcoded logic dictating that a user must have certain permissions turned on. The problem with this approach is that it's HARDCODED!!! If you need to change access to a page you'd need to do a build and go through your normal deployment process... ugh!!!

An alternative method, albeit not perfect, is to utilize the resourceKey property on the SiteMapNodes in the SiteMap file with the name of the required permission to view the page. Within the databound event for your menu you iterate the SiteMapNodes in the menu's SiteMapProvider, looking for a match at the page level based on title. When a match is detected, you switch/case on the SiteMapNode's resourceKey (the name of the ACL permission required). The case for the resourceKey ensures the user's ACL permission is turned on, and voilà!!!

This is notably not perfect in that it uses the resourceKey in a manner other than intended. Since the application is not localized, using it in the manner described is not an issue.

Below is a sample SiteMap file with the resourceKey used as the ACL permission identifier, followed by the ItemDataBound event (this application uses the Telerik Menu control):
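The post's actual samples did not survive extraction, so what follows is a minimal reconstruction of the approach described rather than the author's original code. The page names, permission keys, and the CurrentUser.Acl object are invented for illustration, and the handler is sketched against the standard ASP.NET Menu control's MenuItemDataBound event; the Telerik RadMenu's ItemDataBound event mentioned above is used in the same way.

<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
  <siteMapNode url="~/Default.aspx" title="Home">
    <siteMapNode url="~/Reports.aspx" title="Reports" resourceKey="CanViewReports" />
    <siteMapNode url="~/Admin.aspx" title="Administration" resourceKey="CanAdminister" />
  </siteMapNode>
</siteMap>

protected void Menu1_MenuItemDataBound(object sender, MenuEventArgs e)
{
    // When the menu is bound to a SiteMapDataSource, each item carries its SiteMapNode
    SiteMapNode node = e.Item.DataItem as SiteMapNode;
    if (node == null || string.IsNullOrEmpty(node.ResourceKey))
        return; // no ACL permission required for this page

    bool allowed = true;
    switch (node.ResourceKey)
    {
        case "CanViewReports":
            allowed = CurrentUser.Acl.CanViewReports; // hypothetical ACL flags
            break;
        case "CanAdminister":
            allowed = CurrentUser.Acl.CanAdminister;
            break;
    }

    if (!allowed)
    {
        // MenuItem exposes no Visible property, so remove the item from the menu;
        // removing during databinding works here but deserves testing in practice
        if (e.Item.Parent != null)
            e.Item.Parent.ChildItems.Remove(e.Item);
        else
            ((Menu)sender).Items.Remove(e.Item);
    }
}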

    Read the article

  • How to Make Objects Fall Faster in a Physics Simulation

    - by David Dimalanta
    I used collision physics (i.e. Box2D, Physics Body Editor) and implemented it in the Java code. I'm trying to make the fall speed higher, following these examples: an object falls slower if it is light (i.e. a feather), and faster depending on the object (i.e. a pebble, rock, or car). I decided to double the falling speed for more excitement. I tried adding mass, but the falling speed stays constant instead of increasing. Check my code, which I put in the input processor's touchUp() method, in the same class that implements both InputProcessor and Screen:

@Override
public boolean touchUp(int screenX, int screenY, int pointer, int button) {
    // TODO Touch Up Event
    if (is_Next_Fruit_Touched) {
        BodyEditorLoader Fruit_Loader = new BodyEditorLoader(Gdx.files.internal("Shape_Physics/Fruity Physics.json"));
        Fruit_BD.type = BodyType.DynamicBody;
        Fruit_BD.position.set(x, y);
        FixtureDef Fruit_FD = new FixtureDef(); // --> Allows you to make the object's physics.
        Fruit_FD.density = 1.0f;
        Fruit_FD.friction = 0.7f;
        Fruit_FD.restitution = 0.2f;
        MassData mass = new MassData();
        mass.mass = 5f;
        Fruit_Body[n] = world.createBody(Fruit_BD);
        Fruit_Body[n].setActive(true); // --> Let your dragon fall.
        Fruit_Body[n].setMassData(mass);
        Fruit_Body[n].setGravityScale(1.0f);
        System.out.println("Eggs... " + n);
        Fruit_Loader.attachFixture(Fruit_Body[n], Body, Fruit_FD, Fruit_IMG.getWidth());
        Fruit_Origin = Fruit_Loader.getOrigin(Body, Fruit_IMG.getWidth()).cpy();
        is_Next_Fruit_Touched = false;
        up = y;
        Gdx.app.log("Initial Y-coordinate", "Y at " + up);
        // Once it's touched, the next fruit will be set to drag.
        if (n < 50) {
            n++;
        } else {
            System.exit(0);
        }
    }
    return true;
}

And take note, in the show() method the view size of the camera is 720x1280:

camera_1 = new OrthographicCamera();
camera_1.viewportHeight = 1280;
camera_1.viewportWidth = 720;
camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
camera_1.update();

I know it's a good idea to add weight so that the falling object falls faster once I release my finger in touchUp(), after picking the object from the upper right of the screen, but the speed remains either constant or slow. How can I solve this? Can you help?
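For what it's worth, in Box2D adding mass does not change how fast a body falls: free fall is driven by the world gravity, and every dynamic body accelerates at the same rate regardless of mass. A sketch of the two knobs that do change fall speed, assuming the same libGDX objects as above:

// Option 1: per body - make this fruit fall twice as hard as world gravity
Fruit_Body[n].setGravityScale(2.0f);

// Option 2: global - create the world with stronger gravity in the first place
// (the second argument enables body sleeping, as in the usual libGDX constructor)
World world = new World(new Vector2(0, -9.8f * 2f), true);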

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left for another blogger.

If inheritable permissions are not allowed on the Exchange 2003 object in Active Directory, Exchange server authentication cannot be achieved between the servers. It seems when BlackBerry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can re-establish inheritance without overwriting the existing ACL, however, so that the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers.

To re-establish inheritance:
1. Open ADSI Edit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
3. In the right pane, right-click on the CN=Server Name of your Exchange 2003 server and select Properties.
4. Navigate to the Security tab and hit Advanced toward the bottom.
5. Check the checkbox that reads “include inheritable permissions” toward the bottom of the dialogue box.
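For reference (not part of the original post), the same 'protected' flag can be flipped from the command line with the built-in dsacls.exe tool; /P:N marks the object as not protected, so inheritable permissions flow down to it again. The distinguished name below is purely illustrative and must be replaced with your own Exchange server object's path:

dsacls "CN=EXCH2003,CN=Servers,CN=First Administrative Group,CN=Administrative Groups,CN=MyOrg,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=example,DC=com" /P:N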

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
    In this article, and for the sake of simplicity, we will use the term “On-Premise” to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.

Tools Overview
Firstly let’s look at the design-time tools within JDeveloper for customizing and extending the artifacts of a Business Process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and the BPM Studio.

The SOA Composite Editor
As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full XML source view (read-only), it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server, the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic. If you are customizing an existing Fusion Applications BPEL process then be aware that it does support MDS-based customization layers, just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context then you should include switches to fork the flows in your custom BPEL process.

Figure 1 – A BPEL process in Composite Editor

The following describes the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.
1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the ‘Check for Updates’ help menu option.
2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option ‘SOA Archive into SOA Project’.
5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
6. Make the customizations and save the application project.
7. Finally redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (.jar) file that can then be re-imported and deployed via Enterprise Manager.
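As a side note (this is not one of the original steps), the SAR export in step 3 can also be scripted. Oracle SOA Suite ships WLST commands for composite lifecycle operations; a sketch using sca_exportComposite is below, where the server URL, file path, composite name, and revision are placeholders, and the exact argument list should be verified against the WLST reference for your SOA Suite version.

# run from the SOA-enabled WLST shell; arguments: server URL, update type, SAR file, composite name, revision
sca_exportComposite('http://soahost:8001', 'all', '/tmp/sca_MyComposite_rev1.0.jar', 'MyComposite', '1.0')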
The Business Process Management (BPM) Suite
In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to provide Business Analysts with tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and for the majority of cases the tooling remains as applicable as its developer-orientated sister. At the current time business processes must be explicitly coded to support just one of these use-cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.

Figure 2 – A BPM Process in JDeveloper BPM Suite.

BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those for BPEL. The steps to deploy a custom BPM process are also essentially the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor actually has support for both artifacts and even allows use of them together, such as calling a BPM process as a partnerlink from a BPEL process. For more details see the references below.

Business Analyst Tooling
In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.

Figure 3 – Business Process Composer showing a CRM process flow.

The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required; simply drag, drop, and configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region. In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps. At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
Further Reading
- For BPEL – Fusion Applications Extensibility Guide – Section 12
- For BPM – Fusion Applications Extensibility Guide – Section 7
- The product-specific documentation and implementation guides for Fusion Applications
- Fusion Middleware Developers Guide for SOA Suite
- Modeling and Implementation Guide for Oracle Business Process Management
- User’s Guide for Oracle Business Process Composer
- Oracle University courses on BPM Suite and SOA Development

    Read the article

  • Creating classes in JavaScript

    - by Renso
    Goal: Creating class instances is not natively available in JavaScript, since you define "classes" in JS with object literals. The aim is to create classical classes like you would in C#, Ruby, Java, etc., with inheritance and instances.

Rather than typical class definitions using object literals, JS has a constructor function and the NEW operator that will allow you to new-up a class and optionally provide initial properties to initialize the new object with. The new operator changes the function's context and the behavior of the return statement.

var Person = function(name) {
   this.name = name;
};

// Init the person
var dude = new Person("renso");

// Validate the instance
assert(dude instanceof Person);

When a constructor function is called with the new keyword, the context changes from the global window to a new and empty context specific to the instance; "this" will refer in this case to the "dude" context.

Here is the class pattern that you will need to define your own CLASS emulation library:

var Class = function() {
   var _class = function() {
      this.init.apply(this, arguments);
   };
   _class.prototype.init = function(){};
   return _class;
};

var Person = new Class();
Person.prototype.init = function() {};
var person = new Person;

In order for the class emulator to support adding functions and properties to static classes as well as to object instances of Person, change the emulator:

var Class = function() {
   var _class = function() {
      this.init.apply(this, arguments);
   };
   _class.prototype.init = function(){};
   _class.fn = _class.prototype;
   _class.fn.parent = _class;

   // adding class properties
   _class.extend = function(obj) {
      var extended = obj.extended;
      for (var i in obj) {
         _class[i] = obj[i];
      }
      if (extended) extended(_class);
   };

   // adding new instances
   _class.include = function(obj) {
      var included = obj.included;
      for (var i in obj) {
         _class.fn[i] = obj[i];
      }
      if (included) included(_class);
   };

   return _class;
};

Now you can use it to create and extend your own object instances:

// adding static functions to the class Person
var Person = new Class();
Person.extend({
   find: function(name) { /* ... */ },
   delete: function(id) { /* ... */ }
});

// calling static function find
var person = Person.find('renso');

// adding properties and functions to the class' prototype so that
// they are available on instances of the class Person
// (note: this uses include, not extend, so the members land on the prototype)
var Person = new Class;
Person.include({
   save: function(name) { /* ... */ },
   delete: function(id) { /* ... */ }
});

var dude = new Person;
// calling instance function
dude.save('renso');
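One detail worth calling out in the emulator above: extend and include each look for an optional callback (extended and included respectively) and invoke it after copying the members across. A small sketch of those hooks in use; the logging is illustrative only:

var Person = new Class();

Person.extend({
   extended: function(klass) {
      // fired after the static members have been copied onto the class
      console.log("Person gained static members");
   },
   find: function(name) { /* ... */ }
});

Person.include({
   included: function(klass) {
      // fired after the instance members have been mixed into the prototype
      console.log("Person gained instance members");
   },
   save: function(name) { /* ... */ }
});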

    Read the article

  • Collision disturbing the jumping mechanic in Java 2D game

    - by user50931
    So I have been working on a 2D Java game recently and everything was going smoothly, until I reached a problem with the player's jumping mechanic. So far I've got the player to jump at a fixed rate and fall due to gravity. Here's my code for my Player class:

public class Player extends GameObject {

    public Player(int x, int y, int width, int height, ObjectId id) {
        super(x, y, width, height, id);
    }

    @Override
    public void tick(ArrayList<GameObject> object) {
        if (go) {
            x += vx;
            y += vy;
        }
        if (vx < 0) {
            facing = -1;
        } else if (vx > 0)
            facing = 1;
        checkCollision(object);
        checkStance();
    }

    private void checkStance() {
        if (falling) { // gravity
            jumping = false;
            vy = speed / 2;
        }
        if (jumping) { // calculates how high the jump should be
            vy = -speed * 2;
            if (jumpY - y >= maxJumpHeight)
                falling = true;
        }
    }

    private void checkCollision(ArrayList<GameObject> object) {
        for (int i = 0; i < object.size(); i++) {
            GameObject tempObject = object.get(i);
            if (tempObject.getId() == ObjectId.Ledge) {
                if (getBoundsTop().intersects(tempObject.getBoundsAll())) { // Top
                    y = tempObject.getY() + tempObject.getBoundsAll().height;
                    falling = true;
                }
                if (getBoundsRight().intersects(tempObject.getBoundsAll())) { // Right
                    x = tempObject.getX() - width;
                }
                if (getBoundsLeft().intersects(tempObject.getBoundsAll())) { // Left
                    x = tempObject.getX() + tempObject.getWidth();
                }
                if (getBoundsBottom().intersects(tempObject.getBoundsAll())) { // Bottom
                    y = tempObject.getY() - height;
                    falling = false;
                    vy = 0;
                } else {
                    falling = true;
                }
            }
        }
    }

    @Override
    public void render(Graphics g) {
        g.setColor(Color.BLACK);
        g.fillRect((int) x, (int) y, width, height);
    }

    @Override
    public Rectangle getBoundsAll() {
        return new Rectangle((int) x, (int) y, width, height);
    }

    public Rectangle getBoundsTop() {
        return new Rectangle((int) x, (int) y, width, height / 15);
    }

    public Rectangle getBoundsBottom() {
        return new Rectangle((int) x, (int) y + height - (height / 15), width, height / 15);
    }

    public Rectangle getBoundsLeft() {
        return new Rectangle((int) x, (int) y + height / 10, width / 8, height - (height / 5));
    }

    public Rectangle getBoundsRight() {
        return new Rectangle((int) x + width - (width / 8), (int) y + height / 10, width / 8, height - height / 5);
    }
}

My problem is that when I add the else { falling = true; } branch inside the loop over the ArrayList that checks collision, it stops the player from jumping and keeps him on the ground. I've tried to find a way around this but haven't had any luck. Any suggestions?
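For what it's worth, a common way out of this (a sketch against the code above, not a tested drop-in) is to stop writing falling inside the per-ledge else branch: any single non-intersecting ledge overwrites the result of a real bottom collision. Instead, record whether any ledge was hit underfoot and decide falling once, after the loop, leaving it alone while a jump is still ascending:

private void checkCollision(ArrayList<GameObject> object) {
    boolean grounded = false; // assume airborne until a bottom collision proves otherwise
    for (int i = 0; i < object.size(); i++) {
        GameObject tempObject = object.get(i);
        if (tempObject.getId() == ObjectId.Ledge) {
            // ... top, right and left checks exactly as before ...
            if (getBoundsBottom().intersects(tempObject.getBoundsAll())) { // Bottom
                y = tempObject.getY() - height;
                vy = 0;
                grounded = true; // remember the hit instead of clearing 'falling' here
            }
        }
    }
    // decided once per tick; don't cancel a jump that is still on its way up
    if (!jumping) {
        falling = !grounded;
    }
}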

    Read the article

  • Translating multiple objects in GUI based on average position?

    - by user1423893
    I use this method to move a single object in 3D space; it accounts for a local offset based on where the cursor ray hits the widget relative to the center of the widget:

var cursorRay = cursor.Ray;
Vector3 goalPosition = translationWidget.GoalPosition;
Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

// Constrain object movement based on selected axis
switch (translationWidget.AxisSelected)
{
    case AxisSelected.All:
        goalPosition = position;
        break;
    case AxisSelected.None:
        break;
    case AxisSelected.X:
        goalPosition.X = position.X;
        break;
    case AxisSelected.Y:
        goalPosition.Y = position.Y;
        break;
    case AxisSelected.Z:
        goalPosition.Z = position.Z;
        break;
}

translationWidget.GoalPosition = goalPosition;
Vector3 p = goalPosition - translationWidget.LocalOffset;
objectSelected.Position = p;

I would like to move multiple objects based on the same principle, using a widget located at the average position of all the objects currently selected. I thought that I would have to translate each object based on its offset from the average point and then include the local offset:

var cursorRay = cursor.Ray;
Vector3 goalPosition = translationWidget.GoalPosition;
Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

// Constrain object movement based on selected axis
switch (translationWidget.AxisSelected)
{
    case AxisSelected.All:
        goalPosition = position;
        break;
    case AxisSelected.None:
        break;
    case AxisSelected.X:
        goalPosition.X = position.X;
        break;
    case AxisSelected.Y:
        goalPosition.Y = position.Y;
        break;
    case AxisSelected.Z:
        goalPosition.Z = position.Z;
        break;
}

translationWidget.GoalPosition = goalPosition;
Vector3 p = goalPosition - translationWidget.LocalOffset;

int numSelectedObjects = objectSelectedList.Count;
for (int i = 0; i < numSelectedObjects; ++i)
{
    objectSelectedList[i].Position = (objectSelectedList[i].Position - translationWidget.Position) + p;
}

This doesn't work: the objects start shaking, which I think is because I haven't accounted for the new offset correctly. Where have I gone wrong?
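A hedged guess at the cause and a sketch of a fix (the method names and the grabOffsets field are invented for illustration): subtracting translationWidget.Position every frame re-derives each offset from positions that were already moved the frame before, so error feeds back on itself and shows up as shaking. Capturing each object's offset from the widget once, when the drag starts, breaks that loop:

// Captured once when the drag begins (hypothetical hook in the selection code)
private readonly List<Vector3> grabOffsets = new List<Vector3>();

private void BeginDrag()
{
    grabOffsets.Clear();
    foreach (var obj in objectSelectedList)
        grabOffsets.Add(obj.Position - translationWidget.Position); // fixed offset from the widget
}

// Applied every frame while dragging; p is computed exactly as in the code above
private void ApplyDrag(Vector3 p)
{
    for (int i = 0; i < objectSelectedList.Count; ++i)
        objectSelectedList[i].Position = p + grabOffsets[i];

    translationWidget.Position = p; // keep the widget tracking the group's anchor point
}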

    Read the article

  • CodePlex Daily Summary for Wednesday, February 17, 2010

    CodePlex Daily Summary for Wednesday, February 17, 2010

New Projects
Academic Success Accounting System: The system is intended to be used by school teachers to set marks for students and estimate their academic success and possibilities. The client applicat...
Access.PowerTools: Access PowerTools is currently a sample MS Access add-in project to try & test features of Add-in Express™ 2009 for Microsoft® Office and .net (ht...
AntoonCms: AntoonCms makes it easy to maintain a simple website with its built-in administration pages. It's developed in C# on target Framework 2.0. The CMS...
ASP.NET MVC Mehr Lib: Mehr Lib makes it easier for ASP.NET MVC developers to develop projects. It's developed in C#. This version currently includes Ajax master detail...
BCryptTool: Developer tool that calculates BCrypt hash codes for strings. BCrypt is an implementation of the Blowfish cipher and a computationally-expensive ha...
Coronasoft Cryostasis scripting engine: A scripting engine that allows you to dynamically load plugins from just about any supported .NET language. It's written in C#. Languages supported ...
Critical Point Search: Critical Point Search
critical points: critical points
Font Family Name Retrieval: This library helps developers retrieve the font family name from TTF, OTF and TTC font files, so that they can display the font without ...
jQuery Form Input Hints Plugin: Automatically display hints on input textboxes in your forms using this jQuery plugin. I wrote this code to be as simple and as easy to use as pos...
Kojax: kojax project
KronRetro: KronRetro! Making a Habbo Retro just got easier! Powered by PHP & MySQL you can make a Habbo Retro site fast!
MVVM Wrapper Kit: MVVM Wrapper Kit makes it easier for View Model programmers to wrap their business objects and collections while preserving change notification and...
ObjectCartographer: ObjectCartographer is an object to object mapper and object factory. It's developed in C#.
PE-file Reader Writer API (PERWAPI): PERWAPI is a reader writer module for .NET program executables. It has been used as back-end for programming language compilers such as Gardens Poi...
Pinger: A simple Pinger, pings an address until you press a button
QPV: 0.1: QPV (aka Que Pelicula) is an application for building a powerful database of films, reviews and information in order to filter...
SIMD Detector: This SIMD class helps developers detect the types of SIMD instructions available on the user's processor. It supports Intel and AMD CPUs. It is writt...
StackOverflow Test Project: Following Andrew Siemer's StackOverflow Knowledge Exchange Project.
WeBlog: A blogging platform built on the MVC framework. The project will showcase current technologies such as MVC 2, Silverlight 4 and jQuery 1.4. Data pro...
Webmedia: this is my webmedia project
Windows Azure RSS Reader: This is an online RSS reader based on the Windows Azure platform
WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: This is a word editor that can be used as a stand-alone word processor, or added to an existing project.
Домашняя Бухгалтерия: A program for keeping household financial accounts.
New Releases
Access.PowerTools: Access PowerTools Add-In Community Edition v.0.0.1: Access PowerTools Add-In Community Edition v.0.0.1 is a sample MS Access add-in project to try & test Add-in Express™ 2009 for Microsoft® Office an...
Active CSS: ActiveCSS-0.1.1: revision for version 0.1
ASP.NET: Microsoft Ajax Minifier 4.0: The Microsoft Ajax Minifier enables you to improve the performance of your Ajax applications by reducing the size of your Cascading Style Sheet and...
ASP.NET MVC Mehr Lib: V1.0: Mehr Lib V1.0. This version currently includes ajax master detail combo facilities.
ASP.net Ribbon: Version 1.2: New controls: Expandable gallery, Color Picker, Multi color, File Menu. Some JS modifications. Some CSS modifications. Includes some functionna...
ASP.NET Web Forms Model-View-Presenter (MVP) Contrib: WebForms MVP Contrib CTP6: This is a release of the WebForms MVP Contrib project for WebForms MVP CTP6. Release includes: WebForms MVP Contrib framework, Ninject IoC container
AwesomiumDotNet: AwesomiumDotNet 1.2.1: - Added Awesomium 1.5 features: URL filtering, header rewrite rules, SetOpensExternalLinksInCallingFrame. - Numerous fixes and improvements.
BCryptTool: BCryptTool v0.1: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
Buzz Dot Net: Buzz Dot Net v.1.10216: Features: Parse Google Buzz feed to Objects, Partial MVVM Implementation, Partial Optimizations
Canvas VSDOC Intellisense: v1.0.0.0a: canvas-vsdoc.js and canvas-utils.js JavaScript intellisense for HTML5 Canvas element.
CheckHeader: CheckHeader v0.8.5: The Microsoft .NET Framework 3.5 (SP1) is needed to run this program.
Claymore MVP: Claymore 1.0.2.0: Changelog: Added ASP.NET WebForm support via ClaymoreHttpModule class. Added xsd schema for Visual Studio Intellisense within App.config and Web....
Dam Gd - URL Shortner: Dam.gd Version 1.1: This is the latest instalment in our URL shortner. It uses The Easy API http://theeasyapi.com to access data that is used for the back-end analyti...
D-AMPS: D-AMPS 0.9.1: Initial version.
easySMS: easySMS 1.0 Source code: easySMS 1.0 Source code
Font Family Name Retrieval: 1st Release: Version 1.0.0
Free Silverlight & WPF Chart Control - Visifire: Visifire Now Supports DataBinding: Hi, Today we are releasing the much awaited DataBinding feature in Visifire 3.0.3 beta 3. Now you can bind any DataSource at the Series level so t...
GenerateTypedBamApi: Version 2.0: Changes in this release: NEW: Export functionality no longer requires Excel to be installed (uses OLE DB vs. Excel Automation; also enables usage i...
Gmail Notifier 2: GmailNotifier2 1.2.1: Fixes issues #9652, #9653
iTuner - The iTunes Companion: iTuner 1.1.3699: This includes the first pass of the iTuner Librarian, including management of dead tracks, duplicates, and empty directories... While I promised a ...
jQuery Form Input Hints Plugin: jQuery.InputHints v1.0: jQuery.InputHints v1.0. Includes standard & minified source, demo HTML file, VS2008 solution
LibWowArmory: LibWowArmory 0.2.3 beta: This release of the LibWowArmory source code matches the WoW Armory as of version 3.3.2. Changes since version 0.2.2: Update...
Managed Extensibility Framework: MEF Preview 9: We have merged the .net 3.5 and Silverlight 3 into a single zip. The bin folder contains the binaries for .net 3.5 whereas bin\SL contains the bina...
MDX Parser, Builder, DOM and OLAP visual controls with Writeback for Silverlight: Ranet.UILibrary.Olap-1.3.3.0-6571.msi: February 16, 2010 * MdxDesigner: Fix for the issue where, when an element is clicked, the mouse wheel stops working until the cursor leaves and r...
MEFGeneric: MEFGeneric Preview 9: MEFGeneric Preview 9 release.
Mesopotamia Experiment: Mesopotamia 1.2.26: Bug Fixes - mud map - progress window - recycle app domains on robotics engine crashes (in command prompt and visual, major work) - fixed roomba h...
Microsoft Solution Framework for Business Intelligence in Media: Release 1.0: This is the public release of the Microsoft Solution Framework for Business Intelligence in Media (Release 1.0).
MVVM Wrapper Kit: MVVM Wrapper Beta: A simple test project is included to get you up and running, and wrapping those business objects.
nBayes - Bayesian Filtering in C#: nBayes v0.2: nBayes' indexing system is factored in such a way that you can easily replace the index with a custom implementation. This release introduces an ad...
NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.5: 3.6.0.5 16-February-2010 - Fix: SqlAzManSid Class. "Equals" matches object signature instead of IAzManSid signature. When a real null object is pas...
ObjectCartographer: ObjectCartographer Code 1.0: This is the first release and contains code to help with object to object mapping (including mapping from one object to multiple objects), object f...
Office Apps: 0.8.6: Bug fixes, added Calendar.
OI - Open Internet: OI HTML and .XAP files (OI offline): this is the HTML code and the XAP file. please right-click the app at http://bit.ly/openinternet and select "install openinternet application to th...
PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.3: Perwapi version 1.1.3 is the complete distribution package. It contains Binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...
Pinger: Pinger 1.0.0.0 Binary: The Latest Binary
RNA Comparative Analysis Software Tools: RNA Comparative Analysis Software Tools 2.0: RNA Comparative Analysis Software Tools Version 2.0. Note: The RNA Comparative Analysis Software Tools are provided as is, without any warranty. No...
SAL - Self Artificial Learning: Artificial Learning working proof of concept: This is a working proof of concept. It includes the Dev version (in .zip format) and the consumer version (in .exe format)
SharePoint Management PowerShell scripts: SharePoint 2010 PowerShell Scripts: All the SharePoint 2010 PowerShell Scripts. The first file is an Excel 2010 file allowing you to find quickly and easily the new cmdlets available wi...
SIMD Detector: 1st Release: Version 1.3. Supports MMX/MMX+, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, SSE5, 3DNow.
Terminals: Terminals 1.9 Beta Release: This is a beta release so the new features being added to Terminals can be tested properly. The major change in this release is that Terminals has...
Text Designer Outline Text Library: 9th minor release: Added the ability to select a brush, such as a gradient brush or texture brush, for the text body. Added CSharp library, TextDesignerCSLibrary. Manage...
VivoSocial: VivoSocial 7.0.2: This release has several updated modules. See the Support Forums for more details. Since we update modules very often, we will be changing how we d...
WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.6.00: changes: CKEditor Upgrade to Version 3.2 SVN 5132; File Browser: After File Upload, File will be Auto Selected; File Browser: Icons are corrected ...
WordEditor. A Word Editor for Windows, and an extended RichTextBox control.: WordEditor Source Code: This contains the latest solution file, with all project files included.
Домашняя Бухгалтерия: Alpha Release: Your suggestions on the program's design and functionality are welcome.

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
Image Resizer Powertoy Clone for Windows
Microsoft SQL Server Community & Samples
ASP.NET
LiveUpload to Facebook

Most Active Projects
DinnerNow.net
Rawr
BlogEngine.NET
Simple Savant
NB_Store - Free DotNetNuke Ecommerce Catalog Module
patterns & practices – Enterprise Library
PHPExcel
Sharpy
jQuery Library for SharePoint Web Services
Fluent Validation for .NET

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

Inheritance Mapping with Entity Framework Code First
All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, but it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

A Note For Those Who Follow Other Entity Framework Approaches
If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series since, although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others.

A Note For Those Who are New to Entity Framework and Code-First
If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations.

A Top-Down Development Scenario
These posts take a top-down approach; they assume that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables.
I’ll show some tricks along the way that help you deal with non-perfect table layouts. Let’s start with the mapping of entity inheritance.

The Domain Model
In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of the BillingDetail class. As for now, we support CreditCard and BankAccount:

Implement the Object Model with Code First
As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

public abstract class BillingDetail
{
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

public class BankAccount : BillingDetail
{
    public string BankName { get; set; }
    public string Swift { get; set; }
}

public class CreditCard : BillingDetail
{
    public int CardType { get; set; }
    public string ExpiryMonth { get; set; }
    public string ExpiryYear { get; set; }
}

public class InheritanceMappingContext : DbContext
{
    public DbSet<BillingDetail> BillingDetails { get; set; }
}

This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

Polymorphic Queries
LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
List<BillingDetail> billingDetails = linqQuery.ToList();

Or the same query in EntitySQL:

string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                         .CreateQuery<BillingDetail>(eSqlQuery);
List<BillingDetail> billingDetails = objectQuery.ToList();

linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

Non-polymorphic Queries
All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. On the other hand, non-polymorphic queries are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
For example, the following query returns only instances of BankAccount:

IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

EntitySQL has an OFTYPE operator that does the same thing:

string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                     FROM BillingDetails AS b
                     WHERE b IS OF(Model.BankAccount)";

(Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.)

Table per Class Hierarchy (TPH)
An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy.

This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and non-polymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.

Discriminator Column
As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values.

TPH Requires Properties in Subclasses to be Nullable in the Database
TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

TPH Violates the Third Normal Form
Another important issue is normalization. We’ve created functional dependencies between non-key columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
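As a side note beyond the post itself, if the loss of NOT NULL constraints is a concern, some integrity can be recovered at the database level with a discriminator-aware CHECK constraint. A sketch against the schema Code First generated above (the constraint name is illustrative):

ALTER TABLE [dbo].[BillingDetails] ADD CONSTRAINT [CK_BillingDetails_CreditCard]
CHECK (
    [Discriminator] <> 'CreditCard'
    OR ([CardType] IS NOT NULL AND [ExpiryMonth] IS NOT NULL AND [ExpiryYear] IS NOT NULL)
);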
Generated SQL Query
Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

SELECT
  [Extent1].[Discriminator] AS [Discriminator],
  [Extent1].[BillingDetailId] AS [BillingDetailId],
  [Extent1].[Owner] AS [Owner],
  [Extent1].[Number] AS [Number],
  [Extent1].[BankName] AS [BankName],
  [Extent1].[Swift] AS [Swift],
  [Extent1].[CardType] AS [CardType],
  [Extent1].[ExpiryMonth] AS [ExpiryMonth],
  [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

Or the non-polymorphic query for the BankAccount subclass generates this SQL statement:

SELECT
  [Extent1].[BillingDetailId] AS [BillingDetailId],
  [Extent1].[Owner] AS [Owner],
  [Extent1].[Number] AS [Number],
  [Extent1].[BankName] AS [BankName],
  [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'

Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values With Fluent API
Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}

Also, changing the data type of the discriminator column is interesting. In the above code, we passed strings to the HasValue method, but this method has been defined to accept a value of type object:

public void HasValue(object value);

Therefore, if for example we pass a value of type int to it then Code First will not only use our desired values (i.e. 1 & 2) in the discriminator column but will also change the column type to be (INT, NOT NULL):

modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary
In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy that doesn’t expose you to this problem.

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
    Download the script for this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – “What is the difference between a DELETE and a TRUNCATE?“ Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution, and as an encouragement to User Group members, I am publishing his article over here.

Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and actively contributes to the community by participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: The opinions expressed herein are Nakul's own personal opinions and do not represent his employer’s view in any way.

All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE, or simply, CRUD. Put in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

DELETE:
- DELETE supports a WHERE clause
- DELETE removes rows from a table, row-by-row
- Because DELETE moves row-by-row, it acquires a row-level lock
- Depending upon the recovery model of the database, DELETE is a fully-logged operation
- Because DELETE moves row-by-row, it can fire off triggers

TRUNCATE:
- TRUNCATE does not support a WHERE clause
- TRUNCATE works by directly removing the individual data pages of a table
- TRUNCATE directly acquires a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
- TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database
- Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

Finally, Vinod popped the big homework question that must be critically analyzed: “We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?” After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: “The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.“

Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user/set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a “securable”, which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books Online).

SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql)
Script provided at the end of the article. By the end of this demo, one user will be able to do all the CRUD operations except TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.)

We begin by creating the following:
1. A test database
2. Two database roles, with associated logins and users
3. A test table in the test database, populated with some data. I am using row constructors, which are new to SQL 2008.

Creating the modules that will be used to enforce permissions:
1. We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest
2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same – all data from the table TruncatePermissionsTest is removed

Assigning the permissions
Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as shown in the SECURITY MATRIX comment within the script at the end of this article. To apply the security rights, we use the GRANT and DENY clauses.

That’s it! We are now ready for our big test!

THE TEST (01B_Truncate Table Test Queries.sql)
Script provided at the end of the article. I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that’s required is to run through the script – 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screen-shots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here. I will leave the reader the right to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study.
1. Testing SELECT permissions
2. Testing TRUNCATE permissions (Remember, “deny by default”?)
3. Trying to circumvent security by trying to TRUNCATE the table using the stored procedure

Hence, we have now proved that a user can indeed be assigned permissions that specifically allow TRUNCATE. I also hope that the above has sparked curiosity towards putting some security around the potentially “destructive” operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server. (Please find the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql – that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader’s production environments.)

01A_Truncate Table Permissions.sql

/* *****************************************************************************************************************
Developed By          : Nakul Vachhrajani
Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use            : 1. Run step-by-step through the sequence till Step 08 to create a test database
                        2. Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows, one where you have logged in as 'RestrictedTruncate', and the other as 'AllowedTruncate'
                        3. Come back to "Truncate Table Permissions.sql"
                        4. Execute Step 10 to cleanup!
Modifications         : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security
                        December 12, 2010 - NAV - Created
***************************************************************************************************************** */

-- Step 01: Create a new test database
CREATE DATABASE TruncateTestDB
GO

USE TruncateTestDB
GO

-- Step 02: Add roles and users to demonstrate the security of the Truncate operation
-- 2a. Create the new roles
CREATE ROLE AllowedTruncateRole;
GO
CREATE ROLE RestrictedTruncateRole;
GO

-- 2b. Create new logins
CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO
CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO

-- 2c. Create new Users using the roles and logins created above
CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
GO
CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
GO
EXEC sp_addrolemember 'AllowedTruncateRole', 'TruncateUser'
GO
EXEC sp_addrolemember 'RestrictedTruncateRole', 'NoTruncateUser'
GO

-- Step 03: Change over to the test database
USE TruncateTestDB
GO

-- Step 04: Create a test table within the test database
CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
GO

-- Step 05: Populate the required data
INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
GO

-- Step 06: Encapsulate the DELETE within another module
CREATE PROCEDURE proc_DeleteMyTable
WITH EXECUTE AS SELF
AS
DELETE FROM TruncateTestDB..TruncatePermissionsTest
GO

-- Step 07: Encapsulate the TRUNCATE within another module
CREATE PROCEDURE proc_TruncateMyTable
WITH EXECUTE AS SELF
AS
TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
GO

-- Step 08: Apply Security
/* ***************************** SECURITY MATRIX *****************************
   Object                  | Permission(s)   | Login: RestrictedTruncate | Login: AllowedTruncate
                           |                 | User:  NoTruncateUser     | User:  TruncateUser
   ------------------------+-----------------+---------------------------+-----------------------
   TruncatePermissionsTest | SELECT, INSERT, |                           |
                           | UPDATE, DELETE  | GRANT                     | (Default)
   ------------------------+-----------------+---------------------------+-----------------------
   TruncatePermissionsTest | ALTER           | DENY                      | (Default)
   ------------------------+-----------------+---------------------------+-----------------------
   proc_DeleteMyTable      | EXECUTE         | GRANT                     | DENY
   ------------------------+-----------------+---------------------------+-----------------------
   proc_TruncateMyTable    | EXECUTE         | DENY                      | GRANT
   ***************************** SECURITY MATRIX ***************************** */

/* Table: TruncatePermissionsTest */
GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO
DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO

/* Procedure: proc_DeleteMyTable */
GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
GO
DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
GO

/* Procedure: proc_TruncateMyTable */
DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
GO
GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
GO

-- Step 09: Test
-- Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows:
--    1. one where you have logged in as 'RestrictedTruncate', and
--    2. the other as 'AllowedTruncate'

-- Step 10: Cleanup
EXEC sp_droprolemember 'AllowedTruncateRole', 'TruncateUser'
GO
EXEC sp_droprolemember 'RestrictedTruncateRole', 'NoTruncateUser'
GO
DROP USER TruncateUser
GO
DROP USER NoTruncateUser
GO
DROP LOGIN AllowedTruncate
GO
DROP LOGIN RestrictedTruncate
GO
DROP ROLE AllowedTruncateRole
GO
DROP ROLE RestrictedTruncateRole
GO
USE master
GO
DROP DATABASE TruncateTestDB
GO

01B_Truncate Table Test Queries.sql

/* *****************************************************************************************************************
Developed By          : Nakul Vachhrajani
Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use            : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09
                        2. Execute this step-by-step in two different SSMS windows
                           a. One where you have logged in as 'RestrictedTruncate', and
                           b. The other as 'AllowedTruncate'
                        3. Return to "Truncate Table Permissions.sql"
                        4. Execute Step 10 to clean up!
Modifications         : December 12, 2010 - NAV - Created
***************************************************************************************************************** */

-- Step 09A: Switch to the test database
USE TruncateTestDB
GO

-- Step 09B: Ensure that we have valid data
SELECT * FROM TruncatePermissionsTest
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09C: Attempt to truncate data from the table without using the stored procedure
TRUNCATE TABLE TruncatePermissionsTest
GO
-- (Expected: The following error will occur)
-- Msg 1088, Level 16, State 7, Line 2
-- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

-- Step 09D: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin')
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09E: Attempt to truncate data from the table using the stored procedure
EXEC proc_TruncateMyTable
GO
-- (Expected: Will execute successfully with the 'AllowedTruncate' user; will error out as under with 'RestrictedTruncate')
-- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09F: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens')
GO

-- Step 09G: Attempt to delete data from the table without using the stored procedure
DELETE FROM TruncatePermissionsTest
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 2
-- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.
-- Step 09H: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece')
GO

-- Step 09I: Attempt to delete data from the table using the stored procedure
EXEC proc_DeleteMyTable
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09J: Close this SSMS window and return to "Truncate Table Permissions.sql"

Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Group has the talent to solve difficult problems.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full-Text Indexing search platform, there are times when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search against, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment.

The second advantage of this type of search, from a business perspective, is the ability to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New, and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full-Text Indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat, and I have used the SolrNet C# library to interface to the Java platform. I am not going to talk about configuring Solr indexing, but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr's schema, and use a simple HTTP POST to send it to the search engine for indexing.
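As a rough illustration of that indexing step, the HTTP POST can be as simple as the sketch below. The Solr URL, the export file path, and the surrounding class are all assumptions for illustration, not details from this article:

using System;
using System.IO;
using System.Net;

class SolrCatalogIndexer
{
    static void Main()
    {
        // Hypothetical values: a Solr core listening on Tomcat, and the XML
        // produced by applying the XSLT transform to the catalog export.
        string solrUpdateUrl = "http://localhost:8080/solr/update?commit=true";
        string transformedCatalogXml = File.ReadAllText(@"C:\Exports\catalog-solr.xml");

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";

            // Post the documents to Solr; the commit=true parameter makes them searchable immediately.
            string solrResponse = client.UploadString(solrUpdateUrl, "POST", transformedCatalogXml);
            Console.WriteLine(solrResponse); // Solr replies with a small status document
        }
    }
}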
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this.

As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache, allowing us to store regularly accessed information, and the response object is the object on which we will return the result of our search. Inside this method is simply where we are going to inject the logic for our third-party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here; I would highly recommend looking at the SolrNet wiki, as it has some great explanations of how the API works.

What you will find, however, is that there are some further extensions required when integrating a custom search provider. Firstly, out of the box, the CommerceQueryOperation you receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full-Text search, with properties such as a Where clause. To make the operation you receive more relevant, you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within it detail the properties you will require to submit as parameters to the Solr search API. My example looks like this:

[DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
public class CommerceCatalogSolrSearch : CommerceSearchCriteria
{
    private Dictionary<string, string> _facetQueries;

    public CommerceCatalogSolrSearch()
    {
        _facetQueries = new Dictionary<String, String>();
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _facetQueries; }
        set { _facetQueries = value; }
    }

    public String SearchPhrase { get; set; }
    public int PageIndex { get; set; }
    public int PageSize { get; set; }
    public IEnumerable<String> Facets { get; set; }
    public string Sort { get; set; }

    public new int FirstItemIndex
    {
        get { return (PageIndex - 1) * PageSize; }
    }

    public int LastItemIndex
    {
        get { return FirstItemIndex + PageSize; }
    }
}

To allow you to construct a CommerceQueryOperation call within the API, you will also need another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria you have just created and expose the properties you want set.
My message builder looks like this:

public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
{
    private CommerceCatalogSolrSearch _solrSearch;

    public CommerceCatalogSolrSearchBuilder()
    {
        _solrSearch = new CommerceCatalogSolrSearch();
    }

    public String SearchPhrase
    {
        get { return _solrSearch.SearchPhrase; }
        set { _solrSearch.SearchPhrase = value; }
    }

    public int PageIndex
    {
        get { return _solrSearch.PageIndex; }
        set { _solrSearch.PageIndex = value; }
    }

    public int PageSize
    {
        get { return _solrSearch.PageSize; }
        set { _solrSearch.PageSize = value; }
    }

    public Dictionary<String, String> FacetQueries
    {
        get { return _solrSearch.FacetQueries; }
        set { _solrSearch.FacetQueries = value; }
    }

    public String[] Facets
    {
        get { return _solrSearch.Facets.ToArray(); }
        set { _solrSearch.Facets = value; }
    }

    public override CommerceSearchCriteria ToSearchCriteria()
    {
        return _solrSearch;
    }
}

Once you have these two classes in place you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method to the CommerceCatalogSolrSearch criteria you have just created, e.g.:

public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
{
    var searchCriteria = operation as CommerceQueryOperation;
    if (searchCriteria == null)
        throw new Exception("No search criteria present");

    var local = searchCriteria.SearchCriteria as CommerceCatalogSolrSearch;
    if (local == null)
        throw new Exception("Unexpected search criteria in operation");

    return local;
}

Now that you have all of your search parameters present, you can go off and call the external search platform API.
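As a hedged sketch of what that call might look like inside ExecuteQuery (the SolrNet usage is from memory, and the SearchProduct document type, field names and registration are assumptions rather than code from this article – SolrNet would be initialised elsewhere at application start with Startup.Init<SearchProduct>("http://localhost:8080/solr")):

// Inside the overridden ExecuteQuery method (illustrative fragment)
var criteria = TryGetSearchCriteria(queryOperation);

var solr = ServiceLocator.Current.GetInstance<ISolrOperations<SearchProduct>>();

var options = new QueryOptions
{
    Start = criteria.FirstItemIndex, // paging derived from PageIndex/PageSize
    Rows = criteria.PageSize,
    Facet = new FacetParameters
    {
        Queries = criteria.Facets
            .Select(f => (ISolrFacetQuery)new SolrFacetFieldQuery(f))
            .ToList()
    }
};

SolrQueryResults<SearchProduct> matchingProducts =
    solr.Query(new SolrQuery(criteria.SearchPhrase), options);
// matchingProducts (and its FacetFields) now feed the translator step described next.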
You will, of course, get proprietary objects returned, so the next step in the process is to convert the results being returned back into Commerce Entities. You do this via another extensibility point within the Commerce Server API called translators. Translators are another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. Implementing the required method of the interface gives you a single Translate method, which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate that you map the objects using your MetadataDefinitions.xml file. Once complete, your translator would look something like the following:

public class SolrEntityTranslator : IToCommerceEntityTranslator
{
    #region IToCommerceEntityTranslator Members

    public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn)
    {
        if (source.GetType().Equals(typeof(SearchProduct)))
        {
            var searchResult = (SearchProduct)source;

            destinationCommerceEntity.Id = searchResult.ProductId;
            destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
            destinationCommerceEntity.ModelName = "Product";
        }
    }

    #endregion
}

Once you have a translator in place you can safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this:

foreach (SearchProduct result in matchingProducts)
{
    var destinationEntity = new CommerceEntity(_returnModelName);

    Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
    response.CommerceEntities.Add(destinationEntity);
}

In Solr I actually have two objects being returned – a product and a collection of facets – so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the translator helper class separately.

When all of this is pieced together you have successfully completed the extensibility-point coding: you will have created a new operation sequence component, a custom search criteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its strong-name signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full-Text component. You will also need to add your translators to the Translators node of your Channel.Config too. Lastly, add any custom Commerce Entities you have developed to your MetadataDefinitions.xml file.

Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities for your search results. If you require data to be enriched, logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search component first). Now, to call your code, you simply request it as per any other CommerceQuery operation, but taking into account that you may be receiving multiple types of CommerceEntity returned:

public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
{
    var products = new List<Product>();
    var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

    query.SearchCriteria.PageIndex = recordIndex;
    query.SearchCriteria.PageSize = recordsPerPage;
    query.SearchCriteria.SearchPhrase = searchPhrase;
    query.SearchCriteria.FacetQueries = facetQueries;

    totalItemCount = 0;
    CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
    var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;
    // No results – return the empty pair
    if (queryResponse != null && queryResponse.CommerceEntities.Count == 0)
        return new KeyValuePair<FacetCollection, List<Product>>();

    totalItemCount = (int)queryResponse.TotalItemCount;

    // Prepare a multi-operation to retrieve the product variants
    var multiOperation = new CommerceMultiOperation();

    // Add products to results
    foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
    {
        var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
        productQuery.SearchCriteria.Model.Id = product.Id;
        productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

        var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);
        productQuery.RelatedOperations.Add(variantQuery);

        multiOperation.Add(productQuery);
    }

    CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
    foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
    {
        if (queryOpResponse.CommerceEntities.Count() > 0)
            products.Add(queryOpResponse.CommerceEntities[0]);
    }

    // Get the facet collection
    FacetCollection facetCollection = queryResponse.CommerceEntities
        .Where(x => x.ModelName == "FacetCollection")
        .FirstOrDefault();

    return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
}

And that is it – simply a few classes and some configuration allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. The same approach stands for any extensibility within Commerce Server that requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!
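For completeness, a hypothetical call site for the helper above might look like the fragment below; the search term, sort key, paging values and facet selection are all illustrative, and the fragment assumes it lives in a class with access to DoFacetedProductQuerySearch and the FacetCollection and Product types used above:

int totalItemCount;
var facetQueries = new Dictionary<string, string> { { "Brand", "Dell" } };

KeyValuePair<FacetCollection, List<Product>> searchResult =
    DoFacetedProductQuerySearch("server", "DisplayName", "asc", 1, 20, facetQueries, out totalItemCount);

FacetCollection facets = searchResult.Key;    // drives the refinement UI
List<Product> products = searchResult.Value;  // the current page of products, with variants loaded
Console.WriteLine("{0} products matched in total", totalItemCount);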

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies; in Part 2 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

Test Results – I see some creepy worms!

In Part 2 we put together a web performance test and a load test; let's now run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

Monitor a running load test – A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

After the load test run is completed – The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screenshot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over data points to see more details of the test run.

Generate Report => Test Run Comparisons

The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well.

View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results – this creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screenshots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts

While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but this might just escalate the problem; the better suggestion is to try and drill down to the actual root cause of the problem.

Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, a garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap: anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collections go through.

All objects smaller than 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started: the process marks all the dead objects as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, while Gen-0's survivors are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. (The short sketch at the end of this section makes these rules concrete.) Now, as I mentioned earlier, while garbage collection is being run all page requests that come in during that period are queued up. This may explain why page requests are getting queued up; apart from this, it could also be the case that you are waiting for a long-running database process to complete.

Let's explore the heap a bit more… What is really a case for crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory; for example, when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
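To make those promotion rules concrete, here is a tiny illustrative C# sketch of my own (not from the original article); the ~85 KB large-object threshold is an implementation detail of the runtime:

using System;

class GenerationsDemo
{
    static void Main()
    {
        byte[] small = new byte[1024];        // small object heap, starts in Gen-0
        byte[] large = new byte[100 * 1024];  // over the ~85 KB threshold, allocated on the Large Object Heap

        Console.WriteLine(GC.GetGeneration(small)); // 0 – freshly allocated
        GC.Collect();                               // survivors of a collection are promoted
        Console.WriteLine(GC.GetGeneration(small)); // 1 – promoted out of Gen-0
        Console.WriteLine(GC.GetGeneration(large)); // 2 – LOH objects report as Gen-2
    }
}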
How can you address this memory pressure? Well, there are two ways:

1. Workaround – A 32-bit (x86) processor can only address a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team Builds by default build your application as 'Any CPU', the application will run in 32-bit mode on an x86 processor and in 64-bit mode on an x64 processor. The problem with this is that not all applications are really 64-bit compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode, by changing the 'Any CPU' selection to 'x86' in the team build, you will be able to run your application on an x64 machine in 32-bit mode (WOW – Windows on Windows). What that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you going in the right direction. Let's have a look at how.

Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session properties page and, as shown in the screenshot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

Now if you fire off the profiling session on your pages you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data, you can drill in and narrow down to the line of code that is the root cause of the problem.

So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

Caching

One of the main ways to improve performance is caching, which basically means you tell the web server that, instead of going to the database for each request, you keep the data on the web server and, when the user asks for it, you serve it from the web server itself. BUT that can have consequences!
Let’s look at some code. Trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the sketch at the end of this section. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache – no more concurrency issues. But let's say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users, and so on, will continue to get data from the cache. BUT if we add another null check after taking the lock to build the cache, and before the actual call to the database, the 9 users who follow me will not make the extra trip to the database at all, and that will really increase performance. Didn't I say that the code won't be very intuitive? Maybe you should leave a comment – you don't want another developer to come in and think, "What a fresher, why is he checking the cache key for null twice?!"

The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database make the data cached at the web server go out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data – let's say only one entry point for data updates – you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro-caching, which means you cache the query for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will have immense performance gains – you would reduce hits to the database for searching by 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight, you click on the book button because the fare seems too exciting, and you get an error message telling you that the fare is not valid any more? Yes, exactly – that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead, the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains 100% of the performance benefits, but every once in a while annoys a customer because the fare has expired. (A sketch of this micro-caching pattern, including the double null check, follows.)
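A minimal sketch of the pattern described above, assuming ASP.NET's HttpRuntime.Cache; the cache key format, the 60-second micro-cache window, and the GetSearchResultsFromDatabase helper are illustrative, not code from the original post:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class SearchCache
{
    private static readonly object CacheLock = new object();

    public static List<string> GetSearchResults(string searchTerm)
    {
        string cacheKey = "search:" + searchTerm;
        var results = HttpRuntime.Cache[cacheKey] as List<string>;
        if (results == null)
        {
            lock (CacheLock)
            {
                // Second null check: a request that was waiting on the lock finds
                // the cache already populated and skips the database trip entirely.
                results = HttpRuntime.Cache[cacheKey] as List<string>;
                if (results == null)
                {
                    results = GetSearchResultsFromDatabase(searchTerm); // hypothetical data-access helper

                    // Micro-caching: the absolute expiration invalidates the entry
                    // after 60 seconds, so stale data lives for at most one minute.
                    HttpRuntime.Cache.Insert(cacheKey, results, null,
                        DateTime.UtcNow.AddSeconds(60), Cache.NoSlidingExpiration);
                }
            }
        }
        return results;
    }

    private static List<string> GetSearchResultsFromDatabase(string searchTerm)
    {
        // Placeholder for the real database query.
        return new List<string> { searchTerm + " result 1", searchTerm + " result 2" };
    }
}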
But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second, which means they cater for the site traffic while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique – and what are the odds that the user will not come back to their site sooner or later?

Recap

Resources

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead

Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions? Please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running test controllers/agents in the cloud. See you on the other side! Thank you!

    Read the article

  • How do you set up the driver for a Philips-based TV capture card?

    - by user8270
    Hi, I have a TV card that I have not managed to install on Ubuntu 10.10 i386. I have tried the suggestions in various forum topics, but I could not get it working. I hope you can help me to install it. Thank you.

lspci:
01:07.0 Multimedia controller: Philips Semiconductors SAA7130 Video Broadcast Decoder (rev 01)

dmesg:
[10299.516344] saa7134 ALSA driver for DMA sound unloaded [11385.340661] Linux video capture interface: v2.00 [11385.384278] saa7130/34: v4l2 driver version 0.2.16 loaded [11385.384390] saa7130[0]: found at 0000:01:07.0, rev: 1, irq: 17, latency: 32, mmio: 0x0 [11385.384403] saa7130[0]: subsystem: 1131:0000, board: LifeView/Typhoon FlyVIDEO2000 [card=3,insmod option] [11385.384412] saa7130[0]: can't get MMIO memory @ 0x0 [11385.384431] saa7134: probe of 0000:01:07.0 failed with error -16 [11385.401174] saa7134 ALSA driver for DMA sound loaded [11385.401182] saa7134 ALSA: no saa7134 cards found [11477.797019] tvtime[12534]: segfault at 6b0 ip 0804cf64 sp bf928a4c error 4 in tvtime[8048000+76000] [11626.141821] tvtime[12549]: segfault at 6b0 ip 0804cf64 sp bfec357c error 4 in tvtime[8048000+76000] [12218.120632] saa7134 ALSA driver for DMA sound unloaded [12464.993061] Linux video capture interface: v2.00 [12465.028285] saa7130/34: v4l2 driver version 0.2.16 loaded [12465.028392] saa7130[0]: found at 0000:01:07.0, rev: 1, irq: 17, latency: 32, mmio: 0x0 [12465.028404] saa7134: <rant> [12465.028406] saa7134: Congratulations! Your TV card vendor saved a few [12465.028408] saa7134: cents for a eeprom, thus your pci board has no [12465.028411] saa7134: subsystem ID and I can't identify it automatically [12465.028414] saa7134: </rant> [12465.028416] saa7134: I feel better now. Ok, here are the good news: [12465.028418] saa7134: You can use the card=<nr> insmod option to specify [12465.028421] saa7134: which board do you have. 
The list: [12465.028428] saa7134: card=0 -> UNKNOWN/GENERIC [12465.028435] saa7134: card=1 -> Proteus Pro [philips reference design] 1131:2001 1131:2001 [12465.028447] saa7134: card=2 -> LifeView FlyVIDEO3000 5168:0138 4e42:0138 [12465.028457] saa7134: card=3 -> LifeView/Typhoon FlyVIDEO2000 5168:0138 4e42:0138 [12465.028467] saa7134: card=4 -> EMPRESS 1131:6752 [12465.028475] saa7134: card=5 -> SKNet Monster TV 1131:4e85 [12465.028484] saa7134: card=6 -> Tevion MD 9717 [12465.028491] saa7134: card=7 -> KNC One TV-Station RDS / Typhoon TV Tune 1131:fe01 1894:fe01 [12465.028501] saa7134: card=8 -> Terratec Cinergy 400 TV 153b:1142 [12465.028510] saa7134: card=9 -> Medion 5044 [12465.028517] saa7134: card=10 -> Kworld/KuroutoShikou SAA7130-TVPCI [12465.028523] saa7134: card=11 -> Terratec Cinergy 600 TV 153b:1143 [12465.028532] saa7134: card=12 -> Medion 7134 16be:0003 16be:5000 [12465.028542] saa7134: card=13 -> Typhoon TV+Radio 90031 [12465.028548] saa7134: card=14 -> ELSA EX-VISION 300TV 1048:226b [12465.028557] saa7134: card=15 -> ELSA EX-VISION 500TV 1048:226a [12465.028565] saa7134: card=16 -> ASUS TV-FM 7134 1043:4842 1043:4830 1043:4840 [12465.028576] saa7134: card=17 -> AOPEN VA1000 POWER 1131:7133 [12465.028585] saa7134: card=18 -> BMK MPEX No Tuner [12465.028592] saa7134: card=19 -> Compro VideoMate TV 185b:c100 [12465.028600] saa7134: card=20 -> Matrox CronosPlus 102b:48d0 [12465.028608] saa7134: card=21 -> 10MOONS PCI TV CAPTURE CARD 1131:2001 [12465.028617] saa7134: card=22 -> AverMedia M156 / Medion 2819 1461:a70b [12465.028625] saa7134: card=23 -> BMK MPEX Tuner [12465.028632] saa7134: card=24 -> KNC One TV-Station DVR 1894:a006 [12465.028640] saa7134: card=25 -> ASUS TV-FM 7133 1043:4843 [12465.028648] saa7134: card=26 -> Pinnacle PCTV Stereo (saa7134) 11bd:002b [12465.028657] saa7134: card=27 -> Manli MuchTV M-TV002 [12465.028663] saa7134: card=28 -> Manli MuchTV M-TV001 [12465.028670] saa7134: card=29 -> Nagase Sangyo TransGear 3000TV 1461:050c [12465.028679] saa7134: card=30 -> Elitegroup ECS TVP3XP FM1216 Tuner Card( 1019:4cb4 [12465.028687] saa7134: card=31 -> Elitegroup ECS TVP3XP FM1236 Tuner Card 1019:4cb5 [12465.028695] saa7134: card=32 -> AVACS SmartTV [12465.028702] saa7134: card=33 -> AVerMedia DVD EZMaker 1461:10ff [12465.028710] saa7134: card=34 -> Noval Prime TV 7133 [12465.028717] saa7134: card=35 -> AverMedia AverTV Studio 305 1461:2115 [12465.028725] saa7134: card=36 -> UPMOST PURPLE TV 12ab:0800 [12465.028734] saa7134: card=37 -> Items MuchTV Plus / IT-005 [12465.028740] saa7134: card=38 -> Terratec Cinergy 200 TV 153b:1152 [12465.028749] saa7134: card=39 -> LifeView FlyTV Platinum Mini 5168:0212 4e42:0212 5169:1502 [12465.028760] saa7134: card=40 -> Compro VideoMate TV PVR/FM 185b:c100 [12465.028768] saa7134: card=41 -> Compro VideoMate TV Gold+ 185b:c100 [12465.028776] saa7134: card=42 -> Sabrent SBT-TVFM (saa7130) [12465.028783] saa7134: card=43 -> :Zolid Xpert TV7134 [12465.028790] saa7134: card=44 -> Empire PCI TV-Radio LE [12465.028796] saa7134: card=45 -> Avermedia AVerTV Studio 307 1461:9715 [12465.028805] saa7134: card=46 -> AVerMedia Cardbus TV/Radio (E500) 1461:d6ee [12465.028813] saa7134: card=47 -> Terratec Cinergy 400 mobile 153b:1162 [12465.028821] saa7134: card=48 -> Terratec Cinergy 600 TV MK3 153b:1158 [12465.028830] saa7134: card=49 -> Compro VideoMate Gold+ Pal 185b:c200 [12465.028838] saa7134: card=50 -> Pinnacle PCTV 300i DVB-T + PAL 11bd:002d [12465.028847] saa7134: card=51 -> ProVideo PV952 1540:9524 [12465.028855] saa7134: card=52 
-> AverMedia AverTV/305 1461:2108 [12465.028863] saa7134: card=53 -> ASUS TV-FM 7135 1043:4845 [12465.028871] saa7134: card=54 -> LifeView FlyTV Platinum FM / Gold 5168:0214 5168:5214 1489:0214 5168:0304 [12465.028884] saa7134: card=55 -> LifeView FlyDVB-T DUO / MSI TV@nywhere D 5168:0306 4e42:0306 [12465.028894] saa7134: card=56 -> Avermedia AVerTV 307 1461:a70a [12465.028903] saa7134: card=57 -> Avermedia AVerTV GO 007 FM 1461:f31f [12465.028911] saa7134: card=58 -> ADS Tech Instant TV (saa7135) 1421:0350 1421:0351 1421:0370 1421:1370 [12465.028924] saa7134: card=59 -> Kworld/Tevion V-Stream Xpert TV PVR7134 [12465.028931] saa7134: card=60 -> LifeView/Typhoon/Genius FlyDVB-T Duo Car 5168:0502 4e42:0502 1489:0502 [12465.028942] saa7134: card=61 -> Philips TOUGH DVB-T reference design 1131:2004 [12465.028951] saa7134: card=62 -> Compro VideoMate TV Gold+II [12465.028958] saa7134: card=63 -> Kworld Xpert TV PVR7134 [12465.028964] saa7134: card=64 -> FlyTV mini Asus Digimatrix 1043:0210 [12465.028973] saa7134: card=65 -> V-Stream Studio TV Terminator [12465.028980] saa7134: card=66 -> Yuan TUN-900 (saa7135) [12465.028986] saa7134: card=67 -> Beholder BeholdTV 409 FM 0000:4091 [12465.028995] saa7134: card=68 -> GoTView 7135 PCI 5456:7135 [12465.029003] saa7134: card=69 -> Philips EUROPA V3 reference design 1131:2004 [12465.029011] saa7134: card=70 -> Compro Videomate DVB-T300 185b:c900 [12465.029020] saa7134: card=71 -> Compro Videomate DVB-T200 185b:c901 [12465.029028] saa7134: card=72 -> RTD Embedded Technologies VFG7350 1435:7350 [12465.029036] saa7134: card=73 -> RTD Embedded Technologies VFG7330 1435:7330 [12465.029045] saa7134: card=74 -> LifeView FlyTV Platinum Mini2 14c0:1212 [12465.029053] saa7134: card=75 -> AVerMedia AVerTVHD MCE A180 1461:1044 [12465.029062] saa7134: card=76 -> SKNet MonsterTV Mobile 1131:4ee9 [12465.029070] saa7134: card=77 -> Pinnacle PCTV 40i/50i/110i (saa7133) 11bd:002e [12465.029078] saa7134: card=78 -> ASUSTeK P7131 Dual 1043:4862 [12465.029087] saa7134: card=79 -> Sedna/MuchTV PC TV Cardbus TV/Radio (ITO [12465.029094] saa7134: card=80 -> ASUS Digimatrix TV 1043:0210 [12465.029102] saa7134: card=81 -> Philips Tiger reference design 1131:2018 [12465.029110] saa7134: card=82 -> MSI TV@Anywhere plus 1462:6231 1462:8624 [12465.029120] saa7134: card=83 -> Terratec Cinergy 250 PCI TV 153b:1160 [12465.029128] saa7134: card=84 -> LifeView FlyDVB Trio 5168:0319 [12465.029137] saa7134: card=85 -> AverTV DVB-T 777 1461:2c05 1461:2c05 [12465.029147] saa7134: card=86 -> LifeView FlyDVB-T / Genius VideoWonder D 5168:0301 1489:0301 [12465.029156] saa7134: card=87 -> ADS Instant TV Duo Cardbus PTV331 0331:1421 [12465.029165] saa7134: card=88 -> Tevion/KWorld DVB-T 220RF 17de:7201 [12465.029173] saa7134: card=89 -> ELSA EX-VISION 700TV 1048:226c [12465.029182] saa7134: card=90 -> Kworld ATSC110/115 17de:7350 17de:7352 [12465.029191] saa7134: card=91 -> AVerMedia A169 B 1461:7360 [12465.029200] saa7134: card=92 -> AVerMedia A169 B1 1461:6360 [12465.029208] saa7134: card=93 -> Medion 7134 Bridge #2 16be:0005 [12465.029216] saa7134: card=94 -> LifeView FlyDVB-T Hybrid Cardbus/MSI TV 5168:3306 5168:3502 5168:3307 4e42:3502 [12465.029229] saa7134: card=95 -> LifeView FlyVIDEO3000 (NTSC) 5169:0138 [12465.029238] saa7134: card=96 -> Medion Md8800 Quadro 16be:0007 16be:0008 16be:000d [12465.029249] saa7134: card=97 -> LifeView FlyDVB-S /Acorp TV134DS 5168:0300 4e42:0300 [12465.029259] saa7134: card=98 -> Proteus Pro 2309 0919:2003 [12465.029267] saa7134: card=99 -> AVerMedia TV 
Hybrid A16AR 1461:2c00 [12465.029276] saa7134: card=100 -> Asus Europa2 OEM 1043:4860 [12465.029284] saa7134: card=101 -> Pinnacle PCTV 310i 11bd:002f [12465.029293] saa7134: card=102 -> Avermedia AVerTV Studio 507 1461:9715 [12465.029301] saa7134: card=103 -> Compro Videomate DVB-T200A [12465.029308] saa7134: card=104 -> Hauppauge WinTV-HVR1110 DVB-T/Hybrid 0070:6700 0070:6701 0070:6702 0070:6703 0070:6704 0070:6705 [12465.029324] saa7134: card=105 -> Terratec Cinergy HT PCMCIA 153b:1172 [12465.029332] saa7134: card=106 -> Encore ENLTV 1131:2342 1131:2341 3016:2344 [12465.029344] saa7134: card=107 -> Encore ENLTV-FM 1131:230f [12465.029352] saa7134: card=108 -> Terratec Cinergy HT PCI 153b:1175 [12465.029360] saa7134: card=109 -> Philips Tiger - S Reference design [12465.029367] saa7134: card=110 -> Avermedia M102 1461:f31e [12465.029375] saa7134: card=111 -> ASUS P7131 4871 1043:4871 [12465.029384] saa7134: card=112 -> ASUSTeK P7131 Hybrid 1043:4876 [12465.029392] saa7134: card=113 -> Elitegroup ECS TVP3XP FM1246 Tuner Card 1019:4cb6 [12465.029401] saa7134: card=114 -> KWorld DVB-T 210 17de:7250 [12465.029409] saa7134: card=115 -> Sabrent PCMCIA TV-PCB05 0919:2003 [12465.029418] saa7134: card=116 -> 10MOONS TM300 TV Card 1131:2304 [12465.029426] saa7134: card=117 -> Avermedia Super 007 1461:f01d [12465.029435] saa7134: card=118 -> Beholder BeholdTV 401 0000:4016 [12465.029443] saa7134: card=119 -> Beholder BeholdTV 403 0000:4036 [12465.029451] saa7134: card=120 -> Beholder BeholdTV 403 FM 0000:4037 [12465.029459] saa7134: card=121 -> Beholder BeholdTV 405 0000:4050 [12465.029468] saa7134: card=122 -> Beholder BeholdTV 405 FM 0000:4051 [12465.029476] saa7134: card=123 -> Beholder BeholdTV 407 0000:4070 [12465.029484] saa7134: card=124 -> Beholder BeholdTV 407 FM 0000:4071 [12465.029493] saa7134: card=125 -> Beholder BeholdTV 409 0000:4090 [12465.029501] saa7134: card=126 -> Beholder BeholdTV 505 FM 5ace:5050 [12465.029510] saa7134: card=127 -> Beholder BeholdTV 507 FM / BeholdTV 509 5ace:5070 5ace:5090 [12465.029520] saa7134: card=128 -> Beholder BeholdTV Columbus TVFM 0000:5201 [12465.029528] saa7134: card=129 -> Beholder BeholdTV 607 FM 5ace:6070 [12465.029537] saa7134: card=130 -> Beholder BeholdTV M6 5ace:6190 [12465.029545] saa7134: card=131 -> Twinhan Hybrid DTV-DVB 3056 PCI 1822:0022 [12465.029554] saa7134: card=132 -> Genius TVGO AM11MCE [12465.029560] saa7134: card=133 -> NXP Snake DVB-S reference design [12465.029567] saa7134: card=134 -> Medion/Creatix CTX953 Hybrid 16be:0010 [12465.029576] saa7134: card=135 -> MSI TV@nywhere A/D v1.1 1462:8625 [12465.029584] saa7134: card=136 -> AVerMedia Cardbus TV/Radio (E506R) 1461:f436 [12465.029592] saa7134: card=137 -> AVerMedia Hybrid TV/Radio (A16D) 1461:f936 [12465.029601] saa7134: card=138 -> Avermedia M115 1461:a836 [12465.029609] saa7134: card=139 -> Compro VideoMate T750 185b:c900 [12465.029617] saa7134: card=140 -> Avermedia DVB-S Pro A700 1461:a7a1 [12465.029626] saa7134: card=141 -> Avermedia DVB-S Hybrid+FM A700 1461:a7a2 [12465.029634] saa7134: card=142 -> Beholder BeholdTV H6 5ace:6290 [12465.029642] saa7134: card=143 -> Beholder BeholdTV M63 5ace:6191 [12465.029651] saa7134: card=144 -> Beholder BeholdTV M6 Extra 5ace:6193 [12465.029659] saa7134: card=145 -> AVerMedia MiniPCI DVB-T Hybrid M103 1461:f636 1461:f736 [12465.029669] saa7134: card=146 -> ASUSTeK P7131 Analog [12465.029676] saa7134: card=147 -> Asus Tiger 3in1 1043:4878 [12465.029684] saa7134: card=148 -> Encore ENLTV-FM v5.3 1a7f:2008 [12465.029693] saa7134: 
card=149 -> Avermedia PCI pure analog (M135A) 1461:f11d [12465.029701] saa7134: card=150 -> Zogis Real Angel 220 [12465.029708] saa7134: card=151 -> ADS Tech Instant HDTV 1421:0380 [12465.029716] saa7134: card=152 -> Asus Tiger Rev:1.00 1043:4857 [12465.029725] saa7134: card=153 -> Kworld Plus TV Analog Lite PCI 17de:7128 [12465.029733] saa7134: card=154 -> Avermedia AVerTV GO 007 FM Plus 1461:f31d [12465.029742] saa7134: card=155 -> Hauppauge WinTV-HVR1150 ATSC/QAM-Hybrid 0070:6706 0070:6708 [12465.029752] saa7134: card=156 -> Hauppauge WinTV-HVR1120 DVB-T/Hybrid 0070:6707 0070:6709 0070:670a [12465.029763] saa7134: card=157 -> Avermedia AVerTV Studio 507UA 1461:a11b [12465.029772] saa7134: card=158 -> AVerMedia Cardbus TV/Radio (E501R) 1461:b7e9 [12465.029780] saa7134: card=159 -> Beholder BeholdTV 505 RDS 0000:505b [12465.029789] saa7134: card=160 -> Beholder BeholdTV 507 RDS 0000:5071 [12465.029797] saa7134: card=161 -> Beholder BeholdTV 507 RDS 0000:507b [12465.029806] saa7134: card=162 -> Beholder BeholdTV 607 FM 5ace:6071 [12465.029815] saa7134: card=163 -> Beholder BeholdTV 609 FM 5ace:6090 [12465.029823] saa7134: card=164 -> Beholder BeholdTV 609 FM 5ace:6091 [12465.029832] saa7134: card=165 -> Beholder BeholdTV 607 RDS 5ace:6072 [12465.029840] saa7134: card=166 -> Beholder BeholdTV 607 RDS 5ace:6073 [12465.029849] saa7134: card=167 -> Beholder BeholdTV 609 RDS 5ace:6092 [12465.029857] saa7134: card=168 -> Beholder BeholdTV 609 RDS 5ace:6093 [12465.029866] saa7134: card=169 -> Compro VideoMate S350/S300 185b:c900 [12465.029874] saa7134: card=170 -> AverMedia AverTV Studio 505 1461:a115 [12465.029883] saa7134: card=171 -> Beholder BeholdTV X7 5ace:7595 [12465.029892] saa7134: card=172 -> RoverMedia TV Link Pro FM 19d1:0138 [12465.029900] saa7134: card=173 -> Zolid Hybrid TV Tuner PCI 1131:2004 [12465.029909] saa7134: card=174 -> Asus Europa Hybrid OEM 1043:4847 [12465.029917] saa7134: card=175 -> Leadtek Winfast DTV1000S 107d:6655 [12465.029926] saa7134: card=176 -> Beholder BeholdTV 505 RDS 0000:5051 [12465.029934] saa7134: card=177 -> Hawell HW-404M7 [12465.029941] saa7134: card=178 -> Beholder BeholdTV H7 [12465.029948] saa7134: card=179 -> Beholder BeholdTV A7 [12465.029955] saa7134: card=180 -> Avermedia PCI M733A 1461:4155 1461:4255 [12465.029967] saa7130[0]: subsystem: 1131:0000, board: UNKNOWN/GENERIC [card=0,autodetected] [12465.030033] saa7130[0]: can't get MMIO memory @ 0x0 [12465.030051] saa7134: probe of 0000:01:07.0 failed with error -16 [12465.053892] saa7134 ALSA driver for DMA sound loaded [12465.053900] saa7134 ALSA: no saa7134 cards found

tvtime-scanner:
Reading the configuration from /etc/tvtime/tvtime.xml
Reading the configuration from /home/ricardo/.tvtime/tvtime.xml
Scanning using the NTSC TV standard.
/home/ricardo/.tvtime/stationlist.xml: No existing NTSC station list "Custom".
videoinput: Cannot open capture device /dev/video0: No such device or address

    Read the article

card=149 -> Avermedia PCI pure analog (M135A) 1461:f11d [12465.029701] saa7134: card=150 -> Zogis Real Angel 220 [12465.029708] saa7134: card=151 -> ADS Tech Instant HDTV 1421:0380 [12465.029716] saa7134: card=152 -> Asus Tiger Rev:1.00 1043:4857 [12465.029725] saa7134: card=153 -> Kworld Plus TV Analog Lite PCI 17de:7128 [12465.029733] saa7134: card=154 -> Avermedia AVerTV GO 007 FM Plus 1461:f31d [12465.029742] saa7134: card=155 -> Hauppauge WinTV-HVR1150 ATSC/QAM-Hybrid 0070:6706 0070:6708 [12465.029752] saa7134: card=156 -> Hauppauge WinTV-HVR1120 DVB-T/Hybrid 0070:6707 0070:6709 0070:670a [12465.029763] saa7134: card=157 -> Avermedia AVerTV Studio 507UA 1461:a11b [12465.029772] saa7134: card=158 -> AVerMedia Cardbus TV/Radio (E501R) 1461:b7e9 [12465.029780] saa7134: card=159 -> Beholder BeholdTV 505 RDS 0000:505b [12465.029789] saa7134: card=160 -> Beholder BeholdTV 507 RDS 0000:5071 [12465.029797] saa7134: card=161 -> Beholder BeholdTV 507 RDS 0000:507b [12465.029806] saa7134: card=162 -> Beholder BeholdTV 607 FM 5ace:6071 [12465.029815] saa7134: card=163 -> Beholder BeholdTV 609 FM 5ace:6090 [12465.029823] saa7134: card=164 -> Beholder BeholdTV 609 FM 5ace:6091 [12465.029832] saa7134: card=165 -> Beholder BeholdTV 607 RDS 5ace:6072 [12465.029840] saa7134: card=166 -> Beholder BeholdTV 607 RDS 5ace:6073 [12465.029849] saa7134: card=167 -> Beholder BeholdTV 609 RDS 5ace:6092 [12465.029857] saa7134: card=168 -> Beholder BeholdTV 609 RDS 5ace:6093 [12465.029866] saa7134: card=169 -> Compro VideoMate S350/S300 185b:c900 [12465.029874] saa7134: card=170 -> AverMedia AverTV Studio 505 1461:a115 [12465.029883] saa7134: card=171 -> Beholder BeholdTV X7 5ace:7595 [12465.029892] saa7134: card=172 -> RoverMedia TV Link Pro FM 19d1:0138 [12465.029900] saa7134: card=173 -> Zolid Hybrid TV Tuner PCI 1131:2004 [12465.029909] saa7134: card=174 -> Asus Europa Hybrid OEM 1043:4847 [12465.029917] saa7134: card=175 -> Leadtek Winfast DTV1000S 107d:6655 [12465.029926] saa7134: card=176 -> Beholder BeholdTV 505 RDS 0000:5051 [12465.029934] saa7134: card=177 -> Hawell HW-404M7 [12465.029941] saa7134: card=178 -> Beholder BeholdTV H7 [12465.029948] saa7134: card=179 -> Beholder BeholdTV A7 [12465.029955] saa7134: card=180 -> Avermedia PCI M733A 1461:4155 1461:4255 [12465.029967] saa7130[0]: subsystem: 1131:0000, board: UNKNOWN/GENERIC [card=0,autodetected] [12465.030033] saa7130[0]: can't get MMIO memory @ 0x0 [12465.030051] saa7134: probe of 0000:01:07.0 failed with error -16 [12465.053892] saa7134 ALSA driver for DMA sound loaded [12465.053900] saa7134 ALSA: no saa7134 cards found tvtime-scanner Leyendo la configuración de /etc/tvtime/tvtime.xml Leyendo la configuración de /home/ricardo/.tvtime/tvtime.xml Escaneando usando la norma de TV NTSC. /home/ricardo/.tvtime/stationlist.xml: No existing NTSC station list "Custom". videoinput: Cannot open capture device /dev/video0: No existe el dispositivo o la dirección ls /dev/ No video directory here
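For reference, the card=N values in the table above can be forced when autodetection picks the wrong board (here it fell back to card=0, UNKNOWN/GENERIC). A hypothetical example, assuming the board were actually a LifeView FlyVIDEO3000 (card=2):

sudo rmmod saa7134
sudo modprobe saa7134 card=2

# To make the override persistent across reboots:
echo "options saa7134 card=2" | sudo tee /etc/modprobe.d/saa7134.conf

Note that this only helps with misdetection: in the log above the probe fails with error -16 (device or resource busy) because the MMIO region could not be mapped, and that would need to be resolved first.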

    Read the article

  • Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache

    - by Ricardo Ferreira
    The concept and usage of data grids are becoming very popular these days, since this type of technology is evolving very fast, with leading products like Oracle Coherence. Every once in a while, developers need a programmatic way to calculate the total size of a specific cache residing in the data grid. In this post, I will show how to accomplish this using the Oracle Coherence API. This example has been tested with versions 3.6, 3.7 and 3.7.1 of Oracle Coherence.

    To start the development of this example, you need to create a POJO ("Plain Old Java Object") that represents the data structure that will hold user data. This data structure also creates what I call an internal "fat", which should considerably increase the size of each instance in heap memory. Create a Java class named "Person" as shown in the listing below.

    package com.oracle.coherence.domain;

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Random;

    @SuppressWarnings("serial")
    public class Person implements Serializable {

        private String firstName;
        private String lastName;
        private List<Object> fat;
        private String email;

        public Person() {
            generateFat();
        }

        public Person(String firstName, String lastName, String email) {
            setFirstName(firstName);
            setLastName(lastName);
            setEmail(email);
            generateFat();
        }

        // Fills the "fat" list with a random number of maps, each holding
        // random key/value pairs, to inflate the size of this instance in memory...
        private void generateFat() {
            fat = new ArrayList<Object>();
            Random random = new Random();
            for (int i = 0; i < random.nextInt(18000); i++) {
                HashMap<Long, Double> internalFat = new HashMap<Long, Double>();
                for (int j = 0; j < random.nextInt(10000); j++) {
                    internalFat.put(random.nextLong(), random.nextDouble());
                }
                fat.add(internalFat);
            }
        }

        public String getFirstName() {
            return firstName;
        }

        public void setFirstName(String firstName) {
            this.firstName = firstName;
        }

        public String getLastName() {
            return lastName;
        }

        public void setLastName(String lastName) {
            this.lastName = lastName;
        }

        public String getEmail() {
            return email;
        }

        public void setEmail(String email) {
            this.email = email;
        }
    }

    Now let's create a Java program that starts a data grid node in Coherence and creates a cache named "People", which will hold Person instances with sequential integer keys. Each Person created in this program triggers the custom constructor of the Person class, which instantiates an internal fat (the random amount of data generated to increase the size of the object) for each person. Create a Java class named "CreatePeopleCacheAndPopulateWithData" as shown in the listing below.

    package com.oracle.coherence.demo;

    import com.oracle.coherence.domain.Person;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CreatePeopleCacheAndPopulateWithData {

        public static void main(String[] args) {

            // Asks Coherence for a new cache named "People"...
            NamedCache people = CacheFactory.getCache("People");

            // Creates three people that will be put into the data grid. Each person
            // generates an internal fat that should increase its size in terms of bytes...
            Person pessoa1 = new Person("Ricardo", "Ferreira", "[email protected]");
            Person pessoa2 = new Person("Vitor", "Ferreira", "[email protected]");
            Person pessoa3 = new Person("Vivian", "Ferreira", "[email protected]");

            // Inserts the three people into the data grid...
            people.put(1, pessoa1);
            people.put(2, pessoa2);
            people.put(3, pessoa3);

            // Waits for 5 minutes until the user runs the Java program
            // that calculates the total size of the people cache...
            try {
                System.out.println("---> Waiting for 5 minutes for the cache size calculation...");
                Thread.sleep(300000);
            } catch (InterruptedException ie) {
                ie.printStackTrace();
            }
        }
    }
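    Note that this loader sleeps for five minutes on purpose: it is a storage-enabled member, so the "People" entries live in its JVM, and exiting would take the data with it. As a side note, not part of the original example, a longer-lived alternative is to keep a standalone cache server running with the stock com.tangosol.net.DefaultCacheServer class; the classpath layout below is an assumption, so adjust it to wherever coherence.jar and your compiled classes live:

        java -cp classes:coherence.jar com.tangosol.net.DefaultCacheServer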
    Finally, let's create a Java program that, using the Coherence API and JMX, calculates the total size of each cache that the data grid is currently managing. The approach used in this example retrieves every cache currently managed by the data grid, but if you are interested in one specific cache, the same approach can be used; you only need to filter which cache to look for. Create a Java class named "CalculateTheSizeOfPeopleCache" as shown in the listing below.

    package com.oracle.coherence.demo;

    import java.text.DecimalFormat;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeMap;

    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    import com.tangosol.net.CacheFactory;

    public class CalculateTheSizeOfPeopleCache {

        @SuppressWarnings({ "unchecked", "rawtypes" })
        private void run() throws Exception {

            // Enable JMX support in this Coherence data grid session...
            System.setProperty("tangosol.coherence.management", "all");

            // Create a sample cache just to access the data grid...
            CacheFactory.getCache(MBeanServerFactory.class.getName());

            // Gets the JMX server from the Coherence data grid...
            MBeanServer jmxServer = getJMXServer();

            // Creates an internal data structure that maintains
            // the statistics for each cache in the data grid...
            Map cacheList = new TreeMap();
            Set jmxObjectList = jmxServer.queryNames(new ObjectName("Coherence:type=Cache,*"), null);
            for (Object jmxObject : jmxObjectList) {
                ObjectName jmxObjectName = (ObjectName) jmxObject;
                String cacheName = jmxObjectName.getKeyProperty("name");
                if (cacheName.equals(MBeanServerFactory.class.getName())) {
                    continue;
                } else {
                    cacheList.put(cacheName, new Statistics(cacheName));
                }
            }

            // Updates the internal data structure with statistic data
            // retrieved from the caches inside the in-memory data grid...
            Set<String> cacheNames = cacheList.keySet();
            for (String cacheName : cacheNames) {
                Set resultSet = jmxServer.queryNames(
                    new ObjectName("Coherence:type=Cache,name=" + cacheName + ",*"), null);
                for (Object resultSetRef : resultSet) {
                    ObjectName objectName = (ObjectName) resultSetRef;
                    if (objectName.getKeyProperty("tier").equals("back")) {
                        int unit = (Integer) jmxServer.getAttribute(objectName, "Units");
                        int size = (Integer) jmxServer.getAttribute(objectName, "Size");
                        Statistics statistics = (Statistics) cacheList.get(cacheName);
                        statistics.incrementUnit(unit);
                        statistics.incrementSize(size);
                        cacheList.put(cacheName, statistics);
                    }
                }
            }

            // Finally... print the objects from the internal data
            // structure that represent the statistics from the caches...
            cacheNames = cacheList.keySet();
            for (String cacheName : cacheNames) {
                Statistics statistics = (Statistics) cacheList.get(cacheName);
                System.out.println(statistics);
            }
        }

        public MBeanServer getJMXServer() {
            MBeanServer jmxServer = null;
            for (Object jmxServerRef : MBeanServerFactory.findMBeanServer(null)) {
                jmxServer = (MBeanServer) jmxServerRef;
                if (jmxServer.getDefaultDomain().equals(DEFAULT_DOMAIN) || DEFAULT_DOMAIN.length() == 0) {
                    break;
                }
                jmxServer = null;
            }
            if (jmxServer == null) {
                jmxServer = MBeanServerFactory.createMBeanServer(DEFAULT_DOMAIN);
            }
            return jmxServer;
        }

        private class Statistics {

            private long unit;
            private long size;
            private String cacheName;

            public Statistics(String cacheName) {
                this.cacheName = cacheName;
            }

            public void incrementUnit(long unit) {
                this.unit += unit;
            }

            public void incrementSize(long size) {
                this.size += size;
            }

            public long getUnit() {
                return unit;
            }

            public long getSize() {
                return size;
            }

            public double getUnitInMB() {
                return unit / (1024.0 * 1024.0);
            }

            public double getAverageSize() {
                return size == 0 ? 0 : unit / size;
            }

            public String toString() {
                StringBuffer sb = new StringBuffer();
                sb.append("\nCache Statistics of '").append(cacheName).append("':\n");
                sb.append(" - Total Entries of Cache -----> " + getSize()).append("\n");
                sb.append(" - Used Memory (Bytes) --------> " + getUnit()).append("\n");
                sb.append(" - Used Memory (MB) -----------> " + FORMAT.format(getUnitInMB())).append("\n");
                sb.append(" - Object Average Size --------> " + FORMAT.format(getAverageSize())).append("\n");
                return sb.toString();
            }
        }

        public static void main(String[] args) throws Exception {
            new CalculateTheSizeOfPeopleCache().run();
        }

        public static final DecimalFormat FORMAT = new DecimalFormat("###.###");
        public static final String DEFAULT_DOMAIN = "";
        public static final String DOMAIN_NAME = "Coherence";
    }

    I've commented the overall example, so I don't think you will have trouble understanding it. Basically, we are dealing with JMX. The first thing to do is enable JMX support for the Coherence client application (i.e., a JVM that only retrieves values from the data grid). This can be done very easily using the "tangosol.coherence.management" runtime system property; consult the Coherence documentation on JMX for the possible values that can be applied. The program creates an in-memory data structure that holds a custom class called "Statistics". This class represents the information we are interested in, which in this case is the size of the caches in bytes and in MB. An instance of this class is created for each cache currently managed by the data grid. Using JMX-specific methods, we retrieve the information relevant to calculating the total size of the caches. To test this example, first execute the CreatePeopleCacheAndPopulateWithData.java program and then the CalculateTheSizeOfPeopleCache.java program.
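    For reference, a hypothetical pair of launch commands (the classpath layout is an assumption, and the -Dtangosol.coherence.management=all flag simply mirrors what the calculator program already sets programmatically):

        java -cp classes:coherence.jar com.oracle.coherence.demo.CreatePeopleCacheAndPopulateWithData
        java -cp classes:coherence.jar -Dtangosol.coherence.management=all com.oracle.coherence.demo.CalculateTheSizeOfPeopleCache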
    The results in the console should be something like this:

    2012-06-23 13:29:31.188/4.970 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence.xml"
    2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
    2012-06-23 13:29:31.266/5.048 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified

    Oracle Coherence Version 3.6.0.4 Build 19111
    Grid Edition: Development mode
    Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

    2012-06-23 13:29:33.156/6.938 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded Reporter configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/reports/report-group.xml"
    2012-06-23 13:29:33.500/7.282 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/coherence-cache-config.xml"
    2012-06-23 13:29:35.391/9.173 Oracle Coherence GE 3.6.0.4 <D4> (thread=Main Thread, member=n/a): TCMP bound to /192.168.177.133:8090 using SystemSocketProvider
    2012-06-23 13:29:37.062/10.844 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) joined cluster "cluster:0xC4DB" with senior Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2)
    2012-06-23 13:29:37.172/10.954 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Cluster with senior member 1
    2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
    2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0xC4DB

    Group{Address=224.3.6.0, Port=36000, TTL=4}

    MasterMemberSet
      (
      ThisMember=Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
      OldestMember=Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
      ActualMemberSet=MemberSet(Size=2, BitSetCount=2
        Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
        Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
        )
      RecycleMillis=1200000
      RecycleSet=MemberSet(Size=0, BitSetCount=0
        )
      )

    TcpRing{Connections=[1]}
    IpMonitor{AddressListSize=0}

    2012-06-23 13:29:37.891/11.673 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
    2012-06-23 13:29:39.203/12.985 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
    2012-06-23 13:29:39.297/13.079 Oracle Coherence GE 3.6.0.4 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions

    Cache Statistics of 'People':
     - Total Entries of Cache -----> 3
     - Used Memory (Bytes) --------> 883920
     - Used Memory (MB) -----------> 0.843
     - Object Average Size --------> 294640

    I hope this post saves you some time when calculating the total size of a Coherence cache becomes a requirement for your highly scalable system using data grids. See you!
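    As a postscript, one alternative when JMX cannot be enabled on the grid: serialize each entry on the client and sum the binary sizes. The sketch below is not part of the original example and makes several assumptions: the class name is made up, the default serializer is in use, and every entry is pulled to the client, so it is only practical for small caches like "People". It also measures serialized bytes, which approximates but does not equal the backing-map footprint reported by the "Units" attribute.

    package com.oracle.coherence.demo;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Binary;
    import com.tangosol.util.ExternalizableHelper;

    public class EstimateCacheSizeBySerialization {

        public static void main(String[] args) {
            NamedCache people = CacheFactory.getCache("People");
            long totalBytes = 0;
            // Serialize each key and value with the default serializer and
            // accumulate their binary lengths as a rough size estimate...
            for (Object key : people.keySet()) {
                Binary binKey = ExternalizableHelper.toBinary(key);
                Binary binValue = ExternalizableHelper.toBinary(people.get(key));
                totalBytes += binKey.length() + binValue.length();
            }
            System.out.println("Estimated serialized size (bytes): " + totalBytes);
            System.out.println("Estimated serialized size (MB): " + totalBytes / (1024.0 * 1024.0));
        }
    }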

    Read the article
