Search Results

Search found 44965 results on 1799 pages for 'presenter first'.


  • Sorting Algorithms

    - by MarkPearl
    General – Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven't really appreciated their true value. However, as I discovered this last week with Dictionaries in C#, having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I'm going to cover briefly in this post the following: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort and Heapsort (not complete).
    Selection Sort – Array based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. So the first iteration would go something like this… Go through the entire array of data and find the lowest value; place the value at the front of the array. The second iteration would go something like this… Go through the array from position two (position one has already been sorted with the smallest value) and find the next lowest value in the array; place the value at the second position in the array. This process would be repeated until the entire array had been sorted. A positive about selection sort is that it does not make many item movements. In fact, in a worst case scenario every item is only moved once. Selection sort is however a comparison intensive sort. If you had 10 items in a collection, just to parse the collection you would have 9+8+7+6+5+4+3+2+1=45 comparisons to make, regardless of how sorted the collection was to start with. If you think about it, if you applied selection sort to a collection already sorted, you would still perform relatively the same number of iterations as if it was not sorted at all. Many of the following algorithms try to reduce the number of comparisons if the list is already sorted – leaving one with a best case and worst case scenario for comparisons. Likewise, different approaches have different levels of item movement. Depending on which is more expensive – a comparison or an item move – one may give priority to one approach over another.
    Insertion Sort – Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not "doing anything" if things are sorted. Assume you had a collection of numbers in the following order… 10 18 25 30 23 17 45 35. There are 8 elements in the list. If we were to start at the front of the list – 10 18 25 & 30 are already sorted. Element 5 (23) however is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder, and then begin shifting the elements before it up one. So… Element 5 would be copied to a temporary holder: 10 18 25 30 23 17 45 35 – T 23. Element 4 would shift to Element 5: 10 18 25 30 30 17 45 35 – T 23. Element 3 would shift to Element 4: 10 18 25 25 30 17 45 35 – T 23. Element 2 (18) is smaller than the temporary holder, so we put the temporary holder value into Element 3: 10 18 23 25 30 17 45 35 – T 23. We now have a sorted list up to element 6. And so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5.
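    To make the two approaches concrete, here is a minimal C# sketch of the array based versions described above (illustrative only – the original post did not include code for these):
    static void SelectionSort(int[] data)
    {
        for (int i = 0; i < data.Length - 1; i++)
        {
            int smallest = i;
            for (int j = i + 1; j < data.Length; j++)   // scan the unsorted portion for the lowest value
            {
                if (data[j] < data[smallest]) smallest = j;
            }
            int swap = data[i];                          // move the smallest remaining value into position i
            data[i] = data[smallest];
            data[smallest] = swap;
        }
    }

    static void InsertionSort(int[] data)
    {
        for (int i = 1; i < data.Length; i++)
        {
            int holder = data[i];                        // the "temporary holder"
            int j = i;
            while (j > 0 && data[j - 1] > holder)        // shift larger elements up one
            {
                data[j] = data[j - 1];
                j--;
            }
            data[j] = holder;                            // drop the held value into its sorted position
        }
    }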
    As you can see, one major setback for this technique is the shifting of values up one – this is because up to now we have been considering the collection to be an array. If however the collection was a linked list, we would not need to shift values up, but merely remove the link from the unsorted value and "reinsert" it in a sorted position, which would reduce the number of transactions performed on the collection. So… insertion sort seems to perform better than selection sort – however an implementation is slightly more complicated. This is typical with most sorting algorithms – generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted. If for instance you were handed a sorted collection of size n, then only n comparisons would need to be performed to verify that it is sorted. It's important to note that insertion sort (array based) performs a number of item moves – every time an item is "out of place" several items before it get shifted up.
    Shellsort – Diminishing Increment Sort – So up to now we have covered Selection Sort & Insertion Sort. Selection Sort makes many comparisons and insertion sort (with an array) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements… 10 20 15 45 36 48 7 60 18 50 2 19 43 30 55. First we may view the collection as 7 sub-collections and sort each sublist, let's say at intervals of 7: 10 60 55 – 20 18 – 15 50 – 45 2 – 36 19 – 48 43 – 7 30 becomes 10 55 60 – 18 20 – 15 50 – 2 45 – 19 36 – 43 48 – 7 30 (sorted). We then sort each sublist at a smaller interval – let's say 4: 10 55 60 18 – 20 15 50 2 – 45 19 36 43 – 48 7 30 becomes 10 18 55 60 – 2 15 20 50 – 19 36 43 45 – 7 30 48 (sorted). We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort): 10 18 55 60 2 15 20 50 19 36 43 45 7 30 48 becomes 2 7 10 15 18 19 20 30 36 43 45 48 50 55 60 (sorted). The important thing with shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn't any definitive method and, depending on the order of your elements, different increment sequences may perform better than others. There are however certain increment sequences that you may want to avoid. An even based increment sequence (e.g. 2 4 8 16 32 …) should typically be avoided because it does not allow even elements to be compared with odd elements until the final sort phase – which in a way would negate many of the benefits of using sub-collections. The performance of Shellsort in terms of the number of comparisons and item movements is hard to determine, however it is considered to be considerably better than the normal insertion sort.
    Quicksort – Quicksort uses a divide and conquer approach to sort a collection of items. The collection is divided into two sub-collections – and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The general algorithm in pseudo code is below… Divide the collection into two sub-collections; Quicksort the lower sub-collection; Quicksort the upper sub-collection; Combine the lower & upper sub-collections together. As hinted at above, quicksort uses recursion in its implementation.
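    A minimal recursive C# sketch of the above, assuming the last element of each range is used as the pivot (pivot choice is discussed next; this code is illustrative and not from the original post):
    static void QuickSort(int[] data, int low, int high)
    {
        if (low >= high) return;                    // base case: zero or one element
        int pivot = data[high];                     // assumption: last element as pivot
        int boundary = low;                         // items below boundary are smaller than the pivot
        for (int i = low; i < high; i++)
        {
            if (data[i] < pivot)
            {
                (data[i], data[boundary]) = (data[boundary], data[i]);
                boundary++;
            }
        }
        (data[high], data[boundary]) = (data[boundary], data[high]);   // place the pivot between the two sub-collections
        QuickSort(data, low, boundary - 1);         // quicksort the lower sub-collection
        QuickSort(data, boundary + 1, high);        // quicksort the upper sub-collection
    }
    Calling QuickSort(data, 0, data.Length - 1) sorts the whole array in place; no explicit combine step is needed because the partitioning has already done the work, which is the point made next.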
    The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by what value the pivot is. Once a pivot is determined, one would partition into sub-collections and then repeat the process on each sub-collection until you reach the base case. With quicksort, the work is done when dividing the sub-collections into lower & upper collections. The actual combining of the lower & upper sub-collections at the end is relatively simple, since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection.
    Mergesort – With quicksort, the average-case complexity was O(n log2 n), however the worst case complexity was still O(n²). Mergesort improves on quicksort by always having a complexity of O(n log2 n) regardless of the best or worst case. So how does it do this? Mergesort makes use of the divide and conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort is as follows… Divide the collection into two sub-collections; Mergesort the first sub-collection; Mergesort the second sub-collection; Merge the first sub-collection and the second sub-collection. As you can see… it still pretty much looks like quicksort – so let's see where it differs… Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of having a pivot, mergesort partitions each sub-collection based on size, so that the first and second sub-collections are of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element. If a sub-collection is one element in size – it is now sorted! So the trick is: how do we put all these sub-collections together so that they maintain their sorted order? Sorted sub-collections are merged into a sorted collection by comparing the elements of the sub-collections and then adjusting the sorted collection. Let's have a look at a few examples… Assume 2 sub-collections with 1 element each: 10 & 20. Compare the first element of the first sub-collection with the first element of the second sub-collection. Take the smaller of the two and place it as the first element in the sorted collection. In this scenario 10 is smaller than 20, so 10 is taken from sub-collection 1, leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). So the sorted collection would be 10 20. Let's assume 2 sub-collections with 2 elements each: 10 20 & 15 19. So… again we would: Compare 10 with 15 – 10 is the winner so we add it to our sorted collection (10), leaving us with 20 & 15 19. Compare 20 with 15 – 15 is the winner so we add it to our sorted collection (10 15), leaving us with 20 & 19. Compare 20 with 19 – 19 is the winner so we add it to our sorted collection (10 15 19), leaving us with 20 & _. 20 is by default the winner, so our sorted collection is 10 15 19 20. Make sense?
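    The merge step described above can be sketched in C# like this (a hedged example, not from the original post):
    using System.Collections.Generic;

    static List<int> Merge(Queue<int> first, Queue<int> second)
    {
        var sorted = new List<int>();
        while (first.Count > 0 && second.Count > 0)
        {
            // compare the front of each sorted sub-collection and take the smaller value
            sorted.Add(first.Peek() <= second.Peek() ? first.Dequeue() : second.Dequeue());
        }
        sorted.AddRange(first);    // whatever remains in either sub-collection is already sorted
        sorted.AddRange(second);
        return sorted;
    }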
    Heapsort (still needs to be completed) – So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why we are made to learn about selection sort and insertion sort if they are so bad – why didn't we just skip to Mergesort & Quicksort? I guess the only explanation I have for this is that sometimes you learn things so that you can implement them in future – and other times you learn things so that you know it isn't the best way of implementing things and that you don't need to implement it in future. Anyhow… luckily this is going to be the last one of my sorts for today. The first step in heapsort is to convert a collection of data into a heap. After the data is converted into a heap, sorting begins… So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: a heap is a list in which each element contains a key, such that the key in the element at position k in the list is at least as large as the key in the element at position 2k + 1 (if it exists) and 2k + 2 (if it exists). Does that make sense? At first glance I'm thinking what the heck??? But then after re-reading my notes I see that we are doing something different – up to now we have really looked at data as an array or sequential collection of data that we need to sort – a heap represents data in a slightly different way: although the data is stored in a sequential collection, for a sequential collection of data to be in a valid heap it is "semi sorted". Let me try and explain a bit further with an example…
    Example 1 of Potential Heap Data – Assume we had a collection of numbers as follows: 1[1] 2[2] 3[3] 4[4] 5[5] 6[6]. For this to be a valid heap, the element with the value of 1 at position [1] needs to be greater than or equal to the elements at position [3] (2k + 1) and position [4] (2k + 2). So in the above example, the collection of numbers is not a valid heap.
    Example 2 of Potential Heap Data – Let's look at another collection of numbers as follows: 6[1] 5[2] 4[3] 3[4] 2[5] 1[6]. Is this a valid heap? Well… the element with the value 6 at position [1] must be greater than or equal to the elements at position [3] and position [4]. Is 6 > 4 and 6 > 3? Yes it is. Let's look at element 5 at position [2]. It must be greater than the values at [4] & [5]. Is 5 > 3 and 5 > 2? Yes it is. If you continued to examine this second collection of data you would find that it is a valid heap based on the definition of a heap.
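    The heap definition can be expressed as a small C# check (a sketch using zero-based indexing, where the children of position k sit at 2k + 1 and 2k + 2; this is not from the original post):
    static bool IsValidHeap(int[] keys)
    {
        for (int k = 0; k < keys.Length; k++)
        {
            int left = 2 * k + 1, right = 2 * k + 2;
            if (left < keys.Length && keys[k] < keys[left]) return false;    // parent must be >= left child
            if (right < keys.Length && keys[k] < keys[right]) return false;  // parent must be >= right child
        }
        return true;
    }
    Applied to the two example collections above it returns false for the first and true for the second, matching the conclusions reached by hand.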

    Read the article

  • Box2D Joints in entity components system

    - by Johnmph
    I am looking for a way to have Box2D joints in an entity component system; here is what I found:
    1) Have the joints in the Box2D/Body component as parameters: we have a joint array with an ID per joint, and the other body component carries the same joint ID, as in this example: Entity1 - Box2D/Body component { Body => (body parameters), Joints => { Joint1 => (joint parameters), other joints... } } // Joint ID = Joint1; Entity2 - Box2D/Body component { Body => (body parameters), Joints => { Joint1 => (joint parameters), other joints... } } // Same joint ID as in Entity1. There are 3 problems with this solution. The first problem is the implementation: we must manage the joint IDs to create the joints and to know which bodies they connect. The second problem is the joint parameters – where are they taken from, Entity1 or Entity2? If the parameters are the same for both there is no problem, but what if they are different? The third problem is that we can't limit the number of bodies per joint to 2 (which is mandatory – a joint can only link 2 bodies); nothing prevents creating more than 2 entities, each with a body component carrying the same joint ID, and in that case how do we know which 2 bodies to join and what to do with the other bodies?
    2) The same as the first solution, but using entity IDs instead of joint IDs, as in this example: Entity1 - Box2D/Body component { Body => (body parameters), Joints => { Entity2 => (joint parameters), other joints... } }; Entity2 - Box2D/Body component { Body => (body parameters), Joints => { Entity1 => (joint parameters), other joints... } }. With this solution we fix the first problem of the first solution, but we still have the two other problems.
    3) Have a Box2D/Joint component which is inserted in the entities containing the bodies to join (we share the same joint component between the entities with the bodies to join), as in this example: Entity1 - Box2D/Body component { Body => (body parameters) } - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity2; Entity2 - Box2D/Body component { Body => (body parameters) } - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity1. There are 2 problems with this solution. The first problem is the same as in solutions 1 and 2: we can't limit the number of bodies per joint to 2 (which is mandatory – a joint can only link 2 bodies); nothing prevents creating more than 2 entities, each with a body component and the shared joint component, and in that case how do we know which 2 bodies to join and what to do with the other bodies? The second problem is that we can have only one joint per body, because an entity component system allows only one component of a given type per entity, so we can't put two Joint components in the same entity.
    4) Have a Box2D/Joint component which is inserted in the entity containing the first body component to join and which has an entity ID parameter (that entity contains the second body to join), as in this example: Entity1 - Box2D/Body component { Body => (body parameters) } - Box2D/Joint component { Entity2 => (joint parameters) } // Entity2 is the ID of the entity containing the other body to join, the first body being in this entity; Entity2 - Box2D/Body component { Body => (body parameters) }. This has exactly the same problems as the third solution; the only difference is that we can have two different joints per entity instead of one (by putting one joint component in an entity and another joint component in another entity, each joint referencing the other entity).
    5) Have a Box2D/Joint component which takes as parameters the two entity IDs containing the bodies to join; this component can be inserted in any entity, as in this example: Entity1 - Box2D/Body component { Body => (body parameters) }; Entity2 - Box2D/Body component { Body => (body parameters) }; Entity3 - Box2D/Joint component { Joint => (Body1 => Entity1, Body2 => Entity2, other joint parameters) } // Entity1 is the ID of the entity which has the first body to join and Entity2 is the ID of the entity which has the second body to join (this component can live in any entity, that doesn't matter). With this solution we fix the problem of the body limitation per joint – we can only have two bodies per joint, which is correct. And we are not limited in the number of joints per body, because we can create another Box2D/Joint component referencing Entity1 and Entity2 and put it in a new entity. The problem with this solution is: what happens if we change the Body1 or Body2 parameter of the Joint component at runtime? We need to add code to sync the Body1/Body2 parameter changes with the real joint object.
    6) The same as solution 3 but done in a better way: have a Box2D/Joint component which is inserted in the entities containing the bodies to join – we share the same joint component between these entities – BUT the difference is that we create a new entity to link the body component with the joint component, as in this example: Entity1 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity3; Entity2 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity4; Entity3 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity1 - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity4; Entity4 - Box2D/Body component { Body => (body parameters) } // Shared, same as in Entity2 - Box2D/Joint component { Joint => (joint parameters) } // Shared, same as in Entity3. With this solution we fix the second problem of solution 3, because we can create an Entity5 which has the shared body component of Entity1 and another joint component, so we are no longer limited in the number of joints per body. But the first problem of solution 3 remains, because we can't limit the number of entities that have the shared joint component. To resolve this we can add a way to limit the number of shares of a component – for the Joint component we limit the number of shares to 2, because a joint can only join 2 bodies.
    This solution would be perfect because there is no need to add code to sync changes as in solution 5 – we are notified by the entity component system when components and entities are added to or removed from the system. But there is a design problem: how do we know easily and quickly between which bodies the joint operates? Because there is no easy way to find an entity from a component instance. My question is: which solution is the best? Is there any other, better solution? Sorry for the long text and my bad English.
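    For illustration, here is a minimal C# sketch of solution 5 as described above – a joint component that stores the IDs of the two entities owning the bodies to join. The EntityId, BodyComponent and JointComponent types are hypothetical and not part of Box2D or any particular engine binding:
    public readonly struct EntityId
    {
        public EntityId(int value) { Value = value; }
        public int Value { get; }
    }

    public sealed class BodyComponent
    {
        // body parameters would live here
    }

    public sealed class JointComponent
    {
        public JointComponent(EntityId bodyA, EntityId bodyB)
        {
            BodyA = bodyA;   // entity holding the first body to join
            BodyB = bodyB;   // entity holding the second body to join
        }

        public EntityId BodyA { get; }
        public EntityId BodyB { get; }
        // joint parameters would live here
    }
    A joint system would resolve BodyA and BodyB to their BodyComponents when the JointComponent is added, which is also the place where the runtime synchronisation issue raised in solution 5 would have to be handled.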

    Read the article

  • Is CodeFirst intended for large scale applications?

    - by RoboShop
    I've been reading up on Entity Framework, in particular EF 4.1, and following this link (http://weblogs.asp.net/scottgu/archive/2010/07/16/code-first-development-with-entity-framework-4.aspx) and its guide to Code First. I find it neat, but I was wondering: is Code First supposed to be just a solution for rapid development where you can jump right in without much planning, or is it actually intended to be used for large-scale applications?
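    For readers who have not seen it, here is a minimal hedged sketch of what an EF 4.1 Code First model looks like (class and property names are illustrative, not taken from the question or the linked post):
    using System.Collections.Generic;
    using System.Data.Entity;

    public class Post
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public virtual ICollection<Comment> Comments { get; set; }
    }

    public class Comment
    {
        public int Id { get; set; }
        public string Text { get; set; }
        public int PostId { get; set; }
        public virtual Post Post { get; set; }
    }

    public class BlogContext : DbContext
    {
        public DbSet<Post> Posts { get; set; }        // tables and relationships are inferred from these sets
        public DbSet<Comment> Comments { get; set; }
    }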

    Read the article

  • Power supply triggered to start by another power supply

    - by steampowered
    I am building a RAID array in a separate enclosure. I will be putting an empty tower case next to an existing tower computer, and this second tower case will only hold hard drives. There are many solutions for connecting the drives in the second case to the RAID card in the first case (SFF-8088 and SFF-8087 cables), but I prefer not to run power from the first case to the second case. Can I have the power supply in the first tower case trigger the power supply in the second case, based on an indication from the first tower case's power supply? Maybe run a 12 volt cable from the first case to the power supply in the second case, used only for the purpose of initiating the second power supply.

    Read the article

  • Why’s (Poignant) Guide to Ruby

    - by Ben Griswold
    You’re familiar with O’Reilly’s brilliant Head First series, right? Great. Then you know how every book begins with an explanation of the Head First teaching style, and you know the teaching format which Kathy Sierra and Bert Bates developed is based on research in cognitive science, neurobiology and educational psychology and it’s all about making learning visual and conversational and attractive and emotional and it’s highly effective. Anyway, it’s a great series and you should read every last one of the books. Moving on… I’ve been wanting to learn more about Ruby, and Why’s (Poignant) Guide to Ruby has been on my reading list for a while, and there was talk about cartoon foxes and other silliness, and I figured Why’s (Poignant) Guide to Ruby probably takes the same unorthodox teaching style as the Head First books – and that’s great – so I read the book, piecemeal, over the last couple of weeks and, well, I figured wrong. Now having read the book, here’s my take on Why’s (Poignant) Guide – it’s very creative and clever and it does a darn good job of introducing one to Ruby. If you’re interested in Ruby or simply interested, the online book is worth your time. If you’re thinking (like me) that cartoon foxes will be doing the teaching, that’s simply not the case. However, the cartoons and the random stories in the sidebar may serve a purpose. Unlike the Head First books where images and captions are used to further explain the teachings, the cartoons and stories in Why’s Guide serve as intermission and offer your brain a brief moment of rest before the next Ruby concept is explained. It’s not a bad strategy, but definitely not as effective as the Head First techniques.

    Read the article

  • 50 Years of LEDs: An Interview with Inventor Nick Holonyak [Video]

    - by Jason Fitzpatrick
    The man who powered on the first LED half a century ago is still around to talk about it; read on to watch an interview with LED inventor Nick Holonyak. The most fascinating thing about Holonyak’s journey to the invention of the LED was that he started off trying to build a laser and ended up inventing a super efficient light source: Holonyak got his PhD in 1954. In 1957, after a year at Bell Labs and a two year stint in the Army, he joined GE’s research lab in Syracuse, New York. GE was already exploring semiconductor applications and building the forerunners of modern diodes called thyristors and rectifiers. At a GE lab in Schenectady, the scientist Robert Hall was trying to build the first diode laser. Hall, Holonyak and others noticed that semiconductors emit radiation, including visible light, when electricity flows through them. Holonyak and Hall were trying to “turn them on,” and channel, focus and multiply the light. Hall was the first to succeed. He built the world’s first semiconductor laser. Without it, there would be no CD and DVD players today. “Nobody knew how to turn the semiconductor into the laser,” Holonyak says. “We arrived at the answer before anyone else.” But Hall’s laser emitted only invisible, infrared light. Holonyak spent more time in his lab, testing, cutting and polishing his hand-made semiconducting alloys. In the fall of 1962, he got first light. “People thought that alloys were rough and turgid and lumpy,” he says. “We knew damn well what happened and that we had a very powerful way of converting electrical current directly into light. We had the ultimate lamp.”

    Read the article

  • The incomplete list of impolite WP7 user feature requests

    When I first moved from the combination of a dumb phone and a separate music player, I had modest requirements: phone calls, MP3 playback, calendar notifications, contact management, email, camera and solitaire. Even asking for only these seven things, my first smart phone was as life changing as my first laptop. I could do a great deal of my work while out and about, allowing me to have a much more productive work/personal life balance. When I was first married, the word “love”...

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are in use or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:
    SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips
    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.
    DECLARE @Start DATETIME
    DECLARE @First DATETIME
    DECLARE @Second DATETIME
    DECLARE @Third DATETIME
    DECLARE @Finish DATETIME
    SET @Start = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
    SET @First = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
    SET @Second = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
    SET @Third = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
    SET @Finish = GETDATE()
    SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
         , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
         , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
         , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]
    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self explanatory and we can wrap them in some code to make things a little easier to read:
    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         …
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    which gives us the database and table names; joining to sys.indexes adds the index name:
    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         , [i].[name] AS [IndexName]
         , …
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
        AND [ddips].[object_id] = [i].[object_id]
    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips
    Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the dmv's parameters as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:
    DECLARE @db_id SMALLINT;
    DECLARE @object_id INT;
    SET @db_id = DB_ID(N'AdventureWorks_2008');
    SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
    IF @db_id IS NULL
        BEGIN PRINT N'Invalid database'; END
    ELSE IF @object_id IS NULL
        BEGIN PRINT N'Invalid object'; END
    ELSE
        BEGIN SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED'); END;
    GO
    In cases where the results of querying this dmv don't have any effect on other processes (i.e. you are simply viewing the results in the SSMS results area), it will simply be noticed if the results are not consistent with the expected results; in the case of this blog that is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level – we'll skip these, as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are:
    avg_fragmentation_in_percent – the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    fragment_count – the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    page_count – the total number of index or data pages in use.
    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would have a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1 * 8 * 1024) * (Col2 / 100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be – a detail of the smallest and largest records in the index.
    These are purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed so that new records do not need to be inserted into the middle of the index but rather are always added to the end. * – it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Wait till all CCActions have completed

    - by tGilani
    I am developing a simple cocos2d game in which I want to animate two CCSprites simultaneously, and for this purpose I simply set CCActions on the respective CCSprites as follows: [first runAction:[CCMoveTo actionWithDuration:1 position:secondPosition]]; [second runAction:[CCMoveTo actionWithDuration:1 position:firstPosition]]; Now I want to wait till the animations are complete, so I can perform the next step. How should I wait for these animations to finish? There are actually two method calls; the first one animates the objects via the code above and the second call does the other animation. I need to delay the second method call until the animations in the first are complete. (I would not like to use CCCallFunc blocks, as I want to call the second method from the same caller as the first one.)

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem – When using transactional replication to replicate data in a one way topology from a publisher to read-only subscriber(s) there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two way replication topology – between two or more servers – there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error because the replicated row being inserted has a value for the identity column which already exists at the destination database.
    Solution – There are two ways to address this situation: assign a range of identity values to each server, or work with parallel identity values. The first method requires some maintenance while the second method does not, and so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article. In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing due to a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1000000001, the second insert 1000000002 and so on, thus avoiding a conflict. Be aware that when a row is inserted the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place due to the range that the identity value belongs to. In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only, during the CREATE TABLE command of each table. So a table on server1 looks like this:
    CREATE TABLE T1 (
      c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED
     ,c2 int NOT NULL
    );
    And a table on server2 looks like this:
    CREATE TABLE T1 (
      c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED
     ,c2 int NOT NULL
    );
    When rows are inserted into these two tables the resulting identity values look like this:
    Server1: 1, 6, 11, 16, 21, 26…
    Server2: 2, 7, 12, 17, 22, 27…
    This assures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of the 5 I used in this example. Continues…

    Read the article

  • Webcast: The Power to Translate is Now Inside Oracle WebCenter Sites

    - by kellsey.ruppel
    The Power to Translate is Now Inside Oracle WebCenter Sites. You are invited to a special preview of the Lingotek Inside Oracle WebCenter Sites solution, which will be showcased at Collaborate in Las Vegas later in April. Register Now! Now it's easy to quickly translate your content directly from Oracle WebCenter Sites using the new Lingotek - Inside for Oracle WebCenter Sites integration. Your users will be able to access translated content, nominate content for translation, and even offer to translate content themselves.
    Lingotek - Inside integration: content is identified and seamlessly viewable within the Lingotek Workbench; translation is completed by machine and translation memory, community volunteers and crowdsourcing, or professional translators; translated content is automatically saved. Content within Oracle WebCenter Sites can be related, secured, routed through workflows, and published to intranets, web sites and applications.
    Oracle WebCenter Sites Web Experience Management enables marketers and business users to easily create and manage contextually relevant, social, and interactive online experiences across multiple channels on a global scale: drive customer acquisition, brand loyalty, and business success; optimize customer engagement across Web, mobile, and social channels; and manage a large-scale, multichannel global online presence with integration to enterprise applications.
    Register Now! You'll hear from the experts how this can be done. Free 30 minute webinar. Date: Tues, Apr 17th. Time: 8:00am MST, 3pm GMT and 4pm CET. Win a Kindle Fire: register before April 6th for a chance to win an Amazon Kindle Fire!
    Presenter: Rob Vandenberg, President and CEO of Lingotek, drives the vision while leading the charge to change the future of translation. Rob is a well-known technology industry veteran, and his expertise and knowledge surrounding translation, localization, and internationalization materials, software products, and web content serves as an immeasurable asset to customers' needs and requirements. Rob is a frequent industry speaker and panelist.
    Presenter: Andrew Palmer, Oracle EMEA Alliances Director, WebCenter Sites.
    System Requirements – PC-based attendees: Windows® 7, Vista, XP or 2003 Server. Macintosh®-based attendees: Mac OS® X 10.5 or newer.

    Read the article

  • Date Time Format in RUBY

    - by Madhan ayyasamy
    The following snippet is very useful when we render dates in various formats in our Ruby on Rails views.
    Format meaning:
    %a - The abbreviated weekday name (``Sun'')
    %A - The full weekday name (``Sunday'')
    %b - The abbreviated month name (``Jan'')
    %B - The full month name (``January'')
    %c - The preferred local date and time representation
    %d - Day of the month (01..31)
    %H - Hour of the day, 24-hour clock (00..23)
    %I - Hour of the day, 12-hour clock (01..12)
    %j - Day of the year (001..366)
    %m - Month of the year (01..12)
    %M - Minute of the hour (00..59)
    %p - Meridian indicator (``AM'' or ``PM'')
    %S - Second of the minute (00..60)
    %U - Week number of the current year, starting with the first Sunday as the first day of the first week (00..53)
    %W - Week number of the current year, starting with the first Monday as the first day of the first week (00..53)
    %w - Day of the week (Sunday is 0, 0..6)
    %x - Preferred representation for the date alone, no time
    %X - Preferred representation for the time alone, no date
    %y - Year without a century (00..99)
    %Y - Year with century
    %Z - Time zone name
    %% - Literal ``%'' character
    t = Time.now
    t.strftime("Printed on %m/%d/%Y")   #=> "Printed on 04/09/2003"
    t.strftime("at %I:%M%p")            #=> "at 08:56AM"
    Have a great day!

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx
    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I would like to respond to his questions with this article. There's more to say than fits into a commentary.
    Mocks and TDD – I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced" or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.
    ITDD for "To Roman Numerals" – gustav asked for the kata "To Roman Numerals". I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.
    1. Analyse – A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.
    2. Solve – The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you had better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.
    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman "digits" not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some "digits" get subtracted. Here's the list of roman "digits" with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a three step process: find all the factors, translate the factors found, and compile the roman representation. Translation is just a look-up.
    Finding, though, needs some calculation: find the highest remaining factor fitting in the value; remember it and subtract it from the value; repeat with the remaining value and the remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: test the translation, test the compilation, and test finding the factors. Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: first check if an arabic number equal to a factor is processed correctly (e.g. 1000=M); then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly; then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]); finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process.
    3. Implement – First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see I dare to test a private method. Yes. That's a white box test. But as you'll see it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible – right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor – which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level is also satisfied. Great, I can move on. Now for more than a single factor: interestingly not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple – even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished – at least functionality wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it "cleanup". First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single "digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class and final production code; test coverage as reported by NCrunch is 100%.
    Reflexion – Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called "elegant" code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution. That's the leaves of the functional decomposition tree.
Where things became fuzzy because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Private methods could naturally stay private; I did not need to make them internal or public just to be able to test them. And I was able to write focused tests for small aspects of the solution – no need to test everything through the solution root, the API. The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.   PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
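The code the article refers to is not reproduced in this excerpt, so here is a minimal C# sketch of the three-function solution described above (the names ToRoman, Find_factors, Translate and Compile follow the article's wording; the exact implementation in the original post may differ):

    using System.Collections.Generic;
    using System.Linq;

    public static class RomanConverter
    {
        // Map of arabic factors to roman "digits", largest factor first.
        static readonly (int Factor, string Digit)[] Map =
        {
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
        };

        public static string ToRoman(int number)
        {
            var factors = Find_factors(number);   // e.g. 1400 -> [1000, 400]
            var digits = Translate(factors);      // e.g. [1000, 400] -> ["M", "CD"]
            return Compile(digits);               // e.g. ["M", "CD"] -> "MCD"
        }

        // Find the highest remaining factor fitting into the value,
        // subtract it, and repeat with the remainder.
        static IEnumerable<int> Find_factors(int number)
        {
            foreach (var (factor, _) in Map)
                while (number >= factor)
                {
                    yield return factor;
                    number -= factor;
                }
        }

        static IEnumerable<string> Translate(IEnumerable<int> factors) =>
            factors.Select(f => Map.First(m => m.Factor == f).Digit);

        static string Compile(IEnumerable<string> digits) => string.Concat(digits);
    }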

    Read the article

  • Where does the term "Front End" come from?

    - by Richard JP Le Guen
    Where does the term "front-end" come from? Is there a particular presentation/talk/job-posting which is regarded as the first use of the term? Is someone credited with coining the term? The Merriam-Webster entry for "front-end" claims the first known use of the term was 1973 but it doesn't seem to provide details about that first known use. Likewise, the Wikipedia page about front and back ends is fairly low quality, and cites very few sources.

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in WebCenter Portal, you browse through the Resource Catalog (Business Dictionary) and find the functionality you need, but when you get started there are small shortcomings and you ask yourself: How can I re-use what is available out of the box? What code do I need to produce similar functions while including my new requirements? Must I write a new taskflow? These questions can often be answered with: simply do a taskflow customization of the out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process; see the image flow below for illustration. Just to clarify a few naming confusions that might occur when going through the above process: Customization Role is a function within JDeveloper that allows you to implement view and flow customizations to existing taskflows. WebCenter Portal – Spaces Taskflow Customization Framework: this technology scope does not only refer to WebCenter Spaces; it also includes WebCenter Portal/Framework. A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces). To sum up this simple procedure I would also like to help you find your way around the main topic of this post series, which focuses primarily on Content integration with WebCenter Portal: where can I find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and their page fragments. Library Reference - WebCenter Document Library Service View. Content Presenter – Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter. Taskflow: contentPresenter.xml - the Content Presenter taskflow. Taskflow: contentPresenterWizard.xml - the publishing wizard to select content, select a template and preview, including contribution. Document Manager – Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager. Taskflow: documentManager.xml - the Document Manager taskflow, which includes references to document management features including browsing, download, upload and viewing. For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • How to preseed 12.10 desktop when the ubuntu-desktop package is missing?

    - by user183394
    I have been trying to use a preseed file to do PXE booting of a 12.10 desktop. Upon the first boot, I was greeted by a terminal with a login prompt. Surprised, I checked /var/log/installer/syslog but didn't find a trace of a desktop installation. Feeling curious, I double-checked the content of the loop-mounted iso file and realized that the ubuntu-desktop package that existed up to 12.04.1 is no longer available. So the following preseed lines from the Ubuntu preseed example no longer apply:

        ################################################################################
        ### Package selection ###
        ################################################################################
        # Selected packages.
        tasksel tasksel/first multiselect ubuntu-desktop
        #tasksel tasksel/first multiselect lamp-server, print-server
        #tasksel tasksel/first multiselect kubuntu-desktop

    Given such a situation, is there something that I can specify in the preseed file to install the entire default desktop?
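    One hedged sketch of a workaround (not part of the original question; it assumes the installer can reach an archive that still carries the ubuntu-desktop metapackage) is to request the metapackage explicitly in the package-selection step:

        # Sketch only: pull the desktop metapackage even if the tasksel task is absent on the image.
        tasksel tasksel/first multiselect ubuntu-desktop
        d-i pkgsel/include string ubuntu-desktop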

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-21

    - by Bob Rhubart
    Software Architects Need Not Apply | Dustin Marx "I think there is a place for software architecture," says Dustin Marx, "but a portion of our fellow software architects have harmed the reputation of the discipline." For another angle on this subject, check out Out of the Tower, Into the Trenches from the Nov/Dec edition of Oracle Magazine. Oracle Data Integrator 11g - Faster Files | David Allan David Allan illustrates "a big step for regular file processing on the way to super-charging big data files using Hadoop." 2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in SF Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. Deadline for submissions is July 17, 2012. WLST Domain creation using dry-run | Michel Schildmeijer What to do "if you want to browse through your domain to check if settings you want to apply satisfy your requirements." Cloud opens up new vistas for service orientation at Netflix | Joe McKendrick "Many see service oriented architecture as laying the groundwork for cloud. But at one well-known company, cloud has instigated the move to SOA." How to avoid the Portlet Skin mismatch | Martin Deh Detailed how-to from WebCenter A-Team blogger Martin Deh. Internationalize WebCenter Portal - Content Presenter | Stefan Krantz Stefan Krantz explains "how to get Content Presenter and its editorials to comply with the current selected locale for the WebCenter Portal session." Oracle Public Cloud Architecture | Tyler Jewell Tyler Jewell discusses the multi-tenancy model and elasticity solution implemented by Oracle Cloud in this QCon presentation. A Distributed Access Control Architecture for Cloud Computing The authors of this InfoQ article discuss a distributed architecture based on the principles from security management and software engineering. Thought for the Day "Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do." — Donald Knuth Source: Quotes for Software Engineers

    Read the article

  • JOIN THE ORACLE Fusion Middleware Summer Camps

    - by mseika
    JOIN THE ORACLE Fusion Middleware Summer Camps For Specialized partners who are working on the following projects & opportunities, we offer these advanced summer camps: - BPM Suite 11 - ADF 11g - WebCenter Portal - WebLogic 12c - SOA Suite 11g - ADF for BPM Suite 11 - WebCenter Sites 11g All training sessions will be delivered by HQ product management and our PTS team. The sessions will take place in July in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled on a first-come, first-served basis; please pay attention to the skill requirements, the prerequisites and the follow-up! We will not accept people onto the training who do not match the criteria! Lisbon: Monday, July 9th 11:00 AM - Friday July 13th 16:00 (Lisbon time) - ADF 11g advanced training by Grant Ronald and Frank Nimphius - WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata - WebLogic 12c training by Cosmin Tudor Munich: Monday, July 16th 11:00 AM - Wednesday July 18th 16:00 (CET) - ADF for BPM Suite 11g advanced training by David Read - WebCenter Sites 11g advanced training by Product Management & PTS Cost: Free of charge; cancellation or no-show fee 2.000€. Bootcamps are limited to 20 persons, first come, first served. For details and registration please visit the Lisbon registration page & Munich registration page. Quotes from the summer camps 2011: "From zero to hero with this BPM workshop" Steven Boon, Ordina Linkedin "This is the training that prepares for real projects and POCs" Jon Petter Hjulstad, eVita – blog & twitter SOA & BPM Partner Community registration Please first login at http://partner.oracle.com and then visit: http://www.oracle.com/goto/emea/soa. If you have any questions please contact the Oracle Partner Business Center. If you have questions please feel free to contact us any time! Best regards Jürgen Kress, Oracle EMEA SOA & BPM Partner Adoption EMEA, Tel. +49 89 1430 1479, E-Mail: [email protected]

    Read the article

  • D3D11 how to simulate multiple depth channels

    - by Nock
    Here's what I'd like to achieve: render a first pass of objects in my scene using standard depth comparison, then render another pass of objects in the same scene, but with the following rules: a pixel of the 2nd pass always overrides the first pass (no depth compare between them), and depth comparison is used between pixels written by the second pass. In English: I want depth comparison inside each pass, but I always want the second-pass pixels to override the first-pass ones. Some things I've thought of: I tried to think about using the stencil buffer to solve this, but I couldn't find a way. I know I could render the second pass into a separate target and then composite the result into the first, but I'd like to avoid that. I could use two separate depth buffers, one dedicated to each pass. (I never tried, but I figure it's possible to switch the depth buffer of a render target "on the fly".) Any idea of the best solution? Thanks

    Read the article

  • Applying Interactive Sorting to Multiple Columns in Reporting Services

    - by smisner
    A nice feature that appeared first in SQL Server 2008 is the ability to allow the user to click a column header to sort that column. It defaults to an ascending sort first, but you can click the column again to switch to a descending sort. You can learn more about interactive sorts in general at the Adding Interactive Sort to a Data Region in Books Online. Not mentioned in the article is how to apply interactive sorting to multiple columns, hence the reason for this post! Let’s say that I have a simple table like this: To enable interactive sorting, I open the Text Box properties for each of the column headers – the ones in the top row. Here’s an example of how I set up basic interactive sorting: Now when I preview the report, I see icons appear in each text box on the header row to indicate that interactive sorting is enabled. The initial sort order that displays when you preview the report depends on how you design the report. In this case, the report sorts by Sales Territory Group first, and then by Calendar Year. Interactive sorting overrides the report design. So let’s say that I want to sort first by Calendar Year, and then by Sales Territory Group. To do this, I click the arrow to the right of Calendar Year, and then, while pressing the Shift key, I click the arrow to the right of Sales Territory Group twice (once for ascending order and then a second time for descending order). Now my report looks like this: This technique only seems to work when you have a minimum of three columns configured with interactive sorting. If I remove the property from one of the columns in the above example, and try to use the interactive sorting on the remaining two columns, I can sort only the first column. The sort on the second column gets ignored. I don’t know if that’s by design or a bug, but I do know that’s what I’m experiencing when I try it out!

    Read the article

  • Alternate method to dependent, nested if statements to check multiple states

    - by octopusgrabbus
    Is there an easier way to process multiple true/false states than using nested if statements? I think there is, and it would be to create a sequence of states, and then use a function like when to determine if all states were true, and drop out if not. I am asking the question to make sure there is not a preferred Clojure way to do this. Here is the background of my problem: I have an application that depends on quite a few input files. The application depends on .csv data reports; column headers for each report (.csv files also), so each sequence in the sequence of sequences can be zipped together with its columns for the purposes of creating a smaller sequence; and column files for output data. I use the following functions to find out if a file is present:

        (defn kind [filename]
          (let [f (File. filename)]
            (cond
              (.isFile f) "file"
              (.isDirectory f) "directory"
              (.exists f) "other"
              :else "(cannot be found)")))

        (defn look-for [filename expected-type]
          (let [find-status (kind-stat filename expected-type)]
            find-status))

    And here are the first few lines of a multiple if which looks ugly and is hard to maintain:

        (defn extract-re-values
          "Plain old-fashioned sub-routine to process real-estate values / 3rd Q re bills extract."
          [opts]
          (if (= (utl/look-for (:ifm1 opts) "f") 0)          ; got re columns?
            (if (= (utl/look-for (:ifn1 opts) "f") 0)        ; got re data?
              (if (= (utl/look-for (:ifm3 opts) "f") 0)      ; got re values output columns?
                (if (= (utl/look-for (:ifm4 opts) "f") 0)    ; got re_mixed_use_ratio columns?
                  (let [re-in-col-nams (first (utl/fetch-csv-data (:ifm1 opts)))
                        re-in-data (utl/fetch-csv-data (:ifn1 opts))
                        re-val-cols-out (first (utl/fetch-csv-data (:ifm3 opts)))
                        mu-val-cols-out (first (utl/fetch-csv-data (:ifm4 opts)))
                        chk-results (utl/chk-seq-len re-in-col-nams (first re-in-data) re-rec-count)]

    I am not looking for a discussion of the best way, but what is in Clojure that facilitates solving a problem like this.

    Read the article

  • ArchBeat Link-o-Rama for 2012-05-30

    - by Bob Rhubart
    Roll Your Own Solaris Blogroll | Larry Wake blogs.oracle.com Larry Wake shares an easy way to find bloggers who write about various aspects of Oracle Solaris. Updating metadata in a WebCenter Content Presenter template | Yannick Ongena yonaweb.be Oracle ACE Yannick Ongena explains "how we can add a link to the content presenter that will open a popup where we can update the metadata of the content." Enable Content editing of Iterative components | Stefan Krantz blogs.oracle.com "The key aspect of this architectural solution," explains Stefan Krantz, "is to support a data type that allows for grouping of editable elements like Plain text, Images and Rich Text, each group of elements must support a infinite amount of grouped repetitions (Rows)." Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12 www.oracle.com These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco. ODTUG Kscope12 - June 24-28 - San Antonio, TX kscope12.com June 24-28, 2012 San Antonio, TX Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best! Thought for the Day "CIOs and the IT department cannot stop disruptive technology changes any more than the business managers can. Business managers have to, and are, embracing the new technologies because if they don’t, they, and their business units, will become irrelevant and disappear under the competitive conditions of the market." — Andy Mulholland Source: Capgemini CTO Blog

    Read the article

  • BizTalk: Instance Subscription: Details

    - by Leonid Ganeline
    It has interesting behavior and it is not always what we are expecting. An orchestration can be enlisted with many subscriptions. In other words, it can have several Receive shapes. Usually the first Receive uses the Activation subscription, but the other Receives create Instance subscriptions. [See "Publish and Subscribe Architecture" in MSDN] Here is a sample process. This orchestration has two Receives. It is a typical Sequential Convoy. [See "BizTalk Server 2004 Convoy Deep Dive" in MSDN by Stephen W. Thomas]. Let's get the experiment started.   There are three typical scenarios. First scenario: everything is OK. The Activation subscription for the Sample message is created when the SampleProcess orchestration is enlisted. The Instance subscription is created only when the SampleProcess orchestration instance is started, and it is removed when the orchestration instance is ended. So far so good; the Sample_2 message was delivered exactly in this time interval and was consumed. Second scenario: no consumers. Three Sample_2 messages were delivered. One was delivered before the SampleProcess was started and before the instance subscription was created. The second message was delivered in the correct time interval. The third one was delivered after the SampleProcess orchestration was ended and the instance subscription was removed. Note: ·         It was not the first Sample_2 that was consumed. It was first in the queue, but it was not waiting; it was suspended when it was delivered to the Message Box and didn't have any subscribers at that moment. The first and the last Sample_2 messages were Suspended (Nonresumable) in the Message Box. For each of these messages we get two (!) service instances associated with the suspended message. One service instance has the ServiceClass of Messaging, and we can see its Error Description:   The second service instance has the ServiceClass of RoutingFailureReport, and we can see its Error Description:   Third scenario: something goes wrong. Two Sample_2 messages were delivered. Both were delivered in the same interval, when the SampleProcess orchestration was working and the instance subscription was created and working too. The first Sample_2 was consumed. The second Sample_2 has the subscription, but the subscriber, the SampleProcess orchestration, will not consume it. After the SampleProcess orchestration is ended (and only after! I will discuss this in the next article.), it is suspended (Nonresumable). This time only one service instance associated with this kind of scenario is suspended. This service instance has the ServiceClass of Orchestration, and we can see its Error Description: In the Message tab we will see the Sample_2 message in the Suspended (Resumable) status. Note: ·         This behavior looks ambiguous. We see here that the orchestration consumes the extra message(s) and gets suspended together with those extra messages. These messages are not consumed in terms of being "processed by the orchestration", but they are consumed in terms of being "delivered to the subscriber". The Receive shape in the orchestration has not received these extra messages, but these messages are routed to the orchestration.     Unified Sequential Convoy  Now one more scenario. It is the unified sequential convoy. That means the activation subscription is for the same message type as the instance subscription. The Sample_2 message is now the Sample message. For simplicity the SampleProcess orchestration consumes only two Sample messages. 
Usually the orchestration consumes a lot of messages inside a loop, but now it is only two of them. The first message starts the orchestration, the second message goes inside this orchestration. Then the next pair of messages follows, and so on. But if the input messages follow at shorter intervals we have a problem: we lose messages in an unpredictable manner. Note: ·         Maybe the better behavior would be if the orchestration removed the instance subscription after the message is consumed, not at the end of the orchestration. Right now it is a "feature" of the BizTalk subscription mechanism.

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: Cloud computing and mobile application development will be the hottest trends for 2011. I hope that Windows Azure will be very hot in 2011 and that a lot of cloud computing adoption will happen with Windows Azure. Web application scalability will be the big challenge for architects in the next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack. Windows Azure: Windows Azure will be one of the hottest technologies of 2011. Adoption of the cloud and Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. No need to invest upfront on expensive infrastructure. Pay only for what you use, scale up when you need capacity and pull it back when you don't. We handle all the patches and maintenance — all in a secure environment with over 99.9% uptime. Silverlight 5: Silverlight is becoming a common technology for a variety of development platforms. You can develop Silverlight applications for the web, the desktop and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year with new capabilities and a lot of new features. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011. Windows Phone 7 Development Tools: Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar's blog post and the MSDN documentation available from here. EF Code First: I am a big fan of Entity Framework's Code First approach and hope that the Code First approach will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications. I hope that DDD fans will love the EF Code First approach. Entity Framework 4 now supports three types of approaches, and these will attract different types of developers. ASP.NET MVC 3: ASP.NET MVC 3 will be the hottest technology of the Microsoft web stack next year. ASP.NET developers will widely move to the ASP.NET MVC Framework from WebForms development. The new Razor view engine is great and it will increase the adoption of ASP.NET MVC 3. Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, generation of clean HTML and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First. RavenDB: NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database in the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. RavenDB is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. 
I have written a few articles on RavenDB, and you can read them here. Managed Extensibility Framework (MEF): Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution for the runtime extensibility problem. I hope that .NET developers will adopt MEF more in the coming year for their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post MEF or Managed Extensibility Framework – Creating a Zoo and Animals
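As a quick illustration of the Code First style mentioned above, a minimal sketch might look like the following (the Blog/Post classes are made up for the example; the EF 4.1 DbContext/DbSet API is the part that matters):

    using System.Collections.Generic;
    using System.Data.Entity;   // EF Code First (DbContext API)

    // Plain POCO domain classes - no base classes, no designer files.
    public class Blog
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public virtual ICollection<Post> Posts { get; set; }
    }

    public class Post
    {
        public int Id { get; set; }
        public string Content { get; set; }
        public virtual Blog Blog { get; set; }
    }

    // The context maps the domain model to a database by convention;
    // the schema is derived from the classes, not the other way around.
    public class BlogContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }

Typical usage is simply new BlogContext(), adding a Blog to Blogs, and calling SaveChanges(); by default Code First creates the database on first use.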

    Read the article

  • When does a Tumbling Window Start in StreamInsight

    Whilst getting some courseware ready I was playing around writing some code and I decided to very simply show when a window starts and ends based on you asking for a TumblingWindow of n time units in StreamInsight. I thought this was going to be a two-second thing, but what I found was something I haven't yet found documented anywhere until now. All this code is written in C# and will slot straight into my favourite quick-win dev tool, LINQPad. Let's first create a sample dataset:

        var EnumerableCollection = new []
        {
            new {id = 1,  StartTime = DateTime.Parse("2010-10-01 12:00:00 PM").ToLocalTime()},
            new {id = 2,  StartTime = DateTime.Parse("2010-10-01 12:20:00 PM").ToLocalTime()},
            new {id = 3,  StartTime = DateTime.Parse("2010-10-01 12:30:00 PM").ToLocalTime()},
            new {id = 4,  StartTime = DateTime.Parse("2010-10-01 12:40:00 PM").ToLocalTime()},
            new {id = 5,  StartTime = DateTime.Parse("2010-10-01 12:50:00 PM").ToLocalTime()},
            new {id = 6,  StartTime = DateTime.Parse("2010-10-01 01:00:00 PM").ToLocalTime()},
            new {id = 7,  StartTime = DateTime.Parse("2010-10-01 01:10:00 PM").ToLocalTime()},
            new {id = 8,  StartTime = DateTime.Parse("2010-10-01 02:00:00 PM").ToLocalTime()},
            new {id = 9,  StartTime = DateTime.Parse("2010-10-01 03:20:00 PM").ToLocalTime()},
            new {id = 10, StartTime = DateTime.Parse("2010-10-01 03:30:00 PM").ToLocalTime()},
            new {id = 11, StartTime = DateTime.Parse("2010-10-01 04:40:00 PM").ToLocalTime()},
            new {id = 12, StartTime = DateTime.Parse("2010-10-01 04:50:00 PM").ToLocalTime()},
            new {id = 13, StartTime = DateTime.Parse("2010-10-01 05:00:00 PM").ToLocalTime()},
            new {id = 14, StartTime = DateTime.Parse("2010-10-01 05:10:00 PM").ToLocalTime()}
        };

    Now let's create a stream of point events:

        var inputStream = EnumerableCollection
            .ToPointStream(Application,
                           evt => PointEvent.CreateInsert(evt.StartTime, evt),
                           AdvanceTimeSettings.StrictlyIncreasingStartTime);

    Now we can create our windows over the stream. The first window we will create is a one-hour tumbling window. We'll count the events in the window, but what we do here is not the point; the point is our window edges.

        var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
                             select new {CountOfEntries = win.Count()};

    Now we can have a look at what we get. I am only going to show the first non-CTI event, as that is enough to demonstrate what is going on.

        windowedStream.ToIntervalEnumerable()
                      .First(e => e.EventKind == EventKind.Insert)
                      .Dump("First Row from Windowed Stream");

    The results are below:

        EventKind: Insert
        StartTime: 01/10/2010 12:00
        EndTime:   01/10/2010 13:00
        Payload:   { CountOfEntries = 5 }

    Now this makes sense and is quite often the width of window specified in examples. So what happens if I change the windowing code now to

        var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(5), HoppingWindowOutputPolicy.ClipToWindowEnd)
                             select new {CountOfEntries = win.Count()};

    Now where does your window start? What about

        var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromMinutes(13), HoppingWindowOutputPolicy.ClipToWindowEnd)
                             select new {CountOfEntries = win.Count()};

    Well, for the first example your window will start at 01/10/2010 10:00:00, and for the second example it will start at 01/10/2010 11:55:00. Surprised? Here is the reason why, and thanks to the StreamInsight team for listening. Windows start at TimeSpan.MinValue. 
Windows are then created from that point onwards, each of the size you specified in your code. If a window contains no events, it is not produced by the engine to the output. This is why window start times can be before the first event is created.
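To make that alignment rule concrete, here is a small stand-alone sketch (plain C#, not StreamInsight code; it simply assumes every window boundary is a whole multiple of the window size counted from DateTime.MinValue) that reproduces the window starts reported above:

    using System;

    class WindowStartDemo
    {
        // Assumption: window boundaries are exact multiples of the window size
        // counted from DateTime.MinValue (0001-01-01 00:00:00).
        static DateTime WindowStartFor(DateTime eventTime, TimeSpan windowSize)
        {
            long windowIndex = eventTime.Ticks / windowSize.Ticks; // integer division truncates
            return new DateTime(windowIndex * windowSize.Ticks);
        }

        static void Main()
        {
            var firstEvent = new DateTime(2010, 10, 1, 12, 0, 0);
            Console.WriteLine(WindowStartFor(firstEvent, TimeSpan.FromHours(1)));    // 2010-10-01 12:00
            Console.WriteLine(WindowStartFor(firstEvent, TimeSpan.FromHours(5)));    // 2010-10-01 10:00
            Console.WriteLine(WindowStartFor(firstEvent, TimeSpan.FromMinutes(13))); // 2010-10-01 11:55
        }
    }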

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >