Search Results


  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that I continuously evaluate both my effectiveness and my efficiency.

    EFFECTIVENESS
    The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate, or am I still operating at the previous level)? Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role, change of team/org/etc., change of direction of team/org/etc., re-org, new hires on the team that take on some of the work I did, personal promotion, change of manager... and if none of those events has occurred since the last annual review, I ask myself those questions at each annual review anyway.

    If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment; don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way! That's my view on effectiveness. You have to fix that before moving on to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time; if your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it.

    EFFICIENCY
    The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change upon two events: the arrival of new colleagues (who are potentially better than your older ones), and a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you... All you really need to do here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at.

    If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about the steps they take, to see if you can optimize some steps away, trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take on so that you can have repeatable success. That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible, so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: you spend more time than necessary dealing with email, and you still end up with dropped email threads and slower response times than others. How can you improve? Talk to someone who you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you.

    Parting Thoughts
    Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess which things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of "Good Enough". My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work, it is for one of two reasons: they are investing energy in stuff that they enjoy doing but which the business regards as having a lower priority than a lot of other things on their plate, or they are completing tasks to a higher level of quality than required (due to personal pride), missing the big picture, which almost always mandates completing three tasks at good-enough quality rather than knocking only one of them out of the park while the other two come in late or not at all. There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience - and how to publish once-written information in both blogs. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic.

    My two blogs
    I have two working blogs: this one here, and a technology and programming blog for the local market. My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career, and they have very good web statistics too.

    My local blog
    My local blog is about programming, the web and technology. It has a much wider target audience than this blog here has. For example, on my local blog I also write about local events, cool new concept phones, different websites providing interesting services, etc. But local guys can also find there my postings about how to solve one or another programming problem, and postings about the Microsoft technologies I am playing with. So far my local blog has a lot of readers for such a small country as Estonia. This blog has brought me a lot of cool contacts, and I have had a lot of interesting discussions there about different technical topics.

    Why I started this blog
    Living in a small country is different from living in a big country. In a small country you have fewer people, and therefore a smaller audience, so you have to cover more than one technical topic to find enough readers. At the same time you are still interested in your main topics, and you want to reach more people who share the same interests as you. Practically, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with it? Every second I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.

    Defining target audiences
    One thing you should always do when having more than one blog is define your target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it the simple way and just as effectively. Here is how I defined the target audiences of my blogs:
    local blog - the reader of my local blog is an IT professional, software developer, technology innovator or just some guy who is interested in technology;
    this blog - the reader of this blog is an experienced professional software developer who works on Microsoft technologies, or a software developer who is open-minded and open to new technologies and interesting solutions to development problems.
    You can see how the local blog - due to a small market with fewer people - has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and specifically at software development. On the practical side these decisions were also made well, I think, because it is very hard to build up a popular general IT blog. At the global level it is better to target a specific niche and find readers who are professionals in your favorite topics.

    Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.

    Publishing content to different blogs
    My local blog and this blog have some overlapping topics like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I also publish the same thing on my local blog? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog as well. But if you have different audiences, then you may need to modify your posting before publishing it. The questions you have to answer are:
    is the target audience interested in this topic?
    is the target audience expecting a more specific and deeper handling of this topic, or are they expecting a more general handling of it?
    is the problem you are discussing relevant for the target audience or not?
    You have to answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

    Cross-posting and referencing
    It is tempting to save the time that preparing a blog post takes, and when you are done with a posting on one blog it may seem like a good idea to make a short posting on the other blog and add a reference to the first one, where the topic is discussed at greater length. Well, don't do it - all your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific handling of a topic. In this case, feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, Estonian is a complex language, and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits both blogs well, I will write my posting on one blog and then answer the previous three questions before posting the same thing on the other blog.

    Conclusion
    Growing out of your local market is not anything mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high quality content to your audience. It is something you will learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time, you will get over all the obstacles pretty soon. Just don't forget one thing - content is king, and your readers expect high quality from you.

    Read the article

  • System.Timers.Timer leaking due to "direct delegate roots"

    - by alimbada
    Apologies for the rather verbose and long-winded post, but this problem has been perplexing me for a few weeks now, so I'm posting as much information as I can in order to get it resolved quickly.

    We have a WPF UserControl which is being loaded by a 3rd party app. The 3rd party app is a presentation application which loads and unloads controls on a schedule defined by an XML file downloaded from a server. Our control, when it is loaded into the application, makes a request to a web service and uses the data from the response to display some information. We're using an MVVM architecture for the control. The entry point of the control is a method implementing an interface exposed by the main app, and this is where the control's configuration is set up. This is also where I set the DataContext of our control to our MainViewModel. The MainViewModel has two other view models as properties, and the main UserControl has two child controls. Depending on the data received from the web service, the main UserControl decides which child control to display, e.g. if there is an HTTP error or the data received is not valid, then display child control A, otherwise display child control B. As you'd expect, these two child controls bind to two separate view models, each of which is a property of MainViewModel.

    Now child control B (which is displayed when the data is valid) has a RefreshService property/field. RefreshService is an object that is responsible for updating the model in a number of ways and contains four System.Timers.Timers: a _modelRefreshTimer, a _viewRefreshTimer, a _pageSwitchTimer, and a _retryFeedRetrievalOnErrorTimer (the last is only enabled when something goes wrong with retrieving data). I should mention at this point that there are two types of data; the first changes every minute, the second changes every few hours. The control's configuration decides which type we are using/displaying. If the data is of the first type, then we update the model quite frequently (every 30 seconds) using the _modelRefreshTimer's events. If the data is of the second type, then we update the model after a longer interval. However, the view still needs to be refreshed every 30 seconds, as stale data needs to be removed from it (hence the _viewRefreshTimer). The control also paginates the data so we can see more than we can fit in the display area. This works by breaking the data up into Lists and switching the CurrentPage property (which is a List) of the view model to the right List. This is done by handling the _pageSwitchTimer's Elapsed event.

    Now the problem: the control, when removed from the visual tree, doesn't dispose of its timers. This was first noticed when we started getting an unusually high number of requests on the web server end very soon after deploying this control, and found that requests were being made at least once a second! We found that the timers were living on, not stopping hours after the control had been removed from view, and that the more timers there were, the more requests piled up at the web server. My first solution was to implement IDisposable for the RefreshService and do some clean-up when the control's Unloaded event was fired. Within RefreshService's Dispose method I've set Enabled to false for all the timers, then used the Stop() method on all of them. I've then called Dispose() too and set them to null. None of this worked.
    After some reading around I found that event handlers may hold references to Timers and prevent them from being disposed and collected. After some more reading and researching I found that the best way around this was to use the Weak Event Pattern. Using this blog and this blog I've managed to work around the shortcomings in the Weak Event Pattern. However, none of this solves the problem. Timers are still not being disabled or stopped (let alone disposed), and web requests continue to build up. Mem Profiler tells me that "This type has N instances that are directly rooted by a delegate. This can indicate the delegate has not been properly removed" (where N is the number of instances). As far as I can tell, though, all listeners of the Elapsed event for the timers are being removed during the clean-up, so I can't understand why the timers continue to run. Thanks for reading. Eagerly awaiting your suggestions/comments/solutions (if you got this far :-p)

    Read the article

  • How to Plug a Small Hole in NetBeans JSF (Join Table) Code Generation

    - by MarkH
    I was asked recently to provide an assist with designing and building a small-but-vital application that had at its heart some basic CRUD (Create, Read, Update, & Delete) functionality, built upon an Oracle database, to be accessible from various locations. Working from the stated requirements, I fleshed out the basic application and database designs and, once validated, set out to complete the first iteration for review. Using SQL Developer, I created the requisite tables, indices, and sequences for our first run. One of the tables was a many-to-many join table with three fields: one a primary key for that table, the other two being primary keys for the other tables, represented as foreign keys in the join table. Here is a simplified example of the trio: two base tables, each with its own Id primary key, and a join table whose rows carry a primary key of their own plus the two base tables' ids as foreign keys.

    Once the database was in decent shape, I fired up NetBeans to let it have first shot at the code. NetBeans does a great job of generating a mountain of essential code, saving developers what must be millions of hours of effort each year by building a basic foundation with a few clicks and keystrokes. Lest you think it (or any tool) can do everything for you, however, occasionally something tosses a paper clip into the delicate machinery and makes you open things up to fix them. Join tables apparently qualify. :-)

    In the case above, the entity class generated for the join table (New Entity Classes from Database) included an embedded object consisting solely of the two foreign key fields as attributes, in addition to an object referencing each one of the "component" tables. The Create page generated (New JSF Pages from Entity Classes) worked well to a point, but when trying to save, we were greeted with an error: Transaction aborted. Hmm. A quick debugger session later and I'd identified the issue: when trying to persist the new join-table object, the embedded "foreign-keys-only" object still had null values for its two (required value) attributes...even though the embedded table objects had populated key attributes.

    Here's the simple fix: in the join-table controller class, find the public String create() method. It will look something like this:

        public String create() {
            try {
                getFacade().create(current);
                JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));
                return prepareCreate();
            } catch (Exception e) {
                JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
                return null;
            }
        }

    To restore balance to the force, modify the create() method as follows (the added lines are flagged by the comment):

        public String create() {
            try {
                // Add the next two lines to resolve:
                current.getJoinEntityPK().setTbl1id(current.getTbl1().getId().toBigInteger());
                current.getJoinEntityPK().setTbl2id(current.getTbl2().getId().toBigInteger());
                getFacade().create(current);
                JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));
                return prepareCreate();
            } catch (Exception e) {
                JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
                return null;
            }
        }

    I'll be refactoring this code shortly, but for now, it works. Iteration one is complete and being reviewed, and we've met the milestone. Here's to happy endings (and customers)!

    All the best,
    Mark

    Read the article

  • CoreData: Same predicate (IN) returns different fetched results after a Save operation

    - by Jason Lee
    I have the code below:

        NSArray *existedTasks = [[TaskBizDB sharedInstance] fetchTasksWatchedByMeOfProject:projectId];
        [context save:&error];
        existedTasks = [[TaskBizDB sharedInstance] fetchTasksWatchedByMeOfProject:projectId];
        NSArray *allTasks = [[TaskBizDB sharedInstance] fetchTasksOfProject:projectId];

    The first line returns two objects; the second line saves the context; the third line returns just one object, which is contained in the 'two objects' above; and the last line returns 6 objects, containing the 'two objects' returned at the first line. The fetch interface works like below:

        WXModel *model = [WXModel modelWithEntity:NSStringFromClass([WQPKTeamTask class])];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(%@ IN personWatchers) AND (projectId == %d)", currentLoginUser, projectId];
        [model setPredicate:predicate];
        NSArray *fetchedTasks = [model fetch];
        if (fetchedTasks.count == 0)
            return nil;
        return fetchedTasks;

    What confuses me is: with the same fetch request, why are different results returned just after a save?

    Here comes more detail. The 'two objects' returned at the first line are:

        <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: {
            projectId = 372004;
            taskId = 338001;
            personWatchers = ("0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>");
        })

        <WQPKTeamTask: 0xf3f6130> (entity: WQPKTeamTask; id: 0xf3cb8d0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p11> ; data: {
            projectId = 372004;
            taskId = 340006;
            personWatchers = ("0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>");
        })

    And the only object returned at the third line is:

        <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: {
            projectId = 372004;
            taskId = 338001;
            personWatchers = ("0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>");
        })

    Printing the description of allTasks:

        <_PFArray 0xf30b9a0>(
            <WQPKTeamTask: 0xf3ab9d0> (entity: WQPKTeamTask; id: 0xf3cda40 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p6> ; data: <fault>),
            <WQPKTeamTask: 0xf315720> (entity: WQPKTeamTask; id: 0xf3c23a0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p7> ; data: <fault>),
            <WQPKTeamTask: 0xf3a1ed0> (entity: WQPKTeamTask; id: 0xf3cda30 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p8> ; data: <fault>),
            <WQPKTeamTask: 0x1b92fcc0> (entity: WQPKTeamTask; id: 0x1b9300f0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p9> ; data: {
                projectId = 372004;
                taskId = 338001;
                personWatchers = ("0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>");
            }),
            <WQPKTeamTask: 0xf325e50> (entity: WQPKTeamTask; id: 0xf343820 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p10> ; data: <fault>),
            <WQPKTeamTask: 0xf3f6130> (entity: WQPKTeamTask; id: 0xf3cb8d0 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WQPKTeamTask/p11> ; data: {
                projectId = 372004;
                taskId = 340006;
                personWatchers = ("0xf0bf440 <x-coredata://CFFD3F8B-E613-4DE8-85AA-4D6DD08E88C5/WWPerson/p1>");
            })
        )

    UPDATE 1
    If I call the same interface fetchTasksWatchedByMeOfProject: inside the NSFetchedResultsController delegate callback:

        #pragma mark - NSFetchedResultsController Delegate
        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {

    I will get the 'two objects' as well.

    UPDATE 2
    I've tried:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(ANY personWatchers == %@) AND (projectId == %d)", currentLoginUser, projectId];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(ANY personWatchers.personId == %@) AND (projectId == %d)", currentLoginUserId, projectId];

    Still the same result.

    UPDATE 3
    I've checked the save:&error call; error is nil.

    Read the article

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of an SQL Consultant is very interesting, as always. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book Professional SQL Server 2008 Internals and Troubleshooting (see my SQLAuthority book review). After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week, I delivered the Extended Events session twice, and the attendees really liked it. They really think Extended Events is one of the most powerful tools available.

    Extended Events can do many things. I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing a long theory, I am going to write a very quick script for Extended Events. This event session captures all the longest running queries since the moment the event session was started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method to collect the necessary information for troubleshooting. There are many targets where you can store the information, including the XML file target, which I really like. In the following event session, we are writing the details of the event to two locations: 1) the ring buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do.

        -- Extended Event for finding *long running query*
        IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LongRunningQuery')
        DROP EVENT SESSION LongRunningQuery ON SERVER
        GO
        -- Create Event
        CREATE EVENT SESSION LongRunningQuery ON SERVER
        -- Add event to capture event
        ADD EVENT sqlserver.sql_statement_completed
        (
        -- Add action - event property
        ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
        -- Predicate - time 1000 millisecond
        WHERE sqlserver.sql_statement_completed.duration > 1000
        )
        -- Add target for capturing the data - XML File
        ADD TARGET package0.asynchronous_file_target(
        SET filename='c:\LongRunningQuery.xet', metadatafile='c:\LongRunningQuery.xem'),
        -- Add target for capturing the data - Ring Buffer
        ADD TARGET package0.ring_buffer
        (SET max_memory = 4096)
        WITH (max_dispatch_latency = 1 seconds)
        GO
        -- Enable Event
        ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=START
        GO
        -- Run long query (longer than 1000 ms)
        SELECT *
        FROM AdventureWorks.Sales.SalesOrderDetail
        ORDER BY UnitPriceDiscount DESC
        GO
        -- Stop the event
        ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=STOP
        GO
        -- Read the data from Ring Buffer
        SELECT CAST(dt.target_data AS XML) AS xmlLockData
        FROM sys.dm_xe_session_targets dt
        JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address
        JOIN sys.server_event_sessions ss ON ds.Name = ss.Name
        WHERE dt.target_name = 'ring_buffer'
        AND ds.Name = 'LongRunningQuery'
        GO
        -- Read the data from XML File
        SELECT
        event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID,
        event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID,
        event_data_XML.value('(event/data[3])[1]','INT') AS object_type,
        event_data_XML.value('(event/data[4])[1]','INT') AS cpu,
        event_data_XML.value('(event/data[5])[1]','INT') AS duration,
        event_data_XML.value('(event/data[6])[1]','INT') AS reads,
        event_data_XML.value('(event/data[7])[1]','INT') AS writes,
        event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text,
        event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack,
        CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML).value('(frame/@handle)[1]','VARCHAR(50)') AS handle
        FROM
        (SELECT CAST(event_data AS XML) event_data_XML, *
        FROM sys.fn_xe_file_target_read_file('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)) T
        GO
        -- Clean up. Drop the event
        DROP EVENT SESSION LongRunningQuery ON SERVER
        GO

    Just run the script above; afterwards you will get a result set containing the queries that ran for over 1000 ms. In this example I used the XML file target, which does not reset when the SQL Server service or the computer restarts (if you use the ring buffer DMV instead, it resets when the SQL Server service restarts). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated with this feature, so I'm planning to acquire more knowledge about it so I can determine its other usages.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Control to Control Binding in WPF/Silverlight

    - by psheriff
    In the past, if you had two controls that needed to work together, you had to write code. For example, if you wanted a label control to display any text a user typed into a text box, you wrote code to do that. If you wanted to turn off a set of controls when a user checked a check box, you also had to write code. However, with XAML, these operations become very easy to do.

    Bind Text Box to Text Block
    As a basic example of this functionality, let's bind a TextBlock control to a TextBox. When the user types into the TextBox, the value typed in will show up in the TextBlock control as well. To try this out, create a new Silverlight or WPF application in Visual Studio. On the main window or user control, type in the following XAML:

        <StackPanel>
          <TextBox Margin="10" x:Name="txtData" />
          <TextBlock Margin="10"
                     Text="{Binding ElementName=txtData,
                                    Path=Text}" />
        </StackPanel>

    Now run the application and type into the TextBox control. As you type, you will see the data you type also appear in the TextBlock control. The {Binding} markup extension is responsible for this behavior. You set the ElementName attribute of the Binding markup to the name of the control that you wish to bind to. You then set the Path attribute to the name of the property of that control you wish to bind to. That's all there is to it!

    Bind the IsEnabled Property
    Now let's apply this concept to something that you might use in a business application. Consider the following two screen shots. The idea is that if the Add Benefits check box is un-checked, then the IsEnabled property of the three "Benefits" check boxes will be set to false (Figure 1). If the Add Benefits check box is checked, then the IsEnabled property of the "Benefits" check boxes will be set to true (Figure 2).

    Figure 1: Uncheck Add Benefits and the Benefits will be disabled.
    Figure 2: Check Add Benefits and the Benefits will be enabled.

    To accomplish this, you write XAML to bind each of the check boxes in the "Benefits To Add" section to the check box named chkBenefits. Below is a fragment of the XAML code that would be used:

        <CheckBox x:Name="chkBenefits" />

        <CheckBox Content="401k"
                  IsEnabled="{Binding ElementName=chkBenefits,
                                      Path=IsChecked}" />

    Since the IsEnabled property is a boolean type and the IsChecked property is also a boolean type, you can bind these two together. If they were different types, or if you needed to set the IsEnabled property to the inverse of the IsChecked property, then you would need to use a ValueConverter class.

    Summary
    Once you understand the basics of data binding in XAML, you can eliminate a lot of code. Connecting controls together is as easy as just setting the ElementName and Path properties of the Binding markup extension.

    NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "SL - Basic Control Binding" from the drop-down.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".

    Read the article

  • Automatic Properties, Collection Initializers, and Implicit Line Continuation support with VB 2010

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    This is the eighteenth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. A few days ago I blogged about two new language features coming with C# 4.0: optional parameters and named arguments. Today I'm going to post about a few of my favorite new features being added to VB with VS 2010: Auto-Implemented Properties, Collection Initializers, and Implicit Line Continuation support.

    Auto-Implemented Properties
    Prior to VB 2010, implementing properties within a class using VB required you to explicitly declare the property as well as implement a backing field variable to store its value. For example, implementing a "Person" class in VB 2008 that exposes two public properties - "Name" and "Age" - requires a property declaration, Get/Set blocks, and a backing field for each property. While explicitly declaring properties like that provides maximum flexibility, I've always found writing this type of boiler-plate get/set code tedious when you are simply storing/retrieving the value from a field. You can use VS code snippets to help automate the generation of it - but it still generates a lot of code that feels redundant. C# 2008 introduced a cool new feature called automatic properties that helps cut down the code quite a bit for the common case where properties are simply backed by a field. VB 2010 also now supports this same feature: using auto-implemented properties, we can now implement our Person class with just a single-line Property declaration for each property. When you declare an auto-implemented property, the VB compiler automatically creates a private field to store the property value as well as generates the associated Get/Set methods for you. The result is code that is much more concise and easier to read. The syntax supports optionally initializing the properties with default values as well if you want to. You can learn more about VB 2010's automatic property support from this MSDN page.

    Collection Initializers
    VB 2010 also now supports using collection initializers to easily create a collection and populate it with an initial set of values. You identify a collection initializer by declaring a collection variable and then using the From keyword followed by braces { } that contain the list of initial values to add to the collection. For example, the collection initializer feature makes it possible to populate a "Friends" list of Person objects with two people in a single statement, and then bind it to a GridView control to display on a page. You can learn more about VB 2010's collection initializer support from this MSDN page.

    Implicit Line Continuation Support
    Traditionally, when a statement in VB was split up across multiple lines, you had to use a line-continuation underscore character (_) to indicate that the statement wasn't complete. For example, with VB 2008 a multi-line LINQ query needs a "_" appended at the end of each line to indicate that the query is not complete yet. The VB 2010 compiler and code editor now add support for what is called "implicit line continuation" - which means the compiler is smarter about auto-detecting line continuation scenarios and, as a result, no longer needs you to explicitly indicate that the statement continues in many, many scenarios. This means that with VB 2010 we can write the same query with no "_" at all. The implicit line continuation feature also works well when editing XML Literals within VB (which is pretty cool).
    You can learn more about VB 2010's Implicit Line Continuation support and many of the scenarios it supports from this MSDN page (scroll down to the "Implicit Line Continuation" section to find details).

    Summary
    The above three VB language features are but a few of the new language and code editor features coming with VB 2010. Visit this site to learn more about some of the other VB language features coming with the release. Also subscribe to the VB team's blog to learn more and stay up-to-date with the posts the team regularly publishes.

    Hope this helps,
    Scott

    Read the article

  • Oracle Support Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)

    - by faye.todd(at)oracle.com
    Master Note for Troubleshooting Advanced Queuing and Oracle Streams Propagation Issues (Doc ID 233099.1)
    Copyright (c) 2010, Oracle Corporation. All Rights Reserved.

    In this Document:
    Purpose
    Last Review Date
    Instructions for the Reader
    Troubleshooting Details
    1. Scope and Application
    2. Definitions and Classifications
    3. How to Use This Guide
    4. Basic AQ Propagation Troubleshooting
    5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages
    6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment
    7. Performance Issues
    References

    Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 11.2.0.2 - Release: 8.1.7 to 11.2. Information in this document applies to any platform.

    Purpose
    This document presents a step-by-step methodology for troubleshooting and resolving problems with Advanced Queuing Propagation in both Streams and basic Advanced Queuing environments. It also serves as a master reference for other more specific notes on Oracle Streams Propagation and Advanced Queuing Propagation issues.

    Last Review Date
    December 20, 2010

    Instructions for the Reader
    A Troubleshooting Guide is provided to assist in debugging a specific issue. When possible, diagnostic tools are included in the document to assist in troubleshooting.

    Troubleshooting Details

    1. Scope and Application
    This note is intended for Database Administrators of Oracle databases where issues are being encountered with propagating messages between advanced queues, whether the queues are used for user-created messaging systems or for Oracle Streams. It contains troubleshooting steps and links to notes for further problem resolution. It can also be used as a template to document a problem when it is necessary to engage Oracle Support Services. Knowing what is NOT happening can frequently speed up the resolution process by focusing solely on the pertinent problem area. This guide is divided into six parts:
    Section 2: Definitions and Classifications (discusses the different types and features of propagations possible - helpful for understanding the rest of the guide)
    Section 3: How to Use this Guide (to be used as a starting point for determining the scope of the problem and what sections to consult)
    Section 4: Basic AQ propagation troubleshooting (applies to both AQ propagation of user enqueued and dequeued messages as well as Oracle Streams propagations)
    Section 5: Additional troubleshooting steps for AQ propagation of user enqueued and dequeued messages
    Section 6: Additional troubleshooting steps for Oracle Streams propagation
    Section 7: Performance issues

    2. Definitions and Classifications
    Given the potential scope of issues that can be encountered with AQ propagation, the first recommended step is to do some basic diagnosis to determine the type of problem that is being encountered.

    2.1. What Type of Propagation is Being Used?

    2.1.1. Buffered Messaging
    For an advanced queue, messages can be maintained on disk (persistent messaging) or in memory (buffered messaging). To determine if a queue is buffered or not, reference the GV_$BUFFERED_QUEUES view. If the queue does not appear in this view, it is persistent; a sample query follows below.
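    For illustration, a minimal check against that view (the queue name is a placeholder; the NUM_MSGS/SPILL_MSGS message counters are assumed from the standard view definition):

        -- A queue listed in GV$BUFFERED_QUEUES is using buffered (in-memory) messaging;
        -- a queue absent from this view is persistent (disk-based).
        SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
        FROM GV$BUFFERED_QUEUES
        WHERE QUEUE_NAME = 'MY_QUEUE';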
    2.1.2. Propagation Mode - queue-to-dblink vs queue-to-queue
    As of 10.2, an AQ propagation can also be defined as queue-to-dblink or queue-to-queue:

    queue-to-dblink: The propagation delivers messages or events from the source queue to all subscribing queues at the destination database identified by the dblink. A single propagation schedule is used to propagate messages to all subscribing queues, hence any changes made to this schedule will affect message delivery to all the subscribing queues. This mode does not support multiple propagations from the same source queue to the same target database.

    queue-to-queue: Added in 10.2, this propagation mode delivers messages or events from the source queue to a specific destination queue identified on the database link. This allows the user to have fine-grained control of the propagation schedule for message delivery. This propagation mode also supports transparent failover when propagating to a destination Oracle RAC system: with queue-to-queue propagation, you are no longer required to re-point a database link if the owner instance of the queue fails on Oracle RAC. This mode supports multiple propagations to the same target database if the target queues are different.

    The default is queue-to-dblink. To verify whether queue-to-queue propagation is being used, in non-Streams environments query DBA_QUEUE_SCHEDULES.DESTINATION - if a remote queue is listed along with the remote database link, then queue-to-queue propagation is being used. For Streams environments, the DBA_PROPAGATION.QUEUE_TO_QUEUE column can be checked. See the following note for a method to switch between the two modes:
    Document 827473.1 How to alter propagation from queue-to-queue to queue-to-dblink

    2.1.3. Combined Capture and Apply (CCA) for Streams
    In 11g Oracle Streams environments, an optimization called Combined Capture and Apply (CCA) is implemented by default when possible. Although a propagation is configured in this case, Streams does not use it; instead it passes information directly from capture to an apply receiver. To see if CCA is in use:

        COLUMN CAPTURE_NAME HEADING 'Capture Name' FORMAT A30
        COLUMN OPTIMIZATION HEADING 'CCA Mode?' FORMAT A10
        SELECT CAPTURE_NAME, DECODE(OPTIMIZATION, 0, 'No', 'Yes') OPTIMIZATION
        FROM V$STREAMS_CAPTURE;

    Also, see the following note:
    Document 463820.1 Streams Combined Capture and Apply in 11g

    2.2. Queue Table Compatibility
    There are three types of queue table compatibility. In more recent databases, queue tables may be present in all three modes of compatibility:
    8.0 - earliest version, deprecated in 10.2 onwards
    8.1 - support added for RAC, asynchronous notification, secure queues, queue level access control, rule-based subscribers, separate storage of history information
    10.0 - if the database is in 10.1-compatible mode, then the default value for queue table compatibility is 10.0

    2.3. Single vs Multiple Consumer Queue Tables
    If more than one recipient can dequeue a message from a queue, then its queue table is multiple consumer. You can propagate messages from a multiple-consumer queue to a single-consumer queue. Propagation from a single-consumer queue to a multiple-consumer queue is not possible. Both attributes can be read from the data dictionary, as sketched below.
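    As a quick illustration (the schema name is a placeholder), both attributes can be checked in one query:

        -- COMPATIBLE shows the queue table compatibility (8.0 / 8.1 / 10.0);
        -- RECIPIENTS shows SINGLE or MULTIPLE consumer.
        SELECT OWNER, QUEUE_TABLE, COMPATIBLE, RECIPIENTS
        FROM DBA_QUEUE_TABLES
        WHERE OWNER = 'AQADM';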
    3. How to Use This Guide

    3.1. Are Messages Being Propagated at All, or is the Propagation Just Slow?
    Run the following query on the source database for the propagation (assuming that it is running):

        select TOTAL_NUMBER from DBA_QUEUE_SCHEDULES where QNAME='<source_queue_name>';

    If TOTAL_NUMBER is increasing, then propagation is most likely functioning, although it may be slow. For performance issues, see Section 7.

    3.2. Propagation Between Persistent User-Created Queues
    See Sections 4 and 5 (and optionally Section 7 if performance is an issue).

    3.3. Propagation Between Buffered User-Created Queues
    See Sections 4, 5, and 6 (and optionally Section 7 if performance is an issue).

    3.4. Propagation Between Oracle Streams Queues (without Combined Capture and Apply (CCA) Optimization)
    See Sections 4 and 6 (and optionally Section 7 if performance is an issue).

    3.5. Propagation Between Oracle Streams Queues (with Combined Capture and Apply (CCA) Optimization)
    Although an AQ propagation is not used directly in this case, some characteristics of the message transfer are inferred from the propagation parameters used. Some parts of Sections 4 and 6 still apply.

    3.6. Messaging Gateway Propagations
    This note does not apply to Messaging Gateway propagations.

    4. Basic AQ Propagation Troubleshooting

    4.1. Double-check Your Code
    Make sure that you are consistent in your usage of the database link names, queue names, etc. It may be useful to plot a diagram of which queues are connected via which database links to make sure that the logical structure is correct.

    4.2. Verify that Job Queue Processes are Running

    4.2.1. Versions 10.2 and Lower - DBA_JOBS Package
    For versions 10.2 and lower, a scheduled propagation is managed by the DBMS_JOB package. The propagation is performed by job queue background processes, so we need to verify that there are sufficient processes available for the propagation. We should have at least 4 job queue processes running, and preferably more, depending on the number of other jobs running in the database. It should be noted that for AQ-specific work, AQ will only ever use half of the job queue processes available. An issue caused by an inadequate job queue processes parameter setting is described in the following note:
    Document 298015.1 Kwqjswproc:Excep After Loop: Assigning To Self

    4.2.1.1. Job Queue Processes in Initialization Parameter File
    The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0. The value can be changed dynamically via:

        connect / as sysdba
        alter system set JOB_QUEUE_PROCESSES=10;

    4.2.1.2. Job Queue Processes in Memory
    The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

        connect / as sysdba
        show parameter job;

    4.2.1.3. OS PIDs Corresponding to Job Queue Processes
    Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via:

        select p.SPID, p.PROGRAM
        from V$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
        where s.SID=jr.SID
        and s.PADDR=p.ADDR
        and jr.JOB=j.JOB
        and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

    These SPIDs can be used to check at the operating system level that the processes exist. In 8i a job queue process will have a name similar to ora_snp1_<instance_name>. In 9i onwards you will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

    4.2.2. Version 11.1 and Above - Oracle Scheduler
    In version 11.1 and above, Oracle Scheduler is used to perform AQ and Streams propagations. Oracle Scheduler automatically tunes the number of slave processes for these jobs based on the load on the computer system, and the JOB_QUEUE_PROCESSES initialization parameter is only used to specify the maximum number of slave processes. Therefore, the JOB_QUEUE_PROCESSES initialization parameter does not need to be set (it defaults to a very high number), unless you want to limit the number of slaves that can be created. If JOB_QUEUE_PROCESSES = 0, no propagation jobs will run. See the following note for a discussion of Oracle Streams 11g and Oracle Scheduler:
    Document 1083608.1 11g Streams and Oracle Scheduler

    4.2.2.1. Job Queue Processes in Initialization Parameter File
    The parameter JOB_QUEUE_PROCESSES in the init.ora/spfile should be > 0, and preferably be left at its default value. The value can be changed dynamically via:

        connect / as sysdba
        alter system set JOB_QUEUE_PROCESSES=10;

    To set the JOB_QUEUE_PROCESSES parameter to its default value, run:

        connect / as sysdba
        alter system reset JOB_QUEUE_PROCESSES;

    and then bounce the instance.

    4.2.2.2. Job Queue Processes in Memory
    The following command will show how many job queue processes are currently in use by this instance (this may be different than what is in the init.ora/spfile):

        connect / as sysdba
        show parameter job;

    4.2.2.3. OS PIDs Corresponding to Job Queue Processes
    Identify the operating system process ids (SPIDs) of job queue processes involved in propagation via:

        col PROGRAM for a30
        select p.SPID, p.PROGRAM, j.JOB_NAME
        from V$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
        where s.SID=jr.SESSION_ID
        and s.PADDR=p.ADDR
        and jr.JOB_NAME=j.JOB_NAME
        and j.JOB_NAME like '%AQ_JOB$_%';

    These SPIDs can be used to check at the operating system level that the processes exist. You will see a coordinator process, ora_cjq0_<instance_name>, and multiple slave processes, ora_jnnn_<instance_name>, where nnn is an integer between 1 and 999.

    4.3. Check the Alert Log and Any Associated Trace Files
    The first place to check for propagation failures is the alert logs at all sites (local and, if relevant, all remote sites). When a job queue process attempts to execute a schedule and fails, it will always write an error stack to the alert log. This error stack will also be written in a job queue process trace file, which will be written to the BACKGROUND_DUMP_DEST location for 10.2 and below, and in the DIAGNOSTIC_DEST location for 11g. The fact that errors are written to the alert log demonstrates that the schedule is executing, which means that the problem could be with the setup of the schedule itself. For example:

        Thu Feb 14 10:40:05 2002
        Propagation Schedule for (AQADM.MULTIPLEQ, SHANE816.WORLD) encountered following error:
        ORA-04052: error occurred when looking up Remote object AQADM.INQ@SHANE816.WORLD
        ORA-00604: error occurred at recursive SQL level 4
        ORA-02068: following severe error from SHANE816
        ORA-03114: not connected to ORACLE
        ORA-06512: at "SYS.DBMS_AQADM_SYS", line 4770
        ORA-06512: at "SYS.DBMS_AQADM", line 548
        ORA-06512: at line 1

    In this example, the ORA-02068 demonstrates that the failure was at the remote site. Further investigation revealed that the remote database was not open, hence the ORA-03114 error. Starting the database resolved the problem.

    Other potential errors that may be written to the alert log can be found in the following notes:
    Document 827184.1 AQ Propagation with CLOB data types Fails with ORA-22990 (11.1)
    Document 846297.1 AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn] (10.2, 11.1)
    Document 731292.1 ORA-25215 Reported on Local Propagation When Using Transformation with ANYDATA queue tables (10.2, 11.1, 11.2)
    Document 365093.1 ORA-07445 [kwqppay2aqe()+7360] Reported on Propagation of a Transformed Message (10.1, 10.2)
    Document 219416.1 Advanced Queuing Propagation Fails with ORA-22922 (9.0)
    Document 1203544.1 AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade (11.1, 11.2)
    Document 1087324.1 ORA-01405 ORA-01422 reported by Advanced Queuing Propagation schedules after RAC reconfiguration (10.2)
    Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370 incorrect usage of method" (9.2, 10.2, 11.1, 11.2)
    Document 332792.1 ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack (8.1, 9.0, 9.2, 10.1)
    Document 353325.1 ORA-24056: Internal inconsistency for QUEUE <queue_name> and destination <dblink> (8.1, 9.0, 9.2, 10.1, 10.2, 11.1, 11.2)
    Document 787367.1 ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2 (10.1, 10.2)
    Document 566622.1 ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1 (9.2, 10.1)
    Document 731539.1 ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP (9.0, 9.2, 10.1, 10.2, 11.1)
    Document 253131.1 Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555) (9.2)
    Document 118884.1 How to unschedule a propagation schedule stuck in pending state
    Document 222992.1 DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
    Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
    Document 1204080.1 AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g.
    Document 1233675.1 AQ Propagation stops after upgrade to 11.2.0.1 ORA-30757

    4.3.1. Errors Related to Incorrect Network Configuration
    The most common propagation errors result from an incorrect network configuration. The list below contains common errors caused by the tnsnames.ora file or database links being configured incorrectly:
    - ORA-12154: TNS:could not resolve service name
    - ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
    - ORA-12514: TNS:listener could not resolve SERVICE_NAME
    - ORA-12541: TNS:no listener

    4.4. Check the Database Links Exist and are Functioning Correctly
    For schedules to remote databases, confirm the database link exists via:
        SQL> col DBLINK for a45
        SQL> select QNAME, NVL(REGEXP_SUBSTR(DESTINATION, '[^@]+', 1, 2), DESTINATION) DBLINK
          2  from DBA_QUEUE_SCHEDULES
          3  where MESSAGE_DELIVERY_MODE = 'PERSISTENT';

        QNAME                          DBLINK
        ------------------------------ ---------------------------------------------
        MY_QUEUE                       ORCL102B.WORLD

    Connect as the owner of the link and select across it to verify it works and connects to the database we expect, i.e.:

        select * from ALL_QUEUES@ORCL102B.WORLD;

    You need to ensure that the userid that scheduled the propagation (using DBMS_AQADM.SCHEDULE_PROPAGATION, or DBMS_PROPAGATION_ADM.CREATE_PROPAGATION if using Streams) has access to the database link for the destination.

    4.5. Has Propagation Been Correctly Scheduled?
    Check that the propagation schedule has been created and that a job queue process has been assigned. Look for the entry in DBA_QUEUE_SCHEDULES and SYS.AQ$_SCHEDULES for your schedule. For 10g and below, check that it has a JOBNO entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_JOBS with that JOBNO. For 11g and above, check that the schedule has a JOB_NAME entry in SYS.AQ$_SCHEDULES, and that there is an entry in DBA_SCHEDULER_JOBS with that JOB_NAME. Check the destination is as intended and spelled correctly:

        SQL> select SCHEMA, QNAME, DESTINATION, SCHEDULE_DISABLED, PROCESS_NAME from DBA_QUEUE_SCHEDULES;

        SCHEMA  QNAME      DESTINATION        S PROCESS
        ------- ---------- ------------------ - -------
        AQADM   MULTIPLEQ  AQ$_LOCAL          N J000

    AQ$_LOCAL in the DESTINATION column shows that the queue to which we are propagating is in the same database as the source queue. If the propagation were to a remote (different) database, a database link would be in the DESTINATION column. The entry in the SCHEDULE_DISABLED column, N, means that the schedule is NOT disabled. If Y (yes) appears in this column, propagation is disabled and the schedule will not be executed. If not using Oracle Streams, propagation should resume once you have enabled the schedule by invoking DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE (for 10.2 Oracle Streams and above, the DBMS_PROPAGATION_ADM.START_PROPAGATION procedure should be used). The PROCESS_NAME is the name of the job queue process currently allocated to execute the schedule. This process is allocated dynamically at execution time. If the PROCESS_NAME column is null (empty), the schedule is not currently executing. You may need to execute this statement a number of times to verify whether a process is being allocated. If a process is at some time allocated to the schedule, it is attempting to execute.

        SQL> select SCHEMA, QNAME, LAST_RUN_DATE, NEXT_RUN_DATE from DBA_QUEUE_SCHEDULES;

        SCHEMA  QNAME      LAST_RUN_DATE         NEXT_RUN_DATE
        ------  ---------  --------------------  --------------------
        AQADM   MULTIPLEQ  13-FEB-2002 13:18:57  13-FEB-2002 13:20:30

    In 11g, these dates are expressed in TIMESTAMP WITH TIME ZONE datatypes. If the NEXT_RUN_DATE and NEXT_RUN_TIME columns are null when this statement is executed, the scheduled propagation is currently in progress. If they never change, it would suggest that the schedule itself is never executing. If the next scheduled execution is too far away, change the NEXT_TIME parameter of the schedule so that schedules are executed more frequently (assuming that the window is not set to be infinite). Parameters of a schedule can be changed using the DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE call, as sketched below.
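    For example, a sketch of such a call (the queue and destination names follow the examples used elsewhere in this note; the parameter values are illustrative only):

        BEGIN
          DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
            queue_name  => 'AQADM.MULTIPLEQ',
            destination => 'SHANE816.WORLD',
            next_time   => 'SYSDATE + 15/1440',  -- next window 15 minutes after the current one ends
            latency     => 10);                  -- seconds to wait for messages within a window
        END;
        /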
In 10g and below, scheduling propagation posts a job in the DBA_JOBS view. The columns are more or less the same as DBA_QUEUE_SCHEDULES, so you just need to recognize the job and verify that it exists.

SQL> select JOB, WHAT from DBA_JOBS where WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

JOB  WHAT
---- -----------------
720  next_date := sys.dbms_aqadm.aq$_propaq(job);

For 11g, scheduling propagation posts a job in DBA_SCHEDULER_JOBS instead:

SQL> select JOB_NAME from DBA_SCHEDULER_JOBS where JOB_NAME like 'AQ_JOB$_%';

JOB_NAME
------------------------------
AQ_JOB$_41

If no job exists, check DBA_QUEUE_SCHEDULES to make sure that the schedule has not been disabled. For 10g and below, the job number is dynamic for AQ propagation schedules. The procedure that is executed to expedite a propagation schedule runs, removes itself from DBA_JOBS, and then reposts a new job for the next scheduled propagation. The job number should therefore always increment unless the schedule has been set up to run indefinitely.

4.6. Is the Schedule Executing but Failing to Complete?

Run the following query:

SQL> select FAILURES, LAST_ERROR_MSG from DBA_QUEUE_SCHEDULES;

FAILURES LAST_ERROR_MSG
-------- -----------------------
1        ORA-25207: enqueue failed, queue AQADM.INQ is disabled from enqueueing
         ORA-02063: preceding line from SHANE816

The FAILURES column shows how many times we have attempted to execute the schedule and failed. Oracle will attempt to execute the schedule 16 times, after which it will be removed from the DBA_JOBS or DBA_SCHEDULER_JOBS view and the schedule will become disabled. The column DBA_QUEUE_SCHEDULES.SCHEDULE_DISABLED will show 'Y'. For 11g and above, the DBA_SCHEDULER_JOBS.STATE column will show 'BROKEN' for the job corresponding to DBA_QUEUE_SCHEDULES.JOB_NAME. Prior to 10g the back-off algorithm for failures was exponential, whereas from 10g onwards it is linear. The propagation will become disabled on the 17th attempt. Only the last execution failure will be reflected in the LAST_ERROR_MSG column. That is, if the schedule fails 5 times for 5 different reasons, only the last set of errors will be recorded in DBA_QUEUE_SCHEDULES. Any errors need to be resolved to allow propagation to continue. If propagation has also become disabled due to 17 failures, first resolve the reason for the error and then re-enable the schedule using the DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE procedure, or DBMS_PROPAGATION_ADM.START_PROPAGATION if using 10.2 or above Oracle Streams. As soon as the schedule executes successfully, the error message entries will be deleted. Oracle does not keep a history of past failures. However, when using Oracle Streams, the errors will be retained in the DBA_PROPAGATION view even after the schedule resumes successfully. See the following note for instructions on how to clear out the errors from the DBA_PROPAGATION view:

Document 808136.1 How to clear the old errors from DBA_PROPAGATION view?

If a schedule is active and no errors are being reported, then the source queue may not have any messages to be propagated.
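Once the underlying error has been resolved, re-enabling a non-Streams schedule is a short PL/SQL call. A minimal sketch, using the queue and destination from the alert log example at the top of this note (substitute your own names):

BEGIN
  -- re-enable the propagation schedule after fixing the root cause
  DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE(
    queue_name  => 'AQADM.MULTIPLEQ',
    destination => 'SHANE816.WORLD');
END;
/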
4.7. Do the Propagation Notification Queue Table and Queue Exist?

Check to see that the propagation notification queue table and queue exist and are enabled for enqueue and dequeue. Propagation makes use of the propagation notification queue for handling propagation run-time events, and the messages in this queue are stored in a SYS-owned queue table. This queue should never be stopped or dropped, and the corresponding queue table should never be dropped.

10g and below

The propagation notification queue table is of the format SYS.AQ$_PROP_TABLE_n, where 'n' is the RAC instance number, i.e. '1' for a non-RAC environment. This queue and queue table are created implicitly when propagation is first scheduled. If propagation has been scheduled and these objects do not exist, try unscheduling and rescheduling propagation. If they still do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ$_PROP_TABLE_1

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ$_PROP_NOTIFY_1              YES     YES
AQ$_AQ$_PROP_TABLE_1_E         NO      NO

If the AQ$_PROP_NOTIFY_1 queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ$_PROP_TABLE_1_E should not be enabled for enqueue or dequeue.

11g and above

The propagation notification queue table is of the format SYS.AQ_PROP_TABLE, and is created when the database is created. If these objects do not exist, contact Oracle Support.

SQL> select QUEUE_TABLE from DBA_QUEUE_TABLES
  2  where QUEUE_TABLE like '%PROP_TABLE%' and OWNER = 'SYS';

QUEUE_TABLE
------------------------------
AQ_PROP_TABLE

SQL> select NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED
  2  from DBA_QUEUES where owner='SYS'
  3  and QUEUE_TABLE like '%PROP_TABLE%';

NAME                           ENQUEUE DEQUEUE
------------------------------ ------- -------
AQ_PROP_NOTIFY                 YES     YES
AQ$_AQ_PROP_TABLE_E            NO      NO

If the AQ_PROP_NOTIFY queue is not enabled for enqueue or dequeue, it should be so enabled using DBMS_AQADM.START_QUEUE. However, the exception queue AQ$_AQ_PROP_TABLE_E should not be enabled for enqueue or dequeue.

4.8. Does the Remote Queue Exist and is it Enabled for Enqueueing?

Check that the remote queue the propagation is transferring messages to exists and is enabled for enqueue:

SQL> select DESTINATION from USER_QUEUE_SCHEDULES where QNAME = 'OUTQ';

DESTINATION
-----------------------------------------------------------------------------
"AQADM"."INQ"@M2V102.ES

SQL> select OWNER, NAME, ENQUEUE_ENABLED, DEQUEUE_ENABLED from [email protected];

OWNER    NAME   ENQUEUE     DEQUEUE
-------- ------ ----------- -----------
AQADM    INQ    YES         YES

4.9. Do the Target and Source Database Charactersets Differ?

If a message fails to propagate, check the database charactersets of the source and target databases. Investigate whether the same message can propagate between databases with the same characterset, or whether only a particular combination of charactersets causes a problem.

4.10. Check the Queue Table Type Agreement

Propagation is not possible between queue tables whose types differ in some respect. One way to determine if this is the case is to run the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure for the two queues that the propagation operates on.
If the types do not agree, DBMS_AQADM.VERIFY_QUEUE_TYPES will return '0'. For AQ propagation between databases which have different NLS_LENGTH_SEMANTICS settings, propagation will not work unless the queues are Oracle Streams ANYDATA queues. See the following notes for issues caused by lack of type agreement:

Document 1079577.1 Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
Document 282987.1 Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
Document 353754.1 Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT

4.11. Enable Propagation Tracing

4.11.1. System Level

This is set in the init.ora/spfile as follows, followed by a restart of the instance:

event="24040 trace name context forever, level 10"

This event cannot be set dynamically with an alter system command until version 10.2:

SQL> alter system set events '24040 trace name context forever, level 10';

To unset the event:

SQL> alter system set events '24040 trace name context off';

Debugging information will be logged to job queue trace file(s) (jnnn) as propagation takes place. You can check the trace file for errors, and for statements indicating that messages have been sent. For the most part the trace information is understandable. This trace should also be uploaded to Oracle Support if a service request is created.

4.11.2. Attaching to a Specific Process

We can also attach to an existing job queue process that is running a propagation schedule and trace it individually using the oradebug utility, as follows:

10.2 and below

connect / as sysdba

select p.SPID, p.PROGRAM
from v$PROCESS p, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j
where s.SID=jr.SID and s.PADDR=p.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%';

-- For the process id (SPID), attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes, then turn tracing off
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename of the file being written to
oradebug tracefile_name

11g

connect / as sysdba
col PROGRAM for a30

select p.SPID, p.PROGRAM, j.JOB_NAME
from v$PROCESS p, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j
where s.SID=jr.SESSION_ID and s.PADDR=p.ADDR and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%';

-- For the process id (SPID), attach to it via oradebug and generate the following trace
oradebug setospid <SPID>
oradebug unlimit
oradebug Event 10046 trace name context forever, level 12
oradebug Event 24040 trace name context forever, level 10
-- Trace the process for 5 minutes, then turn tracing off
oradebug Event 10046 trace name context off
oradebug Event 24040 trace name context off
-- The following command returns the pathname/filename of the file being written to
oradebug tracefile_name

4.11.3. Further Tracing

The previous tracing steps only trace the job queue process executing the propagation on the source.
At times it is useful to trace the propagation receiver process (the session which is enqueueing the messages into the target queue) on the target database which is associated with the job queue process on the source database. The following queries provide ways of identifying the processes involved in propagation so that you can attach to them via oradebug to generate trace information. In order to identify the propagation receiver process you need to execute the query as a user with privileges to access the v$ views in both the local and remote databases, so the database link must connect as a user with those privileges in the remote database. The <DBLINK> in the queries should be replaced by the appropriate database link. The queries have two forms due to the differences between operating systems. The value returned by 'Rem Process' is the operating system identifier of the propagation receiver on the remote database. Once identified, this process can be attached to and traced on the remote database using the commands given in Section 4.11.2.

10.2 and below - Windows

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

10.2 and below - Unix

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_JOBS_RUNNING jr, V$SESSION s, DBA_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SID and s.PADDR=pl.ADDR and jr.JOB=j.JOB
and j.WHAT like '%sys.dbms_aqadm.aq$_propaq(job)%'
and pl.SPID=sr.PROCESS;

11g - Windows

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=substr(sr.PROCESS, instr(sr.PROCESS,':')+1);

11g - Unix

select pl.SPID "JobQ Process", pl.PROGRAM, sr.PROCESS "Rem Process"
from V$PROCESS pl, DBA_SCHEDULER_RUNNING_JOBS jr, V$SESSION s, DBA_SCHEDULER_JOBS j, V$SESSION@<DBLINK> sr
where s.SID=jr.SESSION_ID and s.PADDR=pl.ADDR and jr.JOB_NAME=j.JOB_NAME
and j.JOB_NAME like '%AQ_JOB$_%'
and pl.SPID=sr.PROCESS;

5. Additional Troubleshooting Steps for AQ Propagation of User-Enqueued and Dequeued Messages

5.1. Check the Privileges of All Users Involved

Ensure that the owner of the database link has the necessary privileges on the AQ packages.

SQL> select TABLE_NAME, PRIVILEGE from USER_TAB_PRIVS;

TABLE_NAME                     PRIVILEGE
------------------------------ ----------------------------------------
DBMS_LOCK                      EXECUTE
DBMS_AQ                        EXECUTE
DBMS_AQADM                     EXECUTE
DBMS_AQ_BQVIEW                 EXECUTE
QT52814_BUFFER                 SELECT

Note that when a queue table is created, a view called QT<nnn>_BUFFER is created in the SYS schema, and the queue table owner is given SELECT privileges on it. The <nnn> corresponds to the object_id of the associated queue table.

SQL> select * from USER_ROLE_PRIVS;

USERNAME                       GRANTED_ROLE                   ADM  DEF  OS_
------------------------------ ------------------------------ ---- ---- ---
AQ_USER1                       AQ_ADMINISTRATOR_ROLE          NO   YES  NO
AQ_USER1                       CONNECT                        NO   YES  NO
AQ_USER1                       RESOURCE                       NO   YES  NO

It is good practice to configure a central AQ administrative user. All admin and processing jobs are created, executed and administered as this user.
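Setting up such a central user amounts to a handful of grants. A minimal sketch; the user name aq_admin is hypothetical and the grants mirror the privileges shown above:

CREATE USER aq_admin IDENTIFIED BY password;  -- choose a real password
GRANT CONNECT, RESOURCE TO aq_admin;
GRANT EXECUTE ON DBMS_AQ TO aq_admin;
GRANT EXECUTE ON DBMS_AQADM TO aq_admin;
GRANT AQ_ADMINISTRATOR_ROLE TO aq_admin;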
This configuration is not mandatory, however, and the database link can be owned by any existing queue user. If this latter configuration is used, ensure that the connecting user has the necessary privileges on the AQ packages and objects involved.

Privileges for an AQ administrative user:
- Execute on DBMS_AQADM
- Execute on DBMS_AQ
- Granted the AQ_ADMINISTRATOR_ROLE

Privileges for an AQ user:
- Execute on DBMS_AQ
- Execute on the message payload
- Enqueue privileges on the remote queue
- Dequeue privileges on the originating queue

Privileges need to be confirmed on both sites when propagation is scheduled to remote destinations. Verify that the user ID used to log in to the destination through the database link has been granted privileges to use AQ.

5.2. Verify Queue Payload Types

AQ will not propagate messages from one queue to another if the payload types of the two queues are not verified to be equivalent. An AQ administrator can verify whether the source and destination's payload types match by executing the DBMS_AQADM.VERIFY_QUEUE_TYPES procedure. The results of the type checking will be stored in the SYS.AQ$_MESSAGE_TYPES table. This table can be accessed using the object identifier (OID) of the source queue and the address (database link) of the destination queue, i.e. [schema.]queue_name[@destination]. Prior to Oracle 9i the payload (message type) had to be the same for all the queue tables involved in propagation. From Oracle9i onwards a transformation can be used so that payloads can be converted from one type to another. The following procedural call, made on the source database, can verify whether we can propagate between the source and the destination queue tables:

connect aq_user1/[email protected]
set serverout on

DECLARE
  rc_value number;
BEGIN
  DBMS_AQADM.VERIFY_QUEUE_TYPES(
    src_queue_name  => 'AQ_USER1.Q_1',
    dest_queue_name => 'AQ_USER2.Q_2',
    destination     => 'dbl_aq_user2.es',
    rc              => rc_value);
  dbms_output.put_line('rc_value code is '||rc_value);
END;
/

If propagation is possible then the return code value will be 1. If it is 0 then propagation is not possible and further investigation of the types and transformations used by and in conjunction with the queue tables is required. With regard to comparison of the types, the following SQL can be used to extract the DDL for a specific type, with '%' changed appropriately on the source and target. The output can then be compared between source and target.

SET LONG 20000
set pagesize 50
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
SELECT DBMS_METADATA.GET_DDL('TYPE', t.type_name) from user_types t WHERE t.type_name like '%';
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT');

5.3. Check Message State and Destination

The first step in this process is to identify the queue table associated with the problem source queue. Although you schedule propagation for a specific queue, most of the meta-data associated with that queue is stored in the underlying queue table. The following statement finds the queue table for a given queue (note that this is a multiple-consumer queue table).
SQL> select QUEUE_TABLE from DBA_QUEUES where NAME = 'MULTIPLEQ';

QUEUE_TABLE
--------------------
MULTIPLEQTABLE

For a small number of messages in a multiple-consumer queue table, the following query can be run:

SQL> select MSG_STATE, CONSUMER_NAME, ADDRESS from AQ$MULTIPLEQTABLE where QUEUE = 'MULTIPLEQ';

MSG_STATE      CONSUMER_NAME           ADDRESS
-------------- ----------------------- -------------
READY          AQUSER2                 [email protected]
PROCESSED      AQUSER1
READY          AQUSER3                 AQADM.INQ

In this example we see 2 messages ready to be propagated to remote queues and 1 that is not. If the ADDRESS column is blank, the message is not scheduled for propagation and can only be dequeued from the queue upon which it was enqueued. The MSG_STATE column values are discussed in Document 102330.1 Advanced Queueing MSG_STATE Values and their Interpretation. If the ADDRESS column has a value, the message has been enqueued for propagation to another queue. The first row in the example includes a database link (@M2V102.ES). This demonstrates that the message should be propagated to a queue at a remote database. The third row does not include a database link, so the message will be propagated to a queue that resides on the same database as the source queue. The consumer name is the intended recipient at the target queue. Note that we are not querying the base queue table directly; rather, we are querying a view that is available on top of every queue table, AQ$<queue_table_name>.

A more realistic query in an environment where the queue table contains thousands of messages is:

8.0.3-compatible multiple-consumer queue tables and all compatibility single-consumer queue tables:

select count(*), MSG_STATE, QUEUE from AQ$<queue_table_name>
group by MSG_STATE, QUEUE;

8.1.3 and 10.0-compatible queue tables:

select count(*), MSG_STATE, QUEUE, CONSUMER_NAME from AQ$<queue_table_name>
group by MSG_STATE, QUEUE, CONSUMER_NAME;

For multiple-consumer queue tables, if you did not see the expected CONSUMER_NAME, check the syntax of the enqueue code and verify the recipients are declared correctly. If a recipients list is not used on enqueue, check the subscriber list in the AQ$_<queue_table_name>_S view (note that a single-consumer queue table does not have a subscriber view). This view records all members of the default subscription list which were added using the DBMS_AQADM.ADD_SUBSCRIBER procedure, and also those enqueued using a recipient list.

SQL> select QUEUE, NAME, ADDRESS from AQ$MULTIPLEQTABLE_S;

QUEUE      NAME        ADDRESS
---------- ----------- -------------
MULTIPLEQ  AQUSER2     [email protected]
MULTIPLEQ  AQUSER1

In this example we have 2 subscribers registered with the queue: a local subscriber AQUSER1, and a remote subscriber AQUSER2, on the queue INQ, owned by AQADM, at M2V102.ES. Unless overridden with a recipient list during enqueue, every message enqueued to this queue will be propagated to INQ at M2V102.ES.

For 8.1-style and above multiple-consumer queue tables, you can also check the following information at the target:

select CONSUMER_NAME, DEQ_TXN_ID, DEQ_TIME, DEQ_USER_ID, PROPAGATED_MSGID
from AQ$<queue_table_name> where QUEUE = '<QUEUE_NAME>';

For 8.0-style queues, if the queue table supports multiple consumers, you can obtain the same information from the history column of the queue table:

select h.CONSUMER, h.TRANSACTION_ID, h.DEQ_TIME, h.DEQ_USER, h.PROPAGATED_MSGID
from AQ$<queue_table_name> t, table(t.history) h where t.Q_NAME = '<QUEUE_NAME>';

A non-NULL TRANSACTION_ID indicates that the message was successfully propagated.
Further, the DEQ_TIME indicates the time of propagation, the DEQ_USER indicates the userid used for propagation, and the PROPAGATED_MSGID indicates the message ID of the message that was enqueued at the destination.

6. Additional Troubleshooting Steps for Propagation in an Oracle Streams Environment

6.1. Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. For Streams, a special view called DBA_PROPAGATION exists to convey information about Streams propagations. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled. To query for this:

SELECT p.PROPAGATION_NAME,
       DECODE(s.SCHEDULE_DISABLED, 'Y', 'Disabled', 'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME, s.FAILURES, s.LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(s.DESTINATION, '[^@]+', 1, 2), s.DESTINATION)
  AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND s.QNAME = p.SOURCE_QUEUE_NAME
  AND MESSAGE_DELIVERY_MODE = 'PERSISTENT'
order by PROPAGATION_NAME;

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. If an error is indicated by the above query, an attempt to disable the propagation and then re-enable it can be made. In the examples below, for the propagation named STRMADMIN_PROPAGATE, where the queue name is STREAMS_QUEUE owned by STRMADMIN and the destination database link is ORCL2.WORLD, the commands would be:

10.2 and above

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

If the above does not fix the problem, stop the propagation specifying the force parameter (the 2nd parameter on stop_propagation) as TRUE:

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE', true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

The statistics for the propagation, as well as any old error messages, are cleared when the force parameter is set to TRUE. Therefore, if the propagation schedule is stopped with FORCE set to TRUE, and upon restart there is still an error message in DBA_PROPAGATION, then the error message is current.

9.2 or 10.1

exec dbms_aqadm.disable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.enable_propagation_schedule('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

If the above does not fix the problem, perform an unschedule of propagation and then schedule_propagation:

exec dbms_aqadm.unschedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');
exec dbms_aqadm.schedule_propagation('STRMADMIN.STREAMS_QUEUE','ORCL2.WORLD');

Typically, if the error from the first query in Section 6.1 recurs after restarting the propagation as shown above, further troubleshooting of the error is needed.

6.2. Check Propagation Rule Sets and Transformations

Inspect the configuration of the rules in the rule set that is associated with the propagation process to make sure that they evaluate to TRUE as expected. If not, then the object or schema will not be propagated. Remember that when a negative rule evaluates to TRUE, the specified object or schema will not be propagated.
Finally, inspect any rule-based transformations that are implemented with propagation to make sure they are changing the data in the intended way. The following query shows what rule sets are assigned to a propagation:

select PROPAGATION_NAME,
       RULE_SET_OWNER||'.'||RULE_SET_NAME "Positive Rule Set",
       NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME "Negative Rule Set"
from DBA_PROPAGATION;

The next two queries list the propagation rules and their conditions. The first is for the positive rule set, the second is for the negative rule set:

set long 4000
select rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r
where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
and RULE_SET_NAME in (select RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

set long 4000
select c.PROPAGATION_NAME,
       rsr.RULE_SET_OWNER||'.'||rsr.RULE_SET_NAME RULE_SET,
       rsr.RULE_OWNER||'.'||rsr.RULE_NAME RULE_NAME,
       r.RULE_CONDITION CONDITION
from DBA_RULE_SET_RULES rsr, DBA_RULES r, DBA_PROPAGATION c
where rsr.RULE_NAME = r.RULE_NAME and rsr.RULE_OWNER = r.RULE_OWNER
and rsr.RULE_SET_OWNER = c.NEGATIVE_RULE_SET_OWNER and rsr.RULE_SET_NAME = c.NEGATIVE_RULE_SET_NAME
and rsr.RULE_SET_NAME in (select NEGATIVE_RULE_SET_NAME from DBA_PROPAGATION)
order by rsr.RULE_SET_OWNER, rsr.RULE_SET_NAME;

6.3. Determining the Total Number of Messages and Bytes Propagated

As in Section 3.1, determining whether messages are flowing can be instructive in deciding whether the propagation is entirely hung or just slow. If the propagation is not in flow control (see Section 6.5.2), but the statistics are incrementing slowly, there may be a performance issue. For Streams implementations two views are available that can assist with this by showing the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site: the V$PROPAGATION_SENDER view at the source site and the V$PROPAGATION_RECEIVER view at the destination site. It is helpful to query both to determine if messages are being delivered to the target. Look for the statistics to increase.

Source:

select QUEUE_SCHEMA, QUEUE_NAME, DBLINK, HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS, TOTAL_BYTES
from V$PROPAGATION_SENDER;

Target:

select SRC_QUEUE_SCHEMA, SRC_QUEUE_NAME, SRC_DBNAME, DST_QUEUE_SCHEMA, DST_QUEUE_NAME,
       HIGH_WATER_MARK, ACKNOWLEDGEMENT, TOTAL_MSGS
from V$PROPAGATION_RECEIVER;

6.4. Check Buffered Subscribers

The V$BUFFERED_SUBSCRIBERS view displays information about subscribers for all buffered queues in the instance. This view can be queried to make sure that the site that the propagation is propagating to is listed as a subscriber address for the site being propagated from:

select QUEUE_SCHEMA, QUEUE_NAME, SUBSCRIBER_ADDRESS from V$BUFFERED_SUBSCRIBERS;

The SUBSCRIBER_ADDRESS column will not be populated when the propagation is local (between queues on the same database).

6.5. Common Streams Propagation Errors

6.5.1. ORA-02082: A loopback database link must have a connection qualifier

This error can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database.

6.5.2. ORA-25307: Enqueue rate too high. Enable flow control
DBA_QUEUE_SCHEDULES will display this informational message for propagation when automatic flow control (a 10g feature of Streams) has been invoked. Similar to Streams capture processes, a Streams propagation process can also go into a state of 'flow control'. This is an informative message that indicates flow control has been automatically enabled to reduce the rate at which messages are being enqueued at the target queue. This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, usually no action is required by the DBA. Propagation and the capture process will resume automatically when the target site is able to accept more messages.

The following document contains more information:
Document 302109.1 Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control

See the following document for one potential cause of this situation:
Document 1097115.1 Oracle Streams Apply Reader is in 'Paused' State

6.5.3. ORA-25315 unsupported configuration for propagation of buffered messages

This error typically occurs when the target database is RAC and usually indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance in the destination database which is not the owner instance of the destination queue. To resolve the problem, use queue-to-queue propagation for buffered messages.

6.5.4. ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation

For cause/fixes refer to:
Document 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation

6.5.5. Stopping or Dropping a Streams Propagation Hangs

See the following note:
Document 1159787.1 Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang

6.6. Streams Propagation-Related Notes for Common Issues

Document 437838.1 Streams Specific Patches
Document 749181.1 How to Recover Streams After Dropping Propagation
Document 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
Document 564649.1 ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
Document 553017.1 Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
Document 944846.1 Streams Propagation Fails Ora-7445 [kohrsmc]
Document 745601.1 ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
Document 333068.1 ORA-23603: Streams Enqueue Aborted Due To Low SGA
Document 363496.1 Ora-25315 Propagating on RAC Streams
Document 368237.1 Unable to Unschedule Propagation. Streams Queue is Invalid
Document 436332.1 dbms_propagation_adm.stop_propagation hangs
Document 727389.1 Propagation Fails With ORA-12528
Document 730911.1 ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
Document 460471.1 Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
Document 1165583.1 ORA-600 [kwqpuspse0-ack] In Streams Environment
Document 1059029.1 Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagation
Document 556309.1 Changing Propagation / queue_to_queue : false -> true does not work; no LCRs propagated
Document 839568.1 Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
Document 311021.1 Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configured
Document 359971.1 STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
Document 1101616.1 DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747

7. Performance Issues

A propagation may seem to be slow if the queries from Sections 3.1 and 6.3 show that the message statistics are not changing quickly. In Oracle Streams, this is more usually due to a slow apply process at the target rather than a slow propagation. Propagation can be inferred to be slow if the message statistics are changing, and the state of a capture process according to V$STREAMS_CAPTURE.STATE is PAUSED FOR FLOW CONTROL, but an ORA-25307 'Enqueue rate too high. Enable flow control' warning is NOT observed in DBA_QUEUE_SCHEDULES per Section 6.5.2. If this is the case, see the following notes / white papers for suggestions to increase performance:

Document 335516.1 Master Note for Streams Performance Recommendations
Document 730036.1 Overview for Troubleshooting Streams Performance Issues
Document 780733.1 Streams Propagation Tuning with Network Parameters
White Paper: http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-streams-performance-130059.pdf
White Paper: Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2, http://www.oracle.com/technetwork/database/features/availability/maa-10gr2-streams-configuration-132039.pdf, See APPENDIX A: USING STREAMS CONFIGURATIONS OVER A NETWORK

For basic AQ propagation, the network tuning in the aforementioned Appendix A of the white paper 'Oracle Streams Configuration Best Practices: Oracle Database 10g Release 10.2' is applicable.
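To give a flavour of that network tuning, one commonly tuned knob is the session data unit (SDU) on the alias used by the propagation database link. The fragment below is an illustrative tnsnames.ora entry only (host, port and service name are placeholders), not an excerpt from the white paper:

# larger SDU for bulk propagation traffic; must also be allowed on the listener side
ORCL2.WORLD =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = targethost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl2))
  )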
References

NOTE:102330.1 - Advanced Queueing MSG_STATE Values and their Interpretation
NOTE:102771.1 - Advanced Queueing Propagation using PL/SQL
NOTE:1059029.1 - Combined Capture and Apply (CCA) : Capture aborts : ORA-1422 after schedule_propagation
NOTE:1079577.1 - Advanced Queuing Propagation Fails With "ORA-22370: incorrect usage of method"
NOTE:1083608.1 - 11g Streams and Oracle Scheduler
NOTE:1087324.1 - ORA-01405 ORA-01422 reported by Advanced Queueing Propagation schedules after RAC reconfiguration
NOTE:1097115.1 - Oracle Streams Apply Reader is in 'Paused' State
NOTE:1101616.1 - DBMS_PROPAGATION_ADM.DROP_PROPAGATION FAILS WITH ORA-1747
NOTE:1159787.1 - Troubleshooting Streams Propagation When It is Not Functioning and Attempts to Stop It Hang
NOTE:1165583.1 - ORA-600 [kwqpuspse0-ack] In Streams Environment
NOTE:118884.1 - How to unschedule a propagation schedule stuck in pending state
NOTE:1203544.1 - AQ Propagation Aborted with ORA-600 [ociksin: invalid status] on SYS.DBMS_AQADM_SYS.AQ$_PROPAGATION_PROCEDURE After Upgrade
NOTE:1204080.1 - AQ Propagation Failing With ORA-25329 After Upgraded From 8i or 9i to 10g or 11g
NOTE:219416.1 - Advanced Queuing Propagation fails with ORA-22922
NOTE:222992.1 - DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE Returns ORA-24082
NOTE:253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management (ORA-1555)
NOTE:282987.1 - Propagated Messages marked UNDELIVERABLE after Drop and Recreate Of Remote Queue
NOTE:298015.1 - Kwqjswproc:Excep After Loop: Assigning To Self
NOTE:302109.1 - Streams Propagation Error: ORA-25307 Enqueue rate too high. Enable flow control
NOTE:311021.1 - Streams Propagation Process : Ora 12154 After Reboot with Transparent Application Failover TAF configured
NOTE:332792.1 - ORA-04061 error relating to SYS.DBMS_PRVTAQIP reported when setting up Statspack
NOTE:333068.1 - ORA-23603: Streams Enqueue Aborted Due To Low SGA
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:353325.1 - ORA-24056: Internal inconsistency for QUEUE and destination
NOTE:353754.1 - Streams Messaging Propagation Fails between Single and Multi-byte Charactersets when using Character Length Semantics in the ADT
NOTE:359971.1 - STREAMS propagation to Primary of physical Standby configuration errors with Ora-01033, Ora-02068
NOTE:363496.1 - Ora-25315 Propagating on RAC Streams
NOTE:365093.1 - ORA-07445 [kwqppay2aqe()+7360] reported on Propagation of a Transformed Message
NOTE:368237.1 - Unable to Unschedule Propagation. Streams Queue is Invalid
NOTE:368912.1 - Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
NOTE:421237.1 - ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation
NOTE:436332.1 - dbms_propagation_adm.stop_propagation hangs
NOTE:437838.1 - Streams Specific Patches
NOTE:460471.1 - Propagation Blocked by Qmon Process - Streams_queue_table / 'library cache lock' waits
NOTE:463820.1 - Streams Combined Capture and Apply in 11g
NOTE:553017.1 - Stream Propagation Process Errors Ora-4052 Ora-6554 From 11g To 10201
NOTE:556309.1 - Changing Propagation / queue_to_queue : false -> true does not work; no LCRs propagated
NOTE:564649.1 - ORA-02068/ORA-03114/ORA-03113 Errors From Streams Propagation Process - Remote Database is Available and Unschedule/Reschedule Does Not Resolve
NOTE:566622.1 - ORA-22275 when propagating >4K AQ$_JMS_TEXT_MESSAGEs from 9.2.0.8 to 10.2.0.1
NOTE:727389.1 - Propagation Fails With ORA-12528
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:730911.1 - ORA-4063 Is Reported After Dropping Negative Prop. Ruleset
NOTE:731292.1 - ORA-25215 Reported On Local Propagation When Using Transformation with ANYDATA queue tables
NOTE:731539.1 - ORA-29268: HTTP client error 401 Unauthorized Error when the AQ Servlet attempts to Propagate a message via HTTP
NOTE:745601.1 - ORA-23603 'STREAMS enqueue aborted due to low SGA' Error from Streams Propagation, and V$STREAMS_CAPTURE.STATE Hanging on 'Enqueuing Message'
NOTE:749181.1 - How to Recover Streams After Dropping Propagation
NOTE:780733.1 - Streams Propagation Tuning with Network Parameters
NOTE:787367.1 - ORA-22275 reported on Propagating Messages with LOB component when propagating between 10.1 and 10.2
NOTE:808136.1 - How to clear the old errors from DBA_PROPAGATION view?
NOTE:827184.1 - AQ Propagation with CLOB data types Fails with ORA-22990
NOTE:827473.1 - How to alter propagation from queue_to_queue to queue_to_dblink
NOTE:839568.1 - Propagation failing with error: ORA-01536: space quota exceeded for tablespace ''
NOTE:846297.1 - AQ Propagation Fails : ORA-00600[kope2upic2954] or Ora-00600[Kghsstream_copyn]
NOTE:944846.1 - Streams Propagation Fails Ora-7445 [kohrsmc]


  • Something for the weekend - What's the most complex query?

    - by simonsabin
Whenever I teach about SQL Server performance tuning I try to get across the message that there is no such thing as a table. Does that sound odd? Well it isn't, trust me. Rather than tables you need to consider structures. You have:

1. Heaps
2. Indexes (b-trees)

Some people split indexes in two, clustered and non-clustered. This, I feel, confuses the situation, as people associate clustered indexes with sorting but don't associate non-clustered indexes with sorting; this is wrong. Clustered and non-clustered indexes are the same b-tree structure (and even more so with SQL 2005), with the leaf pages sorted in a linked list according to the keys of the index. The difference is that non-clustered indexes include in their structure either the clustered key(s), or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same: they have key columns, which are stored on the root and intermediary pages, and included columns, which are on the leaf level. The reason this is important is that this is how the optimiser sees the world; this means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend.

So the challenge for the weekend is to produce the most complex single table query. For those clever bods amongst you that are thinking "great, I will just use lots of xquery functions", sorry, these are the rules:

1. You have to use a table from AdventureWorks (2005 or 2008)
2. You can add whatever indexes you like, but you must document these
3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function
4. You can only reference your table once, i.e. a FROM clause with ONE table and no JOINs
5. No sub queries

The aim of this is to show how the optimiser can use multiple structures to build the results of a query, and to also highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example, create these two indexes on AdventureWorks2008:

create index IX_Person_Person on Person.Person (lastName, FirstName, NameStyle, PersonType)
create index IX_Person_Person on Person.Person (BusinessentityId, ModifiedDate) with drop_existing

and run this query:

select lastName, ModifiedDate
from Person.Person
where LastName = 'Smith'

You will see that the optimiser has decided not to access the underlying clustered index of the table but to use the two indexes above to resolve the query. This highlights how the optimiser considers all storage structures (clustered indexes, non-clustered indexes and heaps) when trying to resolve a query. So are you up to the challenge for the weekend to produce the most complex single table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.


  • RegexClean Transformation

Use the power of regular expressions to cleanse your data right there inside the Data Flow. This transformation includes a full user interface for simple configuration, as well as advanced features such as error output configuration. Two regular expressions are used: a match expression and a replace expression. The transformation is designed around named capture groups or match groups, and even supports multiple expressions. This allows rich and complex expressions to be built, all through an easy-to-reuse transformation where a bespoke Script Component was previously the only alternative. Some simple properties are available for each column selected:

Behaviour
The two behaviour modes offer similar functionality but with a difference. Replace replaces matched tokens within the input, and Emit overwrites the whole string.

Cascade
Cascade allows you to define multiple expressions, each on a new line. The match expression will be processed into one operation per line, which are then processed in order at run-time. Multiple replace expressions can also be specified, again each on a new line. If there is no corresponding replace expression for a match expression line, then the last replace expression will be used instead. It is common to have multiple match expressions, but only a single replace expression.

Match Expression
The expression used to define the named capture groups. This is where you can analyse the data, and tag or name elements within it as found by the match expression.

Replace Expression
The replace expression determines the final output. It references the named groups from the match expression and assembles them into the final output.

If you want to use regular expressions to validate data then try the Regular Expression Transformation.

Quick Start Guide
1. Select a column. A new output column is created for each selected column; there is no option for in-place replacement of column values. One input column can be used to populate multiple output columns; just select the column again in the lower grid, using the Input Columns drop-down selector.
2. Amend the output column name and size as required. They default to the same as the input column selected.
3. Amend the behaviour as required; the default is Replace.
4. Amend the cascade option as required; the default is true.
5. Finally, enter your match and replace regular expressions.

Quick Sample #1
Parse an email address and extract the user and domain portions, then format the result as a web address passing the user portion as a URL parameter. This uses two match groups, user and host, which correspond to the text before the @ and after it respectively. Behaviour is Emit, and cascade is false; we only have a single match expression.

Match Expression: ^(?<user>[^@]+)@(?<host>.+)$
Replace Expression: http://www.${host}?user=${user}

Results:
Sample Input: [email protected]
Sample Output: http://www.adventure-works.com?user=zheng0

The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the RegexClean Transformation in the list.

Downloads
The RegexClean Transformation is available for both SQL Server 2005 and SQL Server 2008. Please choose the version to match your SQL Server version, or you can install both versions and use them side by side if you have both SQL Server 2005 and SQL Server 2008 installed.
RegexClean Transformation for SQL Server 2005
RegexClean Transformation for SQL Server 2008

Version History
SQL Server 2005 Version 1.0.0.105 - Public Release (28 Jan 2008)


  • Windows Phone–A beautiful phone which I admire but I don’t recommend to friends and family

    - by Gopinath
Microsoft’s Windows Phones are the most beautiful phones I’ve seen. Look at the photo which Microsoft shared on their Facebook page today. It’s gorgeous. Windows Phones come in vibrant colors and the user interface is very lively. When you put an iPhone, an Android phone and a Windows Phone on a table, the Windows Phone definitely stands out. Android and iOS interfaces are routine: a bunch of app icons arranged in rows and multiple screens. Windows Phone is very different; the live tiles concept mesmerizes us. I love Windows Phone, but I neither buy one nor recommend it to family and friends! Why? Because it does not have all the apps I need. Microsoft advertises that Windows Phone has 100K apps on its Windows Market Place. It’s true, there are 100K+ apps available for Windows Phone, but not many of them are really useful, and most of the popular apps I use on Android are not available. When I say this to my friends at Microsoft, they don’t agree, and one of them asked me to list the apps that are not available. For him, today I spent an hour quickly scanning through the apps installed on my Google Nexus and searching for the same apps on Windows Market Place. As expected, many of them are not available. Here is the list of my favorite Android apps that are not available for Windows Phone:

Mint – I use this app more than any of the banking apps I’ve installed on my mobile. It’s one app to keep a tab on all expenses and income; the best money management and tracking app.
Google Chrome – The web without Google Chrome is too boring, either on desktop or on mobile. IE is too heavy and Firefox is losing its grip. Chrome is the new darling of the web.
Pulse, Flipboard – Flipboard and Pulse are among the best apps for reading news and following the content of favorite blogs.
Dropbox – Syncs content across devices and provides access to your content on any device. It really does not matter what your gadget is (mobile, tablet or computer); Dropbox lets you access your content.
GMail, Google Maps – Need I say how important these two apps are in our day-to-day life!!
Vonage Extension – For around 30 bucks a month, Vonage provides landline service in the USA + unlimited calls to India and many other countries + the Vonage Extension app that lets Android/iOS mobiles make unlimited international calls for free. Without the Vonage Extension app, I’m almost cut off from my family and friends back home in India.
Instagram – The most popular camera app, used by everyone from the common man to celebrities.
Raaga, Dhingana – Music is part and parcel of life, and these two are the most popular apps for listening to Indian music.
Quora – Quora is the place where most of the sensible discussions happen on the web.
Google Analytics, Google Adsense – I’m a blogger and these two apps mean a lot to me.

The list goes on and on! There are many useful apps that are not available on Windows Phone: TuneIn, MyTWC, Chrome To Phone, Google Voice, etc. Without all these apps, Windows Phone is just another old Nokia phone. Even though Windows Phone is the most beautiful phone, it needs apps to attract customers. Without apps, a smartphone is more or less a dumb feature phone which we loved to use before the release of the iPhone. I wish that in a year or two the beautiful Windows Phone will have all the missing apps. When that happens I’ll buy a phone for myself and recommend it to my family and friends. But till then I prefer to stay away.


  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said “generally” and not “always”; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I’ll demonstrate such a scenario, one that is pretty common when building data warehouses.

Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}:

Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate] but for the purposes of clarity I have included it here.) Anyway, the idea here is that we will have some incoming data containing [CustomerName] and [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be:

Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]

The conventional approach to this would be to use a full cached lookup, but that isn’t an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators. Let’s take a look at the dataflow: notice these are all synchronous components. This approach works just fine; however, it does have the limitation that it has to issue a SQL statement against your lookup set for every row, thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that’s not good.

OK, that’s the obvious method. Let’s now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. I’ve shown it post-execution so that I can include the row counts, which help to illustrate what is going on here. Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo

For those that can’t be bothered watching the video, I’ll tell you the results here. The dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. An illustration in case it is needed: pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors.
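To make that comparison concrete, here is a hedged sketch of the two shapes of work involved, using the column names from the post; the statements and the [Customer]/[Staging] table names are mine for illustration, not taken from the demo package:

-- Row-by-row: what the non-cached Lookup effectively issues once per incoming row
SELECT CustomerId
FROM   dbo.Customer
WHERE  CustomerName = ?     -- parameter: [CustomerName] from the current row
AND    SCDStartDate <= ?    -- parameter: [EffectiveDate]
AND    ? <= SCDEndDate;     -- parameter: [EffectiveDate]

-- Set-based: the single statement that the Merge Join + Conditional Split emulates
SELECT s.CustomerName, s.EffectiveDate, c.CustomerId
FROM   dbo.Staging s
JOIN   dbo.Customer c
  ON   c.CustomerName = s.CustomerName
WHERE  c.SCDStartDate <= s.EffectiveDate
AND    s.EffectiveDate <= c.SCDEndDate;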
It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow. I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet


  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to batch is the worst performing. Simply put, it is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below:

SELECT * FROM Person.BusinessEntity
JOIN Person.BusinessEntityAddress
ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
go
SELECT * FROM Sales.SalesOrderDetail
JOIN Sales.SalesOrderHeader
ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 90.2% / 9.8% split. Close, but no cigar.

Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449 and of query 2 it is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97% (0.29449 / (0.29449 + 1.96596) = 0.1303); round those and those are the figures we are after. But what is the worrying word there? "Estimated". So these are not "actual" execution costs, but what's the problem in comparing the estimated costs to derive a meaning of "most costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit.

By modifying the second query to also show the total number of lines on each order:

SELECT *, COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID)
FROM Sales.SalesOrderDetail
JOIN Sales.SalesOrderHeader
ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy.

Estimates can be out with actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF:

Create Function dbo.udfSumSalesForCustomer(@CustomerId integer)
returns money
as
begin
  Declare @Sum money
  Select @Sum = SUM(SalesOrderHeader.TotalDue)
  from Sales.SalesOrderHeader
  where CustomerID = @CustomerId
  return @Sum
end

If we have two statements, one that fires the UDF and another that doesn't:

Select CustomerID from Sales.Customer order by CustomerID
go
Select CustomerID, dbo.udfSumSalesForCustomer(Customer.CustomerID) from Sales.Customer order by CustomerID

the cost relative to batch is a 50/50 split, but there has to be an actual cost of firing the UDF. Indeed, Profiler shows us it is nowhere even remotely near 50/50!

Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too:

SELECT SalesOrderDetailID, SalesOrderId,
SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding)
from Sales.SalesOrderdetail
go
SELECT SalesOrderDetailID, SalesOrderId,
SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding)
from Sales.SalesOrderdetail

By now it won't be a surprise that the Profiler trace reads a *tiny* bit differently.
So, the moral of the story: percentage relative to batch can give a rough 'finger in the air' measurement, but don't rely on it as fact.


  • Agile Database Techniques: Effective Strategies for the Agile Software Developer – book review

    - by DigiMortal
Agile development expects a mind shift, and developers are not the only ones who must be agile. Every chain is only as strong as its weakest link, and the same goes for development teams. Agile Database Techniques: Effective Strategies for the Agile Software Developer by Scott W. Ambler is a book that calls on data professionals, too, to be part of agile development. DBA-s are often in a situation where they are not part of application development, and later they have to survive a large set of applications that all use databases in different ways. Of course, only some of these applications are unproblematic when you look at what the database server has to do to serve them. I have seen many applications that abuse database servers because developers have no clue what is going on in the database (~3K queries to the database per web application request – have you seen something like this? I have…).

Agile Database Techniques covers some object and database design technologies and gives suggestions to development teams about topics where they need help or assistance from DBA-s. The book is also good reading for DBA-s, who usually are not very strong in object technologies. You can take this book as a bridge between these two worlds. I think teams that build object applications that use databases should buy this book and try out at least one or two projects with Ambler's suggestions.

Table of contents

Foreword by Jon Kern.
Foreword by Douglas K. Barry.
Acknowledgments.
Introduction.
About the Author.

Part One: Setting the Foundation.
Chapter 1: The Agile Data Method.
Chapter 2: From Use Cases to Databases — Real-World UML.
Chapter 3: Data Modeling 101.
Chapter 4: Data Normalization.
Chapter 5: Class Normalization.
Chapter 6: Relational Database Technology, Like It or Not.
Chapter 7: The Object-Relational Impedance Mismatch.
Chapter 8: Legacy Databases — Everything You Need to Know But Are Afraid to Deal With.

Part Two: Evolutionary Database Development.
Chapter 9: Vive L’ Évolution.
Chapter 10: Agile Model-Driven Development (AMDD).
Chapter 11: Test-Driven Development (TDD).
Chapter 12: Database Refactoring.
Chapter 13: Database Encapsulation Strategies.
Chapter 14: Mapping Objects to Relational Databases.
Chapter 15: Performance Tuning.
Chapter 16: Tools for Evolutionary Database Development.

Part Three: Practical Data-Oriented Development Techniques.
Chapter 17: Implementing Concurrency Control.
Chapter 18: Finding Objects in Relational Databases.
Chapter 19: Implementing Referential Integrity and Shared Business Logic.
Chapter 20: Implementing Security Access Control.
Chapter 21: Implementing Reports.
Chapter 22: Realistic XML.

Part Four: Adopting Agile Database Techniques.
Chapter 23: How You Can Become Agile.
Chapter 24: Bringing Agility into Your Organization.

Appendix: Database Refactoring Catalog.
References and Suggested Reading.
Index.

    Read the article

  • Spending the summer at camp… Web Camp, that is

    - by Jon Galloway
    Microsoft is sponsoring a series of Web Camps this summer. They're a series of free two-day events being held worldwide, and I'm really excited about taking part. The camp is targeted at a broad range of developer backgrounds and experience. Content builds from 101-level introductory material to 200–300 level coverage, but we hit some advanced bits (e.g. MVC 2 features, jQuery templating, IIS 7 features, etc.) that advanced developers may not yet have seen. We start with a lap around ASP.NET and Web Forms, then move on to building an application with ASP.NET MVC 2, jQuery, and Entity Framework 4, and finally deploy to IIS. I got to spend some time working with James before the first Web Camp refining the content, and I think he's packed about as much goodness into the time available as is scientifically possible. The content is really code focused – we start with File/New Project and spend the day building a real, working application. The second day of the Web Camp gives attendees an opportunity to get hands-on. There are two options:
– Join a team and build an application of your choice
– Work on a lab or tutorial
James Senior and I kicked off the fun with the first Web Camp in Toronto a few weeks ago. It was sold out, lots of fun, and by all accounts a great way to spend two days. I'm really enthusiastic about the format. Rather than just listening to speakers and then forgetting everything in a few days, attendees actually build something of their choice. They get an opportunity to pitch projects they're interested in, form teams, and build them – getting experience with "real world" problems, with all the help they need from experienced developers. James got help on the second day's practical part from the good folks who run Startup Weekend. Startup Weekend is a fantastic program that gathers developers together to build cool apps in a weekend, so their input on how to organize successful teams for weekend projects was invaluable. Nick Seguin joined us in Toronto, and in addition to making sure that everything flowed smoothly, he added a lot of fun and excitement to the event, reminding us all how much fun it is to come up with a cool idea and just build it. In addition to the Toronto camp, I'll be at the Mountain View, London, Munich, and New York camps over the next month. London is sold out, but the rest still have space available, so come join us! Here's the full list, with the ones I'll be at bolded because – you know – it's my blog. The whole speaker list is great, including Scott Guthrie, Scott Hanselman, James Senior, Rachel Appel, Dan Wahlin, and Christian Wenz.
Toronto, May 7–8 (James Senior and I were thrown out on our collective ears)
Moscow, May 19
Beijing, May 21–22
Shanghai, May 24–25
Mountain View, May 27–28 (I'm speaking with Rachel Appel)
Sydney, May 28–29
Singapore, June 04–05
London, June 04–05 (I'm speaking with Christian Wenz – SOLD OUT)
Munich, June 07–08 (I'm speaking with Christian Wenz)
Chicago, June 11–12
Redmond, WA, June 18–19
New York, June 25–26 (I'm speaking with Dan Wahlin)
Come say hi!

    Read the article

  • SQL SERVER – List of All the Sample Databases Available to Download for FREE

    - by Pinal Dave
    It is pretty common to have a sample database for any database product. Different companies keep improving their products and keep coming up with innovations, and to demonstrate the capability of their new enhancements they need sample databases. Microsoft has various sample databases available for free download for its SQL Server product. I have collected them here in a single blog post.
Download an AdventureWorks Database
The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources.
Coconut Dal
Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft's Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead, and Coconut Dal might be the answer.
DataBooster – Extension to ADO.NET Data Provider
The dbParallel DataBooster library is a high-performance extension to the ADO.NET Data Provider, and includes two aspects: 1) a slimmed-down API encapsulation which simplifies the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class, DbAccess, to help the application keep a clean DAL and avoid over-packing and redundant copying during data transfer; 2) a booster for writing mass data to the database, based on rational utilization of database concurrency and effective utilization of network bandwidth.
Tabular AMO 2012
The sample is made of two parts. The first part is a library of functions to manage tabular models – AMO2Tabular V2. The second part is a sample that builds a tabular model – AdventureWorks Tabular AMO 2012 – using the AMO2Tabular library; the created model is similar to the AdventureWorks Tabular Model 2012.
SQL Server Analysis Services Product Samples
SQL Server Analysis Services provides a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining.
Analysis Services Samples for SQL Server 2008 R2
This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.
SQL Server Reporting Services Product Samples
This project contains Reporting Services samples released with the Microsoft SQL Server product. These samples fall into five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers' forum.
Reporting Services Samples for SQL Server 2008 R2
This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases.
SQL Server Integration Services Product Samples
This project contains Integration Services samples released with the Microsoft SQL Server product. These samples fall into two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers' forum.
Integration Services Samples for SQL Server 2008 R2
This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.
Windows Azure SQL Reporting Admin Sample
The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of the SQL Reporting APIs and manages (add/update/delete) permissions of SQL Reporting users.
Windows Azure SQL Reporting ReportViewer–SOAP API usage sample
These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers, and how to use the SQL Reporting SOAP APIs in your Windows Azure web application.
Enterprise Library 5.0 – Integration Pack for Windows Azure
This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database
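As an aside – not part of the list above – once you have downloaded a sample .bak file such as AdventureWorks, restoring it is a one-statement job. A minimal sketch follows; the disk paths and logical file names below are hypothetical, so check the real logical names with RESTORE FILELISTONLY first:

-- Hypothetical paths and logical file names; verify yours first with:
--   RESTORE FILELISTONLY FROM DISK = N'C:\Samples\AdventureWorks.bak';
RESTORE DATABASE AdventureWorks
FROM DISK = N'C:\Samples\AdventureWorks.bak'
WITH MOVE N'AdventureWorks_Data' TO N'C:\Data\AdventureWorks.mdf',
     MOVE N'AdventureWorks_Log'  TO N'C:\Data\AdventureWorks_log.ldf',
     RECOVERY;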

    Read the article

  • Tough Decisions

    - by Johnm
    There was once a thriving business that employed two Database Administrators, Sam and Jim. Both DBAs were certified, educated, and highly talented in their skill sets. During lunch breaks these two DBAs were often found together discussing best practices, troubleshooting techniques, and the latest release notes for the upcoming version of SQL Server. They genuinely loved what they did. The maintenance of the first database was the responsibility of Sam. He was the architect of this server's setup, and he was very meticulous in its configuration. He regularly monitored the health of the database, validated backup files, and adhered to the best practices advocated by well-respected professionals. He was very proud of the fact that no database he managed had ever lost data or performed poorly. The maintenance of the second database was the responsibility of Jim. He too was the architect of this server's setup. At the time he built this server, his understanding of the finer details of configuration was not as clear as it is today. The server was built on a shoestring budget, with very little time for testing and implementation. Jim also monitored the health of the database, but in more of a reactionary mode, driven by user complaints of slowness or failed transactions. Deadlocks abounded and the backup files were never validated. One day, an announcement revealed that the business had hit financially hard times. Budgets were cut, limitations on spending were implemented, and a reduction in full-time staff was required. Since having two DBAs was regarded as a luxury by many, this meant that either Sam or Jim was about to find himself out of a job. Sam and Jim's boss, Frank, was faced with a very tough decision. Sam's performance was flawless. His techniques and practices were perfection. The databases he managed were reliable and efficient. His solutions are "by the book". When given a task, it is certain that, while it may take a little longer, it will be done right the first time. Jim's techniques and practices were not perfect, but they were effective and responsive. He makes mistakes regularly, but he shows that he learns from them, and they often result in innovative solutions. When given a task, it is certain that, while the results may require some tweaking, it will be done on time and under budget. You are Frank's best friend. He approaches you and presents this scenario: he must lay off one of his valued DBAs the very next morning. Frank asks you: "All else being equal, who would you let go, and why?" Another pertinent question is raised: "Regardless of good times or bad, if you had to choose, which DBA would you want on your team when tough challenges arise?" Your response is… (this is where you enter a comment below).

    Read the article

  • You Are Hiring, But Do Candidates Want to Work For You?

    - by david.talamelli
    So here you are – it has happened: you are now interviewing for that position you either applied for or were called about. Whether you are an "active" candidate looking for a job or a "passive" candidate who was contacted about the opportunity, it doesn't matter now. Regardless of how you got to the interview stage, how you and your potential new manager connect with each other at interview will play a part in whether you land the job. The best manager/employee relationships, I think, tend to be the ones where both the manager and the employee have a common goal they are working towards, and they work together in unison to achieve it. Candidates – when you are interviewing for a role, remember that an interview is a two-way process. An interview shouldn't just be a case of a company interviewing you to see if you are a good fit for a certain role. Don't forget that in an interview process it is equally important that you take the opportunity to interview the company, to see whether the role and the company are the right next step in your career. I think an interview should not only be a chance for a hiring manager to get to know a candidate better and assess their capability and cultural fit for a team or company, but also a chance for the candidate to assess whether that company or manager is somewhere they want to work. Managers – I know recruiters have been talking about the "war for talent" since before many of you were managers, but there is no denying it – it exists. You are not only competing with other companies for talented individuals, you are also competing with the companies those talented individuals currently work for. Companies are not going to let the people they have identified as superstars resign without a fight (this is the classic counter-offer scenario, which may be another blog post in itself). So how do we get these great people? Their current employer will do all they can to keep them, and everyone else wants them – does this mean all hope is lost? No, absolutely not. The same reasons candidates have always been interested in other opportunities still exist: it could be that someone is looking for career advancement, or they want the chance to work with new technology, or maybe you have an opportunity that is exactly what that person is looking to do. As a hiring manager, don't just conduct your interviews in question/answer mode. Talk to the individual to work out what it is they are looking for, and then relate how your role addresses that. It is potentially going to be the two of you working together, so you two are the ones who have to be most comfortable with each other. Don't oversell the role – set realistic expectations of what the candidate can expect working in your team – give them the good, the bad, and the ugly so they can make an informed decision. Managers, think back to when you last looked for a job and put yourself in the candidate's shoes. When you were looking for a job, what was it that you wanted to know about Oracle, and what did you want more information about? There are some great Business Leaders here at Oracle – if you are one of them, it is likely that you are already doing all these things anyway.
The good news is that you are also likely raising yourself head and shoulders above what many interviewers do – and that in itself gives you a competitive advantage in this "war for talent". But as a great Business Leader, you already know that.

    Read the article

  • Routes for IIS Classic and Integrated Mode

    - by imran_ku07
    Introduction:
The ASP.NET MVC routing feature makes it very easy to provide clean URLs: you just configure routes in the global.asax file to create an application with clean URLs. In most cases the routes you define work in IIS 6 and in IIS 7 (or IIS 7.5) Classic and Integrated mode. But in some cases your routes may only work in IIS 7 Integrated mode, as in the case of using extensionless URLs in IIS 6 without a wildcard extension map. So in this article I will show you how to create different routes which work in IIS 6 and in IIS 7 Classic and Integrated mode.
Description:
Let's say you need to create an application which must work in both Classic and Integrated mode, and you have no control over setting up a wildcard extension map in IIS. So you need to create two routes: one with an extensionless URL for Integrated mode, and one with a URL with an extension for Classic mode.

routes.MapRoute(
    "DefaultClassic", // Route name
    "{controller}.aspx/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

routes.MapRoute(
    "DefaultIntegrated", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

Now you have set up two routes, one for Integrated mode and one for Classic mode. You only need to ensure that the Integrated mode route matches only when the application is running in Integrated mode, and the Classic mode route only when the application is running in Classic mode. To make this work you need to create two custom constraints, one for each mode. So replace the above routes with these:

routes.MapRoute(
    "DefaultClassic", // Route name
    "{controller}.aspx/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional }, // Parameter defaults
    new { mode = new ClassicModeConstraint() } // Constraints
);

routes.MapRoute(
    "DefaultIntegrated", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional }, // Parameter defaults
    new { mode = new IntegratedModeConstraint() } // Constraints
);

The first route, for Classic mode, adds a ClassicModeConstraint; the second route, for Integrated mode, adds an IntegratedModeConstraint. Next you need to add the implementations of these constraint classes:

public class ClassicModeConstraint : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        return !HttpRuntime.UsingIntegratedPipeline;
    }
}

public class IntegratedModeConstraint : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        return HttpRuntime.UsingIntegratedPipeline;
    }
}

HttpRuntime.UsingIntegratedPipeline returns true if the application is running in Integrated mode; otherwise, it returns false. So routes for Integrated mode are matched only when the application is running in Integrated mode, and routes for Classic mode only when it is not.
Summary:
During development, sometimes developers are not sure whether the application will be hosted on IIS 6 or on IIS 7 (or IIS 7.5) in Integrated or Classic mode. So it's a good idea to create separate routes for both Classic and Integrated mode, so that your application uses extensionless URLs where possible and URLs with an extension where extensionless URLs are not possible. In this article I showed you how to create separate routes for IIS Integrated and Classic mode. Hope you enjoy this article too.

    Read the article

  • Visual Studio 2010 SP1

    - by ScottGu
    Last week we shipped Service Pack 1 of Visual Studio 2010 and the Visual Studio Express tools. In addition to bug fixes and performance improvements, SP1 includes a number of feature enhancements: improved local help support, IntelliTrace support for 64-bit applications and SharePoint, built-in Silverlight 4 tooling support, unit testing support when targeting .NET 3.5, a new performance wizard for Silverlight, IIS Express and SQL CE tooling support for web projects, HTML5 IntelliSense for ASP.NET, and more. TFS 2010 SP1 was also released last week, together with a new TFS Project Server Integration Pack and Load Test Feature Pack. Brian Harry has a good blog post about the TFS updates here.
VS 2010 SP1 Download
Click here to download and install SP1 for all versions of Visual Studio (including Express). This installer examines what you have installed on your machine and only downloads the servicing payloads necessary to update them to SP1. The time it takes to download and update will consequently depend on what you have installed. Jon Galloway has a good blog post on tips to speed up the SP1 install by uninstalling unused components.
Web Platform Installer Bundles
In addition to the core VS 2010 SP1 installer, we have also put together two Web Platform Installer (WebPI) bundles that automate installing SP1 together with additional web-specific components:
VS 2010 SP1 WebPI Bundle
Visual Web Developer 2010 SP1 WebPI Bundle
The above WebPI bundles automate installing:
VS 2010/VWD 2010 SP1
ASP.NET MVC 3 (runtime + tools support)
IIS 7.5 Express
SQL Server Compact Edition 4.0 (runtime + tools support)
Web Deployment 2.0
Only the components that are not already installed on your machine will be downloaded when you use the above WebPI bundles. This means you can run a WebPI bundle at any time (even if you have already installed SP1 or ASP.NET MVC 3) and not have to worry about wasting time downloading or installing these components again. Earlier this year I wrote two posts that discussed how to use IIS Express and SQL CE with ASP.NET projects in SP1. Read the posts below to learn more about how to use them after you run the above bundles:
Visual Studio 2010 SP1 and IIS Express
Visual Studio 2010 SP1 and SQL CE for ASP.NET
The above feature additions work with any web project type – including both ASP.NET Web Forms and ASP.NET MVC.
Additional SP1 Notes
Two additional notes about VS 2010 SP1:
1) One change we made between RTM and SP1 is that, by default, Visual Studio now uses software rendering instead of hardware acceleration when running on Windows XP. We made this change because we've seen reports of (often inconsistent) performance issues caused by older video drivers. Running in software mode eliminates these and delivers consistent speeds. You can optionally re-enable hardware acceleration with SP1 using Visual Studio's Tools->Options menu command – we did not remove support for hardware acceleration on XP, we simply changed the default setting for it. Jason Zander has written more details on the change and how to re-enable hardware acceleration inside VS here.
2) We have discovered an issue where installing SP1 can cause T-SQL IntelliSense within SQL Server Management Studio 2008 R2 to stop working (typing still works – but IntelliSense doesn't show up). The SQL team is investigating this now and I'll post an update on how to fix it once more details are known.
Hope this helps,
Scott
P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Kanban Tools Review

    - by GeekAgilistMercenary
    The first two sessions on Sunday were on collaboration and why it is so hard, followed by a perfectly matched session on Kanban. In that second session, two online SaaS-style tools were mentioned: AgileZen and LeanKit. I decided right then and there that I would throw together some first impressions, so I set up an account and created a sample project in each.
AgileZen
Account creation: setting up the initial account required an e-mail verification, which is understandable. Within a few seconds it was mailed out and I was logged in.
Setting up the Kanban Board: the initial setup of the board was pretty easy. I maybe clicked around an extra few times, but overall everything I needed to use the tool was immediately available. The representation of everything is very similar to what one expects of a real Kanban Board, too. This is a HUGE plus, especially if a team is smart and places this tool in a centrally viewable area for visibility. Each of the board items is just like a Post-it, in blue, grey, green, pink, or one of a few other colors. Dragging them onto each swim lane on the board was flawless, making changes throughout the work super easy and intuitive. The other thing I really liked about AgileZen is that the Kanban Board had the swim lanes set up immediately. One can change them, but when you know you immediately need a Ready lane, a Working lane, and a Complete lane, it is nice to just have them right in front of you in the interface. In addition, the backlog is simply a little tab on the left-hand side. This is perfect for the backlog queue: out of the way, with the focus on the primary items. Once I got the items onto the board, I was easily able to get back to the actual work at hand instead of playing around with the tool. The fact that it was so easy to use, with a fast and clean UX and a great layout overall, put me back to work on things I needed to do. That, in the end, is the key to using these tools.
LeanKit Kanban
Account creation: setting up the account got me straight into the online tool, which I thought was pretty cool.
Setting up the Kanban Board: setting up the Kanban Board within LeanKit was a bit of trouble. There were multiple UX issues around process and intuitiveness. LeanKit basically forces one to design the whole board first, making no assumptions about how the board should look. In my humble opinion, the swim lanes should be set up immediately, without any manipulation, with the most common lanes: Ready, Working, and Complete. The other UX hiccup I had a problem with: as soon as I managed to get the swim lanes into place, I wanted to remove the redundant backlog lane, which I had accidentally added (the backlog, or backlog bucket, should live somewhere out of the way). Then, on top of that, I made the mistake of adding an item inside the lane, which prevented me from deleting it. I had to back out of the lane manipulation, remove the item, and then remove the excess lane.
Summary
LeanKit wasn't a bad interface; it just wasn't as good as AgileZen. The AgileZen interface was simply better UX design overall. AgileZen also presents a much better graphical design altogether: it is much closer to what a physical Kanban Board would look like.
Since one of the HUGE reasons for Kanban is to increase visibility, the fact that the design is similar to a real Kanban Board is actually a pretty big deal. (The original post includes an image showing the two Kanban Boards side by side: AgileZen on the left, LeanKit on the right.)

    Read the article

  • Know Your Audience, And/Or Your Customer

    - by steve.diamond
    Yesterday I gave an internal presentation to about 20 Oracle employees on "messaging" – not messaging technology, but the process of building messages. One of the elements I covered was the importance of really knowing and understanding your audience. As a humorous reference I included two side-by-side photos of Oakland A's fans and Oakland Raiders fans. The A's fans looked like happy-go-lucky drunk types. The Raiders fans looked like angry extras from a low-budget horror flick. I then asked my attendees what these two groups had in common. Here's what I heard:
– They're human (at least I THINK they're human).
– They're from Oakland.
– They're sports fans.
After that, it was anyone's guess. A few days earlier we were putting the finishing touches on a sales presentation for one of our product lines. We had included an upfront "lead-in" addressing how the economy is improving, yet that doesn't mean sales executives will have any more resources to add to their teams, invest in technology, etc. This lead-in included miscellaneous news headlines and statistics validating the slowly improving economy. When we subjected this presentation to internal review two days ago, the upfront section in particular was scrutinized: "Is the economy really getting better? I (exclamation point) don't think it's really getting better. Haven't you seen the headlines coming out of Greece and Europe?" Then the question TO ME became, "Who will actually be in the audience that sees and hears this presentation? Will s/he be someone like me? Or will s/he be someone like the critic who didn't like our lead-in?" We took the safe route and removed that lead-in. After all, why start a pitch with a component that is arguably subjective? What if many of our audience members are at organizations still facing a strong headwind? For reasons I won't go into here, it was the right decision to make. The moral of the story: make sure you really know your audience. Harness the wisdom of the information your organization's CRM systems collect to get that fully informed customer view. Conduct formal research. Conduct INFORMAL research. Ask lots of questions. Study industries and scenarios that have nothing to do with yours to see how they do it. Stop strangers in coffee shops and on the street... seriously. Last week I caught up with an old friend from high school who recently retired from a 25-year career with the USMC. He said, "I can learn something from every single person I come into contact with." What a great way of approaching the world. Then, think about and write down what YOU like and dislike as a customer. But also remember that when it comes to your company's products, you are most likely NOT the customer, so don't go overboard in superimposing your own world view. Approaching the study of customers this way adds rhyme, reason, and CONTEXT to lengthy blog posts like this one. Know your audience.

    Read the article

  • Why is my machine unable to mount my SMB drives ("CIFS VFS: Error connecting to socket. Aborting operation", return code -115)?

    - by downbeat
    I have a machine running Precise (12.04 x64), and I cannot mount my SMB drives (I have 3; we'll call them public, private, and download). It used to work (a week or two ago) and I didn't touch fstab! The machine hosting the shares is a commercial NAS, and I'm not seeing anything that would indicate it's an issue with the NAS. I have an older machine which I updated to Precise at the same time (both fresh installs, not dist-upgrades), so it should have a very similar configuration. It is not having any problems. I am not having problems on Windows machines/partitions either – only on one of my Precise machines. The two machines use identical entries in fstab and identical /etc/samba/smb.conf files. I don't think I've ever changed smb.conf (it has never mattered before). My fstab entries all basically look like this:

//10.1.1.111/public /media/public cifs credentials=/home/downbeat/.credentials,iocharset=utf8,uid=downbeat,gid=downbeat,file_mode=0644,dir_mode=0755 0 0

Here's the dmesg output on boot:

[ 51.162198] CIFS VFS: Error connecting to socket. Aborting operation
[ 51.162369] CIFS VFS: cifs_mount failed w/return code = -115
[ 51.194106] CIFS VFS: Error connecting to socket. Aborting operation
[ 51.194250] CIFS VFS: cifs_mount failed w/return code = -115
[ 51.198120] CIFS VFS: Error connecting to socket. Aborting operation
[ 51.198243] CIFS VFS: cifs_mount failed w/return code = -115

There are no other errors I see in the dmesg output. Originally, when I ran 'testparm -s', the output contained these lines:

ERROR: lock directory /var/run/samba does not exist
ERROR: pid directory /var/run/samba does not exist

Here are the Samba-related packages I have installed:

$ dpkg --list|grep -i samba
ii libpam-winbind 2:3.6.3-2ubuntu2.3 Samba nameservice and authentication integration plugins
ii libwbclient0 2:3.6.3-2ubuntu2.3 Samba winbind client library
ii nautilus-share 0.7.3-1ubuntu2 Nautilus extension to share folder using Samba
ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
ii samba-common 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
ii samba-common-bin 2:3.6.3-2ubuntu2.3 common files used by both the Samba server and client
ii winbind 2:3.6.3-2ubuntu2.3 Samba nameservice integration server

$ dpkg --list|grep -i smb
ii dmidecode 2.11-4 SMBIOS/DMI table decoder
ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
ii python-smbc 1.0.13-0ubuntu1 Python bindings for Samba clients (libsmbclient)
ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix
ii smbfs 2:5.1-1ubuntu1 Common Internet File System utilities - compatibility package

$ dpkg --list|grep -i cifs
ii cifs-utils 2:5.1-1ubuntu1 Common Internet File System utilities
ii libsmbclient 2:3.6.3-2ubuntu2.3 shared library for communication with SMB/CIFS servers
ii smbclient 2:3.6.3-2ubuntu2.3 command-line SMB/CIFS clients for Unix

I originally noticed that my other machine had "libpam-winbind" and "nautilus-share" installed and the machine with the issue did not. Installing those two packages fixed my errors with 'testparm -s', but did not fix my issue. Finally, I tried to purge and reinstall these packages:

smbclient smbfs cifs-utils samba-common samba-common-bin

Still no luck. Again: it used to work; now it doesn't. A very similarly configured machine works (though some packages are out of date on the working machine).
The NAS has only one interface/IP address, nmblookup finds its IP from its hostname (run from the machine with the issue), and it responds to a ping. Any help would be great – I've been searching AskUbuntu, SuperUser, ubuntuforums, and plain old search engines for a week now, and it's driving me crazy!

    Read the article

  • SQL SERVER – Solution to Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    Earlier this week I asked a question: how do you swap the values of a column without using a CASE statement? Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself, and I requested the help of the community to come up with alternate solutions. Honestly, I am stunned and amazed by the qualified entries. I will not be able to cover every single solution posted as a comment; however, I would like to cover a few interesting entries. I am selecting 5 solutions which are different (not necessarily the most optimal or the best – just different and interesting). Just for clarity, I am including the original problem statement here:

USE tempdb
GO
CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10))
GO
INSERT INTO SimpleTable (ID, Gender)
SELECT 1, 'female'
UNION ALL
SELECT 2, 'male'
UNION ALL
SELECT 3, 'male'
GO
SELECT * FROM SimpleTable
GO
-- Insert your solution here -- swap the value of column Gender
SELECT * FROM SimpleTable
GO
DROP TABLE SimpleTable
GO

Here are the five most interesting and different solutions I received.

Solution by Roji P Thomas

UPDATE S
SET S.Gender = D.Gender
FROM SimpleTable S
INNER JOIN SimpleTable D ON S.Gender != D.Gender

I really loved this solution, as it is very simple and drives the point home – elegant, and it will work for pretty much any pair of values (not necessarily restricted to the 'male'/'female' options in the original question).

Solution by Aneel

CREATE TABLE #temp (id INT, datacolumn CHAR(4))
INSERT INTO #temp VALUES (1,'gent'), (2,'lady'), (3,'lady')
DECLARE @value1 CHAR(4), @value2 CHAR(4)
SET @value1 = 'lady'
SET @value2 = 'gent'
UPDATE #temp SET datacolumn = REPLACE(@value1 + @value2, datacolumn, '')

Aneel has a very interesting solution, where he combined both values and replaced the original value, leaving the other one behind. I personally liked the creativity of this solution.

Solution by SIJIN KUMAR V P

UPDATE SimpleTable
SET Gender = RIGHT(('fe' + Gender), DIFFERENCE((Gender), SOUNDEX(Gender)) * 2)

Sijin amazed me with the DIFFERENCE and SOUNDEX functions. I had never imagined that these two functions could solve this problem. Hats off to you, Sijin.

Solution by Nikhildas

UPDATE St
SET St.Gender = t.Gender
FROM SimpleTable St
CROSS APPLY (SELECT DISTINCT Gender FROM SimpleTable WHERE St.Gender != Gender) t

I was expecting someone to come up with a solution using CROSS APPLY. This is indeed very neat and certainly an interesting exercise. If you do not know how CROSS APPLY works, this is the time to learn.

Solution by mistermagooo

UPDATE SimpleTable
SET Gender = X.NewGender
FROM (VALUES ('male','female'), ('female','male')) AS X(OldGender, NewGender)
WHERE SimpleTable.Gender = X.OldGender

As per the author this is a slow solution, but I love how the syntax is laid out here. I would say this is the most beautifully written solution (not necessarily the best).

Bonus: Solution by Madhivanan

Somehow I was confident that Madhivanan – SQL Server MVP – would come up with something I would be compelled to read. He has written a complete blog post on this subject, and I encourage all of you to go ahead and read it.

Personally, I wanted to list every single comment here; some are so good that I am just amazed by the creativity. I will write a follow-up to this blog post in the future. However, here is the challenge for you.

Challenge: go over the 50+ solutions listed for this simple problem. Here are my two asks for you:
1) Pick your best solution and list it here in a comment. This exercise will surely teach us a thing or two.
2) Write your own solution that is not yet covered by the 50 solutions already listed. I am confident that there is no end to creativity.
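For instance, here is one more sketch in the spirit of ask #2 – not among the entries above, and assuming the Gender column only ever holds 'male' or 'female' – that swaps the values with simple length arithmetic:

-- One more CASE-free swap (a sketch; assumes only 'male'/'female').
-- 'femalemale' holds both target values back to back; the start
-- position and length are derived purely from LEN(Gender):
--   LEN('male')   = 4 -> SUBSTRING('femalemale', 1, 6) = 'female'
--   LEN('female') = 6 -> SUBSTRING('femalemale', 7, 4) = 'male'
UPDATE SimpleTable
SET Gender = SUBSTRING('femalemale', (LEN(Gender) - 4) * 3 + 1, 10 - LEN(Gender))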
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
