Search Results

Search found 7669 results on 307 pages for 'dealing with clients'.


  • On Her Majesty's Secret Source Code: .NET Reflector 7 Early Access Builds Now Available

    - by Bart Read
    Dodgy Bond references aside, I'm extremely happy to be able to tell you that we've just released our first .NET Reflector 7 Early Access build. We're going to make these available over the coming weeks via the main .NET Reflector download page at: http://reflector.red-gate.com/Download.aspx Please have a play and tell us what you think in the forum we've set up. Also, please let us know if you run into any problems in the same place. The new version so far comes with numerous decompilation improvements including (after 5 years!) support for iterator blocks - i.e., the yield statement first seen in .NET 2.0. We've also done a lot of work to solidify the support for .NET 4.0. Clive's written about the work he's done to support iterator blocks in much more detail here, along with the odd problem he's encountered when dealing with compiler generated code: http://www.simple-talk.com/community/blogs/clivet/96199.aspx. On the UI front we've started what will ultimately be a rewrite of the entire front-end, albeit broken into stages over two or three major releases. The most obvious addition at the moment is tabbed browsing, which you can see in Figure 1. Figure 1. .NET Reflector's new tabbed decompilation feature. Use CTRL+Click on any item in the assembly browser tree, or any link in the source code view, to open it in a new tab. This isn't by any means finished. I'll be tying up loose ends for the next few weeks, with a major focus on performance and resource usage. .NET Reflector has historically been a largely single-threaded application which has been fine up until now but, as you might expect, the addition of browser-style tabbing has pushed this approach somewhat beyond its limit. You can see this if you refresh the assemblies list by hitting F5. This shows up another problem: we really need to make Reflector remember everything you had open before you refreshed the list, rather than just the last item you viewed - I discovered that it's always done the latter, but it used to hide all panes apart from the treeview after a Refresh, including the decompiler/disassembler window. Ultimately I've got plans to add the whole VS/Chrome/Firefox style ability to drag a tab into the middle of nowhere to spawn a new window, but I need to be mindful of the add-ins, amongst other things, so it's possible that might slip to a 7.5 or 8.0 release. You'll also notice that .NET Reflector 7 now needs .NET 3.5 or later to run. We made this jump because we wanted to offer ourselves a much better chance of adding some really cool functionality to support newer technologies, such as Silverlight and Windows Phone 7. We've also taken the opportunity to start using WPF for UI development, which has frankly been a godsend. The learning curve is practically vertical but, I kid you not, it's just a far better world. Really. Stop using WinForms. Now. Why are you still using it? I had to go back and work on an old WinForms dialog for an hour or two yesterday and it really made me wince. The point is we'll be able to move the UI in some exciting new directions that will make Reflector easier to use whilst continuing to develop its functionality without (and this is key) cluttering the interface. The 3.5 language enhancements should also enable us to be much more productive over the longer term. 
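    To make the iterator-block support mentioned above a little more concrete, here is a minimal example of the kind of code involved (my own throwaway sample, not code from Reflector itself): the compiler rewrites the yield statement into a hidden state-machine class, which is exactly what makes decompiling it back into readable C# tricky.

    ```csharp
    using System;
    using System.Collections.Generic;

    static class IteratorDemo
    {
        // A minimal iterator block: the C# compiler turns this into a
        // generated state-machine type, which Reflector 7 can now fold
        // back into a readable yield-based method.
        public static IEnumerable<int> CountTo(int limit)
        {
            for (int i = 1; i <= limit; i++)
            {
                yield return i; // execution resumes here on each MoveNext()
            }
        }

        static void Main()
        {
            foreach (int n in CountTo(3))
            {
                Console.WriteLine(n); // prints 1, 2, 3
            }
        }
    }
    ```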
    I know most of you have .NET Fx 3.5 or 4.0 already but, if you do need to install a new version, I'd recommend you jump straight to 4.0 because, for one thing, it's faster, and if you're starting afresh there's really no reason not to. Despite the Fx version jump the Visual Studio add-in should still work fine in Visual Studio 2005, and obviously will continue to work in Visual Studio 2008 and 2010. If you do run into problems, again, please let us know here. As before, we continue to support every edition of Visual Studio except the Express Editions. Speaking of Visual Studio, we've also been improving the add-in. You can now open and explore decompiled code for any referenced assembly in any project in your solution. Just right-click on the reference, then click Decompile and Explore on the context menu. Reflector will pop up a progress box whilst it decompiles your assembly (Figure 2) - you can move this out of the way whilst you carry on working. Figure 2. Decompilation progress. This isn't modal so you can just move it out of the way and carry on working. Once it's done you can explore your assembly in the Reflector treeview (Figure 3), also accessible via the .NET Reflector Explore Decompiled Assemblies main menu item. Double-click on any item to open decompiled source in the Visual Studio source code view. Use right-click and Go To Definition on the source view context menu to navigate through the code. Figure 3. Using the .NET Reflector treeview within Visual Studio. Double-click on any item to open decompiled source in the source code view. There are loads of other changes and fixes that have gone in, often under the hood, which I don't have room to talk about here, and plenty more to come over the next few weeks. I'll try to keep you abreast of new functionality and changes as they go in. There are a couple of smaller things worth mentioning now though. Firstly, we've reorganised the menus and toolbar in Reflector itself to more closely mirror what you might be used to in other applications. Secondly, we've tried to make some of the functionality more discoverable. For example, you can now switch decompilation target framework version directly from the toolbar - and the default is now .NET 4.0. I think that about covers it for the moment. As I said, please use the new version, and send us your feedback. Here's that download URL again: http://reflector.red-gate.com/Download.aspx. Until next time! Technorati Tags: .net reflector,7,early access,new version,decompilation,tabbing,visual studio,software development,.net,c#,vb

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
    With the betas for .NET 4.5, Visual Studio 11 and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine: .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst. What does 'Replacement' mean? When you install .NET 4.5 your .NET 4.0 assemblies in \Windows\Microsoft.NET\Framework\v4.0.30319 are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies for example). The following screen shot demonstrates system.dll on my test machine running .NET 4.5 (left) and on my production laptop running stock .NET 4.0 (right):   Clearly they are different files with a difference in file sizes (interesting that the 4.5 version is actually smaller). That's not all. If you actually query the runtime version when .NET 4.5 is installed with Environment.Version you still get: 4.0.30319 If you open the properties of the System.dll assembly in .NET 4.5 you'll also see: Notice that the file version is also left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least. There's no easy or obvious way to tell whether you are running on 4.0 or 4.5 - to the application they appear to be the same runtime version. And that is what Microsoft intends here. .NET 4.5 is intended as an in-place upgrade. Compile to 4.5, run on 4.0 - not quite! You can compile an application for .NET 4.5 and run it on the 4.0 runtime - that is, until you hit a new feature that doesn't exist on 4.0, at which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0, but has just a few of the new features of .NET 4.5 like async/await buried deep in the bowels of the application where they only fire occasionally. .NET will happily start your application and run all the 4.0 code fine, until it hits that 4.5 code - and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5 of course, and that should work without much fanfare. Different than .NET 3.0/3.5 Note that this in-place replacement is very different from the side by side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or you can target .NET 4.5. But you are in effect referencing the same set of assemblies for both regardless of which version you use.
    What's different is the compiler used to compile and link your code, so compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what's actually available in the assemblies and extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications. Good news - Bad news Microsoft is apparently trying hard to experiment with every possible permutation of releasing new versions of the .NET framework. No two updates have been the same. Clearly updating to a full new version of .NET (i.e. the .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft's standards. Especially given that .NET 4.5 includes a fairly significant update with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs. To make things worse .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, which was a minor version that was quickly replaced by 3.5, which was more long lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that code on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade.  Resources: http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160 © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET
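    A small addendum to the version confusion described above: since Environment.Version reports 4.0.30319 on both runtimes, one common workaround (a sketch of mine, not something from the post itself) is to probe for a type that only exists in the .NET 4.5 mscorlib, such as System.Reflection.ReflectionContext.

    ```csharp
    using System;

    static class RuntimeVersionCheck
    {
        // Environment.Version is 4.0.30319 for both .NET 4.0 and 4.5,
        // so probe for a type that only ships with the 4.5 mscorlib.
        public static bool IsNet45OrNewer()
        {
            return Type.GetType("System.Reflection.ReflectionContext", throwOnError: false) != null;
        }

        static void Main()
        {
            Console.WriteLine("Running on .NET 4.5 or newer: {0}", IsNet45OrNewer());
        }
    }
    ```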

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data, at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Books Online: LCK_M_BU Occurs when a task is waiting to acquire a Bulk Update (BU) lock. LCK_M_IS Occurs when a task is waiting to acquire an Intent Shared (IS) lock. LCK_M_IU Occurs when a task is waiting to acquire an Intent Update (IU) lock. LCK_M_IX Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock. LCK_M_S Occurs when a task is waiting to acquire a Shared lock. LCK_M_SCH_M Occurs when a task is waiting to acquire a Schema Modify lock. LCK_M_SCH_S Occurs when a task is waiting to acquire a Schema Share lock. LCK_M_SIU Occurs when a task is waiting to acquire a Shared With Intent Update lock. LCK_M_SIX Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock. LCK_M_U Occurs when a task is waiting to acquire an Update lock. LCK_M_UIX Occurs when a task is waiting to acquire an Update With Intent Exclusive lock. LCK_M_X Occurs when a task is waiting to acquire an Exclusive lock. LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operations may be going on within it. This wait also indicates that resources are not available or are occupied at the moment for some reason. There is a good chance that the waiting queries start to time out if this wait type is very high. The client application's performance may degrade as well. You can use various methods to find blocking queries: EXEC sp_who2 SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution DMV – sys.dm_tran_locks DMV – sys.dm_os_waiting_tasks Reducing LCK_M_XXX wait: Check the explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep the transactions small. Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation of SQL Server is 'Read Committed'. One of my clients has changed their isolation to "Read Uncommitted". I strongly discourage the use of this because it will probably lead to reading lots of dirty data. Identify blocking queries using the various methods described above, and then optimize them. Partitioning can be one of the options to consider because it will allow transactions to execute concurrently on different partitions. If there are runaway queries, use timeouts. (Please discuss this solution with your database architect first as timeouts can work against you).
    Check for memory- and IO-related issues using the following counters: Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value, Benchmark) SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better, greater than 90% for a usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistently higher value than 4-8 milliseconds is not good) Average Disk sec/Write (Consistently higher value than 4-8 milliseconds is not good) Average Disk Read/Write Queue Length (Consistently higher value than benchmark is not good) Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
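    If you would rather watch for these lock waits from application code than from a query window, here is a rough C# sketch (my own illustration, with a placeholder connection string) that polls the sys.dm_os_waiting_tasks DMV mentioned above for LCK_M_% waits and prints the blocked and blocking sessions.

    ```csharp
    using System;
    using System.Data.SqlClient;

    class LockWaitMonitor
    {
        static void Main()
        {
            const string query =
                "SELECT session_id, blocking_session_id, wait_type, wait_duration_ms " +
                "FROM sys.dm_os_waiting_tasks " +
                "WHERE wait_type LIKE 'LCK_M_%'";

            // Placeholder connection string - point it at your own server.
            using (var connection = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("Session {0} blocked by {1}: {2} ({3} ms)",
                            reader["session_id"],
                            reader["blocking_session_id"],
                            reader["wait_type"],
                            reader["wait_duration_ms"]);
                    }
                }
            }
        }
    }
    ```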

    Read the article

  • C# in Depth, Third Edition by Jon Skeet, Manning Publications Co. Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/10/24/c-in-depth-third-edition-by-jon-skeet-manning-publications.aspx I started reading this ebook on September 28, 2013, the same day it was sent my way by Manning Publications Co. for review, while it was still fresh off the press. So first thing – thanks to Manning for this opportunity and a free copy of this must-have book for every C# developer's desk! Several hours ago I finished reading this book (well, except for a large portion of its quite lengthy appendix). I jumped into writing this review right away, while still full of emotions and impressions from reading it thoroughly and running the code examples. Before I go any further I would like to say that I used to program on various platforms using various languages, starting with the mainframe and ending on Windows, and I gradually shifted toward dealing with databases more than anything; however, I happened to program in C# 1 a lot when it was first released, and then some C# 2, with a big leap in between to C# 5. So my perception and experience reading this book may differ from yours. It is also somewhat funny that back then, knowing some Java and seeing C# 1 released, I initially drew a parallel that it was a copycat language – how wrong I was… Interestingly, Jon programs in Java full time, but how little that was mentioned in the book! So more on the book: be informed, this is not a typical "Recipes" or "Cookbook" or any set of ready solutions; it is rather targeting mature, advanced developers who not only know how to use a number of features, but are willing to understand how the language operates "under the hood". I must state immediately that, at the same time, I am glad the author did not go into the murky depths of MSIL, so this is a very welcome decision on covering a modern language such as C# for me, thank you Jon! Frankly, not all was that rosy regarding the tone and structure of the book; especially in the first half or so, negative and positive emotions kept overpowering each other. To expand more on that, some statements in the book appeared biased to me, or filled with prejudice; it started to look like it had some PR spin in it, but thankfully this was all gone toward the end of the first third of the book. Specifically, the mention of the C# language's popularity: Java is the #1 language as per https://sites.google.com/site/pydatalog/pypl/PyPL-PopularitY-of-Programming-Language (many other sources put C at the top, which I highly doubt), and many interesting functional languages such as Clojure and Groovy have appeared and gained huge traction running on top of Java/JVM, whereas C# does not enjoy such a situation. If we want to discuss popularity in general, and say how fast a developer can find a new job that pays well, it would indeed be Java, C++ or PHP, never C#. Or that phrase about language preference being a personal issue? We choose where to work, or we are chosen because of the technology used at a given software shop, not vice versa. The book is technically very accurate, with valid code and concise examples, but I wish the author would give more concrete, real-life examples of where each feature should be used, not just how.
    Another point to realize before you get the book is that it is almost a live book; it started to be written when even C# 3 wasn't around, so a lot of ground (nearly half of the book) is covered on the pre-C# 3 feature releases. If you already have a solid background in the previous releases and do not plan to upgrade, perhaps half of the book can be skipped; otherwise this book is surely highly recommended. Alas, for me it was a hard read, most of it. It was not boring (well, only maybe two times), it was just hard to grasp some concepts, but do not get me wrong, it did make me pause, on several occasions, and made me read and re-read a page or two. At times I even wondered if I have any IQ at all (LOL). Be prepared to read A LOT on generics, not that they are widely used in the field (I happen to work as a consultant and went through a lot of code at many places); my impression is that developers today, at best, program using examples found at OpenStack.com. Also, unlike the Java world where having the most recent version is nearly mandated by the OSS community, most companies on the Microsoft platform are almost never tempted to upgrade the .NET version very soon or very often. As a side note, I was glad to see code recently that included a nullable variable (the myvariable? notation) and this made me smile; besides, I recommended this book to that person to expand her knowledge. The good things about this book are that Jon maintains an active forum, prepared code snippets and even a small program (Snippy) that is happy to run the sample code, saving you from writing any plumbing code. A tad now on the C# language itself – it sure enjoyed a wonderful road toward perfection and very high adoption, especially for ASP development. But to me all the recent features that made this statically typed language more dynamic look strange. Don't we have F#, which is supposed to be the dynamic language? Why do we need to have a hybrid language? Now developers live their lives in a dualism of static and dynamic variables! And LINQ to SQL, it is covered in depth, but wasn't it supposed to be dropped? Also it seems that very little is being added, and at a slower pace, e.g. Roslyn will come in late 2014 perhaps, and will probably be the only main feature. Again, it is quite hard to read this book as various chapters mention different C# versions every so often – if only I could remember what was covered exactly where! Given that it has so many jumps/links back and forth, I recommend the ebook format to make navigation easier, and I do recommend using software that allows bookmarking; also make sure you have access to plenty of coffee and pizza (hey, you probably know this joke – who a programmer is)! In closing, if you are stuck at the C# 1 or 2 level, it is time to embrace the power of C# 5! Finally, to compliment Manning, this book, unlike those from any other publisher so far, was the only one as readable (and as well formatted) on my tablet as in Adobe Reader on a laptop.

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10AM Sunday morning the main SAN at one of my clients suffered a “partial” failure.  Partial means that the SAN was still online and functioning but the LUNs attached to our two main SQL Servers “failed”.  Failed means that SQL Server wouldn’t start and the MDF and LDF files mostly showed a zero file size.  But they were online and responding and most other LUNs were available.  I’m not sure how SANs know to fail at 1AM on a Saturday night but they seem to.  From a personal standpoint this worked out poorly: I was out with friends and after more than a few drinks.  From a work standpoint this was about the best time to fail you could imagine.  Everything was running well before Monday morning.  But it was a long, long Sunday.  I started tipsy, got tired and ended up hung over later in the day. Note to self: Try not to go out drinking right before the SAN fails. This caught us at an interesting time.  We’re in the process of migrating to an entirely new set of servers so some things were partially moved.  This made it difficult to follow our procedures as cleanly as we’d like.  The benefit was that we had much better documentation of everything on the server.  I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible.  Following a checklist is much easier than trying to remember at night under pressure in a hurry after a few drinks. I had a series of estimates on how long things would take.  They were accurate for any single server failure.  They weren’t accurate for a SAN failure that took two servers down.  This wasn’t bad but we should have communicated better. Don’t forget how many things are outside the database.  Logins, linked servers, DTS packages (yikes!), jobs, service broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up.  We’d done a decent job on this and didn’t find significant problems here.  That said this still took a lot of time.  There were many annoyances as a result of this.  Small settings like a login’s default database had a big impact on whether an application could run.  This is probably the single biggest area of concern when looking to recreate a server.  I’d encourage everyone to go through every single node of SSMS and look for user created objects or settings outside the database. Script out your logins with the proper SID and already encrypted passwords and keep it updated.  This makes life so much easier.  I used an approach based on KB246133 that worked well.  I’ll get my scripts posted over the next few days. The disaster can cause your DR process to fail in unexpected ways.  We have a job that scripts out all logins and role memberships and writes it to a file.  This runs on the DR server and pulls from the production server.  Upon opening the file I found that the contents were a “server not found” error.  Fortunately we had other copies and didn’t need to try and restore the master database.  This now runs on the production server and pushes the script to the DR site.  Soon we’ll get it pushed to our version control software. One of the biggest challenges is keeping your DR resources up to date.  Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date.  It helps to automate the generation of these resources if possible. Take time now to test your database restore process.  We test ours quarterly.  
    If you have a large database I'd also encourage you to invest in a compressed backup solution.  Restoring backups was the single largest consumer of time during our recovery. And yes, there's a database mirroring solution planned in our new architecture. I didn't have much involvement in things outside SQL Server but this caused many, many things to change in our environment.  Many applications today aren't just executables or web sites.  They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things.  These all needed a little bit of attention to make sure they were functioning properly. Profiler turned out to be a handy tool.  I started a trace for failed logins and kept that running.  That let me fix a number of problems before people were able to report them.  I also ran traces to capture exceptions.  This helped identify problems with linked servers. Overall the thing that gave me the most problems was linked servers.  In order for a linked server to function properly you need to be pointed to the right server, have the proper login information, have the network routes available and have MSDTC configured properly.  We have a lot of linked servers and this created many failure points.  Some of the older linked servers used IP addresses and not DNS names.  This meant we had to go in and touch all those linked servers when the servers moved.

    Read the article

  • Automating Form Login

    - by Greg_Gutkin
    Introduction A common task in configuring a web application for proxying in Pagelet Producer is setting up form autologin. PP provides a wizard-like tool for detecting the login form fields, but this is usually only the first step in configuring this feature. If the generated configuration doesn't seem to work, some additional manual modifications will be needed to complete the setup. This article will try to guide you through this process while steering you away from common pitfalls. For the purposes of this article, let's assume the following characteristics about your environment: Web Application Base URL: http://host/app (configured as Resource Source URL in PP) Pagelet Producer Base URL: http://pp/pagelets Form Field Auto-Detection Form Autologin is configured in the PP Admin UI under resource_name/Autologin/Form Login. First, you'll enter the URL to the login form under "Login Form Identification". This will enable the admin wizard to connect to and display the login page. Caution: Redirects. Make sure the entered URL matches what you see in the browser's address bar when the application login page is displayed. For example, even though you may be able to reach the login page by simply typing http://host/app, the URL you end up on may change to http://host/app/login via browser redirect(s). The second URL is the one you will want to use. Caution: External Login Servers. The login page may actually come from a different server than the application you are trying to proxy. For example, you may notice that the login page URL changes to http://hostB/appB. This is common when external SSO products are involved. There are two ways of dealing with this situation. One is to configure Pagelet Producer to participate in SSO. This approach is out of scope of this article and is discussed in a separate whitepaper (TODO add link). The second approach is to use the autologin feature to provide stored credentials to the SSO login form. Since the login form URL is not an extension of the application base URL (PP resource URL), you will need to add a new PP resource for the SSO server and configure the login form on that resource instead of the original application resource. One side benefit of this additional resource is that it can be reused for other applications relying on the same SSO server for login. After entering the login page URL (make sure the dropdown says "URL"), click "Automatically Detect Form Fields". This will bring up the web app's login page in a new browser window. Fill it out and submit it as you would normally. If everything goes right, Pagelet Producer will intercept the submitted values and fill out all the needed configuration data in the Admin UI. If the login form window doesn't close or configuration data doesn't get filled in, you may not have entered the login page URL correctly. Review the two cautionary notes above and make any necessary changes. If the form fields got filled automatically, it's time to save the configuration and test it out. If you can access a protected area of the backend application via a proxied PP URL without filling out its login form, then you are pretty much done with login form configuration. The only other step you will need to complete before declaring this aspect of configuration production ready is configuring form field source. You may skip to that section below. Manual Login Form Identification Let's take a closer look at Login Form Identification. This determines how Pagelet Producer recognizes login forms as such.
    URL The most efficient way of detecting login forms is by looking at the page URL. This method can only be used under the following conditions: Login page URL must be different from the post-login application URLs. Login page URL must stay constant regardless of the path it takes to reach the page. For example, reaching the login page by going to the application base URL or to a specific protected URL must result in a redirect to the same login page URL (query string excluded). If only the query string parameters change, just leave out the query string from the configured login page URL. If either of these conditions is not fulfilled, you must switch to the RegEx approach below. RegEx If the login page URL is not uniform enough across all scenarios or is indistinguishable from other page locations, PP can be configured to recognize it by looking at the page markup itself. This is accomplished by changing the dropdown to "RegEx". If regular expressions scare you, take comfort from the fact that in most cases you won't need to enter any special regex characters. Let's look at an example: Say you have a login form that looks like <form id='loginForm' action='login?from=pageA' > <input id='user'> <input id='pass'> </form> Since this form has an id attribute, you can be reasonably sure that this login form can be uniquely identified across the web application by this snippet: "id='loginForm'". (Unless, of course, your backend web application contains login forms to other apps). Since no wildcards are needed to find this snippet, you can just enter it as is into the RegEx field - no special regular expression characters needed! If the web developer who created the form wasn't kind enough to provide a unique id, you will need to look for other snippets of the page to uniquely identify it. It could be the action URL, an input field id, or some other markup fragment. You should abstain from using UI text as an identifier, since it may change in translated versions of the page and prevent the login page logic from working for international users. You may need to turn to regular expression wildcard syntax if no simple matches work. For more information on regular expressions, refer to the Resources section. Form Submit Location Now we'll look at the form submit location. If the captured URL contains query string parameters that will likely change from one form submission to the next, you will need to change its type to RegEx. This type will tell Pagelet Producer to parse the login page for the action URL and submit to the value found. The regular expression needs to point at the actual action URL with its first grouping expression. Taking the example form definition above, the form submit location regex would be: action='(.*?)' The parentheses are used to identify the actual action URL, while the rest of the expression provides the context for finding it. Expression .*? is a so-called reluctant wildcard that matches any character excluding the single quote that follows. See the Resources section below for further information on regular expressions. Manual Form Field Detection If the Admin UI form field detection wizard fails to populate the login form configuration page, you will have to enter the fields by hand. Use a built-in browser developer tool or addon (e.g. Firebug) to inspect the form element and its child input elements. For each input element (including hidden elements), create an entry under Form Fields. Change its Source according to the next section.
    Form Field Source Change the source of any of the fields not exposed to the users of the login form (i.e. hidden fields) to "Generated". This means Pagelet Producer will just use the values returned by the web app rather than supplying values it stored. For fields that contain sensitive data or vary from user to user (e.g. username & password), change the source to User (Credential) Vault. Logging Support To help you troubleshoot your autologin configuration, PP provides some useful logging support. To turn on detailed logging for the autologin feature, navigate to Settings in the Admin UI. Under Logging, change the log level for AutoLogin to Finest. Known Limitations The autologin feature may not work as expected if login form fields (not just the values, but the DOM elements themselves) are generated dynamically by client-side JavaScript. Resources RegEx RegEx Reference from Java RegEx Test Tool
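    As a quick aside on the Form Submit Location regex discussed earlier: the small standalone sketch below (not Pagelet Producer code, just an illustration) applies the action='(.*?)' pattern to the sample login form markup and pulls the action URL out of the first capture group.

    ```csharp
    using System;
    using System.Text.RegularExpressions;

    class FormActionExtractor
    {
        static void Main()
        {
            // The sample login form from the article.
            const string loginPage =
                "<form id='loginForm' action='login?from=pageA'> ... </form>";

            // The reluctant wildcard (.*?) stops at the first closing quote,
            // so group 1 captures just the action URL.
            Match match = Regex.Match(loginPage, @"action='(.*?)'");
            if (match.Success)
            {
                Console.WriteLine(match.Groups[1].Value); // prints: login?from=pageA
            }
        }
    }
    ```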

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! And you could be there courtesy of UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must-see seminar.   You can also register for the seminar yourself at: www.regonline.co.uk/kimtrippsql More information about the seminar   Where: Radisson Edwardian Heathrow Hotel, London When: Thursday 17th June 2010 This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: ·         Debunk many of the ingrained misconceptions around SQL Server's behaviour   ·         Show you disaster recovery techniques critical to preserving your company's life-blood - the data   ·         Explain how a common application design pattern can wreak havoc in the database ·         Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to change. Session Abstracts KEYNOTE: Bridging the Gap Between Development and Production  Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.  SESSION ONE: SQL Server Mythbusters It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained. SESSION TWO: Database Recovery Techniques Demo-Fest Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike.
    In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted! SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys! SESSION FOUR: Essential Database Maintenance In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well.    Speaker Biographies     Paul S. Randal and Kimberly L. Tripp are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Read the article

  • Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help.

    - by adam.hawley
    Between all the press coverage on the unauthorized release of 251,287 diplomatic documents and on previous extensive releases of classified documents on the events in Iraq and Afghanistan, one could be forgiven for thinking massive leaks are really an issue for governments, but they are not: they are an issue for corporations as well. In fact, corporations are apparently set to be the next big target for things like Wikileaks. Just the threat of such a release against one corporation recently caused the price of their stock to drop 3% after the leak organization claimed to have 5GB of information from inside the company, with the implication that it might be damaging or embarrassing information. At the time of this blog post, anyway, we don't know yet if that is true or how they got the information. But how did the diplomatic cable leak happen? For the diplomatic cables, according to press reports, a private in the military, with some appropriate level of security clearance (that is, he apparently had the correct level of security clearance to be accessing the information...he reportedly didn't "hack" his way through anything to get to the documents which might have raised some red flags...), is accused of accessing the material and copying it onto a writeable CD labeled "Lady Gaga" and walking out the door with it. Upload and... Done. In the same article, the accused is quoted as saying "Information should be free. It belongs in the public domain." Now think about all the confidential information in your company or non-profit... from credit card information, to phone records, to customer or donor lists, to corporate strategy documents, product cost information, etc, etc.... And then think about that last quote above from what was a very junior level person in the organization...still feeling comfortable with your ability to control all your information? So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work? A major first step is to make it physically and logistically much harder to walk away with the information. If the user with malicious intent has no way to copy to removable or mobile media (USB sticks, thumb drives, CDs, DVDs, memory cards, or even laptop disk drives) then, as a practical matter, it is much more difficult to physically move the information outside the firewall. But how can you control access tightly and reliably and still keep your hundreds or even thousands of users productive in their daily job? Oracle Desktop Virtualization products can help. Oracle's comprehensive suite of desktop virtualization and access products allows your applications and, most importantly, the related data, to stay in the (highly secured) data center while still allowing secure access from just about anywhere your users need to be to be productive.  Users can securely access all the data they need to do their job, whether from work, from home, or on the road and in the field, but fully configurable policies set up centrally by privileged administrators allow you to control whether, for instance, they are allowed to print documents or use USB devices or other removable media.
    Centrally set policies can also control not only whether they can download to removable devices, but also whether they can upload information (see Stuxnet for why that is important...). In fact, by using Sun Ray Client desktop hardware, which does not contain any disk drives or removable media drives, even theft of the desktop device itself would not make you vulnerable to data loss, unlike a laptop that can be stolen with hundreds of gigabytes of information on its disk drive.  And for extreme security situations, Sun Ray Clients even come standard with the ability to use fibre optic ethernet networking to each client to prevent the possibility of unauthorized monitoring of network traffic. But even without Sun Ray Client hardware, users can leverage Oracle's Secure Global Desktop software or the Oracle Virtual Desktop Client to securely access server-resident applications, desktop sessions, or full desktop virtual machines without persisting any application data on the desktop or laptop being used to access the information.  And, again, even in this context, the Oracle products allow you to control what gets uploaded, downloaded, or printed, for example. Another benefit of Oracle's Desktop Virtualization and access products is the ability to rapidly and easily shut off user access centrally through administrative policies if, for example, an employee changes roles or leaves the company and should no longer have access to the information. Oracle's Desktop Virtualization suite of products can help reduce operating expense and increase user productivity, and those are good reasons alone to consider their use.  But the dynamics of today's world dictate that security is one of the top reasons for implementing a virtual desktop architecture in enterprises. For more information on these products, view the webpages on www.oracle.com and the Oracle Technology Network website.

    Read the article

  • On Contract Employment

    - by kerry
    I am going to post about something I don't post about a lot, the business side of development.  Scott at the antipimp does a good job of explaining how contracts work from a business perspective.  I am going to give a view from the ground. First, a little background on myself.  I have recently taken a 6 month contract after about 8 years of fulltime employment.  I have 2 kids, and a stay-at-home wife.  I took this contract opportunity because I wanted to try it on for size.  I have always wondered whether I would like doing contracts over fulltime employment.  So, in keeping with the theme of this blog I will write this down now so that I may reference it later. ALL jobs are temporary! Right now you may not realize it, most people simply ignore it, but EVERY job is temporary.  Everyone should be planning for life after the money stops coming in.  Sadly, most people do not.  Contracting pushes this issue to the forefront, making you deal with it.  After a month on a contract, I am happy to say that I am saving more than I ever saved in a fulltime position.  Hopefully, I will be ready in case of an extended window of unemployment between contracts. Networking I find it extremely gratifying getting to know people.  It is especially beneficial when moving to a new city.  What better way to go out and meet people in your field than to work a few contracts?  6 months of working beside someone and you get to know them pretty well.  This is one of my favorite aspects. Technical Agility Moving between IS shops takes a flexible person (or molds you into one).  You have to be able to go in and hit the ground running.  This means you need to be able to sit down and start work on a large codebase working in a language that you may or may not have that much experience in.  It is also an excellent way to learn new languages and broaden your technical skill set.  I took my current position to learn Ruby.  A month ago, I had only used it in passing, but now I am using it every day.  It's a tragedy in this field when people start coding for the joy and love of coding, then become so deeply entrenched in their company's methods and technologies that it becomes just a job. Less Stress I am not talking about the kind of stress you get from a jackass boss.  I am talking about the kind of stress I (or others) experience about planning and future proofing your code.  Not saying I stay up at night worrying whether we have done it right, or if that code I wrote today is going to bite me later, but it still creeps around in the dark recesses of my mind.  Careful though, I am not suggesting you write sloppy code; just defer any large architectural or design decisions to the 'code owners'. Flexible Scheduling It makes me very happy to be able to cut out a few hours early on a Friday (provided the work is done) and start the weekend off early by going to the pool, or taking the kids to the park.  Contracting provides you this opportunity (mileage may vary).  Most of your fulltime brethren will not care; they will be jealous that their corporate policy prevents them from doing the same.  However, you must be mindful of situations where this is not appropriate, and don't overdo it.  You are there to work after all. Affirmation of Need Have you ever been stuck in a job where you thought you were underpaid?  Have you ever been in a position where you felt like there was not enough workload for you?  This is not a problem for contractors.
When you start a contract it is understood that you are needed, and the employer knows that you are happy with the terms. Contracting may not be for everyone.  But, if you develop a relationship with a good consulting firm, keep their clients happy, then they will keep you happy.  They want you to work almost as much as you do.  Just be sure and plan financially for any windows of unemployment.

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It's no secret that bad data leads to bad decisions and poor results.  However, how do you prevent dirty data from taking up residency in your data store?  Some might argue that it's the responsibility of the person sending you the data.  While that may be true, in practice that will rarely hold up.  It doesn't matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data.  What constitutes bad data?  There are quite a few valid answers, for example: Invalid date values Inappropriate characters Wrong data Values that exceed a pre-set threshold While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain.  Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster.  Below are some tips for leveraging expressor to get your data into tip-top shape. Tip 1:     Build reusable data objects with embedded cleansing rules One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level.  Using expressor's concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data.  Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I've defined a constraint on zip code.  I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type for example.   The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied. Tip 2:     Unlock the power of regular expressions in Semantic Types Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint.   A regular expression is used to identify patterns within data.   The patterns could be something as simple as a date format or something much more complex such as a street address.  For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period.  So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be.   How can I account for this using regular expressions? A very simple regular expression that would look for any 4 sets of 1 to 3 digits separated by a period would be:  ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ Alternatively, the following would be the exact check for truly valid IP addresses as we had defined above:  ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$ .  In expressor, we would enter this regular expression as a constraint like this: Here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do.  Some of the options include rejecting the offending record, skipping it, or aborting the dataflow. Tip 3:     Email pattern expressions that might come in handy In the example schema that I am using, there’s a field for email.  Email addresses are often entered incorrectly because people are trying to avoid spam.
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses: This one is short and sweet:  \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/) This one is more specific about which characters are allowed:  ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26 ) Tip 4:     Reject “dirty data” for analysis or further processing Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations.  To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator.  Then attach a Write File operator to the reject port of the Read File operator as such: Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File.  The key difference would be that the schema needs to be derived from the upstream operator as shown below: Once configured, expressor will output rejected records to the file you specified.  In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected.  This makes diagnosing errors much easier! Tip 5:    Use a Filter or Transform after the initial cleansing to finish the job Sometimes you may want to predicate the data cleansing on a more complex set of conditions.  For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes.  Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user defined algorithm or transformation.  It also supports the use of conditional logic and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach – expressor operators can be combined in many different ways to achieve the desired results.  For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table. I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here.  Their Studio ETL tool is absolutely free and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
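    A quick way to sanity-check patterns like these before pasting them into a Semantic Type constraint is to run them through any regex engine. Below is a minimal C# sketch using the patterns quoted above; the sample values are made up, and since regex dialects differ slightly, treat it as a rough check rather than a guarantee of how expressor will evaluate the pattern.

        using System;
        using System.Text.RegularExpressions;

        class RegexConstraintCheck
        {
            static void Main()
            {
                // Loose IP pattern: four groups of one to three digits
                const string looseIp = @"^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$";
                // Strict IP pattern: each octet limited to 0-255
                const string strictIp = @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$";
                // Short email pattern from the article (run case-insensitively)
                const string email = @"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b";

                Console.WriteLine(Regex.IsMatch("192.168.23.123", looseIp));  // True
                Console.WriteLine(Regex.IsMatch("888.777.0.123", looseIp));   // True - which is why the strict pattern exists
                Console.WriteLine(Regex.IsMatch("888.777.0.123", strictIp));  // False
                Console.WriteLine(Regex.IsMatch("someone@example.com", email, RegexOptions.IgnoreCase)); // True
            }
        }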

    Read the article

  • Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics

    - by fip
    The WebLogic JMS Topic typically runs in a WLS cluster, and so do the SOA composites that receive its messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. Because a JMS Topic by nature distributes the same copy of each message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages against the SOA composites: How do you assure that all of the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? How do you make sure the messages are evenly distributed (load balanced) to the SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite, so that the JMS Topic messages will be evenly distributed to the same composite running on different SOA cluster nodes without causing duplication. 2. The typical configuration In this typical configuration, we achieve the load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can reference the documentation for an explanation of PDT. And this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters. To do so involves the following steps: Step A. Configure the JMS Topic to be UDD and PDT, at the WebLogic cluster that houses the JMS Topic Step B. Configure the JCA Connection Factory with proper ServerProperties at the SOA cluster Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file) Here are more details of each step: Step A. Configure the JMS Topic to be UDD and PDT. You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions at Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then at the same administration screen you can specify the "Distribution Type" to be "Uniform", and the Forwarding Policy to be "Partitioned", which make the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively. Step B: Configure ServerProperties of the JCA Connection Factory You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory behave as a certain type of "client". When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field as a semicolon-separated list: ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false You can refer to Chapter 8.4.10 Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS of the Adapter User Guide for the meaning of these properties. Please note: Except for ClientID, the other properties such as ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false are all default settings for the JmsAdapter's connection factory. Therefore you do NOT have to specify them explicitly. All you need to do is specify the ClientID. 
    The ClientID is different from the subscriber ID that we will discuss in the later steps. To keep it simple, just remember that you need to specify the ClientID and make it unique per connection factory. Here is the example setting: Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file) In the following example, the value 'MySubscriberID-1' was given as the value of the property 'DurableSubscriber': <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata"> <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/> <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message"> <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec"> <property name="DurableSubscriber" value="MySubscriberID-1"/> <property name="PayloadType" value="TextMessage"/> <property name="UseMessageListener" value="false"/> <property name="DestinationName" value="jms/MyTestUDDTopic"/> </activation-spec> </endpoint-activation> </adapter-config> You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the Composite project. 2. The "atypical" configurations: For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic as a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We would like to discuss those scenarios here: Configuration A: The JMS Topic is NOT a PDT In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out those "replicated" messages. Configuration B. The ClientIDPolicy=RESTRICTED In this case, you need separate connection factories for different composites. More accurately, you need separate connection factories for different JmsAdapter *.jca files. References: Managing Durable Subscription WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions JMS Troubleshooting: Configuring JMS Message Logging: Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API

    Read the article

  • Guide to Downloading Oracle Fusion Middleware 11g Products

    - by Daniel Mortimer
    Introduction: The idea of writing a blog about downloading software seems a bit strange ... right? After all, surely just give me the web download link and away I go!? Unfortunately, life is not so simple if you are a DBA or Systems Administrator tasked with staging Oracle Fusion Middleware 11g products for your chosen business technology stack. Here are the challenges: Oracle Fusion Middleware is not a single product, it is a family of products - a media pack with many many "disks" - which ones do I pick? Are the products I pick certified / supported on my chosen platform? Which download site do I use? I need to be on the latest and greatest - how do I get hold of the latest product patch set? The purpose of this blog is to give you a roadmap to get you through these challenges. Oracle Fusion Middleware 11g - A Product Suite: The first thing to appreciate is that Oracle Fusion Middleware 11g is not a single product. It is a product suite, an umbrella label for many products. Typically you don't download the whole media pack - well not unless you want to stage 124 Parts - a total of 68 Gig - instead you pick the pieces that are required for your chosen Middleware solution. Therefore, you need to research / understand which products are required to build your solution. In this respect, before you go looking for the software, pick and peruse the product guide from the list below which matches your situation: installing a new / vanilla FMW 11g architecture - Oracle Fusion Middleware Installation Planning Guide 11g; upgrading Oracle Application Server 10g to FMW 11g - Oracle Fusion Middleware Upgrade Planning Guide 11g; patching an existing FMW 11g architecture - Oracle Fusion Middleware Patching Guide 11g. Certification Information: Ok, so now you have an idea of what Fusion Middleware products you need. It's time to check whether these products are certified against your chosen platform. There are two places to find this information: the My Oracle Support Certification Tab Page (Figure 1.1 My Oracle Support Certification Tab Page - "Search on SOA Suite"; Figure 1.2 My Oracle Support Certification Tab Page - "SOA Suite Search Result") and the FMW 11g Certification Central Hub, in the format of an xls spreadsheet (Figure 2: Screenshot of FMW 11g Release 1 Certification xls spreadsheet). Hints / Tips: Fusion Middleware 11g certification information has only recently been added into the Certification Tab page and I think it is the more friendly way to access the information. However, due to some restrictions with the Certification Tab page interface, some of the more, let's say, obscure certification information is still only to be found in the Certification spreadsheet. Be aware that to find certification information via the My Oracle Support Certification Tab page you must enter the FMW 11g product name e.g. "Oracle SOA Suite". Do NOT enter "Oracle Fusion Middleware". The certification information does not exist at this product suite level.  For example, if you are building a solution which includes Oracle SOA Suite and Oracle WebCenter then you will have to look up the certification information for each product in turn. After choosing the product name, select the latest patch set version. This will not only tell you whether your chosen product is available at that patch set version but provide the certification information relevant to that version.  If the product is not available under the latest patch set version, seek the information under previous patch set versions. 
    Important: Make a careful note of the Oracle WebLogic Server version which is certified with your chosen product and patch set version. Oracle WebLogic Server is the core component of an Oracle Fusion Middleware 11g home. It is important therefore to ensure later on that you download the version of Oracle WebLogic Server which is compatible and certified with your chosen product and patch set version. Also - sorry to state the obvious, but please do not take certification information from the screenshots above. The screenshots are only good for the time they were entered into the blog. To ensure you have the latest information, interactively look up the certification details. For more information about finding certification information, bookmark and read My Oracle Support Certification Tool for Oracle Fusion Middleware Products [Doc ID 1368736.1] and How to Find Certification Details for Oracle Application Server 10g and Oracle Fusion Middleware 11g [Doc ID 431578.1]. Downloading the Software: Now you should be ready to download the software. There are two download locations: the Oracle Software Delivery Cloud (formerly known as E-Delivery) (Figure 3 - Screenshot of Fusion Middleware Download from Delivery Cloud) and the Oracle Fusion Middleware Download Page on Oracle Technology Network (Figure 4 - Screenshot of OTN Product Download Screen). Hints / Tips: Your choice of download location should be primarily driven by your licensing needs. Take note of the wording on the OTN site - to quote: "The downloads below are provided for evaluators under the OTN License Agreement. Licensed customers should download their software via our Oracle Software Delivery Cloud site, which offers different license terms." However, it has to be said that the presentation of most of the product download pages on OTN does make the job easier. The Software Delivery Cloud provides you with a flat list of the Oracle Fusion Middleware 11g media pack. You have to know what you are looking for and pick out the right pieces :-( The OTN product download pages present not only the download for the product you want but also its dependencies such as WebLogic Server and Repository Creation Utility. So, even if your licensing requirements drive you towards the cloud, it is still worthwhile checking the OTN pages if only as a guide to what you need to pick out from the flat list found on the cloud site. Latest Patch Set: This is an area which may cause you confusion - especially if you are more familiar with the Oracle Application Server 10g patching story. From Patch Set 11.1.1.6 and higher, the majority of FMW 11g products (N.B. there are exceptions) provide installers which can be used either to update existing FMW 11g product installs or to build brand new ones. This is good news because, unless you are dealing with one of the exceptions, it means you do not have to download base software and a patch set. At the time of writing, the two significant exceptions are: Portal/Forms/Reports/Discoverer 11g Release 1 (11.1.1.x) and Identity Access Management 11g Release 1 (11.1.1.x). The other key message here is to ensure you are grabbing a version of Oracle WebLogic Server which is compatible with your chosen product patch set version. Get this wrong and you will hit errors / problems at AS Instance Configuration Time. The go-to place is this document - Oracle Fusion Middleware Download, Installation, and Configuration Readme Files. In fact, this README document pretty much takes you through what I have blogged above. 
    The only thing is you need to know which README to choose, and that's why planning your FMW 11g technology stack and viewing the certification information come into play beforehand. And Finally: As the Oracle Fusion Middleware Download, Installation, and Configuration Readme Files state, don't forget to check the FMW 11g System Requirements and FMW 11g Product Interoperability.

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate Last Sunday I went on a walkabout.  That’s when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor “social trails”, the unmapped routes pioneered by both animal and human explorers.  These tracks  are usually more challenging than established, marked routes and you can’t be 100% sure of where you’re going to end up. But I’ve found the rewards to be much greater. For awhile, I pondered on how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic “the glass is half empty” or a “the glass is half full” scenario. Midsize companies buy Oracle to grow and so I’m continually amazed and fascinated by the success stories our customers relate to me.  Oracle’s successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn’t half full—it’s overflowing. Growing Midsize Companies are Thankful for: Innovation The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May.  You might not recognize the name but, chances are, your local evening weather report relies on this company’s weather observation, monitoring and measurement products.  For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end to end product and service solutions based on the needs of distinctly different customer groups across industrial, public sector, and defense sectors.  Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch.  I cut it down. The name of that customer’s company was stamped on the housing. “It’s a radiosonde from a high altitude weather balloon,” he told me the next day. “Keep it as a souvenir.”  It sits on my fireplace mantle and elicits many questions from guests. Growing Midsize Companies are Thankful for: Agility In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA where billiards, bowling, and laser tag take the place of roller coasters, kiddy rides, and wave pools. With mountains and rivers right out my front door, I’m not much for theme parks, but I’ll take a spirited game of laser tag any day.  This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market is derived from the ability to replicate proven and efficient best business practices across diverse operating environments.  They recently went live on Oracle’s Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices. Growing Midsize Companies are Thankful for: Opportunity I have three GPS apps on my iPhone. I use them mainly to keep track of my stats—distance, time, and vertical gain. 
However, every once in awhile I need to find the most efficient route back home before dark from my current location (notice I didn’t use the word “lost”). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids.  The company’s executive team saw an opportunity in this emerging market and went “all in”. Consequently, they are being rewarded with tremendous growth results and market domination by providing the ability for their clients to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity. This Thanksgiving, I’m thankful for health, family, friends, and a career with an innovative company that helps companies leverage top tier software to drive and manage growth. And I’m thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy. Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an obstructed view of 14,265 ft Mt Evans just a few miles to the west.  There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.  

    Read the article

  • Chock-full of Identity Customers at Oracle OpenWorld

    - by Tanu Sood
      Oracle Openworld (OOW) 2012 kicks off this coming Sunday. Oracle OpenWorld is known to bring in Oracle customers, organizations big and small, from all over the world. And, Identity Management is no exception. If you are looking to catch up with Oracle Identity Management customers, hear first-hand about their implementation experiences and discuss industry trends, business drivers, solutions and more at OOW, here are some sessions we recommend you attend: Monday, October 1, 2012 CON9405: Trends in Identity Management 10:45 a.m. – 11:45 a.m., Moscone West 3003 Subject matter experts from Kaiser Permanente and SuperValu share the stage with Amit Jasuja, Snior Vice President, Oracle Identity Management and Security to discuss how the latest advances in Identity Management are helping customers address emerging requirements for securely enabling cloud, social and mobile environments. CON9492: Simplifying your Identity Management Implementation 3:15 p.m. – 4:15 p.m., Moscone West 3008 Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return-on-investment and maximum efficiency. CON9444: Modernized and Complete Access Management 4:45 p.m. – 5:45 p.m., Moscone West 3008 We have come a long way from the days of web single sign-on addressing the core business requirements. Today, as technology and business evolves, organizations are seeking new capabilities like federation, token services, fine grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management, what a complete solution is like, complemented with real-world customer case studies from ETS, Kaiser Permanente and TURKCELL and product demonstrations. Tuesday, October 2, 2012 CON9437: Mobile Access Management 10:15 a.m. – 11:15 a.m., Moscone West 3022 With more than 5 billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session will feature Identity Management evangelists from companies like Intuit, NetApp and Toyota to discuss how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access. CON9491: Enhancing the End-User Experience with Oracle Identity Governance applications 11:45 a.m. – 12:45 p.m., Moscone West 3008 As organizations seek to encourage more and more user self service, business users are now primary end users for identity management installations.  Join experts from Visa and Oracle as they explore how Oracle Identity Governance solutions deliver complete identity administration and governance solutions with support for emerging requirements like cloud identities and mobile devices. CON9447: Enabling Access for Hundreds of Millions of Users 1:15 p.m. – 2:15 p.m., Moscone West 3008 Dealing with scale problems? Looking to address identity management requirements with million or so users in mind? Then take note of Cisco’s implementation. Join this session to hear first-hand how Cisco tackled identity management and scaled their implementation to bolster security and enforce compliance. 
CON9465: Next Generation Directory – Oracle Unified Directory 5:00 p.m. – 6:00 p.m., Moscone West 3008 Get the 360 degrees perspective from a solution provider, implementation services partner and the customer in this session to learn how the latest Oracle Unified Directory solutions can help you build a directory infrastructure that is optimized to support cloud, mobile and social networking and yet deliver on scale and performance. Wednesday, October 3, 2012 CON9494: Sun2Oracle: Identity Management Platform Transformation 11:45 a.m. – 12:45 p.m., Moscone West 3008 Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum. CON9631: Entitlement-centric Access to SOA and Cloud Services 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7 How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? How do you externalize such entitlements to allow dynamic changes without having to touch the application code? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter. CON3957 - Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012 11:45 a.m. – 12:45 p.m., Moscone West 3003 In this session, Virgin Media, the U.K.’s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand. CON9493: Identity Management and the Cloud 1:15 p.m. – 2:15 p.m., Moscone West 3008 Security is the number one barrier to cloud service adoption.  Not so for industry leading companies like SaskTel, ConAgra foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service. CON9624: Real-Time External Authorization for Middleware, Applications, and Databases 3:30 p.m. – 4:30 p.m., Moscone West 3008 As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies become critical; both to identify who has access to what and to improve the end user experience.  This session will explore how customers are using attribute and role-based access to achieve these goals. CON9625: Taking control of WebCenter Security 5:00 p.m. – 6:00 p.m., Moscone West 3008 Many organizations are extending WebCenter in a business to business scenario requiring secure identification and authorization of business partners and their users. Leveraging LADWP’s use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. 
Thursday, October 4, 2012 CON9662: Securing Oracle Applications with the Oracle Enterprise Identity Management Platform 2:15 p.m. – 3:15 p.m., Moscone West 3008 Oracle Enterprise identity Management solutions are designed to secure access and simplify compliance to Oracle Applications.  Whether you are an EBS customer looking to upgrade from Oracle Single Sign-on or a Fusion Application customer seeking to leverage the Identity instance as an enterprise security platform, this session with Qualcomm and Oracle will help you understand how to get the most out of your investment. And here’s the complete listing of all the Identity Management sessions at Oracle OpenWorld.

    Read the article

  • Using the HTML5 &lt;input type=&quot;file&quot; multiple=&quot;multiple&quot;&gt; Tag in ASP.NET

    - by Rick Strahl
    Per the HTML5 spec the <input type="file" /> tag allows for multiple files to be picked from a single File upload button. This is actually a very subtle change that's very useful as it makes it much easier to send multiple files to the server without using complex uploader controls. Please understand though, that even though you can send multiple files using the <input type="file" /> tag, the process of how those files are sent hasn't really changed - there's still no progress information or other hooks that allow you to automatically make for a nicer upload experience without additional libraries or code. For that you will still need some sort of library (I'll post an example in my next blog post using plUpload). All the new features allow for is to make it easier to select multiple images from disk in one operation. Where you might have required many file upload controls before to upload several files, one File control can potentially do the job. How it works To create a file input box with multiple file support you can simply do: <form method="post" enctype="multipart/form-data"> <label>Upload Images:</label> <input type="file" multiple="multiple" name="File1" id="File1" accept="image/*" /> <hr /> <input type="submit" id="btnUpload" value="Upload Images" /> </form> Now when the file open dialog pops up - depending on the browser and whether the browser supports it - you can pick multiple files. Here I'm using Firefox; using the thumbnail preview I can easily pick images to upload on a form: Note that I can select multiple images in the dialog, all of which get stored in the file textbox. The UI for this can be different in some browsers. For example Chrome displays "3 files selected" as text next to the Browse… button when I choose three, rather than showing any files in the textbox. Most other browsers display the standard file input box and display the multiple filenames as a comma delimited list in the textbox. Note that you can also specify the accept attribute in the <input> tag, which specifies a mime-type filter for the type of content to allow. Here I'm only allowing images (image/*) and the browser complies by just showing me image files to display. Likewise I could use text/* for all text formats registered on the machine or text/xml to only show XML files (which would include xml, xst, xsd etc.). Capturing Files on the Server with ASP.NET When you upload files to an ASP.NET server there are a couple of things to be aware of. When multiple files are uploaded from a single file control, they are assigned the same name. In other words if I select 3 files to upload on the File1 control shown above I get three file form variables named File1. This means I can't easily retrieve files by their name: HttpPostedFileBase file = Request.Files["File1"]; because there will be multiple files for a given name. The above only selects the first file. Instead you can only reliably retrieve files by their index. Below is an example I use in an app to capture a number of images uploaded and store them into a database using a business object and EF 4.2: for (int i = 0; i < Request.Files.Count; i++) { HttpPostedFileBase file = Request.Files[i]; if (file.ContentLength == 0) continue; if (file.ContentLength > App.Configuration.MaxImageUploadSize) { ErrorDisplay.ShowError("File " + file.FileName + " is too large. Max upload size is: " + App.Configuration.MaxImageUploadSize); return View("UploadClassic",model); } var image = new ClassifiedsBusiness.Image(); var ms = new MemoryStream(16498); file.InputStream.CopyTo(ms); image.Entered = DateTime.Now; image.EntryId = model.Entry.Id; image.ContentType = "image/jpeg"; image.ImageData = ms.ToArray(); ms.Seek(0, SeekOrigin.Begin); // resize image if necessary and turn into jpeg Bitmap bmp = Imaging.ResizeImage(ms.ToArray(), App.Configuration.MaxImageWidth, App.Configuration.MaxImageHeight); ms.Close(); ms = new MemoryStream(); bmp.Save(ms,ImageFormat.Jpeg); image.ImageData = ms.ToArray(); bmp.Dispose(); ms.Close(); model.Entry.Images.Add(image); } This works great and also allows you to capture input from multiple input controls if you are dealing with browsers that don't support multiple file selections in the file upload control. The important thing here is that I iterate over the files by index, rather than using a foreach loop over the Request.Files collection. The files collection returns key name strings, rather than the actual files (who thought that was a good idea at Microsoft?), and so that isn't going to work since you end up getting multiple keys with the same name. Instead a plain for loop has to be used to loop over all files. Another Option in ASP.NET MVC If you're using ASP.NET MVC you can use the code above as well, but you have yet another option to capture multiple uploaded files by using a parameter for your post action method: public ActionResult Save(HttpPostedFileBase[] file1) { foreach (var file in file1) { if (file.ContentLength == 0) continue; // do something with the file } } Note that in order for this to work you have to specify each posted file variable individually in the parameter list. This works great if you have a single file upload to deal with. You can also pass this in addition to your main model to separate out a ViewModel and a set of uploaded files: public ActionResult Edit(EntryViewModel model, HttpPostedFileBase[] uploadedFile) You can also make the uploaded files part of the ViewModel itself - just make sure you use the appropriate naming for the variable name in the HTML document (since there's no Html.FileFor() extension). Browser Support You knew this was coming, right? The feature is really nice, but unfortunately not supported universally yet. Once again Internet Explorer is the problem: no shipping version of Internet Explorer supports multiple file uploads. IE10 supposedly will, but even IE9 does not. All other major browsers - Chrome, Firefox, Safari and Opera - support multi-file uploads in their latest versions. So how can you handle this? If you need to provide multiple file uploads you can simply add multiple file selection boxes and let people either select multiple files with a single upload file box or use multiples. Alternately you can do some browser detection and if IE is used simply show the extra file upload boxes. It's not ideal, but either one of these approaches makes life easier for folks that use a decent browser and leaves you with a functional interface for those that don't. Here's a UI I recently built as an alternate uploader with multiple file upload buttons: I say this is my 'alternate' uploader - for my primary uploader I continue to use an add-in solution. Specifically I use plUpload and I'll discuss how that's implemented in my next post. 
    Although I think that plUpload (and many of the other packaged JavaScript upload solutions) are a better choice especially for large uploads, for simple one file uploads input boxes work well enough. The advantage of this solution is that it's very easy to handle on the server side. Any of the JavaScript controls require special handling for uploads which I'll also discuss in my next post. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, ASP.NET, MVC.
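    To illustrate the ViewModel option mentioned above, here is a minimal sketch; the type and property names are mine rather than from the original post. ASP.NET MVC's default model binder will populate an HttpPostedFileBase[] property as long as the file input's name attribute matches the property name.

        // Assumes ASP.NET MVC 3+ and a form posted with enctype="multipart/form-data"
        // containing <input type="file" multiple="multiple" name="UploadedFiles" />
        using System.Web;
        using System.Web.Mvc;

        public class EntryUploadViewModel
        {
            public string Title { get; set; }
            public HttpPostedFileBase[] UploadedFiles { get; set; }
        }

        public class EntryUploadController : Controller
        {
            [HttpPost]
            public ActionResult Edit(EntryUploadViewModel model)
            {
                if (model.UploadedFiles != null)
                {
                    foreach (var file in model.UploadedFiles)
                    {
                        // Browsers that post an empty field can produce a null or zero-length entry
                        if (file == null || file.ContentLength == 0) continue;
                        // process file.InputStream / file.FileName here
                    }
                }
                return View(model);
            }
        }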

    Read the article

  • Sun Fire X4800 M2 Delivers World Record TPC-C for x86 Systems

    - by Brian
    Oracle's Sun Fire X4800 M2 server equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning. The Sun Fire X4800 M2 server delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 06/26/12. The Sun Fire X4800 M2 server delivers 3.0x times better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors. The Sun Fire X4800 M2 server has 3.1x times better price/performance than the 8-processor 4.7GHz POWER6 IBM System p 570. The Sun Fire X4800 M2 server has 1.6x times better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors. This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 Series chips. The Sun Fire X4800 M2 server is the first x86 system to get over 5 million tpmC. The Oracle solution utilized Oracle Linux operating system and Oracle Database 11g Enterprise Edition Release 2 with Partitioning to produce the x86 world record TPC-C benchmark performance. Performance Landscape Select TPC-C results (sorted by tpmC, bigger is better) System p/c/t tpmC Price/tpmC Avail Database MemorySize Sun Fire X4800 M2 8/80/160 5,055,888 0.89 USD 6/26/2012 Oracle 11g R2 4 TB IBM x3850 X5 4/40/80 3,014,684 0.59 USD 7/11/2011 DB2 ESE 9.7 3 TB IBM x3850 X5 4/32/64 2,308,099 0.60 USD 5/20/2011 DB2 ESE 9.7 1.5 TB IBM System p 570 8/16/32 1,616,162 3.54 USD 11/21/2007 DB2 9.0 2 TB p/c/t - processors, cores, threads Avail - availability date Oracle and IBM TPC-C Response times System tpmC Response Time (sec) New Order 90th% Response Time (sec) New Order Average Sun Fire X4800 M2 5,055,888 0.210 0.166 IBM x3850 X5 3,014,684 0.500 0.272 Ratios - Oracle Better 1.6x 1.4x 1.3x Oracle uses average new order response time for comparison between Oracle and IBM. Graphs of Oracle's and IBM's response times for New-Order can be found in the full disclosure reports on TPC's website TPC-C Official Result Page. Configuration Summary and Results Hardware Configuration: Server Sun Fire X4800 M2 server 8 x 2.4 GHz Intel Xeon Processor E7-8870 4 TB memory 8 x 300 GB 10K RPM SAS internal disks 8 x Dual port 8 Gbs FC HBA Data Storage 10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor 8 GB memory 10 x 2 TB 7.2K RPM 3.5" SAS disks 2 x Sun Storage F5100 Flash Array storage (1.92 TB each) 1 x Brocade 5300 switches Redo Storage 2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with 1 x 3.06 GHz Intel Xeon X5675 processor 8 GB memory 11 x 2 TB 7.2K RPM 3.5" SAS disks Clients 8 x Sun Fire X4170 M2 servers, each with 2 x 3.06 GHz Intel Xeon X5675 processors 48 GB memory 2 x 300 GB 10K RPM SAS disks Software Configuration: Oracle Linux (Sun Fire 4800 M2) Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2) Oracle Solaris 10 9/10 (Sun Fire X4170 M2) Oracle Database 11g Release 2 Enterprise Edition with Partitioning Oracle iPlanet Web Server 7.0 U5 Tuxedo CFS-R Tier 1 Results: System: Sun Fire X4800 M2 tpmC: 5,055,888 Price/tpmC: 0.89 USD Available: 6/26/2012 Database: Oracle Database 11g Cluster: no New Order Average Response: 0.166 seconds Benchmark Description TPC-C is an OLTP system benchmark. 
It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses. Key Points and Best Practices Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance. COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI Target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at SCSI level (disk, tape, medium changer etc.) are not required to know about the underlying transport. And the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation providing execution context and cleanup of SCSI commands and associated resources and simplifies the task of writing the SCSI or transport modules. Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement from the TPC-C benchmark. See Also Oracle Press Release -- Sun Fire X4800 M2 TPC-C Executive Summary tpc.org Complete Sun Fire X4800 M2 TPC-C Full Disclosure Report tpc.org Transaction Processing Performance Council (TPC) Home Page Ideas International Benchmark Page Sun Fire X4800 M2 Server oracle.com OTN Oracle Linux oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Sun Storage F5100 Flash Array oracle.com OTN Disclosure Statement TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Fire X4800 M2 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 6/26/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Lets look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees. 
You can check out the latest Memcached / InnoDB developments and benchmarks here You can learn how to configure the Memcached API for InnoDB here Memcached Implementation for MySQL Cluster Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process) 3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data By default, every Key / Value is written to the same table with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster. Conclusion Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar or take a look at the docs Don't hesitate to use the comments section below for any questions you may have 
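    To make the client side of this concrete, here is a minimal C# sketch of writing and reading a key/value pair through the memcached endpoint that the MySQL daemon plugin (or a MySQL Cluster deployment) exposes. It assumes the Enyim.Caching client and the default port 11211; the host, port and key names are illustrative, and any standard memcached client library would work the same way since the protocol is unchanged.

        using System;
        using Enyim.Caching;
        using Enyim.Caching.Configuration;
        using Enyim.Caching.Memcached;

        class MemcachedToMySql
        {
            static void Main()
            {
                // Point the client at the memcached port exposed by the MySQL server
                var config = new MemcachedClientConfiguration();
                config.AddServer("127.0.0.1", 11211);

                using (var client = new MemcachedClient(config))
                {
                    // The key/value written here lands in the mapped InnoDB or NDB table
                    // and is immediately visible to SQL queries against the same data.
                    client.Store(StoreMode.Set, "user:42:email", "someone@example.com");

                    Console.WriteLine(client.Get<string>("user:42:email"));
                }
            }
        }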

    Read the article

  • SQL Server Master class winner

    - by Testas
     The winner of the SQL Server MasterClass competition courtesy of the UK SQL Server User Group and SQL Server Magazine!    Steve Hindmarsh     There is still time to register for the seminar yourself at:  www.regonline.co.uk/kimtrippsql     More information about the seminar     Where: Radisson Edwardian Heathrow Hotel, London  When: Thursday 17th June 2010  This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: Debunk many of the ingrained misconceptions around SQL Server's behaviour    Show you disaster recovery techniques critical to preserving your company's life-blood - the data    Explain how a common application design pattern can wreak havoc in the database Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to change  Sessions Abstracts  KEYNOTE: Bridging the Gap Between Development and Production    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.   SESSION ONE: SQL Server Mythbusters  It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.   SESSION TWO: Database Recovery Techniques Demo-Fest  Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!   
SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward   Since the addition of the GUID (Microsoft’s implementation of the UUID), my life as a consultant and "tuner" has been busy. I’ve seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!   SESSION 4: Essential Database Maintenance  In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005 but we'll explain some of the key differences for 2000 and 2008 as well. Speaker Biographies     Kimberley L. Tripp Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and throughout Microsoft.In their spare time, they like to find frogfish in remote corners of the world.   Speaker Testimonials  "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process." "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional." "When it comes to the instructors themselves, Kimberly and Paul simply have no equal. 
Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to." "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient." "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone." “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW....amazing tips delivered in an exciting way!  Thanks again” 

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs monitor their indexes for fragmentation and deal with it accordingly. However, you don't hear about so many working on their log files.

How can a log file get fragmented? I'm glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later).

When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

Let's see how many VLFs we have in a brand new database.

```sql
USE master
GO
CREATE DATABASE VLF_Test ON
( NAME = VLF_Test,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
  SIZE = 100,
  MAXSIZE = 500,
  FILEGROWTH = 50 )
LOG ON
( NAME = VLF_Test_Log,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
  SIZE = 5MB,
  MAXSIZE = 250MB,
  FILEGROWTH = 5MB );
GO
USE VLF_Test
GO
DBCC LOGINFO;
```

The result is that a new database is created with the specified file sizes and the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes, and you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB each.

Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

```sql
LOG ON
( NAME = VLF_Test_Log,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
  SIZE = 1GB,
  MAXSIZE = 25GB,
  FILEGROWTH = 1GB );
```

With a bigger log file specified we get more VLFs. What if we make it bigger again?

```sql
LOG ON
( NAME = VLF_Test_Log,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
  SIZE = 5GB,
  MAXSIZE = 250GB,
  FILEGROWTH = 5GB );
```

This time we see even more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:
  • If the file growth is lower than 64MB then 4 VLFs are created
  • If the growth is between 64MB and 1GB then 8 VLFs are created
  • If the growth is greater than 1GB then 16 VLFs are created

Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB, so let's add some data and see what happens.

```sql
USE vlf_test
GO
-- we need somewhere to put the data, so a table is in order
IF OBJECT_ID('A_Table') IS NOT NULL
  DROP TABLE A_Table
GO
CREATE TABLE A_Table
( Col_A int IDENTITY,
  Col_B CHAR(8000) )
GO
-- Let's check the state of the log file
-- 4 VLFs found
EXECUTE ('DBCC LOGINFO');
GO
-- We can go ahead and insert some data and then check the state of the log file again
INSERT A_Table (col_b)
SELECT TOP 500 REPLICATE('a',2000)
FROM sys.columns AS sc, sys.columns AS sc2
GO
-- insert 500 rows and we get 22 VLFs
EXECUTE ('DBCC LOGINFO');
GO
-- Let's insert more rows
INSERT A_Table (col_b)
SELECT TOP 2000 REPLICATE('a',2000)
FROM sys.columns AS sc, sys.columns AS sc2
GO 10
-- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
EXECUTE ('DBCC LOGINFO');
```

Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start-up, log backup and log restore operations, so it's well worth keeping a check on this property.

How do we prevent excessive VLF creation? Creating the database with larger files and larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, makes sure that the growths are made with a size that is optimal.

How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:

1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
3. Re-size the log file to the size you want, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

* If you don't know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

Knowing when VLFs are being created
By complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.
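If you would rather keep a check on the VLF count from application code instead of by hand, the same DBCC LOGINFO output can be counted programmatically. The following is a minimal, hypothetical C# sketch (not part of the original post); the connection string, the VLF_Test database name and the warning threshold are all assumptions to adapt for your own environment.

```csharp
// Minimal sketch: count VLFs for one database by counting the rows DBCC LOGINFO returns.
// Assumes a .NET client with System.Data.SqlClient available and a connection string of your own.
using System;
using System.Data.SqlClient;

class VlfCheck
{
    static void Main()
    {
        // Hypothetical connection string - point it at the database you want to check.
        const string connectionString =
            "Server=localhost;Database=VLF_Test;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("DBCC LOGINFO", connection))
        {
            connection.Open();

            int vlfCount = 0;
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    vlfCount++;   // one row per virtual log file
                }
            }

            Console.WriteLine("VLF count: " + vlfCount);
            if (vlfCount > 50)    // arbitrary threshold - tune it for your own databases
            {
                Console.WriteLine("Warning: the transaction log looks heavily fragmented.");
            }
        }
    }
}
```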
Hassle-free monitoring of VLFs
If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

Resources
MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

Disclosure
I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to place your order.

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know, although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it, and a few folks on Twitter pointed me in the right direction.

The question I ran into is: how do I create a type instance of a generic type when I have dynamically acquired the type at runtime? Yup, it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today, I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete.

Generic Types at Runtime
One issue that came up in the process was how to figure out what type the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this:

```csharp
if (arrayType.GetInterface("IDictionary") != null)
{
    if (arrayType.IsGenericType)
    {
        var keyType = arrayType.GetGenericArguments()[0];
        var valueType = arrayType.GetGenericArguments()[1];
        // …
    }
}
```

The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>, so I know what the parent container class type is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (i.e. string, CustomerEntity). That's the easy part.

Creating a Generic Type and Providing Generic Parameters at Runtime
The next problem is: how do I get a concrete type instance for the generic type? I know the type name and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes, I tried them unsuccessfully first):

```csharp
Type elementType = typeof(keyvalue<keyType, valueType>);
Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>);
```

The problem is that this explicit syntax expects a type literal, not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of:

```csharp
Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType);
```

The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works, however, and produces a non-generic type reference. You can see the difference between the full generic type and the non-typed (?) generic type in the debugger: the nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType) it's then easy to create an instance.
In my case the parser parses through the JSON and, when it completes parsing the value/object, it creates a new keyvalue<K,V> instance. Now that I know the element type, that's pretty trivial with:

```csharp
// Objects start out null until we find the opening tag
resultObject = Activator.CreateInstance(elementType);
```

Here the result object is picked up by the JSON array parser, which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items.

Parsing through a Generic Type when you only have Runtime Type Information
When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem again, though, is that this needs to happen at runtime, which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier, and probably only slightly slower, way to do this is to use the dynamic type to collect the items and assign them, to avoid all the dynamic casting madness:

```csharp
else if (IsIDictionary)
{
    IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary;
    foreach (dynamic item in items)
    {
        dict.Add(item.key, item.value);
    }
    return dict;
}
```

This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here).

Static <-> Dynamic
Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works, but it's pretty nasty code. If it weren't for dynamic, that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls littering the code. Fortunately this type of type convulsion is rather rare and reserved for system-level code. It's not every day that you create a string-to-object parser after all :-)

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp.
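Pulling the fragments above together, here is a small, self-contained sketch of the technique the post describes: discovering the generic arguments of a dictionary type at runtime, building a closed generic type with MakeGenericType, and instantiating it with Activator.CreateInstance. This is not the post's actual parser code; the KeyValueHolder type and the Dictionary<string, int> input are made-up stand-ins for the post's keyvalue<K,V> and Dictionary<string, CustomerEntity>.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Stand-in for the post's keyvalue<K,V> holder type (hypothetical, for illustration only).
public class KeyValueHolder<K, V>
{
    public K key { get; set; }
    public V value { get; set; }
}

public static class GenericTypeDemo
{
    public static void Main()
    {
        // Pretend this Type arrived at runtime, e.g. from a parser.
        Type arrayType = typeof(Dictionary<string, int>);

        if (arrayType.GetInterface("IDictionary") != null && arrayType.IsGenericType)
        {
            // Discover the generic arguments in declaration order.
            Type keyType = arrayType.GetGenericArguments()[0];     // string
            Type valueType = arrayType.GetGenericArguments()[1];   // int

            // Build a closed generic type from the open generic definition.
            Type elementType = typeof(KeyValueHolder<,>).MakeGenericType(keyType, valueType);

            // Create an instance of the closed holder type and of the dictionary itself.
            dynamic holder = Activator.CreateInstance(elementType);
            holder.key = "answer";
            holder.value = 42;

            IDictionary dict = (IDictionary)Activator.CreateInstance(arrayType);
            dict.Add(holder.key, holder.value);

            Console.WriteLine(elementType.FullName);   // closed KeyValueHolder`2[[System.String, ...],[System.Int32, ...]]
            Console.WriteLine(dict["answer"]);         // 42
        }
    }
}
```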

    Read the article

  • Weekend With #iPad

    - by andrewbrust
Saturday morning, I got up, got dressed and took a 7-minute walk up to the Apple Store in New York's Meatpacking District to pick up my reserved iPad. This precinct, which borders Greenwich Village (where I live and grew up) was, when I was a kid, a very industrial and smelly neighborhood during the day and a rough neighborhood at night. So imagine my sense of irony as I walked up Hudson Street towards 14th Street, to go wait in line with a bunch of hipsters to buy an iPad on launch day.

Numerous blue T-shirt-clad Apple Store workers were on hand to check people in to the line specifically identified for people who had reserved an iPad. Other workers passed out water and all of them, I kid you not, applauded people as they got their chance to go into the store and buy their devices. They also cheered people and yelled "congratulations" as they left. The event had all the charm of a mass wedding officiated by Reverend Sun Myung Moon. Once inside, a nice dude named Trey, with lots of tattoos on his calves, helped me and I acquired my device in short order. Another guy helped me activate the device, which was comical, because that has to be done through iTunes, which I hadn't logged into in a while. Turns out my user ID was my email address from the company I sold 5 1/2 years ago. Who knew? Regardless, I got the device working, packed up and left the store, shuddering as I was cheered and congratulated. By this time (about 10:30am) the line for reserved units, and even walk-ins, was gone. The iPhone launch this was not.

As much as I detested the Apple Store experience, I must say the device is really nice. The screen is bright, the colors are bold, and the experience is ultra-smooth. I quickly tested Safari, YouTube, Google Maps, and then installed a few apps, including the New York Times Editors' Choice and a couple of Twitter clients.

Some initial raves: Google Maps and Street View on the iPad are just amazing. The screen is full-size like a PC or Mac, but it's right in front of you, responding to taps and flicks and pinches, and it's really engulfing. Video and photos are really nice on this device, despite the fact that 16:9 and anamorphic aspect ratio content is letterboxed. It still looks amazing. And apps that are designed especially for the iPad, including The Weather Channel and Gilt and Kayak, just look stunning. The richness, the friendly layout, the finger-friendly UIs, and the satisfaction of not having a keyboard between you and the information you're managing, while you sit on a couch or an easy chair, is just really a beautiful thing. The mere experience of seeing these apps' splash screens causes a shiver and goosebumps. Truly. The iPad is not a desktop machine, and it's not a pocket device. That doesn't mean it's useless though. It's the perfect "couchtop" computer.

Now some downsides: the WiFi radio seems a bit flakey. More than a few times, I have had to toggle the WiFi off and back on to get it to connect properly. Worse yet, the iPad is totally bamboozled by the fact that I have four WiFi access points in my house, each with the same SSID. My laptops are smart enough to roam from one to the other, but the iPad seems to maintain an affinity for the downstairs access point, even if I'm turning it on two flights up. Telling the iPad to "forget" my WiFi network and then re-associate with it doesn't help.

More downers: as you might expect, there are far more applications developed for the iPhone than the iPad.
And although iPhone apps run on the iPad, that provides about the same experience as watching standard def on a big HD flat panel, complete with the lousy choice of thick black borders or zooming the picture in to fill the screen.  And speaking of iPhone Apps, I can’t get the Sonos one to work.  Ideally, they’d have a dedicated iPad app and it would work on the first try.  And the iPad is just as bad as any netbook when it comes to being a magnet for fingerprints.  The lack of multi-tasking is quite painful too – truly, I don’t mind if only one app can be active at once, but the lack of ability to switch between apps, and the requirement to return to the home screen and re-launch a previous app to switch back, is already old and I’ve had the thing less than 48 hours. These are just initial impressions.  I’ll have a fuller analysis soon, after I’ve had some more break-in time with my new toy.  I’ll be thinking not just about the iPad and iPhone but also about Android, the 2.1 update for which was pushed to my Droid today, and Windows Phone 7, whose “hub” concept I now understand the value of.  This has been a great year for alternative computing devices, and I see no net downside for Apple, Google or Microsoft.  Exciting times.

    Read the article

  • Lync Server 2010

    - by ManojDhobale
Microsoft Lync Server 2010 communications software and its client software, such as Microsoft Lync 2010, enable your users to connect in new ways and to stay connected, regardless of their physical location. Lync 2010 and Lync Server 2010 bring together the different ways that people communicate in a single client interface, are deployed as a unified platform, and are administered through a single management infrastructure. The main workloads are described below.

IM and presence: Instant messaging (IM) and presence help your users find and communicate with one another efficiently and effectively. IM provides an instant messaging platform with conversation history, and supports public IM connectivity with users of public IM networks such as MSN/Windows Live, Yahoo!, and AOL. Presence establishes and displays a user's personal availability and willingness to communicate through the use of common states such as Available or Busy. This rich presence information enables other users to immediately make effective communication choices.

Conferencing: Lync Server includes support for IM conferencing, audio conferencing, web conferencing, video conferencing, and application sharing, for both scheduled and impromptu meetings. All these meeting types are supported with a single client. Lync Server also supports dial-in conferencing so that users of public switched telephone network (PSTN) phones can participate in the audio portion of conferences. Conferences can seamlessly change and grow in real time. For example, a single conference can start as just instant messages between a few users, and escalate to an audio conference with desktop sharing and a larger audience instantly, easily, and without interrupting the conversation flow.

Enterprise Voice: Enterprise Voice is the Voice over Internet Protocol (VoIP) offering in Lync Server 2010. It delivers a voice option to enhance or replace traditional private branch exchange (PBX) systems. In addition to the complete telephony capabilities of an IP PBX, Enterprise Voice is integrated with rich presence, IM, collaboration, and meetings. Features such as call answer, hold, resume, transfer, forward and divert are supported directly, while personalized speed dialing keys are replaced by Contacts lists, and automatic intercom is replaced with IM. Enterprise Voice supports high availability through call admission control (CAC), branch office survivability, and extended options for data resiliency.

Support for remote users: You can provide full Lync Server functionality for users who are currently outside your organization's firewalls by deploying servers called Edge Servers to provide a connection for these remote users. These remote users can connect to conferences by using a personal computer with Lync 2010 installed, the phone, or a web interface. Deploying Edge Servers also enables you to federate with partner or vendor organizations. A federated relationship enables your users to put federated users on their Contacts lists, exchange presence information and instant messages with these users, and invite them to audio calls, video calls, and conferences.

Integration with other products: Lync Server integrates with several other products to provide additional benefits to your users and administrators. Meeting tools are integrated into Outlook 2010 to enable organizers to schedule a meeting or start an impromptu conference with a single click and make it just as easy for attendees to join. Presence information is integrated into Outlook 2010 and SharePoint 2010.
Exchange Unified Messaging (UM) provides several integration features. Users can see whether they have new voice mail within Lync 2010. They can click a play button in the Outlook message to hear the audio voice mail, or view a transcription of the voice mail in the notification message.

Simple deployment: To help you plan and deploy your servers and clients, Lync Server provides the Microsoft Lync Server 2010 Planning Tool and the Topology Builder. The Planning Tool is a wizard that interactively asks you a series of questions about your organization, the Lync Server features you want to enable, and your capacity planning needs. It then creates a recommended deployment topology based on your answers, and produces several forms of output to aid your planning and installation. Topology Builder is an installation component of Lync Server 2010. You use Topology Builder to create, adjust and publish your planned topology, and it also validates your topology before you begin server installations. When you install Lync Server on individual servers, the installation program deploys each server as directed in the topology.

Simple management: After you deploy Lync Server, it offers the following powerful and streamlined management tools:
  • Active Directory for its user information, which eliminates the need for separate user and policy databases.
  • Microsoft Lync Server 2010 Control Panel, a new web-based graphical user interface for administrators. With this web-based UI, Lync Server administrators can manage their systems from anywhere on the corporate network, without needing specialized management software installed on their computers.
  • Lync Server Management Shell, a command-line management tool based on the Windows PowerShell command-line interface. It provides a rich command set for administration of all aspects of the product, and enables Lync Server administrators to automate repetitive tasks using a familiar tool.

While the IM and presence features are automatically installed in every Lync Server deployment, you can choose whether to deploy conferencing, Enterprise Voice, and remote user access, to tailor your deployment to your organization's needs.

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
There are many different types of analyses, each one with its own pros and cons.

Relational reports have a predefined structure, and end users cannot change it. They are simple for end users to use. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users, and any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models.

For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report, and typically they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process.

With data mining, a structure is not selected in advance; data mining searches for the structure. As a result, data mining can give you the most valuable results, because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes.

Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the "I" in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.

Books
  • Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
  • Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
  • Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can't miss this book if you want to mine your data with SQL Server tools.
  • Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both a business and a technical perspective.
  • Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
  • Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
  • Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – an in-depth explanation of the most popular data mining algorithms.
  • Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
  • Paolo Giudici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.

Forthcoming presentations
I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:
  • Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
  • Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
Sinergija 2013, Belgrade, Serbia
  • Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
SQL Rally Amsterdam, Netherlands
  • Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel.
Why three different titles for the same presentation? I don't know; I guess I forgot the name I proposed every time right after I sent the proposal.

Courses
Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.

OK, now you know: no more excuses. Start learning data mining and get the most out of your data!

    Read the article

  • Pella Increases Online Appointment Scheduling and Rapidly Personalizes and Updates Marketing Initiatives

    - by Michael Snow
Originally posted on the Oracle Customers page.

Oracle Customer: Pella Corporation
Location: Pella, Iowa
Industry: Industrial Manufacturing
Employees: 7,100

Pella Corporation is an innovative leader in creating a better view for homes and businesses by designing, testing, manufacturing, and installing quality windows and doors for new construction, remodeling, and replacement applications. A family-owned company, Pella has an 88-year history of innovation and, today, is the second-largest manufacturer of windows and doors in the country, including patio, entry, and storm doors. The company has 10 manufacturing facilities in the United States and window and door showrooms across the United States and Canada.

In-home consultations are an important part of Pella's sales process. Several years ago, the company launched an online appointment scheduling tool to improve customer convenience. While the functionality worked well, the company wanted to increase online conversion rates and decrease the number of incomplete online appointment schedules. It also wanted to give its business analysts and other line-of-business personnel the ability to update the scheduling tool and interface quickly, without needing IT team intervention and recoding, to better capitalize on opportunities and personalize the interface for specific markets. Pella also looked to reduce IT complexity by selecting a system that integrated easily with its Oracle E-Business Suite Release 12.1 enterprise applications.

Pella, which has a large Oracle footprint, selected Oracle WebCenter Sites as the foundation for its new, real-time appointment scheduling application. It used the solution to re-engineer the scheduling process and the information required to set up an appointment. Just a few months after launch, it is seeing improvement in the number of appointments booked online and experiencing fewer abandoned appointments during the scheduling process. As important, Pella can now quickly and easily make changes to images, video, and content displayed on the scheduling tool interface, delivering greater business agility. Previously, such changes required a developer and weeks of coding and testing. Today, a member of Pella's business analyst team can complete the changes in hours. This capability enables Pella to personalize the Web experience for customers. For example, it can display different products or images for clients in different regions.

The solution is also highly scalable. Pella is using Oracle WebCenter Sites for appointment scheduling now and plans to migrate Pella.com, its configurator tool, and dealer microsites onto the platform. Further, Pella plans to leverage the solution to optimize the experience for mobile devices. "Moving ahead, we expect to extensively leverage Oracle WebCenter Sites to gain greater flexibility in updating the Web experience, thanks to the ability to make updates quickly without developer resources. Segmentation and targeting capabilities will allow us to create a more personalized experience across both traditional and mobile platforms," said Teri Lancaster, IT manager, customer experience applications, Pella Corporation.

A word from Pella Corporation
"Oracle WebCenter Sites, from the start, delivered important benefits. We've redesigned the online scheduling process and are seeing more potential customers completing consultation bookings online. More important, the solution opens a world of other possibilities as we plan to migrate Pella.com and our dealer microsites to the platform, and leverage it to optimize the Web experience for our mobile devices." – Teri Lancaster, IT Manager, Customer Experience Applications, Pella Corporation

Oracle Product and Services
Oracle WebCenter Sites

Why Oracle
Pella has a long-standing relationship with Oracle. "We look to Oracle first for a solution. Our Oracle account team came to us with several solutions, and Oracle WebCenter Sites delivered the scalability, ease-of-use, and flexibility that we required for the appointment scheduling initiative and other Web projects on the horizon, including migrating Pella.com and optimizing our site for mobile platforms," said Teri Lancaster, IT manager, customer experience applications, Pella Corporation.

Implementation Process
The Pella implementation team, working with Oracle partner Element Solutions, LLC, integrated the appointment setting application with Pella.com as well as the company's Oracle E-Business Suite customer relationship management applications. Using Oracle WebCenter Sites' development tools and Subversion capabilities to develop the application, the Element Solutions and Pella teams could work remotely and collaboratively, accelerating deployment. Pella went live with the new scheduling tool in just six months.

Partner
Oracle Partner: Element Solutions, LLC
Element Solutions was instrumental at every major stage of the project, including design creation and approval, development, training, and rollout. "Element Solutions was a vital partner for our Oracle WebCenter Sites initiative. The team provided guidance and, more important, critical knowledge transfer at every stage, which equipped us to get the most out of this powerful and versatile solution. We were definitely collaboration partners," Lancaster said.

Resources
  • Pella Corporation Upgrades Enterprise Applications to Continue to Improve Manufacturing Efficiency
  • Thousands of Customers Successfully and Smoothly Upgrade to Oracle E-Business Suite 12.1 for New Functionality, Lower Operating Costs and Improved Shared Operations
  • Managing the Virtual World

    Read the article

  • WebLogic Server JMS WLST Script – Who is Connected To My Server

    - by james.bayer
Ever want to know who was connected to your WebLogic Server instance for troubleshooting? An email exchange about this topic and JMS came up this week, and I've heard it come up once or twice before too. Sometimes it's interesting or helpful to know the list of JMS clients (IP addresses, JMS destinations, message counts) that are connected to a particular JMS server, and that can be helpful for troubleshooting. Tom Barnes from the WebLogic Server JMS team provided some helpful advice:

The JMS connection runtime mbean has "getHostAddress", which returns the host address of the connecting client JVM as a string. A connection runtime can contain session runtimes, which in turn can contain consumer runtimes. The consumer runtime, in turn, has "getDestinationName" and "getMemberDestinationName". I think that this means you could write a WLST script, for example, to dump all consumers, their destinations, plus their parent session's parent connection's host addresses. Note that the client runtime mbeans (connection, session, and consumer) won't necessarily be hosted on the same JVM as a destination that's in the same cluster (client messages route from their connection host to their ultimate destination in the same cluster).

Writing the Script
So armed with this information, I decided to take the challenge and see if I could write a WLST script to do this. It's always helpful to have the WebLogic Server MBean Reference handy for activities like this. This one is focused on JMS Consumers and I only took a subset of the information available, but it could be modified easily to do Producers. I haven't tried this on a more complex environment, but it works in my simple sandbox case, so it should give you the general idea.

```python
# Better to use the Secure Config File approach for login, as shown here:
# http://buttso.blogspot.com/2011/02/using-secure-config-files-with-weblogic.html
connect('weblogic','welcome1','t3://localhost:7001')

# Navigate to the Server Runtime and get the Server Name
serverRuntime()
serverName = cmo.getName()

# Multiple JMS Servers could be hosted by a single WLS server
cd('JMSRuntime/' + serverName + '.jms')
jmsServers = cmo.getJMSServers()

# Build the list of all JMSServers for this server
namesOfJMSServers = ''
for jmsServer in jmsServers:
    namesOfJMSServers = namesOfJMSServers + jmsServer.getName() + ' '

# Count the number of connections
jmsConnections = cmo.getConnections()
print str(len(jmsConnections)) + ' JMS Connections found for ' + serverName + ' with JMSServers ' + namesOfJMSServers

# Recurse the MBean tree for each connection and pull out some information about consumers
for jmsConnection in jmsConnections:
    try:
        print 'JMS Connection:'
        print '  Host Address = ' + jmsConnection.getHostAddress()
        print '  ClientID = ' + str( jmsConnection.getClientID() )
        print '  Sessions Current = ' + str( jmsConnection.getSessionsCurrentCount() )
        jmsSessions = jmsConnection.getSessions()
        for jmsSession in jmsSessions:
            jmsConsumers = jmsSession.getConsumers()
            for jmsConsumer in jmsConsumers:
                print '  Consumer:'
                print '    Name = ' + jmsConsumer.getName()
                print '    Messages Received = ' + str(jmsConsumer.getMessagesReceivedCount())
                print '    Member Destination Name = ' + jmsConsumer.getMemberDestinationName()
    except:
        print 'Error retrieving JMS Consumer Information'
        dumpStack()

# Cleanup
disconnect()
exit()
```

Example Output
I expect the output to look something like this, looping through all the connections; this is just the first one:

```
1 JMS Connections found for AdminServer with JMSServers myJMSServer
JMS Connection:
  Host Address = 127.0.0.1
  ClientID = None
  Sessions Current = 16
  Consumer:
    Name = consumer40
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
```

Notice that it has the IP address of the client. There are 16 sessions open because I'm using an MDB, which defaults to 16 connections, so this matches what I expect. Let's see what the full output actually looks like:

```
D:\Oracle\fmw11gr1ps3\user_projects\domains\offline_domain>java weblogic.WLST d:\temp\jms.py

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'offline_domain'.

Warning: An insecure protocol was used to connect to the server.
To ensure on-the-wire security, the SSL port or Admin port should be used instead.

Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root.
For more help, use help(serverRuntime)

1 JMS Connections found for AdminServer with JMSServers myJMSServer
JMS Connection:
  Host Address = 127.0.0.1
  ClientID = None
  Sessions Current = 16
  Consumer:
    Name = consumer40
    Messages Received = 2
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer34
    Messages Received = 2
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer37
    Messages Received = 2
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer16
    Messages Received = 2
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer46
    Messages Received = 2
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer49
    Messages Received = 2
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer43
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer55
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer25
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer22
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer19
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer52
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer31
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer58
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer28
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue
  Consumer:
    Name = consumer61
    Messages Received = 1
    Member Destination Name = myJMSModule!myQueue

Disconnected from weblogic server: AdminServer

Exiting WebLogic Scripting Tool.
```

Thanks to Tom Barnes for the hints and the inspiration to write this up. Image of telephone switchboard courtesy of http://www.JoeTourist.net/ JoeTourist InfoSystems.

    Read the article

< Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >