Search Results

Search found 5749 results on 230 pages for 'miles away'.


  • SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Introduction – Day 1 of 31

    - by pinaldave
    List of all the Interview Questions and Answers Series blogs Posts covering interview questions and answers always make for interesting reading.  Some people like the subject for their helpful hints and thought provoking subject, and others dislike these posts because they feel it is nothing more than cheating.  I’d like to discuss the pros and cons of a Question and Answer format here. Interview Questions and Answers are Helpful Just like blog posts, books, and articles, interview Question and Answer discussions are learning material.  The popular Dummy’s books or Idiots Guides are not only for “dummies,” but can help everyone relearn the fundamentals.  Question and Answer discussions can serve the same purpose.  You could call this SQL Server Fundamentals or SQL Server 101. I have administrated hundreds of interviews during my career and I have noticed that sometimes an interviewee with several years of experience lacks an understanding of the fundamentals.  These individuals have been in the industry for so long, usually working on a very specific project, that the ABCs of the business have slipped their mind. Or, when a college graduate is looking to get into the industry, he is not expected to have experience since he is just graduated. However, the new grad is expected to have an understanding of fundamentals and theory.  Sometimes after the stress of final exams and graduation, it can be difficult to remember the correct answers to interview questions, though. An interview Question and Answer discussion can be very helpful to both these individuals.  It is simply a way to go back over the building blocks of a topic.  Many times a simple review like this will help “jog” your memory, and all those previously-memorized facts will come flooding back to you.  It is not a way to re-learn a topic, but a way to remind yourself of what you already know. A Question and Answer discussion can also be a way to go over old topics in a more interesting manner.  Especially if you have been working in the industry, or taking lots of classes on the topic, everything you read can sound like a repeat of what you already know.  Going over a topic in a new format can make the material seem fresh and interesting.  And an interested mind will be more engaged and remember more in the end. Interview Questions and Answers are Harmful A common argument against a Question and Answer discussion is that it will give someone a “cheat sheet.” A new guy with relatively little experience can read the interview questions and answers, and then memorize them. When an interviewer asks him the same questions, he will repeat the answers and get the job. Honestly, is he good hire because he memorized the interview questions? Wouldn’t it be better for the interviewer to hire someone with actual experience?  The answer is not as easy as it seems – there are many different factors to be considered. If the interviewer is asking fundamentals-related questions only, he gets the answers he wants to hear, and then hires this first candidate – there is a good chance that he is hiring based on personality rather than experience.  If the interviewer is smart he will ask deeper questions, have more than one person on the interview team, and interview a variety of candidates.  If one interviewee happens to memorize some answers, it usually doesn’t mean he will automatically get the job at the expense of more qualified candidates. 
Another argument against interview Question and Answers is that it will give candidates a false sense of confidence, and that they will appear more qualified than they are. Well, if that is true, it will not last after the first interview when the candidate is asked difficult questions and he cannot find the answers in the list of interview Questions and Answers.  Besides, confidence is one of the best things to walk into an interview with! In today’s competitive job market, there are often hundreds of candidates applying for the same position.  With so many applicants to choose from, interviewers must make decisions about who to call back and who to hire based on their gut feeling.  One drawback to reading an interview Question and Answer article is that you might sound very boring in your interview – saying the same thing as every single candidate, and parroting answers that sound like someone else wrote them for you – because they did.  However, it is definitely better to go to an interview prepared, just make sure that you give a lot of thought to your answers to make them sound like your own voice.  Remember that you will be hired based on your skills as well as your personality, so don’t think that having all the right answers will make get you hired.  A good interviewee will be prepared, confident, and know how to stand out. My Opinion A list of interview Questions and Answers is really helpful as a refresher or for beginners. To really ace an interview, one needs to have real-world, hands-on experience with SQL Server as well. Interview questions just serve as a starter or easy read for experienced professionals. When I have to learn new technology, I often search online for interview questions and get an idea about the breadth and depth of the technology. Next Action I am going to write about interview Questions and Answers for next 30 days. I have previously written a series of interview questions and answers; now I have re-written them keeping the latest version of SQL Server and current industry progress in mind. If you have faced interesting interview questions or situations, please write to me and I will publish them as a guest post. If you want me to add few more details, leave a comment and I will make sure that I do my best to accommodate. Tomorrow we will start the interview Questions and Answers series, with a few interesting stories, best practices and guest posts. We will have a prize give-away and other awards when the series ends. List of all the Interview Questions and Answers Series blogs Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such it's clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable - the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S

    To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea is that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :)

    Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nutshell, the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values.

        [BrowserTheory]
        [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")]
        public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider)
        {
            //Arrange
            _browser = seleniumProvider.GetBrowser();
            _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");

            //Assert
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);
            Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() methods. The (inevitable) problem arose when it came to executing these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However, for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up.
    Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately, the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices:

        new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com");

    This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page.

    Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit, on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this:

        [Test]
        public void Purchase1UserLicenceNoSupport()
        {
            //Arrange
            ISelenium browser = BrowserHelpers.GetBrowser();
            var db = DbHelpers.GetWebsiteDBDataContext();
            browser.Start();
            browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);

            //Assert
            Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed, the //Arrange phase now has to handle setting up the ISelenium object (as the attribute that previously did this has gone away), and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test (a sketch of what that clean-up might look like follows below).
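    The snipped clean-up might look something like the sketch below. BrowserHelpers and DbHelpers are the post's own helpers, so the field types shown here (and the assumption that 'browser' and 'db' are promoted from locals to fixture fields) are illustrative rather than taken from the original code; the Stop() and Dispose() calls are the standard Selenium RC and LINQ to SQL ones.

        // Assumed to live inside the same MbUnit test fixture as the test above.
        private ISelenium browser;
        private WebsiteDBDataContext db;   // hypothetical DataContext type returned by DbHelpers

        [TearDown]
        public void CleanUp()
        {
            // End the browser session so the RC (or the grid hub) can hand the slot to the next test.
            if (browser != null)
            {
                browser.Stop();
            }

            // Each test owns its own database connection, so it is responsible for disposing it.
            if (db != null)
            {
                db.Dispose();
            }
        }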
    With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level (a fixture-level sketch follows at the end of this post). Since I wanted to run all tests concurrently, I marked mine at the assembly level in AssemblyInfo.cs using the following:

        [assembly: DegreeOfParallelism(3)]
        [assembly: Parallelizable(TestScope.All)]

    The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs. With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes to 55 minutes - an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that inefficiencies in the hub code, my test environment or the test runner code (or, most likely, some combination of all three) contribute to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RCs with the hub and up the DegreeOfParallelism.

    *This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.
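    As mentioned above, [Parallelizable] can also be applied at the fixture level if only a subset of tests should run concurrently. A minimal sketch - the fixture name is made up, and the assembly-level DegreeOfParallelism still caps how many tests run at once:

        // Marks only this fixture's tests as eligible for concurrent execution.
        [TestFixture]
        [Parallelizable]
        public class ShoppingCartPriceTests
        {
            [Test]
            public void Purchase1UserLicenceNoSupport()
            {
                // ... body as shown earlier ...
            }
        }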

    Read the article

  • I Hereby Resolve… (T-SQL Tuesday #14)

    - by smisner
    It’s time for another T-SQL Tuesday, hosted this month by Jen McCown (blog|twitter), on the topic of resolutions. Specifically, “what techie resolutions have you been pondering, and why?” I like that word – pondering – because I ponder a lot. And while there are many things that I do already because of my job, there are many more things that I ponder about doing…if only I had the time. Then I ponder about making time, but then it’s back to work! In 2010, I was moderately more successful in making time for things that I ponder about than I had been in years past, and I hope to continue that trend in 2011. If Jen hadn’t settled on this topic, I could keep my ponderings to myself and no one would ever know the outcome, but she’s egged me on (and everyone else that chooses to participate)! So here goes… For me, having resolve to do something means that I wouldn’t be doing that something as part of my ordinary routine. It takes extra effort to make time for it. It’s not something that I do once and check off a list, but something that I need to commit to over a period of time. So with that in mind, I hereby resolve… To Learn Something New… One of the things I love about my job is that I get to do a lot of things outside of my ordinary routine. It’s a veritable smorgasbord of opportunity! So what more could I possibly add to that list of things to do? Well, the more I learn, the more I realize I have so much more to learn. It would be much easier to remain in ignorant bliss, but I was born to learn. Constantly. (And apparently to teach, too– my father will tell you that as a small child, I had the neighborhood kids gathered together to play school – in the summer. I’m sure they loved that – but they did it!) These are some of things that I want to dedicate some time to learning this year: Spatial data. I have a good understanding of how maps in Reporting Services works, and I can cobble together a simple T-SQL spatial query, but I know I’m only scratching the surface here. Rob Farley (blog|twitter) posted interesting examples of combining maps and PivotViewer, and I think there’s so many more creative possibilities. I’ve always felt that pictures (including charts and maps) really help people get their minds wrapped around data better, and because a lot of data has a geographic aspect to it, I believe developing some expertise here will be beneficial to my work. PivotViewer. Not only is PivotViewer combined with maps a useful way to visualize data, but it’s an interesting way to work with data. If you haven’t seen it yet, check out this interactive demonstration using Netflx OData feed. According to Rob Farley, learning how to work with PivotViewer isn’t trivial. Just the type of challenge I like! Security. You’ve heard of the accidental DBA? Well, I am the accidental security person – is there a word for that role? My eyes used to glaze over when having to study about security, or  when reading anything about it. Then I had a problem long ago that no one could figure out – not even the vendor’s tech support – until I rolled up my sleeves and painstakingly worked through the myriad of potential problems to resolve a very thorny security issue. I learned a lot in the process, and have been able to share what I’ve learned with a lot of people. But I’m not convinced their eyes weren’t glazing over, too. I don’t take it personally – it’s just a very dry topic! 
So in addition to deepening my understanding about security, I want to find a way to make the subject as it relates to SQL Server and business intelligence more accessible and less boring. Well, there’s actually a lot more that I could put on this list, and a lot more things I have plans to do this coming year, but I run the risk of overcommitting myself. And then I wouldn’t have time… To Have Fun! My name is Stacia and I’m a workaholic. When I love what I do, it’s difficult to separate out the work time from the fun time. But there are some things that I’ve been meaning to do that aren’t related to business intelligence for which I really need to develop some resolve. And they are techie resolutions, too, in a roundabout sort of way! Photography. When my husband and I went on an extended camping trip in 2009 to Yellowstone and the Grand Tetons, I had a nice little digital camera that took decent pictures. But then I saw the gorgeous cameras that other tourists were toting around and decided I needed one too. So I bought a Nikon D90 and have started to learn to use it, but I’m definitely still in the beginning stages. I traveled so much in 2010 and worked on two book projects that I didn’t have a lot of free time to devote to it. I was very inspired by Kimberly Tripp’s (blog|twitter) and Paul Randal’s (blog|twitter) photo-adventure in Alaska, though, and plan to spend some dedicated time with my camera this year. (And hopefully before I move to Alaska – nothing set in stone yet, but we hope to move to a remote location – with Internet access – later this year!) Astronomy. I have this cool telescope, but it suffers the same fate as my camera. I have been gone too much and busy with other things that I haven’t had time to work with it. I’ll figure out how it works, and then so much time passes by that I forget how to use it. I have this crazy idea that I can actually put the camera and the telescope together for astrophotography, but I think I need to start simple by learning how to use each component individually. As long as I’m living in Las Vegas, I know I’ll have clear skies for nighttime viewing, but when we move to Alaska, we’ll be living in a rain forest. I have no idea what my opportunities will be like there – except I know that when the sky is clear, it will be far more amazing than anything I can see in Vegas – even out in the desert - because I’ll be so far away from city light pollution. I’ve been contemplating putting together a blog on these topics as I learn. As many of my fellow bloggers in the SQL Server community know, sometimes the best way to learn something is to sit down and write about it. I’m just stumped by coming up with a clever name for the new blog, which I was thinking about inaugurating with my move to Alaska. Except that I don’t know when that will be exactly, so we’ll just have to wait and see which comes first!

    Read the article

  • What is SharePoint Out of the Box?

    - by Bil Simser
    It’s always fun in the blog-o-sphere and SharePoint bloggers always keep the pot boiling. Bjorn Furuknap recently posted a blog entry titled Why Out-of-the-Box Makes No Sense in SharePoint, quickly followed up by a rebuttal by Marc Anderson on his blog. Okay, now that we have all the players and the stage what’s the big deal? Bjorn started his post saying that you don’t use “out-of-the-box” (OOTB) SharePoint because it makes no sense. I have to disagree with his premise because what he calls OOTB is basically installing SharePoint and admiring it, but not using it. In his post he lays claim that modifying say the OOTB contacts list by removing (or I suppose adding) a column, now puts you in a situation where you’re no longer using the OOTB functionality. Really? Side note. Dear Internet, please stop comparing building software to building houses. Or comparing software architecture to building architecture. Or comparing web sites to making dinner. Are you trying to dumb down something so the general masses understand it? Comparing a technical skill to a construction operation isn’t the way to do this. Last time I checked, most people don’t know how to build houses and last time I checked people reading technical SharePoint blogs are generally technical people that understand the terms you use. Putting metaphors around software development to make it easy to understand is detrimental to the goal. </rant> Okay, where were we? Right, adding columns to lists means you are no longer using the OOTB functionality. Yeah, I still don’t get it. Another statement Bjorn makes is that using the OOTB functionality kills the flexibility SharePoint has in creating exactly what you want. IMHO this really flies in the absolute face of *where* SharePoint *really* shines. For the past year or so I’ve been leaning more and more towards OOTB solutions over custom development for the simple reason that its expensive to maintain systems and code and assets. SharePoint has enabled me to do this simply by providing the tools where I can give users what they need without cracking open up Visual Studio. This might be the fact that my day job is with a regulated company and there’s more scrutiny with spending money on anything new, but frankly that should be the position of any responsible developer, architect, manager, or PM. Do you really want to throw money away because some developer tells you that you need a custom web part when perhaps with some creative thinking or expectation setting with customers you can meet the need with what you already have. The way I read Bjorn’s terminology of “out-of-the-box” is install the software and tell people to go to a website and admire the OOTB system, but don’t change it! For those that know things like WordPress, DotNetNuke, SubText, Drupal or any of those content management/blogging systems, its akin to installing the software and setting up the “Hello World” blog post or page, then staring at it like it’s useful. “Yes, we are using WordPress!”. Then not adding a new post, creating a new category, or adding an About page. Perhaps I’m wrong in my interpretation. This leads us to what is OOTB SharePoint? To many people I’ve talked to the last few hours on twitter, email, etc. it is *not* just installing software but actually using it as it was fit for purpose. What’s the purpose of SharePoint then? 
It has many purposes, but using the OOTB templates Microsoft has given you the ability to collaborate on projects, author/share/publish documents, create pages, track items/contacts/tasks/etc. in a multi-user web based interface, and so on. Microsoft has pretty clear definitions of these different levels of SharePoint we’re talking about and I think it’s important for everyone to know what they are and what they mean. Personalization and Administration To me, this is the OOTB experience. You install the product and then are able to do things like create new lists, sites, edit and personalize pages, create new views, etc. Basically use the platform services available to you with Windows SharePoint Services (or SharePoint Foundation in 2010) to your full advantage. No code, no special tools needed, and very little user training required. Could you take someone who has never done anything in a website or piece of software and unleash them onto a site? Probably not. However I would argue that anyone who’s configured the Outlook reading layout or applied styles to a Word document probably won’t have too much difficulty in using SharePoint OUT OF THE BOX. Customization Here’s where things might get a bit murky but to me this is where you start looking at HTML/ASPX page code through SharePoint Designer, using jQuery scripts and plugging them into Web Part Pages via a Content Editor Web Part, and generally enhancing the site. The JavaScript debate might kick in here claiming it’s no different than C#, and frankly you can totally screw a site up with jQuery on a CEWP just as easily as you can with a C# delegate control deployed to the server file system. However (again, my blog, my opinion) the customization label comes in when I need to access the server (for example creating a custom theme) or have some kind of net-new element I add to the system that wasn’t there OOTB. It’s not content (like a new list or site), it’s code and does something functional. Development Here’s were the propeller hats come on and we’re talking algorithms and unit tests and compilers oh my. Software is deployed to the server, people are writing solutions after some kind of training (perhaps), there might be some specialized tools they use to craft and deploy the solutions, there’s the possibility of exceptions being thrown, etc. There are a lot of definitions here and just like customization it might get murky (do you let non-developers build solutions using development, i.e. jQuery/C#?). In my experience, it’s much more cost effective keeping solutions under the first two umbrellas than leaping into the third one. Arguably you could say that you can’t build useful solutions without *some* kind of code (even just some simple jQuery). I think you can get a *lot* of value just from using the OOTB experience and I don’t think you’re constraining your users that much. I’m not saying Marc or Bjorn are wrong. Like Obi-Wan stated, they’re both correct “from a certain point of view”. To me, SharePoint Out of the Box makes total sense and should not be dismissed. I just don’t agree with the premise that Bjorn is basing his statements on but that’s just my opinion and his is different and never the twain shall meet.

    Read the article

  • What's up with LDoms: Part 4 - Virtual Networking Explained

    - by Stefan Hinker
    I'm back from my summer break (and some pressing business that kept me away from this), ready to continue with Oracle VM Server for SPARC ;-) In this article, we'll have a closer look at virtual networking. Basic connectivity, as we've seen in the first simple example, is easy enough. But there are numerous options for the virtual switches and virtual network ports, which we will discuss in more detail now. In this section, we will concentrate on virtual networking - the capabilities of virtual switches and virtual network ports - only. Other options involving hardware assignment or redundancy will be covered in separate sections later on.

    There are two basic components involved in virtual networking for LDoms: virtual switches and virtual network devices. The virtual switch should be seen just like a real ethernet switch. It "runs" in the service domain and moves ethernet packets back and forth. A virtual network device is plumbed in the guest domain. It corresponds to a physical network device in the real world. There, you'd plug a cable into the network port and plug the other end of that cable into a switch. In the virtual world, you do the same: you create a virtual network device for your guest and connect it to a virtual switch in a service domain. The result works just like in the physical world: the network device sends and receives ethernet packets, and the switch does all those things ethernet switches tend to do.

    If you look at the reference manual of Oracle VM Server for SPARC, there are numerous options for virtual switches and network devices. Don't be confused, it's rather straightforward, really. Let's start with the simple case and work our way to some more sophisticated options later on. In many cases, you'll want to have several guests that communicate with the outside world on the same ethernet segment. In the real world, you'd connect each of these systems to the same ethernet switch. So, let's do the same thing in the virtual world:

        root@sun # ldm add-vsw net-dev=nxge2 admin-vsw primary
        root@sun # ldm add-vnet admin-net admin-vsw mars
        root@sun # ldm add-vnet admin-net admin-vsw venus

    We've just created a virtual switch called "admin-vsw" and connected it to the physical device nxge2. In the physical world, we'd have powered up our ethernet switch and installed a cable between it and our big enterprise datacenter switch. We then created a virtual network interface for each one of the two guest systems "mars" and "venus" and connected both to that virtual switch. They can now communicate with each other and with any system reachable via nxge2. If primary were running Solaris 10, communication with the guests would not be possible. This is different with Solaris 11; please see the Admin Guide for details. Note that I've given both the vswitch and the vnet devices some sensible names, something I always recommend.

    Unless told otherwise, the LDoms Manager software will automatically assign MAC addresses to all network elements that need one. It will also make sure that these MAC addresses are unique and reuse MAC addresses to play nice with all those friendly DHCP servers out there. However, if we want to do this manually, we can also do that. (One reason might be firewall rules that work on MAC addresses.) So let's give mars a manually assigned MAC address:

        root@sun # ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars

    Within the guest, these virtual network devices have their own device driver.
    In Solaris 10, they'd appear as "vnet0". Solaris 11 would apply its usual vanity naming scheme. We can configure these interfaces just like any normal interface, give them an IP address and configure sophisticated routing rules, just like on bare metal. In many cases, using Jumbo Frames helps increase throughput performance. By default, these interfaces will run with the standard ethernet MTU of 1500 bytes. To change this, it is usually sufficient to set the desired MTU for the virtual switch. This will automatically set the same MTU for all vnet devices attached to that switch. Let's change the MTU size of our admin-vsw from the example above:

        root@sun # ldm set-vsw mtu=9000 admin-vsw primary

    Note that you can set the MTU to any value between 1500 and 16000. Of course, whatever you set needs to be supported by the physical network, too.

    Another very common area of network configuration is VLAN tagging. This can be a little confusing - my advice here is to be very clear on what you want, and perhaps draw a little diagram the first few times. As always, keeping a configuration simple will help avoid errors of all kinds. Nevertheless, VLAN tagging is very useful to consolidate different networks onto one physical cable. And as such, this concept needs to be carried over into the virtual world. Enough of the introduction, here's a little diagram to help explain how VLANs work in LDoms:

    Let's remember that any VLANs not explicitly tagged have the default VLAN ID of 1. In this example, we have a vswitch connected to a physical network that carries untagged traffic (VLAN ID 1) as well as VLANs 11, 22, 33 and 44. There might also be other VLANs on the wire, but the vswitch will ignore all those packets. We also have two vnet devices, one for mars and one for venus. Venus will see traffic from VLANs 33 and 44 only. For VLAN 44, venus will need to configure a tagged interface "vnet44000". For VLAN 33, the vswitch will untag all incoming traffic for venus, so that venus will see this as "normal" or untagged ethernet traffic. This is very useful to simplify guest configuration, and also allows venus to perform Jumpstart or AI installations over this network even if the Jumpstart or AI server is connected via VLAN 33. Mars, on the other hand, has full access to untagged traffic from the outside world, and also to VLANs 11, 22 and 33, but not 44. On the command line, we'd do this like this:

        root@sun # ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary
        root@sun # ldm add-vnet admin-net pvid=1 vid=11,22,33 admin-vsw mars
        root@sun # ldm add-vnet admin-net pvid=33 vid=44 admin-vsw venus

    Finally, I'd like to point to a neat little option that will make your life easier in all those cases where configurations tend to change over the life of a guest system. It's the "id=<somenumber>" option available for both vswitches and vnet devices. Normally, Solaris in the guest would enumerate network devices sequentially. However, it has ways of remembering this initial numbering. This is good in the physical world. In the virtual world, whenever you unbind (aka power off and disassemble) a guest system, remove and/or add network devices and bind the system again, chances are this numbering will change. Configuration confusion will follow suit.
    To avoid this, nail down the initial numbering by assigning each vnet device its device-id explicitly:

        root@sun # ldm add-vnet admin-net id=1 admin-vsw venus

    Please consult the Admin Guide for details on this, and on how to decipher these network ids from Solaris running in the guest. A consolidated sketch pulling together the commands from this article follows at the end of this post. Thanks for reading this far. Links for further reading are essentially only the Admin Guide and Reference Manual and can be found above. I hope this is useful and, as always, I welcome any comments.
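    To tie the pieces of this article together, here is a consolidated sketch of the example configuration, combining the VLAN, MTU, device-id and MAC address options discussed above. It is a sketch only - nxge2 and the MAC address are simply the values used in the examples, and the id numbers for the vnet devices are arbitrary choices:

        # Virtual switch on nxge2: untagged traffic (pvid=1) plus tagged VLANs 11, 22, 33 and 44
        root@sun # ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary

        # Jumbo Frames: raise the MTU on the switch; all attached vnet devices inherit it
        root@sun # ldm set-vsw mtu=9000 admin-vsw primary

        # mars: untagged traffic plus tagged VLANs 11, 22 and 33, with a stable device id
        root@sun # ldm add-vnet admin-net id=0 pvid=1 vid=11,22,33 admin-vsw mars

        # venus: VLAN 33 delivered untagged, VLAN 44 tagged (vnet44000 inside the guest)
        root@sun # ldm add-vnet admin-net id=1 pvid=33 vid=44 admin-vsw venus

        # Optional: pin down the MAC address of mars' vnet (e.g. for MAC-based firewall rules)
        root@sun # ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars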

    Read the article

  • B2B and B2C Commerce are alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester’s first Commerce Wave focused on B2B, released earlier this month. The reports validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, the report confirms something very important: B2B and B2C Commerce are alike… but a little different. B2B and B2C Commerce are alike… Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: search & navigation, promotions, cross-channel commerce and mobile: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.” Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). This momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion). But a little different… Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to a rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management.  Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information, and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery.  We would like to expand on each of these 3 areas: As Forrester highlights, back-end PIM is definitely needed by B2B Commerce providers. 
Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in-place, and is not fully-featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system for enabling business users to further enrich and ready catalog and content data to be presented and sold online.  This has been a significant area of investment in the Oracle Commerce platform , which continue to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end.   On the topic of web content management, we were pleased to see Forrester recognize Oracle’s unique functional capabilities in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formally FatWire).” Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online. Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles.  We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one.  So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle has an exhaustive list of B2B customers using the solution.”  What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we’ll address in an exciting new release in the coming months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management, and ERP.  
    To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources:

    - 2013 B2B Commerce Trends Report
    - B2B Commerce Whitepaper: Consumerization, Complexity, Change
    - B2B Commerce Webcast: What Industry Trend Setters Do Right
    - Internet Retailer, Web Drives Sales for B2B Companies
    - Internet Retailer, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground
    - Internet Retailer, B2B e-Commerce is poised for growth

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • Making Money from your SQL Server Blog

    - by Bill Graziano
    My SQL Server blog reading list is around one hundred blogs.  Many people are writing great content and generating lots of page views.  I see some of them running Google AdSense and trying to make a little money off their traffic.  If you want to earn some some extra money from what you’ve written there are a couple of options.  And one new option that I’m announcing here. Background Internet advertising is sold based on a few different pricing schemes.  Flat Fee.  You offer either all your impressions (page views) or some percentage of your impressions in exchange for a flat monthly fee.  CPM or cost per thousand impressions.  If the quoted price is $2 CPM you’ll get $2 for every 1,000 times the ad is displayed.  While you might think the “M” means millions, the “M” in CPM is the roman numeral for 1,000. CPC or cost per click.  This is also called PPC or pay per click.  In this method you get paid based on how many clicks there are on the ad.  CPA or cost per action.  In this method you get paid based on an action that occurs on the advertisers site after they click on the ad.  This is typically some type of sign up form.  This is how most affiliate programs work. Darren Rowse at ProBlogger has been writing about blogging and making money off blogs for years.  He has a good introduction to making money on your blog in his “Making Money” section.  If you’re interested in learning more he has a post up titled How to Make More Money From Your Blog in the New Year that links to many of his best posts on the subject. Google AdSense This is the most common method for people earning money from their blogging.  It’s easy to setup and administer.  You tell AdSense what size ads you’d like to run and it gives you a little piece of JavaScript to put on your site.  AdSense quickly learns the topics you write about and displays ads that are appropriate for your site.  I typically see ads for hosting, SQL Server tools and developer tools running in AdSense slots.  AdSense pays on a CPC model.  If you translate that back to CPM pricing you’ll see rates from $0.50 to $1.00 CPM. Amazon While you might not make much money writing books it’s now possible to make even less helping Amazon sell them.  You can sign up for an Amazon affiliate program.  Each time you send Amazon a link and someone buys the book you get a cut of that sale.  This is the CPA model from above.  Amazon can help you build some pretty nice “stores”.  Here’s the SQL Server bookstore I built for SQLTeam.com.  If you’re just putting in a page with books like I’ve done on SQLTeam you should keep your expectations low.  If you’re writing book reviews of suggesting books on your blog it really does make sense to setup an Amazon affiliate link.  People are much more likely to buy a book based on a review from a trusted source.  I always try to buy through a referral link if there is one. Amazon pays about 4% of the price as a referral fee.  You also get credit for anything else they buy while on the site.  I recently had someone buy an iPod nano with their SQL Server book making me an extra $5.60 richer!  Estimating how much you can make is difficult though.  How much attention you draw to the links and book reviews can dramatically affect the earnings. Private Ad Sales This is the hardest but potentially most lucrative option.  You sell advertising directly to companies that want to sell things to your readers.  
Typically this would be SQL Server tool vendors, hosting companies or anyone else that wants to make money off database administrators.  This is also the most difficult to do.  You’ll need the contacts at the companies and enough page views to make it worth their while.  You’ll also need software to track the page views and clicks, geo-target your ads and smooth out the impressions.  Your earnings are based on whatever you can negotiate with the companies. SQL Server Ad Network For the last couple of years I’ve run any extra ads that I sold on the SQLTeam Weblogs.  You can see an example of that on Mladen’s blog.  The ad in the upper right corner is one that I’m running for him.  (Note: Many of the ads I’m running are geo-targeted to only appear in English speaking countries.  You may see a different set of ads outside the US, Canada and the UK.  You can also see he has a couple of Google ads on his blog.)  When I run ads on his blog I split the advertising revenue with him.  They make a little and I make a little. I recently started to expand this and sell advertising specifically to run on SQL Server-related blogs.  I’m also starting to run ads on non-SQLTeam blogs.  The only way I can sell more advertising is to have more blogs to run it on.  And that’s where you come in. I’ve created a SQL Server advertising network.  I handle all the ad sales and provide the technology to serve the ads.  I handle collections and payments back to you.  You get paid at the end of each month regardless of when (or if) the advertiser actually pays.  All you need to do is add a small piece of JavaScript to your site to display the ads. If you’re writing about SQL Server and interested in earning a little money for your site I’d like to talk to you.  You can use the Contact Us page on SQLTeam.com to reach me.  Running advertising on your blog isn’t for everyone.  If you’re concerned about what advertisers might think about certain posts then you might not be a good fit.  For the most part this isn’t an issue.  You’ll also need to have a PayPal account to receive payments.  You probably won’t get rich doing this.  But you can earn extra cash on the side for doing what you would do anyway.  I do know that people have earned enough to buy themselves a nice laptop doing this. My initial target is blogs with more than 10,000 page views per month.  I expect to pay two to three times what Google pays.  If you have less than 10,000 page views per month but are still interested I’d still like to hear from you.  I may not be able to sign up smaller blogs right away but we’ll get the process started.  If you’re unsure about your traffic Google Analytics is a free tool that provides great reporting on traffic, popular posts and how people find your blog.  If you have any questions or are just curious drop me a line and I’ll try to answer your questions.
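    To put rough numbers on the pricing models described above, here is a quick back-of-the-envelope sketch. The CPM range, the "two to three times what Google pays" multiplier and the 4% Amazon referral fee are the figures quoted in this post; the 10,000 page views per month and the $40 book price are assumptions for illustration:

        using System;

        class BlogAdEstimate
        {
            static void Main()
            {
                double pageViews = 10000;                      // assumed monthly traffic

                double adSenseLow  = pageViews / 1000 * 0.50;  // $5.00/month at $0.50 CPM
                double adSenseHigh = pageViews / 1000 * 1.00;  // $10.00/month at $1.00 CPM

                // The SQL Server ad network aims to pay two to three times what Google pays.
                double networkLow  = adSenseLow * 2;           // $10.00/month
                double networkHigh = adSenseHigh * 3;          // $30.00/month

                // Amazon affiliate fee: roughly 4% of the sale price.
                double amazonPerBook = 40.00 * 0.04;           // $1.60 per referred $40 book

                Console.WriteLine("AdSense: ${0} to ${1} per month", adSenseLow, adSenseHigh);
                Console.WriteLine("Ad network: ${0} to ${1} per month", networkLow, networkHigh);
                Console.WriteLine("Amazon referral: ${0} per book", amazonPerBook);
            }
        }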

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results.  However, how do you prevent dirty data from taking up residency in your data store?  Some might argue that it’s the responsibility of the person sending you the data.  While that may be true, in practice that will rarely hold up.  It doesn’t matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data.  What constitutes bad data?  There are quite a few valid answers, for example: Invalid date values Inappropriate characters Wrong data Values that exceed a pre-set threshold While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain.  Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster.  Below are some tips for leveraging expressor to get your data into tip-top shape. Tip 1:     Build reusable data objects with embedded cleansing rules One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level.  Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data.  Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I’ve defined a constraint on zip code.  I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type for example.   The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied. Tip 2:     Unlock the power of regular expressions in Semantic Types Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint.   A regular expression is used to identify patterns within data.   The patterns could be something as simple as a date format or something much more complex such as a street address.  For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period.  So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be.   How can I account for this using regular expressions? A very simple regular expression that would look for any 4 sets of 3 digits separated by a period would be:  ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ Alternatively, the following would be the exact check for truly valid IP addresses as we had defined above:  ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$ .  In expressor, we would enter this regular expression as a constraint like this: Here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do.  Some of the options include rejecting the offending record, skipping it, or aborting the dataflow. Tip 3:     Email pattern expressions that might come in handy In the example schema that I am using, there’s a field for email.  Email addresses are often entered incorrectly because people are trying to avoid spam.  
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses: This one is short and sweet:  \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/) This one is more specific about which characters are allowed:  ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26 ) Tip 4:     Reject “dirty data” for analysis or further processing Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations.  To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator.  Then attach a Write File operator to the reject port of the Read File operator as such: Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File.  The key difference would be that the schema needs to be derived from the upstream operator as shown below: Once configured, expressor will output rejected records to the file you specified.  In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected.  This makes diagnosing errors much easier! Tip 5:    Use a Filter or Transform after the initial cleansing to finish the job Sometimes you may want to predicate the data cleansing on a more complex set of conditions.  For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes.  Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user defined algorithm or transformation.  It also supports the use of conditional logic and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach – expressor operators can be combined in many different ways to achieve the desired results.  For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table. I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here.  Their Studio ETL tool is absolutely free and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
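    The two IP-address patterns from Tip 2 can be sanity-checked with any regular expression engine before being wired into a Semantic Type constraint. A minimal sketch in C# - the patterns and the two sample addresses are taken verbatim from the example above:

        using System;
        using System.Text.RegularExpressions;

        class ConstraintPatternCheck
        {
            // Matches the shape of an IP address only: four groups of one to three digits.
            const string SimplePattern =
                @"^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$";

            // Stricter check: each octet must fall between 0 and 255.
            const string ExactPattern =
                @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$";

            static void Main()
            {
                string[] samples = { "192.168.23.123", "888.777.0.123" };

                foreach (string value in samples)
                {
                    // The simple pattern accepts 888.777.0.123; the exact pattern rejects it.
                    Console.WriteLine("{0}: simple={1}, exact={2}",
                        value,
                        Regex.IsMatch(value, SimplePattern),
                        Regex.IsMatch(value, ExactPattern));
                }
            }
        }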

    Read the article

  • Win a set of Infragistics Silverlight Controls with Data Visualization!

    - by mbcrump
    Infragistics recently released their new Silverlight Data Visualization Controls. I saw a couple of samples and had to take a look. I headed over to their website and downloaded the controls. I first noticed the hospital floor-plan demo shown on their site and started thinking of ways that I could use this in my own organization. I emailed them asking if I could give away the Silverlight Data Visualization controls on my site and they said, Yes! They also wanted to throw in the standard Silverlight Line of Business controls. (combined they are worth about $3000 US). I am very thankful they were willing to help the Silverlight community with this giveaway. So some quick rules below: ----------------------------------------------------------------------------------------------------------------------------------------------------------- Win a FREE developer’s license of Infragistics Silverlight Controls with Data Visualization ($3000 Value) Random winner will be announced on January 1st, 2011! To be entered into the contest do the following things: Subscribe to my feed. Leave a comment below with a valid email account (I WILL NOT share this info with anyone.) For extra entries simply: Retweet a link to this page using the following URL [ http://mcrump.me/iscfree ]. It does not matter what the tweet says, just as long as the URL is the same. Unlimited tweets, but please don’t go crazy! This URL will allow me to track the users that Tweet this page. Don’t forget to visit Infragistics because they made this possible. ---------------------------------------------------------------------------------------------------------------------------------------------------------- Before we get started with the Silverlight Controls, here is a couple of links to bookmark: The Silverlight Line of Business Control page is here. You can also check out the live demos here. The Data Visualization page is here. You can also check out the live demos here. Don’t worry about the Samples/Help Documentation. You can install all of that to your local HDD when you are installing it. I am going to walk you through the Silverlight Controls recently released by Infragistics. Begin by downloading the trial version and running the executable. If you downloaded the Complete bundle then you will have the following options to pick from. I like having help documentation and samples on my local HDD in case I do not have access to the internet and want to code. After it is installed, you may want to take a look at your Toolbox in Visual Studio 2010. Look for NetAdvantage 10.3 Silverlight and you will see that you now have access to all of these controls. At this point, to use the controls it’s as simple as drag/drop onto your Silverlight container. It will create the proper Namespaces for you. I wanted to highlight a few of the controls that I liked the most: Grid – After using the Infragistics grid you will wonder how you ever survived using the grid supplied by Microsoft standard controls.  This grid was designed to get your application up and running very fast. It’s simple to bind, it handles LARGE DataSets, easy to filter and allows endless possibilities of formatting data. The screenshot below is an example of the grid. For a real-time updating demo click here. SpellChecker- If your users are creating emails or performing any other function that requires Spell Checking then this control is great. Check out the screenshots below: In this first screen, I have a word that is not in the dictionary [DotNet]. 
    The Spell Checker finds the word and allows the user to correct it. What is so great about Infragistics controls is that it only takes a few lines of code to have a full-featured Spell Checker in your application. TagCloud – This is a control that I haven’t seen anywhere else. It allows you to create keywords for popular search terms. This is very similar to the tag clouds seen all over the internet.  Below is a screenshot that shows “Facebook” being a very popular item in the cloud. You can link these items to a hyperlink if you want. Importing/Exporting from Excel – I work with data a majority of the time. We all know the importance of Excel in our organizations; it’s used a lot. With Infragistics controls, importing and exporting data from a Grid into Excel is a snap. One of the things that I liked most about this control was the option to choose the Excel format (2003 or 2007). I haven’t seen this feature in other controls. Creating/Saving/Extracting/Uploading Zip Files – This is another capability that I haven’t seen many other vendors offer. It allows you to manipulate a zip file in just about any way you like. You can even put a password on the zip file. Schedule – The Schedule that Infragistics provides resembles Outlook’s calendar. I think it’s important for a user to see your app for the first time and immediately be able to start using it because they are already familiar with the UI. The Schedule control accomplishes that, in my opinion. I have just barely scratched the surface of the Infragistics Silverlight Line of Business controls. To check out all of them, click here. A quick thing to note is that this giveaway also comes with the following Silverlight Data Visualization Controls. Below is a screenshot that lists all of them:   I wanted to highlight 2 of the controls that I liked the most: xamBarcode – The xamBarcode supports the following Symbologies: Below is an example of the barcode generated by Infragistics controls. This is a high-resolution barcode, so you will not have to wonder whether your scanner can read it. As long as you have ink in your printer, your scanner will read the barcode. I used a Symbol barcode reader to test this barcode. xamTreemap – I’ve never seen a way of displaying data like this before, but I like it. You can style it any way that you like, of course, and it also comes with an Office 2010 Theme. Thanks to Infragistics for providing the controls to one lucky reader. I hope that you enjoyed this post and good luck to those that entered the contest.  Subscribe to my feed

    Read the article

  • Guide to Downloading Oracle Fusion Middleware 11g Products

    - by Daniel Mortimer
    Introduction
    The idea of writing a blog about downloading software seems a bit strange .. right? After all, surely just give me the web download link and away I go!? Unfortunately, life is not so simple if you are a DBA or Systems Administrator tasked with staging Oracle Fusion Middleware 11g products for your chosen business technology stack. Here are the challenges: Oracle Fusion Middleware is not a single product, it is a family of products - a media pack with many many "disks" - which ones do I pick? Are the products I pick certified / supported on my chosen platform? Which download site do I use? I need to be on the latest and greatest - how do I get hold of the latest product patch set? The purpose of this blog is to give you a roadmap to get you through these challenges.
    Oracle Fusion Middleware 11g - A Product Suite
    The first thing to appreciate is that Oracle Fusion Middleware 11g is not a single product. It is a product suite, an umbrella label for many products. Typically you don't download the whole media pack - well not unless you want to stage 124 Parts - a total of 68 Gig - instead you pick the pieces that are required for your chosen Middleware solution. Therefore, you need to research / understand which products are required to build your solution. In this respect, before you go looking for the software, pick and peruse the product guide below which matches your situation:
    Installing a new / vanilla FMW 11g architecture - Oracle Fusion Middleware Installation Planning Guide 11g
    Upgrading Oracle Application Server 10g to FMW 11g - Oracle Fusion Middleware Upgrade Planning Guide 11g
    Patching an existing FMW 11g architecture - Oracle Fusion Middleware Patching Guide 11g
    Certification Information
    Ok, so now you have an idea of what Fusion Middleware products you need. It's time to check whether these products are certified against your chosen platform. There are two places to find this information:
    My Oracle Support Certification Tab Page
    Figure 1.1 My Oracle Support Certification Tab Page - "Search on SOA Suite"
    Figure 1.2 My Oracle Support Certification Tab Page - "SOA Suite Search Result"
    The FMW 11g Certification Central Hub (in the format of an xls spreadsheet)
    Figure 2: Screenshot of FMW 11g Release 1 Certification xls spreadsheet
    Hints / Tips: Fusion Middleware 11g certification information has only recently been added into the Certification Tab page and I think it is the more friendly way to access the information. However, due to some restrictions with the Certification Tab page interface, some of the more, let's say, obscure certification information is still only to be found in the Certification spreadsheet. Be aware that to find certification information via the My Oracle Support Certification Tab page you must enter the FMW 11g product name e.g. "Oracle SOA Suite". Do NOT enter "Oracle Fusion Middleware". The certification information does not exist at this product suite level. For example, if you are building a solution which includes Oracle SOA Suite and Oracle WebCenter then you will have to look up the certification information for each product in turn. After choosing the product name, select the latest patch set version. This will not only tell you whether your chosen product is available at that patch set version but provide the certification information relevant to that version.  If the product is not available under the latest patch set version, seek the information under previous patch set versions. 
Important: Make a careful note of the Oracle WebLogic Server version which is certified with your chosen product and patch set version. Oracle WebLogic Server is the core component of a Oracle Fusion Middleware 11g home. It is important therefore to ensure later on that you download the version of Oracle WebLogic Server which is compatible and certified with your chosen product and patch set version.Also - sorry to state the obvious, but please do not take certification information from the screenshots above. The screenshots are only good for the time they were entered into the blog. To ensure you have the latest information, interactively look up the certification details. For more information about finding certification information, bookmark and readMy Oracle Support Certification Tool for Oracle Fusion Middleware Products [Doc ID 1368736.1]How to Find Certification Details for Oracle Application Server 10g and Oracle Fusion Middleware 11g [Doc ID 431578.1] Downloading the Software Now you should be ready to download the software. There are two download locations Oracle Software Delivery Cloud (formerly known as E-Delivery)Figure 3 - Screenshot of Fusion Middleware Download from Delivery CloudOracle Fusion Middleware Download Page on Oracle Technology NetworkFigure 4 - Screenshot of OTN Product Download Screen Hints / Tips: Your choice of download location should be primarily driven by your licensing needs. Take note of the wording on the OTN site - to quote:"The downloads below are provided for evaluators under the OTN License Agreement. Licensed customers should download their software via our Oracle Software Delivery Cloud site, which offers different license terms."However, it has to be said that the presentation of the most of the product download pages on OTN does make the job easier. The Software Delivery Cloud provides you with a flat list of the Oracle Fusion Middleware 11g media pack. You have to know what you are looking for and pick out the right pieces :-( The OTN product download pages present not only the download for the product you want but also its dependencies such as WebLogic Server and Repository Creation Utility. So, even if your licensing requirements drive you towards the cloud, it is still worthwhile checking the OTN pages if only as a guide to what you need to pick out from the flat list found on the cloud site. Latest Patch Set This is an area which may cause you confusion - especially if you are more familiar with the Oracle Application Server 10g patching story. From Patch Set 11.1.1.6 and higher, the majority of FMW 11g products (N.B there are exceptions) provide installers which can be used both to update existing FMW 11g product installs or build brand new ones. This is good news because, unless you are dealing with one of the exceptions, it means you do not have to download base software and a patch set. At the time of the writing, the two significant exceptions are: Portal/Forms/Reports/Discoverer 11g Release 1 (11.1.1.x) Identity Access Management 11g Release 1 (11.1.1.x) The other key message here is ensure you are grabbing a version of Oracle WebLogic Server which is compatible with your chosen product patch set version. Get this wrong and you will hit errors / problems at AS Instance Configuration Time.The go to place is this document - Oracle Fusion Middleware Download, Installation, and Configuration Readme FilesIn fact, this README document pretty much takes you through what I have blogged above. 
The only thing is you need to know which README to choose, and that's why planning your FMW 11g technology stack and viewing certification information comes into play beforehand. And Finally As the Oracle Fusion Middleware Download, Installation, and Configuration Readme Files states don't forget to check FMW 11g System Requirements FMW 11g Product Interoperability

    Read the article

  • Developer’s Life – Disaster Lessons – Notes from the Field #039

    - by Pinal Dave
    [Note from Pinal]: This is the 39th episode of the Notes from the Field series. What is the best solution you have when you encounter a disaster in your organization? Many of you would answer that in this scenario you would have another standby machine or an alternative which you can plug in. Now let me ask a second question – what would you do if you, as an individual, face a disaster?  In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our careers, one which is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times. Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication (http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/). Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People, I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query. Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters, whether we caused them or someone else did. If you are interested in some of my observations, you can see more where I talk about lessons from disasters on my blog. For now, though, onto how I almost got my vehicle hit by a train… The Train Crash That Almost Was…. My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children – so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the property. On the way there, there is a very small Stop Sign controlled railroad intersection. There are only two small freight trains a day passing there; actually, it is the same train, making a journey south and then back north. That’s it. This is a small rural road, and there is barely ever a second car driving in the neighborhood when I am there. The stop sign is pretty much there only for the train crossing. When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I rolled through it. Sometimes I’d do a quick look and come to an “almost” stop there but keep on going. I let my impatience and complacency take over. And that is because most of the time I was going there long after the train was done for the day or in between the runs. This habit became pretty well established after a couple of years of driving the route. The behavior was reinforced a bit by the success ratio. I saw others from the neighborhood doing it as well when I happened to be there around the time another car was there. Well. You already know where this ends up from the title and backstory here. 
A few months ago I came to that little crossing, and I started to do the normal routine. I’d pretty much stopped looking in some respects because of the pattern I’d gotten into.  For some reason I looked and heard and saw the train slowly approaching and slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but I sat there thinking about lessons for IT professionals from the situation once I started breathing again and watched the cars loaded with sand and propane slowly labored down the tracks… Here are Those Lessons… It’s easy to get stuck into a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed – it’s really hard to break that. Make sure you are setting the right routines and habits TODAY. What almost dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that. Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later. Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today? In the IT world – trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s the time that you will most likely get stuck in a bad situation. Don’t let yourself get complacent, don’t let your team get complacent. That will lead to being proactive. And a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving for your customers. If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCON London was about a subject that I’ve come across fairly recently , when I was building SilverUnit – a “pure” unit test framework for silverlight objects that depend on the silverlight runtime to run. It is the concept of “cogs in the machine” – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability related matters. Examples of such cogs and machines can be: your custom control running inside silverlight runtime in the browser your plug-in running inside an IDE your activity running inside a windows workflow your code running inside a java EE bean your code inheriting from a COM+ (enterprise services) component etc.. Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits form something inside the system. For example. one of the biggest problems with testing objects like silverlight controls is the way they depend on the silverlight runtime – they don’t implement some silverlight interface, they don’t just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the silverlight DependencyObject Wrapping it up? An inheritance dependency is uniquely challenging to bring under test, because “classic” methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do manually is to create parallel testable objects that get delegated with all the possible actions from the dependencies.    In silverlight’s case, that would mean creating your own custom logic class that would be called directly from controls that inherit from silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the “contract” and the “roles” your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication. About perimeters A perimeter is that invisible line that your draw around your pieces of logic during a test, that separate the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced such as interfaces, virtual methods etc.) that are actually replaced for that test or for all the tests. Role based perimeters In the case of creating a wrapper around an object – one really creates a “role based” perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. in the image below – we have the code we want to test represented as a star. No perimeter is drawn yet (we haven’t wrapped it up in anything yet). in the image below is what happens when you wrap your logic with a role based wrapper – you get a role based perimeter anywhere your code interacts with the system: There’s another way to bring that code under test – using isolation frameworks like typemock, rhino mocks and MOQ (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction.   
Ad-Hoc Isolation perimeters the image below shows what I call ad-hoc perimeter that might be vastly different between different tests: This perimeter’s surface is much smaller, because for that specific test, that is all the “change” that is required to the host system behavior.   The third way of isolating the code from the host system is the main “meat” of this post: Subterranean perimeters Subterranean perimeters are Deep rooted perimeters  - “always on” seams that that can lie very deep in the heart of the host system where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can’t control, the only way I’ve found to control them is with runtime (not compile time) interception of method calls on the system. One way to get such abilities is by using Aspect oriented frameworks – for example, in SilverUnit, I’ve used the CThru AOP framework based on Typemock hooks and CLR profilers to intercept such system level method calls and effectively turn them into seams that lie deep down at the heart of the silverlight runtime. the image below depicts an example of what such a perimeter could look like: As you can see, the actual seams can be very far away form the actual code under test, and as you’ll discover, that’s actually a very good thing. Here is only a partial list of examples of such deep rooted seams : disabling the constructor of a base class five levels below the code under test (this.base.base.base.base) faking static methods of a type that’s being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod()  Replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always Ticks immediately upon calls to “start()” on the same caller thread for example) Replacing event mechanisms with your own event mechanism (to allow “firing” system events) Changing the way the system saves information with your own saving behavior (in silverunit, I replaced all Dependency Property set and get with calls to an in memory value store instead of using the one built into silverlight which threw exceptions without a browser) several questions could jump in: How do you know what to fake? (how do you discover the perimeter?) How do you fake it? Wouldn’t this be problematic  - to fake something you don’t own? it might change in the future How do you discover the perimeter to fake? To discover a perimeter all you have to do is start with a wishful invocation. a wishful invocation is the act of trying to invoke a method (or even just create an instance ) of an object using “regular” test code. You invoke the thing that you’d like to do in a real unit test, to see what happens: Can I even create an instance of this object without getting an exception? Can I invoke this method on that instance without getting an exception? Can I verify that some call into the system happened? You make the invocation, get an exception (because there is a dependency) and look at the stack trace. choose a location in the stack trace and disable it. Then try the invocation again. if you don’t get an exception the perimeter is good for that invocation, so you can move to trying out other methods on that object. in a future post I will show the process using CThru, and how you end up with something close to a domain specific test framework after you’re done creating the perimeter you need.

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and most recently for Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB, and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database. It’s built on top of the same concepts like Tables, Rows, Columns, Primary Keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured, let’s understand the key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For e.g. if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geographies, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of the box features provided by some databases, e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements (a minimal sketch of enabling this is shown below). b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that can run periodically, take these changes from the OLTP system and dump them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains, though, is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). 
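    As a hedged illustration of point (a), here is a minimal sketch of turning on SQL Server 2008 Change Data Capture for an OLTP table. The database and table names (SalesOLTP, dbo.Customer) are assumptions invented for the example, not anything from the post, and CDC requires an edition that supports it plus a running SQL Server Agent for its capture and cleanup jobs.
    -- Sketch only: enable Change Data Capture on an assumed OLTP database and table.
    USE SalesOLTP;
    GO
    -- Enable CDC at the database level (requires sysadmin).
    EXEC sys.sp_cdc_enable_db;
    GO
    -- Enable CDC for the table whose category changes we want to track.
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Customer',
        @role_name     = NULL;   -- no gating role for this sketch
    GO
    -- Changed rows (inserts, updates, deletes) now accumulate in the change table
    -- created for the default capture instance, cdc.dbo_Customer_CT.
    SELECT * FROM cdc.dbo_Customer_CT;
    From there, the periodic batch process from point (b), for example an SSIS package, can pick up these change rows on a schedule and apply them to the warehouse, where the Slowly Changing Dimension handling described next decides how that history is recorded.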
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in relational database and data warehouses prefer having their own primary key, often known as surrogate key. As I mentioned a data warehouse is just a relational database but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales that number would go in a sales fact table and could have foreign keys attached to it that refers to the sales agent responsible for sale and to time table which contains the dates between which that sale was made. These agent and time tables are called dimensions which provide context to the numbers stored in fact tables. This schema structure of fact being at center surrounded by dimensions is called Star schema. A similar structure with difference of dimension tables being normalized is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called Cube. Though physically Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of cube and facts define the content. Facts are often called as Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories and categories in turn to individual items. Category and Items are often referred as Levels and their constituents as Members with their overall structure called as Hierarchy. Measures are rolled up as per dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with but don’t worry it will sink in as you start working with Cubes and others. Let’s see few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be few users in your organization that want to report on most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interesting in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for few months and ODS would sync with OLTP system every few minutes. Data warehouse can periodically sync with ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouse. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too, are often designed using star schema model discussed earlier. Industry is divided when it comes to use of data marts. Some experts prefer having data marts along with a central data warehouse. Data warehouse here acts as information staging and distribution hub with spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
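    To make the star schema described above concrete, here is a minimal sketch in T-SQL. The table and column names (DimDate, DimAgent, FactSales and so on) are illustrative assumptions rather than anything taken from the original post.
    -- Sketch only: a tiny star schema with two dimensions and one fact table.
    CREATE TABLE dbo.DimDate (
        DateKey       INT  NOT NULL PRIMARY KEY,  -- surrogate key, e.g. 20101207
        CalendarDate  DATE NOT NULL,
        CalendarYear  INT  NOT NULL,
        CalendarMonth INT  NOT NULL
    );
    CREATE TABLE dbo.DimAgent (
        AgentKey  INT IDENTITY NOT NULL PRIMARY KEY, -- surrogate key
        AgentCode VARCHAR(20)  NOT NULL,             -- business key from the OLTP system
        AgentName VARCHAR(100) NOT NULL,
        Region    VARCHAR(50)  NOT NULL
    );
    CREATE TABLE dbo.FactSales (
        DateKey     INT   NOT NULL REFERENCES dbo.DimDate (DateKey),
        AgentKey    INT   NOT NULL REFERENCES dbo.DimAgent (AgentKey),
        SalesAmount MONEY NOT NULL,                  -- the measure to slice and dice
        Quantity    INT   NOT NULL
    );
    -- Slicing the measure by dimension attributes: total sales per region per year.
    SELECT  a.Region,
            d.CalendarYear,
            SUM(f.SalesAmount) AS TotalSales
    FROM    dbo.FactSales f
    JOIN    dbo.DimAgent a ON a.AgentKey = f.AgentKey
    JOIN    dbo.DimDate  d ON d.DateKey  = f.DateKey
    GROUP BY a.Region, d.CalendarYear;
    A cube built over these same tables would expose SalesAmount and Quantity as measures, with Region and the calendar columns forming the dimension hierarchies that aggregates are rolled up along.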

    Read the article

  • The last MVVM you'll ever need?

    - by Nuri Halperin
    As my MVC projects mature and grow, the need to have some omnipresent, ambient model properties quickly emerge. The application no longer has only one dynamic pieced of data on the page: A sidebar with a shopping cart, some news flash on the side – pretty common stuff. The rub is that a controller is invoked in context of a single intended request. The rest of the data, even though it could be just as dynamic, is expected to appear on it's own. There are many solutions to this scenario. MVVM prescribes creating elaborate objects which expose your new data as a property on some uber-object with more properties exposing the "side show" ambient data. The reason I don't love this approach is because it forces fairly acute awareness of the view, and soon enough you have many MVVM objects laying around, and views have to start doing null-checks in order to ensure you really supplied all the values before binding to them. Ick. Just as unattractive is the ViewData dictionary. It's not strongly typed, and in both this and the MVVM approach someone has to populate these properties – n'est pas? Where does that live? With MVC2, we get the formerly-futures  feature Html.RenderAction(). The feature allows you plant a line in a view, of the format: <% Html.RenderAction("SessionInterest", "Session"); %> While this syntax looks very clean, I can't help being bothered by it. MVC was touting a very strong separation of concerns, the Model taking on the role of the business logic, the controller handling route and performing minimal view-choosing operations and the views strictly focused on rendering out angled-bracket tags. The RenderAction() syntax has the view calling some controller and invoking it inline with it's runtime rendering. This – to my taste – embeds too much  knowledge of controllers into the view's code – which was allegedly forbidden.  The one way flow "Controller Receive Data –> Controller invoke Model –> Controller select view –> Controller Hand data to view" now gets a "View calls controller and gets it's own data" which is not so one-way anymore. Ick. I toyed with some other solutions a bit, including some base controllers, special view classes etc. My current favorite though is making use of the ExpandoObject and dynamic features with C# 4.0. If you follow Phil Haack or read a bit from David Heyden you can see the general picture emerging. The game changer is that using the new dynamic syntax, one can sprout properties on an object and make use of them in the view. Well that beats having a bunch of uni-purpose MVVM's any day! Rather than statically exposed properties, we'll just use the capability of adding members at runtime. 
Armed with new ideas and syntax, I went to work: First, I created a factory method to enrich the focuse object: public static class ModelExtension { public static dynamic Decorate(this Controller controller, object mainValue) { dynamic result = new ExpandoObject(); result.Value = mainValue; result.SessionInterest = CodeCampBL.SessoinInterest(); result.TagUsage = CodeCampBL.TagUsage(); return result; } } This gives me a nice fluent way to have the controller add the rest of the ambient "side show" items (SessionInterest, TagUsage in this demo) and expose them all as the Model: public ActionResult Index() { var data = SyndicationBL.Refresh(TWEET_SOURCE_URL); dynamic result = this.Decorate(data); return View(result); } So now what remains is that my view knows to expect a dynamic object (rather than statically typed) so that the ASP.NET page compiler won't barf: <%@ Page Language="C#" Title="Ambient Demo" MasterPageFile="~/Views/Shared/Ambient.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %> Notice the generic ViewPage<dynamic>. It doesn't work otherwise. In the page itself, Model.Value property contains the main data returned from the controller. The nice thing about this, is that the master page (Ambient.Master) also inherits from the generic ViewMasterPage<dynamic>. So rather than the page worrying about all this ambient stuff, the side bars and panels for ambient data all reside in a master page, and can be rendered using the RenderPartial() syntax: <% Html.RenderPartial("TagCloud", Model.SessionInterest as Dictionary<string, int>); %> Note here that a cast is necessary. This is because although dynamic is magic, it can't figure out what type this property is, and wants you to give it a type so its binder can figure out the right property to bind to at runtime. I use as, you can cast if you like. So there we go – no violation of MVC, no explosion of MVVM models and voila – right? Well, I could not let this go without a tweak or two more. The first thing to improve, is that some views may not need all the properties. In that case, it would be a waste of resources to populate every property. The solution to this is simple: rather than exposing properties, I change d the factory method to expose lambdas - Func<T> really. So only if and when a view accesses a member of the dynamic object does it load the data. public static class ModelExtension { // take two.. lazy loading! public static dynamic LazyDecorate(this Controller c, object mainValue) { dynamic result = new ExpandoObject(); result.Value = mainValue; result.SessionInterest = new Func<Dictionary<string, int>>(() => CodeCampBL.SessoinInterest()); result.TagUsage = new Func<Dictionary<string, int>>(() => CodeCampBL.TagUsage()); return result; } } Now that lazy loading is in place, there's really no reason not to hook up all and any possible ambient property. Go nuts! Add them all in – they won't get invoked unless used. This now requires changing the signature of usage on the ambient properties methods –adding some parenthesis to the master view: <% Html.RenderPartial("TagCloud", Model.SessionInterest() as Dictionary<string, int>); %> And, of course, the controller needs to call LazyDecorate() rather than the old Decorate(). The final touch is to introduce a convenience method to the my Controller class , so that the tedium of calling Decorate() everywhere goes away. 
    This is done quite simply by adding a bunch of methods, matching the View(object), View(string,object) signatures of the Controller class: public ActionResult Index() { var data = SyndicationBL.Refresh(TWEET_SOURCE_URL); return AmbientView(data); } //these methods can reside in a base controller for the solution: public ViewResult AmbientView(dynamic data) { dynamic result = ModelExtension.LazyDecorate(this, data); return View(result); } public ViewResult AmbientView(string viewName, dynamic data) { dynamic result = ModelExtension.LazyDecorate(this, data); return View(viewName, result); } The call to AmbientView now replaces any call to View() that requires the ambient data. DRY satisfied, lazy loading in place, and no need to replace core pieces of the MVC pipeline. I call this a good MVC day. Enjoy!

    Read the article

  • User Experience Highlights in PeopleSoft and PeopleTools: Direct from Jeff Robbins

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience  This is the fifth in a series of blog posts on the user experience (UX) highlights in various Oracle product families. The last posted interview was with Nadia Bendjedou, Senior Director, Product Strategy on upcoming Oracle E-Business Suite user experience highlights. You’ll see themes around productivity and efficiency, and get an early look at the latest mobile offerings coming through these product lines. Today’s post is on the user experience in PeopleSoft and PeopleTools. To learn more about what’s ahead, attend PeopleSoft or PeopleTools OpenWorld presentations.This interview is with Jeff Robbins, Senior Director, PeopleSoft Development. Jeff Robbins Q: How would you describe the vision you have for the user experience of PeopleSoft?A: Intuitive – Specifically, customers use PeopleSoft to help their employees do their day-to-day work, and the UI (user interface) has been helpful and assistive in that effort. If it’s not obvious what they need to do a task, then the UI isn’t working. So the application needs to make it simple for users to find information they need, complete a task, do all the things they are responsible for, and it really helps when the UI just makes sense. Productive – PeopleSoft is a tool used to support people to do their work, and a lot of users are measured by how much work they’re able to get done per hour, per day, etc. The UI needs to help them be as productive as possible, and can’t make them waste time or energy. The UI needs to reflect the type of work necessary for a task -- if it's data entry, the UI needs to assist the user to get information into the system. For analysts, the UI needs help users assess or analyze information in a particular way. Innovative – The concept of the UI being innovative is something we’ve been working on for years. It’s not just that we want to be seen as innovative, the fact is that companies are asking their employees to do more than they’ve ever asked before. More often companies want to roll out processes as employee or manager self-service, where an employee is responsible to review and maintain their own data. So we’ve had to reinvent, and ask,  “How can we modify the ways an employee interacts with our applications so that they can be more productive and efficient – even with tasks that are entirely unfamiliar?”  Our focus on innovation has forced us to design new ways for users to interact with the entire application.Q: How are the UX features you have delivered so far resonating with customers?  A: Resonating very well. We’re hearing tremendous responses from users, managers, decision-makers -- who are very happy with the improved user experience. Many of the individual features resonate well. Some have really hit home, others are better than they used to be but show us that there’s still room for improvement.A couple innovations really stand out; features that have a significant effect on how users interact with PeopleSoft.First, the deployment of PeopleSoft in a way that’s more like a consumer website with the PeopleSoft Home page and Dashboards.  This new approach is very web-centric, where users feel they’re coming to a website rather than logging into an enterprise application.  There’s lots of information from all around the organization collected in a way that feels very familiar to users. In order to do your job, you can come to this web site rather than having to learn how to log into an application and figure out a complicated menu. 
Companies can host these really rich web sites for employees that are home pages for accessing critical tasks and information. The UI elements of incorporating search into the whole navigation process is another hit. Rather than having to log in and choose a task from a menu, users come to the web site and begin a task by simply searching for data: themselves, another employee, a customer record, whatever.  The search results include the data along with a set of actions the user might take, completely eliminating the need to hunt through a complicated system menu. Search-centric navigation is really sitting well with customers who are trying to deploy an intuitive set of systems. Q: Are any UX highlights more popular than you expected them to be?  A: We introduced a feature called Pivot Grid in the last release, which is a combination of an interactive grid, like an Excel Pivot Table, along with a dynamic visual chart that automatically graphs the data. I wasn’t certain at first how extensively this would be used. It looked like an innovative tool, but it wasn’t clear how it would be incorporated in business process applications. The fact is that everyone who sees Pivot Grids is thrilled with that kind of interactivity.  It reflects the amount of analytical thinking customers are asking employees to do. Employees can’t just enter data any more. They must interact with it, analyze it, and make decisions. Pivot Grids fit into this way of working. Q: What can you tell us about PeopleSoft’s mobile offerings?A: A lot of customers are finding that mobile is the chief priority in their organization.  They tell us they want their employees to be able to access company information from their mobile devices.  Of course, not everyone has the same requirements, so we’re working to make sure we can help our customers accomplish what they’re trying to do.  We’ve already delivered a number of mobile features.  For instance, PeopleSoft home pages, dashboards and workcenters all work well on an iPad, straight out of the box.  We’ve delivered a number of key functions and tasks for mobile workers – those who are responsible for using a mobile device to manage inventory, for example.  Customers tell us they also need a holistic strategy, one that allows their employees to access nearly every task from a mobile device.  While we don’t expect users to do extensive data entry from their smartphone, it makes sense that they have access to company information and systems while away from their desk.  That’s where our strategy is going now.  We plan to unveil a number of new mobile offerings at OpenWorld.  Some will be available then, some shortly after. Q: What else are you working on now that you think is going to be exciting to customers at Oracle OpenWorld?A: Our next release -- the big thing is PeopleSoft 9.2, and we’ll be talking about the huge amount of work that’s gone into the next versions. A new toolset, 8.53, will be coming, and there’s a lot to talk about there, and the next generation of PeopleSoft 9.2.  We have a ton of new stuff coming.Q: What do you want PeopleSoft customers to know? A: We have been focusing on the user experience in PeopleSoft as a very high priority for the last 4 years, and it’s had interesting effects. One thing is that the application is better, more usable.  We’ve made visible improvements. Another aspect is that in customers’ minds, the PeopleSoft brand is being reinvigorated. Customers invested in PeopleSoft years ago, and then they weren’t sure where PeopleSoft was going.  
This investment in the UI and overall user experience keeps PeopleSoft current, innovative and fresh.  Customers  are able to take advantage of a lot of new features, even on the older applications, simply by upgrading their PeopleTools. The interest in that ability has been tremendous. Knowing they have a lot of these features available -- right now, that’s pretty huge. There’s been a tremendous amount of positive response, just on the fact that we’re focusing on the user experience. Editor’s note: For more on PeopleSoft and PeopleTools user experience highlights, visit the Usable Apps web site.To find out more about these enhancements at Openworld, be sure to check out these sessions: GEN8928     General Session: PeopleSoft Update and Product RoadmapCON9183     PeopleSoft PeopleTools Technology Roadmap CON8932     New Functional PeopleSoft PeopleTools Capabilities for the Line-of-Business UserCON9196     PeopleSoft PeopleTools Roadmap: Mobile ApplicationsCON9186     Case Study: Delivering a Groundbreaking User Interface with PeopleSoft PeopleTools

    Read the article

  • B2B and B2C alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester Research, Inc.’s first Commerce Wave focused on B2B, “The Forrester Wave™: B2B Commerce Suites, Q4 2013,” released earlier this month. We believe that the report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, we feel that the report confirms something very important: B2B and B2C Commerce are alike… but a little different. B2B and B2C Commerce are alike… Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: front-office content, community, and commerce features that meet customer expectations for 24x7x365 ordering, real-time customer service, and expedited shipping — both online and on mobile devices: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.” Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). It seems that this market momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion). But a little different… Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to a rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management. Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information, and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery. 
We would like to expand on each of these 3 areas: As Forrester suggests, back-end PIM is definitely needed by B2B Commerce providers. Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in-place, and is not fully-featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system for enabling business users to further enrich and ready catalog and content data to be presented and sold online.  This has been a significant area of investment in the Oracle Commerce platform , which continue to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end.  On the topic of web content management, we were pleased to see Forrester cite Oracle’s differentiated digital experience capability in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formally FatWire).” Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online. Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles.  We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one. So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle certainly has an exhaustive list of B2B customers using the solution.”  What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we'll address in an exciting new release planned for the next 12 months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management, and ERP. 
To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources: -       2013 B2B Commerce Trends Report -       B2B Commerce Whitepaper: Consumerization, Complexity, Change -       B2B Commerce Webcast: What Industry Trend Setters Do Right -       Internet Retailer, Web Drives Sales for B2B Companies -       Internet Retailer Article, The Web Means Business: B2B Companies Beef Up Their Websites,        borrowing from b2c retailers and breaking new ground -       Internet Retailer Article, B2B e-Commerce is poised for growth

    Read the article

  • Career development as a Software Developer without becoming a manager.

    - by albertpascual
    I’m a developer. I like to write new, exciting code every day; my perfect day at work is a day when I wake up knowing that I have to write some code that I haven’t written before, or use a new framework/language/platform that is unknown to me. The best days in the office are when a project is waiting for me to architect or write. In my 15 years in the development field, in order to get a better salary I had to manage people: not just lead developers, but actually manage people. Something that I found out when I got into a management position is that I’m not that good at managing people, and I am not afraid to say it. I do not enjoy that part of the job; it is the worst part, and it takes time away from what I really like. Leading developers and managing people are very different things. I do like teaching and leading developers in a project. Yet most people believe, and it is true in most companies, that the way to get a better salary is to be promoted to a manager position. In order to advance in your career you need to let go of the everyday writing of code and become a supervisor or manager. This is the path for developers after they become senior developers. As you get older and your family grows, the only way to hit your salary requirements is to advance your career to become a manager and get that manager salary. That path is the common one in most companies, but the most intelligent companies out there have learned that promoting a good developer means getting a crappy manager and losing a good resource. Now scratch everything I said, because as I previously stated, I don’t see myself going to the office every day and just managing people until it is time to go home. I like to spend hours working on some code to accomplish a task, learning new platforms and languages or new patterns for existing languages. Being interrupted every 15 minutes by emails or people stopping by my office to resolve their problems is not something I could enjoy. Then, all of a sudden, riding my motorcycle to work one cold morning over the Redlands Canyon and listening to the .NET Rocks podcast, I heard Michael “Doc” Norton explaining how to take control of your development career without necessarily going down the manager’s track. I know, I should not have headphones under my helmet when riding a motorcycle in California. His conversation with Carl Franklin and Richard Campbell confirmed everything I have ever done, with more detail, and reassured me that there are other paths. His method is simple, and most of us already do many of those steps. Mr. Michael “Doc” Norton believes that it pays off in the long run, that in the end companies prefer to pay higher salaries to those developers; I would actually think that many companies do not see developers that way, though this is less true for bigger companies. However, I do believe the value of those developers increases, and most of the time changing companies could increase their salary instead of staying in the same one. In short, without even trying to get into the shadow of Mr. Norton, and without following the steps in order: you should love to learn new technologies, and then teach them to other geeks. I personally have learned many technologies and I haven’t stopped doing that; I am a professor at UCR where I teach ASP.NET and Silverlight. Mr. Norton continues that after that, you want to be involved in the development community: user groups, online forums, open source projects. 
I personally speak to user groups, and I’m very active in forums asking and answering questions; for that work I was awarded the Microsoft MVP for ASP.NET. After you accomplish all of that, you should also expose yourself for what you know and what you do not know: learning a new language will make you humble again, as well as extremely happy. There is no better feeling than learning a new language or pattern in your daily job. If you love your job and what you do every day, I really recommend going through Michael’s presentation, which he has kindly shared at the link below. His confirmation is refreshing: my future does not have to be behind a desk where the computer screen sits on my right-hand side instead of in front of me; it can be one where I don’t spend my days filling in performance forms for people, and where the new platforms I haven’t used yet are just at my fingertips. Presentation here: http://www.slideshare.net/LeanDog/take-control-of-your-development-career-michael-doc-norton?from=share_email_logout3 The deck, “Take Control of Your Development Career” by Michael “Doc” Norton (@DocOnDev, http://docondev.blogspot.com/), opens with “Recovering Post Technical” (“I love to learn, I love to teach, I love to work in teams, I love to write code, I really love to write code”) and then asks: do you love your job, your employer, your boss; what do you really love? Its advice boils down to five themes: Get Noticed (know your business, understand management, do your existing job, make yourself expendable); Get Together (join a user group, help run a user group, start a user group); Get Your Mojo (kata, koans, breakable toys, open source); Get Naked (run with group A, do something different, own your mistakes, admit you don’t know); and Get Schooled (choose a mentor, attend conferences, teach a new subject), before closing with Get Started and a reading list. In a short summary, I recommend that any developer check out his blog and, more importantly, his presentation. I haven’t been lucky enough to watch him live; I’m looking forward to the day I have the opportunity. He gives us hope for the future of developers. When I see some of my geek friends move into positions that within a few short years they begin to regret, I become more unsure about my own future doing what I love. The answer, I would say, is to look at the spectrum of companies that understand and appreciate developers. There are a few out there; hopefully, with time, code sweatshops will start disappearing and being a developer will feed a family of four. Cheers, Al

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files. How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage; an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file but in most cases there is little point in creating more than one, as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let’s see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The results of this are, firstly, that a new database is created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs. What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 VLFs of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic. 
If the file growth is lower than 64MB, then 4 VLFs are created; if the growth is between 64MB and 1GB, then 8 VLFs are created; if the growth is greater than 1GB, then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let’s add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) (see the consolidated sketch below) * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the rules above. Knowing when VLFs are being created By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away. 
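To make that three-step fix easier to follow, here is a minimal consolidated T-SQL sketch. It reuses the VLF_Test database and the VLF_Test_Log logical file name from the examples above, and it assumes the database is in the full recovery model with a full backup already taken; the backup path and the 4096MB target size are placeholders you would replace with your own backup destination and expected log size.

-- 1) Back up the transaction log first (placeholder backup path)
BACKUP LOG VLF_Test TO DISK = 'C:\Backups\VLF_Test_log.trn';
GO
-- 2) Shrink the log file to its smallest possible size
USE VLF_Test;
GO
DBCC SHRINKFILE (VLF_Test_Log, TRUNCATEONLY);
GO
-- 3) Re-size the log in one deliberate step so the new VLFs follow the rules listed above
ALTER DATABASE VLF_Test MODIFY FILE (NAME = VLF_Test_Log, SIZE = 4096MB);
GO
-- Confirm the resulting VLF count
DBCC LOGINFO;

Because the single growth here is greater than 1GB, the re-size should produce 16 new VLFs of 256MB each, in line with the creation rules described earlier.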
Hassle-free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there. Resources MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.
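For a quick ad-hoc check without any monitoring tool, a rough T-SQL sketch like the one below captures the DBCC LOGINFO output into a temporary table and counts the rows, one row per VLF. The column list matches the SQL Server 2008-era output shown earlier in this post; on SQL Server 2012 and later the command returns an extra leading RecoveryUnitId column, so the table definition would need adjusting there.

-- Count the VLFs in the current database by capturing DBCC LOGINFO output
CREATE TABLE #loginfo
(
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    Status      INT,
    Parity      INT,
    CreateLSN   NUMERIC(38, 0)
);

INSERT INTO #loginfo
EXECUTE ('DBCC LOGINFO');

SELECT COUNT(*) AS VLFCount FROM #loginfo;

DROP TABLE #loginfo;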

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Natalia Rachelson
A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle In a recent post on this blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas in their roles by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What’s that cricket noise I hear? That’s correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it’s certainly a truism that “success” in most professional endeavors requires working with people, it’s a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That’s how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you’re matching the best products for the opportunity and join you in critical meetings. They help knock the socks off your prospects in “bake off” demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It’s relatively siloed. It’s painful to search. It’s difficult to align by topic. And it’s nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It’s about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It’s collaboration that’s built around opportunities, accounts, and contacts. It’s collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. 
It’s collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes, and as we’ve largely discussed here, for specific ‘deals.’ And it’s collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let’s remember that collaboration “happens” among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. But what social networking in general, and Oracle Social Networking in particular delivers, is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com

    Read the article

  • Let’s Get Social

    - by Kristin Rose
You can try to run from it like a bad Facebook picture, but you can’t hide. Social media as we know it is quickly taking over our lives and is not going away any time soon. Though attempting to reach as many Twitter followers as Lady Gaga is daunting, learning how to leverage social media to meet your customer’s needs is not. For Oracle, this means interacting directly with our partners through our many social media outlets, and refraining from posting a mindless status on the pastrami on rye we ate for lunch today… though it was delicious. The “correct” way to go about social media is going to mean something different to each company. For example, sending a customer more than one friend request a day may not be the best way to get their attention, but using social media as a two-way marketing channel is. The Twitter handle of Oracle’s Partner Business Center (PBC) was recently mentioned by Elateral as the “ideal way to engage with your market and use social media in the channel”. Why, you ask? Because the PBC has two named social media leads manning the Twitter feed at all times, helping partners get the information and answers they need more quickly than a Justin Bieber video gone viral. So whether you want to post a video of your favorite customer attempting the Marshmallow challenge or tweet like there’s no tomorrow, be sure to follow @OraclePartnerBiz today, and see how they can help you achieve your next partner milestone with Oracle. 
Happy Socializing, The OPN Communications Team

    Read the article

  • Fixing a SkyDrive Sync Disaster

    - by Rick Strahl
For a few months I've been using SkyDrive to handle some basic synching tasks for a number of folders of mine. Specifically I've been dumping a few of my development folders into SkyDrive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly! The idea is that SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync back older files to my local machine from the SkyDrive server. So rather than syncing my newer files to the server, SkyDrive was pushing older files back to me. Because SkyDrive is so slow actually updating data, it's not unusual for it to be far behind in syncing, and apparently some files were out of date by several months. Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noted a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders. To be fair SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now updated with old files from online. Here's what some of this looks like: If you look at the directory list you see a bunch of files with a -RasXps postfix appended to them. Those are the files that SkyDrive replaced and backed up on my machine. As you can see the backed up files are actually newer than the ones it pulled from the online SkyDrive. Unless I modified the files after they were updated, they all were older than the existing local files. Not exactly how I imagined my synching would work. At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace it with the -RasXps file, but not in all cases. Some scrutiny was required, and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to compare a few files where it wasn't quite clear what was wrong. I quickly realized that doing this by hand would be too hard for the large number of files that got hosed. Hacking together a small .NET Utility So, I figured the easiest way to tackle this was to write a small utility app that shows me all the mangled files that have backups, allows me to compare them and then quickly select and update them, removing the -RasXps file after choosing one of the two files. What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder, and then shows all the -MachineName files: I start by picking a base folder and a template to search for - typically the -MachineName. Clicking Go brings up a list of all files in that folder and its subdirectories.  The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two. I can right-click on any file and get a context menu pop-up to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences, which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few). 
To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the Saved files - that is, the backup file that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or on some occasions I can just opt to delete both of them. For some files, like binaries, it's often easier to just delete them and rebuild than to choose. For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made since the file was updated incorrectly by SkyDrive. For me luckily those are few in number. Anyway, I thought I'd share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works, and you can tweak it as needed. The .NET 4.5 binaries are also included if you can't compile. Be warned though!  This rough code is provided as is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it. This tool is very narrow in focus, but it might also work with sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point and it too created the -MachineName style files on sync conflicts. Hope this helps somebody out so you can avoid wasting the better part of a full work day on this… Resources Download the Source Code and Binaries for SkyDrive Rescue © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, .NET

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2004

The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2004, written by Hans-Otto Lochmann, Armin Neudert and myself. TRACK Active FoxPro Pages Back in 1996 Peter Herzog invented a FoxPro-based solution to provide intranet capabilities for one of his customers. At nearly the same time Rick Strahl had the same task and created West Wind Web Connection (WWWC). The fact that developers have to have a full Visual FoxPro development environment to create WWWC solutions was the starting point of a "personal sporting competition" for Peter to write his own solution. The main requirement was that it should not rely on a full VFP version in order to run: the VFP runtime should be enough, and the source code has to be compiled and interpreted on the fly. So, as Microsoft released Active Server Pages, a name for Peter's solution was found: Active FoxPro Pages (AFP). Over the years many drawbacks, design decisions and technological hassles forced ProLib Software to refactor the product. This way many limits, such as DCOM configuration, file-based information transfer between the Web server and AFP, missing features (like upload forms or support for Web servers other than IIS) and poor extensibility, were eliminated. As a consequence ProLib Software decided to completely rewrite Active FoxPro Pages in mid-2002. Christof Wollenhaupt, before his marriage known as Christof Lange, and Jochen Kirstätter had to solve this task. AFP 3.0 was officially released at the German DevCon in November 2002. Today AFP has six distributors world-wide and there is a lot more information available online than before version 3.0. Directly after a short welcome speech by Rainer Becker, Jochen Kirstätter - aka JoKi - opened today's AFP track, introduced the basic concepts of how Active FoxPro Pages works in general, explained the AFP terminology and every single component, and presented a small walk-through on how to write an AFP-based Web solution. His presentation slides themselves were actually an AFP Web application, which made it easy to integrate accompanying AFP samples on the fly. It was also shown that no Visual FoxPro development environment is needed to create a Web application; a simple text editor like Notepad, or any WYSIWYG editor on the market, is enough to fulfill a customer's requirements. We also welcomed two new speakers - Nina Schwanzer and Bernhard Reiter. Both work at ProLib Software and this year's conference was their first time as speakers, and they did their job very well. The whole session was kind of a "ping pong" game, and the two complemented each other to keep the audience engaged. First, they described typical requirements a modern desktop application should fulfill - online registration and activation, auto-update capabilities, or even a frontend to administer a Web application on a remote system via the internet - and explained how possible solutions like Web Services (using the SOAP interface), DCOM, and even .NET might address those requirements. But each of those approaches has its own drawbacks, like complicated installation or configuration, or extraordinary download sizes. Next, they introduced a technology they developed and used in a customer's project: Active FoxPro Pages Remote Procedure Call (AFP RPC). [...]   In the next session JoKi described how to extend Active FoxPro Pages. On the one hand AFP provides a plugin interface, and on the other hand any add-on for Visual FoxPro might be usable as well. 
During the first half he spoke about the plugin interface and wrote a new AFP extension live - the DevCon plugin. Later he questioned each of the earlier steps and showed that a single AFP document may solve the problem as well. So, developing extensions is only interesting if they are re-usable and generic. At the end he talked about multiple interfaces for the same business logic, for instance a plain VFP class, a COM server and .NET integration. Currently there are several specialized AFP extensions for sending mail, for using cryptographic routines (e.g. based on .NET classes), or enhanced methods to handle HTML/XML strings. Rainer Becker and Peter Herzog introduced a new development for Visual Extend (VFX) - an AFP form builder. With this builder, creating an AFP Web form designed with Visual FoxPro's form designer was a matter of seconds. The builder itself is currently in pre-release status and will be part of the VFX framework in the future. It was very impressive to see that the whole design of a form, as well as most parts of its functionality, was exported to a combination of HTML, JavaScript and Active FoxPro Pages. At half-time Jürgen "wOOdy" Wondzinski and JoKi changed places with Rainer and Peter, and presented some Web solutions built with AFP. [...] Visual FoxPro 9.0 and Linux Is Linux still a topic for Visual FoxPro developers, judging by this year's activities? In his session Jochen Kirstätter - aka JoKi - did not go through the technical steps and requirements of how to set up and run FoxPro on a Linux client. Instead, he explained what Linux actually is, and talked about the wide variety of distributions. In fact there are a lot of distributions around, but for several years now there have also been specialized ones available: Live Distributions (aka LiveCDs). The intention of LiveCDs is to run a full-featured Linux operating system on any personal computer directly from a bootable medium, like a CD, DVD, or even a USB memory stick, without installation on a hard disk. One of the first Linux LiveCDs was made by Klaus Knopper and is well known as Knoppix. Today, many other LiveCDs are based on the concepts of Knoppix. During the session Jochen booted Morphix, a very lightweight LiveCD, on his notebook, and showed the attendees that testing and playing around with Linux is absolutely easy. Running a text processing application swept away most of the reservations the audience had. Okay, where is the part about FoxPro? Well, there are several scenarios in which a customer might require the use of Linux, and FoxPro can deal with all of them. I guess one of the more common ones is the situation where a customer has a heterogeneous intranet with Windows clients and Linux servers, i.e. Windows XP Professional on the desktops and some Linux distribution on the servers. Even in this scenario there are two variants hidden! Why? Well, on the one hand there is a software package called Samba that provides Windows server capabilities to a Linux system, and on the other hand there are several SQL servers for Linux, like PostgreSQL, DB2 and MySQL. Either way, FoxPro is able to deal with these scenarios, but you as a developer have to know what you are talking about with your customers. And even if there's no Windows operating system at all, you are able to provide a FoxPro-based solution: using the wine library - wine stands for Wine Is Not an Emulator - you are able to run your VFP applications on Linux clients, too; but not without reading VFP's EULA. 
Licensing was also part of the session, and Jochen discussed the meaning of Open Source and how it is misunderstood by many developers. Open Source does not mean that something is free of charge; instead, it stands for access to the source code of an application or tool. And VFP itself is one of the best examples with which to explain Open Source, due to the fact that for years VFP has shipped with the xSource.zip archive. [...]

    Read the article

  • Brighton Rocks: UA Europe 2011

    - by ultan o'broin
User Assistance Europe 2011 was held in Brighton, UK. Having seen Quadrophenia a dozen times, I just had to go along (OK, I wanted to talk about messages in enterprise applications). Sadly, it rained a lot, though that was still eminently more tolerable than being stuck home in Dublin during Bloomsday. So, here are my somewhat selective highlights and observations from the conference, massively skewed towards my own interests, as usual. I enjoyed Leah Guren's (Cow TC) great opening ‘keynote’ on the Cultural Dimensions of Software Help Usage. Starting out by revisiting Hofstede's and Hall's work on culture (how many times have I done this for Multilingual magazine?) and then Nielsen’s findings on age as an indicator of performance, Leah showed how it is the expertise of the user that user assistance (UA) needs to be designed for (especially for high-end users), with some considerations made for age, while the gender and culture of users are not major factors. Help also needs to be contextual and concise, embedded close to the action. That users are saying things like “If I want help on Office, I go to Google” isn't all that profound at this stage, but it is always worth reiterating how search can be optimized to return better results for users. Interestingly, regardless of user education level, the issue of information quality, hinging on the lynchpin of terminology reflecting that of the user, is critical. Major takeaway for me there. Matthew Ellison’s sessions on embedded help and demos were also impressive. Embedded help that is concise and contextual is definitely a powerful UX enabler, and I’m pleased to say that in Oracle Fusion Applications we have embraced the concept fully. Matthew also mentioned in his session about successful software demos that the principle of modality with demos is a must. Look no further than the Oracle User Productivity Kit demo modes See It!, Try It!, Know It, and Do It!, for example. I also found some key takeaways in the presentation by Marie-Louise Flacke on notes and warnings. Here, legal considerations seemed to take precedence over providing any real information to users. I was delighted when Marie-Louise called out the Oracle JDeveloper documentation as an exemplar of how to use notes and instructions instead of trying to scare the bejaysus out of people without providing them with any real information they’d find useful. My own session on designing messages for enterprise applications was well attended. Know your user profiles (remember, user expertise is the king maker for UA, so write for each audience involved), how users really work, the required application business and UI rules, what your application technology supports, and how messages integrate with the enterprise help desk and support policies, and you will go much further than relying solely on the guideline of "writing messages in plain language". And remember the value in warnings and confirmation messages too, and how you can use them smartly. I hope y’all got something from my presentation and from my answers to questions afterwards. Ellis Pratt stole the show with his presentation on applying game theory to software UA, using plenty of colorful, relevant examples (check out the Atlassian and DropBox approaches, for example), and striking just the right balance between theory and practice. 
Completely agree that the approach to take here is not to make UA itself a game, but to invoke UA as part of a bigger game dynamic (time-to-task completion, personal and communal goals, personal achievement and status, and so on). Sure, there are gotchas and limitations to gamification, and we need to do more research. However, we'll hear a lot more about this subject in coming years, particularly in the enterprise space. I hope. I also heard good things about the different sessions about DITA usage (including one by Sonja Fuga that clearly opens the door for major innovation in the community content space using WordPress), the progressive disclosure of information (Cerys Willoughby), an overview of controlled language (or "information quality", as I like to position it) solutions and rationale by Dave Gash, and others. I also spent time chatting with Mike Hamilton of MadCap Software, who showed me a cool demo of their Flare product, and the Lingo translation solution. I liked the idea of their licensing model for workers-on-the-go; that’s smart UX-awareness in itself. Also chatted with Julian Murfitt of Mekon about uptake of DITA in the enterprise space. In all, it's worth attending UA Europe. I was surprised, however, not to see conference topics about mobile UA, community conversation and content, and search in its own right. These are unstoppable forces now, and the latter is pretty central to providing assistance to all but the most irredentist of hard-copy fetishists or advanced technical or functional users working away on the back end of applications and systems. Only saw one iPad too (says the guy who carries three laptops). Tweeting was pretty much nonexistent during the event, so no community energy there. Perhaps all this can be addressed next year. I would love to see the next UA Europe event come to Dublin (despite Bloomsday, it's not a bad place, really) now that hotels are so cheap and all. So, what is my overall impression of the state of user assistance in Europe? Clearly, there are still many people in the industry who feel there is something broken with the traditional forms of user assistance (particularly printed doc) and something needs to be done about it. I would suggest they move on and try to embrace change, instead. Many others see new possibilities, offered by UX and technology, as well as the reality of online user behavior in an increasingly connected world, and that is encouraging. Such thought leaders need to be listened to. As Ellis Pratt says in his great book, Trends in Technical Communication - Rethinking Help: “To stay relevant means taking a new perspective on the role (of technical writer), and delivering “products” over and above the traditional manual and online Help file... there are a number of new trends in this field - some complementary, some conflicting. Whatever trends emerge as the norm, it’s likely the status quo will change.” It already has, IMO. I hear similar debates in the professional translation world about the onset of translation crowd sourcing (the Facebook model) and machine translation (trust me, that battle is over). Neither of these initiatives has put anyone out of a job and probably won't, though the nature of the work might change. If anything, such innovations have increased the overall need for professional translators as user expectations rise, new audiences emerge, and organizations need to collate and curate user-generated content, combining it with their own. 
Perhaps user assistance professionals can learn from other professions and grow accordingly.

    Read the article

  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, I personally get lots of ideas about it. I have seen logging used as a very generic term. Let me ask you this question before I continue writing about logging. What is the first thing that comes to your mind when you hear the word “Logging”? Now ask the same question to the person standing next to you. I am pretty confident that you will get a different answer from different people. I decided to try this activity and asked 5 SQL Server people the same question. Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me list the answers I got from my friends, and let us go over them one by one. Output Clause The very first person replied “output clause”. A pretty interesting answer to start with, and I see exactly what he was thinking. SQL Server 2005 introduced the new OUTPUT clause. The OUTPUT clause has access to the inserted and deleted tables (virtual tables), just like triggers. The OUTPUT clause can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements (a short worked example appears at the end of this post). Here are some references for the Output Clause: OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE Reasons for Using Output Clause – Quiz Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples Error Logs I was expecting someone to mention error logs when the subject is logging. The error log is the first place people look when there is any error, either with the application or with the operating system. I keep the policy of checking my server’s error log every day. The reason is simple: often enough in my career I have found something in the error log that I was not expecting. There have been cases when I noticed errors in the error log and fixed them before the end user noticed. Another common practice I always recommend to my DBA friends is that when any error happens, they should find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and will be able to fix it based on the knowledge base they have created. There are also many different kinds of error log files in SQL Server – 1) SQL Server Error Logs 2) Windows Event Log 3) SQL Server Agent Log 4) SQL Server Profiler Log 5) SQL Server Setup Log etc. Here are some references for Error Logs: Recycle Error Log – Create New Log file without Server Restart SQL Error Messages Change Data Capture I was surprised by this answer, and I think I was surprised more by the person who gave it than by the answer itself. I always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational ‘change tables’ rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made. 
Here are some references for Change Data Capture: Introduction to Change Data Capture (CDC) in SQL Server 2008 Tuning the Performance of Change Data Capture in SQL Server 2008 Download Script of Change Data Capture (CDC) CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture Dynamic Management View (DMV) I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. DMVs log, store, or record various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs – High Availability Status, Query Execution Details, SQL Server Resource Status etc. Here are some references for Dynamic Management Views (DMVs): SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns DMV – sys.dm_os_windows_info – Information about Operating System DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28 DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module Transaction Log Impact Detection Using DMV – dm_tran_database_transactions Log Files I almost flipped at this final answer from my friend; it should probably have been the first answer. Yes, indeed, the log file logs SQL Server activity. One could write endlessly about log files. SQL Server uses the log file, with the extension .ldf, to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and that the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of the full backup file the database can be brought back to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models – 1) Simple, 2) Full and 3) Bulk Logged. Each of these models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a log file backup will come into action and save the day! How to Stop Growing Log File Too Big Reduce the Virtual Log Files (VLFs) from LDF file Log File Growing for Model Database – model Database Log File Grew Too Big master Database Log File Grew Too Big SHRINKFILE and TRUNCATE Log File in SQL Server 2008 Can I just say I loved this month’s T-SQL Tuesday question? It really provoked very interesting conversations around me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
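As a footnote to the OUTPUT clause answer above, here is a minimal T-SQL sketch of how the inserted and deleted virtual tables surface the rows a statement actually touched. The temporary table and its columns are made up purely for illustration.

-- Hypothetical table used only to demonstrate the OUTPUT clause
CREATE TABLE #Orders
(
    OrderID INT IDENTITY(1, 1) PRIMARY KEY,
    Status  VARCHAR(20),
    Amount  DECIMAL(10, 2)
);

-- OUTPUT with INSERT returns the rows exactly as they were inserted
INSERT INTO #Orders (Status, Amount)
OUTPUT inserted.OrderID, inserted.Status, inserted.Amount
VALUES ('Pending', 150.00), ('Pending', 99.50);

-- OUTPUT with UPDATE can return both the before and after images of each row
UPDATE #Orders
SET Status = 'Shipped'
OUTPUT deleted.OrderID,
       deleted.Status AS OldStatus,
       inserted.Status AS NewStatus
WHERE Status = 'Pending';

DROP TABLE #Orders;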

    Read the article

  • DialogFX: A New Approach to JavaFX Dialogs

    - by HecklerMark
    How would you like a quick and easy drop-in dialog box capability for JavaFX? That's what I was thinking when a weekend presented itself. And never being one to waste a good weekend...  :-) After doing some "roll-your-own" basic dialog building for a JavaFX app, I recently stumbled across Anton Smirnov's work on GitHub. It was a good start, but it wasn't exactly what I was after, and ideas just kept popping up of things I'd do differently. I wanted something a bit more streamlined, a bit easier to just "drop in and use". And so DialogFX was born. DialogFX wasn't intended to be overly fancy, overly clever - just useful and robust. Here were my goals: Easy to use. A dialog "system" should be so simple to use a new developer can drop it in quickly with nearly no learning curve. A seasoned developer shouldn't even have to think, just tap in a few lines and go. Why should dialogs slow "actual development"?  :-) Defaults. If you don't specify something (dialog type, buttons, etc.), a good dialog system should still work. It may not be pretty, but it shouldn't throw gears. Sharable. It's all open source. Even the icons are in the commons, so they can be reused at will. Let's take a look at some screen captures and the code used to produce them.   DialogFX INFO dialog Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX();        dialog.setTitleText("Info Dialog Box Example");        dialog.setMessage("This is an example of an INFO dialog box, created using DialogFX.");        dialog.showDialog(); DialogFX ERROR dialog Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX(Type.ERROR);        dialog.setTitleText("Error Dialog Box Example");        dialog.setMessage("This is an example of an ERROR dialog box, created using DialogFX.");        dialog.showDialog(); DialogFX ACCEPT dialog Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX(Type.ACCEPT);        dialog.setTitleText("Accept Dialog Box Example");        dialog.setMessage("This is an example of an ACCEPT dialog box, created using DialogFX.");        dialog.showDialog(); DialogFX Question dialog (Yes/No) Screen captures Windows Mac  Sample code         DialogFX dialog = new DialogFX(Type.QUESTION);        dialog.setTitleText("Question Dialog Box Example");        dialog.setMessage("This is an example of an QUESTION dialog box, created using DialogFX. Would you like to continue?");        dialog.showDialog(); DialogFX Question dialog (custom buttons) Screen captures Windows Mac  Sample code         List<String> buttonLabels = new ArrayList<>(2);        buttonLabels.add("Affirmative");        buttonLabels.add("Negative");         DialogFX dialog = new DialogFX(Type.QUESTION);        dialog.setTitleText("Question Dialog Box Example");        dialog.setMessage("This is an example of an QUESTION dialog box, created using DialogFX. This also demonstrates the automatic wrapping of text in DialogFX. Would you like to continue?");        dialog.addButtons(buttonLabels, 0, 1);        dialog.showDialog(); A couple of things to note You may have noticed in that last example the addButtons(buttonLabels, 0, 1) call. You can pass custom button labels in and designate the index of the default button (responding to the ENTER key) and the cancel button (for ESCAPE). Optional parameters, of course, but nice when you may want them. Also, the showDialog() method actually returns the index of the button pressed. 
Rather than create EventHandlers in the dialog that really have little to do with the dialog itself, you can respond to the user's choice within the calling object. Or not. Again, it's your choice.  :-) And finally, I've Javadoc'ed the code in the main places. Hopefully, this will make it easy to get up and running quickly and with a minimum of fuss. How Do I Get (Git?) It? To try out DialogFX, just point your browser here to the DialogFX GitHub repository and download away! Please take a look, try it out, and let me know what you think. All feedback welcome! All the best, Mark 

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >