Search Results

Search found 2620 results on 105 pages for 'stand alone'.


  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers. Simplicity To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff. Productivity The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates. Lower TCO Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. 
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Do NOT Change “Copy Local” project references to false, unless you understand the consequences.

    - by Michael Freidgeim
    To optimize the performance of a Visual Studio build, I've found multiple recommendations to change the CopyLocal property of dependent DLLs to false. For example, http://stackoverflow.com/questions/690033/best-practices-for-large-solutions-in-visual-studio-2008 says "CopyLocal? For sure turn this off"; http://stackoverflow.com/questions/280751/what-is-the-best-practice-for-copy-local-and-with-project-references says "Always set the Copy Local property to false and enforce this via a custom msbuild step"; and http://codebetter.com/patricksmacchia/2007/06/20/benefit-from-the-c-and-vb-net-compilers-perf/ advises "My advice is to always set 'Copy Local' to false". Some time ago we tried changing the setting to false and found that it causes problems for the deployment of top-level projects. Recently I followed the suggestion again and changed the setting for middle-level projects. It didn't cause immediate issues, but I was warned by Readify consultant Colin Savage about possible errors during deployments. I didn't undo the changes immediately, and we found a few issues during testing. There are many scenarios in which you need to leave Copy Local set to true. The concerns are highlighted in some Stack Overflow answers, but those answers have only a small number of votes. Top-level projects: set Copy Local = true. First of all, Copy Local = false doesn't work correctly for top-level projects, i.e. executables or web sites. As pointed out in the answer http://stackoverflow.com/a/6529461/52277, for all the references in the project at the top, set Copy Local = true. Alternatively, you have to change the output directory, as described in http://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/: if you set Copy Local = false, Visual Studio will, unless you tell it otherwise, place each assembly alone in its own .\bin\Debug directory. Because of this, you will need to configure Visual Studio to place assemblies together in the same directory. To do so, for each project go to Project Properties > Build tab > Output path, and set the output path to ..\bin\Debug for the debug configuration and ..\bin\Release for the release configuration. Second-level dependencies: set Copy Local = true. Another example where Copy Local = false fails at run time is when the top-level assembly doesn't directly reference one of its indirect dependencies. E.g. top-level assembly A references assembly B with Copy Local = true, but assembly B references assembly C with Copy Local = false. Most likely assembly C will be missing at run time and will cause errors. See http://stackoverflow.com/questions/602765/when-should-copy-local-be-set-to-true-and-when-should-it-not?lq=1: "Copy Local is important for deployment scenarios and tools. As a general rule you should use CopyLocal=True", and the same answer warns: "Unfortunately there are some quirks and CopyLocal won't necessarily work as expected for assembly references in secondary assemblies structured as shown below. MainApp.exe, MyLibrary.dll, ThirdPartyLibrary.dll (if in the GAC, CopyLocal won't copy it to the MainApp bin folder). This makes xcopy deployments difficult."
    Reflection-loaded DLL dependencies: set Copy Local = true. E.g. a user can see the error "System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information." The fix recommended in http://stackoverflow.com/a/6200173/52277 is "I solved this issue by setting the Copy Local attribute of my project's references to true." In general, the cost of investigating deployment issues may outweigh the benefits of reduced build time. Setting Copy Local to false without considering deployment issues is not a good idea.
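    As a quick, low-risk way to try the shared-output-directory approach described above before editing every project, the same effect can be approximated from a Visual Studio command prompt. This is only a sketch: the solution name and relative path are illustrative, and note that MSBuild expects the trailing backslash on OutDir.

        msbuild MySolution.sln /p:Configuration=Debug /p:OutDir=..\bin\Debug\

    If all of the solution's assemblies and their Copy Local dependencies land together in that one folder and the application still starts, the per-project Output path change in Visual Studio should behave the same way.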

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Divya Malik
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle In a recent post on the Oracle Applications blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas in their roles by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What’s that cricket noise I hear? That’s correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it’s certainly a truism that “success” in most professional endeavors requires working with people, it’s a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That’s how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you’re matching the best products for the opportunity and join you in critical meetings. They help knock the socks of your prospects in “bake off” demo’s. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It’s relatively siloed. It’s painful to search. It’s difficult to align by topic. And it’s nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It’s about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It’s collaboration that’s built around opportunities, accounts, and contacts. It’s collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. It’s collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes, and as we’ve largely discussed here, for specific ‘deals.’ And it’s collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let’s remember that collaboration “happens” among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. 
But what social networking in general, and Oracle Social Networking in particular delivers, is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com

    Read the article

  • How do I fix my resolution after Directx install through Steam?

    - by Justin
    I'm a bit long-winded so see bottom for quick version and specs. Friendly Hello: Hello all on these askUbuntu pages, I just recently built my own computer and decided to switch to Ubuntu for the extra coolness. I've been learning a lot through all this, and mostly been trying to figure out issues on my own (read: Google searches). However, I couldn't seem to find others with this problem so I've come here for help. Detailed Recount: So I just used WINE and WINETRICKS to install Steam. All went well and it worked. Then I went to trying a game out. I remembered that Orcs Must Die! worked from http://www.steamgamesonlinux.com/ so I tried that out. After selecting to download it, that's when the problem occurred. The screen suddenly zoomed in!!! I think it's the resolution right? Half the screen is cut off and I can't see parts of the right side of windows. My theory is that this is due to Direct X being installed through Steam, as Steam automatically installed it as I chose to download the game. It didn't even ask me to install Direct X or not ): It all happened so fast. This all being said, the game works fine! It looks a little strange, as if the resolution was off, but it plays just fine. What I did so far: Restarted my computer. Didn't work -_- Researched Steam installing DirectX on Ubuntu then messing up resolution and couldn't really find anything. Researched uninstalling DirectX from Ubuntu but only found uninstalling DirectX after having been installed with Wine, not through Steam. Got mad and ate my feelings. Tried "xrandr -s 0" but it didn't do anything. Ran xrandr alone and terminal showed this: Screen 0: minimum 8 x 8, current 640 x 480, maximum 16384 x 16384 DVI-I-0 connected 640x480+0+0 (normal left inverted right x axis y axis) 0mm x 0mm 640x480 59.9*+ 320x240 120.1 DVI-I-1 disconnected (normal left inverted right x axis y axis) HDMI-0 disconnected (normal left inverted right x axis y axis) DP-0 disconnected (normal left inverted right x axis y axis) DVI-D-0 disconnected (normal left inverted right x axis y axis) DP-1 disconnected (normal left inverted right x axis y axis) About now I was mad so I played Odin's Sphere then took a nap. Back to it! I entered the following: xrandr --output DVI-I-0 --mode 1024x768 But I was met with this message: xrandr: cannot find mode 1024x768 I get the same messages for 800x600, 1400x1050, and seemingly any other combination of numbers. I then tried Going into System Settings then Displays, then playing around in there. My Resolution is set to 640x480 and there are no other options for me to choose from. Rotation has Normal, Clockwise, Counter Clockwise, and 180 Degrees. It's set to Normal and I haven't messed with that. Launcher Placement has Unknown and All Displays as its two options. It's set to Unknown, but moving it to All Displays doesn't seem to do anything. Finally, when I click Detect Displays, nothing seems to happen. Quick Version: Linux noob. Steam installed with Wine and Winetricks. Steam downloaded and installed game + DirectX. Resolution messed up now (I think; pretty sure), can't fix it, very annoying, no idea what's going on, halp! Specs: Ubuntu Version 12.04 Wine Version 1.4.1 Have not changed any settings in Wine Using Winetricks Graphics Card: http://www.gigabyte.com/products/pro...px?pid=4361#sp Drivers: Proprietary (Installing those were a LOT of fun) Also let it be known that I have a DVI to VGA cord running from my Graphics card to my monitor. If any more information is needed I am ready to report. 
Thank You: Thanks a lot for your help and all the work you do to support noob ubuntuers like me (:
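    For readers who land on this excerpt with the same symptom (xrandr listing only 640x480 and reporting "cannot find mode" for anything else), the usual sequence is to define the missing mode and then attach it to the output. This is only a sketch based on the DVI-I-0 output name and the 1024x768 target from the question; copy the modeline numbers from what cvt actually prints on your machine rather than reusing the ones below:

        cvt 1024 768 60
        xrandr --newmode "1024x768_60.00" 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync
        xrandr --addmode DVI-I-0 1024x768_60.00
        xrandr --output DVI-I-0 --mode 1024x768_60.00

    Bear in mind that if the proprietary driver in question is NVIDIA's (the DVI-I-0/HDMI-0 output names suggest it is), it may ignore modes added this way, in which case nvidia-settings or an xorg.conf Modeline is the more likely place to fix it.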

    Read the article

  • How to make software development decisions based on facts

    - by Laila
    We love to hear stories about the many and varied ways our customers use the tools that we develop, but in our earnest search for stories and feedback, we'd rather forgotten that some of our keenest users are fellow RedGaters, in the same building. It was almost by chance that we discovered how the SQL Source Control team were using SmartAssembly. As it happens, there is a separate account (here on Simple-Talk) of how SmartAssembly was used to support the Early Access program; by providing answers to specific questions about how the SQL Source Control product was used. But what really got us all grinning was how valuable the SQL Source Control team found the reports that SmartAssembly was quickly and painlessly providing. So gather round, my friends, and I'll tell you the Tale Of The Framework Upgrade . <strange mirage effect to denote a flashback. A subtle background string of music starts playing in minor key> Kevin and his team were undecided. They weren't sure whether they could move their software product from .NET 2 to .NET 3.5 , let alone to .NET 4. You see, they were faced with having to guess what version of .NET was already installed on the average user's machine, which I'm sure you'll agree is no easy task. Upgrading their code to .NET 3.5 might put a barrier to people trying the tool, which was the last thing Kevin wanted: "what if our users have to download X, Y, and Z before being able to open the application?" he asked. That fear of users having to do half an hour of downloads (.followed by at least ten minutes of installation. followed by a five minute restart) meant that Kevin's team couldn't take advantage of WCF (Windows Communication Foundation). This made them sad, because WCF would have allowed them to write their code in a much simpler way, and in hours instead of days (as was the case with .NET 2). Oh sure, they had a gut feeling that this probably wasn't the case, 3.5 had been out for so many years, but they weren't sure. <background music switches to major key> SmartAssembly Feature Usage Reporting gave Kevin and his team exactly what they needed: hard data on their users' systems, both hardware and software. I was there, I saw it happen, and that's not the sort of thing a woman quickly forgets. I'll always remember his last words (before he went to lunch): "You get lots of free information by just checking a box in SmartAssembly" is what he said. For example, they could see how many CPU cores their customers were using, and found out that they should be making use of parallelism to take advantage of available cores. But crucially, (and this is the moral of my tale, dear reader), Kevin saw that 99% of SQL Source Control's users were on .NET 3.5 or above.   So he knew that they could make the switch and that is was safe to do so. With this reassurance, they could use WCF to not only make development easier, but to also give them a really nice way to do inter-process communication between the Source Control and the SQL Compare products. To have done that on .NET 2.0 was certainly possible <knowing chuckle>, but Microsoft have made it a lot easier with WCF. <strange mirage effect to denote end of flashback> So you see, with Feature Usage Reporting, they finally got the hard evidence they needed to safely make the switch to .NET 3.5, knowing it would not inconvenience their users. And that, my friends, is just the sort of thing we like to hear.

    Read the article

  • Record and Play your WebLogic Console Tasks Like a DVR

    - by james.bayer
    Automation using WebLogic Scripting Tool Today on the Oracle internal mailing list for WebLogic Server questions someone asked how to automate the configuration of the thread model for WebLogic Server and they were having trouble with the jython scripting syntax.  I’ve previously written about this feature called Work Managers and the associated constraints.  However, I did not show how to automate the process of configuring this without the console using WebLogic Scripting Tool – the jython scripting automation environment abbreviated as WLST.  I’ve written some very basic introductions to WLST before and there is also an Oracle By Example on the subject, but this is a bit more advanced.  Fear not because there is a really easy-to-use feature of the WLS console that lets you “Record” user actions just like a DVR.  Using these recordings of the web-based console, you can easily create a script even if you are unfamiliar with the WLST syntax and API.  I’m a big fan of both DVR’s and automation as can be evidenced with this old Halloween picture taken during simpler times.  Obviously the Cast Away and The Big Labowski references show some age.  I was a big Tivo fan-boy back in the day and I still think it’s the best DVR. I strongly believe that WebLogic Scripting Tool (WLST) is an absolutely essential tool for automating administration tasks in anything beyond a development environment.  Even in development environments you can make a case that it makes sense to start the automation for environments downstream.  I promise you that once you start using it for any tasks that you do even semi-regularly, you won’t go back to clicking through the console.  It’s simply so much more efficient and less error-prone to run a script. Let’s say you need to create a Work Manager and MaxThreadsConstraint – the easy way to do it is configure it in the WLS console first while capturing the commands with a recording.  See the images for the simple steps – click to enlarge. Record Console Configurations to a File Review the Recordings and Make Slight Modifications In order to make the recorded .py file directly callable as a stand-alone script I added calls to the connect() and edit() functions at the beginning and calls to disconnect() and exit() at the end – otherwise the main section of the script was provided by the console recording.  Below is the resulting file I saved as d:/temp/wm.py connect('weblogic','welcome1', 't3://localhost:7001') edit() startEdit()   cd('/SelfTuning/wl_server') cmo.createMaxThreadsConstraint('MaxThreadsConstraint-0')   cd('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0') set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName)) cmo.setCount(5) cmo.unSet('ConnectionPoolName')   cd('/SelfTuning/wl_server') cmo.createWorkManager('WorkManager-0') cd('/SelfTuning/wl_server/WorkManagers/WorkManager-0') set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))   cmo.setMaxThreadsConstraint(getMBean('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0')) cmo.setIgnoreStuckThreads(false)   activate() disconnect() exit() Run the Script If you want to test it be sure to delete the Work Manager and MaxThreadConstraint that you had previously created in the console.  
Do something like the following - set up the environment and tell WLST to execute the script which happens in the first 2 lines, the rest doesn’t require any user input: D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server\bin>setDomainEnv.cmd D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST d:\temp\wm.py   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   Location changed to edit tree. This is a writable tree with DomainMBean as the root. To make changes you will need to start an edit session via startEdit().   For more help, use help(edit)   Starting an edit session ... Started edit session, please be sure to save and activate your changes once you are done. Activating all your changes, this may take a while ... The edit lock associated with this edit session is released once the activation is completed. Activation completed Disconnected from weblogic server: examplesServer     Exiting WebLogic Scripting Tool.   Now if you go back and look in the console the changes have been made and we now have a compete script.  Of course there is a full MBean reference and you can learn the nuances of jython and WLST, but why not the WLS console do most of the work for you!  Happy scripting.
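    One refinement worth considering before reusing a recorded script such as wm.py in other environments: WLST's -loadProperties option keeps the connection details out of the script itself. The sketch below is not part of the console recording, and the property names and file locations are illustrative. Create a plain properties file, say d:\temp\wls.properties:

        adminUser=weblogic
        adminPassword=welcome1
        adminUrl=t3://localhost:7001

    change the first line of the recorded script to connect(adminUser, adminPassword, adminUrl), and launch it with:

        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST -loadProperties d:\temp\wls.properties d:\temp\wm.py

    WLST exposes each property as a variable inside the script, so the same recording can be pointed at a different admin server by editing only the properties file.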

    Read the article

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential and had a long way to grow. Vendor architectures were proprietary, and limited in function at first, growing in capability and complexity over time. There were few if any "standards", let alone "open standards" and the concepts of "open systems", and "open architectures" were far in the future. Companies employed intelligent, talented and creative people to implement the best possible solutions for their company. At first, those solutions were "unique" to each company. As time progressed, standards emerged, companies shared knowledge, business capability supplied by technology grew, and companies continued to expand their use of technology. Taking advantage of change required companies to struggle through periodic "revolutionary" change cycles, struggling through costly changes that were fraught with risk, resulted in solutions with an increasingly shorter half-life, and frequently required altering existing business processes and retraining employees and partner businesses. The pace of technological invention and implementation grew at an ever increasing rate, making the "revolutionary" approach based upon "proprietary" or "closed" architectures or technologies no longer viable. Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast paced, highly charged, and competitive global economy that we have today, where the most successful companies are companies that are good at implementing, leveraging and exploiting change. Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where "evolutionary" change is crucial. Companies can no longer afford to build "unique IT", nor can they afford regular intervals of "revolutionary" change, with the associated costs and risks. Human ingenuity was once again up to the task, turning technology into a platform supporting business through evolutionary change, by employing "open": open standards; open systems; open architectures; and open solutions. Employing "open", enables companies to implement systems based upon technology, capability and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions, and processes down into the technology, rather than exposing technology to the business, allowing companies to focus on providing "unique business value" rather than "unique IT". The big question! Does moving from "older" technology that no longer meets the needs of today's business, to new "open" technology require yet another "revolutionary change"? A "revolutionary" change with a short half-life, camouflaging reality with great marketing? The answer is "perhaps". With the endless options available to choose from, it is entirely possible to implement a solution that may work well today, but in 5 years time will become yet another albatross for the company to bear. Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution. There is no such thing as a "1 size fits all" IT solution for business. 
If all companies implemented business solutions based upon technology that required, or forced the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through "unique business value". It would be equally difficult to "evolve" to meet or exceed business needs and keep up with today's rapid pace of change. How does one ensure that they do not jump from one trap directly into another? Or to put it positively, there are solutions available today that can address these challenges and issues. How does one ensure that the buying decision of today will serve the business well for years into the future? Intelligent & Informed decisions - "buying right" In a previous blog entry, we discussed the value of linking tactical to strategic The key is driving the focus to what is best for your business, handling today's tactical issues while also aligning with a roadmap/strategy that is tightly aligned with your strategic business objectives. When considering the plethora of possible options that provide various approaches to solving today's complex business problems, it is extremely important to ensure that vendors supplying those options, focus on what is best for your business, supplying sufficient information, providing adequate answers to questions, addressing challenges, issues, concerns and objections honestly and openly, and focus on supplying solutions that are tailored for, and deliver the most business value possible for your business. Here are a few questions to consider relative to the proposed options that should help ensure that today's solution doesn't become tomorrow's problem. Do the proposed solutions: Solve the problem(s) you are trying to address? Provide a solid foundation upon which to grow/enhance your business? Provide tactical gains that align with and enable your strategic business goals/objectives? Provide an infrastructure that can be leveraged with subsequent projects? Solve problems for the business overall, the lines of business, or just IT? Simplify your current environment Provide the basis for business: Efficiency Agility Clarity governance, risk, compliance real time business visibility and trend analysis Does your IT staff have the knowledge/experience to successfully manage the proposed systems once they are deployed in production? Done well, you will be presented with options tailored to your business, that enable you to drive the "unique business value" necessary to help your business stand out from others, creating a distinct competitive advantage, delivering what your customers need, when they need it, so you can attract new customers, new business, and grow top line revenue, all at a cost that provides a strong Return on Investment/Return on Assets. The net result is growth with managed cost providing significantly improved profit margin and shareholder value.

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself I have been honored to participate as a judge. It is not an easy job because there is a voluminous amount of nominees from all over the world. Each judge has to read through every word of the nominee's answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I can not tell you how many as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all like it were yesterday. fuzzy thought cloud here. It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had just received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes.)  and loves her daily job as a DBA working with developers, mentoring them and teaching them best practices with kindness and patience. 
She is a people person who just happens to have 10+ years experience with RDBMS'. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com and a young upstart to the SQL Community, this cat named Brent Ozar to announce the 2011 winner. I personally have not heard of Brent but I am told I have interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high...the competition is fierce...the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: The Exceptional DBA of the Year receives full conference registration for the 2011 PSS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub of C-level executives and managers of insurance carriers for the IASA 2013 Conference.  Insurance Accounting/Regulation and Technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the overall strong networking side of the conference.  As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor.  The IASA organizers had pre-arranged fast track tours whereby interested conference delegates could sign up for a series of like-themed presentations from vendors, giving them a level of 'Speed Dating' introductions to possible solutions and services.  Oracle participated in a number of these, which were very well subscribed.  Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper.  As we navigated through the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas: Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements. Alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication. Doing more with less, and certainly expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or get a lot more from their software solutions for the same buck. Doing things smarter: a recognition that, with the pace of technology, standing still effectively means going backwards. Technology, and in particular its interaction with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but now at the Enterprise level ever more effective technology exploitation is beginning to take place. Data, and in particular the knowledge gleaned from data, is refining and improving business processes.  Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come.  Amassing large volumes of data is one thing, but effectively analyzing that data is another.  It is the results of such analysis that lead to improvements both in terms of insurance product offerings and the processes to support them. Regulatory Compliance, damned if you do and damned if you don’t! Clearly, around the globe a lot is changing from a regulatory perspective, and it is evident that in terms of regulatory requirements, whilst there is a greater convergence across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just as with big data, hidden behind effective regulatory compliance there often lie golden nuggets that can give competitive advantages. 
From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers that we might not have otherwise caught up with for a while. Fortunately, I was able to catch up on a few sessions at the close of the Exhibition.  The speaker quality was high and the audience asked challenging, but pertinent, questions.  During Dr. Jackie Freiberg’s keynote “Bye Bye Business as Usual,” the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo.  The very first strategy: Get wired for innovation.  Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters.  Today, technology and innovation go hand in hand. In speaking to delegates during and after the conference, a high degree of satisfaction could be measured from their positive comments of speaker sessions and the exhibitors. I suspect many will be back in 2014 with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.?  If so, I would love to hear your comments. Andrew Collins is the Director, Solvency II of Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • Upgrading to Oracle Enterprise Manager 12c Release 2: Top Tips One Must Know

    - by AnkurGupta
    Recently Oracle announced incremental release of Enterprise Manager 12c called Enterprise Manager 12c Release 2 (EM12c R2) which includes several new exciting features (Press announcement). Right before the official release, we upgraded an internal production site from EM 12c R1 to EM 12c R2 and had an extremely pleasant experience. Let me share few key takeaways as well as few tips from this upgrade exercise. I - Why Should You Upgrade To Enterprise Manager 12c Release 2 While an upgrade is usually recommended primarily to take benefit of the latest features (which is valid for this upgrade as well), I found several other compelling reasons purely from deployment perspective. Standardize your EM deployment:  Enterprise Manager comprises of several different components (OMS, agents, plug-ins, etc) and it might be possible that these are at varied patch levels in your environment. For instance, in case of an environment containing Bundle Patch 1 (customer announcement), there is a good chance that you may not have all the components up-to-date. There are two possible reasons. Bundle Patch 1 involved patching different components (OMS, agents, plug-ins) with multiple one-off patches which may not have been applied to all components yet. Bundle Patch 1 for different platforms were not released together. Which means you may not have got the chance to patch all the components on different platforms. Note: BP1 patches are not mandatory to upgrade to EM12c R2 release EM 12c R2 provides an excellent opportunity to standardize your Cloud Control environment (OMS, repository and agents) and plug-ins to latest versions in single shot. All platform releases are made available simultaneously: For the very first time in the history of EM release, all the platforms were released on day one itself, which means you do not need to wait for platform specific binaries for EM OMS or Agent to perform install or upgrades in a heterogeneous environment. Highly refined and automated process – Upgrade process is by far the smoothest and the cleanest as compared to previous releases of Enterprise manager. Following are the ones that stand out. Automatic Plug-in management – Plug-in upgrade along with new plug-in deployment is supported in upgrade installer wizard which means bulk of the updates to OMS and repository can be done in the same workflow. Saves time and minimizes user inputs. Plug-in Upgrade or Migrate Auto Update: While doing the OMS and repository upgrade, you can use Auto Update screen in Oracle Universal Installer to check for any updates/patches. That will help you to avoid the know issues and will make sure that your upgrade is successful. Allows mass upgrade of EM Agents – A new dedicated menu has been added in the EM console for agent upgrade. Agent upgrade workflow is extremely simple that requires agent name as the only input. ADM / JVMD Manager/Agent upgrade – complete process is supported via UI screens. EM12c R2 Upgrade Guide is much simpler to follow as compared to those for earlier releases. This is attributed to the simpler upgrade process. Robust and Performing Platform: EM12c R2 release not only includes several new features, but also provides a more stable platform which incorporates several fixes and enhancements in the Enterprise Manager framework. II - Few Tips To Remember In my last post (blog link) I shared few tips and tricks from my experience applying the Bundle Patch. 
Recently I upgraded the same site to EM 12c R2 and found few points that you must take note of, while planning this upgrade. The tips below are also applicable to EM 12c R1 environments that do not have Bundle Patch 1 patches applied. Verify the monitored application certification – Specific targets like E-Business Suite have not yet been certified as managed target in EM 12c R2. Therefore make sure to recheck the Enterprise Manager certification Matrix on My Oracle Support before planning the upgrade. Plan downtime – Because EM 12c R2 is an incremental release of EM 12c, for EM 12c R1 to EM 12c R2 upgrade supports only 1-system upgrade approach, which mean there will be downtime. OMS name change after upgrade – In case of multi OMS environments, additional OMS is renamed after upgrade, which has few implications when you upgrade JVMD and ADP agents on OMS. This is well documented in upgrade guide but make sure you read through all the notes. Upgrading BI Publisher– EM12c R2 is certified with BI Publisher 11.1.1.6.0 only. Therefore in case you are using EM 12c R1 which is integrated with BI Publisher 11.1.1.5.0, you must upgrade the BI Publisher to 11.1.1.6.0. Follow the steps from Advanced Installation and Configuration Guide here. Perform Post upgrade Tasks – Make sure to perform post upgrade steps mentioned in documentation here. These include critical changes that must be done right after upgrade to get the right configuration. For instance Database plug-in should be upgraded to Revision 3 (12.1.0.2.0 [u120804]). Delete old OMS Home – EM12c R1 to EM12c R2 is an out of place upgrade, which means it creates a new oracle home for OMS, plug-ins, etc. Therefore please ensure that You have sufficient extra space for new OMS before starting the upgrade process. You clean up the old OMS home after the upgrade process. Steps are available here. DO NOT remove the agent home on OMS host, because agent is upgraded in-place. If you have standby OMS setup then do look into the steps to upgrade the standby OMS from the upgrade guide before going ahead. Read the right documentation – Make sure to follow the Upgrade guide which provides the most comprehensive information on EM12c R2 upgrade process. Additionally you can refer other resources to get familiar with upgrade concepts. Recorded webcast - Oracle Enterprise Manager Cloud Control 12c Release 2 Installation and Upgrade Overview Presentation - Understanding Enterprise Manager 12.1.0.2 Upgrade We are very excited about this latest release and will look forward to hear back any feedback from your upgrade experience!

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server database by storing perfectly normal data values in a table, but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data, the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, they just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data to SQL Server 2005, Microsoft added a DATA_PURITY clause to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. It didn't happen: SQL Server 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types". 
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
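    For anyone inheriting one of those upgraded SQL Server 2000 databases, the DATA_PURITY check mentioned above can be kicked off from the command line as well as from Management Studio. This is a minimal sketch; the server, the -E trusted-connection switch and the database name are placeholders for your own environment:

        sqlcmd -S localhost -E -Q "DBCC CHECKDB ('MyUpgradedDb') WITH DATA_PURITY;"

    Remember that, as Tony notes, the command only reports the out-of-range FLOAT or DATETIME values; cleaning them up is still a manual job, typically an UPDATE that replaces the offending values with NULL or a sentinel once you have tracked down the rows.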

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-coordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it, however I’m obliged to refer to the designation cautiously; in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn’t all I did. My duties extended to some other interesting annual occurrence – stock take. Despite a bit more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete.  Then also I remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple right? We’ll with a store which could potentially carry over tens of thousands of different items… we’ll let’s just say I believe that’s when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelve filled with items he or she had to sort then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only auditors followed shortly on the heels of the counters, recounting stock levels, making sure the original count correspond to their findings. It was a simple yet hugely responsible orchestration of people and thankfully there was one fundamental and golden rule I could always abide by to ensure things run smoothly – No-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes, then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now years later this rule or guideline still holds true as we develop software (as far removed as software development from counting stock may be). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today – is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or team) making sure the logic is sound. 
The practice however has its obvious drawbacks. Firstly and mainly, it is resource intensive, and from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find a solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps to audit your work. It lends a fresh set of eyes to any project you’re working on and, best of all, it is cost effective and fast. Check it out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we’ll dive deeper into how it works and how it can help you.   Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.
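    To make the idea a little more concrete, here is a rough sketch of the kind of parameterized unit test PEX explores. I am writing it from memory against the Pex framework attributes (PexClass, PexMethod, PexAssume), and the Calculator class is purely hypothetical, so treat it as an illustration of the approach rather than a copy-paste sample.

    using Microsoft.Pex.Framework;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public class Calculator
    {
        // Hypothetical class under test.
        public int Divide(int x, int y) { return x / y; }
    }

    [PexClass(typeof(Calculator))]
    [TestClass]
    public partial class CalculatorTests
    {
        // PEX explores this parameterized test and generates concrete unit tests
        // (fixed values for x and y) that try to cover every branch it can reach.
        [PexMethod]
        public void Divide(int x, int y)
        {
            // Tell PEX which inputs are outside the scope of this test.
            PexAssume.IsTrue(y != 0);

            int result = new Calculator().Divide(x, y);

            // The assertion states what should hold for every generated input,
            // not just the handful of cases we would have thought of ourselves.
            // (PEX will happily dig up corner cases like int.MinValue / -1 for us.)
            Assert.AreEqual(x, result * y + (x % y));
        }
    }

    Running PEX over something like this produces a small suite of generated tests alongside it – the fresh set of eyes doing the recount.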

    Read the article

  • How To Delete Built-in Windows 7 Power Plans (and Why You Probably Shouldn’t)

    - by The Geek
    Do you actually use the Windows 7 power management features? If so, have you ever wanted to just delete one of the built-in power plans? Here’s how you can do so, and why you probably should leave it alone. Just in case you’re new to the party, we’re talking about the power plans that you see when you click on the battery/plug icon in the system tray. The problem is that one of the built-in plans always shows up there, even if you only use custom plans. When you go to “More power options” on the menu there, you’ll be taken to a list of them, but you’ll be unable to get rid of any of the built-in ones, even if you have your own. You can actually delete the power plans, but it will probably cause problems, so we highly recommend against it. If you still want to proceed, keep reading. Delete Built-in Power Plans in Windows 7 Open up an Administrator mode command prompt by right-clicking on the command prompt and choosing “Run as Administrator”, then type in the following command, which will show you a whole list of the plans. powercfg list Do you see that really long GUID code in the middle of each listing? That’s what we’re going to need for the next step. To make it easier, we’ll provide the codes here, just in case you don’t know how to copy to the clipboard from the command prompt. Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e  (Balanced) Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  (High performance) Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a  (Power saver) Before you do any deleting, what you’re going to want to do is export the plan to a file using the -export parameter. For some unknown reason, I used the .xml extension when I did this, though the file isn’t in XML format. Moving on… here’s the syntax of the command: powercfg -export balanced.xml 381b4222-f694-41f0-9685-ff5bb260df2e This will export the Balanced plan to the file balanced.xml. And now, we can delete the plan by using the -delete parameter, and the same GUID.  powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e If you want to import the plan again, you can use the -import parameter, though it has one quirk: you have to specify the full path to the file, like this: powercfg -import c:\balanced.xml Using what you’ve learned, you can export each of the plans to a file, and then delete the ones you want to delete. Why Shouldn’t You Do This? Very simple. Stuff will break. On my test machine, for example, I removed all of the built-in plans, and then imported them all back in, but I’m still getting this error anytime I try to access the panel to choose what the power buttons do: There are a lot more error messages, but I’m not going to waste your time with all of them. So if you want to delete the plans, do so at your own peril. At least you’ve been warned! 
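    If you do decide to go through with it, back everything up first. Here is a small batch file sketch, using the three GUIDs listed above, that exports each built-in plan before deleting it. The file names are just my own choice, and as said before, proceed at your own risk.

    @echo off
    rem Export each built-in plan so it can be restored later with: powercfg -import <full path to file>
    powercfg -export balanced.xml 381b4222-f694-41f0-9685-ff5bb260df2e
    powercfg -export highperf.xml 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
    powercfg -export powersaver.xml a1841308-3541-4fab-bc81-f71556f20b4a
    rem Now delete the built-in plans (remember: stuff may break)
    powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e
    powercfg -delete 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
    powercfg -delete a1841308-3541-4fab-bc81-f71556f20b4a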

    Read the article

  • Few events I’m speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi. I’ll be glad you do! These are the events with dates and locations that I know I’ll be speaking at so far.   February 16th: SQL Saturday #198 - Vancouver, Canada The session I’ll present in Vancouver is SQL Impossible: Restoring/Undeleting a table Yes, you read the title right. No, it's not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it's crazy. No, it's not for production purposes. And yes, that's why it's so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced level session where we’ll dive into the internals of data pages, transaction log records and page restores.   March 8th-9th: SQL Saturday #194 - Exeter, UK In Exeter I’ll be presenting twice. On the first day I’ll have a full day precon titled: From SQL Traces to Extended Events - The next big switch This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace, which has served us well over the past 10 or so years, is on its way out because the overhead it imposes and the detail it produces are no longer enough to deal with today's loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, thus giving us information SQL Trace just couldn't. They were designed and built with performance in mind, and it shows. Mastering Extended Events requires learning at least one new skill: XML querying. The second session I’ll have on Saturday is titled: SQL Injection from website to SQL Server SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple. Use SQL parameters. But is that enough? In this session we'll look at how an attacker would go about using SQL Injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience because they didn’t realize what can be done.   April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only) This is a conference with history. This year marks its 18th year running. It’s a relatively large IT conference that focuses on various Microsoft technologies like .Net, Azure, SQL Server, Exchange, Security, etc… The main session language is Slovenian, but this is slowly changing so it’s becoming more interesting for foreign attendees. This year it’s happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn’t you agree? And this year there are quite a few well known speakers present! My session title isn’t known yet.       May 2nd-4th: SQL Bits XI – Nottingham, UK SQL Bits is the largest SQL Server conference in Europe. It’s a 3 day conference with top speakers and content all dedicated to SQL Server. The session I’ll present here is an hour long version of the precon I’ll give in Exeter. 
From SQL Traces to Extended Events - The next big switch The session description is the same as for the Exeter precon, but we'll focus more on how the Extended Events work, with only a brief overview of the old SQL Trace architecture.

    Read the article

  • Release Management as Orchestra

    - by ericajanine
    I read an excellent, concise article (http://www.buildmeister.com/articles/software_release_management_best_practices) on the basics of release management practices. In the article, it states "Release Management is often likened to the conductor of an orchestra, with the individual changes to be implemented the various instruments within it." I played in music ensembles for years, so this is especially close to my heart as an example. I learned most of my discipline from hours and hours of practice at the hand of a very skilled conductor and leader. I also learned that the true magic in symphonic performance is one where everyone involved is focused on one sound, one goal. In turn, that solid focus creates a sound and experience bigger than what mechanics alone can accomplish. In symphony, a conductor's true purpose is to make you, a performer, better so the overall sound and end product is better. The big picture (the performance of the composition) is the end-game, and all musicians in the orchestra know without question their part makes up an important but incomplete piece of that performance. A good conductor works with each section (e.g. group) to ensure their individual pieces are solid. Let's restate: The conductor leads and is responsible for ensuring those pieces are solid. While the performers themselves are doing the work, the conductor is the final authority on when the pieces are ready or not. If not, the conductor initiates the efforts to get them ready or makes the decision to scrap their parts altogether for the sake of the overall performance. Let it sink in, because it's clear: it is not the performer's call if they play their part as agreed, it's the conductor's final call to allow it. In comparison, if a software release manager is a conductor, the only way for that manager to be effective is to drive the overarching process and execution of the individual pieces of a software development lifecycle. It does not mean the release manager performs each and every piece, it means the release manager has oversight and influence because the end-game is a successful software enhancement in a usable environment. It means the release manager, not the developer or development manager, has the final call if something goes into a software release. Of course, this is not a process of autocracy or dictation of absolute rule, it's a cooperative effort. But the release manager must have the final authority to make a decision if something is ready to be added to the bigger piece, the overall symphony of software changes being considered for package and release. It also goes without saying that a release manager, like a conductor, must have full autonomy and isolation from other software groups. A conductor is the one on the podium waving a little stick at each section and cueing them for their parts, not yelling from the back of the room while also playing a tuba and taking direction from the horn section. I have personally seen release managers relegated to being considered little more than coordinators, red-tapers there to "satisfy" the demands of an audit group, without anyone bothering to actually respect all that a release manager gives a group willing to employ them fully. In this dysfunctional scenario, development managers, project managers, business users, and other stakeholders have been given nearly full clearance to demand and push their agendas forward, causing a tail-wagging-the-dog scenario where an inherent conflict will ensue. 
Depending on their strength, their determination to keep the peace, and their willingness to overlook a built-in expectation that is wrong, the release manager here must face the crafted conflict head-on and defuse it as quickly as possible. Then, the release manager must clearly make a case for why a change cannot be released without negative impact to all parties involved. If a political agenda is solely driving a software release, there IS no symphony, there is no "software lifecycle". It's just out-of-tune noise. More importantly, there is no real conductor. Sometimes, just wanting to make a beautiful sound is not enough. If you are a release manager, are you freed up enough to move, to conduct the sections of software creation to ensure a solid release performance is possible? If not, it's time to take stock of what your role actually is and see if that is what you truly want to achieve in your position. If you are, then you can successfully build your career and that of the people in your groups to create truly beautiful software (music) together.

    Read the article

  • Top Tier, A-Game Talent - How to Land 'em

    - by GeekAgilistMercenary
    Recently the question came up from a close friend of mine, "will my PhD help me attain a higher income in the northwest?"  I had to tell him that it might get him a little more, but it won't get him into the top income brackets for the occupation.  Another time, a few days later, someone else asked this too.  Then again, I see a job posting that requires a Bachelors Degree and some other nonsense.  The job posting even states they want "A-Game" talent. I am almost shocked that part of this industry doesn't realize how unimportant a degree is to getting real top tier, a-game talent.  (and yes, I get a little riled up about this matter) You Can't Make Good Software Developers.  No college out there is going to train someone to be in the top 10%, and absolutely not to be in the top 5% of skill levels.  Colleges can NOT do this.  It is up to the individual, and the individual alone.  If top tier talent seems to come from a college, one should check that premise and look at the motivations the individuals have to go to that school.  There is most likely a reason that top tier talent appears to be made there.  The college however, can only guide or assist, but I repeat that "top tier talent is a very individualistic endeavor". Some might say, well a group is needed, support is needed, this and that are needed.  True, an individual needs a support system and a college can provide that, but it generally ends there.  The support group helps, provides a sounding wall, and provides correlation to good ideas for the a-game top tier geek.  But again, the endeavor is the individual's desire. top tier talent is a very individualistic endeavor - Me Hiring Top Tier, A-Game Talent There are a few things to keep in mind when trying to hire this level of game player. The first thing is to not require a degree of any sort.  Sure, it looks good, but it won't dictate anything other than that the individual was able to go through the regimented steps of college. List the skills and ideas that you would like to find in an individual.  Think of two people meeting for the first time: what do you want to know about the other individual?  Team fit is absolutely fundamental for top tier talent.  That support group that I mentioned above matters; top tier talent works best with a solid group of players. Keep your technology up to date, moving forward, and don't bore your top talent if you manage to get it.  If the company slows down, they will leave.  The more valuable they find out they are, the lower tolerance they'll have for this.  For managers, directors, and leaders in an organization this is THE challenge. Provide opportunities not just for advancement, but ways for them to advance their knowledge such as training, a book budget, or other means.  Even if some software they want to use isn't used on the project, get it for them (within reason of course; a couple hundred or even a few thousand dollars for a good software license to MSDN, Telerik, or another suite of software is ideal). Don't push them too hard, and don't let them overwork themselves into burnout.  This, as a leader in an organization, is easy to do if one finds themselves actually hiring top talent.  Because top talent just provides results and more results.  But they are human, they will break; don't be the cause of that or you'll lose your talent. For now, that is it from me on this topic, back to the revenue, code, projects, and pushing forward. For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like. Agilist Mercenary

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    A couple of days ago, I received the following email. I found it very interesting and feel like sharing it with all of you. Note: Please read the whole email before providing your suggestions. “Hi Pinal, If you can share these details on your blog, it will help many. We understand the value of the master database and we take its regular backup (every day at midnight). Yesterday we noticed that our master database log file has grown very large. This is the very first time that we have encountered such an issue. The master database is in simple recovery mode, so we assumed that it would never grow big; however, we now have a big log file. We ran the following command USE [master] GO DBCC SHRINKFILE (N'mastlog' , 0, TRUNCATEONLY) GO We know this command will break the chain of LSNs, but as per our understanding it should not matter as we are in the simple recovery model.     After running this, the log file became very small. Just to be cautious, we took a full backup of the master database right away. We totally understand that this is not the normal practice; so if you are going to tell us the same, we are aware of it. However, here is the question for you: What operation in the master database would have caused our log file to grow too large? Thanks, [name and company name removed as per request]“ Here was my response to them: “Hi [name removed], It is great that you are aware of all the right steps and methods. Taking a full backup when you are not sure is always a good practice. Regarding your question of what could have caused your master database log to grow so large, let me try to guess what could have happened. Do you have any user tables in the master database? If yes, this is not recommended and also NOT a good practice. If you have user tables in the master database and you are doing any long operations (maybe lots of inserts, updates, deletes or rebuilding them), then it can cause this situation. You have made me curious about your scenario; do revert back. Kind Regards, Pinal” Within a few minutes I received a reply: “That was it Pinal. We had one of the maintenance task log tables created in the master database, which had many long transactions during the night. We moved it to a newly created database named ‘maintenance’, and we will keep you updated.” I was very glad to receive the email. I do not suggest that any user table should be created in the master database. It should be left free of user objects. Now here is the question for you – can you think of any other reason for master log file growth? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
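    As a small follow-up, if you want to check whether your own master database has quietly collected user objects over the years, a quick query along these lines (my own sketch, not part of the original email exchange) will list any non-system tables living there:

    USE master
    GO
    -- List user-created tables that have ended up in the master database.
    -- is_ms_shipped = 0 filters out the objects that ship with SQL Server itself.
    SELECT name, create_date
    FROM sys.tables
    WHERE is_ms_shipped = 0
    ORDER BY create_date
    GO

    If this returns anything, consider moving those tables to a dedicated database, just like the reader above did.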

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. If you have installed SQL Server but are missing the Data Tools or Reporting Services, double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the Installation options. Click the top link New SQL Server stand-alone installation or add features to an existing installation. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so. Configure Reporting Services If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, then you will have to go back and configure it as a subsequent step. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager > Connect on the Reporting Services Configuration Connection dialog box. On the left-hand side of the Reporting Services Configuration Manager, click Database. Click the Change Database button on the right side of the screen. Select Create a new report server database and click Next. Click through the rest of the wizard accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB, which is used as scratch space when preparing reports for user requests. Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button has been grayed out, move on to the next step. This step sets up the SSRS web service. The web service is the program that runs in the background that communicates between the web page, which you will set up next, and the databases. The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, this means SSRS was already configured. This step sets up the Report Manager web site where you will publish reports. You may be wondering if you also must install a web server on your computer. SQL Server does not require that Internet Information Services (IIS), the Microsoft web server, be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box. Tomorrow’s Post Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in easy, simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • NTFS Issues in Windows 7 and 2008 R2 - 'Is it a Bug?'

    - by renewieldraaijer
    I have been using the various versions of the Microsoft Windows product line since NT4 and I really thought I knew the ins and outs of the NTFS filesystem by now. There were always a few rules of thumb to understand what happens if you move data around. These rules were: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. The same goes for moving data between disk partitions. Only when you move data within the same partition are the permissions kept."  Recently I was asked to assist in troubleshooting some NTFS related issues. This forced me to have another good look at this theory. To my surprise I found out that this theory no longer completely stands. Apparently some things have changed since the release of Windows Vista / Windows 2008. Since the release of these Operating Systems, a move within the same disk partition results in the data inheriting the permissions of the location it is being moved into. A major change in the NTFS filesystem, you would think!  Not quite! The above only applies when the move operation is performed using Windows Explorer. A move using the 'move' command from within a cmd prompt, for example, retains the NTFS permissions, just like before in Windows XP and older systems. Conclusion: The Windows Explorer is responsible for changing the ACLs of the moved data. This is a remarkable change, but if you follow this theory, the resulting ACL after a move operation is still predictable.  We could say that since Windows Vista and Windows 2008, a new rule set applies: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. The same goes for moving data between disk partitions and within disk partitions. Only when you move data within the same partition using something other than Windows Explorer are the permissions kept." The above behavior should be unchanged in Windows 7 / Windows 2008 R2 compared to Windows Vista / 2008. But somehow the NTFS permissions are not so predictable in Windows 7 and Windows 2008 R2. Moving data within the same disk partition results one time in the permissions being kept and the next time in permissions inherited from the destination location. I will try to demonstrate this in a few examples: Example 1 (Incorrect behavior): Consider two folders, 'Folder A' and 'Folder B', with the following permissions configured. Now we create the test file 'test file 1.txt' in 'Folder A' and afterwards move this file to 'Folder B' using Windows Explorer. According to the new theory, the file should inherit the permissions of 'Folder B' and therefore 'Group B' should appear in the ACL of 'test file 1.txt'. In the screenshot below the resulting permissions are displayed. The permissions from the originating location are kept, while the permissions of 'Folder B' should have been inherited. Example 2 (Correct behavior): Again, consider the same two folders. This time we make a small modification to the ACL of 'Folder A'. We add 'Group C' to the ACL and again we create a file in 'Folder A', which we name 'test file 2.txt'. Next, we move 'test file 2.txt' to 'Folder B'. Again, we check the permissions of 'test file 2.txt' at the target location. We can now see that the permissions are inherited. This is what should be happening, and can be considered 'correct behavior' for Windows Vista / 2008 / 7 / 2008 R2. 
It remains uncertain why this behavior is so inconsistent. At this time, this is under investigation with Microsoft Support. The investigation has been going on for the last two weeks and it is beginning to look like there is no rational reason for this, other than a bug in the Windows Explorer in Windows 7 and 2008 R2. As soon as there is any certainty on this, I will note it here in this blog. The examples above are harmless tests performed on my own laptop. If you create the same set of folders and groups, and configure exactly the same permissions, you will see exactly the same behavior. Be sure to use Windows 7 or Windows 2008 R2.   Initially the problem arose at a customer site where move operations by users on data on the fileserver would lead to unpredictable results. This resulted in the wrong set of people having access permissions on data that they should not have permissions to. Of course this is something we want to prevent at all costs.   I have also done several tests with move operations using the move command in a cmd prompt. That way the behavior is always consistent. The inconsistent behavior is only exposed when using the Windows Explorer to initiate the move operation, and only when using Windows 7 or Windows 2008 R2 systems. It is evident that this behavior changes when the ACL of a folder has been changed, for example by adding an extra entry. The reason for this remains uncertain though. To be continued…. A Dutch version of this post can be found at: http://blogs.platani.nl/?p=612
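    If you want to reproduce the tests yourself, the quickest way to compare the resulting permissions is to dump the ACL of the file before and after the move from a cmd prompt, for example with icacls. The folder and file names below match Example 1; adjust them to your own setup.

    icacls "C:\Folder A\test file 1.txt"
    rem ...now move the file to Folder B using Windows Explorer...
    icacls "C:\Folder B\test file 1.txt"
    rem If the second listing still shows the Group A entry instead of the inherited
    rem Group B entries, you are looking at the inconsistent (incorrect) behavior from Example 1.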

    Read the article

  • Create Orchard Module in a Separate Project

    - by Steve Michelotti
    The Orchard Project is a new OSS Microsoft project that is being developed on CodePlex. From the Orchard home page on CodePlex, it states "Orchard project is focused on delivering a .NET-based CMS application that will allow users to rapidly create content-driven Websites, and an extensibility framework that will allow developers and customizers to provide additional functionality through modules and themes." The Orchard Project site contains additional information including documentation and walkthroughs. The ability to create a composite solution based on a collection of modules is a compelling feature. In Orchard, these modules can just be created as simple MVC Areas or they can also be created inside of stand-alone web application projects.  The walkthrough for writing an Orchard module that is available on the Orchard site uses a simple Area that is created inside of the host application. It is based on the Orchard MIX presentation. This walkthrough does an effective job introducing various Orchard concepts such as hooking into the navigation system, theme/layout system, content types, and more.  However, creating an Orchard module in a separate project does not seem to be concisely documented anywhere. Orchard ships with several modules OOTB that are in separate assemblies – but again, it's not well documented how to get started building one from scratch. The following are the steps I took to successfully get an Orchard module in a separate project up and running. Step 1 – Download the OrchardIIS.zip file from the Orchard Release page. Unzip and open up the solution. Step 2 – Add your project to the solution. I named my project "Orchard.Widget" and used an "MVC 2 Empty Web Application" project type. Make sure you put the physical path inside the "Modules" sub-folder of the main project, like this: At this point the solution should look like: Step 3 – Add assembly references to Orchard.dll and Orchard.Core.dll. Step 4 – Add a controller and view.  I'll just create a Hello World controller and view. Notice I created the view as a partial view (*.ascx). Also add the [Themed] attribute to the top of the HomeController class just like the normal Orchard walkthrough shows it. Step 5 – Add Module.txt to the project root. This is a very important step. Orchard will not recognize your module without this text file present.  It can contain just the name of your module: name: Widget Step 6 – Add Routes.cs. Notice I've given an area name of "Orchard.Widget" on lines 26 and 33. 1: using System; 2: using System.Collections.Generic; 3: using System.Web.Mvc; 4: using System.Web.Routing; 5: using Orchard.Mvc.Routes; 6:   7: namespace Orchard.Widget 8: { 9: public class Routes : IRouteProvider 10: { 11: public void GetRoutes(ICollection<RouteDescriptor> routes) 12: { 13: foreach (var routeDescriptor in GetRoutes()) 14: { 15: routes.Add(routeDescriptor); 16: } 17: } 18:   19: public IEnumerable<RouteDescriptor> GetRoutes() 20: { 21: return new[] { 22: new RouteDescriptor { 23: Route = new Route( 24: "Widget/{controller}/{action}/{id}", 25: new RouteValueDictionary { 26: {"area", "Orchard.Widget"}, 27: {"controller", "Home"}, 28: {"action", "Index"}, 29: {"id", ""} 30: }, 31: new RouteValueDictionary(), 32: new RouteValueDictionary { 33: {"area", "Orchard.Widget"} 34: }, 35: new MvcRouteHandler()) 36: } 37: }; 38: } 39: } 40: } Step 7 – Add MainMenu.cs. This will make sure that an item appears in the main menu called "Widget" which points to the module. 
1: using System; 2: using Orchard.UI.Navigation; 3:   4: namespace Orchard.Widget 5: { 6: public class MainMenu : INavigationProvider 7: { 8: public void GetNavigation(NavigationBuilder builder) 9: { 10: builder.Add(menu => menu.Add("Widget", item => item.Action("Index", "Home", new 11: { 12: area = "Orchard.Widget" 13: }))); 14: } 15:   16: public string MenuName 17: { 18: get { return "main"; } 19: } 20: } 21: } Step 8 – Clean up web.config. By default Visual Studio adds numerous sections to the web.config. The sections that can be removed are: appSettings, connectionStrings, authentication, membership, profile, and roleManager. Step 9 – Delete Global.asax. This project will ultimately be running from inside the Orchard host so this "sub-site" should not have its own Global.asax.   Now you're ready to run the app.  When you first run it, the "Widget" menu item will appear in the main menu because of the MainMenu.cs file we added: We can then click the "Widget" link in the main menu to send us over to our view:   Packaging From start to finish, it's a relatively painless experience but it could be better. For example, a Visual Studio project template that encapsulates aspects from this blog post would definitely make it a lot easier to get up and running with creating an Orchard module.  Another aspect I found interesting is that if you read the first paragraph of the walkthrough, it says, "You can also develop modules as separate projects, to be packaged and shared with other users of Orchard CMS (the packaging story is still to be defined, along with marketplaces for sharing modules)." In particular, I will be extremely curious to see how the "packaging story" evolves. The first thing that comes to mind for me is: what if we explored MvcContrib Portable Areas as a potential mechanism for this packaging? This would certainly make things easy since all artifacts (aspx, ascx, images, css, javascript) are all wrapped up into a single assembly. Granted, Orchard does have its own infrastructure for layouts and themes but it seems like integrating portable areas into this pipeline would not be a difficult undertaking. Maybe that'll be the next research task. :)

    Read the article

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently) I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info) and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it. 1. The majority of our speaker/event funding is going into the Regional Speakers Bureau.  The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high impact events and areas that need some community building love from INETA. These will be identified and handled on a case by case basis, and may include more than just user group events. 2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can. 3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words.  Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you. 4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this: (all distances listed are based on a round trip) Distances < 120 miles = $0 121 miles - 240 miles = $50 (effectively 1 to 2 hours, each way) 241 miles - 360 miles = $100 (effectively 2 to 3 hours, each way) 361 miles - 480 miles = $200 (effectively 3 to 4 hours, each way) For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case by case basis. 5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple 1 line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well. 6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution to every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do. 7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them: 1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance. 
2) Anyone can register and use the website to line up speaking engagements with user groups, however you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the User Group leaders after the meeting has taken place. 8. Involvement by the User Group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding. 9. What about Canada? We're definitely working on that. Unfortunately nothing new to report on that front, other than to say that we're trying. So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here. Thanks, Chris G. Williams INETA Board of Directors

    Read the article

  • Html.RenderAction Failed when Validation Failed

    - by Shaun
    The RenderAction method was introduced in the MVC Futures assembly when ASP.NET MVC 1.0 was released and was then finally announced along with ASP.NET MVC 2.0. Similar to RenderPartial, RenderAction can display HTML markup, defined in a partial view, in any parent view. But RenderAction gives us the ability to populate the data from an action which may be different from the action populating the main view. For example, in Home/Index.aspx we can invoke Html.RenderPartial("MyPartialView"), but the data of MyPartialView must be populated by the Index action of the Home controller. If we need MyPartialView to be shown in Product/Create.aspx, we have to copy (or invoke) the relevant code from the Index action in the Home controller to the Create action in the Product controller, which is painful. But if we are using Html.RenderAction we can tell ASP.NET MVC from which action/controller the data should be populated. That way, in the Home/Index.aspx and Product/Create.aspx views we just need to call Html.RenderAction("CreateMyPartialView", "MyPartialView") so it will invoke the CreateMyPartialView action in the MyPartialView controller regardless of the main view. But in my current project we found a bug when I implemented a RenderAction method in the master page to show something that needs to connect to the backend data center, and the validation logic failed on some pages. I created a sample application below.   Demo application I created an ASP.NET MVC 2 application where I need to display the current date and time on the master page. I created an action in the Home controller named TimeSlot and stored the current date into ViewData. This method was marked as HttpGet as it just retrieves some data instead of changing anything. 1: [HttpGet] 2: public ActionResult TimeSlot() 3: { 4: ViewData["timeslot"] = DateTime.Now; 5: return View("TimeSlot"); 6: } Next, I created a partial view under the Shared folder to display the date and time string. 1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<dynamic>" %> 2:  3: <span>Now: <%: ViewData["timeslot"].ToString() %></span> Then in the master page I used Html.RenderAction to display it in front of the logon link. 1: <div id="logindisplay"> 2: <% Html.RenderAction("TimeSlot", "Home"); %> 3:  4: <% Html.RenderPartial("LogOnUserControl"); %> 5: </div> It's fairly simple and works well when I navigate to any page. But when I moved to the logon page and clicked the LogOn button without entering anything for the username and password, the validation failed and my website crashed with the beautiful yellow page. (I really like its color style and fonts…)   How ASP.NET MVC executes Html.RenderAction In this example all other pages were rendered successfully, which means ASP.NET MVC found the TimeSlot action under the Home controller except in this situation. The only difference is that when I clicked the LogOn button the browser sent an HttpPost request to the server. Is that the reason for this bug? I created another action in the Home controller with the same action name but for HttpPost. 1: [HttpPost] 2: [ActionName("TimeSlot")] 3: public ActionResult TimeSlot(object dummy) 4: { 5: return TimeSlot(); 6: } Or, I can use the AcceptVerbsAttribute on the TimeSlot action to allow both HttpGet and HttpPost. 1: [AcceptVerbs("GET", "POST")] 2: public ActionResult TimeSlot() 3: { 4: ViewData["timeslot"] = DateTime.Now; 5: return View("TimeSlot"); 6: } Then I repeated what I did before and this time it worked well. 
Why do we need the action for HttpPost here when it's just retrieving data? That is because of how ASP.NET MVC executes the RenderAction method. In the source code of ASP.NET MVC we can see that when performing the RenderAction, ASP.NET MVC creates a RequestContext instance from the current RequestContext and creates a ChildActionMvcHandler instance, which inherits from the MvcHandler class. Then ASP.NET MVC processes the handler through the HttpContext.Server.Execute method. That means it performs the action as a stand-alone request and flushes the result into the TextWriter which is being used to render the current page. Since the request was an HttpPost when I clicked the LogOn button, ASP.NET MVC would, when processing the ChildActionMvcHandler, look for an action that allows the current request method, which is HttpPost. Our TimeSlot method, which only allows HttpGet, would therefore not be matched.   Summary In this post I introduced a bug in the project I'm currently developing regarding the new Html.RenderAction method provided with ASP.NET MVC 2 when processing an HttpPost request. In the ASP.NET MVC world the underlying HTTP information becomes more important than in the ASP.NET WebForms world. We need to pay more attention to which kind of request is currently being created and how ASP.NET MVC processes it.   Hope this helps, Shaun   All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What I've Gained from being a Presenter at Tech Events

    - by MOSSLover
    Originally posted on: http://geekswithblogs.net/MOSSLover/archive/2014/06/12/what-ive-gained-from-being-a-presenter-at-tech-events.aspx I know I've been failing at blogging lately.  As I've said before, life happens and it gets in the way of the best laid plans.  I thought about creating some type of watch with an app that basically dictates to WordPress using Dragon NaturallySpeaking, but alas there's no time and the write processing capabilities just don't exist yet. So to get to my point, Alison Gianotto created this blog post: http://www.snipe.net/2014/06/why-you-should-stop-stalling-and-start-presenting/.  I like the message she has stated in this post and I want to share something personal. It was around 2007 that I was seriously looking into leaving the technology field altogether and going back to school.  I was calling places like Washington University in Saint Louis and the University of Missouri Kansas City asking them how I could go about getting into some type of post graduate medical school program.  My entire high school career was based on Medical Explorers and somehow becoming a doctor.  I did not want to take my hobby and continue using it as a career mechanism.  I was unhappy, but I didn't realize why I was unhappy at the time.  It was really a lot of bad things involving a lack of self confidence and self esteem.  Overall I was not in a good place and it took me until 2011 to realize that I still was not in a good place in life. So in about April 2007 or so I started this blog that you guys have been reading or occasionally read.  I kind of started passively stalking people by reading their blogs in the SharePoint and .Net communities.  I also started listening to .Net Rocks & watching videos on their corresponding training for SharePoint, WCF, WF, and a bunch of other technologies.  I wanted more knowledge, so someone suggested I go to a user group.  I've told this story before about how I met Jeff Julian & John Alexander, so at this point I will spare you the details.  You know how I got to my first user group presentation and how I started getting involved with events, so I'll also spare those details. The point I want to touch on is that I went out and started speaking, and that path helped me gain the self confidence and self respect I needed.  When I first moved to NYC I couldn't even ride a subway by myself or walk alone without getting lost.  Now I feel like I can go out and solve any problem someone throws at me.  So you see, what Alison states in her blog post is true and I am a great example of that point.  I stood in front of 800 or so people at the SharePoint Conference in 2011 and spoke about a topic.  In 2007 I would have hidden or stuttered the entire time.  I have now spoken at over 70 events and user groups.  I am a top 25 influencer in my technology.  I was a Most Valuable Professional for years in a row in Microsoft SharePoint.  People are constantly trying to get my time, so that they can pick my brain for solutions and other life problems.  I went from maybe five or six friends to hundreds of friends in various cities across the globe.  I'm not saying it's instant fame or that it doesn't take a ton of work, but I have never once looked back at my life and regretted the choice I made in 2007.  It has led me to a lot of other things in my life, including more positivity and happiness.  If anyone ever wants to contact me and pick my brain on a presentation, go ahead.  
If you want me to help you find the best meetup that suits you for that presentation, I can try to do that too (I might be a little more helpful in the Microsoft or iOS arenas though).  The best thing I can say is don't be scared, just do it.  If you need an audience I can try to pencil it into my schedule.  I can't promise anything, but if you are in NYC I can at least try.

    Read the article

  • Use WLST to Delete All JMS Messages From a Destination

    - by james.bayer
    I got a question today about whether WebLogic Server has any tools to delete all messages from a JMS Queue.  It just so happens that the WLS Console has this capability already.  It’s available on the screen after the “Show Messages” button is clicked on a destination’s Monitoring tab as seen in the screen shot below. The console is great for something ad-hoc, but what if I want to automate this?  Well it just so happens that the console is just a WebLogic application layered on top of the JMX Management interface.  If you look at the MBean Reference, you’ll find a JMSDestinationRuntimeMBean that includes the operation deleteMessages that takes a JMS Message Selector as an argument.  If you pass an empty string, that is essentially a wild card that matches all messages. Coding a stand-alone JMX client for this is kind of lame, so let’s do something more suitable to scripting.  In addition to the console, the WebLogic Scripting Tool (WLST), based on Jython, is another way to browse and invoke MBeans, so an equivalent interactive shell session to delete messages from a destination would look like this: D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain\bin>setDomainEnv.cmd D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain>java weblogic.WLST   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   wls:/offline> connect('weblogic','welcome1','t3://localhost:7001') Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'hotspot_domain'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   wls:/hotspot_domain/serverConfig> serverRuntime() Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. 
For more help, use help(serverRuntime)   wls:/hotspot_domain/serverRuntime> cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0') wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> ls() dr-- DurableSubscribers   -r-- BytesCurrentCount 0 -r-- BytesHighCount 174620 -r-- BytesPendingCount 0 -r-- BytesReceivedCount 253548 -r-- BytesThresholdTime 0 -r-- ConsumersCurrentCount 0 -r-- ConsumersHighCount 0 -r-- ConsumersTotalCount 0 -r-- ConsumptionPaused false -r-- ConsumptionPausedState Consumption-Enabled -r-- DestinationInfo javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=DestinationInfo,items=((itemName=ApplicationName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ModuleName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName openmbean.SimpleType(name=java.lang.Boolean)),(itemName=SerializedDestination,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ServerName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=Topic,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=VersionNumber,itemType=javax.management.op ule-0!Queue-0, Queue=true, SerializedDestination=rO0ABXNyACN3ZWJsb2dpYy5qbXMuY29tbW9uLkRlc3RpbmF0aW9uSW1wbFSmyJ1qZfv8DAAAeHB3kLZBABZTeXN0ZW1Nb2R1bGUtMCFRdWV1ZS0wAAtKTVNTZXJ2ZXItMAAOU3lzdGVtTW9kdWxlLTABAANBbGwCAlb6IS6T5qL/AAAACgEAC0FkbWluU2VydmVyAC2EGgJW+iEuk+ai/wAAAAsBAAtBZG1pblNlcnZlcgAthBoAAQAQX1dMU19BZG1pblNlcnZlcng=, ServerName=JMSServer-0, Topic=false, VersionNumber=1}) -r-- DestinationType Queue -r-- DurableSubscribers null -r-- InsertionPaused false -r-- InsertionPausedState Insertion-Enabled -r-- MessagesCurrentCount 0 -r-- MessagesDeletedCurrentCount 3 -r-- MessagesHighCount 2 -r-- MessagesMovedCurrentCount 0 -r-- MessagesPendingCount 0 -r-- MessagesReceivedCount 3 -r-- MessagesThresholdTime 0 -r-- Name SystemModule-0!Queue-0 -r-- Paused false -r-- ProductionPaused false -r-- ProductionPausedState Production-Enabled -r-- State advertised_in_cluster_jndi -r-- Type JMSDestinationRuntime   -r-x closeCursor Void : String(cursorHandle) -r-x deleteMessages Integer : String(selector) -r-x getCursorEndPosition Long : String(cursorHandle) -r-x getCursorSize Long : String(cursorHandle) -r-x getCursorStartPosition Long : String(cursorHandle) -r-x getItems javax.management.openmbean.CompositeData[] : String(cursorHandle),Long(start),Integer(count) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),Long(messageHandle) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),String(messageID) -r-x getMessage javax.management.openmbean.CompositeData : String(messageID) -r-x getMessages String : String(selector),Integer(timeout) -r-x getMessages String : String(selector),Integer(timeout),Integer(state) -r-x getNext javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x getPrevious javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x importMessages Void : javax.management.openmbean.CompositeData[],Boolean(replaceOnly) -r-x moveMessages Integer : String(java.lang.String),javax.management.openmbean.CompositeData,Integer(java.lang.Integer) -r-x moveMessages Integer : String(selector),javax.management.openmbean.CompositeData -r-x pause Void : -r-x pauseConsumption 
Void : -r-x pauseInsertion Void : -r-x pauseProduction Void : -r-x preDeregister Void : -r-x resume Void : -r-x resumeConsumption Void : -r-x resumeInsertion Void : -r-x resumeProduction Void : -r-x sort Long : String(cursorHandle),Long(start),String[](fields),Boolean[](ascending)   wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> cmo.deleteMessages('') 2 where the domain name is “hotspot_domain”, the JMS Server name is “JMSServer-0”, the Queue name is “Queue-0” and the System Module is named “SystemModule-0”.  To invoke the operation, I use the “cmo” object, which is the “Current Management Object” that represents the currently navigated to MBean.  The 2 indicates that two messages were deleted.  Combining this WLST code with a recent post by my colleague Steve that shows you how to use an encrypted file to store the authentication credentials, you could easily turn this into a secure automated script.  If you need help with that step, a long while back I blogged about some WLST basics.  Happy scripting.
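    To turn the interactive session above into something that can run unattended, the same calls can be dropped into a small WLST script and run with java weblogic.WLST purge_queue.py. The sketch below simply reuses the names from the example above, with a hard-coded password purely for illustration; combine it with the encrypted credential approach mentioned earlier for anything real.

    # purge_queue.py - delete every message from SystemModule-0!Queue-0 (illustrative sketch)
    connect('weblogic', 'welcome1', 't3://localhost:7001')
    serverRuntime()
    cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0')
    # An empty selector is effectively a wildcard, so every message on the destination is removed.
    deleted = cmo.deleteMessages('')
    print 'Deleted %d messages' % deleted
    disconnect()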

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >