Search Results

Search found 3150 results on 126 pages for 'administrator'.


  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of a business initiative that drives the transformational journey from legacy IT to an enterprise private cloud - a journey that leads to an agile, self-service and efficient infrastructure with reduced complexity, and enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank's Lending Business Segment, which comprises roughly 50% of the Bank's top line. The Bank's Lending Business is always under pressure to launch "New Schemes" to compete and stay ahead in this segment, and IT has to keep up with this challenging business requirement. Lending-related applications are highly dynamic and go through constant changes, and every change in each related application, however minor, must be thoroughly UAT tested before it is certified for production rollout. This puts constant pressure on IT to rapidly provision UAT databases on an ongoing basis to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent database cloud deployment. "Agility" in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why the Database-as-a-Service (DBaaS) model was the need of the hour in this case:
    - An average of 3 days to provision a UAT database for the Loan Management Application
    - Silo'ed UAT environments with average 30% utilization
    - Compliance requirements consuming UAT testing resources
    - DBA activities leading to $$ paid to the SI for provisioning databases manually
    - Overhead in managing configuration drift between production and test environments
    - Rollout impact/delay on new business initiatives
    The private database cloud implementation progressed through 4 fundamental phases - Standardization, Consolidation, Automation and Optimization of the UAT infrastructure. Project scoping was carried out, and end users and stakeholders were engaged early on, right from the planning phase and through all phases of implementation. The Standardization and Consolidation phases involved multiple iterations of planning to first standardize on infrastructure, database versions, patch levels, configuration, IT processes etc., followed by a database-level consolidation project onto the Exadata platform. It was also decided to cover the existing AIX UAT DB landscape, and the EM 12c DBaaS solution, being platform agnostic, supported this model well. The Automation and Optimization phases provided the necessary agility, self-service and efficiency, and this was made possible via EM 12c DBaaS. The EM 12c DBaaS Self-Service/SSA Portal was set up with the required zones, quotas, service templates and charge plan defined. Two zones were implemented: an Exadata zone, primarily for UAT and benchmark testing of databases running on the Exadata platform, and an AIX zone to cover the other databases running on AIX. Metering and Chargeback/Showback capabilities provided business and IT the framework for cloud optimization and also visibility into cloud usage.
    More details on the UAT cloud implementation, the related building blocks and the EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key benefits achieved from the UAT cloud initiative are:
    - New business initiatives can be easily launched due to rapid provisioning of UAT databases [~3 hours]
    - Drastically cut down $$ paid to the SI for DBA activities, thanks to self-service
    - Effective usage of infrastructure leading to better ROI
    - Empowering consumers to provision databases using self-service
    - Control on project schedule, with the DB end date aligned to the project plan submitted during provisioning
    - Databases provisioned through self-service are monitored in EM and auto-configured for alerts and KPIs
    - Regulatory requirements on databases do not impact existing projects in the queue
    The table below shows the typical list of activities and tasks involved when an end user requests a UAT database. The EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours, and this timing also includes provisioning time for databases with production-scale data (ranging from 250 GB to 2 TB of data). And it's not just about time to provision: this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources, and IT's role has shifted to that of an enabler of strategic services instead of an administrator of all user requests. The strong collaboration between IT and the business community, right from planning to implementation to go-live, has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here, and this cloud is accompanied by rain (business benefits) as well! For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with OpenSolaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11 when it became available. I am not a big fan of upgrades, as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less; if only I had read the documentation a bit better in advance. For a number of reasons I prefer a dual boot system. The most important one is that, especially with mobile devices, you often run into network problems, and you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant - and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old openSUSE installation that had not been booted for a while. I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris into the MBR and onto the root partition. But then, how to enable dual boot with the two OSes? Searching the web, one mainly finds information about dual boot of Linux and Linux, or Linux and Windows. I do not want to explain which wrong configurations I worked through; I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloader from and to both OSes, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still, there were some hurdles to jump over:
    - Ubuntu did not like getting its boot blocks placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you can get that done.
    - Ubuntu needs an active partition; that was easy to achieve.
    - grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them.
    BTW: The usual disclaimer is valid. There is no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this. So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubuntu, things initially were a bit more complicated, as I did not know how to boot it, and the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt and wrote the boot blocks into the partition. This is something that does not seem to be recommended; at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3 So, I now had a system with the Solaris boot loader in the MBR, Solaris-specific boot blocks on the Solaris root partition and Ubuntu-specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done.
    Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!!!!):
        title Ubuntu 11.10
        root (hd0,2)
        makeactive
        chainloader +1
        boot
    The Ubuntu root file system sits on the third partition (/dev/sda3). Ubuntu: I have added the following lines to /etc/grub.d/40_custom:
        menuentry "Solaris 11/11" {
            set root=(hd0,1)
            chainloader +1
        }
    Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me). As debugging grub is a bit complicated, I used the grub CLI to perform some tests, and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment. I hope this is helpful for some of the readers. Hartmut
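    One step worth spelling out for Ubuntu readers: entries added to /etc/grub.d/40_custom only appear in the boot menu after grub.cfg is regenerated. A minimal sketch of that follow-up, assuming a stock Ubuntu 11.10 grub2 setup and run from the installed Ubuntu system (not the live CD):

        # After adding the "Solaris 11/11" menuentry to /etc/grub.d/40_custom,
        # regenerate /boot/grub/grub.cfg so the new entry shows up at boot
        sudo update-grub

        # Optional sanity check: confirm the entry made it into the generated config
        grep -A3 "Solaris 11/11" /boot/grub/grub.cfg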

    Read the article

  • Calling All Agile Customers-Share Your Stories at the Upcoming PLM Summit

    - by Terri Hiskey
    Now that we've closed the door on another Oracle OpenWorld, planning is in full swing for the next PLM Summit, taking place February 4-6, 2013 in San Francisco, in conjunction with the Oracle Value Chain Summit. This event is a must-attend for all Agile PLM customers. We will be holding five tracks with over forty Agile PLM-focused sessions covering a range of topics and industries. If you'd like to be notified once registration is live for this event, be sure to sign up at www.oracle.com/goto/vcs. CALL FOR PRESENTATIONS: We are looking for some fresh, new customer stories to share with attendees. Read below for descriptions of the five tracks, and the suggested topics that we'd like to hear from customers. If you are interested in presenting at the PLM Summit (and getting a FREE pass to attend if your presentation is accepted!) send me an email at terri.hiskey-AT-oracle.com with: Your proposed session title and the track your session fits into 3-5 bullets of takeaways that attendees will get from your presentation Your complete contact information including name, title, company, telephone number and email The deadline for this call for presentations is Thursday, November 15, so get your submission in soon! PLM Track #1:  Product Insights and Best Practices This track will provide executive attendees and line of business managers with an overview of how Agile PLM has been deployed and used at customers to enable and manage critical product-related business processes including enterprise quality and supplier management, compliance, product cost management, portfolio management, commercialization and software lifecycle management. These sessions will also provide details around how to manage the development and rollout of the solutions and how to achieve and track value. Possible session topics: Software Lifecycle Management Enterprise Quality Management New Product Development Integrated Business Planning ECO effectivity planning Rapid Commercialization             Manage the Design to Release Process for Complex Configured Products PLM for Life Sciences Companies I (Compliant Data Set) PLM for Life Sciences Companies II (eMDR, UDI) Discrete CPG – Private Label Mgmt Cost Management and Strategic Sourcing IP Mgmt in the Semiconductor Industry Implementing the Enterprise Training Record using Agile PLM PLM Track #2: Product Deep Dives & Demos This track is aimed at line of business  and IT managers who would like to understand the benefits of expanding their PLM footprint. 
The sessions in this track will provide attendees with an up-close and in-depth look Agile PLM’s newer and exciting applications, including analytics and innovation management, and will detail features and functionality that are available in the latest version of Agile PLM Possible session topics: Oracle Product Lifecycle Analytics Integrating PLM with Engineering and Supply Chain Systems Streamline PLM Design to Manufacturing Processes with AutoVue Visualization Solutions         Achieve Environmental Compliance (REACH and ROHS) with Agile Product Governance & Compliance PIM Deep Dive Achieving Integrated Change Control with Agile PLM and E-Business Suite Deploying PLM at Small and Midsize Enterprises Enhancing Oracle PQM w/APQP and 8D functionality Advanced Roles and Privileges – Enabling ITAR Model Unit Effectivity Implementing REACH with 9.3.2 Deploying Job Functions, Functional Teams in 9.3.2 to Improve Your Approval Matrix PLM Track #3: Administration & Integrations This track will provide sessions for Agile administrators, managers and daily Agile PLM users who are preparing to upgrade or looking to extend the use of their current PLM implementation through AIA and process extensions. It will include deeper conversation about Agile PLM features and best practices on managing an Agile PLM infrastructure. Possible session topics: Expand the Value of your Agile Investment with Innovative Process Extension Ideas Ensuring Implementation & Upgrade Success Ensure the Integrity and Accuracy of Product Data Across the Enterprise              Maximize the Benefits of an Integrated Architecture with AIA Integrating your PLM Implementation with ERP               Infrastructure Optimization Expanding Your PLM Implementation PLM Administrator Open Forum Q&A/Discussion FDA Validation Best Practices Best Practices for Managing a large Agile Deployment: Clustering, Load Balancing and Firewalls PLM Track #4: Agile PLM for Process This track is aimed at attendees interested in or currently using Agile PLM for Process. The sessions in this track will go over new features and functionality available in the newest version of PLM for Process and will give attendees an overview on how PLM for Process is being used to manage critical business processes such as formulation, recipe and specification management Possible session topics: PLM for Process Strategy, Roadmap and Update New Product Development and Introduction Effective Product Supplier Collaboration             Leverage Agile Formulation and Compliance to Manage Cost, Compliance, Quality, Labeling and Nutrition Menu Management Innovation Data Management Food Safety/ Introduction of P4P Quality Mgmt PLM Track #5: Agile PLM and Innovation Management This track consists of five sessions, and is for attendees interested in learning more about Oracle’s Agile Innovation Management, an exciting new addition to the Agile PLM application family that redefines the industry’s scope of product lifecycle management. Oracle’s innovation solutions enable companies to collaborate in a focused way among various functional groups (marketing, sales, operations, engineering/R&D and sourcing), combining insights of customer needs/requirements, competition, available technologies, alternate design scenarios and portfolio constraints to deliver what customers truly value. The results are better products, higher margins, greater efficiencies, more satisfied customers and the increased ability to continuously innovate. 
Possible session topics: Product Innovation Management Solution Overview Product Requirements & Ideation Management Concept Design Management Product Lifecycle Portfolio Management Innovation as a Competitive Differentiator

    Read the article

  • [GEEK SCHOOL] Network Security 3: Windows Defender and a Malware-Free System

    - by Ciprian Rusen
    In this second lesson we are going to talk about one of the most confusing security products that are bundled with Windows: Windows Defender. In the past, this product has had a bad reputation and for good reason – it was very limited in its capacity to protect your computer from real-world malware. However, the latest version included in Windows 8.x operating systems is much different than in the past and it provides real protection to its users. The nice thing about Windows Defender in its current incarnation, is that it protects your system from the start, so there are never gaps in coverage. We will start this lesson by explaining what Windows Defender is in Windows 7 and Vista versus what it is in Windows 8, and what product to use if you are using an earlier version. We next will explore how to use Windows Defender, how to improve its default settings, and how to deal with the alerts that it displays. As you will see, Windows Defender will have you using its list of quarantined items a lot more often than other security products. This is why we will explain in detail how to work with it and remove malware for good or restore those items that are only false alarms. Lastly, you will learn how to turn off Windows Defender if you no longer want to use it and you prefer a third-party security product in its place and then how to enable it back, if you have changed your mind about using it. Upon completion, you should have a thorough understanding of your system’s default anti-malware options, or how to protect your system expeditiously. What is Windows Defender? Unfortunately there is no one clear answer to this question because of the confusing way Microsoft has chosen to name its security products. Windows Defender is a different product, depending on the Windows operating system you are using. If you use Windows Vista or Windows 7, then Windows Defender is a security tool that protects your computer from spyware. This but one form of malware made out of tools and applications that monitor your movements on the Internet or the activities you make on your computer. Spyware tends to send the information that is collected to a remote server and it is later used in all kinds of malicious purposes, from displaying advertising you don’t want, to using your personal data, etc. However, there are many other types of malware on the Internet and this version of Windows Defender is not able to protect users from any of them. That’s why, if you are using Windows 7 or earlier, we strongly recommend that you disable Windows Defender and install a more complete security product like Microsoft Security Essentials, or third-party security products from specialized security vendors. If you use Windows 8.x operating systems, then Windows Defender is the same thing as Microsoft Security Essentials: a decent security product that protects your computer in-real time from viruses and spyware. The fact that this product protects your computer also from viruses, not just from spyware, makes a huge difference. If you don’t want to pay for security products, Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) are good alternatives. Windows Defender in Windows 8.x and Microsoft Security Essentials are the same product, only their name is different. In this lesson, we will use the Windows Defender version from Windows 8.x but our instructions apply also to Microsoft Security Essentials (MSE) in Windows 7 and Windows Vista. 
    If you want to download Microsoft Security Essentials and try it out, we recommend using this page: Download Microsoft Security Essentials. There you will find both 32-bit and 64-bit editions of this product, as well as versions in multiple languages. How to Use and Configure Windows Defender Windows Defender (MSE) is very easy to use. To start, search for “defender” on the Windows 8.x Start screen and click or tap the “Windows Defender” search result. In Windows 7, search for “security” in the Start Menu search box and click “Microsoft Security Essentials”. Windows Defender has four tabs which give you access to the following tools and options:
    - Home – here you can view the security status of your system. If everything is alright, it will be colored green. If there are some warnings to consider, it will be colored yellow, and if there are threats that must be dealt with, everything will be colored red. On the right side of the “Home” tab you will find options for scanning your computer for viruses and spyware. At the bottom of the tab you will find information about when the last scan was performed and what type of scan it was.
    - Update – here you will find information on whether this product is up-to-date. You will learn when it was last updated and the versions of the definitions it is using. You can also trigger a manual update.
    - History – here you can access quarantined items, see which items you’ve allowed to run on your PC even if they were identified as malware by Windows Defender, and view a complete list of all the malicious items Windows Defender has detected on your PC. In order to access all these lists and work with them, you need to be signed in as an administrator.
    - Settings – this is the tab where you can turn on the real-time protection service, exclude files, file types, processes, and locations from its scans, as well as access a couple of more advanced settings. The only difference between Windows Defender in Windows 8.x and Microsoft Security Essentials (in Windows 7 or earlier) is that, in the “Settings” tab, Microsoft Security Essentials allows you to set when to run scheduled scans, while Windows Defender lacks this option.
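    A small addition beyond the lesson above: Windows Defender (and Microsoft Security Essentials) also includes a command-line scanner, MpCmdRun.exe, which is handy for scripted or scheduled scans. A rough sketch assuming the default install path on Windows 8.x (on Windows 7, MSE keeps the same tool under the Microsoft Security Client folder); run the tool with -h to confirm the options available on your build:

        :: Quick scan from an elevated command prompt
        "%ProgramFiles%\Windows Defender\MpCmdRun.exe" -Scan -ScanType 1

        :: Full scan
        "%ProgramFiles%\Windows Defender\MpCmdRun.exe" -Scan -ScanType 2

        :: Check for and apply the latest definition updates
        "%ProgramFiles%\Windows Defender\MpCmdRun.exe" -SignatureUpdate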

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X. These are all powerful multi-user operating systems with similar folder and file permission systems. Windows On Windows, the “Public” user’s folders are accessible to all users. You’ll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it’s a good way to share music, videos, and other types of files between users on the same computer. Windows even adds these folders to each user’s libraries by default. For example, a user’s Music library contains the user’s music folder under C:\Users\NAME\as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public — just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you’ll have to unhide them to do this. These Public folders can also be used to share folders publically on the local network. You’ll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Control Panel. You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder’s permissions and make it accessible to different user accounts. You’ll need administrator access to do this. Linux This is a bit more complicated on Linux, as typical Linux distributions don’t come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network. You can use Linux’s permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager. It should be similar for other desktop environments, too. Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give “Others” the “Create and delete files” permission. Click the Change Permissions for Enclosed Files button and give “Others” the “Read and write” and “Create and Delete Files” permissions. Other users on the same computer will then have read and write access to your folder. They’ll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it. Mac OS X Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It’s located at /Users/Shared. To access it, open the Finder and click Go > Computer. Navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac. These tricks are useful if you’re sharing a computer with other people and you all have your own user accounts — maybe your kids have their own limited accounts. 
You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.
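    For readers who prefer the terminal on Linux, the Nautilus permission changes described above map onto an ordinary chmod call. A minimal sketch, assuming a hypothetical ~/Shared folder and that other users can already traverse your home directory (the Ubuntu default):

        # Give other local users read/write access to the folder and everything in it;
        # the capital X adds execute (traversal) only to directories and files that are
        # already executable, mirroring "Change Permissions for Enclosed Files"
        chmod -R o+rwX ~/Shared

    Note that files created later still get their owner's default permissions, so the command may need re-running (or a setgid directory / ACLs) if new files must stay writable by everyone.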

    Read the article

  • Oracle Social Network Developer Challenge: Bezzotech

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. I’ve covered all the entries we had for the Oracle Social Network Developer Challenge, the winners, Dimitri and Martin, HarQen, TEAM Informatics and John Sim from Fishbowl Solutions, and today, I’m giving you bonus coverage. Friend of the ‘Lab, Bex Huff (@bex) from Bezzotech (@bezzotech), had an interesting OpenWorld. He rebounded from an allergic reaction to finish his entry, Honey Badger, only to have his other OpenWorld commitments make him unable to present his work. Still, he did a bunch of work, and I want to make sure everyone knows about the Honey Badger. If you’re wondering about the name, it’s a meme; “honey badger don’t care.” Bex tackled a common problem with social tools by adding game mechanics to create an incentive for people to keep their profiles updated. He used a Hot-or-Not style comparison app that poses expertise questions and awards a badge to the winner. Questions are based on whatever attributes the business wants to emphasize. The goal is to find the mavens in an organization and give them praise and recognition, ideally creating an incentive for everyone to raise their games. In his own words: There is a real information quality problem in social networks. In last year’s keynote, Larry Ellison demonstrated how to use the social network to track down resources that have the skill sets needed for specific projects. But how well would that work in real life? People usually fill in the basic profile information, but they rarely update their profiles with the latest news items, projects, customers, or skills. It’s a pain. Or, put another way, when was the last time you updated your LinkedIn profile? Enter the Honey Badger! This is an example of a comparator app that gamifies the way people keep their profiles updated, which ensures higher quality data in the social network. An administrator comes up with a series of important questions: Who is a better communicator? Who is a better Java programmer? Who is a better team player? And people would have a space in their profile to give a justification as to why they have these skills. The second part of the app is the comparator. It randomly shows two people, their names, and their justification for why they have these skills. You click on one of them to “vote” for them, then on the next page you see the results from the previous match and get 2 new people to vote on. Anybody with a winning score wins a “Honey Badge” to be displayed on their profile page, which proudly states that their peers agree that this person has those skills. Once a badge is won, it will be jealously guarded. The longer you go without updating your profile, the more likely it is that you will lose your badge. This “loss aversion” is well known in psychology, and is a strong incentive for people to keep their profiles up to date. If a user sees their rank drop from 90% to 60%, they will find the time to update their justification! Unfortunately, during the hackathon we were not allowed to modify the schema to allow for additional fields such as “justification.” So this hack is limited to just the one basic question: who is the bigger Honey Badger?
    Here are some shots of the Honey Badger application. Thanks to Bex and everyone for participating in our challenge. Despite very little time to promote this event, we had a great turnout and creative and useful entries. The amount of work required to put together these final entries was significant, especially during a conference, and the judges and all of us involved were impressed at how much work everyone was able to do. Congrats to everyone, pat yourselves on the back. Stay tuned if you’re interested in challenges like these. We’ll likely be running similar events in the not-so-distant future.

    Read the article

  • Inside Red Gate - Be Reasonable!

    - by Simon Cooper
    As I discussed in my previous posts, divisions and project teams within Red Gate are allowed a lot of autonomy to manage themselves. It's not just the teams, though; there's an awful lot of freedom given to individual employees within the company as well. Reasonableness How Red Gate treats its employees is embodied in the phrase 'You will be reasonable with us, and we will be reasonable with you'. As an employee, you are trusted to do your job to the best of your ability. There's no one looking over your shoulder, no one clocking you in and out each day. Everyone is working at the company because they want to, and one of the core ideas of Red Gate is that the company exists to 'let people do the best work of their lives'. Everything is geared towards that. To help you do your job, office services and the IT department are there. If you need something to help you work better (a third or fourth monitor, footrests, or a new keyboard) then ask people in Information Systems (IS) or Office Services and you will be given it, no questions asked. Everyone has administrator access to their own machines, and you can install whatever you want on them. If there's a particular bit of software you need, then ask IS and they will buy it. As an example, last year I wanted to replace my main hard drive with an SSD; I had a summer job at school working in a computer repair shop, so I knew what to do. I went to IS and asked for 'an SSD, a SATA cable, and a screwdriver'. And I got it there and then, even the screwdriver. Awesome. I screwed it in myself, copied all my main drive files across, and I was good to go. Of course, if you're not happy doing that yourself, then IS will sort it all out for you, no problems. If you need something that the company doesn't have (say, a book off Amazon, or you need some specifications printed off & bound), then everyone has an expense limit of £100 that you can use without any sign-off needed from your managers. If you need a company credit card for whatever reason, then you can get it. This freedom extends to working hours and holiday; you're expected to be in the office 11am-3pm each day, but outside those times you can work whenever you want. If you need a half-day holiday on a day's notice, or even the same day, then you'll get it, unless there's a good reason you're needed that day. If you need to work from home for a day or so for whatever reason, then you can. If it's reasonable, then it's allowed. Trust issues? A lot of trust, and a lot of leeway, is given to all the people in Red Gate. Everyone is expected to work hard, do their jobs to the best of their ability, and there will be a minimum of bureaucratic obstacles that stop you doing your work. What happens if you abuse this trust? Well, an example is company trip expenses. You're free to expense what you like; food, drink, transport, etc, but if your expenses are not reasonable, then you will never travel with the company again. Simple as that. Everyone knows when they're abusing the system, so simply don't do it. Along with reasonableness, another phrase used is 'Don't be a ***'. If you act like a ***, and abuse any of the trust placed in you, even if you're the best tester, salesperson, dev, or manager in the company, then you won't be a part of the company any more. From what I know about other companies, employee trust is highly variable between companies, all the way up to CCTV trained on employees' monitors. As a dev, I want to produce well-written & useful code that solves people's problems.
Being able to get whatever I need - install whatever tools I need, get time off when I need to, obtain reference books within a day - all let me do my job, and so let Red Gate help other people do their own jobs through the tools we produce. Plus, I don't think I would like working for a company that doesn't allow admin access to your own machine and blocks Facebook!

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #001

    - by pinaldave
    I am introducing a new series today.  This series is called “Memory Lane.”  From the last six years and 2,300 articles, there are fantastic articles I keep revisiting.  Sometimes when I read old blog posts I think I should have included something or added a bit more to the topic.  But for many articles, I still feel they are fantastic (even after six years) and could be read again and again. I have also found that after six years of blogging, readers will write to me and say “Pinal, why don’t you write about X, Y or Z.”  The answer is: I already did!  It is here on the blog, or in the comments, or possibly in one of my books.  The solution has always been there, it is simply a matter of finding it and presenting it again.  That is why I have created Memory Lane.  I will be listing the best articles from the same week of the past six years.  You will find plenty of reading material every Saturday from articles of SQLAuthority past. Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2006 Query to Display Foreign Key Relationships and Name of the Constraint for Each Table in Database My blogging journey began with this blog post. As many of you know, my journey began with creating a repository of my scripts. This was the very first script I had written, to find out foreign key relationships and constraints. The same query was updated later on using the new SYS schema modification in SQL Server. Version 1: Using sys.schema Version 2: Using sys.schema and additional columns 2007 Milestone Posts – 1 Year (365 blogs) and 1 Million Views By the first week of Nov 2007, the SQLAuthority.com blog had around 365 blog posts and 1 million views. I was not obsessed with the statistics before, but this was indeed an interesting moment for me, as I was blogging for myself and did not realize that so many people were reading my blog. In 2006 there were not many bloggers, so blogging was new to me as well. I was learning it as I went. 2008 Stored Procedure WITH ENCRYPTION and Execution Plan If you have a stored procedure and its code is encrypted, what will be displayed in the execution plan when you execute it? There are two kinds of execution plans: 1) Estimated and 2) Actual. It is indeed interesting to know what is displayed in both cases when the stored procedure is encrypted. What is your guess? Now go ahead and click on here and figure out your answer. If a user is not able to log in to SQL Server due to an error or issue, there are two different blog posts addressing the same issue, here and here. 2009 It seems like Nov is the SQLPASS month. In 2009, in the same week, I was in the USA attending the SQLPASS event. I had a fantastic experience attending the event. Here are the blog posts covering the subject: Day 1, Day 2, Day 3, Day 4 2010 Finding the last backup time for all the databases This little script is very powerful and instantly tells you when the last backup of each database was performed. If you are reading this blog post – I say just go ahead and check that everything is alright on your server and you have all the necessary latest backups. It is better to be safe than sorry.
Version 1: Above script was improved to get more details about the database Version 2: This version of the script will include pretty much have all the backup related information in a single script. Do not miss to save it for future use. Are you a Database Administrator or a Database Developer? Three years ago I created a very small survey and the results which I have received are very interesting. The question was asking what is the profile of the visitor of that blog post and I noticed that DBA and Developers have balanced with little inclination towards Developers. Have you voted so far? If not, go ahead! 2011 New Book Released – SQL Server Interview Questions And Answers One year ago, on November 3, 2011 I published my book SQL Server Interview Questions and Answers.  The book has a lot of great reviews, and we have even received emails telling us this book was a life changer because it helped get them a great new job.  I don’t think anyone can get a job just from my book.  It was the individual who studied hard and took it seriously, and was determined to learn something new.  The book might have helped guide them and show them the topics to study, but they spent their own energy on it.  It was their own skills that helped them pass the exam. So, in this very first installment, I would like to thank the readers for accepting our book, for giving it great reviews and for using it and sharing it.  Our goal in writing this book was to help others, and it seems like we succeeded. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
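    The scripts themselves are behind the links above, but the core idea is a simple query against msdb's backup history tables. A rough sketch of that pattern (not the original SQLAuthority script), listing the most recent full backup for every database:

        SELECT  d.name AS database_name,
                MAX(b.backup_finish_date) AS last_full_backup
        FROM    sys.databases AS d
        LEFT JOIN msdb.dbo.backupset AS b
                ON b.database_name = d.name
               AND b.type = 'D'              -- 'D' = full database backup
        GROUP BY d.name
        ORDER BY last_full_backup;            -- NULL means the database has never been backed up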

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
    The download of Oracle 12c database became available on June 25, 2013.  There are some big new features in 12c database and WebLogic Server will take advantage of them. Immediately, we will support using 12c database and drivers with WLS 10.3.6 and 12.1.1.  When the next version of WLS ships, additional functionality will be supported (those rows in the table below with all "No" values will get a "Yes").  The following table maps the Oracle 12c Database features supported with various combinations of currently available WLS releases (10.3.6/12.1.1 in every column), 11g and 12c drivers, and 11g and 12c databases:
        Feature | 11g drivers + 11gR2 DB | 11g drivers + 12c DB | 12c drivers + 11gR2 DB | 12c drivers + 12c DB
        JDBC replay | No | No | No | Yes (Active GridLink only in 10.3.6, add generic in 12.1.1)
        Multi Tenant Database | No | Yes (except set container) | No | Yes (except set container)
        Dynamic switching between Tenants | No | No | No | No
        Database Resident Connection pooling (DRCP) | No | No | No | No
        Oracle Notification Service (ONS) auto configuration | No | No | No | No
        Global Database Services (GDS) | No | Yes (Active GridLink only) | No | Yes (Active GridLink only)
        JDBC 4.1 (using ojdbc7.jar files & JDK 7) | No | No | Yes | Yes
    The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at the link https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references: the 12c Oracle Database Developer Guide (http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm) and the 12c Oracle Database Administrator's Guide (http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm). I plan to write some related blog articles, not to duplicate existing product documentation but to introduce the features, provide some examples, and tie together some information to make it easier to understand. How do you get started with 12c?  The easiest way is to point your data source at a 12c database.  The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database).  You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1.  You shouldn't see any changes in your application.  You can take advantage of enhancements on the database side that don't affect the mid-tier.  On the WLS side, you can take advantage of using Global Data Service or connecting to a tenant in a multi-tenant database transparently. If you want to use the 12c client jar files, it's a bit of work because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days.  You need to use a matched set of jar files and they need to come before existing jar files in the CLASSPATH.  The MOS article is written from the standpoint that you need to get the jar files directly - download almost 1G and install over 600M footprint to get 15 jar files.  Assuming that you have the database installed and you can get access to the installation (or ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them in your CLASSPATH.  You can play with setting the PRE_CLASSPATH but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.
There's a change in the transaction completion behavior (read the MOS) so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false.  Also if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2).  Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of the ojdbc6.jar to get the new JDBC 4.1 API's.  You can also start using Application Continuity.  This feature is also known as JDBC Replay because when a connection fails you get a new one with all JDBC operations up to the failure point automatically replayed.  As you might expect, there are some limitations but it's an interesting feature.  Obviously I'm going to focus on the 12c database features that we can leverage in WLS data source.  You will need to read other sources or the product documentation to get all of the new features.
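    To make the mechanics above concrete, here is a rough sketch of how the 12c client jars and the flags mentioned above might be wired in via the standard start-script environment hooks. The directory name is made up, and only two of the 15 matched jars are shown; the full jar list and the recommended file to edit come from the MOS note:

        # Hypothetical location where the matched set of 12c client jars was copied
        JDBC12C=/u01/oracle/jdbc12c

        # Put the 12c jars ahead of the jars shipped with WLS; PRE_CLASSPATH is picked up
        # by the standard domain start scripts (the alternative is editing commEnv.sh directly)
        export PRE_CLASSPATH="${JDBC12C}/ojdbc7.jar:${JDBC12C}/ucp.jar:${PRE_CLASSPATH}"

        # JVM flags called out above for transaction behavior and Active GridLink compatibility
        export JAVA_OPTIONS="${JAVA_OPTIONS} -Doracle.jdbc.autoCommitSpecCompliant=false -Doracle.ucp.PreWLS1212Compatible=true"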

    Read the article

  • why installing lame it is getting failed

    - by Rahul Mehta
    I want to install ffmpeg with mp3lame enabled for this m following this tutorial , http://ubuntuforums.org/showpost.php?p=9868359&postcount=1289 and in step 2 error is libfaac is not found ? and in step 5 installing lame is giving this error , why it is getting failed , please advised what to do ? reach121@youngib:~/lame-3.98.4$ sudo checkinstall --pkgname=lame-ffmpeg --pkgversion="3.98.4" --backup=no --default --deldoc=yes checkinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran This software is released under the GNU GPL. ***************************************** **** Debian package creation selected *** ***************************************** This package will be built according to these values: 0 - Maintainer: [ root@youngib ] 1 - Summary: [ Package created with checkinstall 1.6.2 ] 2 - Name: [ lame-ffmpeg ] 3 - Version: [ 3.98.4 ] 4 - Release: [ 1 ] 5 - License: [ GPL ] 6 - Group: [ checkinstall ] 7 - Architecture: [ amd64 ] 8 - Source location: [ lame-3.98.4 ] 9 - Alternate source location: [ ] 10 - Requires: [ ] 11 - Provides: [ lame-ffmpeg ] 12 - Conflicts: [ ] 13 - Replaces: [ ] Enter a number to change any of them or press ENTER to continue: Installing with make install... ========================= Installation results =========================== Making install in mpglib make[1]: Entering directory `/home/reach121/lame-3.98.4/mpglib' make[2]: Entering directory `/home/reach121/lame-3.98.4/mpglib' make[2]: Nothing to be done for `install-exec-am'. make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/home/reach121/lame-3.98.4/mpglib' make[1]: Leaving directory `/home/reach121/lame-3.98.4/mpglib' Making install in libmp3lame make[1]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame' Making install in i386 make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/i386' make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/i386' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/i386' make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/i386' Making install in vector make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/vector' make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/vector' make[3]: Nothing to be done for `install-exec-am'. make[3]: Nothing to be done for `install-data-am'. 
make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/vector' make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/vector' make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame' make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame' test -z "/usr/local/lib" || /bin/mkdir -p "/usr/local/lib" /bin/bash ../libtool --mode=install /usr/bin/install -c 'libmp3lame.la' '/usr/local/lib/libmp3lame.la' /usr/bin/install -c .libs/libmp3lame.lai /usr/local/lib/libmp3lame.la /usr/bin/install -c .libs/libmp3lame.a /usr/local/lib/libmp3lame.a chmod 644 /usr/local/lib/libmp3lame.a ranlib /usr/local/lib/libmp3lame.a PATH="$PATH:/sbin" ldconfig -n /usr/local/lib ---------------------------------------------------------------------- Libraries have been installed in: /usr/local/lib If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the `LD_RUN_PATH' environment variable during linking - use the `-Wl,--rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to `/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- make[3]: Nothing to be done for `install-data-am'. make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame' make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame' make[1]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame' Making install in frontend make[1]: Entering directory `/home/reach121/lame-3.98.4/frontend' make[2]: Entering directory `/home/reach121/lame-3.98.4/frontend' test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin" /bin/bash ../libtool --mode=install /usr/bin/install -c 'lame' '/usr/local/bin/lame' /usr/bin/install -c lame /usr/local/bin/lame make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/home/reach121/lame-3.98.4/frontend' make[1]: Leaving directory `/home/reach121/lame-3.98.4/frontend' Making install in Dll make[1]: Entering directory `/home/reach121/lame-3.98.4/Dll' make[2]: Entering directory `/home/reach121/lame-3.98.4/Dll' make[2]: Nothing to be done for `install-exec-am'. make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/home/reach121/lame-3.98.4/Dll' make[1]: Leaving directory `/home/reach121/lame-3.98.4/Dll' Making install in debian make[1]: Entering directory `/home/reach121/lame-3.98.4/debian' make[2]: Entering directory `/home/reach121/lame-3.98.4/debian' make[2]: Nothing to be done for `install-exec-am'. make[2]: Nothing to be done for `install-data-am'. make[2]: Leaving directory `/home/reach121/lame-3.98.4/debian' make[1]: Leaving directory `/home/reach121/lame-3.98.4/debian' Making install in doc make[1]: Entering directory `/home/reach121/lame-3.98.4/doc' Making install in html make[2]: Entering directory `/home/reach121/lame-3.98.4/doc/html' make[3]: Entering directory `/home/reach121/lame-3.98.4/doc/html' make[3]: Nothing to be done for `install-exec-am'. 
test -z "/usr/local/share/doc/lame/html" || /bin/mkdir -p "/usr/local/share/doc/lame/html" /bin/mkdir: cannot create directory `/usr/local/share/doc': No such file or directory make[3]: *** [install-pkghtmlDATA] Error 1 make[3]: Leaving directory `/home/reach121/lame-3.98.4/doc/html' make[2]: *** [install-am] Error 2 make[2]: Leaving directory `/home/reach121/lame-3.98.4/doc/html' make[1]: *** [install-recursive] Error 1 make[1]: Leaving directory `/home/reach121/lame-3.98.4/doc' make: *** [install-recursive] Error 1 **** Installation failed. Aborting package creation. Cleaning up...OK Bye. reach121@youngib:~/lame-3.98.4$
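    Judging from the tail of the log, the build itself succeeds and only the documentation install step fails: make cannot create /usr/local/share/doc while running under checkinstall. A common workaround for this class of checkinstall failure (an assumption based on the error shown, not a verified fix for this exact setup) is to pre-create the directory and re-run the same command, or to disable checkinstall's filesystem translation:

        # Pre-create the doc directory the install step is complaining about
        sudo mkdir -p /usr/local/share/doc/lame/html

        # ...then re-run the same checkinstall command as before
        sudo checkinstall --pkgname=lame-ffmpeg --pkgversion="3.98.4" --backup=no --default --deldoc=yes

        # If it still fails, try skipping checkinstall's filesystem translation
        sudo checkinstall --fstrans=no --pkgname=lame-ffmpeg --pkgversion="3.98.4" --backup=no --default --deldoc=yes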

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Conference (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was held on April 22-25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented there. Pluralsight Shades The event was great and I had fantastic fun presenting there. I was also very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and giving me a standing-room-only crowd every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here. Indexing is an Art I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and Performance Tuning. Audience at GIDS I believe events like this provide a great platform to meet and share knowledge. Pinal at Pluralsight Booth Here are the abstracts of the sessions I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight. Indexes, the Unsung Hero Relevant Pluralsight Course Slow running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and Developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with the same. We will cover various aspects of indexing such as Duplicate Index, Redundant Index, Missing Index as well as best practices around indexes.
Pinal Dave at GIDS MySQL Performance Tuning – Unexplored Territory Relevant Pluralsight Course Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and Performance Tuning as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL Performance. We will also try and cover other performance related tips and tricks. At the end of this session, attendees will not only have a clear idea, but also carry home action items regarding what to do when facing any of the above resource intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers, immediately post the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning and the best practices associated with the same. You will also witness some impressive performance tuning demos in this session. Hidden Secrets and Gems of SQL Server We Bet You Never Knew Relevant Pluralsight Course SQL Trio Session! It really amazes us every time when someone says SQL Server is an easy tool to handle and work with. Microsoft has done an amazing work in making working with complex relational database a breeze for developers and administrators alike. Though it looks like child’s play for some, the realities are far away from this notion. The basics and fundamentals though are simple and uniform across databases, the behavior and understanding the nuts and bolts of SQL Server is something we need to master over a period of time. With a collective experience of more than 30+ years amongst the speakers on databases, we will try to take a unique tour of various aspects of SQL Server and bring to you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, productivity tips on tools and more. This is a highly demo filled session for practical use if you are a SQL Server developer or an Administrator. The speakers will be able to stump you and give you answers on almost everything inside the Relational database called SQL Server. I personally attended the session of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite Govind Kanshi. Summary If you have missed this event here are two action items 1) Sign up for Resource Newsletter 2) Watch my video courses on Pluralsight Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL Tagged: GIDS

    Read the article

  • Certify August Updates

    - by Sadia2
    Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 We have added some release and platform certifications to MOS Certify. Applications : Oracle Demantra Demand Management 7.3.1.5, Oracle Demantra Predictive Trade Planning 7.3.1.5, Oracle Demantra Sales and Operations Planning 7.3.1.5 Database: Oracle Database Client 12.1.0.1.0 11.2.0.4.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0 E-Business Suite: Oracle E-Business Suite 12.1.3, Oracle E-Business Suite 12.1.2, Oracle E-Business Suite 12.1.1, Oracle E-Business Suite 12.0.6, Oracle E-Business Suite 11.5.10.2 Edge Applications: Oracle Transportation Management 6.3.2 Enterprise Manager: Oracle Application Management Pack for Oracle E-Business Suite 12.1.0.1.0 Fusion Middleware: Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Forms Builder 11.1.1.6.0, Oracle Application Development Framework 11.1.1.6.0, Oracle Application Development Runtime 11.1.1.6.0, Oracle Business Intelligence Publisher 11.1.1.6.0, Oracle Directory Services Manager 11.1.1.6.0, Oracle Forms 11.1.1.6.0, Oracle GoldenGate 11.1.1.1.0, 11.1.1.1.2, 11.1.1.1.1, Oracle GoldenGate Application Adapters 11.1.1.1.1, Oracle Identity and Access Management 11.1.2.0.0, 11.1.2.1.0, Oracle Identity Federation 11.1.1.6.0, Oracle Real-Time Decision Load Generator 11.1.1.7.0, Oracle Real-Time Decision Studio 11.1.1.7.0, Oracle Real-Time Decisions 11.1.1.6.0, Oracle Reports 11.1.1.6.0, Oracle Segmentation Server 11.1.1.6.0, Oracle Virtual Directory 11.1.1.6.0, Oracle Web Cache 11.1.1.6.0, Oracle WebCenter Content Imaging 11.1.1.8.0, Oracle WebCenter Content Inbound Refinery Server 11.1.1.8.0, Oracle WebCenter Content Records 11.1.1.8.0, Oracle WebCenter Content Rights 11.1.1.8.0, Oracle WebCenter Content UI 11.1.1.8.0, Oracle WebCenter Enterprise Capture 11.1.1.8.0, Oracle WebCenter Portal 11.1.1.8.0, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: Explorer 11.1.1.8.0, Oracle WebCenter Universal Content Management 11.1.1.8.0, Reports Builder 11.1.1.6.0, Oracle WebCenter Content Records 11.1.1.8.0, Oracle WebCenter Content Rights 11.1.1.8.0, Oracle WebCenter Content UI 11.1.1.8.0, Oracle WebCenter Sites: Developer Tools 11.1.1.8.0 FSGBU Insurance Group : Oracle Health Insurance Claims 2.13.3.0.0, 2.13.2.0.0, 2.13.1.0.0 JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Tools 9.1.3.0, 9.1.2.0, 9.1.0.0 JD Edwards World: JD Edwards World Service Enablement A93SE, A931SE PeopleSoft: PeopleSoft PeopleTools 8.52 Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel CRM Desktop Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel HI Web Client 8.2.2.2.0, 8.1.1.9.0, Siebel Gateway Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Outlook Add-in Client 8.2.2.2.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Web Server Extension 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0 /* Style Definitions */ table.MsoNormalTable 

    Read the article

  • BPM 11g - Dynamic Task Assignment with Multi-level Organization Units

    - by Mark Foster
    I've seen several requirements for a more granular level of task assignment in BPM 11g based on some value in the data passed to the process. Parametric Roles are normally the first port of call to satisfy this requirement, but in this blog we will show how a lot of use cases can be satisfied by the simpler to implement and more flexible Organization Unit. The Use Case: task assignment is to an approval group containing several users. At runtime, a location value in the input data determines which of those users the task is ultimately assigned to. In this case we use the Demo Community referenced in the SOA Admin Guide, specifically the "LoanAnalyticGroup", which contains three users: "szweig", "mmitch" and "fkafka". In our scenario we would like to assign a task to "szweig" if the input data specifies that the location is "JapanCentral", to "fkafka" if the location is "JapanNorth", to "mmitch" if it is "JapanSouth", and to all of them if the location is "Japan". The Process: a simple one-human-task process. In the output data association of the "Start" activity we set the value of the "Organization Unit" predefined variable based on the input data (note that the predefined variables can only be set on output data associations). In the output data association of the human activity we reset the "Organization Unit" to empty; it is always good practice to ensure that the Organization Unit will not be used for any subsequent human activities for which we do not require it. Set Up the Organization Unit: log in to the BPM Workspace with an administrator user (weblogic/welcome1 in our case) and choose the "Administration" option. Within "Roles", assign the "ProcessOwner" swim-lane for our process to "LoanAnalyticGroup". Within "Organization Units" we can model our organization: "Root Organization Unit" as "Japan" and "Child Organization Unit" as "Central", "South" and "North". As described previously, add user "szweig" to "Central", "mmitch" to "South" and "fkafka" to "North". Test the Process. Invalid Data: first let us test with invalid data in the input to see what the consequences are; here we use "X" as input, and looking at the instance we can see it has errored. Organization Unit Root Level Assignment: now let us see what happens if we have "Japan" in the input data. Looking in the flow trace we can see that the task has been assigned, but to whom? Looking in the BPM Workspace for users "szweig", "mmitch" and "fkafka" in turn, we can see that with an Organization Unit at root level we have successfully assigned the task to all users. Organization Unit Child Level Assignment: now let us test with "Japan/North" in the input data. Looking in "fkafka"'s workspace we see the task has been assigned (remember, he was associated with "JapanNorth"), while "szweig" and "mmitch" have no tasks assigned, just as we expected. Summary: we have seen in this blog how to easily implement multi-level dynamic task routing using Organization Units, a common use case and a simpler solution than Parametric Roles. 

    Read the article

  • Mobile Apps for Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Many things have changed in the mobile space over the last few years. Here's an update on our strategy for mobile apps for the E-Business Suite. Mobile app strategy We're building our family of mobile apps for the E-Business Suite using Oracle Mobile Application Framework.  This framework allows us to write a single application that can be run on Apple iOS and Google Android platforms. Mobile apps for the E-Business Suite will share a common look-and-feel. The E-Business Suite is a suite of over 200 product modules spanning Financials, Supply Chain, Human Resources, and many other areas. Our mobile app strategy is to release standalone apps for specific product modules.  Our Oracle Timecards app, which allows users to create and submit timecards, is an example of a standalone app. Some common functions that span multiple product areas will have dedicated apps, too. An example of this is our Oracle Approvals app, which allows users to review and approve requests for expenses, requisitions, purchase orders, recruitment vacancies and offers, and more. You can read more about our Oracle Mobile Approvals app here: Now Available: Oracle Mobile Approvals for iOS Our goal is to support smaller screen (e.g. smartphones) as well as larger screens (e.g. tablets), with the smaller screen versions generally delivered first.  Where possible, we will deliver these as universal apps.  An example is our Oracle Mobile Field Service app, which allows field service technicians to remotely access customer, product, service request, and task-related information.  This app can run on a smartphone, while providing a richer experience for tablets. Deploying EBS mobile apps The mobile apps, themselves (i.e. client-side components) can be downloaded by end-users from the Apple iTunes today.  Android versions will be available from Google play. You can monitor this blog for Android-related updates. Where possible, our mobile apps should be deployable with a minimum of server-side changes.  These changes will generally involve a consolidated server-side patch for technology-stack components, and possibly a server-side patch for the functional product module. Updates to existing mobile apps may require new server-side components to enable all of the latest mobile functionality. All EBS product modules are certified for internal intranet deployments (i.e. used by employees within an organization's firewall).  Only a subset of EBS products such as iRecruitment are certified to be deployed externally (i.e. used by non-employees outside of an organization's firewall).  Today, many organizations running the E-Business Suite do not expose their EBS environment externally and all of the mobile apps that we're building are intended for internal employee use.  Recognizing this, our mobile apps are currently designed for users who are connected to the organization's intranet via VPN.  We expect that this may change in future updates to our mobile apps. Mobile apps and internationalization The initial releases of our mobile apps will be in English.  Later updates will include translations for all left-to-right languages supported by the E-Business Suite.  Right-to-left languages will not be translated. Customizing apps for enterprise deployments The current generation of mobile apps for Oracle E-Business Suite cannot be customized. We are evaluating options for limited customizations, including corporate branding with logos, corporate color schemes, and others. 
This is a potentially complex area with many tricky implications for deployment and maintenance.  We would be interested in hearing your requirements for customizations in enterprise deployments. Prerequisites: Apple iOS 7 and higher; Android 4.1 (API level 16) and higher, with minimum CPU/memory configurations listed here; EBS 12.1: EBS 12.1.3 Family Packs for the related product module; EBS 12.2.3. References: Oracle E-Business Suite Mobile Apps, Release 12.1 and 12.2 Documentation (Note 1641772.1); Oracle E-Business Suite Mobile Apps Administrator's Guide, Release 12.1 and 12.2 (Note 1642431.1). Related Articles: Using Mobile Devices with Oracle E-Business Suite; Apple iPads Certified with Oracle E-Business Suite 12.1; Now Available: Oracle Mobile Approvals for iOS. The preceding is intended to outline our general product direction.  It is intended for information purposes only, and may not be incorporated into any contract.  It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.  The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

    Read the article

  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
    Introduction Secure Sockets Layer (SSL) can be used to secure the connection between the middle tier “client”, WebLogic Server (WLS) in this case, and the Oracle database server.  Data between WLS and database can be encrypted.  The server can be authenticated so you have proof that the database can be trusted by validating a certificate from the server.  The client can be authenticated so that the database only accepts connections from clients that it trusts. Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates.  By using it correctly, clear text passwords can be eliminated from the JDBC configuration and client/server configuration can be simplified by sharing the wallet across multiple datasources. There is a very good Oracle Technical White Paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1].  The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. The information in this article is a guide on what steps need to be taken in the variety of available options; use the links above for details. SSL from the driver to the database server is basically turned on by specifying a protocol of “tcps” in the URL.  However, there is a fair amount of setup needed.  Also remember that there is an overhead in performance. Creating the wallets The common use cases are 1. “data encryption and server-only authentication”, requiring just a trust store, or 2. “data encryption and authentication of both tiers” (client and server), requiring a trust store and a key store. It is recommended to use the auto-login wallet type so that clear text passwords are not needed in the datasource configuration to open the wallet.  The store type for an auto-login wallet is “SSO” (Single Sign On), not “JKS” or “PKCS12” as in [LINK2].  The file name is “cwallet.sso”. Wallets are created using the orapki tool.  They need to be created based on the usage (encryption and/or authentication).  This is discussed in detail in [LINK1] in Appendix B or in the Advanced Security Administrator’s Guide of the Database documentation. Database Server Configuration It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION.  These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false). The Oracle Listener must also be configured to use the TCPS protocol.  The recommended port is 2484. LISTENER = (ADDRESS_LIST= (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484))) WebLogic Server Classpath The WebLogic Server CLASSPATH must have three additional security files. The files that need to be added to the WLS CLASSPATH are $MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar $MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar $MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar One way to do this is to add them to PRE_CLASSPATH environment variable for use with the standard WebLogic scripts. Setting the Oracle Security Provider It’s necessary to enable the Oracle PKI provider on the client side.  
This can either be done statically by updating the java.security file under the JRE or dynamically by setting it in a WLS startup class using java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider (), 3); See the full example of the startup class in [LINK2]. Datasource Configuration When creating a WLS datasource, set the PROTOCOL in the URL to tcps as in the following. jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice))) For encryption and server authentication, use the datasource connection properties: - javax.net.ssl.trustStore=location of wallet file on the client - javax.net.ssl.trustStoreType=”SSO” For client authentication, use the datasource connection properties: - javax.net.ssl.keyStore=location of wallet file on the client - javax.net.ssl.keyStoreType=”SSO” Note that the driver connection properties for the wallet require a file name, not a directory name. Active GridLink ONS over SSL For completeness, there is another SSL usage for WLS datasources.  The communication with the Oracle Notification Service (ONS) for load balancing information and node up/down events can use SSL also. Create an auto-login wallet and use the wallet on the client and server.  The following is a sample sequence to create a test wallet for use with ONS. orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet On the database server side, it’s necessary to define the walletfile directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start. When configuring an Active GridLink datasource, the connection to the ONS must be defined.  In addition to the host and port, the wallet file directory must be specified.  By not giving a password, a SSO wallet is assumed. Summary To use SSL with the Oracle thin driver without any clear text passwords, use an SSO Oracle Wallet.  SSL support in the Oracle thin driver is available starting in 10g Release 2.
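To make the classpath step above concrete, here is a minimal sketch (not from the original article) of how the three PKI jars could be added to PRE_CLASSPATH before starting a server with the standard domain scripts, which pick the variable up via setDomainEnv.sh; the MW_HOME value and the domain path are placeholders for your own installation.
    # Minimal sketch: put the Oracle PKI jars on the WebLogic classpath via PRE_CLASSPATH.
    # MW_HOME and the domain directory are placeholders; adjust for your environment.
    export MW_HOME=/u01/app/oracle/middleware
    export PRE_CLASSPATH=$MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar:$MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar:$MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar
    # Start the server with the standard scripts so PRE_CLASSPATH is prepended to the classpath.
    cd /u01/app/oracle/user_projects/domains/mydomain/bin
    ./startWebLogic.sh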

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
    LAN parties offer the enjoyment of head-to-head gaming in a real-life social environment. In general, they are experiencing a decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work. Varda has done his own write-ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer. The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well-specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive. Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site: "Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play." Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize as it's rare to find modern games that work perfectly and at full native speed under WINE. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage. Linux still runs the server and all of the tools used are open source software. The hardware here is an Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients. In addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SSD. When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has set up Linux network booting. DHCP pointed the clients to the server which then supplied PXELINUX using TFTP. When booted, file access was accomplished through network block device (NBD). This is a very easy-to-use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user mode device driver and the device can be mounted within the file system using the mount command. One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system as the server only sees a single file. The advantage is performance, as the client operating system simply sees a block device, and besides, these security issues aren't relevant in this setup. Unfortunately, Windows 7 can't use NBD, so Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately, gPXE came to the rescue, and he bootstraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot. 
It can also optionally boot from an HTTP server rather than the more traditional TFTP server. According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios. At the time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy-on-write feature of LVM so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc. Summary: Overall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences from most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.
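As a rough illustration of the NBD arrangement described above (this sketch is not from Varda's write-up, the export name and image path are invented, and the exact flags vary between nbd releases), serving a disk image and mounting it on a Linux client might look something like this:
    # Server side: modern nbd-server reads its exports from /etc/nbd-server/config,
    # for example:
    #   [generic]
    #   [gameimage]
    #   exportname = /srv/images/client.img
    nbd-server
    # Client side: load the nbd module, attach the export to /dev/nbd0 and mount it.
    # This assumes the image holds a single filesystem rather than a partition table.
    modprobe nbd
    nbd-client -N gameimage server-host /dev/nbd0
    mount /dev/nbd0 /mnt/gameimage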

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment.  Below is some of the information and details regarding the form I created for the session.  There are many blogs and Microsoft articles on how to create a basic form so I won’t repeat that information here.   This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach.  The above link contains the zipped package files of the two InfoPath forms(no code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck.  If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.).  Also, you must have the SharePoint Enterprise version, with InfoPath Services configured in order to use the Web Browser enabled forms. So what are the requirements for this template? Business Card Request Form Template Design Plan: Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Show and hide fields as necessary for requirements Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. Browser based form integrated into SharePoint team site Submitted directly to form library The base form was created using the blank template.  The table and rows were added using Insert tab and selecting Custom Table.  The use of tables is a great way to make sure everything lines up.  You do have to split the tables from time to time.  If you’ve ever split cells and then tried to re-align one to find that you impacted the others, you know why.  Here is what the base form looks like in InfoPath.   Show and hide fields as necessary for requirements You will notice I also used Sections within the form.  These show or hide depending on options selected or whether or not fields are blank.  This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn’t apply).  Although not used in this one, you can also use various views with a tab interface.  I’ll show that in another post. Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Utilizing rules you can load data when the form initiates (Data tab, Form Load).  Anything you can automate is always appreciated by the user as that is data they don’t have to enter.  For example, loading their user id or other user information on load: Always keep in mind though how much data you load and the method for loading that data (through rules, code, etc.).  They have an impact on form performance.  The form will take longer to load if you bring in a ton of data from external sources.  Laura Rogers has a great blog post on using the User Information List to load user information.   If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit.   What I have found is that using the User Profile service via code behind or the Web Service “GetUserProfileByName” (as above) can take more time to load the user data.  Just food for thought. You must add the data connection in order for the above rules to work.  
You can connect to the data connection through the Data tab, Data Connections or select Manage Data Connections link which appears under the main data source.  The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.  Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. You can also create multiple views for the users to enhance their experience.  Once they’ve entered the information and submitted their request for business cards, they don’t really need to see the main data input screen any more.  They just need to view what they entered. From the Page Design tab, select New View and give the view a name.  To review the existing views, click the down arrow under View: The ReviewView shows just what the user needs and nothing more: Once you have everything configured, the form should be tested within a Test SharePoint environment before final deployment to production.  This validates you don’t have any rules or code that could impact the server negatively. Submitted directly to form library   You will need to know the form library that you will be submitting to when publishing the template.  Configure the Submit data connection to connect to this library.  There is already one configured in the sample,  but it will need to be updated to your environment prior to publishing. The Design template is different from the Published template.  While both have the .XSN extension, the published template contains all the “package” information for the form.  The published form is what is loaded into Central Admin, not the design template. Browser based form integrated into SharePoint team site In Central Admin, under General Settings, select Manage Form Templates.  Upload the published form template and Activate it to a site collection. Now it is available as a content type to select in the form library.  Some documentation on publishing form templates:  Technet – Manage administrator approved form templates And that’s all our base requirements.  Hope this helps to give a good start.

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins – restore the databases and make them work. There were 4 applications that back-ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat “IF REQUIRED” left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons but mainly because I did not want to deal with issues starting SQL Services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight. Not to mention other system database location problems that might arise and prevent SQL from starting.  I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, for taking the time to restore the source Master database over the newly installed one on the fresh server. What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system and then use that restored database to script the SQL Logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome.  Also, as is usual for any work like this, the usual disclaimers apply:  This is not something that I would imagine Microsoft would condone or support and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure because future updates for SQL Server will render this technique non-functional. I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the master database I named Master_Mine.  Since sp_help_revlogin uses system schema objects, sys.syslogins and sys.server_principals, this was not going to work because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login.  Even though the test account I created should presumably still be in the Master_Mine database, I should be able to get to it and script out its creation with its password hash so that I would not need to know the password, but any applications that stored that password would not have to be altered in the DR scenario. They would just work as expected. Once I realized that would not work I began looking deeper.  Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored. 
These tables existed in both the real Master and the restored copy, Master_Mine.  I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead to create the logins cursor used in sp_help_revlogin. For the password hash, sp_help_revlogin uses the function LoginProperty() which takes a user name and option ‘passwordhash’ to return the hash for the user. Unfortunately, it requires the login to exist in the Master database. This would not work. So another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin. The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD.  To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system under whatever name you like, and then run the modified stored procedure. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state). You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein.  My goal, as I said, was to prove that this could be done and provide another option if required to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.  
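The author deliberately stops short of full code, but as a purely hypothetical illustration of the final step, the tweaked procedure could be run over a DAC from the command line with sqlcmd (the -A switch opens the Dedicated Administrator Connection, run locally on the server); the server name and output file below are placeholders, and the procedure and database names follow the examples above:
    REM Hypothetical invocation only: run the modified procedure created in the
    REM restored Master_Mine database over a Dedicated Administrator Connection.
    sqlcmd -S DRServerName -A -E -d Master_Mine -Q "EXEC usp_help_revlogin_MyMaster" -o C:\Temp\dr_logins.sql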

    Read the article

  • Automating custom software installation in a zone

    - by mgerdts
    In Solaris 11, the internals of zone installation are quite different than they were in Solaris 10.  This difference allows the administrator far greater control of what software is installed in a zone.  The rules in Solaris 10 are simple and inflexible: if it is installed in the global zone and is not specifically excluded by package metadata from being installed in a zone, it is installed in the zone.  In Solaris 11, the rules are still simple, but are much more flexible:  the packages you tell it to install and the packages on which they depend will be installed. So, where does the default list of packages come from?  From the AI (auto installer) manifest, of course.  The default AI manifest is /usr/share/auto_install/manifest/zone_default.xml.  Within that file you will find:             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data> So, the default installation will install pkg:/group/system/solaris-small-server.  Cool.  What is that?  You can figure out what is in the package by looking for it in the repository with your web browser (click the manifest link), or use pkg(1).  In this case, it is a group package (pkg:/group/), so we know that it just has a bunch of dependencies to name the packages that really wants installed. $ pkg contents -t depend -o fmri -s fmri -r solaris-small-server FMRI compress/bzip2 compress/gzip compress/p7zip ... terminal/luit terminal/resize text/doctools text/doctools/ja text/less text/spelling-utilities web/wget If you would like to see the entire manifest from the command line, use pkg contents -r -m solaris-small-server. Let's suppose that you want to install a zone that also has mercurial and a full-fledged installation of vim rather than just the minimal vim-core that is part of solaris-small-server.  That's pretty easy. First, copy the default AI manifest somewhere where you will edit it and make it writable. # cp /usr/share/auto_install/manifest/zone_default.xml ~/myzone-ai.xml # chmod 644 ~/myzone-ai.xml Next, edit the file, changing the software_data section as follows:             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>                 <name>pkg:/developer/versioning/mercurial</name>                <name>pkg:/editor/vim</name>             </software_data> To figure out  the names of the packages, either search the repository using your browser, or use a command like pkg search hg. Now we are all ready to install the zone.  If it has not yet been configured, that must be done as well. # zonecfg -z myzone 'create; set zonepath=/zones/myzone' # zoneadm -z myzone install -m ~/myzone-ai.xml A ZFS file system has been created for this zone. Progress being logged to /var/log/zones/zoneadm.20111113T004303Z.myzone.install Image: Preparing at /zones/myzone/root. Install Log: /system/volatile/install.15496/install_log AI Manifest: /tmp/manifest.xml.XfaWpE SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: myzone Installation: Starting ... Creating IPS image Installing packages from: solaris origin: http://localhost:1008/solaris/54453f3545de891d4daa841ddb3c844fe8804f55/ DOWNLOAD PKGS FILES XFER (MB) Completed 169/169 34047/34047 185.6/185.6 PHASE ACTIONS Install Phase 46498/46498 PHASE ITEMS Package State Update Phase 169/169 Image State Update Phase 2/2 Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. 
Done: Installation completed in 531.813 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. Log saved in non-global zone as /zones/myzone/root/var/log/zones/zoneadm.20111113T004303Z.myzone.install Now, for a few things that I've seen people trip over: Ignore that bit about man pages - it's wrong.  Man pages are already installed so long as the right facet is set properly.  And that's a topic for another blog entry. If you boot the zone then just use zlogin myzone, you will see that services you care about haven't started and that svc:/milestone/config:default is starting.  That is because you have not yet logged into the console with zlogin -C myzone. If the zone has been booted for more than a very short while when you first connect to the zone console, it will seem like the console is hung.  That's not really the case - hit ^L (control-L) to refresh the sysconfig(1M) screen that is prompting you for information.
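As a quick sanity check (not part of the original walkthrough), once the console configuration has completed you can confirm that the extra packages from the custom manifest made it into the zone; the zone name here follows the myzone example above:
    # Boot the zone and complete sysconfig on the console (disconnect with ~. when done).
    zoneadm -z myzone boot
    zlogin -C myzone
    # Then verify the additional packages requested in the AI manifest are installed.
    zlogin myzone pkg list developer/versioning/mercurial editor/vim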

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Modifying the Default Shipped Template

    - by The Old Toxophilist
    Having installed your Exalogic virtual environment, by default you have a single template which can be used to create your vServers. Although this template is suitable for creating simple test or development vServers, it is recommended that you look at creating your own custom vServers that match the environment you wish to build and deploy. Therefore this Tea Break Snippet will take you through the simple process of modifying the standard template. Before You Start To edit the template you will need the Oracle ModifyJeos Utility which can be downloaded from the eDelivery Site. Once the ModifyJeos Utility has been downloaded we can install the rpms onto either an existing vServer or one of the Control vServers. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Alternatively you can install the ModifyJeos packages on a non-Exalogic OEL installation or a VirtualBox image. If you are doing this, assuming OEL 5u8, you will need the following rpms. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-el5u2-xvm-jeos-1.0.1-5.el5.i386.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Base Template If you have installed the ModifyJeos packages onto a vServer running on the Exalogic then simply mount the /export/common/images from the ZFS storage and you will be able to find the el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz (or similar, depending on which version you have) template file. Alternatively the latest can be downloaded from the eDelivery Site. Now that we have the template tgz, we will need to extract it as follows: tar -zxvf el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz This will create a directory called BASE which will contain the System.img (vServer image) and vm.cfg (vServer config information). This directory should be renamed to something more meaningful that indicates what we have done to the template, and then the Simple name / name in the vm.cfg edited for the same reason. Modifying the Template Resizing Root File System By default the shipped template has a root size of 4 GB which will leave a vServer created from it running at 90% full on the root disk. We can simply resize the template by executing the following: modifyjeos -f System.img -T <New Size MB> For example to increase the default 4 GB to 40 GB we would execute: modifyjeos -f System.img -T 40960 Resizing Swap We can modify the size of the swap space within a template by executing the following: modifyjeos -f System.img -S <New Size MB> For example to increase the swap from the default 512 MB to 4 GB we would execute: modifyjeos -f System.img -S 4096 Changing RPMs Adding RPMs To add RPMs using modifyjeos, complete the following steps: Add the names of the new RPMs in a list file, such as addrpms.lst. In this file, you should list each new RPM on a separate line. Ensure that all of the new RPMs are in a single directory, such as rpms. Run the following command to add the new RPMs: modifyjeos -f System.img -a <path_to_addrpms.lst> -m <path_to_rpms> -nogpg Where <path_to_addrpms.lst> is the path to the location of the addrpms.lst file, and <path_to_rpms> is the path to the directory that contains the RPMs. The -nogpg option eliminates the signature check on the RPMs. Removing RPMs To remove RPMs using modifyjeos, complete the following steps: Add the names of the RPMs (the ones you want to remove) in a list file, such as removerpms.lst. In this file, you should list each RPM on a separate line. 
The Oracle Exalogic Elastic Cloud Administrator's Guide provides a list of all RPMs that must not be removed from the vServer. Run the following command to remove the RPMs: modifyjeos -f System.img -e <path_to_removerpms.lst> Where <path_to_removerpms.lst> is the path to the location of the removerpms.lst file. Mounting the System.img For all other modifications that are not supported by the modifyjeos command (adding your custom yum repositories, pre-configuring NTP, modifying the default NFSv4 Nobody functionality, etc.) we can mount the System.img and access it directly. To facilitate quick and easy mounting/unmounting of the System.img I have put together the simple scripts below. MountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Export for later i.e. during unmount export LOOP=`losetup -f` export SYSTEMIMG=/mnt/elsystem # Make Temp Mount Directory mkdir -p $SYSTEMIMG # Create Loop for the System Image losetup $LOOP System.img kpartx -a $LOOP mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG #Change Dir into mounted Image cd $SYSTEMIMG UnmountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Assume the $LOOP & $SYSTEMIMG exist from a previous run of MountSystemImg.sh umount $SYSTEMIMG kpartx -d $LOOP losetup -d $LOOP Packaging the Template Once you have finished modifying the template it can be simply repackaged and then imported into EMOC as described in "Exalogic 2.0.1 Tea Break Snippets - Importing Public Server Template". To do this we will simply cd to the directory above that containing the modified files and execute the following: tar -zcvf <New Template Name>.tgz <New Template Directory> The resulting .tgz file can be copied to the images directory on the ZFS storage and uploaded using the IB network. This entry was originally posted on The Old Toxophilist site.
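Pulling the steps above together, an end-to-end run might look like the following sketch; the renamed directory and template name are examples only, and the resize values match the 40 GB root and 4 GB swap used earlier:
    # Extract the shipped template and rename the BASE directory to something meaningful.
    tar -zxvf el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz
    mv BASE EL_MyCustom_Template
    # Resize the root file system to 40 GB and the swap to 4 GB.
    modifyjeos -f EL_MyCustom_Template/System.img -T 40960
    modifyjeos -f EL_MyCustom_Template/System.img -S 4096
    # Repackage the modified template ready for upload and import into EMOC.
    tar -zcvf EL_MyCustom_Template.tgz EL_MyCustom_Template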

    Read the article

  • Mobile Apps for Oracle E-Business Suite

    - by Carlos Chang
    Crosspost from the mobile apps blog.  TL;DR Oracle E-Business Suite is now building mobile apps with Oracle Mobile Application Framework (MAF). Believe it! Build iOS and Android apps with one code base and get it done! By Steven Chan (Oracle Development)  Many things have changed in the mobile space over the last few years. Here's an update on our strategy for mobile apps for the E-Business Suite. Mobile app strategy We're building our family of mobile apps for the E-Business Suite using Oracle Mobile Application Framework.  This framework allows us to write a single application that can be run on Apple iOS and Google Android platforms. Mobile apps for the E-Business Suite will share a common look-and-feel. The E-Business Suite is a suite of over 200 product modules spanning Financials, Supply Chain, Human Resources, and many other areas. Our mobile app strategy is to release standalone apps for specific product modules.  Our Oracle Timecards app, which allows users to create and submit timecards, is an example of a standalone app. Some common functions that span multiple product areas will have dedicated apps, too. An example of this is our Oracle Approvals app, which allows users to review and approve requests for expenses, requisitions, purchase orders, recruitment vacancies and offers, and more. You can read more about our Oracle Mobile Approvals app here: Now Available: Oracle Mobile Approvals for iOS Our goal is to support smaller screens (e.g. smartphones) as well as larger screens (e.g. tablets), with the smaller screen versions generally delivered first.  Where possible, we will deliver these as universal apps.  An example is our Oracle Mobile Field Service app, which allows field service technicians to remotely access customer, product, service request, and task-related information.  This app can run on a smartphone, while providing a richer experience for tablets. Deploying EBS mobile apps The mobile apps themselves (i.e. client-side components) can be downloaded by end-users from Apple iTunes today.  Android versions will be available from Google Play. You can monitor this blog for Android-related updates. Where possible, our mobile apps should be deployable with a minimum of server-side changes.  These changes will generally involve a consolidated server-side patch for technology-stack components, and possibly a server-side patch for the functional product module. Updates to existing mobile apps may require new server-side components to enable all of the latest mobile functionality. All EBS product modules are certified for internal intranet deployments (i.e. used by employees within an organization's firewall).  Only a subset of EBS products such as iRecruitment are certified to be deployed externally (i.e. used by non-employees outside of an organization's firewall).  Today, many organizations running the E-Business Suite do not expose their EBS environment externally and all of the mobile apps that we're building are intended for internal employee use.  Recognizing this, our mobile apps are currently designed for users who are connected to the organization's intranet via VPN.  We expect that this may change in future updates to our mobile apps. Mobile apps and internationalization The initial releases of our mobile apps will be in English.  Later updates will include translations for all left-to-right languages supported by the E-Business Suite.  Right-to-left languages will not be translated. 
Customizing apps for enterprise deployments The current generation of mobile apps for Oracle E-Business Suite cannot be customized. We are evaluating options for limited customizations, including corporate branding with logos, corporate color schemes, and others. This is a potentially complex area with many tricky implications for deployment and maintenance.  We would be interested in hearing your requirements for customizations in enterprise deployments. Prerequisites: Apple iOS 7 and higher; Android 4.1 (API level 16) and higher, with minimum CPU/memory configurations listed here; EBS 12.1: EBS 12.1.3 Family Packs for the related product module; EBS 12.2.3. References: Oracle E-Business Suite Mobile Apps, Release 12.1 and 12.2 Documentation (Note 1641772.1); Oracle E-Business Suite Mobile Apps Administrator's Guide, Release 12.1 and 12.2 (Note 1642431.1). Follow @OracleMobile on Twitter. The Oracle Mobile Blog is here. 

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted an online forum that shared a comprehensive path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions.  Below are highlights of the most informative: DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment and testing it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities for example. How does this benefit companies that have their own data center? This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and response to changes in business requirements. From a deployment perspective, is DBaaS solely the DBA's job? The best deployment model enables the DBA (or end-user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, sysadmin teams). The service is created either via a self-service portal or by the DBA. The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service. Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.  Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future state DBaaS will look a lot different from their current state, as will their Service Definition. How does Database as a Service impact or help with "Click to Compute" or deploying "Database in cloud infrastructure"? DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database and consolidation using virtualization in infrastructure cloud. 
As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models. How exactly is DBaaS different from a traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be cross-database access by application/user? Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, shrunk as needed to meet business needs rapidly and without disruption. 4) The user is able to provision the services directly from a standardized service catalog. With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, batch jobs. How does the pluggable database handle this and the different needs/patching downtime of the apps the databases might be serving? You can gather stats in Oracle Multitenant the same way you have been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new version/patched multitenant db in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.  Thanks for all the great questions!  If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • VS2010 crashes when opening a vsp generated using VS 2012

    - by Tarun Arora
    I recently profiled some web applications using Visual Studio 2012; a vsp (Visual Studio Profile) file was generated as a result of the profiling session. I could successfully open the vsp file in Visual Studio 2012 as expected but when I tried to open the vsp file in Visual Studio 2010 the VS2010 IDE crashed. As a responsible citizen I raised bug # 762202 on the Microsoft Connect site using the Microsoft Visual Studio 2012 Feedback Client. Note – In case you didn’t already know, a VSP generated in Visual Studio 2012 is not backward compatible. Please refer below for the steps to reproduce the issue and the resolution of the Connect bug. 1. Behaviour and Steps to Reproduce the Issue Description I have generated a vsp file by using the Visual Studio 2012 Standalone profiler. When I try to open the vsp file in Visual Studio 2010 the IDE crashes. I understand that a vsp generated by using VS 2012 cannot be opened in VS 2010, but the IDE crashing is not the behaviour I would expect to see. Steps to Reproduce the Issue 1. Pick up the standalone profiler from the VS 2012 installation media. The folder has both x64 and x86 installers; since the machine I am using is 64-bit, I have installed the x64 version of the standalone profiler. 2. I have configured the system path by setting the 'environment variable' path to where the profiler is installed. In my case this is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools 3. Created a new environment variable _NT_SYMBOL_PATH and set its value to CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*HTTP://MSDL.MICROSOFT.COM/DOWNLOAD/SYMBOLS;\\FOO\BUILD1234 4. Open up CMD as an administrator and run 'VSPerfASPNETCmd /tip http://localhost:56180/ /o:C:\Temp\SampleEISK.vsp' 5. This generates the following message on the cmd       Microsoft (R) VSPerf ASP.NET Command, Version 11.0.0.0     Copyright (C) Microsoft Corporation. All rights reserved.     Configuring and attaching to ASP.NET process. Please wait.     Setting up profiling environment.     Starting monitor.     Launching ASP.NET service.     Attaching Monitor to process.     Launching Internet Explorer.     The profiler is attached to ASP.net. Please run your application scenario now.     Press Enter to stop data collection...   6. I perform certain actions and then I come back to the cmd and hit enter to shut down the profiling. Once I do this, the following message is written to the cmd: Press Enter to stop data collection... Profiling now shut down. Report file "C:\Temp\SampleEISK.vsp" was generated. Running VsPerfReport, packing symbols into the .VSP. Shutting down profiling and restarting ASP.NET. Please wait. Restarting w3wp.exe.   7. I look in the C:\Temp folder and I can see the SampleEISK.vsp file generated. I can successfully open this file in Visual Studio 2012. 8. When I try to open the vsp file in VS 2010, the VS 2010 IDE crashes. Kaboooom! What I would expect to happen: I expect to receive a message "VS 2010 does not support the vsp file generated by VS 2012". What actually happened: The VS 2010 IDE crashed. 2. Resolution This is a valid bug! However, there isn’t much value in releasing a hotfix for this issue. Refer below to the resolution provided by the Visual Studio Profiler Team.  Thank you for taking the time to report this issue. We completely agree that Visual Studio 2010 should not crash. However in this particular case this is not a bug we are going to retroactively release a fix to 2010 for at this point. 
Given that a fix would not unblock the scenario of opening a 2012 created file on Visual Studio 2010, and there is not an active update channel for Visual Studio 2010 other than manually locating and installing hot fixes, we will not be fixing this particular issue. Best Regards, Visual Studio Profiler Team   Though it would be great to improve the behaviour, this is not a defect that would stop you from progressing in any way. It’s important to note, however, that VSP files generated by Visual Studio 2012 are not backward compatible, so you should refrain from opening these files in Visual Studio 2010.

    Read the article

  • Context is Everything

    - by Angus Graham
    Context is Everything How many times have you asked a question only to hear an answer like “Well, it depends. What exactly are you trying to do?”.  There are times that raw information can’t tell us what we need to know without putting it in a larger context. Let's take a real world example.  If I'm a maintenance planner trying to figure out which assets should be replaced during my next maintenance window, I'm going to go to my Asset Management System.  I can get it to spit out a list of assets that have failed several times over the last year.  But what are these assets connected to?  Are there any safety consequences to shutting off this pipeline to do the work?  Is some other work that's planned going to conflict with replacing this asset?  Several of these questions can't be answered by simply spitting out a list of asset IDs.  The maintenance planner will have to reference a diagram of the plant to answer several of these questions. This is precisely the idea behind Augmented Business Visualization. An Augmented Business Visualization (ABV) solution is one where your structured data (enterprise application data) and your unstructured data (documents, contracts, floor plans, designs, etc.) come together to allow you to make better decisions.  Essentially, we're putting your business data into its context. AutoVue allows you to create ABV solutions by integrating your enterprise application with AutoVue’s hotspot framework. Hotspots can be defined for your document. Users can click these hotspots to trigger actions in your enterprise app. Similarly, the enterprise app can highlight the hotspots in your document based on its business data, creating a visual dashboard of your business data in the context of your document. ABV is not new. We introduced the hotspot framework in AutoVue 20.1 with text hotspots. Any text in a PDF or 2D CAD drawing could be turned into a hotspot. In 20.2 we have enhanced this to include 2 new types of hotspots: 3D and regional hotspots. 3D hotspots allow you to turn 3D parts into hotspots. Hotspots can be defined based on the attributes of the part, so you can create hotspots based on part numbers, material, date of delivery, etc.  Regional hotspots allow an administrator to define rectangular regions on any PDF, image, or 2D CAD drawing. This is perfect for cases where the document you’re using either doesn’t have text in it (a JPG or TIFF for example) or where you want to define hotspots that don’t correspond to the text in the document. There are lots of possible uses for AutoVue hotspots.  
    A great demonstration of how our hotspot capabilities can help add context to enterprise data in the Energy sector can be found in the following AutoVue movies:
    Maintenance Planning in the Energy Sector - Watch it Now
    Capital Construction Project Management in the Energy Sector - Watch it Now
    Commissioning and Handover Process for the Energy Sector - Watch it Now
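    To make the integration pattern concrete, here is a minimal sketch of the idea. This is not the actual AutoVue API; the class, method, and asset names below are invented purely to illustrate how an enterprise application might key hotspots to its own asset IDs and react when a user clicks one:

        import java.util.HashMap;
        import java.util.Map;

        // Illustrative only: these types stand in for whatever the real
        // hotspot framework and enterprise application expose.
        public class HotspotSketch {

            // A rectangular region on a drawing, keyed to an enterprise asset ID.
            record RegionalHotspot(String assetId, int x, int y, int width, int height) {}

            // Hypothetical enterprise-side lookup: asset ID -> number of open work orders.
            static final Map<String, Integer> OPEN_WORK_ORDERS = new HashMap<>();
            static {
                OPEN_WORK_ORDERS.put("PUMP-101", 3);
                OPEN_WORK_ORDERS.put("VALVE-207", 0);
            }

            // Fired when the user clicks a hotspot in the viewer; the enterprise
            // application supplies the business context for that asset.
            static void onHotspotClicked(RegionalHotspot hotspot) {
                int workOrders = OPEN_WORK_ORDERS.getOrDefault(hotspot.assetId(), 0);
                System.out.println(hotspot.assetId() + " has " + workOrders + " open work order(s)");
            }

            public static void main(String[] args) {
                // Define a hotspot over the part of the drawing that shows PUMP-101 ...
                RegionalHotspot pump = new RegionalHotspot("PUMP-101", 120, 340, 80, 60);
                // ... and simulate the click callback the viewer would fire.
                onHotspotClicked(pump);
            }
        }

    In a real deployment the click handler would call into the enterprise application (to open the asset record, its related work orders, and so on) rather than print to the console.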

    Read the article

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
    You won't always find the problem, and it is part of the process to occasionally admit that the problem was transient and everything works fine now.  Sometimes it is necessary to realize that a problem is outside of your current knowledge and admit temporary defeat: you can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that 10 in binary equals 2 in decimal, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke. At that point you are that much closer to getting the joke, and at least you won't feel so ignorant.
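    If binary notation is new to you, the arithmetic behind the joke is a one-liner; this tiny snippet (mine, not the author's) simply confirms that the digits "10" read as a base-2 number denote the decimal value two:

        public class BinaryJoke {
            public static void main(String[] args) {
                // "10" interpreted as a base-2 (binary) number is the decimal value 2 ...
                int decimalValue = Integer.parseInt("10", 2);
                // ... and 2 written out in binary is "10", which is the whole joke.
                String binaryDigits = Integer.toBinaryString(2);
                System.out.println(decimalValue);   // prints 2
                System.out.println(binaryDigits);   // prints 10
            }
        }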

    Read the article
