Search Results

Search found 15383 results on 616 pages for 'home automation'.


  • Red Gate does Byte Night 2012

    - by red(at)work
    On the 5th of October 2012, a team of nine plucky Red Gaters braved the howling wind and the driving rain to sleep outside. No tents or mattresses were allowed – all we took for protection were sleeping bags, groundsheets, plastic sacks and Colin’s enormous fishing umbrella (a godsend in umbrella-y disguise). Why would we do such a thing? For Byte Night, an annual tech sector sleepout in support of Action for Children, who tackle the causes as well as the consequences of youth homelessness. Byte Night encourages technology professionals to do for one night a year what thousands of young people have to do every night – sleep rough.

We signed up for Byte Night in the warm, heady midst of the British summer, thinking it couldn’t possibly be all that bad. Even on the night itself – before the rain began to fall, sat in the comfort and warmth of a company canteen, drinking wine and eating chilli and preparing to win the pub quiz – we were excited and optimistic about the night that lay ahead of us. All of that changed as soon as we stepped out into one of the worst rainstorms of the year. Brian, the team’s birthday boy, describes it best:

Picture the scene: it’s 3 am on a Friday. I’m lying outside, fully clothed in a sleeping bag, wearing a raincoat, trussed up inside a large plastic pocket, on a ground sheet beneath a giant umbrella, wedged so tightly between two of my colleagues that I can’t move my arms. I’m wide awake, staring up at the grey sky beyond the edge of the umbrella; a limp, flickering white glow hints at a moon somewhere behind the drifting clouds. I haven’t slept since we first moved outside at 11 pm. Outside. Did I mention we were outside? I’m hung over. I need the loo. But there is no way on earth that I’m getting out of this sleeping bag. It’s cold. It’s raining. Not just raining, but chucking it down. It’s been doing this non-stop since 10 pm. The rain sounds like a hyperactive drummer on the fishing umbrella, and the noise is loud and relentless. Puddles of water are forming all over the groundsheet, and, despite being ensconced inside the plastic pouch, I am wet. The fishing umbrella is protecting me from the worst of the driving rain, but not all of me is under it, and five hours of rain is no match for it. Everything is wet. My left side has become horribly damp. My trainers, which I placed next to my sleeping bag, are now completely soaked through. Mmm. That’ll be fun in the morning.

My head is next to Colin’s head on one side, and a multi-pack of McCoy’s cheddar and onion crisps on the other. Don’t ask about the tub of hummus. That’s somewhere down by my ankles, abandoned to the night. Jess, who is lying next to me, rolls over onto her side. A mini waterfall cascades from her rain-pouch onto my face. Bah. I continue to stare into the heavens, willing the dawn to hurry up. Something lands on my face. It’s a mosquito. Great. Midnight, when this still seemed like fun – when we opened some champagne and my colleagues presented me with a caterpillar birthday cake, when everyone was drunk and jolly and full of stoic resolve – feels like a long time ago. Did I mention that today is my birthday? The remains of the caterpillar cake endure the same fate as the hummus, left out in the rain like a metaphor for sadness. It’s getting colder. I can see my breath. Silence has descended on the group, apart from the rustle of plastic. And the rain, obviously. Someone snores, and I envy whoever it is the sweet escape of sleep.
I try to wriggle a bit further down inside my sleeping bag, but it doesn’t want to be wriggled into. Only 3 hours till dawn. 180 minutes. I begin to count them off, one at a time.

All nine of us got to go home in the morning, but thousands of children across the UK don’t have that luxury. If you’d like to sponsor the Red Gate Byte Night team, our JustGiving page can be found here.

Chris, before the outside bit actually happened.

More photos from Byte Night Cambridge 2012 can be found here.

    Read the article

  • User Experience Fundamentals

    - by ultan o'broin
    Understanding what user experience means in the modern work environment is central to building great-looking, usable applications on the desktop or mobile devices. What better place to start a series of blog posts on such Applications User Experience team enablement for customers and partners than by sharing what the term really means, writes team member Karen Scipi.

Applications UX has gained valuable insights into developing a user experience that reflects the experience of today’s worker. We have observed real workers performing real tasks in real work environments, and we have developed a set of new standards of application design that have been scientifically proven to be beneficial to today’s workers. We share such expertise to enable our customers and partners to benefit from our insights and to further their return on investment when building Oracle applications.

So, What is User Experience?

The user interface (UI) is about the on-screen user context provided by the layout of widgets (such as icons, fields, buttons, and more) and the visual impact of colors, typographic choices, and so on. The UI comprises the “look and feel” of the application that users interact with and reflects, in essence, the most immediate aspects of usability we can now all relate to.

User experience, on the other hand, is about understanding the whole context of the world of work: how workers go about completing tasks, crossing all sorts of boundaries along the way. It is a study of how business processes and workers’ goals coincide, how users work with technology or other tools to get their jobs done, their interactions with other users, and their response to the technical, physical, and cultural environment around them. User experience is all about how users work—their work environments, office layouts, desk tools, types of devices, their working day, and more. Even their job aids, such as sticky notes, offer insight for UX innovation.

User experience matters because businesses need to be efficient, work must be productive, and users now demand to be satisfied by the applications they work with. In simple terms, tasks finished quickly and accurately evoke organization and worker satisfaction, which in turn makes workers feel good and more than willing to use the application again tomorrow.

Design Principles for the Enterprise Worker

The consumerization of information technology has raised the bar for enterprise applications. Applications must be consistent, simple, intuitive, but above all contextual, reflecting how and when workers work, in the office or on the go. For example, the Google search experience with its type-ahead keyword-prompting feature is how workers expect to be able to discover enterprise information, too.

Type-ahead in PeopleSoft 9.1

To build software that enables workers to be productive, our design principles meet modern work requirements about consistency, with well-organized, context-driven information, geared for a working world of discovery and collaboration. Our applications must also behave in a simple, web-like way, just like the Amazon, Google, and Apple products that workers use at home or on the go. Our user experience must also reflect workers’ needs for flexibility and well-loved enterprise practices such as using popular desktop tools like Microsoft Excel or Outlook as required.

Building User Experience Productively

The building blocks of Oracle Fusion Applications are the user experience design patterns.
Based on the Oracle Fusion Middleware technology used to build Oracle Fusion Applications, the patterns are reusable solutions to common usability challenges that ADF developers typically face as they build applications, extensions, and integrations. Developers use the patterns as part of their Oracle toolkits to realize great usability consistently and in a productive way.

Our design pattern creation process is informed by user experience research and science, an understanding of our technology’s capabilities, the demands for simplification and intuitiveness from users, and the best of Oracle’s acquisition strategy (an injection of smart people and smart innovation). The patterns are supported by usage guidelines, tested in our labs, and assembled into a library of proven resources we used to build our own Oracle Fusion Applications and other Oracle applications user experiences.

The design patterns library is now available to the ADF community and to our partners and customers, for free. Developers with ADF skills and other technology skills can now offer more than just coding and functionality and still use the best in enterprise methodologies to ensure that a great user experience is easily applied, scaled, and maintained, whether it be for SaaS or on-premise deployments of Oracle Fusion Applications, for applications coexistence, or for partner integration scenarios.

Oracle partners and customers already using our design patterns to build solutions and win business in smart and productive ways are now sharing their experiences and insights on pattern use to benefit your entire business. Applications UX is going global with the message and the means. Our hands-on user experience enablement through ADF is expanding. So, stay tuned to Misha Vaughan's Voice of User Experience (VOX) blog and follow along on Twitter at @usableapps for news of outreach events and other learning opportunities.

Interested in Learning More?

Oracle Fusion Applications User Experience Patterns and Guidelines Library
Shout-outs for Oracle UX Design Patterns
Oracle Fusion Applications User Experience Design Patterns: Productivity Realized

    Read the article

  • Check Your Spelling, Grammar, and Style in Firefox and Chrome

    - by Matthew Guay
    Are you tired of making simple writing mistakes that get past your browser’s spell-check? Here’s how you can get advanced grammar checking and more in Firefox and Chrome with After the Deadline.

Microsoft Word has spoiled us with grammar, syntax, and spell checking, but the default spell check in Firefox and Chrome still only does basic checks. Even webapps like Google Docs don’t check more than basic spelling errors. However, WordPress.com is an exception; it offers advanced spelling, grammar, and syntax checking with its After the Deadline proofing system. This helps you keep from making embarrassing mistakes on your blog posts, and now, thanks to a couple of free browser plugins, it can help you keep from making these mistakes on any website or webapp.

After the Deadline in Google Chrome

Add the After the Deadline extension (link below) to Chrome as usual. As soon as it’s installed, you’re ready to start improving your online writing. To check spelling, grammar, and more, click the ABC button that you’ll now see at the bottom of most text boxes online.

After a quick scan, grammar mistakes are highlighted in green, complex expressions and other syntax problems are highlighted in blue, and spelling mistakes are highlighted in red, as you would expect. Click on an underlined word to choose one of its recommended changes or ignore the suggestion. Or, if you want more explanation about what was wrong with that word or phrase, click Explain for more info.

And, if you forget to run an After the Deadline scan before submitting a text entry, it will automatically check to make sure you still want to submit it. Click Cancel to go back and check your writing first.

To change the After the Deadline settings, click its icon in the toolbar and select View Options. Additionally, if you want to disable it on the site you’re on, you can click Disable on this site directly from the popup. From the settings page, you can choose extra things to check for, such as double negatives and redundant phrases, as well as add sites and words to ignore.

After the Deadline in Firefox

Add the After the Deadline add-on to Firefox (link below) as normal. After the Deadline works basically the same in Firefox as it does in Chrome. Select the ABC icon in the lower right corner of textboxes to check them for problems, and After the Deadline will underline the problems as it did in Chrome. To view a suggested change in Firefox, right-click on the underlined word and select the recommended change or ignore the suggestion. And, if you forget to check, you’ll see a friendly reminder asking if you’re sure you want to submit your text like it is.

You can access the After the Deadline settings in Firefox from the menu bar. Click Tools, then select AtD Preferences. In Firefox, the settings are in an options dialog with three tabs, but it includes the same options as the Chrome settings page. Here you can make After the Deadline as correction-happy as you like.

Conclusion

The web has increasingly become an interactive place, and seldom does a day go by that we aren’t entering text in forms and comments that may stay online forever. Even our insignificant tweets are being archived in the Library of Congress. After the Deadline can help you make sure that your permanent internet record is as grammatically correct as possible. Even though it doesn’t catch every problem, and even misses some spelling mistakes, it’s still a great help.
Links

Download the After the Deadline extension for Google Chrome
Download the After the Deadline add-on for Firefox

    Read the article

  • Oracle ATG Web Commerce 10 Implementation Developer Boot Camp - Reading (UK) - October 1-12, 2012

    - by Richard Lefebvre
    REGISTER NOW: Oracle ATG Web Commerce 10 Implementation Developer Boot Camp, Reading, UK, October 1-12, 2012! OPN invites you to join us for a 10-day implementation boot camp on Oracle ATG Web Commerce in Reading, UK from October 1-12, 2012. This 10-day boot camp is designed to provide partners with hands-on experience and technical training to successfully build and deploy Oracle ATG Web Commerce 10 applications. This particular boot camp is focused on helping partners develop the essential skills needed to implement every aspect of an ATG Commerce application from scratch (not CRS-based), with the specific goal of giving experienced Java/J2EE developers a path toward becoming functional, effective, and contributing members of an ATG implementation team. Built for new and experienced ATG developers alike, the collaborative nature of this program and its exercises has proven to be highly effective and extremely valuable in learning the best practices for implementing ATG solutions. Though not required, this boot camp provides a structured path to earning a Certified Oracle ATG Web Commerce 10 Specialization!

What Is Covered:

This boot camp is for Application Developers and Software Architects wanting to gain valuable insight into ATG application development best practices, as well as relevant and applicable implementation experience on projects modeled after four of the most common types of applications built on the ATG platform. The following learning objectives are all critical, and are of equal priority in enabling this role to succeed. This boot camp will help with:

Building a basic, functional, transaction-ready ATG Web Commerce 10 application.
Utilizing ATG’s platform features such as scenarios, slots, targeters, user profiles and segments to create a personalized user experience.
Building Nucleus components to support and/or extend application functionality.
Understanding the intricacies of ATG order checkout and fulfillment.
Specifying, designing and implementing new commerce features in ATG 10.
Building a functional commerce application modeled after four of the most common types of applications built on the ATG platform, within an agile-based project team environment and under simulated real-world project conditions.

Duration: The Oracle ATG Web Commerce 10 Implementation Developer Boot Camp is an instructor-led workshop spanning 10 days.

Audience:
Application Developers
Software Architects

Prerequisite Training and Environment Requirements:
Programming and markup experience with Java, J2EE, JavaScript, XML, HTML and CSS
Completion of the Oracle ATG Web Commerce 10 Implementation Specialist Development Guided Learning Path modules
Participants will be required to bring their own laptop that meets the minimum specifications:
64-bit PC and OS (e.g. Windows 7 64-bit)
4GB RAM or more
40GB hard disk space
Laptops will require access to the Internet through Remote Desktop via Windows.

Agenda Topics:

Week 1 – Days 1 through 5: Build a Basic Commerce Application

In week one of the boot camp training, we will apply knowledge learned from the ATG Web Commerce 10 Implementation Developer Guided Learning Path modules toward building a basic, transaction-ready commerce application. There will be few to no lectures delivered in this boot camp, as developers will be fully engaged in ATG application development activities and best practices.
Developers will work independently on the following lab assignments from days 1 through 5:

Lab Assignments
1. Environment Setup
2. Build a dynamic Home Page
3. Site Authentication
4. Build Customer Registration
5. Display Top Level Categories
6. Display Product Sub-Categories
7. Display Product List Page
8. Display Product Detail Page
9. ATG Inventory
10. Build “Add to Cart” Functionality
11. Build Shopping Cart
12. Build Checkout Page
13. Build Checkout Review Page
14. Create an Order and Build Order Confirmation Page
15. Implement Slots and Targeters for Personalization
16. Implement Pricing and Promotions
17. Order Fulfillment

Week 2 – Days 6 through 10: Team-based Case Project

In the second week of the boot camp training, participants will be asked to join a project team that will select a case project for the team to implement. Teams will be able to choose from four of the most common application types developed and deployed on the ATG platform:

Hard goods with physical fulfillment
Soft goods with electronic fulfillment
A service or subscription case example
A course/event registration case example

Team projects will have approximately 160 hours of use cases/stories for each team to build (40 hours per developer). Each day's use cases/stories will build upon the prior day's work, and therefore must be fully completed at the end of each day. Please note that this boot camp intends to simulate real-world project conditions, and as such will likely require project teams to work beyond normal business hours. To promote further collaboration and group learning, each team will be asked to present their work and share the methodologies and solutions that they've applied to their cases at the end of each day.

Location:
Oracle Reading CVC, TPC510, Room: Wraysbury, Reading, UK
9:00 AM – 5:00 PM

Registration Fee (10 Days): US $3,375

Please click on the following link to REGISTER, or visit the Oracle ATG Web Commerce 10 Implementation Developer Boot Camp page for more information.

Questions:
Patrick Ty, Partner Enablement, Oracle Commerce
Phone: 310.343.7687 Mobile: 310.633.1013
Email: [email protected]

    Read the article

  • How to Share Files Between User Accounts on Windows, Linux, or OS X

    - by Chris Hoffman
    Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts. This process works similarly on Windows, Linux, and Mac OS X. These are all powerful multi-user operating systems with similar folder and file permission systems.

Windows

On Windows, the “Public” user’s folders are accessible to all users. You’ll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it’s a good way to share music, videos, and other types of files between users on the same computer.

Windows even adds these folders to each user’s libraries by default. For example, a user’s Music library contains the user’s music folder under C:\Users\NAME\ as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public — just drag and drop a file from the user-specific folder to the public folder in the library. Libraries are hidden by default on Windows 8.1, so you’ll have to unhide them to do this.

These Public folders can also be used to share folders publicly on the local network. You’ll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Control Panel.

You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder’s permissions and make it accessible to different user accounts. You’ll need administrator access to do this.

Linux

This is a bit more complicated on Linux, as typical Linux distributions don’t come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network.

You can use Linux’s permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager. It should be similar for other desktop environments, too.

Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give “Others” the “Create and delete files” permission. Click the Change Permissions for Enclosed Files button and give “Others” the “Read and write” and “Create and delete files” permissions.

Other users on the same computer will then have read and write access to your folder. They’ll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it.

Mac OS X

Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It’s located at /Users/Shared. To access it, open the Finder and click Go > Computer. Navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac.

These tricks are useful if you’re sharing a computer with other people and you all have your own user accounts — maybe your kids have their own limited accounts.
You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.
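For anyone who prefers a terminal, here is a minimal sketch of the same Linux steps done with chmod instead of the Nautilus dialog. The folder name is a made-up example, and the closing note about the home directory is an assumption about typical default permissions:

# Make a folder under your home directory shareable with other local users.
# ~/Shared-Stuff is a hypothetical example name.
mkdir -p ~/Shared-Stuff

# "Others" get read/write/traverse on the folder itself
# (the "Create and delete files" permission in the Nautilus dialog):
chmod o+rwx ~/Shared-Stuff

# Apply read/write to everything already inside it
# ("Change Permissions for Enclosed Files" in Nautilus):
chmod -R o+rw ~/Shared-Stuff
# Directories also need the execute bit so they can be entered:
find ~/Shared-Stuff -type d -exec chmod o+x {} +

# Assumption: other users may also need traverse access to your home
# directory itself (chmod o+x ~) before they can reach the shared folder.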

    Read the article

  • Center Pictures and Other Objects in Office 2007 & 2010

    - by Matthew Guay
    Sometimes it can be difficult to center a picture in a document just by dragging it around. Today we show you how to center pictures, images, and other objects perfectly in Word and PowerPoint. Note: For this tutorial we’re using Office 2010, but the steps are nearly identical in 2007.

Centering a Picture in Word

First let’s insert a picture into our document. Click the Insert tab, and then click Picture. Once you select the picture you want, it will be added to your document. Usually, pictures are added wherever your cursor was in the document, so in a blank document it will be added at the top left. Also notice Picture Tools show up in the Ribbon after inserting an image.

Note: The following menu items are available in the Picture Tools Format tab, which is displayed when you select the object or image you’re working with.

How do we align the picture just like we want? Click Position to get some quick placement options, including centered in the middle of the document or on the top. However, for more advanced placement, we can use the Align tool. If Word isn’t maximized, you may only see the icon without the “Align” label.

Notice the tools were grayed out in the menu by default. To be able to change the alignment, we need to first change the text wrap settings. Click the Wrap Text button, and choose any option other than “In Line with Text”. Your choice will depend on the document you’re writing; just choose the option that works best in the document.

Now, select the Align tools again. You can now position your image precisely with these options. Align Center will position your picture in the center of the page widthwise. Align Middle will put the picture in the middle of the page height-wise.

This works the same with textboxes. Simply click the Align button in the Format tab, and you can center it in the page. And if you’d like to align several objects together, simply select them all, click Group, and then select Group from the menu. Now, in the Align tools, you can center the whole group on your page for a heading, or whatever you want to use the pictures for. These steps also work the same in Office 2007.

Center Objects in PowerPoint

This works similarly in PowerPoint, except that pictures are set for square wrapping automatically, so you don’t have to change anything. Simply insert the picture or other object of your choice, click Align, and choose the option you want. Additionally, if one object is already aligned like you want, drag another object near it and you will see a Smart Guide to help you align or center the second object with the first. This only works with shapes in the PowerPoint 2010 beta, but will work with pictures, textboxes, and media in the final release this summer.

Conclusion

These are good methods for centering images and objects in Word and PowerPoint. From designing perfect headers to emphasizing your message in a PowerPoint presentation, this is something we’ve found useful and hope you will too. Since we’re talking about Office here, it’s worth mentioning that Microsoft has announced the Technology Guarantee Program for Office 2010. Essentially what this means is, if you purchase a version of Office 2007 between March 5th and September 30th of this year, when Office 2010 is released you’ll be able to upgrade to it for free!

    Read the article

  • SQLAuthority News – #TechEdIn – TechEd India 2012 Memories and Photos

    - by pinaldave
    TechEd India 2012 was held in Bangalore from March 21 to 23, 2012. Just like every year, this event was bigger, grander and inspiring.

Pinal Dave at TechEd India 2012

Family Event

Every single year, TechEd is a special affair for my entire family. Four months before the start of TechEd, I usually start to build a mental image of the event. I start to think about various things. For the most part, what excites me most is presenting a session and meeting friends. Seriously, I start thinking about presenting my session 4 months before the event! I work on my presentation day and night. I want to make sure that what I present is accurate and that I have experienced it firsthand. My wife and my daughter also contribute to my efforts. For us, TechEd is a family event, and the two of them feel equally responsible as well. They give up their family time so I can bring out the best content for the community.

Pinal, Shaivi and Nupur at TechEd India 2012

Guinea Pigs (My Experiment Victims)

I do not rehearse my session, ever. However, I test my demo almost every single day until the moment I have to present it. I sometimes go over the demo more than 2-3 times a day even though the event is more than a month away. I have two “guinea pigs”: 1) Nupur Dave and 2) Vinod Kumar. When I am at home, I present my demos to my wife Nupur. At times I feel that people often back up their demo, but in my case, I have backup demo presenters. In the office during lunch time, I present the demos to Vinod. I am sure he could walk through my demos with his eyes closed.

Pinal and Vinod at TechEd India 2012

My Sessions

I’ve been determined to present my sessions in a real and practical manner. I prefer to present a subject that I myself would be eager to attend and sit through if I were in the audience. Keeping that principle in mind, I created two sessions this year.

SQL Server Misconceptions and Resolutions

Pinal and Vinod at TechEd India 2012

We believe all kinds of stuff – that the earth is flat, or that the forbidden fruit was an apple, or that the big bang theory explains the origin of the universe, and so many other things. Just like these, we have plenty of misconceptions in SQL Server as well. I have had the dream of co-presenting a session with Vinod Kumar for the past 3 years. I have been asking him every year if we could present a session together, but we never got it to work out, until this year. Fortunately, we got a chance to stand on the same stage and present a single subject. I believe that Vinod Kumar and I have an excellent synergy when we are working together. We know each other’s strengths and weaknesses. We know when the other person will speak and when he will keep quiet. The reason behind this synergy is that we have worked on 2 video learning courses (SQL Server Indexes and SQL Server Questions and Answers) and authored 1 book (SQL Server Questions and Answers) together.

Crowd Outside Session Hall

This session was inspired by the “Laurel and Hardy” show, so we performed a role-play of those famous characters. We had an excellent time on the stage and, for sure, the audience had a wonderful time, too. We had an extremely large audience for this session and had a great time interacting with them.

Speed Up! – Parallel Processes and Unparalleled Performance

Pinal Dave at TechEd India 2012

I wanted to approach this session at level 400 and I was very determined to do so.
The biggest challenge I had was that this was a 60-minute session and the audience profile was very generic. I had to present at level 100 as well as 400. I worked hard to tune up these demos. I wanted to make sure that my messages would land perfectly in the minds of the attendees, and that when they walked out of the session, they could use the knowledge I shared on their servers. After the session, I felt extreme satisfaction as I received lots of positive feedback at the event. At one point, so many people rushed towards me that I was a bit scared that the stage might break and someone would get injured. Fortunately, nothing like that happened and I was able to shake hands with everybody.

Pinal Dave at TechEd India 2012

Crowd rushing to Pinal at TechEd India 2012

Networking

This is one of the primary reasons many of us visit the annual TechEd event. I had a fantastic time meeting SQL Server enthusiasts. It was a terrific time meeting old friends, user group members, MVPs and SQL enthusiasts. I have taken many photographs with lots of people, but I have received very few back. If you are reading this blog and have a photo of us at the event, would you please send it to me so I can keep it in my memory lane?

SQL Track Speaker: Jacob and Pinal at TechEd India 2012

SQL Community: Pinal, Tejas, Nakul, Jacob, Balmukund, Manas, Sudeepta, Sahal at TechEd India 2012

Star Speakers: Amit and Balmukund at TechEd India 2012

TechEd Rockstars: Nakul, Tejas and Pinal at TechEd India 2012

I guess TechEd is a mix of family affair and culture for me! Hamara TechEd (Our TechEd) Please tell me which photo you like the most!

Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Superpower Your Touchpad Computer with Scrybe

    - by Matthew Guay
    Are you looking for a way to help your touchpad computer make you more productive? Here’s a quick look at Scrybe, a new application from Synaptics that lets you superpower it.

Touchpad devices have become increasingly more interesting as they’ve included support for multi-touch gestures. Scrybe takes it to the next level and lets you use your touchpad as an application launcher. You can launch any application or website, or complete many common commands on your computer, with a simple gesture. Scrybe works with most modern Synaptics touchpads, which are standard on most laptops and netbooks. It is optimized for newer multi-touch touchpads, but can also work with standard single-touch touchpads. It works on Windows 7, Vista, and XP, so chances are it will work with your laptop or netbook.

Get Started With Scrybe

Head over to the Scrybe website and download the latest version (link below). You are asked to enter your email address, name, and information about your computer… but you actually only have to enter your email address. Click Download when finished. Run the installer when it’s downloaded. It will automatically download the latest Synaptics driver for your touchpad and any other components needed for Scrybe. Note that the Scrybe installer will ask to install the Yahoo! toolbar, so uncheck this to avoid adding this worthless browser toolbar.

Using Scrybe

To open an application or website with a gesture, press 3 fingers on your touchpad at once, or, if your touchpad doesn’t support multi-touch gestures, press Ctrl+Alt and press 1 finger on your touchpad. This will open the Scrybe input pane; start drawing a gesture, and you’ll see it on the grey square. The input pane shows some default gestures you can try.

Here we drew an “M”, which opens our default music player. As soon as you finish the gesture and lift up your finger, Scrybe will open the application or website you selected. A notification balloon will let you know which gesture was performed.

When you’re entering your gesture, the input pane will show white “ink”. The “ink” will turn blue if the command is recognized, but will turn red if it isn’t. If Scrybe doesn’t recognize your command, press 3 fingers and try again.

Scrybe Control Panel

You can open the Scrybe Control Panel to enter or change commands by entering a box-like gesture, or by right-clicking the Scrybe icon in your system tray and selecting “Scrybe Control Panel”. Scrybe has many pre-configured gestures that you can preview and even practice.

All of the gestures in the Popular tab are preset and cannot be changed. However, the ones in the Favorites tab can be edited. Select the gesture you wish to edit, and click the gear icon to change it. Here we changed the email gesture to open Hotmail instead of the default Yahoo Mail.

Scrybe can also help you perform many common Windows commands such as Copy and Undo. Select the Tools tab to see all of these commands.

Scrybe has many settings you may wish to change. Select the Preferences button in the Control Panel to change them. Here are some of the settings we changed:

Uncheck “Display a message” to turn off the tooltip notifications when you enter a gesture.
Uncheck “Show symbol hints” to turn off the sidebar on the input pane.
Select the search engine you want to open with the Search gesture. The default is Yahoo, but you can choose your favorite.

Adding a New Scrybe Gesture

The default Scrybe options are useful, but the best part is that you can assign gestures to your own programs or websites.
Open the Scrybe Control Panel, and click the plus sign in the bottom left corner. Enter a name for your gesture, and then choose whether it is for a website or an application. If you want the gesture to open a website, enter the address in the box. Alternately, if you want your gesture to open an application, select Launch Application and then either enter the path to the application, or click the button beside the Launch field and browse to it. Now click the down arrow on the blue box and choose one of the gestures for your application or website.

Your new gesture will show up under the Favorites tab in the Scrybe Control Panel, and you can use it whenever you want from Scrybe, or practice the gesture by selecting the Practice button.

Conclusion

If you enjoy multi-touch gestures, you may find Scrybe very useful on your laptop or netbook. Scrybe recognizes gestures fairly easily, even if you don’t enter them perfectly. Just like pinch-to-zoom and two-finger scroll, Scrybe can quickly become something you miss on other laptops.

Download Scrybe (registration required)

    Read the article

  • Screen (command-line program) bug?

    - by VioletCrime
    I fired up my Minecraft server again after about a year off. My server used to run 11.04, which has since been upgraded to 12.04; I lost my management scripts in the upgrade (I thought I'd backed up the user's home directory), but whatever, I enjoy developing stuff like that anyway. However, this time around, I'm running into issues. I start the Minecraft server using a detached screen, but the script is unable to 'stuff' commands into the screen instance until I attach to the screen and then detach again. Once I do that, I can stuff anything I want into the server's terminal using the -X option, until I stop/start the server again; then I have to reattach and detach in order to restore functionality. Here's the manager script:

#!/bin/bash

#Name of the screen housing the server (set with the -S flag at startup)
SCREENNAME=minecraft1
#Name of the folder housing the Minecraft world
FOLDERNAME=world1

startServer (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Cannot start Minecraft server; it is already running!"
    else
        screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar
        sleep 2
        if screen -list | grep "$SCREENNAME" > /dev/null
        then
            echo "Minecraft server started; happy mining!"
        else
            echo "ERROR: Minecraft server failed to start!"
        fi
    fi; }

stopServer (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    else
        echo "Server is not running; nothing to stop."
    fi; }

stopServerNow (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 5-second warning."
        screen -S $SCREENNAME -X stuff "/say EMERGENCY SHUTDOWN! 5 seconds to halt."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 5
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    else
        echo "Server is not running; nothing to stop."
    fi; }

restartServer (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 2
        startServer
    else
        echo "Cannot restart server: it isn't running."
    fi; }

#In order for this function to work, a directory 'backup/$FOLDERNAME' must exist in the same
#directory that '$FOLDERNAME' resides
backupWorld (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (backup) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        screen -S $SCREENNAME -X stuff "/say Server should be down for no more than a few seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down for backup in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    fi
    sleep 2
    if screen -list | grep $SCREENNAME > /dev/null
    then
        echo "Server is still running? Error."
    else
        cd ..
        tar -czvf backup0.tar.gz $FOLDERNAME
        mv backup0.tar.gz backup/$FOLDERNAME
        cd backup/$FOLDERNAME
        rm backup10.tar.gz
        mv backup9.tar.gz backup10.tar.gz
        mv backup8.tar.gz backup9.tar.gz
        mv backup7.tar.gz backup8.tar.gz
        mv backup6.tar.gz backup7.tar.gz
        mv backup5.tar.gz backup6.tar.gz
        mv backup4.tar.gz backup5.tar.gz
        mv backup3.tar.gz backup4.tar.gz
        mv backup2.tar.gz backup3.tar.gz
        mv backup1.tar.gz backup2.tar.gz
        mv backup0.tar.gz backup1.tar.gz
        cd ../../$FOLDERNAME
        screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar
        sleep 2
        if screen -list | grep "$SCREENNAME" > /dev/null
        then
            echo "Minecraft server restarted; happy mining!"
        else
            echo "ERROR: Minecraft server failed to start!"
        fi
    fi; }

printCommands (){
    echo
    echo "$0 usage:"
    echo
    echo "Start   : Starts the server on a detached screen."
    echo "Stop    : Stop the server; includes a 1-minute warning."
    echo "StopNOW : Stops the server with only a 5-second warning."
    echo "Restart : Stops the server and starts the server again."
    echo "Backup  : Stops the server (1 min), backs up the world, and restarts."
    echo "Help    : Display this message."
}

#Forces case-insensitive string comparisons
shopt -s nocasematch

#Primary 'Switch'
if [[ $1 = "start" ]]
then startServer
elif [[ $1 = "stop" ]]
then stopServer
elif [[ $1 = "stopnow" ]]
then stopServerNow
elif [[ $1 = "backup" ]]
then backupWorld
elif [[ $1 = "restart" ]]
then restartServer
else printCommands
fi
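(Editor's note, not part of the original question: a commonly cited explanation for this symptom is that a session created with screen -dmS has never had a window selected for it, so -X subcommands can lack a target until a first attach sets a current window. Explicitly naming window 0 with -p is often suggested as a workaround; a hedged, untested sketch using the session name from the script above:

# Hypothetical fix: target window 0 explicitly with -p so "stuff"
# has a window to type into even before the session is ever attached.
screen -S minecraft1 -p 0 -X stuff "/say Hello without attaching first"
screen -S minecraft1 -p 0 -X stuff $'\015'

If that works, the script's screen -S ... -X stuff lines could all gain a -p 0.)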

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to log on with. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

Windows wants you to use Windows Live

Windows 8 logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate, I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my Live logins to work from the Live account, so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

Who am I?

The real problem to me though is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this:

Notice it's showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl) it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account and have to log in to Windows with my Live account, what do you suppose this returns?

Console.WriteLine(Environment.UserName);

It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer, in my case) I see:

Everything on the desktop shell runs with my login under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup!

So this is who you really are!

The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username.
That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS, if I look at a Trace page (or in my case my app's Status page), I see that the logged-on account is - my Windows user account.

What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my Web server online, I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (I'm not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info). When logging on to a local machine resource with user name and password, you have to use your Live ID even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Windows
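As a footnote to the post above, here is a minimal sketch for checking the account mapping on your own machine. This is an editor's illustration, not code from the original post; it assumes a plain .NET console app:

using System;
using System.Security.Principal;

class WhoAmI
{
    static void Main()
    {
        // When logged on with a Live account, both of these still report
        // the underlying Windows account (e.g. "rstrahl"), not the Live ID:
        Console.WriteLine(Environment.UserName);
        Console.WriteLine(WindowsIdentity.GetCurrent().Name); // DOMAIN\user form
    }
}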

    Read the article

  • Execute TSQL statement with ExecuteStoreQuery in entity framework 4.0

    - by Jalpesh P. Vadgama
    I was playing with entity framework in recent days, and I was looking into how we can execute a T-SQL statement with entity framework. I have found one great way to do that: the entity framework 'ExecuteStoreQuery' method. It executes a T-SQL statement against the data source for a given entity framework context and returns a strongly typed result. You can find more information about ExecuteStoreQuery at the following link: http://msdn.microsoft.com/en-us/library/dd487208.aspx

So let's examine how it works. First, let's create a table against which we are going to execute the T-SQL statement. I have added a SQL Express database, and then a table called Client. The Client table is very simple: there are only two fields, ClientId and ClientName, where ClientId is the primary key and ClientName is the field where we are going to store client names. I have also added some test data to the table.

Now it's time to add the entity framework model classes. Right-click the project, choose Add New Item, and select ADO.NET Entity Data Model. After clicking the Add button, a wizard starts; it asks whether we want to create the model classes from a database, and since we already have our Client table ready, I selected generate from database. Further into the wizard we are presented with a screen where we can select our table. Once you click Finish, it creates the model classes for us.

Now we need a gridview control to display that data, so in the Default.aspx page I have added a grid control like the following:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="EntityFramework._Default" %>
<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <h2>Welcome to ASP.NET!</h2>
    <p>To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>.</p>
    <p>
        You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>.
        <asp:GridView ID="grdClient" runat="server">
        </asp:GridView>
    </p>
</asp:Content>

Once we are done adding the GridView, it's time to write the server-side code. I have written the following in the Page_Load event of the Default.aspx page:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        using (var context = new EntityFramework.TestEntities())
        {
            ObjectResult<Client> result = context.ExecuteStoreQuery<Client>("Select * from Client");
            grdClient.DataSource = result;
            grdClient.DataBind();
        }
    }
}

In the above code you can see that I create an object of our entity model, and then with the help of the ExecuteStoreQuery method I execute a simple SELECT T-SQL statement, which returns an ObjectResult. I bind that result to the gridview to display the data. So now we are done with coding; let's run the application in the browser. The output is as expected.

That's it. Hope you like it. Stay tuned for more. Till then, happy programming.
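One more note, not covered in the post above: ExecuteStoreQuery also accepts positional parameters, which is safer than concatenating user input into the T-SQL string. A small sketch against the same Client model from the walkthrough:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        using (var context = new EntityFramework.TestEntities())
        {
            // {0} is turned into a DbParameter by entity framework,
            // so the value is never spliced into the SQL text itself:
            ObjectResult<Client> result = context.ExecuteStoreQuery<Client>(
                "SELECT ClientId, ClientName FROM Client WHERE ClientId = {0}", 1);
            grdClient.DataSource = result;
            grdClient.DataBind();
        }
    }
}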

    Read the article

  • How to UEFI install Ubuntu 12.10?

    - by Geezanansa
    I'm running a newer FM1 motherboard with an AMD 3870K APU and a new 1TB HDD. Following the advice in the motherboard manual and https://help.ubuntu.com/community/UEFI, I have now got to the grub option screen for a UEFI install; see http://imgur.com/VW5vz. The dvd .iso being used is Ubuntu 12.10 desktop amd64 from ubuntu.com. The hdd has had a gpt partition table made for it using gparted, in a live desktop session booted in BIOS mode.

(Edit/update: although the old CD updates on running, it has an old kernel, and while it did make a gpt, that version of gparted uses fdisk, whereas gdisk is required to make a gpt. I think I'm going to have to spend more time here: http://www.dedoimedo.com/computers/gparted.html lol. I'm now using gparted from the 12.10 live session to make partitions, following the guidance at https://help.ubuntu.com/community/UEFI#Creating_an_EFI_partition, but I can only boot to the grub option screen http://imgur.com/VW5vz; when the 12.10 options to "try ubuntu" or "install ubuntu" are selected, they give the errors described below.)

After making the gpt I decided to leave it as unformatted/unallocated space, with the intention of using the installer to set up partitions. (Update: that was the original plan, but gparted now sees the hdd as http://imgur.com/hFIvm, as described above.)

Booting the live DVD in EFI mode gives "Secure Boot not installed" just before the grub kernel option list, with the option to "install ubuntu"; but I get "can not read cd/0" and "the kernel must be loaded first" errors when that option is selected. Any pointers on how to get the installer going for a UEFI install would be good. Thanks in advance.

Update: hopefully these screenshots can better highlight where I am going wrong, or whether there is something else going on: http://imgur.com/g30RB, http://imgur.com/VW5vz, http://imgur.com/31E0q, http://imgur.com/bnuaG, http://imgur.com/y4KGu, http://imgur.com/3u2QE, http://imgur.com/n9lN3, http://imgur.com/FEKvz, http://imgur.com/hFIvm

Update: thank you fernando garcia for pointing me in the right direction to start the process of elimination. What I have done since asking the question is a little homework, starting here http://askubuntu.com/faq#bounty and here http://askubuntu.com/questions/how-to-ask. Looking at other similar questions was good fun, and I found the "12.10 UEFI Secure Boot install" question the most helpful for getting ubuntu to UEFI install on my system. In response to wolverine's question, this article was referred to: http://web.dodds.net/~vorlon/wiki/blog/SecureBoot_in_Ubuntu_12.10/ That article's first sentence links to http://www.ubuntu.com/download, which is where I downloaded the 12.10 desktop amd64 .iso (and others), but I have been unable to do an EFI install of ubuntu on this system. As this is a new system, I ended up just letting the BIOS installer run (12.04 desktop amd64, after trying 12.10 desktop amd64 in EFI mode), which at least puts my mind at ease that I have not bricked my new mobo (I had to do a CLR_CMOS and flash to the latest BIOS version).

So it could possibly be the BIOS settings or the BIOS version that is the problem. To try to eliminate the BIOS version, I need the POST screen to identify the version in use, but I cannot get to it: pressing Tab to show POST instead of the logo, and trying Pause/Break to catch the POST, is proving difficult. If the logo screen in the BIOS is disabled, I just get a black screen with no POST shown, and pressing Tab does not show the POST.
Appreciate using appropriate bios settings and latest 12.10 release should simply get uefi installer running when selected from the grub list (nice graphic details in Identifying if computer boots the cd in efi mode section at https://help.ubuntu.com/community/UEFI#Identifying_if_the_computer_boots_the_CD_in_EFI_mode) And to confirm the hdd is booting in efi mode https://help.ubuntu.com/community/UEFI#Identifying_if_the_computer_boots_the_HDD_in_EFI_mode running the command [ -d /sys/firmware/efi ] && echo "EFI boot on HDD" || echo "Legacy boot on HDD" gave Legacy boot on HDD This is as expected because i allowed the bios installer (which was 12.04 desktop amd64 after trying 12.10 desktop amd64 in efi mode) to run to get a working installation. Which is not what was intended or wished for but wanted to get a working os to bench test new mobo i.e. prove it is working. There are other options as in installing other bootmanagers/loaders but do not wish to do so as shim should get grub2 going that is after secure boot has been signed.(Now got rough idea what should happen just it aint happening. Is it possible ahci drivers are required?) Will post boot info script url of the updated config/setup. The original question asked seems irrelevant to what is being said in this update but as the problem is not resolved will keep on trying efi installing! i.e the problem is same as when question asked just trying to update. Have tried to edit and update the best i can!
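For reference, a minimal sketch of creating a fresh GPT with gdisk from a live session, as mentioned above (my illustration, not from the original question; the device name /dev/sda is an assumption):
sudo gdisk /dev/sda
# o  -> create a new empty GPT partition table
# n  -> create a new partition; for the EFI System Partition use type code EF00
# w  -> write the table to disk and exit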

    Read the article

  • SQLCMD Mode: give it one more chance

    - by Maria Zakourdaev
- Click on me. Choose me. - asked one forgotten feature when some bored DBA was purposelessly wandering through the Management Studio menu at the end of her long and busy working day. - Why would I use you? I have heard of no one who does. What are you for? - perplexedly wondered the aged and wise DBA. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't. - I know you. You are quite lazy. Why would you do additional clicks to move from window to window? From tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window! - I have all the tools that I'm used to: I have Management Studio, Cmd, PowerShell. They can do anything for me. I don't need additional tools. - I promise you, you will like me. - the thing continued to whine. - All right, show me. - she gave up. It's always this way, she thought sadly, - easier to agree than to explain why you don't want to. - Enable me and then think about anything that you always couldn't do through Management Studio and had to use other tools for. - Ok. Google for me the list of greatest features of SQL Server 2012. - Well... I'm not sure... Think about something else. - Ok, here is something easy for you. I want to check if a file folder exists or if a file is there. Though, I can easily do this using xp_cmdshell… - This is easy for me. - rejoiced the feature. By the way, having the items of the menu talking to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn't thinking about the weirdness of the situation at that moment. - After enabling me, - said the unfairly forgotten feature (it was thinking of itself in such a manner) - after enabling me you can use all command line commands in the same Management Studio query window by adding two exclamation marks !! at the beginning of the script line to denote that you want to use a cmd command: - Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer and not on the SQL Server that the query window is connected to. This is the main difference from using xp_cmdshell, which executes commands on the SQL Server itself. Bottom line, use a UNC path instead of a local path. - Look, there is much more than that. - The SQLCMD feature was getting excited. - You can get the IP of your servers, create, rename and drop folders. You can see the contents of any file anywhere and even start different tools from the same query window: The not-so-aged-and-wise DBA was getting interested: - I also want to run different scripts on different servers without changing the connection of the query window. - Sure, sure! Another great feature that SQLCMD mode provides, giving more power to querying. Use ":" for the additional features, like :connect, which allows you to change the connection: - Now imagine, you have one script where you have all your changes, like creating a staging table on the DWH staging server, adding a fact table to the DWH itself and updating stored procedures on the server where the reporting database is located. - Now, give me more challenges! - Script out a list of stored procedures into text files. - You can do it easily by using the command :out, which will write the query results into the specified text file. The output can be the code of the stored procedure or any data. Actually this is the same as changing the query output to go to a file instead of the grid.
- Now, take all of those scripts and run them, one by one, on a different server. - Easily. - Come on... I'm sure that you can not... - Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define an environment variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it's not working, although I have no idea why. - Wow. - She was really impressed. - Ok, I'll go try all those… - Wait, wait! I know how to google the SQL Server features for you! This example will open the Chrome browser with search results for "SQL server 2012 top features" (change the path to suit your PC): "Well, this can probably be useful stuff; maybe this feature really is unfairly forgotten", thought the DBA while walking through the dark empty parking lot to her lonely car. "As someone really wise once said: "It is what we think we know that keeps us from learning. Learn, unlearn and relearn"."
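A minimal sketch of the SQLCMD-mode commands described above (server names and file paths are placeholders of my own; enable the mode via Query > SQLCMD Mode in Management Studio):
-- "!!" runs an OS command on YOUR machine, not on the server:
!!dir \\fileserver\backups

-- ":setvar" defines a variable; ":connect" switches servers mid-script:
:setvar StagingServer "DWHSTAGE01"
:connect $(StagingServer)
SELECT @@SERVERNAME;
GO

-- ":out" redirects query results to a text file:
:out C:\Scripts\proc_list.txt
SELECT name FROM sys.procedures;
GO

-- ":r" opens a script and executes it (leave an empty line between :r commands):
:r C:\Scripts\create_staging_table.sql

:r C:\Scripts\update_procedures.sql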

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there's a lot going on. Here's the line-up for today: Wednesday, October 3, 2012 CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus 10:15 a.m. – 11:15 a.m., Moscone West 3008 Most customers have a broad variety of applications (internal, external, web, client-server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how enterprise single sign-on can help extend single sign-on to virtually any application without costly application modification, while laying a foundation that will enable integration with a broader identity management platform. CON9494: Sun2Oracle: Identity Management Platform Transformation 11:45 a.m. – 12:45 p.m., Moscone West 3008 Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum. CON9631: Entitlement-centric Access to SOA and Cloud Services 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7 How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists, as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service, not just within your intranet but also right at the perimeter. CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012 11:45 a.m. – 12:45 p.m., Moscone West 3003 In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand. CON9493: Identity Management and the Cloud 1:15 p.m. – 2:15 p.m., Moscone West 3008 Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service. CON9624: Real-Time External Authorization for Middleware, Applications, and Databases 3:30 p.m. – 4:30 p.m., Moscone West 3008 As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals. CON9625: Taking Control of WebCenter Security 5:00 p.m. – 6:00 p.m., Moscone West 3008 Many organizations are extending WebCenter in business-to-business scenarios requiring secure identification and authorization of business partners and their users.
Leveraging LADWP's use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. EVENTS: Identity Management Customer Advisory Board 2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates. Identity Management Meet & Greet Networking Event 3:30 p.m. – 4:30 p.m., Meeting Session 4:30 p.m. – 5:30 p.m., Cocktail Reception Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and the product management team. Do take this opportunity to network with your peers and connect with Identity Management customers. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us at @oracleidm on Twitter and Facebook. Use #oow and #idm to join in the conversation.

    Read the article

  • Justifiable Perks.

    - by Phil Factor
I was once the director of a start-up IT company, and had the task of recruiting a proportion of the management team. As my background was in IT management, I was rather more familiar with recruiting Geeks for technology jobs, but here, one of my early tasks was interviewing a Marketing Director. The small group of financiers had suggested a rather strange Irishman called O'Halleran. From my background in City of London dealing-rooms, I was slightly unprepared for the experience of interviewing anyone wearing a pink suit. Many of my older City colleagues would have required resuscitation after seeing his white leather shoes. However, nobody will accuse me of prejudging an interviewee. After all, many Linux experts who I've come to rely on have appeared for interview dressed as hobbits. In fact, the interview went well, and we had even settled his salary. I was somewhat unprepared for the coda. 'And I will need to be provided with a Ferrari by the company.' 'Hmm. That seems reasonable.' Initially, he looked startled, and then a slow smile of victory spread across his face. 'What colour would you like?' I asked genially. 'It has to be red.' He looked very earnest on this point. 'Fine. I have to go past Hamleys on the way home this evening, so I'll pick one up then for you.' 'Er… Hamleys is a toyshop, not a Ferrari dealership.' I stared at him in bafflement for a few seconds. 'You're not seriously asking for a real Ferrari, are you?' 'Well, yes. Not for my own sake, you understand. I'd much prefer a simple run-about, but my position demands it. How could I maintain the necessary status in the office without one? How could I do my job in marketing when my grey Datsun was all too visible in the car park? It is a tool of the job.' 'Excuse me a moment, but I must confer with the MD.' I popped out to see Chris, the MD. 'Chris, I'm interviewing a lunatic in a pink suit who is trying to demand that a Ferrari is a precondition of his employment. I tried the 'misunderstanding trick' but it didn't faze him.' 'Sorry, Phil, but we've got to hire him. The VCs insist on it. You've got to think of something that doesn't involve committing to the purchase of a Ferrari. Current funding barely covers the rent for the building.' 'OK boss. Leave it to me.' On my return, I slapped O'Halleran's file on the table with a genial, paternalistic smile. 'Of course you should have a Ferrari. The only trouble is that it will require a justification document that can be presented to the board. I'm sure you'll have no problem in preparing this document in the required format.' The initial look of despair was quickly followed by a bland look of acquiescence. He had, earlier in the interview, argued with great eloquence for his skill in preparing the tiresome documents that underpin the essential corporate and government deals that were vital to the success of this new enterprise. The justification of a Ferrari should be a doddle. After the interview, Chris nervously asked how I'd fared. 'I think it is all solved.' '… without promising a Ferrari, I hope.' 'Well, I did actually; on condition he justified it in writing.' Chris issued a stream of invective. The strain of juggling the resources in an underfunded startup was beginning to show. 'Don't worry. In the unlikely event of him coming back with the required document, I'll give him mine.' 'Yours?' He strode over to the window to stare down at the car park.
He needn't have worried: I knew that his breed of marketing man could more easily lay an ostrich egg than prepare a decent justification document. My Ferrari is still there at the back of my garage. Few know of the Ferrari cultivator, a simple, inexpensive motorized device designed for the subsistence farmers of southern Italy. It is the very devil to start, but it creates a perfect tilth for the seedbed.

    Read the article

  • Inside Red Gate - Be Reasonable!

    - by Simon Cooper
As I discussed in my previous posts, divisions and project teams within Red Gate are allowed a lot of autonomy to manage themselves. It's not just the teams, though; there's an awful lot of freedom given to individual employees within the company as well.

Reasonableness

How Red Gate treats its employees is embodied in the phrase 'You will be reasonable with us, and we will be reasonable with you'. As an employee, you are trusted to do your job to the best of your ability. There's no one looking over your shoulder, no one clocking you in and out each day. Everyone is working at the company because they want to, and one of the core ideas of Red Gate is that the company exists to 'let people do the best work of their lives'. Everything is geared towards that. To help you do your job, Office Services and the IT department are there. If you need something to help you work better (a third or fourth monitor, footrests, or a new keyboard) then ask people in Information Systems (IS) or Office Services and you will be given it, no questions asked. Everyone has administrator access to their own machines, and you can install whatever you want on them. If there's a particular bit of software you need, then ask IS and they will buy it. As an example, last year I wanted to replace my main hard drive with an SSD; I had a summer job at school working in a computer repair shop, so knew what to do. I went to IS and asked for 'an SSD, a SATA cable, and a screwdriver'. And I got it there and then, even the screwdriver. Awesome. I screwed it in myself, copied all my main drive files across, and I was good to go. Of course, if you're not happy doing that yourself, then IS will sort it all out for you, no problems. If you need something that the company doesn't have (say, a book off Amazon, or you need some specifications printed off and bound), then everyone has an expense limit of £100 that you can use without any sign-off needed from your managers. If you need a company credit card for whatever reason, then you can get it. This freedom extends to working hours and holiday; you're expected to be in the office 11am-3pm each day, but outside those times you can work whenever you want. If you need a half-day holiday on a day's notice, or even the same day, then you'll get it, unless there's a good reason you're needed that day. If you need to work from home for a day or so for whatever reason, then you can. If it's reasonable, then it's allowed.

Trust issues?

A lot of trust, and a lot of leeway, is given to all the people in Red Gate. Everyone is expected to work hard, do their jobs to the best of their ability, and there will be a minimum of bureaucratic obstacles that stop you doing your work. What happens if you abuse this trust? Well, an example is company trip expenses. You're free to expense what you like: food, drink, transport, etc., but if your expenses are not reasonable, then you will never travel with the company again. Simple as that. Everyone knows when they're abusing the system, so simply don't do it. Along with reasonableness, another phrase used is 'Don't be a ***'. If you act like a ***, and abuse any of the trust placed in you, even if you're the best tester, salesperson, dev, or manager in the company, then you won't be a part of the company any more. From what I know about other companies, employee trust is highly variable between companies, all the way up to CCTV trained on employees' monitors. As a dev, I want to produce well-written and useful code that solves people's problems.
Being able to get whatever I need - install whatever tools I need, get time off when I need to, obtain reference books within a day - all let me do my job, and so let Red Gate help other people do their own jobs through the tools we produce. Plus, I don't think I would like working for a company that doesn't allow admin access to your own machine and blocks Facebook!

    Read the article

  • Rip and Convert DVDs to an ISO Image

    - by Mysticgeek
If you own a lot of DVDs, you might want to convert them to an ISO image for backup and for easily playing them on your media center. Today we take a look at ripping your discs using DVDFab, then using ImgBurn to create an ISO image of the ripped DVD files.

Rip DVD with DVDFab 6

DVDFab will remove copy protection and rip the DVD files for free. Other components in the suite require you to purchase a license after the 30-day trial, but you'll still be able to rip DVDs after the trial. Install DVDFab by accepting the defaults (link below)…a system restart is required to complete the install process. The first time you run it, a welcome screen is displayed. If you don't want to see it again check the box Do not show again, then Start DVDFab. Pop the DVD in your drive and click Next. Now select your region and check Do not show again, then OK. It will then open the DVD and begin to scan it. Under DVD to DVD you can select either Full Disc or Main Movie depending on what you want to rip. If you want to burn the DVD to a disc after it's created, select the Full Disc option. Now click the Start button to begin the ripping process. After the ripping process has completed, you'll get a message telling you it's waiting for you to put in a blank DVD. Since we aren't burning the disc, just cancel the message. Click Finish and close out of DVDFab, or just minimize it if you're going to keep using it to rip another DVD. By default the temporary directory is in My Documents \ DVDFab \ Temp…however you can change it in settings. If you go to the Temp directory you'll see the DVD files listed there…

Convert Files to ISO with ImgBurn

Now that we have the files ripped from the DVD, we need to convert them to an ISO image using ImgBurn (link below). Open it up and from the main menu click on Create image file from files/folders. Click on the folder icon to browse to the location of the ripped DVD files. Browse to the DVDFab temp directory and the VIDEO_TS folder for the source and click Ok. Then choose a destination directory, give the ISO a name, and click Save. In this case we ripped the Unbreakable DVD, so we named it that. So now in ImgBurn you have the source being the ripped DVD files, and the destination for the ISO…then click the Build button. If you don't create a volume label, ImgBurn is kind enough to create one for you. If everything looks correct, click Ok. Now wait while ImgBurn goes through the process of converting the ripped DVD files to an ISO image. The process has successfully completed. The ISO image of the DVD will be in the output directory you selected earlier. Now you can burn the ISO image to a blank DVD or store it on an external hard drive for safekeeping. When you're done, you'll probably want to go into the temp DVDFab folder and delete the VOB and other files in the VIDEO_TS folder, as they will take up a lot of space on your hard drive.

Conclusion

Although this method requires two programs to make an ISO out of a DVD, it's extremely quick. When ripping DVDs of various lengths, it took less than 30 minutes to get the final ISO. Now you'll have your DVD movies backed up in case something were to happen to the discs and they are no longer playable. If you use Windows Media Center to watch your movies, check out our article on how to automatically mount and view ISO files in Windows 7 Media Center. With DVDFab, you get a 30-day fully functional trial for all of its features. You'll still be able to rip DVDs even after the 30-day trial has ended.
The more we've been using DVDFab, the more impressed we are with its capabilities, so after the 30-day trial you should consider purchasing a license. We will have a full review of it to share with you soon. Download DVDFab Download ImgBurn

    Read the article

  • using isight camera in macbookpro(8,2) on ubuntu 12.04 virtualbox VM

    - by Kurt Spindler
    I'm having a lot of trouble using the built-in isight camera on my macbookpro8,2 (early 2011) from an ubuntu 12.04 virtual machine, run inside VirtualBox. The following is the log I get when I try to run guvcview ubuntu@ubuntu:~$ guvcview guvcview 1.5.3 ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.surround71 ALSA lib setup.c:565:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory ALSA lib setup.c:565:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory ALSA lib setup.c:565:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5) ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started video device: /dev/video0 Init. 
FaceTime HD Camera (Built-in) (location: usb-0000:00:0b.0-1) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1280, height = 720 } Time interval between frame: 1/10, { pixelformat = 'MJPG', description = 'MJPEG' } { discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { pixelformat = 'RGB3', description = 'RGB3' } { discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { pixelformat = 'BGR3', description = 'BGR3' } { discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { pixelformat = 'YU12', description = 'YU12' } { discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { pixelformat = 'YV12', description = 'YV12' } { 
discrete: width = 160, height = 120 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 176, height = 144 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 320, height = 240 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 352, height = 288 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 640, height = 480 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1280, height = 720 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 960, height = 540 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, { discrete: width = 1024, height = 576 } Time interval between frame: 100/2997, 1/25, 1/24, 1/15, vid:05ac pid:8509 driver:uvcvideo checking format: 1196444237 VIDIOC_G_COMP:: Invalid argument compression control not supported fps is set to 1/25 drawing controls no codec detected for H264 no codec detected for MP3 - (lavc) Checking video mode 960x540@32bpp : OK Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable Could not grab image (select timeout): Resource temporarily unavailable write /home/ubuntu/.guvcviewrc OK free controls cleaned allocations - 100% Closing portaudio ...OK Closing GTK... OK ubuntu@ubuntu:~$ Any help would be greatly appreciated. The only clue I have is that I initially was having problems and tried using the old method of fixing iSights (involving installing isight-firmware-tools) before realizing that I just hadn't turned on the VM setting to allow the VM to access the webcam. :) Anyway, I wonder if installing that messed something up. However, I think this is a red herring because I've shut down and turned back on the Mac, restarted the VM, tried a different VM (for which I never installed isight-firmware-tools), and created an entirely new Ubuntu VM. All instances have had this problem. Similarly, other viewers, such as cheese, avplay and avconv, have all had various kinds of errors.

    Read the article

  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planning an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely misleading you, I'm going to hang back a while. What I have written so far holds true and will continue to do so. This week, I have been mostly eating 'mapviewer' ... answers on a postcard please, TV show and character. I had a request to show how BIP can call mapviewer and render a dynamic map in an output. So I hit the books and colleagues for some answers. Mapviewer is Oracle's geographic information system, hereafter known as GIS. I use it a lot in our BIEE demos, where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts: Jerry and David with their little black box of maps. Once in my possession I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output? Bryan Wise, who has written some articles on this blog, did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the mapviewer server, passing a set of mapping requirements; it then returned a URL to a cached copy of that map. Easy to then have BIP render that image. That's still very doable. You need to install a couple of packages and then load the mapviewer Java APIs into the database. Then you can write your function against the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you. I thought I would investigate another method for getting the maps back into BIP. There is a URL interface you can call; this involves building an XML message to be passed to the mapviewer server. It's pretty straightforward to use on the mapviewer side. On the BIP side things are a little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world, lots of ocean and a rather long URL to get it to render. http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the mapviewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind, constructing the URL was pretty simple. I'm not going to get into the content of the URL too much; for that you need to bone up on the mapviewer XML API. Check out the home page here and the documentation here. To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template. <?param@begin:CURRENT_SERVER_URL;'myserver'?> Ignore the 'myserver'; that was just a dummy value for testing. At runtime it will resolve to: 'http://yourserver:port/xmlpserver' Not quite what we need, as mapviewer has its own server path; in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the mapviewer request URL.
A little concatenation and substringing later, I came up with <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?> That's the basic URL that I can then build on. To get the Hello World map I need to add the following: <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/> Those angle brackets were the source of my headache; BIP's XSLT engine was attempting to process them rather than just pass them. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter. <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22 %20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param> I did not need to assign it to a parameter, but I felt that if I were going to do anything more serious than Hello World, like plotting points of interest on the map, I would need to dynamically build the URL; using a set of parameters or variables that I then concatenated would be easier. Now I had the initial server string and the request; all I then did was combine the two using a concat: concat($mURL,$pXML) Embedding that into an image tag: <fo:external-graphic src="url({concat($mURL,$pXML)})"/> and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex, but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted; it would then build the XML message, call mapviewer and return a URL to the map image for BIP to render. More on that next time ...
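Pulling the fragments above together, the template markup ends up looking like this (just a consolidation of what is shown above; the substring length of 22 depends on your server URL, so adjust it for your environment):
<?param@begin:CURRENT_SERVER_URL;'myserver'?>
<?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?>
<xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param>
<fo:external-graphic src="url({concat($mURL,$pXML)})"/>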

    Read the article

  • Downloading stuff from Oracle: an example

    - by user12587121
Introduction

Oracle has a lot of software on offer. Components of the stack can evolve at different rates and different versions of the components may be in use at any given time. All this means that even the process of downloading the bits you need can be somewhat daunting. Here, by way of example, and hopefully to convince you that there is method in the downloading madness, we describe how to go about downloading the bits for Oracle Identity Manager (OIM) 11.1.1.5.

Firstly, a couple of preliminary points:
- Folks with Oracle products already installed and looking for bug fixes, patch bundles or patch sets would go directly to the Oracle support website.
- This Oracle document is a comprehensive description of the Oracle FMW download process and the licensing that applies to downloaded software.

Downloading Oracle Identity Manager 11.1.1.5

To be sure we download the right versions, first locate the Certification Matrix for OIM 11.1.1.5: go to the Fusion Certification Page, then to the "System Requirements and Supported Platforms for Oracle Identity and Access Management 11gR1" link. Let's assume you have a 64-bit Linux machine and an Oracle database already. Then our goal is to end up with a list of files like the following:

jdk-6u29-linux-x64.bin (Java JDK)
V26017-01.zip (the Repository Creation Utility to create the DB schemas)
wls1035_generic.jar (the WebLogic Application Server)
ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip (the Identity Management bits)
ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip (the SOA bits)
ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip
jdevstudio11115install.exe (optional: JDeveloper IDE)
soa-jdev-extension.zip (optional: SOA extensions for JDeveloper)

Downloading the bits

1. Download the Java JDK, 64-bit version 1.6.0_24+.
2. Download the RCU: here you will see that the RCU is mentioned on the Identity Management home page but no link is provided. Do not panic. Due to the amount and turnover of software available, only the latest versions are available for download from the main Oracle site. Over time software gets moved on to the Oracle edelivery site, and it is here that we find the RCU version we require:
   a. Go to edelivery: https://edelivery.oracle.com
   b. Choose Pack 'Oracle Fusion Middleware' and 'Linux x86-64'
   c. Click on 'Oracle Fusion Middleware 11g Media Pack for Linux x86-64'
   d. Download: 'Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.5.0) for Linux x86' (V26017.zip)
3. Download the WebLogic Application Server: WLS 10.3.5
4. Download the Oracle Identity Manager bits: one point to clarify here is that currently the Identity Management bits come in two trains, essentially one for the Directory Services piece and the other for the Access Management and Identity Management parts. We need to be careful not to confuse the two, in particular to be clear which of the trains is being referred to by the documentation:
   a. So, with this in mind, go to 'Oracle Identity and Access Management (11.1.1.5.0)' and download Disk1.
5. Download the SOA bits:
   a. Go to the edelivery area as for the RCU and download:
      i. Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 1 of 2)
      ii. Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 2 of 2)
6. You will want to download some development tooling (for plugins or BPEL workflow development):
   a. Download JDeveloper 11.1.1.5 (11.1.1.6 may work, but best to stick to the versions that correspond to the WLS version we are using)
   b. Go to the site for SOA tools and download the SOA Composite Editor 11.1.1.5

That's it, you may proceed to the installation.

    Read the article

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
Microsoft has many tools for "Big Data". In fact, you need many tools – there's no product called "Big Data Solution" in a shrink-wrapped box – if you find one, you probably shouldn't buy it. It's tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn't a reality. You'll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called "Distributed File and Compute". Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation's release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There's a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I'll cover the options at a general level below)

First Option: Windows Azure HDInsight Service

Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There's nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that's another post) This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection. You can read more about this option here: http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx

Second Option: Microsoft HDInsight Server

Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure. This option is useful if you want to have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn't going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx

Third Option (unsupported): Installation on Windows Azure Virtual Machines

Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it's open-source, so there's nothing preventing you from doing that. Aside from being unsupported, there are other issues you'll run into with this approach – primarily involving performance and the amount of configuration you'll need to do to access the data nodes properly. But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn't a bad option. Did I mention that's unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW

Choosing the right option

Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense. My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there's another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx
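For a flavor of what running Hive statements against data in Windows Azure storage can look like, here is a minimal HiveQL sketch; the storage scheme, container path, and table layout are illustrative assumptions of mine, not from the original post:

-- Define an external table over log files already sitting in Azure blob storage:
CREATE EXTERNAL TABLE weblogs (log_ts STRING, url STRING, status INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION 'asv://mycontainer/weblogs/';

-- A simple aggregation; Hive compiles this into a MapReduce job on the cluster:
SELECT status, COUNT(*) AS hits
FROM weblogs
GROUP BY status;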

    Read the article

  • Open World Day 3

    - by Antony Reynolds
A Day in the Life of an Oracle OpenWorld Attendee Part IV

My third day was exhibition day for me! I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite. I found a number of interesting vendors and thought I would share what I found here. These are not necessarily endorsements, but observations on companies that I thought had interesting-looking products that fill a need I have seen at customers.

Highly Available EBS Upgrades

A few years ago I worked with a customer that was a port authority. They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers. However, they only had a 2-hour downtime window to perform upgrades. This was not a problem for core database and middleware technology, which could accommodate those upgrade timescales easily. It was a problem for EBS, however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage. This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business.

Mobile on WebLogic

I have come across a number of customers who want a comprehensive mobile solution, connected and disconnected operation and so forth. ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure. The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available – many of my colleagues don't have mobile access from their homes because they live in the middle of nowhere – and disconnected support is crucial in these situations.

Sharepoint Connector for WebCenter Content

Obviously Sharepoint is an evil pernicious intrusion into a company's IT estate, but it is widely deployed, and many people like it but would also like to take advantage of Oracle products such as WebCenter Content. So I was encouraged to see that Fishbowl Solutions have created a connector for Sharepoint that allows it to bring in content from WebCenter; it looks like a valuable way to maintain the Sharepoint interface end users are used to but extend the range of content by pulling stuff (technical term for content) from WebCenter.

Load Balancing

The Enterprise Deployment Guides are Oracle's bible on building highly available FMW environments, and each of them requires a front-end load balancer. I have been asked to help configure F5 Load Balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW. I like F5; they provide (relatively) easy-to-use products that do what they say on the side of the box. They may not have all the bells and whistles of some of their more expensive competitors, but they do the job and do it well! Besides which, I like their logo!

Other Stuff

I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!

A Quiet Night

Wednesday night was the partner appreciation event and I had decided to go back to the hotel and have an early night. I decided to attend the last session of the day – a Maven/Hudson/WebLogic tutorial. I got the wrong hotel for the session and snuck in 20 minutes late at the back and started working on the hands-on workshop. One of my co-attendees raised his hand for help, and as the presenter came over to help he suddenly stopped and yelled – "Is that Antony?!" It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia. It was good to catch up with him. As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslam, Oracle ACE from the UK. After the tutorial Simon and I retired to the coffee shop to catch up and share stories. Two and a half hours later we decided it was time to retire; so much for an early night, but great to renew old friendships and find out what real customers are worrying about.

    Read the article

  • Virtual host in Apache Zend

    - by llocani
I'd like to ask if you can tell me why I can't get a vhost in Apache to work. My vhost conf is:

NameVirtualHost *:80
<VirtualHost _default_:80>
ServerAdmin [email protected]
DocumentRoot "E:/Archivos de programa/Zend/Apache2/htdocs"
ServerName localhost
<Directory "E:/Archivos de programa/Zend/Apache2/htdocs"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory>
#AllowOveride all
</VirtualHost>
<VirtualHost *:80>
ServerAdmin [email protected]
DocumentRoot "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands"
ServerName healinghands.loc
<Directory "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory>
ErrorLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands/logs/error.log"
CustomLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands/logs/access.log" common
#AllowOveride all
</VirtualHost>
<VirtualHost *:80>
ServerAdmin [email protected]
DocumentRoot "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects"
ServerName dev.loc
<Directory "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory>
ErrorLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/logs/error.log"
CustomLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/logs/access.log" common
#AllowOveride all
</VirtualHost>

My httpd.conf is:

ServerRoot "E:\Archivos de programa\Zend\Apache2"
Listen 80
LoadModule actions_module modules/mod_actions.so
LoadModule alias_module modules/mod_alias.so
LoadModule asis_module modules/mod_asis.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule auth_digest_module modules/mod_auth_digest.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule cgi_module modules/mod_cgi.so
LoadModule dir_module modules/mod_dir.so
LoadModule env_module modules/mod_env.so
LoadModule filter_module modules/mod_filter.so
LoadModule headers_module modules/mod_headers.so
LoadModule imagemap_module modules/mod_imagemap.so
LoadModule include_module modules/mod_include.so
LoadModule info_module modules/mod_info.so
LoadModule isapi_module modules/mod_isapi.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule mime_module modules/mod_mime.so
LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule status_module modules/mod_status.so
LoadModule userdir_module modules/mod_userdir.so
<IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> User daemon Group daemon </IfModule> </IfModule>
ServerAdmin [email protected]
DocumentRoot "E:\Archivos de programa\Zend\Apache2/htdocs"
<Directory /> Options FollowSymLinks AllowOverride all Order allow,deny Allow from all </Directory>
<IfModule dir_module> DirectoryIndex index.php index.html home.php </IfModule>
<FilesMatch "^\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch>
ErrorLog "logs/error.log"
LogLevel warn
<IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "logs/access.log" common </IfModule>
<IfModule alias_module> Alias /NetBeansProjects "E:\Documents and Settings\dvieira\Mis documentos\NetBeansProjects" ScriptAlias /cgi-bin/ "E:\Archivos de programa\Zend\Apache2/cgi-bin/" </IfModule>
<IfModule cgid_module> </IfModule>
<Directory "E:\Archivos de programa\Zend\Apache2/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory>
DefaultType text/plain
<IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule>
Include conf/extra/httpd-vhosts.conf
<IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule>
Include "conf/zend.conf"
NameVirtualHost *:80
<VirtualHost *:80> Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/zend-default-vhost-80.conf" </VirtualHost>
Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/globals-*.conf"
Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/vhost_*.conf"

And my hosts file in Windows:

127.0.0.1 localhost
127.0.0.1 healinghands.loc
127.0.0.1 dev.loc

And I can't get any of the browsers to recognize dev.loc or healinghands.loc, but a ping does it. localhost is working fine. I've spent 3 days now trying to solve this on my own, but I finally quit and have to ask. The error should be this: Error Code 11002: host not found. Background: this error indicates that the gateway could not find an authoritative DNS server for the website you are trying to access. Date: 5/20/2013 5:51:03 PM Server: Source: DNS problem. I'd like to add this ping output:

Pinging healinghands.loc [127.0.0.1] with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Ping statistics for 127.0.0.1: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms

Today I've tried something: I added these domains to the exceptions of my IE proxy config. This worked for healinghands.loc but not for dev.loc. I really do not understand why; both configs are exactly the same except for the DocumentRoot. I will continue searching.

    Read the article

  • PASS Summit 2011 – Part III

    - by Tara Kizer
    Well we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit.  Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”.  I think my head grew two times in size from the Thursday sessions.  Just WOW! I took a ton of notes in Klaus' session.  He took a deep dive into how to troubleshoot performance problems.  Here is how he goes about solving a performance problem: Start by checking the wait stats DMV System health Memory issues I/O issues I normally start with blocking and then hit the wait stats.  Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem.  He highlighted a few waits to be aware of such as WRITELOG (indicates IO subsystem problem), SOS_SCHEDULER_YIELD (indicates CPU problem), and PAGEIOLATCH_XX (indicates an IO subsystem problem or a buffer pool problem).  Regarding memory issues, Klaus recommended that as a bare minimum, one should set the “max server memory (MB)” in sp_configure to 2GB or 10% reserved for the OS (whichever comes first).  This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%.  You should use 64kb for NTFS cluster, and it’s automatic in Windows 2008 R2. Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking.  One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code view LOCK_TIMEOUT.  If you do this via the application, you should trap error 1222. Kalen’s session went into execution plans.  The minimum size of a plan is 24k.  This adds up fast especially if you have a lot of plans that don’t get reused much.  You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column.  She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database.  I didn’t know we had this available, so this was great to hear.  This will be less intrusive when an emergency comes up where I’ve needed to run DBCC FREEPROCCACHE. Kalen said one should enable “optimize for ad hoc workloads” if you have an adhoc loc.  This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing.  This helps with plan cache bloat.  I have a lot of systems that use prepared statements, and Kalen says we simulate those calls by using sp_executesql.  Cool! Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server.  He continues to debunk things via “DBA Mythbusters”.  You can get a PDF of a bunch of these here.  One of the myths he went over is the number of tempdb data files that you should have.  Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server.  This no longer holds true due to the numerous cores we have on our servers.  Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there.  BUT!  
    Paul likes what Bob Ward (twitter) says on this topic:

        8 or fewer cores -> set the number of files equal to the number of cores
        Greater than 8 cores -> start with 8 files and increase in blocks of 4

    One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes and out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, then plan out your backups.

    As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  Data corruption could occur on the mirror and not on the principal server.  If you aren’t checking for corruption on your mirror systems, you could be failing over to a corrupt database in a disaster or even a planned failover.  You can’t run DBCC CHECKDB against the mirrored database itself, but you can run it against a database snapshot of the mirrored database (sketched below as well).
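
    Two of those Thursday tips are easy to try in a sandbox. Here is a rough T-SQL sketch of the LOCK_TIMEOUT/error-1222 pattern from Joe’s session plus two plan cache items from Kalen’s; the table name, timeout value, and error handling are made-up placeholders, not anything shown in the sessions:

        -- Hypothetical example: stop waiting on locks after 5 seconds.
        SET LOCK_TIMEOUT 5000;

        BEGIN TRY
            UPDATE dbo.Orders          -- dbo.Orders is a placeholder table
            SET Status = 'Shipped'
            WHERE OrderID = 42;
        END TRY
        BEGIN CATCH
            -- Error 1222 = "Lock request time out period exceeded"
            IF ERROR_NUMBER() = 1222
                PRINT 'Timed out waiting for a lock; retry or surface the error.';
            ELSE
            BEGIN
                DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
                RAISERROR(@msg, 16, 1);  -- re-raise anything unexpected
            END
        END CATCH;

        -- Plan cache bloat: single-use plans (usecounts = 1), biggest first.
        SELECT TOP (20) usecounts, cacheobjtype, objtype, size_in_bytes
        FROM sys.dm_exec_cached_plans
        WHERE usecounts = 1
        ORDER BY size_in_bytes DESC;

        -- Kalen's "optimize for ad hoc workloads" suggestion (server-wide setting):
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'optimize for ad hoc workloads', 1;
        RECONFIGURE;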
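
    And since the mirror question comes up a lot, a minimal sketch of checking a mirror through a snapshot; the database name, logical file name, and snapshot path here are all assumptions, so substitute your own:

        -- All names are hypothetical: MirrorDB is the mirrored database,
        -- MirrorDB_Data its logical data file name, the path a placeholder.
        -- Run this on the mirror server; the mirror itself can't be read
        -- directly, but a database snapshot of it can.
        CREATE DATABASE MirrorDB_Snap
        ON (NAME = MirrorDB_Data, FILENAME = 'E:\Snapshots\MirrorDB_Snap.ss')
        AS SNAPSHOT OF MirrorDB;

        DBCC CHECKDB ('MirrorDB_Snap') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        DROP DATABASE MirrorDB_Snap;   -- drop the snapshot once the check is done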

    Read the article

  • Deciding On Features For Open Source

    - by Robz / Fervent Coder
    Open source feature selection is subjective. An interesting question was posed to me recently at a presentation: “How do you decide what features to include in the [open source] projects you manage?”

    Is It Objective?

    I’d like to say that it’s really objective and that we vote on features and look at what carries the most interest of the populace. Actually, no I wouldn’t. I don’t think I would enjoy working on open source (OSS) as much if someone else decided what features I should include. It already works that way at work. I don’t want to come home from work and work on things that others decide for me, unless they are paying me for those features. So how do I decide on features for our open source projects? I think there are at least three paths to feature selection, and they are not necessarily mutually exclusive.

    Feature Selection IS the Set of Features For the Domain

    Your product, in whatever domain it is in, needs to have the basic set of features that answer the needs of that domain. That is different for every product, but if you take for example a build tool, at the very least it needs to be able to compile source. And these basic needed features are not always objective either. Two people could completely disagree about what makes a required feature to meet a domain need for a product. Even one person may disagree with himself/herself about what features are needed based on different timeframes. So that leads us down to subjective.

    Feature Selection IS An Answer To Competition

    Some features go in because the competition adds a feature that may draw others away from your product offering. With OSS, the alternatives are all free, so if your competition adds a killer feature and you don’t, there isn’t much stopping your customers from moving to the competition other than the cost of learning the other product. If you want to keep your customers, you need to be ready to answer the question of adding the features your competition has added.  Sometimes it’s about adding a feature that your competition charges for, but you add it for free; that draws people to the free alternative, so sometimes that motivates selecting a feature. Sometimes it’s because you want those features in your product, either to learn how to solve that problem yourself and/or because you have a need for that feature and you want it in your product. That also leads us down the road to subjective.

    Feature Selection IS Subjective

    I decide on features based on what I want to see in the product I am working on. Things I am interested in or have the biggest need for usually get picked first, with things that do not interest me either coming later or not at all. Most people get interested in an area of OSS because it solves a need for them and/or they find it interesting. If neither of these two things is happening and they are not being paid, it’s likely that person will move on to something else they find interesting or just stop OSS altogether. OSS feature selection is just that: subjective. If it wasn’t, it wouldn’t be opinionated and it wouldn’t have a personality about it. Most people like certain OSS because they like where the product is going or the personalities behind the product. For me, I want my products to be easy to use and solve an important problem. If it takes you more than 5-10 minutes to learn how to use my product, I know you are probably going somewhere else.
    So I pick features that make the product easy to use and learn, and those are not always the simplest features to work on. I design for conventions and make the product opinionated, because I think that is what makes a product easier to use: it already works with little setup. And I like to provide the ability for power users to get in and change the conventions to suit their needs. So those are required features for me, above and beyond the domain features. I like to think I do a pretty good job at this. Usually when I present on something I’ve created, I like seeing people’s eyes light up when they see how simple it is to set up a powerful product like UppercuT.

    Patches And/Or Donations

    But before you decide I’m a bad person or that you won’t use my product, remember that I’ll always accept patches, or I might like the feature that you suggest. If you like using the products I provide and they solve a problem for you, the two biggest compliments you can give are either a patch or a donation.  If you think the product is great, but if it could do this one other thing it would be awesome(!), then consider contacting me and providing a patch, or consider contacting me with a donation and a request to put the feature in. Alternatively, if it’s a big feature, you could hire me to work on the product to make it even better.

    What If There Are Multiple Committers?

    When there are multiple committers, I prefer that someone always makes the ultimate decision about whether a feature should be part of the product or not. But for other OSS projects maybe this is not the case. If there is no ultimate decision maker, then there is the possibility of either adding every feature suggested or deadlocking on two conflicting features.

    So let me pose this question: if you work on open source, how do you decide what features to put in your open source projects? How do you decide what doesn’t belong? What do you do when there are conflicting features?

    Read the article
