Search Results

Search found 9034 results on 362 pages for 'plan guide'.


  • What's SimpleEditableListAppDelegate?

    - by Shawn
    I'm trying to follow the TableView programming guide, and I'm copying the code directly from the guide, but I get "SimpleEditableListAppDelegate undeclared" when I try to compile. Google returns nothing but the programming guide. What's SimpleEditableListAppDelegate, and how do I use it?

    Read the article

  • Virtualmin domain name registration php

    - by David Maitland
    In a PHP web page I need to run the following command to create a new domain:

        virtualmin create-domain --domain DOMAIN --pass PASS --plan 'Standard Package' --limits-from-plan --features-from-plan

    This is usually executed in a shell, but I don't know how to do it from a web page, and I also need to take the domain string and pass string from a web form. Can anyone help with the PHP code? My skills are basic and I have already tried a few things that just don't work. Thanks.
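    A minimal sketch of one way to do this. The handler and form field names are assumptions, and the web server user must separately be permitted to run the virtualmin CLI (for example via a sudo rule or the Virtualmin remote API); escapeshellarg() keeps the form values from injecting extra shell syntax:

        <?php
        // create_domain.php -- hypothetical POST handler for a form with 'domain' and 'pass' fields.
        // escapeshellarg() quotes each value so it is passed to the shell as a single argument.
        $domain = escapeshellarg($_POST['domain']);
        $pass   = escapeshellarg($_POST['pass']);

        $cmd = "virtualmin create-domain --domain $domain --pass $pass"
             . " --plan 'Standard Package' --limits-from-plan --features-from-plan";

        // Capture stdout+stderr and the exit status so failures are visible in the page.
        exec($cmd . ' 2>&1', $output, $status);
        echo ($status === 0)
            ? 'Domain created.'
            : 'virtualmin failed: ' . htmlspecialchars(implode("\n", $output));
        ?>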

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I'll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let's describe some disaster recovery scenarios where it's useful.

    About SQL Server disaster recovery

    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one's immune. What you can do is take all the actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it's necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you'll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don't meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changing and tweaking the recovery plan itself.

    What is a good SQL Server disaster recovery plan?

    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans (a minimal T-SQL sketch of such a schedule appears at the end of this post).

    Why are transaction logs important?

    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to its state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it's not necessary to re-create the database from its last backup. The database might still be online, and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore-to-a-point-in-time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup, and then restores the transaction log backups, one after another, up to the recovery point. During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help.

    For optimal recovery, besides having a database in the full recovery model, it's important that you haven't manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?

    SQL Server doesn't provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it's necessary to match them to values in the MDF file. And when you have finally read the transactions executed, you still have to create a script for them.

    How to easily repair a SQL database?

    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both the MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I'll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next.

    ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don't have transaction log backups, but the LDF file hasn't been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That's why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I'll check all filters and make sure that all transactions are included.

    In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I'll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next.

    ApexSQL Log offers three options. Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.

    For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command-line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but is quite useful in daily auditing scenarios.

    To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database in the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered or restored, or transactions rolled back.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
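    As referenced above, a minimal T-SQL sketch of the nightly-full/hourly-log schedule described in this post. The database name and backup paths are placeholders; in practice each BACKUP statement runs from its own SQL Server Agent job or maintenance plan, usually with a uniquely dated file name per run:

        -- One-off: transaction log backups require the full recovery model
        ALTER DATABASE MyDB SET RECOVERY FULL;

        -- Nightly full backup
        BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak';

        -- Hourly (or every 15 minutes) transaction log backup
        BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log.trn';

    And a quick, unsupported peek at the online transaction log using one of the undocumented functions mentioned above:

        SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName
        FROM fn_dblog(NULL, NULL); -- NULL, NULL = no start/end LSN filter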

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

    High Level Steps
    1. Install and configure Visual Studio 2012 Test Controller on Target Server
    2. Create Standard Environment
    3. Create Test Plan with Test Case
    4. Run Test Case
    5. Create Coded UI Test from Test Case
    6. Associate Coded UI Test with Test Case
    7. Create Build Definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on Target Server

    First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server etc.), the test controller can be installed either on one of those machines or on a dedicated machine.

    To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server, since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully.

    For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment

    Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center. Click New to create a new environment. Enter a name for the environment; since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com), and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine.

    On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings. Press Finish. This will now create and prepare the environment, which means that it will remote-install a test agent on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test

    I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow.

    For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case

    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method. Open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the "…" button. Select the coded UI test that corresponds to the test case. Press OK and then save the test case.

    For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate

    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to deploy your solution remotely. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.

    Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual. Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the … button on the Lab Process Settings property.

    First, select the environment that you created before. Then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.

    On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If, for example, you have databases, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step (a hypothetical sketch of such a script appears at the end of this post).

    The workflow defines some variables that are useful when running the deployments. These variables are:
    $(BuildLocation): the full path to where your build files are located
    $(InternalComputerName_<VM Name>): the computer name for a virtual machine in an SCVMM environment
    $(ComputerName_<VM Name>): the fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission on this drop location. You can find more information in Building your Deployment Scripts.

    On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan; in them I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this:
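    As referenced above, a hypothetical sketch of such a single-step deployment script. The file names, the dacpac, and the target database are assumptions, not part of the original post; the /Y switch is what Web Deploy's generated .deploy.cmd files use to perform (rather than trial) a deployment:

        REM deploy.cmd -- hypothetical one-step deployment, run on the target server by the BDT workflow.
        REM %1 is the build drop folder, passed as $(BuildLocation) in the Lab process settings.
        SET DROP=%~1

        REM Deploy the web application using the Web Deploy package produced by the build.
        CALL "%DROP%\MyApplication.deploy.cmd" /Y

        REM Deploy the database schema (assumes the build also produced a dacpac).
        sqlpackage.exe /Action:Publish /SourceFile:"%DROP%\MyApplication.Database.dacpac" /TargetServerName:localhost /TargetDatabaseName:MyApplication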

    Read the article

  • The SELECT query I am using is not working. Can somebody guide me to the correct way?

    - by Parth
    I am using this SELECT query:

        SELECT id, ordering FROM `jos_menu` WHERE ordering='".$rec['ordering'] -'1' ."' AND parent = '0'

    Here I need all the records whose ordering is one less than the selected record's ordering ($rec['ordering'] comes from another SELECT query). When I try to echo the query I do not get the complete statement, only this:

        -1' AND parent = '0'

    Here is the whole snippet:

        $where = ' WHERE (id = ' . implode( ' OR id = ', $cid ) . ')'; // Pranav Dave coded
        echo $selquery = "SELECT id, ordering FROM `jos_menu`" . $where;
        $db->setQuery( $selquery );
        $record = $db->loadAssocList();
        if ($model->orderItem($id, -1)) {
            echo "<pre>"; print_r($model); /*exit;*/
            //echo $updorderup = mysql_escape_string($model->_db->_sql);
            foreach ($record as $rec) {
                echo $aboverow = "SELECT id, ordering FROM `jos_menu` WHERE ordering='".$rec['ordering'] -'1' ."' AND parent = '0'";
                $db->setQuery( $aboverow );
                $above = $db->loadAssoc();
                echo "<pre>"; print_r($above);
            } // end of foreach
        } // end of if

    Please suggest where I am going wrong.
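    For reference, the behaviour comes from PHP operator precedence: in the PHP versions current when this was asked, concatenation (.) and subtraction (-) have equal precedence and associate left to right, so the whole SQL prefix is concatenated first, then coerced to a number by the subtraction. A minimal sketch of the fix is to parenthesize the arithmetic:

        // The prefix "SELECT ... ordering='" . $rec['ordering'] is built first,
        // then - '1' converts that whole string to the number 0 and subtracts 1,
        // leaving only -1 in front of the rest of the string.
        // Parenthesizing keeps the subtraction out of the concatenation:
        $aboverow = "SELECT id, ordering FROM `jos_menu` "
                  . "WHERE ordering = '" . ((int)$rec['ordering'] - 1) . "' "
                  . "AND parent = '0'";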

    Read the article

  • Week in Geek: FBI Back Door in OpenBSD Edition

    - by Asian Angel
    This week we learned how to migrate bookmarks from Delicious to Diigo, fix annoying arrows, play old-school DOS games, schedule smart computer shutdowns, use breaks in Microsoft Word to better format documents, check the condition of hard-disks using Linux disk utilities, & what the Linux fstab is and how it works. Photo by Jameson42.

    Random Geek Links
    Another week with extra news link goodness to help keep you up to date. Photo by justmakeit.
    Report of FBI back door roils OpenBSD community: Allegations that the FBI surreptitiously placed a back door into the OpenBSD operating system have alarmed the computer security community, prompting calls for an audit of the source code and claims that the charges must be a hoax.
    Fortinet: Job outlook improving for cybercrooks: In an ironic twist in the job market, more positions will open up for developers who can write customized malware packers, people who can break CAPTCHA codes, and distributors who can spread malicious code, according to Fortinet.
    Enisa: Malware for smartphones is a 'serious risk': Businesses and consumers are at risk of data breaches through smartphone use, according to the European Network and Information Security Agency.
    The trick with the f: Google and Microsoft web sites distribute malware: Last week, Google's DoubleClick advertising platform and Microsoft's rad.msn.com online ad network briefly distributed malware to other web sites in the form of advertising banners.
    New scam tactic: Fake disk defraggers: It would appear that scammers are trying out new programs to see which might best confuse potential victims and evade detection by legitimate antivirus software.
    Microsoft closes IE and Stuxnet holes: As previously announced, Microsoft has released 17 security updates to close 40 security holes. All four Windows holes so far disclosed in connection with Stuxnet have now been closed.
    Microsoft Offers H.264 Support to Firefox on Windows via Add-On: The new HTML5 Extension for Windows Media Player Firefox Plug-in add-on from Microsoft offers users running Firefox on Windows 7 H.264 support for HTML5 video playback.
    Google proclaims Chrome business-ready: Google has announced that Chrome is ready for corporate use.
    Microsoft Tells Exchange Customers to Think Twice Before Opting for Google Message Continuity: This week, Microsoft is telling companies still running Exchange 2010's precursors that they should carefully consider the implications of embracing Google Message Continuity.
    Who Google has in mind for its Chrome OS users: Steven Vaughan-Nichols explains why he feels that Chrome OS will be ideal for either office-workers or people who need a computer, but do not know the first thing about how to use one safely.
    Oracle takes office suite to the cloud: Oracle has introduced Cloud Office 1.0, a cloud-based version of its office suite, which is aimed at web and mobile users.
    Mozilla pays premiums for reports of vulnerabilities: The Mozilla Foundation has followed Google's example by expanding its rewards program for reports of vulnerabilities in its Web applications.
    Who bought those 882 Novell patents? Not just Microsoft: The mysterious CPTN Holdings, the organization that bought the 882 Novell patents as part of the terms of the Attachmate acquisition of Novell, has been unmasked (Microsoft, Apple, EMC and Oracle).
    Appeals court: Feds need warrants for e-mail: Police must obtain search warrants before perusing Internet users' e-mail records, a federal appeals court ruled today in a landmark decision that struck down part of a 1986 law allowing warrantless access.

    Geek Video of the Week
    What happens when someone plays a wicked prank by shoveling crazy snow paths that lead to dead ends or turn back on themselves? Watch to find out! Photo by CollegeHumor.
    Janitor Snow Shoveling Prank

    Random TinyHacker Links
    The Oatmeal on Cat vs Internet: What lengths will our poor neglected kitty hero have to go to in order to get some attention?
    Guide On Using JoliCloud With Windows: JoliCloud is a nifty operating system that's made for people who need a light-weight OS that's mostly cloud based. Check this guide on using it with Windows.
    Use Cameyo to Easily Create Portable Programs: Here's a nifty tool to make portable apps out of programs in Windows. Check out the guide to do it.
    Better Family Tech Support: A nice new site by Google to help family members understand how computers work.
    Track Your Stolen Mobile Phone With F-Secure: A useful anti-theft tool for your mobile phone.

    Super User Questions
    Another week with great answers to popular questions from Super User.
    What Chrome password manager fits my requirements?
    What's the best way to be able to reimage windows computers?
    Could you suggest feature-rich disk-based personal backup program for linux (and I've seen a few)?
    What is IPv6 and why should I care?
    Is there any way to find out what programs are trying to connect to Internet on windows?

    How-To Geek Weekly Article Recap
    Here are our hottest articles full of geeky goodness from this past week at HTG.
    20 OS X Keyboard Shortcuts You Might Not Know
    Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.
    Is Your Desktop Printer More Expensive Than Printing Services?
    Ask the Readers: Would You Be Willing to Give Windows Up and Use a Different O.S.?
    The Twelve Days of Geekmas

    One Year Ago on How-To Geek
    Enjoy reading through our latest batch of retro-geek goodness from one year ago.
    Macrium Reflect is a Free and Easy To Use Backup Utility
    How To Turn a Physical Computer Into A Virtual Machine with Disk2vhd
    How To Restore Windows 7 from a System Image
    How To Manage Hard Drive Space Used by Windows 7 Backup and Restore
    How To Manage Hibernate Mode in Windows 7

    The Geek Note
    That is all we have for you this week, so see you back here again after the holidays! Got a great tip? Send it in to us at [email protected]. Photo by mitjamavsar.

    Read the article

  • Ask the Readers: Do You Use the Command Line?

    - by Asian Angel
    Most people have heard of it, but not everyone is familiar or comfortable with how to use this bastion of geekdom. This week we would like to know if you use the command line or not.

    The command line… the bastion of ultimate geekery in many people's eyes. You often hear people referring to doing things using the command line, so there must be something to it, right?

    For some people, using the command line is the best, most efficient, and easiest way to do things on their systems. These are the people that many of us wish we were like. Next you have those who are proficient at using the command line but do not rely on it for everything they do on their systems. Then there are people who know how to perform some tasks or hacks using the command line but may not be as comfortable or knowledgeable as they wish to be using it. Moving on, you find those who are interested in learning how to use the command line and just need a small push to get started. Perhaps you feel too intimidated to learn it and just need the right opportunity to come along. And maybe you do not care one way or the other, so long as you get done what you want to do on your system. Or you may prefer to simply use a graphical interface, since that is quicker and easier for you (along with being familiar). You can find the whole range of people when it comes to using the command line…

    This week we would like to know if you use the command line or not. What command line category do you fit into? Power user? Casual usage? Totally lost? Let us know in the comments!

    Read the article

  • Desktop Fun: Merry Christmas Fonts

    - by Asian Angel
    Christmas will soon be here and there are lots of cards, invitations, gift tags, photos, and more to prepare beforehand. To help you get ready, we have gathered together a great collection of fun holiday fonts to help turn those ordinary looking holiday items into extraordinary looking ones. Note: To manage the fonts on your Windows 7, Vista, & XP systems, see our article here.

    Oldchristmas (Download)
    Holly (Download)
    Christmas Flakes *includes two font types (Download)
    Frosty (Download)
    Kingthings Christmas (Download)
    Candy Time (Download)
    BodieMF Holly (Download)
    Snowfall (Download)
    Snowflake Letters (Download)
    Hultog Snowdrift (Download)
    AlphaShapes Xmas Trees (Download)
    Christmas Tree (Download)
    PF Wreath (Download)
    Snowy Caps (Download)
    PF Snowman *includes three font types. Note: Shown in all capital letters here. (Download)
    BJF Holly Bells (Download)
    Christbaumkugeln (Download)
    Xmas Lights (Download)
    XmasDings *includes 62 individual characters: A-Z in capital letters, A-Z in lower case letters, and the numbers 0-9. (Download)
    WWFlakes *includes 62 individual characters: A-Z in capital letters, A-Z in lower case letters, and the numbers 0-9. (Download)

    For Christmas Card creating fun, and a great way to use your new fonts, see our MS Word Christmas Card project series here:
    Design and Print Your Own Christmas Cards in MS Word, Part 1
    Design and Print Your Own Christmas Cards in MS Word, Part 2: How to Print

    Want more great ways to customize your computer? Then be certain to look through our Desktop Fun section.

    Read the article

  • Migrating from IBM AIX/DB2 Power systems to Oracle Technologies

    - by zeynep.koch(at)oracle.com
    If you are planning to migrate from IBM DB2 on AIX Power Systems to a more open and better-performing computing environment, one that offers enhanced flexibility, clustering, availability, and security, as well as lower maintenance, then download this guide, which outlines migrating to Oracle Database 11g and Oracle Linux running on Oracle's Sun Fire X4800 server.

    This guide shows you how to:
    Move sample applications from IBM DB2 on an IBM Power System to Oracle Database 11g Release 2
    Install Oracle Linux and Oracle Database 11g Release 2 on Oracle's Sun Fire X4800 server
    Migrate user databases from the IBM Power System to Oracle's Sun Fire X4800 server

    Download

    Read the article

  • June Oracle Technology Network NEW Member Benefits - books books and more books!!!

    - by Cassandra Clark
    As we mentioned a few posts ago, we are working to bring Oracle Technology Network members NEW benefits each month. Listed below are several discounts on technology books brought to you by Apress, Pearson, CRC Press and Packt Publishing. Happy reading!!!

    Apress Offers - Get 50% off the eBook below using promo code ORACLEJUNEJCCF.
    Pro ODP.NET for Oracle Database 11g | Edmund T. Zehoo
    This book is a comprehensive and easy-to-understand guide for using the Oracle Data Provider (ODP) version 11g on the .NET Framework. It also outlines the core GoF (Gang of Four) design patterns and coding techniques employed to build and deploy high-impact mission-critical applications using advanced Oracle database features through the ODP.NET provider.

    Pearson Offers - Get 35% off all titles listed below using code OTNMEMBER.
    SOA Design Patterns | Thomas Erl | ISBN: 0136135161
    In cooperation with experts and practitioners throughout the SOA community, best-selling author Thomas Erl brings together the de facto catalog of design patterns for SOA and service-orientation.
    Oracle Performance Survival Guide | Guy Harrison | ISBN: 9780137011957
    The fast, complete, start-to-finish guide to optimizing Oracle performance.
    Core JavaServer Faces, Third Edition | David Geary and Cay S. Horstmann | ISBN: 9780137012893
    Provides everything you need to master the powerful and time-saving features of JSF 2.0.
    Solaris Security Essentials | ISBN: 9780137012336
    A superb guide to deploying and managing secure computer environments.
    Effective C#, Second Edition | Bill Wagner | ISBN: 9780321658708
    Respected .NET expert Bill Wagner identifies fifty ways you can leverage the full power of the C# 4.0 language to express your designs concisely and clearly.

    CRC Press Offers - Use code 813DA to get 20% off the title below.
    Secure and Resilient Software Development
    This book illustrates all phases of the secure software development life cycle. It details quality software development strategies that stress resilience requirements with precise, actionable, and ground-level inputs.

    Packt Publishing Offers - Use the promo code "Java35June" to save 35% off of each eBook mentioned below.
    JSF 2.0 Cookbook | Anghel Leonard | ISBN: 978-1-847199-52-2
    Packed with fast, practical solutions and techniques for JavaServer Faces developers who want to push past the JSF basics.
    JavaFX 1.2 Application Development Cookbook | Vladimir Vivien | ISBN: 978-1-847198-94-5
    Fast, practical solutions and techniques for building powerful, responsive Rich Internet Applications in JavaFX.

    Read the article

  • Installation of LPRng (Ubuntu 13.04)

    - by Poulen
    I have problems with the LPRng installation (I am a Linux beginner).
    http://lprng.com/LPRng-Reference/LPRng-Reference.html#INSTALLATION - installation guide
    http://lprng.com/PrintingCookbook/index.html#AEN1563
    Could you please write here, step by step, what I have to type into the terminal for a successful installation? I'm trying to do the first step of the guide (h4: {4} % gunzip -c LPRng-.tgz | tar xvf -) but unsuccessfully. (I put the source file into usr/bin, usr/sbin and usr/etc.) I'm desperate, help me please :) Thank you, and sorry for my English.
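    For reference, a typical from-source build sequence, assuming the LPRng tarball follows the standard configure/make layout described in the reference guide. The version number below is a placeholder, and the tarball should be unpacked somewhere in your home directory, not in /usr/bin:

        # Unpack the source (modern tar decompresses .tgz itself;
        # the guide's gunzip -c LPRng-X.Y.Z.tgz | tar xvf - also works)
        tar xzf LPRng-X.Y.Z.tgz
        cd LPRng-X.Y.Z

        # Configure, build, and install (the install step needs root)
        ./configure
        make
        sudo make install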

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O and other resources on the server. Today's entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

    As always, we'll need some example data. In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

    USE master;
    GO
    IF DB_ID('SortTest') IS NOT NULL
        DROP DATABASE SortTest;
    GO
    CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
    GO
    ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
    GO
    ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
    GO
    ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
    ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
    ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
    ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
    ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
    ALTER DATABASE SortTest SET MULTI_USER;
    ALTER DATABASE SortTest SET RECOVERY SIMPLE;

    USE SortTest;
    GO
    CREATE TABLE dbo.TestCHAR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding CHAR(3999) NOT NULL,
        CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id),
    );
    CREATE TABLE dbo.TestMAX
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id),
    );
    CREATE TABLE dbo.TestTEXT
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding TEXT NOT NULL,
        CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id),
    );

    -- =============
    -- Load TestCHAR (about 3s)
    -- =============
    INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
    SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
    FROM
    (
        SELECT TOP (50000)
            n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
        FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
        ORDER BY n ASC
    ) AS Data
    ORDER BY Data.n ASC;

    -- ============
    -- Load TestMAX (about 3s)
    -- ============
    INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
    SELECT CONVERT(VARCHAR(MAX), padding)
    FROM dbo.TestCHAR
    ORDER BY id;

    -- =============
    -- Load TestTEXT (about 5s)
    -- =============
    INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
    SELECT CONVERT(TEXT, padding)
    FROM dbo.TestCHAR
    ORDER BY id;

    -- ==========
    -- Space used
    -- ==========
    --
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

    CHECKPOINT;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

    SELECT TOP (150)
        T.id,
        T.padding
    FROM dbo.Test AS T
    ORDER BY NEWID()
    OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

    Rather than attempting further guesses at the cause of the slowness, let's go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

    DECLARE @read BIGINT, @write BIGINT;

    SELECT  @read = SUM(num_of_bytes_read),
            @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        TC.id,
        TC.padding
    FROM dbo.TestCHAR AS TC
    ORDER BY NEWID()
    OPTION (MAXDOP 1);

    SET STATISTICS IO OFF;

    SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
            tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
            internal_use_MB =
            (
                SELECT internal_objects_alloc_page_count / 128.0
                FROM sys.dm_db_task_space_usage
                WHERE session_id = @@SPID
            )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    Let's take a closer look at the statistics and query plan generated from this. Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

    Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant – memory allocated for the query's use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

    Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot 'cheat' and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

    If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I'll cover that in a future post.

    CHAR(3999) Performance Summary:
    5 seconds elapsed time
    243MB memory grant
    195MB tempdb usage
    192MB estimated sort set
    25,043 logical reads
    Sort Warning

    Test 2 – VARCHAR(MAX)

    We'll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

    DECLARE @read BIGINT, @write BIGINT;

    SELECT  @read = SUM(num_of_bytes_read),
            @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        TM.id,
        TM.padding
    FROM dbo.TestMAX AS TM
    ORDER BY NEWID()
    OPTION (MAXDOP 1);

    SET STATISTICS IO OFF;

    SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
            tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
            internal_use_MB =
            (
                SELECT internal_objects_alloc_page_count / 128.0
                FROM sys.dm_db_task_space_usage
                WHERE session_id = @@SPID
            )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don't draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
    8 seconds elapsed time
    245MB memory grant
    391MB tempdb usage
    193MB estimated sort set
    25,043 logical reads
    Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

    DECLARE @read BIGINT, @write BIGINT;

    SELECT  @read = SUM(num_of_bytes_read),
            @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        TT.id,
        TT.padding
    FROM dbo.TestTEXT AS TT
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE);

    SET STATISTICS IO OFF;

    SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
            tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
            internal_use_MB =
            (
                SELECT internal_objects_alloc_page_count / 128.0
                FROM sys.dm_db_task_space_usage
                WHERE session_id = @@SPID
            )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms. If you look at the metrics we have been checking so far, it's not hard to understand why:

    TEXT Performance Summary:
    0.5 seconds elapsed time
    9MB memory grant
    5MB tempdb usage
    5MB estimated sort set
    207 logical reads
    596 LOB logical reads
    Sort warning

    SQL Server's memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn't matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

    SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don't really matter right now).

    OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn't SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

    The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The default behaviour of the TEXT type is to be stored off-row, unless the 'text in row' table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the 'large value types out of row' table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let's look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

    CREATE TABLE dbo.TestMAXOOR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id),
    );

    EXECUTE sys.sp_tableoption
        @TableNamePattern = N'dbo.TestMAXOOR',
        @OptionName = 'large value types out of row',
        @OptionValue = 'true';

    SELECT large_value_types_out_of_row
    FROM sys.tables
    WHERE [schema_id] = SCHEMA_ID(N'dbo')
    AND name = N'TestMAXOOR';

    INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
    SELECT SPACE(0)
    FROM dbo.TestCHAR
    ORDER BY id;

    UPDATE TM WITH (TABLOCK)
    SET padding.WRITE (TC.padding, NULL, NULL)
    FROM dbo.TestMAXOOR AS TM
    JOIN dbo.TestCHAR AS TC
        ON TC.id = TM.id;

    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

    CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

    DECLARE @read BIGINT, @write BIGINT;

    SELECT  @read = SUM(num_of_bytes_read),
            @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    SET STATISTICS IO ON;

    SELECT TOP (150)
        MO.id,
        MO.padding
    FROM dbo.TestMAXOOR AS MO
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE);

    SET STATISTICS IO OFF;

    SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
            tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
            internal_use_MB =
            (
                SELECT internal_objects_alloc_page_count / 128.0
                FROM sys.dm_db_task_space_usage
                WHERE session_id = @@SPID
            )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
    0.3 seconds elapsed time
    245MB memory grant
    0MB tempdb usage
    193MB estimated sort set
    207 logical reads
    446 LOB logical reads
    No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn't spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

    The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting 'large value types out of row'. Why? Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

    So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
    Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That's 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

    The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

    SET STATISTICS IO ON;

    WITH TestTable AS
    (
        SELECT * FROM dbo.TestCHAR
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id = ANY (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS
    (
        SELECT * FROM dbo.TestMAX
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS
    (
        SELECT * FROM dbo.TestTEXT
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    WITH TestTable AS
    (
        SELECT * FROM dbo.TestMAXOOR
    ),
    TopKeys AS
    (
        SELECT TOP (150) id
        FROM TestTable
        ORDER BY NEWID()
    )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1);

    SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

    Table 'TestCHAR'   logical reads 25511  lob logical reads 000
    Table 'TestMAX'    logical reads 25511  lob logical reads 000
    Table 'TestTEXT'   logical reads 00412  lob logical reads 597
    Table 'TestMAXOOR' logical reads 00413  lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.

CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

Table 'TestCHAR'   logical reads 835  lob logical reads   0
Table 'TestMAX'    logical reads 835  lob logical reads   0
Table 'TestTEXT'   logical reads 686  lob logical reads 597
Table 'TestMAXOOR' logical reads 686  lob logical reads 448

With the new index, all four queries use the same query plan.

Performance Summary:
- 0.3 seconds elapsed time
- 6MB memory grant
- 0MB tempdb usage
- 1MB sort set
- 835 logical reads (CHAR, MAX)
- 686 logical reads (TEXT, MAXOOR)
- 597 LOB logical reads (TEXT)
- 448 LOB logical reads (MAXOOR)
- No sort warning

I'll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings. It isn't about the internal differences between TEXT and MAX data types either. It isn't even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else:

Many developers might not have tuned our starting example query at all – 5 seconds isn't that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for 'just being slow' – who knows. 5 seconds isn't awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production, that's 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that's not the point either.

The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won't get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won't get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query), do it before you head out for lunch…

This post is dedicated to the people of Christchurch, New Zealand.

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

    Read the article

  • FREE eBook: .NET Performance Testing and Optimization (Part 1)

    In this first part of a complete guide to performance profiling, Paul Glavich and Chris Farrell explain why performance testing is a good idea and walk you through everything you need to know to set up a test environment. This comprehensive guide to getting started is an essential handbook for any programmer looking to set up a .NET testing environment and get the best results out of it. Download your free copy now.

    Read the article

  • Solaris 11 Resources

    - by user12618891
    Oracle Solaris 11 (November, 2011)
    - Oracle Solaris 11 Landing Page
    - Download Oracle Solaris 11
    - Oracle Solaris 11 Documentation
    - Solaris 11 End-of-Life Notices
    - What's New in Oracle Solaris 11 (blog)
    - Oracle Solaris 11 Feature Demo Videos (blog)

    Solaris 11 Developer Resources (November, 2011)
    - Oracle Solaris 11 ISV Adoption Guide
    - Oracle Solaris 11 Preflight Checker
    - The IPS System Repository (blog)
    - Packaging and Delivering Software with the Image Packaging System - A Developer's Guide
    - How to Create and Publish Packages to an IPS Repository on Solaris 11
    - Solaris 10 Branded Zone VM Templates for Solaris 11 (blog)
    - Oracle Solaris 11 Security: What's New for Developers
    - Optimizing Applications with Oracle Solaris Studio Tools and Compilers

    Other Solaris 11 Technology Spotlights (Landing Page)

    Read the article

  • SQLCruise Alaska was Amazing

    - by AllenMWhite
    You'd think that providing in-depth SQL Server training on a cruise ship would be an excuse for a vacation disguised as a business trip, but you'd be wrong. This past week I traveled with the founders of SQLCruise, Tim Ford and Brent Ozar, along with other top professionals in the SQL Server world - Jeremiah Peschka, Kendra Little, Kevin Kline and Robert Davis - and me. The week began with Brent presenting a session on Plan Cache Analysis, which I plan to start using very soon. After Brent, Kevin...(read more)

    Read the article

  • There once was in Dublin a query

    - by Paul Nielsen
    For 6 months I have been planning a secret trip to London in May as a surprise for my wife, who is of Irish heritage and accent. (I love how she says, "Aye laddie, kiss me I'm Irish," but that's for another blog.) The plan was to spend a week in London and then top it off with a visit to Dublin to see the Book of Kells (on my bucket list) and stay at Markree Castle at Sligo, Ireland (on her bucket list). The original plan was to have her boss assign mandatory vacation a few days before the trip (her...(read more)

    Read the article

  • How can I save my university's Computer Science & Engineering department? [closed]

    - by Blake
    I'm currently pursuing a B.S. in Computer Engineering at the University of Florida, and we're having a bit of a problem right now... The state recently passed a budget plan that cuts funding for higher education in Florida. The dean of UF's College of Engineering decided that the best way for us to absorb the blow is by executing the following plan:

    - All of the Computer Engineering degree programs (BS, MS and PhD) would be moved from the Computer & Information Science and Engineering Dept. to the Electrical and Computer Engineering Dept., along with most of the advising staff.
    - Roughly half of the faculty would be offered the opportunity to move to Electrical/Computer Eng., Biomedical Eng., or Industrial/Systems Eng.
    - Staff positions in CISE which are currently supporting research and graduate programs would be eliminated.
    - The activities currently covered by TAs would be reassigned to faculty, and the TA budget for CISE would be eliminated.
    - Any faculty member who wishes to stay in CISE may do so, but with a revised assignment focused on teaching and advising.

    In short: our department (at least as we know it) is being decimated. Computer & Information Science & Engineering (one of 9 departments in the College of Engineering) is taking more than 50% of the cuts. If you're interested in reading the full proposal, you can access it here.

    A vast, VAST majority of the students and faculty in the department are vehemently opposed to this plan; however, the dean is already taking measures to implement it. This is the only proposal on the table right now, and she has not entertained our requests for alternatives. She sees it as an obvious (albeit drastic) solution to our budget problem, citing that many other universities have combined Computer and Electrical Engineering departments. I'll bet those universities didn't have to eliminate an established department to get there, though.

    The budget goes into effect July 1, 2012 (this is non-negotiable), and the dean's proposal is currently set to be finalized some time next week. We don't have much time! My question to everyone here is this: are we overreacting to this plan, or are we justified? And could you explain why or why not? It's obvious that CISE students will resist any cuts to our department, but I'm curious to see what other people in the field have to say. Any feedback is greatly appreciated.

    I will select the answer that saves our department. Just kidding, I'll pick the one that best explains why this is a good or bad decision for the dean to make. Please note that anything you say can and will be used to further our cause (and we might track you down if you provide a compelling argument against us).

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL.

    The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision. The Guide details each of these stages and the technologies supporting them:

    - Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data, without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight, before data is loaded into Hadoop. In addition, sensitive data can be pre-processed; for example, healthcare or financial services records can be anonymized before transfer to Hadoop.
    - Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS.
    - Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform.
    - Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools (a sketch of this step appears at the end of this post).

    So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and the delivery of highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including:

    - Sentiment analysis
    - Marketing campaign analysis
    - Customer churn modeling
    - Fraud detection
    - Research and development
    - Risk modeling
    - And more

    As the guide discusses, Big Data promises a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable.

    Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
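    To make the “Decide” stage concrete, here is a minimal sketch of the kind of query that might run once Hadoop-derived results land back in MySQL. The churn_scores and customers tables, and all of their columns, are hypothetical – invented for illustration, not taken from the guide:

    -- Hypothetical example: churn scores computed in Hadoop, exported back
    -- into MySQL via Sqoop, and joined to live customer data to select
    -- high-risk customers for a retention offer.
    SELECT c.customer_id,
           c.email,
           s.churn_risk
    FROM customers AS c
    JOIN churn_scores AS s
        ON s.customer_id = c.customer_id
    WHERE s.churn_risk > 0.8
    ORDER BY s.churn_risk DESC
    LIMIT 100;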

    Read the article

  • Planning for Disaster

    There is a certain paradox in being advised to expect the unexpected, but the DBA must plan and prepare in advance to protect their organisation's data assets in the event of an unexpected crisis, and return them to normal operating conditions. Minimising downtime in such circumstances should be the aim of every effective DBA. To plan for recovery, it pays to have the mindset of a pessimist.

    Read the article

  • CIO's Corner: Achieving a Balance

    - by Michelle Kimihira
    Author: Rick Beers, Senior Director, Product Management, Oracle Fusion Middleware

    All too often, a CIO is unfairly characterized as either technology-focused or business-focused; as more concerned with either infrastructure performance or business excellence. It seems to me that this completely misses the point. I have long thought that a CIO has probably the most complex C-level position in an enterprise, one that requires an artful balance among four entirely different constituencies, often with competing values and needs. How a CIO balances these is the single largest determinant of success.

    I was reminded of this while reading the excellent interview of Mark Hurd by CNBC’s Maria Bartiromo in a recent issue of USA TODAY (Bartiromo: Oracle's Hurd is in tech sweet spot). The interview covers topics such as Big Data, Leadership and Oracle’s growth strategy. But the topic that really got my interest, and reminded me of the need for balance, was IT spending trends, on which Mark Hurd observed, “…budgets are tight. What most of our customers have today is both an austerity plan to save money and at the same time a plan to reapply that money to innovation. There isn't a customer we have that doesn't have an austerity plan and an innovation plan.”

    In an era of economic uncertainty, and an accelerating pace of business change, this is probably the toughest balance a CIO must achieve. Yet for far too many IT organizations, operating costs consume over 75% of their budgets, leaving precious little for innovation and investment in business-critical technology programs. I have found that many CIOs are trapped by their enterprise systems platforms, which were originally architected for Standardization, Compliance and tightly integrated linear Workflows. Yes, these traits are still required for specific reasons and cannot be compromised. But they are no longer enough. New demands are emerging: the explosion in the volume and diversity of Data, the Consumerization of IT, the rise of Social Media, and the need for continual Business Process Reengineering. These were simply not the design criteria for Enterprise 1.0, and attempting to meet them with current systems platforms results in an escalation in complexity and a resulting increase in operating costs for many IT organizations. This is the cost vs. investment trap, and it is what most constrains CIOs from achieving the balance they need.

    But there is a way out of this trap. Enterprise 2.0 represents an entirely new enterprise systems architecture, one that is ‘Business-Centric’ rather than ‘ERP-Centric’, which defined the architecture of Enterprise 1.0. Oracle’s best-in-class suite of Fusion Middleware products enables a layered approach to enterprise systems architectures that provides the balance an enterprise needs. The most exciting part of all this? The bottom two layers are focused upon reducing costs, and the upper two layers provide business value and innovation. Finally, the balance a CIO needs.

    Additional Information
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Quick Poll Results: Certifications You Are Pursuing in 2011

    - by Paul Sorensen
    We wanted to report on the results of our recent Quick Poll regarding certifications that people intend to pursue in 2011:

    - Over 58% of respondents said that they plan to pursue database certification.
    - Over 35% plan to pursue Java tracks.
    - PL/SQL was third, with almost 29% of respondents indicating it in their plans.
    - Almost 7% intend to pursue Solaris certification.

    Thank you to everyone who participated in our Quick Poll. Watch the Oracle Certification blog for additional opportunities to provide feedback.

    Read the article

  • Install latest ffmpeg for all users in Debian/Ubuntu

    - by Junaid
    I installed ffmpeg on Ubuntu using the guide at https://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide and it works well. I know that guide installs ffmpeg for the current user only. Can anybody help me install ffmpeg for all users? I think only a few changes to the steps in that guide are required, but I am not sure. Please help. Thank you.

    Read the article

  • WORK PLANS in 10 Minutes

    Most productive people use some form of a work plan to sort out what they need to do and to guide them when they are doing the work. People generally agree that they need a work plan for new work, b... [Author: Dr Neil Miller - Computers and Internet - August 24, 2009]

    Read the article

< Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >