Search Results

Search found 3147 results on 126 pages for 'drupal alter'.

Page 106 of 126

  • View Mobile Websites in Windows with Safari 4 Developer Tools

    - by Matthew Guay
    Want to try out mobile websites designed for the iPhone and other mobile devices on your PC? Safari 4 for Windows lets you do this easily with its developer tools. By default, Safari shows standard desktop websites, but with a simple change you can switch it to work like Mobile Safari on the iPhone or iPod Touch. Getting Started: First, make sure you have Safari 4 for Windows installed. You can download Safari directly (link below) and install it as usual. Or, if you already have another Apple program installed, such as QuickTime or iTunes, you can install it from Apple Software Update. Simply enter apple software update in the Start menu search box, then select Safari 4 from the list of new software available. Click Install to automatically download and install Safari. Accept the license agreement, and Safari will install automatically. Once this is finished, Safari is ready to use. View Mobile Sites in Safari: First, we need to enable the developer tools. Click the gear icon on the toolbar and select Preferences. Click the Advanced tab, and check the box that says “Show Develop menu in menu bar”. Once you’ve closed the settings box, click the page icon, select Develop, then User Agent, and choose one of the Mobile Safari settings. In our test we chose Mobile Safari 3.1.2 – iPhone. To make your browser emulate a mobile device more closely, you can hide the bookmarks and tab bars for a more streamlined interface: click the gear icon and select “Hide Bookmarks Bar”, then repeat and click “Hide Tab Bar”. You can also shrink your window closer to the size of a mobile device screen. Once you’ve done these things, Safari should look similar to this screenshot. Here we have loaded Google.com, and you can see it in its iPhone-style interface. Simply enter any website into the address bar, and it will load in its mobile interface if it has one. Here are Google’s other mobile offerings, right inside Windows. Gmail loads messages with the default iPhone interface. One especially interesting mobile site is Apple’s online iPhone User Guide. When loaded in Safari with the iPhone setting, it uses a very nice mobile UI that works just like an iPhone app; in fact, you can even click and drag to scroll, just as you would with your finger on an iPhone. Conclusion: Even if you do not have a smartphone, you can still preview what websites will look like on one with this trick. Not all sites will work, of course, but it’s fun to play around with different sites that have mobile versions. Links: Safari 4 Download | Apple iPhone online user guide

    Read the article

  • Four New Java Champions

    - by Tori Wieldt
    Four luminaries in the Java community have been selected as new Java Champions. They are Agnes Crepet, Lars Vogel, Yara Senger and Martijn Verburg. They were selected for their technical knowledge, leadership, inspiration, and tireless work for the community. Here is how they rock the Java world: Agnes Crepet Agnes Crepet (France) is a passionate technologist with over 11 years of software engineering experience, especially in Java technologies, as a developer, architect, consultant and trainer. She has been using Java since 1999, implementing many kinds of applications (from 20 days to 10,000 man-days) for different business fields (banking, retail, and pharmacy). Currently she is a Java EE architect for a French pharmaceutical company, the homeopathy world leader. She is also the co-founder, with other passionate Java developers, of a software company named Ninja Squad, dedicated to Software Craftsmanship. Agnes is the leader of two Java User Groups (JUGs), the Lyon JUG and Duchess France, and the founder of the Mix-IT Conference and the Cast-IT Podcast, two projects about Java and Agile development. She speaks at Java and JUG conferences around the world and regularly writes articles about the Java ecosystem for the French print developer magazine Programmez! and for the Duchess Blog. Follow Agnes @agnes_crepet. Lars Vogel Lars Vogel (Germany) is the founder and CEO of the vogella GmbH and works as a Java, Eclipse and Android consultant, trainer and book author. He is a regular speaker at international conferences, such as EclipseCon, Devoxx, Droidcon and O'Reilly's Android Open. With more than one million visitors per month, his website vogella.com is one of the central sources for Java, Eclipse and Android programming information. Lars is a committer on the Eclipse project and received the "Eclipse Top Contributor Award" in 2010 and the "Eclipse Top Newcomer Evangelist Award" in 2012. Follow Lars on Twitter @vogella. Yara Senger Yara Senger (Brazil) has been a tireless Java activist in Brazil for many years. She is President of SouJava and an alternate representative of the group on the JCP Executive Committee. Yara has led SouJava in many initiatives, from technical events to social activities. She is co-founder and director of GlobalCode, which trains developers throughout Brazil. Last year, she was a recipient of the Duke's Choice Award, for the JHome embedded environment. Yara is also an active speaker, giving presentations in many countries, including at JavaOne SF, JavaOne Latin America, JavaOne India, JFokus, and JUGs throughout Brazil. Yara is editor of InfoQ Brasil and also frequently posts at http://blog.globalcode.com.br/search/label/Yara. Follow Yara @YaraSenger. Martijn Verburg Martijn Verburg (UK) is the CTO of jClarity (a Java/JVM performance cloud tooling start-up) and has over 12 years' experience as a Java/JVM technology professional and OSS mentor in a variety of organisations, from start-ups to large enterprises. He is the co-leader of the London Java Community (~2800 developers) and leads the global effort for the Java User Group "Adopt a JSR" and "Adopt OpenJDK" programmes. These programmes encourage day-to-day Java developer involvement with OpenJDK and Java standards (JSRs), an important relationship for keeping the Java ecosystem relevant to the 9 million Java developers out there today.
As a leading expert on technical team optimisation, his talks and presentations are in high demand at major conferences (JavaOne, Devoxx, OSCON, QCon), where you'll often find him challenging the industry status quo via his alter ego "The Diabolical Developer." You can read more in the OTN article "Challenging the Diabolical Developer: A Conversation with JavaOne Rock Star Martijn Verburg." Follow Martijn @karianna. The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. Congratulations to these new Java Champions!

    Read the article

  • Test and Report Add-on Compatibility in Firefox

    - by Asian Angel
    Now that the new version of Firefox is out, you probably have a favorite extension or two that has not been updated yet. You can get that extension working again, test it, and report back to Mozilla on how well it does with the Add-on Compatibility Reporter extension. Before: For our example we chose a great extension that unfortunately has not been updated yet. As you can see here, Firefox is refusing to let the extension install. After: As soon as you install Add-on Compatibility Reporter you will be presented with an information page on how the extension works and what you can do with it. You should definitely take a moment to read this, as it is very helpful. After trying our non-compatible extension again, we were able to proceed with the install process. Notice at the bottom that “compatibility checking” has been overridden. Success! As soon as we restarted our browser it was easy to see the “non-compatible icon” in the Add-ons Manager window…but the extension did install (terrific!). Clicking on the extension’s entry will reveal a new button in the lower right corner. Using the “Compatibility Drop-Down Menu” you can report whether the extension is working as well as before or is actually having problems. The extension that we used for our example had no problems whatsoever, so good news there. Whichever option you choose, you will be presented with a small “Report Window” with information about the extension, your browser’s version number, and your operating system. Click “Submit Report” to send it on its way. You will see a confirmation message letting you know that your report was successfully submitted. While the extension itself has not been altered in any form, at least you have it working again and have helped verify whether it still works well or not. Notice the “notation” now present in place of the “Compatibility Button” that lets you know that you have already taken care of that particular extension. Looking great… Conclusion: If you have a favorite extension that you miss using in the newest release of Firefox, then this is definitely an extension to add to your browser. Not only will your extension start working again, but you can let Mozilla know how well it is working and (hopefully) help get the extension updated. Links: Download the Add-on Compatibility Reporter extension (Mozilla Add-ons)

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor inclination to build in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate data range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll quickly spot trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.
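
    A minimal sketch of the kind of custom metric query described above, using hypothetical table and column names (Simple-Talk's actual schema is not shown in the post); a SQL Monitor custom metric is simply a query that returns a single integer for the tool to store and plot:

        -- Hypothetical custom-metric query: total number of approved comments.
        -- SQL Monitor polls this on a schedule and plots the returned integer over time.
        SELECT COUNT(*) AS ApprovedCommentCount
        FROM dbo.ArticleComments
        WHERE IsApproved = 1;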

    Read the article

  • SQLAuthority News – Follow up on – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    Last month I had a fantastic time with lots of puzzles and brain teasers; the amount of participation I received on the blog is indeed inspiring me to write more. One of the blog posts was about how to replace a column name in all the stored procedures. The article generated a very interesting conversation as a follow-up. Please read the original article Replace a Column Name in Multiple Stored Procedure all together before reading this blog further, as they are connected. Let us start with a few of the interesting comments. SQL Server expert Imran Mohammed posted a wonderful and excellent first note. I suggest all of you read it. Imran stresses data modelling and logical as well as physical design. Developers must create a logical design and get approval on naming conventions, data types, references, constraints, indexes, etc. He further suggested that one should not cut steps but must follow all the industry standards and guidelines. He extended my blog post with the following note – “Extending Pinal’s answer, what you can do is go to database properties, all tasks, script objects, in the scripting wizard select all the stored procedures for which you want to change the column name, export the query to a new window and then do find and replace, all in one window, and execute the script. But make sure you check what you are replacing; sometimes column names are also used in table names, for example, Table Name: Product and Column Name: ProductId, ProductName”. Thanks Imran, great points! Gatej Alexandru suggested that it is not a good idea to DROP and CREATE but rather to use ALTER, as quite possibly there may be permission issues as well. Very good point; let me see if I can write a blog post about it. Vinay Kumar and SQLStudent144 have proposed another method to achieve the same. I am combining their solutions and writing them here. Step 1. Press Ctrl+T or change to “Result to Text” mode. Step 2. Execute the command below. SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.' + referencing_entity_name + ']' FROM sys.dm_sql_referencing_entities('schema.objectname','OBJECT') Where schema.objectname is the object or table you are searching for. Step 3. Now copy the result and paste it in a new window. Again press Ctrl+T or change to “Result to Text” mode. Step 4. Execute the query. Step 5. Copy the result and paste it in a new window. Step 6. Now find your search text in the script, make the necessary changes and execute the script. Do not forget to remove any code in the result set that is not relevant to the T-SQL script. Digitqr suggests we can do this for other objects besides stored procedures as well. Iosif suggests using the SQL Search tool from Red Gate. I guess this sums it up well. We have an alternative perspective on our original issue of replacing a column name in multiple stored procedures. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
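
    As a rough sketch of the generate-and-edit approach discussed above (the object and column names are hypothetical, and every generated statement should be reviewed by hand before it is run), the definitions of the affected procedures can also be pulled from sys.sql_modules and turned into a first-cut ALTER script:

        -- Hypothetical sketch: list procedures whose definition mentions the old column name
        -- and produce a first-cut ALTER script via a simple text replace.
        -- Review each generated statement; a plain REPLACE can also hit table or variable names.
        SELECT
            OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
            OBJECT_NAME(m.object_id)        AS ProcedureName,
            REPLACE(
                REPLACE(m.definition, 'CREATE PROCEDURE', 'ALTER PROCEDURE'),
                'OldColumnName', 'NewColumnName') AS AlterScript
        FROM sys.sql_modules AS m
        WHERE OBJECTPROPERTY(m.object_id, 'IsProcedure') = 1
          AND m.definition LIKE '%OldColumnName%';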

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently, but the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type. From Books Online: Occurs when a task is waiting for I/Os to finish. ASYNC_IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. If the application connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type. Reducing ASYNC_IO_COMPLETION waits: When it is an issue related to IO, one should check the following things associated with the IO subsystem: Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.); note that it should be re-written to avoid this wait type. Proper placement of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc. Check the file statistics and see if there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats). Check the event log and error log for any errors or warnings related to IO. If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator would not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development setup (test environment). As soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden big improvement in performance. It is very likely that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (assuming the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes; Drop Unused Indexes. Checking Memory-Related Perfmon Counters: SQLServer: Memory Manager\Memory Grants Pending (consistently higher value than 0-2 is a concern) SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher value than your benchmark is a concern) SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system) SQLServer: Buffer Manager\Page Life Expectancy (consistently lower value than 300 seconds is a concern) Memory: Available Mbytes (information only) Memory: Page Faults/sec (benchmark only) Memory: Pages/sec (benchmark only) Checking Disk-Related Perfmon Counters: Average Disk sec/Read (consistently higher value than 4-8 milliseconds is not good) Average Disk sec/Write (consistently higher value than 4-8 milliseconds is not good) Average Disk Read/Write Queue Length (consistently higher value than the benchmark is not good) Read all the posts in the Wait Types and Queue series. 
Note: The information presented here is based on my experience and I make no claim that it is completely accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
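
    As a minimal sketch of how you might check for this wait type on your own instance (a standard dynamic management view; only VIEW SERVER STATE permission is assumed), the cumulative figures can be read from sys.dm_os_wait_stats:

        -- Cumulative ASYNC_IO_COMPLETION waits since the last restart
        -- (or since the wait statistics were last cleared).
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'ASYNC_IO_COMPLETION';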

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full source of the kernel, it is good enough for tracing it, and even for tweaking the kernel. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the changes you made. So, first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\ And the next question will be "How do I build it?" Some of you may say just run "build -c" there and it should be good. If you are the owner of the kernel and have the full source, that is definitely the right answer, but neither applies in our case. So what should I do? Let's dig deeper into the coreos\nk folder; there are a couple of subfolders: CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps: Back up the source code; I suggest the whole %_WINCEROOT%\private\winceos\COREOS\nk\ Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way. Make whatever modifications you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ build -c in %_WINCEROOT%\private\winceos\COREOS\nk\kernel If everything went well so far, you should get a new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\ Basically, you have just rebuilt your new kernel; the rest is to run "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1" then "sysgen -p common nk nkprof" and "makeimg" if you can't wait the extra minutes for "blddemo clean -q". That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), that creates Sources files for public drivers that you want to modify and build in your platform directory. In fact, it's not only public drivers; virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, and of course that includes the kernel. So I am going to introduce a second way to build your own kernel, using the SYSGEN_CAPTURE tool. Again, the steps: Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel. Use "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you could also run "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory. Modify the SOURCES= macro to list the source files you added in step 4. 
For example, if you copied vm.c, it is going to be SOURCES=vm.c. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and proper include paths in your SOURCES file. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg", voila! Here are the macros you need to add for x86: CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE # Machine independent defines CDEFINES=$(CDEFINES) -DDBGSUPPORT _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc !IFDEF DP_SETTINGS CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS) !ENDIF ASM_SAFESEH=1 CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE

    Read the article

  • SQLAuthority News – Amazon Gift Card Raffle for Beta Tester Feedback for NuoDB

    - by pinaldave
    As regular readers know, I've been spending some time working with the NuoDB beta software. They contacted me last week and asked if I would give you a chance to try their new web-based console for their scalable, SQL-compliant database. They have just put out their final beta release, Beta 9. It contains a preview of a new web-based “NuoConsole” that will replace and extend the functionality of their current desktop version. I haven't spent any time with the new console yet, but a really quick look tells me it should make it easier to do deeper monitoring than the older one. It also looks like they have added query-level reporting through the console. I will try to play with it soon. NuoDB is doing a last, big push to get some more feedback from developers before they release their 1.0 product sometime in the next several weeks. Since the console is new, they are especially interested in some quick feedback on it before general availability. For SQLAuthority readers only, NuoDB will raffle off three $50 Amazon gift cards in exchange for your feedback on the NuoConsole preview. Here's how to enter: Download NuoDB Beta 9 here. You must build a domain before you can start the console. Launch the Web Console. Windows Code: start java -jar jar\nuodbwebconsole.jar Mac, Linux, Solaris, Unix Code: java -jar jar/nuodbwebconsole.jar Access the Web Console: Code: http://localhost:8080 When you have tried it out, go to a short (8 question) survey to enter the raffle. Click here for the survey. You must complete the survey before midnight EDT on October 17, 2012. Here's what else they are saying about this last beta before general availability: Beta 9 now supports the Zend PHP framework so that PHP developers can directly integrate web applications with NuoDB. Multi-threaded HDFS support – NuoDB Storage Managers can now be configured to persist data to the high-performance Hadoop distributed file system (HDFS). Beta 9 optimizes for multi-threaded I/O streams at maximum performance. This enhancement allows users to make Hadoop their core storage with no extra effort, which is a pretty cool idea. Improved Performance – On a single transaction node, Beta 9 offers performance comparable with MySQL and MariaDB. As additional nodes are added, NuoDB performance improves significantly at near-linear scale. Query & Explain Plan Logging – Beta 9 introduces SQL explain plans for your queries. Qualify queries with the word “EXPLAIN” and NuoDB will respond with the details of the execution plan, allowing performance optimization of SQL. Through the NuoConsole, you can now kill hung or long-running queries. Java App Server Support – Beta 9 now supports leading JEE web app servers including JBoss, Tomcat, and ColdFusion. They've also reported: improved PHP/PDO drivers, support for Drupal, a faster Ruby on Rails driver, a Hibernate Dialect that supports version 4.1, and, good news for my readers, numerous SQL enhancements. They will share the results of the web console feedback with me. I'll let you know how it goes. Also, the winner of their last contest was Jaime Martínez Lafargue! Do leave a comment here once you complete the survey. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL Authority Tagged: NuoDB
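
    As a quick sketch of the query-level reporting mentioned above (the table and column names here are invented purely for illustration), prefixing a statement with EXPLAIN asks NuoDB to respond with the details of the execution plan:

        -- Hypothetical example: ask NuoDB for the execution plan of a query
        EXPLAIN SELECT customer_id, SUM(amount)
        FROM orders
        GROUP BY customer_id;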

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder, with each script prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment. The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed. Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
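
    A minimal sketch of the kind of migration script described above, using hypothetical table and column names; adding a NOT NULL column is a classic case where the comparison engine cannot know how to populate existing rows, so the developer's intent is captured as a script:

        -- Hypothetical migration script: add a NOT NULL column to an existing table.
        -- A schema-comparison engine cannot infer how existing rows should be populated,
        -- so that decision is recorded here alongside the schema change.
        ALTER TABLE dbo.Customer ADD Region varchar(50) NULL;
        GO
        UPDATE dbo.Customer SET Region = 'Unknown' WHERE Region IS NULL;
        GO
        ALTER TABLE dbo.Customer ALTER COLUMN Region varchar(50) NOT NULL;
        GO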

    Read the article

  • PHP and Ruby: how to leverage both? and, is it worth it?

    - by dukeofgaming
    As you might have noticed from the title, this is not a "PHP or Ruby", or a "PHP vs. Ruby" question. This is a question on how to leverage PHP + Ruby in the same business. I myself am a PHP developer. I love the language because of its convenience, and I especially love the ecosystem of resources that surround it: Joomla, Drupal, Wordpress, Symfony2, Doctrine2, etc. However, the language itself can be a little disappointing sometimes. OTOH, Ruby looks like a very beautiful language and, from studying it superficially in several aspects, I could say it is leaner than Python as a language per se. However, from what I've seen there is pretty much only RoR making noise, and I don't like RoR so much (mainly because of its model layer). As Co-CEO and CTO at my company I'm trying to think outside of the box, since I want to start to focus on the human side of technology and see if it's sane to use both PHP and Ruby. Here are some random thoughts: Ruby folk seem to be generally better-suited programmers than PHP folk (in terms of averages). I know the previous statement is somewhat baloney, because very good and well-architected PHP can be written, but I'd say the Ruby programmer culture is better than PHP's. The thing about Ruby is that it seems better suited for rapid development. I don't really know if this is only the case for RoR, but I do know that there are certain practices (perhaps not so good) like monkey patching that let business needs be satisfied more quickly. From a marketing point of view (yep, sometimes you need to leverage the marketing BS for the sake of your company) Ruby seems better, while PHP carries some stigmas. PHP 5.4 is bringing traits, and that is better/cleaner than mixins. That could really make PHP as lean as Ruby, or more, for certain stuff. Now, concretely, my questions: Would a PHP programmer want to learn Ruby? I know I do; but conversely, would a Ruby programmer want to learn PHP? What kinds of projects or situations would be better suited for Ruby that are not suited for PHP? What is the actual ecosystem of Ruby? Aside from RoR, I have not seen other hyped technologies/frameworks (I've seen RSpec, but I confess to being a total noob on what BDD really consists of and its implications). Supposing there are certain types of projects ideal for Ruby, would there be a moment when it's better to move to PHP? I know PHP can handle lots of stuff, but I've read that Ruby has its limitations when scaling (or is that RoR? or is that baloney for both?). Finally and most importantly, would it be sane to maintain projects in two languages, or is that just stupid? As I said, it looks like Ruby is leaner in the short term and that can make a project happen and succeed, but I'm not so sure about that in the long run. I'm looking for insights mainly from people who know the strengths and weaknesses of the languages well, preferably both of them, and Ruby's ecosystem in real practice, meaning: frameworks and applications like the ones I quoted from PHP's ecosystem. Best regards and thanks for your time.

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition. And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive! It almost makes you want to go with Oracle. That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions. Alas…they don't…officially anyway. Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement: IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%' print 'Lame' else print 'Cool' If that statement returns true, it fails. (the print statements are just placeholders) Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror. Surprisingly, it's not actually implemented in SQL Server! Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or different names. You can create them using the MKLINK command in a command prompt: mklink /D D:\SkyDrive\Data D:\Data mklink /D D:\SkyDrive\Log D:\Log This creates a symbolic link from my data and log folders to my Skydrive folder. Any file saved in either location will instantly appear in the other. And since my Skydrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course). So what does this have to do with database mirroring? Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a Skydrive link. Although it doesn't actually use Skydrive, it performs the same function. So in effect, the following statement: ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022' Is turned into: mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$" The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers. I couldn't get the above statement to work correctly, but found that doing mklink to a local Skydrive folder gave me similar functionality. 
To wrap this up, all you have to do is the following: Install Skydrive on both SQL Servers (principal and mirror) and set the local Skydrive folder (D:\SkyDrive in these examples) On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log) Viola! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in the Skydrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either.  So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline.  It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today’s IT conversations. It’s a world of volume, velocity, variety – tantalizing us with its untapped potential. It’s a world of transformational game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it’s a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, retail - big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the ‘bigger’ picture? With big data we’re tapping into new information that didn’t exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data. And now for some worst practices that I'd recommend that you please not follow: Worst Practice Lesson 1: Throw away everything that you already know about data management, data integration tools, and start completely over. One shouldn’t forget what’s already running in today’s IT. Today’s Business Analytics, Data Warehouses, Business Applications (ERP, CRM, SCM, HCM), and even many social, mobile, cloud applications still rely almost exclusively on structured data – or what we’d like to call enterprise data. This dilemma is what today’s IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds? Worst Practice Lesson 2: Throw away all of your existing business applications … because they don’t run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications, as well as support for real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights. Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don’t mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. 
The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns. If you have not followed these worst practices 1-3 then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on twitter on #BridgingBigData or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • JDeveloper 11g R1 (11.1.1.4.0) - New Features on ADF Desktop Integration Explained

    - by juan.ruiz
    One of the areas that introduced many new features in the latest release (11.1.1.4.0) of JDeveloper 11g R1 is ADF Desktop Integration; in this article I’ll provide an overview of these new features. New ADF Desktop Integration Ribbon in Excel - After installing the ADF Desktop Integration add-in, and depending on the mode in which you open the desktop integration workbook, the ADF Desktop Integration ribbon for design time or runtime is displayed as a separate tab within Excel. In previous versions the ADF Desktop Integration environment was placed inside the Add-Ins tab. Above you can see both the design time ribbon and the runtime ribbon. On the design time ribbon you can manage the workbook and worksheet properties, worksheet component properties, diagnostics, and execution and publication of the workbook. The runtime version of the ribbon is totally customizable and represents what used to be the runtime menu on the spreadsheet; in this ribbon you can include all the operations and actions that can be executed by the end user while working with the spreadsheet data. Diagnostics - A very important aspect for developers is how to debug or verify the interactions of the client with the server; for that, ADF Desktop Integration has provided a series of diagnostics tools since day one. In this release the diagnostics tools are more visible and are really easy to configure. You can access the client console while testing the workbook, or you can simply dump all the messages to a log file, with the ability to set the output level for both. Security - There are a number of enhancements to security, but the one with the most impact for developers is that security is now optional when using ADF Desktop Integration. Until this version, every time you wanted to work with ADFdi the application had to be secured first. In this release security is optional, which means that if you have previously defined security on your application, then you must secure the ADFdi servlet as explained in one of my previous (ADD LINK) posts. On the other hand, if by the time you start working with ADFdi you have not defined security, you can test and publish your workbooks without adding security. Support for Continuous Integration - In this release we have added tooling for continuous integration builds. In the ADF Desktop Integration space, the concept translates to adding functionality that developers can use to publish ADFdi workbooks as part of their entire application build. For that purpose, we have a publish tool that can be easily invoked from an Ant task, so that all the design-time workbooks are re-published into the latest version as part of the application build process. Key Column - At runtime, on any worksheet containing editable tables you will notice an additional column called the key column. The purpose of this column is to make the end user aware that all rows in the table need to be selected at the time of sorting. Users cannot alter the value of this column. From the developer's point of view, no steps are required in order to have the key column included in the worksheets. Installation and Creation of New Workbooks - Both use cases can now be executed directly from JDeveloper. As part of the Tools menu options, the developer can install the ADF Desktop Integration designer. Also, creating new workbooks, which previously was done through the convert tool shipped with JDeveloper, is now done automatically from the New Gallery. 
Creating a new ADFdi workbook adds metadata information to the Excel workbook so you can work at design time. Other Enhancements - Support for Excel 2010, and ADF components that are read-only no longer allow their value to be changed (the cell in Excel is automatically protected), something that could cause confusion among customers of previous releases.

    Read the article

  • SQL – Business Intelligence: Derive Data or Information?

    - by Pinal Dave
    We all know the value of information in our lives. Whether it’s a personal decision or a business-initiated one, people need it. But the question is: who is to make the distinction between data and information? We all come across a whole lot of data daily that may or may not be significant. We filter what’s required and forget about the rest. Information is filtered and distilled data. Filtering and distillation can also alter its actual meaning and natural state. Therefore, in this blog we discover some ways to ensure that we’re using business intelligence derived from the right information for making critical management decisions. Four key questions managers must ask themselves before making a decision: 1. Am I working with data or information? 2. What is its context? 3. How recent is it? 4. How was it derived, or what is the source? The first question is probably the most important. You must know what you’re dealing with here. If you see use of adjectives and conclusions drawn, it’s information, not raw data. Your very next concern must be whether it is disguised to present a particular viewpoint or perspective. It makes a lot of difference if you take a decision based on someone’s propaganda to distort real facts. Therefore, the context and the intentions of the distillation process must be clear to you. The next consideration is whether the data is recent enough to hold any value. Since it has a very short shelf life, you must ensure that its context and value are not lost over time. The last and most important consideration is how it was derived in the first place. The observer effect is what calls the shots here. The source can change the context to a great extent if the collection methodology and purpose are not clear. Gathering intelligence for decision making requires users to be keen observers and not take the information provided at face value alone. These probing questions will allow you to make sure that you’re working with clean and accurate data devoid of any influence or manipulation. Only then can you be sure of deriving true business intelligence for your organization. BI technology is also a great way to ensure accuracy of reports. The SQL BI Platform provides advanced tools and techniques for all your BI needs and concerns. Koenig Solutions offers this course along with a host of other Business Intelligence and IT courses on all the latest technologies available in the market today. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Serial plans: Threshold / Parallel_degree_limit = 1

    - by jean-pierre.dijcks
    As a very short follow-up on the previous post, here is some more on getting a serial plan and why that happens. Another reason to get a serial plan (as opposed to auto DOP simply not being on, which we looked at in the earlier post), and often a more prevalent one, is that the plan does not take long enough to be considered for a parallel path. The resulting plan and note look like this (note that this is a serial plan!):

    explain plan for select count(1) from sales;

    SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------------------
    Plan hash value: 672559287
    --------------------------------------------------------------------------------------
    | Id  | Operation            | Name  | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
    --------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT     |       |     1 |     5   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE      |       |     1 |            |          |       |       |
    |   2 |   PARTITION RANGE ALL|       |   960 |     5   (0)| 00:00:01 |     1 |    16 |
    |   3 |    TABLE ACCESS FULL | SALES |   960 |     5   (0)| 00:00:01 |     1 |    16 |
    --------------------------------------------------------------------------------------

    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

    14 rows selected.

    The parallel threshold refers to parallel_min_time_threshold, and since I did not change the default (10s) the plan is not being considered for a parallel degree computation and therefore stays with the serial execution. Now we go into the land of crazy: assume I do want this DOP=1 to happen. I could set the parameter in the init.ora, but to highlight it in this case I changed it on the session:

    alter session set parallel_degree_limit = 1;

    The result I get is:

    ERROR:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-00096: invalid value 1 for parameter parallel_degree_limit, must be from among CPU IO AUTO INTEGER>=2

    Which of course makes perfect sense...
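
    For reference, a small sketch of the threshold parameter itself (set at session level here purely as an illustration; adjust with care, since the roughly 10-second default mentioned above is usually sensible):

        -- Have auto DOP consider statements estimated to run longer than 1 second
        -- (the default value is AUTO, which corresponds to roughly 10 seconds)
        alter session set parallel_min_time_threshold = 1;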

    Read the article

  • Modernizr Rocks HTML5

    - by Laila
    HTML5 is a moving target.  At the moment, we don't know what will be in future versions.  In most circumstances, this really matters to the developer. When you're using Adobe Air, you can be reasonably sure what works, what is there, and what isn't, since you have a version of the browser built in. With Metro, you can assume that you're going to be using at least IE 10.  If, however, you are using HTML5 in a web application, then you are going to rely heavily on Feature Detection.  Feature Detection is a collection of techniques that tell you, via JavaScript, whether the current browser has a feature natively implemented or not. Feature Detection isn't just there for the esoteric stuff such as geo-location, progress bars, <canvas> support, the new <input> types, audio, video, web workers or storage, but is required even for semantic markup, since old browsers make a pig's ear out of rendering this.  Feature detection can't rely just on reading the browser version and inferring from that what works. Instead, you must use JavaScript to check that an HTML5 feature is there before using it.  The problem with relying on the user-agent is that it takes a lot of historical data to work out what version does what, and, anyway, the user-agent can be, and sometimes is, spoofed. The open-source library Modernizr is just about the most essential JavaScript library for anyone using HTML5, because it provides APIs to test for most of the CSS3 and HTML5 features before you use them, and is intelligent enough to alter semantic markup into 'legacy' markup, using shims on page load, for old browsers. It also allows you to check which video codecs are installed for playing video, and it provides media queries and conditional resource loading (formerly YepNope.js).  Generally, Modernizr gives you the choice of what you do about browsers that don't support the feature that you want. Often, the best choice is graceful degradation, but the resource-loading feature allows you to dynamically load JavaScript shims, called 'polyfills', that replace the standard API for missing or defective HTML5 functionality.  As the Modernizr site says, 'Yes, not only can you use HTML5 today, but you can use it in the past, too!' The evolutionary progress of HTML5 requires a more defensive style of JavaScript programming, where the programmer adopts a mindset of fearing the worst (IE 6) rather than assuming the best, whilst exploiting as many of the new HTML features as possible for the requirements of the site or HTML application.  Why would anyone want the distraction of developing their own techniques to do this when Modernizr exists to do it for you? Laila

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter. Problem Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client, like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up until now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty well so far, in terms of functionality. But as the codebase grew bigger, this style of coding started to seem a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite), which can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like browsers do). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it has its own problems. Possible workaround (with problems) I could use JavaScript mixins that are only added when in a browser. Thus, at the end of the /shared/unit.js file, I would have this code: if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit); Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't. 
Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.

    Read the article

  • SQL SERVER – Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue

    - by pinaldave
    There is an old quote, “A Picture is Worth a Thousand Words”, and I believe it immensely. Quite often I get phone calls saying something is not working and asking if I can help. My reaction in most of the cases is that I need to know more - send me the exact error or a screenshot. Until and unless I see the error or reproduce the scenario myself, I prefer not to comment. Yesterday I got a similar phone call from an old friend, who was not sure what was going on. Here is what he said. “When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database as well as see all the user created tables and other objects. Can you help me fix it?” My immediate reaction was that he was facing a security and permission issue. However, before making that recommendation I suggested that he send me a screenshot of his own SSMS and his friend's SSMS. After carefully looking at both screenshots, I was very confident about the issue and we were able to resolve it. Let us reproduce the same scenario; maybe there is some learning in it for us. Issue: User not able to see user created objects. First let us see the image of my friend's SSMS screen. (Recreated on my machine) Now let us see my friend's colleague's SSMS screen. (Recreated on my machine) You can see that my friend could not see the user tables but his colleague certainly could. Now I believed it was a permissions issue. Further to this, I asked him to send me another image where I could see the various permissions of the user in the database. My friend's screen My friend's colleague's screen This indeed proved that my friend did not have access to the AdventureWorks database, and because of that he was not able to see the user created objects. He did have public access, which means he had rights similar to guest access. However, their SQL Server had followed my earlier advice on having limited rights for guest access, which means he was not able to see any user created objects. My next question was to validate what kind of access my friend's colleague had. He replied that the colleague is the admin of the server. I suggested that if my friend was supposed to have admin access to the database, he should request it from his colleague. My friend promptly did so, and on the following screen his colleague added him as an admin. You can do the same using the following T-SQL script as well. USE [AdventureWorks2012] GO ALTER ROLE [db_owner] ADD MEMBER [testguest] GO Once my friend was admin, he was able to access all the user objects just as he was expecting. Please note, this complete exercise was done on a development server. One should not play around with security on a live or production server. Security is such an issue that it should be left to only the senior administrators of the server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
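    As a side note, here is a minimal sketch - reusing the database and user names from the recreated scenario above - of how one could first check what the connected user can actually do, and then grant read-only access instead of db_owner when full admin rights are not really needed:
    USE [AdventureWorks2012]
    GO
    -- list the current user's effective permissions in this database
    SELECT * FROM fn_my_permissions(NULL, 'DATABASE')
    GO
    -- read-only alternative to db_owner: the user can see and query user created tables
    ALTER ROLE [db_datareader] ADD MEMBER [testguest]
    GO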

    Read the article

  • Wisdom Lies in Collaborative Power and Intelligence

    - by kellsey.ruppel
    By Alakh Verma, Director, Platform Technology Solutions   In my recent blog posts, I shared insights on Predictive Analytics (Will Predictive Analytics at 'Speed of Thoughts' Help Businesses?), Real Time Decisions (How critical are Real Time decisions in business today?) and their significance in our lives in general and in businesses today. In the current business paradigm shift - with the evolution of social business - it is paramount that businesses look for wisdom in collaborative power and intelligence and equip their employees with the tools to engage with one another. There is an old saying that five sticks tied together are stronger and harder to break than an individual stick. We have recently witnessed the power of ordinary people uniting and fighting collaboratively, using Facebook and Twitter to topple dictators in Tunisia, Egypt and Libya—and to threaten absolute rule in Syria. And in India, one man's (Anna Hazare's) campaign against corruption went viral, bringing thousands to the streets in support. As anyone who has worked in a sizeable organization knows, there is no guarantee that the organization as a whole will perform efficiently and achieve its goals, even if each employee is individually efficient and every team has a high level of productivity. To achieve enterprise productivity, it is necessary not only for individuals and groups to “do things right” by working productively but also for the enterprise as a whole to “do the right things” - form the right teams, make the right decisions, allocate resources correctly, and effectively coordinate activities across the entire organization. Most organizations fall short of the optimal level of enterprise productivity because of one or more of these reasons, all at a great cost to the business. They are disconnected from themselves, with various parts of the organization unintentionally working at cross-purposes with each other. Information that exists is not getting shared or reused. Human talent is not being applied where it is most needed. The same problems are being solved repeatedly by multiple groups. Intelligent collaboration through automated business processes has the ability to alter the course of any important business activity, with a potentially dramatic impact on the financial performance of the business. Whether it is a simple email exchange, a physical or virtual meeting, a task force, or a large-scale project, the activity is inherently collaborative. In fact, collaboration can be defined as the work that takes place among people when a business process is not pre-determining how the work should take place. Collaboration is many things: information sharing, brainstorming, problem solving, best practice negotiation, innovation, coordination of activity, alignment of purpose, and so forth. Collaboration is the “white space” between the business processes; it is the glue that holds an organization together, and the lubricant that allows the machinery to keep running. Real time search and collaborative capabilities - the right people with the right content, supported by defined processes - will provide unparalleled wisdom in the organization in today's most competitive business environment. Interestingly, technologies such as Oracle WebCenter offer these capabilities in our Web based business transactions and complement the overall collaborative intelligence and power to truly transform organizations into social businesses. 
Looking to learn more about engaging your employees in collaboration and providing a complete user experience for your customers? You won't want to miss our webcast today! Drive Online Engagement with Intuitive Portals and Websites

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services. Example - Order History Constancy Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see. Creating the Queries Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda. SELECT CustomerID,CompanyName, ( SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID ) As Orders, DATEDIFF(mm, ( SELECT Max(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) ,getdate() ) AS MonthsSinceLastOrder FROM Customers Creating Views To turn this or any query into a view, just put CREATE VIEW <view name> AS before it. If you want to change it later, use ALTER VIEW <view name> AS (a concrete sketch follows the ranking query below). Creating Computed Columns If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification or check out this article here. Advanced Scoring and Ranking One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen. We are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling or to find patterns describing your best customers. Number of orders First Order Date Last Order Date Percentage of months an order was placed since the first order. 
SELECT CustomerID, (SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) As Orders, (SELECT Max(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder, (SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder, DATEDIFF(mm,(SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID),getdate()) AS MonthsSinceFirstOrder, 100*(SELECT COUNT(DISTINCT 100*DATEPART(yy,OrderDate) + DATEPART(mm,OrderDate)) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) / DATEDIFF(mm,(SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID),getdate()) As OrderPercent FROM Customers
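A minimal sketch (the view name is my own) of the CREATE VIEW step mentioned above, wrapping the first order-history query:
CREATE VIEW CustomerOrderStats AS
SELECT CustomerID, CompanyName,
    -- total number of orders placed by the customer
    (SELECT COUNT(OrderID) FROM Orders
     WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
    -- months elapsed since the most recent order
    DATEDIFF(mm,
        (SELECT MAX(OrderDate) FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID),
        GETDATE()) AS MonthsSinceLastOrder
FROM Customers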

    Read the article

  • Reducing Deadlocks - not a DBA issue ?

    - by steveh99999
    As a DBA, I'm involved on an almost daily basis in troubleshooting 'SQL Server' performance issues. Often, this troubleshooting soon veers away from 'it's a SQL Server issue' to instead become a wider application/database design/coding issue. One common perception with SQL Server is that deadlocking is an application design issue - and is fixed by recoding... I see this reinforced by MCP-type questions/scenarios where the answer to prevent deadlocking is simply to change the order in code in which tables are accessed. Whilst this is correct, I do think it has led to a situation where many 'operational' or 'production support' DBAs, when faced with a deadlock, are happy to throw the issue over to developers without analysing it further. A couple of 'war stories' on deadlocks which I think are interesting: Case One - I had an issue recently on a third-party application that I support on SQL 2008. This particular third-party application has an unusual support agreement where the customer is allowed to change the index design on the third-party provided database. However, we are not allowed to alter application code or modify table structure. This third-party application is also known to encounter occasional deadlocks – indeed, I have documentation from the vendor that up to 50 deadlocks per day is not unusual! So, as a DBA I have to support an application which in my opinion has too many deadlocks - but I cannot influence the design of the tables or stored procedures for the application. This should be the classic blame-the-third-party-developers scenario, hoping the issue gets addressed in a future application release - i.e. we could wait years for this to be resolved and implemented in our production environment. But, since DBAs can change the index layout, is there anything I could still do to reduce the deadlocks in the application? I initially used SQL trace flag 1222 to write deadlock detection output to the SQL errorlog – using this I was able to identify one table heavily involved in the deadlocks. When I examined the table definition, I was surprised to see it was a heap – i.e. no clustered index existed on the table. Using SQL Profiler to see the locking behaviour and the plan for the query involved in the deadlock, I was able to confirm a table scan was being performed. By creating an appropriate clustered index, it was possible to produce a more efficient plan and better locking behaviour. So, fewer locks, held for less time = less possibility of deadlocks. I'm still unhappy about the overall number of deadlocks on this system - but that's something to be discussed further with the vendor. Case Two - a system which hadn't changed for months suddenly started seeing deadlocks on a regular basis. I love the 'nothing's changed' scenario, as it gives me the opportunity to appear wise and say 'nothing's changed on this system, except the data'. This particular deadlock occurred on a table which had been growing rapidly. By using DBCC SHOW_STATISTICS, the DBA team were able to see that the deadlocks seemed to be occurring shortly after auto-update stats had regenerated the table statistics using its default sampling behaviour. As a quick fix, we were able to schedule a nightly UPDATE STATISTICS WITH FULLSCAN on the table involved in the deadlock - thus greatly reducing the potential for stats to be updated via auto_update_stats, and consequently reducing the potential for a bad plan to be generated based on an unrepresentative sample of the data. 
This reduced the possibility of a deadlock occurring.  Not a perfect solution by any means, but quick, easy to implement, and needed no application code changes. This fix gave us some 'breathing space'  to properly fix the code during the next scheduled application release.   The moral of this post - don't dismiss deadlocks as issues that can only be fixed by developers...
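A minimal sketch of the two mitigations described above (the table name is made up for illustration): enable trace flag 1222 so deadlock graphs are written to the errorlog, and schedule a full-scan statistics update on the fast-growing table:
-- write deadlock detection output to the SQL Server errorlog, instance-wide
DBCC TRACEON (1222, -1);
GO
-- nightly job step: refresh statistics with a full scan rather than the default sample
UPDATE STATISTICS dbo.FastGrowingTable WITH FULLSCAN;
GO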

    Read the article

  • Sitting Pretty

    - by Phil Factor
    Guest Editorial for Simple-Talk IT Pro newsletter'DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed' I dislike the term 'avatar', when used to describe a portrait photograph. An avatar, in the sense of a picture, is merely the depiction of one's role-play alter-ego, often a ridiculous bronze-age deity. However, professional image is important. The choice and creation of online photos has an effect on the way your message is received and it is important to get that right. It is fine to use that photo of you after ten lagers on holiday in an Ibiza nightclub, but what works on Facebook looks hilarious on LinkedIn. My splendid photograph that I use online was done by a professional photographer at great expense and I've never had the slightest twinge of regret when I remember how much I paid for it. It is me, but a more pensive and dignified edition, oozing trust and wisdom. One gasps at the magical skill that a professional photographer can conjure up, without digital manipulation, to make the best of a derisory noggin (ed: slang for a head). Even if he had offered to depict me as a semi-naked, muscle-bound, sword-wielding hero, I'd have demurred. No, any professional person needs a carefully cultivated image that looks right. I'd never thought of using that profile shot, though I couldn't help noticing the photographer flinch slightly when he first caught sight of my face. There is a problem with using an avatar. The use of a single image doesn't express the appropriate emotion. At the moment, it is weird to see someone with a laughing portrait writing something solemn. A neutral cast to the face, somewhat like a passport photo, is probably the best compromise. Actually, the same is true of a working life in IT. One of the first skills I learned was not to laugh at managers, but, instead, to develop a facial expression that promoted a sense of keenness, energy and respect. Every profession has its own preferred facial cast. A neighbour of mine has the natural gift of a face that displays barely repressed grief. Though he is characteristically cheerful, he earns a remarkable income as a pallbearer. DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed. With an appropriate avatar, you could do away with a lot of the need for 'smilies' to give clues as to the meaning of what you've written on forums and blogs. If you had a set of avatars, showing the full gamut of human emotions expressible in writing: Rage, fear, reproach, joy, ebullience, apprehension, exasperation, dissembly, irony, pathos, euphoria, remorse and so on. It would be quite a drop-down list on forums, but given the vast prairies of space on the average hard drive, who cares? It would cut down on the number of spats in Forums just as long as one picks the right avatar. As an unreconstructed geek, I find it hard to admit to the value of image in the workplace, but it is true. Just as we use professionals to tidy up and order our CVs and job applications, we should employ experts to enhance our professional image. After all you don't perform surgery or dentistry on yourself do you?

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real world story of a friend who thought they had an issue with intrusion or a virus, whereas the issue was really in the code. I strongly suggest you read my earlier blog post Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 before continuing this blog post, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL. Building Sample Data USE [TestDB] GO -- Creating Table Products CREATE TABLE [dbo].[Products]( [ProductID] [int] NOT NULL, [ProductDesc] [varchar](50) NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED ( [ProductID] ASC )) ON [PRIMARY] GO -- Creating Table ProductDetails CREATE TABLE [dbo].[ProductDetails]( [ProductDetailID] [int] NOT NULL, [ProductID] [int] NOT NULL, [Total] [int] NOT NULL, CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED ( [ProductDetailID] ASC )) ON [PRIMARY] GO ALTER TABLE [dbo].[ProductDetails] WITH CHECK ADD CONSTRAINT [FK_ProductDetails_Products] FOREIGN KEY([ProductID]) REFERENCES [dbo].[Products] ([ProductID]) ON UPDATE CASCADE ON DELETE CASCADE GO -- Insert Data into Table USE TestDB GO INSERT INTO Products (ProductID, ProductDesc) SELECT 1, 'Bike' UNION ALL SELECT 2, 'Car' UNION ALL SELECT 3, 'Books' GO INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total]) SELECT 1, 1, 200 UNION ALL SELECT 2, 1, 100 UNION ALL SELECT 3, 1, 111 UNION ALL SELECT 4, 2, 200 UNION ALL SELECT 5, 3, 100 UNION ALL SELECT 6, 3, 100 UNION ALL SELECT 7, 3, 200 GO Select Data from Tables -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO Delete Data from Products Table -- Deleting Data DELETE FROM Products WHERE ProductID = 1 GO Select Data from Tables Again -- Selecting Data SELECT * FROM Products SELECT * FROM ProductDetails GO Clean up Data -- Clean up DROP TABLE ProductDetails DROP TABLE Products GO My friend was confused because no DELETE was being fired against the ProductDetails table, yet deletes were happening there. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified and data from Table A is deleted, any rows referencing it in another table via the foreign key are deleted as well. Workaround 1: Design Changes – 3 Tables Change the design to have more than two tables. Create one Product Master table with all the products. It should historically store the complete product list; no products should ever be removed from it. Add another table called CurrentProduct, which contains only the products that should be visible in the product catalogue. A third table should be the ProductHistory table. There should be no use of the CASCADE keyword among them. Workaround 2: Design Changes - Column IsVisible You can keep the same two tables: 1) Products and 2) ProductDetails. Add a column with the BIT datatype and name it IsVisible. Now change your application code to display the catalogue based on this column. There should be no need to delete anything. Workaround 3: Bad Advice (bad advice begins here) The reason I call this bad advice is that these suggestions are going to be bad for sure. You should make the necessary design changes and not use poor workarounds which can damage the system and database integrity further. Here are the examples: 1) Do not delete the data – well, this is not a real solution but can give time to implement design changes. 
2) Do not have ON DELETE CASCADE – in this case, you will have entries in ProductDetails which have no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate Data – you can move all the data of the Products table into the ProductDetails table and repeat it on each row. Now remove the CASCADE code. This will let you delete the Products table rows without any issue. There are so many things wrong with this suggestion that I will not even start here. (Bad advice ends here)  Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
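For what it is worth, here is a minimal sketch of Workaround 2 against the sample schema above - the flag column is named IsVisible as suggested in the post, and the catalogue query is my own illustration:
-- add the flag; existing rows default to visible
ALTER TABLE [dbo].[Products] ADD [IsVisible] BIT NOT NULL DEFAULT(1)
GO
-- the application 'delete' becomes an update, so ProductDetails rows survive
UPDATE Products SET IsVisible = 0 WHERE ProductID = 1
GO
-- the catalogue query shows only visible products
SELECT p.ProductID, p.ProductDesc, pd.Total
FROM Products p
INNER JOIN ProductDetails pd ON pd.ProductID = p.ProductID
WHERE p.IsVisible = 1
GO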

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have less than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching, by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics->Active Temp Tables. 
The script to understand the issue and the workaround is listed below:
set nocount on
set statistics time off
set statistics io off
drop table tab7
go
create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
go
create index test on tab7(c2, c1, c3)
go
begin tran
declare @i int
set @i = 1
while @i <= 50000
begin
insert into tab7 values (@i, 1, 'a')
set @i = @i + 1
end
commit tran
go
insert into tab7 values (50001, 1, 'a')
go
checkpoint
go
drop proc test_slow
go
create proc test_slow @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
end
go
dbcc dropcleanbuffers
set statistics time on
set statistics io on
go
--high reads as expected for parameter '1'
exec test_slow 1
go
dbcc dropcleanbuffers
go
--high reads that are not expected for parameter '2'
exec test_slow 2
go
drop proc test_with_recompile
go
create proc test_with_recompile @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
option (recompile)
end
go
dbcc dropcleanbuffers
set statistics time on
set statistics io on
go
--high reads as expected for parameter '1'
exec test_with_recompile 1
go
dbcc dropcleanbuffers
go
--high reads that are not expected for parameter '2'
--low reads on 3rd execution as expected for parameter '2'
exec test_with_recompile 2
go
drop proc test_with_alter_table_recompile
go
create proc test_with_alter_table_recompile @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
--to avoid caching of temporary tables one can create a constraint
--but this might lead to duplicate constraint name error on concurrent usage
alter table #temp1 add constraint test123 unique(c1)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
option (recompile)
end
go
dbcc dropcleanbuffers
set statistics time on
set statistics io on
go
--high reads as expected for parameter '1'
exec test_with_alter_table_recompile 1
go
dbcc dropcleanbuffers
go
--low reads as expected for parameter '2'
exec test_with_alter_table_recompile 2
go
drop proc test_with_index_recompile
go
create proc test_with_index_recompile @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
--to avoid caching of temporary tables one can create an index
create index test on #temp1(c1)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
option (recompile)
end
go
set statistics time on
set statistics io on
dbcc dropcleanbuffers
go
--high reads as expected for parameter '1'
exec test_with_index_recompile 1
go
dbcc dropcleanbuffers
go
--low reads as expected for parameter '2'
exec test_with_index_recompile 2
go

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site (that in itself is good to be able to say), I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to: select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid) from sales.salesorderheader where BillToAddressID = 985 A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid) and the function code is Create Function udfCleanGuid(@GUID uniqueidentifier) returns varchar(255) with schemabinding as begin Declare @RetStr varchar(255) Select @RetStr=CAST(@Guid as varchar(255)) Select @RetStr=REPLACE(@Retstr,'-','') return @RetStr end Executing the Select statement produced a plan of: Nothing surprising - a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column: Alter table sales.salesorderheader add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED A subtle change to the SELECT statement… select BillToAddressID,CleanedGuid from sales.salesorderheader where BillToAddressID = 985 and our new optimized plan looks like… Not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; the value was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event. Why did the recalculation happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF and run through its logic, and it decided that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since in reality they are invariably more expensive. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option - there may be other code reliant on it. We are left with only one real option: add the persisted column into the index. drop index Sales.SalesOrderHeader.idxBill go create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid,cleanedguid) Now if we repeat the statement select BillToAddressID,CleanedGuid from sales.salesorderheader where BillToAddressID = 985 We still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >