Search Results

Search found 11935 results on 478 pages for 'knowledge module'.

Page 68/478 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Do Managers in Python Multiprocessing module lock the shared data?

    - by AnonProcess
    This question has been asked before: http://stackoverflow.com/questions/2936626/how-to-share-a-dictionary-between-multiple-processes-in-python-without-locking However, I have several doubts regarding the program given in the answer. The issue is that the following step is not atomic: d['blah'] += 1. Even with locking, the answer provided in that question would lead to random results: Process 1 reads the value of d['blah'], saves it on its stack, increments it, and writes it back; in between, Process 2 can read d['blah'] as well. Locking means that while d['blah'] is being written or read, no other process can access it. Can someone clarify my doubts?
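
    A minimal sketch of the usual fix, assuming an explicit multiprocessing.Lock held across the whole read-increment-write (the worker function and process count here are illustrative, not from the linked answer):

        from multiprocessing import Process, Manager, Lock

        def worker(d, lock):
            for _ in range(1000):
                # Hold the lock across the entire read-increment-write;
                # without it, two processes can read the same value and
                # one of the increments is lost.
                with lock:
                    d['blah'] += 1

        if __name__ == '__main__':
            manager = Manager()
            d = manager.dict({'blah': 0})
            lock = Lock()
            procs = [Process(target=worker, args=(d, lock)) for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(d['blah'])  # 4000 with the lock; often less without it

    The Manager proxy does make each individual read or write of d['blah'] atomic, but d['blah'] += 1 is two proxy calls, so the compound operation still needs its own lock.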

    Read the article

  • Is `eval`ing in a CPAN module without localizing $@ a bug?

    - by rassie
    I think I've encountered a bug in Params::Validate, but I'm not sure whether I have identified the problematic piece of code correctly. The code in question failed to pass exceptions up the chain (using Try::Tiny), so I started debugging and found out that a class used inside the try block has a destructor. This destructor calls object methods which use Params::Validate, and looking into the Validate.pm source I see an eval without $@ localization, i.e. the global $@ gets overwritten. Now I see two options: (1) Params::Validate should always localize $@, and thus this is a bug that should be reported; or (2) the bug is in the class in question, because it shouldn't use Params::Validate in a destructor, and Params::Validate can stay as it is. Which one is it? How should I handle this situation? PS: I think that CPAN modules should be rock-solid and neither break themselves nor their environment, hence the question title.

    Read the article

  • Should I Split Tables Relevant to X Module Into a Different DB? (MySQL)

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly, and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so if/when one gets too large for one server I can more easily migrate some user data to different MySQL servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of it?

    Read the article

  • How to add JS/CSS files to Joomla modules?

    - by Apache Fan
    I am starting out with Joomla and am writing a simple Joomla module that uses some custom CSS and JS. When I distribute this module, I need my JS/CSS files to go with the ZIP, so I have added them to my module's ZIP file. This is what I need to know: how do I refer to these CSS/JS files in my module so that even if I distribute the module as a ZIP, I would not have to send the CSS/JS files separately? I tried looking at different solutions, including http://www.howtojoomla.net/how-tos/development/how-to-add-cssjavascript-to-your-joomla-extension but I was not able to figure out what the URL for the JS/CSS files should be. I am using Joomla 1.7 hosted on a cloud hosting site. Thanks

    Read the article

  • How do I set up a login module in my web application, in Java?

    - by Nitesh Panchal
    Hello, I am making a web application in Java. My usernames and passwords, as well as roles, are stored in a database. I could use jdbcRealm, but it forces me to use a specific structure which I don't have in my project. So how do I do it? I am using GlassFish v3.0 with NetBeans 6.8. I read a few tutorials about writing a custom LoginModule, but I am not getting anything out of them. How do you all do this in practice in your projects? If you can show me the code of a custom LoginModule, that will do too. Waiting for replies.

    Read the article

  • Drupal 6: heavy use of the Views module causing the site to go down because of too many MySQL connections

    - by artmania
    Hi friends, I have a HostGator Baby shared hosting plan that I am developing a Drupal site on. Everything was fine at the beginning; then, the further I got with development, the slower the site became. Now it is not working at all, giving MySQL errors like "Too many connections", etc. I created a great many blocks and pages with Views, so the site depends heavily on the database. Should I not have done that? Can it be the reason my site is not working now? I appreciate any help!

    Read the article

  • Organize routes in Node.js

    - by NilColor
    Hello! I am starting to look at Node.js, and I'm using Express. I have a question: how can I organize my web application routes? All the examples just put all the app.get/post/put() handlers in app.js, and that works just fine. This is good, but what if I have something more than a simple hello-world blog? Is it possible to do something like this:

        var app = express.createServer();
        app.get( '/module-a/*', require('./module-a').urls );
        app.get( '/module-b/*', require('./module-b').urls );

    and:

        // file: module-a.js
        urls.get('/:id', function(req, res){...}); // <- assuming this is the handler for /module-a/1

    In other words, I'd like something like Django's URLConf, but in Node.js.

    Read the article

  • Is it possible to figure out (approximately) what line of source code a kernel module is hung on, from a stack trace?

    - by Mike Heinz
    I'm trying to debug what appears to be a completion queue issue:

        Apr 14 18:39:15 ST2035 kernel: Call Trace:
        Apr 14 18:39:15 ST2035 kernel: [<ffffffff8049b295>] schedule_timeout+0x1e/0xad
        Apr 14 18:39:15 ST2035 kernel: [<ffffffff8049a81c>] wait_for_common+0xd5/0x13c
        Apr 14 18:39:15 ST2035 kernel: [<ffffffffa01ca32b>] ib_unregister_mad_agent+0x376/0x4c9 [ib_mad]
        Apr 14 18:39:16 ST2035 kernel: [<ffffffffa03058f4>] ib_umad_close+0xbd/0xfd

    Is it possible to turn those hex numbers into something close to line numbers?
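
    One common recipe, sketched here as an assumption rather than taken from the question: if the module was built with debug info (CONFIG_DEBUG_INFO) and you have the matching .ko file, gdb can translate a symbol+offset frame from the trace into a file and line. This small wrapper assumes gdb is on the PATH and that ib_mad.ko matches the running module:

        import subprocess

        def resolve(module_ko, frame):
            # 'list *(symbol+offset)' asks gdb for the source lines around
            # the address that symbol+offset resolves to in the object file.
            cmd = ['gdb', '--batch', '-ex', 'list *(%s)' % frame, module_ko]
            return subprocess.check_output(cmd, universal_newlines=True)

        # Frame taken from the call trace above.
        print(resolve('ib_mad.ko', 'ib_unregister_mad_agent+0x376'))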

    Read the article

  • I need to move a div'ed CSS module into the center of the page...

    - by JackMcE
    I'm building a website for someone, and they want the text and the bulk of the information to be centered on the page. The problem is, I can get everything contained in a tag and then assign the class, but I can't get the whole thing to center. It always hangs to the left, even if I apply centering to the div class; you could say it is stuck on the left side of the page when I want everything to be centered. I would just make everything format larger, but they want some space left in the background for the color and maybe some imagery later on (they haven't made up their minds). If you want to take a look, here is the link where I'm building and testing stuff. I know the header needs to be re-proportioned to fit with everything, but it's just there as a frame of reference. Don't worry about the header; just know that I want the white text/information area with the purple border to stay the same size but move to the center. If someone could tell me how to do that, I would appreciate it greatly.

    Read the article

  • How can I have my Python file show its Mercurial tag or revision as the module version?

    - by Chris R
    I'd like to add a --version command line option to my Python application that will show the right version depending on the tagged status of the revision: If the file comes from a version whose short hex ID was abcdef01 and that was tagged TAG, --version should show: MyApp Version TAG (abcdef01). If the file comes from the tip, --version should show: MyApp (tip). If the file comes from an arbitrary, untagged revision abcdef02, --version should show: MyApp (development, abcdef02). Is this possible? If so, how?
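
    A minimal sketch of one way to do this when running from a working copy, assuming the hg executable is on the PATH (the get_version helper and the MyApp name are illustrative; a packaged release would need the string baked in at build time instead):

        import subprocess

        def _hg(*args):
            # Run an hg command in the current repository; return stdout as text.
            return subprocess.check_output(('hg',) + args,
                                           universal_newlines=True).strip()

        def get_version(app='MyApp'):
            rev = _hg('id', '-i')           # short hex ID, e.g. abcdef01
            tags = _hg('id', '-t').split()  # tags on the working copy's parent
            if tags == ['tip']:
                return '%s (tip)' % app
            if tags:
                return '%s Version %s (%s)' % (app, tags[0], rev)
            return '%s (development, %s)' % (app, rev)

        if __name__ == '__main__':
            print(get_version())

    Note that hg id -i appends a '+' to the hash when the working copy has uncommitted changes, which is arguably worth keeping in a --version string.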

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer's site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it establishes a common background for all the people involved: an understanding of how the algorithms work, an understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more.

    Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project learn how to prepare the data in an efficient manner. Working together, our combined knowledge and expertise allow us to focus immediately on the most interesting attributes and to identify additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of the input data.

    I favor the customer's decision to assign additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.

    Simultaneously with the data preparation, the data overview is performed. The logic behind this task is the same – again I show the teams involved the expected results, how to achieve them, and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance: it is represented by a simple star schema and a simple OLAP cube that will, first of all, simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.

    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer's SME.

    After the data preparation, and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube.

    After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities for deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:

    - The customer learns the basic patterns of frauds and fraud detection
    - The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    - The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    - All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company, or for a subsidiary of a larger company, can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
    - Typically, the project results in several important "side effects":
      - Improved data quality
      - Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
      - Because eventually more minds get involved in the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project does not need months of engagement from my side. This is possible due to our preference to transfer the knowledge to the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly. I usually expect to perform the following tasks:

    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios:
      - Through reports
      - By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary while a central system is available as well
    - Of course, for the actual project, I would repeat the data and model preparation as needed

    It is virtually impossible to tell in advance how much time the deployment will take before we decide, together with the customer, what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days. The approximate timeline for the POC project is as follows:

    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, in my experience, is to restrict the time spent on the project in advance, in agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time, without the need for constant engagement with me.

    Read the article

  • Rules for Naming

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Naming Documents (or is it "Document, Naming"?)

        Tis but thy name that is my enemy;
        Thou art thyself, though not a Montague.
        What's Montague? It is nor hand, nor foot,
        Nor arm, nor face, nor any other part
        Belonging to a man. O, be some other name!
        What's in a name? That which we call a rose
        By any other name would smell as sweet;
        So Romeo would, were he not Romeo call'd,
        Retain that dear perfection which he owes
        Without that title. Romeo, doff thy name
        And for that name which is no part of thee
        Take all myself.

        Shakespeare – Romeo and Juliet, Act II, Scene 2

    We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed, it is because of their names that the doomed lovers die. There might be life and death in a name. (BTW, when I wrote this monograph, I was in Hatfield, PA. Remember the Hatfields and the McCoys?)

    This is a bit extreme, but in the field of Knowledge Management (KM), names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? "Managing a site" or "Site, managing"? Nine times out of ten I'd opt for the latter. Almost everything we do is "managing", so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to.

    So, answer this – is it a "rule of thumb" or a "thumb rule"? This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit concrete and abstract (more about this later), but to most people "thumb rule" is meaningless while "rule of thumb" is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect "rule of thumb". It is the more meaningful title.

    Abstract and concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner are concrete to a practitioner of one profession or another, and may even have different concrete meanings in different professional jargons. Think about "running": to an executive it means running a business; to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do in general. Even dictionaries and encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete term first, but within the audience's jargon.

    Even the title of this monograph is a question. Do I name it "Naming Documents" or "Documents, Naming"? Well, my own rule of thumb ("Here he goes again!?") states that the latter is better because it starts with a noun, but this is a document about naming more than it is about documents. The rules of naming also apply to graphs and charts, Excel spreadsheets, and so on. Thus, I vote for the former. A better title could have been "Naming Objects", only the word "object" is a bit too abstract. How about just "Naming" or "Naming, rules of"? You get the drift.

    One of the ways to resolve all of this is to store the documents in knowledge bases, which may become the subject of a future punditry. Knowledge bases use keywords to describe their content. Use a metadata store for the keywords to at least attempt some common ground.

    Here is another general rule (rule of thumb?!!) – put at least one keyword in the title, and use subtitles. Here is an example: "Migrating documents – Screening, cleaning, and organizing our knowledge." The main keyword is "documents"; next is "migrating". Other keywords appear in the subtitle: "screening", "cleaning", and "organizing". Any questions? Send me an amply named document by email: [email protected]

    Read the article

  • How to get the MSOnlineBackup Cmdlets?

    - by gregpakes
    I am trying to manage Azure Online Backup from PowerShell. There is a set of cmdlets called MSOnlineBackup; see technet.microsoft.com/en-us/library/dn249523.aspx. When I try to run:

        Import-Module MSOnlineBackup

    I get:

        The specified module 'MSOnlineBackup' was not loaded because no valid module file was found in any module directory.

    On the TechNet page it states that it is included in 4.0.4.0. If I run:

        $PSVersionTable.PSVersion

    it returns:

        Major: 4
        Minor: 0
        Build: -1
        Revision: -1

    I am running Windows 8.1. As you can probably tell, I am no PowerShell expert. I have also tried installing the Azure Backup Agent, but it says it needs a server operating system. Can anyone tell me how I can get the MSOnlineBackup module on my machine so I can start automating Windows Azure Backup?

    Read the article

  • A Week of DNN – March 19, 2010

    - by Rob Chartier
    DotNetNuke 5.3.0 Released!

    New features:
    - Templated User Profiles - User profile pages are now publicly viewable, and layout is controlled by the Admin.
    - Photo field in User Profile - Users can upload a photo to their profile. We also added support for user-specific data storage.
    - User Messaging - Users can send direct messages to other system users. This also includes an out-of-the-box asynchronous, provider-based message platform. You will see more of this in future releases.
    - Search Engine Sitemap Provider - The sitemap now allows module admins to plug in sitemap logic for individual modules.
    - Taxonomy Manager - Administrators can create flat or hierarchical taxonomies that can be shared and used across modules. Supporting SEO and social features at the core is an important piece for DotNetNuke moving forward.

    (Last minute update: 5.3.1 will be released with some last-minute updates early next week.)

    Also this week:
    - DotNetNuke as a Scalable Content Management System (CMS)
    - Power, Reliability & Feature Richness – DotNetNuke, an Open Source Framework
    - How to Search Engine Optimize DotNetNuke
    - DotNetNuke Training Video – Setting DNN Security
    - DotNetNuke Module Template [CS] (Free)
    - XsltDb - DotNetNuke XSLT module with database and Ajax support (Free)
    - Create a non-Award Winning DotNetNuke Skin (part 1, part 2, part 3)
    - Test Driven example module nearly refactored to Web Forms MVP
    - Ajax Search v1.0.0 Released! (Live Demo)
    - Tutorials: Backup DNN, Restore DNN, Move DNN from Backup (by Mitchel Sellers)
    - A tag cloud based on the new 5.3 Taxonomy
    - Engage: Tell-a-Friend 1.1 released (FREE module)
    - 549 DotNetNuke Videos: DNN Creative Magazine Issue 54 Out Now

    http://www.dotnetnuke.com/Community/Forums/tabid/795/forumid/112/threadid/355615/scope/posts/Default.aspx

    Read the article

  • update-manager crashes Ubuntu 12.04

    - by user205450
    The update-manager crashes with the following error:

        frank@darkstar2:~$ update-manager
        Traceback (most recent call last):
          File "/usr/bin/update-manager", line 33, in <module>
            from UpdateManager.UpdateManager import UpdateManager
          File "/usr/lib/python2.7/dist-packages/UpdateManager/UpdateManager.py", line 72, in <module>
            from Core.MyCache import MyCache
          File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/MyCache.py", line 34, in <module>
            import DistUpgrade.DistUpgradeCache
          File "/usr/lib/python2.7/dist-packages/DistUpgrade/DistUpgradeCache.py", line 60, in <module>
            KERNEL_INITRD_SIZE = _set_kernel_initrd_size()
          File "/usr/lib/python2.7/dist-packages/DistUpgrade/DistUpgradeCache.py", line 53, in _set_kernel_initrd_size
            size = estimate_kernel_size_in_boot()
          File "/usr/lib/python2.7/dist-packages/DistUpgrade/utils.py", line 74, in estimate_kernel_size_in_boot
            size += os.path.getsize(f)
          File "/usr/lib/python2.7/genericpath.py", line 49, in getsize
            return os.stat(filename).st_size
        OSError: [Errno 5] Input/output error: '/boot/abi-3.2.0-54-generic'

    I am not sure how to read the error, but it seems there is some error in getting a file's size. How do I fix it?
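
    A small diagnostic sketch (an assumption, not from the question): the traceback fails while stat-ing files under /boot, so a quick scan can confirm whether only the named file, or others too, raise I/O errors - which usually points at filesystem or disk corruption rather than an update-manager bug:

        import os

        # Scan /boot and report any file that cannot be stat-ed. Errno 5
        # (input/output error) on a regular file usually indicates on-disk
        # corruption; running fsck from a live CD/USB is the usual next step.
        for name in sorted(os.listdir('/boot')):
            path = os.path.join('/boot', name)
            try:
                print('%12d  %s' % (os.path.getsize(path), path))
            except OSError as exc:
                print('ERROR  %s: %s' % (path, exc))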

    Read the article

  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO, there are two things to consider: the relevant SEO practices, and the appropriate Drupal modules available. Optimizing your website for search engines is one of the most important aspects of launching and promoting your website, especially if ranking matters to you.

    Understanding SEO

    For starters, you begin with keyword research and then optimize your content according to your findings by tagging, meta tags, etc. Drupal modules, once installed, help you manage a lot of such parameters:
    - Identifying the target keywords
    - Using the Page Title and Token modules
    - PathAuto configuration
    - <H1> heading tags
    - Optimizing Drupal's default robots.txt file
    - Etc.

    While Drupal gives you a lot of ability to make your website content worthy and search engine friendly, it is important to make sure you are not crossing the line, or you could get penalized.

    Modules Overview

    Drupal's power is at its best when you have these modules and a great brain working together. The basic SEO improvements can be achieved easily with the modules listed below, but you can win magical rankings if you use them logically and wisely. Understanding your keyword competition and enhancing your content is the basic key to success - and of course the modules:

    - Pathauto: Automatically creates search engine friendly, readable URLs from tokens. A token is a piece of data from content, say the author's username or the content's title. For example, mysite.com/an-article rather than mysite.com/node/114 for every node you make.
    - NodeWords: Amazingly useful Drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases.
    - Page Title: Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node.
    - Global Redirect: Manage content duplication, 301 redirects, and URL validation with this small but powerful module.
    - Taxonomy Manager: Makes large additions or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed.
    - robotstxt: A robots.txt file is vital for ensuring that search engine spiders don't index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin.
    - xmlsitemap: An XML sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want included in the index. It can even automatically submit your sitemap to Google, Yahoo!, Ask.com and Windows Live every time you update a node, or at a specific interval.
    - Node Import: This module allows you to import a set of nodes from a Comma Separated Values (CSV) or Tab Separated Values (TSV) text file. It makes it easy to import hundreds to thousands of CSV rows, lets you tie those rows to CCK fields (or locations), and can file them under the right taxonomy hierarchy. This is a super life-saver module.

    Read the article

  • SQL Authority News – Play by Play with Pinal Dave – A Birthday Gift

    - by Pinal Dave
    Today is my birthday.

    Personal Note

    When I was young, I always looked forward to my birthday, as on this day I used to get gifts from everybody. Now that I am getting older, on each of my birthdays I have almost the same feeling, but the direction is different: now I feel like giving gifts to everybody. I have received lots of support, love, and respect from everybody, and now I must return it. Well, on this birthday, I have a very unique gift for everybody - my latest course on SQL Server.

    How I Tune Performance

    I often get questions asking how I work on a normal day, and how I work when a performance tuning project is assigned to me. Lots of people have expressed the desire that I explain and demonstrate my own method of solving a performance problem when facing a real-world problem. It is a pretty difficult task, as in the real world nothing goes as planned, and usually planned demonstrations have no place there. The real world demands real solutions, in a timely fashion. If consultants go to industry and do not demonstrate their capabilities in the very first few minutes, it does not matter how famous they are; the door is shown to them eventually. It is true, and in my early career I faced it quite commonly. I have learned the trick: be honest from the start and request absolutely transparent communication from the organization I am to consult for.

    Play by Play

    Play by Play is a very unique setup. It is not planned, and it is a step-by-step course. It is like a reality show - a very real encounter with the problem, and a real problem-solving approach. I had a great time doing this course. Geoffrey Grosenbach (VP of Pluralsight) sits down with me to see what a SQL Server admin does in the real world. This Play by Play focuses on SQL Server performance tuning, and I go over optimizing queries and fine-tuning the server. The table of contents of this course is very simple:

    - Introduction: In the introduction I explain my basic strategies when I am approached by a customer for performance tuning.
    - Basic Information Gathering: In this module I explain how I gather various information for a performance tuning project. It is very crucial for a consultant to demonstrate to the customer his capability of solving the problem; to resolve even a small problem with a big positive impact on performance, the consultant has to gather proper information from the start. I demonstrate in this module how one can collect all the important performance tuning metrics.
    - Removing Performance Bottlenecks: In this module, I build upon the statistics collected in the previous module. I analyze various performance tuning measures and immediately start implementing tweaks which improve the performance of my server. This is a very effective method, and it gives an immediate return on effort.
    - Index Optimization: Indexes are considered a silver bullet for performance tuning. However, that is not always true; there are plenty of examples where indexes perform even worse after being implemented. The key is to understand a few of the basic properties of indexes and implement the right things at the right time. In this module, I describe in detail how to do index optimization and what is right and wrong with indexes. If you are a DBA or developer, and your application is running slow, this is a must-attend module for you. I have some really interesting stories to tell as well.
    - Optimize Query with Rewrite: Every problem has more than one solution; in this module we will see another very famous, but hard to master, performance tuning skill - the query rewrite. There are a few dos and don'ts for any query rewrite. I take a very simple example and demonstrate how a query rewrite can improve the performance of a query many times over. I also share some real-world funny stories in this module.

    This course is hosted at Pluralsight. You will need a valid Pluralsight login to watch the Play by Play: Pinal Dave course. You can also sign up for a FREE trial of Pluralsight to watch this course. As today is my birthday, I will give a free code to 10 people (randomly chosen) who express their desire to learn this course. Please leave a comment and I will send you a free code to watch this course for free.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video

    Read the article
