Search Results

Search found 4625 results on 185 pages for 'split tunnel'.


  • Is it possible to change the look and feel of remote X applications running under Xming?

    - by Rasive
    I am running Eclipse remotely right now, displayed in Xming on my Windows PC through an SSH tunnel from my laptop running Ubuntu 11.10. As seen below, it doesn't look that bad, but it seems that my applications default to the standard theme when Xming cannot find any other theme for GTK+ applications. Is there anything I can do about this? Also, it would be nice if I could adjust the font settings to make the text more easily readable.

    Read the article

  • Is a VPN a good method for protecting data in an untrusted network? [closed]

    - by john
    I will be connecting my laptop to an untrusted network. If I set up OpenVPN on a server and use a VPN client on the laptop to connect through it, is that enough? Can someone perform a MITM attack or otherwise eavesdrop on my traffic? If someone on the local network port-scans my laptop, will the open ports be accessible to them while I use the VPN tunnel? Is there anything else I should keep in mind?

    Read the article

  • Developer Training – Employee Morals and Ethics – Part 2

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary - Part 5 If you have been reading this series of posts about Developer Training, you can probably determine where my mind lies in the matter – firmly “pro.”  There are many reasons to think that training is an excellent idea for the company.  In the end, it may seem like the company gets all the benefits and the employee has just wasted a few hours in a dark, stuffy room.  However, don’t let yourself be fooled; this is not the case! Training, Company and YOU! Do not forget that, as an employee, you are your company’s best asset.  Training is meant to benefit the company, of course, but in the end, YOU, the employee, are the one who walks away with a lot of useful knowledge in your head.  This post will discuss what to do with that knowledge, how to acquire it, and who should pay for it. Eternal Question – Who Pays for Training? When the subject of training comes up, money is often the sticky issue.  Some companies will argue that because the employee is the one who benefits the most, he or she should pay for it.  Of course, whenever money is discussed, emotions tend to follow along, and being told you have to pay for mandatory training often results in very unhappy employees – the opposite of what the training was supposed to accomplish.  Therefore, many companies will pay for the training.  However, if your company is reluctant to pay for necessary training, or is hesitant to pay for a specific course that is extremely expensive, there is always the art of compromise.  The employee and the company can split the cost of the training – after all, both the company and the employee will be benefiting. This kind of “hybrid” pay scheme can be split any way that is mutually beneficial.  There is the obvious 50/50 split, but for extremely expensive classes or conferences, this still might be prohibitively expensive for the employee.  If you are facing this situation, here are some example solutions you could suggest to your employer: travel reimbursement, paid leave, or payment of the tuition only.  There are even more complex solutions – the company could pay the employee back after the training and the related project have been completed. Training is not Vacation Once the classes have been settled on, and the question of payment has been answered, it is time to attend your class or travel to your conference!  The first rule is one that your mother probably instilled in you as well – have a good attitude.  You might be looking forward to your time off work and to an interesting class, hopefully with some friends and coworkers, but do not mistake this time for a vacation.  It can be tempting to only have fun, but don’t forget to learn as well.  I call this “attending sincerely.”  Pay attention, have an open mind and a good attitude, and don’t forget to take notes!  You might be surprised how many people will want to see what you learned when you go back. Report Back the Learning When you get back to work, those notes will come in handy.  Your supervisor and coworkers might want you to give a short presentation about what you learned.  Attending these classes can make you almost a celebrity.
    Don’t be too nervous about these presentations, and don’t feel like they are meant to be a test of your dedication.  Many people will be genuinely curious – and maybe a little jealous that you got to go learn something new.  Be generous with your notes and be willing to pass your learning on to others through mini-training sessions of your own. Practice New Learning On top of helping to train others, don’t forget to put your new knowledge to use!  Your notes will come in handy for this, and you can even include your plans for the future in your presentation when you return.  This is a good way to demonstrate to your bosses that the money they paid (hopefully they paid!) is going to be put to good use. Feedback to Manager When you return, be sure to set aside a few minutes to talk about your training with your manager.  Be perfectly honest – your manager wants to know the good and the bad.  If you had a truly miserable time, do not lie and say it was the best experience – you and others may be forced to attend the same training over and over again!  Of course, you do not want to sound like a complainer, so make sure that your summary includes the good news as well.  Your manager may be able to help you understand more of what they wanted you to learn, too. Win-Win Situation In the end, remember that training is supposed to be a benefit to the employer as well as the employee.  Make sure that you share your information and that you give feedback about how you felt the sessions went, as well as how you think this training can be implemented at the company immediately. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Edit Media Center TV Recordings with Windows Live Movie Maker

    - by DigitalGeekery
    Have you ever wanted to take a TV program you’ve recorded in Media Center and remove the commercials or save clips of favorite scenes? Today we’ll take a look at editing WTV and DVR-MS files with Windows Live Movie Maker. Download and Install Windows Live Movie Maker. The download link can be found at the end of the article. WLMM is part of Windows Live Essentials, but you can choose to install only the applications you want. You’ll also want to be sure to uncheck any unwanted settings, like setting Bing as your default search provider or MSN as your browser home page.  Add your recorded TV file to WLMM by clicking the Add videos and photos button, or by dragging and dropping it onto the storyboard.  You’ll see your video displayed in the Preview window on the left and on the storyboard. Adjust the Zoom Time Scale slider at the lower right to change the level of detail displayed on the storyboard. You may want to start zoomed out and zoom in for more detailed edits.  Removing Commercials or Unwanted Sections Note: Changes and edits made in Windows Live Movie Maker do not change or affect the original video file. To accomplish this, we will make cuts, or “splits,” at the beginning and end of the section we want to remove, and then we will delete that section from our project. Click and drag the slider bar along the storyboard to scroll through the video. When you get to the end of a row on the storyboard, drag the slider down to the beginning of the next row. We’ve found it easiest and most accurate to get close to the end of the commercial break and then use the Play button and the Previous Frame and Next Frame buttons underneath the Preview window to fine tune your cut point. When you find the right place to make your first cut, click the Split button on the Edit tab on the ribbon. You will see your video “split” into two sections. Now, repeat the process of scrolling through the storyboard to find the end of the section you wish to cut. When you are at the proper point, click the Split button again.  Now we’ll delete that section by selecting it and pressing the Delete key, selecting Remove on the Home tab, or by right-clicking on the section and selecting Remove.  Trim Tool This tool allows you to select a portion of the video to keep while trimming away the rest.  Click and drag the sliders in the Preview window to select the area you want to keep. The area outside the sliders will be trimmed away. The area inside is the section that is kept in the movie. You can also adjust the Start and End points manually on the ribbon.  Delete any additional clips you don’t want in the final output. You can also accomplish this by using the Set start point and Set end point buttons. Clicking Set start point will eliminate everything before the start point. Set end point will eliminate everything after the end point. And you’re left with only the clip you want to keep.  Output your Video Select the icon at the top left, then select Save movie. All of these settings will output your movie as a WMV file, but file size and quality will vary by setting. The Burn to DVD option also outputs a WMV file, but then opens Windows DVD Maker and prompts you to create and burn a DVD.  Conclusion WLMM is one of the few applications that can edit WTV files, and it’s the only one we’re aware of that’s free.
    We should note that only WTV and DVR-MS files will appear in the Recorded TV library in Media Center, so if you want to view your WMV output file in WMC you’ll need to add it to the Video or Movie library. Would you like to learn more about Windows Live Movie Maker? Check out our article on how to turn photos and home videos into movies with Windows Live Movie Maker. Need to add videos from a network location? WLMM doesn’t allow this by default, but you can check out how to add network support to Windows Live Movie Maker. Download Windows Live

    Read the article

  • Working with Reporting Services Filters–Part 1

    - by smisner
    There are two ways that you can filter data in Reporting Services. The first way, which usually provides a faster performance, is to use query parameters to apply a filter using the WHERE clause in a SQL statement. In that case, the structure of the filter depends upon the syntax recognized by the source database. Another way to filter data in Reporting Services is to apply a filter to a dataset, data region, or a group. Using this latter method, you can even apply multiple filters. However, the use of filter operators or the setup of multiple filters is not always obvious, so in this series of posts, I'll provide some more information about the configuration of filters. First, why not use query parameters exclusively for filtering? Here are a few reasons: You might want to apply a filter to part of the report, but not all of the report. Your dataset might retrieve data from a stored procedure, and doesn't allow you to pass a query parameter for filtering purposes. Your report might be set up as a snapshot on the report server and, in that case, cannot be dynamically filtered based on a query parameter. Next, let's look at how to set up a report filter in general. The process is the same whether you are applying the filter to a dataset, data region, or a group. When you go to the Filters page in the Properties dialog box for whichever of these items you selected (dataset, data region, group), you click the Add button to create a new filter. The interface looks like this: The Expression field is usually a field in the dataset, so to make it easier for you to make a selection,the drop-down list displays all of the current dataset fields. But notice the expression button to the right, which means that you can set up any type of expression-not just a dataset field. To the right of the expression button, you'll find a data type drop-down list. It's important to specify the correct data type for the field or expression you're using. Now for the operators. Here's a list of the options that you have: This Operator Performs This Action =, <>, >, >=, <, <=, Like Compares expression to value Top N, Bottom N Compares expression to Top (Bottom) set of N values (N = integer) Top %, Bottom % Compares expression to Top (Bottom) N percent of values (N = integer or float) Between Determines whether expression is between two values, inclusive In Determines whether expression is found in list of values Last, the Value is what you're comparing to the expression using the operator. The construction of a filter using some operators (=, <>, >, etc.) is fairly simple. If my dataset (for AdventureWorks data) has a Category field, and I have a parameter that prompts the user for a single category, I can set up a filter like this: Expression Data Type Operator Value [Category] Text = [@Category] But if I set the parameter to accept multiple values, I need to change the operator from = to In, just as I would have to do if I were using a query parameter. The parameter expression, [@Category], which translates to =Parameters!Category.Value, doesn’t need to change because it represents an array as soon as I change the parameter to allow multiple values. The “In” operator requires an array. With that in mind, let’s consider a variation on Value. Let’s say that I have a parameter that prompts the user for a particular year – and for simplicity’s sake, this parameter only allows a single value, and I have an expression that evaluates the previous year based on the user’s selection. 
    Then I want to use these two values in two separate filters with an OR condition. That is, I want to filter either by the year selected OR by the year that was computed. If I create two filters, one for each year (as shown below), then the report will only display results if BOTH filter conditions are met – which would never be true.
    Expression: [CalendarYear]   Data Type: Integer   Operator: =   Value: [@Year]
    Expression: [CalendarYear]   Data Type: Integer   Operator: =   Value: =Parameters!Year.Value-1
    To handle this scenario, we need to create a single filter that uses the “In” operator, and then set up the Value expression as an array. To create an array, we use the Split function after creating a string that concatenates the two values, as shown below.
    Expression: =Cstr(Fields!CalendarYear.Value)   Data Type: Text   Operator: In   Value: =Split(CStr(Parameters!Year.Value) + "," + CStr(Parameters!Year.Value-1), ",")
    Note that in this case, I had to apply a string conversion to the year integer so that I could concatenate the parameter selection with the calculated year. Pay attention to the second argument of the Split function: you must use a comma delimiter for the result to work correctly with the In operator. I also had to change the Expression value from [CalendarYear] (or =Fields!CalendarYear.Value) so that the expression would return a string that I could compare with the values in the string array. More fun with filter expressions in future posts!
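
    For comparison with the report-side filter above, here is a rough T-SQL sketch of the query-parameter alternative mentioned at the start of this post, pushing the same “selected year OR previous year” logic into the WHERE clause. The table and column names (a fact table joined to a date dimension exposing CalendarYear) are illustrative AdventureWorks-style assumptions, not the exact dataset used in the article.

      -- Hypothetical dataset query: filter in the WHERE clause instead of a report filter.
      -- @Year maps to the single-value report parameter.
      SELECT d.CalendarYear,
             SUM(f.SalesAmount) AS TotalSales
      FROM   dbo.FactResellerSales AS f
      JOIN   dbo.DimDate AS d
             ON d.DateKey = f.OrderDateKey
      WHERE  d.CalendarYear IN (@Year, @Year - 1)   -- selected year OR the computed previous year
      GROUP BY d.CalendarYear;

    The trade-off is the one the post opens with: a WHERE clause filters at the source and is usually faster, but it cannot be applied selectively to just part of the report, to a stored-procedure dataset, or to a report snapshot.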

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements for the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages have a fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well i.e.: we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always only have the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync i.e.: changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress it, and if that fails as well, we split the page into two and recompress both pages. Now let's talk about the three major improvements that we made in MySQL 5.6. Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happened. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more than normal redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6, a new global configuration parameter, innodb_log_compressed_pages, controls whether these compressed page images are logged. The default value is true, which matches the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false.
    This is a dynamic parameter. Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level - the default value is 6 (the zlib default) and allowed values are 1 to 9. Again, the parameter is dynamic i.e.: you can change it on the fly. Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely i.e.: we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an acceptable range. It works the other way as well, that is, we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed. innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start padding. A value of 0 has the special meaning of disabling padding. innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
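
    To make the knobs above concrete, here is a minimal sketch of how they might be set for a compressed table in MySQL 5.6. The table name and columns are hypothetical; the variable names and defaults are the ones described in this post, and the file-per-table/Barracuda prerequisites for compressed tables are assumed setup rather than something covered here.

      -- Assumed prerequisites for ROW_FORMAT=COMPRESSED tables in 5.6:
      SET GLOBAL innodb_file_per_table = ON;
      SET GLOBAL innodb_file_format = 'Barracuda';

      -- A hypothetical table using 8K compressed pages:
      CREATE TABLE t_compressed (
          id      BIGINT NOT NULL PRIMARY KEY,
          payload VARCHAR(1000)
      ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

      -- The 5.6 parameters discussed above; all of them are dynamic:
      SET GLOBAL innodb_log_compressed_pages = OFF;             -- only if the zlib version will not change across recovery
      SET GLOBAL innodb_compression_level = 6;                  -- zlib level, allowed values 1 to 9
      SET GLOBAL innodb_compression_failure_threshold_pct = 5;  -- start padding after this % of failures (0 disables padding)
      SET GLOBAL innodb_compression_pad_pct_max = 50;           -- reserve at most this % of the 16K page as pad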

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publically facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also works with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publically exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they support a web-site with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
Customizing your URL Rewriting rules further is easy to-do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well.  I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • Quick 2D sight area calculation algorithm?

    - by Rogach
    I have a matrix of tiles, and on some of those tiles there are objects. I want to calculate which tiles are visible to the player and which are not, and I need to do it quite efficiently (so it would compute fast enough even when I have a big matrix (100x100) and lots of objects). I tried to do it with Bresenham's algorithm, but it was slow. Also, it gave me some errors:
    ----XXX-       ----X**-       ----XXX-
    -@------       -@------       -@------
    ----XXX-       ----X**-       ----XXX-
    (raw version)  (Bresenham)    (correct, since tunnel walls are still visible at distance)
    (@ is the player, X is an obstacle, * is invisible, - is visible)
    I'm sure this can be done - after all, we have NetHack, Zangband, and they all dealt with this problem somehow :) What algorithm can you recommend for this? EDIT: Definition of visible (in my opinion): a tile is visible when at least a part (e.g. a corner) of the tile can be connected to the center of the player's tile by a straight line which does not intersect any obstacles.

    Read the article

  • Dig Deeper in Windows Defrag via Command Prompt

    - by Matthew Guay
    Windows users have learned over the years that they need to keep their computers defragmented to keep running at top speed.  While Windows Vista and 7 automatically defrag your disks, here are some ways you can dig deeper into the Windows Defragmenter.

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

    SET NOCOUNT ON;
    SET STATISTICS XML, IO, TIME OFF;

    DECLARE @PK INTEGER, @StartTime DATETIME;
    SET @StartTime = GETUTCDATE();

    DECLARE curUpdate CURSOR
        LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
    FOR
        SELECT L.pk
        FROM LOBtest L
        ORDER BY L.pk ASC;

    OPEN curUpdate;

    WHILE (1 = 1)
    BEGIN
        FETCH NEXT FROM curUpdate INTO @PK;

        IF @@FETCH_STATUS = -1 BREAK;
        IF @@FETCH_STATUS = -2 CONTINUE;

        UPDATE dbo.LOBtest
        SET some_value = some_value - 1
        WHERE CURRENT OF curUpdate;
    END;

    CLOSE curUpdate;
    DEALLOCATE curUpdate;

    SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!).  I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead.

Clustered Indexes

A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

    IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
        DROP TABLE dbo.LOBtest;
    GO
    CREATE TABLE dbo.LOBtest
    (
        pk INTEGER NOT NULL,
        some_value INTEGER NULL,
        lob_data VARCHAR(MAX) NULL,
        another_column CHAR(5) NULL,
        CONSTRAINT [PK dbo.LOBtest pk]
            PRIMARY KEY CLUSTERED (pk ASC)
    );
    GO
    DECLARE @Data VARCHAR(MAX);
    SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

    WITH Numbers (n) AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1, master.sys.columns C2
    )
    INSERT LOBtest WITH (TABLOCKX)
        (pk, some_value, lob_data)
    SELECT TOP (1000)
        N.n, N.n, @Data
    FROM Numbers N
    WHERE N.n <= 1000;

Now here’s a query to modify the cluster keys:

    UPDATE dbo.LOBtest
    SET pk = pk + 1;

As you can see from the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan.  The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

    Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0,
    lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

    Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0,
    lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.

    SQL Server Execution Times:
    CPU time = 483 ms, elapsed time = 17884 ms.

Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool.

If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around, with no LOB data or other non-key columns.  A unique non-clustered index (on a heap) works well too.  Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient.)

There are lots more fun combinations to try that I don’t have space for here.

Final Thoughts

The behaviour shown in this post is not limited to LOB data by any means.
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

Paul White
Email: [email protected]
Twitter: @PaulWhiteNZ

    Read the article

  • HTG Explains: Are You Using IPv6 Yet? Should You Even Care?

    - by Chris Hoffman
    IPv6 is extremely important for the long-term health of the Internet. But is your Internet service provider providing IPv6 connectivity yet? Does your home network support it? Should you even care if you’re using IPv6 yet? Switching from IPv4 to IPv6 will give the Internet a much larger pool of IP addresses. It should also allow every device to have its own public IP address, rather than be hidden behind a NAT router.

    IPv6 is Important Long-Term

    IPv6 is very important for the long-term health of the Internet. There are only about 3.7 billion public IPv4 addresses. This may sound like a lot, but it isn’t even one IP address for each person on the planet. Considering people have more and more Internet-connected devices — everything from light bulbs to thermostats are starting to become network-connected — the lack of IP addresses is already proving to be a serious problem. This may not affect those of us in well-off developed countries just yet, but developing countries are already running out of IPv4 addresses. So, if you work at an Internet service provider, manage Internet-connected servers, or develop software or hardware — yes, you should care about IPv6! You should be deploying it and ensuring your software and hardware works properly with it. It’s important to prepare for the future before the current IPv4 situation becomes completely unworkable. But, if you’re just a typical user, or even a typical geek with a home Internet connection and a home network, should you really care about your home network just yet? Probably not.

    What You Need to Use IPv6

    To use IPv6, you’ll need three things:

    An IPv6-Compatible Operating System: Your operating system’s software must be capable of using IPv6. All modern desktop operating systems should be compatible — Windows Vista and newer versions of Windows, as well as modern versions of Mac OS X and Linux. Windows XP doesn’t have IPv6 support installed by default, but you shouldn’t be using Windows XP anymore, anyway.

    A Router With IPv6 Support: Many — maybe even most — consumer routers in the wild don’t support IPv6. Check your router’s specification details to see if it supports IPv6 if you’re curious. If you’re going to buy a new router, you’ll probably want to get one with IPv6 support to future-proof yourself. If you don’t have an IPv6-enabled router yet, you don’t need to buy a new one just to get it.

    An ISP With IPv6 Enabled: Your Internet service provider must also have IPv6 set up on their end. Even if you have modern software and hardware on your end, your ISP has to provide an IPv6 connection for you to use it. IPv6 is rolling out steadily, but slowly — there’s a good chance your ISP hasn’t enabled it for you yet.

    How to Tell If You’re Using IPv6

    The easiest way to tell if you have IPv6 connectivity is to visit a website like testmyipv6.com. This website allows you to connect to it in different ways — click the links near the top to see if you can connect to the website via different types of connections. If you can’t connect via IPv6, it’s either because your operating system is too old (unlikely), your router doesn’t support IPv6 (very possible), or because your ISP hasn’t enabled it for you yet (very likely).

    Now What?

    If you can connect to the test website above via IPv6, congratulations! Everything is working as it should. Your ISP is doing a good job of rolling out IPv6 rather than dragging its feet. There’s a good chance you won’t have IPv6 working properly, however. So what should you do about this — should you head to Amazon and buy a new IPv6-enabled router or switch to an ISP that offers IPv6? Should you use a “tunnel broker,” as the test site recommends, to tunnel into IPv6 via your IPv4 connection? Well, probably not. Typical users shouldn’t have to worry about this yet. Connecting to the Internet via IPv6 shouldn’t be perceptibly faster, for example. It’s important for operating system vendors, hardware companies, and Internet service providers to prepare for the future and get IPv6 working, but you don’t need to worry about this on your home network. IPv6 is all about future-proofing. You shouldn’t be racing to implement this at home yet or worrying about it too much — but, when you need to buy a new router, try to buy one that supports IPv6.

    Image Credit: Adobe of Chaos on Flickr, hisperati on Flickr, Vox Efx on Flickr
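    If you would rather check from a script than from a website, the rough Python sketch below tries the same idea: it asks the resolver for an IPv6 (AAAA) address of a host and then attempts a connection over IPv6. The host name ipv6.google.com is only an assumed example of a host that publishes an AAAA record; substitute any IPv6-reachable host you trust.

        # Rough check for working IPv6 connectivity. Assumes ipv6.google.com
        # still publishes an AAAA record - swap in any IPv6-reachable host.
        import socket

        def has_ipv6_connectivity(host="ipv6.google.com", port=80, timeout=5):
            try:
                # Ask only for IPv6 results; raises socket.gaierror if none resolve
                infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
            except socket.gaierror:
                return False  # no AAAA record (or no IPv6-capable DNS)
            for family, socktype, proto, _, sockaddr in infos:
                try:
                    with socket.socket(family, socktype, proto) as s:
                        s.settimeout(timeout)
                        s.connect(sockaddr)  # actually try to reach it over v6
                        return True
                except OSError:
                    continue  # try the next address, if any
            return False

        if __name__ == "__main__":
            print("IPv6 looks usable" if has_ipv6_connectivity() else "No IPv6 connectivity")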

    Read the article

  • thin client solutions: x2go or LTSP

    - by guettli
    We want to use a thin client solution in our small company of about 20 PCs, but connecting from home is needed too. Ubuntu seems to favor LTSP, but the x2go FAQ says that LTSP is not well suited for WAN connections: "LTSP requires a high bandwidth on your network. It can efficiently be used in Local Area Networks (LANs) only." We tested the x2go client and it works very well, even when connecting from home (2k DSL) over an OpenVPN tunnel (fat client). Why should we use LTSP, and why x2go?

    Read the article

  • How do you administer cups remotely using the web interface?

    - by Evan
    I have an Ubuntu server in my apartment and I just got a printer, so it's time to share! In the past I've used CUPS on my desktop and I'd just point the browser to localhost:631 to set things up. Can I use the web-based admin tools remotely? I've been playing with the /etc/cups/cupsd.conf file and am currently at the point where I can direct a browser on my LAN to server-ip:631, but I'm getting a 403 Forbidden error. If it's not possible, or it's a bad idea for security reasons to allow remote administration of CUPS, would it be possible to accomplish this using an SSH tunnel to the server?
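    For what it's worth, the usual answer to that last question is yes: leave cupsd listening on localhost only and reach the admin pages through an SSH tunnel (for example, ssh -L 631:localhost:631 user@server from a terminal). The Python sketch below shows the same idea using the third-party sshtunnel package; the server address, user name, and key path are placeholders, not values taken from the question.

        # Hypothetical sketch: forward local port 631 to the CUPS web interface
        # on the server, then browse to http://localhost:631 as if it were local.
        # Requires the third-party 'sshtunnel' package (pip install sshtunnel).
        from sshtunnel import SSHTunnelForwarder

        with SSHTunnelForwarder(
            ("printserver.example.com", 22),         # placeholder server address
            ssh_username="evan",                     # placeholder user name
            ssh_pkey="/home/evan/.ssh/id_rsa",       # or ssh_password=...
            remote_bind_address=("127.0.0.1", 631),  # cupsd on the server
            local_bind_address=("127.0.0.1", 631),   # what the local browser talks to
        ) as tunnel:
            print("CUPS admin available at http://localhost:%d" % tunnel.local_bind_port)
            input("Press Enter to close the tunnel...")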

    Read the article

  • REST API rule about tunneling

    - by miku
    I just read this in the REST API Rulebook: "GET and POST must not be used to tunnel other request methods. Tunneling refers to any abuse of HTTP that masks or misrepresents a message’s intent and undermines the protocol’s transparency. A REST API must not compromise its design by misusing HTTP’s request methods in an effort to accommodate clients with limited HTTP vocabulary. Always make proper use of the HTTP methods as specified by the rules in this section." [highlights by me] But then a lot of frameworks use tunneling to expose REST interfaces via HTML forms, since <form> knows only about GET and POST. My most recent example is a MethodRewriteMiddleware for Flask (submitted by the author of the framework): http://flask.pocoo.org/snippets/38/. Is there any way to comply with the "Rule" without hacks or add-ons in web frameworks?
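    For reference, the kind of method-override middleware the linked snippet implements looks roughly like the hypothetical sketch below: a WSGI wrapper that lets an HTML form POST declare its intended verb through a _method query parameter (the real snippet's details may differ). This is exactly the tunneling the rulebook warns about, so one reasonable compromise is to treat it purely as a browser-compatibility shim and have every client capable of full HTTP (XMLHttpRequest, curl, other services) send PUT and DELETE directly.

        # Rough sketch of a method-override WSGI middleware; details are
        # illustrative, not a copy of the Flask snippet referenced above.
        from urllib.parse import parse_qs

        ALLOWED = {"GET", "POST", "PUT", "DELETE", "PATCH"}

        class MethodRewriteMiddleware:
            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                if environ["REQUEST_METHOD"] == "POST":
                    qs = parse_qs(environ.get("QUERY_STRING", ""))
                    method = qs.get("_method", [""])[0].upper()
                    if method in ALLOWED:
                        environ["REQUEST_METHOD"] = method  # tunnel the real verb
                return self.app(environ, start_response)

        # Hypothetical usage with a Flask app:
        #   app.wsgi_app = MethodRewriteMiddleware(app.wsgi_app)
        #   <form action="/items/42?_method=DELETE" method="POST">...</form>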

    Read the article

  • why would Remmina stop working?

    - by Chris Curvey
    Until sometime last night, I had Remmina working fine. I could run RDP through an SSH tunnel and all was well. Then it stopped working. I can get as far as the password dialog for my work machine, but then it just says "Cannot connect to RDP server localhost". I can't even find any logs that look interesting. I've re-installed Remmina, cleared my .remmina directory, restarted my machine, and even restarted my gateway. Just to make it really weird, my laptop (which has the same setup -- latest Ubuntu and Remmina) can make the connection just fine. It is even going through the same router, albeit wirelessly. Any thoughts?
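    One way to narrow a problem like this down is to test the local end of the SSH tunnel before Remmina gets involved. The small sketch below simply checks whether anything is listening on the forwarded port; 3389 on localhost is an assumption, so substitute whatever port your ssh -L forward actually binds.

        # Minimal sketch: is the local end of the SSH tunnel accepting
        # connections at all? Port 3389 is assumed - adjust to your forward.
        import socket

        def port_is_open(host="127.0.0.1", port=3389, timeout=3):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        print("tunnel endpoint reachable" if port_is_open()
              else "nothing listening - check the SSH tunnel itself")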

    Read the article

  • Reverse X11 forwarding

    - by Oli
    I was playing with my phone (which runs a Linux/X stack) last night and I managed to SSH into my desktop, run an application, and have it show up on my phone. It was awesome. Today I'd like to do more or less the opposite: I want to view an application running on my phone on my PC. I could install an SSH server on my phone, but I frankly don't fancy that, purely for security reasons; I want this to be initiated from my phone. Is there a way to connect from my phone, tunnel the PC's X connection back to the phone, and then run an application on the phone that shows on the PC?
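    In principle this can be done with an ordinary SSH local forward from the phone to the PC's X server, so no SSH server is needed on the phone. The sketch below illustrates the idea in Python; it assumes the PC's X server accepts TCP connections on port 6000 and that access has been granted (via xhost or xauth), neither of which is true by default on most modern desktops, and all host names, paths, and the sample application are placeholders.

        # Hypothetical sketch, run on the phone: forward a local "display port"
        # to the PC's X server, then launch an app with DISPLAY pointed at it.
        # Assumes the PC's X server listens on TCP 6000 and allows the client.
        import os
        import subprocess
        from sshtunnel import SSHTunnelForwarder

        with SSHTunnelForwarder(
            ("desktop.example.com", 22),              # placeholder PC address
            ssh_username="oli",                       # placeholder user
            ssh_pkey="/home/user/.ssh/id_rsa",
            remote_bind_address=("127.0.0.1", 6000),  # X display :0 on the PC
            local_bind_address=("127.0.0.1", 6010),   # becomes display :10 locally
        ):
            env = dict(os.environ, DISPLAY="localhost:10.0")
            subprocess.run(["xclock"], env=env)       # app runs here, shows on the PC

    The same idea is a one-liner from a terminal: ssh -L 6010:localhost:6000 user@pc, then DISPLAY=localhost:10.0 some-app on the phone.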

    Read the article

  • Connecting to an Amazon AWS database [closed]

    - by Adel
    So I'm a bit overwhelmed/bewildered by the whole concept of networking and remote desktops. The context is that in my company I need to access a remote database. The standard way I do this is to first connect using a VPN client (called Shrew Soft Access Manager); once that says "network device configured, tunnel enabled" I'm good to connect using the Windows "Remote Desktop Connection". But now our company has set up an Amazon AWS database, I'm told I need to connect, and I only need to use RDP. So I tried the standard Windows client, but it doesn't work. On Wikipedia I looked up remote desktop software and downloaded one called VNC Viewer, but that doesn't work either. Any advice/tips/comments appreciated. EDIT: YAYA! I finally got a little more connected. I had to use my username as a fully qualified name: Computer: XYZ.XYZ.XYZ.XYZ USERNAME: XYZ.XYZ.XYZ.XYZ\aazzam

    Read the article

  • Connect to MySQL on remote server from inside python script (DB API)

    - by Atul Kakrana
    Very recently I have started to write Python scripts that need to connect to a few databases on a MySQL server. The problem is that when I work from the office my scripts work fine, but running a script from home while on the office VPN generates a connection error. I also noticed that the MySQL client Squirrel cannot connect from my home either, but works fine on my office computer. I think both fail for the same reason. Do I need to create an SSH tunnel and forward the port? If yes, how do I do it? MySQL is installed on a server to which I have SSH access. Please help me with this. AK
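    One common approach is to open the SSH tunnel from inside the script itself, as in the hypothetical sketch below, which uses the third-party sshtunnel and PyMySQL packages. Every host name, user name, and credential shown is a placeholder; running a plain ssh -L 3306:localhost:3306 user@dbserver before the unchanged script achieves the same thing.

        # Hypothetical sketch: open an SSH tunnel to the MySQL host, then connect
        # through the tunnel's local end. Requires 'sshtunnel' and 'pymysql'
        # (pip install sshtunnel pymysql); all names below are placeholders.
        import pymysql
        from sshtunnel import SSHTunnelForwarder

        with SSHTunnelForwarder(
            ("dbserver.example.com", 22),             # the SSH host you already use
            ssh_username="atul",
            ssh_pkey="/home/atul/.ssh/id_rsa",
            remote_bind_address=("127.0.0.1", 3306),  # MySQL as seen from that host
        ) as tunnel:
            conn = pymysql.connect(
                host="127.0.0.1",
                port=tunnel.local_bind_port,          # free local port chosen by sshtunnel
                user="db_user",
                password="db_password",
                database="mydb",
            )
            try:
                with conn.cursor() as cur:
                    cur.execute("SELECT VERSION()")
                    print(cur.fetchone())
            finally:
                conn.close()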

    Read the article

  • My colleague can't visit our website through her provider after long downtime

    - by Peter Westerlund
    We did a front-page update some days ago that caused the site to crash, and the site was down for several hours. After troubleshooting, we concluded that we needed to cache more content; the page had been running too many queries. After fixing that and rebooting the server, we here in Sweden and Norway were again able to visit the site, but a colleague in Tunisia couldn't. The site seems to work from another Internet provider, but not from her own. What could have happened? And what should we do? Edit: I should add that she is able to visit the site through the tunnel at anonymouse.org.

    Read the article

  • Why do the interfaces show ipv6 address along with ipv4

    - by nixnotwin
    I have manually specified only IPv4 addresses for my interfaces, but all the interfaces automatically show an inet6 address as well. Does that mean Ubuntu starts an IPv6 tunnel by default? If it does, isn't that dangerous, as IPv6 assigns public IPs to all LAN clients? I only have a firewall on my NAT router, and my clients, whose interfaces show an IPv6 address, do not have firewalls. Here is the output:

        eth0      Link encap:Ethernet  HWaddr 34:dc:47:2e:ad:13
                  inet6 addr: fe80::28cf:38ff:fb7b:da19/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5783 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6098 errors:0 dropped:0 overruns:0 carrier:1
                  collisions:0 txqueuelen:1000
                  RX bytes:2961324 (2.9 MB)  TX bytes:1573757 (1.5 MB)
                  Interrupt:46

    Note: for privacy reasons I have modified the HWaddr and inet6 addr values.
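    As a side note, an address beginning with fe80:: is a link-local address, which every IPv6-capable interface assigns itself automatically; it is not evidence of a tunnel and is not reachable from outside the local segment. The small illustrative sketch below uses Python's ipaddress module to classify addresses like the ones in that output; the global example address is an assumption chosen only for contrast.

        # Small illustrative check: classify the addresses ifconfig/ip show.
        # fe80::... is link-local (auto-configured, not reachable from outside).
        import ipaddress

        samples = [
            "fe80::28cf:38ff:fb7b:da19",  # the kind of address in the output above
            "2001:4860:4860::8888",       # assumed example of a global IPv6 address
            "192.168.1.10",               # ordinary private IPv4 for comparison
        ]

        for addr in samples:
            ip = ipaddress.ip_address(addr)
            kind = ("link-local" if ip.is_link_local
                    else "private" if ip.is_private
                    else "global")
            print(f"{addr}: IPv{ip.version}, {kind}")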

    Read the article

  • How to stop getting too focused on a train of thought when programming?

    - by LDM91
    I often find myself getting too focused on a train of thought when programming, which results in me having what I guess could be described as "tunnel vision". As a result, I miss important details and clues, which means I waste a fair amount of time before finally deciding that the path I'm taking to solve the task is wrong. Afterwards, I take a step back, which almost always results in me discovering what I've missed in a lot less time. It's becoming really frustrating, as it feels like I'm wasting a lot of time and effort, so I was wondering if anyone else had experienced similar issues and had some suggestions to stop going down dead ends and programming "blindly", as it were!

    Read the article

  • x11 Remote Desktop with Ubuntu 12.04

    - by BSchlinker
    When I was running Debian, I was able to start a remote session over X11 by just typing gnome-session. However, with Ubuntu 12.04, this only seems to result in my desktop and background being forwarded over X11 -- the top bar (where the clock is) and the dock are both missing. I tried starting all of Unity by executing unity, but that just resulted in a segfault. How can I start a Unity 2D session over X11? Edit: I prefer X11 as I need to tunnel the connection over two other servers; I would need to do a good amount of port forwarding within SSH to get any other connections back. Of course, if someone has any other suggestions, I'm willing to listen.

    Read the article

  • Sidestep Automatically Secures Your Mac’s Connection on Unsecure Networks

    - by Jason Fitzpatrick
    If you’re wary of browsing on wide-open public Wi-Fi networks (and you should be), Sidestep is a free Mac application that routes your connection on an unsecured network through a secure proxy. Sidestep automatically detects when you are on an unprotected wireless network and forms an encrypted tunnel to the proxy you specified during setup. Any time you log on to a wide-open Wi-Fi node (such as at a coffee shop, airport, or other public area), you won’t be broadcasting your login credentials and other personal information in what amounts to plain text into the air around you. Anyone snooping on you or the network in general will simply see your stream of encrypted data going to the proxy. Hit up the link below to grab a copy and read additional information about setting up the program and finding/configuring a proxy server. Sidestep is freeware, Mac OS X only. Sidestep [via Gina Trapani]

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >