Search Results

Search found 13262 results on 531 pages for 'tools utilities'.


  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    This question already has an answer here: How to identify spammy domains giving backlinks to my site (to submit in disavow links in WMT).

    A few months ago I received the following notification in Google Webmaster Tools for a website I look after: "Unnatural links to your site—impacts links. Google has detected a pattern of unnatural, artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster's control, so for this incident we are taking targeted action on the unnatural links instead of on the site's ranking as a whole. Learn more."

    The question here is: should we actively attempt to disavow these links, given that the action is seemingly targeted at just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster Tools, and so far I've been through the disavow and reconsideration request process six times, each round taking 2-3 weeks only to be supplied with just two more links that Google doesn't approve of. At this rate it will take me the rest of my natural life to clean up all these spammy links! Disavowing seems futile, since Google hasn't taken broad action against the website as a whole and (from what I can gather) has already nullified the value of the offending links. Under the quoted statement above, however, is a reconsideration request button that seems to imply I should be actively doing something here.

    UPDATE 14th October: I have since created a small .NET application that you can feed the CSV sample links file from Google Webmaster Tools. The tool crawls all the links and looks for specific linking patterns, as per some configurable match strings. I realised that many of the links Google is taking issue with were created by a rogue SEO firm we hired several years ago. All their links are appended with one of five different descriptions. The application uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow .txt file. In the end it had to come down to an algorithm, as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.
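    The update describes the tool's logic well enough to sketch. Here is a rough, minimal equivalent in Python rather than .NET, offered purely as an illustration: the assumption that the link URL sits in the first column of the exported CSV, and the example spam phrases, are mine and will need adjusting to your own export.

        import csv
        import re
        import urllib.request
        from urllib.parse import urlparse

        # Hypothetical examples of the boilerplate descriptions appended by the
        # rogue SEO firm -- substitute the five phrases you actually find.
        SPAM_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
            r"great deals on widgets",
            r"the best widget resource",
        )]

        def build_disavow(links_csv, out_path="disavow.txt"):
            """Fetch each linking page from the WMT sample CSV and collect the
            domains whose page content matches a known spam pattern."""
            domains = set()
            with open(links_csv, newline="", encoding="utf-8") as fh:
                for row in csv.reader(fh):
                    if not row or not row[0].startswith("http"):
                        continue  # skip the header row and blank lines
                    url = row[0].strip()  # assumes the link URL is column one
                    try:
                        html = urllib.request.urlopen(url, timeout=10).read()
                        html = html.decode("utf-8", "replace")
                    except Exception:
                        continue  # unreachable pages can't be inspected
                    if any(p.search(html) for p in SPAM_PATTERNS):
                        domains.add(urlparse(url).netloc)
            # Google's disavow file format: one "domain:" directive per line.
            with open(out_path, "w", encoding="utf-8") as out:
                out.write("\n".join(f"domain:{d}" for d in sorted(domains)))

        build_disavow("latest_links.csv")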


  • What's the deal with URLs for Yandex.Metrica not prepended with "http"?

    - by sharptooth
    The description of Yandex.Metrica explicitly says that URLs like //mc.yandex.ru/metrika/watch.js (with no http: in front), which the web site owner has to insert into his pages, are not erroneous. So, for example, this code: <img src="//mc.yandex.ru/watch/00000" style="position:absolute; left:-9999px;" alt="" /> is claimed to be okay. However, the code validator thinks such URLs are not okay, and I'd rather make the validator happy so that no one breaks the code later trying to "fix" it. Why are these URLs not prepended with http:? What happens if I actually prepend them with http:?
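    As an aside, the resolution rule for these scheme-relative URLs is easy to demonstrate with Python's standard library; the embedding page URLs below are just examples. A URL beginning with // inherits the scheme of the page it appears on, so the tracker loads over http on plain pages and over https on secure ones (hard-coding http: would trigger mixed-content warnings on https pages).

        from urllib.parse import urljoin

        # A "//host/path" reference inherits the scheme of the embedding page.
        print(urljoin("http://example.com/page", "//mc.yandex.ru/watch/00000"))
        # -> http://mc.yandex.ru/watch/00000
        print(urljoin("https://example.com/page", "//mc.yandex.ru/watch/00000"))
        # -> https://mc.yandex.ru/watch/00000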


  • Who Tests the Tester?

    It is scarcely surprising that it can take up to five years to release a new version of SQL Server when one understands the extent of the effort required to test it. When enterprises depend on the reliability of an application or tool such as SQL Backup, the contribution of the tester is of paramount importance. It is an interesting and enjoyable role as well, as Andrew Clarke found out by chatting to testers at Red Gate.


  • T4Toolbox and Visual Studio 2010

    - by Ben Griswold
    I’ve been using the T4Toolbox to help generate my ASP.NET MVC models and scaffolding for a while now.  Another developer tried using my generator project last week and ran into trouble due to a breaking change around the RenderCore() and TransformText() methods, made in support of Visual Studio 2010.  If you upgraded to the latest version of T4Toolbox and receive a build error similar to the following, you are probably in the same boat:

        GeneratedTextTransformation.[Template].RenderCore(): no suitable method found to override

    We took the easy way out: I had him uninstall the latest version of T4Toolbox and install version 9.7.25.1, which my templates were initially coded against.  For now, that worked great, but it sounds like I’ll be doing some rework of the 20+ templates in my project to support Visual Studio 2010 when we migrate later this month.


  • SVN Export or Recursively Remove .SVN Folders

    - by Ben Griswold
    I shared this script with a coworker yesterday. It doesn’t do much; it recursively deletes .svn folders from a source tree.  It comes in handy if you want to share your codebase or you get into a terrible spot with SVN and you just want to start all over: blow away all the SVN artifacts and use your mulligan. It’s true, you can get nearly the same result using the SVN export command, which copies your source sans the .svn folders to an alternate location.  The catch is that an export only includes those files/folders which exist under version control.  If you want a clean copy of your source – versioned or not – export just might not do. The contents of the .cmd file are as follows:

        for /f "tokens=* delims=" %%i in ('dir /s /b /a:d *.svn') do (
            rd /s /q "%%i"
        )

    Just download and drop the unzipped “SVN Cleanup.cmd” file into the root of the project, execute and away you go.  If you search around enough, I know you can find similar scripts and approaches elsewhere, but I’m still uploading my script for completeness and future reference. Download SVN Cleanup
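    If you need the same cleanup somewhere cmd isn't available, here is a rough cross-platform equivalent in Python; treat it as a sketch of the same idea rather than a drop-in replacement for the script above.

        import os
        import shutil

        def remove_svn_dirs(root="."):
            # Walk bottom-up so a just-deleted directory is never revisited.
            for dirpath, dirnames, _ in os.walk(root, topdown=False):
                for name in dirnames:
                    if name == ".svn":
                        shutil.rmtree(os.path.join(dirpath, name))

        remove_svn_dirs()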


  • Visual Web Developer 2010 Express, automated testing, and SVN

    - by Mr. Jefferson
    We have an HTML designer who is not a developer but needs to modify .aspx files from our ASP.NET 2.0 projects from time to time in order to get CSS to work properly with them. Currently, this involves giving her the .aspx page by itself, which she opens and edits via Visual Studio 2008 (her computer used to be a developer's). I'm considering getting her set up with Visual Web Developer 2010 Express and Subversion access so she can be more independent, but I wanted to make sure VWD Express will work properly with what we do. So:

    1. Does VWD 2010 Express support automated tests?
    2. If no to the above, what happens when it opens a solution file that includes a test project, modifies it, and saves it?
    3. Are there any potential snags with setting up AnkhSVN with VWD 2010 Express?


  • Host your own private git repository via SSH

    - by kerry
    If you are like me you have tons of projects you would like to keep private but track with git, yet do not want to pay a git host for a private plan. One of the problems is that most hosts scale their plans by project instead of by user. Luckily, it is easy to host your own git repositories on any el cheapo host that provides ssh access. In the interest of full disclosure, I learned this trick from this blog post. I decided to recreate it in case the source material vanishes for some reason. To set up your host, login via ssh and run the following commands:

        mkdir ~/git/yourprojectname.git
        cd ~/git/yourprojectname.git
        git --bare init

    Then in your project directory (on your local machine):

        # setup your user info
        git config --global user.name "Firstname Lastname"
        git config --global user.email "[email protected]"

        # initialize the workspace
        git init
        git add .
        git commit -m "initial commit"
        git remote add origin ssh://[email protected]/~/git/yourprojectname.git
        git push origin master

    It’s that easy! To keep from entering your password every time, add your public key to the server: generate your key with 'ssh-keygen -t rsa' on your local machine, then append the contents of the generated file to ~/.ssh/authorized_keys on your server.


  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP.NET, Windows Forms, WPF and Silverlight, plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities.

    Unified XAML Product Strategy: Share Code, Get More Controls. In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data-bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications.

    Track Users' Behaviors. New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze users' behaviors to ensure the experience you want to deliver.

    NetAdvantage for Windows Forms: New Office® 2010 Ribbon and Application Menu 2010. Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as the Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high-performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment.

    Create Faster Web 2.0 Experiences with NetAdvantage for ASP.NET. Infragistics continues to push the envelope to deliver the fastest ASP.NET WebForms controls available on the market. Our lightning-fast ASP.NET grids are now enhanced with XPS/PDF exporting and summary rows. This release also includes support for jQuery templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls, allowing you to quickly cut down overall page size.

    Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience. NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end-user experiences in Business Intelligence applications. Both suites include the Pivot Grid, which delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from.

    Mapping Made Easy. 10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spatial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like choropleth shading, and scale panes allowing users to zoom in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs.
http://www.infragistics.com


  • How does a website latency simulator work?

    - by nighthawk457
    Sites like webpagetest allow users to enter a website URL and a test location, to run a speed test on the site from multiple locations using real browsers. Can anyone give me a basic idea of how sites like this work? You also have plugins like the Aptimize latency simulator or the Charles web debugging proxy app, which simulate the delay while accessing a site from different locations. I am assuming that, since these are plugins, they function in a different way. How do these plugins work?
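    Not an answer from the original thread, but the core mechanism of a delay-simulating proxy is small enough to sketch: sit between the browser and the server and sleep before relaying each chunk of data. The toy TCP relay below illustrates the idea; the port, upstream host and delay value are invented for the example.

        import socket
        import threading
        import time

        DELAY_SECONDS = 0.3               # simulated one-way latency (invented)
        UPSTREAM = ("example.com", 80)    # site being tested (invented)

        def relay(src, dst):
            while chunk := src.recv(4096):
                time.sleep(DELAY_SECONDS)  # inject the artificial delay
                dst.sendall(chunk)

        listener = socket.socket()
        listener.bind(("127.0.0.1", 8888))  # point the browser at this relay
        listener.listen(5)
        while True:
            client, _ = listener.accept()
            server = socket.create_connection(UPSTREAM)
            threading.Thread(target=relay, args=(client, server), daemon=True).start()
            threading.Thread(target=relay, args=(server, client), daemon=True).start()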


  • SQL Monitor Alerts in Outlook Without Configuring Email Settings

    - by Fatherjack
    SQL Monitor is a Red Gate tool that I have a long history with; I have worked closely with the development team from a time before it was called SQL Monitor. It is with that history in mind that I am a little disappointed in myself for only just finding out about a pretty cool feature. Out of the box, SQL Monitor keeps itself to itself: it busily goes about watching over your servers, noting down when things look suspicious, change drastically or are just out-and-out wrong. You have to go into the settings and provide email details (SMTP server, account details etc.) before it starts getting at all intrusive with warnings and alerts on the condition of your servers. However, it was after installing the most recent version, while going through the application screen by screen looking for new and interesting changes, that I noticed something that had avoided my attention. On the Alerts tab there is an option in the left-hand menu. I don’t know how long ago it appeared or why I have never explored it previously, but it appears that you can see your alerts in the format of an RSS feed. When you click that link you are taken to a page of raw RSS XML – not too interesting in itself, but clearly you can use this in an RSS aggregator. Such as Outlook. Note the URL of the newly opened page and take it with you into Outlook. For me it is in the form of http://SQLMonitorServerName/Alerts/Inbox/Feed. Again, this is something that I have only recently noticed – Outlook can aggregate RSS feeds. Down below the Inbox and Drafts folders etc., one up from the bottom, is RSS Feeds. If you right-click that and choose to add a feed, you can supply the URL for your SQL Monitor alerts. And there you have it: your SQL Monitor alerts are available in Outlook, where you can keep an eye on the number of unread items and pick them off at your convenience.
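    And since the alerts are exposed as plain RSS, anything that can parse XML can consume the same feed, not just Outlook. A quick sketch using only the Python standard library; the URL follows the pattern mentioned above and is hypothetical.

        import urllib.request
        import xml.etree.ElementTree as ET

        FEED_URL = "http://SQLMonitorServerName/Alerts/Inbox/Feed"  # hypothetical

        with urllib.request.urlopen(FEED_URL) as resp:
            tree = ET.parse(resp)

        # Standard RSS 2.0 layout: <rss><channel><item><title>...
        for item in tree.getroot().iter("item"):
            print(item.findtext("title", default="(no title)"))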


  • ReSharper File Location

    - by Ben Griswold
    By default, the ReSharper cache is stored in the solution folder.  It’s one extra folder and one extra .user file.  It’s no big deal but it does clutter up your solution a bit – especially since the files provide no real value. I prefer to store the ReSharper cache in the system Temp folder.  This setting is available by visiting ReSharper > Options > Environment > General. Just update where you’d like to store the ReSharper cache and you’re good to go.  Note, the .user file continues to linger around the solution folder but at least the _ReSharper.SolutionName folder is moved out of sight.


  • Welcome 2011

    - by WeigeltRo
    Things that happened in 2010

    MIX10 was absolutely fantastic. Read my report of MIX10 to see why.

    The dotnet Cologne 2010, the community conference organized by the .NET user group Köln and my own group Bonn-to-Code.Net, became an even bigger success than I dared to dream of.

    There was a huge discrepancy between the efforts by Microsoft to support .NET user groups in organizing public live-streaming events of the PDC keynote (the dotnet Cologne team joined forces with netug Niederrhein to organize the PDCologne) and the actual content of the keynote. The reaction of the audience at our event was “meh”, and even worse, I seriously doubt we’ll ever get that number of people to such an event again (which, on top of that, suffered from technical difficulties beyond our control).

    What definitely would have deserved the public live-streaming event treatment was the Silverlight Firestarter (aka “Silverlight Damage Control”) event. And maybe we would have thought about organizing something if it weren’t for the “scorched earth” left by the PDC keynote. Anyway, the stuff shown at the Firestarter keynote was the topic of conversations among colleagues days later (“did you see that?” “oh yeah, that was seriously cool”).

    Things that I have learned/observed/noticed in 2010

    In the long run, there’s a huge difference between “it works pretty well” and “it just works and I never have to think about it”. I had to get rid of my USB graphics adapter powering the third monitor (read about it in this blog post). Various small issues (desktop icons sometimes moving their positions after a reboot for no apparent reason, at least one game I couldn’t get to run at all, all three monitors sometimes simply refusing to wake up after standby) finally made me buy a PCIe 1x graphics adapter. If you’re interested: the combination of an NVIDIA GTX 460 and a GT 220 has been running in “don’t make me think” mode for a couple of months now.

    PowerPoint 2010 is a seriously cool piece of software. Not only the new hardware-accelerated effects, but also features like built-in background removal and picture processing (which in many cases are simply “good enough” and save a lot of time) or the smart guides.

    Outlook 2010 crashes on me a lot. I haven’t been successful in reproducing these crashes; they just happen every couple of days on different occasions (the only thing in common: I clicked something in the main window – yeah, very helpful observation).

    Visual Studio 2010 reminds me of Visual Studio 2005 before SP1, which is actually not a good thing to say about a piece of software. I think it’s telling that Microsoft’s message regarding the beta of SP1 has been different from earlier service pack betas (promising an upgrade path from the beta to the RTM sounds to me like “please, please use it NOW!”).

    I have a love/hate relationship with ReSharper. I don’t want to develop without it, but at the same time I can’t fail to notice that ReSharper takes a heavy toll in terms of performance and sometimes stability.

    Things I’m looking forward to in 2011

    Obviously, the dotnet Cologne 2011. We have already been able to score some big-name sponsors (Microsoft, Intel), but we’re still looking for more. And be assured that we’ll make sure our partners get the most out of their contribution, regardless of how big or small.

    MIX11, period.

    Silverlight 5 is going to be great. The only thing I’m a bit nervous about is that I still haven’t read anything official on whether the next C# version’s async/await will be in it. Leaving that out would be really stupid considering the end-of-2011 release of SL5 (moving the next release way into the future).


  • Transposition - the success story of VB6 migration

    - by Visual WebGui
    Since all of you VB developers, present or past, would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application, I feel I need to write another post on our migration solution. Hopefully, after reading this and the previous post, you will be able to understand the different approach of our solution, which already helps organizations around the world move away from the constraints of VB6 and...(read more)


  • Tool to track time estimates vs. actual time

    - by mb1
    Following these two questions – How to respond when asked for an estimate, and What's the best project management software for a small team – I am looking for a tool that combines project management with the ability to plan all tasks for a project, give time estimates, and afterwards track the actual time spent on items, with a comparison between estimated and actual time. I am going to try TargetProcess, since I see it has time-tracking capabilities. Does anyone have a tool they use?
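    For what it's worth, while evaluating tools, the estimate-versus-actual comparison itself is trivial to keep in a throwaway script; the task names and hours below are invented.

        # (task, estimated hours, actual hours) -- sample data, invented
        tasks = [
            ("login page", 8, 13),
            ("report export", 5, 4),
        ]

        for name, est, actual in tasks:
            print(f"{name}: est {est}h, actual {actual}h, "
                  f"variance {actual - est:+}h ({actual / est:.0%} of estimate)")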


  • One eye on my dinner and one eye on SQL server

    - by fatherjack
    LiveJournal Tags: RedGate, Work Life Balance, Tips and Tricks, SQL Server. This is somewhere between a Tweet and a proper blog article – would that be a Bleet? Anyway, I was at a local restaurant yesterday, and after placing my order I was thinking about having to get home and log in to check some SQL Servers, when the thought came to me that, as we were near civilisation, there was likely to be a 3G signal that might actually make using the web browser on my phone bearable. It was surprisingly fast on my HTC Desire; it was almost as good as Wi-Fi. RedGate SQL Monitor works fine on the default HTC browser, and here is the proof: me checking the servers while waiting for the meal to arrive. Everything checked out OK, so I had the evening free from SQL Server. You can get a free 14-day full trial of SQL Monitor from RedGate here or find out more about it at The Future of Monitoring. Disclosure: I am a friend of RedGate and as such regularly make positive comments about their products. I don't get paid for it, but I do get free licenses for testing and reviewing purposes.


  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    Ok, so it took a while to post this second part. Many apologies; we had a big roll-out of a new platform at work and many things had to get sidelined. So this is the second part in a short series on website performance and using versioning to help improve it:

    1. Minification and Concatenation of JavaScript and CSS Files
    2. Versioning Combined Files Using Subversion – this post
    3. Versioning Combined Files Using Mercurial – published shortly

    In the previous post we used AjaxMin to shrink js and css files, then concatenated them into one file each, named site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS:

    1. In IIS7 Manager, choose the directory where these files are located and select HTTP Response Headers.
    2. Check Expire Web Content and set a time period well into the future.
    3. When refreshing the web page, the server will respond with HTTP 304, forcing the browser to retrieve the file from its cache.
    4. As can be seen in FireBug, the Cache-Control header has a max-age of 31536000 seconds, which equates to 365 days.

    The server will keep answering with this HTTP 304 message unless the file changes, which forces it to send new content. So we have lowered data transfer on content that hasn't changed, but we also need the file to be re-sent whenever we change the css or js. To force that, we can put the latest SVN revision number into the file name at build time:

    1. Import the MSBuildCommunityTasks targets, which can be downloaded from here:

        <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

    2. Edit the BeforeBuild target to call out to svn and get the latest revision:

        <SvnVersion LocalPath="$(MSBuildProjectDirectory)"
                    ToolPath="$(ProgramFiles)\VisualSVN Server\bin">
          <Output TaskParameter="Revision" PropertyName="Revision" />
        </SvnVersion>

    3. Set it to update the project AssemblyInfo.cs file with the svn revision:

        <FileUpdate Files="Properties\AssemblyInfo.cs"
                    Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
                    ReplacementText="$1.$2.$3.$(Revision)" />

    4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn; I am working on one project that updates the AssemblyInfo file and another that allows manual editing of the file but needs that version within the file name, so I just combined the two for this post.

        <MSBuild.ExtensionPack.Framework.Assembly TaskAction="GetInfo"
                                                  NetAssembly="$(OutputPath)\mydll.dll">
          <Output TaskParameter="OutputItems" ItemName="Info" />
        </MSBuild.ExtensionPack.Framework.Assembly>
        <Message Text="Version: %(Info.AssemblyVersion)" Importance="High" />

    5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post:

        <WriteLinesToFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js"
                          Lines="@(JSLinesSite)" Overwrite="true" />

    In the next post I will cover doing the same, but for a Mercurial repository.


  • How do I measure performance of a virtual server?

    - by Sergey
    I've got a VPS running Ubuntu. Being a virtual server, it shares resources with an unknown number of other servers, and I'm noticing that it's considerably slower than my desktop machine. Is there some tool to measure the performance of the virtual machine? I'd be curious to see some approximate measure similar to BogoMips, possibly for CPU (operations/sec), memory and disk read/write speed, and I'd like to be able to compare those numbers to my desktop machine. I'm not interested in the specs of the actual physical machine my VPS is running on – by doing cat /proc/cpuinfo I can see that it's a nice quad-core Xeon machine, but that doesn't matter to me. I'm basically interested in how fast a program would run in my VPS: how many CPU operations it can perform in a second, how many bytes it can write to RAM or to disk. I only have ssh access to the machine, so the tool needs to be command-line. I could write a script which, say, does some calculations in a loop for a second and counts how many loops it was able to do, or something similar to measure disk and RAM performance. But I'm sure something like this already exists.
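    The loop-counting script described at the end is only a few lines. Here is a rough sketch that measures CPU loop throughput and sequential disk write speed; the numbers mean nothing in absolute terms and are only useful for comparing machines running the same script.

        import os
        import time

        def cpu_loops_per_second(duration=1.0):
            end = time.time() + duration
            count = 0
            while time.time() < end:
                count += 1                    # trivial unit of CPU work
            return count / duration

        def disk_write_mb_per_second(size_mb=64, path="bench.tmp"):
            data = os.urandom(1024 * 1024)    # one megabyte of random bytes
            start = time.time()
            with open(path, "wb") as fh:
                for _ in range(size_mb):
                    fh.write(data)
                fh.flush()
                os.fsync(fh.fileno())         # ensure it actually hit the disk
            elapsed = time.time() - start
            os.remove(path)
            return size_mb / elapsed

        print(f"CPU:  {cpu_loops_per_second():,.0f} loops/s")
        print(f"Disk: {disk_write_mb_per_second():.1f} MB/s")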


  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended.

    I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

    https://example.com (default)
    https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool; it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

    https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist – I removed the language translations – yet the page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's the parameter itself that needs to be de-indexed.
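    Since Google will only de-index the pages once they genuinely return a 404, one way to attack the root cause (the CMS answering 200 for any parameter value) is to reject removed language codes at the application boundary. Below is a hedged sketch as WSGI middleware, since the actual CMS isn't named; the parameter name l and the surviving fr_FR value are taken from the question, everything else is assumption.

        from urllib.parse import parse_qs

        ALLOWED_LANGUAGES = {"fr_FR"}  # English is the default and carries no l=

        def language_404_middleware(app):
            """Wrap a WSGI app: answer 404 when l= names a removed translation."""
            def wrapper(environ, start_response):
                params = parse_qs(environ.get("QUERY_STRING", ""))
                for value in params.get("l", []):
                    if value not in ALLOWED_LANGUAGES:
                        start_response("404 Not Found",
                                       [("Content-Type", "text/plain")])
                        return [b"Not Found"]
                return app(environ, start_response)
            return wrapper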


  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint, however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
                CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project, as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet


  • I thought everyone did it like this – Training Session Code Management

    - by Fatherjack
    One of an occasional series of blogs about things that I do that perhaps others don’t. From very early on in my dealings with SQL Server Management Studio I started using Solutions and Projects. This means I started using them when writing sessions, and it wasn’t until speaking with someone at PASS Summit 2013 that I found out this was a process unheard of by some people. So, here we go: a run through how I create and manage code and other documents that I use in presentations.

    For people unsure what solutions and projects are:
    • Solution – a container for one or more projects.
    • Project – a container for files; .sql files are grouped as Queries, all other files are stored as Misc.

    How do I start? Open Management Studio as normal, then click File | New and select Project. This will bring up the New Project dialog box and you can select/add details as necessary in the places indicated. If this is the first project you are creating then be sure to select the Create directory for solution check box (4). If you know in advance that you are going to have more than one project in the solution then you may want to edit the Solution name (3), as by default it will take the name of the project that you enter at (2). This will lead you to the corresponding folder structure (depending on the location that you chose at (3) above). In SSMS you need to turn on the Solution Explorer, either via the View menu or by pressing Ctrl + Alt + L. This will bring up a dockable window that will let you quickly access the files that you choose to include in the Solution.

    Can we get to work and write some code yet, please? Yes, we can. As with many Microsoft products there are several ways to go about this; let’s look at the easiest way when creating new code. When writing a presentation I usually start from the position we are currently in – a brand new solution and project with no code. Later on we will look at incorporating existing code files into the Project where we need them. Right-click on the Project name and choose Add New Query. As soon as you click this you will be prompted to select the sql server that you want to connect to, and once you have done that you will have your new query open in the text editor; the Solution Explorer will now show your server connection and your new query, and the Project folder will reflect the same. Now, once you have written your code, don’t press Save; choose Save As and give the code a better name than QueryX.sql. SSMS will interpret this as a request to rename Query1, and your Project and the Project folder will show that SQLQuery1.sql no longer exists but there is now a file named as you requested. If you happen to click Save in error then right-click the query in the project and choose Rename. You can then alter the name as you like, even when the file is open in the SSMS text editor, and it will be renamed. When creating a set of scripts for a presentation I name files with a numeric prefix so that when they are sorted by name they are in the order that I need to use them during the session.

    I love this idea but I’ve got loads of existing scripts I want to put in Projects. Excellent – adding existing files to a project is easy. Let’s consider that you have query files in your My Documents folder and you want to bring them into the Project we have just created. Right-click on the Project and choose Add | Existing Item. Navigate to the location of your chosen file and select it. The file will open in the SSMS text editor and the Project will be updated to show that the selected query is now part of your project. If you look in Windows Explorer you will see that the query file has been copied into the Project folder; the original file still remains in your My Documents (or wherever it existed). I’ll leave it as an exercise for the reader to explore creating further Projects within a solution, but will happily answer questions if you get into difficulties.

    What other advantages do I get from this? Well, as all your code is neatly in one Solution folder and the folder contains only files that are pertinent to the session you are presenting, it makes it very easy to share this code: simply copy the whole folder onto a USB stick, blog, FTP location, wherever you choose, and it’s all there in one self-contained parcel. You don’t have to limit yourself to .sql query files; you can add any sort of document via the Add Existing Item method – just try it out. Right-click on the project, choose Add | Existing Item and change the file type filter. You can multi-select items here using Ctrl as you click each item you want. When you are done, click the Add button and the items will be brought into your project. Again, using this process means the files are copied into the project folder, leaving your original files untouched in their original location. Once they are here you can double-click them in the SSMS Solution Explorer to open them; for files with a specific file type the appropriate application will be launched – i.e. Word, Excel etc. However, if the files are something that the SSMS text editor can display then they will open in a tab in SSMS. Try it out with a text file or even a PS1 file…

    This sounds excellent, but what do I need to watch out for? One big thing to consider when working like this is the version of SSMS that you are using. There is something fundamentally different between the versions in the way that the project (.ssmssqlproj) and solution (.sqlsuo and .ssmssln) files are formatted. If you create a solution in an older version of SSMS and then open it in a newer version you will be given the option to upgrade it. Once you do this upgrade, the older version of SSMS will not be able to open the solution any more. This ranks as more of an annoyance than a disaster, as the files within the projects are not affected in any way; you would just have to delete the files mentioned and recreate the solution in the older version again.

    Summary: here we have seen how using SSMS Projects and Solutions can help keep related code files (and other document types) together in a neat structure so that they can be quickly navigated during a presentation, and it also makes it incredibly simple to distribute your code and share it with others. I hope this is of use to you and helps you bring more order into your sql files. Whether you are a person that does technical presentations or not, having your code grouped and managed can make for a lot of advantages as your code library expands.


  • Branching and Merging with TortoiseSVN

    - by capgpilk
    For this example I am using Visual Studio 2010, TortoiseSVN 1.6.6, Subversion 1.6.6 and AnkhSVN 2.1.7819.411, so if you are using different versions some of these screenshots may differ. This assumes you have your code checked in to the trunk directory and have a standard SVN structure of trunk, branches and tags. There are a number of developers who prefer to develop solely in a branch and never touch the trunk, but the process is generally the same, and you may be on a small team and prefer to work in the trunk and branch occasionally.

    There are three steps to successful branching. First you branch; then, when you are ready, you reintegrate any changes that other developers may have made to the trunk into your branch; finally, when your branch and the trunk are in sync, you merge the branch back into the trunk.

    Branch
    1. Right-click the project root in Windows Explorer > TortoiseSVN > Branch/Tag.
    2. Enter the branch label in the ‘To URL’ box, for example /branches/1.1.
    3. Choose HEAD revision.
    4. Check Switch working copy.
    5. Click OK.
    6. Make any changes to the branch.
    7. Make any changes to the trunk.
    8. Commit any changes.
    For this example I copied the project to another location prior to branching and made changes to that copy using Notepad++. As that directory is mapped to the trunk, committing it to SVN updates the trunk.

    Merge Trunk with Branch
    1. Right-click the project root in Windows Explorer > TortoiseSVN > Merge.
    2. Choose ‘Merge a range of revisions’.
    3. In ‘URL to merge from’ choose your trunk.
    4. Click Next, then the ‘test merge’ button. This will highlight any conflicts. Here we have one conflict to resolve, because we made a change and checked it in to the trunk earlier.
    5. Click Merge. Now we have the opportunity to edit that conflict. This will open up TortoiseMerge, which allows us to resolve the issue; in this case I want both changes.
    6. Perform an Update, then Commit.
    Reloading in Visual Studio shows we have all changes that were made to both trunk and branch.

    Merge Branch with Trunk
    1. Switch your working copy by right-clicking the project root in Windows Explorer > TortoiseSVN > Switch, switch to the trunk and click OK.
    2. Right-click the project root in Windows Explorer > TortoiseSVN > Merge.
    3. Choose ‘Reintegrate a branch’.
    4. In ‘From URL’ choose your branch, then click Next.
    5. Click ‘Test merge’; this shouldn’t show any conflicts.
    6. Click Merge.
    7. Perform an Update, then Commit.
    Open the project in Visual Studio; we now have all changes. So there we have it – we are connected back to the trunk and have all the updates merged.


  • Pay Per Click Software

    - by Eddy Freeman
    What software do sites like www.shopzilla.com, www.become.com, www.kelkoo.com etc. use for the pay-per-click product listing campaigns they offer to their retailers? That is, what kind of software do they use to know that a certain retailer's products have been clicked 50 or 100 times (after which the cost of the clicks is deducted from his money account), etc.? Can someone point me to that kind of software?

    EDIT – Some explanation: on a site like www.shopzilla.com, retailers upload their products (list their products on the site). Any time a buyer clicks on a product to go to the retailer's website, an amount of money (say $0.20) is deducted from his account (the money he has deposited in his account with Shopzilla). A retailer can see how many times buyers have clicked on his products and how much money remains in his Shopzilla account. I am looking for the kind of software that comparison sites like Shopzilla use to run this type of campaign. I hope it is clear now.
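    To illustrate the accounting half of what's being asked about (none of this is Shopzilla's actual implementation): count each click-through, deduct the fee, and stop serving the listing when the balance runs out. The fee and balance below are invented.

        class RetailerAccount:
            def __init__(self, balance, cost_per_click=0.20):
                self.balance = balance
                self.cost_per_click = cost_per_click
                self.clicks = 0

            def record_click(self):
                """Charge one click-through; refuse when the account is empty."""
                if self.balance < self.cost_per_click:
                    raise RuntimeError("account exhausted -- delist the products")
                self.balance -= self.cost_per_click
                self.clicks += 1

        account = RetailerAccount(balance=10.00)
        for _ in range(3):
            account.record_click()
        print(account.clicks, round(account.balance, 2))  # 3 9.4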


  • Suddenly I have started getting 404 WordPress errors

    - by rikki
    Suddenly I have started getting 404 errors in Google Webmaster. The backend is WordPress. The 404 link is http://digitalanalog.in/2011/07/05/augmented-reality-interior-designer-kinect-hack/1345295070000/1345781600000 and it is linked to from this page: http://digitalanalog.in/2011/07/05/augmented-reality-interior-designer-kinect-hack/1345295070000/ (this URL has been automatically generated; I have no clue how it exists). I am getting 404 errors of a similar pattern on all the pages.


  • Problem with Google Webmaster sitemaps

    - by Alex
    I have a WP MU 3.6.1 install with domain mapping (0.5.4.3), W3TC (0.9.3) and Google XML Sitemaps (4.0 beta). I have 4 different sitemaps:

    sub-1.com/sitemap.xml
    sub-2.com/sitemap.xml
    sub-3.com/sitemap.xml
    sub-4.com/sitemap.xml

    In Google Webmaster I got 59 errors & 14 warnings.

    Sitemap errors: "We encountered an error while trying to access your Sitemap. Please ensure your Sitemap follows our guidelines and can be accessed at the location you provided and then resubmit. General HTTP error: 404 not found. Sitemap: sub-2.com/sitemap-pt-post-2011-02.xml" etc. (But when I click on my sitemap links they work fine.)

    Sitemap warnings: "URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. Sitemap: sub-2.com/sitemap-misc.xml. HTTP Error: 404. URL: /sitemap.html". (But when I click on my sitemap links they work fine.)

    Sitemap index errors: "URLs not accessible. When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted. HTTP Error: 404. URL: /sitemap-pt-post-2010-09.xml". (But when I click on my sitemap links they work fine.)

    Web pages: 3,276 submitted, 3,247 indexed.

    What do I have to put on the network admin > Performance (W3TC) > Page Cache > Cache Preload > Sitemap URL setting? I have added "/sitemap.xml".

    My robots.txt: http://pastebin.com/3K2U0mQa
    My .htaccess: http://pastebin.com/efJJ6zwy

    How can I make it work right?


  • Huge Google impression drop after cleaning HTML

    - by olgatorresfoundation
    Good morning, I am the webmaster of a non-profit organization that awards grants to colorectal cancer research projects and funds various colorectal cancer information campaigns. We have three domains: www.fundacioolgatorres dot org (Catalan), www.fundacionolgatorres dot org (Spanish), and www.olgatorresfoundation dot org (English). So what happened? I redesigned olgatorresfoundation on the 20th and fundacionolgatorres on the 30th of May. In both cases, exactly two days later, the number of impressions on both all but stopped. Granted, we did not have the traffic of Microsoft, but a 90% decrease is a disaster of incredible proportions for us. My only real changes were cleaning up the old, ineffective HTML into a cleaner form (mostly moving away from redundant table construction to a table-less layout). Here is a before-and-after snapshot of what the change looks like: Before: http://www.fundacioolgatorres.org/aparell_digestiu/introduccio/ (unchanged page in Catalan). After: http://www.olgatorresfoundation.org/digestive_system/introduction/ (changed page in English). Does anybody have a clue as to what just happened? Why should a normal, sane HTML improvement be punished, and so dramatically? No URLs have been changed, and neither have page names or descriptions. A possible secondary question: if Google sees it as a major overhaul and decides to drop the PageRank sharply, does it come back to pre-change levels once the content "checks out", or will the page start over from scratch earning those PageRank points (which would mean we would have to wait six months for the pages to recover to the level they had two weeks ago)? (Duplicated from productforums.google dot com/forum/#!category-topic/webmasters/crawling-indexing--ranking/YsnyX0JzOpY, hoping to reach a wider audience.)

