Search Results

Search found 12854 results on 515 pages for 'sysadmin tools'.

  • Test Driven Development (TDD) in Visual Studio 2010 – Microsoft Mondays

    - by Hosam Kamel
    On November 14th I will be presenting at Microsoft Mondays a session about Test Driven Development (TDD) in Visual Studio 2010. Microsoft Mondays is a program consisting of a series of webcasts showcasing various Microsoft products and technologies. Each Monday we discuss a particular topic pertaining to development, infrastructure, Office tools, ERP, client/server operating systems, etc. The webcast will be broadcast via Lync and can be viewed from a web client. The idea behind the “Microsoft Mondays” program is to help you become more proficient in the products and technologies that you use and help you utilize their full potential. Test Driven Development in Visual Studio 2010 – Level 300 (Intermediate – Advanced). Test Driven Development (TDD), also frequently referred to as Test Driven Design, is a development methodology where developers create software by first writing a unit test, then writing the actual system code to make the unit test pass. The unit test can be viewed as a small specification of how the system should behave; writing it first helps the developer focus on writing only enough code to make the test pass, thereby helping ensure a tight, lightweight system specifically focused on meeting the documented requirements. TDD follows a cadence of “Red, Green, Refactor.” Red refers to the visual display of a failing test – the test you write first will not pass because you have not yet written any code for it. Green refers to the step of writing just enough code in your system to make your unit test pass – your test runner’s UI will now show that test passing with a green icon. Refactor refers to the step of refactoring your code so it is tighter, cleaner, and more flexible. This cycle is repeated constantly throughout a TDD developer’s workday. Date: November 14, 2011. Time: 10:00 a.m. – 11:00 a.m. (GMT+3). Registration: http://www.eventbrite.com/event/2437620990/efbnen?ebtv=F See you there! Hosam Kamel Originally posted at
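
    A minimal illustration of the "Red, Green, Refactor" cadence described above (a generic Python unittest sketch, not taken from the session material):

        import unittest

        # Green: this one-line implementation is just enough to make the test pass.
        # (Red came first: the test below was written before add() existed, so the
        # first run failed - the failing-test step of the cycle.)
        def add(a, b):
            return a + b

        class AddTests(unittest.TestCase):
            # The test doubles as a small specification of how add() should behave.
            def test_add_returns_sum_of_two_numbers(self):
                self.assertEqual(add(2, 3), 5)

        if __name__ == "__main__":
            unittest.main()  # Refactor: with the test green, reshape add() freely.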

    Read the article

  • Redistribution of sqlpackage.exe [SSDT]

    - by jamiet
    This is a short note for anyone that may be interested in redistributing sqlpackage.exe. If this isn’t you then there's no need to keep reading; ostensibly this is here for anyone that bingles for this information. sqlpackage.exe is a command-line tool that ships with SQL Server Data Tools (SSDT) in SQL Server 2012, and its main purpose (amongst other things) is to deploy .dacpac files from the command line. It’s quite conceivable that one might want to install only sqlpackage.exe rather than the full SSDT suite (for example on a production server), and I myself have recently had that need, so I enquired to the SSDT product team about the possibility of doing this. I said: Back in VS DB Proj days it was possible to use VSDBCMD.exe on a machine that did not have the full VS shell install by shipping lots of pre-requisites along for the ride (details at How to: Prepare a Database for Deployment From a Command Prompt by Using VSDBCMD.EXE). Is there a similar mechanism for using VSDBCMD.exe’s replacement, sqlpackage.exe? Here was the reply from Barclay Hill, who heads up the development team: Yes, SQLPackage.exe is the analogy of VSDBCMD.exe. You can acquire it separately, in a stand-alone package, by installing DACFX. The feature pack is here: http://www.microsoft.com/en-us/download/details.aspx?id=29065 and the Web Platform Installer is here: http://www.microsoft.com/web/gallery/install.aspx?appid=DACFX You will notice it has dependencies on SQLDOM and SQLCLRTYPES. WebPI will install these for you, but it is à la carte on the feature pack. So, now you know. I didn’t enquire about licensing of DACFX, but given SSDT is free I am going to assume that the same applies to DACFX too. @Jamiet
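
    For context, deploying a .dacpac with sqlpackage.exe looks something like the sketch below (wrapped in Python here; /Action, /SourceFile, /TargetServerName and /TargetDatabaseName are standard SqlPackage parameters, but the install path and file/database names are placeholders):

        import subprocess

        # Hypothetical example: publish a .dacpac to a local SQL Server instance.
        # The DACFX install path and the dacpac/database names are placeholders.
        subprocess.run([
            r"C:\Program Files\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe",
            "/Action:Publish",
            r"/SourceFile:C:\builds\MyDatabase.dacpac",
            "/TargetServerName:(local)",
            "/TargetDatabaseName:MyDatabase",
        ], check=True)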

    Read the article

  • Bingbot seems to be adding "ForceRecrawl: 0" to URLs when crawling my sites

    - by Louis Somers
    I'm seeing this in the IIS logs of two websites that I maintain: GET /an/existing/page/on/my/site+ForceRecrawl:+0 - 80 - 207.46.195.105 HTTP/1.1 Mozilla/5.0+(compatible;+bingbot/2.0;++http://www.bing.com/bingbot.htm) I get about one or two of these per day from these IP addresses: 207.46.195.105, 65.52.110.190 and more, all belonging to msnbot-ip.search.msn.com. Perhaps Microsoft has a bug in their crawler? Anyway, doing a search on "ForceRecrawl: 0" in major search engines comes up with a bunch of random sites. Doing the search on Stack Overflow or here gave no results (to my amazement). Am I the only one seeing this? I first noticed these on the 9th of this month, and I've seen them pass almost daily since... Another thing that I think is crazy is that the URL http://www.bing.com/bingbot.htm redirects to mail.live.com (Hotmail). Currently I'm returning 404s, but I'm considering catching these, stripping the trailing " ForceRecrawl: 0" and processing them as if they were legitimate URLs (see the sketch below). Could anyone shed some light on this? Could it have to do with some configuration or so in Bing's Webmaster Tools?
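
    The stripping approach mentioned above could be sketched roughly like this (a hypothetical WSGI-style middleware in Python; whether such requests should be honored at all is a separate question):

        # Hypothetical sketch: strip the bogus suffix before routing the request.
        # " ForceRecrawl: 0" is the decoded form of "+ForceRecrawl:+0" in the logs.
        SUFFIX = " ForceRecrawl: 0"

        def strip_force_recrawl(app):
            def middleware(environ, start_response):
                path = environ.get("PATH_INFO", "")
                if path.endswith(SUFFIX):
                    environ["PATH_INFO"] = path[:-len(SUFFIX)]  # treat as the real URL
                return app(environ, start_response)
            return middleware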

    Read the article

  • Major increase in Google "Not able to follow" errors since introducing 301s to site

    - by jakob
    Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case-sensitive and our app was not, we implemented a 301 in Varnish to redirect to lowercase (sketched below). Example: if you search for PlumBer StockHOLM you will get a 301 redirect to plumber stockholm, and then plumber stockholm will be cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly got a crazy amount of "Status - Not able to follow" errors, as you can see in the screenshot: [screenshot omitted] This of course stirred up some panic and I started to read up on the documentation once again. If I clicked on one of the links I got to the help section, where I found this: [help text omitted] Well, this is strange, but as the day progressed more and more errors were thrown by Google. We took the decision to make Varnish return 200 instead of the 301. Now when testing the links that appear in the "Not able to follow" section I get a 200 back. I have tested with Chrome, curl and the Lynx reader and everything looks OK, but the number of errors is still increasing. What is a little bit comforting is that the links that appear in the "Not able to follow" section are dated before the 200 change in Varnish. Why do I get these errors, and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
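
    The lowercasing rule described above amounts to logic like the following (an illustrative Python sketch of the redirect decision, not the actual Varnish VCL used on the site):

        from urllib.parse import urlsplit, urlunsplit

        # Illustrative sketch: 301 any mixed-case path to its lowercase form,
        # so only one casing of each URL is ever cached or indexed.
        def lowercase_redirect(url):
            parts = urlsplit(url)
            if parts.path != parts.path.lower():
                return 301, urlunsplit(parts._replace(path=parts.path.lower()))
            return 200, url  # already canonical, serve normally

        print(lowercase_redirect("http://example.com/PlumBer-StockHOLM"))
        # -> (301, 'http://example.com/plumber-stockholm')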

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week.  What each project had in common was the need to validate at least one SQL connection.  Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files.  Create a new file and name it anything as long as it has the .udl extension. Open the file and choose a provider, then click Next >> or navigate to the Connection tab to provide connection information.  Once you provide server and login credentials, the database list will populate.  At this point you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy.  The All tab is beneficial if you want to build a valid connection string to include in your own applications.  If you save the file and then open it in Notepad, you’ll find said connection string: Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp I hope this tip helps save you some time.  How do you test if you don’t have SSMS installed?
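
    On the closing question: without SSMS (or the UDL dialog), one option is any generic SQL client library; for example, a hypothetical Python sketch using pyodbc, built from the same values as the connection string shown above (it assumes pyodbc and a SQL Server ODBC driver are installed):

        import pyodbc  # assumption: pyodbc + a SQL Server ODBC driver are available

        # Hypothetical connectivity test mirroring what the UDL dialog does.
        conn_str = (
            "DRIVER={SQL Server};"
            "SERVER=(local);"
            "DATABASE=master;"
            "Trusted_Connection=yes;"
            "APP=TestApp;"
        )
        try:
            with pyodbc.connect(conn_str, timeout=5) as conn:
                print("Test connection succeeded.")
        except pyodbc.Error as exc:
            print("Test connection failed:", exc)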

    Read the article

  • GWT: reporting crawling errors for non-existent links

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another link that never existed. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, FYI). Example: http://example.com/joomla/index.php/component/user/register Linked from: http://example.com/joomla/component/user/login?return=L2###### What is going on? UPDATE 1: I tried something: I provided one of the faulty URLs to the "Fetch as Google" functionality. Instead of returning a 404, it returns a 301 to another Joomla page:

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>

    Read the article

  • Recommended: IntelliCommand for Visual Studio 2010/2012

    - by WeigeltRo
    The Morning Brew has been a great news source for developers for many years now. In its most recent post it mentioned an extension for Visual Studio 2010 and 2012 called IntelliCommand that implements something I had wanted for quite some time: some kind of dynamic help for hotkeys. IntelliCommand shows a popup when you press and hold Ctrl, Shift or Alt (or combinations thereof) for a configurable amount of time, or after you press the first key combination of a chord shortcut key (e.g. Ctrl-E) and wait for an (independently configurable) amount of time. In the following screenshot I pressed and released Ctrl-E, and after a short delay the popup appeared: [screenshot omitted] The extension is available in the Visual Studio Gallery, so finding, downloading and installing it via the Extension Manager is extremely simple. The default delays (2000 / 1600 milliseconds) are a bit long for my liking, but this can be changed in Tools – Options. So far things are working great on my machine. Some known issues do seem to exist, though (e.g. the extension doesn’t work on non-EN versions of Visual Studio). See the author’s comments in the announcement blog post and in the Visual Studio Gallery for more information.

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: my site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords. Background: my server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was so friendly as to modify the index.php file in such a fashion that it generated random sexually oriented keywords if the website was fetched as Googlebot. So if you fetched it as Googlebot/it got indexed, you would get randomly generated keywords like: sex videos teenager teen sex adult sex preteen A LINK TO A RANDOM CONTENT OF MY WEBPAGE anime sex videos - a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had me indexed for the word "sex" and certain adult-oriented keywords, roughly 2000 of them. I've removed all the content, taken the site down, replaced the index.php with a static HTML page and added an "ERROR 410" title to the website so that the content is no longer here and removed permanently. I've also applied for a manual review of my website, about 1.5 months ago, but still the keywords are there, and very strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools: [screenshots omitted] Question: how can I remove these filthy keywords and re-rank my website as a "normal" website as fast as possible? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you

    Read the article

  • Duplicate content appearing for multi lingual sites

    - by Rocky Singh
    I have a site which has a default URL, say "http://www.blahblah.com/" (which defaults to the English language). The site supports multiple languages. I have a few links on my home page, say "English", "French", "Spanish" etc., and on clicking these links the user is redirected to: http://www.blahblah.com/en-us/ (English) http://www.blahblah.com/fr-ca/ (French) http://www.blahblah.com/spanish-culture/ (Spanish) and based on the culture in the URL I show the content accordingly to end users in their desired language. Now, this is how my site works. The issue I am having is with SEO. I noticed Google is considering my site's pages as duplicates (I checked via Google Webmaster Tools), like: 1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/ 2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news and similarly all the pages are considered duplicate content in Google Webmaster Tools. I am worried about this, since I think my site is being penalized in ranking because of it. Could you suggest some ideas on how to overcome this situation?

    Read the article

  • What is a good support knowledge base tool?

    - by Guillaume
    I have been searching for a tool to help my team organize its knowledge for resolving recurring support cases. I know this question will probably be closed, but I'll try my luck anyway because I know that I can get some good answers about this. Context: our team is developing and supporting a huge application (lots of different screens and workflow processes). We already have a good tool for managing our documentation, but we are struggling with support cases. Support actions often involve quite a lot of manual steps to fix stuff, and the knowledge for these actions is passed on more by 'oral transmission' than by modern tools. We need an efficient way to store it in a knowledge base to be able to retrieve similar cases based on patterns (a stacktrace, an error message, a component name, a workflow step, ...), ranked by similarity. Our wiki search is not very powerful when it comes to this kind of search, and the team members don't want to 'waste' time writing a report that will never be found... Do you know an efficient knowledge base tool for this kind of use case?

    Read the article

  • https:// search results appearing on Google for a purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I decided to do a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with a https:// prefix - I will try to contact them to get them to correct this. I'm not sure however how Googlebot managed to index the non-root files as https://. I can't find any external links to them and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    replacing <my site> with my domain name. My big question is this though - I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.

    Read the article

  • Sudden drop in Total Indexed pages and increase in 'Not Selected' number.

    - by Pravin
    My blog is around 1 year old and has PR2. The average daily pageviews up until a week ago were 1800. The total number of posts is 180. Though I have only 180 posts, the total number of indexed URLs was increasing and reached as high as 510. But in September 2012 the total number of indexed pages dropped from 510 to 214. The drop was sudden, and the number is now increasing very slowly. The other main concern is the huge increase in the 'Not Selected' number: it is currently 814. I have never re-posted a post and never copied any idea from any other blog, but I do use internal linking to older posts that are related to the new posts. The questions are: (1) Why is there a sudden drop in the 'Total Indexed' pages? (2) Why did the total indexed pages increase to 510 even though the total posts were only 180? (3) Though the drop in 'Total Indexed' was in September 2012, I was getting the same organic traffic, and it was steadily increasing till last week, when there was a 50% drop in total pageviews - why? Now the traffic is returning to normal, but there is still a problem. (4) Is the increase in the 'Not Selected' number a cause of the drop in 'Total Indexed'? (5) How can I prevent or reduce the 'Not Selected' number even though I do not have any duplicate posts within the blog? (6) Is the internal linking to older posts creating the 'Not Selected' problem? (7) Should I edit my robots.txt to avoid crawling of labels that may be creating duplicate posts, and if so, what is the correct robots.txt (see the example below)? I have uploaded a screenshot of the graph from Webmaster Tools. Please take a look and give suggestions. Thank you in advance.
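
    On question (7): if the blog is hosted on Blogger (an assumption; the post does not say), label pages live under /search/label/, and a robots.txt along these lines is the commonly suggested way to keep them out of the crawl (a sketch only, not a guaranteed fix for the 'Not Selected' count; the Sitemap URL is a placeholder):

        User-agent: *
        Disallow: /search/label/
        Sitemap: http://yourblog.example.com/sitemap.xml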

    Read the article

  • Why does Google return a soft 404 when I redirect on the signup page?

    - by Hettomei
    For the past month I've seen an increasing number of "soft 404" errors reported by Google Webmaster Tools, for pages that work well for users. I've made some fixes but can't figure out how to solve it. Configuration (maybe useless): the website is built with Rails 3.1 and authentication is handled by the gem Devise. Problem: on the page http://en.bemyboat.com/yacht-charter/9965-sailboat-beneteau-oceanis-43, when you click on "Ask a Boat request" (a simple form, submitted via GET to http://en.bemyboat.com/boat_requests/new/9965) you are redirected with HTTP status 302 to sign in, and then sent back to the new page if you successfully sign in. Google tells me that the link on "Ask a Boat request" returns a soft 404. I can't make this form use POST (which would solve the problem) because we need to automatically redirect the user to the right page after sign-in (the gem Devise memorizes the "get" link). To simplify, the question is: how do I protect a private page with authentication, reached with a simple GET, without being penalized by Google with a "soft 404"? Thank you. PS: this website suffers a lot from poor English translation... please bear with it.

    Read the article

  • How do you communicate improvements in tools and process to the development team?

    - by birryree
    Hi everyone, my team does a lot of internal tooling and infrastructure work - you can think of us as a small-scale version of the teams at Facebook, Etsy, Netflix, etc. who build all the infrastructure for scaling their services up to thousands/tens of thousands of servers and supporting millions of users. Lately we've been running full steam ahead improving many of the tools we use internally, like tools for automatically creating new servers, setting up new application instances, etc. An end result of this has been decreased developer frustration, but increased 'ignorance' among most of the developer team about how to use our tools correctly and effectively. More often than not, my team will be asked by other teams to help them use the tools. Solutions we've thought up or things already in place: (1) All our code is relatively simple and self-explanatory, with good comments where necessary, so developers could read the scripts. Counterargument: you can guess this isn't a particularly good idea, having people read our tools' code to figure out how to use them. (2) All our code is committed to Subversion with very detailed commit messages about changes; developers could read the commit emails. Counterargument: expect the developers to read all our commits? Ludicrous. (3) Wiki - we have an internal company wiki that we try to maintain with up-to-date information, but as we are moving so fast, the wiki has to keep pace as well. Counterargument: as mentioned, we move fast in my team, as more improvements to our tools are added daily; again, this relies on people reading something that might change constantly. (4) Email the team? We could email the team when we have a glut of improvements to communicate. So as you can all see, we are trying to find new ideas and explore options we haven't thought of yet. Has anyone else ever been in a similar situation and have some guidance?

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc). The site was rebuilt in Drupal 7 two months ago, with all the basics done - URL rewriting, an XML sitemap submitted to Google, and most importantly, good quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low quality links from link farms/directories. My question is what the correct way to deal with this is. Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have the links removed. I've already submitted a request to Google asking to have the penalty removed as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent quality, relevant links. Is there anything further I can do?

    Read the article

  • getting started as a web developer [closed]

    - by kmote
    I have over 10 years of programming experience building (Windows-based) desktop applications and utilities (VC++, C#, Python). My goal over the next year is to start transitioning to web application development. I want to teach myself the fundamental tools and technologies that would be considered essential for building professional, online, interactive, visually-stunning, data-driven web apps -- the kind described in Google's recently released "Field Guide: Building Great Web Applications". So my question is, what are the primary, most commonly-used technologies that seasoned professionals will need in their tool belt in the coming years? My plan was to start coming up to speed in Javascript, HTML5, & CSS, and then to do a deep dive into ASP.NET and Ajax, along with SQL DBs. (I was surprised to not be able to find a single book at Amazon with a broad, general scope like this, which caused me to start second-guessing this approach.) So, seasoned professionals: am I on the right track? Are there some glaring omissions in my list? Or some unnecessary inclusions? I would welcome any book suggestions along these lines as well.

    Read the article

  • Should I, and how do I, incorporate microdata into my ASP.NET website with 47 pages?

    - by Jason Weber
    I have an ASP.NET (VB) website with 47 pages. The problem is that it's in 10 different languages, although 98% of visitors just use English. I have 5 master pages. I've read through Google Webmaster Tools, but I'm still confounded. I'm reading that microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all of my 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are: <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation. Am I doing this right, and if not, how should I do it? Should I use the itemprop="url" or other rich snippets for every link in my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPs would be greatly appreciated!

    Read the article

  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5-10). Here are the features I would like, but any recommendation is welcome: bug/feature tracking on a per-project basis; some way to keep all documents, diagrams, specs and requirements in one place with the project (better yet, a tool where all these things, or most of them, could be authored); task management during the development phase with milestones and estimates/actuals; Git integration. I have been doing contract work and I have been doing really well for myself as far as getting projects, but it's becoming VERY hard to manage everything in an efficient manner. I am trying to learn about best practices when it comes to software programming methodologies, and the more I read the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on the less "solid" everything is. I am afraid that if I don't get some good solid tools/practices in place I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided that I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes, so I would love to hear from some of you lone programmers how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2, maybe 3 developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!

    Read the article

  • GUI based backup utility [closed]

    - by Chethan S.
    Possible Duplicate: Comparison of backup tools. I have read favorable reviews of 'Back In Time' for the purpose stated above. Still, I am posting this question as I have some demands in mind. A few years back I was using ThinkVantage Rescue and Recovery by IBM on my Lenovo PC under Windows. That provided me with nice features like compressed backups and boot-time options - OS repair, restore entire OS, restore entire system to an older date, restore individual files, etc. Of these, the feature I liked the most was compressed backups. Similar features are available in software like Norton Ghost too. In Back In Time I was surprised to see that a snapshot takes up the same amount of space as the original contents - no compression at all. Furthermore, I was not able to find options to change the compression ratio etc. under settings. To me, compression of backups is a must-have feature. Can anyone therefore suggest another utility which can serve the purpose? I insist on a GUI-based tool since I don't want to mess up my backups!

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing design according to particular use cases (by the tool builder). Text editors are probably the most prominent example - a coder who works on Windows at work and codes in Haskell on the Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg)? I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counterexample - some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data - not that git is better, etc. My guess is that such counterexamples exist, hence this question.

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed: www.example.com, www.example.com/about_us, www.example.com/products/something ... and example.com, example.com/about_us, example.com/products/something ... For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked domain URLs now redirect to their www equivalents?

    Read the article

  • Is it worth replacing a mouse with a standalone trackpad for heavy code-editing? [on hold]

    - by heltonbiker
    I recently got more interested in improving my tools, workspace and workflow. The first sting came with a sore finger due to a crappy keyboard, and then after some research I fell in love with the "mechanical keyboard is what you need" doctrine, bought one (Cherry MX Brown, if you're curious), and am very happy with the results. Currently I am replacing my previous text editor (Geany) with Sublime Text 3, and am also very happy, feeling much more powerful and professional :) Well, while I re-read all the ancient debates about VIM vs whatever-else, the following excerpt from a blog post got me thinking again about mouse vs keyboard, and the "moving around from the very home row" (in VIM) versus gesturing away with the tiny and unstable mouse cursor: "Reaching for a mouse may indeed slow you down, but developers are commonly on machines where the trackpad is a micro-hand movement away. Most novice programmers can click on a character on screen faster than an expert Vimmer can type 20jFp; or LkEEE or /word or any other nasty way Vimmers have to use. The point of a mouse is to make arbitrary on-screen jumps efficient, and it's very good at doing that. Don't you ever think you can beat a mouse." Well, although there is some bitterness in this statement, it makes a lot of sense, and EVEN MORE if you consider your direct input to be a TRACKPAD conveniently placed in front of your spacebar (which oddly is where I like to put my mouse, rotated 90° ccw, due to a serious tendonitis in my right shoulder, already healed, but you know...). So, the question is: has anyone replaced the mouse with a standalone trackpad to work on code editing on a desktop machine (that is, with a standalone keyboard)? Was it worth the change?

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc...) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried by the fact that Google fetch is not getting the correct Title tags and Meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and being able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?

    Read the article

  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check if the site is a WordPress site (wp_admin) and probes for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT: I've never used the question to thank people for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header (sketched below); and a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
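
    The notify-and-404 script described in the edit could look something like this (a hypothetical Python/Flask-style sketch; the suspect patterns, email addresses and local MTA are all placeholders, not details from the original post):

        import smtplib
        from email.message import EmailMessage
        from flask import Flask, request

        app = Flask(__name__)
        SUSPECT = ("viewtopic.php", "wp_admin", "wp-login", "cpanel")  # placeholders

        @app.errorhandler(404)
        def not_found(error):
            # Hypothetical honeypot: report suspect probes, still return a real 404.
            if any(s in request.path.lower() for s in SUSPECT):
                msg = EmailMessage()
                msg["Subject"] = "Suspect 404: " + request.path
                msg["From"] = "honeypot@example.com"   # placeholder address
                msg["To"] = "admin@example.com"        # placeholder address
                msg.set_content("IP: %s\nUser-Agent: %s" % (
                    request.remote_addr, request.headers.get("User-Agent", "-")))
                with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
                    smtp.send_message(msg)
            return "Not Found", 404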

    Read the article
