Search Results

Search found 13165 results on 527 pages for 'hyper v tools'.

Page 66/527 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • git for personal (one-man) projects. Overkill?

    - by Anto
    I know, and use, two version control systems: Subversion and git. Subversion, as of now, gets used for personal projects where I am the only developer, and git gets used for open source projects and projects where I believe others will also work on the project. This is mostly because of git's amazing forking and merging capabilities, where everyone may work on their own branch; very handy. Now, I use Subversion for personal projects, as I think git makes little sense there. It seems to be a little bit of overkill. It is OK for me if it is centralized (on my home server, usually) when I am the only developer; I take regular backups anyway. I don't need the ability to make my own branch; the main branch is my branch. Yes, SVN has simple support for branching, but much more powerful support for it makes no sense to me. Merging can be a pain with it, at least in my limited experience. Is there any good reason for me to use git on personal projects, or is it simply overkill?

    Read the article

  • Portal Server comparisons / TCoO

    - by Scott
    We have a client who is looking to incorporate Oracle Portal into our next release. I'm newer to this team, but the team is currently working with Apache, so whichever portal server we choose will likely involve a bit of a learning curve. Is there any comparison (not marketing) out there which discusses the differences between the servers and/or the total cost of ownership of them? With 5 developers, installing RAD becomes expensive, a cost which I'd assume they'd wish to pass on to us with the change to Oracle Portal and WebSphere.

    Read the article

  • Redirecting non-existing posts to the homepage; is that good for SEO?

    - by BlackEagle
    I am checking my website on Google Webmaster Tools and I am seeing an astonishing 5,000 links that could not be found by Google's crawlers. That's normal, because my website is built in a manner that lets users drop their own content, which also leads to 404 pages. Not a problem at all if I can find a workaround, of course... So my question is: what if I made a function or a mod_rewrite rule that checks whether the link (a post, for example) exists and, if not, redirects to the home page? Is this good for SEO? Will Google see this as 'link found'? How should I look at this problem?
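
    A minimal sketch of the idea being asked about - assuming a Flask-style app and a hypothetical post_exists() lookup, neither of which comes from the original question:

        # Hypothetical illustration: redirect requests for missing posts to the homepage.
        from flask import Flask, redirect

        app = Flask(__name__)

        def post_exists(slug):
            # placeholder lookup; a real site would query its database here
            return slug in {"first-post", "second-post"}

        @app.route("/posts/<slug>")
        def show_post(slug):
            if not post_exists(slug):
                # the behaviour under discussion: send visitors (and crawlers) home
                return redirect("/", code=301)
            return "Post: " + slug

    (Whether a redirect or a plain 404 is better for SEO is exactly what is being asked, so the sketch only illustrates the mechanism, not a recommendation.)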

    Read the article

  • Tumblr blog under subdomain blog.mysite.com - do I still get the benefit?

    - by sam
    I've got a Tumblr blog for my main site; I use it more as a proper blog with articles and images, rather than just a tumblelog. The blog is mapped to blog.mydomain.com. Because of the ease of 'social' reposting in Tumblr, people often repost my articles (which is great - backlinks!). But all these links go back to my blog and not my main site, which is at mydomain.com. Does Google see blog.mydomain.com and mydomain.com as the same/linked entity, and is my main site getting the benefit of these links?

    Read the article

  • Is there an equivalent to the Mac OS X software Hazel that runs on Ubuntu?

    - by Stuart Woodward
    Is there an equivalent to the Mac OS X software Hazel that runs on Ubuntu? "Hazel watches whatever folders you tell it to, automatically organizing your files according to the rules you create. It features a rule interface [..]. Have Hazel move files around based on name, date, type, what site/email address it came from [..] and much more. Automatically put your music in your Music folder, movies in Movies. Keep your downloads off the desktop and put them where they are supposed to be." This question probably won't make sense unless you have used Hazel, but basically you can define rules via the GUI to move and rename files automatically to make an automated workflow.
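
    For a sense of what such a rule boils down to, here is a minimal, stdlib-only Python sketch of one Hazel-style rule (the watched folder, destinations, and polling interval are illustrative assumptions; something inotify-based such as incron would be the more native Ubuntu way to trigger it):

        # Sketch: move files out of ~/Downloads by extension, Hazel-style.
        import shutil
        import time
        from pathlib import Path

        RULES = {
            ".mp3": Path.home() / "Music",
            ".mp4": Path.home() / "Videos",
            ".pdf": Path.home() / "Documents",
        }
        WATCHED = Path.home() / "Downloads"

        def apply_rules():
            for entry in WATCHED.iterdir():
                target = RULES.get(entry.suffix.lower())
                if entry.is_file() and target:
                    target.mkdir(parents=True, exist_ok=True)
                    shutil.move(str(entry), str(target / entry.name))

        if __name__ == "__main__":
            while True:           # naive polling loop, purely for illustration
                apply_rules()
                time.sleep(60)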

    Read the article

  • Seeking a free Lint for C which programmers will *want* to use

    - by Mawg
    When I try to persuade others to lint their code I always get excuses - too difficult to set up, too difficult to understand, false positives, etc. (most of which translates to too lazy, too stupid or too afraid of new things). Is there any way that I can make linting easier? We code in C using NetBeans. Can I incorporate Splint into NetBeans? I did find a Splint GUI which was quite good, but there was no way to lint a directory tree. Any ideas? Thanks in advance.
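
    As a stopgap for the missing directory-tree support, a small driver script can run Splint over every .c file under a source root - a sketch, assuming splint is on the PATH (per-project include paths and flags would still need to be added):

        # Sketch: run Splint across a whole directory tree and count noisy files.
        import subprocess
        from pathlib import Path

        def lint_tree(root="."):
            noisy = 0
            for source in sorted(Path(root).rglob("*.c")):
                result = subprocess.run(["splint", str(source)])
                if result.returncode != 0:
                    noisy += 1
            return noisy

        if __name__ == "__main__":
            print(lint_tree("src"), "file(s) produced Splint warnings")

    Something like this could presumably be wired into NetBeans as an external tool or a Makefile target, though that part is untested here.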

    Read the article

  • Efficient coding in Visual Studio (or another IDE), with touch typing

    - by cheeesus
    Moving the cursor to another position in code is one of the most frequent actions when coding. I don't write my programs from the beginning to the end, like a letter. However, moving the cursor requires me to move my right hand to the arrow keys or to the mouse, which feels like an interruption to my writing rhythm, since I'm using touch typing. I want my hands to rest on the keyboard. It's difficult to explain what I mean, but I think every coder using touch typing knows what I mean. I tried many things, like defining some shortcuts as surrogate arrow keys (Shift+Alt+J, K, L, I), or buying a keyboard with a TrackPoint, trackpad, or trackball on it, but I have not yet found a satisfying solution to the problem. What is the best solution you know of, regardless of which IDE you use? Edit: Thank you for your answers. I am using a lot of shortcut keys, but I think using a Vim plugin in Visual Studio would interfere too much with the shortcuts I am used to. Also, I have a keyboard with a built-in mouse, but I'm still looking for a better solution.

    Read the article

  • Webmasters hentry error and authorless pages

    - by Ben Racicot
    Within Google Webmaster Tools (Search Appearance > Structured Data) I'm getting a series of errors: Error: Missing required hCard "author". And most of my 44 errors have: Missing: Author; Missing: entry-title; Missing: updated. There seems to be no CLEAR explanation of these errors. It is either because these classes exist without their nested classes, or they are expected to exist because of something else, possibly itemscope or itemtype=''. The question: how do you specify with rich snippets that the page is about a location and there is no human author?
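
    One direction (a hedged sketch, not a confirmed fix for the hAtom/hCard warnings) is to drop the hentry/entry-title/updated/author classes from templates that aren't really blog entries and mark the page up as a schema.org Place instead, so the structured data describes a location rather than an authored article. The names and address below are placeholders:

        <div itemscope itemtype="http://schema.org/Place">
          <h1 itemprop="name">Example Location</h1>
          <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <span itemprop="streetAddress">123 Example Street</span>,
            <span itemprop="addressLocality">Example City</span>
          </div>
        </div>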

    Read the article

  • Database Maintenance Scripting Done Right

    - by KKline
    I first wrote about useful database maintenance scripts on my SQLBlog account way back in 2008. Hmmm - now that I think about it, I first wrote about my own useful database maintenance scripts in a journal called SQL Server Professional back in the mid-1990's on SQL Server v6.5 or some such. But I digress... Anyway, I pointed out a couple useful sites where you could get some good scripts that would take care of preventative maintenance on your SQL Server, such as index defragmentation, updating...(read more)

    Read the article

  • Google Fetch issue

    - by Karen
    When I do a Google fetch on any of my web pages the results are all the same (below). I'm not a programmer, but I'm pretty sure this is not correct. Out of all the fetches I have done, only one was different: its content length was about six times the one below, and it showed meta tags etc. Maybe this explains other issues I've been having with the site, such as a drop in indexed pages. A meta tag analyzer says I have no title tag, meta tags or description, even though I have them on all pages. I had an SEO team working on the site and they were stumped as to why pages were not getting indexed, so they figure it was some type of code error. Are they right?

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 11 Oct 2012 11:45:41 GMT
        Content-Length: 1054

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title></title>
            <script type="text/javascript">
                function getCookie(cookieName) {
                    if (document.cookie.length > 0) {
                        cookieStart = document.cookie.indexOf(cookieName + "=");
                        if (cookieStart != -1) {
                            cookieStart = cookieStart + cookieName.length + 1;
                            cookieEnd = document.cookie.indexOf(";", cookieStart);
                            if (cookieEnd == -1) cookieEnd = document.cookie.length;
                            return unescape(document.cookie.substring(cookieStart, cookieEnd));
                        }
                    }
                    return "";
                }

                function setTimezone() {
                    var rightNow = new Date();
                    var jan1 = new Date(rightNow.getFullYear(), 0, 1, 0, 0, 0, 0);  // jan 1st
                    var june1 = new Date(rightNow.getFullYear(), 6, 1, 0, 0, 0, 0); // june 1st
                    var temp = jan1.toGMTString();
                    var jan2 = new Date(temp.substring(0, temp.lastIndexOf(" ") - 1));
                    temp = june1.toGMTString();
                    var june2 = new Date(temp.substring(0, temp.lastIndexOf(" ") - 1));
                    var std_time_offset = (jan1 - jan2) / (1000 * 60 * 60);
                    var daylight_time_offset = (june1 - june2) / (1000 * 60 * 60);
                    var dst;
                    if (std_time_offset == daylight_time_offset) {
                        dst = "0"; // daylight savings time is NOT observed
                    } else {
                        // positive is southern, negative is northern hemisphere
                        var hemisphere = std_time_offset - daylight_time_offset;
                        if (hemisphere >= 0) std_time_offset = daylight_time_offset;
                        dst = "1"; // daylight savings time is observed
                    }
                    var exdate = new Date();
                    var expiredays = 1;
                    exdate.setDate(exdate.getDate() + expiredays);
                    document.cookie = "TimeZoneOffset=" + std_time_offset + ";";
                    document.cookie = "Dst=" + dst + ";expires=" + exdate.toUTCString();
                }

                function checkCookie() {
                    var timeOffset = getCookie("TimeZoneOffset");
                    var dst = getCookie("Dst");
                    if (!timeOffset || !dst) {
                        setTimezone();
                        window.location.reload();
                    }
                }
            </script>
        </head>
        <body onload="checkCookie()">
        </body>
        </html>

    Read the article

  • How to properly remove URLs from Google's index?

    - by ElHaix
    On some of our sites, we now have several thousand pages that dilute our website's keyword density. The website is an MVC site with SEO routing. If I submit a new sitemap with, say, only the 2,000 or so pages that we want indexed - even though navigating to the diluting pages still works - will Google re-index the site with only those 2,000 pages, dropping the superfluous ones? For example, I want to keep roughly 2,000 of the following: www.mysite.com/some-search-term-1/some-good-keywords www.mysite.com/some-search-term-2/some-more-good-keywords And remove several thousand of the following that have already been indexed: www.mysite.com/some-search-term-xx/some-poor-keywords www.mysite.com/some-search-term-xx/some-poor-more-keywords These pages are not actually "removed", as navigating to these URLs still renders a page. Even though there are potentially hundreds of thousands of pages, I only want around 2,000 to be re-indexed and retained; the others should be removed (without having to do so manually). Thanks.
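
    For reference, a sitemap on its own typically won't drop URLs Google has already indexed; the usual signal is a robots meta tag (or the equivalent X-Robots-Tag response header) served on the pages that should fall out of the index - a minimal sketch, with the URL pattern taken from the question:

        <!-- served only on the .../some-poor-keywords pages -->
        <meta name="robots" content="noindex, follow">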

    Read the article

  • Google Webmaster Tools verification failed.

    - by KMC
    I have a site built with Ruby on Rails. I verified it with Google Webmaster Tools some months ago, which was successful. Then one day Webmaster Tools started giving me re-verification failures. I tried again to verify my site using meta tags and HTML files, but I kept getting "Verification failed. The connection to your server timed out." Since then, Google has stopped crawling my site's content - though somehow it still crawls the PDF content on my site.

    Read the article

  • Is there a modern (eg NoSQL) web analytics solution based on log files?

    - by Martin
    I have been using AWStats for many years to process my log files. But I am missing many possibilities (like cross-domain reports) and I hate being stuck with extra fields I created years ago. Anyway, I am not going to continue using this script. Is there a modern Apache log analytics solution based on modern storage technologies like NoSQL, or at least one somehow ready to cope with large datasets efficiently? I am primarily looking for something that generates nice sortable and searchable output focused on web analytics, without my having to write my own frontends (so Graylog2 is not an option). This question is purely about log-file-based solutions.

    Read the article

  • schema.org 'reviewRating' tag not recognized by Google snippet testing tool

    - by saravanak
    I'm trying to add more structural information to my web pages by using the microdata format suggested on www.schema.org. The procedure seems straightforward, but I'm having issues validating my results in the Google Rich Snippets Testing Tool. Check out this review page; here I'm using the 'reviewRating' property to specify rating values for that particular review. I followed the same format as defined in schema.org/Rating, but this markup fails validation in Google's rich snippet testing tool with the following error info: Item Type: http://schema.org/rating; reviewrating = 5; ratingvalue = 5; Warning: Property "reviewrating" was not found.
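
    One likely culprit is letter case: the tool reports the type as http://schema.org/rating and the property as reviewrating, whereas schema.org defines them as Rating and reviewRating, and microdata names are case-sensitive. A minimal sketch of the intended markup (the review name and values are placeholders):

        <div itemscope itemtype="http://schema.org/Review">
          <span itemprop="name">Great product</span>
          <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
            <span itemprop="ratingValue">5</span> out of
            <span itemprop="bestRating">5</span>
          </div>
        </div>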

    Read the article

  • Are there any concrete examples of where a parallelizing compiler would provide a value-adding benefit?

    - by jamie
    Paul Graham argues that: It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we make it easier!). The holy grail is to be able to accelerate any program, be it MySQL, GarageBand, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart-enough" compiler would have provided a real benefit, i.e. one that someone would pay for? A good answer is one where there is a process where the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick. It might be 500 ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: candidate areas for further investigation. (Hence, raytracing is off the list because all the low-hanging fruit has been feasted upon.) I am not interested in why it cannot be done. There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful.

    Read the article

  • Emaroo 1.4.0 Released

    - by WeigeltRo
    Emaroo is a free utility for browsing most recently used (MRU) lists of various applications. Quickly open files, jump to their folder in Windows Explorer, copy their path - all with just a few keystrokes or mouse clicks. tl;dr: Emaroo 1.4.0 is out, go download it from www.roland-weigelt.de/emaroo

    Why Emaroo? Let me give you a few examples. Let's assume you have pinned Emaroo to the first spot on the taskbar so you can start it by hitting Win+1. To start one of the most recently used Visual Studio solutions you type Win+1, [maybe arrow key down a few times], Enter. This means that you can start the most recent solution simply with Win+1, Enter.

    What else?

      - If you want to open an Explorer window at the file location of the solution, you type Ctrl+E instead of Enter.
      - If you know that the solution contains "foo" in its name, you can type "foo" to filter the list. Because this is not a general-purpose search like e.g. the Search charm, but instead operates only on the MRU list of a single application, you usually have to type only a few characters until you can press Enter or Ctrl+E.
      - Ctrl+C copies the file path of the selected MRU item, Ctrl+Shift+C copies the directory.
      - If you have several versions of Visual Studio installed, the context menu lets you open a solution in a higher version.
      - Using the context menu, you can open a Visual Studio solution in Blend.

    So far I have only mentioned Visual Studio, but Emaroo knows about other applications, too. It remembers the last application you used, and you can change between applications with the left/right arrow or accelerator keys. Press F1 or click the Emaroo icon (the tab to the right) for a quick reference.

    Which applications does Emaroo know about? Emaroo knows the MRU lists of:

      - Visual Studio 2008/2010/2012/2013
      - Expression Blend 4, Blend for Visual Studio 2012, Blend for Visual Studio 2013
      - Microsoft Word 2007/2010/2013
      - Microsoft Excel 2007/2010/2013
      - Microsoft PowerPoint 2007/2010/2013
      - Photoshop CS6
      - IrfanView (most recently used directories)
      - Windows Explorer (directories most recently typed into the address bar)

    Applications that are not installed aren't shown, of course.

    Where can I download it? On the Emaroo website: www.roland-weigelt.de/emaroo. Have fun!

    Read the article

  • Which tool to create a sitemap to plan a future site?

    - by peterpurzelbaum
    I'd like to create a sitemap to plan a future site, and I'm looking for a tool to do it. I'd like to create a list of all articles first, then a hierarchy. Then I'd like to place the articles at several points in the hierarchy; I should be able to put one article in different places. I'd also like the ability to mark the articles in different colors depending on whether they should go into the first version of the website or come later.

    Read the article

  • Reflector Pro Cometh

    Reflector 6 is here. Nick Harrison is a long-time Reflector enthusiast, and has been responsible for writing an add-in. As he'd helped test the new version, Nick asked to review it for Simple-Talk. The team were anxious to know what he thought. They needn't have worried.

    Read the article

  • 3D Paint tool in Maya

    - by Joris
    Hey everyone, could someone help me out with this question? When I am trying to texture objects in Maya (for example a barrel) I want to texture them with a brush. When I try to use the 3D Paint Tool it all turns black, even when I have an image file selected. Also, the resolution of the models and textures in Maya is very, very low, even though the texture image is very high quality. Can someone please help me? Thanks so much for helping.

    Read the article

  • What's the best platform for blogging about coding? [closed]

    - by timday
    I'm toying with starting an occasional blog for posting odd bits of coding-related stuff (mainly C++, probably). Are there any platforms which can be recommended as providing exceptionally good support (e.g. syntax highlighting) for posting snippets of code? (Or any to avoid, because posting monospaced-font blocks of text is a pain.) Outcome: I accepted Josh K's answer because what I actually ended up doing was realizing I was more interested in articles than a blog style, getting back into LaTeX (after almost 20 years away from it), using the "listings" package for code, and pushing the HTML/PDF results to my ISP's static-hosting pages (HTML generated using tex4ht). Kudos to the answers mentioning WordPress, Tumblr and Jekyll; I spent some time looking into all of them.

    Read the article

  • How to set up Google DFP (DoubleClick for Publishers) for a site?

    - by Manoj Kumar
    I have a website and I have an AdSense account as well. I have integrated AdSense and ads are also being displayed (480 x 60). Somewhere I read that I can manage the ads that are shown on my website (480 x 60) and filter the ads on a CPM/CPC basis. NOTE: I don't have any ads to be displayed on others' websites; I just want to show others' ads on my website. Now, can I use Google DFP to manage the ads? I mean, is Google DFP useful for me to filter the ads and get more revenue?

    Read the article

  • Creating WPF Prototypes with SketchFlow

    Prototyping with Sketchflow transforms what was once a frustrating and time-consuming chore. With SketchFlow, WPF prototypes can be created and changed with amazing ease. SketchFlow is WPF's secret weapon. Well, it was secret until Michael Sorens produced this article.

    Read the article

  • How does Google Analytics aggregate the Count of Visits (Frequency & Recency Report)?

    - by Brian Dant
    Here's my simple understanding of Count of Visits: Each person that comes to my site gets one "count" for each visit. They are put into a bucket of people with the same number of total counts - if you visit twice, you are in the two bucket; if you visit six times, you are in the six bucket. From there, a report (Frequency & Recency) makes a line for each bucket, reaches into the bucket, and totals the number of people in that bucket, putting that total in the second column. My Question: Will a two-month report automatically put someone into two buckets, and put them on two separate lines in the Count of Visits table? This explanation makes it seem like a two-month-long report will put the same person into a bucket twice, one bucket for each month. The two-month report will then show that person's visits on two different lines, instead of aggregating them. Example for Clarification: Bob comes to my site three times in January and seven times in February. I run a report for Jan 1 - Feb 28. Will Bob be on both the Three Count line and the Seven Count line, or will he be on the Ten Count line?

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, fail if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may for a number of reasons not have the benefit of the setup performed by a complex build process, and have to make the best of what it is given. In other words, it may find a header file not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:

      1. Start in the directory of the including file.
      2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
      3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it.
      4. Otherwise move to the parent of the current directory and go to step 2.

    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
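
    A sketch of that fallback search in Python, purely for illustration (the analyzer itself may well be written in another language, and a production version would also need to handle include names with path components):

        import os

        def find_header(including_file, header_name):
            directory = os.path.dirname(os.path.abspath(including_file))
            while True:
                # step 2: look in the current directory and every subdirectory of it
                for dirpath, _dirnames, filenames in os.walk(directory):
                    if header_name in filenames:
                        return os.path.join(dirpath, header_name)
                parent = os.path.dirname(directory)
                if parent == directory:   # reached the filesystem root
                    return None           # give up and skip this include
                directory = parent        # otherwise retry one level higher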

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default), https://example.com/main?l=fr_FR (French). I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool and it works. But under HTML Improvements the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so that Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM. This URL should not exist - I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads - and it's exactly those parameter URLs that need to be de-indexed.
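
    Since robots.txt only blocks crawling (it does not drop URLs that are already indexed), one direction - sketched below with an assumed Flask-style front controller and a hypothetical ALLOWED set, neither taken from the actual CMS - is to make the removed translations genuinely return 404/410 so crawlers can de-index them:

        from flask import Flask, request, abort

        app = Flask(__name__)
        ALLOWED = {"en_US", "fr_FR"}   # hypothetical: the only language codes kept

        @app.before_request
        def reject_removed_translations():
            lang = request.args.get("l")
            if lang is not None and lang not in ALLOWED:
                abort(410)   # "Gone" signals that the translation was removed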

    Read the article
