Search Results

Search found 5530 results on 222 pages for 'nested urls'.


  • Any frameworks or libraries that allow me to run a large number of concurrent jobs on a schedule?

    - by Yoga
    Are there any high-level programming frameworks that allow me to run a large number of concurrent jobs on a schedule? E.g. I have 100K URLs and need to check their uptime every 5 minutes. I could certainly write a program to handle this, but then I would need to handle concurrency, queuing, error handling, system throttling, job distribution, etc. Is there a framework where I can focus only on the particular job (i.e. the ping task) while the system takes care of the scaling and error handling for me? I am open to any language.
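
    For a sense of what such a framework automates, here is a minimal sketch of the job itself in Python, using asyncio plus the third-party aiohttp library. The URL list, concurrency limit, and interval are illustrative, and the queuing, retries, and distribution that a real framework (e.g. Celery with its beat scheduler in Python, or Quartz in Java) would handle are deliberately left out:

        import asyncio
        import aiohttp

        URLS = ["http://example.com/a", "http://example.com/b"]  # stand-in for the 100K URLs
        CONCURRENCY = 500   # throttle: at most this many requests in flight
        INTERVAL = 300      # seconds between sweeps (5 minutes)

        async def check(session, sem, url):
            # One uptime probe; failures are captured instead of crashing the sweep.
            async with sem:
                try:
                    async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                        return url, resp.status
                except Exception as exc:
                    return url, repr(exc)

        async def sweep():
            sem = asyncio.Semaphore(CONCURRENCY)
            async with aiohttp.ClientSession() as session:
                for url, status in await asyncio.gather(*(check(session, sem, u) for u in URLS)):
                    print(url, status)

        async def main():
            while True:  # naive fixed-interval scheduler
                await sweep()
                await asyncio.sleep(INTERVAL)

        if __name__ == "__main__":
            asyncio.run(main())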

    Read the article

  • SEO: Google and publisher networks

    - by Andy
    I'm just about to start a new business which creates niche affiliate sites. I'm curious about the impact on me from Google of all the URLs being hosted with the same analytics tags, Webmaster Tools tags, and server IP ranges. To benefit the most from Google's SERPs, should I keep each domain in a separate Analytics account and Webmaster Tools account, or is it OK for me to have all of my domains within one account? My issue is duplicate content and the fact that I am building a publisher network, and I'm not sure how much Google likes those. I'm notoriously bad at searching and as such haven't found what I'm looking for yet. Any help would be very much appreciated.

    Read the article

  • Programming Windows Identity Foundation - ISBN 978-0-7356-2718-5

    - by TATWORTH
    This book introduces a new technology that promises a considerable improvement on the ASP.NET membership system. If you have ever had to write an extranet system, you should be aware of the problems in setting up membership for your site. Windows Identity Foundation promises to be an excellent replacement; therefore the book Programming Windows Identity Foundation (ISBN 978-0-7356-2718-5, http://oreilly.com/catalog/9780735627185) is breaking new ground. I recommend this book to all ASP.NET development teams. You should reckon on 3 to 5 man-days to study it, try out the sample programs, and see if it can replace your bespoke solution. Remember this is version 1 of WIF, so give yourself adequate time to read this book and familiarise yourself with the new software. Some URLs for more information:

    WIF home page: http://msdn.microsoft.com/en-us/security/aa570351.aspx
    The Identity Training Kit: http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=c3e315fa-94e2-4028-99cb-904369f177c0
    The author's blog: http://www.cloudidentity.net/

    Read the article

  • How to get rid of crawling errors due to the URL Encoded Slashes (%2F) problem in Apache

    - by user14198
    The Google web crawler has indexed a whole set of URLs with encoded slashes (%2F) for our site. I assume it picked the pages up from our XML sitemap file. The problem is that the live pages actually fail because of the URL-encoded-slashes problem in Apache. Some solutions are mentioned here. We are implementing a 301 redirect scheme for all the error pages, which should make the Googlebot drop the pages from the crawl errors (no more crashing pages). Does implementing the 301s require the pages to be "live"? In that case we may be forced to implement solution 1 from the article. The problem is that solution 1 poses a security vulnerability.
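
    For reference, a sketch of how the two pieces could fit together in the Apache configuration; the directives are real, but the redirect target is hypothetical. On the question itself: a 301 is generated by the server before the URL is ever mapped to a page, so the broken pages do not need to be "live" for the redirects to be served.

        # Server/vhost configuration only - AllowEncodedSlashes is not valid in .htaccess.
        # NoDecode (Apache 2.2.18+) lets requests containing %2F reach the rewrite
        # engine instead of being rejected with a 404 up front.
        AllowEncodedSlashes NoDecode

        # Permanently redirect anything still arriving with an encoded slash
        # to a clean equivalent (hypothetical target page).
        RewriteEngine On
        RewriteCond %{THE_REQUEST} %2F [NC]
        RewriteRule ^ /clean-target-page [R=301,L]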

    Read the article

  • Disadvantages of a fake phpMyAdmin honeypot that causes IP blacklisting, with a robots.txt disallow/exclusion of the honeypot?

    - by Tchalvak
    I'm trying to figure out whether I should set up a honeypot system with a fake phpMyAdmin (the site gets hits all the time from people spidering for insecurities in that app). My thought was to create a honeypot PHP script that would mimic a phpMyAdmin login, and then blacklist IPs that hit that URL (and aren't already whitelisted). I would then add the appropriate URLs to robots.txt so that spiders that actually respect my robots.txt wouldn't be caught by the blacklist. Are there disadvantages to this approach? Do legitimate robots sometimes fail to respect robots.txt in certain circumstances? Are there any other problems with this that I should consider in advance?
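
    The robots.txt side of this is a one-liner; the /phpmyadmin/ path below is a hypothetical honeypot location. One disadvantage worth weighing: robots.txt is world-readable, so listing the honeypot there also advertises its exact location to any attacker who bothers to look.

        # robots.txt - keep well-behaved crawlers away from the (hypothetical) honeypot
        User-agent: *
        Disallow: /phpmyadmin/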

    Read the article

  • Free Pluralsight Videos for this week

    - by TATWORTH
    Pluralsight have issued two free videos:

    http://blog.pluralsight.com/2012/09/05/video-end-the-global-pollution-crisis-in-javascript/
    http://blog.pluralsight.com/2012/08/31/video-fake-it-until-you-make-it-with-fakeiteasy/

    Their exact words were:

    Free Videos this Week

    End the Global Pollution Crisis... In Javascript
    Too many globally scoped variables and functions can make Javascript code difficult to work with, particularly on large projects. See how to simulate the concept of namespaces using objects.

    Fake it Until You Make It with FakeItEasy
    FakeItEasy is a .NET framework for easily creating mock objects in tests. See how easy it is to FakeItEasy with complex nested hierarchies.

    Read the article

  • SEO penalty for landing page redirects

    - by therealsix
    Using eBay as an example, let's say I have a large number of items whose URLs look like this: cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841. I want to give my clients the ability to put links to these items on their websites EASILY, without knowing or checking my URL. So I created a redirect service that maps their identifier to my URL: ebay.com/fake_redirect_service/shared_identifier9918 would redirect to the link above. This works great: my clients can easily set up these links with information they already have, and the user will see the page as usual. So on to the problem... I'm concerned that this redirect service will have a negative impact on my SEO ranking. Having a landing page redirect you immediately to a different URL seems like something a typical spam site would do. Will this hurt me? Any better solutions?
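
    A minimal sketch of such a redirect service using only Python's standard library (the mapping data and port are illustrative). The detail that matters most for SEO is the status code: a 301 (permanent) tells search engines the short URL is merely an alias for the destination, whereas a 302 makes the landing page look like content in its own right.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        MAPPING = {  # shared identifier -> real URL (illustrative data)
            "shared_identifier9918":
                "http://cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841",
        }

        class RedirectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                key = self.path.rstrip("/").rsplit("/", 1)[-1]
                target = MAPPING.get(key)
                if target:
                    self.send_response(301)  # permanent: search engines follow the alias
                    self.send_header("Location", target)
                else:
                    self.send_response(404)
                self.end_headers()

        HTTPServer(("", 8080), RedirectHandler).serve_forever()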

    Read the article

  • Google Site Search (commercial) not indexing files in sitemap

    - by melat0nin
    I have a client for whom we have purchased Google Site Search. It works well for HTML pages served by the CMS, but files aren't being reliably indexed. I wrote a script to generate an XML feed (sitemap) of all the files in the CMS, which I've plugged into Google Webmaster Tools for the site. It says that for that sitemap 923 URLs have been submitted, but only 26 have been indexed. The client relies heavily on searching within files, which is why we decided to use Google search, so this is a bit of a problem. Many of the files aren't linked to from any page on the site, as they are old and therefore don't merit having a page of their own. But they still need to be accessible through search for archiving purposes. The file archive XML can be found at www.sniffer.org.uk/file-archive and the standard XML sitemap (of pages) can be found at www.sniffer.org.uk/sitemap.xml. Any thoughts would be much appreciated!
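
    For reference, file URLs go into a sitemap exactly like page URLs; a minimal entry looks like the sketch below (the PDF path and date are hypothetical). Note that a submitted-versus-indexed gap is common for unlinked files: submission only makes URLs discoverable, and Google still chooses what to index, with documents that have no inbound links tending to be deprioritised.

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.sniffer.org.uk/files/archive-report.pdf</loc>
            <lastmod>2010-01-15</lastmod>
          </url>
        </urlset>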

    Read the article

  • How do I configure multiple domain names on my IIS server? [closed]

    - by Dillie-O
    We have a few websites running on one instance of IIS, and each needs to be mapped to its own domain name. For example:

    Site A has the domain name coolness.com
    Site B has the domain name 6to8Weeks.com
    Site C has the domain name PhatTech.com

    When I look at the "Web Site Identification" section of the IIS configuration window, I notice that I can specify an IP address and port, but if I click the Advanced button, I can also configure the site based on host header values. How do I configure each site in IIS? Ideally I would like them all to listen on port 80, so I don't have weird URLs, but I'm not sure if I should do this using host headers, IP addresses, both, or something else.
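
    Host headers are the usual answer: all three sites bind to the same IP and port 80, and IIS routes each request by the Host header the browser sends. A sketch using IIS 7's appcmd is below (site names from the question, domains lower-cased; run from an elevated prompt); on IIS 6 the equivalent is filling in the host header list behind the Advanced button you found.

        %windir%\system32\inetsrv\appcmd set site "Site A" /+bindings.[protocol='http',bindingInformation='*:80:coolness.com']
        %windir%\system32\inetsrv\appcmd set site "Site B" /+bindings.[protocol='http',bindingInformation='*:80:6to8weeks.com']
        %windir%\system32\inetsrv\appcmd set site "Site C" /+bindings.[protocol='http',bindingInformation='*:80:phattech.com']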

    Read the article

  • Hidden Windows 7 Wallpaper

    - by BizTalk Visionary
    To find the hidden wallpaper: type "globalization" in a search of your C: drive. The only result should be a folder located in the main Windows directory, and you should only be able to see the ELS and Sorting folders nested there. Now search for "MCT" in the top-right search bar. This will display five new unindexed folders, each corresponding to a different global region. Browse these folders for some extra themes and wallpapers specific to Australia, the USA, South Africa, and Canada. From there you can select a new wallpaper.

    Read the article

  • How to cache images on CloudFlare when their URLs are served to the client as JSON?

    - by Askar Ibragimov
    I am using a gallery on my website that gets its list of images from JSON sent by a PHP script. So, the JavaScript gallery calls the PHP backend, and it replies with a complex JSON document in which images are specified as object fields. These fields do not necessarily contain full URLs, merely paths to the needed images. I'd like to use Cloudflare and want these images to be cached there. How can I find out whether they are cached or not, and how can I make sure that they will be cached and not treated as some sort of dynamic content?
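
    One thing worth knowing: Cloudflare decides whether to cache a response based on the request for the image itself (by default it caches common static extensions such as .jpg and .png), not on how the client discovered the URL, so delivering the paths inside JSON should not by itself prevent caching. To check a given image, inspect the CF-Cache-Status header Cloudflare adds to responses (the URL below is illustrative):

        # HIT = served from Cloudflare's edge cache; MISS/EXPIRED = fetched from
        # the origin this time; DYNAMIC = Cloudflare chose not to cache it.
        curl -s -D - -o /dev/null "https://www.example.com/gallery/photo1.jpg" | grep -i "cf-cache-status"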

    Read the article

  • How to generate "language-safe" UUIDs?

    - by HappyDeveloper
    I always wanted to use randomly generated strings for my resources' IDs, so I could have shorter URLs like this: /user/4jz0k1. But I never did, because I was worried about the random string generation creating actual words, e.g. /user/f*cker. This brings two problems: it might be confusing or even offensive for users, and it could mess with the SEO too. Then I thought all I had to do was set up a fixed pattern, like adding a number every 2 letters. I was very happy with my 'generate_safe_uuid' method, but then I realized it was only better for SEO and worse for users, because it increased the ratio of actual words being generated, e.g. /user/g4yd1ck5. Now I'm thinking I could create a 'replace_numbers_with_letters' method and check that it hasn't formed any words against a dictionary or something. Any other ideas? P.S. As I write this, I have also realized that checking for words in more than one language (e.g. English, French, Spanish, etc.) would be a mess, and I'm starting to love numbers-only IDs again.
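
    One common fix, sketched below in Python, is to make words impossible rather than filtering them after the fact: generate IDs from an alphabet containing no vowels, so no real word in a Latin-script language can appear and the dictionary check becomes unnecessary. The alphabet and length here are a design choice, not any standard:

        import secrets

        # Consonants and digits only: with no vowels (and no y, which can act as
        # one), the ID cannot spell a word; 0, 1 and l are dropped as look-alikes.
        ALPHABET = "bcdfghjkmnpqrstvwxz23456789"

        def generate_safe_id(length: int = 6) -> str:
            # secrets.choice gives cryptographically unpredictable picks
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        print(generate_safe_id())  # e.g. "x7gkm2"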

    Read the article

  • How will this affect my SEO ranking?

    - by dunc
    I run a fishkeeping website based on a WordPress (PHP) CMS. I've recently put a fairly complex "filter" into place which searches my content for mentions of fish species profiles and turns them into active links. For example:

    asdasd this is a test about abdomen to see if the caudal fin will work asdadasdas try again with abdomen and A. panduro and Apistogramma panduro

    ...becomes:

    asdasd this is a test about abdomen to see if the caudal fin will work asdadasdas try again with abdomen and <a href="/?p=1703" class="link_species">A. panduro</a> and <a href="/?p=1703" class="link_species">Apistogramma panduro</a>

    On the rest of my website, the species are linked with pretty URLs such as /species/apistogramma-panduro/, but due to the way this filter works, the only information I have access to is the ID of the post. As such, I'm using /?p=1703, or whatever the ID is. What I'd like to know is: how much will this affect my SEO rating/ranking? Will it be detrimental if I don't rewrite the function? Thanks in advance
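
    The usual fix is to give the filter access to an ID-to-slug lookup so it can emit the same pretty URLs as the rest of the site; in WordPress that lookup already exists as get_permalink(). A language-neutral sketch of the idea in Python, with the species names and URL scheme taken from the question and a plain dict standing in for the lookup:

        import re

        SLUGS = {1703: "apistogramma-panduro"}  # post ID -> pretty slug
        SPECIES = {"Apistogramma panduro": 1703, "A. panduro": 1703}  # name -> post ID

        def link_species(text):
            # A production version should also avoid re-linking text that is
            # already inside an anchor tag.
            for name, post_id in SPECIES.items():
                url = "/species/%s/" % SLUGS[post_id]
                link = '<a href="%s" class="link_species">%s</a>' % (url, name)
                text = re.sub(re.escape(name), link, text)
            return text

        print(link_species("try again with A. panduro"))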

    Read the article

  • Best way to redirect in IIS

    - by stephmoreland
    We have a website that has two URLs: one for the US side and another for the Canadian side, which is further split into Canadian English and Canadian French. For the purposes of my question, I will write them as:

    www.us_url.com (US)
    www.canada_url.ca/ca_en/ (Canadian English)
    www.canada_url.ca/ca_fr/ (Canadian French)

    To make sure people are on the correct site, what do I do if they go to the US URL with Canadian English content (e.g. www.us_url.com/ca_en/canada.asp)? I want to make sure the URL is the Canadian one (e.g. www.canada_url.ca/ca_en/canada.asp) so it shows up properly in Google Analytics. We're using IIS 7 and classic ASP.
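
    Assuming the IIS URL Rewrite module is installed (it is an add-on for IIS 7, not built in), a rule along the following lines, using the host names from the question, would permanently redirect Canadian-content requests that arrive on the US host, so Google Analytics only ever sees the Canadian URL:

        <!-- web.config sketch; requires the IIS URL Rewrite module -->
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Canadian paths to Canadian host" stopProcessing="true">
                  <match url="^(ca_en|ca_fr)/(.*)" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^www\.us_url\.com$" />
                  </conditions>
                  <action type="Redirect" url="http://www.canada_url.ca/{R:1}/{R:2}" redirectType="Permanent" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>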

    Read the article

  • PowerShell progress dialogs

    - by Norgean
    Creating nested progress dialogs in PowerShell is easy. Let the code speak for itself:

        for ($i = 1; $i -le 2; $i++)
        {
            Write-Progress -ID 1 -Activity "Outer loop" -Status "Tick $i" -percentComplete ($i / 2*100)
            for ($j = 1; $j -le 3; $j++)
            {
                Write-Progress -ID 2 -Activity "Mid loop" -Status "Tick $j" -percentComplete ($j / 3*100)
                for ($k = 1; $k -le 3; $k++)
                {
                    Write-Progress -ID 3 -Activity "Inner loop" -Status "Tick $k" -percentComplete ($k / 3*100)
                    Sleep(1)
                }
            }
        }

    I.e. some text that explains what we're doing (Activity and Status), and ID numbers. Easy.

    Read the article

  • BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked. Also, the full URL isn't preserved, just the hostname.

    1. Visit http://www.bluesnews.com/some_random_URL_string
    2. Exit Firefox
    3. Run BleachBit, with ALL Firefox options selected
    4. Restart Firefox
    5. Check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked)
    6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"

    Alternate step #3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above. My inquiry in the BleachBit forums hasn't been responded to. I found Dan's proposed solution; however, changing autocomplete in about:config only turns off the function, it doesn't actually stop storing URLs.

    Read the article

  • VMware Player and Ubuntu 12.04 - Full Screen

    - by DotNetStudent
    I have installed VMware Player 4.0.2 under Ubuntu 12.04 (Final) and, apart from having to patch the modules, everything went smoothly. However, there's an irritating behavior when toggling full-screen mode: toggling full screen (using Virtual Machine > Toggle Full Screen or Ctrl + Alt + Return), minimizing the player, and then maximizing it again changes the guest's resolution to some strange one, and the player gets "nested" under GNOME 3's taskbar like every other native Ubuntu window. To switch to full screen again I have to press Ctrl + Alt + Return twice. Can anyone please tell me if this is the normal, expected behavior? Is there any way of "correcting" it? The host operating system is Ubuntu 12.04 (Final) and the guest is Windows 7 (both 64-bit).

    Read the article

  • Moved sitemaps to a different subdomain and losing search referrals around the same time. Red herring or correlation?

    - by er1234
    We started to lose search referral traffic around the same time that I moved some of our sitemaps to a subdomain. Could this have hurt us? I followed Google's steps for creating a sitemap under a different subdomain. The new sitemaps.foo.com subdomain is being crawled and indexed well. Both www.foo.com and sitemaps.foo.com have been verified in Google Webmaster Tools. They appear as distinct sites. Is this correct? I can't find a way in Webmaster Tools to say "Hey, sitemaps.foo.com is really owned by www.foo.com, so show them together and make sure to attribute sitemaps.foo URLs to www.foo". Our www.foo.com/robots.txt contains:

        Sitemap: http://www.foo.com/sitemap.xml
        Sitemap: http://sitemaps.foo.com/subdir/sitemap.xml.gz

    Read the article

  • .htaccess: redirect directories to index.html

    - by BFTrick
    Hi there, I am working on a site where I do not have permission to access the server, and someone else keeps changing the settings. That person just changed the settings so that users going to example.com/foo/ no longer see the index page ("This Virtual Directory does not allow contents to be listed."). If you type in example.com/foo/index.html you can still see the file. So I want to use .htaccess to redirect all URLs that end in a directory to directory/index.html. How do I write that? I started with some code that changes .php files to .html files and tried to work from that, but I couldn't quite get it to work.

        RewriteRule ^(.*)\.php$ /$1.html [R=301,L]

    Any suggestions?
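
    A sketch of one way to write it: only redirect when the request actually resolves to a directory, so ordinary files keep working. (If the host allows overrides, a simpler alternative is a DirectoryIndex index.html line, which serves the index page without changing the URL.)

        RewriteEngine On
        # If the requested path is a real directory, send the browser to its
        # index.html explicitly; mod_dir normally adds the trailing slash first.
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.+)/$ /$1/index.html [R=301,L]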

    Read the article

  • Alternatives to OAuth?

    - by sdolgy
    The web industry has shifted towards using OAuth when extending API services to external consumers and developers. There is some elegance in simplicity... and, well, the 3-step OAuth process isn't too bad; I just find it to be the best of a bad bunch of options. Are there alternatives out there that could be better and more secure? The security reference is derived from the following URLs:

    http://www.infoq.com/news/2010/09/oauth2-bad-for-web
    http://hueniverse.com/2010/09/oauth-2-0-without-signatures-is-bad-for-the-web/

    Read the article

  • Is the use of hashbang really a good idea? [on hold]

    - by user32642
    I've been working on a WordPress site lately that was designed with hashbangs (#!) in the dynamically generated URLs. After doing some research, I noticed that Google expressed some preference regarding their use and how it crawls such sites. However, after I ran several sitemap generators and the Screaming Frog SEO Spider, I realized that the only page being crawled was the index page. So now I am questioning the use of hashbangs. What do you think? Should I attempt to remove them? Or will it even matter? And does anyone know of an easy way to remove them? The site is www.modernvintage1005.com

    Read the article

  • How can I redirect pages from an old core PHP site to a new Joomla site?

    - by pkachhia
    We had our old site in plain (core) PHP, and last year we rebuilt it in Joomla 1.5 (because of some limitations we had to build it on 1.5). Now the problem is that the site's URLs have changed, since we use SEO URLs in Joomla. In the meantime we have used .htaccess to redirect users from the old URLs to the new ones, like this:

        Redirect /pages/oldpage.php http://www.mydomain.com/products/category/new_page.html

    Is this good practice for redirecting users to the new URLs or not? (We use the same server.) One more thing: we use a splash page on our site, and to set it up we made some changes; because of them one important link is not working: http://www.mydomain.com/index.php. How can I get rid of this problem? I used

        DirectoryIndex splash.html home.html index.php

    in .htaccess to open the splash page first when someone visits http://www.mydomain.com. Note: my website is hosted on a dedicated Ubuntu server.
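
    Two notes, assuming a standard Apache setup. First, a bare Redirect defaults to a 302 (temporary); since the old pages have moved for good, making it an explicit 301 tells search engines to transfer the old URLs' standing to the new ones. Second, DirectoryIndex only applies when a directory itself is requested, so an explicit request for /index.php bypasses that line entirely; the breakage more likely comes from Joomla's own SEF/redirect rules. The relevant .htaccess lines would look like:

        # Permanent redirect, so search engines update their index
        Redirect 301 /pages/oldpage.php http://www.mydomain.com/products/category/new_page.html

        # Serve the splash page first when the bare domain is requested; this
        # does not affect explicit requests such as /index.php
        DirectoryIndex splash.html home.html index.php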

    Read the article

  • Website Remodel Redirects

    - by inKit
    We've recently built a site for a new client who has not moved all the content from their old site into the new one. Also, a lot of content is dynamic, with IDs that don't match between the old site and the new one. We have added dynamic redirects for most of the patterns we could find in pages that were 404ing, but there are still a lot of pages that had content, or just jumbled URLs, that we cannot match up with content pages on the new site. Is it better to redirect these leftover pages to the homepage, or to leave them 404ing?

    Read the article

  • Length of Page Title, URL, Meta Description and total number of links on a page

    - by MJWadmin
    We've been examining a number of different SEO tools recently. Several of these tell us that some of our page titles, URLs, and meta descriptions are too long. We've also been told that some of our pages have too many links on them. I guess our first question is: is any of that feedback true? Can URLs etc. actually be too long, and if so, how much does this affect ranking? Secondly, can you have too many links on a page, and if so, how many is too many? Thanks in advance...

    Read the article

  • Hosting a Small PHP/MySQL Project [closed]

    - by paulrehkugler
    I have a small PHP/MySQL project in the works and I'm looking for somewhere to host it. My criteria are:

    - Ability to run PHP/MySQL (either natively, or with the ability to install them).
    - Ability to configure the web server, so I can make pretty URLs.
    - Not spammy (if you've ever looked for hosting, you know what I mean).
    - Semi-professional: no ridiculous downtime or long response times. I obviously don't need anything spectacular (I'm not aiming to be the next Facebook), but something that doesn't seem cheap.
    - Reasonably priced: obviously this is a side project for fun, so I'm not planning on making (or spending) any sort of "serious" revenue.

    Read the article
