Search Results

Search found 5416 results on 217 pages for 'urls py'.

Page 115/217 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • How do I interpret direct traffic that lands on random pages?

    - by mfg
    Looking at yesterday's data in Google Analytics, I got six direct visitors to my site (their source/medium is direct/(none)). Only one landed on the actual domain; the other five ended up on miscellaneous pages like foo.com/xyz.html. I did not send out links to people by email, and I'm not sure how likely it is that people would have copy/pasted the URLs. How did these visitors end up there? Is there a way to better capture where they might be coming from?

    Read the article

  • Content Optimization only?

    - by danie7L T
    There are tons of discussions around tips & tricks to improve search engine rankings and SEO. But what if the webmaster's/client's focus is set 100% on the quality of the content, with precise keywords in meta tags, a clean design, regular article updates, clean URLs, and highly filtered external links leading to pages on websites dealing with the same or related subjects: isn't it the job of a good search engine like Google to find such a website and show it on its front page? Or do search engines count on us to help them find us, so that webmasters will always have to stay up to date on SEO tools and rule changes, on top of website design, browser customization, progressive enhancement, etc.?

    Read the article

  • Are there frameworks or libraries that let me run a large number of concurrent jobs on a schedule?

    - by Yoga
    Are there any high-level programming frameworks that let me run a large number of concurrent jobs on a schedule? For example, I have 100K URLs whose uptime needs to be checked every 5 minutes. I could certainly write a program to handle this, but then I would need to handle concurrency, queuing, error handling, system throttling, job distribution, etc. Is there a framework where I focus only on the particular job (i.e. the ping task) and the system takes care of the scaling and error handling for me? I am open to any language.
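    For context, here is a minimal hand-rolled sketch (in Python) of exactly the plumbing such a framework would take off your hands: a fixed 5-minute schedule plus concurrent checks. It assumes the third-party requests library, and the URL list, worker count, and timeout are illustrative placeholders rather than anything from the original question.

        import time
        from concurrent.futures import ThreadPoolExecutor

        import requests  # third-party HTTP client, assumed available

        URLS = ["http://example.com", "http://example.org"]  # placeholder for the real 100K-URL list
        INTERVAL_SECONDS = 300  # check every 5 minutes


        def check(url):
            # Return the URL with its HTTP status, or the error message on failure.
            try:
                return url, requests.get(url, timeout=10).status_code
            except requests.RequestException as exc:
                return url, str(exc)


        def run_once():
            # Fan the checks out over a pool of worker threads.
            with ThreadPoolExecutor(max_workers=50) as pool:
                for url, result in pool.map(check, URLS):
                    print(url, result)


        while True:
            started = time.time()
            run_once()
            # Sleep whatever is left of the 5-minute window.
            time.sleep(max(0, INTERVAL_SECONDS - (time.time() - started)))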

    Read the article

  • SEO: Google Publisher Network

    - by Andy
    I'm just about to start a new business that creates niche affiliate sites. I'm curious about the impact Google will see from all the URLs being hosted with the same analytics tags, Webmaster Tools tags, and server IP ranges. To benefit the most from Google's SERPs, should I keep each domain in separate Analytics accounts and Webmaster Tools, or is it OK to have all of my domains within one account? My issue is duplicate content and the fact that I am building a publisher network, and I'm not sure how much Google likes those. I'm notoriously bad at searching and as such haven't found what I'm looking for yet. Any help would be very much appreciated.

    Read the article

  • Disadvantages of a fake phpMyAdmin honeypot that causes IP blacklisting, with a robots.txt disallow/exclusion of the honeypot?

    - by Tchalvak
    I'm trying to figure out whether I should set up a honeypot system with a fake phpMyAdmin (the site gets hits all the time from people spidering for vulnerabilities in that app). My thought was to create a honeypot PHP script that would mimic a phpMyAdmin login, and then blacklist IPs that hit that URL (and aren't already whitelisted). I would then add the appropriate URLs to robots.txt so that spiders that actually respect my robots.txt wouldn't be caught by the blacklist. Are there disadvantages to this approach? Do legitimate robots sometimes not respect robots.txt in certain circumstances? Are there any problems with this that I should consider in advance?
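    For illustration only, the robots.txt exclusion for such a honeypot might look like the lines below; the /fake-phpmyadmin/ path is a made-up placeholder rather than anything from the question. One design trade-off worth noting is that robots.txt is publicly readable, so listing the path there also advertises the honeypot to anyone who bothers to look.

        User-agent: *
        Disallow: /fake-phpmyadmin/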

    Read the article

  • How to get rid of crawling errors due to the URL Encoded Slashes (%2F) problem in Apache

    - by user14198
    The Google web crawler has indexed a whole set of URLs with encoded slashes (%2F) for our site. I assume it picked up the pages from our XML sitemap file. The problem is that the live pages actually fail because of the URL-encoded-slashes problem in Apache. Some solutions are mentioned here. We are implementing a 301 redirect scheme for all the error pages, which should make the Googlebot drop the pages from the crawl errors report (no more crashing pages). Does implementing the 301s require the pages to be "live"? In that case we may be forced to implement solution 1 from the article. The problem is that solution 1 would pose a security vulnerability.
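    For reference, a hedged sketch of the Apache side of this: AllowEncodedSlashes NoDecode (available from Apache 2.2.18 onwards) is the variant usually suggested as safer than a blanket On, and a mod_rewrite rule can then issue the 301s without the old pages ever rendering. The paths below are illustrative placeholders, not the site's real URLs.

        # Accept %2F in request URLs without decoding it (Apache 2.2.18 or later)
        AllowEncodedSlashes NoDecode

        # Illustrative permanent redirect from the old URL pattern to its clean equivalent
        RewriteEngine On
        RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]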

    Read the article

  • Programming Windows Identity Foundation - ISBN 978-0-7356-2718-5

    - by TATWORTH
    This book introduces a new technology that promises a considerable improvement on the ASP.NET membership system. If you have ever had to write an extranet system, you should be aware of the problems in setting up membership for your site. Windows Identity Foundation promises to be an excellent replacement. Therefore the book Programming Windows Identity Foundation - ISBN 978-0-7356-2718-5, at http://oreilly.com/catalog/9780735627185, is breaking new ground. I recommend this book to all ASP.NET development teams. You should reckon on 3 to 5 man-days to study it, try out the sample programs, and see if it can replace your bespoke solution. Remember this is version 1 of WIF, so give yourself adequate time to read this book and familiarise yourself with the new software. Some URLs for more information:

        WIF home page: http://msdn.microsoft.com/en-us/security/aa570351.aspx
        The Identity Training Kit: http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=c3e315fa-94e2-4028-99cb-904369f177c0
        The author's blog: http://www.cloudidentity.net/

    Read the article

  • Particle Effect Completion

    - by Siddharth
    In my game I use particle effects for various purposes, and I need to detect when a particle effect has completed. Basically, I want to do something after the particle effect completes, but the problem is that I wasn't able to find a way to detect completion. Could any community member please help me? EDIT: I create the particle effect using the following code:

        pointParticleEmtitter = new PointParticleEmitter(pX, pY);
        particleSystem = new ParticleSystem(pointParticleEmtitter, maxRate, minRate, maxParticles, mParticleTextureRegion.deepCopy());
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new ColorInitializer(0f, 0f, 1f));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 0, 0.5f));
        particleSystem.addParticleModifier(new ExpireModifier(0.5f));
        gameObject.getScene().attachChild(particleSystem);

    Using the code above, the particle effect starts, but I want to detect when it has finished. After the effect finishes, I want to remove the object from the scene.

    Read the article

  • SEO penalty for landing page redirects

    - by therealsix
    Using eBay as an example, let's say I have a large number of items whose URLs look like this: cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841. I want to give my client the ability to put links to these items on their website easily, without knowing or checking my URL. So I created a redirect service that maps their identifier to my URL: ebay.com/fake_redirect_service/shared_identifier9918 would redirect to the link above. This works great: my clients can easily set up these links with information they already have, and the user sees the page as usual. So on to the problem... I'm concerned that this redirecting service will have a negative impact on my SEO ranking. Having a landing page redirect you immediately to a different URL seems like something a typical spam site would do. Will this hurt me? Any better solutions?
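    As a side note on the mechanics rather than the SEO question itself, the detail that usually matters for ranking is that the hop is a permanent (301) redirect rather than a temporary (302) one. Here is a hedged sketch of such a lookup-and-redirect service in Flask; the route, identifier, and hard-coded mapping are made up for illustration and stand in for a real data store:

        from flask import Flask, abort, redirect

        app = Flask(__name__)

        # Placeholder mapping: client-facing identifier -> destination URL
        ITEM_URLS = {
            "shared_identifier9918": "http://cgi.ebay.com/ebaymotors/1981-VW-Vanagon-manual-seats-seven-/250953153841",
        }


        @app.route("/fake_redirect_service/<identifier>")
        def item_redirect(identifier):
            url = ITEM_URLS.get(identifier)
            if url is None:
                abort(404)
            # Permanent redirect; Flask's default would be a 302.
            return redirect(url, code=301)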

    Read the article

  • Google Site Search (commercial) not indexing files in sitemap

    - by melat0nin
    I have a client for whom we have purchased Google Site Search. It works well for HTML pages served by the CMS, but files aren't being reliably indexed. I wrote a script to generate an XML feed (sitemap) of all the files in the CMS, which I've plugged into Google Webmaster Tools for the site. It says that for that sitemap 923 URLs have been submitted, but only 26 have been indexed. The client relies heavily on searching within files, which is why we decided to use Google search, so this is a bit of a problem. Many of the files aren't linked to from any page on the site, as they are old and therefore don't merit having a page of their own. But they still need to be accessible through search for archiving purposes. The file archive XML can be found at www.sniffer.org.uk/file-archive and the standard XML sitemap (of pages) can be found at www.sniffer.org.uk/sitemap.xml. Any thoughts would be much appreciated!

    Read the article

  • How do I cache images on CloudFlare when their URLs are served to the client as JSON?

    - by Askar Ibragimov
    I am using a gallery on my website that gets its list of images from JSON sent by a PHP script. The JavaScript gallery calls the PHP backend, which replies with a complex JSON structure in which the images are specified as object fields. These fields don't necessarily include full URLs, merely a path to the needed images. I'd like to use CloudFlare and want these images to be cached there. How can I tell whether they are cached or not, and how do I make sure they will be cached rather than treated as some sort of dynamic content?
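    On the "how do I know whether it is cached" part: CloudFlare normally reports cache status in a CF-Cache-Status response header (HIT, MISS, EXPIRED, and so on), so a quick script can check it. A hedged sketch using the third-party requests library, with a placeholder image URL:

        import requests

        # Placeholder URL for one of the images referenced in the JSON response.
        url = "https://example.com/gallery/images/photo-123.jpg"

        response = requests.get(url)
        # HIT means CloudFlare served it from its cache; MISS/EXPIRED mean it went to origin.
        print(response.status_code, response.headers.get("CF-Cache-Status"))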

    Read the article

  • How do I configure multiple domain names on my IIS server? [closed]

    - by Dillie-O
    We have a few websites running on one instance of IIS, each of which needs to be mapped to its own domain name. For example:

        Site A has the domain name coolness.com
        Site B has the domain name 6to8Weeks.com
        Site C has the domain name PhatTech.com

    When I look at the "Web Site Identification" section of the IIS configuration window, I notice that I can specify an IP address and port, but if I click the Advanced button I can also configure the site based on host header values. How do I configure each site in IIS? Ideally I would like them all to listen on port 80, so I don't have weird URLs, but I'm not sure whether I do this using host headers, IP addresses, both, or something else.

    Read the article

  • python factory function best practices

    - by Jason S
    Suppose I have a file foo.py containing a class Foo:

        class Foo(object):
            def __init__(self, data):
                ...

    Now I want to add a function that creates a Foo object in a certain way from raw source data. Should I put it as a static method in Foo or as another separate function?

        class Foo(object):
            def __init__(self, data):
                ...

            # option 1:
            @staticmethod
            def fromSourceData(sourceData):
                return Foo(processData(sourceData))

        # option 2:
        def makeFoo(sourceData):
            return Foo(processData(sourceData))

    I don't know whether it's more important to be convenient for users:

        foo1 = foo.makeFoo(sourceData)

    or whether it's more important to maintain clear coupling between the method and the class:

        foo1 = foo.Foo.fromSourceData(sourceData)
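    A third pattern that is often weighed against these two (not part of the original question) is a classmethod constructor: it keeps the factory attached to the class like option 1, but also returns the right type when called on a subclass. processData here is the same undefined helper the question already assumes:

        class Foo(object):
            def __init__(self, data):
                ...

            # option 3 (for comparison): a classmethod factory
            @classmethod
            def fromSourceData(cls, sourceData):
                # cls is Foo here, or whichever subclass the method was called on
                return cls(processData(sourceData))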

    Read the article

  • How to generate "language-safe" UUIDs?

    - by HappyDeveloper
    I always wanted to use randomly generated strings for my resources' IDs, so I could have shorter URLs like this: /user/4jz0k1. But I never did, because I was worried about the random string generation forming actual words, e.g. /user/f*cker. This brings two problems: it might be confusing or even offensive for users, and it could mess with SEO too. Then I thought all I had to do was set up a fixed pattern, like adding a number every 2 letters. I was very happy with my 'generate_safe_uuid' method, but then I realized it was only better for SEO, and worse for users, because it increased the ratio of actual words being generated, e.g. /user/g4yd1ck5. Now I'm thinking I could create a 'replace_numbers_with_letters' method and check that it hasn't formed any words against a dictionary or something. Any other ideas? P.S. As I write this, I also realized that checking for words in more than one language (e.g. English, French, Spanish, etc.) would be a mess, and I'm starting to love numbers-only IDs again.
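    One hedged sketch of the "avoid forming words" idea: draw IDs only from consonants and digits, so no vowel-containing word can appear. The alphabet and length below are arbitrary choices, not from the original post, and this doesn't guarantee nothing awkward ever shows up, it just cuts the odds sharply.

        import secrets

        # Consonants and digits only (no vowels, no y), so ordinary words can't be spelled.
        ALPHABET = "bcdfghjklmnpqrstvwxz23456789"

        def generate_safe_id(length=6):
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        print(generate_safe_id())  # e.g. an identifier in the style of '4jz0k1'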

    Read the article

  • How will this affect my SEO ranking?

    - by dunc
    I run a fishkeeping website based on the WordPress (PHP) CMS. I've recently put a fairly complex "filter" into place which searches my content for mentions of fish species profiles and turns them into active links. For example,

        asdasd this is a test about abdomen to see if the caudal fin will work asdadasdas try again with abdomen and A. panduro and Apistogramma panduro

    ...becomes

        asdasd this is a test about abdomen to see if the caudal fin will work asdadasdas try again with abdomen and <a href="/?p=1703" class="link_species">A. panduro</a> and <a href="/?p=1703" class="link_species">Apistogramma panduro</a>

    On the rest of my website, the species are linked with pretty URLs such as /species/apistogramma-panduro/, but due to the way this filter works, the only information I have access to is the id of the post. As such, I'm using /?p=1703 or whatever the ID is. What I'd like to know is: how much will this affect my SEO rating/ranking? Will it be detrimental if I don't rewrite the function? Thanks in advance.

    Read the article

  • Best way to redirect in IIS

    - by stephmoreland
    We have a website that has two URLs (one for the US side and another for the Canadian side, which is then broken into Canadian English and Canadian French). For the purposes of my question, I will write them as:

        www.us_url.com (US)
        www.canada_url.ca/ca_en/ (Canadian English)
        www.canada_url.ca/ca_fr/ (Canadian French)

    To make sure people are on the correct site, what do I do if they go to the US URL with Canadian English content (e.g. www.us_url.com/ca_en/canada.asp)? I want to make sure the URL is the Canadian one (e.g. www.canada_url.ca/ca_en/canada.asp) so it shows up properly in Google Analytics. We're using IIS 7 and classic ASP.

    Read the article

  • BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL history at all. These addresses are not bookmarked. Also, it's not the full URL that's kept, just the hostname.

        1. Visit http://www.bluesnews.com/some_random_URL_string
        2. Exit Firefox
        3. Run BleachBit with ALL Firefox options selected
        4. Restart Firefox
        5. Check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked)
        6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"

    Alternate Step #3: Use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above. My inquiry in the BleachBit forums hasn't received a response. I found Dan's proposed solution; however, changing autocomplete in about:config only turns the feature off, it doesn't actually stop Firefox from storing URLs.

    Read the article

  • Why is ComboBoxText giving me a "no attribute" error?

    - by boywithaxe
    I'm trying to add a text combo box to my app. I've created it and populated the list, but when I try to print out the active text I get an error. Here's the part of the code in question:

        def on_netif_changed(self, widget):
            netif = widget.gtk_combo_box_text_get_active_text()
            print netif

    And the error I get:

        Traceback (most recent call last):
          File "/home/boywithaxe/Developer/Quickly/broadcast/broadcast/BroadcastWindow.py", line 44, in on_netif_changed
            netif = widget.gtk_combo_box_text_get_active_text()
        AttributeError: 'ComboBoxText' object has no attribute 'gtk_combo_box_text_get_active_text'

    I'm a bit at a loss here; I have no problem getting text from text boxes, but this seems like a completely different issue. I tried RTFMing but came up short. I would appreciate any suggestions.
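    For what it's worth, in the PyGObject bindings the C function gtk_combo_box_text_get_active_text() is exposed on Gtk.ComboBoxText simply as get_active_text(), so the handler would normally look like the sketch below (the rest of the signal wiring is assumed unchanged):

        def on_netif_changed(self, widget):
            # PyGObject maps gtk_combo_box_text_get_active_text() to get_active_text().
            netif = widget.get_active_text()
            print netif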

    Read the article

  • HTAccess redirect directories to index.html

    - by BFTrick
    Hi there, I am working on a site where I do not have permission to the server and someone else keeps changing the settings. That person just changed the settings, preventing users from going to example.com/foo/ and seeing the index page ("This Virtual Directory does not allow contents to be listed."). If you type in example.com/foo/index.html you can still see the file. So I want to use .htaccess to redirect all URLs that end in a directory to directory/index.html. How do I write that? I started with some code that changes .php files to .html files and tried to work from that, but I couldn't quite get it to work:

        RewriteRule ^(.*)\.php$ /$1.html [R=301,L]

    Any suggestions?
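    As a hedged starting point, assuming the server is Apache and .htaccess overrides are honoured there, either of the following addresses the directory-URL case; the mod_rewrite variant is only needed if DirectoryIndex is not available to you:

        # Simplest fix: tell Apache which file to serve for a bare directory request
        DirectoryIndex index.html

        # mod_rewrite alternative: send any URL ending in a slash to its index.html
        RewriteEngine On
        RewriteRule ^(.*)/$ /$1/index.html [L]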

    Read the article

  • Moved sitemaps to a different subdomain and losing search referrals around the same time. Red herring or correlation?

    - by er1234
    We started to lose search referral traffic around the same time that I moved some of our sitemaps to a subdomain. Could this have hurt us? I followed Google's steps for creating a sitemap under a different subdomain. The new sitemaps.foo.com subdomain is being crawled and indexed well. Both www.foo.com and sitemaps.foo.com have been verified in Google Webmaster Tools. They appear as distinct sites. Is this correct? I can't find a way in Webmaster Tools to say "Hey, sitemaps.foo.com is really owned by www.foo.com, so show them together and make sure to attribute sitemaps.foo URLs to www.foo". Our www.foo.com/robots.txt contains:

        Sitemap: http://www.foo.com/sitemap.xml
        Sitemap: http://sitemaps.foo.com/subdir/sitemap.xml.gz

    Read the article

  • Alternatives to OAuth?

    - by sdolgy
    The web industry is shifting (or has shifted) towards using OAuth when extending API services to external consumers and developers. There is some elegance in simplicity... and, well, the 3-step OAuth process isn't too bad; I just find it's the best of a bad bunch of options. Are there alternatives out there that could be better, and more secure? The security reference is derived from the following URLs:

        http://www.infoq.com/news/2010/09/oauth2-bad-for-web
        http://hueniverse.com/2010/09/oauth-2-0-without-signatures-is-bad-for-the-web/

    Read the article

  • Is the use of hashbang really a good idea? [on hold]

    - by user32642
    I've been working on a WordPress site lately that was designed with hashbangs (shebangs) in its dynamically generated URLs. After doing some research, I noticed that Google had some preferences about their use and about how it crawled the site. However, after I ran several sitemap generators and Screaming Frog SEO Spider, I realized that the only page being crawled was the index page. So now I am questioning the use of hashbangs. What do you think? Should I attempt to remove them? Or will it even matter? And does anyone know of an easy way to remove them? The site is www.modernvintage1005.com

    Read the article

  • How to remove text from file icons?

    - by Jesse
    I am fairly new to Ubuntu (Linux). I noticed that creating a .java or .py file gives me an icon; however, when I start writing code it overrides the thumbnail and shows text instead. Is there a way to disable this silly feature? When I installed Linux Mint in a VM I noticed I was able to accomplish this by unchecking "Show text in icon". I know they use Nemo as their file manager. I read this thread and it did not work: How to stop Nautilus from creating thumbnails of specific file types?

    Read the article

  • How can I redirect pages from an old core PHP site to a new Joomla site?

    - by pkachhia
    Our old site was built in core PHP, and last year we redeveloped it in Joomla 1.5 (because of some limitations we had to build it on 1.5). Now the problem is that the site's URLs have changed, as we use SEO URLs in Joomla. In the meantime we have used .htaccess to redirect users from old URLs to new ones, like this:

        Redirect /pages/oldpage.php http://www.mydomain.com/products/category/new_page.html

    Is this good practice for redirecting users to the new URLs or not? (We use the same server.) One more thing: we use a splash page on our site, and to set it up we made some changes, and because of them one of the important links is not working: http://www.mydomain.com/index.php. How can I get rid of this? I have used

        DirectoryIndex splash.html home.html index.php

    in .htaccess to open the splash page first when someone opens my site at http://www.mydomain.com. Note: my website is hosted on a dedicated Ubuntu server.

    Read the article

  • Website Remodel Redirects

    - by inKit
    We've recently built a site for a new client who has not inserted all the content from their old site into the new one. A lot of the content is also dynamic, with IDs that don't match between the old site and the new one. We have added dynamic redirects for most of the URL patterns we could find in pages that were 404ing, but there are still a lot of pages that had content, or just jumbled URLs, that we cannot match up with content pages on the new site. Is it better to redirect these leftover pages to the homepage, or leave them 404ing?

    Read the article
