Search Results

Search found 3750 results on 150 pages for 'joomla sef urls'.


  • How to crawl a web page with dynamic content added by JavaScript

    - by blunderboy
    I have read news that Google's bots can now understand JavaScript, which means it should be possible to fully crawl a web page that uses lazy loading. I am using Apache Nutch to crawl websites, but I don't think it can fetch the URLs that JavaScript injects into the HTML page when it is scrolled down. I see a lot of websites using lazy loading for performance reasons. Can somebody please explain how I can crawl the data that only appears in the HTML page on lazy load (i.e. when the page is scrolled down)?
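
    Nutch itself only fetches the raw HTML response, so one workaround is to render the page in a real browser first and feed the discovered URLs back into the crawl. Below is a minimal sketch of that rendering step, assuming Python with Selenium and a local Firefox/geckodriver install; the URL is a placeholder, not taken from the question.

        # Minimal sketch: render a lazy-loading page, scroll to the bottom until no
        # more content appears, then collect the links JavaScript injected into the DOM.
        import time
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Firefox()
        driver.get("http://example.com/lazy-page")  # placeholder URL

        last_height = driver.execute_script("return document.body.scrollHeight")
        while True:
            driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(2)  # give the AJAX requests time to finish
            new_height = driver.execute_script("return document.body.scrollHeight")
            if new_height == last_height:
                break  # nothing new was loaded
            last_height = new_height

        # Harvest every URL now present in the rendered DOM
        links = [a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")]
        print("\n".join(filter(None, links)))
        driver.quit()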

    Read the article

  • Possible automated Bing Ads fraud?

    - by Gary Joynes
    I run a website that generates life insurance leads. The site is very simple: a) there is a form for capturing the user's details, life insurance requirements, etc.; b) a quote comparison feature. We drive traffic to our site using conventional Google AdWords and Bing Ads campaigns. Since the 6th of January we have received 30-40 dodgy leads which have the following in common:

      - All created between 2 and 8 AM
      - Phone number always in the format "123 1234 1234"
      - Name, date of birth, policy details, and address all seem valid and are unique across the leads
      - Email addresses from "disposable" email accounts, including dodgit.com, mailinator.com, trashymail.com, and pookmail.com
      - Some leads come from the customer form, some via the quote comparison feature
      - All come from different IP addresses
      - We get the keyword information passed through from the URLs
      - All look to be coming from Bing Ads
      - All come from Internet Explorer v7 and v8

    The consistency of the data and the random IP addresses suggest an automated approach, but I'm not sure of the intent. We can handle identifying these leads within our database, but is there any way of stopping this at the ad level, i.e. before the click-through?
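
    For what it's worth, the pattern described above is specific enough to flag automatically before a lead is accepted, even if the clicks themselves can't be blocked. A rough Python sketch, with hypothetical field names that would need to match the real lead record:

        # Hedged sketch: flag leads matching the suspicious pattern described above.
        # The lead dict keys (phone, email, created_at) are hypothetical.
        import re
        from datetime import datetime

        DISPOSABLE_DOMAINS = {"dodgit.com", "mailinator.com", "trashymail.com", "pookmail.com"}
        PHONE_PATTERN = re.compile(r"^\d{3}\s\d{4}\s\d{4}$")  # "123 1234 1234"

        def looks_fraudulent(lead):
            created = lead["created_at"]                      # datetime of form submission
            domain = lead["email"].rsplit("@", 1)[-1].lower()
            return (
                PHONE_PATTERN.match(lead["phone"]) is not None
                and domain in DISPOSABLE_DOMAINS
                and 2 <= created.hour < 8                     # all bogus leads arrived 2-8 AM
            )

        lead = {"phone": "123 1234 1234", "email": "foo@mailinator.com",
                "created_at": datetime(2013, 1, 7, 3, 15)}
        print(looks_fraudulent(lead))  # True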

    Read the article

  • Problem downloading .exe file from Amazon S3 with a signed URL in IE

    - by Joe Corkery
    I have a large collection of Windows .exe files which are being stored and distributed using Amazon S3. We use signed URLs to control access to the files, and this works great except in one case: downloading a .exe file with Internet Explorer (version 8). It works just fine in Firefox, and it also works fine if you don't use a signed URL (but that is not an option). What happens is that the IE downloader changes the name from 'myfile.exe' to 'myfile[1]' and Windows no longer recognizes it as an executable. Any advice would be greatly appreciated. Thanks.
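
    One workaround sometimes suggested is to override the response headers in the signed URL itself, so the browser is told the filename and content type explicitly. A sketch using boto3 (the bucket and key names are placeholders, and the original setup may well use a different SDK):

        # Hedged sketch: generate a presigned GET URL that forces a Content-Disposition
        # and Content-Type on the response, so IE keeps the .exe filename.
        import boto3

        s3 = boto3.client("s3")
        url = s3.generate_presigned_url(
            "get_object",
            Params={
                "Bucket": "my-installer-bucket",          # placeholder
                "Key": "installers/myfile.exe",           # placeholder
                "ResponseContentDisposition": 'attachment; filename="myfile.exe"',
                "ResponseContentType": "application/octet-stream",
            },
            ExpiresIn=3600,  # link valid for one hour
        )
        print(url)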

    Read the article

  • How can I backup my PPAs?

    - by Scaine
    Related to this question. But my concern is that over the past year, most of my more interesting (or most used) applications have come from PPAs, and just backing up my sources list won't add the associated Launchpad keys the way that add-apt-repository does. So I'm looking for a way to list all the PPA URLs (like ppa:chromium-daily/stable) so that I can easily script a series of add-apt-repository commands to add them to a new installation gracefully. Short of dumping my bash history, of course, which might be feasible depending on how far back that file goes.
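
    One way to reconstruct that list is to parse the existing APT source files and print the corresponding add-apt-repository commands. A rough sketch, assuming the stock /etc/apt layout:

        # Scan the APT source lists for Launchpad PPA entries and print the matching
        # add-apt-repository commands; the output can be saved as a restore script.
        import glob
        import re

        ppa = re.compile(r"^deb\s+https?://ppa\.launchpad(?:content)?\.net/([^/]+)/([^/\s]+)")

        seen = set()
        for path in glob.glob("/etc/apt/sources.list.d/*.list") + ["/etc/apt/sources.list"]:
            with open(path) as f:
                for line in f:
                    m = ppa.match(line.strip())
                    if m and m.groups() not in seen:
                        seen.add(m.groups())
                        print("sudo add-apt-repository ppa:%s/%s" % m.groups())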

    Read the article

  • Is there a way of using HTTPS with Amazon's CloudFront CDN and CNAMEs?

    - by Metalshark
    We use Amazon's CloudFront CDN with custom CNAMEs hanging under the main domain (static1.example.com). Although we can break this uniform appearance and use the original whatever123wigglyw00.cloudfront.net URLs to utilise HTTPS, is there another way? Do Amazon or any other similar provider offer HTTPS CDN hosting? Is TLS with its selective encryption (SNI: Server Name Indication) available for use somewhere? Footnote: I'm assuming the answer is no, but asking in the hope that someone knows otherwise. EDIT: Now using Google App Engine (https://developers.google.com/appengine/docs/ssl) for CDN hosting with SSL support.

    Read the article

  • How long does it take for Google to index a site after submitting a sitemap in Webmaster Tools? [closed]

    - by Venkatesh Hodavdekar
    Possible Duplicate: Why isn't my website in Google search results? I submitted my website to Google search today via a sitemap in Google Webmaster Tools. The sitemap status says OK and shows that 12 URLs have been recognized. I was wondering how long it takes for the links to get indexed, as the indexed URLs option says "No data available. Please check back soon." I am not sure whether it is showing this message due to some error, or whether everything is fine.

    Read the article

  • Every file on cPanel got deleted (then restored hours later), and I have no idea why

    - by mcranston18
    I apologize in advance if I don't provide proper detail; I am new to server stuff and am looking for general advice about this issue: I was helping out a client with web design last month. They have about a dozen static sites on one server. The sites are all built on Joomla, except one which I built on WordPress. Everything was working fine last month when we did the redesign, but all of a sudden this morning every single file on their server got deleted: every web page, every file, and all e-mail addresses. I phoned the hosting company (alliancewww.com) to ask, "Why did every single file suddenly get deleted off the server?" They said, "Because someone must have deleted it." I said, "Well, no one did." (Which I'm pretty damn sure no one did.) They said, "You can pay us to look into the problem." I authorized $150 for them to look into it. About an hour later, everything was magically reinstated. The host said they had a backup of everything and just restored it all. What I'm wondering: does anyone have recommendations for logs I can go through to investigate how the files got deleted in the first place? I've checked their cPanel logs but found nothing. Is it likely that this is a mess-up on the host's part?
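
    As a starting point for the log hunt, FTP and access logs (if the host keeps them) record explicit delete operations. A small sketch that scans a log file for them; the log path is purely hypothetical, since every host stores these in a different place, so ask the host which file to pull:

        # Hedged sketch: scan an FTP/access log for delete commands to see when and
        # from where files were removed. LOG_FILE is a placeholder path.
        import re

        LOG_FILE = "/var/log/ftp_commands.log"  # hypothetical; ask the host for the real path
        delete_re = re.compile(r"\b(DELE|DELETE)\b", re.IGNORECASE)

        with open(LOG_FILE, errors="replace") as log:
            for line in log:
                if delete_re.search(line):
                    print(line.rstrip())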

    Read the article

  • Installing Oracle 11g SOA Suite?

    - by asantaga
    Are you working for an SI like Accenture or Capgemini? Are you a sales consultant who needs to install software quickly? Well, if you're reading this, you probably are. If you're like me and many other techies, reading manuals doesn't come naturally: we download the software, try to install it, and then ultimately fail, or take a lot longer than we should. However, never fear, help is here! For Oracle 11g SOA Suite (PS3), a good friend of mine, a SOA 11g PM in the States, has written a quick-start document, and it's on OTN. Although the document is PS3-focused, apart from the download URLs it is also fully applicable to PS4. The document can be found at this link

    Read the article

  • My blog, which gets 300+ daily impressions, has stopped appearing on the 1st page of Google

    - by Sangram
    I have run a blog about placement papers since December 2010. My monthly impressions are around 4000. For the last 2 days, my blog has disappeared from Google's search engine result pages and impressions have dropped drastically. Please check the stat reports. My blog is still in the index, because when I search for site:mydomain.com on Google I can see all my pages indexed there, but the pages which used to appear on the first or second page of Google do not appear any more. Example: if I search for the query "GE round 2 code writing test" on Bing or Yahoo, the first link on the results page is my blog, but if you do the same on Google, my URLs do not appear even in the first three result pages. I used to get lots of visitors from these search queries.

    Read the article

  • [SOLVED] BleachBit: How to completely clear URL history in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked, and it isn't the full URL that survives, just the hostname. Steps to reproduce:

      1. Visit http://www.bluesnews.com/some_random_URL_string
      2. Exit Firefox
      3. Run BleachBit with ALL Firefox options selected
      4. Restart Firefox
      5. Check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked)
      6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"

    Alternate step #3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above. My inquiry in the BleachBit forums hasn't received a response. I found Dan's proposed solution; however, changing autocomplete in about:config only turns the function off, it doesn't actually stop storing URLs. SOLVED: see my comment on the answer from Tim.
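
    If both BleachBit and the built-in Clear History leave the hostname behind, one blunt workaround is to delete the offending rows from places.sqlite directly, with Firefox closed. A sketch using the standard moz_places / moz_historyvisits tables; the profile path is a placeholder and backing up the file first is strongly advised:

        # Hedged sketch: remove every history trace of a given host from Firefox's
        # places.sqlite. Run only while Firefox is closed; keep a backup of the file.
        import sqlite3

        db = sqlite3.connect("/home/user/.mozilla/firefox/xxxxxxxx.default/places.sqlite")
        cur = db.cursor()
        host = "%bluesnews.com%"
        cur.execute("DELETE FROM moz_historyvisits WHERE place_id IN "
                    "(SELECT id FROM moz_places WHERE url LIKE ?)", (host,))
        cur.execute("DELETE FROM moz_places WHERE url LIKE ?", (host,))
        db.commit()
        db.close()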

    Read the article

  • Best way to redirect users back to the pretty URL who land on the _escaped_fragment_ one?

    - by Ryan
    I am working on an AJAX site and have successfully implemented Google's AJAX crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has two URLs: pretty: example.com#!blog; ugly: example.com?_escaped_fragment_=blog. However, I have noticed in my analytics that some users are arriving on the site via the "ugly" URL, and I am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the head, but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if, for some reason or another, they end up arriving at the site that way?

    Read the article

  • Google Webmaster Tools shows 6 pages submitted, 0 indexed, yet I can see them all when I search in Google?

    - by sam
    I have a small 'brochure'-type site with 6 pages, and I can see all of the pages showing up in Google when I search for my site. But in Webmaster Tools, under the Sitemaps section, it says 6 submitted (the blue bar of the graph), while the red bar shows 0 indexed pages, even though they seem to be indexed. Any idea why this is? I don't really think it's that important, as the pages are still indexed, but it just seems odd. UPDATE 9/3/12: Having just looked in Google Webmaster Tools, it shows 11 pages indexed under the Health > Index Status tab, but under the Optimization > Sitemaps tab it shows 6 URLs submitted and only 1 indexed. Please see the images below: Index status / Sitemap status.

    Read the article

  • Website falsely blocked because of spam. Does anyone know how we should proceed?

    - by Thomas Crepain
    I'm responsible for ICT at FOS Open Scouting, a Belgian scouting organisation. Our website was hacked a few years back and blocked by Facebook as a result. After we regained control over the site, Facebook continued to block our domain, and this is causing us a number of problems. We have tried many times in the past year to contact Facebook using their 'I am blocked from adding content' form (https://www.facebook.com/help/contact.php?show_form=block_appeal), to no avail. The blocked URLs are http://www.fos.be and http://www.fosopenscouting.be. Does anyone know how we should or could proceed?

    Read the article

  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and, according to my client, was showing up consistently at place #3 for its primary keyword; it has dropped to ranks #6-8 following the migration. At launch, we made sure the URLs were identical (save for the removal of ".htm", which we used 301 redirects to compensate for), generated a new XML sitemap, and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors or suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?
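
    Independent of the ranking question, it is worth verifying that every legacy .htm URL answers with exactly one 301 straight to its new counterpart (no chains, no 302s). A quick check with Python and requests; the URL list is hypothetical:

        # Sketch: confirm each old .htm URL returns a single 301 and where it points.
        import requests

        old_urls = [
            "http://example.com/about.htm",     # placeholder URLs
            "http://example.com/services.htm",
        ]

        for url in old_urls:
            r = requests.get(url, allow_redirects=False)
            target = r.headers.get("Location", "-")
            print("%s -> %s %s" % (url, r.status_code, target))
            # Expect: 301 and a Location pointing at the extensionless WordPress URL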

    Read the article

  • How frequently does Googlebot fetch sitemaps? Does it depend on PageRank?

    - by JITHIN JOSE
    How frequently does Google fetch sitemaps? I am working with a high-traffic website that normally gets 30 new posts per minute. Currently it provides a sitemap that links to the newest 100 posts (about 3 minutes' worth). Is this method enough? Do the bots fetch sitemaps every 3 minutes? Do I need to change the sitemaps to list all 5M posts (using sitemap index files)? How would this change affect traffic and PageRank? Does Googlebot remove URLs that were previously listed in the sitemap but no longer are?
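
    If the site does move to listing all 5M posts, the sitemaps.org format caps a single sitemap at 50,000 URLs, so the usual approach is many child sitemaps referenced from one sitemap index. A sketch of generating such an index; the file and URL names are placeholders:

        # Sketch: write a sitemap index that points at child sitemaps of 50,000 URLs each.
        from datetime import date

        TOTAL_POSTS = 5000000
        PER_FILE = 50000               # sitemaps.org limit per child sitemap
        today = date.today().isoformat()

        with open("sitemap_index.xml", "w") as index:
            index.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            index.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for n in range(TOTAL_POSTS // PER_FILE):
                index.write("  <sitemap>\n")
                index.write("    <loc>http://example.com/sitemaps/posts-%d.xml.gz</loc>\n" % n)
                index.write("    <lastmod>%s</lastmod>\n" % today)
                index.write("  </sitemap>\n")
            index.write("</sitemapindex>\n")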

    Read the article

  • How to track in Google Analytics which registrations come from Google AdWords ads?

    - by automatix
    I created a campaign in Google AdWords with some ads in it and gave them URLs like:

      mydomain.tld/registration/?utm_campaign=mycampaing&ad=x
      mydomain.tld/registration/?utm_campaign=mycampaing&ad=y
      mydomain.tld/registration/?utm_campaign=mycampaing&ad=z

    All ads lead to the registration page. A registration is a visit to the page mydomain.tld/registration-complited/?user={ID}, so I can track the registrations in Google Analytics: I just go to Behavior -> Site Content -> All Pages and filter the pages to registration-complited. But how can I see how many users have registered, and which ones, after they came from an ad in a campaign (e.g. via utm_campaign)? And how can I also track this for a single ad within the campaign, e.g. x?
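
    One way people handle this attribution is to carry the campaign parameters through the funnel: remember them when the visitor lands on the registration page and replay them on the completion URL, so the completion pageview in Analytics can be segmented per campaign and per ad. A rough sketch using Flask; the routes, parameter names, and completion URL are hypothetical and the real site may work quite differently:

        # Hedged sketch: stash the tracking parameters in the session on landing and
        # append them to the "registration completed" redirect.
        from flask import Flask, redirect, request, session

        app = Flask(__name__)
        app.secret_key = "change-me"

        @app.route("/registration/")
        def registration():
            # Landing page reached from the ad; remember the tracking parameters.
            session["utm_campaign"] = request.args.get("utm_campaign", "")
            session["ad"] = request.args.get("ad", "")
            return "registration form here"

        @app.route("/register", methods=["POST"])
        def register():
            user_id = 42  # hypothetical: create the user here
            return redirect("/registration-completed/?user=%d&utm_campaign=%s&ad=%s"
                            % (user_id, session.get("utm_campaign", ""), session.get("ad", "")))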

    Read the article

  • Getting the keyword as a parameter from AdWords using ValueTrack

    - by Stephen Ostermiller
    I set up an AdWords campaign for a website following the instructions for Google AdWords ValueTrack. One of the things it is supposed to be able to do is pass the keyword as a URL parameter using the code {keyword} in the URL. I set it up for integration with Google Analytics such that the landing URLs would look like: http://example.com/landing.html?utm_source=adwords&utm_medium=cpc&utm_term=%7Bkeyword%7D&utm_content=my_content&utm_campaign=my_page where {keyword} is in the utm_term parameter. However, this keyword substitution isn't happening. Why?

    Read the article

  • Securing unpatched websites

    - by neuron
    I have a client with a lot of (read: several thousand) websites on several old CMS solutions that are no longer maintained. Moving all of them to a maintained solution isn't really an option at this point, so I'm thinking about ways to secure the solutions without patching them. The solutions are mostly Joomla 1.0/1.5 and WordPress. What I'm thinking is something like this:

      - mod_suexec to lock everyone into their own home directory
      - AppArmor to deny any and all file writes by default (deny by default, with exceptions for things like "images" directories)
      - .htaccess to prevent anything in writable directories from being executed (i.e. disable the PHP engine for the images/ directory)
      - MySQL triggers to check the "users" tables to prevent adding new admins/superadmins

    Does this make sense? Is it viable? Am I missing something obvious?
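
    As a complement to (or in place of) the MySQL triggers, a periodic audit run from outside the databases can at least flag new privileged accounts after the fact. This swaps the in-database trigger for a cron-style check, so treat it as a sketch only: the connection details are placeholders and the query must be adapted per CMS, since Joomla 1.0/1.5 and WordPress mark administrators in different tables and columns.

        # Hedged sketch: compare the current set of privileged accounts against a saved
        # baseline and warn about anything new. Credentials and SQL are placeholders.
        import json
        import os
        import pymysql

        BASELINE = "admin_baseline.json"
        QUERY = "SELECT username FROM jos_users WHERE usertype = 'Super Administrator'"  # adapt per CMS

        conn = pymysql.connect(host="localhost", user="audit", password="secret", database="site_db")
        with conn.cursor() as cur:
            cur.execute(QUERY)
            current = sorted(row[0] for row in cur.fetchall())
        conn.close()

        if os.path.exists(BASELINE):
            with open(BASELINE) as f:
                baseline = set(json.load(f))
            new_admins = [u for u in current if u not in baseline]
            if new_admins:
                print("WARNING: new admin accounts found: %s" % ", ".join(new_admins))
        else:
            print("No baseline yet; creating one.")

        with open(BASELINE, "w") as f:
            json.dump(current, f)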

    Read the article

  • Redirect before rewrite

    - by Kirk Strobeck
    I had an issue where I needed to redirect old URLs without disabling the mod_rewrite rules used for the page structure:

        redirect 301 /home.html http://www.url.com/

    It needs to live in the Symphony 2.0 .htaccess file:

        ### Symphony 2.0.x ###
        Options +FollowSymlinks -Indexes

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /

            ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
            RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
            RewriteRule .* - [S=14]

            ### IMAGE RULES
            RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [L,NC]

            ### CHECK FOR TRAILING SLASH - Will ignore files
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_URI} !/$
            RewriteCond %{REQUEST_URI} !(.*)/$
            RewriteRule ^(.*)$ $1/ [L,R=301]

            ### ADMIN REWRITE
            RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]

            ### FRONTEND REWRITE - Will ignore files and folders
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
        </IfModule>
        ######

    Read the article

  • A lot of 302 redirects

    - by user3651934
    I have a website for which one month's stats show:

        Unique visitors: 6274
        Total visitors: 7260
        Pages visited: 9520
        Hits: 88891

    What concerns me is the HTTP status code count: 302 Moved Temporarily (redirect): 36302. How come 40% of hits are being redirected? If that is not normal, what could be the possible reasons?

    Adding more information: here is the code I'm using in my .htaccess file for clean URLs. Is this causing as many as 36302 redirect hits?

        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^([^\.]+)$ $1.php [L]

        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.+[^/])$ $1/ [R]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]
        RewriteRule ^(.*)/$ index.php?page=$1 [L,QSA]

    Read the article

  • Moving one site in Webmaster Tools to more than one site [closed]

    - by Towhid
    Possible Duplicate: How should I structure my URLs for both SEO and localization? I have a question-and-answer site about immigration. I have now divided it into two sites: mysite.co.uk, about immigration to the UK, and mysite.com, with subdomains for every country, like australia.mysite.com, sweden.mysite.com, and so on. I have moved all the content from my first site into the .co.uk and .com sites and their subdomains to fill them. I know that Google will detect my two new sites as duplicates of the first one, which is very bad for SEO, and I don't think Google Webmaster Tools has a tool for this. Please guide me on how to fix this problem.

    Read the article

  • WAMP - Changing PHP version stops server running

    - by James Connor
    I downloaded WAMP, which works (green icon). However, I need to test a site locally in Joomla 1.5, which throws errors when using PHP 5.3, so I believe I need a lower PHP version, i.e. 5.2.x. To do this I went through PHP -> Version -> Get More... and installed an older PHP version. However, when I switch to this PHP version the icon stays orange and localhost doesn't work. I haven't used WAMP before, so my knowledge of it is limited. If anyone could point me in the right direction it would be greatly appreciated. Regards.

    Read the article

  • What is the best way to have the same website in multiple domains?

    - by Daniel Magliola
    I would like to have the same website, selling a specific product, on multiple domains in order to take advantage of keywords matching the domain name for several different searches. However, I understand that having the same content on multiple sites will unleash the wrath of Google. If I redirect all domains but one to that remaining one, do I still get any bonus for the "magic exact domain match jackpot"? The same question applies to canonical URLs. What's the best way to approach this? Thanks!

    Read the article

  • Extracting a meta tag attribute using wget [migrated]

    - by Amit
    I have a file with one URL per line. I need to extract the "keywords" meta tag from each page, i.e. if there is a meta tag named "keywords", I want to get its "content" value. Example: if the web page contains <meta name="keywords" content="wikipedia,encyclopedia">, then for that URL I want "wikipedia,encyclopedia" to be extracted. One approach is to download the web page using wget and then parse it with some standard HTML parser. I was wondering whether there is any better way to do this without downloading the entire web page.
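
    A sketch of the parsing step in Python, which also avoids pulling the whole document by streaming only the first chunk of each response (meta tags live in the head, so this usually suffices). It uses requests plus the standard-library HTMLParser; urls.txt (one URL per line) is a placeholder:

        # For each URL, fetch only the first 32 KB and print the content attribute of
        # <meta name="keywords">, or "-" if the page has none.
        from html.parser import HTMLParser
        import requests

        class KeywordsParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.keywords = None
            def handle_starttag(self, tag, attrs):
                a = dict(attrs)
                if tag == "meta" and (a.get("name") or "").lower() == "keywords":
                    self.keywords = a.get("content", "")

        with open("urls.txt") as f:
            for url in (line.strip() for line in f if line.strip()):
                r = requests.get(url, stream=True, timeout=10)
                chunk = next(r.iter_content(chunk_size=32768), b"")
                r.close()
                p = KeywordsParser()
                p.feed(chunk.decode(r.encoding or "utf-8", "replace"))
                print("%s\t%s" % (url, p.keywords or "-"))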

    Read the article

  • Rewrite rule to show as directory using .htaccess

    - by chanchal1987
    I want to implement a rewrite rule in my .htaccess file to show a specific URL as a directory on my server. See the code I have written below:

        RewriteRule ^(.*)/$ ?page=$1 [NC]

    This rewrites URLs like www.mysite.com/abc/ to www.mysite.com/index.php?page=abc, but if I request www.mysite.com/abc it throws a 404 error. How can I write a rewrite rule that matches both www.mysite.com/abc and www.mysite.com/abc/? Edit: my current .htaccess file (after the 3rd revision of Litso's answer) is as below:

        ##
        ErrorDocument 401 /index.php?error=401
        ErrorDocument 400 /index.php?error=400
        ErrorDocument 403 /index.php?error=403
        ErrorDocument 500 /index.php?error=500
        ErrorDocument 404 /index.php?error=404
        DirectoryIndex index.htm index.html index.php
        RewriteEngine on
        RewriteBase /
        Options +FollowSymlinks
        RewriteRule ^(.+)\.html?$ $1.php
        RewriteCond !-d
        RewriteRule ^(.*)/$ ?page=$1 [NC,L]
        RewriteCond %{REQUEST_URI} !index.php
        RewriteRule ^(.*)$ ?page=$1 [NC,L]
        ##

    Read the article
