Search Results

Search found 11367 results on 455 pages for 'pages 09'.


  • webmaster tools 500 crawl error for asp faceted navigation that does not exist

    - by user19007
    I am getting 2,500 type 500 URL errors in Google Webmaster Tools. These pages are faceted navigation results that cannot be reached by a site visitor; the pages do not exist. We are using faceted navigation with the Volusion platform (ASP.NET, I think). I have specified URL parameters in Webmaster Tools so that Google will not try to index anything faceted, but this does not stop the errors from being generated. I am concerned about how this might affect SEO (bleeding PageRank). I can provide additional information if needed. I am not sure how to solve this; I have started down the path of creating 301s, but I am having some difficulty there as well.
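    A minimal robots.txt sketch of the blocking idea, assuming the platform lets you edit robots.txt; the facet parameter name here is hypothetical, so substitute whatever your faceted URLs actually use:

        User-agent: *
        Disallow: /*?facet=

    Google honours the * wildcard in robots.txt, so a pattern like this can keep crawlers away from parameterised faceted URLs, though it will not clear the 500 errors for URLs Google has already discovered.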

    Read the article

  • Unity Dash and top toolbar won't open after updating to 12.10

    - by pgrytdal
    Today I updated to Ubuntu 12.10. After restarting, as the updater suggested, the toolbar at the top of the screen and the Dash won't load. I seem to be missing other features as well, like Alt+Tab to switch windows, etc. I am able to access the Terminal by typing Ctrl+Alt+T, which is how I was able to access Firefox. How do I fix this problem? Edit (2:10 PM on 10/19/12): As Chris Carter suggested, I'm including the results of the terminal command lspci (sorry, I don't know how to format between backticks):

        00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
        00:01.0 PCI bridge: Acer Incorporated [ALI] AMD RS780/RS880 PCI to PCI bridge (int gfx)
        00:04.0 PCI bridge: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0)
        00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
        00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3)
        00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
        00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:12.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller
        00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
        00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
        00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 3a)
        00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA)
        00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller
        00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge
        00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor HyperTransport Configuration (rev 40)
        00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Link Control
        01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics]
        01:05.1 Audio device: Advanced Micro Devices [AMD] nee ATI RS780 HDMI Audio [Radeon HD 3000-3300 Series]
        03:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)
        09:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • Wireless network unstable and often WPA2 protected networks just don't work

    - by Pedro
    I have an issue with my wireless network: the connection works for only a few minutes, after which my browser is no longer able to load pages, even though the wireless is still active/connected. Furthermore, most of the time WPA2-Personal protected networks don't work at all (yesterday was the first time one worked, and only for a few minutes). By "don't work" I mean that the connection seems to be established successfully, but the browser can't load pages. I am running Ubuntu 10.10 32-bit, and my wireless card is a Ralink RT3090. No settings have been changed since Ubuntu was installed; networking began working on its own after the installation, but, as described above, not very well.

    Read the article

  • facebook internal search by google [migrated]

    - by Alexis
    I am currently working on a challenge: using the Google API, I search Facebook fan pages, specifically the "about" section, for the email address and the date the business was founded. So far I have come to this: site:facebook.com/pages + "business type" + "country" + "@email.com". I need to add something else so that the query also gives me back the date the business was founded. In the About section of a Facebook fan page there is, for example, "(Founded 06/02/2010)"; the bracketed info is what I need to capture as well. Any idea?
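    One hedged sketch of extending the query: "Founded" below is simply the literal label the About section uses, and the other quoted terms are the placeholders from the original query, not real values:

        site:facebook.com/pages "business type" "country" "@email.com" "Founded"

    Adding the quoted literal restricts results to pages whose indexed text contains that word, which should surface pages where the founding date is filled in; the date itself would still need to be extracted from the result snippet or the page.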

    Read the article

  • Phishing: a new technique spreads with HTML5, bypassing the blacklisting of malicious URLs

    Phishing: a new technique spreads with HTML5; it bypasses the blacklisting of malicious URLs. Spammers and other cyber-crooks are also turning to HTML5 to get around the increasingly widespread and effective anti-spam and anti-phishing measures in browsers and email clients. Instead of embedding classic HTML links to often-blacklisted pages in their emails, "modern" spammers now reportedly favour "HTML attachments". In any case, the security firm M86 warns of a resurgence of these threats. Links in the emails now point to attached HTML pages, which contain...

    Read the article

  • algorithm for project euler problem no 18

    - by Valentino Ru
    Problem 18 from Project Euler's site is as follows: by starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

           3
          7 4
         2 4 6
        8 5 9 3

    That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom of the triangle below:

        75
        95 64
        17 47 82
        18 35 87 10
        20 04 82 47 65
        19 01 23 75 03 34
        88 02 77 73 07 63 67
        99 65 04 28 06 16 70 92
        41 41 26 56 83 40 80 70 33
        41 48 72 33 47 32 37 16 94 29
        53 71 44 65 25 43 91 52 97 51 14
        70 11 33 28 77 73 17 78 39 68 17 57
        91 71 52 38 17 14 91 43 58 50 27 29 48
        63 66 04 68 89 53 67 30 73 16 69 87 40 31
        04 62 98 27 23 09 70 98 73 93 38 53 60 04 23

    NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67 is the same challenge with a triangle containing one hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)

    The formulation of this problem does not make clear whether the "traversor" is greedy (always choosing the child with the higher value), or whether the maximum over every single walkthrough is asked for. The NOTE says that it is possible to solve this problem by trying every route, which implies to me that it is also possible without doing so. This leads to my actual question: assuming the greedy path is not the maximum, is there an algorithm that finds the maximum walkthrough value without trying every route and that doesn't act like the greedy algorithm? I implemented an algorithm in Java, putting the values first into a node structure and then applying the greedy algorithm. The result, however, is considered wrong by Project Euler.

        sum = 0;

        void findWay(Node node){
            sum += node.value;
            if(node.nodeLeft != null && node.nodeRight != null){
                if(node.nodeLeft.value > node.nodeRight.value){
                    findWay(node.nodeLeft);
                } else {
                    findWay(node.nodeRight);
                }
            }
        }
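    For the record, the standard non-greedy approach is bottom-up dynamic programming: starting at the second-to-last row, replace each value with itself plus the larger of its two children; after processing, the apex holds the maximum total. A minimal Java sketch (class and method names are made up for illustration):

        public class MaxPathSum {
            // Collapse the triangle from the bottom up: each cell becomes the best
            // total achievable from that cell downwards. O(n^2) time, no brute force.
            static int maxPathTotal(int[][] triangle) {
                for (int row = triangle.length - 2; row >= 0; row--) {
                    for (int i = 0; i <= row; i++) {
                        triangle[row][i] += Math.max(triangle[row + 1][i],
                                                     triangle[row + 1][i + 1]);
                    }
                }
                return triangle[0][0];
            }

            public static void main(String[] args) {
                int[][] example = { {3}, {7, 4}, {2, 4, 6}, {8, 5, 9, 3} };
                System.out.println(maxPathTotal(example)); // prints 23
            }
        }

    Unlike the greedy walk, this considers every cell exactly once, so it finds the true maximum, which the greedy path misses whenever a locally smaller child leads to a richer subtree.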

    Read the article

  • Using 301 Redirects on new site when access to old site denied?

    - by Cape Cod Gunny
    I have a situation where I'm standing up a new website with a different web host. I've been denied access to the old site by the hosting company, and the old site will most likely be turned off very soon. If my new site contains pages that are named slightly differently, how do I go about setting up 301 redirects on my new site? For example:

        www.oldsite.com/aboutus/   ->  www.newsite.com/aboutus.html
        www.oldsite.com/productx/  ->  www.newsite.com/productx.html

    Edit (clarification): the old domain name is different from the new domain name. On my new site, do I just duplicate every page that existed on the old site and place redirect code inside those pages? What does the redirect code look like?
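    Assuming the new host runs Apache with mod_alias available (an assumption, since the platform isn't stated), the usual approach is not to duplicate pages but to add one rule per old-style path to the new site's .htaccess, along the lines of:

        Redirect 301 /aboutus/  http://www.newsite.com/aboutus.html
        Redirect 301 /productx/ http://www.newsite.com/productx.html

    Note this only helps for requests that actually reach the new server; without access to the old domain's DNS or hosting, requests to www.oldsite.com cannot be redirected at all.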

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Components:
    - Entity Objects (EOs)
    - View Objects (VOs)
    - Application Modules (AMs)
    - Framework Extension Classes

    Primary reusable ADF Controller:
    - Bounded Task Flows (BTFs)
    - Task Flow Templates

    Primary reusable ADF Faces:
    - Page Templates
    - Skins
    - Declarative Components
    - Utility Classes

    Certain components will often be used more than once. Whether the reuse happens within the same application or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization.

    In the world of Java object-oriented programming, reusing classes and objects is standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic. Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose from many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence.

    ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you can look in the repository and add it to your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create pages with a common look and feel, then put your page flow together by stringing together several task flow components pulled from the library.

    An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, the Java EE library, or the Oracle WebLogic shared library.

    Reusable components and their descriptions:
    - Data Control: Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls.
    - Application Module: When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be part of the ADF Library JAR and available for reuse.
    - Business Components: Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module.
    - Task Flows & Task Flow Templates: Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages; the drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, you will be prompted to select the one to which the task flow call activity and control flow are added. If an ADF task flow template was created in the same project as the task flow, the template will be included in the ADF Library JAR and will be reusable.
    - Page Templates: You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse.
    - Declarative Components: You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project.

    You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow; when this ADF Library JAR file is consumed, the application will have both available for use. You can package multiple components into one JAR file, or a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into an ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.

    Read the article

  • root issues in softwarecenter, synaptic and update manager

    - by user188977
    I have a Samsung ATIV notebook running Ubuntu 12.04 Precise with the Cinnamon desktop. After logging in today, my Update Manager, Synaptic, and Ubuntu Software Center stopped working. Synaptic I can only launch from the terminal, the others from the panel. When choosing to update, nothing happens; the same thing happens when trying to install programs from Synaptic or the Software Center. When launching Software Center from the terminal I get:

        marcus@ddddddddd:~$ software-center
        2013-11-10 22:30:46,206 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2013-11-10 22:30:46,217 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
        (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:20: Not using units is deprecated. Assuming 'px'.
        (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:22: Not using units is deprecated. Assuming 'px'.
        (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:20: Not using units is deprecated. Assuming 'px'.
        (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:22: Not using units is deprecated. Assuming 'px'.
        (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:20: Not using units is deprecated. Assuming 'px'.
        (software-center:4772): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:22: Not using units is deprecated. Assuming 'px'.
        2013-11-10 22:30:46,977 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file
        2013-11-10 22:30:47,320 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None.
        2013-11-10 22:30:48,057 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()
        2013-11-10 22:31:00,646 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/share/software-center/softwarecenter/utils.py', 201, 'get_title_from_html')'
        2013-11-10 22:31:00,645 - root - WARNING - failed to parse: '<div style="background-color: #161513; width:1680px; height:200px;"> <div style="background: url('/site_media/exhibits/2013/09/AAMFP_Leaderboard_700x200_1.jpg') top left no-repeat; width:700px; height:200px;"></div> </div>' ('ascii' codec can't encode character u'\xa0' in position 70: ordinal not in range(128))
        2013-11-10 22:31:02,268 - softwarecenter.db.update - INFO - skipping region restricted app: 'Comentarios Web' (not whitelisted)
        2013-11-10 22:31:02,769 - softwarecenter.db.update - INFO - skipping region restricted app: 'reEarCandy' (not whitelisted)
        2013-11-10 22:31:04,821 - softwarecenter.db.update - INFO - skipping region restricted app: 'Flaggame' (not whitelisted)
        2013-11-10 22:31:05,622 - softwarecenter.db.update - INFO - skipping region restricted app: 'Bulleti d'esquerra de Calonge i Sant Antoni ' (not whitelisted)
        2013-11-10 22:31:08,352 - softwarecenter.ui.gtk3.app - INFO - software-center-agent finished with status 0
        2013-11-10 22:31:08,353 - softwarecenter.db.database - INFO - reopen() database
        2013-11-10 22:31:08,353 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True
        2013-11-10 22:33:32,319 - softwarecenter.backend - WARNING - _on_trans_error: org.freedesktop.PolicyKit.Error.Failed: ('system-bus-name', {'name': ':1.72'}): org.debian.apt.install-or-remove-packages
        2013-11-10 22:36:01,818 - softwarecenter.backend - WARNING - daemon dies, ignoring: <AptTransaction object at 0x48e4b40 (aptdaemon+client+AptTransaction at 0x645aaa0)> exit-failed
        2013-11-10 22:36:01,820 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open()

    Read the article

  • How to track in Google Analytics registrations that come from Google AdWords ads?

    - by automatix
    I created a campaign in Google AdWords with some ads in it and gave them URLs like:

        mydomain.tld/registration/?utm_campaign=mycampaing&ad=x
        mydomain.tld/registration/?utm_campaign=mycampaing&ad=y
        mydomain.tld/registration/?utm_campaign=mycampaing&ad=z

    All ads lead to the registration page. A registration is a visit to the page mydomain.tld/registration-complited/?user={ID}, so I can track registrations in Google Analytics: I just go to Behavior -> Site Content -> All Pages and filter the pages to registration-complited. But how can I see how many, and which, users registered after they came from an ad in a campaign, e.g. utm_campaign? And how can I also track this for a single ad in the campaign, e.g. x?
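    One thing worth checking: Google Analytics only attributes a campaign when utm_source (and normally utm_medium) is present; a bare utm_campaign with a custom ad=x parameter will not show up in the campaign reports. A hedged sketch of conventionally tagged destination URLs, using utm_content to distinguish the individual ads:

        mydomain.tld/registration/?utm_source=google&utm_medium=cpc&utm_campaign=mycampaing&utm_content=x
        mydomain.tld/registration/?utm_source=google&utm_medium=cpc&utm_campaign=mycampaing&utm_content=y

    With that tagging, the campaign and the per-ad utm_content value appear as regular dimensions in Google Analytics and can be segmented against visits to the registration-completed page.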

    Read the article

  • remove ssl from Google search results

    - by user73457
    I am the webadmin of a WordPress site that serves up HTTP pages statically. The problem is that some of the pages are shown as HTTPS in Google search results. For instance, if the search term "Example Press Kit" is entered, the search result site link comes up as: https://example.com/presskit/. We don't have an SSL certificate for the site, so surfers are being bounced. I have tried everything. Most recently I created a new website in Google Webmaster Tools for the HTTPS version of our home page. Then I added sitelinks that should have redirected site links intended for https://example.com/* to http://example.com/*. But it doesn't work! Google still shows a dead link to https://example.com/presskit. I didn't think dead links lasted very long in Google results, but there they are, two weeks later. Any ideas?

    Read the article

  • Is this Anti-Scraping technique viable with Crawl-Delay?

    - by skibulk
    I want to prevent web scrapers from abusively crawling the 1,000,000 pages on my website. I'd like to do this by returning a "503 Service Unavailable" HTTP status code to users that access an abnormal number of pages per minute. I don't want search engine spiders to ever receive that error, so my inclination is to set a robots.txt crawl-delay that ensures spiders access a number of pages per minute under my 503 threshold. Is this an appropriate solution? Do all major search engines support the directive? Could it negatively affect SEO? Are there any other solutions or recommendations?
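    The directive itself is one line in robots.txt; a minimal sketch with an assumed 10-second delay:

        User-agent: *
        Crawl-delay: 10

    One caveat: Bing and Yandex honour Crawl-delay, but Googlebot ignores it (Google's crawl rate is set in Webmaster Tools instead), so the robots.txt line alone will not throttle every major spider.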

    Read the article

  • How can I create a dynamic site that is still search-bot friendly?

    - by zuko
    I want to have a slide effect between pages: you click a link, the new page is loaded off to the side and then slides in (pushing the old page off the other side). I can imagine using jQuery (talking to PHP) for the loading and the effects, but how do I do something like this so that it gracefully degrades for users without JavaScript, including bots? Possibly more problematic: what if I wanted a sort of mural background across the site, perhaps with a parallax scrolling effect, where sliding to other pages reveals more of a possibly giant image? Again, I can imagine how to do this with lots of fancy jQuery and PHP, but it would rely heavily on them. How can I degrade gracefully in a situation like that? Any pointers, articles, or books would be greatly appreciated. I keep trying to search for answers, but I just get a lot of "theory"-based, unhelpful blogs.
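    The usual pattern for the first problem is progressive enhancement: keep ordinary links that work on their own, and let jQuery hijack them when scripting is available. A rough sketch (the #content container and .ajax-link class are made-up names):

        <a href="/about.php" class="ajax-link">About</a>

        <script>
        $(document).on('click', 'a.ajax-link', function (e) {
            e.preventDefault();   // only runs when JavaScript is available
            var url = this.href;
            // Fetch the same URL a no-JS visitor or bot would get,
            // and pull just the main content region out of it.
            $('#content').load(url + ' #content > *', function () {
                // slide/parallax effects would be triggered here
            });
            if (history.pushState) history.pushState({}, '', url);
        });
        </script>

    Bots and no-JS users simply follow the plain href and receive full pages, while scripted browsers get the slide effect; the same idea extends to the mural background, which can be plain CSS on each page with the parallax layered on top by script.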

    Read the article

  • Exporting from the GAC

    - by TATWORTH
    Recently I had need to export from the GAC - here are some useful resources:

    http://gacassemblyexporter.codeplex.com/SourceControl/list/changesets
    http://blogs.msdn.com/b/johnwpowell/archive/2009/01/14/how-to-copy-an-assembly-from-the-gac.aspx

    There is an alternative method at http://aspdotnetcodebook.blogspot.co.uk/2008/09/get-copy-of-dll-in-gac-or-add-reference.html that involves de-installing what is part of the operating system - I would recommend this as a method of last resort.

    Read the article

  • Directing crawlers to content in language per language sub-domain

    - by Noam
    I have a multilingual website with many pages (40M). The site has UGC, and each translation actually applies to the titles: each sub-domain points to the same content with different titles per language. As far as I understand, each sub-domain should be indexed by search engines, meaning they will actually need to crawl 40M x supported-languages pages. So I thought it might be best to direct each sub-domain's crawler to pages that are fully in that language (titles + UGC). Is there a way to do this? Should search engines understand this on their own?
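    Assuming the sub-domains are per-language (e.g. fr.example.com, a made-up name), the standard mechanism for telling search engines which URL serves which language is rel="alternate" hreflang annotations in each page's head, for example:

        <link rel="alternate" hreflang="en" href="http://en.example.com/some-page" />
        <link rel="alternate" hreflang="fr" href="http://fr.example.com/some-page" />
        <link rel="alternate" hreflang="de" href="http://de.example.com/some-page" />

    This doesn't reduce the crawl volume by itself, but it declares the language of each variant explicitly instead of hoping the crawler works it out on its own.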

    Read the article

  • Google Analytics Export API - nextPagePath data

    - by Btibert3
    I am probably missing something obvious, but I do not understand the following. When I query

        start.date = DATE_START,
        end.date = DATE_END,
        dimensions = c("ga:pagePath", "ga:previousPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest,
        table.id = "ga:mytable",
        max.results = RESULTS

    my data return as expected: all of the previous pages, including (entrance). However, when I modify the code to use nextPagePath

        start.date = DATE_START,
        end.date = DATE_END,
        dimensions = c("ga:pagePath", "ga:nextPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest,
        table.id = "ga:mytable",
        max.results = RESULTS

    only one line of data is returned; the pagePath and nextPagePath are identical. I replicated this result using the Query Explorer. What am I missing or doing wrong? I was expecting to see a large number of "next" pages, including (exit). Thanks in advance.

    Read the article

  • Chrome 10 makes it possible to run web applications in the background; Google publishes an example

    Chrome 10 makes it possible to run web applications in the background, even when the browser is closed; Google publishes an example. Update of 24/02/11 by Gordon Fowler. Google has just unveiled a new feature available in version 10 (in beta) of its Chrome browser. The feature, called "Background Pages", although not highlighted at the release of Chrome 10, is indeed there. It allows web pages to run in the background in a way that is completely transparent to the user. Certain applications (described as "background applications") can thus continue to run...

    Read the article

  • Category to Page and blocking category URLs via robots.txt - good for SEO?

    - by user2952353
    I am using a template whose pages allow me to add sidebars and more content below and above the content I pull from a category, which is very helpful. If I create pages to display my categories' content, won't the page URLs conflict with the category URLs? By conflict I mean causing a duplicate-content error. What I thought might help was to block the blog's category URLs in robots.txt, e.g. /category/books and /category/music. Would that be a good practice in order to avoid the duplicate-content penalty? Any tips appreciated.
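    For reference, blocking those paths takes two lines in robots.txt (a sketch assuming the category URLs all live under /category/):

        User-agent: *
        Disallow: /category/

    Note that robots.txt only stops crawling; it does not consolidate duplicate signals the way a canonical tag does, so treat it as a crawl-control measure rather than a guaranteed fix for duplication.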

    Read the article

  • Splitting a sitemap by content type

    - by James
    I am currently tasked with submitting our website's sitemap to the search engines every week. We have a module which offers sitemap generation, but we find it does not work very well: not all pages are included, and it does not split the sitemap by content type. I've used various (online and offline) tools to generate the sitemaps, which is not the problem. The problem is that after every generation (which takes most of each Monday) I have to manually go through the sitemap and categorise the links into products, pages, categories, and sub-categories. I've experimented successfully with XSL to split the sitemap, but it is still a labour-intensive process. Does anyone know of a good method to split the sitemap? Currently there are around 20,000 links (iirc) in total.
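    One workable approach is a small XSLT per content type that copies only matching <url> entries out of the master sitemap. A sketch, assuming product URLs share a recognisable path segment (the '/product/' pattern here is made up):

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:sm="http://www.sitemaps.org/schemas/sitemap/0.9">
          <xsl:output method="xml" indent="yes"/>
          <!-- Emit a new urlset containing only the product URLs. -->
          <xsl:template match="/sm:urlset">
            <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
              <xsl:copy-of select="sm:url[contains(sm:loc, '/product/')]"/>
            </urlset>
          </xsl:template>
        </xsl:stylesheet>

    Run once per category with a different pattern (or parameterised with xsl:param), this turns the manual Monday sort into a scripted step; it only works cleanly if each content type is identifiable from its URL.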

    Read the article

  • Google Webmaster Tools Index dropped to Zero [closed]

    - by Brian Anderson
    Earlier this year I rebuilt my website using ZenCart. Immediately I saw a drop in index status from 59 to 0. I then signed up for Google Webmaster Tools and noticed the index status took a dramatic drop and has never recovered. I have worked to add content, and I know I am not done, but I have not seen any recovery of the index since. What confuses me is that when I look at the sitemap status under Optimization, it shows 1239 pages submitted and 1127 pages indexed. Most of my pages have fallen off page one for relevant search terms, and some are as far back as page 7 or 8 where they used to be on the first page. I have made some changes in the past week to robots.txt and sitemap.xml, but have not seen any improvements. Can anyone tell me what might be going on here? My website is andersonpens.net. Thanks! Brian

    Read the article

  • Recommended flexible website solution?

    - by Omega
    My site has a MyBB forums installation, and that is pretty much all I need: forums. However, I need a homepage and a couple of other static pages for showing relevant information, links, etc. I don't need anything fancy; all I need is something very flexible regarding theme and style editing, and just a couple of simple modules, like public polls. That's all. I am very visually oriented, and I am looking for something that lets me edit pretty much every aspect of the site. These are mainly static pages, so I don't need something very complex. Some people tell me to use Dreamweaver, but quite honestly, that is not what I am looking for, even though it does offer a lot of flexibility. I want something like, you know, Drupal or some other simple web platform that is deeply editable in terms of graphics and style. What would you recommend? Thank you.

    Read the article

  • SEO Mapping, Tracking and Reporting

    Linking to the pages of a website makes search engines more aware of the site's presence, because its pages are found at the other end of industry terms used as anchor text in content at other locations. The number and quality of those links are factors that help promote rankings; when placed for SEO purposes they should be one-way links rather than reciprocal, since reciprocal links earn no ranking credit and it is prohibitively time-consuming to administer a thousand of them. This is not to be confused with link exchanges; when you can...

    Read the article

  • Is there a media player that works on HTTPS sites?

    - by Iain Hallam
    I'm currently using the Yahoo! Media Player for a site that needs to play MP3 files stored on our server. In total, there's quite a bit more than Soundcloud's free limits, but each file is only a few minutes long. YMP is pretty good, but it causes security warnings on HTTPS pages because it can only be served over HTTP. Is there an equivalent free player I can embed on the HTTPS pages? EDIT: Just to clarify, I'm initially looking for something that will scan the page and make media links playable.

    Read the article

  • Rehosting content from another server

    - by Lana_M
    We have a set of static pages that will augment a customer's existing site. The pages will not reside on the customer's servers, for logistical reasons and because we need to maintain control of the content. The plan is for the customer to set up a mod_rewrite rule that funnels certain types of URLs to a single server-side handler script, which grabs the appropriate file from a CDN and just outputs its content. This illustrates the approach:

        <?php
        echo file_get_contents(str_replace($customer_host, $cdn_host, $_SERVER['REQUEST_URI']));
        ?>

    Can anyone think of pitfalls, or offer up a different approach? Is there some way to circumvent a script altogether?

    Read the article

  • Determining cause of random latency/loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post regarding my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases; loading time wasn't usually more than a few seconds. Since the end of September, however, I have noticed a big increase in page loading times, with some pages taking 30 seconds or more to load. I have a remote monitoring service watching some of the sites, and the image below shows the response times over the past month. The response times at the beginning of the graph are the usual response times prior to this issue; you can see a significant increase from the beginning to the end of the graph.

    The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA.

    This also raises another interesting point: my remote monitor checks my site from two locations, California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location; I would have expected the London monitoring location to have higher response times since it is physically farther away.

    I should also point out that in some traceroute tests I've done, the first connection to the server seems to take the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. So, what could be causing this problem, and what steps can I take to resolve it, or at least narrow it down? Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long: it connects, sends the request, and then waits close to 30 seconds before it starts receiving data back.

    I am also aware that there are things I can do to speed up page loading times, like reducing the number of CSS/JS files used on a page, compressing images, etc. That is not the source of the problem, though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well. Any help or advice is much appreciated.

    Read the article
