Search Results

Search found 11671 results on 467 pages for 'man pages'.


  • Doubt regarding search engine/plugin (one present on the website itself)

    - by Ravi Gupta
    I am new to web development and I am studying various types of websites as case studies. Right now my focus is on how search engines work for an eCommerce website. I know the basic functioning of a search engine, i.e. crawl web pages, index them, and then display results using those indexes. But I get a little confused in the case of an eCommerce website. Wouldn't it be better if, instead of crawling the web pages containing products, a search engine directly queried the database and indexed the products stored there? Then when a user searches for a product, it would simply return the rows of the table that match the query. If this is not how it is done, can someone please explain how the usual method works on an eCommerce website?

    Read the article

  • Wildcard redirect issue giving the error "this webpage has a redirect loop"

    - by kath
    On my website I changed (or, better put, renamed) the directory "vehicles-cars" to "vehicles-cars-for-sale". When I tried to redirect the old directory name to the new one with a wildcard redirect in my web hosting cPanel account, every page I open from that directory gives the error "this webpage has a redirect loop". The website is PHP. The problem is that lots of my pages from the old directory are indexed in Google and they are generating duplicate content. I really need some advice on what to do about this. Here is the .htaccess code for the redirect (a corrected sketch follows below this entry). Thanks.
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^adsbuz\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.adsbuz\.com$
        RewriteRule ^vehicles\-cars\/?(.*)$ "http\:\/\/adsbuz\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    Read the article
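
    The loop happens because the new directory name still begins with the old one: a request for vehicles-cars-for-sale/... matches ^vehicles-cars/?(.*)$ again, so each redirect produces another URL that is redirected in turn. A minimal .htaccess sketch of one way to break the loop is below; it rewrites the old path only when it is followed by a slash or nothing, so the new /vehicles-cars-for-sale/ URLs are left alone (an illustrative rewrite, not the only possible fix):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?adsbuz\.com$ [NC]
        # Match "vehicles-cars" only when followed by "/" or end-of-URL,
        # so already-redirected /vehicles-cars-for-sale/ requests are not touched
        RewriteRule ^vehicles-cars(?:/(.*))?$ http://adsbuz.com/vehicles-cars-for-sale/$1 [R=301,L]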

  • How can I force Google to re-index my site?

    - by Matthias
    I changed the structure of my URLs. The pages are already indexed by Google and have the following structure: http://mypage.com/myfolder/page.aspx The new structure is: http://mypage.com/page.aspx Now all URLs that Google knows are wrong. How can I tell Google to re-index and that the structure has changed? Internally I redirect in ASP.NET when the URL contains myfolder, but I want Google to update the URLs. Thanks for the answers - I use IIS 6 and I do not know how to configure a redirect of all pages that contain the folder to the page one folder below. So I did the trick in the Begin_Request method with a Context.Response.Redirect. This is not a 301 redirect, only a redirect done with ASP.NET via code. Will this also do the trick so that Google notices that the URL /folder/page1.aspx is now redirected to /page1.aspx?

    Read the article

  • Alternatives for saving data with jQuery

    - by Phil Vallone
    I am not sure if this question is considered too broad, but I would like to reach out to my fellow programmers to see what alternatives are out there for saving data using jQuery. I have a content management system that generates a set of HTML pages called an IETM (Interactive Electronic Technical Manual). The pages use jQuery, and the IETM is meant to be lightweight, portable and able to run in most modern browsers. I am looking for a way to save data. I have considered cookies and SQLite. Are there any other alternatives for saving data using jQuery? (A localStorage sketch follows below this entry.)

    Read the article
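
    For a portable, client-side IETM, one common alternative to cookies is the HTML5 Web Storage API, which jQuery code can use directly since it is plain JavaScript. A minimal sketch follows; the key name, the saved object and the cookie fallback are illustrative assumptions, not part of the question:

        // Save and load viewer state client-side; falls back to a cookie when
        // localStorage is unavailable (older browsers, some restricted contexts).
        function saveState(key, value) {
            var json = JSON.stringify(value);
            if (window.localStorage) {
                localStorage.setItem(key, json);
            } else {
                // tiny cookie fallback with a 365-day expiry
                var expires = new Date(Date.now() + 365 * 24 * 60 * 60 * 1000).toUTCString();
                document.cookie = key + "=" + encodeURIComponent(json) + "; expires=" + expires + "; path=/";
            }
        }

        function loadState(key) {
            if (window.localStorage) {
                var json = localStorage.getItem(key);
                return json ? JSON.parse(json) : null;
            }
            var match = document.cookie.match(new RegExp("(?:^|; )" + key + "=([^;]*)"));
            return match ? JSON.parse(decodeURIComponent(match[1])) : null;
        }

        // Example usage with jQuery: remember the last page the reader opened.
        $(function () {
            saveState("ietm-last-page", { url: window.location.pathname });
        });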

  • Using RegExes in Multi-Channel Funnels in Google Analytics

    - by Rob H
    For some reason, I can't get my multi-channel funnel, which uses RegExes in the path steps, to function - it keeps coming back with no data. There are a few variables which may be holding things up, but I can't figure out the origin of the problem, nor a solution. Here's the situation:
    - The funnel is tracking conversions, defined as when a user completes 4 steps to sign up
    - Steps are not "required"
    - Default URL is set to https://example.com
    - There is a 302 redirect set up on our site that leads from http://example.com to https://example.com
    - Within the funnel, steps switch from non-secure pages (unless the browser is set to secure browsing) to secure pages once the user moves from the landing page to the second page of the sign-up process (an account placeholder has been created at that point)
    - The URL at that point contains the publisher number as a variable within (but not at the end of) the URL
    - My RegExes are all properly written, as tested on rubular.com

    Read the article

  • Good structure of IT / programmer CV

    - by tomas
    Hi, the company where I applied for a job requires a very detailed CV, mainly covering programming languages, frameworks, and technology. My CV has 3 pages, but for this company it is not detailed enough. ;) How do you structure your CV with respect to programming languages, frameworks, technologies, and third-party libraries? Any sample of a well-structured CV (as a PDF file)? Of course I have used Google, but I found a dozen of the same old things. I would like something original and fresh. Any inspiration? I do not know what to write for, for example, C#: OOP, delegates, events, generics, LINQ, other? WPF: controls, data templates, converters, styles, triggers? Prism, Caliburn, MEF? Also, which skills regarding OS, IDE, and utilities are suitable to list in a CV? I don't want a 10-page CV, nor one with a bad and bloated structure. Sorry for my English.

    Read the article

  • Google ranking - Modal views - google analytics events [duplicate]

    - by minchiya
    This question already has an answer here: How to diagnose a search engine ranking drop? (5 answers) I modified a site recently: I added many Google Analytics events, to better understand user behaviour, and I also added two buttons on almost all the pages of the site. Those buttons show modal views (I am using Bootstrap) with questions about user opinion. These modal views are on almost all pages of the site. After this modification the ranking of the site dropped in Google search from second place to the second page :( Is it the events collected or the modal views added? If the modal views are the reason, then how should I run similar surveys instead? Do you have similar experience, or an explanation for this? Perhaps it is the effect of the Panda 4 update. In that case, what can I look at to improve the site? How can I debug the problem/reasons?

    Read the article

  • Webmaster Tools 500 crawl errors for ASP faceted navigation that does not exist

    - by user19007
    I am getting 2,500 type 500 URL errors in Google Webmaster Tools. These pages are faceted navigation results that cannot be reached by a site visitor; the pages do not exist. We are using faceted navigation with the Volusion platform (ASP.NET, I think). I have specified URL parameters in Webmaster Tools so that Google will not try to index anything faceted, but this does not stop the errors from being generated. I am concerned about how this might affect SEO (bleeding PageRank). I can provide additional information if needed. I am not sure how to solve this. I have started down the path of creating 301s, but I am having some difficulty there as well. (A robots.txt sketch follows below this entry.)

    Read the article
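
    If the phantom faceted URLs are generated by query-string parameters, one hedged option (on top of the parameter handling already configured in Webmaster Tools) is to keep crawlers out of them entirely with robots.txt. The parameter names below are hypothetical placeholders; substitute whatever parameters actually appear in the crawl-error URLs:

        User-agent: *
        # Hypothetical facet parameters - replace with the ones shown in the 500 errors
        Disallow: /*?*facet=
        Disallow: /*?*price=
        Disallow: /*?*color=

    Note that robots.txt stops crawling but does not remove URLs that are already indexed; for those, 301s or noindex remain the usual routes.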

  • Wireless network unstable and often WPA2 protected networks just don't work

    - by Pedro
    I have an issue with my wireless network: the connection works for only a few minutes, after which my browser is no longer able to load pages, even though the wireless is still active/connected. Furthermore, most of the time WPA2-Personal protected networks don't work (yesterday was the first time one worked - for a few minutes). By "don't work" I mean that it seems to connect successfully, but the browser can't load pages. I am running Ubuntu 10.10 32-bit, and my wireless card is a RaLink RT3090. No changes have been made to any settings since Ubuntu was installed - networking began working on its own after the installation - but, as described in the first paragraph, not very well.

    Read the article

  • Using 301 Redirects on new site when access to old site denied?

    - by Cape Cod Gunny
    I have a situation where I'm standing up a new website on a different web host. I've been denied access to the old site by the hosting company, and the old site will most likely be turned off very soon. If my new site contains pages that are named slightly differently, how do I go about setting up 301 redirects on my new site? For example: www.oldsite.com/aboutus/ to www.newsite.com/aboutus.html, and www.oldsite.com/productx/ to www.newsite.com/productx.html Edit: Clarification: the old domain name is different from the new domain name. On my new site do I just duplicate every page that existed on the old site and place redirect code inside those pages? What does the redirect code look like? (A sketch follows below this entry.)

    Read the article
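
    A caution first: a 301 can only be issued by whichever server answers requests for the old domain, so rules placed on the new site never see traffic for www.oldsite.com unless that domain's DNS is pointed at a server you control. Assuming the old domain can be pointed somewhere you control, a minimal Apache .htaccess sketch using the example paths from the question would be:

        # Served for requests that arrive on www.oldsite.com
        Redirect 301 /aboutus/ http://www.newsite.com/aboutus.html
        Redirect 301 /productx/ http://www.newsite.com/productx.html

    Duplicating the old pages on the new site with redirect code inside them would not help, because the old URLs still resolve to the old host (or to nothing once it is switched off).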

  • facebook internal search by google [migrated]

    - by Alexis
    I am currently working on a challenge: basically, using the Google API, I search for Facebook fan pages, most specifically the "About" section, for the email address and the date the business was founded. So far I have come to this: site:facebook.com/pages + "business type" + "country" + "@email.com" I need to add something else so that it also gives me back the date the business was founded. If you look at the "About" section of a Facebook fan page, there is, for example, "(Founded 06/02/2010)". The bracketed info above is what I need my query to return as well; any idea?

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Components: Entity Objects (EOs), View Objects (VOs), Application Modules (AMs), Framework Extension Classes.
    Primary reusable ADF Controller: Bounded Task Flows (BTFs), Task Flow Templates.
    Primary reusable ADF Faces: Page Templates, Skins, Declarative Components, Utility Classes.

    Certain components will often be used more than once. Whether the reuse happens within the same application, or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization. In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic. Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence. ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you may look into the repository for those components and then add them into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create common look and feel pages. Then you can put your page flow together by stringing together several task flow components pulled from the library. An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library.

    Reusable components and their descriptions:
    Data Control: Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls.
    Application Module: When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be a part of the ADF Library JAR and available for reuse.
    Business Components: Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module.
    Task Flows & Task Flow Templates: Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages; the drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, it will prompt you to select the one to which the task flow call activity and control flow are automatically added. If an ADF task flow template was created in the same project as the task flow, the ADF task flow template will be included in the ADF Library JAR and will be reusable.
    Page Templates: You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse.
    Declarative Components: You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project.

    You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into an ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.

    Read the article

  • Phishing: a new technique is spreading with HTML5, bypassing the blacklisting of malicious URLs

    Phishing: a new technique is spreading with HTML5. It bypasses the blacklisting of malicious URLs. Spammers and other cyber-crooks are also turning to HTML5 to get around the increasingly widespread and effective anti-spam and anti-phishing measures of browsers and email clients. Instead of embedding classic HTML links to often-blacklisted pages in their mails, "modern" spammers are now said to favour "HTML attachments". M86, the security firm, is in any case warning against the resurgence of these threats. The links in the mails now point to attached HTML pages, which contain...

    Read the article

  • How to track in Google Analytics registrations that come from Google AdWords ads?

    - by automatix
    I created a campaign in Google AdWords and some ads in it, and gave them URLs like mydomain.tld/registration/?utm_campaign=mycampaing&ad=x mydomain.tld/registration/?utm_campaign=mycampaing&ad=y mydomain.tld/registration/?utm_campaign=mycampaing&ad=z All ads lead to the registration page. A registration is a visit to the page mydomain.tld/registration-complited/?user={ID} so I can track registrations in Google Analytics: I just go to Behavior -> Site Content -> All Pages and filter the pages to registration-complited. But how can I see how many and which users have registered after they came from an ad of a campaign, e.g. utm_campaign? And how can I also track this for a single ad of the campaign, e.g. x? (A URL-tagging sketch follows below this entry.)

    Read the article
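
    A hedged suggestion rather than a confirmed fix: Google Analytics attributes campaign traffic from utm_source and utm_medium, so utm_campaign plus a custom "ad" parameter is usually not enough on its own, and utm_content is the conventional slot for telling individual ads apart. Tagged landing URLs in that style (the values are placeholders) would look like:

        mydomain.tld/registration/?utm_source=google&utm_medium=cpc&utm_campaign=mycampaing&utm_content=x
        mydomain.tld/registration/?utm_source=google&utm_medium=cpc&utm_campaign=mycampaing&utm_content=y
        mydomain.tld/registration/?utm_source=google&utm_medium=cpc&utm_campaign=mycampaing&utm_content=z

    With the registration-completed page set up as a Goal, the conversions can then be segmented by Campaign and Ad Content instead of being read off the All Pages report. (If the AdWords and Analytics accounts are linked, AdWords auto-tagging achieves much the same thing without manual utm parameters.)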

  • remove ssl from Google search results

    - by user73457
    I am the webadmin of a WordPress site that serves up http pages statically. The problem is that some of the pages are shown as https in Google search results. For instance, if the search term "Example Press Kit" is entered, the search result site link comes up as: https://example.com/presskit/ We don't have a site SSL certificate, so surfers are being bounced. I have tried everything. Most recently I created a new website in Google WebAdmin for the https version of our home page. Then I added sitelinks that should have redirected site links intended for https://example.com/* to http://example.com/*. But it doesn't work! Google still shows a dead link to http://example.com/presskit. I didn't think dead links lasted very long in Google results, but there they are, two weeks later. Any ideas? (A canonical-link sketch follows below this entry.)

    Read the article
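
    With no certificate on the server, nothing useful can be served on the https URLs themselves, but the http pages can still declare which URL is authoritative with a canonical link in the page head (many WordPress SEO plugins can emit this automatically). A minimal sketch for the page in question:

        <link rel="canonical" href="http://example.com/presskit/" />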

  • Is this Anti-Scraping technique viable with Crawl-Delay?

    - by skibulk
    I want to prevent web scrapers from abusing the 1,000,000 pages on my website. I'd like to do this by returning a "503 Service Unavailable" error code for users that access an abnormal number of pages per minute. I don't want search engine spiders to ever receive the error. My inclination is to set a robots.txt crawl-delay which will ensure spiders access a number of pages per minute under my 503 threshold. Is this an appropriate solution? Do all major search engines support the directive? Could it negatively affect SEO? Are there any other solutions or recommendations? (See the note after this entry.)

    Read the article
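
    A hedged note on the directive itself: Crawl-delay is honoured by Bing and some other crawlers, but Googlebot ignores it (Google's crawl rate is adjusted in its webmaster console instead), so robots.txt alone will not guarantee that every major spider stays under the 503 threshold. For the crawlers that do support it, the syntax is simply:

        User-agent: *
        # Ask compliant crawlers to wait 10 seconds between requests (roughly 6 pages per minute)
        Crawl-delay: 10

    A common complementary approach is to whitelist known search-engine spiders (verified by reverse DNS) so they are never served the 503, and rate-limit everyone else.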

  • How can I create a dynamic site that is still search-bot friendly?

    - by zuko
    Say I want to have a slide effect between pages: you click a link, the new page is loaded off to the side and then slides in (pushing the old page off the other side). I can imagine using jQuery (with PHP on the server) to do the loading and the effects... but how do I do something like this so that it gracefully degrades for users without JavaScript, including bots? Possibly more problematic: what if I wanted a sort of mural background across the site, perhaps with a parallax scrolling effect, where sliding to other pages reveals more of the (possibly giant) image? Again, I can imagine how to do this with lots of fancy jQuery and PHP, but it would heavily rely on those. How can I gracefully degrade in a situation like that? Any pointers, articles or books would be greatly appreciated. I keep trying to search for answers but I just get a lot of "theory"-based, unhelpful blogs. (A progressive-enhancement sketch follows below this entry.)

    Read the article
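
    The usual answer is progressive enhancement: every link stays an ordinary <a href> pointing at a real server-rendered page (which is what crawlers and no-JavaScript visitors get), and jQuery intercepts the click, fetches that same URL, and animates it in. A rough sketch under those assumptions - the .ajax-nav class, the #viewport and #content containers, and the timings are all illustrative:

        // Without JavaScript (or for bots) these links behave as normal page loads.
        $(document).on("click", "a.ajax-nav", function (e) {
            e.preventDefault();
            var url = $(this).attr("href");

            $.get(url, function (html) {
                // Pull the main content out of the fetched page and park it off-screen.
                var content = $("<div>").html(html).find("#content").html();
                var w = $("#viewport").width();
                var $current = $("#viewport .page");
                var $next = $("<div class='page'></div>").html(content)
                    .css({ position: "absolute", top: 0, left: w, width: "100%" })
                    .appendTo("#viewport");

                // Slide the old page out and the new page in
                // (#viewport is assumed to be position:relative with overflow:hidden).
                $current.css({ position: "absolute", top: 0, left: 0, width: "100%" })
                        .animate({ left: -w }, 400, function () { $current.remove(); });
                $next.animate({ left: 0 }, 400);

                // Keep the address bar and back button honest where supported.
                if (window.history && history.pushState) {
                    history.pushState(null, "", url);
                }
            });
        });

    The parallax mural can be treated the same way: it is a decorative enhancement layered on with CSS/jQuery, while the underlying pages remain complete, individually addressable documents.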

  • Directing crawlers to per-language content on each language sub-domain

    - by Noam
    I have a multilingual website with many pages (40M). The site has UGC, and each translation is actually only of the titles; each sub-domain points to the same content with different titles per language. As far as I understand, each sub-domain should be indexed by search engines, meaning they will actually need to crawl 40M x supported-languages pages. So I thought it might be best to direct each sub-domain's crawler to pages that are fully in that language (titles + UGC). Is there a way to do this? Should search engines understand this on their own? (An hreflang sketch follows below this entry.)

    Read the article
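
    Search engines will discover each sub-domain on their own, but the standard way to tell them that the sub-domains are language variants of one another is rel="alternate" hreflang annotations, either in the head of each page or in the sitemaps. A minimal sketch with hypothetical sub-domains and item URL (each language version should list the full set, including itself):

        <link rel="alternate" hreflang="en" href="http://en.example.com/item/12345" />
        <link rel="alternate" hreflang="fr" href="http://fr.example.com/item/12345" />
        <link rel="alternate" hreflang="de" href="http://de.example.com/item/12345" />

    This does not reduce the number of URLs to crawl, but it tells the crawler which version to show to which audience, which is usually the practical goal when the underlying UGC is shared.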

  • Google Analytics Export API - nextPagePath data

    - by Btibert3
    I am probably missing something obvious, but I do not understand why, when I query:
        start.date = DATE_START, end.date = DATE_END,
        dimensions = c("ga:pagePath","ga:previousPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest, table.id = "ga:mytable", max.results = RESULTS
    my data return as expected, with all of the previous pages including (entrance). However, when I modify the code to use nextPagePath:
        start.date = DATE_START, end.date = DATE_END,
        dimensions = c("ga:pagePath","ga:nextPagePath"),
        metrics = c("ga:pageviews"),
        filters = mypageofinterest, table.id = "ga:mytable", max.results = RESULTS
    only one row of data is returned; the pagePath and nextPagePath are identical to each other. I replicated this result using the Query Explorer. What am I missing or doing wrong? I was expecting to see a large number of "next" pages, including (exit). Thanks in advance.

    Read the article

  • Chrome 10 makes it possible to run Web applications in the background, Google publishes an example

    Chrome 10 makes it possible to run Web applications in the background, even when the browser is closed; Google publishes an example. Update of 24/02/11 by Gordon Fowler. Google has just unveiled a new feature available in version 10 (in beta) of its Chrome browser. The feature, called "Background Pages", although not highlighted at the release of Chrome 10, is indeed there. It allows Web pages to run in the background in a way that is completely transparent to the user. Certain applications (described as "background applications") can thus continue to run...

    Read the article

  • Category to Page and blocking category URLs via robots.txt - good for SEO?

    - by user2952353
    I am using a template which, in pages, allows me to add sidebars and extra content below and above the content I want to pull from a category, which is very helpful. If I create pages to display my categories' content, won't the page URLs conflict with the category URLs? By conflict I mean causing a duplicate content error. What I thought might help was to block the category URLs of the blog in robots.txt, e.g. /category/books /category/music Would that be a good practice in order to avoid the duplicate content penalty? Any tips appreciated. (A robots.txt sketch follows below this entry.)

    Read the article
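
    If the decision is to keep crawlers out of the category archives, the robots.txt entries the question describes would simply be:

        User-agent: *
        Disallow: /category/

    Two hedged caveats: blocking crawling does not remove URLs that are already indexed, and many WordPress setups prefer leaving the categories crawlable with a rel="canonical" (or a noindex meta tag) on them instead, since a robots.txt block also stops link signals from flowing through those pages.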

  • Google Webmaster Tools Index dropped to Zero [closed]

    - by Brian Anderson
    Earlier this year I rebuilt my website using ZenCart. Immediately I saw a drop in index status from 59 to 0. I then signed up for Google Webmaster Tools and noticed the index status took a dramatic drop and has never recovered. I have worked to add content, and I know I am not done, but I have not seen any recovery of this index since. What confuses me is that when I look at the sitemap status under Optimization, it shows there are 1,239 submitted and 1,127 pages indexed. Most of my pages have fallen off page one for relevant search terms, and some are as far back as page 7 or 8 where they used to be on the first page. I have made some changes in the past week to robots.txt and sitemap.xml, but have not seen any improvements. Can anyone tell me what might be going on here? My website is andersonpens.net. Thanks! Brian

    Read the article

  • Splitting a sitemap by content type

    - by James
    I am currently tasked with submitting our website's sitemap to the search engines every week. We have a module which does offer sitemap generation, but we find it does not work very well, as not all pages are included and it does not split the sitemap by content. I've used various (online and offline) tools to generate the sitemaps, which is not the problem. The problem is that after every generation (which takes most of each Monday) I have to manually go through the sitemap and categorise the links into products, pages, categories and sub-categories. I've experimented successfully with XSL to split the sitemap, but it is still a labour-intensive process. Does anyone know of a good method to split the sitemap? Currently there are around 20,000 links (iirc) in total. (A sitemap-index sketch follows below this entry.)

    Read the article
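
    One way to avoid re-categorising a single giant file every week is to generate one sitemap per content type and submit only a sitemap index that references them; the index is what the search engines are given, and each child file can be produced by its own query or export. A sketch of such an index (the file names are hypothetical):

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap><loc>http://www.example.com/sitemap-products.xml</loc></sitemap>
          <sitemap><loc>http://www.example.com/sitemap-categories.xml</loc></sitemap>
          <sitemap><loc>http://www.example.com/sitemap-subcategories.xml</loc></sitemap>
          <sitemap><loc>http://www.example.com/sitemap-pages.xml</loc></sitemap>
        </sitemapindex>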

  • SEO Mapping, Tracking and Reporting

    Linking the pages of a website is done because search engines become more aware of a site's presence when its pages are found at the other end of industry terms in anchor text contained within content at other locations. The number and quality of those links are factors that help promote rankings; when placed for SEO purposes they should be one-way links rather than reciprocal, since reciprocal links do not earn any ranking brownie points and it is prohibitively time-consuming to administer a thousand of them. This is not to be confused with link exchanges; when you can...

    Read the article

  • Recommended flexible website solution?

    - by Omega
    My site has a MyBB forums installation, and that is pretty much all I need. Forums. However, I need a homepage and a couple of other static pages for showing relevant information, links, etc. I don't need something fancy; all I need is something very flexible regarding theme and style editing, and just a couple of simple modules, like public polls. That's all. I am very visual, and I am looking for something that lets me edit pretty much every aspect of the site. These are static pages, mainly, so I don't need something very complex. Some people tell me to use Dreamweaver, but quite honestly that is not what I am looking for, even though it does offer a lot of flexibility. I want something like, you know, Drupal or... some other simple web platform that is deeply editable in terms of graphics and style. What would you recommend to me? Thank you.

    Read the article
