Search Results

Search found 19186 results on 768 pages for 'sharepoint search'.

  • How long is the penalty for duplicate ecommerce content after it has been resurrected?

    - by will
    I am fixing all of the duplicate content on my ecommerce site with original descriptions, etc. How long does it take Google to start ranking it again? I used to have a good ranking that converted quite a few sales, but in the last week I have had next to nothing. Also, would the disclaimer I created under each product be considered duplicate content, since it is the same on most of my product pages?

  • Problem with the domain name of my site in Google search [on hold]

    - by Jayadratha Mondal
    My domain provider doesn't support advanced DNS at the moment, so I have used an iframe to forward to my web server. Suppose my domain abc.com serves a simple HTML page that opens xyz.com via an iframe. Google used to show abc.com in the search results, but for the last two days it has been showing xyz.com. Other things like the description and the name of the site are fine; only the domain has changed to xyz.com, but I want it to show abc.com. When I type the full domain, Google shows the domain as I want, but if I type part of it or any keyword, it shows xyz.com. Does anyone have any idea how to get abc.com shown again? EDIT: I don't know who the domain name provider is because the domain isn't mine. When I try to set a new A record, it says "you need advanced DNS to do this; currently you are using simple DNS". I'm doing this from cPanel. Normally, as far as I know, there are three sections: 1) domain name, 2) A record, 3) CNAME. But I don't have a domain name section.

  • Are generic keywords in a URL bad for SEO? [closed]

    - by user1661479
    Possible Duplicate: Squeezing as much SEO out of a URL as possible.
    I need help with URL structure. Let's say I'm a manufacturer of Wire EDM machines. Is it bad for me to put the keyword wire-edm in my URL to help raise my SEO ranking? For example:
        mywebsite.com/wire-edm/machine/model-xxxx
        mywebsite.com/wire-edm/customer-service
        mywebsite.com/wire-edm/contact
    Or should I leave it as the following, because the gains are fairly insignificant and it doesn't help users understand my site structure:
        mywebsite.com/machine/model-xxxx
        mywebsite.com/customer-service
        mywebsite.com/contact
    I'd like to hear everyone's thoughts on this; please provide some sources for whichever method is better.

  • Are there disadvantages to a literal + instead of an encoded + (%2B) in a URL?

    - by M_rk
    A client of mine has a product whose name ends with a plus sign (e.g. Google+) and would like the page for this product to have a human-readable URL (i.e. a URL that doesn't contain %2B). Since our projects use the following .htaccess rewrite rule,
        RewriteRule ^(.*)$ index.php?$1
    it is possible to use a URL-encoded space in a URL like that. However, while the URL would read as /google+, the actual meaning of the URL would be /google[space]. (The markup won't let me place a real space there.) Now my concern is that this would have disadvantages for SEO. Is this concern valid, and/or are there other drawbacks to this approach?
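
    The question is about Apache/PHP, but as a neutral illustration of how the two encodings are interpreted, here is a minimal Python sketch using only the standard library's urllib.parse (Python is used purely for demonstration and is not part of the poster's stack):

        from urllib.parse import quote, unquote, unquote_plus, parse_qs

        # Percent-encoding a literal plus sign yields %2B.
        print(quote("google+"))          # 'google%2B'

        # When decoding a path component, a plus sign is left alone:
        print(unquote("/google+"))       # '/google+'

        # In form/query-string decoding, '+' is treated as a space:
        print(unquote_plus("google+"))   # 'google '
        print(parse_qs("q=google+"))     # {'q': ['google ']}

    Since the rewrite rule above hands the request to index.php as a query string, which decoding convention the application applies is what ultimately determines whether /google+ is read as "google+" or "google ".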

  • Google indexing pages with #! although we don't have any

    - by Benjamin Gruenbaum
    Our company has developed a single-page application using AngularJS and its routing. Google indexed our site decently with JavaScript, but it did not index some pages very well, so we have developed an HTML-only version. We have followed the AJAX Crawling Specification posted here, and have a <meta name='fragment' content='!'> tag and canonical URLs. We expect http://www.example.com/foo/bar to be fetched from http://www.example.com/?_escaped_fragment_=/foo/bar. However, we have found that since we rolled out the AJAX specification, all pages are now indexed twice: once as the JavaScript version at http://www.example.com/foo/bar, and once as the new version at http://www.example.com/#!/foo/bar. This is harmful to us since it's duplicate content and also misrepresents our site. I have tried looking for similar questions here and in the Google product forum but could not come up with anything.
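
    As an illustration of the escaped-fragment handoff described above (not the poster's actual backend, whose language is unknown; Flask, the route names, and the template names here are assumptions), a minimal Python sketch could look like this:

        from flask import Flask, request, render_template

        app = Flask(__name__)

        @app.route("/")
        @app.route("/<path:route>")
        def page(route=""):
            # Crawlers following the AJAX crawling scheme rewrite
            # http://www.example.com/#!/foo/bar into
            # http://www.example.com/?_escaped_fragment_=/foo/bar
            snapshot = request.args.get("_escaped_fragment_")
            if snapshot is not None:
                # Serve the pre-rendered, HTML-only snapshot to the crawler.
                return render_template("snapshot.html", path=snapshot or "/" + route)
            # Everyone else gets the JavaScript (AngularJS) single-page app.
            return render_template("spa.html")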

  • I have many domain names and 1 website, how can I improve my SEO strategy?

    - by user114659
    I have some domains with several extensions like .us, .net, .org, etc. I want them all to redirect to one website, which is a social networking site, and I want to use these domains in such a way that they become helpful from an SEO point of view. So far I am pointing all the domains to one directory on my hosting. I have some other options, including using 301 redirects, but I don't want to see duplicate content in Google. What else do I need to do?
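
    For the 301 option mentioned above, one common pattern is to answer every extra domain with a permanent redirect to a single canonical host, so search engines consolidate ranking signals instead of seeing duplicates. The snippet below is only a hedged sketch in plain Python/WSGI (in practice this is usually a web-server rewrite rule); www.example.com is a placeholder for the main domain:

        CANONICAL_HOST = "www.example.com"  # placeholder for the main domain

        def application(environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0].lower()
            if host != CANONICAL_HOST:
                # Permanently redirect the extra .us/.net/.org domains onto
                # the canonical one, preserving the requested path and query.
                location = "http://%s%s" % (CANONICAL_HOST, environ.get("PATH_INFO", "/"))
                qs = environ.get("QUERY_STRING", "")
                if qs:
                    location += "?" + qs
                start_response("301 Moved Permanently", [("Location", location)])
                return [b""]
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"main site content"]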

  • Multiple languages, same pages: shall I change the page URL path as well?

    - by Athanatos
    We own multiple country-code top-level domains for our website, e.g. DE, UK, FR. When someone visits one of those domains, they are redirected to .com and, on that first visit, the language automatically changes to the one matching the originating domain. Users can also change the language on the .com website using a dropdown; however, the page URI stays exactly the same, e.g. service.php. How will that be indexed by Google? Will all the different languages be indexed, or only the default language (English)? Is it recommended for SEO purposes to do something with the page URL (maybe even using .htaccess) so that I can also append the language to the title or page name, e.g. service.php?lang=fr?
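
    As a small illustration of the "append the language to the URL" idea from the question (a sketch only; service.php comes from the question, while the language list and helper are assumptions), one way to give each language its own distinct, crawlable URL:

        from urllib.parse import urlencode

        LANGUAGES = ["en", "de", "fr"]  # assumed set of site languages

        def language_urls(page="service.php", default="en"):
            """Return one distinct URL per language for a given page."""
            urls = {default: "/" + page}  # default language keeps the plain URL
            for lang in LANGUAGES:
                if lang != default:
                    urls[lang] = "/%s?%s" % (page, urlencode({"lang": lang}))
            return urls

        print(language_urls())
        # {'en': '/service.php', 'de': '/service.php?lang=de', 'fr': '/service.php?lang=fr'}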

  • How do I log in to Sphider?

    - by dayuloli
    Has anyone else here used Sphider? I am able to install everything (exactly as instructed here) and change the password, but I am unable to log in at the login screen. Whenever I log in, it just refreshes the page, whether the credentials are correct or not. I have looked at the forum and others have the same problem too, but no one has provided a solution. So if any of you have tried Sphider and had the same problem, your input would be much appreciated. Running version 1.3.6. Path to the admin page is 'sphider-1.3.6/admin/admin.php'. PHP version 5.3.13. MySQL version 5.0.91-log.

  • FBA and audience targeting in SharePoint 2007

    - by intangible02
    I want to use the audience targeting feature for a web part with forms-based authentication (FBA). I tried adding the FBA user directly to a SharePoint group, and also adding the user to an ASP.NET role and then adding the role to a SharePoint group, but both failed. My observation is that all FBA users can see the web part content, no matter whether they are specified in the audience targeting or not. After googling it, I found three different conclusions about this: 1) FBA cannot be used with audience targeting; 2) FBA can be used with audience targeting; 3) FBA can be used with audience targeting if the FBA user is added directly to a SharePoint group. May I know which explanation is correct? Where can I find official Microsoft documentation regarding this problem?

  • SharePoint 2007 Central Admin w3wp.exe process consuming 99% CPU

    - by Matrich
    Hi, I have been running an intranet using SharePoint 2007 for over a year and everything has been working fine. However, after some time I realized that the intranet portal was slow, and accessing Central Admin from a computer other than the SharePoint server also became an issue. So I logged onto the SharePoint server itself; it took ages to log in, and then it was very slow even on the server, unlike before. When I checked Task Manager, I found that w3wp.exe was consuming 99% of the CPU. When I restarted the Central Admin app pool, everything came back to normal and ran well, but after a few minutes (15 or so) it became slow again. I have checked the event logs and there was nothing conclusive to help me out. Has anyone had this experience, or does anyone have a good resource? Please help. Thanks in advance.

  • How can I open Thickbox from a form and feed it with data?

    - by Mark Dekker
    I have a simple search PHP script; within that script there is some HTML and JavaScript to make a search input field and a button. What I am trying to do: when someone enters a search and presses submit, Thickbox opens and the results are displayed in the Thickbox. What I have so far is the search field and button; when I press submit, it briefly shows the Thickbox and is then replaced by the result page, but with no search results. Here is the code:
        <form method="get">
          <input type="text" name="merk" size=10 style="font-weight: bold; background-color:#D5DF23;">&nbsp;&nbsp;
          <input type="image" name="merk" class="thickbox"
                 onclick="document.location.href='searcher.php?keepThis=true&TB_iframe=true&height=520&width=800';"
                 src="zoek1.jpg" width="110" alt="Zoek"
                 onMouseOver="this.src='zoek2.jpg'" onMouseOut="this.src='zoek1.jpg'">
        </form></input>

  • Geographical deployment vs. geo load balancing for SharePoint 2010

    - by vrajaraman
    We have company-wide SharePoint portals planned for a few thousand users. Since the users and their applications (hosted in SharePoint) are distributed among different countries, we would like to compare geographical deployment vs. geographical load balancing. Please share your input. We are aware of the trade-off: a geo SharePoint deployment means farms at the central site and the other sites, with the databases split regionally into two database clusters kept in sync using log shipping, SAN replication, or SQL Server 2008 features like database mirroring; geo load balancing by URL with some third-party solution keeps all farms, sites, and databases centralised. The benefits we are expecting are 1) high availability, 2) disaster recovery management, and 3) easier maintenance. I hope I haven't missed any of the points that should be covered.

  • Prevent bots from crawling certain areas of a site

    - by Skoder
    Hey, I don't know much about SEO and how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search results directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being searched mess up my search engine rankings? Edit: I should add that I'm reading up on the robots.txt protocol, but it relies on cooperation from the web crawler. However, I'd also like to prevent any data-mining users who will ignore the robots.txt file. I appreciate any help!
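
    For reference, the robots.txt protocol mentioned above is just a plain-text file served from the site root; a minimal example (the /Search/ path is an assumption about the MVC route) would be:

        User-agent: *
        Disallow: /Search/

    Well-behaved crawlers will skip that path, but as noted in the question this is only advisory; clients that choose to ignore robots.txt can only be kept out by server-side measures such as requiring authentication for those pages.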

  • Searching for Records

    - by 47
    I've come up with a simple search view to search for records in my app. The user enters all parameters in the search box, everything is matched against the database, and results are returned. One of these fields is the phone number; in the database it's stored in the format XXX-XXX-XXXX. A search, for example, for "765-4321" pulls up only "416-765-4321"; however, I want it to return both "416-765-4321" and "4167654321". My view is as below:
        def search(request, page_by=None):
            query = request.GET.get('q', '')
            if query:
                term_list = query.split(' ')
                q = Q(first_name__icontains=term_list[0]) | Q(last_name__icontains=term_list[0]) | \
                    Q(email_address__icontains=term_list[0]) | Q(phone_number__icontains=term_list[0])
                for term in term_list[1:]:
                    q.add((Q(first_name__icontains=term) | Q(last_name__icontains=term) |
                           Q(email_address__icontains=term) | Q(phone_number__icontains=term)), q.connector)
                results = Customer.objects.filter(q).distinct()
                all = results.count()
            else:
                results = []
            if 'page_by' in request.GET:
                page_by = int(request.REQUEST['page_by'])
            else:
                page_by = 50
            return render_to_response('customers/customers-all.html', locals(),
                                      context_instance=RequestContext(request))
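
    One hedged way to get the dash-insensitive phone match the question asks for is to compare the digits of the search term against a dash-stripped copy of the stored number. This is only a sketch: it assumes Django 2.1+ (where the Replace database function is available), and Customer and phone_number come from the view above.

        import re
        from django.db.models import Q, Value
        from django.db.models.functions import Replace

        def phone_q(term):
            """Match term against phone_number with and without dashes."""
            digits = re.sub(r'\D', '', term)   # e.g. '765-4321' -> '7654321'
            q = Q(phone_number__icontains=term)
            if digits:
                # Also compare against a dash-stripped copy of the stored value,
                # so '765-4321' finds both '416-765-4321' and '4167654321'.
                q |= Q(phone_digits__icontains=digits)
            return q

        results = (Customer.objects  # Customer is the model from the view above
                   .annotate(phone_digits=Replace('phone_number', Value('-'), Value('')))
                   .filter(phone_q('765-4321'))
                   .distinct())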

  • Searching documents by tag using the Scribd API is no longer returning expected results.

    - by George
    Recently I have encountered an issue where searching via the Scribd API (docs.search) for documents by tag is no longer working. This had been working for over six months, returning a number of documents that I have tagged with "fdsafetyandprevention" (accessible here: http://www.scribd.com/tag/fdsafetyandprevention). Just recently my search via the API stopped working. Note that test searches such as @tags "selfhelp", as described in the Scribd documentation, DO work. Could my issue be related to caching, or to the age of my documents and Scribd choosing not to return them in search results? I have been using scribd.php (http://www.scribd.com/developers/libraries) to interface with the API, calling $scribd->search('@tags "fdsafetyandprevention"', 20, 0, "all"). I am following the Scribd documentation for docs.search and the advanced search help (http://www.scribd.com/developers/search_help). Help greatly appreciated. George.

  • Nesting a SharePoint Webpart inside of a User Control

    - by jlech
    I know it's usually the other way around, but I have some extenuating requirements that must be met (read as "no one bothered to do the research and now I have to bail them out"). I have a standard user control (.ascx) that is to be imported into a SharePoint 2007 website. Due to a design constraint, a SharePoint web part that is also needed has to be nested inside this user control. In other words, the user control would have to look something like this:
        <%@ Control Language="C#" AutoEventWireup="true" CodeFile="foo.ascx.cs" Inherits="foo" %>
        <div id="container">
            ...snipped...
            <!-- SharePoint web part goes here -->
            ...snipped...
        </div>
    Any help would be appreciated. Thanks!

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    I think it was in May that I was creating a little filters module based on Full-Text Search. I configured my dev machine, then did the same for two testing servers: one in our company for internal testing before we deployed to the client, and then the client's testing server. Until last week this build was still on the testing server, and we finally got feedback that we can deploy it to the production one. I'll only say that I lost half a day because I had not correctly remembered what I had done to configure FTS on the previous servers, and I had no notes for it. I foolishly believed in my memory. Lesson learned.
    For future reference, a bunch of steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, in other Office files like *.xlsx):
    1. From the page (link) download and install the *.pdf IFilter for FTS.
    2. To the PATH global system variable add the path to the catalog where you installed the plugin. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link) download FilterPackx64.exe and install it.
    4. Now from SSMS execute the following procedures:
       sp_fulltext_service 'load_os_resources', 1
       sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Now we must check whether the plugins are visible:
       select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
       select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, then we can assume that everything is OK*.
    8. Right now we can create a catalog for FTS and indexes on the appropriate columns (a sketch follows below).
    *I lost a lot of hours finding out why the plugin for *.pdf files wasn't indexing any file in the database even though sys.fulltext_document_types contained a row for this plugin. After deeper investigation I found that the *.pdf files actually were indexed; at least the EOF sign was added to the indexes, and nothing more, for each file. In the end the problem was that I had forgotten to add the /bin part of the plugin's path to the PATH variable.
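
    For step 8, a hedged SQL sketch of what the catalog and index creation could look like (the table, column, and index names here are made up; adjust them to the real schema):

        -- Hypothetical names; the Content column is varbinary(max) and
        -- FileExtension holds values such as '.pdf' or '.docx'.
        CREATE FULLTEXT CATALOG DocumentsCatalog AS DEFAULT;

        CREATE FULLTEXT INDEX ON dbo.Documents
        (
            Content TYPE COLUMN FileExtension LANGUAGE 1033
        )
        KEY INDEX PK_Documents ON DocumentsCatalog
        WITH CHANGE_TRACKING AUTO;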

  • Project Server 2007 install issue - ProjectEventService won't start

    - by Brian Meinertz
    Trying to install PS2007 with SP1 on Server 2003. The install goes fine, but when running the SharePoint Configuration Wizard, it fails at stage 6 of 12 with the error: Failed to register SharePoint Services. An exception of type System.InvalidOperationException was thrown. Additional exception information: Cannot start service ProjectEventService on computer '.'. From the PSCDiagnostics log: Exception: System.InvalidOperationException: Cannot start service ProjectEventService on computer '.'. --- System.ComponentModel.Win32Exception: The service did not respond to the start or control request in a timely fashion. The ProjectEventService (Microsoft Office Project Server Event) won't even start manually using the Network Service account. Starting the service with a domain account works, but subsequently running the Config Wizard causes the service to be removed and re-provisioned to run using the Network Service account, which again fails. Presumably Network Service needs elevated permissions, but even adding it to the local Admin group makes no difference. Anyone come across this sort of issue before?

  • Creating a new Active Directory account with an InfoPath form

    - by ryan
    I am setting up a business partner portal on our SharePoint server. There will be an AD group with permissions limited to viewing, and possibly contributing to, the specific business partner site, and employees of our business partners will have accounts created for them as needed. Now we would like to let our business development group (BDG) have control over the partner accounts. Ideally they should be able to add and delete accounts and change permissions on them. The BDG are not domain admins, so we don't want to give them access to the domain controller. We want to create an InfoPath form that will allow them to do all this. Is it possible to create and manage AD accounts from within an InfoPath form on the SharePoint server? I searched this site and MSDN and cannot find anything specifically related to my question.

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimize their size, or even disable them if possible. I'm not completely sure why the log files grow so large even under the simple recovery model (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The log files are set to autogrow by 10%, restricted to 2 GB, so I guess that is why the checkpoint (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case: 0) without sacrificing performance (e.g. VLF fragmentation)?

  • ISA forms authentication problems after installing MOSS SP2

    - by user22215
    Guys, I have a problem that has flared back up after installing WSS and MOSS Service Pack 2. The problem centers around users being prompted to enter credentials when interacting with Office documents. This problem came up before, and I was able to go into ISA Server and configure a persistent cookie on the web listener. As we all know, when configuring a cookie you have two options: use only on private computers, or use on all computers. If I select "use on all computers", I can't even log in to SharePoint from the forms page; however, if I select "use only on private computers", I'm able to log in and I also don't get prompted when opening Office documents. So I would like to ask: has something changed with SharePoint Service Pack 2? That's the only change that has been made to my environment.

  • Assembling a Word doc using data from Excel - MS Office 2010

    - by Sascha
    I have a questionnaire that users complete. It is in Excel. After users complete the questionnaire, I would like to be able to generate a Word document that contains their answers, for example: "The answer to your question was [answer from Excel questionnaire cell A49]". I have seen that this is possible with SharePoint; however, I don't have SharePoint. I am working with MS Office 2010, and I also have Visual Studio Express 2010. What is the best way to achieve the above, pretty please? Thanks.
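
    The poster's setup points at an Office-side solution (VBA or a VSTO add-in from Visual Studio), but as a hedged sketch of the general "read a cell, write a sentence" idea, here is what it could look like in Python with the openpyxl and python-docx libraries; the file names are assumptions, and cell A49 comes from the question:

        from openpyxl import load_workbook   # pip install openpyxl
        from docx import Document            # pip install python-docx

        # Read the answer from the completed questionnaire.
        wb = load_workbook("questionnaire.xlsx", data_only=True)
        answer = wb.active["A49"].value

        # Write a Word document containing that answer.
        doc = Document()
        doc.add_paragraph("The answer to your question was %s" % answer)
        doc.save("answers.docx")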
