Search Results

Search found 24735 results on 990 pages for 'site ranking'.


  • Learn How to Start a Membership Site

    Many people want to know how to start a membership site. A membership site is one where visitors have to pay to view the content, and unauthorized visitors are prevented from browsing it.

    Read the article

  • Since Google reduces the value of links alongside nofollow links, what is an alternative?

    - by SharkTheDark
    Since 2009, Google also counts nofollow links as outgoing links, and thus reduces the value of the other links. What are some alternatives to stop Google from counting outside links on my page? Suppose I make links appear in my page source like this: <span hrefs="http://link" rel="nofollow" link="true">Link Name</span> and then, in JavaScript, replace each span with an a tag and rename hrefs to href for every span that has link="true". Will this help?
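
    A minimal sketch of the span-to-anchor swap described above, assuming modern DOM APIs (the hrefs and link attributes are the asker's own invented markup):

        // Convert each marked <span> into a real <a> element once the DOM is ready.
        document.querySelectorAll('span[link="true"]').forEach(function (span) {
            var a = document.createElement('a');
            a.href = span.getAttribute('hrefs');        // non-standard attribute from the question
            if (span.getAttribute('rel')) {
                a.rel = span.getAttribute('rel');       // carries rel="nofollow" across if present
            }
            a.textContent = span.textContent;
            span.parentNode.replaceChild(a, span);
        });

    Whether this achieves the SEO goal is a separate question, since Googlebot can execute JavaScript and may still see the generated anchors.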

    Read the article

  • Create a new site programmatically and select parent site? (SharePoint)

    - by peter
    Hi, I am using the following code to create a new site:

        newWeb = SPContext.GetContext(HttpContext.Current).Web.Webs.Add(
            newSiteUrl, newSiteName, null, (uint)1033, siteTemplate, true, false);
        try { newWeb.Update(); }

    newSiteUrl and newSiteName are values from two textboxes, and on whichever site I use this code (in a web part), the new site becomes a subsite of that site. I would now like to be able to select a parent site so that the new site can sit anywhere in the site collection, not just as a subsite of the site where I use the web part. I created the following function to get all the sites in the site collection and populate a dropdown with the name and URL of every site:

        private void getSites()
        {
            SPSite oSiteCollection = SPContext.Current.Site;
            SPWebCollection collWebsite = oSiteCollection.AllWebs;
            for (int i = 0; i < collWebsite.Count; i++)
            {
                ddlParentSite.Items.Add(new ListItem(collWebsite[i].Title, collWebsite[i].Url));
            }
            oSiteCollection.Dispose();
        }

    If the user selects a site in the dropdown, is it possible to use that URL in newSiteUrl to decide where the new site should go? I can't get it to work and the new site still becomes a subsite of the current one. I guess it has to do with HttpContext.Current? Any ideas on how I should do it instead? It's the first time I have written custom web parts and the SharePoint object model is a bit overwhelming at the moment. Thanks in advance.
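
    A hedged sketch of one way to do this: open the parent web the user picked and call Webs.Add on that web instead of on the current one (same Add arguments and variables as in the question; the dropdown is assumed to hold absolute web URLs, as populated above):

        string parentUrl = ddlParentSite.SelectedValue;

        // SPSite accepts any URL inside the site collection; OpenWeb() with no
        // arguments then returns the web at that URL, i.e. the chosen parent.
        using (SPSite siteCollection = new SPSite(parentUrl))
        using (SPWeb parentWeb = siteCollection.OpenWeb())
        {
            SPWeb newWeb = parentWeb.Webs.Add(
                newSiteUrl, newSiteName, null, (uint)1033, siteTemplate, true, false);
            newWeb.Update();
            newWeb.Dispose();
        }

    As a side note, SPContext.Current.Site is owned by SharePoint and should not be disposed in getSites(); only dispose SPSite/SPWeb objects you create yourself.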

    Read the article

  • Since Google doesn't use nofollow anymore, what is a smart alternative?

    - by SharkTheDark
    Since 2009, Google also counts nofollow links as outgoing links. What are some alternatives to stop Google from counting outside links on my page? If I make links appear in my page source like this: <span hrefs="http://link" rel="nofollow" link="true">Link Name</span> and then, in JavaScript, replace each span with an a tag and rename hrefs to href for every span that has link="true", will this help?

    Read the article

  • Since Google doesn't use nofollow anymore, what is an alternative?

    - by SharkTheDark
    Since 2009, Google counts nofollow links also as outgoing links. What are some alternatives to stop Google from counting outside links on my page? If I make links appear in my page source like this: <span hrefs="http://link" rel="nofollow" link="true">Link Name</span> and then, in JavaScript, replace each span with an a tag and rename hrefs to href for every span that has link="true", will this help?

    Read the article

  • Will rewriting your .htaccess so that 404s return search results from your site negatively affect your ranking in Google?

    - by leeand00
    Depending on the type of site you are running, it may or may not be advantageous to display search results instead of a 404 page when someone visits a non-existent page on your site. I believe the site I've been maintaining recently would benefit from this, as it is the site of a publication, and with a publication the more people you can get reading your site the better. But after reading up on how Google ranks the "quality" of your site, and thus where you appear in SERPs, based on how closely the meta text of a page relates to its content, I have to wonder whether making the 404 page return search results would harm the "quality" of your site in Google's eyes.
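
    For context, the setup the question describes usually amounts to a single directive; a sketch only, with /search.php standing in for whatever search page the site actually has:

        # Serve the site's search page for missing URLs instead of a bare 404 response.
        ErrorDocument 404 /search.php
        # Inside that script, the originally requested path is available (under Apache)
        # via the REDIRECT_URL server variable, so it can seed the search query.

    One ranking-related detail worth noting: if that page answers with HTTP 200 for non-existent URLs, Google may classify them as soft 404s, which is exactly the kind of quality signal the question is worried about.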

    Read the article

  • PHP XPath - how to get value from result

    - by LeeTee
    I have the following XML file from eBay:

        <GeteBayDetailsResponse xmlns="urn:ebay:apis:eBLBaseComponents">
          <Timestamp>2012-07-04T12:02:14.541Z</Timestamp>
          <Ack>Success</Ack>
          <Version>779</Version>
          <Build>E779_INTL_BUNDLED_14986004_R1</Build>
          <SiteDetails><Site>US</Site><SiteID>0</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Canada</Site><SiteID>2</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>UK</Site><SiteID>3</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Germany</Site><SiteID>77</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Australia</Site><SiteID>15</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>France</Site><SiteID>71</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>eBayMotors</Site><SiteID>100</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Italy</Site><SiteID>101</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Netherlands</Site><SiteID>146</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Spain</Site><SiteID>186</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>India</Site><SiteID>203</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>HongKong</Site><SiteID>201</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Singapore</Site><SiteID>216</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Malaysia</Site><SiteID>207</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Philippines</Site><SiteID>211</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>CanadaFrench</Site><SiteID>210</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Poland</Site><SiteID>212</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Belgium_Dutch</Site><SiteID>123</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Belgium_French</Site><SiteID>23</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Austria</Site><SiteID>16</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Switzerland</Site><SiteID>193</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <SiteDetails><Site>Ireland</Site><SiteID>205</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
          <UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime>
        </GeteBayDetailsResponse>

    I need to get the value of the SiteID node where the Site node matches whatever variable I pass. The code I have is:

        $xml_file = 'SiteDetails.xml';
        $xmlDoc = new DomDocument();
        $xmlDoc->load($xml_file);
        $xpath = new DOMXpath($xmlDoc);
        $siteIDList = $xpath->query("/GeteBayDetailsResponse/SiteDetails[Site=$site]/SiteID");
        var_dump($siteIDList);
        echo $siteIDList->SiteID;

    I get the following result:

        object(DOMNodeList)#11 (0) { }

    Can anyone help? I want to get a value of 3.
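
    For what it's worth, the empty DOMNodeList here is usually down to two things: the document declares a default namespace (urn:ebay:apis:eBLBaseComponents) that the XPath expression has to use, and $site needs quoting inside the expression. A sketch of one way to handle both, assuming $site holds a value such as 'UK':

        <?php
        $site = 'UK';

        $xmlDoc = new DOMDocument();
        $xmlDoc->load('SiteDetails.xml');

        $xpath = new DOMXPath($xmlDoc);
        // Bind a prefix to the document's default namespace so XPath can see the elements.
        $xpath->registerNamespace('e', 'urn:ebay:apis:eBLBaseComponents');

        // Quote the site name and use the prefix on every element in the path.
        $nodes = $xpath->query("/e:GeteBayDetailsResponse/e:SiteDetails[e:Site='$site']/e:SiteID");

        if ($nodes->length > 0) {
            echo $nodes->item(0)->nodeValue; // prints 3 for UK
        }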

    Read the article

  • 10 steps to enable ‘Anonymous Access’ for your SharePoint 2010 site

    - by KunaalKapoor
    What’s Anonymous Access? Anonymous access to your SharePoint site enables all visitors to view the site without having to log in. With this blog I’d like to go through an easy step-wise procedure to enable and set up anonymous access. Before you actually enable anonymous access on the site, you’ll have to change some settings at the web application level, so let’s start with that.

    Prerequisite(s):
    1. A hosted SharePoint 2010 farm/server.
    2. An existing SharePoint site.
    I mention these pre-reqs because the steps below wouldn’t be valid for a different type of site.

    Step 1: In Central Administration, under Application Management, click Manage web applications.
    Step 2: Select the web application for the site on which you want to enable anonymous access and click the Authentication Providers icon.
    Step 3: In the modal window, click the Default zone.
    Step 4: Under the Edit Authentication section, check Enable anonymous access and click Save. This makes the Anonymous Access authentication mechanism available at the web application level in IIS; the web application will now allow anonymous access to be set.
    Step 5: Going back to Web Application Management, click the Anonymous Policy icon.
    Step 6: Before proceeding any further, under Anonymous Access Restrictions (in web application management) select your zone, set the Permissions to None – No policy, and click Save.
    Step 7: Now navigate to the top-level site collection for the web application and click Site Actions > Site Settings.
    Step 8: Under Users and Permissions, click Site Permissions.
    Step 9: On the Edit tab, click Anonymous Access.
    Step 10: Choose whether you want anonymous users to have access to the entire Web site or to lists and libraries only, and then click OK. You should now see the corresponding entry under your permissions.

    Also keep in mind: if you are trying to access the site from a browser within the domain, you’ll need to change some browser settings to see the effect. Normally this is because Internet Explorer is set to log in automatically in the intranet zone only (unless you have explicitly changed the zones or added the site to trusted sites). If this is from a box within your domain, try accessing the site after temporarily changing the Internet Explorer setting to Anonymous Logon for the zone the site is in, for example "Intranet". You will find these settings under Tools > Internet Options > Security tab.
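
    Not part of the original walkthrough, but steps 7-10 can also be scripted. A rough PowerShell sketch, assuming anonymous access has already been enabled on the zone (steps 1-4) and using a placeholder URL:

        $web = Get-SPWeb "http://yourserver/sites/yoursite"
        # 'On' opens the entire web site to anonymous users; use 'Enabled' for lists and libraries only.
        $web.AnonymousState = [Microsoft.SharePoint.SPWeb+AnonymousState]::On
        $web.AnonymousPermMask64 = "Open, ViewPages, ViewListItems"   # read-only style permissions
        $web.Update()
        $web.Dispose()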

    Read the article

  • Ranking-based string matching algorithm for MIDI music

    - by Taha
    I am working on a MIDI music project. What I am trying to do is match an instrument MIDI track against similar instrument MIDI tracks; for example, the flute track in one MIDI file is matched against the flute track in some other MIDI file. After matching, the results should be ranked according to their similarity, like: 1) track1 2) track2 3) track3. I have this sort of string coming from my MIDI music: F4/0.01282051282051282E4/0.01282051282051282Eb4/0.01282051282051282 D4/0.01282051282051282C#4/0.01282051282051282C4/0.01282051282051282 Which ranking algorithm with good metrics should I use for such data? Thanking you in anticipation!
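
    The question leaves the metric open; as one baseline (not from the original post), the strings can be split into note/duration tokens and candidate tracks ranked by a normalized sequence-similarity score, for example with Python's standard difflib:

        import difflib
        import re

        def tokenize(track):
            # Split "F4/0.0128...E4/0.0128..." into note/duration tokens such as "F4/0.01282051282051282".
            return re.findall(r"[A-G][#b]?\d+/[0-9.]+", track)

        def rank_tracks(query, candidates):
            # Return candidate track strings sorted from most to least similar to the query track.
            def score(candidate):
                return difflib.SequenceMatcher(None, tokenize(query), tokenize(candidate)).ratio()
            return sorted(candidates, key=score, reverse=True)

    More music-aware metrics (edit distance on pitch intervals, dynamic time warping on durations) follow the same tokenize-then-score pattern.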

    Read the article

  • How to determine the correct (case sensitive) URL for a SharePoint site

    - by Goyuix
    SharePoint is generally very tolerant of accepting a URL in a case-insensitive fashion; however, there are a few cases where it completely breaks down. For example, when creating a site column it somehow stores and uses the URL as it was supplied at creation time, and when trying to edit the field definition through the Site Column Gallery (the fldedit.aspx page in LAYOUTS) you end up with the error below.

        Value does not fall within the expected range.
          at Microsoft.SharePoint.SPFieldCollection.GetFieldByInternalName(String strName, Boolean bThrowException)
          at Microsoft.SharePoint.SPFieldCollection.GetFieldByInternalName(String strName)
          at Microsoft.SharePoint.ApplicationPages.BasicFieldEditPage.OnLoad(EventArgs e)
          at System.Web.UI.Control.LoadRecursive()
          at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    How can I reliably get the correct URL for a site/web? The SPSite.Url and SPWeb.Url properties seem to return whatever case they are instantiated with. In other words, the site collection is provisioned using the following URL: http://server/Path/Site. If I create a new site column using the SharePoint object model and happen to use http://server/path/site when instantiating the SPSite and SPWeb objects, the site column will be made available, but when trying to access it through the gallery the error above is generated. If I correct the URL in the address bar, I can still view/modify the definition for the SPField in question, but the default URL that is generated is bogus. Clear as mud?

    Example code (this is a bad example because of the case sensitivity issue):

        // note: site should be partially caps: http://server/Path/Site
        using (SPSite site = new SPSite("http://server/path/site"))
        {
            using (SPWeb web = site.OpenWeb())
            {
                web.Fields.AddFieldAsXml("..."); // correct XML really here
            }
        }

    Read the article

  • Document ranking strategy for a P2P file sharing system

    - by ablmf
    Recently, I was given the task of building a P2P file sharing system. There is one requirement: the system should have a document ranking algorithm that can be used to help users find the more valuable files. Several strategies might be useful:
    1. Force users to give a score to a file before it can be downloaded.
    2. Documents containing certain keywords get a higher rank.
    3. A manager can modify a file's ranking manually.
    4. The more a file is downloaded, the higher its rank.
    Do you know any other strategies or methods that would also be appropriate? Or is there a real-world example?
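
    Purely as an illustration (the weights and signals are not prescribed by the question), those strategies can be folded into a single weighted score that the system sorts by:

        import math

        def rank_score(user_rating, download_count, keyword_hits, manager_boost=0.0,
                       w_rating=0.5, w_downloads=0.3, w_keywords=0.2):
            # Toy ranking: combine the signals listed above; the weights are arbitrary and would be tuned.
            downloads = math.log1p(download_count)   # dampen raw counts so popular files don't dominate
            return (w_rating * user_rating
                    + w_downloads * downloads
                    + w_keywords * keyword_hits
                    + manager_boost)

    Real-world analogues of this mix are the voting-plus-activity rankings used by Q&A sites and torrent indexes.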

    Read the article

  • How to create managed properties at the site collection level in SharePoint 2013

    - by ybbest
    In SharePoint 2013, you can create managed properties at the site collection level. Today, I’d like to show you how to do so through PowerShell.

    1. Define your managed properties, crawled properties and managed property types in an external CSV file. The PowerShell script will read this file and create the managed properties and the mappings.

    2. As you can see, I also defined the variant type, because you need the variant type to create the crawled property. To have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom columns. However, if you do not want to run a full crawl to create those crawled properties, you can create them yourself using PowerShell; just make sure the crawled properties you create have the same names they would have if created by a full crawl.

    Managed property types:
    Text = 1
    Integer = 2
    Decimal = 3
    DateTime = 4
    YesNo = 5
    Binary = 6

    Variant types:
    Text = 31
    Integer = 20
    Decimal = 5
    DateTime = 64
    YesNo = 11

    3. You can use the following script to create your managed properties at the site collection level; the difference when creating a managed property at the site collection level is that you pass in the site collection id.

        param(
            [string] $siteUrl = "http://SP2013/",
            [string] $searchAppName = "Search Service Application",
            $ManagedPropertiesList = (IMPORT-CSV ".\ManagedProperties.csv")
        )
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $searchapp = $null

        function AppendLog
        {
            param ([string] $msg, [string] $msgColor)
            $currentDateTime = Get-Date
            $msg = $msg + " --- " + $currentDateTime
            if (!($logOnly -eq $True))
            {
                # write to console
                Write-Host -f $msgColor $msg
            }
            # write to log file
            Add-Content $logFilePath $msg
        }

        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt"

        function CreateRefiner
        {
            param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType, [System.GUID] $siteID)
            $cat = Get-SPEnterpriseSearchMetadataCategory -Identity SharePoint -SearchApplication $searchapp
            $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID
            if ($crawledproperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow
                $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false
            }
            $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue
            if ($managedproperty -eq $null)
            {
                Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow
                $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{ $_.Name -eq $managedProperty.Name }
            if ($mappedProperty -eq $null)
            {
                Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow
                New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication $searchapp -SiteCollection $siteID
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{ $_.Name -eq $managedProperty.Name }
            #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty
        }

        $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName
        $site = Get-SPSite $siteUrl
        $siteId = $site.id

        Write-Host "Start creating Managed properties"
        $i = 1
        FOREACH ($property in $ManagedPropertiesList)
        {
            $propertyName = $property.managedPropertyName
            $crawledName = $property.crawledName
            $managedPropertyType = $property.managedPropertyType
            $variantType = $property.variantType
            Write-Host $managedPropertyType
            Write-Host "Processing managed property $propertyName $($i)..."
            $i++
            CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId
            Write-Host "Managed property created " $propertyName
        }

    Key Concepts

    Crawled Properties: Crawled properties are discovered by the search index service component when crawling content.
    Managed Properties: Properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties.
    Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience, that is, available for search queries and displayed in Advanced Search and search results, you must map it to a managed property.

    References

    Administer search in SharePoint 2013 Preview
    Managing Metadata
    New-SPEnterpriseSearchMetadataCrawledProperty
    New-SPEnterpriseSearchMetadataManagedProperty
    Remove-SPEnterpriseSearchMetadataManagedProperty
    Overview of crawled and managed properties in SharePoint 2013 Preview
    SharePoint 2013 – Search Service Application
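
    Not shown in the original post: the CSV that the script imports needs column headers matching the property names it reads (managedPropertyName, crawledName, managedPropertyType, variantType). A hypothetical example, using the type codes from the tables above and made-up column values:

        managedPropertyName,crawledName,managedPropertyType,variantType
        YBCostCentre,ows_YBCostCentre,1,31
        YBAmount,ows_YBAmount,3,5

    It would then be run with something like this (the script file name is assumed):

        .\CreateManagedProperties.ps1 -siteUrl "http://SP2013/sites/finance" -searchAppName "Search Service Application"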

    Read the article

  • SharePoint 2010 Hosting :: Error – HTTP Error 401.1 when Accessing Your SharePoint 2010 Site

    - by mbridge
    When attempting to view a MOSS (SharePoint) 2007 or SharePoint 2010 site locally from a Web Front End (WFE), you get an error stating: “HTTP Error 401.1 – Unauthorized: Access is denied due to invalid credentials.” I have noticed that this happens on Windows Server 2003/2008 SP1/SP2/R2 when using host headers and Alternate Access Mappings on a web application in MOSS 2007. If you can access the site from remote machines but cannot access it from the server itself, then this might be your issue. (If you cannot access the web site locally or remotely from other machines, then there is a security issue on the site and/or possibly a Kerberos-related issue.)

    I implemented fix #2 from the Microsoft KB article below on all servers in the MOSS 2007 farm (WFEs and Indexing/Search Server), and for all my newer farm installs, which include SharePoint 2007 (MOSS) and SharePoint 2010, I use method 2 on all SharePoint and SQL servers in the farm. If using method 1, you would add all host headers and Alternate Access Mappings for all web applications to the BackConnectionHostNames value; then you will be able to access the sites locally from the WFEs.

    Microsoft KB Link: http://support.microsoft.com/kb/896861

    Method 1: Specify Host Names. Please follow these steps:
    1. Click Start, click Run, type regedit, and then click OK.
    2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
    3. Right-click MSV1_0, point to New, and then click Multi-String Value.
    4. Type BackConnectionHostNames, and then press ENTER.
    5. Right-click BackConnectionHostNames, and then click Modify.
    6. In the Value data box, type the host name or the host names for the sites that are on the local computer, and then click OK.
    7. Quit Registry Editor, and then restart the IISAdmin service.

    Method 2: Disable the Loopback Check. Please follow these steps:
    1. Click Start, click Run, type regedit, and then click OK.
    2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    3. Right-click Lsa, point to New, and then click DWORD Value.
    4. Type DisableLoopbackCheck, and then press ENTER.
    5. Right-click DisableLoopbackCheck, and then click Modify.
    6. In the Value data box, type 1, and then click OK.
    7. Quit Registry Editor, and then restart your computer.

    Give it a try and good luck.
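
    For convenience (not in the original post), Method 2 can also be applied from an elevated PowerShell prompt instead of Registry Editor; the key and value are exactly the ones from the KB article:

        New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
            -Name "DisableLoopbackCheck" -Value 1 -PropertyType DWord -Force
        # A restart is still required afterwards for the change to take effect.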

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here.

    My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed to be a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved.

    The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP).

    .htaccess in root of domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
        RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in staging subdomain on domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration as the visitor is potentially redirected twice; however, I find it to be a more granular method of control as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google).

    You can also extend the IP address detection - firstly by using wildcards, e.g. ^10\.2\.2\..* (the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .*), so you can specify specific ranges of IPs in a subnet or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again.

    NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...".

    Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall.

    Other referenced URLs:
    internetofficer.com/seo-tool/regex-tester/
    fantomaster.com/faarticles/rewritingurls.txt
    addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
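
    In the spirit of the merge the post asks about: if the rules can live somewhere that covers every domain1 hostname (the vhost configuration, or a single shared document root), the whole setup above collapses to one ruleset. A sketch only, reusing the same placeholder IP:

        RewriteEngine On
        # Let the owner's IP through to the real content on any domain1 hostname...
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
        # ...and send every other visitor to the retail domain, preserving the requested path.
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The trade-off, as the post itself notes, is losing the per-subdomain granularity of the two-file approach.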

    Read the article

  • Migrate ASP.Net web site from IIS6 to IIS7

    - by David.Chu.ca
    I have to migrate an ASP.NET web site from IIS6 to IIS7. I tried to copy all the files for the web site from IIS6 (c:\inetpub\wwwroot\MySite) to another box with Windows Server 2008 R2, where IIS7 is the default web server. However, the simple copy does not seem to work. Should I rebuild the web site for IIS7, or should I make changes on the new box with IIS7, such as in web.config? Thanks for the comments. On further investigation I found that the http handlers section seems to have caused the exception:

        <!--httpHandlers>
            <add path="Reserved.ReportViewerWebControl.axd" verb="*"
                 type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 validate="false"/>
        </httpHandlers-->

    After I comment out the above handler in web.config, the web page works fine. This is just my initial test. I am not sure if I should rebuild the web site from source code or not. If so, what do I need to specify for IIS7?
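
    If the ReportViewer handler is still required, the usual IIS7 integrated-pipeline counterpart is to register it under system.webServer rather than system.web; a sketch (the name attribute is arbitrary, but IIS7 requires one):

        <system.webServer>
          <handlers>
            <add name="ReportViewerWebControlHandler"
                 path="Reserved.ReportViewerWebControl.axd"
                 verb="*"
                 preCondition="integratedMode"
                 type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
          </handlers>
        </system.webServer>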

    Read the article

  • Changing IP address in IIS for SharePoint site results in Directory listing error

    - by Dan
    I have a server here that has two roles: one is Exchange 2007 and the other is MOSS 2007. In IIS I have a site, go.domain.com, which has our OWA; the other is internal.domain.com, which is the MOSS site. I have given the NIC local IPs and each site is using host headers. The GO site has an SSL cert from NetSol, and the MOSS site has a self-signed one. Right now going to either shows the NetSol certificate, which browsers complain about when going to the internal.domain.com site, obviously, since they are on the same IP in IIS. Both sites have always run off the original IP of 10.0.0.3 in IIS. When I added the second IP (10.0.0.6) to the NIC and changed the SharePoint site in IIS to use it for http and https access, I started getting this message in a browser when trying to connect: "Directory Listing Denied. This Virtual Directory does not allow contents to be listed." Changing the IP back to 10.0.0.3 brings the internal site back up. What am I missing here? Do I need to fool around with Alternate Access Mappings in Central Admin? Am I completely missing the point with multiple SSL certs and host headers?

    Read the article

  • How to publish an Access DB to an https SharePoint 2010 site with a self-signed certificate

    - by ybbest
    This post covers the troubles you may run into when you publish your Access database to an https SharePoint 2010 site with a self-signed certificate.

    Problem: First you get a warning (screenshot in the original post), and then the error message "The name of the security certificate is invalid or does not match the name of the site".

    Solution: That error appears when the 'common name' in the certificate doesn't match the address you typed into the browser to access the site. To fix the problem, you need to use a script to generate the certificate rather than the IIS UI, because the UI defaults the common name to the server name, and you will hit the above problem when using that certificate on a web application with a different host name.

    You can use SelfSSL.exe (IIS 6.0 only); you have to specify the common name (cn), for example:

        selfssl.exe /T /N:cn=testsharepoint.com /K:1024 /V:7 /S:1 /P:443

    Or you can use makecert (IIS 7.0 and above):

        makecert -r -pe -n 'CN=my.domain.here' -b 01/01/2000 -e 01/01/2036 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12

    After you have created the certificate, you then need to add that self-signed certificate to your IIS web site and to the Trusted Root Certification Authorities. (To get there, press Windows + R, type mmc.exe, and add the Certificates console.) I have compiled this solution from the questions I asked on SharePoint StackExchange.

    Read the article

  • Google Analytics Visitors drop-off for certain region of site only

    - by crmpicco
    I have an issue with the tracking on my site where I have seen a dramatic drop-off of visitors to the site from a certain region. I have four regions on my site at the moment: UK, EU, US and RoW (Rest of the World). The UK, EU and US regions are unaffected; only the RoW region suffers this drop-off. I have included a screen shot below from my GA account, which shows this effect. My GA code, which is included on every page on the site, is below. I have changed the UA account number intentionally for this example. There have been no changes made to the GA account or the tracking code in a live environment for some considerable time, but for some reason I am seeing the drop-off for this region only. In the code below I am not tracking page views on certain pages, as I have event tracking set up for these pages.

        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-18721873-5']);
            _gaq.push(['_setCookiePath', '/row/']);
            if ( typeof(p_page) != 'undefined') {
                // do nothing if user is on above pages
                // N.B. there are a series of conditions in this if statement checking that we are not on a particular page
            } else {
                _gaq.push(['_trackPageview']);
            }
        </script>

    Read the article

  • WebMatrix "The Site Has Stopped" Fix

    - by Tarun Arora
    I just got started with Azure Web Sites by creating a website from the WordPress template. Next I installed WebMatrix so that I could run the website locally. Every time I tried to run my website from WebMatrix I hit the message “The following site has stopped ‘xxx’”.

    Step 00 – Analysis
    It took a bit of time to figure out that WebMatrix makes use of IIS Express, but it was easy to see that IIS Express was not showing up in the system tray when I started WebMatrix. This was a good indication that IIS Express was having trouble starting up. So I opened a CMD prompt and tried to run IISExpress.exe, which resulted in an error message. I then ran IISExpress.exe /trace:Error, which gave a more detailed reason for the failure.

    Step 1 – Fixing “The following site has stopped ‘xxx’”
    Further analysis revealed that the IIS Express config file had been corrupted. So I navigated to C:\Users\<UserName>\Documents\IISExpress\config and deleted the files applicationhost.config, aspnet.config and redirection.config (please take a backup of these files before deleting them). Back in CMD, I ran IISExpress /trace:Error again; IIS Express successfully started and parked itself in the system tray. I opened up WebMatrix and clicked Run, and this time the default site loaded in the browser without any failures.

    Step 2 – Download the WordPress Azure Web Site using WebMatrix
    Because the config files applicationhost.config, aspnet.config and redirection.config were deleted, I lost the settings of my Azure-based WordPress site that I had downloaded to run from WebMatrix. This was simple to sort out: open WebMatrix and go to the Remote tab, click Download, export the PublishSettings file from the Azure Management Portal, and upload it in the pop-up you get after clicking Download in the previous step. Now you should have your Azure WordPress website all set up and running from WebMatrix. Enjoy!

    Read the article

  • How to add files to a document library in a site definition in SharePoint 2007?

    - by jaloplo
    Hi all, I'm doing a site definition for SharePoint 2007. When the site is created, a document library called "Folder2" is also created. Now, I need to add some documents to this document library so that they appear as items in the document library's standard views. My code is:

        <Lists>
          <List FeatureId="00bfea71-e717-4e80-aa17-d0c71b360101" Type="101" Title="Folder2" Url="Folder2">
            <Data>
              <Rows>
                <Row>
                  <Field Name="Name">MyFile.txt</Field>
                  <Field Name="Title">MyFile.txt</Field>
                  <Field Name="FileLeafRef">MyFile.txt</Field>
                </Row>
              </Rows>
            </Data>
          </List>
        </Lists>

    When I look at the items of the document library there is one element with the title "1_". Does anybody know how to add files in a site definition? The onet.xml I used is the same as the blank site's. Thanks!!!
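
    Not from the question, but for reference: the usual way to provision actual files into a library from a site definition is a Module element rather than List Data rows. A hedged sketch of what the onet.xml could contain (the module name is arbitrary, and MyFile.txt is assumed to be deployed alongside the site definition so SharePoint can find it at provisioning time):

        <Modules>
          <Module Name="Folder2Docs" Url="Folder2" Path="">
            <File Url="MyFile.txt" Type="GhostableInLibrary">
              <Property Name="Title" Value="MyFile.txt" />
            </File>
          </Module>
        </Modules>

    The module then also needs to be referenced from the relevant <Configuration> element:

        <Modules>
          <Module Name="Folder2Docs" />
        </Modules>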

    Read the article
