Search Results

Search found 25514 results on 1021 pages for 'site blocking'.

Page 15/1021 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Website migration from WordPress to a static site and doing 301 redirects without access to existing site?

    - by user3114468
    I'm currently working on a project hosted on WordPress that is being migrated to a static site. However, I presently do not have access to the existing site, as it's managed by another developer. The concern is not the lack of access to content: the site owner has generated very little content (the reason for the migration), and we were able to copy it manually. Rather, the concern is doing the 301 redirects. The site will not change domains, but URLs will, e.g. from example.com/?page_id=3 to example.com/services. To add, the site is migrating to a new server under the same domain name. I thought maybe this could be done by editing permalinks prior to migration, and WordPress would update automatically if configured to write to the server. But if it is not configured that way (as is not always the case), I won't have the .htaccess to fix things if there is suddenly a bunch of 404 errors for every page. I could really use some help on the best procedure to follow in this case. This is the first migration project I've worked on.
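
    One approach, sketched on the assumption that the new static site also runs Apache and that you control its .htaccess: the old WordPress URLs live in the query string, which plain Redirect directives never see, so each old page needs a RewriteCond against QUERY_STRING, with a trailing "?" on the target to strip the query from the redirect:

        RewriteEngine On
        # example.com/?page_id=3 -> example.com/services
        # (page_id=3 is taken from the question; add one Cond/Rule pair per old page ID)
        RewriteCond %{QUERY_STRING} ^page_id=3$
        RewriteRule ^$ /services? [R=301,L]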

    Read the article

  • Multiple cross-site scripting (XSS) vulnerabilities in OpenStack Horizon

    - by Ritwik Ghoshal
    CVE Description                                            CVSSv2 Base Score   Component           Product and Resolution
    CVE-2014-3473 cross-site scripting (XSS) vulnerability     4.3                 OpenStack Horizon   Solaris 11.2 (fixed in 11.2.1.5.0)
    CVE-2014-3474 cross-site scripting (XSS) vulnerability     4.3                 OpenStack Horizon   Solaris 11.2 (fixed in 11.2.1.5.0)
    CVE-2014-3475 cross-site scripting (XSS) vulnerability     4.3                 OpenStack Horizon   Solaris 11.2 (fixed in 11.2.1.5.0)

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Learn How to Start a Membership Site

    Many people are curious to find out how to start a membership site. A membership site is a site where visitors must pay to view the content, and unauthorized visitors are prevented from browsing it.

    Read the article

  • Create a new site programmatically and select parent site? (SharePoint)

    - by peter
    Hi, I am using the following code to create a new site:

        SPWeb newWeb = SPContext.GetContext(HttpContext.Current).Web.Webs.Add(
            newSiteUrl, newSiteName, null, (uint)1033, siteTemplate, true, false);
        newWeb.Update();

    newSiteUrl and newSiteName are values from two textboxes, and on whichever site I use this code (in a web part), the new site becomes a subsite of that site. I would now like to be able to select a parent site, so that the new site can sit anywhere in the site collection, not just as a subsite of the site hosting the web part. I created the following function to get all the sites in the site collection and populate a drop-down with the title and URL of every site:

        private void getSites()
        {
            SPSite oSiteCollection = SPContext.Current.Site; // owned by SPContext - do not Dispose
            SPWebCollection collWebsite = oSiteCollection.AllWebs;
            for (int i = 0; i < collWebsite.Count; i++)
            {
                ddlParentSite.Items.Add(new ListItem(collWebsite[i].Title, collWebsite[i].Url));
            }
        }

    If the user selects a site in the drop-down, is it possible to use that URL as newSiteUrl to decide where the new site should go? I can't get it to work; the new site still becomes a subsite of the current one. I guess it has to do with HttpContext.Current? Any ideas on how I should do it instead? It's the first time I've written custom web parts, and the SharePoint object model is a bit overwhelming at the moment. Thanks in advance.
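
    One possible fix, sketched rather than tested: open the selected parent web explicitly instead of taking it from SPContext, then add the new site beneath it. This assumes ddlParentSite.SelectedValue holds the absolute URL stored by getSites().

        // Create the subsite under whichever parent the user picked in the drop-down.
        SPSite site = SPContext.Current.Site; // owned by SPContext - do not Dispose
        string parentPath = new Uri(ddlParentSite.SelectedValue).AbsolutePath;
        using (SPWeb parentWeb = site.OpenWeb(parentPath)) // expects a server-relative URL
        using (SPWeb newWeb = parentWeb.Webs.Add(
                newSiteUrl, newSiteName, null, (uint)1033, siteTemplate, true, false))
        {
            newWeb.Update();
        }

    Because the parent is resolved from the drop-down value rather than from the web part's current context, the new site should land wherever the user chose in the site collection.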

    Read the article

  • hosting company blocking google bots and crawlers [closed]

    - by Jayapal Chandran
    Hi, I have had a site for the past three years, and it has been very active for the past two. The site was doing well, with many pages appearing on the first page of Google search results, until the hosting company started blocking Google's bots and crawlers. After the blocking began, my links no longer appeared on the first page; they showed up five pages in, or not at all. How can a hosting company be so careless as to block crawlers without mentioning it to its users? They protect themselves while putting their customers' websites at stake. I display Google ads, and this month I earned only half the usual amount in the first 10 days. I have made requests to other hosting companies, like Bluehost and Monster Host, to transfer my domain on the condition that they will not block Google's bots, since that indirectly stops the business. So any kind of help would be appreciated: how can I claim what I lost from the hosting company, and which hosting companies actually consider their users (by announcing events like changing an IP or blocking Googlebot)? I worked really hard to bring up my site, and these people crashed it in a few days. :-(

    Read the article

  • SQL Server 2005 Blocking Problem (ASYNC_NETWORK_IO)

    - by ivankolo
    I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1 TB of data, 8 IIS servers). We have recently started to see significant blocking on the database, after months of running this application in production with no problems. This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and abort. The problem disappears and then gradually re-emerges.

    The SPID responsible for the blocking always has the following features:
    - WAIT TYPE = ASYNC_NETWORK_IO.
    - The SQL being run is "(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid < claimid". This is relatively innocuous SQL that should return only one or two records, not a large data set. No other SQL statements have been implicated in the blocking, only this one. It is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans.
    - The SPID holds an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked.
    - HOST ID varies: different web servers are responsible for the blocking sessions. E.g., sometimes we trace back to web server 1, sometimes web server 2.

    When we trace back to the web server implicated in the blocking, there is always some sort of application-related error in its Event Log, linked to the Host ID and Host Process ID from the SQL session. The error messages vary, usually some sort of SystemOutOfMemory. (These error messages seem similar to ones we have seen in the past without such dramatic consequences. We think this was happening before but didn't lead to blocking. Why now?) There are no known problems with the network adapters on either the web servers or the SQL server, and in any event the record set returned by the offending query would be small.

    Things ruled out: indexes are regularly defragmented; statistics are regularly updated; we increased the sample size of the statistics on claim.primaryclaimid, forced recompilation of the cached execution plan, and created a compound index on (primaryclaimid, claimid); no networking problems; no known issues on the web servers; no changes to application software on the web servers.

    We hypothesize that the chain of events goes something like this: the web server process submits the SQL above; SQL Server executes it, acquiring a lock on the claim table; the web server process gets an error and dies; the SQL Server session hangs waiting for the web server process to read the data set; sessions that need X locks on parts of the claim table (anyone processing claims) are blocked by the table lock and remain blocked until they all hit the application timeout.

    Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome. Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only? Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?
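
    On the row/page-level locking question, one server-side avenue that does not require changing the application's SQL, sketched here with a hypothetical index name, so treat it as an assumption to verify on a test system first:

        -- Disallow page locks on the index serving the query, so readers take row
        -- locks instead (escalation to a table lock can still occur under pressure).
        ALTER INDEX IX_claim_primaryclaimid ON dbo.claim
        SET (ALLOW_PAGE_LOCKS = OFF);

        -- The per-statement alternative is a table hint, but it means editing the
        -- query text, which a closed third-party application rules out:
        -- SELECT ... FROM claim WITH (ROWLOCK) WHERE primaryclaimid = @claimid

    Note that an object-level S lock usually means lock escalation has already occurred; trace flag 1211 disables escalation instance-wide, but that is a heavy hammer and worth discussing with the vendor first.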

    Read the article

  • What should we tell our unsupported IE6 users?

    - by Dan Fabulich
    In the upcoming version of our web app, we've broken IE6, and we don't intend to fix it. We've had a clear warning posted for IE6 users for some months; we've decided it's time not to support it. My question is: how should we communicate this to our users? Some people here feel that we should block IE6 users who would try to access the web app, because it's not going to work for them. Others feel that we should just leave up a warning, saying "This doesn't work in IE6," but not block them; instead, if they click to dismiss the warning, just let them in to the broken site to see for themselves that it doesn't work. Who is right? Is there a better way?
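
    If you go the warn-but-allow route, one era-appropriate sketch (markup is illustrative, not from the original app) is an IE6-only banner driven by conditional comments, which only IE 6 and older will render:

        <!--[if lte IE 6]>
        <div class="ie6-warning">
          This application no longer works in Internet Explorer 6.
          <a href="/app?ignore-ie6=1">Continue anyway</a> or upgrade your browser.
        </div>
        <![endif]-->

    This lets IE6 users make an informed choice without blocking them outright, while every other browser never sees the banner.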

    Read the article

  • PHP XPath - how to get value from result

    - by LeeTee
    I have the following XML file from eBay:

        <GeteBayDetailsResponse xmlns="urn:ebay:apis:eBLBaseComponents">
        <Timestamp>2012-07-04T12:02:14.541Z</Timestamp><Ack>Success</Ack><Version>779</Version><Build>E779_INTL_BUNDLED_14986004_R1</Build>
        <SiteDetails><Site>US</Site><SiteID>0</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Canada</Site><SiteID>2</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>UK</Site><SiteID>3</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Germany</Site><SiteID>77</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Australia</Site><SiteID>15</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>France</Site><SiteID>71</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>eBayMotors</Site><SiteID>100</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Italy</Site><SiteID>101</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Netherlands</Site><SiteID>146</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Spain</Site><SiteID>186</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>India</Site><SiteID>203</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>HongKong</Site><SiteID>201</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Singapore</Site><SiteID>216</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Malaysia</Site><SiteID>207</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Philippines</Site><SiteID>211</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>CanadaFrench</Site><SiteID>210</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Poland</Site><SiteID>212</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Belgium_Dutch</Site><SiteID>123</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Belgium_French</Site><SiteID>23</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Austria</Site><SiteID>16</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Switzerland</Site><SiteID>193</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <SiteDetails><Site>Ireland</Site><SiteID>205</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails>
        <UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime>
        </GeteBayDetailsResponse>

    I need to get the value of the SiteID node where the Site node matches whatever variable I pass. The code I have is:

        $xml_file = 'SiteDetails.xml';
        $xmlDoc = new DomDocument();
        $xmlDoc->load($xml_file);
        $xpath = new DOMXpath($xmlDoc);
        $siteIDList = $xpath->query("/GeteBayDetailsResponse/SiteDetails[Site=$site]/SiteID");
        var_dump($siteIDList);
        echo $siteIDList->SiteID;

    I get the following result: object(DOMNodeList)#11 (0) { } Can anyone help? I want to get a value of 3.
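
    A sketch of a fix: the document declares a default namespace (urn:ebay:apis:eBLBaseComponents), so an unprefixed XPath matches nothing. Register the namespace, prefix every step, quote the $site value inside the predicate, and read the result with DOMNodeList::item() rather than a property:

        $xml_file = 'SiteDetails.xml';
        $site = 'UK'; // whatever value you pass in
        $xmlDoc = new DOMDocument();
        $xmlDoc->load($xml_file);
        $xpath = new DOMXpath($xmlDoc);
        $xpath->registerNamespace('e', 'urn:ebay:apis:eBLBaseComponents');
        $siteIDList = $xpath->query("/e:GeteBayDetailsResponse/e:SiteDetails[e:Site='$site']/e:SiteID");
        if ($siteIDList->length > 0) {
            echo $siteIDList->item(0)->nodeValue; // prints 3 for UK
        }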

    Read the article

  • 10 steps to enable ‘Anonymous Access’ for your SharePoint 2010 site

    - by KunaalKapoor
    What’s Anonymous Access? Anonymous access to your SharePoint site enables all visitors to view the site without having to log in. In this blog I’d like to go through an easy, step-wise procedure to enable and set up anonymous access. Before you actually enable anonymous access on the site, you’ll have to change some settings at the web application level, so let’s start with that.

    Prerequisite(s):
    1. A hosted SharePoint 2010 farm/server.
    2. An existing SharePoint site.
    I mention these pre-reqs because the steps below wouldn’t be valid for a different type of site.

    Step 1: In Central Administration, under Application Management, click Manage web applications.
    Step 2: Select the web application for the site on which you want to enable anonymous access and click the Authentication Providers icon.
    Step 3: In the modal window, click the Default zone.
    Step 4: Under the Edit Authentication section, check Enable anonymous access and click Save. This makes the Anonymous Access authentication mechanism available at the web application level in IIS; the web application will now allow anonymous access to be set.
    Step 5: Going back to web application management, click the Anonymous Policy icon.
    Step 6: Under Anonymous Access Restrictions, select your zone, set the Permissions to None – No policy, and click Save.
    Step 7: Now navigate to the top-level site collection for the web application and click Site Actions > Site Settings.
    Step 8: Under Users and Permissions, click Site Permissions.
    Step 9: On the Edit tab, click Anonymous Access.
    Step 10: Choose whether you want anonymous users to have access to the entire web site or to lists and libraries only, and then click OK. You should now see the corresponding entry in your site permissions.

    Also keep in mind: if you are trying to access the site from a browser within the domain, you may need to change some browser settings to see the effect. This is because Internet Explorer is normally set to log in automatically in the intranet zone only (unless you have explicitly changed the zones or added the site to trusted sites). If this is from a box within your domain, try accessing the site after temporarily changing the Internet Explorer setting to Anonymous Logon for the zone that contains the site (e.g. "Intranet"). You will find these settings under Tools > Internet Options > Security tab.

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it large tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many rules to an iptables chain. ipt_recent would be awesome for doing this, and it provides a lot of flexibility for merely slowing down access severely, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than ships with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather avoid from a patching, security and consistency perspective. Other than those two, nfblock looks like a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
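
    For reference, a sketch of what the ipset approach looks like once a suitable build is in place (syntax is for newer ipset releases; the set name and address are made up):

        # one hash-based set holds all scraper IPs; a single iptables rule consults it
        ipset create scrapers hash:ip maxelem 65536
        ipset add scrapers 203.0.113.7
        iptables -I INPUT -m set --match-set scrapers src -j DROP

    Membership tests are hash lookups, so the cost stays flat no matter how many thousands of entries the set holds - exactly the property a long chain of individual iptables rules lacks.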

    Read the article

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs Squid for creating block lists of websites - e.g. blocking social networking on the LAN - and also uses iptables. I am able to do a lot of things with Squid and iptables, but a few things seem difficult to achieve:

    1) If I block Facebook through its HTTP URL, people can still access https://www.facebook.com because Squid doesn't see HTTPS traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then HTTPS is also blocked. So I can do one thing: use iptables to drop all outgoing port 443 traffic, so that people are forced to set a proxy in their browser in order to browse any HTTPS site. Is there a better solution for this?

    2) As the number of blocked URLs increases in Squid, I am planning to integrate SquidGuard. However, the good SquidGuard lists are not free for commercial use. Does anyone know of a good SquidGuard list that is free?

    3) Blocking Yahoo Messenger, Google Talk, etc. These instant messengers work over so many ports that you need to drop lots of outgoing ports in iptables, and new ports keep getting added, so you have to keep adding them. And even if your list of ports is current, people can still use the web version of Google Talk and the like.

    4) Blocking P2P. I haven't been able to figure out how to do this at all.
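
    A sketch of the iptables side of point 1, assuming eth0 is the LAN-facing interface and Squid on this gateway is configured for interception (http_port ... transparent):

        # refuse direct HTTPS egress so browsers must be configured to use the proxy
        iptables -A FORWARD -i eth0 -p tcp --dport 443 -j DROP
        # transparently push plain HTTP through Squid on this box
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

    HTTPS cannot be transparently intercepted this way without certificate tricks, which is why the drop-443-and-require-a-proxy compromise is about the best a Squid/iptables gateway can do here.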

    Read the article

  • How to determine the correct (case sensitive) URL for a SharePoint site

    - by Goyuix
    SharePoint is generally very tolerant of accepting a URL in a case-insensitive fashion; however, there are a few cases where it completely breaks down. For example, when creating a site column it somehow stores and uses the URL as it was at creation time, and when trying to edit the field definition through the Site Column Gallery (the fldedit.aspx page in LAYOUTS) you end up throwing the error below.

        Value does not fall within the expected range.
          at Microsoft.SharePoint.SPFieldCollection.GetFieldByInternalName(String strName, Boolean bThrowException)
          at Microsoft.SharePoint.SPFieldCollection.GetFieldByInternalName(String strName)
          at Microsoft.SharePoint.ApplicationPages.BasicFieldEditPage.OnLoad(EventArgs e)
          at System.Web.UI.Control.LoadRecursive()
          at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    How can I reliably get the correct URL for a site/web? The SPSite.Url and SPWeb.Url properties seem to return whatever case they are instantiated with. In other words, the site collection is provisioned using the following URL: http://server/Path/Site. If I create a new site column using the SharePoint object model and happen to use http://server/path/site when instantiating the SPSite and SPWeb objects, the site column will be made available, but when trying to access it through the gallery the error above is generated. If I correct the URL in the address bar, I can still view/modify the definition of the SPField in question, but the default URL that is generated is bogus. Clear as mud?

    Example code (this is a bad example because of the case-sensitivity issue):

        // note: the site URL should be partially capitalized: http://server/Path/Site
        using (SPSite site = new SPSite("http://server/path/site"))
        {
            using (SPWeb web = site.OpenWeb())
            {
                web.Fields.AddFieldAsXml("..."); // correct field XML really goes here
            }
        }
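
    One workaround that may be worth testing - an assumption on my part, not verified SharePoint behavior - is to re-open the site collection by its GUID rather than by URL, on the theory that URLs resolved from the stored ID come back in the farm's canonical casing:

        // Hypothetical sketch: open by GUID, then read the URL back.
        using (SPSite probe = new SPSite("http://server/path/site")) // any casing
        {
            Guid id = probe.ID;
            using (SPSite site = new SPSite(id))
            {
                string canonicalUrl = site.Url; // hoped-for: http://server/Path/Site
            }
        }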

    Read the article

  • How to create managed properties at site collection level in SharePoint2013

    - by ybbest
    In SharePoint 2013, you can create managed properties at the site collection level. Today, I'd like to show you how to do so through PowerShell.

    1. Define your managed properties, crawled properties and managed property types in an external CSV file. The PowerShell script will read this file and create the managed properties and the mappings.

    2. As you can see, I also defined the variant type; this is because you need the variant type to create the crawled property. For the crawled properties to exist, you need to run a full crawl and also make sure you have data populated in your custom columns. However, if you do not want to run a full crawl to create those crawled properties, you can create them yourself using PowerShell; you just need to make sure the crawled properties you create have the same names as those a full crawl would create.

    Managed property types:
    Text = 1
    Integer = 2
    Decimal = 3
    DateTime = 4
    YesNo = 5
    Binary = 6

    Variant types:
    Text = 31
    Integer = 20
    Decimal = 5
    DateTime = 64
    YesNo = 11

    3. You can use the following script to create your managed properties at the site collection level; the difference when creating managed properties at this level is that you pass in the site collection id.

        param(
            [string] $siteUrl = "http://SP2013/",
            [string] $searchAppName = "Search Service Application",
            $ManagedPropertiesList = (Import-Csv ".\ManagedProperties.csv")
        )
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        $searchapp = $null

        function AppendLog {
            param ([string] $msg, [string] $msgColor)
            $currentDateTime = Get-Date
            $msg = $msg + " --- " + $currentDateTime
            if (!($logOnly -eq $True)) {
                # write to console
                Write-Host -f $msgColor $msg
            }
            # write to log file
            Add-Content $logFilePath $msg
        }

        $scriptPath = Split-Path $myInvocation.MyCommand.Path
        $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt"

        function CreateRefiner {
            param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType, [System.GUID] $siteID)
            $cat = Get-SPEnterpriseSearchMetadataCategory -Identity SharePoint -SearchApplication $searchapp
            $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID
            if ($crawledproperty -eq $null) {
                Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow
                $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false
            }
            $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue
            if ($managedproperty -eq $null) {
                Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow
                $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{ $_.Name -eq $managedProperty.Name }
            if ($mappedProperty -eq $null) {
                Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow
                New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication $searchapp -SiteCollection $siteID
            }
            $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{ $_.Name -eq $managedProperty.Name }
            #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty
        }

        $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName
        $site = Get-SPSite $siteUrl
        $siteId = $site.id

        Write-Host "Start creating Managed properties"
        $i = 1
        foreach ($property in $ManagedPropertiesList) {
            $propertyName = $property.managedPropertyName
            $crawledName = $property.crawledName
            $managedPropertyType = $property.managedPropertyType
            $variantType = $property.variantType
            Write-Host $managedPropertyType
            Write-Host "Processing managed property $propertyName $($i)..."
            $i++
            CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId
            Write-Host "Managed property created " $propertyName
        }

    Key concepts:
    Crawled properties: crawled properties are discovered by the search index service component when crawling content.
    Managed properties: properties that are part of the Search user experience - available for search results, advanced search, and so on - are managed properties.
    Mapping crawled properties to managed properties: to make a crawled property available for the Search experience - available for Search queries and displayed in Advanced Search and search results - you must map it to a managed property.

    References:
    Administer search in SharePoint 2013 Preview
    Managing Metadata
    New-SPEnterpriseSearchMetadataCrawledProperty
    New-SPEnterpriseSearchMetadataManagedProperty
    Remove-SPEnterpriseSearchMetadataManagedProperty
    Overview of crawled and managed properties in SharePoint 2013 Preview
    SharePoint 2013 – Search Service Application
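
    For reference, a hypothetical ManagedProperties.csv matching the four columns the script reads (the property and crawled-property names are placeholders; the type codes come from the tables above):

        managedPropertyName,crawledName,managedPropertyType,variantType
        YBBESTDepartment,ows_YBBESTDepartment,1,31
        YBBESTStartDate,ows_YBBESTStartDate,4,64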

    Read the article

  • SharePoint 2010 Hosting :: Error – HTTP Error 401.1 when Accessing Your SharePoint 2010 Site

    - by mbridge
    When attempting to view a MOSS (SharePoint) 2007 or SharePoint 2010 site locally from a Web Front End (WFE), you get an error stating: "HTTP Error 401.1 – Unauthorized: Access is denied due to invalid credentials." I have noticed that this happens on Windows Server 2003/2008 SP1/SP2/R2 when using host headers and alternate access mappings on a web application. If you can access the site from remote machines but cannot access it from the server itself, then this might be your issue. (If you cannot access the web site either locally or remotely, there is instead a security problem with the site, possibly Kerberos-related.) I implemented fix #2 from the Microsoft KB article below on all servers in the MOSS 2007 farm (WFEs and the indexing/search server), and for all my newer farm installs, SharePoint 2007 (MOSS) and SharePoint 2010 alike, I use method 2 on all SharePoint and SQL servers in the farm. If you use method 1 instead, add all host headers and alternate access mappings for all web applications to the BackConnectionHostNames value, and you will then be able to access the sites locally from the WFEs.

    Microsoft KB link: http://support.microsoft.com/kb/896861

    Method 1: Specify host names. Please follow these steps:
    1. Click Start, click Run, type regedit, and then click OK.
    2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
    3. Right-click MSV1_0, point to New, and then click Multi-String Value.
    4. Type BackConnectionHostNames, and then press ENTER.
    5. Right-click BackConnectionHostNames, and then click Modify.
    6. In the Value data box, type the host name or host names for the sites that are on the local computer, and then click OK.
    7. Quit Registry Editor, and then restart the IISAdmin service.

    Method 2: Disable the loopback check. Please follow these steps:
    1. Click Start, click Run, type regedit, and then click OK.
    2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    3. Right-click Lsa, point to New, and then click DWORD Value.
    4. Type DisableLoopbackCheck, and then press ENTER.
    5. Right-click DisableLoopbackCheck, and then click Modify.
    6. In the Value data box, type 1, and then click OK.
    7. Quit Registry Editor, and then restart your computer.

    Give it a try, and good luck.
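
    Both fixes can also be scripted - a sketch assuming an elevated command prompt on each server, with placeholder host names:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 /v BackConnectionHostNames /t REG_MULTI_SZ /d "portal.contoso.com\0intranet.contoso.com" /f

    Method 2 still needs a reboot and method 1 still needs an IISAdmin restart, exactly as in the manual steps.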

    Read the article

  • 10035 error on a blocking socket

    - by Andrew
    Does anyone have any idea what could cause a 10035 error (EWOULDBLOCK) when reading on a blocking socket with a timeout? This is under Windows XP using the .NET Framework 3.5 socket library. I've never managed to reproduce it myself, but one of my colleagues gets it all the time. He's sending reasonably large amounts of data to a much slower device and then waiting for a response, which often produces the 10035 error. I'm wondering if there could be an issue with TCP buffers filling up, but in that case I would expect the read to wait or time out. The socket is definitely blocking, not non-blocking.
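
    For what it's worth, a defensive sketch (an assumed workaround, not a root-cause fix): treat 10035 on the blocking read like a timeout and retry, rather than letting it abort the session.

        byte[] buffer = new byte[8192];
        try
        {
            socket.ReceiveTimeout = 5000;      // milliseconds
            int read = socket.Receive(buffer); // blocking read
        }
        catch (SocketException ex)
        {
            if (ex.ErrorCode == 10035) // WSAEWOULDBLOCK
            {
                // Nothing arrived in time; back off briefly and retry the read.
            }
            else
            {
                throw;
            }
        }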

    Read the article

  • A non-blocking server with java.io

    - by Jon
    Everybody knows that Java IO is blocking and Java NIO is non-blocking. With IO you have to use the thread-per-client pattern; with NIO you can use one thread for all clients. Now my question follows: is it possible to make a non-blocking design using only the Java IO API (not NIO)? I was thinking about a pattern like this (obviously very simplified):

        List<Socket> li;
        for (Socket s : li) {
            InputStream in = s.getInputStream();
            int n = in.available(); // bytes readable without blocking
            if (n > 0) {
                byte[] data = new byte[n];
                in.read(data);
                // processData(data); (decoding packets, encoding outgoing packets)
            }
        }

    Also note that the client will always be ready for reading data. What are your opinions on this? Would this be suitable for a server that should hold at least a few hundred clients without major performance issues?

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which were serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain, with a corporate CMS-based site leading off the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - so that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development web server in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP):

    .htaccess in root of domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC]
        RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in staging subdomain on domain1:

        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1
        RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC]
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it a more granular method of control, because I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!). If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for maintenance-mode-style operation easily findable via Google).

    You can also extend the IP address detection. Firstly, by using wildcards - ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify specific ranges of IPs in a subnet or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$, and you can of course use square brackets to substitute octets or single digits again.

    NB: if you're using individual RewriteCond lines to specify multiple IPs/ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret them as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...".

    Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection; other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall.

    Other referenced URLs:
    internetofficer.com/seo-tool/regex-tester/
    fantomaster.com/faarticles/rewritingurls.txt
    addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/

    Read the article

  • Migrate ASP.Net web site from IIS6 to IIS7

    - by David.Chu.ca
    I have to migrate an ASP.NET web site from IIS6 to IIS7. I tried to copy all the files for the web site from IIS6 (c:\inetpub\wwwroot\MySite) to another box running Windows Server 2008 R2, where IIS7 is the default web server. However, a simple copy doesn't seem to work. Should I rebuild the web site for IIS7, or should I make changes on the new box with IIS7, such as to web.config? Thanks for the comments. On further investigation I found that the following HTTP handler seems to have caused the exception:

        <!--httpHandlers>
        <add path="Reserved.ReportViewerWebControl.axd" verb="*" type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" validate="false"/>
        </httpHandlers-->

    After I commented out the above handler in web.config, the web page works fine. This is just my initial test; I am not sure if I should rebuild the web site from source code or not. If so, do I need to target IIS7 specifically?
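
    For IIS7's default integrated pipeline, handlers generally need to be registered under system.webServer rather than system.web. A sketch of how the commented-out entry might be ported (the name attribute is required there; the value chosen here is arbitrary):

        <system.webServer>
          <handlers>
            <add name="ReportViewerWebControlHandler"
                 path="Reserved.ReportViewerWebControl.axd" verb="*"
                 type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
          </handlers>
        </system.webServer>

    Alternatively, switching the application pool to Classic mode keeps the old system.web handler registrations working much as they did on IIS6.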

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >