Search Results

Search found 13911 results on 557 pages for 'sharepoint group'.


  • Integrating Search Server 2008 Express with WSS 3.0

    - by Jason Kemp
    I'm setting up the environment for an intranet using WSS (Windows SharePoint Services) 3.0. The catch is getting the environment configured to work with MS Search Server 2008 Express. Here's the environment I'd like to set up:

    A: Web server - Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1
    B: Search server - Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1; Search Server 2008 Express
    C: Database server - Win Server 2003 SP2; SQL Server 2000 SP3 (Admin DB, Content DB, Config DB, Search DB)

    The question is whether three servers can be used in the above configuration, or whether the search server (B) has to be combined with the web server (A) because we're using the free Express edition of Search Server. The documentation from MS doesn't make it clear either way. I could attack this problem with trial and error, but would rather not. The bigger question is: what is the best practice for a WSS / Search Server installation?

    Read the article

  • Accessing previous activity instances in a sequence activity

    - by Dan Revell
    This has a rather SharePoint spin to it, but the problem is straight workflow. I've got a parallel ReplicatorActivity which contains a SequenceActivity. The sequence activity contains a CreateTask activity, a CodeActivity, an OnTaskChanged activity and finally a CompleteTask activity. The idea is to create a task for each username passed into the ReplicatorActivity.InitialChildData property. Typically in workflow I bind a field to CreateTask.TaskId and CreateTask.TaskProperties, and inside CreateTask.MethodInvoking I set these through the locally bound fields. This works and my tasks all get created properly. However, in the CodeActivity that follows I want to access the TaskProperties. The problem I am encountering is that this field holds the values of the final task to be created, because CreateTask runs for all the replications before the CodeActivity gets to run. From the CodeActivity, here are two ways I've tried to access the CreateTask activity instance from the same context or instance (or whatever the terminology is) of the replicated sequence:

        CreateTask task = ((CreateTask)sender.Parent.GetActivityByName("createSoftwareRequestTask", true));
        CreateTask createTask = (CreateTask)sender.Parent.Activities[0];

    Unfortunately, both refer back to the last task to be created and not the task from the context the CodeActivity is executing within. Two reasons why this might be that I can think of:

    1. I'm not accessing the correct instance with my code.
    2. I am accessing the correct instance, but as the properties I require were bound to and set through local fields, their previous data was overwritten.

    I'm hitting a brick wall with my understanding of workflow here and would very much appreciate some assistance.
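
    One common workaround for the overwritten-binding problem (a hedged sketch, not from the original question) is to stop reading workflow-level bound fields and instead capture per-iteration state in the ReplicatorActivity's ChildInitialized event, where e.Activity is the activity tree of that specific iteration. The activity name comes from the question; the event wiring and the AssignedTo usage are assumptions:

        // using System;
        // using System.Workflow.Activities;
        // using Microsoft.SharePoint.Workflow;
        // using Microsoft.SharePoint.WorkflowActions;

        private void replicator_ChildInitialized(object sender, ReplicatorChildEventArgs e)
        {
            // One element of ReplicatorActivity.InitialChildData (a username here).
            string userName = (string)e.InstanceData;

            // e.Activity is this iteration's instance tree, so the CreateTask
            // resolved below is the per-iteration one, not the workflow template.
            CreateTask createTask =
                (CreateTask)e.Activity.GetActivityByName("createSoftwareRequestTask");
            createTask.TaskId = Guid.NewGuid();

            SPWorkflowTaskProperties props = new SPWorkflowTaskProperties();
            props.AssignedTo = userName;
            createTask.TaskProperties = props;
        }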

    Read the article

  • HtmlAgilityPack - VS 2010 - C# ASP - File not found

    - by Janosch Geigowskoskilu
    First, I've already searched the web and Stack Overflow for hours, and I did find a lot about troubleshooting HtmlAgilityPack and tried most of it, but nothing worked. The situation: I'm developing a C# ASP.NET web part in SharePoint Foundation. Everything works fine; now I want to parse an HTML page to get all image paths and save the images to disk/temp. To do that I downloaded HtmlAgilityPack (current version) and added a reference to the project; everything looks OK and IntelliSense works fine. The exception: when the section where HtmlAgilityPack should be used runs, my browser shows me a FileNotFoundException - the file or assembly could not be found. What I tried: after my first searches I tried to include v1.4.0 of HtmlAgilityPack, because I read that the current version is in some cases not really stable. That works fine too, until the point where I want to use HtmlAgilityPack: the same exception. I also tried moving HtmlAgilityPack directly to the solution directory; nothing changed. I tried referencing HtmlAgilityPack via using, and I tried fully qualified calls, e.g. HtmlAgilityPack.HtmlDocument. Conclusion: when I compile, no error occurs and the reference is set correctly. When I trace HtmlAgilityPack.dll with ProcMon the path is shown correctly, but sometimes the result is 'File Locked with only Readers' - I don't know enough about ProcMon to know what this means or whether it is critical. It can't be file permissions, because when I check the DLL the permissions are all in order.
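
    The usual root cause of a FileNotFoundException like this under SharePoint is that the DLL resolves at build time on the dev box but is never deployed to the GAC (or the web application's bin folder) on the server. A minimal diagnostic sketch, assuming enough trust to call Assembly.Load, and with fusion logging enabled if you want the FusionLog text:

        using System;
        using System.IO;
        using System.Reflection;

        class AssemblyProbe
        {
            static void Main()
            {
                try
                {
                    // Resolves through the same binding rules the web part uses.
                    Assembly hap = Assembly.Load("HtmlAgilityPack");
                    Console.WriteLine("Loaded from: " + hap.Location);
                }
                catch (FileNotFoundException ex)
                {
                    // FusionLog explains which probing paths were tried.
                    Console.WriteLine(ex.FusionLog ?? ex.Message);
                }
            }
        }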

    Read the article

  • Access is denied

    - by Lasse Gaardsholt
    Hi, I got this code for my SharePoint site, but I get an "Access is denied" error. Can anyone help me out here?

        <!-- Load and display list - iframe version -->
        <!-- Questions and comments: [email protected] -->
        <DIV id="ListPlaceholder"><IMG src="/_layouts/images/GEARS_AN.GIF"></DIV>
        <!-- Paste the URL of the source list below: -->
        <iframe id="SourceList" style="display:none;" src="xXxXxX" onload="DisplayThisList()"></iframe>
        <script type="text/javascript">
        function DisplayThisList()
        {
          var placeholder = document.getElementById("ListPlaceholder");
          var displaylist = null;
          var sourcelist = document.getElementById("SourceList");
          try
          {
            if (sourcelist.contentWindow) // Internet Explorer
            {
              displaylist = sourcelist.contentWindow.document.getElementById("WebPartWPQ1");
            }
          }
          catch (err)
          {
            alert(err.message);
          }
          displaylist.removeChild(displaylist.getElementsByTagName("table")[0]);
          placeholder.innerHTML = displaylist.innerHTML;
        }
        </script>

    Read the article

  • Difference between KeywordQuery and FullTextSqlQuery in the object model versus a web service Query

    - by Raghu
    Initially I believed these three to be doing more or less the same thing, with just the notation being different - until recently, when I noticed that there is a big difference between the results of KeywordQuery/FullTextSqlQuery and a web service Query. I used both the KeywordQuery and FullTextSqlQuery methods to search for the value of a custom column XYZ with value ASDSADA-21312ASD-ASDASD.

    FullTextSqlQuery: when I run the query as

        FullTextSqlQuery myQuery = new FullTextSqlQuery(site);
        {
            // Construct query text
            String queryText = "Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD') ";
            myQuery.QueryText = queryText;
            myQuery.ResultTypes = ResultType.RelevantResults;
        };
        // execute the query and load the results into a datatable
        ResultTableCollection queryResults = myQuery.Execute();
        ResultTable resultTable = queryResults[ResultType.RelevantResults];
        // Load table with results
        DataTable queryDataTable = new DataTable();
        queryDataTable.Load(resultTable, LoadOption.OverwriteChanges);

    I get the following result representing the document:

    * Title: TestPDF
    * path: http://SharepointServer/Shared Documents/Forms/DispForm.aspx?ID=94
    * author: null
    * isDocument: false

    Note the path and isDocument fields of the above result.

    Web service method: then I tried the web service Query method. I used the SharePoint Search Service Tool available at http://sharepointsearchserv.codeplex.com/ and ran the same query, i.e. Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD'). This time I got the following results:

    * Title: TestPDF
    * path: http://SharepointServer/Shared Documents/TestPDF.pdf
    * author: null
    * isDocument: true

    Again, note the path. While the search results from the second method are useful because they give me the file path exactly, I can't seem to understand why the first method is not giving me the same results. Why is there a discrepancy between the two?
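
    For comparison, a hedged sketch of the KeywordQuery variant over the same site (names follow the question; whether a custom column comes back at all depends on it being mapped to a managed property, which is also where the object model and the Query web service can diverge):

        using System.Data;
        using Microsoft.Office.Server.Search.Query;
        using Microsoft.SharePoint;

        // ...
        using (SPSite site = new SPSite("http://SharepointServer"))
        {
            KeywordQuery query = new KeywordQuery(site);
            query.QueryText = "ASDSADA-21312ASD-ASDASD";
            query.ResultTypes = ResultType.RelevantResults;
            query.SelectProperties.Add("Title");
            query.SelectProperties.Add("Path");
            query.SelectProperties.Add("IsDocument");

            ResultTableCollection results = query.Execute();
            DataTable table = new DataTable();
            table.Load(results[ResultType.RelevantResults], LoadOption.OverwriteChanges);
        }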

    Read the article

  • What's next for all of these Microsoft "overlapping" and "enhanced" products?

    - by pointlesspolitics
    Recently I attended a road show organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communications Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc. As Microsoft touts the enhancements in each product over the previous version, I felt that clients were not much interested in all these details. For example Office Communicator: surely they have improved the product a lot, and at first sight everyone said 'WOW, great product', but nobody wishes to pay money for all these extra features. Some argued they are bogged down by the increased number of menus; they don't need a softphone feature integrated with mobile calls. The same applies to all the other products, such as MS Office (what next, two ribbons?), the Windows OS and many more. Indeed there must be good features in all these products, but is it worth spending money and time to update the older systems? Sometimes these features will decrease productivity instead of increasing it. So do you think that whatever enhancement MS makes to its products is only for selling purposes, not for real use - and also to keep developers busy learning the new tools and features? I am sure some people here will argue that some people need this sort of feature, but I am not talking about NASA or MI5 - I am talking about usual businesses and Joe Public. Any ideas welcome.

    Read the article

  • Lookup site column not saving/storing metadata for Office 2007 documents?

    - by Greg Hurlman
    I'm having this issue in several server environments. We have a list at the site collection root. There is a site column created as a multi-value lookup on that list's Title field. This site column is used in document libraries in subsites as a required field. When we upload anything but an Office 2007 document, the user is presented with the document metadata fill-in screen (EditForm.aspx?Mode=Upload), the user fills in the appropriate data (including picking a value or values for this lookup), and clicks "Check In" - the document is checked in as expected, with the lookup field's value filled in. With an Office 2007 document, this fails: the user-selected values for the lookup field never make it to the server - no errors are thrown, but the field is not saved with the document. We have an event listener on these document libraries, and if we inspect the incoming SPListItem in the event listener method before a single line of our code has run, we see that the value for the lookup field is null. It smells like a SharePoint bug to me - but before I go calling Microsoft, has anyone seen this and worked around it? Edit: the only entry I see in the SP trace logs relating to the problem: CMS/Publishing/8ztg/Medium/Got List Item Version, but item was null

    Read the article

  • Setting ModerationInformation.Status from Approved back to Pending removes the creator's permissions

    - by Gavin Morgan
    Seeing if anyone else has had this problem and found a resolution. I have a Visual Studio sequential workflow on a list (not a library) which does NOT use tasks; the approval is done through the Approve/Reject out-of-the-box buttons on the list item. It is a two-stage approval: when the first stage is completed (via the Approve OOTB button), I reset ModerationInformation.Status from Approved back to Pending, then send an email to the second-stage approver. My problem is that when I set ModerationInformation.Status back to Pending from Approved, so that there is never an approved version, the creator loses permission to view the item, and SharePoint gives the "cannot find item" error for the person who created the item. The first- and second-level approvers, and anyone with approve rights, CAN still see the item. Some more background information: in the code I use to update the moderation information, I get the properties from the workflow event and get a hook into the list item:

        properties.Item.ModerationInformation.Status = SPModerationStatusType.Pending;
        properties.Item.Update();

    Can anyone help?
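
    A hedged sketch of one thing to check: when an item drops back to Pending, the list's draft item security decides who can still see it, and resetting the status with a full Update() also spawns an extra version. Something along these lines (the enum members are real SPModerationStatusType/DraftVisibilityType values; where this belongs in the workflow is an assumption):

        SPListItem item = properties.Item;
        item.ModerationInformation.Status = SPModerationStatusType.Pending;
        item.SystemUpdate(false);   // no new version, no re-fired events

        // Controls who can see pending drafts on this list:
        SPList list = item.ParentList;
        list.DraftVersionVisibility = DraftVisibilityType.Author;   // or Reader
        list.Update();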

    Read the article

  • Accessing a SharePoint site using the object model

    - by Prashanth
    Hi, I am trying to access a SharePoint site using the SP object model from a console application. I am trying to do something like this:

        SPSite site = new SPSite(sitePath);
        // Operations go here

    This works fine when the SharePoint site and the console app are on the same machine. However, when the console app and the site are on different machines, I get the error "The Web application at http://server/url could not be found. Verify that you have typed the URL correctly. If the URL should be serving existing content, the system administrator may need to add a new request URL mapping to the intended application." Here are the things that I have already tried:

    1. I have tried accessing the site via both the IP address and the machine name, assuming it could be a DNS resolution issue.
    2. Initially I impersonated using a farm admin account; still I could not access it. Then I added myself as a farm admin; still no joy.
    3. The site is accessible via IE, so I guess it is not a permission issue.
    4. I have tried almost all the solutions suggested by the various links Googling the error message turns up.

    I am trying this on SharePoint 2010; a similar issue occurs on 2007 as well. Sometimes it is frustrating to do SharePoint development like this: stumbling from one error to the next with no clue as to what could be wrong, and error messages that are not helpful in the least :(
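
    Worth noting: the SharePoint server object model only runs on a machine that is itself part of the farm, so SPSite cannot open a site on a remote farm, which is consistent with the error above. From a remote console app the supported route is the SharePoint web services. A minimal sketch, assuming a web reference named ListsService was added for http://server/_vti_bin/Lists.asmx:

        using System;
        using System.Net;
        using System.Xml;

        class RemoteListReader
        {
            static void Main()
            {
                ListsService.Lists lists = new ListsService.Lists();
                lists.Url = "http://server/_vti_bin/Lists.asmx";
                lists.Credentials = CredentialCache.DefaultCredentials;

                // GetListCollection is part of the out-of-the-box Lists.asmx contract.
                XmlNode result = lists.GetListCollection();
                Console.WriteLine(result.OuterXml);
            }
        }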

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008 and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm I have. My goal for the load test is to find out the resource utilization on the web + app + DB tiers of my farm for any given load scenario. An example of a load scenario is:

    Usage profile: Average collaboration (as defined by SCCP)
    User load: 500 (using a step load pattern: a step of 50 every 2 mins and a warm-up time of 2 mins for every step)
    Think time: 0
    Load duration: 8 hrs

    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources under the above load profile. I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So if I run the load test for more than 8 hrs, % processor may go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify, and how is it different from tests/sec? Another question: how can I find out why the "tests running" counter increases over time? Thanks for your time

    Read the article

  • Sort a WSS 3.0 view by more than two columns

    - by russellGove
    Does anyone know if it's possible to sort a SharePoint list view by more than two columns? It doesn't let me do it in the UI, and when I added this:

        <Query>
          <GroupBy Collapse="TRUE" GroupLimit="100">
            <FieldRef Name="Category" />
            <FieldRef Name="SubCategory" />
            <FieldRef Name="Topic" />
          </GroupBy>
          <OrderBy>
            <FieldRef Name="Category" />
            <FieldRef Name="SubCategory" />
            <FieldRef Name="Topic" />
          </OrderBy>
        </Query>

    I get an error on the page: <!-- #RENDER FAILED -->
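
    As a point of comparison (a hedged sketch, not from the original question): the two-column limit is a UI restriction, and an SPQuery issued from code accepts more FieldRefs in OrderBy, so one workaround is to bypass the view and sort in a query. The list name is a placeholder:

        using (SPSite site = new SPSite("http://server"))
        using (SPWeb web = site.OpenWeb())
        {
            SPQuery query = new SPQuery();
            query.Query =
                "<OrderBy>" +
                "  <FieldRef Name='Category' />" +
                "  <FieldRef Name='SubCategory' />" +
                "  <FieldRef Name='Topic' />" +
                "</OrderBy>";

            // Items come back sorted by all three fields.
            SPListItemCollection items = web.Lists["MyList"].GetItems(query);
        }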

    Read the article

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle's ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings for the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution that delivers shorter time to value and lower operational costs. Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant HW CAPEX savings, improve the response time of core business applications and mitigate operating risk. Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following Oracle Open World 2012 conference session:

    Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective
    Date: Monday, 1 Oct, 2012
    Time: 1:45 pm - 2:45 pm (PST)
    Venue: Moscone South (306)

    Read the article

  • Is there such a thing as a "refactoring/maintainability group" role in software companies?

    - by dukeofgaming
    So, I work in a company that does embedded software development. Other groups focus on the core development of different products' software, and my department (which is in another geographical location, at the factory) has to deal with software development as well, but across all products, so that we can fix things quickly when the lines go down due to software problems with a product. In other words, we are generalists while the other groups each specialize in a product. Thing is, it is kind of hard to get involved in core development when you are geographically distributed (well, I know it really isn't that hard, but there might be unintended cultural/political barriers when it comes to the discipline of collaborating remotely). So I figured that, since we are currently just putting out fires and somewhat idle/under-utilized (even though we are a new department - or maybe that is the reason), a good role for us could be detecting opportunities for refactoring and re-architecting code, and everything else that has to do with stewarding maintainability and modularity. The other groups aren't focused on this because they don't have the time and they have aggressive deadlines, which damages the quality of the code (the eternal story of software projects). The thing is that I want my group/department to be recognized by management and the other groups in this role officially, and I'm having trouble coming up with a good definition/identity for our group. So my question is: is this a role that already exists, or am I the first one to make something like this up?

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them again, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema of the element being reverse engineered as the group it will be placed in. This is convenient if you have already categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements whose relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top, to mark them all to be added to the project.
    As you can see, LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen, so we'll simply click Add to Project. This gives the following result (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base; however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely one where we work with a single database while we want to have multiple models, with a separate code base for each model. Situation two: grouping entities in separate models within the same project. To get rid of the entities and see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To make LLBLGen Pro see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and again select Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature (seeing the groups as separate projects) in action, we'll generate code from the project with the two groups we just created: select from the main menu Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected, the code generator has simply generated two code bases, one for Person and one for Production. The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class:

        //...
        using System.Xml.Serialization;
        using AdventureWorks.Person;
        using AdventureWorks.Person.HelperClasses;
        using AdventureWorks.Person.FactoryClasses;
        using AdventureWorks.Person.RelationClasses;
        using SD.LLBLGen.Pro.ORMSupportClasses;

        namespace AdventureWorks.Person.EntityClasses
        {
            //...
            /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
            [Serializable]
            public partial class AddressEntity : CommonEntityBase
            //...
    The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity in a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models, which makes maintenance easier than when you'd have a separate project for each group, each with its own relational model data.

    Conclusion

    LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.
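
    To make the namespace point concrete, here is a minimal sketch of a consuming project referencing both generated code bases side by side (AddressEntity comes from the generated snippet above; ProductEntity is an assumed name from the Production model):

        using AdventureWorks.Person.EntityClasses;
        using AdventureWorks.Production.EntityClasses;

        class TwoModels
        {
            static void Main()
            {
                // One entity from each generated code base; the group name baked
                // into the namespaces keeps them from colliding.
                AddressEntity address = new AddressEntity();
                ProductEntity product = new ProductEntity();
            }
        }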

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries. For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample shown below). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join. The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten.

    This ‘fully-optimized’ plan has an estimated cost of around 0.33 units. The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too? To answer that, let’s look at some other ways to formulate this query. This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones. The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above. It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan: the results are the same (23 rows) but this time the Filter is pushed below the join! The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join. In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above.

    The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive.

    Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exist fewer than ten correlated rows in the history table”:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows). The problem is that it lists products with no history rows, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set. In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL). To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:

        -- Scalar aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709;

        -- Vector aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709
        GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case. The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away.

    In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate. This is something I would like to see exposed in show plan, so I suggested it on Connect. Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.

    Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) BETWEEN 1 AND 9
        );

    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans. Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY th.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Not only do we now get correct results (23 rows), but the execution plan is the optimal one. I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :) The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier. What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too). Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless of which ProductID is logically being evaluated).

    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

    WHERE clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        ) < 10;

    APPLY:

        SELECT p.Name
        FROM Production.Product AS p
        CROSS APPLY
        (
            SELECT NULL
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        ) AS ca (dummy);

    FROM clause:

        SELECT q1.Name
        FROM
        (
            SELECT p.Name,
                   cnt =
                   (
                       SELECT COUNT_BIG(*)
                       FROM Production.TransactionHistory AS th
                       WHERE th.ProductID = p.ProductID
                       GROUP BY ()
                   )
            FROM Production.Product AS p
        ) AS q1
        WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :)

        SELECT q.Name
        FROM
        (
            SELECT p.Name,
                   cnt =
                   (
                       SELECT SUM(1)
                       FROM Production.TransactionHistory AS th
                       WHERE th.ProductID = p.ProductID
                   )
            FROM Production.Product AS p
        ) AS q
        WHERE q.cnt < 10;

    The semantics of SQL aggregates are rather odd in places. It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates. As we have seen, query plans do not show in which ‘mode’ an aggregate is running, and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White
    Twitter: @SQL_Kiwi
    email: [email protected]

    Read the article

  • Where is the peopleresults.aspx page in SharePoint search?

    - by Lalit
    Hi, I defined my contact list and made it searchable through the SSP search settings. Now I have added the people search web part, which is an out-of-the-box web part, OK? But when I search for any keyword, it redirects me to the peopleresults.aspx page with the message "404 NOT FOUND" (some time before, it was showing me a "page cannot be found" error instead). So what is the reason? How do I configure peopleresults.aspx to work with the People Search box? Please guide me, it's urgent.

    Read the article

  • SharePoint 2007 - can't find my modifications to web.config in SPWebApplication.WebConfigModifications

    - by user303672
    Hi, I can't seem to find the modifications I made to web.config in my FeatureReceiver's Activated event. I try to get the modifications from the SPWebApplication.WebConfigModifications collection in the deactivate event, but it is always empty... And the strangest thing is that my changes are still reverted after deactivating the feature... My question is: should I not be able to view all changes made to the web.config files when accessing the SPWebApplication.WebConfigModifications collection in the Deactivating event? How should I go about removing my changes explicitly?

        public class FeatureReciever : SPFeatureReceiver
        {
            private const string FEATURE_NAME = "HelloWorld";

            private class Modification
            {
                public string Name;
                public string XPath;
                public string Value;
                public SPWebConfigModification.SPWebConfigModificationType ModificationType;
                public bool createOnly;

                public Modification(string name, string xPath, string value,
                    SPWebConfigModification.SPWebConfigModificationType modificationType, bool createOnly)
                {
                    Name = name;
                    XPath = xPath;
                    Value = value;
                    ModificationType = modificationType;
                    this.createOnly = createOnly;
                }
            }

            private Modification[] modifications =
            {
                new Modification(
                    "connectionStrings",
                    "configuration",
                    "<connectionStrings/>",
                    SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
                    true),
                new Modification(
                    "add[@name='ConnectionString'][@connectionString='Data Source=serverName;Initial Catalog=DBName;User Id=UserId;Password=Pass']",
                    "configuration/connectionStrings",
                    "<add name='ConnectionString' connectionString='Data Source=serverName;Initial Catalog=DBName;User Id=UserId;Password=Pass'/>",
                    SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
                    false)
            };

            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                SPSite siteCollection = (properties.Feature.Parent as SPWeb).Site as SPSite;
                SPWebApplication webApplication = siteCollection.WebApplication;
                siteCollection.RootWeb.Title = "Set from activating code at " + DateTime.Now.ToString();
                foreach (Modification entry in modifications)
                {
                    SPWebConfigModification webConfigModification = CreateModification(entry);
                    webApplication.WebConfigModifications.Add(webConfigModification);
                }
                webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
                webApplication.WebService.Update();
            }

            public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
            {
                SPSite siteCollection = (properties.Feature.Parent as SPWeb).Site as SPSite;
                SPWebApplication webApplication = siteCollection.WebApplication;
                siteCollection.RootWeb.Title = "Set from deactivating code at " + DateTime.Now.ToString();
                IList<SPWebConfigModification> modifications = webApplication.WebConfigModifications;
                foreach (SPWebConfigModification modification in modifications)
                {
                    if (modification.Owner == FEATURE_NAME)
                    {
                        webApplication.WebConfigModifications.Remove(modification);
                    }
                }
                webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
                webApplication.WebService.Update();
            }

            public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }

            public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }

            private SPWebConfigModification CreateModification(Modification entry)
            {
                SPWebConfigModification spWebConfigModification = new SPWebConfigModification()
                {
                    Name = entry.Name,
                    Path = entry.XPath,
                    Owner = FEATURE_NAME,
                    Sequence = 0,
                    Type = entry.ModificationType,
                    Value = entry.Value
                };
                return spWebConfigModification;
            }
        }

    Thanks for your time. /Hans

    Read the article

  • How to deploy two SharePoint page layouts

    - by mickey
    I have a problem deploying two page layouts. I can deploy one page layout with no problem, but if I add another one, the first one gets the same layout as the newly added second page layout... Here is the XML I add to Elements.xml:

        <File Path="masterpage\CustomLayout.aspx" Url="CustomLayout.aspx" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="CustomLayout" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>
        <File Path="masterpage\HomePageLayout.aspx" Url="HomePageLayout.aspx" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="HomePageLayout" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/CustomPageLayout.png" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>
        <File Path="masterpage\masterpage.master" Url="masterpage.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="My Custom masterpage" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>
        <File Path="masterpage\masterpage2.master" Url="masterpage2.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE">
          <Property Name="Title" Value="My Custom masterpage 2" />
          <Property Name="ContentType" Value="$Resources:cmscore,contenttype_pagelayout_name;" />
          <Property Name="PublishingPreviewImage" Value="~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg, ~SiteCollection/_catalogs/masterpage/$Resources:core,Culture;/Preview Images/back.jpg" />
          <Property Name="PublishingAssociatedContentType" Value=";#$Resources:cmscore,contenttype_articlepage_name;;#0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D;#"/>
        </File>

    Read the article

  • MDX: Problem filtering results in MDX query used in Reporting Services query

    - by wgpubs
    Why aren't my results being filtered by the members from my [Group Hierarchy] returned via the filter() statement below?

        SELECT
            NON EMPTY { [Measures].[Group Count], [Measures].[Overall Group Count] } ON COLUMNS,
            NON EMPTY { [Survey].[Surveys By Year].[Survey Year].ALLMEMBERS
                        * [Response Status].[Response Status].[Response Status].ALLMEMBERS }
            DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
        FROM
        (
            SELECT ( { [Survey Type].[Survey Type Hierarchy].&[9] } ) ON COLUMNS
            FROM
            (
                SELECT ( { [Response Status].[Response Status].[All] } ) ON COLUMNS
                FROM
                (
                    SELECT ( STRTOSET(@SurveySurveysByYear, CONSTRAINED) ) ON COLUMNS
                    FROM
                    (
                        SELECT ( filter([Group].[Group Hierarchy].members,
                                 instr(@GroupGroupFullName, [Group].[Group Hierarchy].Properties( "Group Full Name" ))) ) ON COLUMNS
                        FROM [SysSurveyDW]
                    )
                )
            )
        )
        CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS

    Read the article

  • SharePoint 2007 Web Application is not found

    - by David
    I created a web application called testwebapp and then a site collection (testsite). When I try

        siteCollection = new SPSite("http://localhost");

    in Visual Studio 2008, it throws a "Web application is not found" error. Of course, localhost works in IE, and I don't know why testwebapp doesn't work. Any ideas? TIA! David
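
    A hedged diagnostic sketch (it assumes the console app runs on a farm server, targets the same .NET version/bitness as SharePoint, and runs as a user with rights to the configuration database - any of those being wrong can also produce this error): list the URLs the farm actually knows about and compare them with what you pass to SPSite:

        using System;
        using Microsoft.SharePoint.Administration;

        class FarmUrlDump
        {
            static void Main()
            {
                foreach (SPWebApplication app in SPWebService.ContentService.WebApplications)
                {
                    // Alternate access mappings are what SPSite matches URLs against.
                    foreach (SPAlternateUrl url in app.AlternateUrls)
                    {
                        Console.WriteLine(app.Name + " -> " + url.IncomingUrl);
                    }
                }
            }
        }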

    Read the article

  • SharePoint User Profile Search

    - by lucasstark
    Is there a way to search profiles in MOSS from the object model? I need to search for profiles that have a certain value set on their profile and then perform some action for them. I need to write some C# code that can search the profiles database and return the matching profiles. Basically:

        List of Profiles = Select Profiles From Profile Store Where Profile Property Value = SomeValue

    I'm trying to avoid the following:

        private IEnumerable<UserProfile> SearchProfiles(string value)
        {
            ServerContext serverContext = ServerContext.GetContext(SPContext.Current.Site);
            UserProfileManager profileManager = new UserProfileManager(serverContext);
            foreach (UserProfile profile in profileManager)
            {
                if ((string)profile["MyProp"].Value == value)
                {
                    yield return profile;
                }
            }
        }
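
    One search-based alternative (a hedged sketch: it assumes the profile property is crawled and mapped to a managed property, here called MyProp, and that the default People scope exists - all names are placeholders) is to let the search index do the filtering instead of enumerating every profile:

        using (SPSite site = new SPSite("http://server"))
        {
            FullTextSqlQuery query = new FullTextSqlQuery(site);
            query.QueryText =
                "SELECT PreferredName, AccountName " +
                "FROM SCOPE() " +
                "WHERE \"scope\" = 'People' AND MyProp = 'SomeValue'";
            query.ResultTypes = ResultType.RelevantResults;

            // Each row is a matching profile; resolve it back to a UserProfile
            // via UserProfileManager.GetUserProfile(AccountName) if needed.
            ResultTableCollection results = query.Execute();
        }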

    Read the article

  • JavaScript childNodes does not find all children of a div when appendChild has been used

    - by yesterdayze
    Alright, I am hoping someone can help me out. I apologize up front that this one may be confusing; I have included an example to try to ease the confusion, as this is better seen than heard. I have created a webpage that contains a group, or set of groups, where each group has a subgroup. In a nutshell, this page allows me to combine multiple groups containing subgroups into a new group. The page gives the chance to rename the old subgroups before they are combined into new groups, in order to avoid confusion. When a group is renamed, it checks whether there is already a group with that name. If there is, it copies itself out of its own group and into that group, then deletes the original. If the group does not already exist, it creates that group, copies itself in, and then deletes the original. Subgroups can also be renamed, at which point they move into the group with the same name if it exists, or create a new one if it doesn't. The page has a main div. The main div contains 'new sub group' divs. Inside each of those is another div containing the 'old sub group' divs. I loop through the child nodes of the 'new sub group' div when renaming a group, in order to find each child node. These are then copied into a new div within the main div. The crux of the problem is this: if I loop through a DIV and copy all of the DIVs in it into a new or existing DIV, all is well. When I then try to take that DIV and copy all of its DIVs into another or new DIV, it always skips one of the moved DIVs. For simplicity I have copied the entire working code below. To recreate the issue, click the spot where the image should appear next to the name ewrewrwe and rename it to something else. All is well. Now click that new group the same way and name it something else. You will see it skip one each time. I have linked the page here: http://vtbikenight.com/test.html The link is clean; it is my personal website I use for a local motorcycle group I am part of. Thanks for the help everyone!!! Please let me know if I can clarify anything. I know the code is not the best right now; it is just demo code, and my intent is to get the concept working and then streamline it all.

    Read the article

  • ItemAdded event for a document library in SharePoint 2007

    - by Azra
    Hi, I have a document library in SharePoint 2007. I want to validate certain custom properties before a document is uploaded, or when properties are entered after clicking Edit Properties. I am trying to validate the fields in the ItemAdding event when a document is uploaded; however, when EditForm.aspx opens up for editing properties, no events fire. How can I troubleshoot the issue? Thanks, azra
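
    A hedged sketch of the event-receiver side. The key detail: ItemAdding fires for the upload itself, before EditForm.aspx is shown, while the metadata typed into EditForm.aspx arrives through ItemUpdating and its AfterProperties - which is why no "add" event fires at that point. The field name is a placeholder:

        public class DocValidationReceiver : SPItemEventReceiver
        {
            public override void ItemUpdating(SPItemEventProperties properties)
            {
                // Value the user just entered on EditForm.aspx, before commit.
                object value = properties.AfterProperties["MyCustomProperty"];
                if (value == null || string.IsNullOrEmpty(value.ToString()))
                {
                    properties.Cancel = true;
                    properties.ErrorMessage = "MyCustomProperty is required.";
                }
            }
        }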

    Read the article
