Search Results

  • Working with Reporting Services Filters–Part 1

    - by smisner
    There are two ways that you can filter data in Reporting Services. The first way, which usually provides faster performance, is to use query parameters to apply a filter using the WHERE clause in a SQL statement. In that case, the structure of the filter depends upon the syntax recognized by the source database. Another way to filter data in Reporting Services is to apply a filter to a dataset, data region, or a group. Using this latter method, you can even apply multiple filters. However, the use of filter operators or the setup of multiple filters is not always obvious, so in this series of posts, I'll provide some more information about the configuration of filters.

    First, why not use query parameters exclusively for filtering? Here are a few reasons:

    - You might want to apply a filter to part of the report, but not all of the report.
    - Your dataset might retrieve data from a stored procedure, which doesn't allow you to pass a query parameter for filtering purposes.
    - Your report might be set up as a snapshot on the report server and, in that case, cannot be dynamically filtered based on a query parameter.

    Next, let's look at how to set up a report filter in general. The process is the same whether you are applying the filter to a dataset, data region, or a group. When you go to the Filters page in the Properties dialog box for whichever of these items you selected (dataset, data region, group), you click the Add button to create a new filter. The Expression field is usually a field in the dataset, so to make it easier for you to make a selection, the drop-down list displays all of the current dataset fields. But notice the expression button to the right, which means that you can set up any type of expression, not just a dataset field. To the right of the expression button, you'll find a data type drop-down list. It's important to specify the correct data type for the field or expression you're using.

    Now for the operators. Here's a list of the options that you have:

        Operator                       Action
        =, <>, >, >=, <, <=, Like      Compares expression to value
        Top N, Bottom N                Compares expression to top (bottom) set of N values (N = integer)
        Top %, Bottom %                Compares expression to top (bottom) N percent of values (N = integer or float)
        Between                        Determines whether expression is between two values, inclusive
        In                             Determines whether expression is found in list of values

    Last, the Value is what you're comparing to the expression using the operator. The construction of a filter using some operators (=, <>, >, etc.) is fairly simple. If my dataset (for AdventureWorks data) has a Category field, and I have a parameter that prompts the user for a single category, I can set up a filter like this:

        Expression    Data Type    Operator    Value
        [Category]    Text         =           [@Category]

    But if I set the parameter to accept multiple values, I need to change the operator from = to In, just as I would have to do if I were using a query parameter. The parameter expression, [@Category], which translates to =Parameters!Category.Value, doesn't need to change, because it represents an array as soon as I change the parameter to allow multiple values. The In operator requires an array.

    With that in mind, let's consider a variation on Value. Let's say that I have a parameter that prompts the user for a particular year (for simplicity's sake, this parameter only allows a single value), and I have an expression that evaluates the previous year based on the user's selection. Then I want to use these two values in two separate filters with an OR condition. That is, I want to filter either by the year selected OR by the year that was computed. If I create two filters, one for each year (as shown below), then the report will only display results if BOTH filter conditions are met, which would never be true.

        Expression        Data Type    Operator    Value
        [CalendarYear]    Integer      =           [@Year]
        [CalendarYear]    Integer      =           =Parameters!Year.Value-1

    To handle this scenario, we need to create a single filter that uses the In operator, and then set up the Value expression as an array. To create an array, we use the Split function after creating a string that concatenates the two values, as shown below:

        Expression                          Data Type    Operator    Value
        =Cstr(Fields!CalendarYear.Value)    Text         In          =Split(CStr(Parameters!Year.Value) + "," + CStr(Parameters!Year.Value-1), ",")

    Note that in this case, I had to apply a string conversion on the year integer so that I could concatenate the parameter selection with the calculated year. Pay attention to the second argument of the Split function: you must use a comma delimiter for the result to work correctly with the In operator. I also had to change the Expression value from [CalendarYear] (that is, =Fields!CalendarYear.Value) to a string conversion, so that the expression would return a string that I could compare with the values in the string array. More fun with filter expressions in future posts!
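    As a postscript for contrast: the query-parameter approach mentioned at the top of this post pushes the filter into the dataset query itself, so the source database does the work. A minimal sketch, where the table and column names are made up for illustration:

        -- Dataset query: @Category maps to the report parameter, and the
        -- filtering happens in the source database, not in the report.
        SELECT SalesOrderNumber, Category, SalesAmount
        FROM dbo.ResellerSales   -- hypothetical table name
        WHERE Category = @Category;

        -- For a multi-value parameter against a SQL Server data source, use IN;
        -- Reporting Services expands the parameter into the selected value list:
        -- WHERE Category IN (@Category)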

  • CodePlex Daily Summary for Monday, June 06, 2011

    Popular Releases

    - Sharepoint 2010 Diagnostic Log Compression: Sharepoint 2010 Diagnostic Log Compression v1.0: Requirements: Microsoft Sharepoint 2010 Foundation or Microsoft Sharepoint 2010 Server. Release version: 1.0.
    - PowerShell SQL Developer Tools: SQLDevTools_Module (folder) Version 0.1.0: SQLDevTools Module version 0.1.0 (alpha release). Just unzip the file and move the folder "SQLDevTools" into your user "WindowsPowerShell\Modules" folder: "C:\Users\UserName...\Documents\WindowsPowerShell\Modules\SQLDevTools". Then, to load the module at either the PowerShell console or the ISE, use: Import-Module SQLDevTools. To remove the module use: Remove-Module SQLDevTools. Again, you need to have SQL Server 2008 (or above), or the standalone version of "SQL Server Management Studio 2008 R2", installed ...
    - ShowUI: Write-UI -in PowerShell: ShowUI is a PowerShell module to help you write rich user interfaces in script.
    - SQL Compact Query Analyzer: Build 0.2.0.0: Beta build of SQL Compact Query Analyzer. Features: execute SQL queries against a SQL Server Compact Edition database; easily edit the contents of the database; supports SQLCE 3.5 and 4.0. Prerequisites: .NET Framework 4.0.
    - Media Companion: MC 3.405b weekly: Extract the entire archive to a folder which has user access rights, e.g. desktop, documents etc. Refer to the documentation on this site for the Installation & Setup Guide. Important! If you are upgrading from a version of MC before 3.404, then, due to an issue where the date added and the full genre information was not being read into the movie cache, it is highly recommended that you perform a Rebuild Movies when first running this latest version. This will read in the information from the ...
    - Babylon Toolkit: Babylon.Toolkit v1.0.4: Note about samples: in order to run the samples, you need to configure Visual Studio to run them as an "out-of-browser application". To do that, go to the property page of a sample project, go to the Debug tab, and check the "Out-of-browser application" radio. New features: new effects: BasicEffect3Lights (3 dir lights instead of 1 position light), CartoonEffect (work in progress), SkinnedEffect (with normal and specular map support), SplattingEffect (for multi-texturing with smooth ...
    - SizeOnDisk: 1.0.8.2: With installer.
    - TerrariViewer: TerrariViewer v2.5: Added new items associated with Terraria v1.0.3 to the character editor. Fixed multiple bugs with the Piggy Bank Editor.
    - Sterling NoSQL OODB for .NET 4.0, Silverlight 4 and 5, and Windows Phone 7: Sterling OODB v1.5: Welcome to the Sterling 1.5 RTM. This version is backwards compatible without modification with the 1.4 beta. For 1.0, you will need to upgrade your database; please see this discussion for details. You must modify your 1.0 code for persistence. The 1.5 version defaults to an in-memory driver. To save to isolated storage or use one of the new mechanisms, see the available drivers and pass an instance of the appropriate one to your database (different databases may use different drivers). ...
    - patterns & practices: Project Silk: Project Silk Community Drop 10 - June 3, 2011: Changes from previous drop: many code changes (please see the readme.mht for details); a new "Application Notifications" chapter; an updated "Server-Side Implementation" chapter. Guidance chapters ready for review: the Word documents for the chapters are included with the source code, in addition to the CHM, to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation overview: to install and run the reference implementation, you must perform the fol...
    - Claims Based Identity & Access Control Guide: Release Candidate: Highlights of this release: this is the release candidate drop of the new "Claims Identity Guide" edition. In this release you will find all code samples, including all ACS v2: ACS as a Federation Provider (showing authentication with LiveID, Google, etc.); ACS as a FP with multiple business partners; ACS and REST endpoints; using a WP7 client with REST endpoints; all ACS-specific chapters; two new chapters on SharePoint (SSO and Federation); and all revised v1 chapters. We are now ...
    - Terraria Map Generator: TerrariaMapTool 1.0.0.4 Beta: 1) Fixed the generated map.html file so that the file:/// is included in the base path. 2) Added the ability to use parallelization during generation; this will cause the program to use as many threads as there are physical cores. 3) Fixed some background overdraw.
    - Caliburn Micro (WPF, Silverlight and WP7 made easy): Caliburn.Micro v1.1 RTW: Download contents: debug and release assemblies, samples, Changes.txt, License.txt. Release highlights for WP7: a new tombstoning API based on ideas from Fluent NHibernate; a new Launcher/Chooser API; strongly typed navigation; SimpleContainer included; the full phone lifecycle is made easy to work with, i.e. we figure out whether your app is actually resurrecting or just continuing for you. For WPF: support for the Client Profile; better support for WinForms integration. All platforms: a power...
    - VidCoder: 0.9.1: Added color coding to the Log window: errors are highlighted in red, HandBrake logs are in black and VidCoder logs are in dark blue. Moved the enqueue button to the right with the other control buttons. Added logic to report failures when errors are logged during the encode or when the encode finishes prematurely. Added a Copy button to the Log window. Adjusted the audio track selection box to always show the full track name. Changed the encode job progress bar to also be colored yellow when the enco...
    - AutoLoL: AutoLoL v2.0.1: Fixed a small bug in Auto Login; fixed the updater.
    - EPPlus - Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.9.0.1: This version has been updated to .NET Framework 3.5. New features: data validation; PivotTables (basic functionality...) with support for worksheet data sources; column, row, page and data fields; date and numeric grouping; built-in styles ...and more. And some minor new features: Ranges: Text property (get the formatted value); AutofitColumns method to set the column width from the content of the range; LoadFromCollection metho...
    - jQuery ASP.Net MVC Controls: Version 1.4.0.0: Version 1.4.0.0 contains the following additions: upgraded to MVC 3.0; upgraded to jQuery 1.6.1 (though the project supports all jQuery versions from 1.4.x onwards); upgraded to jqGrid 3.8; better Razor view-engine support; better pager support, including support for custom pagers; added jqGrid toolbar button support; search module refactored, with full support for multiple filters and ordering; and code cleanup, bug fixes and better controller configuration support.
    - NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.8b: Changes: fix critical issue 15922 (AccessViolationException) once and for all ...update is strongly recommended. Known issues: some addin ribbon examples have a COM register function with a missing codebase entry (win32 registry) ...the problem affects only the C# examples; fixed in the latest source code (NetOffice\ReleaseTags\NetOffice Release 0.8.rar). Includes: runtime binaries and source code for .NET Framework v2.0, v3.0, v3.5, v4.0; tutorials in C# and VB.Net:...
    - Serviio for Windows Home Server: Beta Release 0.5.2.0: Ready for widespread beta. Synchronized the build number to the Serviio version to avoid confusion.
    - AcDown - Anime&Comic Downloader: AcDown v3.0 Beta4: [The Chinese release notes are garbled in this copy; the recoverable fragments mention 32- and 64-bit Windows XP/Vista/7 support, a .NET Framework 2.0 requirement on Windows XP, and new Bilibili.us and Acfun download features.]

    New Projects

    - .NET library for Audiogalaxy: AgNet is a .NET client library for the Audiogalaxy API.
    - Asuro scanner: An Asuro program which can recognize light values and perform predefined actions for them. This was realised for a school project.
    - AutoComplete.Net (for Asp.Net and Asp.Net MVC): AutoComplete.Net is a wrapper around the jQuery UI Autocomplete widget. It makes it easier to add autocomplete functionality without having to bother with tons of JavaScript. Includes a WebControl for your Asp.Net web applications and a bunch of extension methods for your Asp.Net MVC web applications. Their purpose is to provide developers with an easy interface to the cool jQuery UI Autocomplete widget. It's developed in C#.
    - AveSoft Advance Keylogger: Free and completely open source keystroke logging application; allows you to capture and record all keyboard input to file. The program is extremely small and lightweight, can run in a hidden mode, and can upload the log file to a specified web server. Due to its open source nature, the software is secure and transparent. Main features: captures all keyboard input; caching and low-level hooks use a minimal amount of CPU cycles; highly customizable; hidden mode; Windows integration; log file...
    - DDDProjector: A DDD (Domain Driven Design) project written in C#, using NHibernate as the O/R mapper. [The original Japanese description is mostly garbled in this copy.]
    - DogTouch: DogTouch.
    - EVMDOCS Reports: This project contains the diploma project of a VSTU student. The project is devoted to automating the learning process at the university. Contains reports in OpenOffice.
    - Fidely - Search Query Compilation Framework: Fidely is a framework to implement a search query compiler that parses a search query string and builds up an expression tree (Expression<Func<T, bool>>) to filter a collection object. Fidely allows you to customize the processes of parsing a query string and building up an expression tree, so you can create various search query compilers. The goal of this project is to make it easy to implement collection filtering features in your application.
    - Frets: Frets is a library for creating and printing guitar chord diagrams.
    - FuncLib: C# automatic differentiation library; transparent use with operator overloading, unlimited order of differentiation and an unlimited number of variables, with on-the-fly function compilation to IL code for very fast evaluation.
    - ITAP: A management system for HCM IU's IELTS simulation exam.
    - jQXB/M Expression Binder: JQ Expression Binder is a lightweight framework (10 kbytes) for the association between JavaScript objects and HTML elements. Based on jQuery, it provides a natural and flexible way to process the data exchanged between client and server in a real WEB 2.0 paradigm. Using a few custom attributes to decorate your HTML tags, you are enabled to take complete control over the managing of your business data directly inside your web pages. Easy integration with ASP.NET WebForm web applications, ...
    - Log Processor: This software helps you to automatically analyze and process data in files (such as plain text logs, binary dumps...) based on plugins (for multiple input/output file type support) and plain text rules files (for complete customization of the analysis needed). You'll then be able to easily produce stats reports from your application logs.
    - MiniIndexer: A web application that demonstrates how to make a simple indexing search engine backed by a relational database.
    - Monads: Monad helpers for .NET, Silverlight and Windows Phone 7.
    - Movie Browser: Movie Browser helps collect information from IMDb and manage personal movie folders.
    - Opc Configurable Xml Server/Service (OCXS): A Microsoft .NET 4.0 C# OPC XML Data Access 1.0 server/service.
    - SQL DAL: Data access layer.
    - TankMate: A simple Visual Basic application that lists and manages all your fish tanks. The software was initially designed to list fish tanks, record their width/height/depth in inches, and display their volume in litres. The tank calculator was then expanded to alert you if water needs changing after a set interval of days; the number of days between changes is settable for each tank; any sound file can be used for alert sounds; and it has an online update feature to keep it up to date.
    - Twoosh!: Simply, a fully-functional Twitter client for Windows... Twoosh makes it easier for Tweeple to Tweet. You'll no longer have to wait for the Twitter page to load or see the fail whale error page. It's developed in Visual Basic .NET 2008.
    - WCF RIA Services for Azure Tables and SQL/Azure: This project demos RIA services with POCO objects using two different back-end data storage technologies, SQL/Azure and Azure Table Storage. The example shows a simple association between two tables (department and employee) for read, insert and update operations.
    - Windows Azure Storage Console Client: Windows Azure Storage console client.
    - XNA 4.0 VirtualCompass: VirtualCompass is an incredibly simple and easy to use library for path finding in Microsoft's XNA 4.0. It implements A* (A-star) path finding techniques to find the shortest path. This is an alpha status project: limited testing and optimization thus far. Developed in C#.

  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. UNION ALL queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches in a UNION ALL query share common processing, i.e., they refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and joins. The join factorization transformation offers an opportunity to share the common computations across the UNION ALL branches. Currently, join factorization factorizes only common references to base tables, i.e., not views.

    Consider a simple example, query Q1:

        Q1: select t1.c1, t2.c2
            from t1, t2, t3
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
            union all
            select t1.c1, t2.c2
            from t1, t2, t4
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3;

    Table t1 appears in both branches, as do the filter predicate on t1 (t1.c1 > 1) and the join predicate involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization, which can transform Q1 into Q2 as follows:

        Q2: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2, t3
                      where t2.c2 = t3.c2 and t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2, t4
                      where t2.c3 = t4.c3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1;

    In Q2, t1 is "factorized", and thus the table scan and the filtering on t1 are done only once (they're shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement.

    Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3:

        Q3: select *
            from t5, (select t1.c1, t2.c2
                      from t1, t2, t3
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
                      union all
                      select t1.c1, t2.c2
                      from t1, t2, t4
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V
            where t5.c1 = V.c1;

    In Q3, view V is the same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 from view V, t1 can then be joined with t5. This opens up new join orders. That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1. t2 must be joined with t3 before it can be joined with t1, which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under the cost-based transformation framework: we cost the plans with and without join factorization and choose the cheapest plan.

    Note that if the branches in a UNION ALL have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5:

        Q4: select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1
            union all
            select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1

        Q5: select distinct t1.*
            from t1, (select t2.c1 item_1
                      from t2
                      union all
                      select t2.c1 item_1
                      from t2) VW_JF_1
            where t1.c1 = VW_JF_1.item_1

    Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate-free because of the DISTINCT keyword at the top level, while Q4's results might contain duplicates.

    The examples given so far involve inner joins only. Join factorization is also supported for outer joins, anti joins and semi joins, but only the right tables of outer joins, anti joins and semi joins can be factorized. It is not semantically correct to factorize the left table of an outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7:

        Q6: select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 2
            union all
            select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 3

        Q7: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1(+)

    However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 to Q9 by factorizing t2, which is the right table of an outer join:

        Q8: select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 1
            union all
            select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 2

        Q9: select VW_JF_1.item_2, t2.c2
            from t2, (select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 1
                      union all
                      select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 2) VW_JF_1
            where VW_JF_1.item_1 = t2.c1(+)

    All of the examples in this blog show factorizing a single table from two branches. This is just for ease of illustration. Join factorization can factorize multiple tables, and from more than two UNION ALL branches.

    Summary
    Join factorization is a cost-based transformation. It can factorize common computations from branches in a UNION ALL query, which can lead to huge performance improvement.
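    To illustrate that last point, here is a hand-written sketch (not from the original post) of a UNION ALL query with three branches that share two tables, and what a factorized form of it could look like. The tables and predicates are made up, and the view name VW_JF_1 simply follows the naming convention used above; the exact shape the optimizer generates may differ:

        Q10: select t1.c1, t2.c2
             from t1, t2, t3
             where t1.c1 = t2.c1 and t2.c2 = t3.c2 and t3.c3 = 1
             union all
             select t1.c1, t2.c2
             from t1, t2, t3
             where t1.c1 = t2.c1 and t2.c2 = t3.c2 and t3.c3 = 2
             union all
             select t1.c1, t2.c2
             from t1, t2, t3
             where t1.c1 = t2.c1 and t2.c2 = t3.c2 and t3.c3 = 3;

        -- With both t1 and t2 factorized, each is scanned once instead of three times:
        Q11: select t1.c1, t2.c2
             from t1, t2, (select t3.c2 item_1 from t3 where t3.c3 = 1
                           union all
                           select t3.c2 item_1 from t3 where t3.c3 = 2
                           union all
                           select t3.c2 item_1 from t3 where t3.c3 = 3) VW_JF_1
             where t1.c1 = t2.c1 and t2.c2 = VW_JF_1.item_1;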

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc, to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don't use, to write a small set of custom values which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: we are also using Visual Studio 2012.)

    Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don't appear to be answers at all, given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    1. Get the "TFS Service Credential Viewer". Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.

    2. In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":

       - Microsoft.TeamFoundation.Client.dll
       - Microsoft.TeamFoundation.Common.dll
       - Microsoft.TeamFoundation.VersionControl.Client.dll
       - Microsoft.TeamFoundation.VersionControl.Common.dll
       - Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
       - Microsoft.TeamFoundation.WorkItemTracking.Client.dll
       - Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    3. If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under.

    4. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:

           <appSettings>
             <!-- Add reference to TFS Client Cache -->
             <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
           </appSettings>

    With all that in place, you can write the following code:

        var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential(
            "{your-service-account-name}", "{your-service-acct-password}");
        var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
        var currentCollection = new TfsTeamProjectCollection(
            new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
        currentCollection.EnsureAuthenticated();

    In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account name and password that were generated in the first step are substituted in here.

    Note: if something is not right, the EnsureAuthenticated() call will throw an exception with the message being that you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a message saying you are not authorised. That is, a similar but different exception message.

    And that is it. You can then query the collection using something like:

        var service = currentCollection.GetService<WorkItemStore>();
        var proj = service.Projects[0];
        var allQueries = proj.StoredQueries;
        for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
        {
            var query = allQueries[qcnt];
            var queryDesc = string.Format("Query found named: {0}", query.Name);
        }

    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method, and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.
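    As a footnote to the code above, here is a rough sketch (my own addition, not from the original post) of actually executing one of those stored queries rather than just listing them. It assumes the stored query's WIQL doesn't rely on macros such as @project that would need extra context:

        var store = currentCollection.GetService<WorkItemStore>();
        var storedQuery = store.Projects[0].StoredQueries[0];

        // Run the query's WIQL text and enumerate the matching work items.
        var results = store.Query(storedQuery.QueryText);
        foreach (Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem workItem in results)
        {
            Console.WriteLine("{0}: {1}", workItem.Id, workItem.Title);
        }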

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (current status: API v1.1). Naturally, LINQ to Twitter needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know.

    Overall Impact
    Generally speaking, Twitter API v1.1 is semantically very much the same as its predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn't change much at all. The following sections describe what did change.

    Authentication
    In Twitter API v1.0, authentication was not required for some resources, such as user timelines and search. However, that's all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter. You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support.

    The New Search
    One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results. Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn't change.

    Different Rate Limits
    The issue of rate limits itself is contentious, but this discussion is focused on the coding experience, and I'll leave the politics to those who prefer to engage in that activity. What's important here is that both headers and resources have changed. You should review Twitter's Rate Limit documentation to understand what the changes mean. A quick explanation is that rate limits are applied individually to each resource in 15 minute time intervals.

    In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for. The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries – and to whom the rate limits apply.

    In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header names are of the form X-Rate-Limit… which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names. I haven't done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up.

    Error Handling
    Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There's also a new ErrorCode that's populated with the message error code.
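    Before moving on, here is a rough sketch of what a RateLimits query might look like, pieced together from the description above. Treat it as illustrative rather than authoritative: twitterCtx is assumed to be an authenticated TwitterContext, and the category list passed to Resources is just an example:

        var helpResponse =
            (from help in twitterCtx.Help
             where help.Type == HelpType.RateLimits &&
                   help.Resources == "search,statuses"  // comma-separated category list
             select help)
            .SingleOrDefault();

        // Results materialize in the RateLimits dictionary, keyed on category.
        foreach (var category in helpResponse.RateLimits)
        {
            Console.WriteLine("Rate limit category: {0}", category.Key);
        }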
    Parameters
    Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets that provide start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used not to be included unless you specified that you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status. If you were always setting IncludeEntities to true, then you won't see a change. However, be aware that you'll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not matter to you depending on the requirements of your application, but you should be aware of it.

    Everything Else
    There might be small changes here and there that I haven't mentioned, but these were the ones you should be most aware of. Streams didn't change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you'll be seeing me make that change some time in the future. Also, Twitter will continue to evolve the API, and you can expect that LINQ to Twitter will change accordingly.

    Summary
    The big changes to the Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You'll need to change your code to read Search results differently, but the query is much the same as you use now. There's a new RateLimits API, one of the Help queries. Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect most others to be small or to affect a smaller percentage of developers. You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.

    @JoeMayo

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to:

    - Connect to the database
    - Retrieve data
    - Lock database records
    - Manage transactions

    ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to:

    - Author and test business logic in components which automatically integrate with databases
    - Reuse business logic through multiple SQL-based views of data, supporting different application tasks
    - Access and update the views from browser, desktop, mobile, and web service clients
    - Customize application functionality in layers without requiring modification of the delivered application

    The goal of ADF Business Components is to make the business services developer more productive. ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas:

    Simplifying Data Access
    - Design a data model for client displays, including only necessary data
    - Include master-detail hierarchies of any complexity as part of the data model
    - Implement end-user Query-by-Example data filtering without code
    - Automatically coordinate data model changes with the business services layer
    - Automatically validate and save any changes to the database

    Enforcing Business Domain Validation and Business Logic
    - Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references
    - Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support
    - Navigate relationships between business domain objects and enforce constraints related to compound components

    Supporting Sophisticated UIs with Multipage Units of Work
    - Automatically reflect changes made by business service application logic in the user interface
    - Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values
    - Simplify multistep web-based business transactions with automatic web-tier state management
    - Handle images, video, sound, and documents without having to use code
    - Synchronize pending data changes across multiple views of data
    - Consistently apply prompts, tooltips, format masks, and error messages in any application
    - Define custom metadata for any business components to support metadata-driven user interface or application functionality
    - Add dynamic attributes at runtime to simplify per-row state management

    Implementing High-Performance Service-Oriented Architecture
    - Support highly functional web service interfaces for business integration without writing code
    - Enforce best-practice interface-based programming style
    - Simplify application security with automatic JAAS integration and audit maintenance
    - "Write once, run anywhere": use the same business service as a plain Java class, EJB session bean, or web service

    Streamlining Application Customization
    - Extend component functionality after delivery without modifying source code
    - Globally substitute delivered components with extended ones without modifying the application

    ADF Business Components implements the business service through the following set of cooperating components:

    Entity object
    An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. Entity objects are basically your 1-to-1 representations of database tables: each table in the database will have one and only one EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services; they are responsible for updating, inserting and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields:

    - Name: the name of the attribute we expose in our data model
    - Type: the data type of the attribute in our application
    - Column: the column to which we want to map the attribute
    - Column Type: the type of the column in the database

    View object
    A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in a view object actually come from its entity objects. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in the VO and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the Query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for information purposes; in later stages we might use it.

    Application module
    An application module is the controller of your data layer. It is responsible for keeping hold of the transaction. It exposes the data model to the view layer: you expose the VOs through the application module. This is the abstraction of your data layer which you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work tied to an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible, and the default behavior provided by the base components can be easily overridden or augmented.

    When you create EOs, a foreign key will be translated into an association in our model. It defines the type of relation, who is the master and who is the child, and what the visibility of the association looks like. A similar concept exists to identify relations between view objects. These are called view links. They are almost identical to associations, except that a view link is based upon attributes defined in the view objects. It can also be based upon an association. Here's a short summary:

    - Entity Objects: representations of tables
    - Associations: relations between EOs; representations of foreign keys
    - View Objects: the logical model
    - View Links: relationships between view objects
    - Application Module: the interface to your application

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values. Specifically here, where we are joining two tables on two columns, one of which is 'optional', i.e. is nullable. What we want is a lookup where all the columns are equal, even when NULL.

    NULLs are a tricky thing to initially wrap your mind around. Statements like "NULL is not equal to NULL, and neither is it not equal to NULL; it's NULL" can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy.

    Before we plod on, time to set up some data to demo against:

        create table #SourceTable
        (
            Id integer not null,
            SubId integer null,
            AnotherCol char(255) not null
        )
        go
        create unique clustered index idxSourceTable on #SourceTable(Id, SubId)
        go
        with cteNums as
        (
            select top(1000) number
            from master..spt_values
            where type = 'P'
        )
        insert into #SourceTable
        select Num1.number, nullif(Num2.number, 0), 'SomeJunk'
        from cteNums Num1 cross join cteNums Num2
        go
        create table #LookupTable
        (
            Id integer not null,
            SubID integer null
        )
        go
        insert into #LookupTable
        select top(100) id, subid
        from #SourceTable
        where subid is not null
        order by newid()
        go
        insert into #LookupTable
        select top(3) id, subid
        from #SourceTable
        where subid is null
        order by newid()

    If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable. We now want to join one to the other.

    First attempt – let's just join:

        select *
        from #SourceTable
        join #LookupTable on #LookupTable.id = #SourceTable.id
                         and #LookupTable.SubID = #SourceTable.SubID

    OK, that's a fail. We had 100 rows back; we didn't correctly account for the 3 rows that have null values. Remember NULL <> NULL, and the join clause specifies SubID = SubID, which for those rows is not true.

    Second attempt – let's deal with those pesky NULLs:

        select *
        from #SourceTable
        join #LookupTable on #LookupTable.id = #SourceTable.id
                         and isnull(#LookupTable.SubID, 0) = isnull(#SourceTable.SubID, 0)

    OK, that's the right result. Well done, and 99.9% of the time that is where it's left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems. But, although that's true, this is a relational database we are using here, not a procedural language. SQL is a declarative language: we are making a request to the engine to get the results we want. How we ask for them can make a ton of difference.

    Let's look at the plan for our second attempt, specifically the clustered index seek on #SourceTable. There are two predicates, the 'seek predicate' and the 'predicate'. The 'seek predicate' describes how SQLServer has been able to use an index. Here, it has been able to navigate the index to resolve where ID = ID. So far so good, but what about the 'predicate' (aka residual probe)? This is a row-by-row operation: for each row found in the index matching the seek predicate, the leaf level nodes have been scanned and tested using this logical condition. In this example, [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable. This residual probe is quite a high overhead; if we can express our statement slightly differently, we can take full advantage of the index and make the test part of the 'seek predicate'.

    Third attempt – X is null and Y is null. So, let's state the query in a slightly different manner:

        select *
        from #SourceTable
        join #LookupTable on #LookupTable.id = #SourceTable.id
                         and (    #LookupTable.SubID = #SourceTable.SubID
                              or (#LookupTable.SubID is null and #SourceTable.SubId is null)
                             )

    So it's slightly wordier and may not be as clear in its intent to the human reader (that is what comments are for), but the key point is that it is now clearer to the query optimizer what our intention is. Look at the plan for that query, again specifically the index seek operation on #SourceTable: no 'predicate', just a 'seek predicate' against the index to resolve both ID and SubID. A subtle difference that can be easily overlooked, but has it made a difference to the performance? Well, yes, a perhaps surprisingly high one. Clever query optimizer; well done.

    If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used. By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability, your system will be in a much better position if you can.
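    As an addendum (my own, not from the original post): another well-known NULL-safe phrasing in T-SQL uses INTERSECT inside EXISTS, since set operators treat NULLs as equal. The optimizer can often resolve this form with a seek as well, so it is worth comparing plans on your own data:

        select *
        from #SourceTable
        join #LookupTable
            on exists (select #SourceTable.Id, #SourceTable.SubId
                       intersect
                       select #LookupTable.Id, #LookupTable.SubID)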

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have heard your feedback and suggestions, and based on that we have come up with a whole new set of features. Here are some of the highlights:

    - A New Programming Model – a clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.
    - Tight integration with the Reactive Framework (Rx) – you can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.
    - High Availability – checkpointing over temporal streams and multiple processes with shared computation.

    Here is how simple coding can be with the 2.1 programming model:

        class Program
        {
            static void Main(string[] args)
            {
                using (Server server = Server.Create("Default"))
                {
                    // Create an app
                    Application app = server.CreateApplication("app");

                    // Define a simple observable which generates an integer every second
                    var source = app.DefineObservable(() =>
                        Observable.Interval(TimeSpan.FromSeconds(1)));

                    // Define a sink.
                    var sink = app.DefineObserver(() =>
                        Observer.Create<long>(x => Console.WriteLine(x)));

                    // Define a query to filter the events
                    var query = from e in source
                                where e % 2 == 0
                                select e;

                    // Bind the query to the sink and create a runnable process
                    using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                    {
                        Console.WriteLine("Press a key to dispose the process...");
                        Console.ReadKey();
                    }
                }
            }
        }

    That's how easily you can define a source and a sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned: you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface.

    Regards,
    The StreamInsight Team

  • Apache FOP - Table top and bottom borders missing pagebreak inside table

    - by Thomas
    I am using Apache FOP to generate a PDF from an XSL-FO document. I have created a test XSL-FO document that contains a table with collapsed borders and several tall rows. One of the rows starts on one page and ends on the next, and this works as expected. The problem is that the bottom border of the table on the first page is missing and the top border of the table on the second page is also missing. Below is the sample XSL-FO document.

    <?xml version="1.0" encoding="utf-8"?>
    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <!-- defines the layout master -->
      <fo:layout-master-set>
        <fo:simple-page-master master-name="first" page-height="29.7cm" page-width="21cm" margin-top="1cm" margin-bottom="2cm" margin-left="2.5cm" margin-right="2.5cm">
          <fo:region-body margin-top="3cm"/>
          <fo:region-before extent="3cm"/>
          <fo:region-after extent="1.5cm"/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <!-- starts actual layout -->
      <fo:page-sequence master-reference="first">
        <fo:title>Sample Doc</fo:title>
        <fo:flow flow-name="xsl-region-body" font-size="x-small" font="Times New Roman">
          <!-- table start -->
          <fo:table table-layout="fixed" width="100%" border-collapse="collapse">
            <fo:table-column column-width="35mm"/>
            <fo:table-column column-width="100mm"/>
            <fo:table-column column-width="20mm"/>
            <fo:table-body>
              <fo:table-row>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Column 1</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Column 2</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Column 3</fo:block>
                </fo:table-cell>
              </fo:table-row>
              <fo:table-row>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Row 1</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Some text</fo:block>
                </fo:table-cell>
              </fo:table-row>
              <fo:table-row>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Row 2</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Some text</fo:block>
                </fo:table-cell>
              </fo:table-row>
              <fo:table-row>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Row 3</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Some text</fo:block>
                </fo:table-cell>
              </fo:table-row>
              <fo:table-row>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Row 4</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Some text</fo:block>
                </fo:table-cell>
              </fo:table-row>
              <fo:table-row>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Row 5</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout.</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                  <fo:block>Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum Lorem Ipsum</fo:block>
                </fo:table-cell>
                <fo:table-cell border-width="0.5mm" border-style="solid">
                  <fo:block>Some text</fo:block>
                </fo:table-cell>
              </fo:table-row>
            </fo:table-body>
          </fo:table>
          <!-- table end -->
        </fo:flow>
      </fo:page-sequence>
    </fo:root>

    The image shows the bottom border on page 1 missing and the top border on page 2 missing, but all the text seems to be there. Please note that I have already experimented with using an empty header and footer with borders, for example. That works, but I need those features for other things than fixing this issue, so what I need to know is whether there is another solution to the problem.
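    One avenue worth testing (a sketch only, not a confirmed fix; FOP's support for border conditionality in the collapsed-border model varies by version): XSL-FO borders that fall on a page break default to conditionality "discard", and asking the formatter to retain them on each cell might restore the missing rules:

    <fo:table-cell border-width="0.5mm" border-style="solid"
        border-before-width.conditionality="retain"
        border-after-width.conditionality="retain">
      <fo:block>...</fo:block>
    </fo:table-cell>

    The same .conditionality components can be set on the fo:table-row elements instead if the per-cell form has no effect.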

    Read the article

  • help fixing pagination class

    - by michael
    Here is my pagination class. The data in the constructor is just another class that does DB queries and other things. The problem I am having is that I can't get the data output to match the pagination printing, and I can't get mysql_fetch_assoc() to query the data and print it out. I get this error:

    Warning: mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource in /phpdev/pagination/index.php on line 215

    I know that can mean the query isn't correct for it to fetch the data, but I have entered it correctly. Sorry for the script being a little long; I thank all in advance for any help.

    <?php
    include_once("class.db.php");
    mysql_connect("host", "login", "password");
    mysql_select_db("iadcart");

    class Pagination
    {
        var $adjacents;
        var $limit;
        var $param;
        var $mPage;
        var $query;
        var $qData;
        var $printData;
        protected $db;

        function __construct()
        {
            $this->db = new MysqlDB;
        }

        function show()
        {
            $result = $this->db->query($this->query);
            $totalPages = $this->db->num_rows($result);
            $limit = $this->limit;

            /* ---- Set Page ---- */
            if (isset($_GET['page'])) {
                $page = intval($_GET['page']);
                $start = ($page - 1) * $limit;
            } else {
                $page = 0;   // initialize so the check below is well defined
                $start = 0;
            }

            /* ---- Set Page Vars ---- */
            if ($page == 0) {
                $page = 1;
            }
            $prev = $page - 1;
            $next = $page + 1;
            $lastPage = ceil($totalPages / $limit);
            $lpm1 = $lastPage - 1;
            $adjacents = $this->adjacents;
            $mPage = $this->mPage;

            // the magic: printData is assigned here and nowhere else,
            // so nothing can be fetched from it until show() has run
            $this->qData = $this->query . " LIMIT $start, $limit";
            $this->printData = $this->db->query($this->qData);

            /* ---- Draw Pagination Object ---- */
            $pagination = "";
            if ($lastPage > 1) {
                $pagination .= "<div class='pagination'>";
            }

            /* ---- Previous Link ---- */
            if ($page > 1) {
                $pagination .= "<li><a href='$mPage?page=$prev'> previous </a></li>";
            } else {
                $pagination .= "<li class='previous-off'> previous </li>";
            }

            /* ---- Page Numbers (not enough pages - just display active page) ---- */
            if ($lastPage < 7 + ($adjacents * 2)) {
                for ($counter = 1; $counter <= $lastPage; $counter++) {
                    if ($counter == $page) {
                        $pagination .= "<li class='active'>$counter</li>";
                    } else {
                        $pagination .= "<li><a href='$mPage?page=$counter'>$counter</a></li>";
                    }
                }
            }
            /* ---- Page Numbers (enough pages to paginate) ---- */
            elseif ($lastPage > 5 + ($adjacents * 2)) {
                // close to beginning; only hide later pages
                if ($page < 1 + ($adjacents * 2)) {
                    for ($counter = 1; $counter < 4 + ($adjacents * 2); $counter++) {
                        if ($counter == $page) {
                            $pagination .= "<li class='active'>$counter</li>";
                        } else {
                            $pagination .= "<li><a href='$mPage?page=$counter'>$counter</a></li>";
                        }
                    }
                    $pagination .= "...";
                    $pagination .= "<li><a href='$mPage?page=$lpm1'>$lpm1</a></li>";
                    $pagination .= "<li><a href='$mPage?page=$lastPage'>$lastPage</a></li>";
                }
                // in the middle; hide pages at both ends
                elseif ($lastPage - ($adjacents * 2) > $page && $page > ($adjacents * 2)) {
                    $pagination .= "<li><a href='$mPage?page=1'>1</a></li>";
                    $pagination .= "<li><a href='$mPage?page=2'>2</a></li>";
                    $pagination .= "...";
                    for ($counter = $page - $adjacents; $counter <= $page + $adjacents; $counter++) {
                        if ($counter == $page) {
                            $pagination .= "<li class='active'>$counter</li>";
                        } else {
                            $pagination .= "<li><a href='$mPage?page=$counter'>$counter</a></li>";
                        }
                    }
                    $pagination .= "...";
                    $pagination .= "<li><a href='$mPage?page=$lpm1'>$lpm1</a></li>";
                    $pagination .= "<li><a href='$mPage?page=$lastPage'>$lastPage</a></li>";
                }
                // close to end; only hide early pages
                else {
                    $pagination .= "<li><a href='$mPage?page=1'>1</a></li>";
                    $pagination .= "<li><a href='$mPage?page=2'>2</a></li>";
                    $pagination .= "...";
                    for ($counter = $lastPage - (2 + ($adjacents * 2)); $counter <= $lastPage; $counter++) {
                        if ($counter == $page) {
                            $pagination .= "<li class='active'>$counter</li>";
                        } else {
                            $pagination .= "<li><a href='$mPage?page=$counter'>$counter</a></li>";
                        }
                    }
                }
            }

            /* ---- Next Link ---- */
            if ($page < $counter - 1) {
                $pagination .= "<li><a href='$mPage?page=$next'> Next </a></li>";
            } else {
                $pagination .= "<li class='next-off'> Next </li>";
            }
            if ($lastPage > 1) {
                $pagination .= "</div>";
            }

            print $pagination;
        }
    }
    ?>
    <html>
    <head>
    </head>
    <body>
    <table style="width:800px;">
    <?php
    mysql_connect("localhost", "root", "games818");
    mysql_select_db("iadcart");

    $pagination = new Pagination;
    $pagination->adjacents = 3;
    $pagination->limit = 10;
    $pagination->param = $_GET['page'];
    $pagination->mPage = "index.php";
    $pagination->query = "select * from tbl_products where pVisible = 'yes'";

    // The warning fires here: printData has not been assigned yet,
    // because show() does not run until the bottom of the page.
    while ($row = mysql_fetch_assoc($pagination->printData)) {
        print $row['pName'];
    }
    ?>
    </table>
    <br/><br/>
    <?php $pagination->show(); ?>
    </body>
    </html>
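    The warning has a simpler likely cause than a bad query (a sketch of a fix, based on the code as posted): $pagination->printData is only assigned inside show(), but the fetch loop runs before show() is called, so mysql_fetch_assoc() receives null instead of a result resource. Either call show() before the loop, or split the query step out of show(), for example:

    <?php
    // Run the LIMIT query (and render the pager) before fetching rows.
    $pagination->show();
    while ($row = mysql_fetch_assoc($pagination->printData)) {
        print $row['pName'];
    }
    ?>

    If the pager markup has to stay at the bottom of the page, move the two lines that build $this->qData and $this->printData into a separate prepare() method and call that before the loop instead.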

    Read the article

  • Mailman Error / Cpanel

    - by Faith In Unseen Things
    Mailman is giving off this error when any changes are made to the list: ======================= Bug in Mailman version 2.1.14 We're sorry, we hit a bug! Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs. Ran: /usr/local/cpanel/bin/mailman-install --force Then it says at the end: Updating Usenet watermarks - nothing to update here Nothing to do. updating old qfiles cp: cannot remove `/usr/local/cpanel/img-sys/gnu-head-tiny.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mailman-large.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mailman.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mm-icon.png': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/PythonPowered.png': Permission denied directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ca (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/uk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/it (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/es (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/en (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/da (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/eu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/no (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sv (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/tr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_CN (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/nl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ast (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_TW (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ko (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ro (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ja (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ru (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ia (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/gl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/vi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/lt (fixing) directory permissions must be 02775: 
/usr/local/cpanel/3rdparty/mailman/templates/cs (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt_BR (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/he (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ar (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/et (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/de (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches/config (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ca (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/uk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/it (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/es (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/da (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/eu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/no (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sv (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/tr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_CN (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/nl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ast (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_TW (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ko (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ro (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ja (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ru (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ia (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/gl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/vi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/lt (fixing) directory permissions 
must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/cs (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt_BR (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/he (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ar (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/et (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/de (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sr (fixing) Warning: Private archive directory is other-executable (o+x). This could allow other users on your system to read private archives. If you're on a shared multiuser system, you should consult the installation manual on how to fix this. Problems found: 76 Re-run as mailman (or root) with -f flag to fix must be privileged to use -u must be privileged to use -u Unable to touch file /var/cpanel/mailman2: Permission denied at /usr/local/cpanel/bin/mailman-install line 275. [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity 2254: Unable to set RLIMIT_AS to infinity at /usr/local/cpanel/Cpanel/Rlimit.pm line 113 Cpanel::Rlimit::set_rlimit_to_infinity() called at /usr/local/cpanel/scripts/restartsrv_mailman line 18 eval {...} called at /usr/local/cpanel/scripts/restartsrv_mailman line 15 warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity 2254: Unable to set RLIMIT_AS to infinity [2012-04-13 19:33:55 -0500] warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied at /usr/local/cpanel/Cpanel/SafeDir/MK.pm line 153 Cpanel::SafeDir::MK::safemkdir('/var/run/restartsrv/startup', 0700) called at /usr/local/cpanel/Cpanel/RestartSrv.pm line 756 Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47 warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup at /usr/local/cpanel/Cpanel/RestartSrv.pm line 759 Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47 warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup Ran this: /usr/local/cpanel/bin/mailman-install --force -f Same message as above. root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/logs/error Apr 13 19:33:55 2012 mailmanctl(2255): PID unreadable in: /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid Apr 13 19:33:55 2012 mailmanctl(2255): [Errno 2] No such file or directory: '/usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid' Apr 13 19:33:55 2012 mailmanctl(2255): Is qrunner even running? 
root@server2 [~]# /scripts/restartsrv_mailman --status mailmanctl (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start) running as mailman with PID 24070 3rdparty/mailman/bin/qrunner (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s) running as mailman with PID 24078 root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid 24070 root@server2 [~]# ls -lah /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid -rw-rw---- 1 mailman mailman 6 Apr 13 19:47 /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid root@server2 [~]# ps aux | grep python mailman 4557 0.0 0.1 10484 6044 ? S 19:40 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s mailman 24070 0.0 0.1 14268 4480 ? Ss 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start mailman 24071 1.1 0.1 14052 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=ArchRunner:0:1 -s mailman 24072 1.0 0.1 13444 6112 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=BounceRunner:0:1 -s mailman 24073 1.0 0.1 13040 6108 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=CommandRunner:0:1 -s mailman 24074 1.0 0.1 13484 6104 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=IncomingRunner:0:1 -s mailman 24075 1.0 0.1 12940 6136 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=NewsRunner:0:1 -s mailman 24076 1.0 0.1 13700 6172 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=OutgoingRunner:0:1 -s mailman 24077 1.0 0.1 13416 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=VirginRunner:0:1 -s mailman 24078 0.9 0.1 13944 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s root 24177 0.0 0.0 5428 756 pts/0 S+ 19:48 0:00 grep python
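    The "Problems found: 76 / Re-run as mailman (or root) with -f flag to fix" lines come from Mailman's check_perms tool, so one thing to try (a sketch; the path assumes cPanel's bundled Mailman) is running it directly as root with the fix flag and then restarting the service:

    /usr/local/cpanel/3rdparty/mailman/bin/check_perms -f
    /scripts/restartsrv_mailman

    If a second pass of check_perms still reports problems, the remaining entries usually indicate ownership (not just mode) issues on the reported paths.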

    Read the article

  • How do I get spring to inject my EntityManager?

    - by Trampas Kirk
    I'm following the guide here, but when the DAO executes, the EntityManager is null. I've tried a number of fixes I found in the comments on the guide, on various forums, and here (including this), to no avail. No matter what I seem to do the EntityManager remains null. Here are the relevant files, with packages etc changed to protect the innocent. spring-context.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd" xmlns:p="http://www.springframework.org/schema/p"> <context:component-scan base-package="com.group.server"/> <context:annotation-config/> <tx:annotation-driven/> <bean id="propertyPlaceholderConfigurer" class="com.group.DecryptingPropertyPlaceholderConfigurer" p:systemPropertiesModeName="SYSTEM_PROPERTIES_MODE_OVERRIDE"> <property name="locations"> <list> <value>classpath*:spring-*.properties</value> <value>classpath*:${application.environment}.properties</value> </list> </property> </bean> <bean id="orderDao" class="com.package.service.OrderDaoImpl"/> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="MyServer"/> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/> </property> <property name="dataSource" ref="dataSource"/> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="${com.group.server.vendoradapter.showsql}"/> <property name="generateDdl" value="${com.group.server.vendoradapter.generateDdl}"/> <property name="database" value="${com.group.server.vendoradapter.database}"/> </bean> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory"/> <property name="dataSource" ref="dataSource"/> </bean> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${com.group.server.datasource.driverClassName}"/> <property name="url" value="${com.group.server.datasource.url}"/> <property name="username" value="${com.group.server.datasource.username}"/> <property name="password" value="${com.group.server.datasource.password}"/> </bean> <bean id="executorService" class="java.util.concurrent.Executors" factory-method="newCachedThreadPool"/> </beans> persistence.xml <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0"> <persistence-unit name="MyServer" transaction-type="RESOURCE_LOCAL"/> </persistence> OrderDaoImpl package com.group.service; import com.group.model.Order; import org.springframework.stereotype.Repository; import org.springframework.transaction.annotation.Transactional; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import 
javax.persistence.Query; import java.util.List; @Repository @Transactional public class OrderDaoImpl implements OrderDao { private EntityManager entityManager; @PersistenceContext public void setEntityManager(EntityManager entityManager) { this.entityManager = entityManager; } @Override public Order find(Integer id) { Order order = entityManager.find(Order.class, id); return order; } @Override public List<Order> findAll() { Query query = entityManager.createQuery("select o from Order o"); return query.getResultList(); } @Override public List<Order> findBySymbol(String symbol) { Query query = entityManager.createQuery("select o from Order o where o.symbol = :symbol"); return query.setParameter("symbol", symbol).getResultList(); } }
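    A frequent cause of this symptom, offered as a sketch rather than a diagnosis: @PersistenceContext is only honored on beans that Spring itself creates, so an OrderDaoImpl constructed with new (or otherwise obtained outside the container) will keep a null EntityManager. Make sure the instance you call comes from the context, e.g. in a hypothetical bootstrap:

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    // Sketch: obtain the DAO from the container so injection has already run.
    ApplicationContext ctx = new ClassPathXmlApplicationContext("spring-context.xml");
    OrderDao orderDao = ctx.getBean("orderDao", OrderDao.class);
    List<Order> orders = orderDao.findAll(); // entityManager is non-null here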

    Read the article

  • Why Does .Hide()ing and .Show()ing Panels in wxPython Result in the Sizer Changing the Layout?

    - by MetaHyperBolic
    As referenced in my previous question, I am trying to make something slightly wizard-like in function. I have settled on a single frame with a sizer added to it. I build panels for each of the screens I would like users to see, add them to the frame's sizer, then switch between panels by .Hide()ing one panel, then calling a custom .ShowYourself() on the next panel. Obviously, I would like the buttons to remain in the same place as the user progresses through the process. I have linked together two panels in an infinite loop by their "Back" and "Next" buttons so you can see what is going on. The first panel looks great; tom10's code worked on that level, as it eschewed my initial, over-fancy attempt with borders flying every which way. And then the second panel seems to have shrunk down to the bare minimum. As we return to the first panel, the shrinkage has occurred here as well. Why does it look fine on the first panel, but not after I return there? Why is calling .Fit() necessary if I do not want a 10 pixel by 10 pixel wad of grey? And if it is necessary, why does .Fit() give inconsistent results? This infinite loop seems to characterize my experience with this: I fix the layout on a panel, only to find that switching ruins the layout for other panels. I fix that problem, by using sizer_h.Add(self.panel1, 0) instead of sizer_h.Add(self.panel1, 1, wx.EXPAND), and now my layouts are off again. So far, my "solution" is to add a mastersizer.SetMinSize((475, 592)) to each panel's master sizer (commented out in the code below). This is a cruddy solution because 1) I have had to find the numbers that work by trial and error (-5 pixels for the width, -28 pixels for the height). 2) I don't understand why the underlying issue still happens. What's the correct, non-ugly solution? Instead of adding all of the panels to the frame's sizer at once, should switching panels involve .Detach()ing that panel from the frame's sizer and then .Add()ing the next panel to the frame's sizer? Is there a .JustMakeThisFillThePanel() method hiding somewhere I have missed in both the wxWidgets and the wxPython documents online? I'm obviously missing something in my mental model of layout. Here's a TinyURL link, if I can't manage to embed the . Minimalist code pasted below. import wx import sys class My_App(wx.App): def OnInit(self): self.frame = My_Frame(None) self.frame.Show() self.SetTopWindow(self.frame) return True def OnExit(self): print 'Dying ...' 
class My_Frame(wx.Frame): def __init__(self, image, parent=None,id=-1, title='Generic Title', pos=wx.DefaultPosition, style=wx.CAPTION | wx.STAY_ON_TOP): size = (480, 620) wx.Frame.__init__(self, parent, id, 'Program Title', pos, size, style) sizer_h = wx.BoxSizer(wx.HORIZONTAL) self.panel0 = User_Interaction0(self) sizer_h.Add(self.panel0, 1, wx.EXPAND) self.panel1 = User_Interaction1(self) sizer_h.Add(self.panel1, 1, wx.EXPAND) self.SetSizer(sizer_h) self.panel0.ShowYourself() def ShutDown(self): self.Destroy() class User_Interaction0(wx.Panel): def __init__(self, parent, id=-1): wx.Panel.__init__(self, parent, id) # master sizer for the whole panel mastersizer = wx.BoxSizer(wx.VERTICAL) #mastersizer.SetMinSize((475, 592)) mastersizer.AddSpacer(15) # build the top row txtHeader = wx.StaticText(self, -1, 'Welcome to This Boring\nProgram', (0, 0)) font = wx.Font(16, wx.DEFAULT, wx.NORMAL, wx.BOLD) txtHeader.SetFont(font) txtOutOf = wx.StaticText(self, -1, '1 out of 7', (0, 0)) rowtopsizer = wx.BoxSizer(wx.HORIZONTAL) rowtopsizer.Add(txtHeader, 3, wx.ALIGN_LEFT) rowtopsizer.Add((0,0), 1) rowtopsizer.Add(txtOutOf, 0, wx.ALIGN_RIGHT) mastersizer.Add(rowtopsizer, 0, flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=15) # build the middle row text = 'PANEL 0\n\n' text = text + 'This could be a giant blob of explanatory text.\n' txtBasic = wx.StaticText(self, -1, text) font = wx.Font(11, wx.DEFAULT, wx.NORMAL, wx.NORMAL) txtBasic.SetFont(font) mastersizer.Add(txtBasic, 1, flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=15) # build the bottom row btnBack = wx.Button(self, -1, 'Back') self.Bind(wx.EVT_BUTTON, self.OnBack, id=btnBack.GetId()) btnNext = wx.Button(self, -1, 'Next') self.Bind(wx.EVT_BUTTON, self.OnNext, id=btnNext.GetId()) btnCancelExit = wx.Button(self, -1, 'Cancel and Exit') self.Bind(wx.EVT_BUTTON, self.OnCancelAndExit, id=btnCancelExit.GetId()) rowbottomsizer = wx.BoxSizer(wx.HORIZONTAL) rowbottomsizer.Add(btnBack, 0, wx.ALIGN_LEFT) rowbottomsizer.AddSpacer(5) rowbottomsizer.Add(btnNext, 0) rowbottomsizer.AddSpacer(5) rowbottomsizer.AddStretchSpacer(1) rowbottomsizer.Add(btnCancelExit, 0, wx.ALIGN_RIGHT) mastersizer.Add(rowbottomsizer, flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=15) # finish master sizer mastersizer.AddSpacer(15) self.SetSizer(mastersizer) self.Raise() self.SetPosition((0,0)) self.Fit() self.Hide() def ShowYourself(self): self.Raise() self.SetPosition((0,0)) self.Fit() self.Show() def OnBack(self, event): self.Hide() self.GetParent().panel1.ShowYourself() def OnNext(self, event): self.Hide() self.GetParent().panel1.ShowYourself() def OnCancelAndExit(self, event): self.GetParent().ShutDown() class User_Interaction1(wx.Panel): def __init__(self, parent, id=-1): wx.Panel.__init__(self, parent, id) # master sizer for the whole panel mastersizer = wx.BoxSizer(wx.VERTICAL) #mastersizer.SetMinSize((475, 592)) mastersizer.AddSpacer(15) # build the top row txtHeader = wx.StaticText(self, -1, 'Read about This Boring\nProgram', (0, 0)) font = wx.Font(16, wx.DEFAULT, wx.NORMAL, wx.BOLD) txtHeader.SetFont(font) txtOutOf = wx.StaticText(self, -1, '2 out of 7', (0, 0)) rowtopsizer = wx.BoxSizer(wx.HORIZONTAL) rowtopsizer.Add(txtHeader, 3, wx.ALIGN_LEFT) rowtopsizer.Add((0,0), 1) rowtopsizer.Add(txtOutOf, 0, wx.ALIGN_RIGHT) mastersizer.Add(rowtopsizer, 0, flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=15) # build the middle row text = 'PANEL 1\n\n' text = text + 'This could be a giant blob of boring text.\n' txtBasic = wx.StaticText(self, -1, text) font = wx.Font(11, wx.DEFAULT, wx.NORMAL, 
wx.NORMAL) txtBasic.SetFont(font) mastersizer.Add(txtBasic, 1, flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=15) # build the bottom row btnBack = wx.Button(self, -1, 'Back') self.Bind(wx.EVT_BUTTON, self.OnBack, id=btnBack.GetId()) btnNext = wx.Button(self, -1, 'Next') self.Bind(wx.EVT_BUTTON, self.OnNext, id=btnNext.GetId()) btnCancelExit = wx.Button(self, -1, 'Cancel and Exit') self.Bind(wx.EVT_BUTTON, self.OnCancelAndExit, id=btnCancelExit.GetId()) rowbottomsizer = wx.BoxSizer(wx.HORIZONTAL) rowbottomsizer.Add(btnBack, 0, wx.ALIGN_LEFT) rowbottomsizer.AddSpacer(5) rowbottomsizer.Add(btnNext, 0) rowbottomsizer.AddSpacer(5) rowbottomsizer.AddStretchSpacer(1) rowbottomsizer.Add(btnCancelExit, 0, wx.ALIGN_RIGHT) mastersizer.Add(rowbottomsizer, flag=wx.EXPAND | wx.LEFT | wx.RIGHT, border=15) # finish master sizer mastersizer.AddSpacer(15) self.SetSizer(mastersizer) self.Raise() self.SetPosition((0,0)) self.Fit() self.Hide() def ShowYourself(self): self.Raise() self.SetPosition((0,0)) self.Fit() self.Show() def OnBack(self, event): self.Hide() self.GetParent().panel0.ShowYourself() def OnNext(self, event): self.Hide() self.GetParent().panel0.ShowYourself() def OnCancelAndExit(self, event): self.GetParent().ShutDown() def main(): app = My_App(redirect = False) app.MainLoop() if __name__ == '__main__': main()
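    One commonly suggested adjustment (a sketch against the code above): Fit() shrinks a window to its sizer's minimum size, which is consistent with both the 10x10 grey wad and the panels shrinking after a switch. Since the panels are already added to the frame's sizer with wx.EXPAND, letting the frame re-run its own sizer avoids the shrink entirely:

    def ShowYourself(self):
        self.Show()
        # Re-run the frame's sizer instead of shrinking this panel to fit;
        # the SetPosition() and Fit() calls become unnecessary.
        self.GetParent().Layout()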

    Read the article

  • DTracing a PHPUnit Test: Looking at Functional Programming

    - by cj
    Here's a quick example of using DTrace Dynamic Tracing to work out what a PHP code base does. I was reading the article Functional Programming in PHP by Patkos Csaba and wondering how efficient this style of programming is. I thought this would be a good time to fire up DTrace and see what is going on. Since DTrace is "always available", even on production machines (once PHP is compiled with --enable-dtrace), this was easy to do. I have Oracle Linux with the UEK3 kernel and PHP 5.5 with DTrace static probes enabled, as described in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. I installed the Functional Programming sample code and Sebastian Bergmann's PHPUnit. Although PHPUnit is included in the Functional Programming example, I found it easier to separately download and use its phar file:

    cd ~/Desktop
    wget -O master.zip https://github.com/tutsplus/functional-programming-in-php/archive/master.zip
    wget https://phar.phpunit.de/phpunit.phar
    unzip master.zip

    I created a DTrace D script functree.d:

    #pragma D option quiet

    self int indent;

    BEGIN
    {
        topfunc = $1;
    }

    php$target:::function-entry
    /copyinstr(arg0) == topfunc/
    {
        self->follow = 1;
    }

    php$target:::function-entry
    /self->follow/
    {
        self->indent += 2;
        printf("%*s %s%s%s\n", self->indent, "->",
            arg3 ? copyinstr(arg3) : "", arg4 ? copyinstr(arg4) : "", copyinstr(arg0));
    }

    php$target:::function-return
    /self->follow/
    {
        printf("%*s %s%s%s\n", self->indent, "<-",
            arg3 ? copyinstr(arg3) : "", arg4 ? copyinstr(arg4) : "", copyinstr(arg0));
        self->indent -= 2;
    }

    php$target:::function-return
    /copyinstr(arg0) == topfunc/
    {
        self->follow = 0;
    }

    This prints a PHP script function call tree starting from a given PHP function name. This name is passed as a parameter to DTrace and assigned to the variable topfunc when the DTrace script starts. With this D script, choose a PHP function that isn't recursive, or modify the script to set self->follow = 0 only when all calls to that function have unwound. From looking at the sample FunSets.php code and its PHPUnit test driver FunSetsTest.php, I settled on one test function to trace:

    function testUnionContainsAllElements() { ... }

    I invoked DTrace to trace function calls invoked by this test with:

    # dtrace -s ./functree.d -c 'php phpunit.phar \
        /home/cjones/Desktop/functional-programming-in-php-master/FunSets/Tests/FunSetsTest.php' \
        '"testUnionContainsAllElements"'

    The core of this command is a call to PHP to run PHPUnit on the FunSetsTest.php script. Outside that, DTrace is called and the PID of PHP is passed to the D script $target variable so the probes fire just for this invocation of PHP. Note the quoting around the PHP function name passed to DTrace: the parameter must include the double quotes so DTrace knows it is a string. The output is:

    PHPUnit 3.7.28 by Sebastian Bergmann.
......-> FunSetsTest::testUnionContainsAllElements -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::singletonSet <- FunSets::singletonSet -> FunSets::union <- FunSets::union -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertTrue -> PHPUnit_Framework_Assert::isTrue <- PHPUnit_Framework_Assert::isTrue -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint_IsTrue::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertTrue -> FunSets::contains -> FunSets::{closure} -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains -> FunSets::contains -> FunSets::{closure} <- FunSets::{closure} <- FunSets::contains <- FunSets::{closure} <- FunSets::contains -> PHPUnit_Framework_Assert::assertFalse -> PHPUnit_Framework_Assert::isFalse -> {closure} -> main <- main <- {closure} <- PHPUnit_Framework_Assert::isFalse -> PHPUnit_Framework_Assert::assertThat -> PHPUnit_Framework_Constraint::count <- PHPUnit_Framework_Constraint::count -> PHPUnit_Framework_Constraint::evaluate -> PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint_IsFalse::matches <- PHPUnit_Framework_Constraint::evaluate <- PHPUnit_Framework_Assert::assertThat <- PHPUnit_Framework_Assert::assertFalse <- FunSetsTest::testUnionContainsAllElements ... Time: 1.85 seconds, Memory: 3.75Mb OK (9 tests, 23 assertions) The periods correspond to the successful tests before and after (and from) the test I was tracing. You can see the function entry ("->") and return ("<-") points. Cross checking with the testUnionContainsAllElements() source code confirms the two singletonSet() calls, one union() call, two assertTrue() calls and finally an assertFalse() call. These assertions have a contains() call as a parameter, so contains() is called before the PHPUnit assertion functions are run. You can see contains() being called recursively, and how the closures are invoked. If you want to focus on the application logic and suppress the PHPUnit function trace, you could turn off tracing when assertions are being checked by adding D clauses checking the entry and exit of assertFalse() and assertTrue(). 
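    A sketch of those suppression clauses (assuming the same probe arguments as functree.d above, with arg0 carrying the function name; clause order matters in D, so these would go before the printing clauses):

    php$target:::function-entry
    /self->follow && (copyinstr(arg0) == "assertTrue" || copyinstr(arg0) == "assertFalse")/
    {
        /* stop printing until this assertion returns */
        self->follow = 0;
        self->inassert = 1;
    }

    php$target:::function-return
    /self->inassert && (copyinstr(arg0) == "assertTrue" || copyinstr(arg0) == "assertFalse")/
    {
        /* resume printing after the assertion */
        self->follow = 1;
        self->inassert = 0;
    }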
    But if you want to see all of PHPUnit's code flow, you can modify the functree.d code that sets and unsets self->follow, and instead toggle the variable in request-startup and request-shutdown probes:

    php$target:::request-startup
    {
        self->follow = 1;
    }

    php$target:::request-shutdown
    {
        self->follow = 0;
    }

    Be prepared for a large amount of output!

    Read the article

  • Saving State Dynamic UserControls...Help!

    - by Cognitronic
    I have page with a LinkButton on it that when clicked, I'd like to add a Usercontrol to the page. I need to be able to add/remove as many controls as the user would like. The Usercontrol consists of three dropdownlists. The first dropdownlist has it's auotpostback property set to true and hooks up the OnSelectedIndexChanged event that when fired will load the remaining two dropdownlists with the appropriate values. My problem is that no matter where I put the code in the host page, the usercontrol is not being loaded properly. I know I have to recreate the usercontrols on every postback and I've created a method that is being executed in the hosting pages OnPreInit method. I'm still getting the following error: The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases. Here is my code: Thank you!!!! bool createAgain = false; IList<FilterOptionsCollectionView> OptionControls { get { if (SessionManager.Current["controls"] != null) return (IList<FilterOptionsCollectionView>)SessionManager.Current["controls"]; else SessionManager.Current["controls"] = new List<FilterOptionsCollectionView>(); return (IList<FilterOptionsCollectionView>)SessionManager.Current["controls"]; } set { SessionManager.Current["controls"] = value; } } protected void Page_Load(object sender, EventArgs e) { Master.Page.Title = Title; LoadViewControls(Master.MainContent, Master.SideBar, Master.ToolBarContainer); } protected override void OnPreInit(EventArgs e) { base.OnPreInit(e); System.Web.UI.MasterPage m = Master; Control control = GetPostBackControl(this); if ((control != null && control.ClientID == (lbAddAndCondtion.ClientID) || createAgain)) { createAgain = true; CreateUserControl(control.ID); } } protected void AddAndConditionClicked(object o, EventArgs e) { var control = LoadControl("~/Views/FilterOptionsCollectionView.ascx"); OptionControls.Add((FilterOptionsCollectionView)control); control.ID = "options" + OptionControls.Count.ToString(); phConditions.Controls.Add(control); } public event EventHandler<Insight.Presenters.PageViewArg> OnLoadData; private Control FindControlRecursive(Control root, string id) { if (root.ID == id) { return root; } foreach (Control c in root.Controls) { Control t = FindControlRecursive(c, id); if (t != null) { return t; } } return null; } protected Control GetPostBackControl(System.Web.UI.Page page) { Control control = null; string ctrlname = Page.Request.Params["__EVENTTARGET"]; if (ctrlname != null && ctrlname != String.Empty) { control = FindControlRecursive(page, ctrlname.Split('$')[2]); } else { string ctrlStr = String.Empty; Control c = null; foreach (string ctl in Page.Request.Form) { if (ctl.EndsWith(".x") || ctl.EndsWith(".y")) { ctrlStr = ctl.Substring(0, ctl.Length - 2); c = page.FindControl(ctrlStr); } else { c = page.FindControl(ctl); } if (c is System.Web.UI.WebControls.CheckBox || c is System.Web.UI.WebControls.CheckBoxList) { control = c; break; } } } return control; } protected void CreateUserControl(string controlID) { try { if (createAgain && phConditions != null) { if (OptionControls.Count > 0) { phConditions.Controls.Clear(); foreach (var c in OptionControls) { phConditions.Controls.Add(c); } } } } catch (Exception ex) { throw ex; } } Here is the usercontrol's code: <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="FilterOptionsCollectionView.ascx.cs" Inherits="Insight.Website.Views.FilterOptionsCollectionView" %> namespace Insight.Website.Views { [ViewStateModeById] public partial class 
FilterOptionsCollectionView : System.Web.UI.UserControl { protected void Page_Load(object sender, EventArgs e) { } protected override void OnInit(EventArgs e) { LoadColumns(); ddlColumns.SelectedIndexChanged += new RadComboBoxSelectedIndexChangedEventHandler(ColumnsSelectedIndexChanged); base.OnInit(e); } protected void ColumnsSelectedIndexChanged(object o, EventArgs e) { LoadCriteria(); } public void LoadColumns() { ddlColumns.DataSource = User.GetItemSearchProperties(); ddlColumns.DataTextField = "SearchColumn"; ddlColumns.DataValueField = "CriteriaSearchControlType"; ddlColumns.DataBind(); LoadCriteria(); } private void LoadCriteria() { var controlType = User.GetItemSearchProperties()[ddlColumns.SelectedIndex].CriteriaSearchControlType; var ops = User.GetItemSearchProperties()[ddlColumns.SelectedIndex].ValidOperators; ddlOperators.DataSource = ops; ddlOperators.DataTextField = "key"; ddlOperators.DataValueField = "value"; ddlOperators.DataBind(); switch (controlType) { case ResourceStrings.ViewFilter_ControlTypes_DDL: criteriaDDL.Visible = true; criteriaText.Visible = false; var crit = User.GetItemSearchProperties()[ddlColumns.SelectedIndex].SearchCriteria; ddlCriteria.DataSource = crit; ddlCriteria.DataBind(); break; case ResourceStrings.ViewFilter_ControlTypes_Text: criteriaDDL.Visible = false; criteriaText.Visible = true; break; } } public event EventHandler OnColumnChanged; public ISearchCriterion FilterOptionsValues { get; set; } } }
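    A pattern that usually stabilizes pages like this (a sketch only, and it assumes you persist just the number of added controls — here as a hypothetical ConditionCount property backed by ViewState or session — rather than the control instances themselves, since control objects cannot safely be reused across requests): recreate the same number of controls, with the same IDs, in OnInit on every request, so ViewState and events rebind to fresh instances before Load fires.

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Recreate the dynamic controls with stable IDs on every request.
        for (int i = 1; i <= ConditionCount; i++)
        {
            var c = (FilterOptionsCollectionView)LoadControl("~/Views/FilterOptionsCollectionView.ascx");
            c.ID = "options" + i;
            phConditions.Controls.Add(c);
        }
    }

    Because fresh instances are built each time, nothing from a previous request's control tree gets re-parented mid-lifecycle, which is what typically triggers the "control collection cannot be modified" error when controls are cached in session and re-added from OnPreInit.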

    Read the article

  • Modelling boost::Lockable with semaphore rather than mutex (previously titled: Unlocking a mutex fr

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread which locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example:

    class inter_thread_mutex {
        bool locked;
        boost::mutex mx;
        boost::condition_variable cv;
    public:
        void lock() {
            boost::unique_lock<boost::mutex> lck(mx);
            while (locked) cv.wait(lck);
            locked = true;
        }
        void unlock() {
            {
                boost::lock_guard<boost::mutex> lck(mx);
                if (!locked) error();
                locked = false;
            }
            cv.notify_one();
        }
        // bool try_lock(); void error(); etc.
    };

    I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), the first thread may acquire the lock ahead of other threads which are waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables.) But let's just ignore that (and any other bugs) for now. My question is, if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept? For example, would anything go wrong if I use a boost::unique_lock<inter_thread_mutex> for RAII-style access, and then pass this lock to boost::condition_variable_any.wait(), etc.? On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect. EDIT: The kind of behavior I want is basically as follows. I have an object, and it needs to be locked whenever it is modified. I want to lock the object from one thread, and do some work on it. Then I want to keep the object locked while I tell another worker thread to complete the work. So the first thread can go on and do something else while the worker thread finishes up. When the worker thread gets done, it unlocks the mutex. And I want the transition to be seamless so nobody else can get the mutex lock in between when thread 1 starts the work and thread 2 completes it. Something like inter_thread_mutex seems like it would work, and it would also allow the program to interact with it as if it were an ordinary mutex. So it seems like a clean solution. If there's a better solution, I'd be happy to hear that also. EDIT AGAIN: The reason I need locks to begin with is that there are multiple master threads, and the locks are there to prevent them from accessing shared objects concurrently in invalid ways. So the code already uses loop-level lock-free sequencing of operations at the master thread level. Also, in the original implementation, there were no worker threads, and the mutexes were ordinary kosher mutexes. The inter_thread_thingy came up as an optimization, primarily to improve response time. In many cases, it was sufficient to guarantee that the "first part" of operation A occurs before the "first part" of operation B. As a dumb example, say I punch object 1 and give it a black eye. Then I tell object 1 to change its internal structure to reflect all the tissue damage. I don't want to wait around for the tissue damage before I move on to punch object 2.
However, I do want the tissue damage to occur as part of the same operation; for example, in the interim, I don't want any other thread to reconfigure the object in such a way that would make tissue damage an invalid operation. (yes, this example is imperfect in many ways, and no I'm not working on a game) So we made the change to a model where ownership of an object can be passed to a worker thread to complete an operation, and it actually works quite nicely; each master thread is able to get a lot more operations done because it doesn't need to wait for them all to complete. And, since the event sequencing at the master thread level is still loop-based, it is easy to write high-level master-thread operations, as they can be based on the assumption that an operation is complete when the corresponding function call returns. Finally, I thought it would be nice to use inter_thread mutex/semaphore thingies using RAII with boost locks to encapsulate the necessary synchronization that is required to make the whole thing work.
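    To make the intended hand-off concrete, here is a minimal sketch using inter_thread_mutex from above (punch, dispatch_to_worker, and finish_tissue_damage are hypothetical stand-ins for the operations described):

    inter_thread_mutex mx;

    void master_thread(Object& obj) {
        mx.lock();                  // begin the operation; obj is now protected
        punch(obj);                 // "first part" runs on the master thread
        dispatch_to_worker(obj);    // worker finishes up and will unlock
        // return immediately; the object stays locked across the hand-off
    }

    void worker_thread(Object& obj) {
        finish_tissue_damage(obj);  // "second part"
        mx.unlock();                // legal here: unlock() has no ownership check
    }

    Whether boost::unique_lock<inter_thread_mutex> and condition_variable_any remain safe across such a hand-off is exactly the open question, since unique_lock's owns_lock() bookkeeping stays with the thread that constructed it.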

    Read the article

  • Searching a set of data with multiple terms using Linq

    - by Cj Anderson
    I'm in the process of moving from ADO.NET to Linq. The application is a directory search program to look people up. The users are allowed to type the search criteria into a single textbox. They can separate each term with a space, or wrap a phrase in quotes, such as "park place", to indicate that it is one term. Behind the scenes the data comes from an XML file that has about 90,000 records in it and is about 65 megs. I load the data into a DataTable and then use the .Select method with a SQL query to perform the searches. The query I pass is built from the search terms the user passed. I split the string from the textbox into an array using a regular expression that splits on spaces, except that a phrase wrapped in quotes becomes its own element in the array. I then end up with a single-dimension array with x number of elements, which I iterate over to build a long query. I then build the search expression below:

    query = query & _
        "((userid LIKE '" & tempstr & "%') OR " & _
        "(nickname LIKE '" & tempstr & "%') OR " & _
        "(lastname LIKE '" & tempstr & "%') OR " & _
        "(firstname LIKE '" & tempstr & "%') OR " & _
        "(department LIKE '" & tempstr & "%') OR " & _
        "(telephoneNumber LIKE '" & tempstr & "%') OR " & _
        "(email LIKE '" & tempstr & "%') OR " & _
        "(Office LIKE '" & tempstr & "%'))"

    Each term gets a set of the above query. If there is more than one term, I put an AND in between and build another query like the above with the next term. I'm not sure how to do this in Linq. So far, I've got the XML file loading correctly. I'm able to search it with specific criteria, but I'm not sure how to best implement the search over multiple terms.

    'this works but is far too simple to get the job done
    Dim results = From c In m_DataSet...<Users> _
                  Where c.<userid>.Value = "XXXX" _
                  Select c

    The above code also doesn't use the LIKE operator, so partial matches don't work. It looks like what I'd want to use is .StartsWith, but that appears to be only in Linq2SQL. Any guidance would be appreciated. I'm new to Linq, so I might be missing a simple way to do this. The XML file looks like so:

    <?xml version="1.0" standalone="yes"?>
    <theusers>
      <Users>
        <userid>person1</userid>
        <nickname></nickname>
        <lastname></lastname>
        <firstname></firstname>
        <department></department>
        <telephoneNumber></telephoneNumber>
        <email></email>
      </Users>
      <Users>
        <userid>person2</userid>
        <nickname></nickname>
        <lastname></lastname>
        <firstname></firstname>
        <department></department>
        <telephoneNumber></telephoneNumber>
        <email></email>
      </Users>
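    For what it's worth, String.StartsWith is not limited to Linq2SQL; it works in LINQ to XML/Objects as well. A sketch of the multi-term search (assuming terms already holds the split search terms and every <Users> element has the child nodes shown in the sample):

    Dim results As IEnumerable(Of XElement) = m_DataSet...<Users>
    For Each term In terms
        Dim t = term ' copy: avoids the captured-loop-variable pitfall
        ' Each extra Where is an AND between terms; the OrElse chain is the
        ' OR across fields, mirroring the SQL LIKE 'term%' expression above.
        results = results.Where(Function(c) _
            c.<userid>.Value.StartsWith(t) OrElse _
            c.<nickname>.Value.StartsWith(t) OrElse _
            c.<lastname>.Value.StartsWith(t) OrElse _
            c.<firstname>.Value.StartsWith(t))
    Next

    The remaining fields (department, telephoneNumber, email, Office) would extend the OrElse chain in the same way.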

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering a list using ModelMetadata

    - by rajbk
    This post looks at how to control paging, sorting and filtering when displaying a list of data by specifying attributes in your Model using the ASP.NET MVC framework and the excellent MVCContrib library. It also shows how to hide/show columns and control the formatting of data using attributes.  This uses the Northwind database. A sample project is attached at the end of this post. Let’s start by looking at a class called ProductViewModel. The properties in the class are decorated with attributes. The OrderBy attribute tells the system that the Model can be sorted using that property. The SearchFilter attribute tells the system that filtering is allowed on that property. Filtering type is set by the  FilterType enum which currently supports Equals and Contains. The ScaffoldColumn property specifies if a column is hidden or not The DisplayFormat specifies how the data is formatted. public class ProductViewModel { [OrderBy(IsDefault = true)] [ScaffoldColumn(false)] public int? ProductID { get; set; }   [SearchFilter(FilterType.Contains)] [OrderBy] [DisplayName("Product Name")] public string ProductName { get; set; }   [OrderBy] [DisplayName("Unit Price")] [DisplayFormat(DataFormatString = "{0:c}")] public System.Nullable<decimal> UnitPrice { get; set; }   [DisplayName("Category Name")] public string CategoryName { get; set; }   [SearchFilter] [ScaffoldColumn(false)] public int? CategoryID { get; set; }   [SearchFilter] [ScaffoldColumn(false)] public int? SupplierID { get; set; }   [OrderBy] public bool Discontinued { get; set; } } Before we explore the code further, lets look at the UI.  The UI has a section for filtering the data. The column headers with links are sortable. Paging is also supported with the help of a pager row. The pager is rendered using the MVCContrib Pager component. The data is displayed using a customized version of the MVCContrib Grid component. The customization was done in order for the Grid to be aware of the attributes mentioned above. Now, let’s look at what happens when we perform actions on this page. The diagram below shows the process: The form on the page has its method set to “GET” therefore we see all the parameters in the query string. The query string is shown in blue above. This query gets routed to an action called Index with parameters of type ProductViewModel and PageSortOptions. The parameters in the query string get mapped to the input parameters using model binding. The ProductView object created has the information needed to filter data while the PageAndSorting object is used for paging and sorting the data. The last block in the figure above shows how the filtered and paged list is created. We receive a product list from our product repository (which is of type IQueryable) and first filter it by calliing the AsFiltered extension method passing in the productFilters object and then call the AsPagination extension method passing in the pageSort object. The AsFiltered extension method looks at the type of the filter instance passed in. It skips properties in the instance that do not have the SearchFilter attribute. For properties that have the SearchFilter attribute, it adds filter expression trees to filter against the IQueryable data. The AsPagination extension method looks at the type of the IQueryable and ensures that the column being sorted on has the OrderBy attribute. If it does not find one, it looks for the default sort field [OrderBy(IsDefault = true)]. 
    It is required that at least one attribute in your model has [OrderBy(IsDefault = true)]. This is because a person could be performing paging without specifying an order by column. As you may recall, the LINQ Skip method requires that you call an OrderBy method before it. Therefore we need a default order by column to perform paging. The extension method adds an order expression tree to the IQueryable and calls the MVCContrib AsPagination extension method to page the data.

    Implementation Notes

    Auto Postback
    The search filter region automatically performs a GET request any time the dropdown selection is changed. This is implemented using the following jQuery snippet:

    $(document).ready(function () {
        $("#productSearch").change(function () {
            this.submit();
        });
    });

    Strongly Typed View
    The code used in the Action method is shown below:

    public ActionResult Index(ProductViewModel productFilters, PageSortOptions pageSortOptions)
    {
        var productPagedList = productRepository.GetProductsProjected().AsFiltered(productFilters).AsPagination(pageSortOptions);

        var productViewFilterContainer = new ProductViewFilterContainer();
        productViewFilterContainer.Fill(productFilters.CategoryID, productFilters.SupplierID, productFilters.ProductName);

        var gridSortOptions = new GridSortOptions { Column = pageSortOptions.Column, Direction = pageSortOptions.Direction };

        var productListContainer = new ProductListContainerModel
        {
            ProductPagedList = productPagedList,
            ProductViewFilterContainer = productViewFilterContainer,
            GridSortOptions = gridSortOptions
        };

        return View(productListContainer);
    }

    As you see above, the object that is returned to the view is of type ProductListContainerModel. This contains all the information needed for the view to render the Search filter section (including dropdowns), the Html.Pager (MVCContrib) and the Html.Grid (from MVCContrib). It also stores the state of the search filters so that they can recreate themselves when the page reloads (Viewstate, I miss you! :0). The class diagram for the container class is shown below.

    Custom MVCContrib Grid
    The MVCContrib grid default behavior was overridden so that it would auto generate the columns, format the columns based on the metadata, and be aware of our custom attributes (see MetaDataGridModel in the sample code). The Grid ensures that ShowForDisplay on the column is set to true (this can also be set by the ScaffoldColumn attribute; ref: http://bradwilson.typepad.com/blog/2009/10/aspnet-mvc-2-templates-part-2-modelmetadata.html). Column headers are set using the DisplayName attribute. Column sorting is set using the OrderBy attribute. The data is formatted using the DisplayFormat attribute.

    Generic Extension methods for Sorting and Filtering
    The extension method AsFiltered takes in an IQueryable<T> and uses expression trees to query against the IQueryable data. The query is constructed using the Model metadata and the properties of the T filter (productFilters in our case). Properties in the Model that do not have the SearchFilter attribute are skipped when creating the filter expression tree. It returns an IQueryable<T>. The extension method AsPagination takes in an IQueryable<T> and first ensures that the column being sorted on has the OrderBy attribute. If not, we look for the default OrderBy column ([OrderBy(IsDefault = true)]). We then build an expression tree to sort on this column. We finally hand off the call to the MVCContrib AsPagination, which returns an IPagination<T>.
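    To make the AsFiltered idea more concrete, here is a minimal sketch of how attribute-driven filtering can be composed with expression trees. This is a simplified illustration, not the MVCContrib-integrated implementation from the attached sample; the SearchFilterAttribute and FilterType definitions below are cut-down stand-ins for the ones in the sample code.

    using System;
    using System.Linq;
    using System.Linq.Expressions;
    using System.Reflection;

    public enum FilterType { Equals, Contains }

    [AttributeUsage(AttributeTargets.Property)]
    public class SearchFilterAttribute : Attribute
    {
        public FilterType FilterType { get; private set; }
        public SearchFilterAttribute() : this(FilterType.Equals) { }
        public SearchFilterAttribute(FilterType filterType) { FilterType = filterType; }
    }

    public static class FilterExtensions
    {
        // Adds one Where(...) clause per filter property that is decorated
        // with [SearchFilter] and carries a non-null value.
        public static IQueryable<T> AsFiltered<T>(this IQueryable<T> source, T filter)
        {
            foreach (PropertyInfo prop in typeof(T).GetProperties())
            {
                var attr = (SearchFilterAttribute)Attribute.GetCustomAttribute(prop, typeof(SearchFilterAttribute));
                object value = prop.GetValue(filter, null);
                if (attr == null || value == null) continue;

                // Build x => x.Prop == value, or x => x.Prop.Contains(value) for strings.
                ParameterExpression x = Expression.Parameter(typeof(T), "x");
                MemberExpression member = Expression.Property(x, prop);
                Expression body = attr.FilterType == FilterType.Contains
                    ? (Expression)Expression.Call(member,
                          typeof(string).GetMethod("Contains", new[] { typeof(string) }),
                          Expression.Constant(value))
                    : Expression.Equal(member, Expression.Constant(value, member.Type));

                source = source.Where(Expression.Lambda<Func<T, bool>>(body, x));
            }
            return source;
        }
    }

    The real implementation additionally reads the filter metadata through the ModelMetadata provider described below, but the expression-tree plumbing is essentially the same.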
    The IPagination<T> instance, as you can see in the class diagram above, is passed to the view and used by the MVCContrib Grid and Pager components.

    Custom Provider
    To get the system to recognize our custom attributes, we create our own MetadataProvider as mentioned in this article (http://bradwilson.typepad.com/blog/2010/01/why-you-dont-need-modelmetadataattributes.html):

    protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName)
    {
        ModelMetadata metadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);

        SearchFilterAttribute searchFilterAttribute = attributes.OfType<SearchFilterAttribute>().FirstOrDefault();
        if (searchFilterAttribute != null)
        {
            metadata.AdditionalValues.Add(Globals.SearchFilterAttributeKey, searchFilterAttribute);
        }

        OrderByAttribute orderByAttribute = attributes.OfType<OrderByAttribute>().FirstOrDefault();
        if (orderByAttribute != null)
        {
            metadata.AdditionalValues.Add(Globals.OrderByAttributeKey, orderByAttribute);
        }

        return metadata;
    }

    We register our MetadataProvider in Global.asax.cs:

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();

        RegisterRoutes(RouteTable.Routes);

        ModelMetadataProviders.Current = new MvcFlan.QueryModelMetaDataProvider();
    }

    Bugs, comments and suggestions are welcome! You can download the sample code below. This code is purely experimental. Use at your own risk.

    Download Sample Code (VS 2010 RTM) MVCNorthwindSales.zip

    Read the article

  • Making Sense of ASP.NET Paths

    - by Rick Strahl
    ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004, and I find myself frequently going back to that page to quickly figure out which path I'm looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited on this blog even to this date, to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

    Request Object Paths Available

    Here's a list of the path-related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below, where webstore is the name of the virtual directory.

    ApplicationPath – Returns the web root-relative logical path to the virtual root of this app. /webstore/

    PhysicalApplicationPath – Returns the local file system path of the virtual root for this app. c:\inetpub\wwwroot\webstore

    PhysicalPath – Returns the local file system path to the current script or path. c:\inetpub\wwwroot\webstore\admin\paths.aspx

    Path, FilePath, CurrentExecutionFilePath – All of these return the full root-relative logical path to the script page including path and script name. CurrentExecutionFilePath will return the 'current' request path after a Transfer/Execute call, while FilePath will always return the original request's path. /webstore/admin/paths.aspx

    AppRelativeCurrentExecutionFilePath – Returns an ASP.NET root-relative virtual path to the script or path for the current request. If in a Transfer/Execute call, the transferred path is returned. ~/admin/paths.aspx

    PathInfo – Returns any extra path following the script name (the ExtraPathInfo portion in the example below); string.Empty if no extra path is available. /webstore/admin/paths.aspx/ExtraPathInfo

    RawUrl – Returns the full root-relative URL including querystring and extra path as a string. /webstore/admin/paths.aspx?sku=wwhelp40

    Url – Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than a string. http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40

    UrlReferrer – The fully qualified URL of the page that sent the request. This is also a Uri instance, and this value is null if the page was accessed directly (for example by typing into the address bar) or if the client did not send a Referer HTTP header. http://www.west-wind.com/webstore/default.aspx?Info

    Control.TemplateSourceDirectory – Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path. /webstore/admin/
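    For quick experimentation, a trivial snippet like the following (dropped into any Web Forms page, purely for illustration) echoes several of these values for the current request:

    // Eyeball the various path properties for the current request.
    Response.Write(Request.ApplicationPath + "<br>");
    Response.Write(Request.PhysicalApplicationPath + "<br>");
    Response.Write(Request.FilePath + "<br>");
    Response.Write(Request.AppRelativeCurrentExecutionFilePath + "<br>");
    Response.Write(Request.RawUrl + "<br>");
    Response.Write(Request.Url + "<br>");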
    As you can see, there's a ton of information available there for each of the three common path formats:

    Physical Path is an OS-style path that points to a path or file on disk.
    Logical Path is a Web path that is relative to the Web server's root. It includes the virtual directory plus the application-relative path.
    ~/ (Root-relative) Path is an ASP.NET-specific path that includes ~/ to indicate the virtual root Web path.

    ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don't rely on relative directory structures and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.

    ~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()

    ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root-relative path in a control rather than a location-relative path:

    <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" />

    ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, which uses the Request.ApplicationPath as the base path to replace the ~. By convention, any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

    string imgPath = this.ResolveUrl("~/images/help.gif");
    imgHelp.ImageUrl = imgPath;

    Unfortunately ResolveUrl() is limited to WebForm pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content:

    <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script>

    which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms, due to the fact that views are not referencing specific pages but rather are often path based, which can lead to various variations on how a particular view is referenced.

    In a Module or Handler, Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

    string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

    VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed, but it can be useful in some scenarios.

    Mapping Virtual Paths to Physical Paths with Server.MapPath()

    If you need to map root-relative or current-folder-relative URLs to physical URLs, you can use HttpContext.Current.Server.MapPath().
    Inside of a Page you can do the following:

    string physicalPath = Server.MapPath("~/scripts/ww.jquery.js");

    MapPath is pretty flexible and it understands both ASP.NET-style virtual paths as well as plain relative paths, so the following also works:

    string physicalPath = Server.MapPath("scripts/silverlight.js");

    as well as dot-relative syntax:

    string physicalPath = Server.MapPath("../scripts/jquery.js");

    Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember, with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE, ASPNET typically). Note that Server.MapPath will not map up beyond the virtual root of the application, for security reasons.

    Server and Host Information

    Between these settings you can get all the information you may need to figure out where you are at and to build a new URL if necessary. If you need to build a URL completely from scratch, you can get access to information about the server you are accessing:

    SERVER_NAME – The name of the domain or the IP address. www.west-wind.com or 127.0.0.1
    SERVER_PORT – The port that the request runs under. 80
    SERVER_PORT_SECURE – Determines whether https: was used. 0 or 1
    APPL_MD_PATH – ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access, so you should replace that with LOCALHOST or the machine's NetBios name. /LM/W3SVC/1/ROOT/webstore

    Request.Url and Uri Parsing

    If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation.

    Scheme – The URL scheme or protocol prefix. http or https
    Port – The port, if specifically specified.
    DnsSafeHost – The domain name or local host NetBios machine name. www.west-wind.com or rasnote
    LocalPath – The full path of the URL including script name and extra PathInfo. /webstore/admin/paths.aspx
    Query – The query string, if any. ?id=1

    The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only. If you need to modify a URL in order to change it, you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to do to get specific URLs:

    Convert the Request URL to an SSL/HTTPS link
    For example, taking the current request URL and converting it to a secure URL can be done like this:

    UriBuilder build = new UriBuilder(Request.Url);
    build.Scheme = "https";
    build.Port = -1; // don't inject port
    Uri newUri = build.Uri;
    string newUrl = build.ToString();

    Retrieve the fully qualified URL without a QueryString
    AFAIK, there's no native routine to retrieve the current request URL without the query string. It's easy to do with UriBuilder, however:

    UriBuilder builder = new UriBuilder(Request.Url);
    builder.Query = "";
    string logicalPathWithoutQuery = builder.ToString();
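    One more variation that comes up a lot is turning a root-relative path into a fully qualified URL by combining VirtualPathUtility with the current Request.Url. A small helper sketch along those lines (ToAbsoluteUrl is a hypothetical helper name for illustration, not a built-in ASP.NET method):

    // Resolve "~/images/help.gif" to "http://www.west-wind.com/webstore/images/help.gif"
    public static string ToAbsoluteUrl(HttpContext context, string virtualPath)
    {
        // ~/images/help.gif -> /webstore/images/help.gif
        string rootRelative = VirtualPathUtility.ToAbsolute(virtualPath);

        // Rebuild the URL from the current request, swapping in the new path
        UriBuilder builder = new UriBuilder(context.Request.Url);
        builder.Path = rootRelative;
        builder.Query = "";
        return builder.Uri.ToString();
    }

    From a page, handler or module you'd call it as ToAbsoluteUrl(HttpContext.Current, "~/images/help.gif").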
    What else? I took a look through the old post's comments and addressed as many of the questions and comments that came up in there. With a few small and silly exceptions, this update post handles most of these. But I'm sure there are a few more things that could go in here. What else would be useful to put onto this post so it serves as a nice all-in-one place to go for path references? If you think of something, leave a comment and I'll try to update the post with it in the future.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET

    Read the article

  • Reusable VS clean code - where's the balance?

    - by Radek Šimko
    Let's say I have a data model for blog posts and have two use-cases of that model – getting all blog posts and getting only blog posts which were written by a specific author. There are basically two ways I can realize that.

    1st model:

    class Articles {
        public function getPosts() {
            return $this->connection->find()
                ->sort(array('creation_time' => -1));
        }

        public function getPostsByAuthor( $authorUid ) {
            return $this->connection->find(array('author_uid' => $authorUid))
                ->sort(array('creation_time' => -1));
        }
    }

    1st usage (presenter/controller):

    if ( $_GET['author_uid'] ) {
        $posts = $articles->getPostsByAuthor($_GET['author_uid']);
    } else {
        $posts = $articles->getPosts();
    }

    2nd one:

    class Articles {
        public function getPosts( $authorUid = NULL ) {
            $query = array();
            if ( $authorUid !== NULL ) {
                $query = array('author_uid' => $authorUid);
            }
            return $this->connection->find($query)
                ->sort(array('creation_time' => -1));
        }
    }

    2nd usage (presenter/controller):

    $posts = $articles->getPosts( $_GET['author_uid'] );

    To sum up the (dis)advantages:
    1) cleaner code
    2) more reusable code

    Which one do you think is better and why? Is there any kind of compromise between those two?

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases needs to go through DTAP.

    If you dissect a publishing site, in most cases you have the following findings:

    - The publishing site spans a site collection
    - The branding of the site is specified in the root site, because:
      - Master pages live in the root site (/_catalogs/masterpage)
      - Page layouts live in the root site (/_catalogs/masterpage)
      - The style library lives in the root site (/Style Library) and contains images, css, javascript, xslt transformations for your CQWP's, …
      - Preconfigured web parts live in the root site (/_catalogs/wp)
    - The root site and subsites contain a document library called Pages (or your language-specific version of it) containing publishing pages using the page layouts and master pages
    - The site collection contains content types, fields and lists

    When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the above list of artifacts, we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI).

    Some artifacts in the above list, like content types, fields and lists, can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can't be uploaded to the SharePoint content database. The good thing is that these are the artifacts that don't change that much in the development of a SharePoint Publishing solution. There are, however, multiple ways to create these artifacts:

    1. Use paper script: create them manually in each of the environments based on documentation
    2. Automate the creation of the artifacts using (PowerShell) script
    3. Develop a WSP package to create these artifacts

    I'm not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kinds of XML files, and it is not allowed to modify these artifacts when in use. I know… SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome.

    The first option has the problem that content types and fields get IDs, and that these IDs must be used by the metadata on, for example, page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these IDs using PowerShell. For example, consider the following metadata definition for the page layout contactpage-wcm.aspx.properties.ps1:

    # This script must return a hashtable @{ name=value; ... } of field name-value pairs
    # for the content file that this script applies to.
    # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any).
    # Note that fields must exist; they can be updated but not created or deleted.
    # This script is called right after the file is deployed to SharePoint.
    # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint,
    # e.g. to calculate properties and values at deployment time.
    param([string]$SourcePath, [string]$RelativeUrl, $Context)

    @{
        "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');
        "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";
        "PublishingHidden" = "1";
        "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage')
    }

    The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can resolve the required information at deploy time from the server we are deploying to. I personally prefer the second option: automate creation through PowerShell, because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site looks like the one shown here. Note that this project uses DualLayout.

    So if you build Publishing sites using SharePoint, check out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!
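    Whether you drive the second option from PowerShell or from a small console app, the calls end up in the same server object model. As a rough, hypothetical sketch only (the GeneralPage and ArticleSubtitle names are made up, and "Page" is assumed to be the publishing page content type available in the site):

    // Requires a reference to Microsoft.SharePoint.dll (using Microsoft.SharePoint;)
    using (SPSite site = new SPSite("http://localhost/sites/publishing"))
    using (SPWeb web = site.RootWeb)
    {
        // Add a site column; Add returns the internal name of the new field.
        string fieldName = web.Fields.Add("ArticleSubtitle", SPFieldType.Text, false);

        // Create a content type derived from the publishing Page content type.
        SPContentType parent = web.AvailableContentTypes["Page"];
        SPContentType generalPage = new SPContentType(parent, web.ContentTypes, "GeneralPage");
        generalPage = web.ContentTypes.Add(generalPage);

        // Link the new field to the content type.
        generalPage.FieldLinks.Add(new SPFieldLink(web.Fields.GetFieldByInternalName(fieldName)));
        generalPage.Update();
    }

    A script along these lines can be run against each DTAP environment, after which SharePoint.DesignFactory.ContentFiles resolves the resulting IDs at deploy time as shown above.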

    Read the article

  • REST: How to store and reuse REST call queries

    - by Jason Holland
    I'm learning C# by programming a real monstrosity of an application for personal use. Part of my application uses several SPARQL queries like so:

    const string ArtistByRdfsLabel = @"
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT DISTINCT ?artist
    WHERE {{
        {{
            ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> .
            ?artist rdfs:label ?rdfsLabel .
        }}
        UNION
        {{
            ?artist rdf:type <http://dbpedia.org/ontology/Band> .
            ?artist rdfs:label ?rdfsLabel .
        }}
        FILTER ( str(?rdfsLabel) = '{0}' )
    }}";

    string Query = String.Format(ArtistByRdfsLabel, Artist);

    I don't like the idea of keeping all these queries in the same class that I'm using them in, so I thought I would just move them into their own dedicated class to remove clutter in my RestClient class. I'm used to working with SQL Server and just wrapping every query in a stored procedure, but since this is not SQL Server I'm scratching my head on what would be best for these SPARQL queries. Are there any better approaches to storing these queries using any special C# language features (or general, non-C#-specific, approaches) that I may not already know about?

    EDIT: Really, these SPARQL queries aren't anything special. Just blobs of text that I later want to grab, insert some parameters into via String.Format and send in a REST call. I suppose you could think of them the same as any SQL query that is kept in the application layer. I just never practiced keeping SQL queries in the application layer, so I'm wondering if there are any "standard" practices with this type of thing.
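    For what it's worth, one pattern that maps loosely onto the stored-procedure habit is a dedicated query catalog: keep the raw query text in embedded .resx resources or a static class, and expose a single formatting entry point. Purely an illustrative sketch (the names are made up and the query body is abbreviated):

    using System;
    using System.Collections.Generic;

    public static class SparqlQueries
    {
        static readonly Dictionary<string, string> templates = new Dictionary<string, string>
        {
            // Query text could equally be loaded from a .resx resource or an external file.
            { "ArtistByRdfsLabel", @"
                PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
                PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
                SELECT DISTINCT ?artist
                WHERE {{ ?artist rdfs:label ?rdfsLabel . FILTER ( str(?rdfsLabel) = '{0}' ) }}" }
        };

        // Central place to apply parameters, so callers never touch the raw text.
        public static string Get(string name, params object[] args)
        {
            return String.Format(templates[name], args);
        }
    }

    The REST client then only ever sees SparqlQueries.Get("ArtistByRdfsLabel", artist).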

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
    Hi, this question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments.

    My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations, and they continue to do it without going fully agile for the rest of us. This is due to the fact that management doesn't yet trust it to meet business requirements, given quite a bit of ad hoc type requests from high above.

    Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend for kicks who is in the agile team, I am not allowed without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of agile, is to provide a complete vacuum for agile developers from any interruptions and to have them put in a good 6+ productive hours.

    Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar ones for other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and to correct issues as they arise to put them back on track. For starters, it requires training for developers and coaching for managers, etc., etc. The current Scrum master was a manager who took a couple of days of agile training class, paid for by the management, and is now leading this agile team. I have also heard in the meeting that the agile manifesto doesn't dictate everything – that agile is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable.

    In conclusion, I always thought agile was supposed to bring harmony to development teams, which results in happy developers. However, I am getting the very opposite feeling when talking to the developers in the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me, please, if this is one of those examples of good practices used for the purpose of selfish advantage – for more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where they breathe only work because they are at work. Thanks.

    Edit: It's a company in the healthcare domain that has offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with the management being completely cheap: cutting out expensive coffee for a cheaper version, emphasis on savings and being productive while staying as lean as possible. My feeling is that someone in management behind the door threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or, maybe, it will allow us to reduce headcount if that's the case.

    EDITED: They are having their 5 min daily meeting. But they are not allowed to chat or talk with someone outside of their team. All focus is on work.

    Read the article

  • IOError: [Errno 32] Broken pipe

    - by khati
    I got "IOError: [Errno 32] Broken pipe" while writing files in linux. I am using python to read each line a of csv file and then write into a database table. My code is f = open(path,'r') command = command to connect to database p = Popen(command, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, env=env) query = " COPY myTable( id, name, address) FROM STDIN WITH DELIMITER ';' CSV QUOTE '"'; " p.stdin.write(query.encode('ascii')) *-->(Here exactly I got the error, p.stdin.write(query.encode('ascii')) IOError: [Errno 32] Broken pipe )* So when I run this program in linux, I got error "IOError: [Errno 32] Broken pipe" . However this works fine when I run in windows7. Do I need to do some configuration in Linux sever? Any suggestions will be appreciated. Thank you.

    Read the article

< Previous Page | 317 318 319 320 321 322 323 324 325 326 327 328  | Next Page >