Search Results

Search found 5109 results on 205 pages for 'specify'.


  • AuthnRequest Settings in OIF / SP

    - by Damien Carru
In this article, I will list the various OIF/SP settings that affect how an AuthnRequest message is created by OIF in a Federation SSO flow. The AuthnRequest message is used by an SP to start a Federation SSO operation and to indicate to the IdP how the operation should be executed:

      - How the user should be challenged at the IdP
      - Whether or not the user should be challenged at the IdP, even if a session already exists at the IdP for this user
      - Which NameID format should be requested in the SAML Assertion
      - Which binding (Artifact or HTTP-POST) should be requested from the IdP to send the Assertion
      - Which profile should be used by OIF/SP to send the AuthnRequest message

    Enjoy the reading!

    Protocols

    The SAML 2.0, SAML 1.1 and OpenID 2.0 protocols define different message elements and rules that allow an administrator to influence the Federation SSO flows in different manners when the SP triggers an SSO operation:

      - SAML 2.0 allows extensive customization via the AuthnRequest message
      - SAML 1.1 does not allow any customization, since the specifications do not define an authentication request message
      - OpenID 2.0 allows for some customization, mainly via OpenID 2.0 extensions such as PAPE or UI

    SAML 2.0

    OIF/SP allows customization of the following elements of the SAML 2.0 AuthnRequest message:

      - ForceAuthn: Boolean indicating whether or not the IdP should force the user to re-authenticate, even if the user still has a valid session. Set to false by default.
      - IsPassive: Boolean indicating whether or not the IdP is allowed to interact with the user as part of the Federation SSO operation. If false, the operation might result in a failure with the NoPassive error code, because the IdP will not have been able to identify the user. Set to false by default.
      - RequestedAuthnContext: Element indicating how the user should be challenged at the IdP. If the SP requests a Federation Authentication Method unknown to the IdP, or one for which the IdP is not configured, the Federation SSO flow will result in a failure with the NoAuthnContext error code. Missing by default.
      - NameIDPolicy: Element indicating which NameID format the IdP should include in the SAML Assertion. If the SP requests a NameID format unknown to the IdP, or one for which the IdP is not configured, the Federation SSO flow will result in a failure with the InvalidNameIDPolicy error code. If missing, the IdP will generally use the default NameID format configured for this SP partner at the IdP. Missing by default.
      - ProtocolBinding: Element indicating which SAML binding should be used by the IdP to redirect the user to the SP with the SAML Assertion. Set to Artifact or HTTP-POST; HTTP-POST by default.

    OIF/SP also allows the administrator to configure the server to:

      - Set which binding should be used by OIF/SP to redirect the user to the IdP with the SAML 2.0 AuthnRequest message: Redirect or HTTP-POST. Set to Redirect by default.
      - Set which binding should be used by OIF/SP to redirect the user to the IdP during logout with SAML 2.0 Logout messages: Redirect or HTTP-POST. Set to Redirect by default.

    SAML 1.1

    The SAML 1.1 specifications do not define a message for the SP to send to the IdP when a Federation SSO operation is started. As such, there is no way to configure OIF/SP to affect the start of the Federation SSO flow.
    OpenID 2.0

    OpenID 2.0 defines several extensions that can be used by the SP/RP to affect how the Federation SSO operation will take place:

      - OpenID request mode: String indicating whether the IdP/OP can visually interact with the user. checkid_immediate does not allow the IdP/OP to interact with the user, while checkid_setup allows user interaction. Set to checkid_setup by default.
      - PAPE extension:
          - max_auth_age: Integer indicating, in seconds, the maximum acceptable time since the user last authenticated at the IdP. If the time since the user last authenticated at the IdP exceeds max_auth_age, then the user must be re-challenged. OIF/SP will set this attribute to 0 if the administrator configured ForceAuthn to true; otherwise this attribute won't be set. Missing by default.
          - preferred_auth_policies: Contains a Federation Authentication Method element indicating how the user should be challenged at the IdP. Missing by default. Only specified in the OpenID request if the IdP/OP advertises PAPE support in its XRDS document, when OpenID discovery is used.
      - UI extension:
          - Popup mode: Boolean indicating whether popup mode is enabled for the Federation SSO. Missing by default.
          - Language preference: String containing the preferred language, set based on the browser's language preferences. Missing by default.
          - Icon: Boolean indicating whether the icon feature is enabled. In that case, the IdP/OP would look at the SP/RP XRDS document to determine how to retrieve the icon. Missing by default.
          - These are only specified in the OpenID request if the IdP/OP advertises UI Extension support in its XRDS document, when OpenID discovery is used.

    ForceAuthn and IsPassive

    WLST Command

    OIF/SP provides the WLST configureIdPAuthnRequest() command to set:

      - ForceAuthn, as a boolean:
          - In a SAML 2.0 AuthnRequest, the ForceAuthn field will be set to true or false
          - In an OpenID 2.0 request, if ForceAuthn in the configuration was set to true, then the max_auth_age field of the PAPE request will be set to 0; otherwise max_auth_age won't be set
      - IsPassive, as a boolean:
          - In a SAML 2.0 AuthnRequest, the IsPassive field will be set to true or false
          - In an OpenID 2.0 request, if IsPassive in the configuration was set to true, then the mode field of the OpenID request will be set to checkid_immediate; otherwise it is set to checkid_setup
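    To make that mapping concrete, here is an illustrative fragment of the form parameters such an OpenID 2.0 request carries when ForceAuthn is enabled. The parameter names follow the OpenID 2.0 core and PAPE 1.0 specifications; this is a sketch, not output captured from OIF:

        openid.mode=checkid_setup
        openid.ns.pape=http://specs.openid.net/extensions/pape/1.0
        openid.pape.max_auth_age=0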
    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    Let's configure OIF/SP for that IdP Partner, so that the SP will require the IdP to re-challenge the user, even if the user is already authenticated:

      1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
      2. Connect to the WLS Admin server: connect()
      3. Navigate to the Domain Runtime branch: domainRuntime()
      4. Execute the configureIdPAuthnRequest() command: configureIdPAuthnRequest(partner="AcmeIdP", forceAuthn="true")
      5. Exit the WLST environment: exit()

    After the changes, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ForceAuthn="true" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    To display or delete the ForceAuthn/IsPassive settings, perform the following operations:

      1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
      2. Connect to the WLS Admin server: connect()
      3. Navigate to the Domain Runtime branch: domainRuntime()
      4. Execute the configureIdPAuthnRequest() command:
           - To display the ForceAuthn/IsPassive settings on the partner: configureIdPAuthnRequest(partner="AcmeIdP", displayOnly="true")
           - To delete the ForceAuthn/IsPassive settings from the partner: configureIdPAuthnRequest(partner="AcmeIdP", delete="true")
      5. Exit the WLST environment: exit()
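    The same command covers the IsPassive flag described earlier. A sketch, assuming the parameter name isPassive mirrors forceAuthn (the article documents the setting but does not show this exact syntax):

        # Ask the IdP not to interact with the user (SSO succeeds only if a
        # session already exists); parameter name assumed by analogy with forceAuthn
        configureIdPAuthnRequest(partner="AcmeIdP", isPassive="true")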
    Requested Fed Authn Method

    In my earlier "Fed Authentication Method Requests in OIF / SP" article, I discussed how OIF/SP could be configured to request a specific Federation Authentication Method from the IdP when starting a Federation SSO operation, by setting elements in the SSO request message.

    WLST Command

    The OIF WLST commands that can be used are:

      - setIdPPartnerProfileRequestAuthnMethod(), which configures the requested Federation Authentication Method in a specific IdP Partner Profile, and accepts the following parameters:
          - partnerProfile: name of the IdP Partner Profile
          - authnMethod: the Federation Authentication Method to request
          - displayOnly: an optional parameter indicating if the method should display the current requested Federation Authentication Method instead of setting it
          - delete: an optional parameter indicating if the method should delete the current requested Federation Authentication Method instead of setting it
      - setIdPPartnerRequestAuthnMethod(), which configures the specified IdP Partner entry with the requested Federation Authentication Method, and accepts the following parameters:
          - partner: name of the IdP Partner
          - authnMethod: the Federation Authentication Method to request
          - displayOnly: an optional parameter indicating if the method should display the current requested Federation Authentication Method instead of setting it
          - delete: an optional parameter indicating if the method should delete the current requested Federation Authentication Method instead of setting it

    This applies to the SAML 2.0 and OpenID 2.0 protocols. See the "Fed Authentication Method Requests in OIF / SP" article for more information.

    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    Let's configure OIF/SP for that IdP Partner, so that the SP will request the IdP to use a mechanism mapped to the urn:oasis:names:tc:SAML:2.0:ac:classes:X509 Federation Authentication Method to authenticate the user:

      1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
      2. Connect to the WLS Admin server: connect()
      3. Navigate to the Domain Runtime branch: domainRuntime()
      4. Execute the setIdPPartnerRequestAuthnMethod() command: setIdPPartnerRequestAuthnMethod("AcmeIdP", "urn:oasis:names:tc:SAML:2.0:ac:classes:X509")
      5. Exit the WLST environment: exit()

    After the changes, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
          <samlp:RequestedAuthnContext Comparison="minimum">
            <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
              urn:oasis:names:tc:SAML:2.0:ac:classes:X509
            </saml:AuthnContextClassRef>
          </samlp:RequestedAuthnContext>
        </samlp:AuthnRequest>
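    To inspect or remove the requested method later, the commands accept the optional displayOnly and delete parameters listed above. A sketch (the keyword syntax is an assumption based on the documented parameter list, not a verbatim transcript):

        # Show the currently requested Federation Authentication Method
        setIdPPartnerRequestAuthnMethod(partner="AcmeIdP", displayOnly="true")
        # Remove it, so the partner profile / server defaults apply again
        setIdPPartnerRequestAuthnMethod(partner="AcmeIdP", delete="true")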
    NameID Format

    The SAML 2.0 protocol allows the SP to request that the IdP use a specific NameID format when the Assertion is issued by the IdP.

    Note: SAML 1.1 and OpenID 2.0 do not provide such a mechanism.

    Configuring OIF

    The administrator can configure OIF/SP to request a NameID format in the SAML 2.0 AuthnRequest via:

      - The OAM Administration Console, in the IdP Partner entry
      - The OIF WLST setIdPPartnerNameIDFormat() command, which modifies the IdP Partner configuration

    OAM Administration Console

    To configure the requested NameID format via the OAM Administration Console, perform the following steps:

      1. Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole
      2. Navigate to Identity Federation -> Service Provider Administration
      3. Open the IdP Partner you wish to modify
      4. In the Authentication Request NameID Format dropdown box, select one of the following values:
           - None: the NameID format will be set to Default
           - Email Address: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
           - X.509 Subject: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName
           - Windows Name Qualifier: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName
           - Kerberos: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos
           - Transient: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:transient
           - Unspecified: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
           - Custom: a field appears allowing the administrator to indicate the custom NameID format to use; the NameID format will be set to the specified format
           - Persistent: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
         I selected Email Address in this example.
      5. Save

    WLST Command

    To configure the requested NameID format via the OIF WLST setIdPPartnerNameIDFormat() command, perform the following steps:

      1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
      2. Connect to the WLS Admin server: connect()
      3. Navigate to the Domain Runtime branch: domainRuntime()
      4. Execute the setIdPPartnerNameIDFormat() command: setIdPPartnerNameIDFormat("PARTNER", "FORMAT", customFormat="CUSTOM")
           - Replace PARTNER with the IdP Partner name
           - Replace FORMAT with one of the following:
               - orafed-none: the NameID format will be set to Default
               - orafed-emailaddress: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
               - orafed-x509: urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName
               - orafed-windowsnamequalifier: urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName
               - orafed-kerberos: urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos
               - orafed-transient: urn:oasis:names:tc:SAML:2.0:nameid-format:transient
               - orafed-unspecified: urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
               - orafed-custom: the NameID format will be set to the format supplied via the customFormat parameter
               - orafed-persistent: urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
           - customFormat needs to be set if FORMAT is set to orafed-custom
         An example would be: setIdPPartnerNameIDFormat("AcmeIdP", "orafed-emailaddress")
      5. Exit the WLST environment: exit()
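    For the custom case, customFormat supplies the format URN. A sketch (the URN below is a made-up placeholder, not a standard value):

        setIdPPartnerNameIDFormat("AcmeIdP", "orafed-custom", customFormat="urn:example:nameid-format:employeeId")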
    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    After the changes performed either via the OAM Administration Console or via the OIF WLST setIdPPartnerNameIDFormat() command, where Email Address would be requested as the NameID format, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ForceAuthn="false" IsPassive="false" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress" AllowCreate="true"/>
        </samlp:AuthnRequest>

    Protocol Binding

    The SAML 2.0 specifications define a way for the SP to request which binding should be used by the IdP to redirect the user to the SP with the SAML 2.0 Assertion: the ProtocolBinding attribute indicates the binding the IdP should use. It is set to:

      - urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST for HTTP-POST, or
      - urn:oasis:names:tc:SAML:2.0:bindings:Artifact for Artifact

    The SAML 2.0 specifications also define different ways to redirect the user from the SP to the IdP with the SAML 2.0 AuthnRequest message, as the SP can send the message either via HTTP Redirect or via HTTP POST. (Other bindings can theoretically be used, such as Artifact, but these are not used in practice.)

    Configuring OIF

    OIF can be configured:

      - Via the OAM Administration Console or the OIF WLST configureSAMLBinding() command, to set the Assertion Response binding to be used
      - Via the OIF WLST configureSAMLBinding() command, to indicate how the SAML AuthnRequest message should be sent

    Note: the binding for sending the SAML 2.0 AuthnRequest message will also be used to send the SAML 2.0 LogoutRequest and LogoutResponse messages.
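    For comparison, if Artifact were requested as the Assertion binding instead of HTTP-POST, the AuthnRequest's ProtocolBinding attribute would instead read:

        ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:Artifact"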
    OAM Administration Console

    To configure the SSO Response/Assertion binding via the OAM Administration Console, perform the following steps:

      1. Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole
      2. Navigate to Identity Federation -> Service Provider Administration
      3. Open the IdP Partner you wish to modify
      4. Check the "HTTP POST SSO Response Binding" box to request that the IdP return the SSO Response via HTTP POST; uncheck it to request Artifact
      5. Save

    WLST Command

    To configure the SSO Response/Assertion binding as well as the AuthnRequest binding via the OIF WLST configureSAMLBinding() command, perform the following steps:

      1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
      2. Connect to the WLS Admin server: connect()
      3. Navigate to the Domain Runtime branch: domainRuntime()
      4. Execute the configureSAMLBinding() command: configureSAMLBinding("PARTNER", "PARTNER_TYPE", binding, ssoResponseBinding="httppost")
           - Replace PARTNER with the Partner name
           - Replace PARTNER_TYPE with the Partner type (idp or sp)
           - Replace binding with the binding to be used to send the AuthnRequest and LogoutRequest/LogoutResponse messages (should be httpredirect in most cases; the default):
               - httppost for the HTTP-POST binding
               - httpredirect for the HTTP-Redirect binding
           - Optionally specify ssoResponseBinding to indicate how the SSO Assertion should be sent back:
               - httppost for the HTTP-POST binding
               - artifact for the Artifact binding
         An example would be: configureSAMLBinding("AcmeIdP", "idp", "httpredirect", ssoResponseBinding="httppost")
      5. Exit the WLST environment: exit()

    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration, which requests HTTP-POST from the IdP to send the SSO Assertion. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    In the next article, I will cover the various crypto configuration properties in OIF that are used to affect the Federation SSO exchanges.

    Cheers,
    Damien Carru


  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while, using those features inside of .NET applications hasn't been as straightforward as it could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom projects like SharpMap, or a hack using the SQL Server specific Geo types found in the Microsoft.SqlServer.Types assembly that ships with SQL Server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5, the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single locations and distance calculations, which is probably the most common usage scenario.

    SQL Server Transact-SQL Syntax for Spatial Data

    Before we look at how things work with Entity Framework, let's take a look at how SQL Server allows you to use spatial data, to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward.

    Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:

        CREATE TABLE [dbo].[Geo](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [Location] [geography] NULL,
            [Long] [float] NOT NULL,
            [Lat] [float] NOT NULL
        )

    Now using plain SQL you can insert data into the table using the geography::STGeomFromText SQL CLR function:

        insert into Geo( Location, Long, Lat ) values
            ( geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 )
        insert into Geo( Location, Long, Lat ) values
            ( geography::STGeomFromText('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 )
        insert into Geo( Location, Long, Lat ) values
            ( geography::STGeomFromText('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825 )

    The STGeomFromText function accepts a string that describes a geometric item (a point here, but it can also be a line, a path, a polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier), which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326, as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format.

    Once the location data is in the database, you can query the data and do simple distance computations very easily. For example, calculating the distance of each of the values in the database to another spatial point is very easy. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database.
    Using the Location field, the SQL looks like this:

        -- create a source point
        DECLARE @s geography
        SET @s = geography::STGeomFromText('POINT(-121.527200 45.712113)', 4326);

        --- return the ids
        select ID,
               Location as Geo,
               Location.ToString() as Point,
               @s.STDistance(Location) as distance
        from Geo
        order by distance

    The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference, for which you can use the geography::STDistance function. This query produces one row per point, with the point, its text representation and its distance from the source point.

    The STDistance function returns the straight-line distance between the passed-in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point, so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU, typically!).

    These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field and produces the same result as the previous query:

        --- using float fields
        select ID,
               geography::STGeomFromText('POINT(' + STR(Long, 15, 7) + ' ' + STR(Lat, 15, 7) + ')', 4326),
               geography::STGeomFromText('POINT(' + STR(Long, 15, 7) + ' ' + STR(Lat, 15, 7) + ')', 4326).ToString(),
               @s.STDistance(geography::STGeomFromText('POINT(' + STR(Long, 15, 7) + ' ' + STR(Lat, 15, 7) + ')', 4326)) as distance
        from Geo
        order by distance

    Spatial Data in the Entity Framework

    Prior to Entity Framework 5.0 on .NET 4.5, consuming the data above required stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5, however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions, which in turn can be used to do common spatial queries like the ones I described in the SQL syntax above.

    The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude.

    Creating a Model with Spatial Data

    Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography:

        public class GeoLocationContext : DbContext
        {
            public DbSet<GeoLocation> Locations { get; set; }
        }

        public class GeoLocation
        {
            public int Id { get; set; }
            public DbGeography Location { get; set; }
            public string Address { get; set; }
        }

    That's all there's to it. When you run this against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier.
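    For reference, the table EF generates for this model is essentially the shape of the hand-written one above. A sketch of the equivalent DDL, with names inferred from the model and EF's code-first conventions (the actual generated schema may differ in details):

        CREATE TABLE [dbo].[GeoLocations](
            [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [Location] [geography] NULL,
            [Address] [nvarchar](max) NULL
        )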
    Adding Spatial Data to the Database

    Next let's add some data to the table that includes some latitude and longitude data. An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right-click and click on What's Here. Click on the green marker to get the GPS coordinates.

    To add the actual geolocation data, create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:

        [TestMethod]
        public void AddLocationsToDataBase()
        {
            var context = new GeoLocationContext();

            // remove all
            context.Locations.ToList().ForEach(loc => context.Locations.Remove(loc));
            context.SaveChanges();

            var location = new GeoLocation()
            {
                // Create a point using native DbGeography factory method
                Location = DbGeography.PointFromText(
                    string.Format("POINT({0} {1})", -121.527200, 45.712113), 4326),
                Address = "301 15th Street, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.714240, -121.517265),
                Address = "The Hatchery, Bingen"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                // Create a point using a helper function (lat/long)
                Location = CreatePoint(45.708457, -121.514432),
                Address = "Kaze Sushi, Hood River"
            };
            context.Locations.Add(location);

            location = new GeoLocation()
            {
                Location = CreatePoint(45.722780, -120.209227),
                Address = "Arlington, OR"
            };
            context.Locations.Add(location);

            context.SaveChanges();
        }

    As promised, a DbGeography object has to be created with one of the static factory methods provided on the type, as the Location.Longitude and Location.Latitude properties are read-only. Here I'm using PointFromText(), which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples).

    You'll probably want to create a helper method to make the creation of Points easier, to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:

        public static DbGeography CreatePoint(double latitude, double longitude)
        {
            var text = string.Format(CultureInfo.InvariantCulture.NumberFormat,
                                     "POINT({0} {1})", longitude, latitude);
            // 4326 is the most common coordinate system used by GPS/Maps
            return DbGeography.PointFromText(text, 4326);
        }

    Using the helper, the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around, because latitude/longitude is the common order I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is saved, the data is written into the database using the SQL Geography type, which looks the same as in the earlier SQL examples shown.
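    A side note on those read-only properties: coordinates can still be read back from an existing point. A small sketch using the CreatePoint helper above (for points, DbGeography exposes nullable Latitude/Longitude values):

        // Latitude/Longitude can't be assigned, but they can be read
        var point = CreatePoint(45.712113, -121.527200);
        double? lat = point.Latitude;   // 45.712113
        double? lng = point.Longitude;  // -121.527200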
    Querying

    Once you have some location data in the database, it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location - ordered by distance. Using LINQ to Entities, a query like this is easy to construct:

        [TestMethod]
        public void QueryLocationsTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 kilometers ordered by distance
            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) < 5000)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n0} meters)",
                                  location.Address, location.Distance);
            }
        }

    This example produces:

        301 15th Street, Hood River (0 meters)
        The Hatchery, Bingen (809 meters)
        Kaze Sushi, Hood River (1,074 meters)

    The first point in the database is the same as my source point I'm comparing against, so the distance is 0. The other two are within the 5 km radius, while the Arlington location, which is 65 miles or so out, is not returned. The result is ordered by distance from closest to furthest away.

    In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5 km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say, when you have two points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this.

    Meters vs. Miles

    As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (they can be found in GeoUtils.cs of the sample project):

        /// <summary>
        /// Convert meters to miles
        /// </summary>
        public static double MetersToMiles(double? meters)
        {
            if (meters == null)
                return 0F;
            return meters.Value * 0.000621371192;
        }

        /// <summary>
        /// Convert miles to meters
        /// </summary>
        public static double MilesToMeters(double? miles)
        {
            if (miles == null)
                return 0;
            return miles.Value * 1609.344;
        }

    Using these two helpers you can query on miles like this:

        [TestMethod]
        public void QueryLocationsMilesTest()
        {
            var sourcePoint = CreatePoint(45.712113, -121.527200);
            var context = new GeoLocationContext();

            // find any locations within 5 miles ordered by distance
            var fiveMiles = GeoUtils.MilesToMeters(5);

            var matches = context.Locations
                .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles)
                .OrderBy(loc => loc.Location.Distance(sourcePoint))
                .Select(loc => new
                {
                    Address = loc.Address,
                    Distance = loc.Location.Distance(sourcePoint)
                });

            Assert.IsTrue(matches.Count() > 0);

            foreach (var location in matches)
            {
                Console.WriteLine("{0} ({1:n1} miles)",
                                  location.Address,
                                  GeoUtils.MetersToMiles(location.Distance));
            }
        }

    which produces:

        301 15th Street, Hood River (0.0 miles)
        The Hatchery, Bingen (0.5 miles)
        Kaze Sushi, Hood River (0.7 miles)

    Nice 'n simple.
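    As a quick sanity check on the unit conversion used above, a small sketch built on the GeoUtils helpers (the arithmetic follows directly from the 1609.344 meters-per-mile constant):

        // The 5-mile radius passed to Distance() is really a meter value:
        var fiveMiles = GeoUtils.MilesToMeters(5);     // 5 * 1609.344 = 8046.72 meters
        // Round-tripping recovers the original value (within rounding):
        var miles = GeoUtils.MetersToMiles(fiveMiles); // 8046.72 * 0.000621371192 ≈ 5.0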
    .NET 4.5 Only

    Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why the decision was made to add these types to System.Data.Entity, rather than to the frequently updated EntityFramework assembly that would have possibly made this work in .NET 4.0, is beyond me, especially given that there are no native .NET framework spatial types to begin with.

    I also find it odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general-purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework specific types are easy to use and work well.

    Working with pre-.NET 4.5 Entity Framework and Spatial Data

    If you can't go to .NET 4.5 just yet, you can also still use spatial features in Entity Framework, but it's a lot more work, as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same T-SQL syntax I showed earlier, using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

        [TestMethod]
        public void RawSqlEfAddTest()
        {
            string sqlFormat =
        @"insert into GeoLocations( Location, Address) values
          ( geography::STGeomFromText('POINT({0} {1})', 4326), @p0 )";

            var sql = string.Format(sqlFormat, -121.527200, 45.712113);
            Console.WriteLine(sql);

            var context = new GeoLocationContext();
            Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
        }

    Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice, but is required in this case. I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required, as the following does not work:

        string sqlFormat =
        @"insert into GeoLocations( Location, Address) values
          ( geography::STGeomFromText('POINT(@p0 @p1)', 4326), @p2 )";
        context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street")

    Explicitly assigning the point value with string.Format works, however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude), and you can return calculated values like distance.
    Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery method:

        [TestMethod]
        public void RawSqlEfQueryTest()
        {
            var sqlFormat =
        @"DECLARE @s geography
          SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);
          SELECT Address,
                 Location.ToString() as GeoString,
                 @s.STDistance(Location) as Distance
          FROM GeoLocations
          ORDER BY Distance";

            var sql = string.Format(sqlFormat, -121.527200, 45.712113);

            var context = new GeoLocationContext();
            var locations = context.Database.SqlQuery<ResultData>(sql);

            Assert.IsTrue(locations.Count() > 0);

            foreach (var location in locations)
            {
                Console.WriteLine(location.Address + " " +
                                  location.GeoString + " " +
                                  location.Distance);
            }
        }

        public class ResultData
        {
            public string GeoString { get; set; }
            public double Distance { get; set; }
            public string Address { get; set; }
        }

    Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious, but it did work.

    Summary

    For the current project I'm working on, we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries, and it was worth taking a chance with pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial-specific results. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)

    Resources

    Download Sample Project

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ADO.NET  Sql Server  .NET


  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language.

    How did you get started with programming?

    It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.

    But then presumably you did get somewhere with it at some point.

    At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.

    When did you feel like you did start to get somewhere?

    I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.

    When did you decide that programming might actually be something that you wanted to do as a career?

    It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve-month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard than I had been up to that point.

    How did you find coming to Red Gate and working with other coders?

    I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.

    If you could go back to the very start of that internship, is there something that you would tell yourself?

    Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if it looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.

    And how do you hold yourself to that?

    Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.

    So you’re an advocate of code review?

    Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge.
    The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.

    Do you think there’s more we could do with them?

    I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.

    To get more into the nitty gritty: how do you like to debug code?

    The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.

    Why is the debugger the last resort?

    Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger; the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.

    If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?

    I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value came from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.

    As a tester, where do you think the split should lie between software engineers and testers?

    I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or whether it’s not worth testing because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.

    In your experience, do the software engineers tend to do much testing themselves?

    They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers.
    I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high-level tests, just because that’s the best use of resource.

    If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?

    I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.

    So you think testers should be able to write production code?

    Yes, although given most testers’ skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.

    If the software engineers are consistently writing tests at all levels, what do you think the role of a tester is?

    I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there.

    I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester’s job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There are more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.

    So because you’re working on those features, you lose that holistic view of the whole system?

    Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.

    Are you a believer that it should all be automated if possible?

    Not entirely. An automated test is always better than a manual test for the long term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. The automated tests are very good, but they follow that strict path, and they never check anything off the path.
    The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.

    When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?

    I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’ll almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, will keep their eyes open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.

    How do you think this fits into the idea of continuously deploying, so long as the tests pass?

    With deploying a website it’s a bit different, because you can always pull it back. If you’re deploying an application to customers, once you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it; no matter how hard you try, there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.

    You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?

    I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python, which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.

    Any particular reason for trying Vala?

    I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.

    What are some of the features that it’s missing?

    Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.

    Where does it win over C#?

    Everything is non-nullable by default, so you never have to check that something’s unexpectedly null. Also, Vala has code contracts.
    This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature, and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method; it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it.

    When are those actually checked?

    They’re checked both at compile time and run time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program; there’s no exception to catch, it’s just goodbye!

    One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions. I think Raymond Chen on The Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”

    You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?

    Very different, because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picasa, but written by myself because I wanted something to learn Python with. There are some things that are really nice; I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, and I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.

    Any particular features that you’ve appreciated?

    I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.

    If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?

    I think I might be willing to try something bigger or long-term with Python. We’re currently doing an ASP.NET MVC project in C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done behind the scenes.
    It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in. It’s the fact that you can gain from the strong typing in ways that you can’t so much in a web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.

    What do you think separates great programmers from everyone else?

    Probably design choices. Choosing to write a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly, without having to think too much about it. I think that’s probably what separates them: the speed at which they can see the best path to write the program.

    More Red Gater Coder interviews

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
A few days ago I had a curious thought: with all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems, so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other, just for the raw throughput of the request getting to the endpoint and back out to the client, with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity, and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements, and because overall ASP.NET performs very well even for fairly high loads, so it's often not that critical to test load performance. This post is not meant to make a point or come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source code is on GitHub.

I looked at this data for these technologies: ASP.NET Web API, ASP.NET MVC, WebForms, ASP.NET WebPages, ASMX AJAX services (couldn't get AJAX/JSON to run on IIS 8), WCF REST, and raw ASP.NET HttpHandlers. It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher, as the results were sometimes surprising. What I describe here is mostly to satisfy my curiosity, and I thought it interesting enough to discuss on the blog :-)

First test: raw throughput
The first thing I did was test raw throughput for the various technologies. This is the least practical test, of course, since you're unlikely to ever create the equivalent of a 'Hello World' request in a real-life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this test I created the simplest Hello World request that I could come up with for each tech.

Http Handler
The first is the lowest-level approach, an HTTP handler.

public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } }

WebForms
Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind, without any markup in the ASPX page:

public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } }

while the markup page only contains some static output via an expression:

<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %>

ASP.NET WebPages
WebPages is the freestanding Razor implementation of ASP.NET.
Here's the simple HelloWorld.cshtml page:

Hello World @DateTime.Now

WCF REST
WCF REST was the token REST implementation for ASP.NET before WebAPI, and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

[ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } }

WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client.

ASP.NET AJAX (ASMX services)
I also wanted to test ASP.NET AJAX services, because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed the option to add ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results, even when explicitly adding the ScriptHandler. So I didn't test this (but the code is there - you might be able to test it on a Windows 7 box).

ASP.NET MVC
Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } }

ASP.NET WebAPI
Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } }

Testing
Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal, since it was mainly to satisfy my curiosity - here's how I did it: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. The resulting output summarizes request counts, timings and requests per second. It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server.
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test with:
100,000 iterations
a load factor of 20 concurrent connections
an IISRESET before starting
a short warm-up run for API and MVC to make sure startup cost is mitigated

Here is the batch file I used for the test:

IISRESET
REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found

REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times and took the average score for requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test from a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you could run these tests from a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative.

Results
OK, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start, the raw throughput tests are not overly practical, as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology, from the time of the request to reaching the endpoint and returning minimal text data back to the client, which indicates full round-trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24-hour period (on the order of 490 million). Remember these are not static pages, but dynamic requests that are being served.

Another test - JSON data service results
For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test, since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-). In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } }

Here are the updated handler classes that use Person:

Handler

public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } }

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

[ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } }

For WCF REST, all I have to do is add a method with the Person result type.

ASP.NET MVC

public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } }

For MVC, all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer.

ASP.NET WebAPI
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
This is why I did the IISRESET and warm-up for the individual tests. It is a little puzzling: looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but performance starts going downwards. This applies to all the technologies - handlers, Web Forms, MVC, Web API. I'm curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary
As I stated at the outset, these tests were informal, run to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it is also about adequacy for the job and ease of implementation. The strengths of each technology will make up for the minor performance differences we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts - with serialized JSON and XML - WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized output. This stresses another point: don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources
Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API.
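If you'd rather not install Apache just to repeat the experiment, here is a rough ab.exe-style harness sketch in C# (my addition, not Rick's; it assumes .NET 4.5 for HttpClient and Task.Run, and the URL is one of the sample endpoints from the post):

using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class MiniBench
{
    static void Main()
    {
        const string url = "http://localhost/aspnetperf/api/HelloWorld"; // endpoint under test
        const int totalRequests = 10000;
        const int concurrency = 20;

        using (var client = new HttpClient())
        {
            // Warm-up request so JIT/startup cost doesn't skew the numbers.
            client.GetStringAsync(url).Wait();

            var watch = Stopwatch.StartNew();

            // 'concurrency' workers each issue their share of the requests,
            // roughly mimicking ab.exe's -n/-c switches.
            var workers = Enumerable.Range(0, concurrency).Select(_ => Task.Run(async () =>
            {
                for (int i = 0; i < totalRequests / concurrency; i++)
                    await client.GetStringAsync(url);
            })).ToArray();

            Task.WaitAll(workers);
            watch.Stop();

            Console.WriteLine("Requests/sec: {0:n0}", totalRequests / watch.Elapsed.TotalSeconds);
        }
    }
}

This only measures successful round trips and ignores response-length variance, so it sidesteps the 'Length failure' quirk described above; for percentiles and failure accounting, ab.exe remains the better tool.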

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
The main difference between portlet workflow and servlet workflow is that the request to a portlet can have two distinct phases: 1) the action phase and 2) the render phase. The action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user.

The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible - we think this approach removes one of the main benefits of using portlets. So the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of the MVC classes will have one method that deals with the request, the portlet version of the MVC classes will have two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods. The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.

Developing portlet.xml
Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below:

<?xml version="1.0" encoding="UTF-8"?> <portlet-app version="2.0" xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <portlet> <portlet-name>SpringPortletName</portlet-name> <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class> <supports> <mime-type>text/html</mime-type> <portlet-mode>view</portlet-mode> </supports> <portlet-info> <title>SpringPortlet</title> </portlet-info> </portlet> </portlet-app>

DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling it, and then calls its handleActionRequest() or handleRenderRequest() method based on the request processing phase. The Controller class executes business logic and returns a View name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that View for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within the Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet.
If it does so, then each of these portlets operates in its own namespace, loading its own application context and handler mappings. The DispatcherPortlet is also responsible for loading the application context (Spring configuration file) for the portlet. First, it checks the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. In the portlet.xml file we did not specify the configLocation initialization parameter, so let's create the SpringPortletName-portlet.xml file in the next section.

Developing SpringPortletName-portlet.xml
Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below:

<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/jsp/"/> <property name="suffix" value=".jsp"/> </bean>

<bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl"> <property name="users"> <list> <ref bean="point1"/> <ref bean="point2"/> <ref bean="point3"/> <ref bean="point4"/> </list> </property> </bean>

<bean id="point1" class="com.wlp.spring.bean.User"> <property name="name" value="Murali"/> <property name="points" value="6"/> </bean> <bean id="point2" class="com.wlp.spring.bean.User"> <property name="name" value="Sai"/> <property name="points" value="13"/> </bean> <bean id="point3" class="com.wlp.spring.bean.User"> <property name="name" value="Rama"/> <property name="points" value="43"/> </bean> <bean id="point4" class="com.wlp.spring.bean.User"> <property name="name" value="Krishna"/> <property name="points" value="23"/> </bean>

<bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="messages"/> </bean>

<bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController"> <property name="pointManager" ref="pointManager"/> </bean>

<bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController"> <property name="sessionForm" value="true"/> <property name="pointManager" ref="pointManager"/> <property name="commandName" value="pointIncrease"/> <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/> <property name="formView" value="pointincrease"/> <property name="successView" value="users"/> </bean>

<bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor" />

<bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping"> <property name="order" value="1" /> <property name="interceptors"> <list> <ref bean="parameterMappingInterceptor" /> </list> </property> <property name="portletModeParameterMap"> <map> <entry key="view"> <map> <entry key="pointincrease"> <ref bean="pointIncreaseController" /> </entry> <entry key="users"> <ref bean="userController" /> </entry> </map> </entry> </map> </property> </bean>
<bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping"> <property name="order" value="2" /> <property name="portletModeMap"> <map> <entry key="view"> <ref bean="userController" /> </entry> </map> </property> </bean> </beans> The SpringPortletName-portlet.xml file is an application context file for your MVC portlet. It has a couple of bean definitions: viewController. At this point, remember that the viewController bean definition points to the com.ibm.developerworks.springmvc.ViewController.java class. portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet gets a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture. The PortletModeHandlerMapping class is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request. The PortletModeHandlerMapping class uses Portlet mode for the current request to find a suitable Controller class to use for handling the request. The portletModeMap property of portletModeHandlerMapping bean is the place where we map the Portlet mode name against the Controller class. In the sample code, we show that viewController is responsible for handling View mode requests. Developing UserController.java In the preceding section, you learned that the viewController bean is responsible for handling all the View mode requests. Your next step is to create the UserController.java class as shown below: public class UserController extends AbstractController { private PointManager pointManager; public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception { } public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws ServletException, IOException { String now = (new java.util.Date()).toString(); Map<String, Object> myModel = new HashMap<String, Object>(); myModel.put("now", now); myModel.put("users", this.pointManager.getUsers()); return new ModelAndView("users", "model", myModel); } public void setPointManager(PointManager pointManager) { this.pointManager = pointManager; } } Every controller class in Spring Portlet MVC Framework must implement the org.springframework.web. portlet.mvc.Controller interface directly or indirectly. To make things easier, Spring Framework provides AbstractController class, which is the default implementation of the Controller interface. As a developer, you should always extend your controller from either AbstractController or one of its more specific subclasses. Any implementation of the Controller class should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, we create the ViewController class by extending it from AbstractController. Because we don't want to do any action processing in the HelloSpringPortletMVC portlet, we override only the handleRenderRequest() method of AbstractController. Now, the only thing that HelloWorldPortletMVC should do is render the markup of View.jsp to the user when it receives a user request to do so. To do that, return the object of ModelAndView with a value of view equal to View. Developing web.xml According to Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, and it needs a Web application deployment descriptor (that is, web.xml). 
Let's create the web.xml file in the /WEB-INF/ folder as shown below. Follow these steps: open the existing web.xml file located at /WebContent/WEB-INF/web.xml and replace its contents with the code shown here:

<servlet> <servlet-name>ViewRendererServlet</servlet-name> <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>ViewRendererServlet</servlet-name> <url-pattern>/WEB-INF/servlet/view</url-pattern> </servlet-mapping> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener>

The web.xml file for the sample portlet declares two things:
ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps the PortletRequest into a ServletRequest and forwards control to ViewRendererServlet for actual rendering. This allows the Spring Portlet MVC Framework to use the same View infrastructure as its servlet counterpart, the Spring Web MVC Framework.
ContextLoaderListener. The ContextLoaderListener class takes care of loading the Web application context at Web application startup. The Web application context is shared by all the portlets in the portlet application. In case of a duplicate bean definition, the bean definition in the portlet application context takes precedence over the Web application context. The ContextLoader class tries to read the value of the contextConfigLocation Web context parameter to find the location of the context file. If the contextConfigLocation parameter is not set, it uses the default value, /WEB-INF/applicationContext.xml, to load the context file.

The Portlet Controller interface requires two methods that handle the two phases of a portlet request: the action request and the render request. The action phase should be capable of handling an action request, and the render phase should be capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers a lot of controllers that already contain a lot of the functionality you might need - most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller: handling an action request, handling a render request, and returning a model and a view.

How rendering works
As you know, when the user tries to access a page with the sample portlet on it, or performs some action on any other portlet on that page, or tries to refresh that page, a render request is sent to the portlet. In the sample code, because DispatcherPortlet is the main portlet class, Weblogic Portal / Webcenter Portal calls its render() method, and then the following sequence of events occurs: The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRenderService() method. After the doRenderService() method gets control, it first finds the locale of the request by calling the PortletRequest.getLocale() method. This locale is used for all locale-related decisions, such as which resource bundle should be loaded or which JSP should be displayed to the user.
After that, the doRenderService() method starts iterating through all the HandlerMapping classes configured for this portlet, calling their getHandler() method to identify the appropriate Controller for handling this request. For a plain view mode request, the PortletModeHandlerMapping class reads the value of the current portlet mode and, based on that, finds the Controller class that should be used to handle the request. In the sample code, UserController is configured to handle view mode requests, so the PortletModeHandlerMapping class returns the UserController object. The doRenderService() method then calls its handleRenderRequest() method. The implementation of handleRenderRequest() in UserController.java is very simple: it builds a model map containing the current time and the list of users, and returns a ModelAndView with the view name users to DispatcherPortlet. After control returns to doRenderService(), the next task is to figure out how to render the users view. For that, DispatcherPortlet starts iterating through all the ViewResolvers configured in your portlet application, calling their resolveViewName() method. In the sample code we have configured only one ViewResolver, an InternalResourceViewResolver. When its resolveViewName() method is called with the view name, it applies the configured prefix /jsp/ and suffix .jsp and checks whether /jsp/users.jsp exists. If it does, it returns a JstlView object wrapping users.jsp. After control is returned to the doRenderService() method, it creates a PortletRequestDispatcher that points to /WEB-INF/servlet/view - that is, ViewRendererServlet. It then sets the JstlView object in the request and dispatches the request to ViewRendererServlet. After ViewRendererServlet gets control, it reads the JstlView object from the request attribute, creates another RequestDispatcher pointing to the /jsp/users.jsp URL, and passes control to it for actual markup generation. The markup generated by users.jsp is returned to the user.

At this point, you may question the need for ViewRendererServlet. Why can't DispatcherPortlet directly forward control to the JSP? Adding ViewRendererServlet in between allows the Spring Portlet MVC Framework to reuse the existing View infrastructure. You may appreciate this more when we discuss how easy it is to integrate the Apache Tiles Framework with your Spring Portlet MVC application.

The attached project SpringPortlet.zip should be imported into your OEPE workspace; SpringPortlet_Jars.zip contains the jar files required by the application. The project is written against Spring 2.5, and the same JSR 168 portlet should work on Webcenter Portal as well.

Downloads: Download the Weblogic Portal project, which contains the Spring portlet. Download the Spring jars. In addition to the above, you need to download Spring.jar (Spring 2.5).

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
Cloud computing gives us more flexibility on computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. With the increase in service instances, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.

Dispatcher mode
The dispatcher mode introduces a role that takes responsibility for finding the best service instance to process each request. The image below describes the shape of this mode. Four clients communicate with the service through the underlying transport. For example, if we are using HTTP, the clients might be connecting to the same service URL. On the server side there's a dispatcher listening on this URL, retrieving all messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for choosing the instance:
Round-robin: the dispatcher always sends the message to the next instance. For example, if the dispatcher sent the previous message to instance 2, the next message is sent to instance 3, regardless of whether instance 3 is busy at that moment.
Random: the dispatcher picks a service instance randomly; as with round-robin, it ignores whether the instance is busy.
Sticky: the dispatcher sends all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-based.

But as you can see, none of these approaches is really load-balanced. Clients send messages at any time, and each message might take a different processing time on the server side. This means that in some cases some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could happen that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 is idle. This brings several problems into our architecture. First, the response to the client might take longer than it should. As shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of round-robin dispatching. Second, if many requests come from the clients in a very short period, service instances might fill up with tons of pending tasks, and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage, which means any idle service instance is wasting our money! Finally, the dispatcher would be the bottleneck of our system, since all incoming messages must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer; if we want more capacity, we have to scale it up or buy a hardware load balancer - which is very expensive - as well as scaling out the service instances.

Pulling mode
Pulling mode doesn't need a dispatcher to route the messages. All service instances listen on the same transport and, whenever they are idle, try to retrieve the next proper message to process. A minimal sketch contrasting the two modes follows.
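The sketch mentioned above: a minimal in-process C# illustration of the two modes (my own, not from the article; BlockingCollection stands in for a real transport or message bus):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class DispatchVsPull
{
    static void Main()
    {
        const int workers = 3;
        const int messages = 9;

        // Dispatcher mode, round-robin: messages are assigned blindly in
        // rotation, whether or not the target instance is busy.
        int next = 0;
        for (int m = 0; m < messages; m++)
            Console.WriteLine("dispatcher sends msg{0} to instance {1}", m, next++ % workers);

        // Pulling mode: one shared queue; idle instances compete for the
        // next message, so a slow instance simply ends up taking fewer.
        var queue = new BlockingCollection<string>();
        var pullers = new Task[workers];
        for (int i = 0; i < workers; i++)
        {
            int id = i;
            pullers[i] = Task.Factory.StartNew(() =>
            {
                foreach (var msg in queue.GetConsumingEnumerable())
                {
                    Console.WriteLine("instance {0} pulled {1}", id, msg);
                    Thread.Sleep(id * 50); // simulate uneven processing speed
                }
            }, TaskCreationOptions.LongRunning);
        }
        for (int m = 0; m < messages; m++) queue.Add("msg" + m);
        queue.CompleteAdding();
        Task.WaitAll(pullers);
    }
}

Running it shows the round-robin assignments land on instances regardless of load, while in the pulling half the faster instances naturally consume more of the queue.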
Since there is no dispatcher in pulling mode, it requires some features from the transport:
The transport must support multiple clients connecting and multiple servers listening. HTTP and TCP don't allow multiple listeners on the same address and port, so they cannot be used in pulling mode directly.
All messages in the transport must be FIFO: an older message must be received before a newer one.
Message selection would be a plus. This means both service and client can specify selection criteria and receive only the specified kinds of messages. This feature is not mandatory, but is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

A message bus, or message queue, is the best candidate transport for pulling mode. First, it allows multiple applications to listen on the same queue, and it's FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ. Others provide an in-memory dictionary that can store the reply messages, for example Redis. The principle of pulling mode is to let the service instances be self-managed: each instance tries to retrieve the next pending incoming message once it has finished its current task. This solves the problems we met in dispatcher mode. Each incoming message is received by the next available instance, so the load is well balanced; it will not happen that some instances are busy while others are idle, since the idle ones will retrieve more tasks and keep themselves busy. Since all instances try their best to stay busy, we can use fewer instances than in dispatcher mode, which is more cost-effective. Since there's no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something to let the dispatcher know about the new instances; in pulling mode, since all service instances are self-managed, no extra change is needed at all. And if many messages arrive at once, the message bus queues them in the transport, so the service instances will not crash.

All of the above are benefits of pulling mode, but it introduces some problems as well:
Process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process a message, so we need more information to support debugging and tracing.
Real-time response may not be supported. Each service instance processes the next message only after the current one is done; if we have real-time requirements, this may not be a good solution.

Comparing the pros and cons above, pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost-effectiveness and self-management.

WCF and WCF transport extensibility
Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world, WCF is the best way to implement a service. In this series I'm going to demonstrate how to implement pulling mode on top of a message bus by extending WCF. I won't go deep into every related area of WCF, but I will highlight its transport extensibility.
When implementing an RPC foundation there are many aspects to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message is passed through all necessary channels and finally sent to the underlying transport; on the other side, the message is received from the transport and passed through the same channels until it reaches the business logic. This model is called the "channel stack" in WCF, and the last channel in the channel stack must always be a transport channel, which takes responsibility for sending and receiving the messages. As we are going to implement WCF over a message bus as a pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

Before we dig into the transport channel, let's have a look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines 3 basic MEPs: datagram, request-reply and duplex.
Datagram: also known as one-way, or fire-and-forget mode. The message is sent from the client to the service and no reply is needed; the client doesn't care about the message result at all.
Request-Reply: a very commonly used pattern. The client sends the request message to the service and waits until the reply message comes back from the service.
Duplex: the client sends a message to the service; while the service is processing the message it can call back to the client. During the callback the service acts like a client while the client acts like a service.

In WCF, each MEP has associated channel interfaces:
Datagram: IInputChannel, IOutputChannel
Request-Reply: IRequestChannel, IReplyChannel
Duplex: IDuplexChannel

The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. (The original post illustrates the transport channel objects for the request-reply, datagram and duplex MEPs in three figures.)

Having investigated the WCF transport architecture, channel model and MEPs, we can finally identify what we need to build to extend our message bus based transport layer:
Binding: (optional) defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack. We can use the built-in CustomBinding as well.
TransportBindingElement: defines which MEPs are supported by our transport and creates the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint when using this transport.
ChannelListener: creates the server-side channel based on the MEP it supports. We can have one ChannelListener create channels for all supported MEPs, or one ChannelListener per MEP. In this series I will use the second approach.
ChannelFactory: creates the client-side channel based on the MEP it supports. We can have one ChannelFactory create channels for all supported MEPs, or one ChannelFactory per MEP. In this series I will use the second approach.
Channels: based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport to support request-reply mode, we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above, one by one.
Scaffolding: in order to make our transport extension work, we also need some scaffolding, for example some classes to send and receive messages through our message bus, and some code to read and write the WCF message, etc. These are not strictly necessary, but will be very useful in our example.

Message bus
There is only one thing remaining before we can begin to implement our scaling-out-capable WCF transport: the message bus. As I mentioned above, the message bus must have certain features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, an enterprise message bus product; and as I said before, we can use any message bus product that satisfies our requirements. Here I would like to introduce an interface to separate the message bus from WCF. This allows us to implement the bus operations with whatever kind of bus we are going to use. The interface looks like this:

public interface IBus : IDisposable
{
    string SendRequest(string message, bool fromClient, string from, string to = null);

    void SendReply(string message, bool fromClient, string replyTo);

    BusMessage Receive(bool fromClient, string replyTo);
}

There are only three methods on the bus interface. Let me explain them one by one. The SendRequest method is responsible for sending a request message onto the bus. Its parameters are:
message: the WCF message content.
fromClient: indicates whether this message came from the client side.
from: the channel ID this message was sent from. The channel ID is generated when a channel is created, which will be explained in the following articles.
to: the channel ID this message should be received by. In the request-reply and duplex MEPs this is necessary, since the reply message must be received by the channel which sent the related request message.

The SendReply method is responsible for sending a reply message. It's very similar to the previous one, but has no "from" parameter; this is because there is never a need to reply to a reply, in any MEP. The Receive method is responsible for waiting for an incoming message, covering both request messages and specified reply messages. It returns a BusMessage object, which carries the channel information:

public class BusMessage
{
    public string MessageID { get; private set; }
    public string From { get; private set; }
    public string ReplyTo { get; private set; }
    public string Content { get; private set; }

    public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
    {
        MessageID = messageId;
        From = fromChannelId;
        ReplyTo = replyToChannelId;
        Content = content;
    }
}

Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is for test and sample purposes only; it can only be used if the service and client are in the same process. Very straightforward.
public class InProcMessageBus : IBus
{
    private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
    private readonly object _lock;

    public InProcMessageBus()
    {
        _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
        _lock = new object();
    }

    public string SendRequest(string message, bool fromClient, string from, string to = null)
    {
        var entity = new InProcMessageEntity(message, fromClient, from, to);
        _queue.TryAdd(entity.ID, entity);
        return entity.ID.ToString();
    }

    public void SendReply(string message, bool fromClient, string replyTo)
    {
        var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
        _queue.TryAdd(entity.ID, entity);
    }

    public BusMessage Receive(bool fromClient, string replyTo)
    {
        InProcMessageEntity e = null;
        while (true)
        {
            lock (_lock)
            {
                // Take the next message for this side (client or service) that is
                // either unaddressed (a request) or addressed to this channel (a reply).
                var entity = _queue
                    .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                    .FirstOrDefault();
                if (entity.Key != Guid.Empty && entity.Value != null)
                {
                    _queue.TryRemove(entity.Key, out e);
                }
            }
            if (e == null)
            {
                // Nothing pending yet; poll again shortly.
                Thread.Sleep(100);
            }
            else
            {
                return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
            }
        }
    }

    public void Dispose()
    {
    }
}

The InProcMessageBus stores the messages as InProcMessageEntity objects, which carry some extra information besides the WCF message itself:

public class InProcMessageEntity
{
    public Guid ID { get; set; }
    public string Content { get; set; }
    public bool FromClient { get; set; }
    public string From { get; set; }
    public string To { get; set; }

    public InProcMessageEntity()
        : this(string.Empty, false, string.Empty, string.Empty)
    {
    }

    public InProcMessageEntity(string content, bool fromClient, string from, string to)
    {
        ID = Guid.NewGuid();
        Content = content;
        FromClient = fromClient;
        From = from;
        To = to;
    }
}

Summary
OK, now I have all the necessary pieces ready. The next step is implementing our WCF message bus transport extension. In this post I described two scaling-out approaches for the service side, especially relevant when using a cloud platform - dispatcher mode and pulling mode - and compared their pros and cons. Then I introduced the WCF channel stack, the channel model and the transport extension points, and identified what we need to do to create our own WCF transport extension that lets our WCF services use pulling mode on top of a message bus. Finally, I provided some classes that will be used in future posts, working against an in-process memory message bus for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
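To sanity-check the bus semantics defined in the post above, here is a short usage sketch pairing SendRequest, Receive and SendReply from the IBus interface (my addition; the channel ID is a made-up placeholder - in the real transport the channels generate their own IDs, as later posts in the series will show):

class BusDemo
{
    static void Main()
    {
        IBus bus = new InProcMessageBus();
        var clientChannelId = "client-1"; // hypothetical; real IDs come from the channels

        // Client side: send a request tagged with this channel's ID.
        bus.SendRequest("<request>ping</request>", fromClient: true, from: clientChannelId);

        // Service side: pull the next client message off the bus...
        BusMessage request = bus.Receive(fromClient: true, replyTo: null);

        // ...process it, then reply to the channel that sent it.
        bus.SendReply("<reply>pong</reply>", fromClient: false, replyTo: request.From);

        // Client side: wait for the reply addressed to this channel.
        BusMessage reply = bus.Receive(fromClient: false, replyTo: clientChannelId);
        System.Console.WriteLine(reply.Content); // prints <reply>pong</reply>
    }
}

Note how the request carries no "to" value, so any idle service instance could pick it up (pulling mode), while the reply is addressed to the originating channel via request.From - exactly the selection behaviour the Receive filter implements.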

    Read the article

  • CodePlex Daily Summary for Saturday, October 29, 2011

CodePlex Daily Summary for Saturday, October 29, 2011

Popular Releases

patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensions: TypeConfigurationElement<T> - a polymorphic configuration element without having to be part of a PolymorphicConfigurationElementCollection; AnonymousConfigurationElement - a configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensions: MySql Provider - ...
Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetworkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...
Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.
Folder Bookmarks: Folder Bookmarks 2.2.0.1: In this version: Custom Icons - now you can change the icons of the bookmarks. By default, whenever an image is added, the icon is automatically changed to a thumbnail of the picture. This can be turned off in the settings (Options... > Settings). Ability to remove items from the 'Recent' category. Bugfixes - the 'Choose' button in 'Edit Bookmark' now works; another problem in the 'Edit Bookmark' window was also fixed.
Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies - Fixed: fanart and poster scraping issues. TV Shows - (Re)Added: rebuild single show; Fixed: issue when shows are moved from original location; ability to handle " for actor nicknames; crash when episode name contains "<" (does not scrape yet); clears fanart when switch...
patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5; dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit; converting reflection to use the new TypeInfo for reflection; projects updated to work with the Microsoft Visual Studio 2011 Preview. Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...
Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.
Image Converter: Image Converter 0.3: New Features: English and German support. Technical Improvements: Microsoft All Rules using Code Analysis. Planned Features for a future release: 1. Unit testing 2. Command line interface 3. Automatic updates.
AcDown - Anime & Comic Downloader: AcDown v3.6: A downloader supporting video sites such as Acfun, Bilibili and Tucao.cc; written in C# and requires .NET Framework 2.0; runs on 32- and 64-bit Windows XP/Vista/7.
DotNetNuke® Events: 05.02.01: This release fixes any known bugs from any previous version. Events 05.02.01 will work for any DNN version 5.5.0 and up. Full details on the changes can be found at http://dnnevents.codeplex.com/workitem/list/basic. Please review and rate this release... (stars are welcome) BUG FIXES: Added validation around category cookie. RSS feed was missing an explicit close of the file when writing; fixed. Added extra security into detail view. .ICS files did not include correct line folding; fixed. Cha...
Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Add JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Add -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed bug when the MinifyCode setting was set to false but RemoveUnneededCode was left at its default value of true.
Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337. This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. Stay tuned.
MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: the new Client Blocks feature of Views; a new "move" js method for the TreeViews; the NewHtmlCreated js event to the DataGrid. Improved the ChoiceList structure that now allows also the selection list of a dropdown to be chosen with a lambda expression. Improved the AcceptViewHintAttribute controller filter: now a client can specify not only the name of a View or Partial View it prefers, but also to receive just the rough data in Json format. Fixed: issue with partial trust Cl...
Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release Notes: Created for Halloween; you will find a theme file, custom css file and images. Created by Al Roome @AlstarRoome. Features: custom styling for web part; custom background. Screenshot: https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.png
DevForce Application Framework: DevForce AF 2.0.3 RTW: Prerequisites: WPF 4.0, Silverlight 4.0, DevForce 2010 6.1.3.1. Download Contents: Debug and Release Assemblies, API Documentation, Source code, License.txt, Requirements.txt. Release Highlights: New: EventAggregator event forwarding. New: EntityManagerInterceptor<T> to intercept EntityManager events. New: IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness. New: improved design time stability. New: support for add-in development. New: CoroutineFns.To...
NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB,...)
NDepend TFS 2010 integration: version 0.5.0 beta 1: Only the activity and the VS plugin are available right now. They basically work. Data types that are logged into TFS reports are subject to change. This is no big deal since data is not yet sent into the warehouse.
Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011. Upgraded the tools to support the Windows Phone Developer Tools RTW. Updated SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package). Changed the Shared Access Signature service interface to support more operations. Refactored the Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...
DotNetNuke® FAQ: 05.00.00: FAQ (Frequently Asked Questions) 05.00.00 will work for any DNN version 5.6.1 and up. It is the first version which is rewritten in C#. The scope of this update is to fix all known issues and improve the user interface. Please review and rate this release... (stars are welcome) BUG FIXES: "Manage Categories" button text was not localized. Edit/Add FAQ Entry: button text was not localized. ENHANCEMENTS: Added an option to select the control for category display: listbox with checkboxes (flat category ...
SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links. New icon.

New Projects

Asynk: Asynk is a framework/application that allows existing applications to easily be extended with an offloaded asynchronous worker layer. Asynk is developed using C#.
Blob Tower Defense: 3D tower defense game for Windows Phone 7. School project for Brno University of Technology, computer graphics class.
Booz: Booz is... an extended version of the boo shell (booish2 to be precise). Offers additional commands like cd, md, ls etc. I hope this shell can be used to take the position of/surpass the native Windows shell in the near future.
CIMS: A sanction information system for science and technology of HUST.
CrystalDot - Icon Collection / Pack (LGPL): A .NET/Mono-friendly variant of the Crystal icons by Everaldo. Icon collection / pack for .NET and Mono designed by Everaldo - KDE style. http://www.everaldo.com/crystal/
dotetes: dotetes is a crossword puzzle tool developed in C#.
Emoe': This project is a Windows Phone 7.1 application.
Equation Inversion: Visual Studio 2008 add-in for equation inversions.
Exploring VMR Features on WEC7: This is a sample application that helps you do alpha blending of a bitmap onto camera streaming in Windows Embedded Compact 7 using the DirectShow Video Mixing Renderer (VMR). It is a VS2008 based smart device project developed in C++. I have explained the sample application in the following blog link: http://www.e-consystems.com/blog/windowsce/?p=759
EzValidation: Custom validation extensions for ASP.NET MVC 3. Includes server and client side model based validation attributes for: Equal To, Not Equal To, Greater Than, Greater Than or Equal To, Less Than, Less Than or Equal To. Supports validating against: another model field, a specific value, current date/yesterday/tomorrow (for dates and strings). Download & install via NuGet: "Install-Package ezvalidation"
Flu.net: Flu.net is a tool that helps you create your own fluent syntax for .NET Framework applications in a declarative fashion. It is aimed at infrastructures and other open-source projects.
For Chess Endgames: King vs. King Opposition Calculator: You must input the locations of 2 kings on a chessboard, and whose turn it is to move. The calculator will display which king has the opposition, and how it can be used or maintained.
GameTrakXNA: This project aims to create a simple library to use the unique GameTrak controller within XNA and Flash.
Google Speech Recognition Example: Google Speech Recognition contains a working example of an application that uses the Google speech recognition API. The app contains all necessary dlls to record, decode and send your voice request to the Google service and receive a text representation of what you've said. It's developed in C#.
Interval Mandelbrot Explorer: Explore the Mandelbrot set using interval arithmetic.
ISD training tasks: ISD training examples and tasks.
iTunesControlBar: The iTunesControlBar helps users control their iTunes application while it is minimized. iTunesControlBar resides at the top of the screen, invisible when not used, and allows playback and volume control, library searches and media information without the need to bring up iTunes.
iTurtle: A bunch of PowerShell scripts to automate server management in an AD environment.
M26WC - Mono 2.6 Wizard Control: A wizard which runs under Mono 2.6. A fork of: http://aerowizard.codeplex.com/
Microsoft Help Viewer 2: Help Viewer 2 is the help runtime for both Visual Studio 11 help and Windows 8 help. The code in this project will help you use and understand the HV2 runtime API.
MONTRASEC: Monitoring Trafficking in human beings and Sexual Exploitation of Children: benchmarking for member state and EU reporting, turning the SIAMSECT templates into a user-friendly interface and reporting tool.
MTF.NET Runtime: Managed Task Framework .NET Runtime. The MTF.NET runtime software and resulting assemblies are required to run applications built using the Managed Task Framework.NET Professional (Visual Studio 2010 extension) software design editor. The MTF.NET team are committed to continuously improving the core MTF.NET runtime and ensuring it is always available free and fully transparent.
Pandoras Box: A greenfield inversion-of-control project utilising the power and flexibility of expressions and preferring convention over configuration.
Pass the Puzzle: Pass the Puzzle is a frantic word-guessing party game. The game displays a few letters, and the players must come up with words containing those letters. But beware: if the timer goes off, you lose! It is based on the folk party game Pass the Parcel and is written in C#.
PerCiGal: Percigal is a project for the development of applications for managing your personal media library. It consists of: a Windows application to use at home to catalog movies, TV series, cast and books, with the support of the Internet for information retrieval; a web interface for viewing and cataloging your media everywhere; an application for smartphones.
Project Flying Carpet: This game is a project for the Game Design: Game Engines course of the Digital Games program at Unisinos.
proxy browser: sed leo Latin's Butterfly....
Python Multiple Dispatch: Multiple dispatch (AKA multimethods) for Python 3 via a metaclass and type annotations.
reDune: A game in the genre of real-time strategy. Based on Dune2000 by Westwood and Electronic Arts.
Rereadable: Keep pages from the internet to read later.
ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. Now you don't have to change startup types or stop them one at a time. It has a simple list-based interface with the ability to save and load lists of user services to stop. Written in C#.
SharePoint 2010 Audience Membership Workflow Activity (Full Trust): A simple SharePoint 2010 workflow activity / workflow condition to check whether the user initiating the workflow is a member of a specified audience. Farm-level .wsp solution, written in C#. Once installed, the workflow activity can be used in SharePoint Designer 2010 declarative workflows.
SQL Server® to Firebird DB converter: Converts a Microsoft SQL Server® database into a Firebird database, including the entire structure and data.
stegitest: Test project.
System.Threading.Joins: The Joins project provides asynchronous concurrency semantics based on join calculus and modeled after the Microsoft Research Cω (C Omega) project.
TestAndroidGame: An attempt at developing a test Android game.
tetribricks: Block game.
Topographic Explorer: A project to import, convert, explore, manipulate, and save topographical maps. Looking to use C# and WPF.
Trading: Under construction!!!
Trombone: Trombone makes it easier for Windows Mobile Professional users to automate status replies through SMS. It's developed in Visual C# 2008.
Tulsa SharePoint Interest Group: Repository for source code for the Tulsa SharePoint Interest Group's web site. The Tulsa SharePoint Interest Group is using the Community Kit for SharePoint. This project will house any modifications that are specific to our user group.
World of Tanks RU tiny stats collection utility: Tiny utility to load player stats from the World of Tanks RU server. Results are saved to a comma-separated file.
WS-Discovery Proxy: Attempt at creating a general-purpose WS-Discovery proxy.
Yamaha Tu?n Tr?c: This application is used to manage information for Yamaha Tu?n Tr?c.

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java. Partial operators are often more convenient than total methods. In many use cases if the preconditions aren't met, there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is in the case of work-stealing queues where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms). It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait. A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility). Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction.
// PTLQueue :

Put(v) : // producer : partial method - waits as necessary
    assert v != null
    assert Mask >= 1 && (Mask & (Mask+1)) == 0   // Document invariants
    // doorway step
    // Obtain a sequence number -- ticket
    // As a practical concern the ticket value is temporally unique
    // The ticket also identifies and selects a slot
    auto tkt = AtomicFetchIncrement (&PutCursor, 1)
    slot * s = &Slots[tkt & Mask]
    // waiting phase :
    // wait for slot's generation to match the tkt value assigned to this put() invocation.
    // The "generation" is implicitly encoded as the upper bits in the cursor
    // above those used to specify the index : tkt div (Mask+1)
    // The generation serves as an epoch number to identify a cohort of threads
    // accessing disjoint slots
    while s->Turn != tkt : Pause
    assert s->MailBox == null
    s->MailBox = v // deposit and pass message

Take() : // consumer : partial method - waits as necessary
    auto tkt = AtomicFetchIncrement (&TakeCursor, 1)
    slot * s = &Slots[tkt & Mask]
    // 2-stage waiting :
    // First wait for turn for our generation
    // Acquire exclusive "take" access to slot's MailBox field
    // Then wait for the slot to become occupied
    while s->Turn != tkt : Pause
    // Concurrency in this section of code is now reduced to just 1 producer thread
    // vs 1 consumer thread.
    // For a given queue and slot, there will be at most one Take() operation running
    // in this section.
    // Consumer waits for producer to arrive and make slot non-empty
    // Extract message; clear mailbox; advance Turn indicator
    // We have an obvious happens-before relation :
    // Put(m) happens-before corresponding Take() that returns that same "m"
    for
        T v = s->MailBox
        if v != null :
            s->MailBox = null
            ST-ST barrier
            s->Turn = tkt + Mask + 1 // unlock slot to admit next producer and consumer
            return v
        Pause

tryTake() : // total method - returns ASAP with failure indication
    for
        auto tkt = TakeCursor
        slot * s = &Slots[tkt & Mask]
        if s->Turn != tkt : return null
        T v = s->MailBox // presumptive return value
        if v == null : return null
        // ratify tkt and v values and commit by advancing cursor
        if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
        s->MailBox = null
        ST-ST barrier
        s->Turn = tkt + Mask + 1
        return v

The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment.
PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor, obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry : put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and operations to increment those cursors serve double-duty : slot-selection and ticket assignment for locking the slot's MailBox field. At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional. The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access the TakeCursor, and a take() operation modifies the TakeCursor but does not access the PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause) but it's trivial to implement per-slot parking and unparking to deschedule waiting threads.
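For readers who prefer compilable code to pseudocode, below is a rough C# rendering of the put/take deconstruction just described. It is a sketch of my own, not code from the post: anti-false-sharing padding, parking/unparking, and the total tryPut()/tryTake() variants are omitted; Thread.SpinWait stands in for Pause; and Volatile.Read/Write plus Interlocked.Increment approximate the fenced atomics the post assumes.

using System;
using System.Threading;

// Minimal illustrative PTLQueue sketch (names are mine).
public sealed class PtlQueue<T> where T : class
{
    private sealed class Slot
    {
        public long Turn;   // generation/ticket gate shared by put() and take()
        public T MailBox;   // single-word message slot
    }

    private readonly Slot[] _slots;
    private readonly long _mask;
    private long _putCursor = -1;   // Interlocked.Increment returns the new value,
    private long _takeCursor = -1;  // so tickets start at 0

    public PtlQueue(int capacity)
    {
        if (capacity < 1 || (capacity & (capacity - 1)) != 0)
            throw new ArgumentException("capacity must be a power of two");
        _mask = capacity - 1;
        _slots = new Slot[capacity];
        for (int i = 0; i < capacity; i++)
            _slots[i] = new Slot { Turn = i }; // slot i initially admits ticket i
    }

    public void Put(T v)
    {
        if (v == null) throw new ArgumentNullException(nameof(v));
        long tkt = Interlocked.Increment(ref _putCursor); // doorway: ticket + slot selection
        Slot s = _slots[tkt & _mask];
        while (Volatile.Read(ref s.Turn) != tkt)          // wait for "put" turn on this slot
            Thread.SpinWait(64);                          // stands in for Pause
        Volatile.Write(ref s.MailBox, v);                 // deposit and pass message
    }

    public T Take()
    {
        long tkt = Interlocked.Increment(ref _takeCursor);
        Slot s = _slots[tkt & _mask];
        while (Volatile.Read(ref s.Turn) != tkt)          // acquire "take" turn on this slot
            Thread.SpinWait(64);
        T v;
        while ((v = Volatile.Read(ref s.MailBox)) == null) // wait for the producer's deposit
            Thread.SpinWait(64);
        s.MailBox = null;                                  // clear mailbox
        Volatile.Write(ref s.Turn, tkt + _mask + 1);       // release slot to the next generation
        return v;
    }
}

Note how Turn is the single egress variable shared by both roles: Take() releasing the slot with tkt + _mask + 1 is what re-admits the next generation's producer and consumer, exactly the asymmetry described above.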
It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move in either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper. There's quite a bit of related literature in this area. I'll call out a few relevant references:

Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point: Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue or queues effectively equivalent to PTLQueue have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.
Gottlieb et al: Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
Orozco et al: CB-Queue in Toward high-throughput algorithms on many-core architectures, which appeared in TACO 2012.
Meneghin et al: BNPVB family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture
Dmitry Vyukov: bounded MPMC queue (highly recommended)
Alex Otenko: US8607249 (highly related).
John Mellor-Crummey: Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester
Thomasson: FIFO Distributed Bakery Algorithm (very similar to PTLQueue).
Scott and Scherer: Dual Data Structures

I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory and otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases). We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices would map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable). As an aside, say we need to busy-wait for some condition as follows : "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into : "if C == 0 : for { Pause; if C != 0 : break; }".
Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use : "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff such as MCS or CLH, then you can usually transform that lock into a queue.
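As a small, strictly illustrative C# version of that restructured busy-wait (the flag and method names are mine; assume another thread eventually sets the flag non-zero):

using System.Threading;

static class SpinExample
{
    static int C; // hypothetical flag, set non-zero by some other thread

    // One branch controls loop entry and a separate branch controls loop exit,
    // mirroring "if C == 0 : for { Pause; if C != 0 : break; }" from the text.
    static void AwaitNonZero()
    {
        if (Volatile.Read(ref C) == 0)
        {
            do
            {
                Thread.SpinWait(64); // stands in for Pause
            }
            while (Volatile.Read(ref C) == 0);
        }
    }
}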
Acquire "put" access to slot via PTL-like lock Acquire "take" access to slot via PTL-like lock 2 locks : put and take -- at most one thread can access slot's mailbox Both locks use same "turn" field Like multilane : 2 cursors : put and take slot is simple 1-capacity mailbox instead of queue Borrow per-slot turn/grant from PTL Provides strict FIFO Lock slot : put-vs-put take-vs-take at most one put accesses slot at any one time at most one put accesses take at any one time reduction to 1-vs-1 instead of N-vs-M concurrency Per slot locks for put/take Release put/take by advancing turn * is instrumental in ... * P-V Semaphore vs lock vs K-exclusion * See also : FastQueues-excerpt.java dice-etc/queue-mpmc-bounded-blocking-circular-xadd/ * PTLQueue is the same as PTLQB - identical * Expedient return; ASAP; prompt; immediately * Lamport's Bakery algorithm : doorway step then waiting phase Threads arriving at doorway obtain a unique ticket number Threads enter in ticket order * In the terminology of Reed and Kanodia a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer It can also be thought of as an optimization of Lamport's bakery lock was designed for fault-tolerance rather than performance Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers --

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel.

    - by SaveTheRbtz
I wanted to share knowledge of tuning FreeBSD via sysctls, so I'm posting them with comments. Based on Igor Sysoev's (the author of nginx) presentation about tuning FreeBSD for up to 100,000-200,000 active connections. The sysctls are for FreeBSD 7.x. Since 7.2 amd64, some of them are tuned well by default. Prior to 7.0, some of them are boot-time only (set via /boot/loader.conf) or do not exist at all. Highload web server sysctls:

# Max. backlog size
kern.ipc.somaxconn=4096
# Shared memory // 7.2+ can use shared memory > 2Gb
kern.ipc.shmmax=2147483648
# Sockets
kern.ipc.maxsockets=204800
# Do not use larger sockbufs on 8.0
# ( http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 )
kern.ipc.maxsockbuf=262144
# Receive clusters (on amd64 7.2+ 65k is default)
# For such a high value vm.kmem_size must be increased to 3G
#kern.ipc.nmbclusters=229376
# Jumbo pagesize(4k/8k) clusters
# Used as general packet storage for jumbo frames
# can be monitored via `netstat -m`
#kern.ipc.nmbjumbop=192000
# Jumbo 9k/16k clusters
# If you are using them
#kern.ipc.nmbjumbo9=24000
#kern.ipc.nmbjumbo16=10240
# Every socket is a file, so increase them
kern.maxfiles=204800
kern.maxfilesperproc=200000
kern.maxvnodes=200000
# Turn off receive autotuning
#net.inet.tcp.recvbuf_auto=0
# Small receive space, only usable on an http server; on a file server this
# should be increased to 65535 or even more
#net.inet.tcp.recvspace=8192
# Small send space is useful for http servers that serve small files
# Autotuned since 7.x
net.inet.tcp.sendspace=16384
# This should be enabled if you are going to use big spaces (>64k)
#net.inet.tcp.rfc1323=1
# Turn this off on highspeed, lossless connections (LAN 1Gbit+)
#net.inet.tcp.delayed_ack=0
# This feature is useful if you are serving data over modems, Gigabit Ethernet,
# or even high speed WAN links (or any other link with a high bandwidth delay product),
# especially if you are also using window scaling or have configured a large send window.
# You can try setting it to 0 on a fileserver with 1GBit+ interfaces
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 )
#net.inet.tcp.inflight.enable=0
# Disable randomizing of ports to avoid false RST
# Before usage check the SA here: www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf
# (it also says that port randomization auto-disables at some connection rates, but I haven't tested that)
#net.inet.ip.portrange.randomized=0
# Increase portrange
# For outgoing connections only. Good for seed-boxes and ftp servers.
net.inet.ip.portrange.first=1024
net.inet.ip.portrange.last=65535
# Security
net.inet.ip.redirect=0
net.inet.ip.sourceroute=0
net.inet.ip.accept_sourceroute=0
net.inet.icmp.maskrepl=0
net.inet.icmp.log_redirect=0
net.inet.icmp.drop_redirect=1
net.inet.tcp.drop_synfin=1
# Security
net.inet.udp.blackhole=1
net.inet.tcp.blackhole=2
# Increases default TTL, sometimes useful
# Default is 64
net.inet.ip.ttl=128
# Lessen max segment life to conserve resources
# ACK waiting time in milliseconds (default: 30000 from RFC)
net.inet.tcp.msl=5000
# Max number of timewait sockets
net.inet.tcp.maxtcptw=40960
# Don't use tw on local connections
# As of 15 Apr 2009, Igor Sysoev says that nolocaltimewait has a somewhat buggy implementation.
# So disable it for now until it gets fixed
#net.inet.tcp.nolocaltimewait=1
# FIN_WAIT_2 state fast recycle
net.inet.tcp.fast_finwait2_recycle=1
# Time before tcp keepalive probe is sent
# default is 2 hours (7200000)
#net.inet.tcp.keepidle=60000
# Should be increased until net.inet.ip.intr_queue_drops is zero
net.inet.ip.intr_queue_maxlen=4096
# Interrupt handling via multiple CPUs, but with context switch.
# You can play with it. Default is 1
#net.isr.direct=0
# This is for routers only
#net.inet.ip.forwarding=1
#net.inet.ip.fastforwarding=1
# This speeds up dummynet when the channel isn't saturated
net.inet.ip.dummynet.io_fast=1
# Increase dummynet(4) hash
#net.inet.ip.dummynet.hash_size=2048
#net.inet.ip.dummynet.max_chain_len
# Should be increased when you have A LOT of files on the server
# (Increase until vfs.ufs.dirhash_mem becomes lower)
vfs.ufs.dirhash_maxmem=67108864
# Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification)
net.inet.tcp.ecn.enable=1
# Flowtable - flow caching mechanism
# Useful for routers
#net.inet.flowtable.enable=1
#net.inet.flowtable.nmbflows=65535
# Extreme polling tuning
#kern.polling.burst_max=1000
#kern.polling.each_burst=1000
#kern.polling.reg_frac=100
#kern.polling.user_frac=1
#kern.polling.idle_poll=0
# IPFW dynamic rules and timeouts tuning
# Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower
net.inet.ip.fw.dyn_buckets=65536
net.inet.ip.fw.dyn_max=65536
net.inet.ip.fw.dyn_ack_lifetime=120
net.inet.ip.fw.dyn_syn_lifetime=10
net.inet.ip.fw.dyn_fin_lifetime=2
net.inet.ip.fw.dyn_short_lifetime=10
# Make packets pass the firewall only once when using dummynet
# i.e. packets going thru a pipe are passing out from the firewall with accept
#net.inet.ip.fw.one_pass=1
# shm_use_phys wires all shared pages, making them unswappable
# Use this to lessen the Virtual Memory Manager's work when using Shared Mem.
# Useful for databases
#kern.ipc.shm_use_phys=1

/boot/loader.conf:

# Accept filters for data, http and DNS requests
# Useful when your software uses select() instead of kevent/kqueue, or when you are under DDoS
# DNS accf available on 8.0+
accf_data_load="YES"
accf_http_load="YES"
accf_dns_load="YES"
# Async IO system calls
aio_load="YES"
# Adds NCQ support in FreeBSD
# WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+
# 8.0+ only
#ahci_load=
#siis_load=
# Increase kernel memory size to 3G.
#
# Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM
# Otherwise panic will happen on next reboot!
#
# It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc
# Useful on highload stateful firewalls, proxies or ZFS fileservers
# (FreeBSD 7.2+ amd64 users: Check that current value is lower!)
#vm.kmem_size="3G"
# Older versions of FreeBSD can't tune maxfiles on the fly
#kern.maxfiles="200000"
# Useful for databases
# Sets maximum data size to 1G
# (FreeBSD 7.2+ amd64 users: Check that current value is lower!)
#kern.maxdsiz="1G"
# Maximum buffer size (vfs.maxbufspace)
# You can check the current one via vfs.bufspace
# Should be lowered/raised depending on the server's load-type
# Usually decreased to preserve kmem
# (default is 200M)
#kern.maxbcache="512M"
# Sendfile buffers
# For i386 only
#kern.ipc.nsfbufs=10240
# syncache hash table tuning
net.inet.tcp.syncache.hashsize=1024
net.inet.tcp.syncache.bucketlimit=100
# Increased hostcache
net.inet.tcp.hostcache.hashsize="16384"
net.inet.tcp.hostcache.bucketlimit="100"
# TCP control-block hash table tuning
net.inet.tcp.tcbhashsize=4096
# Enable superpages, for 7.2+ only
# Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html
vm.pmap.pg_ps_enabled=1
# Useful if you are using an Intel Gigabit NIC
#hw.em.rxd=4096
#hw.em.txd=4096
#hw.em.rx_process_limit="-1"
# Also if you have A LOT of interrupts on the NIC - play with the following parameters
# NOTE: You should set them for every NIC
#dev.em.0.rx_int_delay: 250
#dev.em.0.tx_int_delay: 250
#dev.em.0.rx_abs_int_delay: 250
#dev.em.0.tx_abs_int_delay: 250
# There is also a multithreaded version of the em driver that can be found here:
# http://people.yandex-team.ru/~wawa/
#
# for additional em monitoring and statistics use
# `sysctl dev.em.0.stats=1 ; dmesg`
#
# Same tunings for igb
#hw.igb.rxd=4096
#hw.igb.txd=4096
#hw.igb.rx_process_limit=100
# Some useful netisr tunables. See sysctl net.isr
#net.isr.defaultqlimit=4096
#net.isr.maxqlimit=10240
# Bind netisr threads to CPUs
#net.isr.bindthreads=1
# Nicer boot logo =)
loader_logo="beastie"

And finally here are my additions to the GENERIC kernel:

# Just some of them, see also
# cat /sys/{i386,amd64,}/conf/NOTES
# This one is useful only on i386
#options KVA_PAGES=512
# You can play with HZ in environments with a high interrupt rate (default is 1000)
# 100 is for my notebook to prolong its battery life
#options HZ=100
# Polling is good on network loads with high packet rates and low-end NICs
# NB! Do not enable it if you want more than one netisr thread
#options DEVICE_POLLING
# Eliminate datacopy on socket read-write
# To take advantage of zero copy sockets you should have an MTU of 8K (amd64)
# (4k for i386). This requirement is only for receiving data.
# Read more in man zero_copy_sockets
#options ZERO_COPY_SOCKETS
# Support TCP signatures. Used for IPSec
options TCP_SIGNATURE
options IPSEC
# These ones can be loaded as modules. They are described in the loader.conf section
#options ACCEPT_FILTER_DATA
#options ACCEPT_FILTER_HTTP
# Adding ipfw, also can be loaded as modules
options IPFIREWALL
options IPFIREWALL_VERBOSE
options IPFIREWALL_VERBOSE_LIMIT=10
options IPFIREWALL_DEFAULT_TO_ACCEPT
options IPFIREWALL_FORWARD
# Adding kernel NAT
options IPFIREWALL_NAT
options LIBALIAS
# Traffic shaping
options DUMMYNET
# Divert, i.e. for userspace NAT
options IPDIVERT
# This is for OpenBSD's pf firewall
device pf
device pflog
# pf's QoS - ALTQ
options ALTQ
options ALTQ_CBQ # Class Based Queueing (CBQ)
options ALTQ_RED # Random Early Detection (RED)
options ALTQ_RIO # RED In/Out
options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC)
options ALTQ_PRIQ # Priority Queuing (PRIQ)
options ALTQ_NOPCC # Required for SMP build
# Pretty console
# Manual can be found here http://forums.freebsd.org/showthread.php?t=6134
#options VESA
#options SC_PIXEL_MODE
# Disable reboot on Ctrl Alt Del
#options SC_DISABLE_REBOOT
# Change normal|kernel messages color
options SC_NORM_ATTR=(FG_GREEN|BG_BLACK)
options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK)
# More scroll space
options SC_HISTORY_SIZE=8192
# Adding hardware crypto device
device crypto
device cryptodev
# Useful network interfaces
device vlan
device tap # Virtual Ethernet driver
device gre # IP over IP tunneling
device if_bridge # Bridge interface
device pfsync # synchronization interface for PF
device carp # Common Address Redundancy Protocol
device enc # IPsec interface
device lagg # Link aggregation interface
device stf # IPv4-IPv6 port
# Also for my notebook, but may be used with an Opteron
#device amdtemp
# Support for ECMP. More than one route for a destination
# Works even with the default route, so one can use it as LB for two ISPs
# For now the code is unstable and panics (panic: rtfree 2) on route deletions.
#options RADIX_MPATH
# Multicast routing
#options MROUTING
#options PIM
# DTrace
options KDTRACE_HOOKS # all architectures - enable general DTrace hooks
options DDB_CTF # all architectures - kernel ELF linker loads CTF data
#options KDTRACE_FRAME # amd64-only
# Adaptive spinning in lockmgr (8.x+)
# See http://www.mail-archive.com/[email protected]/msg10782.html
options ADAPTIVE_LOCKMGRS
# UTF-8 in console (9.x+)
#options TEKEN_UTF8
#options TEKEN_XTERM
# NCQ support
# WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+
#options ATA_CAM # FreeBSD 9+
# Deadlock resolver thread
# For additional information see http://www.mail-archive.com/[email protected]/msg18124.html
#options DEADLKRES

PS. Also, most of FreeBSD's limits can be monitored by
# vmstat -z
and
# limits

PPS. A variety of network counters can be monitored via
# netstat -s
In FreeBSD 9, netstat's -Q option appeared; try the following command to display netisr stats:
# netstat -Q

PPPS. Also see
# man 7 tuning

PPPPS. I wanted to thank the FreeBSD community, especially the author of nginx - Igor Sysoev, and the nginx-ru@ and FreeBSD-performance@ mailing lists, for providing useful information about FreeBSD tuning.

So here is the question: What tunings are you using on your FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with a description of their meaning (do not copy-paste from sysctl -d). Don't forget to specify the server type (web, smb, gateway, etc). Let's share experience!

    Read the article

  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
I'm running supervisord on my CentOS 6 box like so,

/usr/bin/supervisord -c /etc/supervisord.conf

and when I launch supervisorctl all process statuses are fine, but if I try to reload using supervisorctl I get

unix:///tmp/supervisor.sock no such file

I'm using the same config file I've used successfully on other boxes, and I'm running everything as root. I can't understand what the problem is... Config file:

; Sample supervisor config file.

[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))

;[inet_http_server] ; inet (TCP) server disabled by default
;port=127.0.0.1:9001 ; (ip_address:port specifier, *:port for all iface)
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))

[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
;umask=022 ; (process file creation umask;default 022)
;user=chrism ; (default is current user, required if root)
;identifier=supervisor ; (supervisord identifier, default is 'supervisor')
;directory=/tmp ; (default is not to cd during start)
;nocleanup=true ; (don't clean up tempfiles at start;default false)
;childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
;environment=KEY=value ; (key value pairs to add to environment)
;strip_ansi=false ; (strip ansi escape codes in logs; def. false)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket
;username=chris ; should be same as http_username if set
;password=123 ; should be same as http_password if set
;prompt=mysupervisor ; cmd line prompt (default "supervisor")
;history_file=~/.sc_history ; use readline history if available

; The below sample program section shows all possible program subsection values,
; create one or more 'real' program: sections to be able to control them under
; supervisor.

;[program:foo]
;command=/bin/cat

[program:embed_scheduler]
command=/opt/web-apps/mywebsite/custom_process.py
process_name=%(program_name)s_%(process_num)d
numprocs=3

;[program:theprogramname]
;command=/bin/cat ; the program (relative uses PATH, can take args)
;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
;numprocs=1 ; number of processes copies to start (def 1)
;directory=/tmp ; directory to cwd to before exec (def no cwd)
;umask=022 ; umask for process (default None)
;priority=999 ; the relative start priority (default 999)
;autostart=true ; start at supervisord start (default: true)
;autorestart=unexpected ; whether/when to restart (default: unexpected)
;startsecs=1 ; number of secs prog must stay running (def. 1)
;startretries=3 ; max # of serial start failures (default 3)
;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
;stopsignal=QUIT ; signal used to kill process (default TERM)
;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
;killasgroup=false ; SIGKILL the UNIX process group (def false)
;user=chrism ; setuid to this UNIX account to run the program
;redirect_stderr=true ; redirect proc stderr to stdout (default false)
;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO
;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10)
;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
;stdout_events_enabled=false ; emit events on stdout writes (default false)
;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO
;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
;stderr_logfile_backups=10 ; # of stderr logfile backups (default 10)
;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
;stderr_events_enabled=false ; emit events on stderr writes (default false)
;environment=A=1,B=2 ; process environment additions (def no adds)
;serverurl=AUTO ; override serverurl computation (childutils)

; The below sample eventlistener section shows all possible
; eventlistener subsection values, create one or more 'real'
; eventlistener: sections to be able to handle event notifications
; sent by supervisor.

;[eventlistener:theeventlistenername]
;command=/bin/eventlistener ; the program (relative uses PATH, can take args)
;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
;numprocs=1 ; number of processes copies to start (def 1)
;events=EVENT ; event notif. types to subscribe to (req'd)
;buffer_size=10 ; event buffer queue size (default 10)
;directory=/tmp ; directory to cwd to before exec (def no cwd)
;umask=022 ; umask for process (default None)
;priority=-1 ; the relative start priority (default -1)
;autostart=true ; start at supervisord start (default: true)
;autorestart=unexpected ; whether/when to restart (default: unexpected)
;startsecs=1 ; number of secs prog must stay running (def. 1)
;startretries=3 ; max # of serial start failures (default 3)
;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
;stopsignal=QUIT ; signal used to kill process (default TERM)
;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
;killasgroup=false ; SIGKILL the UNIX process group (def false)
;user=chrism ; setuid to this UNIX account to run the program
;redirect_stderr=true ; redirect proc stderr to stdout (default false)
;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO
;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10)
;stdout_events_enabled=false ; emit events on stdout writes (default false)
;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO
;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
;stderr_logfile_backups ; # of stderr logfile backups (default 10)
;stderr_events_enabled=false ; emit events on stderr writes (default false)
;environment=A=1,B=2 ; process environment additions
;serverurl=AUTO ; override serverurl computation (childutils)

; The below sample group section shows all possible group values,
; create one or more 'real' group: sections to create "heterogeneous"
; process groups.

;[group:thegroupname]
;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions
;priority=999 ; the relative start priority (default 999)

; The [include] section can just contain the "files" setting. This
; setting can list multiple files (separated by whitespace or
; newlines). It can also contain wildcards. The filenames are
; interpreted as relative to this file. Included files *cannot*
; include files themselves.

;[include]
;files = relative/directory/*.ini

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again.

When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write-protected) is important for some sites.

When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name.

For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling "Allow third-party Flash content to store data on your computer" might have the very same result. Does anyone have any experience with this?

(Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to the Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
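A minimal sketch of the scheduled-deletion idea mentioned above, assuming the Mac paths from this post; the script name and schedule are invented for illustration, and the author's caveat stands: deleting these folders while a browser is actively using Flash may interfere with it.

#!/bin/bash
# clear-flash-traces.sh -- hypothetical cleanup script, not an official tool.
# Removes the Flash history trail (settings.sol and the per-site folders in
# sys/) plus all Local Shared Objects, at the locations described above.
FP="$HOME/Library/Preferences/Macromedia/Flash Player"

rm -f  "$FP/macromedia.com/support/flashplayer/sys/settings.sol"
rm -rf "$FP/macromedia.com/support/flashplayer/sys/"*/ 2>/dev/null
rm -rf "$FP/#SharedObjects/"*/ 2>/dev/null

Run it hourly from cron (or wrap it in a launchd LaunchAgent) with something like: 0 * * * * "$HOME/bin/clear-flash-traces.sh".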

    Read the article

  • org-sort multi: date/time (?d ?t) | priority (?p) | title (?a)

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODO so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by title of the task?

EDIT: I believe that org-sort by deadline (?d) has a bug that cannot properly handle undated tasks. I am working on a workaround (i.e., moving the undated todo to a different heading before the deadline (?d) sort occurs), but perhaps the best thing to do would be to try and fix the original sorting function. Development of the workaround can be found in this thread (i.e., moving the undated tasks to a different heading in one fell swoop): How to automate org-refile for multiple todo

EDIT: Apparently, the following code (ancient history) that I found on the internet was eventually modified and included as a part of org-sort-entries. Unfortunately, undated todo are not properly sorted when sorting by deadline -- i.e., they are mixed in with the dated todo.

;; multiple sort
(defun org-sort-multi (&rest sort-types)
  "Multiple sorts on a certain level of an outline tree, or plain list items.
SORT-TYPES is a list where each entry is either a character or a cons pair
(BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
CHAR is one of the characters defined in `org-sort-entries-or-items'.
Entries are applied in back to front order.
Example: To sort first by TODO status, then by priority, then by date,
then alphabetically (case-sensitive) use the following call:
(org-sort-multi '(?d ?p ?t (t . ?a)))"
  (interactive)
  (dolist (x (nreverse sort-types))
    (when (char-valid-p x) (setq x (cons nil x)))
    (condition-case nil
        (org-sort-entries (car x) (cdr x))
      (error nil))))

;; sort current level
(defun lawlist-sort (&rest sort-types)
  "Sort the current org level.
SORT-TYPES is a list where each entry is either a character or a cons pair
(BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
CHAR is one of the characters defined in `org-sort-entries-or-items'.
Entries are applied in back to front order.
Defaults to \"?o ?p\" which is sorted by TODO status, then by priority"
  (interactive)
  (when (equal mode-name "Org")
    (let ((sort-types (or sort-types
                          (if (or (org-entry-get nil "TODO")
                                  (org-entry-get nil "PRIORITY"))
                              '(?d ?t ?p) ;; date, time, priority
                            '((nil . ?a))))))
      (save-excursion
        (outline-up-heading 1)
        (let ((start (point)) end)
          (while (and (not (bobp)) (not (eobp)) (<= (point) start))
            (condition-case nil
                (outline-forward-same-level 1)
              (error (outline-up-heading 1))))
          (unless (> (point) start) (goto-char (point-max)))
          (setq end (point))
          (goto-char start)
          (apply 'org-sort-multi sort-types)
          (goto-char end)
          (when (eobp) (forward-line -1))
          (when (looking-at "^\\s-*$")
            ;; (delete-line)
            )
          (goto-char start)
          ;; (dotimes (x ) (org-cycle))
          )))))

EDIT: Here is a more modern version of multi-sort, which is likely based upon further development of the above code:

(defun org-sort-all ()
  (interactive)
  (save-excursion
    (goto-char (point-min))
    (while (re-search-forward "^\* " nil t)
      (goto-char (match-beginning 0))
      (condition-case err
          (progn
            (org-sort-entries t ?a)
            (org-sort-entries t ?p)
            (org-sort-entries t ?o)
            (forward-line))
        (error nil)))
    (goto-char (point-min))
    (while (re-search-forward "\* PROJECT " nil t)
      (goto-char (line-beginning-position))
      (ignore-errors
        (org-sort-entries t ?a)
        (org-sort-entries t ?p)
        (org-sort-entries t ?o))
      (forward-line))))

EDIT: The best option will be to fix sorting of deadlines (?d) so that undated todo are moved to the bottom of the outline, instead of mixed in with the dated todo. Here is an excerpt from the current org.el included within Emacs Trunk (as of July 1, 2013):

(defun org-sort (with-case)
  "Call `org-sort-entries', `org-table-sort-lines' or `org-sort-list'.
Optional argument WITH-CASE means sort case-sensitively."
  (interactive "P")
  (cond
   ((org-at-table-p) (org-call-with-arg 'org-table-sort-lines with-case))
   ((org-at-item-p) (org-call-with-arg 'org-sort-list with-case))
   (t (org-call-with-arg 'org-sort-entries with-case))))

(defun org-sort-remove-invisible (s)
  (remove-text-properties 0 (length s) org-rm-props s)
  (while (string-match org-bracket-link-regexp s)
    (setq s (replace-match (if (match-end 2) (match-string 3 s) (match-string 1 s)) t t s)))
  s)

(defvar org-priority-regexp) ; defined later in the file

(defvar org-after-sorting-entries-or-items-hook nil
  "Hook that is run after a bunch of entries or items have been sorted.
When children are sorted, the cursor is in the parent line when this
hook gets called. When a region or a plain list is sorted, the cursor
will be in the first entry of the sorted region/list.")

(defun org-sort-entries (&optional with-case sorting-type getkey-func compare-func property)
  "Sort entries on a certain level of an outline tree.
If there is an active region, the entries in the region are sorted.
Else, if the cursor is before the first entry, sort the top-level items.
Else, the children of the entry at point are sorted.

Sorting can be alphabetically, numerically, by date/time as given by
a time stamp, by a property or by priority.

The command prompts for the sorting type unless it has been given to
the function through the SORTING-TYPE argument, which needs to be a
character, \(?n ?N ?a ?A ?t ?T ?s ?S ?d ?D ?p ?P ?o ?O ?r ?R ?f ?F).
Here is the precise meaning of each character:

n Numerically, by converting the beginning of the entry/item to a number.
a Alphabetically, ignoring the TODO keyword and the priority, if any.
o By order of TODO keywords.
t By date/time, either the first active time stamp in the entry, or,
  if none exist, by the first inactive one.
s By the scheduled date/time.
d By deadline date/time.
c By creation time, which is assumed to be the first inactive time stamp
  at the beginning of a line.
p By priority according to the cookie.
r By the value of a property.
Capital letters will reverse the sort order. If the SORTING-TYPE is ?f or ?F, then GETKEY-FUNC specifies a function to be called with point at the beginning of the record. It must return either a string or a number that should serve as the sorting key for that record. Comparing entries ignores case by default. However, with an optional argument WITH-CASE, the sorting considers case as well." (interactive "P") (let ((case-func (if with-case 'identity 'downcase)) (cmstr ;; The clock marker is lost when using `sort-subr', let's ;; store the clocking string. (when (equal (marker-buffer org-clock-marker) (current-buffer)) (save-excursion (goto-char org-clock-marker) (looking-back "^.*") (match-string-no-properties 0)))) start beg end stars re re2 txt what tmp) ;; Find beginning and end of region to sort (cond ((org-region-active-p) ;; we will sort the region (setq end (region-end) what "region") (goto-char (region-beginning)) (if (not (org-at-heading-p)) (outline-next-heading)) (setq start (point))) ((or (org-at-heading-p) (condition-case nil (progn (org-back-to-heading) t) (error nil))) ;; we will sort the children of the current headline (org-back-to-heading) (setq start (point) end (progn (org-end-of-subtree t t) (or (bolp) (insert "\n")) (org-back-over-empty-lines) (point)) what "children") (goto-char start) (show-subtree) (outline-next-heading)) (t ;; we will sort the top-level entries in this file (goto-char (point-min)) (or (org-at-heading-p) (outline-next-heading)) (setq start (point)) (goto-char (point-max)) (beginning-of-line 1) (when (looking-at ".*?\\S-") ;; File ends in a non-white line (end-of-line 1) (insert "\n")) (setq end (point-max)) (setq what "top-level") (goto-char start) (show-all))) (setq beg (point)) (if (>= beg end) (error "Nothing to sort")) (looking-at "\\(\\*+\\)") (setq stars (match-string 1) re (concat "^" (regexp-quote stars) " +") re2 (concat "^" (regexp-quote (substring stars 0 -1)) "[ \t\n]") txt (buffer-substring beg end)) (if (not (equal (substring txt -1) "\n")) (setq txt (concat txt "\n"))) (if (and (not (equal stars "*")) (string-match re2 txt)) (error "Region to sort contains a level above the first entry")) (unless sorting-type (message "Sort %s: [a]lpha [n]umeric [p]riority p[r]operty todo[o]rder [f]unc [t]ime [s]cheduled [d]eadline [c]reated A/N/P/R/O/F/T/S/D/C means reversed:" what) (setq sorting-type (read-char-exclusive)) (and (= (downcase sorting-type) ?f) (setq getkey-func (org-icompleting-read "Sort using function: " obarray 'fboundp t nil nil)) (setq getkey-func (intern getkey-func))) (and (= (downcase sorting-type) ?r) (setq property (org-icompleting-read "Property: " (mapcar 'list (org-buffer-property-keys t)) nil t)))) (message "Sorting entries...") (save-restriction (narrow-to-region start end) (let ((dcst (downcase sorting-type)) (case-fold-search nil) (now (current-time))) (sort-subr (/= dcst sorting-type) ;; This function moves to the beginning character of the "record" to ;; be sorted. (lambda nil (if (re-search-forward re nil t) (goto-char (match-beginning 0)) (goto-char (point-max)))) ;; This function moves to the last character of the "record" being ;; sorted. (lambda nil (save-match-data (condition-case nil (outline-forward-same-level 1) (error (goto-char (point-max)))))) ;; This function returns the value that gets sorted against. 
(lambda nil (cond ((= dcst ?n) (if (looking-at org-complex-heading-regexp) (string-to-number (match-string 4)) nil)) ((= dcst ?a) (if (looking-at org-complex-heading-regexp) (funcall case-func (match-string 4)) nil)) ((= dcst ?t) (let ((end (save-excursion (outline-next-heading) (point)))) (if (or (re-search-forward org-ts-regexp end t) (re-search-forward org-ts-regexp-both end t)) (org-time-string-to-seconds (match-string 0)) (org-float-time now)))) ((= dcst ?c) (let ((end (save-excursion (outline-next-heading) (point)))) (if (re-search-forward (concat "^[ \t]*\\[" org-ts-regexp1 "\\]") end t) (org-time-string-to-seconds (match-string 0)) (org-float-time now)))) ((= dcst ?s) (let ((end (save-excursion (outline-next-heading) (point)))) (if (re-search-forward org-scheduled-time-regexp end t) (org-time-string-to-seconds (match-string 1)) (org-float-time now)))) ((= dcst ?d) (let ((end (save-excursion (outline-next-heading) (point)))) (if (re-search-forward org-deadline-time-regexp end t) (org-time-string-to-seconds (match-string 1)) (org-float-time now)))) ((= dcst ?p) (if (re-search-forward org-priority-regexp (point-at-eol) t) (string-to-char (match-string 2)) org-default-priority)) ((= dcst ?r) (or (org-entry-get nil property) "")) ((= dcst ?o) (if (looking-at org-complex-heading-regexp) (- 9999 (length (member (match-string 2) org-todo-keywords-1))))) ((= dcst ?f) (if getkey-func (progn (setq tmp (funcall getkey-func)) (if (stringp tmp) (setq tmp (funcall case-func tmp))) tmp) (error "Invalid key function `%s'" getkey-func))) (t (error "Invalid sorting type `%c'" sorting-type)))) nil (cond ((= dcst ?a) 'string<) ((= dcst ?f) compare-func) ((member dcst '(?p ?t ?s ?d ?c)) '<))))) (run-hooks 'org-after-sorting-entries-or-items-hook) ;; Reset the clock marker if needed (when cmstr (save-excursion (goto-char start) (search-forward cmstr nil t) (move-marker org-clock-marker (point)))) (message "Sorting entries...done"))) (defun org-do-sort (table what &optional with-case sorting-type) "Sort TABLE of WHAT according to SORTING-TYPE. The user will be prompted for the SORTING-TYPE if the call to this function does not specify it. WHAT is only for the prompt, to indicate what is being sorted. The sorting key will be extracted from the car of the elements of the table. If WITH-CASE is non-nil, the sorting will be case-sensitive." (unless sorting-type (message "Sort %s: [a]lphabetic, [n]umeric, [t]ime. A/N/T means reversed:" what) (setq sorting-type (read-char-exclusive))) (let ((dcst (downcase sorting-type)) extractfun comparefun) ;; Define the appropriate functions (cond ((= dcst ?n) (setq extractfun 'string-to-number comparefun (if (= dcst sorting-type) '< '>))) ((= dcst ?a) (setq extractfun (if with-case (lambda(x) (org-sort-remove-invisible x)) (lambda(x) (downcase (org-sort-remove-invisible x)))) comparefun (if (= dcst sorting-type) 'string< (lambda (a b) (and (not (string< a b)) (not (string= a b))))))) ((= dcst ?t) (setq extractfun (lambda (x) (if (or (string-match org-ts-regexp x) (string-match org-ts-regexp-both x)) (org-float-time (org-time-string-to-time (match-string 0 x))) 0)) comparefun (if (= dcst sorting-type) '< '>))) (t (error "Invalid sorting type `%c'" sorting-type))) (sort (mapcar (lambda (x) (cons (funcall extractfun (car x)) (cdr x))) table) (lambda (a b) (funcall comparefun (car a) (car b))))))
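Until ?d itself is fixed, the user-defined sort key (?f) that org-sort-entries already supports (visible in the excerpt above) can emulate a deadline sort that pushes undated entries to the bottom. A minimal, untested sketch; the function name is invented, and it relies only on functions that already appear in this post (org-entry-get, org-time-string-to-seconds):

;; Sketch: sort children by deadline via the ?f sorting type, returning a
;; huge key for entries with no deadline so that undated entries sink
;; below every dated one.
(defun my/org-deadline-or-max ()
  "Return the deadline of the entry at point in epoch seconds.
Entries without a DEADLINE get `most-positive-fixnum', sorting last."
  (let ((deadline (org-entry-get nil "DEADLINE")))
    (if deadline
        (org-time-string-to-seconds deadline)
      most-positive-fixnum)))

;; Usage: with point on the parent heading, sort its children:
;;   (org-sort-entries nil ?f #'my/org-deadline-or-max #'<)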

    Read the article

  • Problem using a public key when connecting to a SSH server running on Cygwin

    - by binary255
    We have installed Cygwin on a Windows Server 2008 Standard server and it is working pretty well. Unfortunately we still have a big problem. We want to connect using a public key through SSH, which doesn't work. It always falls back to using password login. We have appended our public key to ~/.ssh/authorized_keys on the server and we have our private and public key in ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub respectively on the client. When debugging the SSH login session we see that the key is offered but the server apparently rejects it for some unknown reason. The SSH output when connecting from an Ubuntu 9.10 desktop with debug information enabled: $ ssh -v 192.168.10.11 OpenSSH_5.1p1 Debian-6ubuntu2, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /home/myuseraccount/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for debug1: Connecting to 192.168.10.11 [192.168.10.11] port 22. debug1: Connection established. debug1: identity file /home/myuseraccount/.ssh/identity type -1 debug1: identity file /home/myuseraccount/.ssh/id_rsa type -1 debug1: identity file /home/myuseraccount/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '192.168.10.11' is known and matches the RSA host key. 
debug1: Found key in /home/myuseraccount/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: /home/myuseraccount/.ssh/id_dsa debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: /home/myuseraccount/.ssh/identity debug1: Trying private key: /home/myuseraccount/.ssh/id_rsa debug1: Next authentication method: keyboard-interactive debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: password [email protected]'s password: The version of Cygwin: $ uname -a CYGWIN_NT-6.0 servername 1.7.1(0.218/5/3) 2009-12-07 11:48 i686 Cygwin The installed packages: $ cygcheck -c Cygwin Package Information Package Version Status _update-info-dir 00871-1 OK alternatives 1.3.30c-10 OK arj 3.10.22-1 OK aspell 0.60.5-1 OK aspell-en 6.0.0-1 OK aspell-sv 0.50.2-2 OK autossh 1.4b-1 OK base-cygwin 2.1-1 OK base-files 3.9-3 OK base-passwd 3.1-1 OK bash 3.2.49-23 OK bash-completion 1.1-2 OK bc 1.06-2 OK bzip2 1.0.5-10 OK cabextract 1.1-1 OK compface 1.5.2-1 OK coreutils 7.0-2 OK cron 4.1-59 OK crypt 1.1-1 OK csih 0.9.1-1 OK curl 7.19.6-1 OK cvs 1.12.13-10 OK cvsutils 0.2.5-1 OK cygrunsrv 1.34-1 OK cygutils 1.4.2-1 OK cygwin 1.7.1-1 OK cygwin-doc 1.5-1 OK cygwin-x-doc 1.1.0-1 OK dash 0.5.5.1-2 OK diffutils 2.8.7-2 OK doxygen 1.6.1-2 OK e2fsprogs 1.35-3 OK editrights 1.01-2 OK emacs 23.1-10 OK emacs-X11 23.1-10 OK file 5.04-1 OK findutils 4.5.5-1 OK flip 1.19-1 OK font-adobe-dpi75 1.0.1-1 OK font-alias 1.0.2-1 OK font-encodings 1.0.3-1 OK font-misc-misc 1.1.0-1 OK fontconfig 2.8.0-1 OK gamin 0.1.10-10 OK gawk 3.1.7-1 OK gettext 0.17-11 OK gnome-icon-theme 2.28.0-1 OK grep 2.5.4-2 OK groff 1.19.2-2 OK gvim 7.2.264-1 OK gzip 1.3.12-2 OK hicolor-icon-theme 0.11-1 OK inetutils 1.5-6 OK ipc-utils 1.0-1 OK keychain 2.6.8-1 OK less 429-1 OK libaspell15 0.60.5-1 OK libatk1.0_0 1.28.0-1 OK libaudio2 1.9.2-1 OK libbz2_1 1.0.5-10 OK libcairo2 1.8.8-1 OK libcurl4 7.19.6-1 OK libdb4.2 4.2.52.5-2 OK libdb4.5 4.5.20.2-2 OK libexpat1 2.0.1-1 OK libfam0 0.1.10-10 OK libfontconfig1 2.8.0-1 OK libfontenc1 1.0.5-1 OK libfreetype6 2.3.12-1 OK libgcc1 4.3.4-3 OK libgdbm4 1.8.3-20 OK libgdk_pixbuf2.0_0 2.18.6-1 OK libgif4 4.1.6-10 OK libGL1 7.6.1-1 OK libglib2.0_0 2.22.4-2 OK libglitz1 0.5.6-10 OK libgmp3 4.3.1-3 OK libgtk2.0_0 2.18.6-1 OK libICE6 1.0.6-1 OK libiconv2 1.13.1-1 OK libidn11 1.16-1 OK libintl3 0.14.5-1 OK libintl8 0.17-11 OK libjasper1 1.900.1-1 OK libjbig2 2.0-11 OK libjpeg62 6b-21 OK libjpeg7 7-10 OK liblzma1 4.999.9beta-10 OK libncurses10 5.7-18 OK libncurses8 5.5-10 OK libncurses9 5.7-16 OK libopenldap2_3_0 2.3.43-1 OK libpango1.0_0 1.26.2-1 OK libpcre0 8.00-1 OK libpixman1_0 0.16.6-1 OK libpng12 1.2.35-10 OK libpopt0 1.6.4-4 OK libpq5 8.2.11-1 OK libreadline6 5.2.14-12 OK libreadline7 6.0.3-2 OK libsasl2 2.1.19-3 OK libSM6 1.1.1-1 OK libssh2_1 1.2.2-1 OK libssp0 4.3.4-3 OK libstdc++6 4.3.4-3 OK libtiff5 3.9.2-1 OK libwrap0 7.6-20 OK libX11_6 1.3.3-1 OK libXau6 1.0.5-1 OK libXaw3d7 1.5D-8 OK libXaw7 1.0.7-1 OK libxcb-render-util0 0.3.6-1 OK libxcb-render0 1.5-1 OK libxcb1 1.5-1 OK libXcomposite1 0.4.1-1 OK libXcursor1 1.1.10-1 OK libXdamage1 1.1.2-1 OK libXdmcp6 
1.0.3-1 OK libXext6 1.1.1-1 OK libXfixes3 4.0.4-1 OK libXft2 2.1.14-1 OK libXi6 1.3-1 OK libXinerama1 1.1-1 OK libxkbfile1 1.0.6-1 OK libxml2 2.7.6-1 OK libXmu6 1.0.5-1 OK libXmuu1 1.0.5-1 OK libXpm4 3.5.8-1 OK libXrandr2 1.3.0-10 OK libXrender1 0.9.5-1 OK libXt6 1.0.7-1 OK links 1.00pre20-1 OK login 1.10-10 OK luit 1.0.5-1 OK lynx 2.8.5-4 OK man 1.6e-1 OK minires 1.02-1 OK mkfontdir 1.0.5-1 OK mkfontscale 1.0.7-1 OK openssh 5.4p1-1 OK openssl 0.9.8m-1 OK patch 2.5.8-9 OK patchutils 0.3.1-1 OK perl 5.10.1-3 OK rebase 3.0.1-1 OK run 1.1.12-11 OK screen 4.0.3-5 OK sed 4.1.5-2 OK shared-mime-info 0.70-1 OK tar 1.22.90-1 OK terminfo 5.7_20091114-13 OK terminfo0 5.5_20061104-11 OK texinfo 4.13-3 OK tidy 041206-1 OK time 1.7-2 OK tzcode 2009k-1 OK unzip 6.0-10 OK util-linux 2.14.1-1 OK vim 7.2.264-2 OK wget 1.11.4-4 OK which 2.20-2 OK wput 0.6.1-2 OK xauth 1.0.4-1 OK xclipboard 1.1.0-1 OK xcursor-themes 1.0.2-1 OK xemacs 21.4.22-1 OK xemacs-emacs-common 21.4.22-1 OK xemacs-sumo 2007-04-27-1 OK xemacs-tags 21.4.22-1 OK xeyes 1.1.0-1 OK xinit 1.2.1-1 OK xinput 1.5.0-1 OK xkbcomp 1.1.1-1 OK xkeyboard-config 1.8-1 OK xkill 1.0.2-1 OK xmodmap 1.0.4-1 OK xorg-docs 1.5-1 OK xorg-server 1.7.6-2 OK xrdb 1.0.6-1 OK xset 1.1.0-1 OK xterm 255-1 OK xz 4.999.9beta-10 OK zip 3.0-11 OK zlib 1.2.3-10 OK zlib-devel 1.2.3-10 OK zlib0 1.2.3-10 OK The ssh deamon configuration file: $ cat /etc/sshd_config # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/bin:/usr/sbin:/sbin:/usr/bin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # Disable legacy (protocol version 1) support in the server for new # installations. In future the default will change to require explicit # activation of protocol 1 Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh_host_rsa_key #HostKey /etc/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH #LogLevel INFO # Authentication: #LoginGraceTime 2m #PermitRootLogin yes StrictModes no #MaxAuthTries 6 #MaxSessions 10 RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. 
If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. #UsePAM no AllowAgentForwarding yes AllowTcpForwarding yes GatewayPorts yes X11Forwarding yes X11DisplayOffset 10 X11UseLocalhost no #PrintMotd yes #PrintLastLog yes TCPKeepAlive yes #UseLogin no UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner none # override default of no subsystems Subsystem sftp /usr/sbin/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs #X11Forwarding yes #AllowTcpForwarding yes #ForceCommand cvs server I hope this information is enough to solve the problem. In case any more is needed please comment and I'll add it. Thank you for reading!
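A step that usually pinpoints why sshd refuses a key, sketched below under the assumption of a default Cygwin openssh layout (the port number is arbitrary): run a second sshd in the foreground with debugging while the service keeps running, then connect to it from the client. The -d output names the exact reason publickey authentication is skipped; on Cygwin this is very often ownership or permissions on the server-side ~/.ssh files, or a user mismatch in /etc/passwd, and it is worth checking even though the posted config sets StrictModes no.

# On the Cygwin server: start a one-off sshd in debug mode on a spare port.
/usr/sbin/sshd -d -p 2222

# On the Ubuntu client: connect to that port and compare both outputs.
ssh -v -p 2222 [email protected]

# Common server-side fixes if sshd complains about ownership or modes:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys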

    Read the article

  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on Github - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup It seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm:

ec2-automate-backup.sh -v "vol-myvolumeid" -k 3

However, it does not execute at all as part of my crontab (I didn't receive any emails):

#some command that got commented out
*/5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;
* * * * * date >> /root/logs/crontab.log;
*/5 * * * * date >> /root/logs/crontab2.log

Please note that the 2nd and 3rd execute just fine, as I can see the date and time in the log files. What could I have missed here? The full ec2-automate-backup.sh is as follows: #!/bin/bash - # Author: Colin Johnson / [email protected] # Date: 2012-09-24 # Version 0.1 # License Type: GNU GENERAL PUBLIC LICENSE, Version 3 # #confirms that executables required for succesful script execution are available prerequisite_check() { for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date do #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions. hash $prerequisite &> /dev/null if [[ $? == 1 ]] #has exits with exit status of 70, executable was not found then echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70 fi done } #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input get_EBS_List() { case $selection_method in volumeid) if [[ -z $volumeid ]] then echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\",\"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64 fi ebs_selection_string="$volumeid" ;; tag) if [[ -z $tag ]] then echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64 fi ebs_selection_string="--filter tag:$tag" ;; *) echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64 ;; esac #creates a list of all ebs volumes that match the selection string from above ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1` #takes the output of the previous command ebs_backup_list_result=`echo $?` if [[ $ebs_backup_list_result -gt 0 ]] then echo -e "An error occured when running ec2-describe-volumes. 
The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70 fi ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2` #code to right will output list of EBS volumes to be backed up: echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list" } create_EBS_Snapshot_Tags() { #snapshot tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only onece snapshot_tags="" #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags if $name_tag_create then ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current" fi #if $purge_after_days is true, then append $purge_after_date to the variable $snapshot_tags if [[ -n $purge_after_days ]] then snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true" fi #if $snapshot_tags is not zero length then set the tag on the snapshot using ec2-create-tags if [[ -n $snapshot_tags ]] then echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:" ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected] fi } date_command_get() { #finds full path to date binary date_binary_full_path=`which date` #command below is used to determine if date binary is gnu, macosx or other date_binary_file_result=`file -b $date_binary_full_path` case $date_binary_file_result in "Mach-O 64-bit executable x86_64") date_binary="macosx" ;; "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;; *) date_binary="unknown" ;; esac #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future case $date_binary in gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;; unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; esac } purge_EBS_Snapshots() { #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter` #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3` for snapshot_id_evaluated in $snapshot_purge_allowed do #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d) purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5` #if purge_after_date is not set then we have a problem. Need to alter user. if [[ -z $purge_after_date ]] #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date. then echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 
1>&2 | mailx -s "Error happened 5" [email protected] else #convert both the date_current and purge_after_date into epoch time to allow for comparison date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"` purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"` #perform compparison - if $purge_after_date_epoch is a lower number than $date_current_epoch than the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed if [[ $purge_after_date_epoch < $date_current_epoch ]] then echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted." ec2-delete-snapshot --region $region $snapshot_id_evaluated echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected] fi fi done } #calls prerequisitecheck function to ensure that all executables required for script execution are available prerequisite_check app_name=`basename $0` #sets defaults selection_method="volumeid" region="ap-southeast-1" #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations date_binary="" #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot name_tag_create=false #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date purge_snapshots=false #handles options processing while getopts :s:r:v:t:k:pn opt do case $opt in s) selection_method="$OPTARG";; r) region="$OPTARG";; v) volumeid="$OPTARG";; t) tag="$OPTARG";; k) purge_after_days="$OPTARG";; n) name_tag_create=true;; p) purge_snapshots=true;; *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64;; esac done #sets date variable date_current=`date -u +%Y-%m-%d` #sets the PurgeAfter tag to the number of days that a snapshot should be retained if [[ -n $purge_after_days ]] then #if the date_binary is not set, call the date_command_get function if [[ -z $date_binary ]] then date_command_get fi purge_after_date=`$date_command` echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date." fi #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input get_EBS_List #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected" for ebs_selected in $ebs_backup_list do ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current" ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1` if [[ $? != 0 ]] then echo -e "An error occured when running ec2-create-snapshot. 
The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70 else ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected] fi create_EBS_Snapshot_Tags done #if purge_snapshots is true, then run purge_EBS_Snapshots function if $purge_snapshots then echo "Snapshot Purging is Starting Now." purge_EBS_Snapshots fi cron log Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log) Mails to root From [email protected] Tue Oct 23 06:00:01 2012 Return-Path: <[email protected]> Date: Tue, 23 Oct 2012 06:00:01 GMT From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 Content-Type: text/plain; charset=UTF-8 Auto-Submitted: auto-generated X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <HOME=/root> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=root> X-Cron-Env: <USER=root> Status: R /bin/sh: root: command not found
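The quoted mail already contains a likely smoking gun: cron ran "root ec2-automate-backup.sh ..." and /bin/sh reported "root: command not found", which happens when a system-crontab-style line (with a user-name field) ends up in a user crontab. The X-Cron-Env headers also show PATH=/usr/bin:/bin, so an unqualified script name outside those directories would not resolve either. A sketch of a user-crontab entry avoiding both traps (all paths and environment values below are assumptions to adapt):

SHELL=/bin/bash
# cron's default PATH (/usr/bin:/bin) rarely includes custom script locations
PATH=/usr/bin:/bin:/usr/local/bin
# The EC2 API tools need their environment; these values are placeholders.
EC2_HOME=/opt/aws/apitools/ec2
JAVA_HOME=/usr/lib/jvm/jre

# No user-name field in a *user* crontab: just schedule + command.
*/5 * * * * /usr/local/bin/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 >> /root/logs/ec2-backup.log 2>&1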

    Read the article

  • TreeViewItem - Use ControlTemplate and HierarchicalDataTemplate together

    - by CrownJ
    I'm using HierarchicalDataTemplate in my TreeView, and I wanted to also overwrite the default template for the TreeViewItem so that when an item is selected, it only highlights the text, not including the icon next to it. <TreeView.ItemTemplate> <HierarchicalDataTemplate ItemsSource="{Binding Children}"> <TreeViewItem Style="{StaticResource TreeViewItemStyle}" Header="{Binding DisplayText}" /> </HierarchicalDataTemplate> </TreeView.ItemTemplate> <TreeView.Resources> <Style x:Key="TreeViewItemFocusVisual"> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate> <Rectangle/> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style x:Key="ExpandCollapseToggleStyle" TargetType="{x:Type ToggleButton}"> <Setter Property="Focusable" Value="False"/> <Setter Property="Width" Value="19"/> <Setter Property="Height" Value="13"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ToggleButton}"> <Border Width="19" Height="13" Background="Transparent"> <Border Width="9" Height="9" SnapsToDevicePixels="true" BorderBrush="#FF7898B5" BorderThickness="1" CornerRadius="1"> <Border.Background> <LinearGradientBrush EndPoint="1,1" StartPoint="0,0"> <GradientStop Color="White" Offset=".2"/> <GradientStop Color="#FFC0B7A6" Offset="1"/> </LinearGradientBrush> </Border.Background> <Path x:Name="ExpandPath" Fill="Black" Margin="1,1,1,1" Data="M 0 2 L 0 3 L 2 3 L 2 5 L 3 5 L 3 3 L 5 3 L 5 2 L 3 2 L 3 0 L 2 0 L 2 2 Z"/> </Border> </Border> <ControlTemplate.Triggers> <Trigger Property="IsChecked" Value="True"> <Setter Property="Data" TargetName="ExpandPath" Value="M 0 2 L 0 3 L 5 3 L 5 2 Z"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style x:Key="{x:Type TreeViewItem}" TargetType="{x:Type TreeViewItem}"> <Setter Property="Background" Value="Transparent"/> <Setter Property="HorizontalContentAlignment" Value="{Binding HorizontalContentAlignment, RelativeSource={RelativeSource AncestorType={x:Type ItemsControl}}}"/> <Setter Property="VerticalContentAlignment" Value="{Binding VerticalContentAlignment, RelativeSource={RelativeSource AncestorType={x:Type ItemsControl}}}"/> <Setter Property="Padding" Value="1,0,0,0"/> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/> <Setter Property="FocusVisualStyle" Value="{StaticResource TreeViewItemFocusVisual}"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type TreeViewItem}"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition/> </Grid.RowDefinitions> <ToggleButton x:Name="Expander" Style="{StaticResource ExpandCollapseToggleStyle}" ClickMode="Press" IsChecked="{Binding IsExpanded, RelativeSource={RelativeSource TemplatedParent}}"/> <StackPanel Orientation="Horizontal" Grid.Column="1" > <Image Width="16" Height="16" Margin="3,0" Source="{Binding Path=ImageSource}" /> <Border x:Name="Bd" SnapsToDevicePixels="true" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Margin="0" Padding="0"> <ContentPresenter x:Name="PART_Header" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" ContentSource="Header"/> </Border> </StackPanel> <ItemsPresenter x:Name="ItemsHost" Grid.Column="1" 
Grid.ColumnSpan="2" Grid.Row="1"/> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsExpanded" Value="false"> <Setter Property="Visibility" TargetName="ItemsHost" Value="Collapsed"/> </Trigger> <Trigger Property="HasItems" Value="false"> <Setter Property="Visibility" TargetName="Expander" Value="Hidden"/> </Trigger> <Trigger Property="IsSelected" Value="true"> <Setter Property="Background" TargetName="Bd" Value="{DynamicResource {x:Static SystemColors.HighlightBrushKey}}"/> <Setter Property="Control.Foreground" TargetName="Bd" Value="{DynamicResource {x:Static SystemColors.HighlightTextBrushKey}}"/> </Trigger> <MultiTrigger> <MultiTrigger.Conditions> <Condition Property="IsSelected" Value="true"/> <Condition Property="IsSelectionActive" Value="false"/> </MultiTrigger.Conditions> <Setter Property="Background" TargetName="Bd" Value="{DynamicResource {x:Static SystemColors.ControlBrushKey}}"/> <Setter Property="Control.Foreground" TargetName="Bd" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/> </MultiTrigger> <Trigger Property="IsEnabled" Value="false"> <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> <Style.Triggers> <Trigger Property="VirtualizingStackPanel.IsVirtualizing" Value="true"> <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <VirtualizingStackPanel/> </ItemsPanelTemplate> </Setter.Value> </Setter> </Trigger> </Style.Triggers> </Style> </TreeView.Resources> I couldn't find a way to use both the HierarchicalDataTemplate and ControlTemplate together so I can specify the ItemsSource and also change parts of the control's behavior. When I do the above code, it won't select the treeviewitem at all.

    Read the article

  • With a jquery modal dialog how do I stop the form values from persisting?

    - by stormist
    (Citing source at: http://jqueryui.com/demos/dialog/#modal-form) As an example, this works great but each time the form is subsequently opened the user entered values remain. How can I stop this behavior? (the form will be used multiple times on the same page. <style type="text/css"> body { font-size: 62.5%; } label, input { display:block; } input.text { margin-bottom:12px; width:95%; padding: .4em; } fieldset { padding:0; border:0; margin-top:25px; } h1 { font-size: 1.2em; margin: .6em 0; } div#users-contain { width: 350px; margin: 20px 0; } div#users-contain table { margin: 1em 0; border-collapse: collapse; width: 100%; } div#users-contain table td, div#users-contain table th { border: 1px solid #eee; padding: .6em 10px; text-align: left; } .ui-dialog .ui-state-error { padding: .3em; } .validateTips { border: 1px solid transparent; padding: 0.3em; } </style> <script type="text/javascript"> $(function() { // a workaround for a flaw in the demo system (http://dev.jqueryui.com/ticket/4375), ignore! $("#dialog").dialog("destroy"); var name = $("#name"), email = $("#email"), password = $("#password"), allFields = $([]).add(name).add(email).add(password), tips = $(".validateTips"); function updateTips(t) { tips .text(t) .addClass('ui-state-highlight'); setTimeout(function() { tips.removeClass('ui-state-highlight', 1500); }, 500); } function checkLength(o,n,min,max) { if ( o.val().length > max || o.val().length < min ) { o.addClass('ui-state-error'); updateTips("Length of " + n + " must be between "+min+" and "+max+"."); return false; } else { return true; } } function checkRegexp(o,regexp,n) { if ( !( regexp.test( o.val() ) ) ) { o.addClass('ui-state-error'); updateTips(n); return false; } else { return true; } } $("#dialog-form").dialog({ autoOpen: false, height: 300, width: 350, modal: true, buttons: { 'Create an account': function() { var bValid = true; allFields.removeClass('ui-state-error'); bValid = bValid && checkLength(name,"username",3,16); bValid = bValid && checkLength(email,"email",6,80); bValid = bValid && checkLength(password,"password",5,16); bValid = bValid && checkRegexp(name,/^[a-z]([0-9a-z_])+$/i,"Username may consist of a-z, 0-9, underscores, begin with a letter."); // From jquery.validate.js (by joern), contributed by Scott Gonzalez: http://projects.scottsplayground.com/email_address_validation/ bValid = bValid && checkRegexp(email,/^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$/i,"eg. 
[email protected]"); bValid = bValid && checkRegexp(password,/^([0-9a-zA-Z])+$/,"Password field only allow : a-z 0-9"); if (bValid) { $('#users tbody').append('<tr>' + '<td>' + name.val() + '</td>' + '<td>' + email.val() + '</td>' + '<td>' + password.val() + '</td>' + '</tr>'); $(this).dialog('close'); } }, Cancel: function() { $(this).dialog('close'); } }, close: function() { allFields.val('').removeClass('ui-state-error'); } }); $('#create-user') .button() .click(function() { $('#dialog-form').dialog('open'); }); }); </script> <div class="demo"> <div id="dialog-form" title="Create new user"> <p class="validateTips">All form fields are required.</p> <form> <fieldset> <label for="name">Name</label> <input type="text" name="name" id="name" class="text ui-widget-content ui-corner-all" /> <label for="email">Email</label> <input type="text" name="email" id="email" value="" class="text ui-widget-content ui-corner-all" /> <label for="password">Password</label> <input type="password" name="password" id="password" value="" class="text ui-widget-content ui-corner-all" /> </fieldset> </form> </div> <div id="users-contain" class="ui-widget"> <h1>Existing Users:</h1> <table id="users" class="ui-widget ui-widget-content"> <thead> <tr class="ui-widget-header "> <th>Name</th> <th>Email</th> <th>Password</th> </tr> </thead> <tbody> <tr> <td>John Doe</td> <td>[email protected]</td> <td>johndoe1</td> </tr> </tbody> </table> </div> <button id="create-user">Create new user</button> </div><!-- End demo --> <div class="demo-description"> <p>Use a modal dialog to require that the user enter data during a multi-step process. Embed form markup in the content area, set the <code>modal</code> option to true, and specify primary and secondary user actions with the <code>buttons</code> option.</p> </div><!-- End demo-description -->

    Read the article

  • Does Red5 read tomcat-users.xml?

    - by baba
    Hi, I have been busy creating an app for Red5. Imagine what was my surprise when I tried to configure basic/digest authentication and I couldn't. What struck me as strange is that I have a running tomcat instance that works and authenticates correctly with the following xmls: web.xml (part of) <security-constraint> <web-resource-collection> <web-resource-name>A Protected Page</web-resource-name> <url-pattern>/stats.jsp</url-pattern> </web-resource-collection> <auth-constraint> <description/> <role-name>tomcat</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>DIGEST</auth-method> <realm-name>BLAAAAAAAAAAAAAAAAA</realm-name> </login-config> <security-role> <description/> <role-name>tomcat</role-name> </security-role> and a tomcat-users.xml in /conf that looks kinda like this: <?xml version="1.0" encoding="UTF-8"?> <tomcat-users> <role rolename="tomcat"/> <user username="ide" password="bogus" roles="tomcat"/> </tomcat-users> The annoying thing is that configuration authenticates correctly when on tomcat's servlet container, but on the red5's modified one, it just keeps asking for authentication. Am I becoming mad or it should work like a charm? Red5 is version 0_9_1 The stats.jsp is accessible in both servlet containers, the only difference is that when you input the correct password and username in tomcat, you are logged in, and in red5 you are not, it just keeps asking you for the password. Any pointers? Am I missing something? Here is a stack trace of the error I receive AT the moment I try the login: Caused by: java.io.IOException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:250) [na:1.6.0_22] at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:91) [na:1.6.0_22] ... 
27 common frames omitted [ERROR] [http-127.0.0.1-5080-1] org.apache.catalina.realm.JAASRealm - Cannot find message associated with key jaasRealm.unexpectedError java.lang.SecurityException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:93) [na:1.6.0_22] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [na:1.6.0_22] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) [na:1.6.0_22] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) [na:1.6.0_22] at java.lang.reflect.Constructor.newInstance(Constructor.java:513) [na:1.6.0_22] at java.lang.Class.newInstance0(Class.java:355) [na:1.6.0_22] at java.lang.Class.newInstance(Class.java:308) [na:1.6.0_22] at javax.security.auth.login.Configuration$3.run(Configuration.java:247) [na:1.6.0_22] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_22] at javax.security.auth.login.Configuration.getConfiguration(Configuration.java:242) [na:1.6.0_22] at javax.security.auth.login.LoginContext$1.run(LoginContext.java:237) [na:1.6.0_22] at java.security.AccessController.doPrivileged(Native Method) [na:1.6.0_22] at javax.security.auth.login.LoginContext.init(LoginContext.java:234) [na:1.6.0_22] at javax.security.auth.login.LoginContext.<init>(LoginContext.java:403) [na:1.6.0_22] at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:394) [catalina-6.0.24.jar:na] at org.apache.catalina.realm.JAASRealm.authenticate(JAASRealm.java:357) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.DigestAuthenticator.findPrincipal(DigestAuthenticator.java:283) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.DigestAuthenticator.authenticate(DigestAuthenticator.java:176) [catalina-6.0.24.jar:na] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:523) [catalina-6.0.24.jar:na] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) [catalina-6.0.24.jar:na] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) [catalina-6.0.24.jar:na] at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555) [catalina-6.0.24.jar:na] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) [catalina-6.0.24.jar:na] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) [catalina-6.0.24.jar:na] at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) [tomcat-coyote-6.0.24.jar:na] at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) [tomcat-coyote-6.0.24.jar:na] at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) [tomcat-coyote-6.0.24.jar:na] at java.lang.Thread.run(Thread.java:662) [na:1.6.0_22] Caused by: java.io.IOException: Unable to locate a login configuration at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:250) [na:1.6.0_22] at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:91) [na:1.6.0_22] ... 
27 common frames omitted In addition, here is the configuration of red5-web.properties webapp.contextPath=/project Even futher information: Seems to me like it is using the right realm: MemoryRealm [INFO] [main] org.red5.server.tomcat.TomcatLoader - Setting connector: org.apache.catalina.connector.Connector [INFO] [main] org.red5.server.tomcat.TomcatLoader - Address to bind: /127.0.0.1:5080 [INFO] [main] org.red5.server.tomcat.TomcatLoader - Setting realm: org.apache.catalina.realm.MemoryRealm [INFO] [main] org.red5.server.tomcat.TomcatLoader - Loading tomcat context [INFO] [main] org.red5.server.tomcat.TomcatLoader - Server root: C:/Program Files/Red5 [INFO] [main] org.red5.server.tomcat.TomcatLoader - Config root: C:/Program Files/Red5/conf [INFO] [main] org.red5.server.tomcat.TomcatLoader - Application root: C:/Program Files/Red5/webapps [INFO] [main] org.red5.server.tomcat.TomcatLoader - Starting Tomcat servlet engine [INFO] [main] org.apache.catalina.startup.Embedded - Starting tomcat server [INFO] [main] org.apache.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/6.0.26 However, immediately after bootstraping Tomcat, I am presented with the following error: Exception in thread "Launcher:/administration" org.springframework.beans.factory.BeanDefinitionStoreException: Could not resolve bean definition resource pattern [/WEB-INF/red5-*.xml]; nested exception is java.io.FileNotFoundException: ServletContext resource [/WEB-INF/] cannot be resolved to URL because it does not exist at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:190) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:458) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:388) at org.red5.server.tomcat.TomcatLoader$1.run(TomcatLoader.java:594) Caused by: java.io.FileNotFoundException: ServletContext resource [/WEB-INF/] cannot be resolved to URL because it does not exist at org.springframework.web.context.support.ServletContextResource.getURL(ServletContextResource.java:132) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.isJarResource(PathMatchingResourcePatternResolver.java:414) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.findPathMatchingResources(PathMatchingResourcePatternResolver.java:343) at org.springframework.core.io.support.PathMatchingResourcePatternResolver.getResources(PathMatchingResourcePatternResolver.java:282) at org.springframework.context.support.AbstractApplicationContext.getResources(AbstractApplicationContext.java:1156) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:177) ... 
7 more This error is kinda strange, because after this it seems that /WEB-INF/ is found by the rest of the program by the following output: [INFO] [Launcher:/SOSample] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/installer] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] [INFO] [Launcher:/LiveMedia] org.springframework.beans.factory.config.PropertyPlaceholderConfigurer - Loading properties file from ServletContext resource [/WEB-INF/red5-web.properties] What really annoys me is that, as you can see in the output, when I try to login, I get a JAASRealm-related exception, but in the debug output when Tomcat is loading, it is clear to me that it expects a MemoryRealm. I was wondering where and how in red5.xml should I specify bean properties such that I force red5 to use MemoryRealm that is under /conf/tomcat-users.xml, because it certainly doesn't do so now. It seems like the biggest question I have posted so far, but I tried to explain it as fully as possible as to avoid confusion.
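    A plausible reading of the log: the loader installs a MemoryRealm as the engine default, but the webapp's DIGEST authenticator is still resolving a JAASRealm, which then fails because no JAAS login configuration file is installed in the Red5 JVM. Below is a minimal sketch of pinning a MemoryRealm onto an embedded Catalina instance; Embedded.setRealm and MemoryRealm.setPathname are real Tomcat 6 API, but the RealmConfig class and the exact point where Red5's TomcatLoader would expose its Embedded instance are assumptions, not Red5 API.

        import org.apache.catalina.realm.MemoryRealm;
        import org.apache.catalina.startup.Embedded;

        public class RealmConfig {
            // Point a MemoryRealm at the tomcat-users.xml file and install it
            // as the default realm; this must run before the engine starts.
            public static void useMemoryRealm(Embedded embedded, String serverRoot) {
                MemoryRealm realm = new MemoryRealm();
                realm.setPathname(serverRoot + "/conf/tomcat-users.xml");
                embedded.setRealm(realm);
            }
        }

    The same realm could instead be declared as a bean in red5.xml and injected into the loader, but the property names would need checking against the Red5 0.9 source.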

    Read the article

  • WPF Button Image only showing in last control

    - by Ryan
    Hello All! I am fairly new to WPF and am probably missing something simple here. If I have 3 controls, only the last control will show the OriginalImage that I specify. Any help would be most appreciated. Thanks Ryan Main Window <Grid> <Grid.RowDefinitions> <RowDefinition Height="200*"/> <RowDefinition Height="60" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="85" /> <ColumnDefinition Width="85" /> <ColumnDefinition Width="85" /> <ColumnDefinition Width="85" /> <ColumnDefinition Width="300" /> </Grid.ColumnDefinitions> <Grid Grid.Row="1"> <but:ListButton OriginalImage="/CustomItemsPanel;component/ListBox/Images/add.png" DisableImage="/CustomItemsPanel;component/ListBox/Images/addunselect.png" /> </Grid > <Grid Grid.Row="1" Grid.Column="1" > <but:ListButton OriginalImage="/CustomItemsPanel;component/ListBox/Images/add.png" DisableImage="/CustomItemsPanel;component/ListBox/Images/addunselect.png" /> </Grid > <Grid Grid.Row="1" Grid.Column="2" > <but:ListButton OriginalImage="/CustomItemsPanel;component/ListBox/Images/add.png" DisableImage="/CustomItemsPanel;component/ListBox/Images/addunselect.png" /> </Grid> </Grid> Control XAML <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:CustomItemsPanel.ListButton"> <LinearGradientBrush x:Key="ButtonBackground" EndPoint="0.5,1" StartPoint="0.5,0"> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF0E3D70"/> <GradientStop Color="#FF001832" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> <LinearGradientBrush x:Key="ButtonBackgroundMouseOver" EndPoint="0.5,1" StartPoint="0.5,0"> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF1E62A1" /> <GradientStop Color="#FF0A3C6D" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> <LinearGradientBrush x:Key="ButtonBackgroundSelected" EndPoint="0.5,1" StartPoint="0.5,0"> <LinearGradientBrush.GradientStops> <GradientStop Color="Red" /> <GradientStop Color="#FF0A2A4C" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> <Style x:Key="Toggle" TargetType="{x:Type Button}"> <Setter Property="Content"> <Setter.Value> <Image> <Image.Style> <Style TargetType="{x:Type Image}"> <Setter Property="Source" Value="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type local:ListButton}}, Path=OriginalImage}"/> <Style.Triggers> <Trigger Property="IsEnabled" Value="False"> <Setter Property="Source" Value="{Binding Path=DisableImage, RelativeSource={RelativeSource TemplatedParent}}"/> </Trigger> </Style.Triggers> </Style> </Image.Style> </Image> </Setter.Value> </Setter> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type Button}"> <Grid Cursor="Hand"> <Border Name="back" Margin="0,1,0,0" Background="{StaticResource ButtonBackground}"> <ContentPresenter VerticalAlignment="Center" HorizontalAlignment="Center" x:Name="content" /> </Border> <Border BorderThickness="1" BorderBrush="#FF004F92"> <Border BorderThickness="0,0,1,0" BorderBrush="#FF101D29" /> </Border> </Grid> <ControlTemplate.Triggers> <Trigger Property="IsMouseOver" Value="True" > <Setter TargetName="back" Property="Background" Value="{StaticResource ButtonBackgroundMouseOver}"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> <Style TargetType="{x:Type local:ListButton}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate 
TargetType="{x:Type local:ListButton}"> <Button Style="{StaticResource Toggle}" /> </ControlTemplate> </Setter.Value> </Setter> </Style> </ResourceDictionary> Control Code Behind public class ListButton : Control { public static readonly DependencyProperty MouseOverImageProperty; public static readonly DependencyProperty OriginalImageProperty; public static readonly DependencyProperty DisableImageProperty; static ListButton() { DefaultStyleKeyProperty.OverrideMetadata(typeof(ListButton), new FrameworkPropertyMetadata(typeof(ListButton))); MouseOverImageProperty = DependencyProperty.Register("MouseOverImage", typeof(ImageSource), typeof(ListButton), new UIPropertyMetadata(null)); OriginalImageProperty = DependencyProperty.Register("OriginalImage", typeof(ImageSource), typeof(ListButton), new UIPropertyMetadata(null)); DisableImageProperty = DependencyProperty.Register("DisableImage", typeof(ImageSource), typeof(ListButton), new UIPropertyMetadata(null)); } public ImageSource MouseOverImage { get { return (ImageSource)GetValue(MouseOverImageProperty); } set { SetValue(MouseOverImageProperty, value); } } public ImageSource OriginalImage { get { return (ImageSource)GetValue(OriginalImageProperty); } set { SetValue(OriginalImageProperty, value); } } public ImageSource DisableImage { get { return (ImageSource)GetValue(DisableImageProperty); } set { SetValue(DisableImageProperty, value); } } }

    Read the article

  • What is the correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access an page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx fit that description. 10.3.1 300 Multiple Choices The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent- driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location. Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content- Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection. If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise. 10.3.2 301 Moved Permanently The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request. 10.3.3 302 Found The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. 
The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client. 10.3.4 303 See Other The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable. The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303. 10.3.5 304 Not Modified If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields. The response MUST include the following header fields: - Date, unless its omission is required by section 14.18.1 If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly. - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response. 10.3.6 305 Use Proxy The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers. Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences. 10.3.7 306 (Unused) The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved. 10.3.8 307 Temporary Redirect The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. 
The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s) , since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI. If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. I'm using 302 for now, until I find THE correct answer.
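    In servlet terms the 302 approach the question settles on is one line, since sendRedirect emits "302 Found" with a Location header. A minimal sketch follows, where LoginFilter, the session test, and the /login path are illustrative choices rather than anything prescribed by the spec:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class LoginFilter implements Filter {
            public void init(FilterConfig config) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                if (request.getSession(false) == null) { // no session: treat as not logged in
                    response.sendRedirect(request.getContextPath() + "/login");
                    return; // stop processing the protected resource
                }
                chain.doFilter(req, res);
            }
        }

    When the blocked request is a POST, an explicit 303 See Other (setStatus plus a Location header) is the stricter choice, since it tells the user agent to fetch the login page with GET.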

    Read the article

  • How to know if the client has terminated in sockets

    - by shadyabhi
    Suppose, I have a connected socket after writing this code.. if ((sd = accept(socket_d, (struct sockaddr *)&client_addr, &alen)) < 0) { perror("accept failed\n"); exit(1); } How can I know at the server side that client has exited. My whole program actually does the following.. Accepts a connection from client Starts a new thread that reads messages from that particular client and then broadcast this message to all the connected clients. If you want to see the whole code... In this whole code. I am also struggling with one more problem that whenever I kill a client with Ctrl+C, my server terminates abruptly.. It would be nice if anyone could suggest what the problem is.. #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <string.h> #include <signal.h> #include <errno.h> #include <pthread.h> /*CONSTANTS*/ #define DEFAULT_PORT 10000 #define LISTEN_QUEUE_LIMIT 6 #define TOTAL_CLIENTS 10 #define CHAR_BUFFER 256 /*GLOBAL VARIABLE*/ int current_client = 0; int connected_clients[TOTAL_CLIENTS]; extern int errno; void *client_handler(void * socket_d); int main(int argc, char *argv[]) { struct sockaddr_in server_addr;/* structure to hold server's address*/ int socket_d; /* listening socket descriptor */ int port; /* protocol port number */ int option_value; /* needed for setsockopt */ pthread_t tid[TOTAL_CLIENTS]; port = (argc > 1)?atoi(argv[1]):DEFAULT_PORT; /* Socket Server address structure */ memset((char *)&server_addr, 0, sizeof(server_addr)); server_addr.sin_family = AF_INET; /* set family to Internet */ server_addr.sin_addr.s_addr = INADDR_ANY; /* set the local IP address */ server_addr.sin_port = htons((u_short)port); /* Set port */ /* Create socket */ if ( (socket_d = socket(PF_INET, SOCK_STREAM, 0)) < 0) { fprintf(stderr, "socket creation failed\n"); exit(1); } /* Make listening socket's port reusable */ if (setsockopt(socket_d, SOL_SOCKET, SO_REUSEADDR, (char *)&option_value, sizeof(option_value)) < 0) { fprintf(stderr, "setsockopt failure\n"); exit(1); } /* Bind a local address to the socket */ if (bind(socket_d, (struct sockaddr *)&server_addr, sizeof(server_addr)) < 0) { fprintf(stderr, "bind failed\n"); exit(1); } /* Specify size of request queue */ if (listen(socket_d, LISTEN_QUEUE_LIMIT) < 0) { fprintf(stderr, "listen failed\n"); exit(1); } memset(connected_clients,0,sizeof(int)*TOTAL_CLIENTS); for (;;) { struct sockaddr_in client_addr; /* structure to hold client's address*/ int alen = sizeof(client_addr); /* length of address */ int sd; /* connected socket descriptor */ if ((sd = accept(socket_d, (struct sockaddr *)&client_addr, &alen)) < 0) { perror("accept failed\n"); exit(1); } else printf("\n I got a connection from (%s , %d)\n",inet_ntoa(client_addr.sin_addr),ntohs(client_addr.sin_port)); if (pthread_create(&tid[current_client],NULL,(void *)client_handler,(void *)sd) != 0) { perror("pthread_create error"); continue; } connected_clients[current_client]=sd; current_client++; /*Incrementing Client number*/ } return 0; } void *client_handler(void *connected_socket) { int sd; sd = (int)connected_socket; for ( ; ; ) { ssize_t n; char buffer[CHAR_BUFFER]; for ( ; ; ) { if (n = read(sd, buffer, sizeof(char)*CHAR_BUFFER) == -1) { perror("Error reading from client"); pthread_exit(1); } int i=0; for (i=0;i<current_client;i++) { if (write(connected_clients[i],buffer,sizeof(char)*CHAR_BUFFER) == -1) perror("Error sending messages to a client while 
multicasting"); } } } } My client side is this (Maye be irrelevant while answering my question) #include <stdio.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <string.h> #include <stdlib.h> void error(char *msg) { perror(msg); exit(0); } void *listen_for_message(void * fd) { int sockfd = (int)fd; int n; char buffer[256]; bzero(buffer,256); printf("YOUR MESSAGE: "); fflush(stdout); while (1) { n = read(sockfd,buffer,256); if (n < 0) error("ERROR reading from socket"); if (n == 0) pthread_exit(1); printf("\nMESSAGE BROADCAST: %sYOUR MESSAGE: ",buffer); fflush(stdout); } } int main(int argc, char *argv[]) { int sockfd, portno, n; struct sockaddr_in serv_addr; struct hostent *server; pthread_t read_message; char buffer[256]; if (argc < 3) { fprintf(stderr,"usage %s hostname port\n", argv[0]); exit(0); } portno = atoi(argv[2]); sockfd = socket(AF_INET, SOCK_STREAM, 0); if (sockfd < 0) error("ERROR opening socket"); server = gethostbyname(argv[1]); if (server == NULL) { fprintf(stderr,"ERROR, no such host\n"); exit(0); } bzero((char *) &serv_addr, sizeof(serv_addr)); serv_addr.sin_family = AF_INET; bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length); serv_addr.sin_port = htons(portno); if (connect(sockfd,&serv_addr,sizeof(serv_addr)) < 0) error("ERROR connecting"); bzero(buffer,256); if (pthread_create(&read_message,NULL,(void *)listen_for_message,(void *)sockfd) !=0 ) { perror("error creating thread"); } while (1) { fgets(buffer,255,stdin); n = write(sockfd,buffer,256); if (n < 0) error("ERROR writing to socket"); bzero(buffer,256); } return 0; }

    Read the article

  • parseInt and ViewFlipper layout problems

    - by user1234167
    I have a problem with parseInt it throws the error: unable to parse 'null' as integer. My view flipper is also not working. Hopefully this is an easy enough question. Here is my activity: import javax.xml.parsers.SAXParser; import javax.xml.parsers.SAXParserFactory; import org.xml.sax.InputSource; import org.xml.sax.XMLReader; import android.app.Activity; import android.graphics.Color; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.ViewFlipper; import xml.parser.dataset; public class XmlParserActivity extends Activity implements OnClickListener { private final String MY_DEBUG_TAG = "WeatherForcaster"; // private dataset myDataSet; private LinearLayout layout; private int temp= 0; /** Called when the activity is first created. */ //the ViewSwitcher private Button btn; private ViewFlipper flip; // private TextView tv; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); layout=(LinearLayout)findViewById(R.id.linearlayout1); btn=(Button)findViewById(R.id.btn); btn.setOnClickListener(this); flip=(ViewFlipper)findViewById(R.id.flip); //when a view is displayed flip.setInAnimation(this,android.R.anim.fade_in); //when a view disappears flip.setOutAnimation(this, android.R.anim.fade_out); // String postcode = null; // public String getPostcode { // return postcode; // } //URL newUrl = c; // myweather.setText(c.toString()); /* Create a new TextView to display the parsingresult later. */ TextView tv = new TextView(this); // run(0); //WeatherApplicationActivity postcode = new WeatherApplicationActivity(); try { /* Create a URL we want to load some xml-data from. */ URL url = new URL("http://new.myweather2.com/developer/forecast.ashx?uac=gcV3ynNdoV&output=xml&query=G41"); //String url = new String("http://new.myweather2.com/developer/forecast.ashx?uac=gcV3ynNdoV&output=xml&query="+WeatherApplicationActivity.postcode ); //URL url = new URL(url); //url.toString( ); //myString(url.toString() + WeatherApplicationActivity.getString(postcode)); // url + WeatherApplicationActivity.getString(postcode); /* Get a SAXParser from the SAXPArserFactory. */ SAXParserFactory spf = SAXParserFactory.newInstance(); SAXParser sp = spf.newSAXParser(); /* Get the XMLReader of the SAXParser we created. */ XMLReader xr = sp.getXMLReader(); /* Create a new ContentHandler and apply it to the XML-Reader*/ handler myHandler = new handler(); xr.setContentHandler(myHandler); /* Parse the xml-data from our URL. */ xr.parse(new InputSource(url.openStream())); /* Parsing has finished. */ /* Our ExampleHandler now provides the parsed data to us. */ dataset parsedDataSet = myHandler.getParsedData(); /* Set the result to be displayed in our GUI. */ tv.setText(parsedDataSet.toString()); } catch (Exception e) { /* Display any Error to the GUI. 
*/ tv.setText("Error: " + e.getMessage()); Log.e(MY_DEBUG_TAG, "WeatherQueryError", e); } temp = Integer.parseInt(xml.parser.dataset.getTemp()); if(temp <0){ //layout.setBackgroundColor(Color.BLUE); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.BLUE); } else if(temp > 0 && temp < 9) { //layout.setBackgroundColor(Color.GREEN); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.GREEN); } else { //layout.setBackgroundColor(Color.YELLOW); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.YELLOW); } /* Display the TextView. */ this.setContentView(tv); } @Override public void onClick(View arg0) { // TODO Auto-generated method stub onClick(View arg0) { // TODO Auto-generated method stub flip.showNext(); //specify flipping interval //flip.setFlipInterval(1000); //flip.startFlipping(); } } this is my dataset: package xml.parser; public class dataset { static String temp = null; // private int extractedInt = 0; public static String getTemp() { return temp; } public void setTemp(String temp) { this.temp = temp; } this is my handler: public void characters(char ch[], int start, int length) { if(this.in_temp){ String setTemp = new String(ch, start, length); // myParsedDataSet.setTempUnit(new String(ch, start, length)); // myParsedDataSet.setTemp; } the dataset and handler i only pasted the code that involves the temp as i no they r working when i take out the if statement. However even then my viewflipper wont work. This is my main xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:id="@+id/linearlayout1" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:text="Flip Example" /> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:id="@+id/tv" /> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:textSize="25dip" android:text="Flip" android:id="@+id/btn" android:onClick="ClickHandler" /> <ViewFlipper android:layout_width="fill_parent" android:layout_height="fill_parent" android:id="@+id/flip"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:text="Item1a" /> </LinearLayout> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:id="@+id/tv2" /> </ViewFlipper> </LinearLayout> this is my logcat: 04-01 18:02:24.744: E/AndroidRuntime(7331): FATAL EXCEPTION: main 04-01 18:02:24.744: E/AndroidRuntime(7331): java.lang.RuntimeException: Unable to start activity ComponentInfo{xml.parser/xml.parser.XmlParserActivity}: java.lang.NumberFormatException: unable to parse 'null' as integer 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1830) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1851) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.access$1500(ActivityThread.java:132) 
04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1038) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.os.Handler.dispatchMessage(Handler.java:99) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.os.Looper.loop(Looper.java:150) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.main(ActivityThread.java:4293) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.reflect.Method.invokeNative(Native Method) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.reflect.Method.invoke(Method.java:507) 04-01 18:02:24.744: E/AndroidRuntime(7331): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:849) 04-01 18:02:24.744: E/AndroidRuntime(7331): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:607) 04-01 18:02:24.744: E/AndroidRuntime(7331): at dalvik.system.NativeStart.main(Native Method) 04-01 18:02:24.744: E/AndroidRuntime(7331): Caused by: java.lang.NumberFormatException: unable to parse 'null' as integer 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.Integer.parseInt(Integer.java:356) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.Integer.parseInt(Integer.java:332) 04-01 18:02:24.744: E/AndroidRuntime(7331): at xml.parser.XmlParserActivity.onCreate(XmlParserActivity.java:118) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1072) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1794) I hope I have given enough information about my problems. I will be extremely grateful if anyone can help me out.
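    Two observations ground the crash in the posted code. First, in the handler's characters() method the parsed text goes into a local variable (String setTemp = new String(ch, start, length)) and the myParsedDataSet.setTemp(...) call is commented out, so dataset.getTemp() still returns its initial null when onCreate() reaches Integer.parseInt, which is exactly what the logcat reports. Second, the final this.setContentView(tv) replaces the inflated main.xml, ViewFlipper included, with the bare TextView, so there is nothing left to flip. Independent of restoring the setter call, parsing defensively avoids the crash; a minimal sketch, with names borrowed from the question and the fallback value an assumption:

        import android.util.Log;

        public final class TempParser {
            private static final String TAG = "WeatherForcaster";

            // Returns fallback instead of throwing when the SAX handler has
            // not populated the value (raw == null) or it is not numeric.
            public static int parseTemp(String raw, int fallback) {
                if (raw == null || raw.trim().length() == 0) {
                    return fallback;
                }
                try {
                    return Integer.parseInt(raw.trim());
                } catch (NumberFormatException e) {
                    Log.e(TAG, "temperature not numeric: " + raw, e);
                    return fallback;
                }
            }
        }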

    Read the article

  • What arguments do I send to a function called by a button in Python?

    - by Jared
    I have a UI, in that UI is 4 text fields and 1 int field, then I have a function that calls to another function based on what's inside of the text fields, this function has (self, *args). My function that is being called to takes five arguments and I don't know what to put in it to make it actually work with my UI because python button's send an argument of their own. I have tried self and *args, but it doesn't work. Here is my code, didn't include most of the UI code since it is self explanatory: def crBC(self, IKJoint, FKJoint, bindJoint, xQuan, switch): ''' You should have a controller with an attribute 'ikFkBlend' - The name can be changed after the script executes. Controller should contain an enum - FK/DYN(0), IK(1). Specify the IK joint, then either the dynamic or FK joint, then the bind joint. Then a quantity of joints to pass through and connect. Tested currently on 600 joints (200 x 3), executed in less than a second. Returns nothing. Please open your script editor for details. ''' import itertools # gets children joints of the selected joint chHipIK = cmds.listRelatives(IKJoint, ad = True, type = 'joint') chHipFK = cmds.listRelatives(FKJoint, ad = True, type = 'joint') chHipBind = cmds.listRelatives(bindJoint, ad = True, type = 'joint') # list is built backwards, this reverses the list chHipIK.reverse() chHipFK.reverse() chHipBind.reverse() # appends the initial joint to the list chHipIK.append(IKJoint) chHipFK.append(FKJoint) chHipBind.append(bindJoint) # puts the last joint at the start of the list because the initial joint # was added to the end chHipIK.insert(0, chHipIK.pop()) chHipFK.insert(0, chHipFK.pop()) chHipBind.insert(0, chHipBind.pop()) # pops off the remaining joints in the list the user does not wish to be blended chHipBind[xQuan:] = [] chHipIK[xQuan:] = [] chHipFK[xQuan:] = [] # goes through the bind joints, makes a blend colors for each one, connects # the switch to the blender for a, b, c in itertools.izip(chHipBind, chHipIK, chHipFK): rotBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'rotate_BC') tranBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'tran_BC') scaleBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'scale_BC') cmds.connectAttr(switch + '.ikFkSwitch', rotBC + '.blender') cmds.connectAttr(switch + '.ikFkSwitch', tranBC + '.blender') cmds.connectAttr(switch + '.ikFkSwitch', scaleBC + '.blender') # goes through the ik joints, connects to the blend colors cmds.connectAttr(b + '.rotate', rotBC + '.color1', force = True) cmds.connectAttr(b + '.translate', tranBC + '.color1', force = True) cmds.connectAttr(b + '.scale', scaleBC + '.color1', force = True) # connects FK joints to the blend colors cmds.connectAttr(c + '.rotate', rotBC + '.color2') cmds.connectAttr(c + '.translate', tranBC + '.color2') cmds.connectAttr(c + '.scale', scaleBC + '.color2') # connects blend colors to bind joints cmds.connectAttr(rotBC + '.output', a + '.rotate') cmds.connectAttr(tranBC + '.output', a + '.translate') cmds.connectAttr(scaleBC + '.output', a + '.scale') ------------------- def execCrBC(self, *args): g.crBC(cmds.textField(self.ikJBC, q = True, tx = True), cmds.textField(self.fkJBC, q = True, tx = True), cmds.textField(self.bindJBC, q = True, tx = True), cmds.intField(self.bQBC, q = True, v = True), cmds.textField(self.sCBC, q = True, tx = True)) ------------------- self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) 
cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) Here is the code causing the problem as requested: import maya.cmds as cmds import jtRigUI.createDummyRig as dum import jtRigUI.createSkeleton as sk import jtRigUI.generalUtilities as gu import jtRigUI.createLegRig as lr import jtRigUI.createArmRig as ar class RUI(dum.Dict, dum.Dummy, sk.Skel, sk.FiSkel, lr.LeanLocs, lr.LegRig, ar.ArmRig, gu.Gutils): def __init__(self, charNameUI, gScaleUI, fingButtonGrp, thumbCheckBox, spineButtonGrp, neckButtonGrp, ikJBC, fkJBC, bindJBC, bQBC, sCBC): rigUI = 'rigUI' if cmds.window(rigUI, exists = True): cmds.deleteUI(rigUI) rigUI = cmds.window(rigUI, t = 'JT Rigging UI', sizeable = False, tb = True, mnb = False, mxb = False, menuBar = True, tlb = True, nm = 5) form = cmds.formLayout() tabs = cmds.tabLayout(innerMarginWidth = 1, innerMarginHeight = 1) rigUIMenu = cmds.menu('Help', hm = True) aboutMenu = cmds.menuItem('about') cmds.popupMenu('about', button = 1) deleteUIMenu = cmds.menu('Delete', hm = True) cmds.menuItem('dummySkeleton') cmds.formLayout(form, edit = True, attachForm = ((tabs, 'top', 0), (tabs, 'left', 0), (tabs, 'bottom', 0), (tabs, 'right', 0)), w = 30) tab1 = cmds.rowColumnLayout('Dummy') #cmds.columnLayout(rowSpacing = 10) #cmds.setParent('..') cmds.frameLayout(l = 'A: Dummy Skeleton Setup', w = 400) self.charNameUI = cmds.textFieldGrp (label="Optional Character Name:", ann="Insert a name for the character or leave empty.", tx = '', w = 1) fingJUI = cmds.frameLayout(l = 'B: Number of Fingers', w = 10) cmds.text('\n', h = 5) self.fingButtonGrp = cmds.radioButtonGrp('fingRadio', p = fingJUI, l = 'Fingers: ', sl = 4, w = 1, numberOfRadioButtons = 4, labelArray4 = ['One', 'Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw5 = [60,60,60,60,60]) self.thumbCheckBox = cmds.checkBoxGrp(l = 'Thumb: ', v1 = True) cmds.text('\n', h = 5) spineJUI = cmds.frameLayout(l = 'C: Number of Spine Joints') cmds.text('\n', h = 5) self.spineButtonGrp = cmds.radioButtonGrp('spineRadio', p = spineJUI, l = 'Spine Joints: ', sl = 2, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Three', 'Five', 'Ten'], ct2 = ('left', 'left'), cw4 = [95,95,95,95]) cmds.text('\n', h = 5) neckJUI = cmds.frameLayout(l = 'D: Number of Neck Joints') cmds.text('\n', h = 5) self.neckButtonGrp = cmds.radioButtonGrp('neckRadio', p = neckJUI, l = 'Neck Joints: ', sl = 0, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw4 = [95,95,95,95]) cmds.text('\n', h = 5) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.frameLayout('E: Creation') cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2,0.2,0.2)) cmds.button(l = '\nCreate Dummy Skeleton\n', c = self.build) # also have it make char name field grey cmds.text('Elbows and Knees must have bend.', bgc = (0.2,0.2,0.2)) cmds.columnLayout() cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab2 = cmds.rowColumnLayout('Skeleton') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Skeleton Setup') cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2,0.2,0.2)) cmds.button(l = '\nConvert to Skeleton - Orient - Set LRA\n', c = self.buildSkel) self.gScaleUI = cmds.textFieldGrp (label="Scale Multiplier:", ann="Scale multipler of Character: basis for all further base controllers", tx = '1.0', w = 1, ed = False, en = False, visible = True) cmds.frameLayout('B: Manual Orientation') cmds.text('You must manually check finger, 
thumb, leg, foot orientation specifically.\nConfirm rest of joints.\nSpine: X aim, Y point backwards from spine, Z to the side.\nFingers: X is aim, Y points upwards, Z to the side - Spread on Y, curl on Z.\nFoot: Pivots on Y, rolls on Z, leans on X.') cmds.columnLayout() cmds.setParent('..') cmds.frameLayout('C: Finalize Creation of Skeleton') cmds.button(l = '\nFinalize Skeleton\n', c = self.finishS) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab3 = cmds.rowColumnLayout('Legs') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Leg Rig Setup') cmds.button(l = '\nGenerate Foot Lean Locators\n', c = self.makeLean) cmds.text('Place on either side of the foot.\nDo not rotate: Automatic orientation in place.') cmds.frameLayout(l = 'B: Rig Legs') cmds.button(l = '\nRig Legs\n', c = self.makeLegs) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab4 = cmds.rowColumnLayout('Arms') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Arm Rig Setup') cmds.button(l = '\nA: Rig Arms\n', c = self.makeArms) cmds.setParent('..') cmds.setParent('..') tab5 = cmds.rowColumnLayout('Spine and Head') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'Spine Rig Setup') cmds.setParent('..') cmds.setParent('..') tab6 = cmds.rowColumnLayout('Stretchy IK') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'Stretchy Setup') cmds.setParent('..') cmds.setParent('..') tab6 = cmds.rowColumnLayout('Extras') cmds.scrollLayout(saw = 600, sah = 600, cr = True) cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'General Utitlities') cmds.text('\nHere are all my general utilities for various things') cmds.frameLayout(l = 'Automatic Blend Colors Creation and Connection') cmds.rowColumnLayout(nc = 5, w = 10) cmds.text('IK Joint:') cmds.text(l = '') cmds.text('FK/Dyn Joint:') cmds.text(l = '') cmds.text('Bind Joint:') self.ikJBC = cmds.textField() cmds.text(l = '') self.fkJBC = cmds.textField() cmds.text(l = '') self.bindJBC = cmds.textField() cmds.text(' \nBlend Quantity:') cmds.text(l = '') cmds.text(' \nSwitch Control:') cmds.text(l = '') cmds.text(l = '') self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) cmds.text(l = '') cmds.setParent('..') cmds.frameLayout(l = 'Make Spline IK Curve Stretch And Squash') cmds.rowColumnLayout(nc = 5, w = 10) cmds.text('Curve Name:') cmds.text(l = '') cmds.text('Setup Name:') cmds.text(l = '') cmds.text('Joint Quantity:') self.ikJBC = cmds.textField() cmds.text(l = '') self.fkJBC = cmds.textField() cmds.text(l = '') self.bindJBC = cmds.textField() cmds.text(' \nSwitch Control:') cmds.text(l = '') cmds.text(' \nGlobal Control:') cmds.text(l = '') cmds.text(l = '') self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) cmds.setParent('..') cmds.showWindow(rigUI) r = RUI('charNameUI', 'gScaleUI', 'fingButtonGrp', 'thumbCheckBox', 'spineButtonGrp', 
'neckButtonGrp', 'ikJBC', 'fkJBC', 'bindJBC', 'bQBC', 'sCBC') # last modified at 6.20 pm 29th June 2011
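    The core of the question is a callback-binding problem: Maya passes its own trailing argument to every button command, so the handler needs a throwaway parameter (which execCrBC(self, *args) already provides), and any real data must be bound in when the button is wired up rather than passed by the button itself. Note also that c = self.crBC.__doc__ hands the button a docstring where it expects a callable. The binding pattern is language-neutral; here is a small Java/Swing sketch of pre-binding arguments into a click handler, for illustration only (all names hypothetical):

        import javax.swing.JButton;
        import javax.swing.JTextField;

        public class BindArgsDemo {
            // The ActionEvent the toolkit supplies is ignored, playing the
            // same role as the *args swallowed by the Maya button handler;
            // the field references are captured when the listener is wired.
            public static void wire(JButton create, JTextField ikField, JTextField fkField) {
                create.addActionListener(e -> createBlend(ikField.getText(), fkField.getText()));
            }

            static void createBlend(String ikJoint, String fkJoint) {
                System.out.println("crBC(" + ikJoint + ", " + fkJoint + ")");
            }
        }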

    Read the article

  • Visualising a 'Smarties' lid using XAML (WPF/Silverlight, Visual Studio/Blend)

    - by Mr. Disappointment
    Hi folks, First off, to clarify something in the title which could well be ambiguous/misleading, I'd like to inform you of my definition of 'Smarties', as I know often products are available all over - only under a different alias. Smarties are a candy product in the UK, little chocolate drops covered in a crispy shell which are distributed in a card tube, this tube used to have a plastic lid/top with an individual letter on the underside (they've taken a more economical approach as of late), the lid/top of the old-style tube is the main element of this question. Familiarisation Link Lid View Link Okay, now with the seller-type pitch out of the way (no, I don't work for Nestlé ;)), hopefully the question is becoming rather clear. Essentially, I'd like to recreate one of these lids using XAML, ultimately to be utilised in a Silverlight web application. That is, I'd like to result in a reusable control, of which the following is true: It looks like a Smarties lid. The colour can be specified. The letter can be specified. The control can be rotated to display either side. The second two seem trivial, but we must bare in mind that the background colour specified will almost, if not always, be the same as the foreground, leaving a visibility issue where the character content is concerned; as for the rotation, I'm hoping this kind of functionality is reasonably available, and acceptable to implement. So, to put this out there, consider a control named SmartiesLid which derives from ToggleButton (appropriate?) and further plotted out using a style in a resource dictionary which applies to it, as follows: <Style TargetType="local:SmartiesLid"> <Setter Property="Background" Value="Red"/> <Setter Property="Foreground" Value="Red"/> <Setter Property="VerticalContentAlignment" Value="Center"/> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="local:SmartiesLid"> <Grid x:Name="LayoutRoot"> <Grid.ColumnDefinitions> <ColumnDefinition Width=".05*"/> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition Width=".05*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height=".05*"/> <RowDefinition/> <RowDefinition/> <RowDefinition Height=".05*"/> <RowDefinition Height=".1*"/> </Grid.RowDefinitions> <Ellipse Grid.RowSpan="4" Grid.ColumnSpan="4" Fill="{TemplateBinding Background}" Stroke="Transparent"/> <Ellipse Grid.RowSpan="2" Grid.ColumnSpan="2" Grid.Column="1" Grid.Row="1" Fill="{TemplateBinding Background}" Stroke="Transparent"> <Ellipse.Effect> <DropShadowEffect Direction="280" ShadowDepth="6" BlurRadius="6"/> </Ellipse.Effect> </Ellipse> <TextBlock Grid.RowSpan="2" Grid.ColumnSpan="2" Grid.Column="1" Grid.Row="1" Name="LetterTextBlock" Text="{TemplateBinding Content}" Foreground="{TemplateBinding Foreground}" FontSize="190" HorizontalAlignment="Center" VerticalAlignment="Center"> </TextBlock> <!-- <Path Stretch="Fill" Grid.Row="3" Grid.RowSpan="2" Grid.Column="1" Grid.ColumnSpan="2" Fill="Black" Data="..."> How to craw the lid 'tab'? 
</Path> --> </Grid> <ControlTemplate.Resources> <TranslateTransform x:Key="IndentTransform" X="10" /> <RotateTransform x:Key="RotateTransform" Angle="0" /> <Storyboard x:Key="MouseOver"> </Storyboard> <Storyboard x:Key="MouseLeave"> </Storyboard> </ControlTemplate.Resources> <ControlTemplate.Triggers> <Trigger Property="IsMouseOver" Value="true"> <Trigger.EnterActions> <BeginStoryboard Storyboard="{StaticResource MouseOver}"/> </Trigger.EnterActions> <Trigger.ExitActions> <BeginStoryboard Storyboard="{StaticResource MouseLeave}"/> </Trigger.ExitActions> </Trigger> <Trigger Property="IsPressed" Value="true"> <Setter TargetName="LayoutRoot" Property="RenderTransform" Value="{StaticResource IndentTransform}"/> </Trigger> <Trigger Property="IsChecked" Value="true"> <Setter TargetName="LayoutRoot" Property="RenderTransform" Value="{StaticResource RotateTransform}"/> </Trigger> <Trigger Property="IsEnabled" Value="False"> <Setter Property="Foreground" Value="Gray"/> <Setter Property="Opacity" Value="0.5"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> With this in mind, can anyone give input on, in decreasing order of my incompetence in an area: Designing the overall look and feel of the damn thing (I'm no designer, and while I could hack away at this single control for days and potentially get something relatively useful, it's always a gamble). The particular barrier for me here is 'pathing' the tab of the lid, as you will see in the XAML as an element commented out. Should Path be used, or would it be more appropriate to transform a rectangle with rounded corners, or any specific suggestions? Bevelling the individually displayed letter; as detailed above, when the colour of both the foreground and background are the same then this will be invisible if no effects are applied, also for a decent level of realism I'd like to be able to apply such an effect/s. So far use of DropShadow and Balder3DEngine have fulfilled my requirements for graphics in XAML, how achievable is a bevel effect? Rotating the control on mouse-click, that is, showing the opposing face. Is this going to be possible using a style and XAML only for the design? Or is it that ugliness may rear it's head in the form of code-behind to show/hide embedded controls? Should the faces be separate controls and later somehow combined? Allowing the control to size dynamically. I'm supposing I will be able to convert a solid, absolute layout to a nice generic one when I actually have the former in place. Obviously this entails sizing the centralised letter and the lid 'tab', but that's it really, other than keeping the aspect ratio equal (since the ellipses grow nicely with the grid). Any suggestions to approaching this would be greatly appreciated, particularly with a dynamically growing font - I've done that before in a web-imaging scenario using code and System.Drawing, and wouldn't like to approach it in even a similar way. By the way, the reason I specify both WPF and Silverlight is that, from my current knowledge, the inputs being written targeting either of these will be fairly transferable for similar output by the other, albeit not without alterations in either scenario. The resulting application is in fact destined to be written in Silverlight, however, so I don't fancy inviting anything from WPF which will guarantee my only being able to convert 90% of it. I'll go give this little project a start, maybe in Blend(?), hopefully can catch up with some advice shortly. Thanks, Mr. 
D EDIT: One further question: ought this to be broken up into separate questions?

    Read the article

  • How to sort a JTable by providing the column index externally

    - by user345940
    I would like to implement sorting on JTable by providing column index externally in program. Here is my sample code in which i have initialize JTable, Add one Column and 30 rows to JTable. After rows has been added i am sorting JTable by providing column index 0 but i could not get sorted data. how can i get my first column in sorted order? what's wrong with my code. **Why sortCTableonColumnIndex() method could not sort data for specify column index? ` public class Test { private JTable oCTable; private DefaultTableModel oDefaultTableModel; private JScrollPane oPane; private JTableHeader oTableHeader; private TableRowSorter sorter; public void adddata() { for (int i = 0; i < 30; i++) { Object[] row = new Object[1]; String sValueA = "A"; String sValueB = "A"; row[0] = ""; if (i % 2 == 0) { if (i < 15) { sValueA = sValueA + sValueA; row[1] = sValueA; } else { if (i == 16) { sValueB = "D"; row[1] = sValueA; } else { sValueB = sValueB + sValueB; row[1] = sValueA; } } } else { if (i < 15) { sValueB = sValueB + sValueB; row[1] = sValueB; } else { if (i == 17) { sValueB = "C"; row[1] = sValueB; } else { sValueB = sValueB + sValueB; row[1] = sValueB; } } } } } public void createTable() { oCTable = new JTable(); oDefaultTableModel = new DefaultTableModel(); oCTable.setModel(oDefaultTableModel); oTableHeader = oCTable.getTableHeader(); oCTable.setAutoResizeMode(oCTable.AUTO_RESIZE_OFF); oCTable.setFillsViewportHeight(true); JTable oTable = new LineNumberTable(oCTable); oPane = new JScrollPane(oCTable); oPane.setRowHeaderView(oTable); JPanel oJPanel = new JPanel(); oJPanel.setLayout(new BorderLayout()); oJPanel.add(oPane, BorderLayout.CENTER); JDialog oDialog = new JDialog(); oDialog.add(oJPanel); oDialog.setPreferredSize(new Dimension(500, 300)); oDialog.pack(); oDialog.setVisible(true); } public void insert() { oDefaultTableModel.addColumn("Name"); int iColumnPlace = ((DefaultTableModel) oCTable.getModel()).findColumn("Name"); CellRendererForRowHeader oCellRendererForRowHeader = new CellRendererForRowHeader(); TableColumn Column = oCTable.getColumn(oTableHeader.getColumnModel().getColumn(iColumnPlace).getHeaderValue()); Column.setPreferredWidth(300); Column.setMaxWidth(300); Column.setMinWidth(250); Column.setCellRenderer(oCellRendererForRowHeader); for (int i = 0; i < 30; i++) { Object[] row = new Object[1]; String sValueA = "A"; if (i % 2 == 0) { if (i < 15) { sValueA = sValueA + "a"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { if (i == 16) { sValueA = sValueA + "b"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { sValueA = sValueA + "c"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } } } else { if (i < 15) { sValueA = sValueA + "d"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { if (i == 17) { sValueA = sValueA + "e"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { sValueA = sValueA + "f"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } } } } } public void showTable() { createTable(); insert(); sortCTableonColumnIndex(0, true); } public void sortCTableonColumnIndex(int iColumnIndex, boolean bIsAsc) { sorter = new 
TableRowSorter(oDefaultTableModel); List<RowSorter.SortKey> sortKeys = new ArrayList<RowSorter.SortKey>(); if (bIsAsc) { sortKeys.add(new RowSorter.SortKey(iColumnIndex, SortOrder.ASCENDING)); } else { sortKeys.add(new RowSorter.SortKey(iColumnIndex, SortOrder.DESCENDING)); } sorter.setSortKeys(sortKeys); oDefaultTableModel.fireTableStructureChanged(); oCTable.updateUI(); } public static void main(String[] argu) { Test oTest = new Test(); oTest.showTable(); } class CellRendererForRowHeader extends DefaultTableCellRenderer { public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) { JLabel label = null; try { label = (JLabel) super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column); if (column == 0) { label.setBackground(new JLabel().getBackground()); label.setForeground(Color.BLACK); } } catch (RuntimeException ex) { } return label; } } class LineNumberTable extends JTable { private JTable mainTable; public LineNumberTable(JTable table) { super(); mainTable = table; setAutoCreateColumnsFromModel(false); setModel(mainTable.getModel()); setAutoscrolls(false); addColumn(new TableColumn()); getColumnModel().getColumn(0).setCellRenderer(mainTable.getTableHeader().getDefaultRenderer()); getColumnModel().getColumn(0).setPreferredWidth(40); setPreferredScrollableViewportSize(getPreferredSize()); } @Override public boolean isCellEditable(int row, int column) { return false; } @Override public Object getValueAt(int row, int column) { return Integer.valueOf(row + 1); } @Override public int getRowHeight(int row) { return mainTable.getRowHeight(); } } } `
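    The sorter in sortCTableonColumnIndex() is configured but never attached: nothing calls oCTable.setRowSorter(sorter), so the sort keys have no table to act on, and updateUI() merely refreshes painting. A minimal sketch of the attach-then-sort sequence follows; the method and parameter names here are illustrative, and the raw TableRowSorter from the question is given its type parameter:

        import java.util.Collections;
        import javax.swing.JTable;
        import javax.swing.RowSorter;
        import javax.swing.SortOrder;
        import javax.swing.table.DefaultTableModel;
        import javax.swing.table.TableRowSorter;

        public final class TableSortHelper {
            public static void sortOnColumn(JTable table, DefaultTableModel model,
                                            int columnIndex, boolean ascending) {
                TableRowSorter<DefaultTableModel> sorter =
                        new TableRowSorter<DefaultTableModel>(model);
                table.setRowSorter(sorter); // without this the sorter never touches the view
                sorter.setSortKeys(Collections.singletonList(
                        new RowSorter.SortKey(columnIndex,
                                ascending ? SortOrder.ASCENDING : SortOrder.DESCENDING)));
                sorter.sort(); // re-sorts the view immediately
            }
        }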

    Read the article
