Search Results

Search found 9271 results on 371 pages for 'properties'.


  • ASP.NET DynamicData: What's happening during an update?

    - by Jens A.
    I am using ASP.NET DynamicData (based on LINQ to SQL) on my site for basic scaffolding. On one table I have added additional properties that are not stored in the table but are retrieved from somewhere else (profile information for a user account, in this case). They are displayed just fine, but when editing these values and pressing "Update", they are not changed. Here's what the properties look like; the table is the standard aspnet_Users table:

        public String Address
        {
            get
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                return profile.Address;
            }
            set
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                profile.Address = value;
                profile.Save();
            }
        }

    When I fired up the debugger, I noticed that for each update the set accessor is called three times: once with the new value, but on a newly created instance of the user; then once with the old value, again on a new instance; and finally with the old value on the existing instance. Wondering a bit, I checked the properties created by the designer, and they, too, are called three times in (almost) the same fashion. The only difference is that the last call contains the new value for the property. I am a bit stumped here. Why three times, and why are my new properties behaving differently? I'd be grateful for any help on that matter! =)
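
    For comparison, the LINQ to SQL designer wraps generated property assignments in SendPropertyChanging/SendPropertyChanged calls, which the custom property above lacks; mirroring that pattern is one hypothesis about the difference, sketched below. This is a guess to test, not a confirmed fix, and it assumes the property lives in a partial class of the generated entity (where those private designer-generated methods are accessible).

        // Hypothetical variant of the Address property above, adding the
        // change notifications that designer-generated properties emit.
        public String Address
        {
            get
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                return profile.Address;
            }
            set
            {
                this.SendPropertyChanging();            // as in designer-generated setters
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                profile.Address = value;
                profile.Save();
                this.SendPropertyChanged("Address");    // lets the data source track the change
            }
        }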


  • Firing PropertyChanged event in a complex, nested type in WPF

    - by John
    Hey, I have a question about the PropertyChanged event firing in WPF when it is used in a complex type. I have a class called DataStore with a list of Departments (an ObservableCollection), and each Department again has a list of Products. Properties in the Product class that are changed also affect properties in the Department and DataStore classes. How does each Product notify the Department it belongs to, and the DataStore (which is the mother class of all), that one or more of its properties have changed their values? Example: a Product has a bound property NumberSoldToday. The Department has a property called TotalNumberOfProductsSold:

        public int TotalNumberOfProductsSold
        {
            get
            {
                int result = 0;
                foreach (Product p in this.products)
                    result += p.NumberSoldToday;
                return result;
            }
        }

    And the DataStore has a property TotalProductsSold (for all departments):

        public int TotalProductsSold
        {
            get
            {
                int result = 0;
                foreach (Department d in this.departments)
                    result += d.TotalNumberOfProductsSold;
                return result;
            }
        }

    If all these properties are bound and the innermost property changes, it must somehow notify that the values of the other two changed as well. How? The only way I can see this happening is to hook up the PropertyChanged event in each class. The event must also fire when deleting from or adding to the collections of products and departments, respectively. Is there a better, more clever way to do this?
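
    For reference, the wiring described in the question might look roughly like the sketch below: each Department subscribes to its Products' PropertyChanged and to the collection's CollectionChanged, and re-raises a notification for its own computed total (the DataStore would repeat the same pattern one level up). The class and property names come from the question; everything else is an assumption, and it presumes Product itself implements INotifyPropertyChanged.

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class Department : INotifyPropertyChanged
        {
            private readonly ObservableCollection<Product> products =
                new ObservableCollection<Product>();

            public Department()
            {
                products.CollectionChanged += (s, e) =>
                {
                    if (e.NewItems != null)
                        foreach (Product p in e.NewItems)
                            p.PropertyChanged += OnProductChanged;  // watch added items
                    if (e.OldItems != null)
                        foreach (Product p in e.OldItems)
                            p.PropertyChanged -= OnProductChanged;  // release removed items
                    Raise("TotalNumberOfProductsSold");  // adds/removes change the total too
                };
            }

            private void OnProductChanged(object sender, PropertyChangedEventArgs e)
            {
                if (e.PropertyName == "NumberSoldToday")
                    Raise("TotalNumberOfProductsSold");  // bubble the change upward
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void Raise(string name)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
            }
        }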


  • SQL Server Connection Timeout C#

    - by Termin8tor
    First off, I'd like to let everyone know I have searched my particular problem and can't seem to find what's causing it. I have a SQL Server 2008 instance running on a network machine and a client I have written connecting to it. To connect, I have a small segment of code that establishes a connection to the SQL Server 2008 instance and returns a DataTable populated with the results of whatever query I run against the server; all pretty standard stuff, really. Anyway, the issue is: whenever I open my program, the first call to this method, regardless of what I've set as the Connection Timeout value in the connection string, takes about 15 seconds and then times out. Bizarrely, though, the second or third call I make to the method will work without a problem. I have opened up the ports for SQL Server on the server machine as outlined in this article: How to Open firewall ports for SQL Server, and verified that it is correctly configured. Can anyone see a particular problem in my code?

        string _connectionString = "Server=" + @Properties.Settings.Default.sqlServer +
            "; Initial Catalog=" + @Properties.Settings.Default.sqlInitialCatalog +
            ";User Id=" + @Properties.Settings.Default.sqlUsername +
            ";Password=" + @Properties.Settings.Default.sqlPassword +
            "; Connection Timeout=1";

        private DataTable ExecuteSqlStatement(string command)
        {
            using (SqlConnection conn = new SqlConnection(_connectionString))
            {
                try
                {
                    conn.Open();
                    using (SqlDataAdapter adaptor = new SqlDataAdapter(command, conn))
                    {
                        DataTable table = new DataTable();
                        adaptor.Fill(table);
                        return table;
                    }
                }
                catch (SqlException e)
                {
                    throw e;
                }
            }
        }

    The SqlException that is caught is: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." This occurs at the conn.Open(); line in the code snippet I have included. If anyone has any ideas, that'd be great!
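
    As an aside, building the string with SqlConnectionStringBuilder makes it easy to verify which timeout actually ends up in the final string; a minimal sketch, reusing the settings names from the question's code (note that ConnectTimeout only governs conn.Open(); command execution has its own CommandTimeout on SqlCommand):

        using System.Data.SqlClient;

        static class ConnectionStrings
        {
            // Sketch: same connection data as the question's _connectionString,
            // assembled via the builder. The Properties.Settings names are
            // assumptions copied from the question.
            public static string Build()
            {
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = Properties.Settings.Default.sqlServer,
                    InitialCatalog = Properties.Settings.Default.sqlInitialCatalog,
                    UserID = Properties.Settings.Default.sqlUsername,
                    Password = Properties.Settings.Default.sqlPassword,
                    ConnectTimeout = 15  // seconds; applies to Open(), not to queries
                };
                return builder.ConnectionString;
            }
        }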


  • How to set the ActiveMQ redeliveryPolicy on a queue?

    - by edbras
    How do I set the redeliveryPolicy in ActiveMQ on a queue? 1) In the docs (see: activeMQ Redelivery), they explain that you should set it on the ConnectionFactory or Connection, but I want to use different values for different queues. 2) Apart from that, I can't seem to get it to work at all. Setting it on the connection factory in Spring (I am using ActiveMQ 5.4.2 with Spring 3.0) like this doesn't seem to have any effect:

        <amq:connectionFactory id="amqConnectionFactory" brokerURL="${jms.factory.url}">
            <amq:properties>
                <amq:redeliveryPolicy maximumRedeliveries="6"
                                      initialRedeliveryDelay="15000"
                                      useExponentialBackOff="true"
                                      backOffMultiplier="5"/>
            </amq:properties>
        </amq:connectionFactory>

    I also tried to set it as a property on the defined queue, but that also seems to be ignored, as redelivery occurs sooner than the defined values allow:

        <amq:queue id="jmsQueueDeclarationSnd" physicalName="${jms.queue.declaration.snd}">
            <amq:properties>
                <amq:redeliveryPolicy maximumRedeliveries="6"
                                      initialRedeliveryDelay="15000"
                                      useExponentialBackOff="true"
                                      backOffMultiplier="5"/>
            </amq:properties>
        </amq:queue>

    Thanks


  • Access property of a class from within a class instantiated in the original class.

    - by Iain
    I'm not certain how to explain this with the correct terms, so maybe an example is the best method...

        $master = new MasterClass();
        $master->doStuff();

        class MasterClass
        {
            var $a;
            var $b;
            var $c;
            var $eventProccer;

            function MasterClass()
            {
                $this->a = 1;
                $this->eventProccer = new EventProcess();
            }

            function printCurrent()
            {
                echo '<br>'.$this->a.'<br>';
            }

            function doStuff()
            {
                $this->printCurrent();
                $this->eventProccer->DoSomething();
                $this->printCurrent();
            }
        }

        class EventProcess
        {
            function EventProcess() {}

            function DoSomething()
            {
                // trying to access and change the parent class' a,b,c properties
            }
        }

    My problem is I'm not certain how to access the properties of the MasterClass from within the EventProcess->DoSomething() method. I would need to access, perform operations on, and update the properties. The a, b, c properties will be quite large arrays, and the DoSomething() method will be called many times during the execution of the script. Any help or pointers would be much appreciated :)


  • Reference properties declared in a protocol and implemented in the anonymous category?

    - by Heath Borders
    I have the following protocol:

        @protocol MyProtocol
        @property (nonatomic, retain) NSObject *myProtocolProperty;
        - (void) myProtocolMethod;
        @end

    and I have the following class:

        @interface MyClass : NSObject {
        }
        @end

    I have a class extension declared, and I have to redeclare my protocol properties there or else I can't implement them with the rest of my class:

        @interface MyClass () <MyProtocol>
        @property (nonatomic, retain) NSObject *myExtensionProperty;
        /*
         * This redeclaration is required or my @synthesize myProtocolProperty fails
         */
        @property (nonatomic, retain) NSObject *myProtocolProperty;
        - (void) myExtensionMethod;
        @end

        @implementation MyClass
        @synthesize myProtocolProperty = _myProtocolProperty;
        @synthesize myExtensionProperty = _myExtensionProperty;
        - (void) myProtocolMethod {
        }
        - (void) myExtensionMethod {
        }
        @end

    In a consumer method, I can call my protocol methods and properties just fine. Calling my extension methods and properties produces a warning and an error respectively, which is what I want:

        - (void) consumeMyClassWithMyProtocol: (MyClass<MyProtocol> *) myClassWithMyProtocol {
            myClassWithMyProtocol.myProtocolProperty;   // works, yay!
            [myClassWithMyProtocol myProtocolMethod];   // works, yay!
            myClassWithMyProtocol.myExtensionProperty;  // compiler error, yay!
            [myClassWithMyProtocol myExtensionMethod];  // compiler warning, yay!
        }

    Is there any way I can avoid redeclaring the properties from MyProtocol within my class extension in order to implement MyProtocol privately?


  • Creating an SMF service for mercurial web server

    - by Chris W Beal
    I'm working on a project at the moment which has a number of contributors. We're managing the project gate (which is stand-alone) with Mercurial. We want to have an easy way of seeing the changelog, so we can show management what is going on. Luckily Mercurial provides a basic web server which allows you to see the changes and drill in to changesets. This can be run as a daemon, but as it was running on our build server, every time it was rebooted someone needed to remember to start the process again. This is of course a classic usage of SMF. Now I'm not an experienced person at writing SMF services, so it took me half an hour or so to figure it out the first time. But going forward I should know what I'm doing a bit better. I did reference this doc extensively.

    Taking a step back, the command to start the Mercurial web server is:

        $ hg serve -p <port number> -d

    So we somehow need to get SMF to run that command for us. In the simplest form, SMF services are really made up of two components:

      - The manifest
        - Usually lives in /var/svc/manifest somewhere
        - Can be imported from any location
      - The method
        - Usually lives in /lib/svc/method
        - I simply put the script straight in that directory. Not very repeatable, but it worked.
        - Can take an argument of start, stop, or refresh

    Let's start with the manifest. This looks pretty complex, but all it's doing is describing the service name, the dependencies, the start and stop methods, and some properties. The properties can be per instance; that is to say, I could have multiple hg serve processes handling different Mercurial projects on different ports simultaneously. Here is the manifest I wrote. I stole extensively from the examples in the documentation.

        $ cat hg-serve.xml
        <?xml version="1.0"?>
        <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
        <service_bundle type='manifest' name='hg-serve'>
          <service name='application/network/hg-serve' type='service' version='1'>
            <dependency name='network' grouping='require_all' restart_on='none' type='service'>
              <service_fmri value='svc:/milestone/network:default' />
            </dependency>
            <exec_method type='method' name='start' exec='/lib/svc/method/hg-serve %m' timeout_seconds='2' />
            <exec_method type='method' name='stop' exec=':kill' timeout_seconds='2'>
            </exec_method>
            <instance name='project-gate' enabled='true'>
              <method_context>
                <method_credential user='root' group='root' />
              </method_context>
              <property_group name='hg-serve' type='application'>
                <propval name='path' type='astring' value='/src/project-gate'/>
                <propval name='port' type='astring' value='9998' />
              </property_group>
            </instance>
            <stability value='Evolving' />
            <template>
              <common_name>
                <loctext xml:lang='C'>hg-serve</loctext>
              </common_name>
              <documentation>
                <manpage title='hg' section='1' />
              </documentation>
            </template>
          </service>
        </service_bundle>

    So the only things I had to decide on in this are the service name ("application/network/hg-serve"), the start and stop methods (more of which later), and the properties. The properties carry the information I need to pass to the start method script: in my case, the port I want to start the web server on ("9998") and the path to the source gate ("/src/project-gate"). These can be read by the start method. So now let's look at the method script:

        $ cat /lib/svc/method/hg-serve
        #!/sbin/sh
        #
        # Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
        #
        # Standard prolog
        #
        . /lib/svc/share/smf_include.sh

        if [ -z $SMF_FMRI ]; then
                echo "SMF framework variables are not initialized."
                exit $SMF_EXIT_ERR
        fi

        #
        # Build the command line flags
        #
        # Get the port and directory from the SMF properties
        port=`svcprop -c -p hg-serve/port $SMF_FMRI`
        dir=`svcprop -c -p hg-serve/path $SMF_FMRI`

        echo "$1"
        case "$1" in
        'start')
                cd $dir
                /usr/bin/hg serve -d -p $port
                ;;
        *)
                echo "Usage: $0 {start|refresh|stop}"
                exit 1
                ;;
        esac
        exit $SMF_EXIT_OK

    This is all pretty self-explanatory: we read the port and directory using svcprop, and use those simply to run a command in the start case. We don't need to implement a stop case, as the manifest says to use exec=':kill' for the stop method. Now all we need to do is import the manifest and start the service, but first verify the manifest:

        # svccfg verify /path/to/hg-serve.xml

    If that doesn't give an error, try importing it:

        # svccfg import /path/to/hg-serve.xml

    If, like me, you originally put the hg-serve.xml file somewhere in /var/svc/manifest, you'll get an error telling you to restart the import service:

        svccfg: Restarting svc:/system/manifest-import
        The manifest being imported is from a standard location and should be imported with the command : svcadm restart svc:/system/manifest-import

        # svcadm restart svc:/system/manifest-import

    and you're nearly done. You can look at the service using svcs -l:

        # svcs -l hg-serve
        fmri         svc:/application/network/hg-serve:project-gate
        name         hg-serve
        enabled      false
        state        disabled
        next_state   none
        state_time   Thu May 31 16:11:47 2012
        logfile      /var/svc/log/application-network-hg-serve:project-gate.log
        restarter    svc:/system/svc/restarter:default
        contract_id  15749
        manifest     /var/svc/manifest/network/hg/hg-serve.xml
        dependency   require_all/none svc:/milestone/network:default (online)

    And look at the interesting properties:

        # svcprop hg-serve
        hg-serve/path astring /src/project-gate
        hg-serve/port astring 9998
        ...stuff deleted....

    Then simply enable the service and, if everything's gone right, you can point your browser at http://server:9998 and get a nice graphical log of project activity:

        # svcadm enable hg-serve
        # svcs -l hg-serve
        fmri         svc:/application/network/hg-serve:project-gate
        name         hg-serve
        enabled      true
        state        online
        next_state   none
        state_time   Thu May 31 16:18:11 2012
        logfile      /var/svc/log/application-network-hg-serve:project-gate.log
        restarter    svc:/system/svc/restarter:default
        contract_id  15858
        manifest     /var/svc/manifest/network/hg/hg-serve.xml
        dependency   require_all/none svc:/milestone/network:default (online)

    None of this is rocket science, but it is a bit fiddly. Hence I thought I'd blog it. It might just be that you see this in Google and it clicks with you more than one of the many other blogs or how-tos about it. Plus I can always refer back to it myself in three weeks, when I want to add another project to the server and I've forgotten how to do it.


  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and required the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems.

    Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently…

    In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate.

        public bool CommercialPrices
        {
            get { return this.Retrieve(p => p.CommercialPrices); }
            set { this.Store(p => p.CommercialPrices, value); }
        }

    This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need.

    This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation:

        public double Price
        {
            get { return Retrieve(r => r.Price); }
            set { Store(r => r.Price, value); }
        }

    This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data to both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required. This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual:

        public double Price
        {
            get { return Record.Price; }
            set { Record.Price = value; }
        }

    As your users browse the site, it will get faster and faster as Select N+1 issues optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries.

    There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart, for example:

        public string[] SearchedFields
        {
            get
            {
                return (Retrieve<string>("SearchedFields") ?? "")
                    .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries);
            }
            set { Store("SearchedFields", String.Join(", ", value)); }
        }

    The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store.

    You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up to date. It’s not very complicated, but definitely something to keep in mind. The original post shows, as screenshots, a product record in Nwazet.Commerce side by side with the same data in the infoset. The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record of the same id. Here is the detailed XML document for this product:

        <Data>
          <ProductPart Inventory="40" Price="18" Sku="pi-camera-box"
                       OutOfStockMessage="" AllowBackOrder="false" Weight="0.2"
                       Size="" ShippingCost="null" IsDigital="false" />
          <ProductAttributesPart Attributes="" />
          <AutoroutePart DisplayAlias="camera-box" />
          <TitlePart Title="Nwazet Pi Camera Box" />
          <BodyPart Text="[...]" />
          <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" />
        </Data>

    The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record. In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering. Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably. So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources.
Expect this code to make its way into the 1.8 version of Orchard when that’s available.
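
    Putting the pieces above together, a content part mixing both storage styles might look like the sketch below. It is built only from the Retrieve/Store patterns shown in the post; the part and property names are invented for illustration, and the ContentPart base class is assumed from Orchard's usual shape.

        // Hypothetical Orchard content part combining the two patterns:
        // Price is dual-stored (infoset + record, so it can be filtered and
        // sorted on); PromoMessage lives in the infoset only, no record column.
        public class StorefrontPart : ContentPart<StorefrontPartRecord>
        {
            public double Price
            {
                get { return Retrieve(r => r.Price); }   // infoset wins if present, falls back to record
                set { Store(r => r.Price, value); }      // writes both places, kept in sync
            }

            public string PromoMessage
            {
                get { return this.Retrieve(p => p.PromoMessage); }
                set { this.Store(p => p.PromoMessage, value); }
            }
        }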


  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times, however, when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find. This typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search against, these facets need to continually refresh. The whole experience allows users to explore alternate brands and price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment.

    The second advantage of this type of search, from a business perspective, is to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. The original post illustrates this with a faceted search generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sale or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

    This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; specifically, I have used the SolrNet C# library to interface to the Java platform. Also, I am not going to talk about Solr configuration or indexing, but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, applying an XSLT transform to the file to make it conform to Solr, and using a simple HTTP POST to send it to the search engine for indexing.
    Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add references to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method.

    As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters, the cache allows access to the Commerce Server cache, allowing us to store regularly accessed information, and the response object is the object on which we will return the result of our search. Inside this method is simply where we are going to inject our logic for our third-party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here. I would highly recommend, however, looking at the SolrNet wiki, as they have some great explanations of how the API works.

    What you will find, however, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box the CommerceQueryOperation you receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant, you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within it you need to detail the properties you will require to submit as parameters to the Solr search API. My example looks like this:

        [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")]
        public class CommerceCatalogSolrSearch : CommerceSearchCriteria
        {
            private Dictionary<string, string> _facetQueries;

            public CommerceCatalogSolrSearch()
            {
                _facetQueries = new Dictionary<String, String>();
            }

            public Dictionary<String, String> FacetQueries
            {
                get { return _facetQueries; }
                set { _facetQueries = value; }
            }

            public String SearchPhrase { get; set; }
            public int PageIndex { get; set; }
            public int PageSize { get; set; }
            public IEnumerable<String> Facets { get; set; }

            public string Sort { get; set; }

            public new int FirstItemIndex
            {
                get { return (PageIndex - 1) * PageSize; }
            }

            public int LastItemIndex
            {
                get { return FirstItemIndex + PageSize; }
            }
        }

    To allow you to construct a CommerceQueryOperation call within the API, you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria you have just created and expose the properties you want set.
    My message builder looks like this:

        public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder
        {
            private CommerceCatalogSolrSearch _solrSearch;

            public CommerceCatalogSolrSearchBuilder()
            {
                _solrSearch = new CommerceCatalogSolrSearch();
            }

            public String SearchPhrase
            {
                get { return _solrSearch.SearchPhrase; }
                set { _solrSearch.SearchPhrase = value; }
            }

            public int PageIndex
            {
                get { return _solrSearch.PageIndex; }
                set { _solrSearch.PageIndex = value; }
            }

            public int PageSize
            {
                get { return _solrSearch.PageSize; }
                set { _solrSearch.PageSize = value; }
            }

            public Dictionary<String, String> FacetQueries
            {
                get { return _solrSearch.FacetQueries; }
                set { _solrSearch.FacetQueries = value; }
            }

            public String[] Facets
            {
                get { return _solrSearch.Facets.ToArray(); }
                set { _solrSearch.Facets = value; }
            }

            public override CommerceSearchCriteria ToSearchCriteria()
            {
                return _solrSearch;
            }
        }

    Once you have these two classes in place, you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g.:

        public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation)
        {
            var searchCriteria = operation as CommerceQueryOperation;
            if (searchCriteria == null)
                throw new Exception("No search criteria present");

            var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria;
            if (local == null)
                throw new Exception("Unexpected Search Criteria in Operation");

            return local;
        }

    Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results back into Commerce Entities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interface, you get a single Translate method, which has a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate that you map the objects using your MetadataDefinitions.xml file.
    Once complete, your translator will look something like the following:

        public class SolrEntityTranslator : IToCommerceEntityTranslator
        {
            #region IToCommerceEntityTranslator Members

            public void Translate(object source, CommerceEntity destinationCommerceEntity,
                CommercePropertyCollection propertiesToReturn)
            {
                if (source.GetType().Equals(typeof (SearchProduct)))
                {
                    var searchResult = (SearchProduct) source;

                    destinationCommerceEntity.Id = searchResult.ProductId;
                    destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title);
                    destinationCommerceEntity.ModelName = "Product";
                }
            }

            #endregion
        }

    Once you have a translator in place, you can safely map the results of your search platform into Commerce Entities and attach them to the CommerceResponse object in a fashion similar to this:

        foreach (SearchProduct result in matchingProducts)
        {
            var destinationEntity = new CommerceEntity(_returnModelName);

            Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties);
            response.CommerceEntities.Add(destinationEntity);
        }

    In Solr I actually have two objects being returned (a product and a collection of facets), so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the Translator helper class separately.

    When all of this is pieced together, you have successfully completed the extensibility-point coding. You will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it, and identify its signature. Next, put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component. You will also need to add your translators to the Translators node of your Channel.Config, and add any custom CommerceEntities you have developed to your MetadataDefinitions.xml file. Your configuration is now complete, and you should be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third-party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search response first).

    Now, to call your code, you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity:

        public KeyValuePair<FacetCollection, List<Product>> DoFacetedProductQuerySearch(
            string searchPhrase, string orderKey, string sortOrder, int recordIndex,
            int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount)
        {
            var products = new List<Product>();
            var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();

            query.SearchCriteria.PageIndex = recordIndex;
            query.SearchCriteria.PageSize = recordsPerPage;
            query.SearchCriteria.SearchPhrase = searchPhrase;
            query.SearchCriteria.FacetQueries = facetQueries;

            totalItemCount = 0;
            CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest());
            var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;

            // No results: return the empty pair
            if (queryResponse != null && queryResponse.CommerceEntities.Count == 0)
                return new KeyValuePair<FacetCollection, List<Product>>();

            totalItemCount = (int) queryResponse.TotalItemCount;

            // Prepare a multi-operation to retrieve the product variants
            var multiOperation = new CommerceMultiOperation();

            // Add products to results
            foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product"))
            {
                var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition);
                productQuery.SearchCriteria.Model.Id = product.Id;
                productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;

                var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);

                productQuery.RelatedOperations.Add(variantQuery);

                multiOperation.Add(productQuery);
            }

            CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest());
            foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses)
            {
                if (queryOpResponse.CommerceEntities.Count() > 0)
                    products.Add(queryOpResponse.CommerceEntities[0]);
            }

            // Get facet collection
            FacetCollection facetCollection = queryResponse.CommerceEntities
                .Where(x => x.ModelName == "FacetCollection").FirstOrDefault();

            return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products);
        }

    ...And that is it. Simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third-party search platform, whilst still maintaining a unified API in the remainder of your code. This approach works for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!


  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
    Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like how Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First (writing code which drives the database schema), I can still benefit from the simplicity of the lightweight code approach.

    Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here.

    Step One: Generate an EF Model from your existing database

    The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item...". Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant). Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema.

    Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck "Save entity connection settings in Web.config" (since we won't be using them within the application), and click Next. Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish. And there's your model. If you want, you can make additional changes here before going on to generate the code.

    Step Two: Add the DbContext Generator

    Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization. Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item...". Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed.

    As soon as you do this, you'll see two terrifying security warnings - unless you click the "Do not show this message again" checkbox the first time. They will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!). Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed. The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there.

    Step Three: Modify and use your POCO entity classes

    Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet for each entity. Think of DbSet as a simple List, but with Entity Framework features turned on.

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        namespace EF_CodeFirst_From_Existing_Database.Models
        {
            using System;
            using System.Data.Entity;

            public partial class Entities : DbContext
            {
                public Entities()
                    : base("name=Entities")
                {
                }

                public DbSet<Album> Albums { get; set; }
                public DbSet<Artist> Artists { get; set; }
                public DbSet<Cart> Carts { get; set; }
                public DbSet<Genre> Genres { get; set; }
                public DbSet<OrderDetail> OrderDetails { get; set; }
                public DbSet<Order> Orders { get; set; }
            }
        }

    It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc.

        using System.Data.Entity;

        namespace MvcMusicStore.Models
        {
            public class MusicStoreEntities : DbContext
            {
                public DbSet<Album> Albums { get; set; }
                public DbSet<Genre> Genres { get; set; }
                public DbSet<Artist> Artists { get; set; }
                public DbSet<Cart> Carts { get; set; }
                public DbSet<Order> Orders { get; set; }
                public DbSet<OrderDetail> OrderDetails { get; set; }
            }
        }

    Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just needed to remove the header and clean up the namespace:

        namespace EF_CodeFirst_From_Existing_Database.Models
        {
            using System;
            using System.Collections.Generic;

            public partial class Cart
            {
                // Primitive properties
                public int RecordId { get; set; }
                public string CartId { get; set; }
                public int AlbumId { get; set; }
                public int Count { get; set; }
                public System.DateTime DateCreated { get; set; }

                // Navigation properties
                public virtual Album Album { get; set; }
            }
        }

    I did a bit more customization on the Album class. Here's what was generated:

        namespace EF_CodeFirst_From_Existing_Database.Models
        {
            using System;
            using System.Collections.Generic;

            public partial class Album
            {
                public Album()
                {
                    this.Carts = new HashSet<Cart>();
                    this.OrderDetails = new HashSet<OrderDetail>();
                }

                // Primitive properties
                public int AlbumId { get; set; }
                public int GenreId { get; set; }
                public int ArtistId { get; set; }
                public string Title { get; set; }
                public decimal Price { get; set; }
                public string AlbumArtUrl { get; set; }

                // Navigation properties
                public virtual Artist Artist { get; set; }
                public virtual Genre Genre { get; set; }
                public virtual ICollection<Cart> Carts { get; set; }
                public virtual ICollection<OrderDetail> OrderDetails { get; set; }
            }
        }

    I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using Code First and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class:

        using System.Collections.Generic;

        namespace MvcMusicStore.Models
        {
            public class Album
            {
                public int AlbumId { get; set; }
                public int GenreId { get; set; }
                public int ArtistId { get; set; }
                public string Title { get; set; }
                public decimal Price { get; set; }
                public string AlbumArtUrl { get; set; }
                public virtual Genre Genre { get; set; }
                public virtual Artist Artist { get; set; }
                public virtual List<OrderDetail> OrderDetails { get; set; }
            }
        }

    It's my class, not Entity Framework's, so I'm free to do what I want with it. I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below:

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        namespace MvcMusicStore.Models
        {
            [Bind(Exclude = "AlbumId")]
            public class Album
            {
                [ScaffoldColumn(false)]
                public int AlbumId { get; set; }

                [DisplayName("Genre")]
                public int GenreId { get; set; }

                [DisplayName("Artist")]
                public int ArtistId { get; set; }

                [Required(ErrorMessage = "An Album Title is required")]
                [StringLength(160)]
                public string Title { get; set; }

                [Required(ErrorMessage = "Price is required")]
                [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")]
                public decimal Price { get; set; }

                [DisplayName("Album Art URL")]
                [StringLength(1024)]
                public string AlbumArtUrl { get; set; }

                public virtual Genre Genre { get; set; }
                public virtual Artist Artist { get; set; }
                public virtual List<OrderDetail> OrderDetails { get; set; }
            }
        }

    The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
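
    To close the loop, using the finished context is the usual EF Code First pattern; a quick sketch, assuming the MusicStoreEntities and Album classes above and a connection string that points the context at the existing database:

        using System;
        using System.Linq;
        using MvcMusicStore.Models;

        class Demo
        {
            static void Main()
            {
                // Sketch: query the existing database through the generated model.
                using (var db = new MusicStoreEntities())
                {
                    var cheapAlbums = db.Albums
                        .Where(a => a.Price < 10.00m)   // translated to SQL by EF
                        .OrderBy(a => a.Title)
                        .ToList();

                    foreach (var album in cheapAlbums)
                        Console.WriteLine("{0} - {1:C}", album.Title, album.Price);
                }
            }
        }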


  • Need help with workflow in Alfresco

    - by Scott Gartner
    Hello SO community, I haven't had any luck getting help in the Alfresco forums, and I'm hoping for more here. We are building an application based on Alfresco and jBPM, and I have defined a workflow, but I have either defined it wrong, or I am missing something, or there are bugs in the Alfresco integration with jBPM, and I need help figuring out which and fixing it. Here is the problem: I have an advanced workflow and I am trying to launch it from JavaScript. Here is the code I'm using to start the workflow:

        var nodeId = args.nodeid;
        var document = search.findNode("workspace://SpacesStore/" + nodeId);
        var workflowAction = actions.create("start-workflow");
        workflowAction.parameters.workflowName = "jbpm$nmwf:MyWorkflow";
        workflowAction.parameters["bpm:workflowDescription"] = "Please edit: " + document.name;
        workflowAction.parameters["bpm:assignees"] = [people.getPerson("admin"), people.getPerson("andyg")];
        var futureDate = new Date();
        futureDate.setDate(futureDate.getDate() + 7);
        workflowAction.parameters["bpm:workflowDueDate"] = futureDate;
        workflowAction.execute(document);

    This runs fine, and the e-mail sent from the start node's default transition fires just fine. However, when I go looking for the workflow in my task list it is not there, but it is in my completed task list. The default transition (the only transition) from the start node points at a task node which has four transitions. There are 8 tasks and 22 transitions in the workflow. When I use the workflow console to start the workflow and end the start task, it properly follows the default start node transition to the next task. The new task shows up in "show tasks" but does not show up in "show my tasks" (apparently because the task was marked completed for some reason, though it is not in the "end" node). The task is:

        task id: jbpm$111 , name: nmwf:submitInEditing , properties: 18

    If I do "show transitions" it looks just as I would expect:

        path: jbpm$62-@ , node: In Editing , active: true
        task id: jbpm$111 , name: nmwf:submitInEditing, title: submitInEditing title , desc: submitInEditing description , properties: 18
        transition id: Submit for Approval , title: Submit for Approval
        transition id: Request Copyediting Review , title: Request Copyediting Review
        transition id: Request Legal Review , title: Request Legal Review
        transition id: Request Review , title: Request Review

    I don't want to post the entire workflow as it's large, but here are the first two nodes. First the swimlanes:

        <swimlane name="initiator"></swimlane>
        <swimlane name="Content Providers">
            <assignment actor-id="Content Providers">
                <actor>#{bpm_assignees}</actor>
            </assignment>
        </swimlane>

    Now the nodes:

        <start-state name="start">
            <task name="nmwf:submitTask" swimlane="initiator"/>
            <transition name="" to="In Editing">
                <action>
                    <runas>admin</runas>
                    <script>
                        /* Code to send e-mail that a new workflow was started. I get this e-mail. */
                    </script>
                </action>
            </transition>
        </start-state>

        <task-node name="In Editing">
            <task name="nmwf:submitInEditing" swimlane="Content Providers" />
            <!-- I put e-mail sending code in each of these transitions, but none are firing. -->
            <transition to="In Approval" name="Submit for Approval"></transition>
            <transition to="In Copyediting" name="Request Copyediting Review"></transition>
            <transition to="In Legal Review" name="Request Legal Review"></transition>
            <transition to="In Review" name="Request Review"></transition>
        </task-node>

    Here is the model for these two nodes:

        <type name="nmwf:submitTask">
            <parent>bpm:startTask</parent>
            <mandatory-aspects>
                <aspect>bpm:assignees</aspect>
            </mandatory-aspects>
        </type>

        <type name="nmwf:submitInEditing">
            <parent>bpm:workflowTask</parent>
            <mandatory-aspects>
                <aspect>bpm:assignees</aspect>
            </mandatory-aspects>
        </type>

    Here is a pseudo-log of running the workflow in the workflow console:

        :: deploy alfresco/extension/workflow/processdefinition.xml
        deployed definition id: jbpm$69 , name: jbpm$nmwf:MyWorkflow , title: nmwf:MyWorkflow , version: 28

        :: var bpm:assignees* person admin,andyg
        set var {http://www.alfresco.org/model/bpm/1.0}assignees = [workspace://SpacesStore/73cf1b28-21aa-40ca-9dde-1cff492d0268, workspace://SpacesStore/03297e91-0b89-4db6-b764-5ada2d167424]

        :: var bpm:package package 1
        set var {http://www.alfresco.org/model/bpm/1.0}package = workspace://SpacesStore/6e2bbbbd-b728-4403-be37-dfce55a83641

        :: start bpm:assignees bpm:package
        started workflow id: jbpm$63 , def: nmwf:MyWorkflow
        path: jbpm$63-@ , node: start , active: true
        task id: jbpm$112 , name: nmwf:submitTask, title: submitTask title , desc: submitTask description , properties: 16
        transition id: [default] , title: Task Done

        :: show transitions
        path: jbpm$63-@ , node: start , active: true
        task id: jbpm$112 , name: nmwf:submitTask, title: submitTask title , desc: submitTask description , properties: 17
        transition id: [default] , title: Task Done

        :: end task jbpm$112
        signal sent - path id: jbpm$63-@
        path: jbpm$63-@ , node: In Editing , active: true
        task id: jbpm$113 , name: nmwf:submitInEditing, title: submitInEditing title , desc: submitInEditing description , properties: 17
        transition id: Submit for Approval , title: Submit for Approval
        transition id: Request Copyediting Review , title: Request Copyediting Review
        transition id: Request Legal Review , title: Request Legal Review
        transition id: Request Review , title: Request Review

        :: show tasks
        task id: jbpm$113 , name: nmwf:submitInEditing , properties: 18

        :: show my tasks
        admin: [there is no output here]

    I have been making the assumption that the bpm:assignees I set before starting the workflow are passed to the first task node, "In Editing". Clearly the assignees are on the task object and not on the workflow object. I added the assignees aspect to the start-state task so that it could hold them (after I had a problem; initially they were not there), and possibly they are still sitting there, but the start-state has ended before I even get control back from the web script (not that it would help if it hadn't ended; I need the workflow to be in "In Editing", as the start-state is only used to log that the workflow was started). It has always confused me that the properties I need to set on each task must be supplied before the task is entered: when you choose a transition you must provide the data for the next task before you can actually move to it, as you have to validate that you have all of the required data first and then signal the transition. However, the code to start the workflow is asynchronous and therefore does not return either the started workflow or the current task (which in my case would be "In Editing"). So, either way, you cannot set variables such as bpm:assignees and bpm:dueDate. I wonder if this is the problem with the user task list: I'm setting the assignees in the property list, but maybe those assignees are going to the start-state task and are not getting passed to the "In Editing" task?

    Note that this is my first jBPM workflow, so please don't assume I know what I'm doing. If you see something that looks off, it probably is and I just don't know it. Thanks in advance for any advice or help.


  • What is "Disable class based route addition" good for?

    - by id.roppert.dejroppert
    In the advanced TCP/IP settings of a VPN connection, I found a checkbox labeled "Disable class based route addition". The checkbox is only enabled as long as "Use default gateway on remote network" is switched off. What is "Disable class based route addition" good for? Detailed instructions to find the setting:

      1. Open the Properties of the VPN connection
      2. Go to the Networking tab
      3. Open the Properties of "Internet Protocol Version 4 (TCP/IPv4)" (and/or TCP/IPv6)
      4. Click the "Advanced..." button
      5. Change to the "IP Settings" tab

    Here you can find the checkboxes mentioned above.


  • Firewall predefined rule property cannot be modified

    - by Sami-L
    Using a standalone Windows Server 2012 Standard edition (no Active Directory), I tried to set up simple Remote Desktop access with a custom port number, but could not modify the port number in the firewall inbound rule. When I open the inbound rule's properties I get the following message: "This is a predefined rule and some of its properties cannot be modified". I have tried to set it up like this: New Rule > Predefined drop-down list > Remote Desktop > check the rules > Allow the connection, but I still get "This is a predefined rule and some of its properties cannot be modified". Thank you in advance.
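
    A hedged sketch of the usual workaround: since a predefined rule locks its port, leave it alone and create a custom inbound rule for the new RDP port instead. The port 3390 and rule name below are placeholders for whatever you chose:

    netsh advfirewall firewall add rule name="Remote Desktop (custom port)" dir=in action=allow protocol=TCP localport=3390

    (The port must also be changed for RDP itself, e.g. in the Terminal Server listener settings; the firewall rule alone is not enough.)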

    Read the article

  • Windows cannot access the specified device, path, or file.

    - by Pia
    I have recently installed Windows XP Pro SP2. Whenever I try to run any .exe file I get the following error: "Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item." I read somewhere that right-clicking the file and clicking the Unblock option under Properties > Security solves this problem, but that is not working for me, as I don't get a Security tab in Properties. Please help! Because of this I am not able to install any software on my computer.
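
    A possible pointer, offered as an assumption rather than a confirmed fix: on XP Pro the Security tab is hidden while Simple File Sharing is enabled (Folder Options > View > "Use simple file sharing (Recommended)"). If the files are blocked because they carry a download zone marker, the Zone.Identifier stream can also be cleared from the command line with the Sysinternals streams utility; the path below is a placeholder:

    streams.exe -d "C:\Downloads\setup.exe"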

    Read the article

  • Cisco VPN stops Windows 7 Browsing

    - by Sharjeel Sayed
    My browsing and other 'internet' activity (Dropbox, Digsby, etc.) halts when I connect to my office VPN using the Cisco Systems VPN Client, version 5.0.04.0300, on Windows 7 Ultimate. The only option left for me at that point is to use my office proxy to get the connection back. I tried the "uncheck 'Use default gateway on remote network'" solution mentioned in the previous post Windows 7 VPN stops web browser, but I don't see that option in the properties of the "Cisco Systems VPN Adapter" connection. Here is the screenshot.
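
    A hedged aside on why the checkbox is missing: the Cisco client does not honor that Windows setting because tunnel-all versus split tunneling is pushed from the VPN concentrator's group policy, so the usual fix is to ask the VPN administrator to enable split tunneling. As a purely local experiment (192.168.1.1 below is a placeholder for your LAN gateway, and the client may well remove or block the route again), you can try re-adding a default route after connecting:

    route add 0.0.0.0 mask 0.0.0.0 192.168.1.1 metric 1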

    Read the article

  • Apache/Jboss Issue - is this connection timeout?

    - by user115391
    We have an application with the following architecture: one load balancer (Apache) which redirects to two app servers (JBoss). The site works and I am able to access it fine, but sometimes, at random, the homepage takes a while (30-40 seconds) to load. I tried checking the logs but could not figure out why. I used an HTTP traffic analyzer (Fiddler) to watch the traffic, but it only confirms that the request/response took 30 seconds or so. I checked the Apache access logs and mod_jk.log. My configurations are below.

    mod-jk.conf

    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile logs/mod_jk.log
    #JkLogLevel info
    #JkLogLevel debug
    JkLogLevel error
    # Select the log format
    JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
    JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories
    JkRequestLogFormat "%w %V %T %P %{tid}P %D"
    JkMount /__application__/* loadbalancer
    JkUnMount /__application__/images/* loadbalancer
    <VirtualHost *:8080 >
      JkMountFile conf/uriworkermap.properties
    </VirtualHost>
    JkShmFile run/jk.shm
    <Location /jkstatus>
      JkMount status
      Order deny,allow
      Deny from all
      Allow from 127.0.0.1
    </Location>

    uriworkermap.properties

    # Simple worker configuration file
    # Mount the Servlet context to the ajp13 worker
    /=loadbalancer
    /*=loadbalancer

    workers.properties

    worker.list=loadbalancer,status
    worker.template.port=8009
    worker.template.type=ajp13
    worker.template.lbfactor=1
    worker.template.prepost_timeout=10000
    worker.template.connect_timeout=10000
    worker.template.ping_mode=A
    worker.worker1.reference=worker.template
    worker.worker1.host=hostname1
    worker.worker2.reference=worker.template
    worker.worker2.host=hostname2
    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=worker1,worker2
    worker.status.type=status

    My JBoss server.xml is at $JBOSS_HOME/server/default/deploy/jbossweb.sar/server.xml

    The relevant access log lines are below. The requests where the issue occurred (look at the seconds column):

    [23/Mar/2012:12:10:38 -0400] "GET / HTTP/1.1" 200 138
    x.x.x.x - - [23/Mar/2012:12:10:49 -0400] "GET /index.jsp HTTP/1.1" 302 -
    x.x.x.x - - [23/Mar/2012:12:11:10 -0400] "GET /home.jsp HTTP/1.1" 200 936
    x.x.x.x - - [23/Mar/2012:12:11:31 -0400] "POST /login/ HTTP/1.1" 200 8895
    x.x.x.x - - [23/Mar/2012:12:11:52 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 -

    The same requests after the issue:

    x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET / HTTP/1.1" 200 138
    x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /index.jsp HTTP/1.1" 302 -
    x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /home.jsp HTTP/1.1" 200 936
    x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "POST /login/ HTTP/1.1" 200 8895
    x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 -

    Would it be a cache or timeout issue? Any help is appreciated. Thanks.
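
    One hedged direction to try, assuming the stalls come from mod_jk reusing AJP connections that a firewall or the JBoss side has silently dropped: add socket-level timeouts and pool recycling to the template worker, and give the AJP connector on the JBoss side a matching connectionTimeout. The values below are illustrative, not tuned for this setup:

    worker.template.socket_timeout=10
    worker.template.socket_keepalive=true
    worker.template.connection_pool_timeout=600
    worker.template.reply_timeout=60000

    And on the JBoss side (the AJP connector in jbossweb.sar/server.xml), roughly:

    <Connector port="8009" protocol="AJP/1.3" connectionTimeout="600000" redirectPort="8443" />

    Note that connection_pool_timeout is in seconds while connectionTimeout is in milliseconds, which is why 600 pairs with 600000.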

    Read the article

  • Tomcat starts, but does not load properly

    - by user37136
    Hey guys, I've been working on this for a day now and still don't know what's wrong. I am essentially building a second environment for our web and app server. I got Apache to load up just fine, but Tomcat is proving to be difficult. It appears to start and load just fine, but when it comes to loading our application, it just gets stuck for 2-5 minutes and then shuts down. Here is the log on the original machine, where it works fine:

    2010-02-12 11:52:40,506 INFO Web application servlet context is initializing...
    2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobType=[{1,Undefined}, {100,Completion}, {200,Plugging}, {300,R+M}, {400,Workover}, {500,Swab - tubing}, {600,Swab - fluid}]
    2010-02-12 11:52:40,540 DEBUG Servlet context attribute added: select_jobTaskType=[{1,Undefined}, {100,Rod part}, {200,Tubing leak}, {300,Pump change}, {400,Stripping job}, {500,Long stroke}, {600,A/L optimization}]
    2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_wellType=[{1,Undefined}, {100,Rod pump}, {200,ESP}, {300,Injector}, {400,PC pump}, {500,Co-Rod}, {600,Flowing}, {700,Storage}]
    2010-02-12 11:52:40,541 DEBUG Servlet context attribute added: select_assetType=[{1,Rig}, {100,Disabled rig}]
    2010-02-12 11:52:40,542 DEBUG Servlet context attribute added: select_state=[{AL,Alabama}, {AK,Alaska}, {AZ,Arizona}, {AR,Arkansas}, {CA,California}, {CO,Colorado}, {CT,Connecticut}, {DE,Delaware}, {FL,Florida}, {GA,Georgia}, {HI,Hawaii}, {ID,Idaho}, {IL,Illinois}, {IN,Indiana}, {IA,Iowa}, {KS,Kansas}, {KY,Kentucky}, {LA,Louisiana}, {ME,Maine}, {MD,Maryland}, {MA,Massachusetts}, {MI,Michigan}, {MN,Minnesota}, {MS,Mississippi}, {MO,Missouri}, {MT,Montana}, {NE,Nebraska}, {NV,Nevada}, {NH,New Hampshire}, {NJ,New Jersey}, {NM,New Mexico}, {NY,New York}, {NC,North Carolina}, {ND,North Dakota}, {OH,Ohio}, {OK,Oklahoma}, {OR,Oregon}, {PA,Pennsylvania}, {RI,Rhode Island}, {SC,South Carolina}, {SD,South Dakota}, {TN,Tennessee}, {TX,Texas}, {UT,Utah}, {VT,Vermont}, {VA,Virginia}, {WA,Washington}, {WV,West Virginia}, {WI,Wisconsin}, {WY,Wyoming}, {ACO,Atlantic Coast Offshore}, {FOAK,Federal Offshore Alaska}, {NGOM,Northern Gulf of Mexico}, {PCO,Pacific Coastal Offshore}]
    2010-02-12 11:52:40,542 INFO KeyviewContextMonitor.contextInitialized: Loaded drop-down lists:com/key/portal/web/common/lists.properties
    2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.SERVLET_MAPPING=*.do
    2010-02-12 11:52:40,937 DEBUG Servlet context attribute added: org.apache.struts.action.ACTION_SERVLET=org.apache.struts.action.ActionServlet@155d578
    2010-02-12 11:52:41,939 DEBUG Servlet context attribute added: org.apache.struts.action.MODULE=org.apache.struts.config.impl.ModuleConfigImpl@e08e9d
    2010-02-12 11:52:41,962 DEBUG Servlet context attribute added: org.apache.struts.action.FORM_BEANS=org.apache.struts.action.ActionFormBeans@b31c3c
    2010-02-12 11:52:41,967 DEBUG Servlet context attribute added: org.apache.struts.action.FORWARDS=org.apache.struts.action.ActionForwards@102c646
    2010-02-12 11:52:41,973 DEBUG Servlet context attribute added: org.apache.struts.action.MAPPINGS=org.apache.struts.action.ActionMappings@127276a
    2010-02-12 11:52:41,974 DEBUG Servlet context attribute added: org.apache.struts.action.MESSAGE=org.apache.struts.util.PropertyMessageResources@18cae13
    2010-02-12 11:52:41,984 DEBUG Servlet context attribute added: org.apache.struts.action.PLUG_INS=[Lorg.apache.struts.action.PlugIn;@f875ae
    2010-02-12 11:52:46,816 INFO Sucessfully loaded application properties com/key/core/properties/application

    On my second environment, the log never reaches that last line. I start Tomcat with the exact same command line:

    #!/bin/ksh
    export JAVA_HOME=/app/java
    export CATALINA_HOME=/app/tomcat
    export CATALINA_BASE=/app/keyview/appserver
    CATALINA_OPTS=" -Xms128m -Xmx800m -Dapplication.props=com/key/core/properties/application -Dlog4j.configuration=com/key/core/log/log4j.xml -Djava.awt.headless=true -Dlog4j.debug"
    export CATALINA_OPTS
    ${CATALINA_HOME}/bin/startup.sh

    I bolded the lines that I think are in error. Thanks
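
    Since the second environment's log simply stops before the "Sucessfully loaded application properties" line, a thread dump taken during the 2-5 minute hang usually shows what the webapp is blocked on; a database, LDAP, or license host that is unreachable from the new machine is a common culprit. A minimal sketch, assuming a Unix-like host and that the dump lands in catalina.out under your CATALINA_BASE:

    ps -ef | grep [j]ava                                   # find the Tomcat JVM's process id
    kill -3 <pid>                                          # SIGQUIT: dump all thread stacks; the JVM keeps running
    tail -200 /app/keyview/appserver/logs/catalina.out     # inspect the dump for threads stuck in socket reads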

    Read the article

  • VS2008 VB project - Changing application type automatically adds references

    - by Stijn
    Visual Basic:
    1. Create a new project with the Empty Project template (Visual Basic > Windows).
    2. Go to the project properties and change the Application type by choosing something else or reselecting Windows Forms Application. When reselecting, Visual Studio will automatically add references to System.Deployment, System.Drawing and System.Windows.Forms.
    C#:
    1. Create a new project with the Empty Project template (Visual C# > Windows).
    2. Go to the project properties and change the Application type to any of the choices. Visual Studio will not add references.
    Question: Is there a way to change this behaviour for Visual Basic? (An illustrative fragment of what gets injected is shown below.)
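
    For reference, the change is visible in the project file itself. This is an illustrative fragment of the kind of references Visual Studio injects into the .vbproj when the application type is reselected (standard MSBuild syntax; the surrounding project content is omitted and assumed):

    <ItemGroup>
      <Reference Include="System.Deployment" />
      <Reference Include="System.Drawing" />
      <Reference Include="System.Windows.Forms" />
    </ItemGroup>

    Deleting them again under References in the project properties should be safe as long as nothing in the project uses those namespaces.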

    Read the article

  • Visio 2010 Professional No Database Tab

    - by JFB
    I am trying to create an ERD using Visio Professional 2010 (full install) but I am having problems. I can't open the table properties window at the bottom of the screen when I double-click on a table, and I can't right-click on the table and choose Properties because the choice is not there. The Database tab is missing from the ribbon at the top. What can cause this? Here is what I have already tried: File > Options > Trust Center > Trust Center Settings... > Add-ins: "Disable all Application Add-ins" is UNCHECKED.

    Read the article

  • Exchange 2003 default permissions for ANONYMOUS LOGON and Everyone

    - by Make it useful Keep it simple
    ANONYMOUS LOGON and Everyone have the following top-level permissions in our Exchange 2003 server:
    - Read
    - Execute
    - Read permissions
    - List contents
    - Read properties
    - List objects
    - Create public folder
    - Create named properties in the information store
    Are these the "default" settings? In particular, are the "Read" and "Execute" permissions a problem? We have a simple small-business setup: Outlook clients connect to the server on the local network, and OWA is used from outside the network for browser and smartphone access. Thanks

    Read the article

  • Windows 8 Speech Recognition Language

    - by Greg
    I've got Windows 8 Pro installed (RTM version from MSDN). For an application I use, I need to have the speech recognition language set to English - US; the only option I have is English - UK. I have tried going to Language in Control Panel and setting the only language to English - US, but English - UK is still the only option in the speech properties. How can I add a language to the Speech Properties?

    Read the article

  • Excel 2007 - "The Document Information Panel was unable to load"

    - by Ruffles
    In Excel 2007, if I go to Office button > Prepare > Properties, instead of showing the document properties I get the message "The Document Information Panel was unable to load." I have come across a number of posts suggesting that I copy ipedintl.dll from C:\Program Files\Microsoft Web Designer Tools\Office12\1033 to C:\Program Files\Microsoft Office\Office12\1033; however, I have tried this and the problem still occurs. Does anyone have any other ideas of what this might be? Thanks.

    Read the article

  • Infotips for Word Documents in Windows XP in Network Drives

    - by Knight Samar
    Hi, MS Word 2007 files have a property page for entering details like summary and title, which is displayed when you hover over a document on the Desktop. On my Windows XP SP2 computer, Windows Explorer shows these special properties for all such files on the Desktop, but not on network drives. This is a big problem when I have a large collection of Word documents all in one folder. How can I display these special properties (infotips) for documents on my network drives? Thanks :)

    Read the article
