Search Results

Search found 8288 results on 332 pages for 'proxy models'.


  • Django: getting the list of related records for a list of objects

    - by Silver Light
    Hello! I have two models related one-to-many:

        class Person(models.Model):
            name = models.CharField(max_length=255)
            surname = models.CharField(max_length=255)
            age = models.IntegerField()

        class Dog(models.Model):
            name = models.CharField(max_length=255)
            owner = models.ForeignKey('Person')

    I want to output a list of persons and, below each person, a list of the dogs he has. Here is how I can do it. In the view:

        persons = Person.objects.all()[0:100]

    In the template:

        {% for p in persons %}
            {{ p.name }} has dogs:<br />
            {% for d in p.dog_set.all %}
                - {{ d.name }}<br />
            {% endfor %}
        {% endfor %}

    But if I do it like that, Django will execute 101 SQL queries, which is very inefficient. I tried to make a custom manager which gets all the persons, then all the dogs, and links them in Python, but then I can't use the paginator (see my other question: http://stackoverflow.com/questions/2532475/django-paginator-raw-sql-query) and it looks quite ugly. Is there a more graceful way of doing this?
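
    A minimal sketch of one graceful fix, assuming a Django version that ships prefetch_related (1.4 and later); it is not part of the original question:

        # Two queries total (one for persons, one for all their dogs)
        # instead of 101, and the result is a regular queryset,
        # so the standard paginator still works.
        persons = Person.objects.prefetch_related('dog_set')[0:100]

    The template loop stays unchanged, because p.dog_set.all then reads from the prefetched cache instead of hitting the database once per person.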

    Read the article

  • Configuring nginx to check for hard files in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

        ├── public
        │   ├── components
        │   ├── css
        │   └── img
        ├── routes
        └── views

    Essentially, I have the root set to public. I want all requests destined to

        /components/
        /css/
        /img/

    to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an IO operation:

        /foo/asdf
        /bar
        /baz/index.html

    None of those should result in the disk being touched. I have a stanza that does the proxy to node.js:

        location @proxy {
            internal;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

        location /components/ { try_files $uri @proxy; }
        location /css/        { try_files $uri @proxy; }
        location /img/        { try_files $uri @proxy; }

    However, there is nothing I can find that will give me:

        location / { try_files @proxy; }

    How do I get the effect I want?
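
    A minimal sketch of one workaround, not from the original question: since try_files cannot take a named location as its only argument, location / can simply repeat the proxy directives so nginx never stats the disk for those paths. This assumes the same backend as the @proxy block above:

        location / {
            # Straight to node.js; no filesystem check ever runs here.
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }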

    Read the article

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request-specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

        /// <summary>
        /// Current instance of this class which should always be used to
        /// access this object. There are no public constructors to
        /// ensure the reference is used as a Singleton to further
        /// ensure that all scripts are written to the same clientscript
        /// manager.
        /// </summary>
        public static ClientScriptProxy Current
        {
            get
            {
                if (HttpContext.Current == null)
                    return new ClientScriptProxy();

                ClientScriptProxy proxy = null;
                if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
                    proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
                else
                {
                    proxy = new ClientScriptProxy();
                    HttpContext.Current.Items[STR_CONTEXTID] = proxy;
                }
                return proxy;
            }
        }

    The proxy is attached to a Context.Items[] item, which makes the instance request specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the current transferred request and the original request's Context.Items collection apply. For the ClientScriptProxy this causes a problem because script references are tracked on a per-request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and it is then considered 'rendered':

        // No dupes - ref script include only once
        if (HttpContext.Current.Items.Contains(STR_SCRIPTITEM_IDENTITIFIER + fileId))
            return;

        HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    where the fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context and so fail to render, because it thinks the script has already rendered based on the Context item. Bummer.

    The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first issue is to determine whether a script is in a Transfer or Execute call:

        if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler, and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler, and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that). For the ClientScriptProxy, the full logic to check for a transfer and remove the keys looks like this:

        /// <summary>
        /// Clears all the request specific context items which are script references
        /// and the script placement index.
        /// </summary>
        public void ClearContextItemsOnTransfer()
        {
            if (HttpContext.Current != null)
            {
                // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
                if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
                {
                    List<string> Keys = HttpContext.Current.Items.Keys
                        .Cast<string>()
                        .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) ||
                                    s == STR_ScriptResourceIndex)
                        .ToList();

                    foreach (string key in Keys)
                    {
                        HttpContext.Current.Items.Remove(key);
                    }
                }
            }
        }

    along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

        if (!proxy.IsTransferred &&
            HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
        {
            proxy.ClearContextItemsOnTransfer();
            proxy.IsTransferred = true;
        }
        return proxy;

    I know this is pretty ugly, but it works, and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works but required changes throughout the code. In hindsight, it would have been better to use a single object that encapsulates all the 'persisted' values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items

    ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods on this component that are not Page specific, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there.

    So for the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

        /// <summary>
        /// Returns a current instance of this control if an instance
        /// is already loaded on the page. Otherwise a new instance is
        /// created, added to the Form and returned.
        ///
        /// It's important this function is not called too early in the
        /// page cycle - it should not be called before Page.OnInit().
        ///
        /// This property is the preferred way to get a reference to a
        /// ScriptContainer control that is either already on a page
        /// or needs to be created. Controls in particular should always
        /// use this property.
        /// </summary>
        public static ScriptContainer Current
        {
            get
            {
                // We need a context for this to work!
                if (HttpContext.Current == null)
                    return null;

                Page page = HttpContext.Current.CurrentHandler as Page;
                if (page == null)
                    throw new InvalidOperationException(
                        Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers);

                ScriptContainer ctl = null;

                // Retrieve the current instance
                ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
                if (ctl != null)
                    return ctl;

                ctl = new ScriptContainer();
                page.Form.Controls.Add(ctl);

                return ctl;
            }
        }

    The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.

    Server.Transfer and Server.Execute are Evil

    All that said - this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. :-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands...

    Related Resources

        Full source of ClientScriptProxy.cs (repository)
        Part of the West Wind Web Toolkit
        Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET.

    Read the article

  • How to use a nested form for multiple models in one form?

    - by Magicked
    I'm struggling to come up with the proper way to design a form that will allow me to input data for two different models. The form is for an 'Incident', which has the following relationships:

        belongs_to :customer
        belongs_to :user
        has_one :incident_status
        has_many :incident_notes
        accepts_nested_attributes_for :incident_notes, :allow_destroy => false

    So an incident is assigned to a 'Customer' and a 'User', and the user is able to add 'Notes' to the incident. I'm having trouble with the notes part of the form. Here is how the form is being submitted:

        {"commit"=>"Create",
         "authenticity_token"=>"ECH5Ziv7JAuzs53kt5m/njT9w39UJhfJEs2x0Ms2NA0=",
         "customer_id"=>"4",
         "incident"=>{"title"=>"Something bad",
                      "incident_status_id"=>"2",
                      "user_id"=>"2",
                      "other_id"=>"AAA01-042310-001",
                      "incident_note"=>{"note"=>"This is a note"}}}

    It appears to be attempting to add the incident_note as a field under 'Incident', rather than creating a new entry in the incident_note table with an incident_id foreign key linking back to the incident. Here is the 'IncidentNote' model:

        belongs_to :incident
        belongs_to :user

    Here is the form for 'Incident':

        <% form_for([@customer, @incident]) do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :other_id, "ID" %><br />
            <%= f.text_field :capc_id %>
          </p>
          <p>
            <%= f.label :title %><br />
            <%= f.text_field :title %>
          </p>
          <p>
            <%= label_tag 'user', 'Assign to user?' %>
            <%= f.select :user_id, @users.collect {|u| [u.name, u.id]} %>
          </p>
          <p>
            <%= f.label :incident_status, 'Status?' %>
            <%= f.select :incident_status_id, @statuses.collect {|s| [s.name, s.id]} %>
          </p>
          <p>
            <% f.fields_for :incident_note do |inote_form| %>
              <%= inote_form.label :note, 'Add a Note' %>
              <%= inote_form.text_area :note, :cols => 40, :rows => 20 %>
            <% end %>
          </p>
          <p>
            <%= f.submit "Create" %>
          </p>
        <% end %>

    And finally, here are the incidents controller actions for new and create. New:

        def new
          @customer = current_user.customer
          @incident = Incident.new
          @users = @customer.users
          @statuses = IncidentStatus.find(:all)
          @incident_note = IncidentNote.new

          respond_to do |format|
            format.html # new.html.erb
            format.xml  { render :xml => @incident }
          end
        end

    Create:

        def create
          @users = @customer.users
          @statuses = IncidentStatus.find(:all)
          @incident = Incident.new(params[:incident])
          @incident.customer = @customer
          @incident_note = @incident.incident_note.build(params[:incident_note])
          @incident_note.user = current_user

          respond_to do |format|
            if @incident.save
              flash[:notice] = 'Incident was successfully created.'
              format.html { redirect_to(@incident) }
              format.xml  { render :xml => @incident, :status => :created, :location => @incident }
            else
              format.html { render :action => "new" }
              format.xml  { render :xml => @incident.errors, :status => :unprocessable_entity }
            end
          end
        end

    I'm not really sure where to look at this point. I'm sure it's just a limitation of my current Rails skill (I don't know much), so if anyone can point me in the right direction I would be very appreciative. Please let me know if more information is needed. Thanks!
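
    A minimal sketch of the usual fix, not from the original question: accepts_nested_attributes_for works through the plural association name, so the form and controller should build and render incident_notes rather than a standalone incident_note:

        # In the controller's new action (hypothetical, matching the models above):
        @incident = Incident.new
        @incident.incident_notes.build   # gives fields_for something to render

        # In the view, use the association name so Rails submits
        # incident_notes_attributes under params[:incident]:
        <% f.fields_for :incident_notes do |inote_form| %>
          <%= inote_form.label :note, 'Add a Note' %>
          <%= inote_form.text_area :note, :cols => 40, :rows => 20 %>
        <% end %>

    With that in place, Incident.new(params[:incident]) saves the note along with the incident, and the manual @incident.incident_note.build call in create is no longer needed.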

    Read the article

  • How can I keep my MVC Views, models, and model binders as clean as possible?

    - by MBonig
    I'm rather new to MVC, and as I'm getting into the whole framework more and more, I'm finding the model binders are becoming tough to maintain. Let me explain...

    I am writing a basic CRUD-over-database app. My domain models are going to be very rich. In an attempt to keep my controllers as thin as possible, I've set it up so that on Create/Edit commands the parameter for the action is a richly populated instance of my domain model. To do this I've implemented a custom model binder. As a result, though, this custom model binder is very specific to the view and the model.

    I've decided to just override the DefaultModelBinder that ships with MVC 2. In the case where the field being bound to my model is just a textbox (or something as simple), I just delegate to the base method. However, when I'm working with a dropdown or something more complex (the UI dictates that date and time are separate data entry fields, but for the model it is one property), I have to perform some checks and some manual data munging. The end result is that I have some pretty tight ties between the View and Binder. I'm architecturally fine with this, but from a code maintenance standpoint it's a nightmare.

    For example, the model I'm binding here is of type Log (this is the object I will get as a parameter on my Action). ServiceStartTime is a property on Log. The form values "log.ServiceStartDate" and "log.ServiceStartTime" are totally arbitrary and come from two textboxes on the form (Html.TextBox("log.ServiceStartTime", ...)):

        protected override object GetPropertyValue(ControllerContext controllerContext,
            ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor,
            IModelBinder propertyBinder)
        {
            if (propertyDescriptor.Name == "ServiceStartTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceStartDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceStartTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }
            if (propertyDescriptor.Name == "ServiceEndTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceEndDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceEndTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }
            return base.GetPropertyValue(controllerContext, bindingContext,
                propertyDescriptor, propertyBinder);
        }

    Log.ServiceEndTime is a similar field. This doesn't feel very DRY to me. First, if I refactor the ServiceStartTime or ServiceEndTime property names, the text strings may get missed (although my refactoring tool of choice, R#, is pretty good at this sort of thing, it wouldn't cause a build-time failure and would only get caught by manual testing). Second, if I decided to arbitrarily change the keys "log.ServiceStartDate" and "log.ServiceStartTime", I would run into the same problem. To me, runtime silent errors are the worst kind of error out there.

    So, I see a couple of options to help here and would love to get some input from people who have come across some of these issues:

    1. Refactor any text strings in common between the view and model binders out into const strings attached to the ViewModel object I pass from controller to the aspx/ascx view. This pollutes the ViewModel object, though.
    2. Provide unit tests around all of the interactions. I'm a big proponent of unit tests and haven't started fleshing this option out, but I've got a gut feeling that it won't save me from foot-shootings.

    If it matters, the Log and other entities in the system are persisted to the database using Fluent NHibernate. I really want to keep my controllers as thin as possible, so any suggestions here are greatly welcomed. Thanks!
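
    A minimal sketch of one way to dry this up, assuming the same MVC 2 APIs as the question; the helper table and its contents are hypothetical, not from the original post:

        // Property-name -> form-key pairs live in one table instead of
        // duplicated if-blocks, so a rename has exactly one place to miss.
        private static readonly Dictionary<string, string[]> DateTimeKeys =
            new Dictionary<string, string[]>
            {
                { "ServiceStartTime", new[] { "log.ServiceStartDate", "log.ServiceStartTime" } },
                { "ServiceEndTime",   new[] { "log.ServiceEndDate",   "log.ServiceEndTime" } },
            };

        protected override object GetPropertyValue(ControllerContext controllerContext,
            ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor,
            IModelBinder propertyBinder)
        {
            string[] keys;
            if (DateTimeKeys.TryGetValue(propertyDescriptor.Name, out keys))
            {
                // Combine the two form fields into the model's single DateTime property.
                string date = bindingContext.ValueProvider.GetValue(keys[0]).ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue(keys[1]).ConvertTo(typeof(string)) as string;
                return DateTime.Parse(date + " " + time);
            }
            return base.GetPropertyValue(controllerContext, bindingContext,
                propertyDescriptor, propertyBinder);
        }

    The strings are still strings, but each appears exactly once, which narrows the silent-rename risk to a single lookup table that a unit test can cover.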

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem, and I am introducing this approach in my PASS Summit 2013 session. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, in the end, add up to the whole whitepaper.

    Abstract

    With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction

    A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y. & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1, Hong Kong: IMECS.

    Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, a probability between 0 and 1, that represents a score for evaluating whether the transaction is fraudulent. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

    There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (directed) and unsupervised (undirected). Supervised techniques, or data mining models, use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers do report frauds at some point in time, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers.

    In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for checking whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time, and fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small: typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. SSIS is typically used for loading a DW, and in addition it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) supports the occasional data cleansing process by maintaining a knowledge base. Master Data Services (MDS) is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite, which includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T. & Sarka D. (2009), MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 Business Intelligence Development and Maintenance, MS Press. For an overview of the enterprise information management (EIM) part, which includes SSIS, DQS and MDS, please refer to Sarka D., Lah M. & Jerkic G. (2012), Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft SQL Server 2012, O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z. & Crivat B. (2009), Data Mining with Microsoft SQL Server 2008, Wiley.

    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics there. Together with PowerPivot for Excel, which is also freely downloadable for Excel 2010 and already included in Excel 2013, they bring OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background

    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting which don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario we have BizTalk trying to make calls to a .net web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of correlating a message from BizTalk to the IIS log in the .net application and the custom logs in the application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!), and we were able to identify that this call had timed out on the BizTalk side. Based on the normal 2 minute timeout we knew something unexpected was going on.

    From here I decided to do some experimentation, and I wanted to start outside of BizTalk, because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service

    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello world style web service, as shown in the below picture. The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there is again nothing special; it's pretty much the plainest, simplest web service you could build.

    Client-Side

    To begin looking at what was happening with our example, I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example

    I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously. I would do a loop of 20 calls with the connectionManagement configuration section supporting only 5 concurrent connections to the server:

        <system.net>
          <connectionManagement>
            <remove address="*"/>
            <add address="*" maxconnection="12" />
            <add address="http://<ServerName>" maxconnection="5" />
          </connectionManagement>
        </system.net>

    The below picture shows an example of the service calling code; the key points are:

        - I have configured a timeout of 40 seconds on the proxy
        - I am using the asynchronous methods on the proxy to call the web service

    The Test

    I would run the client and execute 21 calls to the web service.

    The Results

    Below is the client side trace showing what's happening on the client, and beneath it the web service side trace showing what's happening on the server. Some observations on the results are:

        - All of the calls were successful from the client's perspective
        - You could see the next call starting on the server as soon as the previous one had completed
        - Calls took significantly longer than 40 seconds from the start of our call to the return; in fact call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds

    WSE 2 Sample

    In the second example I used the exact same code to call the web service again, with the single exception that I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The key points of the calling code are the same:

        - I have configured a timeout of 40 seconds on the proxy
        - I am using the asynchronous methods on the proxy to call the web service

    The Test

    This test would execute 21 calls from the client to the web service.

    The Results

    Traces were again collected from the client side and the server side. Some observations on the trace results for this scenario are:

        - With call 4, if you look at the server side trace, it did not start executing on the server until a number of seconds after the other 4 initial calls had been accepted by the server. I re-ran the test and this happened a couple of times and not on most others, so at this point I'm putting it down to something unexpected happening on the development machine, and we will leave this observation out of scope of this article.
        - The client side trace statement executed almost immediately in all cases
        - All calls after the initial few would time out
        - On the client side, the calls that did time out took longer than the 40 seconds we set as the timeout
        - As calls were completing on the server, the next calls were starting to come through
        - The calls that timed out on the client did actually connect to the server, and their server side execution completed successfully

    Elaboration on the findings

    Based on the above observations I have drawn the below sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters.

    From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution. One interesting observation: if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we see a similar pattern where everything after the first few calls times out on the client as soon as it makes a connection to the server, whereas the Soap proxy will happily plug away and process all of the calls without a single timeout. I actually set the sample running overnight and this did happen.

    At this point you are probably thinking the same thoughts I was at the time: how do the behaviours differ, which is right, and why are they different? I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and that they can have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests never reach the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk, where you often have high throughput requirements.

    Some of the considerations you should make

    Based on this behaviour you should be aware of the following:

        - In a .net application, if you are making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client defaults to 2 connections to remote servers, so you should bear this in mind.
        - When you are developing a BizTalk solution, or a .net solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you.
        - For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

    Our Work Around

    In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout, the BizTalk send port retry will handle it. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people, just to understand this behaviour and to possibly help you with some performance issues you may have. I do not believe there is too much in the way of documentation in this area, particularly around WSE 2 and ASMX, so again another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.
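
    As a hedged illustration of that last point (not from the original post): in WCF the timeout is split into separate knobs on the binding, so the connection-related and send phases can be tuned independently. The binding name and values below are placeholders:

        <bindings>
          <basicHttpBinding>
            <binding name="TimeoutTunedBinding"
                     openTimeout="00:00:30"
                     sendTimeout="00:00:40"
                     receiveTimeout="00:02:00"
                     closeTimeout="00:00:30" />
          </basicHttpBinding>
        </bindings>

    This separation is exactly what the WSE 2 stack lacks: there is no way there to distinguish "time waiting for a pooled connection" from "time executing the call".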

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    So far in this guide we have an IRM server up and running; however, I skipped over SSL configuration in the previous article because I wanted to cover it in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing.

    Contents

        - Setting up a one way, self signed SSL certificate in WebLogic
        - Setting up an official SSL certificate in Apache 2.x
        - Configuring Apache to proxy traffic to the IRM server

    There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic Server running the IRM service. However, in a production environment, and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article will go over the configuration of SSL in WebLogic and also in Apache. As in the past articles, we are going to use two host names in the configuration below:

        - irm.company.com will refer to the public Apache server
        - irm.company.internal will refer to the internal WebLogic IRM server

    Setting up a one way, self signed SSL certificate in WebLogic

    First let's look at creating a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; however, the downside is that no browsers are going to trust this certificate, and you'll need to manually install it onto any machine communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use, and I explain that in the next section. For now let's go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates; open a console and run the following commands. Note you should choose your own secure passwords wherever you see password below.

        [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
        [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
        [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo"
        [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password
        [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA

    We now have two Java key stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running, log into the WebLogic Console at http://irm.company.internal:7001/console and do the following:

        1. In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1.
        2. If the IRM server is running, shut it down, either by hitting CONTROL + C in the console window it was started from, or by switching to the CONTROL tab, selecting IRM_server1 and then selecting the Shutdown menu and then Force Shutdown Now.
        3. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust. We are now going to switch to the self signed ones we've just created, so select the Change button, switch to Custom Identity and Custom Trust and hit Save.
        4. Now complete the resulting fields; the settings I've used in my evaluation server are:

           Identity
               Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks
               Custom Identity Keystore Type: JKS
               Custom Identity Keystore Passphrase: password
               Confirm Custom Identity Keystore Passphrase: password

           Trust
               Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks
               Custom Trust Keystore Type: JKS
               Custom Trust Keystore Passphrase: password
               Confirm Custom Trust Keystore Passphrase: password

        5. Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo the details are:

           Identity
               Private Key Alias: trustself
               Private Key Passphrase: password
               Confirm Private Key Passphrase: password

           And hit Save.

    Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this:

        [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin
        [oracle@irm /] ./startManagedWeblogic IRM_server1

    Once it is running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.internal:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server; this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101.

    Once you open this address you will notice that your browser complains that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being told the certificate is not trusted. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE):

        1. Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.internal:16101/irm_rights.
        2. IE will complain about the certificate; click on Continue to this website (not recommended).
        3. From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button.
        4. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.internal/, and select OK.
        5. Now refresh the page you were accessing; next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate, and the Install Certificate... button should be enabled. Click on it to start the wizard.
        6. Click Next and you'll be asked where the certificate should be installed. Change the option to Place all certificates in the following store, select Browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer Yes.
        7. You also need to import the root signing certificate into the same location, so once again select the red Certificate Error option, and this time, when viewing the certificate, switch to the Certification Path tab; you should see a CertGenCAB certificate. Select this, click on View Certificate and go through the same process as above to import it into the store.
        8. Finally, close all instances of the IE browser and access the IRM server URL again; this time you should not receive any errors.

    Setting up an official SSL certificate in Apache 2.x

    At this point we have an IRM server that you can communicate with over SSL. However, this certificate isn't trusted by any browser, because its path of trust doesn't end in a recognized certificate authority (CA). Also, you are communicating directly with the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy it to the WebLogic server; the diagram below shows a very simplified view of this type of deployment. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA.

    The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 with Apache 2.2.3, which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign, and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL, and the commands are:

        [oracle@irm /] cd /usr/bin
        [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048
        Generating RSA private key, 2048 bit long modulus
        ............................+++
        .........+++
        e is 65537 (0x10001)
        Enter pass phrase for irm-apache-server.key:
        Verifying - Enter pass phrase for irm-apache-server.key:

        [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr
        Enter pass phrase for irm-apache-server.key:
        You are about to be asked to enter information that will be incorporated
        into your certificate request.
        What you are about to enter is what is called a Distinguished Name or a DN.
        There are quite a few fields but you can leave some blank
        For some fields there will be a default value,
        If you enter '.', the field will be left blank.
        -----
        Country Name (2 letter code) [GB]:US
        State or Province Name (full name) [Berkshire]:CA
        Locality Name (eg, city) [Newbury]:San Francisco
        Organization Name (eg, company) [My Company Ltd]:Oracle
        Organizational Unit Name (eg, section) []:Security
        Common Name (eg, your name or your server's hostname) []:irm.company.com
        Email Address []:[email protected]

        Please enter the following 'extra' attributes
        to be sent with your certificate request
        A challenge password []:testing
        An optional company name []:

    You must make sure to remember the pass phrase you used in the initial key generation; you will need it when later configuring Apache. In the /usr/bin directory there are now two new files. irm-apache-server.csr contains the certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate, and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust.

    Next we need to configure Apache to use these files. Typically there is an ssl.conf file where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf, and I've added the following lines:

        <VirtualHost irm.company.com>
            # Setup SSL for irm.company.com
            ServerName irm.company.com
            SSLEngine On
            SSLCertificateFile /oracle/secure/irm.company.com.crt
            SSLCertificateKeyFile /oracle/secure/irm.company.com.key
            SSLCertificateChainFile /oracle/secure/gd_bundle.crt
        </VirtualHost>

    After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser at https://irm.company.com/. If all is configured correctly I should see an Apache test page delivered over HTTPS.

    Configuring Apache to proxy traffic to the IRM server

    The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. For this proxying we use the WebLogic Web Server plugin for Apache, which you can download from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32 or 64 bit. Mine is a 32bit server, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. From the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/.

    First we want to test that the plugin will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom:

        LoadModule weblogic_module modules/mod_wl.so

        <IfModule mod_weblogic.c>
            WebLogicHost irm.company.internal
            WebLogicPort 16100
            WLLogFile /tmp/wl-proxy.log
        </IfModule>

        <Location /irm_rights>
            SetHandler weblogic-handler
        </Location>

        <Location /irm_desktop>
            SetHandler weblogic-handler
        </Location>

        <Location /irm_sealing>
            SetHandler weblogic-handler
        </Location>

        <Location /irm_services>
            SetHandler weblogic-handler
        </Location>

    Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note that above I have included all four of the locations you might wish to proxy: /irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web services based document sealing, and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above.

    Now let's enable SSL communication from Apache to WebLogic. The zip file we extracted contains some more modules we need. From the lib folder, copy the following into /usr/lib/httpd/modules/:

        libwlssl.so
        libnnz11.so
        libclntsh.so.11.1

    The documentation states that this should be all you need to do, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH pointing to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

        [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

    So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; in that case simply add this path to it.

        LD_LIBRARY_PATH=/usr/lib/httpd/modules/
        export LD_LIBRARY_PATH

    The WebLogic plugin uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server: copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The orapki tool was included in the bin folder of the Apache 1.1 plugin zip you extracted.

        orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
        orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

    Finally, change the httpd.conf to reflect that we want the WebLogic Apache plugin to use HTTPS/SSL and not just plain HTTP:

        <IfModule mod_weblogic.c>
            WebLogicHost irm.company.internal
            WebLogicPort 16101
            SecureProxy ON
            WLSSLWallet /oracle/secure/my-wallet
            WLLogFile /tmp/wl-proxy.log
        </IfModule>

    Then restart Apache once more and go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights. At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • .NET HttpListener - no traffic when listening to "https://*:8080" when browser proxy is set???

    - by Greg
    Hi,

    Background: I can get HttpListener working fine for HTTP traffic. I'm having trouble with HTTPS traffic, however.

    QUESTION: How can I change the code below so that a browser request to an "https" URL will actually be picked up by my HttpListener?

    Notes: At the moment, with Firefox's proxy settings set to "localhost:8080", when I listen to traffic on port 8080 ("https://*:8080/") and I enter an HTTPS URL in Firefox, I am getting no traffic picked up (when I listen to just HTTP and enter normal HTTP URLs it works fine).

        _httpListener = new HttpListener();
        _httpListener.Prefixes.Add("https://*:8080/");
        _httpListener.Start();

    thanks
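
    A hedged diagnostic sketch, not from the original question: when a browser is given a proxy, it does not send an ordinary request for an HTTPS URL; it sends a plain-text "CONNECT host:443" tunnel request, a verb HttpListener does not support, so no prefix will ever see it. A raw TcpListener on the same port shows what actually arrives:

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        class ProxyPeek
        {
            static void Main()
            {
                // Listen where the browser expects its proxy to be.
                TcpListener listener = new TcpListener(IPAddress.Loopback, 8080);
                listener.Start();
                using (TcpClient client = listener.AcceptTcpClient())
                using (StreamReader reader = new StreamReader(client.GetStream()))
                {
                    // For an HTTPS URL this prints e.g. "CONNECT www.example.com:443 HTTP/1.1"
                    Console.WriteLine(reader.ReadLine());
                }
            }
        }

    So capturing proxied HTTPS means speaking the CONNECT protocol on a raw socket (or using a proxy library), not changing the HttpListener prefix.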

    Read the article

  • When should I open and close a website's cached WCF proxy?

    - by Brandon Linton
    I've browsed around the other articles on StackOverflow that relate to caching WCF proxies for reuse, and I've read this article explaining why I should explicitly open the proxy before calling anything on it. I'm still a little hazy on the best implementation details. My question is: when should I open and close proxies for service calls on a website, and what should their lifetime be (per call, per request, or per web app)? We aren't planning on leveraging cached security contexts at the moment (but it's not unforeseeable). Thanks!
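
    For reference, a minimal sketch of the per-call pattern the linked article implies, using a hypothetical generated proxy type that is not from the original question:

        MyServiceClient client = new MyServiceClient();
        try
        {
            client.Open();      // open explicitly, as the article recommends
            client.DoWork();
            client.Close();     // graceful shutdown on success
        }
        catch (CommunicationException)
        {
            client.Abort();     // Close() can throw on a faulted channel; Abort() tears it down
            throw;
        }
        catch (TimeoutException)
        {
            client.Abort();
            throw;
        }

    Longer lifetimes mostly pay off once the underlying ChannelFactory is cached: the factory is the expensive part to build, so per-call channels over a cached factory are a common middle ground for websites.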

    Read the article

  • Using LINQ to query a database through a proxy server of some kind?

    - by Mustafakidd
    Hey all, sorry for using (perhaps) the wrong lingo, but my question may be clearer if you view this diagram as you read it: http://dl.dropbox.com/u/13256/DIAGRAM.PNG

    Our client is requiring us to adhere to the server configuration (poorly) diagrammed in the above image. The web server is accessible over port 80 and is where our web application is hosted. A second firewall permits this web server to access a second server, which in turn is the only server permitted to access the database server.

    My question is: how do I deploy a web application that uses LINQ to SQL in this environment? Is there a way to proxy my LINQ queries through the app server so that the database connection goes through that server? This is uncharted territory for me, as we have always had access to the DB server directly from our web server in the past. Any help is appreciated. Thanks, Mustafa

    Read the article

  • Is it possible to change WCF service without regenerating & recompiling client proxy?

    - by Buu Nguyen
    Let's say I have a WCF service with a method returning a Person object. In one of the clients of this service, I can add a service reference to the service and start using its method. Now, let's say the Person class is changed on the server, with a new DataMember added. Other clients will make use of this new DataMember, but my client doesn't. Therefore, this client shouldn't even be aware that the service returns something "more" than what it needs. Is there any way that my client can still work with the service without having to update the service reference (which, as I understand it, means regenerating the proxy and recompiling it)?
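
    For additive changes like this, the DataContractSerializer generally just ignores incoming members the client's contract doesn't declare, so the old proxy keeps working. A minimal sketch of the forward-compatible shape on the client side (the type is hypothetical, mirroring the question):

        [DataContract]
        public class Person : IExtensibleDataObject
        {
            [DataMember]
            public string Name { get; set; }

            // Unknown members added on the server land here instead of breaking
            // deserialization, and round-trip if the object is sent back.
            public ExtensionDataObject ExtensionData { get; set; }
        }

    Removing or renaming existing members, or changing their types, is still a breaking change that forces a proxy update.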

    Read the article

  • Can I extend a subclass of the Proxy class?

    - by KCBérenger
    I want to create a complete (and real) 2-dimensional array. In order to reuse as much Adobe code as possible, I want to use ListCollectionView, which can manage sorting and filtering. But to use a second dimension, I need to override the getProperty method, as in the following code:

        package
        {
            import flash.utils.flash_proxy;
            import mx.collections.ListCollectionView;

            public class SubClass extends ListCollectionView /* extends Proxy */
            {
                override flash_proxy function getProperty(name:*):*
                {
                    ...
                }

                override flash_proxy function setProperty(name:*, value:*):void
                {
                    ...
                }
            }
        }

    This code doesn't work. Flash Builder 4 gives me:

        1004: Namespace was not found or is not a compile-time constant.

    If anyone has a solution or a clue...

    Read the article

  • Create proxy to Java app so it can work on the iPhone? [closed]

    - by Kovu
    Okay guys, these are the facts: I have a Java chat. I do NOT have any development-relevant information, so this is all static; it's not my chat. Fact 2: the iPad does not have Java and will not have it in the future.

    So now, the very hard task: bring these two together. My idea is to develop a proxy server for this:

    1. The data from the iPad will be sent to a server.
    2. The server receives the data and calls the JRE and the complete chat client.
    3. The server gets the chat itself and sends it back to the iPad.

    Is that a possible idea, or a bad one? Did I overlook something? Anyone have an idea where to start?

    Read the article

  • Is there a CPU that can be described as "Celeron D 4xx model"?

    - by romkyns
    The "D" letter after Celeron appears to only be used for processors numbered with 3xx. Celerons of the 4xx series do not seem to have the "D". And yet I am looking at a motherboard described as supporting these processors: Intel Celeron D 3xx and 4xx models Intel Pentium 4 5xx and 6xx models Intel Pentium D 8xx and 9xx models Intel Core 2 Duo models with LGA775 Is this compatible with a Celeron 450, sSpec SLAFZ, despite not having a "D" in its name?

    Read the article

  • Common DataAnnotations in ASP.Net MVC2

    - by Scott Mayfield
    Howdy, I have what should be a simple question. I have a set of validations that use System.ComponentModel.DataAnnotations. Some validations are specific to certain view models, so I'm comfortable having that validation code in the same file as my models (as in the default AccountModels.cs file that ships with MVC2). But I also have some common validations that apply to several models (valid email address format, for example). When I cut/paste such a validation to the second model that needs it, I of course get a duplicate-definition error because they're in the same namespace (ProjectName.Models). So I moved the common validations to a separate class within the namespace, expecting that all of my view models would be able to access them from there. Unexpectedly, the validations are no longer accessible. I've verified that they are still in the same namespace and that they are all public. I wouldn't expect to need any specific reference to them (adding a using statement for the same namespace didn't resolve it, and via the Add References dialog a project can't reference itself, which makes sense). So, any idea why public validations that have simply been moved to another file in the same namespace aren't visible to my models?

    CommonValidations.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Text.RegularExpressions;

        namespace ProjectName.Models
        {
            public class CommonValidations
            {
                [AttributeUsage(AttributeTargets.Field | AttributeTargets.Property, AllowMultiple = true, Inherited = true)]
                public sealed class EmailFormatValidAttribute : ValidationAttribute
                {
                    public override bool IsValid(object value)
                    {
                        if (value != null)
                        {
                            var expression = @"^[a-zA-Z][\w\.-]*[a-zA-Z0-9]@[a-zA-Z0-9][\w\.-]*[a-zA-Z0-9]\.[a-zA-Z][a-zA-Z\.]*[a-zA-Z]$";
                            return Regex.IsMatch(value.ToString(), expression);
                        }
                        else
                        {
                            return false;
                        }
                    }
                }
            }
        }

    And here's the code that wants to use the validation:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using Growums.Models;

        namespace ProjectName.Models
        {
            public class PrivacyModel
            {
                [Required(ErrorMessage = "Required")]
                [EmailFormatValid(ErrorMessage = "Invalid Email")]
                public string Email { get; set; }
            }
        }

    Read the article

  • What is the relationship between WebProxy & IWebProxy with respect to WebClient?

    - by Streamline
    I am creating an app (.NET 2.0) that uses WebClient to connect (DownloadData, etc.) to an HTTP web service. I am now adding a form to allow proxy information either to be stored or to be set to use the defaults. I am a little confused about some things. First, some of the methods and properties available in WebProxy or IWebProxy are not in both. What is the difference here with respect to setting up how WebClient will behave when it is called? Second, do I have to tell WebClient to use the proxy information if I set it elsewhere using either the WebProxy or IWebProxy class, or is it automatically inherited? Third, when giving the user the option to use the default proxy (whatever is set in IE) and the default credentials (I assume also whatever is set in IE), are these two mutually exclusive? Or do you only use default credentials when you have also used the default proxy? This gets me to the whole difference between WebProxy and IWebProxy: WebRequest.DefaultWebProxy is an IWebProxy, but UseDefaultCredentials is not a member of IWebProxy; it exists only on WebProxy. In turn, how do I assign the proxy to WebRequest.DefaultWebProxy if they are two different classes? Here is my current method to read the form settings stored by the user - but I am not sure if this is correct, not enough, overkill, or just wrong because of the mix of WebProxy and IWebProxy:

        private WebProxy _proxyInfo = new WebProxy();

        private WebProxy SetProxyInfo()
        {
            if (UseProxy)
            {
                if (UseIEProxy)
                {
                    // is doing this enough to set this as default for WebClient?
                    IWebProxy iProxy = WebRequest.DefaultWebProxy;
                    if (UseIEProxyCredentials)
                    {
                        _proxyInfo.UseDefaultCredentials = true;
                    }
                    else
                    {
                        // is doing this enough to set this as default credentials for WebClient?
                        WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
                    }
                }
                else
                {
                    // is doing this enough to set this as default for WebClient?
                    WebRequest.DefaultWebProxy = new WebProxy(ProxyAddress, ParseLib.StringToInt(ProxyPort));
                    if (UseIEProxyCredentials)
                    {
                        _proxyInfo.UseDefaultCredentials = true;
                    }
                    else
                    {
                        WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
                    }
                }
            }
            // Do I need WebClient to absorb this returned proxy info if I didn't set or use defaults?
            return _proxyInfo;
        }

    Is there any reason not to just scrap storing app-specific proxy information and only allow the app to use the default proxy information and credentials for the logged-in user? Will this ever not be enough when using HTTP? Part 2: How can I test whether the WebClient instance is using the proxy information or not?

    Read the article

  • Adding a generic image field onto a ModelForm in django

    - by Prairiedogg
    I have two models, Room and Image. Image is a generic model that can tack onto any other model. I want to give users a form to upload an image when they post information about a room. I've written code that works, but I'm afraid I've done it the hard way, and specifically in a way that violates DRY. I was hoping someone a little more familiar with Django forms could point out where I've gone wrong. Update: I've tried to clarify why I chose this design in comments on the current answers. To summarize: I didn't simply put an ImageField on the Room model because I wanted more than one image associated with the Room model. I chose a generic Image model because I wanted to add images to several different models. The alternatives I considered were multiple foreign keys on a single Image class, which seemed messy, or multiple Image classes, which I thought would clutter my schema. I didn't make this clear in my first post, so sorry about that. Seeing as none of the answers so far has addressed how to make this a little more DRY, I came up with my own solution, which was to add the upload path as a class attribute on the Image model and reference that every time it's needed.

        # Models
        class Image(models.Model):
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField()
            content_object = generic.GenericForeignKey('content_type', 'object_id')
            image = models.ImageField(_('Image'), height_field='', width_field='',
                                      upload_to='uploads/images', max_length=200)

        class Room(models.Model):
            name = models.CharField(max_length=50)
            image_set = generic.GenericRelation('Image')

        # The form
        class AddRoomForm(forms.ModelForm):
            image_1 = forms.ImageField()

            class Meta:
                model = Room

        # The view
        def handle_uploaded_file(f):
            # DRY violation: I've already specified the upload path in the Image model
            upload_suffix = join('uploads/images', f.name)
            upload_path = join(settings.MEDIA_ROOT, upload_suffix)
            destination = open(upload_path, 'wb+')
            for chunk in f.chunks():
                destination.write(chunk)
            destination.close()
            return upload_suffix

        def add_room(request, apartment_id, form_class=AddRoomForm, template='apartments/add_room.html'):
            apartment = Apartment.objects.get(id=apartment_id)
            if request.method == 'POST':
                form = form_class(request.POST, request.FILES)
                if form.is_valid():
                    room = form.save()
                    image_1 = form.cleaned_data['image_1']
                    # Instead of writing a special function to handle the image,
                    # shouldn't I just be able to pass it straight into Image.objects.create?
                    # ...but it doesn't seem to work for some reason. Wrong syntax, perhaps?
                    upload_path = handle_uploaded_file(image_1)
                    image = Image.objects.create(content_object=room, image=upload_path)
                    return HttpResponseRedirect(room.get_absolute_url())
            else:
                form = form_class()
            context = {'form': form, }
            return direct_to_template(request, template, extra_context=context)
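
    A sketch of what the poster seems to be after: a Django ImageField will store an uploaded file itself (honoring its own upload_to path) when the model instance is saved, so the hand-rolled handle_uploaded_file step can usually be dropped. A minimal version of the view body, assuming the models above:

        from django.http import HttpResponseRedirect

        def add_room(request, apartment_id, form_class=AddRoomForm, template='apartments/add_room.html'):
            if request.method == 'POST':
                form = form_class(request.POST, request.FILES)
                if form.is_valid():
                    room = form.save()
                    # Hand the UploadedFile straight to the ImageField; Django writes it
                    # under the field's upload_to path when the instance is saved.
                    Image.objects.create(
                        content_object=room,                 # GenericForeignKey accepts the instance
                        image=form.cleaned_data['image_1'],  # file object, not a path string
                    )
                    return HttpResponseRedirect(room.get_absolute_url())
            ...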

    Read the article

  • Django: query with ManyToManyField count?

    - by AP257
    In Django, how do I construct a COUNT query over a ManyToManyField? My models are as follows, and I want to get all the people whose name starts with 'A' and who are the lord or overlord of at least one Manor, ordering the results by name.

        class Manor(models.Model):
            lord = models.ManyToManyField(Person, null=True, related_name="lord")
            overlord = models.ManyToManyField(Person, null=True, related_name="overlord")

        class Person(models.Model):
            name = models.CharField(max_length=100)

    So my query should look something like this... but how do I construct the third line?

        people = Person.objects.filter(
            Q(name__istartswith='a'),
            Q(lord.count > 0) | Q(overlord.count > 0)  # pseudocode
        ).order_by('name')
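
    A sketch of one way to express this with Django aggregation, using the reverse names defined by related_name in the models above. Note that combining two Counts in one queryset can inflate the numbers through the double join, so distinct=True is the cautious form:

        from django.db.models import Count, Q

        people = (Person.objects
                  .filter(name__istartswith='a')
                  .annotate(lord_count=Count('lord', distinct=True),
                            overlord_count=Count('overlord', distinct=True))
                  .filter(Q(lord_count__gt=0) | Q(overlord_count__gt=0))
                  .order_by('name'))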

    Read the article

  • Django loaddata throws ValidationError: [u'Enter a valid date in YYYY-MM-DD format.'] on null=True field

    - by datakid
    When I run django-admin.py loaddata ../data/library_authors.json, the error is: ... ValidationError: [u'Enter a valid date in YYYY-MM-DD format.'] The model:

        class Writer(models.Model):
            first = models.CharField(u'First Name', max_length=30)
            other = models.CharField(u'Other Names', max_length=30, blank=True)
            last = models.CharField(u'Last Name', max_length=30)
            dob = models.DateField(u'Date of Birth', blank=True, null=True)

            class Meta:
                abstract = True
                ordering = ['last']
                unique_together = ("first", "last")

        class Author(Writer):
            language = models.CharField(max_length=20, choices=LANGUAGES, blank=True)

            class Meta:
                verbose_name = 'Author'
                verbose_name_plural = 'Authors'

    Note that the dob DateField has blank=True, null=True. The JSON file has this structure:

        [
          {
            "pk": 1,
            "model": "books.author",
            "fields": {"dob": "", "other": "", "last": "Carey", "language": "", "first": "Peter"}
          },
          {
            "pk": 3,
            "model": "books.author",
            "fields": {"dob": "", "other": "", "last": "Brown", "language": "", "first": "Carter"}
          }
        ]

    The backing MySQL database has the relevant date field in the relevant table set to NULL as default and Null? = YES. Any ideas on what I'm doing wrong, or how I can get loaddata to accept null date values?
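
    The empty string "" is not a valid serialized date; a nullable DateField wants JSON null, which deserializes to Python None. A small cleanup script along these lines (assuming the fixture path used above) rewrites the empty dob values:

        import json

        # Rewrite "" -> null for the nullable dob field so loaddata can parse it.
        with open('../data/library_authors.json') as f:
            fixtures = json.load(f)

        for obj in fixtures:
            if obj['fields'].get('dob') == '':
                obj['fields']['dob'] = None  # serializes back to JSON null

        with open('../data/library_authors.json', 'w') as f:
            json.dump(fixtures, f, indent=2)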

    Read the article

  • Why is django admin not accepting Nullable foreign keys?

    - by p.g.l.hall
    Here is a simplified version of one of my models:

        class ImportRule(models.Model):
            feed = models.ForeignKey(Feed)
            name = models.CharField(max_length=255)
            feed_provider_category = models.ForeignKey(FeedProviderCategory, null=True)
            target_subcategories = models.ManyToManyField(Subcategory)

    This class manages a rule for importing a list of items from a feed into the database. The admin system won't let me add an ImportRule without selecting a feed_provider_category, despite it being declared in the model as nullable. The database (SQLite at the moment) even checks out OK:

        >>> .schema
        ...
        CREATE TABLE "someapp_importrule" (
            "id" integer NOT NULL PRIMARY KEY,
            "feed_id" integer NOT NULL REFERENCES "someapp_feed" ("id"),
            "name" varchar(255) NOT NULL,
            "feed_provider_category_id" integer REFERENCES "someapp_feedprovidercategory" ("id"),
        );
        ...

    I can create the object in the Python shell easily enough:

        f = Feed.objects.get(pk=1)
        i = ImportRule(name='test', feed=f)
        i.save()

    ...but the admin system won't let me edit it, of course. How can I get the admin to let me edit/create objects without specifying that foreign key?
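
    For what it's worth, this is usually the null/blank split: null=True only relaxes the database column, while form validation (including the admin) is governed by blank. A sketch of the likely fix:

        # blank=True lets admin forms accept an empty value; null=True lets the
        # database store it. An optional ForeignKey generally needs both.
        feed_provider_category = models.ForeignKey(FeedProviderCategory, null=True, blank=True)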

    Read the article

  • Django m2m: how can I get m2m table elements in a view?

    - by dana
    I have a model using the M2M feature:

        class Classroom(models.Model):
            user = models.ForeignKey(User, related_name='classroom_creator')
            classname = models.CharField(max_length=140, unique=True)
            date = models.DateTimeField(auto_now=True)
            open_class = models.BooleanField(default=True)
            members = models.ManyToManyField(User, related_name="list of invited members", through='Membership')

    I want to fetch all members of one class in a view and display them using the template system. In the view, I'm trying to get all the members of a classroom like this:

        def inside_classroom(request, classname):
            try:
                theclass = Classroom.objects.get(classname=classname)
                members = Members.objects.all()
                ...

    but it doesn't work (though the db_table is named Classroom_Members). I guess I have to use another query to get all the members of the classroom classname. I also want to verify whether request.user is a member, using something like (if request.user in members). How can I get those members? Thanks in advance!
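
    A sketch of the usual pattern: go through the M2M manager on the instance rather than a separate Members model (names below follow the question's models):

        def inside_classroom(request, classname):
            theclass = Classroom.objects.get(classname=classname)
            members = theclass.members.all()  # the User rows linked via Membership
            # Django 1.2+; on earlier versions, use .count() > 0 or `request.user in members`
            is_member = members.filter(pk=request.user.pk).exists()
            ...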

    Read the article

  • Django QuerySet ordering by expression

    - by Andrew
    How can I use order_by with something like order_by('field1' * 'field2')? For example, I have items whose prices are listed in different currencies, so to order the items I have to apply a currency conversion.

        class Currency(models.Model):
            code = models.CharField(max_length=3, primary_key=True)
            rateToUSD = models.DecimalField(max_digits=20, decimal_places=10)

        class Item(models.Model):
            priceRT = models.DecimalField(max_digits=15, decimal_places=2, default=0)
            cur = models.ForeignKey(Currency)

    I would like to have something like:

        Item.objects.all().order_by(F('priceRT') * F('cur__rateToUSD'))

    But unfortunately it doesn't work, and I also failed with annotate. How can I perform QuerySet ordering by the product of two of the model's fields?
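
    On newer Django versions (1.8+), combined F() expressions are allowed inside annotate(), which gives exactly this; a sketch assuming the models above:

        from django.db.models import F

        # Annotate each item with its USD price, then order by that computed column.
        items = (Item.objects
                 .annotate(price_usd=F('priceRT') * F('cur__rateToUSD'))
                 .order_by('price_usd'))

    On the older versions this question dates from, the usual workaround was .extra() with raw SQL for the computed column.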

    Read the article

  • What does this 'content_type' mean?

    - by zjm1126
        content_type = ContentType.objects.get_for_model(Map)
        maps = maps.extra(select=SortedDict([
            ('member_count', MEMBER_COUNT_SQL),
            ('topic_count', TOPIC_COUNT_SQL),
        ]), select_params=(content_type.id,))

    and the ContentType model is:

        class ContentType(models.Model):
            name = models.CharField(max_length=100)
            app_label = models.CharField(max_length=100)
            model = models.CharField(_('python model class name'), max_length=100)
            objects = ContentTypeManager()

            class Meta:
                verbose_name = _('content type')
                verbose_name_plural = _('content types')
                db_table = 'django_content_type'
                ordering = ('name',)
                unique_together = (('app_label', 'model'),)

            def __unicode__(self):
                return self.name

            def model_class(self):
                "Returns the Python model class for this type of content."
                from django.db import models
                return models.get_model(self.app_label, self.model)

            def get_object_for_this_type(self, **kwargs):
                """
                Returns an object of this type for the keyword arguments given.
                Basically, this is a proxy around this object_type's get_object() model
                method. The ObjectNotExist exception, if thrown, will not be caught, so
                code that calls this method should catch it.
                """
                return self.model_class()._default_manager.using(self._state.db).get(**kwargs)

            def natural_key(self):
                return (self.app_label, self.model)

    I want to know: what is content_type used for?
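
    Roughly: ContentType is Django's registry of installed models. Each row identifies one model class, so code can refer to "some model" generically, which is what powers generic foreign keys, the permissions system, and queries like the one above parameterized by content_type.id. A small sketch of typical usage (Map stands in for any model, as in the question; the printed values are illustrative):

        from django.contrib.contenttypes.models import ContentType

        ct = ContentType.objects.get_for_model(Map)  # one row per model class
        print(ct.app_label, ct.model)                # e.g. ('maps', 'map')
        ModelClass = ct.model_class()                # back to the Map class itself
        obj = ct.get_object_for_this_type(pk=1)      # Map.objects.get(pk=1), generically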

    Read the article

  • FieldError when annotating over foreign keys

    - by X_9
    I have a models file that looks similar to the following:

        class WithDate(models.Model):
            addedDate = models.DateTimeField(auto_now_add=True)
            modifiedDate = models.DateTimeField(auto_now=True)

            class Meta:
                abstract = True

        class Match(WithDate):
            ...

        class Notify(WithDate):
            matchId = models.ForeignKey(Match)
            headline = models.CharField(null=True, blank=True, max_length=10)

    For each Match, I'm trying to get a count of Notify records that have a headline. So my call looks like:

        matchObjs = Match.objects.annotate(notifies_made=Count('notify__headline__isnull'))

    This keeps throwing a FieldError. I've simplified the query down to:

        matchObjs = Match.objects.annotate(notifies_made=Count('notify'))

    and I still get the same FieldError. I've seen this work in other cases (other documentation, other SO questions like this one), but I can't figure out why I'm getting an error. The specific error returned is:

        Cannot resolve keyword 'notify' into field. Choices are: (all fields from Match model)

    Does anyone have a clue as to why I can't get this annotation to work across tables? I'm baffled after looking at the other SO question and various Django docs where I've seen this done. Edit: I am using Django 1.1.1
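
    For context: the name passed to Count is the reverse relation (the lowercase related model name, or its related_name if one is set), so Count('notify') is the right shape once the relation resolves; the original goal of counting only rows with a headline cannot be spelled with __isnull inside Count. On modern Django (2.0+, where Count accepts a filter argument), a sketch would be:

        from django.db.models import Count, Q

        # Count only the Notify rows whose headline is set, per Match.
        matchObjs = Match.objects.annotate(
            notifies_made=Count('notify', filter=Q(notify__headline__isnull=False))
        )

    On 1.1, a common fallback was .extra() with a correlated subquery, since filtered aggregates did not exist yet.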

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >