Search Results

Search found 15035 results on 602 pages for 'request'.


  • C# How to download files from FTP Server

    - by user3696888
    I'm trying to download a file (all kinds of files: exe, dll, txt, etc.). When I try to run it, an error comes up on: using (FileStream ws = new FileStream(destination, FileMode.Create)) This is the error message: Access to the path 'C:\Riot Games\League of Legends\RADS\solutions \lol_game_client_sln\releases\0.0.1.41\deploy' (which is my destination, where I want to save it) is denied. Here is my code: void download(string url, string destination) { FtpWebRequest request = (FtpWebRequest)WebRequest.Create(url); request.Method = WebRequestMethods.Ftp.DownloadFile; request.Credentials = new NetworkCredential("user", "password"); request.UseBinary = true; using (FtpWebResponse response = (FtpWebResponse)request.GetResponse()) { using (Stream rs = response.GetResponseStream()) { using (FileStream ws = new FileStream(destination, FileMode.Create)) { byte[] buffer = new byte[2048]; int bytesRead = rs.Read(buffer, 0, buffer.Length); while (bytesRead > 0) { ws.Write(buffer, 0, bytesRead); bytesRead = rs.Read(buffer, 0, buffer.Length); } } } }
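    A minimal sketch of the likely issue (not from the original post): the path in the error message ends at the 'deploy' folder, and FileMode.Create cannot open a directory as a file. Assuming the file name can be taken from the FTP URL, the destination could be built like this before opening the FileStream:

    using System;
    using System.IO;

    static class DownloadHelper   // hypothetical helper, for illustration only
    {
        // Build a full file path from the destination folder plus the file name in the FTP URL,
        // so that new FileStream(path, FileMode.Create) targets a file rather than a folder.
        public static string BuildLocalPath(string destinationFolder, string ftpUrl)
        {
            string fileName = Path.GetFileName(new Uri(ftpUrl).LocalPath);
            Directory.CreateDirectory(destinationFolder);        // ensure the folder exists
            return Path.Combine(destinationFolder, fileName);
        }
    }

    The download method would then pass DownloadHelper.BuildLocalPath(destination, url) to the FileStream constructor instead of the bare folder path.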

    Read the article

  • can't save form content to database, help plsss!!

    - by dana
    I'm trying to save a 100-character post from the user in a minimal 'microblog' application. My code seems not to have any mistakes, but it doesn't work. The problem is in views.py: I can't save the foreign key to the user table. models.py looks like this: class NewManager(models.Manager): def create_post(self, post, username): new = self.model(post=post, created_by=username) new.save() return new class New(models.Model): post = models.CharField(max_length=120) date = models.DateTimeField(auto_now_add=True) created_by = models.ForeignKey(User, blank=True) objects = NewManager() class NewForm(ModelForm): class Meta: model = New fields = ['post'] # widgets = {'post': Textarea(attrs={'cols': 80, 'rows': 20}) def save_new(request): if request.method == 'POST': created_by = User.objects.get(created_by = user) date = request.POST.get('date', '') post = request.POST.get('post', '') new_obj = New(post=post, date=date, created_by=created_by) new_obj.save() return HttpResponseRedirect('/') else: form = NewForm() return render_to_response('news/new_form.html', {'form': form},context_instance=RequestContext(request)) I didn't mention imports here - they're done right, anyway. My mistake is in views.py; when I try to save, it says: local variable 'created_by' referenced before assignment If I put created_by as a parameter, the save needs more parameters... It is really weird, help please!!

    Read the article

  • Django: Staff Decorator

    - by Mark
    I'm trying to write a "staff only" decorator for Django, but I can't seem to get it to work: def staff_only(error='Only staff may view this page.'): def _dec(view_func): def _view(request, *args, **kwargs): u = request.user if u.is_authenticated() and u.is_staff: return view_func(request, *args, **kwargs) messages.error(request, error) return HttpResponseRedirect(request.META.get('HTTP_REFERER', reverse('home'))) _view.__name__ = view_func.__name__ _view.__dict__ = view_func.__dict__ _view.__doc__ = view_func.__doc__ return _view return _dec Trying to follow lead from here. I'm getting: 'WSGIRequest' object has no attribute '__name__' But if I take those 3 lines out, I just get a useless "Internal Server Error". What am I doing wrong here?

    Read the article

  • Defining xml in an xsd where an attribute determines the possible contents

    - by SeanJA
    How would one go about defining something like this in an xsd? <start> <request type="typeA"> <elementOnlyFoundInA /> </request> <request type="typeB"> <elementOnlyFoundInB /> </request> </start> I ran xsd.exe just to get an idea of what it might look like, but it does not appear to recognize the relationship between the value of type and the contents of the request. Is it even possible to define contents based on an attribute like this in an xsd file?

    Read the article

  • asp.net could a half submitted web page be processed?

    - by c00ke
    Having a weird bug in production and just wondering if it's possible for a half-submitted web page to be processed by the server? The page has no view state, just plain old HTML controls, and the data displayed in a repeater is accessed on the back end via Request.Form[name] etc. Is it possible for a request to be truncated, perhaps due to a lost internet connection, and the page still be processed by the server? Therefore, if a field is not part of the request, Request.Form[name] could result in null? I know I can use Fiddler to modify the request, but unfortunately we are not allowed to change group policy and change the proxy! Many Thanks
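    Whatever the server ends up doing with a truncated request, reading the form defensively sidesteps the null issue. A rough sketch (the page class and the "quantity" field are made-up names):

    using System;
    using System.Web.UI;

    public partial class OrderPage : Page   // hypothetical page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack) return;

            // Request.Form[name] returns null when the field never reached the server,
            // so guard before using the value instead of assuming it is always present.
            string raw = Request.Form["quantity"];
            int quantity;
            if (string.IsNullOrEmpty(raw) || !int.TryParse(raw, out quantity))
            {
                Response.StatusCode = 400;   // treat it as a bad/partial request
                Response.End();
                return;
            }
            // ...process the value as usual...
        }
    }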

    Read the article

  • How do I make "simple" throughput j2ee-filter?

    - by Tommy
    I'm looking to create a filter that can give me two things: number of requests per minute, and average response time per minute. I already have the individual readings; I'm just not sure how to add them up. My filter captures every request, and it records the time each request takes: public void doFilter(ServletRequest request, ...() { long start = System.currentTimeMillis(); chain.doFilter(request, response); long stop = System.currentTimeMillis(); String time = Util.getTimeDifferenceInSec(start, stop); } This information will be used to create some pretty Google Chart charts. I don't want to store the data in any database - just a way to get the current numbers out when requested. As this is a high-volume application, low overhead is essential. I'm assuming my application server doesn't provide this information.

    Read the article

  • asp.net: moving from session variables to cookies

    - by P a u l
    My forms are losing session variables on shared hosting very quickly (webhost4life), and I think I want to replace them with cookies. Does the following look reasonable for tracking an ID from form to form: if(Request.Cookies["currentForm"] == null) return; projectID = new Guid(Request.Cookies["currentForm"]["selectedProjectID"]); Response.Cookies["currentForm"]["selectedProjectID"] = Request.Cookies["currentForm"]["selectedProjectID"]; Note that I am setting the Response cookie in all the forms after I read the Request cookie. Is this necessary? Do the Request cookies copy to the Response automatically? I'm setting no properties on the cookies and create them this way: Response.Cookies["currentForm"]["selectedProjectID"] = someGuid.ToString(); The intention is that these are temporary header cookies, not persisted on the client any longer than the browser session. I ask this since I don't often write websites.
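    For what it's worth, request cookies are not copied to the response automatically, but they don't need to be: a cookie the browser already holds is resent on every request until it expires (or, with no expiry set, until the browser session ends), so rewriting it on every page is only needed when the value changes. A small sketch along those lines (the cookie and subkey names are taken from the question, the helper class is illustrative):

    using System;
    using System.Web;

    static class CurrentFormCookie   // hypothetical helper
    {
        public static Guid? ReadProjectId(HttpRequest request)
        {
            HttpCookie cookie = request.Cookies["currentForm"];
            string value = (cookie == null) ? null : cookie["selectedProjectID"];
            if (string.IsNullOrEmpty(value)) return null;
            return new Guid(value);   // add a try/catch if malformed values are possible
        }

        public static void WriteProjectId(HttpResponse response, Guid id)
        {
            HttpCookie cookie = new HttpCookie("currentForm");
            cookie["selectedProjectID"] = id.ToString();   // no Expires set, so it stays a session cookie
            response.Cookies.Add(cookie);
        }
    }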

    Read the article

  • Django Find Out if User is Authenticated in Custom Tag

    - by greggory.hz
    I'm trying to create a custom tag. Inside this custom tag, I want to be able to have some logic that checks if the user is logged in, and then have the tag rendered accordingly. This is what I have: def user_actions(context): request = template.Variable('request').resolve(context) return { 'auth': request['user'].is_athenticated() } register.inclusion_tag('layout_elements/user_actions.html', takes_context=True)(user_actions) When I run this, I get this error: Caught VariableDoesNotExist while rendering: Failed lookup for key [request] in u'[{}]' The view that renders this ends like this: return render_to_response('start/home.html', {}, context_instance=RequestContext(request)) Why doesn't the tag get a RequestContext object instead of the Context object? How can I get the tag to receive the RequestContext instead of the Context? EDIT: Whether or not it's possible to get a RequestContext inside a custom tag, I'd still be interested to know the "correct" or best way to determine a user's authentication state from within the custom tag. If that's not possible, then perhaps that kind of logic belongs elsewhere? Where?

    Read the article

  • Removing the XML Formatter from ASP.NET Web API Applications

    - by Rick Strahl
    ASP.NET Web API's default output format is supposed to be JSON, but when I access my Web APIs using the browser address bar I'm always seeing an XML result instead. When working on an AJAX application I like to test many of my AJAX APIs with the browser while working on them. While I can't debug all requests this way, GET requests are easy to test in the browser, especially if you have JSON viewing options set up in your various browsers. If I preview a Web API request in most browsers I get an XML response. Why is that? Web API checks the HTTP Accept headers of a request to determine what type of output it should return by looking for content types that it has formatters registered for. This automatic negotiation is one of the great features of Web API because it makes it easy and transparent to request different kinds of output from the server. In the case of browsers it turns out that most send Accept headers that look like this (Chrome in this case): Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Web API inspects the entire list of headers from left to right (plus the quality/priority flag q=) and tries to find a media type that matches its list of supported media types in the list of formatters registered. In this case it matches application/xml to the Xml formatter and so that's what gets returned and displayed. To verify that Web API indeed defaults to JSON output, you can open the request in Fiddler and pop it into the Request Composer, remove the application/xml header and see that the output returned comes back in JSON instead. An Accept header like this: Accept: text/html,application/xhtml+xml,*/*;q=0.9 or leaving the Accept header out altogether should give you a JSON response. Interestingly enough Internet Explorer 9 also displays JSON because it doesn't include an application/xml Accept header: Accept: text/html, application/xhtml+xml, */* which for once actually seems more sensible. Removing the XML Formatter We can't easily change the browser Accept headers (actually you can by delving into the config but it's a bit of a hassle), so can we change the behavior on the server? When working on AJAX applications I tend not to be interested in XML results and I always want to see JSON results, at least during development. Web API uses a collection of formatters and you can go through this list and remove the ones you don't want to use - in this case the XmlMediaTypeFormatter. To do this you can work with the HttpConfiguration object and the static GlobalConfiguration object used to configure it: protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // remove default Xml handler var matches = config.Formatters .Where(f => f.SupportedMediaTypes .Where(m => m.MediaType.ToString() == "application/xml" || m.MediaType.ToString() == "text/xml") .Count() > 0) .ToList(); foreach (var match in matches) config.Formatters.Remove(match); } } That LINQ code is quite a mouthful of nested collections, but it does the trick to remove the formatter based on the content type.
    You can also look for the specific formatter (XmlMediaTypeFormatter) by its type name, which is simpler, but it's better to search for the supported types as this will work even if there are other custom formatters added. Once removed, the browser request now results in a JSON response. It's a simple solution to a small debugging task that's made my life easier. Maybe you find it useful too… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api, ASP.NET
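    A sketch of the type-based removal mentioned above, assuming the standard Web API types (the media-type search shown in the post remains the more defensive option):

    using System.Net.Http.Formatting;
    using System.Web.Http;

    public static class FormatterConfig   // illustrative name
    {
        public static void RemoveXmlFormatter(HttpConfiguration config)
        {
            // MediaTypeFormatterCollection exposes the built-in XML formatter directly,
            // so it can be removed without scanning supported media types.
            XmlMediaTypeFormatter xml = config.Formatters.XmlFormatter;
            if (xml != null)
                config.Formatters.Remove(xml);
        }
    }

    It would be called from Application_Start with FormatterConfig.RemoveXmlFormatter(GlobalConfiguration.Configuration);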

    Read the article

  • Parallelism in .NET – Part 20, Using Task with Existing APIs

    - by Reed
    Although the Task class provides a huge amount of flexibility for handling asynchronous actions, the .NET Framework still contains a large number of APIs that are based on the previous asynchronous programming model. While Task and Task<T> provide a much nicer syntax as well as extending the flexibility, allowing features such as continuations based on multiple tasks, the existing APIs don't directly support this workflow. There is a method in the TaskFactory class which can be used to adapt the existing APIs to the new Task class: TaskFactory.FromAsync. This method provides a way to convert from the BeginOperation/EndOperation method pair syntax common throughout the .NET Framework directly to a Task<T> containing the results of the operation in the task's Result parameter. While this method does exist, it unfortunately comes at a cost - the method overloads are far from simple to decipher, and the resulting code is not always as easily understood as newer code based directly on the Task class. For example, a single call to handle WebRequest.BeginGetResponse/EndGetResponse, one of the easiest "pairs" of methods to use, looks like the following: var task = Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); The compiler is unfortunately unable to infer the correct type, and, as a result, the WebResponse must be explicitly mentioned in the method call. As a result, I typically recommend wrapping this into an extension method to ease use. For example, I would place the above in an extension method like: public static class WebRequestExtensions { public static Task<WebResponse> GetResponseAsync(this WebRequest request) { return Task.Factory.FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); } } This dramatically simplifies usage. For example, if we wanted to asynchronously check to see if this blog supported XHTML 1.0, and report that in a text box to the user, we could do: var webRequest = WebRequest.Create("http://www.reedcopsey.com"); webRequest.GetResponseAsync().ContinueWith(t => { using (var sr = new StreamReader(t.Result.GetResponseStream())) { string str = sr.ReadLine(); this.textBox1.Text = string.Format("Page at {0} supports XHTML 1.0: {1}", t.Result.ResponseUri, str.Contains("XHTML 1.0")); } }, TaskScheduler.FromCurrentSynchronizationContext()); By using a continuation with a TaskScheduler based on the current synchronization context, we can keep this request asynchronous, check based on the first line of the response string, and report the results back on our UI directly.

    Read the article

  • Ubuntu-one syncs single files, but not directories [closed]

    - by Luiz Cláudio Duarte
    I'm using Ubuntu 10.10, fully updated. I have tried to sync my ~/Documents and ~/Pictures folders; U1 replicates the directory structure, but no files are uploaded. Next I tried to sync a single file inside ~/Ubuntu One and it was synced. Then I tried to put a directory inside ~/Ubuntu One and, again, the directory structure was replicated, but no files were synced. All the files have the "syncing" icon, however. The latest syncdaemon.log is below: 2011-03-30 07:41:50,752 - ubuntuone.SyncDaemon.fsm - INFO - loading updated metadata 2011-03-30 07:41:55,081 - ubuntuone.SyncDaemon.fsm - INFO - initialized: idx_path: 266, idx_node_id: 266, shares: 1 2011-03-30 07:41:55,082 - ubuntuone.SyncDaemon.GeneralINotProc - INFO - Ignoring files: ['\\A#.*\\Z', '\\A.*~\\Z', '\\A.*\\.py[oc]\\Z', '\\A.*\\.sw[nopx]\\Z', '\\A.*\\.swpx\\Z', '\\A\\..*\\.tmp\\Z'] 2011-03-30 07:41:55,083 - ubuntuone.SyncDaemon.HQ - INFO - HashQueue: _hasher started 2011-03-30 07:41:55,902 - ubuntuone.SyncDaemon.DBus - INFO - DBusInterface initialized. 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/Ubuntu One' as root dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/.local/share/ubuntuone/syncdaemon' as data dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/.local/share/ubuntuone/shares' as shares root dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'INIT' (queues IDLE connection 'Not User Not Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1 miss=266) ---- 2011-03-30 07:41:55,904 - ubuntuone.SyncDaemon.Main - NOTE - Local rescan starting... 2011-03-30 07:41:55,904 - ubuntuone.SyncDaemon.local_rescan - INFO - start scan all volumes 2011-03-30 07:41:55,906 - ubuntuone.SyncDaemon.local_rescan - INFO - processing trash 2011-03-30 07:41:56,044 - ubuntuone.SyncDaemon.local_rescan - INFO - processing move limbo 2011-03-30 07:41:56,491 - ubuntuone.SyncDaemon.Main - NOTE - Local rescan finished! 2011-03-30 07:41:56,492 - ubuntuone.SyncDaemon.Main - INFO - hash queue empty. We are ready! 2011-03-30 07:42:15,583 - ubuntuone.SyncDaemon.DBus - INFO - u'CredentialsFound': callbacking with credentials (token_name: None). 2011-03-30 07:42:15,584 - ubuntuone.SyncDaemon.DBus - INFO - connect: credential request was successful, pushing SYS_USER_CONNECT. 2011-03-30 07:42:15,617 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection started to host fs-1.one.ubuntu.com, port 443. 2011-03-30 07:42:15,977 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection made. 2011-03-30 07:42:15,978 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection made. 2011-03-30 07:42:16,581 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' finished OK. 2011-03-30 07:42:16,774 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'caps_raising_if_not_accepted' finished OK. 2011-03-30 07:42:16,966 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'caps_raising_if_not_accepted' finished OK. 2011-03-30 07:42:17,722 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'oauth_authenticate' finished OK. 2011-03-30 07:42:17,723 - ubuntuone.SyncDaemon.ActionQueue - NOTE - Session ID: '563bc960-35fa-4f44-b9b6-125819656dc3' 2011-03-30 07:42:19,258 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'list_volumes' finished OK. 
2011-03-30 07:43:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:45:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:47:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:49:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:51:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ----

    Read the article

  • Asp.net Session State Revisited

    - by karan@dotnet
    Every now and then I see doubts and queries which I believe is the most discussed topic in the .net environment - Asp.net Sessions. So what really are they, why are they needed and what does browser and .net do with it. These and some of the other questions I hope to answer with this post. Because of the stateless nature of the HTTP protocol there is always a need of state management in a web application. There are many other ways to store data but I feel Session state is amongst the most powerful one. The ASP.NET session state is a technology that lets you store server-side, user-specific data. Our web applications can then use data to process request from the user for which the session state was instantiated. So when does a session is first created? When we start a asp.net application a non-expiring cookie is created and its called as ASP.NET_SessionId. Basically there are two methods for this depending upon how you configure this setting in your config file. The session ID can be a part of cookie as discussed above(called as ASP.NET_SessionId) or it is embedded in the browser’s URL. For the latter part we have to set cookie-less session in our web.config file. These Session ID’s are 120-bit random number that is represented by 20-character string. The cookie will be alive until you close your browser. If you browse from one app to another within the same domain, then both the apps will use the same session ID to track the session state. Why reuse? so that you don’t have to create a new session ID for each request. One can abandon one particular Session by calling Session.Abandon() which will stop the page processing and clear out the session data. A subsequent page request causes a brand new session object to be instantiated. So what happened to my cookie? Well the session cookie is still there even when one Session.Abandon() is called and another session object is created. The Session.Abandon() lets you clear out your session state without waiting for session timeout. By default, this time-out is a 20-minute sliding expiration. This expiration is refreshed every time that the user makes a request to the Web site and presents the session ID cookie. The Abandon method sets a flag in the session state object that indicates that the session state should be abandoned. If your app does not have global.asax then your session cookie will be killed at the end of each page request. So you need to have a global.asax file and Session_Start() handler to make sure that the session cookie will remain intact once its issued after the first page hit. The runtime invokes global.asax’s Session_OnEnd() when you call Session.Abandon() or the session times out. The session manager stores session data in HttpCache with sliding expiration where this timeout can be configured in the <sessionState> of web.config file. When the timeout is up the HttpCache will remove the session state object. Sometimes we want particular pages not to time out as compared to other pages in our applications. We can handle this in two ways. First, we can set a timer or may be a JavaScript function that refreshes the page after fixed intervals of time. The only thing being the page being cached locally and then the request is not made to the server so to prevent that you can add this to your page: <%@ OutputCache Location="None" VaryByParam="None" %> Second approach is to move your page into its own folder and then add a web.config to that folder to control the timeout. 
    Also, not all pages in your application will need access to session state. For those pages that do not, you can indicate that session state is not needed and prevent session data from being fetched from the store in requests to these pages. You can disable the session state at page level like this: <%@ Page EnableSessionState="False" %> tbc…
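    As a rough illustration of the global.asax point above (the handler names follow the usual ASP.NET conventions; the session key is made up), a minimal Global.asax code-behind could look like this:

    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Session_Start(object sender, EventArgs e)
        {
            // Runs once per new session; the ASP.NET_SessionId cookie is issued on this request.
            Session["startedUtc"] = DateTime.UtcNow;
        }

        protected void Session_End(object sender, EventArgs e)
        {
            // Runs when the session times out or Session.Abandon() is called
            // (only raised for in-process session state).
        }
    }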

    Read the article

  • How to use DI and DI containers

    - by Pinetree
    I am building a small PHP mvc framework (yes, yet another one), mostly for learning purposes, and I am trying to do it the right way, so I'd like to use a DI container, but I am not asking which one to use but rather how to use one. Without going into too much detail, the mvc is divided into modules which have controllers which render views for actions. This is how a request is processed: a Main object instantiates a Request object, and a Router, and injects the Request into the Router to figure out which module was called. then it instantiates the Module object and sends the Request to that the Module creates a ModuleRouter and sends the Request to figure out the controller and action it then creates the Controller and the ViewRenderer, and injects the ViewRenderer into the Controller (so that the controller can send data to the view) the ViewRenderer needs to know which module, controller and action were called to figure out the path to the view scripts, so the Module has to figure out this and inject it to the ViewRenderer the Module then calls the action method on the controller and calls the render method on the ViewRenderer For now, I do not have any DI container set up, but what I do have are a bunch of initX() methods that create the required component if it is not already there. For instance, the Module has the initViewRenderer() method. These init methods get called right before that component is needed, not before, and if the component was already set it will not initialize it. This allows for the components to be switched, but it does not require manually setting them if they are not there. Now, I'd like to do this by implementing a DI container, but still keep the manual configuration to a bare minimum, so if the directory structure and naming convention is followed, everything should work, without even touching the config. If I use the DI container, do I then inject it into everything (the container would inject itself when creating a component), so that other components can use it? When do I register components with the DI? Can a component register other components with the DI during run-time? Do I create a 'common' config and use that? How do I then figure out on the fly which components I need and how they need to be set up? If Main uses Router which uses Request, Main then needs to use the container to get Module (or does the module need to be found and set beforehand? How?) Module uses Router but needs to figure out the settings for the ViewRenderer and the Controller on the fly, not in advance, so my DI container can't be setting those on the Module before the module figures out the controller and action... What if the controller needs some other service? Do I inject the container into every controller? If I start doing that, I might just inject it into everything... Basically I am looking for the best practices when dealing with stuff like this. I know what DI is and what DI containers do, but I am looking for guidance to using them in real life, and not some isolated examples on the net. Sorry for the lengthy post and many thanks in advance.

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately none of these resources are unlimited and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it may affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies getting the proper number of threads and utilizing them. A work manager allows your applications to run concurrently in multiple threads. It is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or priority for a request with a work manager and then associate this work manager with one or more applications. At run-time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager that is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. Note, however, that wrong configuration of work managers can lead to a performance penalty when you were expecting an improvement. We can define/configure work managers at:
    • Domain Level: config.xml
    • Application Level: weblogic-application.xml
    • Component Level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)
    We can use the following predefined rules/constraints to manage the work:
    • Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    • Response Time Request Class: Specifies a response time goal in milliseconds.
    • Context Request Class: Assigns request classes to requests based on context information.
    • Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    • Max Threads Constraint: Limits the number of concurrent threads executing requests.
    • Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.
    Let's create a work manager for our application for long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones your long-running application is deployed to) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck and allow the process to finish.

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine of when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an Exception on EndSession.  The premise of the issue was that the poster didn’t feel an exception should be raised, regardless of authentication status.  As first, this sounded like a valid point.  However, I went back to review my code and decided not to make any changes. Here's my rationale: 1. The exception doesn’t occur if the user is authenticated when EndAccountSession is called. 2. The exception does occur if the user is not authenticated when EndAccountSession is called. 3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session.  If a session never existed, then it would not be possible to perform the requested action.  Therefore, an exception is appropriate. To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project to illustrate the situation: static void EndSession(ITwitterAuthorizer auth) { using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) { try { //Log twitterCtx.Log = Console.Out; var status = twitterCtx.EndAccountSession(); Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } catch (TwitterQueryException tqe) { var webEx = tqe.InnerException as WebException; if (webEx != null) { var webResp = webEx.Response as HttpWebResponse; if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized) Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n"); } var status = tqe.Response; Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } } } As expected, LINQ to Twitter wraps the exception in a TwitterQueryException as the InnerException.  The TwitterQueryException serves a very useful purpose through it's Response property.  Notice in the example above that the response has Request and Error proprieties.  These properties correspond to the information that Twitter returns as part of it's response payload.  This is often useful while debugging to help you understand why Twitter was unable to perform the  requested action.  Other times, it's cryptic, but that's another story.  At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with. To sum things up, there are two points to make: when and why an exception should be raised and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for.  I also felt that it is inappropriate for a general library to do anything with exceptions because that could potentially hide a problem from the caller.  A related point is that it should be the exclusive decision of the application that uses the library on what to do with an exception.  Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw.  This is a tough call because I don’t want to hide any stack trace information.  
However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me in the direction to design an interface that was as helpful as possible to library consumers.  As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized.  In all, trade-offs are seldom perfect for all cases, but combining the fact that the method was unable to perform its intended function, this is a library, and the extra information can be more helpful, it seemed to be the better design. @JoeMayo
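    As a generic sketch of the wrap-and-rethrow pattern discussed here (not the actual LINQ to Twitter source; the type and property names are made up), keeping the original exception as InnerException preserves the stack trace while still surfacing the extra context:

    using System;
    using System.IO;
    using System.Net;

    public class ServiceQueryException : Exception   // hypothetical custom exception
    {
        public string ResponseBody { get; private set; }

        public ServiceQueryException(string message, string responseBody, Exception inner)
            : base(message, inner)
        {
            ResponseBody = responseBody;
        }
    }

    static class QueryRunner
    {
        public static string Execute(WebRequest request)
        {
            try
            {
                using (WebResponse response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd();
            }
            catch (WebException ex)
            {
                // Wrap rather than swallow: the caller decides how to handle it,
                // and ex stays reachable through InnerException.
                throw new ServiceQueryException("The service returned an error.", null, ex);
            }
        }
    }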

    Read the article

  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents: // Interfaces interface IParent<TChild> { List<TChild> Children; } interface IChild<TParent> { TParent Parent; } // Classes class Top : IParent<Middle> {} class Middle : IParent<Bottom>, IChild<Top> {} class Bottom : IChild<Middle> {} // Usage var top = new Top(); var middles = top.Children; // List<Middle> foreach (var middle in middles) { var bottoms = middle.Children; // List<Bottom> foreach (var bottom in bottoms) { var middle = bottom.Parent; // Access the parent var top = middle.Parent; // Access the grandparent } } All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it. Data Mapper My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store: class TopMapper { public Top FetchById(int id) { var top = new Top(DataStore.TopDataById(id)); top.Children = MiddleMapper.FetchForTop(Top); return Top; } } class MiddleMapper { public Middle FetchById(int id) { var middle = new Middle(DataStore.MiddleDataById(id)); middle.Parent = TopMapper.FetchForMiddle(middle); middle.Children = BottomMapper.FetchForMiddle(bottom); return middle; } } This way I can have one mapper per data store, and build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that that entails. And in real life my tree is much bigger than the one represented here, so that's a problem. Requests in the object In this the objects request their Parents and Children themselves: class Middle { private List<Bottom> _children = null; // cache public List<Bottom> Children { get { _children = _children ?? BottomMapper.FetchForMiddle(this); return _children; } set { BottomMapper.UpdateForMiddle(this, value); _children = value; } } } I think this is an example of the repository pattern. Is that correct? This solution seems neat - the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service and save it back to the database and then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database. Other solutions Are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time?

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views. The application allows users to register a "domain", which gives them a unique URL containing the domain name that gives them access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, and there are some administrative reports that include all domains, etc.). The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind: Before in models.py class User(Document): domain = StringProperty() class Group(Document): domain = StringProperty() name = StringProperty() user_ids = StringListProperty() # method that returns related document set def users(self): return [User.get(id) for id in self.user_ids] # method that queries a couch view optimized for a specific lookup @classmethod def by_name(cls, domain, name): # the view method is provided by couchdbkit and handles # wrapping json CouchDB results as Python objects, and # can take various parameters modifying behavior return cls.view('groups/by_name', key=[domain, name]) # method that creates a related document def get_new_user(self): user = User(domain=self.domain) user.save() self.user_ids.append(user._id) return user in views.py: from models import User, Group # there are tons of views like this, (request, domain, ...) def create_new_user_in_group(request, domain, group_name): group = Group.by_name(domain, group_name)[0] user = User(domain=domain) user.save() group.user_ids.append(user._id) group.save() in group/by_name/map.js: function (doc) { if (doc.doc_type == "Group") { emit([doc.domain, doc.name], null); } } After models.py class DomainDocument(Document): domain = StringProperty() @classmethod def domain_view(cls, *args, **kwargs): kwargs['key'] = [cls.domain.default] + kwargs['key'] return super(DomainDocument, cls).view(*args, **kwargs) @classmethod def get(cls, *args, **kwargs, validate_domain=True): ret = super(DomainDocument, cls).get(*args, **kwargs) if validate_domain and ret.domain != cls.domain.default: raise Exception() return ret def models(self): # a mapping of all models in the application. 
accessing one returns the equivalent of class BoundUser(User): domain = StringProperty(default=self.domain) class User(DomainDocument): pass class Group(DomainDocument): name = StringProperty() user_ids = StringListProperty() def users(self): return [self.models.User.get(id) for id in self.user_ids] @classmethod def by_name(cls, name): return cls.domain_view('groups/by_name', key=[name]) def get_new_user(self): user = self.models.User() user.save() views.py @domain_view # decorator that sets request.models to the same sort of object that is returned by DomainDocument.models and removes the domain argument from the URL router def create_new_user_in_group(request, group_name): group = request.models.Group.by_name(group_name) user = request.models.User() user.save() group.user_ids.append(user._id) group.save() (Might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key or some other similar solution.) function (doc) { if (doc.doc_type == "Group") { emit([doc.name], null); } } Pros and Cons So what are the pros and cons of this? Pros: DRYer prevents you from creating related documents but forgetting to set the domain. prevents you from accidentally writing a django view - couch view execution path that leads to a security breach doesn't prevent you from accessing underlying self.domain and normal Document.view() method potentially gets rid of the need for a lot of sanity checks verifying whether two documents whose domains we expect to be equal are. Cons: adds some complexity hides what's really happening requires no model modules to have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique. removes explicit dependency documentation (from group.models import Group)

    Read the article

  • Using Handlebars.js issue

    - by Roland
    I'm having a small issue when I'm compiling a template with Handlebars.js . I have a JSON text file which contains an big array with objects : Source ; and I'm using XMLHTTPRequest to get it and then parse it so I can use it when compiling the template. So far the template has the following structure : <div class="product-listing-wrapper"> <div class="product-listing"> <div class="left-side-content"> <div class="thumb-wrapper"> <img src="{{ThumbnailUrl}}"> </div> <div class="google-maps-wrapper"> <div class="google-coordonates-wrapper"> <div class="google-coordonates"> <p>{{LatLon.Lat}}</p> <p>{{LatLon.Lon}}</p> </div> </div> <div class="google-maps-button"> <a class="google-maps" href="#" data-latitude="{{LatLon.Lat}}" data-longitude="{{LatLon.Lon}}">Google Maps</a> </div> </div> </div> <div class="right-side-content"></div> </div> And the following block of code would be the way I'm handling the JS part : $(document).ready(function() { /* Default Javascript Options ~a javascript object which contains all the variables that will be passed to the cluster class */ var default_cluster_options = { animations : ['flash', 'bounce', 'shake', 'tada', 'swing', 'wobble', 'wiggle', 'pulse', 'flip', 'flipInX', 'flipOutX', 'flipInY', 'flipOutY', 'fadeIn', 'fadeInUp', 'fadeInDown', 'fadeInLeft', 'fadeInRight', 'fadeInUpBig', 'fadeInDownBig', 'fadeInLeftBig', 'fadeInRightBig', 'fadeOut', 'fadeOutUp', 'fadeOutDown', 'fadeOutLeft', 'fadeOutRight', 'fadeOutUpBig', 'fadeOutDownBig', 'fadeOutLeftBig', 'fadeOutRightBig', 'bounceIn', 'bounceInUp', 'bounceInDown', 'bounceInLeft', 'bounceInRight', 'bounceOut', 'bounceOutUp', 'bounceOutDown', 'bounceOutLeft', 'bounceOutRight', 'rotateIn', 'rotateInDownLeft', 'rotateInDownRight', 'rotateInUpLeft', 'rotateInUpRight', 'rotateOut', 'rotateOutDownLeft', 'rotateOutDownRight', 'rotateOutUpLeft', 'rotateOutUpRight', 'lightSpeedIn', 'lightSpeedOut', 'hinge', 'rollIn', 'rollOut'], json_data_url : 'data.json', template_data_url : 'template.php', base_maps_api_url : 'https://maps.googleapis.com/maps/api/js?sensor=false', cluser_wrapper_id : '#content-wrapper', maps_wrapper_class : '.google-maps', }; /* Cluster ~main class, handles all javascript operations */ var Cluster = function(environment, cluster_options) { var self = this; this.options = $.extend({}, default_cluster_options, cluster_options); this.environment = environment; this.animations = this.options.animations; this.json_data_url = this.options.json_data_url; this.template_data_url = this.options.template_data_url; this.base_maps_api_url = this.options.base_maps_api_url; this.cluser_wrapper_id = this.options.cluser_wrapper_id; this.maps_wrapper_class = this.options.maps_wrapper_class; this.test_environment_mode(this.environment); this.initiate_environment(); this.test_xmlhttprequest_availability(); this.initiate_gmaps_lib_load(self.base_maps_api_url); this.initiate_data_processing(); }; /* Test Environment Mode ~adds a modernizr test which looks wheater the cluster class is initiated in development or not */ Cluster.prototype.test_environment_mode = function(environment) { var self = this; return Modernizr.addTest('test_environment', function() { return (typeof environment !== 'undefined' && environment !== null && environment === "Development") ? 
true : false; }); }; /* Test XMLHTTPRequest Availability ~adds a modernizr test which looks wheater the xmlhttprequest class is available or not in the browser, exception makes IE */ Cluster.prototype.test_xmlhttprequest_availability = function() { return Modernizr.addTest('test_xmlhttprequest', function() { return (typeof window.XMLHttpRequest === 'undefined' || window.XMLHttpRequest === null) ? true : false; }); }; /* Initiate Environment ~depending on what the modernizr test returns it puts LESS in the development mode or not */ Cluster.prototype.initiate_environment = function() { return (Modernizr.test_environment) ? (less.env = "development", less.watch()) : true; }; Cluster.prototype.initiate_gmaps_lib_load = function(lib_url) { return Modernizr.load(lib_url); }; /* Initiate XHR Request ~prototype function that creates an xmlhttprequest for processing json data from an separate json text file */ Cluster.prototype.initiate_xhr_request = function(url, mime_type) { var request, data; var self = this; (Modernizr.test_xmlhttprequest) ? request = new ActiveXObject('Microsoft.XMLHTTP') : request = new XMLHttpRequest(); request.onreadystatechange = function() { if(request.readyState == 4 && request.status == 200) { data = request.responseText; } }; request.open("GET", url, false); request.overrideMimeType(mime_type); request.send(); return data; }; Cluster.prototype.initiate_google_maps_action = function() { var self = this; return $(this.maps_wrapper_class).each(function(index, element) { return $(element).on('click', function(ev) { var html = $('<div id="map-canvas" class="map-canvas"></div>'); var latitude = $(element).attr('data-latitude'); var longitude = $(element).attr('data-longitude'); log("LAT : " + latitude); log("LON : " + longitude); $.lightbox(html, { "width": 900, "height": 250, "onOpen" : function() { } }); ev.preventDefault(); }); }); }; Cluster.prototype.initiate_data_processing = function() { var self = this; var json_data = JSON.parse(self.initiate_xhr_request(self.json_data_url, 'application/json; charset=ISO-8859-1')); var source_data = self.initiate_xhr_request(self.template_data_url, 'text/html'); var template = Handlebars.compile(source_data); for(var i = 0; i < json_data.length; i++ ) { var result = template(json_data[i]); $(result).appendTo(self.cluser_wrapper_id); } self.initiate_google_maps_action(); }; /* Cluster ~initiate the cluster class */ var cluster = new Cluster("Development"); }); My problem would be that I don't think I'm iterating the JSON object right or I'm using the template the wrong way because if you check this link : http://rolandgroza.com/labs/valtech/ ; you will see that there are some numbers there ( which represents latitude and longitude ) but they are all the same and if you take only a brief look at the JSON object each number is different. So what am I doing wrong that it makes the same number repeat ? Or what should I do to fix it ? I must notice that I've just started working with templates so I have little knowledge it.

    Read the article

  • c# WebRequest to connect to wikipedia API

    - by NickJ
    Hey, this may be a pathetically simple problem, but I cannot seem to format the POST WebRequest/response to get data from the Wikipedia API. I have posted my code below, if anyone can help me see my problem. string pgTitle = txtPageTitle.Text; Uri address = new Uri("http://en.wikipedia.org/w/api.php"); HttpWebRequest request = WebRequest.Create(address) as HttpWebRequest; request.Method = "POST"; request.ContentType = "application/x-www-form-urlencoded"; string action = "query"; string query = pgTitle; StringBuilder data = new StringBuilder(); data.Append("action=" + HttpUtility.UrlEncode(action)); data.Append("&query=" + HttpUtility.UrlEncode(query)); byte[] byteData = UTF8Encoding.UTF8.GetBytes(data.ToString()); request.ContentLength = byteData.Length; using (Stream postStream = request.GetRequestStream()) { postStream.Write(byteData, 0, byteData.Length); } using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { // Get the response stream StreamReader reader = new StreamReader(response.GetResponseStream()); divWikiData.InnerText = reader.ReadToEnd(); }
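    For comparison, a sketch of a plain GET against the API (parameter names such as action, titles, prop and format follow the public MediaWiki API documentation; check the docs for the exact data you need):

    using System;
    using System.IO;
    using System.Net;
    using System.Web;

    static class WikipediaClient   // illustrative helper
    {
        public static string GetPageContent(string pageTitle)
        {
            // The MediaWiki API is usually queried with GET; the query string selects the data.
            string url = "http://en.wikipedia.org/w/api.php"
                       + "?action=query&prop=revisions&rvprop=content&format=xml"
                       + "&titles=" + HttpUtility.UrlEncode(pageTitle);

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.UserAgent = "ExampleApp/1.0";   // the API asks for a descriptive User-Agent

            using (WebResponse response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }
    }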

    Read the article

  • Asp.net MVC Route class that supports catch-all parameter anywhere in the URL

    - by Robert Koritnik
    The more I think about it, the more I believe it's possible to write a custom route that would consume these URL definitions: {var1}/{var2}/{var3} Const/{var1}/{var2} Const1/{var1}/Const2/{var2} {var1}/{var2}/Const as well as having at most one greedy parameter in any position within any of the above URLs, like {*var1}/{var2}/{var3} {var1}/{*var2}/{var3} {var1}/{var2}/{*var3} There is one important constraint: routes with a greedy segment can't have any optional parts; all of them are mandatory. Example This is an exemplary request: http://localhost/Show/Topic/SubTopic/SubSubTopic/123/This-is-an-example This is the URL route definition: {action}/{*topicTree}/{id}/{title} Algorithm Parsing the request route inside GetRouteData() should work like this: Split the request into segments: Show Topic SubTopic SubSubTopic 123 This-is-an-example Process the route URL definition starting from the left, assigning single segment values to parameters (or matching request segment values to static route constant segments). When a route segment is defined as greedy, reverse parsing and go to the last segment. Parse route segments one by one backwards (assigning them request values) until you get to the greedy catch-all one again. When you reach the greedy one again, join all remaining request segments (in original order) and assign them to the greedy catch-all route parameter. Questions As far as I can think of this, it could work. But I would like to know: Has anyone already written this so I don't have to (because there are other aspects to parsing as well that I didn't mention: constraints, defaults etc.)? Do you see any flaws in this algorithm? Because I'm going to have to write it myself if no one has done it so far. I haven't thought about the GetVirtualPath() method at all.
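    A bare-bones sketch of the matching algorithm described above, on plain string segments only (no constraints, defaults or MVC plumbing), showing the left pass, the backwards pass and the catch-all join:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class GreedyRouteMatcher   // illustrative name
    {
        public static Dictionary<string, string> Match(string[] routeSegments, string[] requestSegments)
        {
            var values = new Dictionary<string, string>();
            int greedy = Array.FindIndex(routeSegments, s => s.StartsWith("{*"));

            if (greedy < 0)
            {
                if (routeSegments.Length != requestSegments.Length) return null;
                for (int i = 0; i < routeSegments.Length; i++)
                    if (!Bind(routeSegments[i], requestSegments[i], values)) return null;
                return values;
            }

            // every segment is mandatory, so the request must be at least as long as the route
            if (requestSegments.Length < routeSegments.Length) return null;

            // pass 1: left to right, up to the greedy segment
            for (int i = 0; i < greedy; i++)
                if (!Bind(routeSegments[i], requestSegments[i], values)) return null;

            // pass 2: right to left, until the greedy segment is reached again
            int tail = routeSegments.Length - 1 - greedy;
            for (int i = 0; i < tail; i++)
                if (!Bind(routeSegments[routeSegments.Length - 1 - i],
                          requestSegments[requestSegments.Length - 1 - i], values)) return null;

            // whatever is left in the middle belongs to the catch-all parameter, in original order
            string name = routeSegments[greedy].Trim('{', '}', '*');
            values[name] = string.Join("/",
                requestSegments.Skip(greedy).Take(requestSegments.Length - tail - greedy).ToArray());
            return values;
        }

        private static bool Bind(string routeSegment, string requestSegment, Dictionary<string, string> values)
        {
            if (routeSegment.StartsWith("{"))
                values[routeSegment.Trim('{', '}')] = requestSegment;          // parameter segment
            else if (!routeSegment.Equals(requestSegment, StringComparison.OrdinalIgnoreCase))
                return false;                                                   // constant segment must match
            return true;
        }
    }

    With the example above, Match splits {action}/{*topicTree}/{id}/{title} against Show/Topic/SubTopic/SubSubTopic/123/This-is-an-example and yields action=Show, topicTree=Topic/SubTopic/SubSubTopic, id=123, title=This-is-an-example.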

    Read the article

  • httpModules not working on iis7

    - by roncansan
    Hi, I have the following module public class LowerCaseRequest : IHttpModule { public void Init(HttpApplication context) { context.BeginRequest += new EventHandler(this.OnBeginRequest); } public void Dispose() { } public void OnBeginRequest(Object s, EventArgs e) { HttpApplication app = (HttpApplication)s; if (app.Context.Request.Url.ToString().ToLower().EndsWith(".aspx")) { if (app.Context.Request.Url.ToString() != app.Context.Request.Url.ToString().ToLower()) { HttpResponse response = app.Context.Response; response.StatusCode = (int)HttpStatusCode.MovedPermanently; response.Status = "301 Moved Permanently"; response.RedirectLocation = app.Context.Request.Url.ToString().ToLower(); response.SuppressContent = true; response.End(); } if (!app.Context.Request.Url.ToString().StartsWith(@"http://zeeprico.com")) { HttpResponse response = app.Context.Response; response.StatusCode = (int)HttpStatusCode.MovedPermanently; response.Status = "301 Moved Permanently"; response.RedirectLocation = app.Context.Request.Url.ToString().ToLower().Replace(@"http://zeeprico.com", @"http://www.zeeprico.com"); response.SuppressContent = true; response.End(); } } } } the web.config looks like <system.web> <httpModules> <remove name="WindowsAuthentication" /> <remove name="PassportAuthentication" /> <remove name="AnonymousIdentification" /> <remove name="UrlAuthorization" /> <remove name="FileAuthorization" /> <add name="LowerCaseRequest" type="LowerCaseRequest" /> <add name="UrlRewriter" type="Intelligencia.UrlRewriter.RewriterHttpModule, Intelligencia.UrlRewriter" /> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </httpModules> </system.web> It works grate on my PC running XP and IIS 5.1 but on my webserver running IIS7 and WS 2008 dosn't works, please help I don't know how to work this out. Thanks

    Read the article

  • HTTP 500 ERROR on CAS Server while setting SSLVerifyClent as "required"

    - by Huiyu.Bird
    I have 3 servers, a Apache Server, a JBOSS Server and a CAS Server for SSO. The Apache Server resolve all request with a domain such as www.request.com, and the path of CAS Server is www.request.com/cas, and JBOSS Server is www.request.com/jboss (This app got a CAS client). My problem is if I set SSLVerifyClient require for the NameVirtualHost of www.request.com in my Apache Server, I got a HTTP 500 error during the redirecting to the JBOSS Server(http://www.request.com/jboss), after logined in the CAS login page successfully. But everything goes successfully if there is no SSLVerifyClient require . Error logs of my Apache Server : [Mon Apr 19 17:07:25 2010] [error] Re-negotiation handshake failed: Not accepted by client!? Error logs of my JBOSS Server : 2010-04-19 17:29:57,263 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[/jboss].[jsp]] (ajp-0.0.0.0-8009-1) Servlet.service() for servlet jsp threw exception org.jasig.cas.client.validation.TicketValidationException: The CAS server returned no response. at org.jasig.cas.client.validation.AbstractUrlBasedTicketValidator.validate(AbstractUrlBasedTicketValidator.java:162) at org.jasig.cas.client.validation.AbstractTicketValidationFilter.doFilter(AbstractTicketValidationFilter.java:129) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jasig.cas.client.authentication.AuthenticationFilter.doFilter(AuthenticationFilter.java:103) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jasig.cas.client.session.SingleSignOutFilter.doFilter(SingleSignOutFilter.java:78) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:96) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:75) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.in Any tips will be highly appreciated. Thanks in advance.

    Read the article

  • UIWebView is ignoring cookies?!

    - by A. Miladinovic
    Hello everybody, I am creating an app for the iPhone that fetches a specific page from a website, but to be able to view this site you need to log in with a username and a password. I've created this code:

        NSString *urlAddress = @"http://www.wdg-hamburg.de";
        NSURL *url = [NSURL URLWithString:urlAddress];
        NSString *post = @"name=loginname&pass=pass&form_id=user_login&op=Anmelden";
        NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding];
        NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
        [request setHTTPMethod: @"POST"];
        [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
        [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
        [request setHTTPBody:postData];
        [vPlan loadRequest:request];

    But the UIWebView does not log me in! I am logging NSHTTPCookieStorage to the console and it shows the necessary cookie, but I don't get logged in. Does anybody know how I can fix that? Kind regards, A. Miladinovic

    Read the article

  • The server returned an address in response to the PASV command that is different than the address to which the FTP connection was made

    - by senzacionale
    {System.Net.WebException: The server returned an address in response to the PASV command that is different than the address to which the FTP connection was made.
        at System.Net.FtpWebRequest.CheckError()
        at System.Net.FtpWebRequest.SyncRequestCallback(Object obj)
        at System.Net.CommandStream.Abort(Exception e)
        at System.Net.FtpWebRequest.FinishRequestStage(RequestStage stage)
        at System.Net.FtpWebRequest.GetRequestStream()
        at BackupDB.Program.FTPUploadFile(String serverPath, String serverFile, FileInfo LocalFile, NetworkCredential Cred) in D:\PROJEKTI\BackupDB\BackupDB\Program.cs:line 119}

    Code:

        FTPMakeDir(new Uri(serverPath + "/"), Cred);
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverPath + serverFile);
        request.UsePassive = true;
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = Cred;
        byte[] buffer = new byte[10240]; // Read/write 10kb
        using (FileStream sourceStream = new FileStream(LocalFile.ToString(), FileMode.Open))
        {
            using (Stream requestStream = request.GetRequestStream())
            {
                int bytesRead;
                do
                {
                    bytesRead = sourceStream.Read(buffer, 0, buffer.Length);
                    requestStream.Write(buffer, 0, bytesRead);
                } while (bytesRead > 0);
            }
            response = (FtpWebResponse)request.GetResponse();
            response.Close();
        }
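
    For background: this WebException usually means the FTP server answered the PASV command with an IP address different from the one the control connection used (typical when the server sits behind NAT and advertises its internal address). FtpWebRequest does not expose an obvious way to override the advertised address, so a common client-side workaround is to fall back to active mode; a minimal sketch, keeping the rest of the upload code above unchanged:

        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverPath + serverFile);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = Cred;
        // Active mode: the client opens the data connection itself instead of
        // connecting to the (possibly NAT-mangled) address returned by PASV.
        request.UsePassive = false;

    Note that active mode needs the client to accept the inbound data connection, so a client-side firewall can block it; fixing the server's advertised PASV address is the cleaner long-term solution.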

    Read the article

  • JSON and jQuery.ajax

    - by Andreas
    Hello, I'm trying to use the jQuery UI autocomplete to communicate with a web service whose response format is JSON, but I am unable to do so. My web service is not even executed; the path should be correct, since the error message does not complain about it. What strikes me are the headers: the response is SOAP but the request is JSON. Is it supposed to be like this?

        Response headers:
            Content-Type  application/soap+xml; charset=utf-8
        Request headers:
            Accept        application/json, text/javascript, */*
            Content-Type  application/json; charset=utf-8

    The error message I get is as follows (sorry for the huge message, but it might be of importance):

        soap:Receiver System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Xml.XmlException: Data at the root level is invalid. Line 1, position 1.
            at System.Xml.XmlTextReaderImpl.Throw(Exception e)
            at System.Xml.XmlTextReaderImpl.Throw(String res, String arg)
            at System.Xml.XmlTextReaderImpl.ParseRootLevelWhitespace()
            at System.Xml.XmlTextReaderImpl.ParseDocumentContent()
            at System.Xml.XmlTextReaderImpl.Read()
            at System.Xml.XmlTextReader.Read()
            at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.Read()
            at System.Xml.XmlReader.MoveToContent()
            at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.MoveToContent()
            at System.Web.Services.Protocols.SoapServerProtocolHelper.GetRequestElement()
            at System.Web.Services.Protocols.Soap12ServerProtocolHelper.RouteRequest()
            at System.Web.Services.Protocols.SoapServerProtocol.RouteRequest(SoapServerMessage message)
            at System.Web.Services.Protocols.SoapServerProtocol.Initialize()
            at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing)
            --- End of inner exception stack trace ---

    This is my code:

        $('selector').autocomplete({
            source: function(request, response) {
                $.ajax({
                    url: "../WebService/Member.asmx",
                    dataType: "json",
                    contentType: "application/json; charset=utf-8",
                    type: "POST",
                    data: JSON.stringify({prefixText: request.term}),
                    success: function(data) {
                        alert('success');
                    },
                    error: function(XMLHttpRequest, textStatus, errorThrown){
                        alert('error');
                    }
                })
            },
            minLength: 1,
            select: function(event, ui) {
            }
        });

    And my web service looks like this:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [System.ComponentModel.ToolboxItem(false)]
        [ScriptService]
        public class Member : WebService
        {
            [WebMethod(EnableSession = true)]
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public string[] GetMembers(string prefixText)
            {
                code code code
            }
        }

    What am I doing wrong? Thanks in advance :)
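
    A likely cause: the POST goes to Member.asmx itself, so ASP.NET routes it to the SOAP handler, which then fails to parse the JSON body ("Data at the root level is invalid"). For a [ScriptService] the method name has to be appended to the .asmx URL, and with ASP.NET 3.5 script services the JSON result comes back wrapped in a "d" property. A hedged sketch of the adjusted $.ajax call, reusing the names from the code above:

        $.ajax({
            url: "../WebService/Member.asmx/GetMembers",   // method name appended to the URL
            dataType: "json",
            contentType: "application/json; charset=utf-8",
            type: "POST",
            data: JSON.stringify({ prefixText: request.term }),
            success: function (data) {
                // ASP.NET AJAX wraps the return value, so the string array is in data.d
                response(data.d);
            },
            error: function (xhr, textStatus, errorThrown) {
                response([]);
            }
        });

    The parameter name in the JSON body (prefixText) must match the web method's parameter exactly, which it already does here.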

    Read the article
