Search Results

Search found 17054 results on 683 pages for 'jms request reply'.


  • Unable to display WAP version during "Extended permission"

    - by Mickey Cheong
    Hi, I'm trying to redirect to Facebook to request permission to publish to the stream. However, when I do not specify any display parameter it shows me the web version of the dialog; I want the WAP version instead. What should I do? Here's the code:

        <form method="post" action="https://graph.facebook.com/oauth/authorize">
            <input type="hidden" name="client_id" value="1XXXXXXXXXXXXXX" />
            <input type="hidden" name="scope" value="publish_stream" />
            <input type="hidden" name="redirect_uri" value="http://www.redirect.com/" />
            <input class="button" type="submit" value="Request..." />
        </form>

    When I submit this form, it redirects to http://www.facebook.com/connect/prompt_ … y=page.... If I change the redirected URL's display parameter to "wap", it works. However, if I submit to https://graph.facebook.com/oauth/authorize?display=wap, nothing happens and it redirects back to the source URL. Any help or hints would be greatly appreciated. Thanks a mil, Mickey
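    One thing worth experimenting with is building the full authorization URL yourself (as a redirect rather than a form POST) with display=wap already in the query string; whether Facebook honors the parameter that way is a separate question. Below is a minimal sketch using only the Python standard library; the client_id and redirect_uri are the placeholder values from the post, not real credentials.

        # Build the OAuth authorize URL with the display parameter included.
        try:
            from urllib.parse import urlencode   # Python 3
        except ImportError:
            from urllib import urlencode         # Python 2

        params = {
            'client_id': '1XXXXXXXXXXXXXX',             # placeholder from the post
            'scope': 'publish_stream',
            'redirect_uri': 'http://www.redirect.com/',
            'display': 'wap',                           # ask for the WAP dialog explicitly
        }
        print('https://graph.facebook.com/oauth/authorize?' + urlencode(params))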

    Read the article

  • iPhone app development: How can I get a table view where tapping a cell expands a text view below it?

    - by Vivek Sharma
    How can I get a table view in which, when I tap a cell, a text view expands just below that cell? It should look like tapping a button creates a text view between the tapped button and the button below it. I am trying to describe it pictorially.

    Table view before tapping:

        button 1
        button 2
        button 3

    Table view after tapping button 2:

        button 1
        button 2
        text view
        button 3

    Please help me solve this problem; I am new to iPhone app development. Reply to me at [email protected]. Thanks in advance, Vivek Sharma, India

    Read the article

  • WCF events on the server side

    - by Eisenrich
    Hi all, I'm working on an application in WCF and want to receive events on the server side. I have a web page that, upon request, needs to register a fingerprint. The web page requests the connection of the device and then polls for the answer every second for 15 seconds. The server-side code is apparently "simple" but doesn't work. Here it is:

        [ServiceContract]
        interface IEEtest
        {
            [OperationContract]
            void EEDirectConnect();
        }

        class EETest : IEEtest
        {
            public void EEDirectConnect()
            {
                CZ ee = new CZ(); // initiates the device DLL
                ee.Connect_Net("192.168.1.200", 4011);
                ee.OnFinger += new _IEEEvents_OnFingerEventHandler(ee_OnFinger);
            }

            public void ee_OnFinger()
            {
                // here I have a breakpoint
            }
        }

    Every time I put my finger on the device, it should fire the event. In fact, if I run

        static void Main()
        {
            EETest pp = new EETest();
            pp.EEDirectConnect();
        }

    it works fine. But when called through my proxy it doesn't fire the event. Do you have any tips or recommendations, or can you see the error? Thanks everyone.

    Read the article

  • Can't log in a user in MVC!

    - by devlife
    I have been scratching my head on this for a while now but still can't get it. I'm trying to simply log in a user in an MVC 2 application. I have tried everything that I know to try but still can't figure out what I'm doing wrong. Here are a few things that I have tried:

        FormsAuthentication.SetAuthCookie(emailAddress, rememberMe);

        var cookie = FormsAuthentication.GetAuthCookie(emailAddress, rememberMe);
        HttpContext.Response.Cookies.Add(cookie);

        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(emailAddress, rememberMe, 15);
        FormsIdentity identity = new FormsIdentity(ticket);
        GenericPrincipal principal = new GenericPrincipal(identity, new string[0]);
        HttpContext.User = principal;

    I'm not sure if any of this is the right thing to do (as it's not working). After setting HttpContext.User = principal, Request.IsAuthenticated == true. However, in Global.asax I have this:

        HttpCookie authenCookie = Context.Request.Cookies.Get(FormsAuthentication.FormsCookieName);

    The only cookie that is ever available is the ASP.NET session cookie. Any ideas at all would be much appreciated!

    Read the article

  • User must have accepted TOS - Facebook Graph API error when posting photos to group page

    - by user370309
    Hi all, I've been struggling to upload an image from the user's computer and post it to our group page using the Facebook Graph API. I was able to send a POST request to Facebook with the image; however, I'm getting this error back: ERROR: (#200) User must have accepted TOS. To some extent, I don't believe that I need the user to authorize himself, as the photo is being uploaded to our group page. This is the code I'm using:

        if ($albumId != null) {
            $args = array('message' => $description);
            $args[basename($photoPath)] = '@' . realpath($photoPath);
            $ch = curl_init();
            $url = 'https://graph.facebook.com/' . $albumId . '/photos?' . $token;
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_HEADER, false);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $args);
            curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
            $data = curl_exec($ch);
            $photoId = json_decode($data, true);
            if (isset($photoId['error'])) die('ERROR: ' . $photoId['error']['message']);
            $temp = explode('.', sprintf('%f', $photoId['id']));
            $photoId = $temp[0];
            return $photoId;
        }

    Can somebody tell me if I need to request extra permissions from the user, or what I'm doing wrong? Thanks very much!

    Read the article

  • Qt 4.x: how to implement drag-and-drop onto the desktop or into a folder?

    - by Jeremy Friesner
    Hi all, I've written a little file-transfer application in C++ using Qt 4.x. It logs into a server, shows the user a list of files available on the server, and lets the user upload or download files. This all works fine; you can even drag a file in from the desktop (or from an open folder), and when you drop the file icon into the server-files-list-view, the dropped file gets uploaded to the server. Now I have a request for the opposite action as well: my users would like to be able to drag a file out of the server-files-list-view and onto the desktop, or into an open folder window, and have that file get downloaded into that location. That seems like a reasonable request, but I don't know how to implement it. Is there a way for a Qt application to find out the directory corresponding to where the drop event occurred, when the icon was dropped onto the desktop or into an open folder window? Ideally this would be a Qt-based platform-neutral mechanism, but if that doesn't exist, then platform-specific mechanisms for Mac OS X and Windows (XP or higher) would suffice. Any ideas?

    Read the article

  • REST web service error handling

    - by Pratik
    Hi there, I am using a REST web service for a few basic operations, like creating and searching. The request XML looks something like this:

        <customer>
            <name/>
            .....
        </customer>

    For a successful operation I return the same customer XML with extra fields populated in it (e.g. systemId, which was blank in the request), with Response.Status = 200. For an unsuccessful operation I return something like the following, with different error codes, e.g. Response.Status = 422 (Unprocessable Entity), Response.Status = 500 (Internal Server Error) and a few others:

        <errors>
            <error>An exception occurred while creating the customer</error>
            <error>blah argument is not valid.</error>
        </errors>

    Now I am not sure whether this is the correct way of sending the errors to the client. Maybe it should be present in the header of the response instead. I would really appreciate any help. Thanks!
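    For what it's worth, this convention is straightforward to consume from a client: branch on the HTTP status code, then parse either the enriched customer document or the errors list from the body. Below is a minimal client-side sketch in Python; the URL and payload are hypothetical, and the requests library is assumed to be installed.

        # Consume the status-code-plus-error-body convention described above.
        import requests
        import xml.etree.ElementTree as ET

        resp = requests.post('http://example.com/customers',                  # hypothetical endpoint
                             data='<customer><name>Acme</name></customer>',
                             headers={'Content-Type': 'application/xml'})
        root = ET.fromstring(resp.text)
        if resp.status_code == 200:
            print('created, systemId =', root.findtext('systemId'))           # enriched customer XML
        else:                                                                  # 422, 500, ...
            for err in root.findall('error'):
                print('server error:', err.text)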

    Read the article

  • What is the right way to make a new XMLHttpRequest from an RJS response in Ruby on Rails?

    - by Yuri Baranov
    I'm trying to come closer to a solution for the problem of my previous question. The scheme I would like to try is the following:

        1. The user requests an action from a RoR controller.
        2. The action makes some database queries, makes some calculations, sets some session variable(s) and returns some RJS code as the response. This code could either update a progress bar and make another ajax request, or display the final result (e.g. a chart graphic) if all the processing is finished.
        3. The browser evaluates the javascript representation of the RJS. It may make another (recursive? Is recursion allowed at all?) request, or just display the result for the user.

    So, my question this time is: how can I embed an XMLHttpRequest call into RJS code properly? Some things I'd like to know are:

        - Should I create a new thread to avoid stack overflow?
        - What Rails helpers (if any) should I use?
        - Has anybody ever done something similar before on Rails or with other frameworks?
        - Is my idea sane?

    Read the article

  • Implementing a tree with a reference to the root for each leaf

    - by AntonAL
    Hi, I am implementing a products catalog that looks like this:

        group 1
          subgroup 1
          subgroup 2
            item 1
            item 2
            ...
            item n
          ...
          subgroup n
        group 2
          subgroup 1
          ...
          subgroup n
        group 3
        ...
        group n

    The models:

        class CatalogGroup < ActiveRecord::Base
          has_many :catalog_items
          has_many :catalog_items_all, :class_name => "CatalogItem", :foreign_key => "catalog_root_group_id"
        end

        class CatalogItem < ActiveRecord::Base
          belongs_to :catalog_group
          belongs_to :catalog_root_group, :class_name => "CatalogGroup"
        end

    The migration:

        class CreateCatalogItems < ActiveRecord::Migration
          def self.up
            create_table :catalog_items do |t|
              t.integer :catalog_group_id
              t.integer :catalog_root_group_id
              t.string :code
              t.timestamps
            end
          end
        end

    For convenience, I referenced each CatalogItem to its top-most CatalogGroup and named this association "catalog_root_group". This gives us a simple implementation of search requests like "show me all items in group 1": we only have to deal with CatalogModel.catalog_root_group. The problem is that this association doesn't work; I always get "catalog_root_group" equal to nil. I have also tried to avoid using the reference to the root group ("catalog_root_group"), but I cannot construct the appropriate search request in Ruby. Do you know how to do it?

    Read the article

  • jQuery autocomplete for dynamically created inputs

    - by Jamatu
    Hi all! I'm having an issue using jQuery autocomplete with dynamically created inputs (again created with jQuery). I can't get autocomplete to bind to the new inputs.

    Autocomplete:

        $("#description").autocomplete({
            source: function(request, response) {
                $.ajax({
                    url: "../../works_search",
                    dataType: "json",
                    type: "post",
                    data: { maxRows: 15, term: request.term },
                    success: function(data) {
                        response($.map(data.works, function(item) {
                            return { label: item.description, value: item.description }
                        }))
                    }
                })
            },
            minLength: 2,
        });

    New table row with inputs:

        var i = 1;
        var $table = $("#works");
        var $tableBody = $("tbody", $table);
        $('a#add').click(function() {
            var newtr = $('<tr class="jobs"><td><input type="text" name="item[' + i + '][quantity]" /></td><td><input type="text" id="description" name="item[' + i + '][works_description]" /></td></tr>');
            $tableBody.append(newtr);
            i++;
        });

    I'm aware that the problem is due to the content being created after the page has been loaded, but I can't figure out how to get around it. I've read several related questions and come across the jQuery live method, but I'm still in a jam! Any advice?

    Read the article

  • QueryString malformed after URLDecode

    - by pdavis
    I'm trying to pass a Base64 string into a C#.NET web application via the query string. When the string arrives, the "+" (plus) sign is being replaced by a space. It appears that the automatic URL-decode process is doing this. I have no control over what is being passed via the query string. Is there any way to handle this server side? Example:

        http://localhost:3399/Base64.aspx?VLTrap=VkxUcmFwIHNldCB0byAiRkRTQT8+PE0iIHBsdXMgb3IgbWludXMgNSBwZXJjZW50Lg==

    produces:

        VkxUcmFwIHNldCB0byAiRkRTQT8 PE0iIHBsdXMgb3IgbWludXMgNSBwZXJjZW50Lg==

    People have suggested URL-encoding the query string:

        System.Web.HttpUtility.UrlEncode(yourString)

    I can't do that, as I have no control over the calling routine (which is working fine with other languages). There was also the suggestion of replacing spaces with a plus sign:

        Request.QueryString["VLTrap"].Replace(" ", "+");

    I had thought of this, but my concern, and I should have mentioned this to start, is that I don't know what other characters might be malformed in addition to the plus sign. My main goal is to intercept the query string before it is run through the decoder. To this end I tried looking at Request.QueryString.ToString(), but it contained the same malformed information. Is there any way to look at the raw query string before it is URL-decoded? After further testing it appears that .NET expects everything coming in from the query string to be URL-encoded, but the browser does not automatically URL-encode GET requests.
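    The underlying issue is independent of .NET: in a query string, a literal "+" means a space, so a Base64 value has to be percent-encoded as %2B (and "=" as %3D) by whoever builds the URL. A quick illustration using only the Python standard library, with a shortened made-up Base64-style value:

        # '+' survives decoding only when it is sent percent-encoded as %2B.
        try:
            from urllib.parse import quote, unquote_plus   # Python 3
        except ImportError:
            from urllib import quote, unquote_plus         # Python 2

        token = 'RkRTQT8+PE0i'            # made-up Base64-style fragment
        print(unquote_plus(token))        # 'RkRTQT8 PE0i' -- '+' decoded as a space
        print(quote(token, safe=''))      # 'RkRTQT8%2BPE0i' -- what the caller should send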

    Read the article

  • UnicodeEncodeError when saving a file through Django's default file-based storage backend

    - by Ivan Kuznetsov
    When I attempt to add a file with Russian characters in its name to the model instance through the default instance.file_field.save method, I get a UnicodeEncodeError (ASCII codec, ordinal not in range(128)) from the storage backend; the stack trace ends in os.path.exists. If I write this file through plain Python file open/write, everything works. All filenames are in UTF-8. I get this error only on the Gentoo test box; on my Ubuntu workstation everything works fine.

        class Article(models.Model):
            file = models.FileField(null=True, blank=True, max_length=300,
                                    upload_to='articles_files/%Y/%m/%d/')

    Traceback:

        File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response
          100. response = callback(request, *callback_args, **callback_kwargs)
        File "/usr/lib/python2.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
          24. return view_func(request, *args, **kwargs)
        File "/var/www/localhost/help/wiki/views.py" in edit_article
          338. new_article.file.save(fp, fi, save=True)
        File "/usr/lib/python2.6/site-packages/django/db/models/fields/files.py" in save
          92. self.name = self.storage.save(name, content)
        File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in save
          47. name = self.get_available_name(name)
        File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in get_available_name
          73. while self.exists(name):
        File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in exists
          196. return os.path.exists(self.path(name))
        File "/usr/lib/python2.6/genericpath.py" in exists
          18. st = os.stat(path)

        Exception Type: UnicodeEncodeError at /edit/
        Exception Value: ('ascii', u'/var/www/localhost/help/i/articles_files/2010/03/17/\u041f\u0440\u0438\u0432\u0435\u0442', 52, 58, 'ordinal not in range(128)')
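    The traceback suggests the real problem is the process's filesystem encoding: os.stat() has to encode the unicode path first, and on the Gentoo box the environment apparently advertises ASCII instead of UTF-8. A small sketch of that failure mode (the path literal is taken from the traceback above):

        # Reproduce the encode step that os.stat() performs implicitly.
        import sys

        name = u'\u041f\u0440\u0438\u0432\u0435\u0442'   # the Cyrillic part of the path
        print(sys.getfilesystemencoding())               # ASCII-like on the failing box, 'utf-8' on Ubuntu
        try:
            name.encode('ascii')                         # what an ASCII filesystem encoding forces
        except UnicodeEncodeError as exc:
            print(exc)                                   # ordinal not in range(128), as in the traceback
        print(name.encode('utf-8'))                      # fine once LANG/LC_ALL advertise UTF-8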

    Read the article

  • Controlling ASP.NET output cache memory usage

    - by Josh Einstein
    I would like to use output caching with WCF Data Services, and although there's nothing specifically built in to support caching, there is an OnStartProcessingRequest method that allows me to hook in and set the cacheability of the request using normal ASP.NET mechanisms. But I am worried about the worker process getting recycled due to excessive memory consumption if large responses are cached. Is there a way to specify an upper limit for the ASP.NET output cache so that if this limit is exceeded, items in the cache will be discarded? I've seen the caching configuration settings, but I get the impression from the documentation that this is for explicit caching via the Cache object, since there is a separate outputCacheSettings which has no memory-related attributes. Here's a code snippet from Scott Hanselman's post that shows how I'm setting the cacheability of the request:

        protected override void OnStartProcessingRequest(ProcessRequestArgs args)
        {
            base.OnStartProcessingRequest(args);
            // Cache for a minute based on querystring
            HttpContext context = HttpContext.Current;
            HttpCachePolicy c = HttpContext.Current.Response.Cache;
            c.SetCacheability(HttpCacheability.ServerAndPrivate);
            c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60));
            c.VaryByHeaders["Accept"] = true;
            c.VaryByHeaders["Accept-Charset"] = true;
            c.VaryByHeaders["Accept-Encoding"] = true;
            c.VaryByParams["*"] = true;
        }

    Read the article

  • Problem with detecting the value of the drop-down list on the server (servlet) side

    - by Harry Pham
    The client code is pretty simple:

        <form action="DDServlet" method="post">
            <input type="text" name="customerText">
            <select id="customer">
                <option name="customerOption" value="3"> Tom </option>
                <option name="customerOption" value="2"> Harry </option>
            </select>
            <input type="submit" value="send">
        </form>

    Here is the code in the servlet:

        Enumeration paramNames = request.getParameterNames();
        while (paramNames.hasMoreElements()) {
            String paramName = (String) paramNames.nextElement(); // get the next element
            System.out.println(paramName);
        }

    When I print them out, I only see customerText, but not customerOption. Any idea why, guys? What I hope for is that if I select Tom in my option, once I submit, on my servlet I should be able to do this:

        String paramValues[] = request.getParameterValues(paramName);

    and get back the value 3.

    Read the article

  • Is it possible to use OAuth starting from the service provider website?

    - by Brian Armstrong
    I want to let people create apps that use my API and authenticate them with OAuth. Normally this process starts from the consumer service website (say TwitPic), and they request an access token from the service provider (Twitter). The user is then taken to the service provider website, where they have to allow or deny access to the consumer. I'm wondering if it's possible to initiate this process from the service provider website instead. So in this example you would start on Twitter's site, and maybe there is a section marked "do you want to turn on access for TwitPic?". If you click yes, it passes the access token directly to TwitPic, which now has access to your account. Basically, fewer steps. I'm looking at the OAuth docs and it looks like the request token is generated on the consumer side and used later to turn it into an access token. So it's not really designed with what I described above in mind, but I thought there might be a way. http://oauth.net/core/1.0/ (search for "steps") Thanks!

    Read the article

  • Should I work with multiple recruiters for a job search or just pick one?

    - by BuckeyeSoftwareGuy
    So I posted my resume on Dice and Monster last night, and today I've received 5 voicemails and about 6 emails from different recruiters wanting to talk to me. What's the best way to handle this situation? I want to reply to them all, of course, as I'm very interested in making an imminent job change. Is there some sort of rule of thumb here? It seems to me they would want to work with me exclusively if they could. I don't want to limit my options, though.

    Read the article

  • post-commit hook fails

    - by jarad mayers
    I have a master/slave setup using Win2k8R with SVN 1.6.9 and TortoiseSVN 1.6.7. Access is through Apache over HTTP. Everything works, but when I commit I get the following message:

        Error: post-commit hook failed (exit code 1) with output:
        Error: The process cannot access the file because it is being used by another process.

    This happens when using multiple TortoiseSVN dialogs to commit files in rapid succession. If I use one TortoiseSVN dialog and wait until the commit reply is back, I don't see the problem. In other words, committing one at a time causes no issue. The post-commit script output is logged. Even though I get the above error, when I check the master and slave repositories the files have been replicated okay with no issue. I am wondering how this issue can be solved.

    Read the article

  • Ways to poll server status

    - by Yijinsei
    Hi guys, I am trying to create a JSP page that shows the status of a group of local servers. Currently I have a scheduler class that constantly polls the servers at a 30-second interval, with a 5-second delay waiting for each server's reply, and provides the JSP page with the information. However, I find this approach inaccurate, since it takes some time before the scheduler's information is updated. Do you have a better way to check the status of several servers within a local network?
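    One common alternative is to poll all servers concurrently with a short per-host timeout, so one slow or dead host never delays the others, and keep the latest result (with its timestamp) for the page to read. A minimal sketch in Python; the host list and the assumption that "up" means "accepts a TCP connection on a known port" are both hypothetical, and since the posted setup is JSP/Java this is best read as pseudocode for the approach:

        # Poll every host in parallel with a short timeout and return a status map.
        import socket
        from concurrent.futures import ThreadPoolExecutor

        HOSTS = [('10.0.0.11', 80), ('10.0.0.12', 80), ('10.0.0.13', 8080)]   # hypothetical servers

        def is_up(host, port, timeout=5.0):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        def poll_once():
            with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
                return dict(zip(HOSTS, pool.map(lambda hp: is_up(*hp), HOSTS)))

        print(poll_once())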

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors in order to make each step in the process scalable. Here's some info:

        - The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart.
        - Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor.
        - I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when it is done, not Send() them.
        - A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message.
        - A process may need to join forks. I imagine I should use Sagas for this.

    Hopefully these assumptions are good, otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages.

        - NodeA workers contain an IHandleMessages processor, and publish EventA
        - NodeB workers contain an IHandleMessages processor, and publish EventB
        - NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:

        <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data">
          <MessageEndpointMappings>
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeB:

        <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeC:

        <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:

        <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
        <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:

        <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
        <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:

        <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
        <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen:

        1. Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
        2. Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies. So, my questions:

        - Is this kind of setup possible?
        - Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
        - Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's "done" event is a reply, how do you keep the Publish that enables later extensibility on published events?

    Read the article

  • Connection details and timeouts in a Java web service client

    - by f1sh
    Hello fellow coders, I have to implement a web service client against a given WSDL file. I used the JDK's wsimport tool to create Java classes from the WSDL, as well as a class that wraps the web service's only method (enhanceAddress(auth, param, address)) in a simple Java method. So far, so good. The web service is functional and returns results correctly. The code looks like this:

        try {
            EnhancedAddressList uniservResponse =
                getWebservicePort().enhanceAddress(m_auth, m_param, uniservAddress);
            // where the port is the HTTP SOAP 1.2 endpoint
        } catch (Throwable e) {
            throw new AddressValidationException("Error during uniserv webservice request.", e);
        }

    The problem now: I need to get information about the connection and any error that might occur in order to populate various JMX values (such as COUNT_READ_TIMEOUT, COUNT_CONNECT_TIMEOUT, ...). Unfortunately, the method does not officially throw any exceptions, so in order to get details about a ConnectException I need to call getCause() on the ClientTransportException that will be thrown. Even worse: I tried to test the read timeout value, but there is none. I changed the service's location in the WSDL file to post the request to a PHP script that simply waits forever and does not return. Guess what: the web service client does not time out but waits forever as well (I killed the app after 30+ minutes of waiting). That is not an option for my application, as I eventually run out of TCP connections if some of them get 'stuck'. The enhanceAddress(auth, param, address) method is not implemented but annotated with javax.jws.* annotations, meaning that I cannot see, change or inspect the code that is actually executed. Do I have any option but to throw the whole wsimport/javax.jws stuff away and implement my own SOAP client?

    Read the article

  • Calculation of charged traffic in GPRS network

    - by TyBoer
    I am working with a distributed application communicating over GPRS. I use UDP packets to send business data and ICMP pings to verify connectivity. Now I have a problem with calculating the traffic for which I will be charged by the provider. I have to consider the following factors:

        1. UDP payload: that is obvious.
        2. UDP overhead: UDP header + IP header = 8 + 20 bytes.
        3. ICMP echo request without data: IP header + ICMP payload = 28 bytes.
        4. ICMP echo reply: as in 3.

    The above means that for every data packet I am charged for the payload + 28 bytes, and for every ping 56 bytes. Am I right, or am I missing/misunderstanding something?
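    A worked example of that accounting as a small Python sketch (sizes in bytes; it simply encodes the arithmetic stated above and makes no allowance for link-layer framing or retransmissions the provider might also bill):

        # Per-message charge under the stated assumptions.
        UDP_HEADER = 8
        IP_HEADER = 20
        ICMP_ECHO = IP_HEADER + 8            # IP header + 8-byte ICMP header, no data

        def udp_charge(payload_len):
            return payload_len + UDP_HEADER + IP_HEADER   # payload + 28 bytes of overhead

        def ping_charge():
            return 2 * ICMP_ECHO                          # echo request + echo reply = 56 bytes

        # e.g. one 100-byte business message plus one connectivity ping:
        print(udp_charge(100) + ping_charge())            # 128 + 56 = 184 bytes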

    Read the article

  • Passing a JSON object from controller to view (JSP)

    - by user752233
    I am trying to send a JSON object from my controller to my view, but I am unable to. Please help me out! I am using the following code:

        public class SystemController extends AbstractController {
            protected ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception {
                ModelAndView mav = new ModelAndView("SystemInfo", "System", "S");
                response.setContentType("application/json;charset=UTF-8");
                response.setHeader("Cache-Control", "no-cache");
                JSONObject jsonResult = new JSONObject();
                jsonResult.put("JVMVendor", System.getProperty("java.vendor"));
                jsonResult.put("JVMVersion", System.getProperty("java.version"));
                jsonResult.put("JVMVendorURL", System.getProperty("java.vendor.url"));
                jsonResult.put("OSName", System.getProperty("os.name"));
                jsonResult.put("OSVersion", System.getProperty("os.version"));
                jsonResult.put("OSArchitectire", System.getProperty("os.arch"));
                response.getWriter().write(jsonResult.toString());
                // response.getWriter().close();
                return mav; // return ModelAndView object
            }
        }

    And on the view side I am using:

        <script type="text/javascript">
        Ext.onReady(function(response) {
            //Ext.MessageBox.alert('Hello', 'The DOM is ready!');
            var showExistingScreen = function () {
                Ext.Ajax.request({
                    url : 'system.htm',
                    method : 'POST',
                    scope : this,
                    success: function ( response ) {
                        alert('1');
                        var existingValues = Ext.util.JSON.decode(response.responseText);
                        alert('2');
                    }
                });
            };
            return showExistingScreen();
        });

    Read the article

  • Destroying nested resources in a RESTful way

    - by Alex
    I'm looking for help destroying a nested resource in Merb. My current method seems nearly correct, but the controller raises an InternalServerError during the destruction of the nested object. Here are all the details concerning the request; don't hesitate to ask for more :) Thanks, Alex

    I'm trying to destroy a nested resource using the following route in the router:

        resources :events, Orga::Events do |event|
          event.resources :locations, Orga::Locations
        end

    which gives this jQuery request (delete_ is an implementation of $.ajax with "DELETE"):

        $.delete_("/events/123/locations/456");

    On the Location controller side, I've got:

        def delete(id)
          @location = Location.get(id)
          raise NotFound unless @location
          if @location.destroy
            redirect url(:orga_locations)
          else
            raise InternalServerError
          end
        end

    And the log:

        merb : worker (port 4000) ~ Routed to: {"format"=>nil, "event_id"=>"123", "action"=>"destroy", "id"=>"456", "controller"=>"letsmotiv/locations"}
        merb : worker (port 4000) ~ Params: {"format"=>nil, "event_id"=>"123", "action"=>"destroy", "id"=>"456", "controller"=>"letsmotiv/locations"}
        ~ (0.000025) SELECT `id`, `class_type`, `name`, `prefix`, `type`, `capacity`, `handicap`, `export_name` FROM `entities` WHERE (`class_type` IN ('Location') AND `id` = 456) ORDER BY `id` LIMIT 1
        ~ (0.000014) SELECT `id`, `streetname`, `phone`, `lat`, `lng`, `country_region_city_id`, `location_id`, `organisation_id` FROM `country_region_city_addresses` WHERE `location_id` = 456 ORDER BY `id` LIMIT 1
        merb : worker (port 4000) ~ Merb::ControllerExceptions::InternalServerError - (Merb::ControllerExceptions::InternalServerError)

    Read the article

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to reorganize our test libraries for automation, and nose seems really promising. My question is: what is the best strategy for passing Python objects into nose tests? Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:

        testlib
        \- testmoda
        \- testmodb
        \- testmodc

    In some cases a test module (e.g. testmoda) is nothing but test_something(), test_something2() functions, while in some cases we have a TestModB class in testmodb with the test_anotherthing1(), test_anotherthing2() functions. The cool thing is that nose easily finds both. Most of those test functions are request-factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc. Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutiae is painful. I'd like to take free advantage of nose's beautiful discovery functionality, passing in a connection object of my choosing. Thanks in advance! Rob

    P.S. The connection objects are not pickle-able. :(
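    One nose-friendly way to share a single connection without passing it as a test-function argument is to create it lazily in a module the tests import, and close it in a package-level teardown (nose picks up setup_package/teardown_package functions in a package's __init__.py). A minimal sketch, with a hypothetical Connection class standing in for the real, unpicklable connection object:

        # testlib/__init__.py -- shared connection for all test modules (sketch).

        class Connection(object):          # stand-in for the real server-farm connection
            def __init__(self):
                self.closed = False
            def close(self):
                self.closed = True

        _cnn = None

        def get_connection():
            """Create the shared connection on first use, then reuse it."""
            global _cnn
            if _cnn is None:
                _cnn = Connection()
            return _cnn

        def teardown_package():            # nose runs this once after the whole package
            global _cnn
            if _cnn is not None:
                _cnn.close()
                _cnn = None

    A test module would then just import get_connection from testlib and call it inside each test, so nose's discovery keeps working and the hard-coded driver scripts become unnecessary.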

    Read the article

  • PF, load balanced gateways, and Squid

    - by Santa
    Hi, so I have a FreeBSD router running PF and Squid, and it has three network interfaces: two connected to upstream providers (em0 and em1 respectively), and one for the LAN (re0) that we serve. There is some load balancing configured with PF. Basically, it routes all traffic to ports 1-1024 through one interface (em0) and everything else through the other (em1). Now, I have a Squid proxy also running on the box that transparently redirects any HTTP request from the LAN to port 3128 on 127.0.0.1. Since Squid redirects this request to HTTP outside, it should follow the load-balancing rule through em0, no? The problem is, when we tested it (by browsing from a computer on the LAN to http://whatismyip.com), it reports the external IP of the em1 interface! When we turn Squid off, the external IP of em0 is reported, as expected. How do I make Squid behave with the load-balancing rule that we have set up? Here are the related settings in /etc/pf.conf:

        ext_if1="em1" # DSL
        ext_if2="em0" # T1
        int_if="re0"
        ext_gw1="x.x.x.1"
        ext_gw2="y.y.y.1"
        int_addr="10.0.0.1"
        int_net="10.0.0.0/16"
        dsl_ports = "1024:65535"
        t1_ports = "1:1023"
        ...
        squid=3128
        rdr on $int_if inet proto tcp from $int_net \
            to any port 80 -> 127.0.0.1 port $squid
        pass in quick on $int_if route-to lo0 inet proto tcp \
            from $int_net to 127.0.0.1 port $squid keep state
        ...
        # load balancing
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto tcp from $int_net to any port $dsl_ports keep state
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto udp from $int_net to any port $dsl_ports
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto tcp from $int_net to any port $t1_ports keep state
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto udp from $int_net to any port $t1_ports

    Thanks!

    Read the article
