Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

Page 6/683 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • SoapUI JMS Connections

    - by Damo
    I am using SoapUI to do performance testing of some services over JMS, using WebSphere MQ as the JMS provider. SoapUI uses HermesJMS to provide the JMS connection details for the JMS endpoint. I've noticed that when I send a request from SoapUI, the JMS connection is never closed. This results in hundreds of SYSTEM.DEF.SVRCONN channel connections. It seems to be specific to SoapUI, as HermesJMS itself doesn't exhibit this behaviour. Has anyone else seen this?
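
    For reference, this is the cleanup a standalone JMS client would normally perform, and which appears to be skipped here. A minimal sketch with hypothetical JNDI names ('qcf', 'testQueue'); closing the connection is what releases the underlying MQ channel instance:

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class OneShotRequest {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory factory = (ConnectionFactory) ctx.lookup("qcf"); // hypothetical JNDI name
                Queue queue = (Queue) ctx.lookup("testQueue");                     // hypothetical JNDI name
                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    producer.send(session.createTextMessage("ping"));
                } finally {
                    connection.close(); // releases the SYSTEM.DEF.SVRCONN channel instance
                }
            }
        }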

    Read the article

  • ASP.NET 4 - IIS 7 - Request timed out

    - by sharru
    My website runs on ASP.NET 4 / IIS 7 / Windows Server 2008. CPU sits at 20-30% and the site responds quickly, yet every 2-5 minutes I receive the following error:

        Event code: 3001
        Event message: The request has been aborted.
        Exception type: HttpException
        Exception message: Request timed out.
        Request URL: http://www.xxxx.com/Services/AxRefresh.asmx/AxUpdate
        Request path: /Services/AxRefresh.asmx/AxUpdate
        User host address: 84.110.251.198
        User: Is authenticated: False
        Authentication Type:
        Thread account name: NT AUTHORITY\NETWORK SERVICE

    I read that this error is related to the maximum concurrent requests limit (http://support.microsoft.com/kb/821268), but then I found out that on IIS 7 this limit changed and is no longer relevant (http://msdn.microsoft.com/en-us/library/dd560842(VS.100).aspx). Any other ideas what the problem could be, or where to start looking? Thanks!

    Read the article

  • HTTP request stream not readable outside of request handler

    - by Jason Young
    I'm writing a fairly complicated multi-node proxy, and at one point I need to handle an HTTP request but read from it outside of the http.Server callback (I need to read the request data and line it up with a different response at a different time). The problem is, the stream is no longer readable by then. Below is some simple code that reproduces the issue. Is this normal, or a bug?

        var http = require('http');

        function startServer() {
          http.Server(function (req, res) {
            req.pause();
            checkRequestReadable(req);
            // One second later the stream is no longer readable, despite the pause()
            setTimeout(function () { checkRequestReadable(req); }, 1000);
            setTimeout(function () { res.end(); }, 1100);
          }).listen(1337);
          console.log('Server running on port 1337');
        }

        function checkRequestReadable(req) {
          // The request is not readable on the second call!
          console.log('Request readable? ' + req.readable);
        }

        startServer();

    Read the article

  • Pull Request Changes, Multi-Selection in Advanced View, and Advertisement Changes

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft]

    We deployed a new version of the CodePlex website today.

    Pull Request Changes

    In this release, we have begun to re-focus on pull requests to ensure a productive experience between project users and developers. We feel we made significant progress in this area for this release and look forward to using your feedback to drive future iterations.

    One of the biggest hurdles people have mentioned is the inability to see what a pull request includes without pulling the source down with a Mercurial client. With today's changes, any user can view a pull request, the changesets and changes it includes, and an inline diff of each file. When a pull request is made, the CodePlex website queries for all outgoing changes from the fork to the main repository for a point-in-time comparison. Because of this point-in-time comparison:

    - All existing pull requests created prior to this release will not have changesets associated with them.
    - If new commits are pushed to the fork while a pull request is active, they will not appear associated with the pull request. The pull request will need to be re-submitted for them to appear.

    Once a pull request is created, you can "View the Pull Request", which takes you to a page displaying much more detail about the pull request: who requested it and when, the associated changesets, the description, who it is assigned to (more on this below), and a summarized list of file changes. Each modified file offers a diff of all changes made. Clicking "(view diff)" for a file opens an inline diff experience that lets you quickly navigate through all of the modified files as well as the change blocks within each file. As you browse through each file's changes, the URL is updated to include the file path, so you can send a direct link to a specific file in a pull request. Clicking "(close diff)" brings you back to the original pull request view. View this pull request live on WikiPlex.

    Pull Request Review Assignment

    Another new pull request feature is the ability for project members to assign pull requests for review. Any project member can assign (and re-assign, if needed) a pull request to a project member, who is then notified of the assignment via email. Once the review is complete, the reviewer can either accept or deny the pull request, as in the previous process.

    Multi-Selection in Advanced View Filters

    One of the more frequent recent requests from users is the ability to multi-select advanced view filters for work items. We are happy to announce this is now possible. Simply control-click the options you want for each filter item and your work item query will be refined accordingly. Should you unselect all options for a given filter, it automatically resets to the default option for that filter. Furthermore, the "Direct Link" URL is updated to include the multi-selected options for each filter. Note: the "Direct Link" feature was released in our previous deployment, just never written about. It captures the current state of your query so you can send it to other individuals.

    Advertisement Changes

    Very recently, the advertiser (The Lounge) we partnered with to provide advertising revenue for projects (or donations to charity) was acquired by Lake Quincy Media. There has been no change in the advertising platform offering, and all projects have been converted to the new infrastructure. Project owners should note the new contact information for getting paid.

    The CodePlex team values your feedback and frequently monitors Twitter, our Discussions, and the Issue Tracker for new features or problems. If you've not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • webservice request issue with dynamic request inputs

    - by nanda
        try
        {
            const string siteURL = "http://ops.epo.org/2.6.1/soap-services/document-retrieval";
            const string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id='EP 1000000A1 I ' page-number='1' document-format='SINGLE_PAGE_PDF' system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";

            var request = (HttpWebRequest)WebRequest.Create(siteURL);
            request.Method = "POST";
            request.Headers.Add("SOAPAction", "\"document-retrieval\"");
            request.ContentType = "text/xml; charset=utf-8";

            Stream stm = request.GetRequestStream();
            byte[] binaryRequest = Encoding.UTF8.GetBytes(docRequest);
            stm.Write(binaryRequest, 0, docRequest.Length);
            stm.Flush();
            stm.Close();

            var memoryStream = new MemoryStream();
            WebResponse resp = request.GetResponse();
            var buffer = new byte[4096];
            Stream responseStream = resp.GetResponseStream();
            {
                int count;
                do
                {
                    count = responseStream.Read(buffer, 0, buffer.Length);
                    memoryStream.Write(buffer, 0, count);
                } while (count != 0);
            }
            resp.Close();

            byte[] memoryBuffer = memoryStream.ToArray();
            System.IO.File.WriteAllBytes(@"E:\sample12.pdf", memoryBuffer);
        }
        catch (Exception ex)
        {
            throw ex;
        }

    The code above retrieves a PDF web response. It works fine as long as the request remains the constant string shown above, but how do I retrieve the same thing with dynamic requests? When the above code is changed to accept dynamic inputs like this:

        [WebMethod]
        public string DocumentRetrivalPDF(string docid, string pageno, string docFormat, string fileName)
        {
            try
            {
                ........
                .......
                string docRequest = "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:xsd='http://www.w3.org/2001/XMLSchema'><soap:Body><document-retrieval id=" + docid + " page-number=" + pageno + " document-format=" + docFormat + " system='ops.epo.org' xmlns='http://ops.epo.org' /></soap:Body></soap:Envelope>";
                ......
                ........
                return "responseTxt";
            }
            catch (Exception ex)
            {
                return ex.Message;
            }
        }

    it returns an "INTERNAL SERVER ERROR: 500". Can anybody help me with this?

    Read the article

  • 502: proxy: pass request body failed

    - by Apikot
    Sometimes I get the following error (in Apache's error.log) when viewing my site over HTTPS:

        (502)Unknown error 502: proxy: pass request body failed to xxx.xxx.xxx.xxx:443

    I'm not entirely sure what this is or why it happens, and it isn't consistent. The request route is: browser, then proxy server (Apache with mod_proxy + mod_ssl), then load balancer (AWS), then web server (Apache with mod_ssl). The configuration on the proxy server is as follows:

        <VirtualHost *:443>
            ProxyRequests Off
            ProxyVia On
            ServerName www.xxx.co.uk
            ServerAlias xxx.co.uk

            <Directory proxy:*>
                Order deny,allow
                Allow from all
            </Directory>

            <Proxy *>
                AddDefaultCharset off
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass / balancer://cluster:443/ lbmethod=byrequests
            ProxyPassReverse / balancer://cluster:443/
            ProxyPreserveHost off

            SSLProxyEngine On
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            SSLCertificateFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.cert
            SSLCertificateKeyFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.key

            <Proxy balancer://cluster>
                BalancerMember https://xxx.eu-west-1.elb.amazonaws.com
            </Proxy>
        </VirtualHost>

    Any idea what the issue might be?

    Read the article

  • Please help! request compression

    - by Naor
    Hi, I wrote an IHttpModule that compresses my response using gzip (I return a lot of data) in order to reduce response size. It works great as long as the web service doesn't throw an exception. If an exception is thrown, the exception is gzipped but the Content-Encoding header disappears, so the client doesn't know to decompress the response and can't read the exception. How can I solve this? Why does the header go missing? I need to get the exception on the client. Here is the module:

        public class JsonCompressionModule : IHttpModule
        {
            public JsonCompressionModule() { }

            public void Dispose() { }

            public void Init(HttpApplication app)
            {
                app.BeginRequest += new EventHandler(Compress);
            }

            private void Compress(object sender, EventArgs e)
            {
                HttpApplication app = (HttpApplication)sender;
                HttpRequest request = app.Request;
                HttpResponse response = app.Response;
                try
                {
                    // Ajax web service requests always start with application/json
                    if (request.ContentType.ToLower(CultureInfo.InvariantCulture).StartsWith("application/json"))
                    {
                        // The user may be on an older version of IE which does not support compression, so skip those
                        if (!((request.Browser.IsBrowser("IE")) && (request.Browser.MajorVersion <= 6)))
                        {
                            string acceptEncoding = request.Headers["Accept-Encoding"];
                            if (!string.IsNullOrEmpty(acceptEncoding))
                            {
                                acceptEncoding = acceptEncoding.ToLower(CultureInfo.InvariantCulture);
                                if (acceptEncoding.Contains("gzip"))
                                {
                                    response.AddHeader("Content-encoding", "gzip");
                                    response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
                                }
                                else if (acceptEncoding.Contains("deflate"))
                                {
                                    response.AddHeader("Content-encoding", "deflate");
                                    response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
                                }
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    int i = 4; // debugging leftover; the exception is swallowed here
                }
            }
        }

    Here is the web service:

        [WebMethod]
        public void DoSomething()
        {
            throw new Exception("This message gets corrupted on the client because the client doesn't know it is gzipped.");
        }

    I appreciate any help. Thanks!

    Read the article

  • Are there any tools to optimize the number of consumer and producer threads on a JMS queue?

    - by lindelof
    I'm working on an application that is distributed over two JBoss instances and that produces/consumes JMS messages on several JMS queues. When we configured the application we had to determine which threading model we would use, in particular the number of producing and consuming threads per queue. We did this in a rather ad-hoc fashion, but after reading the most recent columns by Herb Sutter in Dr. Dobb's (in particular this one) I would like to size our threads in a more rigorous manner. Are there any methods/tools to measure the throughput of JMS queues (in particular JBoss Messaging queues) as a function of the number of producing/consuming threads?
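
    Short of a dedicated tool, one way to get numbers is a small harness that drains a pre-filled queue with a varying number of consumer threads and reports messages per second. A minimal sketch in plain JMS, assuming JNDI-reachable names 'ConnectionFactory' and 'testQueue' (both hypothetical):

        import javax.jms.*;
        import javax.naming.InitialContext;
        import java.util.concurrent.atomic.AtomicLong;

        public class ConsumerThroughput {
            public static void main(String[] args) throws Exception {
                int threads = Integer.parseInt(args[0]); // vary this across runs and compare
                long seconds = 30;
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory"); // hypothetical
                Destination queue = (Destination) ctx.lookup("testQueue");                  // hypothetical
                Connection connection = cf.createConnection();
                AtomicLong received = new AtomicLong();
                for (int i = 0; i < threads; i++) {
                    // One session per concurrent consumer; each session delivers
                    // messages on its own thread, per the JMS threading rules.
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageConsumer consumer = session.createConsumer(queue);
                    consumer.setMessageListener(msg -> received.incrementAndGet());
                }
                connection.start();
                Thread.sleep(seconds * 1000);
                connection.close();
                System.out.printf("%d threads: %.0f msg/s%n", threads, received.get() / (double) seconds);
            }
        }

    Running this against the same pre-filled queue with 1, 2, 4, 8, ... threads gives a rough throughput curve to size the pools from.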

    Read the article

  • What is the "opposite" of request serialization called?

    - by Adam Lindberg
    For example, if a request is made to a resource and an identical request arrives before the first has returned a result, the server returns the result of the first request for the second request as well. This is to avoid unnecessary processing on the resource. It is not the same thing as caching/memoization, since it only concerns identical requests that are in flight in parallel. Is there a term for reusing the results of currently ongoing requests to a resource in order to minimize processing?
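
    This pattern is commonly called request coalescing or request deduplication (Go's singleflight package is a well-known implementation). A minimal Java sketch of the idea, with hypothetical names throughout:

        import java.util.concurrent.*;
        import java.util.function.Function;

        // Coalesces concurrent identical requests: the first caller computes,
        // everyone else arriving before completion shares the same future.
        public class Coalescer<K, V> {
            private final ConcurrentHashMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();

            public CompletableFuture<V> get(K key, Function<K, V> loader) {
                return inFlight.computeIfAbsent(key, k ->
                    CompletableFuture.supplyAsync(() -> loader.apply(k))
                        // Remove the entry on completion so later requests recompute;
                        // this is what distinguishes coalescing from caching.
                        .whenComplete((v, t) -> inFlight.remove(k)));
            }
        }

    A caller would do coalescer.get(url, this::fetch).join(); identical keys arriving during the fetch share a single execution.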

    Read the article

  • How to make a per-HTTP-request cache in ASP.NET 3.5

    - by Artem
    We are using ASP.NET 3.5 (the controls-based approach) and need storage specific to a single HTTP request. A thread-specific cache keyed by session ID won't work, because threads are pooled, so I might see data from some previous request in the cache, which is undesirable in my case. I always need brand-new storage for each request, available throughout the whole request. Any ideas how to do this in ASP.NET 3.5?

    Read the article

  • Copying request headers from the request object to a URLConnection object

    - by Bunny Rabbit
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            URL url = new URL("http://localhost:8080/testy/Out");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setDoOutput(true);
            connection.setRequestMethod("POST");
            PrintWriter out = response.getWriter();
            for (Enumeration e = request.getHeaderNames(); e.hasMoreElements();) {
                Object o = e.nextElement();
                String value = request.getHeader(o.toString());
                out.println(o + "--is--" + value + "<br>");
                connection.setRequestProperty((String) o, value);
            }
            connection.connect();
        }

    I wrote the above code in a servlet to post a form to a location other than this servlet, but it's not working. Is it OK to use connection.setRequestProperty to set the header fields to what they are in the incoming request to the servlet?
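
    Copying headers this way mostly works, but a handful of hop-by-hop and message-framing headers (Host, Content-Length, Connection, Transfer-Encoding) describe the original connection and should usually be skipped and recomputed for the new one; setRequestProperty alone also sends no form data. A sketch of the same loop with that filter plus body forwarding; the names and the skip list are illustrative, not exhaustive:

        import java.io.*;
        import java.net.*;
        import java.util.*;
        import javax.servlet.http.HttpServletRequest;

        public class HeaderForwarder {
            // Headers that belong to the original hop and should not be copied verbatim.
            private static final Set<String> SKIP = new HashSet<String>(
                    Arrays.asList("host", "content-length", "connection", "transfer-encoding"));

            public static HttpURLConnection forward(HttpServletRequest request, String target) throws IOException {
                HttpURLConnection connection = (HttpURLConnection) new URL(target).openConnection();
                connection.setRequestMethod("POST");
                connection.setDoOutput(true);
                for (Enumeration e = request.getHeaderNames(); e.hasMoreElements();) {
                    String name = (String) e.nextElement();
                    if (!SKIP.contains(name.toLowerCase())) {
                        connection.setRequestProperty(name, request.getHeader(name));
                    }
                }
                // Forward the request body as well -- headers alone carry no form data.
                InputStream in = request.getInputStream();
                OutputStream out = connection.getOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                out.close();
                connection.getResponseCode(); // actually sends the request and waits for the reply
                return connection;
            }
        }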

    Read the article

  • How to validate HTTP request headers before receiving request body using WCF

    - by anelson
    I'm implementing a REST service using WCF which will be used to upload very large files. The HTTP headers in this request communicate information which will be validated before the upload is allowed to proceed (things like permissions, available disk space, etc.), and it's possible this validation will fail, resulting in an error response. I'd like to do this validation before the client sends the body of the request, so it has a chance to detect failure before uploading potentially gigabytes of data. RESTful web services use the HTTP 1.1 "Expect: 100-continue" request header to implement this. For example, Amazon S3's REST API can validate your key and ACLs in response to an object PUT operation, returning 100 Continue if all is well, indicating you may proceed to send your data. I've rummaged around the WCF documentation and I just can't see a way to accomplish this without some pretty low-level hooking into the HTTP request processing pipeline. How would you suggest I solve this problem?

    Read the article

  • Spiceworks versus Request Tracker?

    - by dmackey
    We currently utilize Request Tracker for help desk ticketing and Spiceworks for asset inventorying. I am pondering whether it might be worthwhile to move from RT to Spiceworks for the help desk as well. Has anyone used both systems and can provide some insight into the benefits/problems of either? Or general philosophical reasons why one should use one solution over the other? Of course, RT is open source and Spiceworks is not. Usually this would be a major item for me, but since Spiceworks is free and engages its community fairly actively, it's not as major a concern for me (personally).

    Read the article

  • Apache Connection vs. Request

    - by user101570
    I apologize in advance if this is a basic question, but I am quite confused after reading the Apache documentation and other tutorials. Does a single Apache prefork process serve all HTTP requests for a given client? That's what I thought, but when I reduce MaxClients down to a low number, my page load times slow to a crawl, despite the fact that I'm the only client on the server in question. This suggests each process serves a single HTTP request at a time, rather than serving all of a client's requests within the TimeOut window. So if a single web page requires 15 HTTP requests to load fully, do I need 15 prefork Apache processes to serve it optimally?

    Read the article

  • Classic ASP Request.Form removes spaces?

    - by alex
    I'm trying to figure this oddity out... in Classic ASP I seem to be losing spaces in Request.Form values. For example, Request.Form("json") is:

        {"project":{"...","administrator":"AlexGorbatchev", "anonymousViewUrl":null,"assets":[],"availableFrom":"6/10/20104:15PM"...

    However, CStr(Request.Form) is:

        json={"project":{"__type":"...":"Alex Gorbatchev", "anonymousViewUrl":null,"assets":[],"availableFrom":"6/10/2010 4:15 PM"...

    Here's the entire code :)

        <%@ language="VBSCRIPT"%>
        <%
        Response.Write(CStr(Request.Form("json")))
        Response.Write(CStr(Request.Form))
        %>

    Somebody please tell me I haven't lost all my marbles...

    Read the article

  • ASP.NET binding object to Request in asp.net mvc

    - by Alxandr
    I've created an object that I'd like to have accessible from wherever the request object is accessible, and that "dies" with the request, more or less like how you always have access to the RouteData collection in an MVC application. It's especially important that I have access to this object during the execution of action filters. A new instance of my class also needs to be created whenever a new request is made to the page (the object needs to be request-safe, i.e. only one request modifies any one object). Any thoughts about how to achieve this?

    Read the article

  • Django: request object to template context transparency

    - by anars
    Hi! I want to include an initialized data structure in my request object, making it accessible in the context object from my templates. What I'm doing right now is passing it manually, which is tiresome, in every view:

        render_to_response(...., {'menu': RequestContext(request)})

    The request object contains the key/value pair, which is injected using a custom context processor. While this works, I had hoped there was a more generic way of passing selected parts of the request object to the template context. I've tried passing it via generic views, but as it turns out the request object isn't instantiated when the urlpatterns list is parsed.

    Read the article

  • Request Tracker 4: Ticket Escalation

    - by Randy
    I am running Request Tracker 4 on a Debian Squeeze server. I have to implement priority escalation. Actually, "escalation" is not quite the right term, since the ticket priority should be set linearly via rt-crontool (or any other tool that can be run from a cronjob), depending on the time that has passed between "Started" and "Due", to a value between 0 (the starting priority) and the "Final Priority" (e.g. 100), such that the "Final Priority" is reached exactly when the "Due" date passes. This already implies that the search condition should be all tickets of a certain queue that have "Started" AND "Due" AND "Final Priority" set. The cronjob will be called frequently, for example every 5 or 10 minutes, so the call should be idempotent and not dependent on the frequency of the rt-crontool invocations. One example: a ticket is Started at 2012-12-23 0:00am and Due at 2012-12-23 11:59pm, with a Final Priority of 100. When the call is made at noon, the priority should be set to 50. Could anybody help me with this? Thank you for reading this to the bottom!
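
    For what it's worth, RT ships with an action that implements exactly this linear ramp, RT::Action::LinearEscalate, which can be driven from rt-crontool. A sketch of a cron entry; the queue name, schedule, and TicketSQL condition are placeholder assumptions to adapt:

        # Every 10 minutes, ramp Priority linearly toward Final Priority as Due approaches.
        # 'General' is a placeholder queue name; the run is idempotent because the new
        # priority is computed from the current time, not from the previous value.
        */10 * * * * root /usr/bin/rt-crontool \
            --search RT::Search::FromSQL \
            --search-arg "Queue = 'General' AND (Status = 'new' OR Status = 'open')" \
            --action RT::Action::LinearEscalate \
            --action-arg "RecordTransaction: 0"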

    Read the article

  • IIS Request Filtering Rule for User Agent

    - by alexp
    I'm trying to block requests from a certain bot. I've added a request filtering rule, but I know it is still hitting the site because it shows up in Google Analytics. Here is the filtering rule I added:

        <security>
          <requestFiltering>
            <filteringRules>
              <filteringRule name="Block GomezAgent" scanUrl="false" scanQueryString="false">
                <scanHeaders>
                  <add requestHeader="User-Agent" />
                </scanHeaders>
                <denyStrings>
                  <add string="GomezAgent+3.0" />
                </denyStrings>
              </filteringRule>
            </filteringRules>
          </requestFiltering>
        </security>

    This is an example of the user agent I'm trying to block:

        Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:13.0;+GomezAgent+3.0)+Gecko/20100101+Firefox/13.0.1

    In some ways it seems to work. If I use Chrome to spoof my user agent, I get a 404, as expected. But the bot traffic is still showing up in my analytics. What am I missing?

    Read the article

  • Oracle Service Bus JMS Deployments Utility by Mike Muller

    - by JuergenKress
    For proxy services utilizing the JMS transport, OSB receives messages from destinations by using an MDB. These MDBs get generated and deployed during activation of the service configuration. OSB creates a random, unique name for the J2EE application that gets deployed to WLS. The name starts with "_ALSB_" and ends in a unique series of digits. The EAR files are written to the sbgen subdirectory of the domain home directory. You will see these applications on the WLS console page for "Deployments". For various operational reasons, there are times when the application name for a given proxy service needs to be determined. Since the generated name of the application doesn't reflect the name of the service, it becomes difficult to determine the relationship between the service and its EAR file. In fact, it cannot be discerned from either the OSB or WLS consoles. Read the full article here.

    Read the article

  • JMS: Specifying the Message Paging Directory on WebLogic Server

    - by adejuanc
    There are two ways to configure or modify the paging directory. Here are examples of both:

    1. Via the config.xml file:

        <jms-server>
          <name>JMSServerMS1</name>
          <target>MS1</target>
          <persistent-store xsi:nil="true"></persistent-store>
          <hosting-temporary-destinations>true</hosting-temporary-destinations>
          <temporary-template-resource xsi:nil="true"></temporary-template-resource>
          <temporary-template-name xsi:nil="true"></temporary-template-name>
          <message-buffer-size>-1</message-buffer-size>
          <paging-directory>C:\temp</paging-directory>
          <paging-file-locking-enabled>true</paging-file-locking-enabled>
          <expiration-scan-interval>30</expiration-scan-interval>
        </jms-server>

    2. Via WLST (the WebLogic Scripting Tool):

        startEdit()
        cd('/Deployments/JMSServerMS1')
        cmo.setPagingDirectory('C:\\temp')
        activate()

    Read the article

  • JAX-WS SOAP over JMS by Edwin Biemond

    - by JuergenKress
    With WebLogic 12.1.2 Oracle now also supports JAX-WS SOAP over JMS. Before 12.1.2 we had to use JAX-RPC, without any JDeveloper support, and had to use ANT to generate all the web service code; see this blogpost for all the details. In this blogpost I will show you all the necessary JDeveloper steps to create a SOAP-over-JMS JAX-WS web service (bottom-up approach) and generate a Web Service Proxy client to invoke this service, plus let you know what works and what does not. We start with a simple HelloService class with a sayHello method. Read the full article here.
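
    As a rough idea of the starting point the article describes (not Biemond's actual code), a bottom-up JAX-WS service class might look like the sketch below; the commented-out JMS transport annotation is WebLogic-specific and is an assumption on my part, so check the article for the exact API:

        import javax.jws.WebMethod;
        import javax.jws.WebService;
        // WebLogic's JMS transport annotation (assumption -- verify against the article):
        // import com.oracle.webservices.api.jms.JMSTransportService;

        @WebService(serviceName = "HelloService")
        // @JMSTransportService(targetService = "HelloService") // switches the transport from HTTP to JMS
        public class HelloService {

            @WebMethod
            public String sayHello(String name) {
                return "Hello " + name;
            }
        }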

    Read the article

  • I want to send 20000 messages from JMeter to a JMS queue through webMethods and get/capture the responses

    - by sam
    Hi, I'm trying to post JMS messages to a JMS queue through webMethods/JNDI. I want to post 20000 messages using one connection, read the responses back once returned by webMethods, and capture the request and response for all 20000 messages. I'm using JMeter. Is there any other open-source, easy-to-use tool available for this kind of testing? Thanks in advance. Regards, Sam
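
    Outside JMeter, the test itself is straightforward to sketch in plain JMS: one connection, one session, a temporary reply queue, and correlation IDs to pair each response with its request. A minimal sketch with hypothetical JNDI names ('ConnectionFactory', 'requestQueue'):

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class BulkRequestReply {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory"); // hypothetical
                Destination requestQueue = (Destination) ctx.lookup("requestQueue");        // hypothetical
                Connection connection = cf.createConnection(); // one connection for all 20000 messages
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Destination replyQueue = session.createTemporaryQueue();
                MessageProducer producer = session.createProducer(requestQueue);
                MessageConsumer consumer = session.createConsumer(replyQueue);
                connection.start();
                for (int i = 0; i < 20000; i++) {
                    TextMessage request = session.createTextMessage("payload " + i);
                    request.setJMSReplyTo(replyQueue);
                    request.setJMSCorrelationID("msg-" + i); // lets us pair each reply with its request
                    producer.send(request);
                    Message reply = consumer.receive(5000);  // capture/log request and reply here
                    if (reply == null) {
                        System.err.println("no reply for msg-" + i);
                    }
                }
                connection.close();
            }
        }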

    Read the article

  • Kernel, dpkg, sudo and apt-get corrupted

    - by TECH4JESUS
    Here are some errors that I am getting:

    1) A proper configuration for Firestarter was not found. If you are running Firestarter from the directory you built it in, run make install-data-local to install a configuration, or simply make install to install the whole program. Firestarter will now close.

        root@p:/# firestarter
        ** (firestarter:5890): WARNING **: The connection is closed
        (firestarter:5890): GnomeUI-WARNING **: While connecting to session manager: None of the authentication protocols specified are supported.
        (firestarter:5890): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        [the same GConf warning repeats six more times]
        ^C

    2) Also, I cannot apt-get install sudo:

        root@p:/# apt-get install sudo
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        sudo is already the newest version.
        The following packages were automatically installed and are no longer required:
          gir1.2-rb-3.0 gir1.2-gstreamer-0.10 libntfs10 python-mako libdmapsharing-3.0-2 rhythmbox-data libx264-116 rhythmbox libiso9660-7 librhythmbox-core5 libvpx0 libmatroska4 gir1.2-gst-plugins-base-0.10 rhythmbox-mozilla rhythmbox-plugin-zeitgeist libattica0 libgpac0.4.5 python-markupsafe libmusicbrainz4c2a rhythmbox-plugin-cdrecorder rhythmbox-plugins libaudiofile0
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
        9 not fully installed or removed.
        Need to get 0 B/76.3 MB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        /bin/sh: 1: /usr/sbin/dpkg-preconfigure: not found
        (Reading database ... 495741 files and directories currently installed.)
        Preparing to replace linux-image-3.2.0-24-generic 3.2.0-24.39 (using .../linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb) ...
        dpkg (subprocess): unable to execute old pre-removal script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.prerm): No such file or directory
        dpkg: warning: subprocess old pre-removal script returned error exit status 2
        dpkg - trying script from the new package instead ...
        dpkg (subprocess): unable to execute new pre-removal script (/var/lib/dpkg/tmp.ci/prerm): No such file or directory
        dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb (--unpack):
         subprocess new pre-removal script returned error exit status 2
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst): No such file or directory
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 2
        [the same sequence of errors repeats for linux-image-3.2.0-25-generic 3.2.0-25.40]
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-3.2.0-24-generic_3.2.0-24.39_amd64.deb
         /var/cache/apt/archives/linux-image-3.2.0-25-generic_3.2.0-25.40_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • jQuery getJSON request returning empty on a valid request

    - by mikemike
    I'm trying to grab some JSON from Apple's iTunes search service. The request is simple:

        http://ax.phobos.apple.com.edgesuite.net/WebObjects/MZStoreServices.woa/wa/wsSearch?term=jac&limit=25

    If you visit the URL in your browser you will see some well-formed (backed up by jsonlint.com) JSON. When I use the following jQuery to make the request, however, the request finds nothing:

        $("#soundtrack").keypress(function () {
          $.getJSON(
            "http://ax.phobos.apple.com.edgesuite.net/WebObjects/MZStoreServices.woa/wa/wsSearch",
            {'term': $(this).val(), 'limit': '25'},
            function (j) {
              var options = '';
              for (var i = 0; i < j.results.length; i++) {
                options += '<option value="' + j.results[i].trackId + '">'
                         + j.results[i].artistName + ' - ' + j.results[i].trackName
                         + '</option>';
              }
              $("#track_id").html(options);
            }
          );
        });

    Firebug sees the request, but only receives an empty response. Any help would be appreciated here, as I'm at my wits' end trying to solve it. You can view the script here: http://rnmtest.co.uk/gd/drives_admin/add_drive (the soundtrack input box is at the bottom of the page). Thanks

    Read the article
