Search Results

Search found 5024 results on 201 pages for 'sending'.


  • In Python epoll, can I avoid errno.EWOULDBLOCK and errno.EAGAIN?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found that the performance is not ideal when sending large packages. I looked into the code and found there are actually a LOT of errors: Traceback (most recent call last): File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now num_bytes = self.sock.send(self.response) error: [Errno 35] Resource temporarily unavailable Previously I silenced it, as the documentation said to, so my sending function was written this way: def send_now(self): '''send message at once''' st = time.time() times = 0 while self.response != '': try: num_bytes = self.sock.send(self.response) l.info('msg wrote %s %d : %r size %r',self.ip,self.port,self.response[:num_bytes],num_bytes) self.response = self.response[num_bytes:] except socket.error,e: if e[0] in (errno.EWOULDBLOCK,errno.EAGAIN): #here I printed it, but I silence it in normal days #print 'would block, again %r',tb.format_exc() break else: l.warning('%r %r socket error %r',self.ip,self.port,tb.format_exc()) #must break or cause dead loop break except: #other exceptions l.warning('%r %r msg write error %r',self.ip,self.port,tb.format_exc()) break times += 1 et = time.time() I googled it, and it says this is caused by the network send buffer temporarily running out. So how can I detect this condition manually and efficiently, instead of letting it reach the exception path? Raising and handling the exception costs too much time.
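
    One way to avoid paying for the exception is not to call send() until the kernel reports the socket writable: with epoll, that means registering the socket for EPOLLOUT and resuming the send when that event fires. Below is a minimal sketch of the same idea using select(), assuming a plain non-blocking socket (a hypothetical helper, not the poster's wrapper; modern exception syntax, with e.errno in place of e[0]):

    ```python
    import errno
    import select
    import socket

    def send_pending(sock, data):
        """Send as much of data as the kernel will take; return the remainder."""
        while data:
            # Ask whether the socket is writable before calling send(), so the
            # EAGAIN/EWOULDBLOCK path is almost never taken.
            _, writable, _ = select.select([], [sock], [], 0)
            if not writable:
                break  # send buffer full: stop and wait for the next EPOLLOUT
            try:
                sent = sock.send(data)
            except socket.error as e:
                if e.errno in (errno.EWOULDBLOCK, errno.EAGAIN):
                    break  # lost a race with the buffer filling up; not fatal
                raise
            data = data[sent:]
        return data
    ```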

    Read the article

  • How do I get google protocol buffer messages over a socket connection without disconnecting the client?

    - by Dan
    Hi there, I'm attempting to send a .proto message from an iPhone application to a Java server via a socket connection. However, so far I'm running into an issue when it comes to the server receiving the data: it only seems to process it after the client connection has been terminated. This suggests that the data is getting sent, but that the server is keeping its input stream open and waiting for more data. Would anyone know how I might go about solving this? The current code (or at least the relevant parts) is as follows: iPhone: Person *person = [[[[Person builder] setId:1] setName:@"Bob"] build]; RequestWrapper *request = [[[RequestWrapper builder] setPerson:person] build]; NSData *data = [request data]; AsyncSocket *socket = [[AsyncSocket alloc] initWithDelegate:self]; if (![socket connectToHost:@"192.168.0.6" onPort:6666 error:nil]){ [self updateLabel:@"Problem connecting to socket!"]; } else { [self updateLabel:@"Sending data to server..."]; [socket writeData:data withTimeout:-1 tag:0]; [self updateLabel:@"Data sent, disconnecting"]; //[socket disconnect]; } Java: try { RequestWrapper wrapper = RequestWrapper.parseFrom(socket.getInputStream()); Person person = wrapper.getPerson(); if (person != null) { System.out.println("Persons name is " + person.getName()); socket.close(); } On running this, it seems to hang on the line where the RequestWrapper processes the input stream. I did try replacing the socket writeData method with [request writeToOutputStream:[socket getCFWriteStream]]; which I thought might work; however, I get an error claiming that the "Protocol message contained an invalid tag (zero)". I'm fairly certain that it doesn't contain an invalid tag, as the message works when sent via the writeData method. Any help on the matter would be greatly appreciated! Cheers! Dan (EDIT: I should mention, I am using the metasyntactic gpb code and the cocoaasyncsocket implementation)
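
    For what it's worth, parseFrom(InputStream) reads until end-of-stream, which is why nothing happens until the client disconnects: a protobuf message is not self-delimiting. The usual remedy is to frame each message, for example with a length prefix (the Java protobuf runtime provides writeDelimitedTo/parseDelimitedFrom for this). A minimal, language-neutral sketch of length-prefix framing, in Python for brevity (hypothetical helpers, not the poster's code):

    ```python
    import struct

    def send_framed(sock, payload: bytes) -> None:
        # Prefix each message with a 4-byte big-endian length so the reader
        # knows exactly where it ends, without waiting for EOF.
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv_framed(sock) -> bytes:
        (length,) = struct.unpack(">I", _recv_exact(sock, 4))
        return _recv_exact(sock, length)

    def _recv_exact(sock, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-message")
            buf += chunk
        return buf
    ```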

    Read the article

  • KeepAlive packets over a Soap request

    - by Nycto
    I've been debugging some SOAP requests we are making between two servers on the same VLAN. The app on one server is written in PHP; the app on the other is written in Java. I can control and make changes to the PHP code, but I can't affect the Java server. The PHP app builds the XML using DOMDocument objects, then sends the request using the cURL extension. When the SOAP request took longer than 5 minutes to complete, it would always wait until the max timeout limit and exit with a message like this: Operation timed out after 900000 milliseconds with 0 bytes received After sniffing the packets that were being sent, it turns out that the problem was caused by a 5-minute timeout in the network that was closing what it thought was a stale connection. There were two ways to fix it: bump up the timeout in iptables, or start sending KeepAlive packets over the connection. To be thorough, I would like to implement both solutions. Bumping up the timeout was easy for ops to do, but sending KeepAlive packets is turning out to be difficult. The cURL library itself supports this (see the --keepalive-time flag for the CLI app), but it doesn't appear that this has been implemented in the PHP cURL extension. I even checked the source to make sure it wasn't an undocumented feature. So my question is this: how the heck can I get these packets sent? I see a few clear options, but I don't like any of them: 1. Write a wrapper that kicks off the request by shell_execing the CLI app; this is a hack that I just don't like. 2. Update the cURL extension to support this; this is a non-option according to ops. 3. Open the socket myself; I know just enough to be dangerous, and I also haven't seen a way to do this with fsockopen, but I could be missing something. 4. Switch to another library; what exists that supports this? Thanks for any help you can offer.
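
    For reference, option 3 above (opening the socket yourself) mostly amounts to enabling SO_KEEPALIVE, plus the keepalive timing knobs, on the connected socket. A sketch in Python (the PHP sockets extension exposes SO_KEEPALIVE similarly; the TCP_KEEP* constants are Linux-specific and the values are illustrative):

    ```python
    import socket

    # Keepalive makes the kernel send probes on an otherwise idle connection,
    # so the network no longer sees it as stale after 5 quiet minutes.
    sock = socket.create_connection(("soap-server.example.com", 8080))  # placeholder host
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)  # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # failed probes before giving up
    ```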

    Read the article

  • After Filter redirects to login.jsp, proper servlet doesn't get called

    - by gnomeguru
    My simple project structure is shown in this link. I am using Eclipse and Tomcat 6. There is a login.jsp which submits its form to login_servlet. The login_servlet sets a session variable and then redirects to home.jsp. The home.jsp file has links to the 4 JSP files under a directory called /sam. In web.xml I have given the url-pattern as /sam/* for the LogFilter filter. The LogFilter just reads the session variable and does doChain(request, response) if it is valid; otherwise it redirects to /login.jsp: RequestDispatcher rd = request.getRequestDispatcher("/login.jsp"); rd.forward(request,response); Basically, I don't want anyone to access files inside the /sam directory directly. Now let's say I try to directly access a file inside the /sam directory: the filter kicks in, the redirection to login.jsp works, and even the browser's contents are those of login.jsp, but the URL in the browser doesn't change. When I enter details and press submit, instead of sending the data to login_servlet, it sends it to sam/login_servlet, and then Tomcat tells me there is no such servlet there! Obviously there isn't. My doubt is: why is it sending it to sam/login_servlet instead of /login_servlet, which is what it usually does when I run login.jsp on my own? One more thing: is there a way I can apply the filter to ONLY the .jsp files inside the /sam directory? I tried giving a url-pattern like /sam/*.jsp, but Tomcat refused to accept that url-pattern.

    Read the article

  • Programmatically send SMS to email using Verizon Motorola Droid on Android

    - by Dave
    Hi, I was wondering if anyone knew the proper way to send an SMS message to an e-mail address using Verizon's CDMA Motorola Droid phone. The internal messaging application appears to do this automagically, while 3rd-party applications like SMSPopup don't seem to be able to properly reply to e-mail addresses unless you compose the message inside the messaging application. When the internal messaging application sends an SMS message, there's a corresponding 'RIL_REQUEST_CDMA_SEND_SMS' entry in the logcat (adb logcat -b radio). When you send an SMS to an e-mail address it prints the same thing, so behind the scenes it looks as though it is sending an SMS. The interesting thing is that if you look at the content provider's sent box, the messages are addressed to various 1270XX-XXX-XXXX numbers. On other services you can reach e-mail addresses by sending an SMS to a predefined short SMS number, formatting the SMS as emailaddress subject message, i.e. http://en.wikipedia.org/wiki/SMS_gateway#Carrier-Provided_SMS_to_E-Mail_Gateways For example, using T-Mobile's number (500) you can send an SMS to an e-mail address using the following: SmsManager smsMgr = SmsManager.getDefault(); smsMgr.sendTextMessage("500", null, "[email protected] message sent to an e-mail address from a SMS", null, null); Does anyone know if it is possible to programmatically send SMS-to-email messages from a CDMA Android phone? Does Verizon actually send your replies as SMS messages, or are they actually sent as MMS or normal HTTP email messages? Any ideas about how to intercept the raw message being sent, to see what's going on? It might be possible that Verizon somehow generates a fake number temporarily tied to an e-mail address (since repeated messages are not sent to the same number), but that seems pretty heavy-handed. Thanks!

    Read the article

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have network software which uses UDP to communicate with other instances of the same program. For different reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it has worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm: split the message into small data chunks (around 1500 bytes), then iterate over the data chunk list and send each one using sendto(). However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. In any case, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for a lot of reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here? 1. Implementing an acknowledgement mechanism (just like TCP) seems like overkill. 2. Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance. 3. Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible. 4. Something else? I really need your advice here. Thanks very much.
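
    On option 3 above: yes, the receiver's UDP buffer can be enlarged with setsockopt(SO_RCVBUF), which often helps on loopback, where the sender can outrun the reader and overflow the buffer. A sketch in Python (the identical call exists in the BSD sockets API from C++; the kernel may clamp the requested size, see net.core.rmem_max on Linux):

    ```python
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Request a larger receive buffer so a burst of datagrams queues up
    # instead of being dropped silently when the buffer overflows.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    print("effective buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    sock.bind(("0.0.0.0", 9999))  # placeholder port
    ```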

    Read the article

  • Uploadify and rails 3 authenticity tokens

    - by Ceilingfish
    Hi chaps, I'm trying to get a file upload progress bar working in a Rails 3 app using uploadify (http://www.uploadify.com) and I'm stuck at authenticity tokens. My current uploadify config looks like <script type="text/javascript" charset="utf-8"> $(document).ready(function() { $("#zip_input").uploadify({ 'uploader': '/flash/uploadify.swf', 'script': $("#upload").attr('action'), 'scriptData': { 'format': 'json', 'authenticity_token': encodeURIComponent('<%= form_authenticity_token if protect_against_forgery? %>') }, 'fileDataName': "world[zip]", //'scriptAccess': 'always', // Uncomment this if for some reason it doesn't work 'auto': true, 'fileDesc': 'Zip files only', 'fileExt': '*.zip', 'width': 120, 'height': 24, 'cancelImg': '/images/cancel.png', 'onComplete': function(event, data) { $.getScript(location.href) }, // We assume that we can refresh the list by doing a js get on the current page 'displayData': 'speed' }); }); </script> But I am getting this response from Rails: Started POST "/worlds" for 127.0.0.1 at 2010-04-22 12:39:44 ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken): Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms) Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (6.6ms) Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (12.2ms) This appears to be because I'm not sending the authentication cookie along with the request. Does anyone know how I can get the values I should be sending there, and how I can make Rails read the token from the HTTP POST data rather than trying to find it as a cookie?

    Read the article

  • Posting messages in two RabbitMQ queues, instead of one (using py-amqp)

    - by Khelben
    I've got this strange problem using py-amqp and the Flopsy module. I have written a publisher that sends messages to a RabbitMQ server, and I wanted to be able to send them to a specified queue. That is not possible with the Flopsy module, so I tweaked it, adding a parameter and a line to declare the queue in the __init__ method of the Publisher object: def __init__(self, routing_key=DEFAULT_ROUTING_KEY, exchange=DEFAULT_EXCHANGE, connection=None, delivery_mode=DEFAULT_DELIVERY_MODE, queue=DEFAULT_QUEUE): self.connection = connection or Connection() self.channel = self.connection.connection.channel() self.channel.queue_declare(queue) # ADDED TO SET UP QUEUE self.exchange = exchange self.routing_key = routing_key self.delivery_mode = delivery_mode The channel object is part of the py-amqplib library. The problem I've got is that, even though it's sending the messages to the specified queue, it's ALSO sending the messages to the default queue. As we expect this system to send quite a lot of messages, we don't want to stress it by making useless duplicates... I've tried to debug the code and go inside the py-amqplib library, but I'm not able to figure out any error or missing step. Also, I'm not able to find any documentation for py-amqplib outside the code. Any ideas on why this is happening and how to correct it?
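
    For comparison, the usual py-amqplib pattern is: declare the queue, declare the exchange, bind the two, then publish to the exchange with the bound routing key, so that a message lands only in the queues whose bindings match. A minimal sketch (queue, exchange and key names are illustrative):

    ```python
    from amqplib import client_0_8 as amqp

    conn = amqp.Connection(host="localhost:5672", userid="guest", password="guest")
    chan = conn.channel()

    # Declare both ends and bind them: messages published to the exchange
    # with this routing key are delivered to this queue, and nowhere else.
    chan.queue_declare(queue="work", durable=True, auto_delete=False)
    chan.exchange_declare(exchange="sorting_room", type="direct", durable=True, auto_delete=False)
    chan.queue_bind(queue="work", exchange="sorting_room", routing_key="jobs")

    msg = amqp.Message("hello")
    msg.properties["delivery_mode"] = 2  # persistent
    chan.basic_publish(msg, exchange="sorting_room", routing_key="jobs")
    ```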

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (Resource Oriented Architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key. From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts: Step 1: Get a unique transaction id for creating the new PERSON resource::: **Client request:** GET /CREATE_PERSON **Server response:** 200 OK transaction-id:"as8yfasiob" Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id::: **Client request** PUT /CREATE_PERSON/{transaction_id} first_name="Big bubba" **Server response** 201 Created // (If the request is a duplicate, it would send this PersonKey="398u4nsdf" // same response without creating a new resource. It // would perhaps send an error response if the was used // on a transaction id non-duplicate request, but I have // control over the client, so I can guarantee that this // won't happen) The problem I see with this approach is that it requires sending two requests to the server in order to perform the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be left waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
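
    One single-request alternative, if the server can give up minting the key, is to let the client generate a universally unique key itself; the PUT is then naturally idempotent and retries are harmless. A sketch with a hypothetical URL and payload:

    ```python
    import uuid
    import requests  # third-party HTTP client, assumed available

    # The client mints the key, so the PUT identifies one resource forever:
    # a retry replays to the same URI and the server can treat it as a no-op.
    person_id = uuid.uuid4().hex
    resp = requests.put(
        f"https://api.example.com/people/{person_id}",  # hypothetical endpoint
        json={"first_name": "Big bubba"},
    )
    assert resp.status_code in (200, 201, 204)  # 201 on create, 200/204 on replay
    ```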

    Read the article

  • MFMailComposeViewController hangs my app

    - by Neal L
    Hi, I am trying to add email functionality to my app. I can get the MFMailComposeViewController to display correctly and pre-populate its subject and body, but for some reason, when the user taps the "Cancel" or "Send" buttons in the nav bar, the app just hangs. I inserted an NSLog() statement into the first line of mailComposeController:didFinishWithResult:error: and it doesn't even print that line to the console. Does anybody have an idea what would cause the MFMailComposeViewController to hang like that? Here is my code from the header: #import "ManagedObjectEditor.h" #import <MessageUI/MessageUI.h> @interface MyManagedObjectEditor : ManagedObjectEditor <MFMailComposeViewControllerDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate> { } - (IBAction)emailObject; @end from the implementation file: if ([MFMailComposeViewController canSendMail]) { MFMailComposeViewController *mailComposer = [[MFMailComposeViewController alloc] init]; mailComposer.delegate = self; [mailComposer setSubject:NSLocalizedString(@"An email from me", @"An email from me")]; [mailComposer setMessageBody:emailString isHTML:YES]; [self presentModalViewController:mailComposer animated:YES]; [mailComposer release]; } [error release]; [emailString release]; and here is the code from the callback: #pragma mark - #pragma mark Mail Compose Delegate Methods - (void)mailComposeController:(MFMailComposeViewController *)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError *)error { NSLog(@"in didFinishWithResult:"); switch (result) { case MFMailComposeResultCancelled: NSLog(@"cancelled"); break; case MFMailComposeResultSaved: NSLog(@"saved"); break; case MFMailComposeResultSent: NSLog(@"sent"); break; case MFMailComposeResultFailed: { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:NSLocalizedString(@"Error sending email!",@"Error sending email!") message:[error localizedDescription] delegate:nil cancelButtonTitle:NSLocalizedString(@"Bummer",@"Bummer") otherButtonTitles:nil]; [alert show]; [alert release]; break; } default: break; } [self dismissModalViewControllerAnimated:YES]; } Thanks!

    Read the article

  • ASP.NET Emails blocked by Spam filter on Exchange

    - by Amadiere
    I'm trying to send an email via some C# ASP.NET code. This is being sent to our internal mail relay server, with our standard "from" address (e.g. [email protected]). In some instances this gets through OK; in others, it gets blocked by the spam filter. An example of our Web.config: <mailSettings> <smtp from="[email protected]"> <network host="mailrelay.domain.com" defaultCredentials="true" /> </smtp> </mailSettings> I've spoken with our Exchange Server team, and they inform me that on occasion our mail looks sufficiently like spam and is automatically blocked. The algorithm appears to be points-based and blocks at a score of 45. 20 points are instantly added because our system is not sending the hostname with the domain name suffixed: i.e. the server is expected to announce myServerName.domain.com but, despite being part of that domain, the server is sending from myServerName. I've been asked to look at altering the EHLO string that is sent and/or influencing the host so that it uses its fully qualified name. However, this makes little sense to me, and although I understand the concept of what I need to change, I don't know where to begin looking for the fix.
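
    To illustrate the concept (though not the .NET API): the EHLO string is simply the name the client announces when greeting the relay, and some mail libraries expose it directly. Python's smtplib, for example, takes it as local_hostname; the addresses and hostnames below are placeholders:

    ```python
    import smtplib

    # local_hostname becomes the name sent in the EHLO/HELO greeting; passing
    # the fully qualified form avoids the "bare hostname" spam score.
    with smtplib.SMTP("mailrelay.domain.com", 25,
                      local_hostname="myservername.domain.com") as smtp:
        smtp.sendmail("app@domain.com", ["someone@domain.com"],
                      "Subject: EHLO check\r\n\r\ntest body")
    ```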

    Read the article

  • how to store JSON into POJO using Jackson

    - by user2963680
    I am developing a module where I am using a REST service to get data. I do not understand how to store the JSON using Jackson when the request also carries query parameters. Any help is really appreciated, as I am new to this. I am trying to do server-side filtering in an ExtJS infinite grid, which sends the request below to the REST service. When the page loads the first time, it sends http://myhost/mycontext/rest/populateGrid?_dc=9999999999999&page=1&start=0&limit=500 When you select a filter on name and place, it sends http://myhost/mycontext/rest/populateGrid?_dc=9999999999999&filter=[{"type":"string","value":"Tom","field":"name"},{"type":"string","value":"London","field":"Location"}]&page=1&start=0&limit=500 I am trying to save this in a POJO and then send it to the database to retrieve data. For this, on the REST side I have written something like this: @Provider @Path("/rest") public interface restAccessPoint { @GET @Path("/populateGrid") @Produces({MediaType.APPLICATION_JSON}) public Response getallGridData(FilterJsonToJava filterparam,@QueryParam("page") String page,@QueryParam("start") String start,@QueryParam("limit") String limit); } public class FilterJsonToJava { @JsonProperty(value ="filter") private List<Filter> data; .. getter and setter below } public class Filter { @JsonProperty("type") private String type; @JsonProperty("value") private String value; @JsonProperty("field") private String field; ...getter and setters below } I am getting the below error: The following warnings have been detected with resource and/or provider classes: WARNING: A HTTP GET method, public abstract javax.ws.rs.core.Response com.xx.xx.xx.xxxxx (com.xx.xx.xx.xx.json.FilterJsonToJava ,java.lang.String,java.lang.String,java.lang.String), should not consume any entity. com.xx.xx.xx.xx.json.FilterJsonToJava, and Java type class com.xx.xx.xx.FilterJsonToJava, and MIME media type application/octet-stream was not found [11/6/13 17:46:54:065] 0000001c ContainerRequ E The registered message body readers compatible with the MIME media type are: application/octet-stream com.sun.jersey.core.impl.provider.entity.ByteArrayProvider com.sun.jersey.core.impl.provider.entity.FileProvider com.sun.jersey.core.impl.provider.entity.InputStreamProvider com.sun.jersey.core.impl.provider.entity.DataSourceProvider com.sun.jersey.core.impl.provider.entity.RenderedImageProvider */* -> com.sun.jersey.core.impl.provider.entity.FormProvider ...

    Read the article

  • What is the best way to process XML sent to WCF 3.5

    - by CRM Junkie
    I have to develop a WCF application in 3.5. The input will be sent in the form of XML, and the response will be sent as XML as well. An ASP.NET application will be consuming the WCF service and sending/receiving data in XML format. Now, as per my understanding, when consuming WCF from an ASP.NET application, we just add a reference to the service, create an object of the service, pack all the necessary data (Data Members in WCF) into the input object (an object of the Data Contract) and call the necessary function. It happens that the ASP.NET application is being developed by a separate party, and they are hell-bent on receiving and sending data in XML format. What I can perceive from this is that the WCF service will take an XML string (a single string Data Member) as input and send out an XML string (again a single string Data Member) as output. I have created WCF applications earlier where requests and responses were sent out in XML/JSON format when consumed by jQuery AJAX calls. In those cases, the XML tags were automatically mapped to the different Data Members defined. What approach should I take in this case? Should I just take a string as input (basically the XML string), or is there any way WCF/.NET 3.5 will automatically map the XML tags to the Data Members for requests and responses, so that I would not need to parse the XML string separately?

    Read the article

  • Game login authentication and security.

    - by Charles
    First off, I will say I am completely new to security in coding. I am currently helping a friend develop a small game (in Python) which will have a login server. I don't have much knowledge regarding security, but I know many games have issues with this: everything from 3rd-party applications (bots) to WPE packet manipulation. Considering how small this game will be and its limited user base, I doubt we will have serious issues, but I would like to try our best to limit problems. I am not sure where to start, what methods I should use, or what's worth it. For example, when sending data to the server such as the login name and password, I was told this information should be encrypted when sent, so that anyone viewing it (by whatever means) couldn't get into the account. However, if someone is able to capture the encrypted string, wouldn't this string always work, since it's decrypted server-side? In other words, someone could just capture the packet, reuse it, and still gain access to the account? The main goal I am really looking for is to make sure the players are logging into the game with the client we provide, and to make sure it's 'secure' (broad, I know). I have looked around at different methods such as public- and private-key encryption, whose keys I am sure anyone with a hex editor could eventually find. There are many other methods that seem way over my head at the moment and leave the impression of overkill. I realize nothing is 100% secure. I am just looking for any input or reading material (links) to accomplish the main goal stated above. Would appreciate any help, thanks.
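
    On the replay worry specifically: the standard pattern is challenge-response, where the server issues a one-time random nonce and the client proves knowledge of the password by MACing the nonce rather than sending the password (or any reusable ciphertext). A captured response is useless later because the next login gets a fresh nonce. A minimal sketch, assuming the shared key is a hash of the password (a real system would still want TLS and proper password storage):

    ```python
    import hashlib
    import hmac
    import os

    def make_challenge() -> bytes:
        """Server: issue a fresh random nonce for each login attempt."""
        return os.urandom(16)

    def respond(password: str, nonce: bytes) -> str:
        """Client: prove knowledge of the password by MACing the nonce."""
        key = hashlib.sha256(password.encode()).digest()
        return hmac.new(key, nonce, hashlib.sha256).hexdigest()

    def verify(stored_key: bytes, nonce: bytes, response: str) -> bool:
        """Server: recompute the MAC and compare in constant time."""
        expected = hmac.new(stored_key, nonce, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)
    ```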

    Read the article

  • python-xmpp and looping through a list of recipients to receive an IM message

    - by David
    I can't figure out the problem and want some input as to whether my Python code is incorrect, or whether this is an issue or design limitation of the Python XMPP library. I'm new to Python, by the way. Here are the relevant snippets of code below. What I'd like to do is read in a text file of IM recipients, one recipient per line, in XMPP/Jabber ID format. This is read into a Python list variable. I then instantiate an XMPP client session and loop through the list of recipients, sending a message to each recipient. Then I sleep some time and repeat the test. This is for load-testing the IM clients of the recipients as well as the IM server. There is code to alternately handle the case of taking only one recipient from command-line input instead of from a file. What ends up happening is that Python does iterate/loop through the list, but only the last recipient in the list receives the message (switch the order of recipients to verify). It kind of looks like the Python XMPP library is not sending it out right, or I'm missing a step with the library calls, because the debug print statements at runtime indicate the looping works correctly. recipient = "" delay = 60 useFile = False recList = [] ... elif (sys.argv[i] == '-t'): recipient = sys.argv[i+1] useFile = False elif (sys.argv[i] == '-tf'): fil = open(sys.argv[i+1], 'r') recList = fil.readlines() fil.close() useFile = True ... # disable debug msgs cnx = xmpp.Client(svr,debug=[]) cnx.connect(server=(svr,5223)) cnx.auth(user,pwd,'imbot') cnx.sendInitPresence() while (True): if useFile: for listUser in recList: cnx.send(xmpp.Message(listUser,msg+str(msgCounter))) print "sending to "+listUser+" msg = "+msg+str(msgCounter) else: cnx.send(xmpp.Message(recipient,msg+str(msgCounter))) msgCounter += 1 time.sleep(delay)

    Read the article

  • user inheritance in django

    - by amateur
    Hi guys, I saw a couple of ways of extending the user information of users and decided to adopt the model-inheritance method. For instance, I have: class Parent(User): contact_means = models.IntegerField() is_staff = False objects = userManager() Now that this is done, I've downloaded django-registration to help me out with sending emails to new users. The thing is, instead of using registration forms to register a new user, I want to invoke the email sending/activation capability of django-registration. So my workflow is: 1. add a new Parent object in the admin page; 2. send the email. My problem is that django-registration creates a new registration profile together with a new user in the user table. How do I tweak this such that I am able to add the user entry into the custom user table? I have tried to create a ModelAdmin and alter the save_model method to launch create_inactive_user from django-registration; however, I do not know how to save the user object generated by django-registration into my Parent table, since I am using model inheritance and do not have a ForeignKey attribute in my Parent model.

    Read the article

  • Attributes of attributevalue element in SAML 2 Attribute Statement

    - by AJ
    I am building a web service that receives a SAML attribute query and responds with an attribute statement. I know I can return one or multiple values of a SAML attribute. I have some values that are dependent on other attribute values, and I need to show that relationship. Let us say the query is for the subject Dave, and the return values are his company and job title; Dave can work at multiple companies, with a job title at each company. I have two options for sending this data back: 1. Send this as a complex type by defining an attribute organization and returning XML within that attribute: <saml:Attribute name="company"> <saml:AttributeValue> <company name="company1" jobtitle="CIO"/> <company name="company2" jobtitle="VP"/> </saml:AttributeValue> </saml:Attribute> 2. Try to send multiple values of the attributes, somehow sending a reference in the attributevalue element: <saml:Attribute name="company"> <attributeValue>company1</attributeValue> <attributeValue>company2</attributeValue> </saml:Attribute> <saml:Attribute name="jobTitle"> <attributeValue company="company1">CIO</attributeValue> <attributeValue company="company2">VP</attributeValue> </saml:Attribute> Which approach would you prefer? Why? I am biased towards the second approach, as it does not require the client to know about any schema. It does require them to know about the non-standard attribute company in the attribute value.

    Read the article

  • Optimizing MySQL update query

    - by Jernej Jerin
    This is currently my MySQL UPDATE query, which is called from a program written in Java: String query = "UPDATE maxday SET DatePressureREL = (SELECT Date FROM ws3600 WHERE PressureREL = (SELECT MAX" + "(PressureREL) FROM ws3600 WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1), " + "PressureREL = (SELECT PressureREL FROM ws3600 WHERE PressureREL = (SELECT MAX(PressureREL) FROM ws3600 " + "WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1), ..."; try { s.execute(query); } catch (SQLException e) { System.out.println("SQL error"); } catch(Exception e) { e.printStackTrace(); } Let me explain first what it does. I have two tables. The first is ws3600, which holds the columns (Date, PressureREL, TemperatureOUT, Dewpoint, ...). The second table, called maxday, holds columns like DatePressureREL, PressureREL, DateTemperatureOUT, TemperatureOUT, ... Now, as you can see from the example, I update each column. The question is: is there a faster way? I am asking this because I am calling MAX twice, first to find the Date for that value and secondly to find the actual value. Now I know that I could write it like this: SELECT Date, PressureREL FROM ws3600 WHERE PressureREL = (SELECT MAX(PressureREL) FROM ws3600 WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1 That way I get the Date of the max and the max value at the same time, and can then update the data in the maxday table with those values. But the problem with this solution is that I have to execute many queries, which, as I understand it, takes a lot more time than executing one long MySQL query because of the overhead of sending each query to the server. If there is no better way, which of these two solutions should I choose: the first, which takes only one query but is very unoptimized, or the second, which is better in terms of optimization but needs a lot more queries, which probably means the performance gain is lost to the overhead of sending each query to the server?
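
    The second approach needs only one extra round trip per value pair, not one per lookup: a single SELECT can return the date and the maximum together (ordering by value, then date, stands in for the nested MAX), and a parameterized UPDATE stores both. A sketch in Python against the tables named above (driver and connection details are assumptions; the same two statements work from Java with PreparedStatement):

    ```python
    import MySQLdb  # assumed driver; any DB-API module works the same way

    date = "2010-04-01"  # stands in for the Java code's Date variable
    conn = MySQLdb.connect(db="weather")  # connection details are illustrative
    cur = conn.cursor()

    # One round trip fetches the maximum and its timestamp together.
    cur.execute(
        "SELECT Date, PressureREL FROM ws3600 WHERE Date >= %s "
        "ORDER BY PressureREL DESC, Date DESC LIMIT 1",
        (date,),
    )
    max_date, max_pressure = cur.fetchone()

    # One more stores them; placeholders also avoid splicing the date into
    # the SQL string by hand.
    cur.execute(
        "UPDATE maxday SET DatePressureREL = %s, PressureREL = %s",
        (max_date, max_pressure),
    )
    conn.commit()
    ```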

    Read the article

  • Get an Arduino and Android phone to communicate over the web

    - by Saleem
    I am writing an Android application to communicate with my Arduino over the web. The Arduino is running a web server through an Ethernet shield. I am attaching my code, but I will explain it here so you will understand what I am trying to do. The Android app sends an HTTP request in the format http://192.168.1.148/?Lights=1. The Arduino gets the request, executes the command (in this case turning on some lights) and then responds to the Android device by simply sending the string "Lights=On". The Android app then changes the color of the button to notify the user that the command was executed successfully. The Arduino is getting the instruction, executing it and sending the response, but my button color is not changing. I know that the Android device is getting the string, because I added a debug line to change the text on the button to the received response. The relevant code for the Android device is: ((Button) v).setText(sb.toString()); //This works and the button text changes to "Lights=On". //Test response and update button if(sb.toString()=="Lights=On"){ v.getBackground().setColorFilter(0xFFFFFF00, PorterDuff.Mode.MULTIPLY); Drawable d = lightOff.getBackground(); lightOff.invalidateDrawable(d); d.clearColorFilter(); } The Arduino code is: if(s=="Lights"){ switch(client.read()){ case '0': digitalWrite(LightPin,0); client.print("Lights=Off"); //debug Serial.println("Lights=Off"); break; case '1': digitalWrite(LightPin,1); client.print("Lights=On"); Serial.println("Lights=On"); break; } } Please let me know if you need more of the code to answer this question.

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    hi, I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with the key "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send an HTTP request per split part with the appropriate "Range". So if I have a 4MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets have already been downloaded and simply not requesting those. Is this the right way to do this? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.) When sending the HTTP requests, should I specify the entire split size in the range, or maybe ask for smaller pieces, say 1024K per request? When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it wise memory-wise? What if I have several downloads running simultaneously? If I'm not using a memory-mapped file, should I open the file once per allowed connection, or simply seek when needing to write to the file? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.) Note: I'll probably be using Qt, but this is a general question, so I left code out of it.
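
    On the first question: whether a server honours Range is visible in the status code. A 206 Partial Content with a Content-Range header means ranges work; a plain 200 means the header was ignored and the whole file is coming, so a downloader should probe before fanning out connections. A minimal sketch using only the standard library (the URL is a placeholder):

    ```python
    import urllib.request

    req = urllib.request.Request(
        "http://example.com/file.bin",          # placeholder URL
        headers={"Range": "bytes=0-1048575"},   # ask for the first 1 MiB
    )
    with urllib.request.urlopen(req) as resp:
        # 206 means the server honoured the range; 200 means it ignored it.
        supports_resume = resp.status == 206
        content_range = resp.headers.get("Content-Range")  # e.g. "bytes 0-1048575/4194304"
        chunk = resp.read()
    print(supports_resume, content_range, len(chunk))
    ```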

    Read the article

  • GLSL shader render to texture not saving alpha value

    - by quadelirus
    I am rendering to a texture using a GLSL shader and then sending that texture as input to a second shader. For the first texture I am using the RGB channels to send color data to the second GLSL shader, but I want to use the alpha channel to send a floating-point number that the second shader will use as part of its program. The problem is that when I read the texture in the second shader, the alpha value is always 1.0. I tested this in the following way: at the end of the first shader I did gl_FragColor(r, g, b, 0.1); and then in the second shader I read the value of the first texture using something along the lines of vec4 f = texture2D(previous_tex, pos); if (f.a != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } No pixels in my output are black, whereas if I change the above code to gl_FragColor(r, g, 0.1, 1.0); //Notice I'm now sending 0.1 for blue and, in the second shader, vec4 f = texture2D(previous_tex, pos); if (f.b != 1.0) { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); return; } all the appropriate pixels are black. This means that for some reason, when I set the alpha value to something other than 1.0 in the first shader and render to a texture, it is still seen as 1.0 by the second shader. Before I render to texture, I glDisable(GL_BLEND); It seems pretty clear to me that the problem has to do with OpenGL handling alpha values in some way that isn't obvious to me, since I can use the blue channel in the way I want, and I figured someone out there would instantly see the problem.

    Read the article

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance-critical because the server can receive several thousand requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card, so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw that the performance was not too good. To understand whether the problem was loading from the HD or decoding the PNG, we modified that by loading the file into a memory stream and then sending that memory stream to the .NET PNG decoder. The difference in performance between using XNA and using the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we get roughly the same levels of performance. Our benchmarks show the following performance results: Load images from disk: 37.76ms (1%); Decode PNGs: 2816.97ms (77%); Load images on video hardware: 196.67ms (5%); Composition: 87.80ms (2%); Get composition result from video hardware: 166.21ms (5%); Encode to PNG: 318.13ms (9%); Store to disk: 3.96ms (0%); Clean up: 53.00ms (1%); Total: 3680.50ms (100%). From these results we see that the slowest part by far is decoding the PNGs. So we are wondering if there is a PNG decoder we could use that would reduce the PNG decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.

    Read the article

  • Log4Net GetLogger creates rolling files even for the unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file linked below). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referenced (via appender-ref) in that executable's logger configuration. For instance, when I start my sending-daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates the 3 files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then, when I start another of the executables, for instance the rule-scheduler daemon, this other process cannot write to Log/RuleScheduler.txt, because it has been created and locked by the sending-daemon process. I am guessing there may be three different solutions to my problem: 1. GetLogger should only create the rolling file appenders that are referenced in the config. 2. I could have one config file per executable; this way each config file would list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am, however, reluctant to do this, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way to have one config file include another? 3. Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because any one daemon should not be creating the rolling files of the other daemons. Thanks in advance for your help! I have difficulty posting the config file properly here (this website interprets it as HTML), so please go to the following link to see my log4net configuration file: log4Net configuration file

    Read the article

  • Obtaining touch location for a uiscrollview touch

    - by LOSnively
    I have a uiscrollview as an element of a uiscrollviewcontroller, along with other view objects. The image scrolls and zooms as expected, when the scrollView is the top subview. However, I also need to get the screen location of the touch, in particular when there is no scroll action. (I understand the location may change during a scroll, but that's not important.) I haven't found a way to do that. In the scrollviewcontroller implementation I have customized all of the standard methods that should do this: "touchesShouldBegin...", "touchesBegan:...", "touchesEnded:...", and so on. As far as I can tell, none of these are being called during a touch event when the scrollView is the top subview. I've tried setting the delayContentTouches property to both YES and NO, and that doesn't seem to make a difference. As an alternative, I've tried putting a UIView as the top subview and then tried passing the touches to the now underlying scrollView. In this configuration, the standard methods are called and I can get the touch location, but I haven't found a mechanism for the touches to be passed to the scrollView so scrolling occurs. Doing something like sending the touch messages to the specific scrollView, or to "super" or just sending them to nextResponder doesn't do it. It seems I can make the scroll work or find the location of the touch but not both, depending on what the "top" subview is. I suspect this is trivial, but after two weeks of struggling, it's time to eat my embarrassment for not being able to do this seemingly simplest of things. I've read all of the related questions here on stackoverflow, tried most if not all of the suggestions, and so far, nothing has worked. I've looked through the various links and references suggested by the answers, including Apple's documentation, but none have pointed out the gap in my understanding. Any ideas would be appreciated.

    Read the article

  • Debugging JSR 168 Portlet with spring, eclipse & pluto.

    - by mikep
    I am trying to set up a development environment to test Spring Portlet MVC for developing JSR 168-conforming portlets. I have the latest STS installed, which includes Spring 2.5 and Eclipse (Catalina). This has been my environment for developing with Spring MVC, and it works fine using Apache as a local server for debugging. I found some instructions on the Pluto portal site on using Pluto as a remote debugging host for portlets, and I have implemented those instructions. I am putting Eclipse into debug mode by right-clicking on one of the JSPs and choosing "Debug As". My problem is that when I log into Pluto, it is not sending me into debug mode: I am seeing the default Pluto page as opposed to my portlet. My portlet has not been installed onto Pluto, and the instructions do not seem to require the portlet to be installed. To help, I have a screen shot at http://www.ceruleaninc.ca/pluto%5Fproblem.jpg, showing the following: Eclipse remote debugging to localhost:8000; Tomcat showing "Listening for transport dt_socket at address: 8000"; the catalina.bat jpda start command; and the Pluto Portal screen after log-in. Thanks much! I would welcome any advice on approaches to debugging portlets; I am not tied to Pluto. There does seem to be a lack of detailed instructions on this topic.

    Read the article
