Search Results

Search found 22893 results on 916 pages for 'client scripting'.

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!
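
    A hedged sketch of a stateless paging approach, assuming Oracle ROWNUM windowing so that no rows are held in the web server's memory between calls (the view, ordering column and connection string below are placeholders, not the application's real schema):

        using System;
        using System.Data;
        using System.Data.OracleClient; // or Oracle.DataAccess.Client (ODP.NET)
        using System.Web.Services;

        public class ExportService : WebService
        {
            // Each call fetches one window of rows; the client calls with a
            // growing offset until a page comes back empty, so no per-client
            // state (and no ID number) has to be kept on the server.
            [WebMethod]
            public DataTable GetExportPage(int offset, int pageSize)
            {
                const string sql =
                    @"SELECT * FROM (
                          SELECT t.*, ROWNUM rn
                          FROM (SELECT * FROM export_view ORDER BY id) t
                          WHERE ROWNUM <= :maxRow)
                      WHERE rn > :minRow";

                using (OracleConnection conn = new OracleConnection("...connection string..."))
                using (OracleCommand cmd = new OracleCommand(sql, conn))
                {
                    cmd.Parameters.Add(new OracleParameter("maxRow", offset + pageSize));
                    cmd.Parameters.Add(new OracleParameter("minRow", offset));
                    DataTable page = new DataTable("ExportPage");
                    new OracleDataAdapter(cmd).Fill(page); // Fill opens and closes the connection
                    return page;
                }
            }
        }

    The inner ORDER BY matters: without a stable ordering, consecutive windows can overlap or skip rows.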

  • Nested loops through recursion with usable iterators

    - by narandzasto
    Help! I need to generate a query with loops, but the number of nested loops isn't fixed; there can be as many as a client wants. I know that it can be done with recursion, but I don't know exactly how. One more thing: note that I need to use those k, i, j iterators later in the "if" condition, and I don't know how to get at them. Thanks

        class Class1
        {
            public void ListQuery()
            {
                for (int k = 0; k < listbox1.Count; k++)
                {
                    for (int i = 0; i < listbox2.Count; i++)
                    {
                        for (int j = 0; j < listbox3.Count; j++)
                        {
                            if (k == listbox1.Count - 1 && i == listbox2.Count - 1 && j == listbox3.Count - 1)
                            {
                                if (!checkbox1) Query += resultset.element1 + "=" + listbox1[k];
                                if (!checkbox2) Query += resultset.element2 + "=" + listbox2[i];
                                if (!checkbox3) Query += resultset.element3 + "=" + listbox3[j];
                            }
                        }
                    }
                }
            }
        }
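
    A minimal sketch (C#, with hypothetical list and callback names) of how the fixed nesting can be replaced by recursion while still exposing every level's index to the innermost condition:

        using System;
        using System.Collections.Generic;

        class NestedLoops
        {
            // One recursion level per list; indices[d] holds the current
            // iterator of level d, so the innermost callback sees them all.
            static void Nest(List<List<string>> lists, int depth, int[] indices, Action<int[]> body)
            {
                if (depth == lists.Count) { body(indices); return; }
                for (int i = 0; i < lists[depth].Count; i++)
                {
                    indices[depth] = i;
                    Nest(lists, depth + 1, indices, body);
                }
            }

            static void Example(List<List<string>> lists)
            {
                Nest(lists, 0, new int[lists.Count], ix =>
                {
                    // ix[0], ix[1], ix[2], ... play the roles of k, i, j
                });
            }
        }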

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution assembled from open-source technologies. Thanks
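
    For what it's worth, a minimal sketch of the brute-force traversal in C# (the proxy address, seed URL and naive regex link extraction are stand-ins; a real crawl would parse HTML properly and bound itself to the app's URL space):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Text.RegularExpressions;

        class CachePrimer
        {
            static void Main()
            {
                // Fetch every reachable page through the cache front so its
                // disk cache is warm before the contents are replicated.
                WebProxy proxy = new WebProxy("http://cache-master:3128");
                Queue<string> pending = new Queue<string>(new[] { "http://app.example.com/" });
                HashSet<string> seen = new HashSet<string>(pending);

                while (pending.Count > 0)
                {
                    string url = pending.Dequeue();
                    using (WebClient wc = new WebClient { Proxy = proxy })
                    {
                        string html = wc.DownloadString(url);
                        foreach (string link in ExtractLinks(html, url))
                            if (seen.Add(link)) pending.Enqueue(link);
                    }
                }
            }

            static IEnumerable<string> ExtractLinks(string html, string baseUrl)
            {
                Uri baseUri = new Uri(baseUrl);
                foreach (Match m in Regex.Matches(html, "href=\"([^\"]+)\""))
                {
                    Uri abs;
                    if (Uri.TryCreate(baseUri, m.Groups[1].Value, out abs) && abs.Host == baseUri.Host)
                        yield return abs.ToString(); // stay on the same site
                }
            }
        }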

  • FileReference.browse() stops playback on some Flash Players

    - by Christophe Herreman
    We have an issue where the server session associated with a Flex client times out when the browse file dialog is open for longer than the configured session timeout. It seems that on some players, playback is stopped while browse or download on a FileReference is executing. This also causes remote calls to be blocked, and hence our manual keep-alive messages are not sent to the server, resulting in a session timeout. I searched for some info on this in the docs and found a mention of it, but it does not explicitly list the players in which it does (or does not) work. Would anyone know where I could find a complete list? PS: here are the links that mention this behavior: http://livedocs.adobe.com/flex/3/html/help.html?content=17_Networking_and_communications_7.html "While calls to the FileReference.browse(), FileReferenceList.browse(), or FileReference.download() method are executing, most players will continue SWF file playback." http://livedocs.adobe.com/flex/3/langref/flash/net/FileReference.html "While calls to the FileReference.browse(), FileReferenceList.browse(), or FileReference.download() methods are executing, SWF file playback pauses in stand-alone and external versions of Flash Player and in AIR for Linux and Mac OS X 10.1 and earlier." Does anyone know what is meant by an "external Flash Player"? PPS: we tested this on Linux (10.0.x and 10.1.x) in Firefox, where it seems to stop playback, and on Windows (10.0.x) in IE, where playback seems to continue.

  • What is the performance impact of CSS's universal selector?

    - by Bungle
    I'm trying to find some simple client-side performance tweaks in a page that receives millions of monthly pageviews. One concern that I have is the use of the CSS universal selector (*). As an example, consider a very simple HTML document like the following (the missing </style> tag in the original has been restored):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
            <title>Example</title>
            <style type="text/css">
                * { margin: 0; padding: 0; }
            </style>
        </head>
        <body>
            <h1>This is a heading</h1>
            <p>This is a paragraph of text.</p>
        </body>
        </html>

    The universal selector will apply the above declaration to the body, h1 and p elements, since those are the only ones in the document. In general, would I see better performance from a rule such as:

        body, h1, p { margin: 0; padding: 0; }

    Or would this have exactly the same net effect? Essentially, what I'm asking is whether these rules are effectively equivalent in this case, or if the universal selector has to perform more unnecessary work that I may not be aware of. I realize that the performance impact in this example may be very small, but I'm hoping to learn something that may lead to more significant performance improvements in real-world situations. Thanks for any help!

  • Getting Preferred, Varied Products -- Tough Pseudocode Question

    - by Volomike
    I have a Dallas client who has given me a tough process to work out. I'm seeking your advice. I use PHP, but the language doesn't matter. We can work this out in pseudocode. In a nutshell, this involves showing products on a page based on matching keyword phrases, and using a preferred product provider, but varying this up so that I try to show products from each provider as much as possible when I can. He has 3 product companies: Amazon, eBay, and Overstock. I already have worked out with the API to get products back in an array via a function. He wants to give the user the preference of which one to use primarily -- so a priority number on each. He wants to display anywhere from 1 to 6 products on a page from them. Let's go with 6 for now in this example to start with. Let's assume Amazon first, then eBay, then Overstock as far as priority. He wants me to attempt to take a keyword phrase and find relevant products (all the keywords found in product title) from Amazon, eBay, and Overstock. If we find 1 product from each provider, he wants me to only display one product from each. But if we run out of a particular provider with a matching product, I start over again and grab another product from another provider until we reach 6 products. If there just aren't 6 relevant products, then I do the best I can -- even if I only display one product. If no relevant products, then I don't display anything.
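
    A sketch of the round-robin selection in C# (the Product type, the pre-filtering of each provider's queue by keyword match, and the priority ordering are assumed inputs, not shown):

        using System.Collections.Generic;

        static class ProductPicker
        {
            // Visit providers in priority order, taking one product per pass,
            // until we have `max` products or every provider is exhausted.
            public static List<Product> PickVaried(List<Queue<Product>> byPriority, int max)
            {
                var picked = new List<Product>();
                bool tookAny = true;
                while (picked.Count < max && tookAny)
                {
                    tookAny = false;
                    foreach (Queue<Product> q in byPriority) // e.g. Amazon, eBay, Overstock
                    {
                        if (q.Count == 0) continue;
                        picked.Add(q.Dequeue());
                        tookAny = true;
                        if (picked.Count == max) break;
                    }
                }
                return picked; // may hold fewer than max items, possibly none
            }
        }

    Because providers are revisited in priority order on every pass, the preferred provider fills any remaining slots once the others run dry, which matches the "start over and grab another" behaviour described above.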

  • Digitally sign MS Office (Word, Excel, etc..) and PDF files on the server

    - by Sébastien Nussbaumer
    I need to digitally sign MS Office and PDF files that are stored on a server. I really mean a digital signature that is integrated in the document, according to each specific file format. This is the process I had in mind:

    1. Create a hash of the file's content
    2. Send the hash to a custom written Java applet in the browser
    3. The user encrypts the hash with his/her private key (on a USB token via PKCS#11 for example), thus effectively signing the file
    4. The applet then sends the signature to the server
    5. On the server, incorporate the signature into the file (MS Office and PDF files can do that without changing the file's content, probably by just setting some metadata field)

    What is cool is that you never have to download and upload the complete file to the server again. What is even cooler, the customer doesn't need Office or a PDF writer to sign the files. Parts 2, 3 and 4 are OK for me; my company bought all the Java technology I need for that for a previous project I worked on. Problem: I can't seem to find any documentation/examples to do parts 1 and 5 for Office files. Are my Google skills failing me this time? Do you have any pointers to documentation or examples for doing that for MS Office files? The underlying technology isn't that important to me: I can use Java, .NET, COM, any working technology is OK! Note: I'm 95% sure I can nail points 1 and 5 for PDF files using iText. Thanks

    Edit: If I can't do that with hashes and must download the complete file to the client, that's also possible. But then I still need the documentation to be able to sign Office files... in Java this time (from an applet).
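
    For what it's worth, a hedged sketch of part 5 for the OOXML formats (.docx, .xlsx) using System.IO.Packaging from WindowsBase; note this assumes a server-side certificate rather than the applet/PKCS#11 flow above, and the legacy binary formats (.doc, .xls) need a different route entirely:

        using System;
        using System.Collections.Generic;
        using System.IO.Packaging;
        using System.Security.Cryptography.X509Certificates;

        static class OoxmlSigner
        {
            // Signs every part of the package; the signature ends up stored
            // inside the document itself, as the OOXML spec defines.
            public static void SignPackage(string path, X509Certificate2 cert)
            {
                using (Package package = Package.Open(path))
                {
                    PackageDigitalSignatureManager mgr = new PackageDigitalSignatureManager(package);
                    mgr.CertificateOption = CertificateEmbeddingOption.InSignaturePart;
                    List<Uri> parts = new List<Uri>();
                    foreach (PackagePart part in package.GetParts())
                        parts.Add(part.Uri);
                    mgr.Sign(parts, cert);
                }
            }
        }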

  • Passing javascript object to webservice via Jquery ajax

    - by kralco626
    I have a web service method that returns an object:

        [WebMethod]
        public List<User> ContractorApprovals()

    I also have one that accepts an object:

        [WebMethod]
        public bool SaveContractor(Object u)

    When I make my web service calls via jQuery:

        function ServiceCall(method, parameters, onSucess, onFailure) {
            var parms = "{" + (($.isArray(parameters)) ? parameters.join(',') : parameters) + "}"; // to json
            $.ajax({
                type: "POST",
                url: "services/" + method,
                data: parms,
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(msg) {
                    if (typeof onSucess == 'function' || typeof onSucess == 'object')
                        onSucess(msg.d);
                },
                error: function(msg, err) { $("#dialog-error").dialog('open'); }
            });
        }

    I can call the first one just fine. My onSucess function gets passed a JavaScript object structured exactly like my User object on the service. However, I am now having trouble getting the object back to the server. I'm accepting Object as a parameter on the server side, so I can't imagine there is an issue there. So I'm thinking something is wrong with the parms on the client side, but I'm not sure what... I am doing something to the effect of:

        ServiceCall("AuthorizationManagerWorkManagement.asmx/ContractorApprovals", "",
            function(data, args) { $("#div").data('user', data[0]); }, null);

    then

        ServiceCall("AuthorizationManagerWorkManagement.asmx/SaveContractor",
            $("#div").data('user'),
            // These also do not work:
            //   "{'u': ' + $("#div").data("user") + '}"
            //   JSON.stringify({u: userObject})
            function(data, args) { alert(data); }, null);

    I know the first service call works; I can get the data. The second one is causing the "onFailure" method to execute rather than "OnSuccess". Any ideas?
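
    One possible culprit, sketched below with the User type assumed: the ASMX JSON deserializer binds by parameter name and declared type, so a parameter declared as Object gives it nothing to construct. Declaring the concrete type lets a payload built with JSON.stringify({ u: userObject }) (sent as the raw data string, without wrapping it in braces again) map onto the parameter:

        [WebMethod]
        public bool SaveContractor(User u)
        {
            // u arrives fully populated from the client's JSON payload
            return Save(u); // hypothetical persistence call
        }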

  • Should I invest in GraniteDS for Flex + Java development?

    - by Boden
    I'm new to Flex development, and RIAs in general. I've got a CRUD-style Java + Spring + Hibernate service on top of which I'm writing a Flex UI. Currently I'm using BlazeDS. This is an internal application running on a local network. It's become apparent to me that the way RIAs work is more similar to a desktop application than a web application, in that we load up the entire model and work with it directly on the client (or at least the portion that we're interested in). This doesn't really jibe well with BlazeDS, because really it only supports remoting and not data management; thus it can become a lot of extra work to make sure that clients are in sync and to avoid reloading the model, which can be large (especially since lazy loading is not possible). So it feels like what I'm left with is a situation where I have to treat my Flex application more like a regular old web application where I do a lot of fine-grained loading of data. LiveCycle is too expensive. The free version of WebORB for Java really only does remoting. Enter GraniteDS. As far as I can determine, it's the only free solution out there that has many of the data management features of LiveCycle. I've started to go through its documentation a bit and suddenly feel like it's yet another quagmire of framework that I'll have to learn just to get an application running. So my questions to the StackOverflow audience are: 1) do you recommend GraniteDS, especially if my current Java stack is Spring + Hibernate? 2) at what point do you feel it starts to pay off? That is, at what level of application complexity do you feel that using GraniteDS really starts to make development that much better? In what ways?

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GByte each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight. The database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:

    1. Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services
    2. Denormalize the database and use Redis on each webserver, with Booksleeve as the Redis client
    3. Use a combination of SQL Server and Redis
    4. Use SQL Server 2012 together with Hadoop
    5. Use Hadoop without SQL Server

    What is the best way, for a read-only database, to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table against which all queries can be performed, with all other information from the other tables reachable by a simple key lookup.

  • WCF: Exposed Object Model - stuck in a loop

    - by Mark
    Hi, I'm working on a pretty big WSSF project. I have a normal object model in the business layer. E.g. a customer has an orders collection property; when this is accessed, it loads from the data layer (lazy loading). An order has a productCollection property, etc. Now the bit I'm finding tricky is exposing this via WCF. I want to export a collection of orders. The client app will also need information about the customers. Using the WSSF data contract designer I have set it up so that customers have a property called "order collection". This is fine if you have a customer object and would like to look at the orders, but if you have an order object there is no customer property, so it doesn't work going up the hierarchy. I've tried adding a customer property to the orders object, but then the code gets stuck in a loop when it loads the data contracts up. This is because it doesn't load on demand like in the business layer. I need to load all properties up before the objects can be sent out via WCF. It ends up loading an order, then the customer for that order, then the orders for that customer, then the customer for that order, etc... I'm sure I've got all this wrong. Help!!
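
    One thing worth checking (a sketch with illustrative type names, not the project's actual contracts): the DataContractSerializer can preserve object identity, which stops a Customer/Order cycle from expanding forever during serialization. IsReference requires .NET 3.5 SP1 or later.

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]
        public class Customer
        {
            [DataMember] public List<Order> Orders { get; set; }
        }

        [DataContract(IsReference = true)]
        public class Order
        {
            // Serialized as a pointer back to the already-emitted Customer
            // rather than a fresh (infinitely recursing) copy.
            [DataMember] public Customer Customer { get; set; }
        }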

  • Codeigniter: Library function--I'm stuck

    - by Kevin Brown
    I have a library function that sets up my forms and submits data. They're long, and they work, so I'll spare you reading my code. :) I simply need a way for my functions to determine how to handle the data. Until now, the function did one thing: submit a report for the current user. NOW, the client has requested that an administrator also be able to complete a form. This means that the form would be filled out and it would CREATE a user at the same time, whereas the current function EDITS and is accessed by an EXISTING user. Do I need a separate function to do essentially the same thing? How do I make one function perform two tasks: update a user if one exists, and create one if not? Current controller:

        function survey()
        {
            $id = $this->session->userdata('id');
            $data['member'] = $this->home_model->getUser($id);
            // Convert the db Object to a row array
            $data['manager'] = $data['member']->row();
            $manager_id = $data['manager']->manager_id;
            $data['manager'] = $this->home_model->getUser($manager_id);
            $data['manager'] = $data['manager']->row();
            $data['header'] = "Home";
            $this->survey_form_processing->survey_form($this->_container, $data, $method);
        }

    Current library:

        function survey_form($container)
        {
            // Lots of validation stuff
            $this->CI->validation->set_rules($rules);
            if ($this->CI->validation->run() === FALSE)
            {
                // Output any errors
                $this->CI->validation->output_errors();
            }
            else
            {
                // Submit form
                $this->_submit();
            }
            $this->CI->load->view($container, $data);
        }

    The submit function is huge too. Basically it says, "Update table with data where user_id = current user". I hope this wasn't too confusing. I'll create two functions if need be, but I'd like to keep redundancy down!

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the nights on our client's server, and then have a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundreds of MB, is moved to the swap file. This happens at night, when our service is not used, and others are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. Having said that, our clients just want our app to be responsive. They don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionalities (there is already some internal scheduler that does very little, and makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
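
    A minimal sketch of the external-watchdog idea in C# (the endpoint and interval are placeholders; the keep-warm call should be cheap but should exercise the code paths whose pages matter):

        using System;
        using System.Net;
        using System.Threading;

        class KeepWarm
        {
            static void Main()
            {
                // Touch the service every few minutes during idle hours so its
                // hot pages stay resident instead of drifting into the swap file.
                Timer timer = new Timer(delegate
                {
                    try
                    {
                        using (WebClient wc = new WebClient())
                            wc.DownloadString("http://localhost:8080/service/ping"); // hypothetical no-op call
                    }
                    catch (WebException) { /* log and retry on the next tick */ }
                }, null, TimeSpan.Zero, TimeSpan.FromMinutes(10));

                Console.ReadLine(); // keep the watchdog process alive
                GC.KeepAlive(timer);
            }
        }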

  • Simplification / optimization of GPS track

    - by GreyCat
    I've got a GPS track, produced by gpxlogger(1) (supplied as a client for gpsd). The GPS receiver updates its coordinates every 1 second; gpxlogger's logic is very simple: it writes down location (lat, lon, ele) and a timestamp (time) received from GPS every n seconds (n = 3 in my case). After writing down several hours' worth of track, gpxlogger saves a several-megabyte-long GPX file that includes several thousand points. Afterwards, I try to plot this track on a map and use it with OpenLayers. It works, but several thousand points make using the map a sloppy and slow experience. I understand that having several thousand points is suboptimal. There are myriads of points that can be deleted without losing almost anything: when there are several points making up roughly a straight line and we're moving at the same constant speed between them, we can just keep the first and the last point and throw away everything else. I thought of using gpsbabel for such a track simplification / optimization job, but alas, its simplify filter works only with routes, i.e. analyzing only the geometrical shape of the path, without timestamps (i.e. not checking that the speed was roughly constant). Is there some ready-made utility / library / algorithm available to optimize tracks? Or maybe I'm missing some clever option in gpsbabel?
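
    The classic starting point is Ramer-Douglas-Peucker simplification. A sketch in C# (coordinates treated as planar for brevity; a timestamp-aware variant could test speed deviation instead of perpendicular distance):

        using System;
        using System.Collections.Generic;

        struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }

        static class Simplify
        {
            // Keep the endpoints; recurse on the point farthest from the
            // chord whenever it deviates by more than epsilon.
            public static List<Pt> Rdp(List<Pt> pts, double epsilon)
            {
                if (pts.Count < 3) return new List<Pt>(pts);
                int index = 0;
                double dmax = 0;
                for (int i = 1; i < pts.Count - 1; i++)
                {
                    double d = PerpDist(pts[i], pts[0], pts[pts.Count - 1]);
                    if (d > dmax) { dmax = d; index = i; }
                }
                if (dmax <= epsilon)
                    return new List<Pt> { pts[0], pts[pts.Count - 1] };
                List<Pt> left = Rdp(pts.GetRange(0, index + 1), epsilon);
                List<Pt> right = Rdp(pts.GetRange(index, pts.Count - index), epsilon);
                left.RemoveAt(left.Count - 1); // drop the duplicated split point
                left.AddRange(right);
                return left;
            }

            static double PerpDist(Pt p, Pt a, Pt b)
            {
                double dx = b.X - a.X, dy = b.Y - a.Y;
                double len = Math.Sqrt(dx * dx + dy * dy);
                if (len == 0)
                    return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
                return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / len;
            }
        }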

  • How can I unit test a PHP class method that executes a command-line program?

    - by acoulton
    For a PHP application I'm developing, I need to read the current git revision SHA, which of course I can get easily by using shell_exec or backticks to execute the git command line client. I have obviously put this call into a method of its very own, so that I can easily isolate and mock this for the rest of my unit tests. So my class looks a bit like this:

        class Task_Bundle
        {
            public function execute()
            {
                // Do things
                $revision = $this->git_sha();
                // Do more things
            }

            protected function git_sha()
            {
                return `git rev-parse --short HEAD`;
            }
        }

    Of course, although I can test most of the class by mocking git_sha, I'm struggling to see how to test the actual git_sha() method, because I don't see a way to create a known state for it. I don't think there's any real value in a unit test that also calls git rev-parse to compare the results? I was wondering about at least asserting that the command had been run, but I can't see any way to get a history of shell commands executed by PHP; even if I specify that PHP should use BASH rather than SH, the history list comes up empty, I presume because the separate backticks executions are separate terminal sessions. I'd love to hear any suggestions for how I might test this, or is it OK to just leave that method untested and be careful with it when the app is being maintained in future?

  • Can't diagnose my MySQL root user problem

    - by George Crawford
    Hi all, I have a problem with the MySQL root user in my MySQL setup, and I just can't for the life of me work out how to fix it. It seems that I have somehow messed up the root user, and my access to databases is now very erratic. For reference, I'm using MAMP on OS X to provide the MySQL server. I'm not sure how much that matters though; I'd guess that whatever I've done will require a command-line fix. I can start MySQL using MAMP as usual, and access databases using the 'standard' users I have created for my PHP apps. However, the root user, which I use in my MySQL GUI client and also in phpMyAdmin, can only access the "information_schema" database, as well as two I have created manually and presumably (and mistakenly) left permissions wide open for. My 15 or so other databases cannot be accessed by the root user. When I load up phpMyAdmin, the home screen says: "Create new database: No Privileges". I certainly did at some stage change my root user's password using the MAMP dialog, but I don't remember if I did anything else which might have caused this problem. I've tried changing the password again, and there seems to be no change in the issue. I've also tried resetting the root password using the command line, including starting mysql manually with --skip-grant-tables and then flushing privileges, but again, nothing seems to fix the issue. I've come to the end of my ideas, and would very much appreciate some step-by-step advice and diagnosis from one of the experts here! Many thanks for your help.

  • Validation problem with a 'JotForm' template

    - by Thomas
    A client sent me a form template they had created using jotform.com to implement on their WordPress site. The form template is supposed to hide part of the form until the user clicks the 'next' button, at which point a script is supposed to validate all of the input fields the user has presumably filled out and then display the rest of the form. While I have successfully managed to get the form to display the next part of the form when the user clicks 'next', it fails to validate the input fields. It's kind of difficult to explain without a huge block of text, so it is probably easier to show you: The original working template that the customer sent me: http://www.loftist.com/jotform/List_Your_Loft.html The problem child: http://www.loftist.com/?page_id=78 If you just click on one of the input fields and then click elsewhere on the page, the input fields successfully return a validation error message and prevent the user from clicking on the 'next' button. However, if you simply click on the next button then the next set of fields gets displayed. Any thoughts? What am I doing wrong here? I'm convinced this must be a really simple problem, but I'm not sure what it could be...

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files and none of them seem to answer this question: where does the data go once you import it? Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    Now there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was:

        Warning: the objects were exported by OVIEDOE, not by you
        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Now, it says it was terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?

  • ActiveX won't run from server

    - by user1709555
    I have an MFC ActiveX control that runs fine from disk, but when I put it on a server I get errors. Client: a Win7 machine. Server: Ubuntu running Apache. The HTML and the errors are below; please advise. 10xs, Nahum

    HTML:

        <html>
        <HEAD>
        <TITLE>myFirstOCX.CAB</TITLE>
        <script type="text/javascript" FOR="window">
        function fn() {
            try {
                document.all('Ctrl1').AboutBox(); // error: Undifiend : object doesn't have AboutBox() method
                // OR
                var obj = new ActiveXObject("activex.activexCtrl");
                obj.AboutBox(); // error: Undifiend : Automation server can't create object
            }
            catch (ex) {
                alert("Error: " + ex.Description + " : " + ex.message);
            }
        }
        </script>
        </HEAD>
        <body bgcolor=lightblue>
        <TABLE BORDER>
          <TR>
            <TD><OBJECT CLASSID="CLSID:E228C560-FA68-48E6-850F-B1167515C920"
                        CODEBASE=".\nsip.CAB#version=1,0,0,1"
                        ID="Ctrl1" name="Ctrl1">
                </OBJECT>
            </TD>
          </TR>
          <TR>
            <TD ALIGN="CENTER">
              <INPUT TYPE=BUTTON VALUE="Click Me" onclick="fn()">
            </TD>
          </TR>
        </TABLE>
        <INPUT TYPE=TEXT ID="ConnectionString" VALUE="">
        </body>
        </html>

  • What kind of library to use for display of graphical objects and right click context menus

    - by Gopal
    Hi all, Goal: to develop a web-based NMS interface which displays a network topology (e.g., switches, routers, links, end hosts). Each node should be 'movable' (draggable to an appropriate place manually, or its best location computed algorithmically). I should be able to zoom into the network graph (say, if there are many clusters of nodes and I want to concentrate on a particular cluster). I should be able to right-click any node or link and get a context menu (e.g., 'show routing table', 'show interfaces', 'show bandwidth utilization graph', etc.). The data for this network topology will be fetched by making calls to an Apache-based webserver, where backend scripts in Python will fetch the appropriate data and send it via JSON to the web client. Question: I am assuming that some sort of JavaScript library/framework will be most appropriate for this: jQuery, Dojo, Moo, etc. [I've never used any of these before]. Which of these would be most recommended for this sort of thing? Which would be easiest to learn (say, in a month's time)? Thanks for any tips.

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory, what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best?

    - Ninject
    - Unity
    - Castle.Windsor
    - Autofac
    - StructureMap

    Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case, I'm building a client WPF app and a WCF/SQL services infrastructure; ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support and performance are all important factors in my choice. Update: The resources and duplicate questions cited appear to be out of date; can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and has at least a generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already. Second Update: If you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of those, thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.

  • How to properly implement the Strategy pattern in a web MVC framework?

    - by jboxer
    In my Django app, I have a model (let's call it Foo) with a field called "type". I'd like to use Foo.type to indicate what type the specific instance of Foo is (possible choices are "Number", "Date", "Single Line of Text", "Multiple Lines of Text", and a few others). There are two things I'd like the "type" field to end up affecting: the way a value is converted from its normal type to text (for example, in "Date", it may be str(the_date.isoformat())), and the way a value is converted from text to the specified type (in "Date", it may be datetime.date.fromtimestamp(the_text)). To me, this seems like the Strategy pattern (I may be completely wrong, and feel free to correct me if I am). My question is: what's the proper way to code this in a web MVC framework? In a client-side app, I'd create a Type class with abstract methods "serialize()" and "unserialize()", override those methods in subclasses of Type (such as NumberType and DateType), and dynamically set the "type" field of a newly-instantiated Foo to the appropriate Type subclass at runtime. In a web framework, it's not quite as straightforward for me. Right now, the way that makes the most sense is to define Foo.type as a Small Integer field and define a limited set of choices (0 = "Number", 1 = "Date", 2 = "Single Line of Text", etc.) in the code. Then, when a Foo object is instantiated, use a factory method to look at the value of the instance's "type" field and plug in the correct Type subclass (as described in the paragraph above). Foo would also have serialize() and unserialize() methods, which would delegate directly to the plugged-in Type subclass. How does this design sound? I've never run into this issue before, so I'd really like to know if other people have, and how they've solved it.
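
    For what it's worth, the factory-plus-strategy shape described above looks like this as a sketch (written in C# with illustrative names; the Django version would be the same structure in Python):

        using System;

        abstract class FieldType
        {
            public abstract string Serialize(object value);
            public abstract object Unserialize(string text);
        }

        class DateType : FieldType
        {
            public override string Serialize(object value) { return ((DateTime)value).ToString("yyyy-MM-dd"); }
            public override object Unserialize(string text) { return DateTime.Parse(text); }
        }

        static class FieldTypeFactory
        {
            // The small integer stored in the model's "type" field picks the
            // strategy when the instance is loaded.
            public static FieldType For(int typeCode)
            {
                switch (typeCode)
                {
                    case 1: return new DateType();
                    // case 0: NumberType; case 2: SingleLineTextType; ...
                    default: throw new ArgumentOutOfRangeException("typeCode");
                }
            }
        }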

  • Strange problems with the Spring RestTemplate in Android application

    - by HarryCater
    I've begun using the RESTful API support in the Spring Framework in my Android client application, but I have encountered problems when I try to execute an HTTP request via the postForObject/postForEntity methods. Here is my code:

        public String _URL = "https://noticemed.com/app/mobile/login";

        public void BeginAuthorization(View view) {
            HttpHeaders requestHeaders = new HttpHeaders();
            requestHeaders.setContentType(MediaType.APPLICATION_JSON);
            HttpEntity<String> _entity = new HttpEntity<String>(requestHeaders);
            RestTemplate templ = new RestTemplate();
            templ.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
            templ.getMessageConverters().add(new MappingJacksonHttpMessageConverter());
            ResponseEntity<String> _response = templ.postForEntity(_URL, _entity, String.class); // HERE APP CRASHES
            String _body = _response.getBody();
        }

    And here is a stack trace in logcat after the app crashes. As you see, there is no definite error message. So the question is: what am I doing wrong? How do I fix this? Maybe there is another way to do it? I really need help. Thanks in advance!

  • How to parallelize this groovy code?

    - by lucas
    I'm trying to write a reusable component in Groovy to easily shoot off emails from some of our Java applications. I would like to pass it a List<Email>, where Email is just a POJO (POGO?) with some email info. I'd like it to be multithreaded, at least running all the email logic in a second thread, or make one thread per email. I am really foggy on multithreading in Java so that probably doesn't help! I've attempted a few different ways, but here is what I have right now:

        void sendEmails(List<Email> emails) {
            def threads = []
            def sendEm = emails.each { email ->
                def th = new Thread({
                    Random rand = new Random()
                    def wait = (long) (rand.nextDouble() * 1000)
                    println "in closure"
                    this.sleep wait
                    sendEmail(email)
                })
                println "putting thread in list"
                threads << th
            }
            threads.each { it.run() }
            threads.each { it.join() }
        }

    I was hoping the sleep would randomly slow some threads down so the console output wouldn't be sequential. Instead, I see this:

        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        putting thread in list
        in closure
        sending email1
        in closure
        sending email2
        in closure
        sending email3
        in closure
        sending email4
        in closure
        sending email5
        in closure
        sending email6
        in closure
        sending email7
        in closure
        sending email8
        in closure
        sending email9
        in closure
        sending email10

    sendEmail basically does what you'd expect, including the println statement, and the client that calls this follows:

        void doSomething() {
            Mailman emailer = MailmanFactory.getExchangeEmailer()
            def to = ["one", "two"]
            def from = "noreply"
            def li = []
            def email
            (1..10).each {
                email = new Email(to, null, from, "email" + it, "hello")
                li << email
            }
            emailer.sendEmails li
        }

  • Sharing a COM port over TCP

    - by guinness
    What would be a simple design pattern for sharing a COM port over TCP with multiple clients? For example, a local GPS device that could transmit co-ordinates to remote hosts in real time. So I need a program that would open the serial port and accept multiple TCP connections, like:

        class Program
        {
            public static void Main(string[] args)
            {
                SerialPort sp = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One);
                Socket srv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                srv.Bind(new IPEndPoint(IPAddress.Any, 8000));
                srv.Listen(20);
                while (true)
                {
                    Socket soc = srv.Accept();
                    new Connection(soc);
                }
            }
        }

    I would then need a class to handle the communication between connected clients, allowing them all to see the data and keeping it synchronized so client commands are received in sequence:

        class Connection
        {
            static object lck = new object();
            static List<Connection> cons = new List<Connection>();

            public Socket socket;
            public StreamReader reader;
            public StreamWriter writer;

            public Connection(Socket soc)
            {
                this.socket = soc;
                this.reader = new StreamReader(new NetworkStream(soc, false));
                this.writer = new StreamWriter(new NetworkStream(soc, true));
                new Thread(ClientLoop).Start();
            }

            void ClientLoop()
            {
                lock (lck)
                {
                    cons.Add(this);
                }
                while (true)
                {
                    lock (lck)
                    {
                        string line = reader.ReadLine();
                        if (String.IsNullOrEmpty(line)) break;
                        foreach (Connection con in cons)
                            con.writer.WriteLine(line);
                    }
                }
                lock (lck)
                {
                    cons.Remove(this);
                    socket.Close();
                }
            }
        }

    (The original snippet added connections to a list named "connections" but iterated one named "cons"; it's been made consistent here.) The problem I'm struggling to resolve is how to facilitate communication between the SerialPort instance and the threads. I'm not certain that the above code is the best way forward, so does anybody have another solution (the simpler the better)?
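
    One hedged way to wire the port in (a sketch assuming the lock and connection list above are made reachable from Main, e.g. as internal static fields): let SerialPort's DataReceived event broadcast each line under the same lock, so serial data and client traffic stay serialized:

        // In Main, after constructing sp:
        sp.NewLine = "\r\n"; // NMEA sentences are CRLF-terminated
        sp.Open();
        sp.DataReceived += (s, e) =>
        {
            string line = sp.ReadLine(); // GPS output is line-oriented
            lock (Connection.lck)
            {
                foreach (Connection con in Connection.cons)
                {
                    con.writer.WriteLine(line);
                    con.writer.Flush();
                }
            }
        };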
