Search Results

Search found 16383 results on 656 pages for 'bi applications'.


  • choosing an image locally from http url and serving that image without a server round trip

    - by serverman
    Hi folks,

    I am a complete novice to Flash (never created anything in Flash). I am quite familiar with web applications (J2EE based) and have reasonable expertise in JavaScript. Here is my requirement: I want the user to select an image (via an HTML form). Normally, on post, this image would be sent to the server and perhaps stored there to be served later. I do not want that. I want to store this image locally and then serve it via HTTP to the user. So the flow is:

    1. Go to the "select image" URL: mywebsite.com/selectImage
    2. Browse for and select the image. This would transfer control locally to some code running on the client (JavaScript or Flash), which would then store the image locally at some place on the client machine.
    3. Go to the "show image" URL: mywebsite.com/showImage. This would eventually result in some client code running in the browser that retrieves the image and renders it (without any server round trips).

    I considered the following options:

    1. Use HTML5 local storage. Since I am a complete novice to Flash, I looked into this first. I found that it is fairly straightforward to store and retrieve images in JavaScript (only strings are allowed, but I am hoping that storing base64-encoded strings will work, at least for small images). However, how do I serve the image via an HTTP URL that points to my server without a server round trip? I saw the interesting article at http://hacks.mozilla.org/category/fileapi/ but that would work only in Firefox, and I need to work on all the latest browsers (at least the ones supporting HTML5 local storage).

    2. Use Flash SharedObjects. OK, this would have been good - the only thing is I am not sure where to start. Snippets of ActionScript to do this are scattered everywhere, but I do not know how to use those scripts in an actual HTML page :) I do not need to create any movies or anything - I just need to store an image and serve it locally. If I go this route, I would also use it to store other "strings" locally. If you suggest this, please give me the exact steps (pointers to other web sites are fine) on how to do this. I would ideally like to avoid paying for any Flash development environment software :)

    Thank you!


  • How to design a service that can provide its interface as a JAX-WS web service, via JMS, or as local method calls

    - by kevinegham
    Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or be called directly from another service in the same container?

    Here's some more context. Currently I am responsible for a service (let's call it Service X) with the following properties:

    - The interface definition is a human-readable document, kept up to date manually.
    - It accepts HTTP form-encoded requests to a single URL.
    - It sends plain old XML responses (no schema).
    - It uses Apache to accept requests plus a proprietary application server (not servlet or EJB based) containing all the logic, which runs in a separate tier.
    - It makes heavy use of a relational database.
    - It is called both by internal applications written in a variety of languages and by a small number of third parties.

    I want to (or at least, have been told to!):

    - Switch to a well-known (preferably open source) JEE stack such as JBoss, Glassfish, etc.
    - Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A.
    - Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In future we might consider a more lightweight API as well (REST + JSON? Google Protocol Buffers?), but that's a nice-to-have.

    Additional considerations are:

    - In a smaller deployment, Service A and Service B will likely be running on the same machine, and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other (see the sketch below).
    - Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on reusing much of the existing code for the new services.
    - I'm happy with either contract-first (WSDL, I guess) or (annotated) code-first development.

    Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file, and using wsimport plus a little dummy code to turn that into a WAR file, which I've deployed.
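    One commonly suggested shape for this - a sketch, not a definitive design, and all class names here are invented - is to write each service as a plain session bean and let annotations expose it over SOAP, while co-located services call it in-process through ordinary injection:

    ```java
    import javax.ejb.EJB;
    import javax.ejb.Stateless;
    import javax.jws.WebService;

    // Service A: one implementation, exposed as a SOAP endpoint (the container
    // generates the WSDL) while remaining an ordinary local session bean.
    // Shown in one listing for brevity; these would be separate files.
    @Stateless
    @WebService
    public class ServiceA {
        public String lookup(String key) {
            return "value-for-" + key; // placeholder logic
        }
    }

    // Service B depends on Service A; when both are deployed in the same
    // container this is a plain in-process method call - no HTTP, no bus.
    @Stateless
    public class ServiceB {
        @EJB
        private ServiceA serviceA;

        public String describe(String key) {
            return "A said: " + serviceA.lookup(key);
        }
    }
    ```

    A JMS face can then be added as a message-driven bean that unpacks each message and delegates to the same ServiceA bean, so all three entry points share one implementation.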


  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.]

    ```
    // Service
    CoolLibrary
    WcfApp.Core
    WcfApp.Contracts
    WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts

    // Clients
    CustomerX.App : WcfApp.Contracts
    CustomerY.App : WcfApp.Contracts
    CustomerZ.App : WcfApp.Contracts
    ```

    (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Otherwise CustomerX.App would also depend on, and thus be exposed to, the service domain model?)

    (CoolLibrary is shared with other applications, so I can't just put it inside WcfApp.Services.)

    All of this code is in-house. I was thinking of having six repositories for this. The format is [repository folder name] : [projects included in repository]

    ```
    1. CoolLibrary.git      : CoolLibrary
    2. WcfApp.Contracts.git : WcfApp.Contracts
    3. WcfApp.git           : WcfApp.Core, WcfApp.Services
    4. CustomerX.App.git    : CustomerX.App
    5. CustomerY.App.git    : CustomerY.App
    6. CustomerZ.App.git    : CustomerZ.App
    ```

    How should I manage my project dependencies? I see three options:

    1. I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers.
    2. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle.
    3. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree?

    I know I asked a lot of questions in this post, but the most important two are:

    1. Am I using the right number and layout of repositories? Should I use fewer or more?
    2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?


  • StAX: Content not allowed in prolog

    - by RalfB
    I have the following (test) XML file below and Java code that uses StAX. I want to apply this code to a file that is about 30 GB large but with fairly small elements, so I thought StAX is a good choice. I am getting the following error:

    ```
    Exception in thread "main" javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1]
    Message: Content is not allowed in prolog
        at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:598)
        at at.tuwien.mucke.util.xml.staxtest.StaXTest.main(StaXTest.java:18)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
    ```

    The XML file:

    ```xml
    <?xml version='1.0' encoding='utf-8'?>
    <catalog>
        <book id="bk101">
            <author>Gambardella, Matthew</author>
            <title>XML Developer's Guide</title>
            <price>44.95</price>
            <description>An in-depth look at creating applications with XML.</description>
        </book>
        <book id="bk102">
            <author>Ralls, Kim</author>
            <title>Midnight Rain</title>
            <price>5.95</price>
            <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description>
        </book>
    </catalog>
    ```

    Here the code:

    ```java
    package xml.staxtest;

    import java.io.*;
    import javax.xml.stream.*;

    public class StaXTest {

        public static void main(String[] args) throws Exception {
            XMLInputFactory xif = XMLInputFactory.newInstance();
            XMLStreamReader streamReader = xif.createXMLStreamReader(new FileReader("D:/Data/testFile.xml"));
            while (streamReader.hasNext()) {
                int eventType = streamReader.next();
                if (eventType == XMLStreamReader.START_ELEMENT) {
                    System.out.println(streamReader.getLocalName());
                }
                //... more to come here later ...
            }
        }
    }
    ```
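    A frequent cause of "Content is not allowed in prolog" is invisible bytes ahead of the XML declaration - typically a UTF-8 byte-order mark, which a FileReader decodes with the platform default charset and hands to the parser as content. This is an assumption about the file rather than a confirmed diagnosis, but a cheap thing to try is giving the parser the raw stream and letting it detect the encoding itself:

    ```java
    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamReader;

    public class StaXFromStream {
        public static void main(String[] args) throws Exception {
            XMLInputFactory xif = XMLInputFactory.newInstance();
            // Hand the parser raw bytes instead of a Reader, so it can take the
            // encoding from the XML declaration and skip a UTF-8 BOM if present.
            try (InputStream in = new BufferedInputStream(
                    new FileInputStream("D:/Data/testFile.xml"))) {
                XMLStreamReader streamReader = xif.createXMLStreamReader(in);
                while (streamReader.hasNext()) {
                    if (streamReader.next() == XMLStreamReader.START_ELEMENT) {
                        System.out.println(streamReader.getLocalName());
                    }
                }
            }
        }
    }
    ```

    If the error persists with the stream version, inspecting the first few bytes of the file in a hex editor will show whatever precedes `<?xml`.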


  • How do I write object classes effectively when dealing with table joins?

    - by Chris
    I should start by saying I'm not now, nor do I have any delusions I'll ever be, a professional programmer, so most of my skills have been learned from experience, very much as a hobby. I learned PHP as it seemed a good, simple introduction in certain areas, and it allowed me to design simple web applications.

    When I learned about objects, classes, etc., the tutor's basic examples covered the idea that as a rule of thumb each database table should have its own class. While that worked well for the photo gallery project we wrote, as it had very simple MySQL queries, it's not working so well now that my projects are getting more complex. If I require data from two separate tables which require a table join, I've instead been ignoring the class altogether and handling it on a case-by-case basis, OR, even worse, been combining some of the data into the class and the rest as a separate entity and doing two queries, which to me seems inefficient.

    As an example, when viewing content on a forum I wrote, if you view a thread, I retrieve data from the threads table, the posts table, and the user table. The data from the user and posts tables is retrieved via a join and not instantiated as an object, whereas the thread data is fetched using my Threads class.

    So how do I get from my current state of affairs to something a little less 'stupid', for want of a better word? Right now I have a DB class that deals with connection and escaping values etc., a parent DB query class that deals with the common queries and methods, and all of the other classes (Thread, Upload, Session, Photo, and ones that aren't used yet - Post, User, etc.) are children of that.

    Do I make a big Posts class that has the relevant extra attributes that I retrieve from the users (and potentially threads) table? Do I have separate classes that populate each of their relevant attributes with a single query? If so, how do I do that? Because of the way my classes are written, based on what I was taught, my DB update-row method and insert method both just take the attributes as an array and update all of that. If I have extra attributes from other DB tables in each class, then how do I rewrite those methods, as obviously updating automatically like that would result in errors?

    In short, I think my understanding is limited right now, and I'd like some pointers when it comes to the fundamentals of how to write more complex classes.
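    One common answer to this - a sketch of the idea rather than the only way, shown in Java with invented names, though the shape is the same in PHP - is to keep the one-class-per-table objects for inserts and updates, and to add separate read-only "view" objects that a single JOIN query populates. Because the view object has no save method, the question of how the update code should handle the joined-in columns never arises:

    ```java
    // Read-only view of a post joined with its author: filled by one query
    // such as
    //   SELECT p.id, p.body, u.name
    //   FROM posts p JOIN users u ON u.id = p.user_id
    //   WHERE p.thread_id = ?
    // and never written back, so the per-table Post/User classes keep sole
    // responsibility for inserts and updates.
    public final class PostView {
        private final long postId;
        private final String body;
        private final String authorName;

        public PostView(long postId, String body, String authorName) {
            this.postId = postId;
            this.body = body;
            this.authorName = authorName;
        }

        public long getPostId() { return postId; }
        public String getBody() { return body; }
        public String getAuthorName() { return authorName; }
    }
    ```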


  • Multi-threaded random_r is slower than the single-threaded version

    - by Nixuz
    The following program is essentially the same as the one described here. When I compile and run the program using two threads (NTHREADS == 2), I get the following run times:

    ```
    real 0m14.120s
    user 0m25.570s
    sys  0m0.050s
    ```

    When it is run with just one thread (NTHREADS == 1), I get significantly better run times, even though it is only using one core:

    ```
    real 0m4.705s
    user 0m4.660s
    sys  0m0.010s
    ```

    My system is dual core, and I know random_r is thread safe, and I am pretty sure it is non-blocking. When the same program is run without random_r, with a calculation of cosines and sines used as a replacement, the dual-threaded version runs in about half the time, as expected.

    ```c
    #include <pthread.h>
    #include <stdlib.h>
    #include <stdio.h>

    #define NTHREADS 2
    #define PRNG_BUFSZ 8
    #define ITERATIONS 1000000000

    void* thread_run(void* arg) {
        int r1, i, totalIterations = ITERATIONS / NTHREADS;
        for (i = 0; i < totalIterations; i++) {
            random_r((struct random_data*)arg, &r1);
        }
        printf("%i\n", r1);
        return NULL;
    }

    int main(int argc, char** argv) {
        struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data));
        char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ);
        pthread_t* thread_ids;
        int t = 0;
        thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t));

        /* create threads */
        for (t = 0; t < NTHREADS; t++) {
            initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]);
            pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]);
        }
        for (t = 0; t < NTHREADS; t++) {
            pthread_join(thread_ids[t], NULL);
        }
        free(thread_ids);
        free(rand_states);
        free(rand_statebufs);
        return 0;
    }
    ```

    I am confused why, when generating random numbers, the two-threaded version performs much worse than the single-threaded version, considering random_r is meant to be used in multi-threaded applications.
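    A pattern worth suspecting here (an assumption drawn from the allocation layout, not a verified diagnosis) is false sharing: the calloc calls place both threads' random_data states and 8-byte state buffers contiguously, so they can land on the same cache line, and every random_r call then bounces that line between the cores. In C the usual remedy is to pad or align each state to its own cache line (e.g. 64 bytes). The same effect is easy to demonstrate in Java, where ThreadLocalRandom exists precisely to keep each thread's generator state in its own Thread object:

    ```java
    import java.util.concurrent.ThreadLocalRandom;

    public class PerThreadRng {
        static final long ITERATIONS = 1_000_000_000L;
        static final int NTHREADS = 2;

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                // State lives inside the current Thread object, so the two
                // threads never write to a shared cache line while generating.
                ThreadLocalRandom rng = ThreadLocalRandom.current();
                long sink = 0;
                for (long i = 0; i < ITERATIONS / NTHREADS; i++) {
                    sink += rng.nextInt();
                }
                System.out.println(sink); // keep the loop from being elided
            };
            Thread[] threads = new Thread[NTHREADS];
            for (int t = 0; t < NTHREADS; t++) {
                threads[t] = new Thread(task);
                threads[t].start();
            }
            for (Thread th : threads) {
                th.join();
            }
        }
    }
    ```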


  • How do I 'globally' catch exceptions thrown in object instances?

    - by SleepyBobos
    I am currently writing a WinForms application (C#). I am making use of the Enterprise Library Exception Handling Block, following a fairly standard approach from what I can see; i.e. in the Main method of Program.cs I have wired up an event handler to the Application.ThreadException event, etc. This approach works well and handles the application's exceptional circumstances.

    In one of my business objects I throw various exceptions in the set accessor of one of the object's properties:

    ```csharp
    set
    {
        if (value > MaximumTrim)
            throw new CustomExceptions.InvalidTrimValue("The value of the minimum trim...");
        if (!availableSubMasterWidthSatisfiesAllPatterns(value))
            throw new CustomExceptions.InvalidTrimValue("Another message...");
        _minimumTrim = value;
    }
    ```

    My logic for this approach (without turning this into a 'when to throw exceptions' discussion) is simply that the business objects are responsible for checking business rule constraints and throwing an exception that can bubble up and be caught as required. It should be noted that in the UI of my application I do explicitly check the values that the public property is being set to (and take action there, displaying a friendly dialog, etc.), but with throwing the exception I am also covering the situation where my business object may not be used by a UI - e.g. the property is being set by another business object. Anyway, I think you all get the idea.

    My issue is that these exceptions are not being caught by the handler wired up to Application.ThreadException, and I don't understand why. From other reading I have done, the Application.ThreadException event and its handler "... catches any exception that occurs on the main GUI thread". Are the exceptions being raised in my business object not on this thread? I have not created any new threads.

    I can get the approach to work if I update the code as follows, explicitly calling the event handler that is wired to Application.ThreadException. This is the approach outlined in the Enterprise Library samples. However, this approach requires me to wrap any thrown exceptions in a try-catch, something I was trying to avoid by using a 'global' handler in the first place.

    ```csharp
    try
    {
        if (value > MaximumTrim)
            throw new CustomExceptions.InvalidTrimValue("The value of the minimum...");
        if (!availableSubMasterWidthSatisfiesAllPatterns(value))
            throw new CustomExceptions.InvalidTrimValue("Another message");
        _minimumTrim = value;
    }
    catch (Exception ex)
    {
        Program.ThreadExceptionHandler.ProcessUnhandledException(ex);
    }
    ```

    I have also investigated wiring a handler up to the AppDomain.UnhandledException event, but this does not catch the exceptions either. It would be good if someone could explain to me why my exceptions are not being caught by my global exception handler in the first code sample. Is there another approach I am missing, or am I stuck with wrapping code in try-catch, shown above, as required?


  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails are sexy, there are some serious problems with it. Grails' controllers:

    1. Are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and this seems to cause Grails to blow up.
    2. Are based on an obsolete request-parameters model (rather than form-backing objects, which are much nicer).
    3. Are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code.
    4. Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies compared to the basic parameter model.
    5. Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails?
    6. The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code into two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects.

    Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write and aren't guaranteed to work when deployed that I think it discourages fast TDD test cycles. Most things seem to screw up Grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all.

    I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks that I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate.

    This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the controller layer. Suggestions on what I can do?


  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks, etc.). Here are some of the performance requirements:

    1. Very fast turnaround time (by this I mean that any new data, such as a new message on a forum, should be available in the search results very soon - less than a minute).
    2. I need to discard old documents on a fairly regular basis to ensure that the search results are not dated.
    3. Last but not least, the search application needs to be responsive (latency on the order of 100 milliseconds, and it should support at least 10 qps).

    All of the requirements I have currently can be met without using Lucene (and that would let me satisfy 1, 2 and 3), but I am anticipating other requirements in the future (like search relevance, etc.) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions:

    a. I read that the optimize() method in the IndexWriter class is expensive and should not be used by applications that do frequent updates. What are the alternatives?

    b. In order to do incremental updates, I need to keep committing new data and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem? (See the sketch below.)

    c. I know that Lucene provides a delete method which lets you delete all documents that match a certain query. In my case, I need to delete all documents which are older than a certain age. One option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field, since I think the one created by Lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings?

    I know these are very open questions, so I am not looking for a detailed answer. I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.
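    For (b), the pattern Lucene later formalized as near-real-time (NRT) search is to pull a reader straight from the live IndexWriter and cheaply reopen it on a short timer, instead of committing and reopening from the directory. A sketch of the refresh step - method names follow the Lucene 4.x API and differ in older releases (3.x used IndexWriter.getReader() / IndexReader.reopen()):

    ```java
    import java.io.IOException;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;

    public final class NrtReader {
        // Call periodically (e.g. every few hundred ms): returns a reader that
        // sees documents added to the writer since the last refresh, without
        // requiring a full commit to disk.
        public static DirectoryReader refresh(IndexWriter writer,
                                              DirectoryReader current) throws IOException {
            if (current == null) {
                return DirectoryReader.open(writer, true);
            }
            DirectoryReader changed = DirectoryReader.openIfChanged(current, writer, true);
            if (changed != null) {
                current.close(); // release the stale reader once swapped out
                return changed;
            }
            return current;
        }
    }
    ```

    Closing the stale reader only after in-flight searches finish (reference counting) is the production-grade wrinkle; SearcherManager in later Lucene versions wraps exactly this bookkeeping.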


  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview: I have a Microsoft Word add-in, written in VBA (Visual Basic for Applications), that compresses a document and all of its related contents (embedded media) into a zip archive. After creating the zip archive it turns the file into a byte array and posts it to an ASMX web service. This mostly works.

    Issues: The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40 MB, but not one that is 140 MB (timeout/general failure). A secondary issue is that building the byte array in the VBA Word add-in can fail by running out of memory on the client machine if the zip archive is too large.

    Potential solutions: I am considering the following options and am looking for feedback on either option or any other suggestions.

    Option One: Open a file stream on the client (MS Word VBA), read one "chunk" at a time, and transmit each chunk to the ASMX web service, which assembles the "chunks" into a file on the server. This has the benefit of not adding any additional dependencies or components to the application; I would only be modifying existing functionality. (Fewer dependencies is better, as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this, or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)?

    Option Two: I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure what exactly it is capable of or whether I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0). But if using WCF is definitely a better solution, I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?
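    For Option One, the server half of the pattern is usually the simpler part: each request carries a chunk plus the byte offset it belongs at, and the service writes it into place, which makes retrying a failed chunk safe. A sketch of that assembly step, written in Java purely to illustrate the pattern (an ASMX method in C#/VB.NET would do the same with a FileStream), with all names invented:

    ```java
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Base64;

    public final class ChunkAssembler {
        // One call per uploaded chunk: decode it and write it at its offset.
        // Offset-addressed writes are idempotent, so the client can simply
        // resend any chunk whose request timed out.
        public static void writeChunk(String targetPath, long offset, String base64Chunk)
                throws IOException {
            byte[] data = Base64.getDecoder().decode(base64Chunk);
            try (RandomAccessFile out = new RandomAccessFile(targetPath, "rw")) {
                out.seek(offset);
                out.write(data);
            }
        }
    }
    ```

    On the VBA side, the matching loop reads a fixed-size buffer with Get from a file opened in binary mode and posts one buffer per request - which also sidesteps the build-one-giant-byte-array memory problem.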


  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications, I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten-password link, walk them through it over the phone, etc.). When I can, I fight bitterly against this practice, and I do a lot of 'extra' programming to make password resets and administrative assistance possible without storing the actual password.

    When I can't fight it (or can't win), I always encode the password in some way so that it at least isn't stored as plaintext in the database - though I am aware that if my DB gets hacked it won't take much for the culprit to crack the passwords as well - so that makes me uncomfortable.

    In a perfect world, folks would update passwords frequently and not duplicate them across many different sites. Unfortunately, I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don't want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically, I feel responsible for protecting what can be, for some users, their livelihood, even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single 'best practice' when you have to store them? In almost all cases I am using PHP and MySQL, if that makes any difference in the way I should handle the specifics.

    Additional information for bounty: I want to clarify that I know this is not something you want to have to do, and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take it. In a note below I made the point that websites geared largely toward the elderly, the mentally challenged, or the very young can become confusing for people when they are asked to perform a secure password-recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems, the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.

    Thanks to everyone: this has been a fun question with lots of debate, and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plaintext or recoverable passwords) and makes it possible for the user base I specified to log into a system without the major drawbacks I have found in normal password recovery. As always, there were about five answers that I would like to have marked correct for different reasons, but I had to choose the best one - all the rest got a +1. Thanks everyone!
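    For reference, the "salting hashes" baseline mentioned above looks like the sketch below: a salted, deliberately slow key-derivation hash, with the salt and parameters stored alongside the digest. It is shown in Java for illustration (PHP's built-in password_hash()/password_verify() cover the same ground), and the iteration count is an arbitrary example, not a recommendation:

    ```java
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public final class PasswordHashing {
        // Store salt:hash; verification re-derives with the stored salt and
        // compares. The work factor (iterations) is what slows brute force.
        public static String hash(char[] password) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(salt) + ":"
                 + Base64.getEncoder().encodeToString(derived);
        }
    }
    ```

    Anything that must be recoverable in plaintext is a different problem from hashing - it is encryption with the key held away from the database, which is the trade-off the question is wrestling with.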


  • php loop xml data with xsd schema - how do I get the data

    - by miholzi
    I am trying to get the data from an XML file and I am having trouble getting at it. For example, how can I get the caaml:locRef value or the caaml:beginPosition value? Here is the code so far:

    ```php
    /* a big thank you to helderdarocha */
    /* - he already helped me yesterday with a part of this code */
    $doc = new DOMDocument();
    $doc->load('xml/test.xml');
    $xpath = new DOMXpath($doc);
    $xpath->registerNamespace("caaml", "http://caaml.org/Schemas/V5.0/Profiles/BulletinEAWS");

    if ($doc->schemaValidate('http://caaml.org/Schemas/V5.0/Profiles/BulletinEAWS/CAAMLv5_BulletinEAWS.xsd')) {
        foreach ($xpath->query('//caaml:DangerRating') as $key) {
            echo $key->nodeValue;
            print_r($key);
        }
    }
    ```

    And here is the print_r of $key:

    ```
    DOMElement Object
    (
        [tagName] => caaml:DangerRating
        [schemaTypeInfo] =>
        [nodeName] => caaml:DangerRating
        [nodeValue] => 2014-03-03+01:00 2
        [nodeType] => 1
        [parentNode] => (object value omitted)
        [childNodes] => (object value omitted)
        [firstChild] => (object value omitted)
        [lastChild] => (object value omitted)
        [previousSibling] => (object value omitted)
        [nextSibling] => (object value omitted)
        [attributes] => (object value omitted)
        [ownerDocument] => (object value omitted)
        [namespaceURI] => http://caaml.org/Schemas/V5.0/Profiles/BulletinEAWS
        [prefix] => caaml
        [localName] => DangerRating
        [baseURI] => /Applications/MAMP/htdocs/lola/xml/test.xml
        [textContent] => 2014-03-03+01:00 2
    )
    2014-03-04+01:00 2
    ```

    And here is a part of the XML:

    ```xml
    <caaml:DangerRating>
        <caaml:locRef xlink:href="AT7R1"/>
        <caaml:validTime>
            <caaml:TimePeriod>
                <caaml:beginPosition>2014-03-06T00:00:00+01:00</caaml:beginPosition>
                <caaml:endPosition>2014-03-06T11:59:59+01:00</caaml:endPosition>
            </caaml:TimePeriod>
        </caaml:validTime>
        <caaml:validElevation>
            <caaml:ElevationRange uom="m">
                <caaml:beginPosition>2200</caaml:beginPosition>
            </caaml:ElevationRange>
        </caaml:validElevation>
        <caaml:mainValue>2</caaml:mainValue>
    </caaml:DangerRating>
    <caaml:DangerRating>
        <caaml:locRef xlink:href="AT7R1"/>
        <caaml:validTime>
            <caaml:TimePeriod>
    ```

    Thanks!


  • Where are the function literals in C++?

    - by academicRobot
    First of all, maybe 'literals' is not the right term for this concept, but it's the closest I could think of (not literals in the sense of functions as first-class citizens). The idea is that when you make a conventional function call, it compiles to something like this:

    ```
    callq <immediate address>
    ```

    But if you make a function call using a function pointer, it compiles to something like this:

    ```
    mov <memory location>,%rax
    callq *%rax
    ```

    Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list, and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to

    ```cpp
    template <int int_literal>
    struct my_template {...};
    ```

    I'd like to write

    ```cpp
    template <func_literal_t func_literal>
    struct my_template {...};
    ```

    and have calls to func_literal within my_template compile to `callq <immediate address>`.

    Is there a facility in C++ for this, or a workaround to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal.

    I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity-based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible, since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder?

    P.S. I use function pointers and function objects all the time and they are great. But this post got me thinking about the don't-pay-for-what-you-don't-use principle in relation to function calls, and it seems like forcing the use of function pointers or a similar facility when the function is known at compile time is a violation of this principle, though a small one.


  • Getting up to speed on modern architecture

    - by Matt Thrower
    Hi, I don't have any formal qualifications in computer science, rather I taught myself classic ASP back in the days of the dotcom boom and managed to get myself a job and my career developed from there. I was a confident and, I think, pretty good programmer in ASP 3 but as others have observed one of the problems with classic ASP was that it did a very good job of hiding the nitty-gritty of http so you could become quite competent as a programmer on the basis of relatively poor understanding of the technology you were working with. When I changed on to .NET at first I treated it like classic ASP, developing stand-alone applications as individual websites simply because I didn't know any better at the time. I moved jobs at this point and spent the next several years working on a single site whose architecture relied heavily on custom objects: in other words I gained a lot of experience working with .NET as a middle-tier development tool using a quite old-fashioned approach to OO design along the lines of the classic "car" class example that's so often used to teach OO. Breaking down programs into blocks of functionality and basing your classes and methods around that. Although we worked under an Agile approach to manage the work the whole setup was classic client/server stuff. That suited me and I gradually got to grips with .NET and started using it far more in the manner that it should be, and I began to see the power inherent in the technology and precisely why it was so much better than good old ASP 3. In my latest job I have found myself suddenly dropped in at the deep end with two quite young, skilled and very cutting-edge programmers. They've built a site architecture which is modelling along a lot of stuff which is new to me and which, in truth I'm having a lot of trouble understanding. The application is built on a cloud computing model with multi-tenancy and the architecture is all loosely coupled using a lot of interfaces, factories and the like. They use nHibernate a lot too. Shortly after I joined, both these guys left and I'm now supposedly the senior developer on a system whose technology and architecture I don't really understand and I have no-one to ask questions of. Except you, the internet. Frankly I feel like I've been pitched in at the deep end and I'm sinking. I'm not sure if this is because I lack the educational background to understand this stuff, if I'm simply not mathematically minded enough for modern computing (my maths was never great - my approach to design is often to simply debug until it works, then refactor until it looks neat), or whether I've simply been presented with too much of too radical a nature at once. But the only way to find out which it is is to try and learn it. So can anyone suggest some good places to start? Good books, tutorials or blogs? I've found a lot of internet material simply presupposes a level of understanding that I just don't have. Your advice is much appreciated. Help a middle-aged, stuck in the mud developer get enthusastic again! Please!


  • php functions within functions.

    - by Adamski
    Hi all,

    I have created a simple project to help me get to grips with PHP and MySQL, but have run into a minor issue. I have a working solution, but would like to understand why I cannot run this code successfully this way; I'll explain. I have a function:

    ```php
    function fetch_all_movies() {
        global $connection;
        $query = 'select distinct * FROM `'.TABLE_MOVIE.'` ORDER BY movieName ASC';
        $stmt = mysqli_prepare($connection, $query);
        mysqli_execute($stmt);
        mysqli_stmt_bind_result($stmt, $id, $name, $genre, $date, $year);
        while (mysqli_stmt_fetch($stmt)) {
            $editUrl = "index.php?a=editMovie&movieId=".$id."";
            $delUrl = "index.php?a=delMovie&movieId=".$id."";
            echo "<tr><td>".$id."</td><td>".$name."</td><td>".$date."</td><td>".get_actors($id)."</td><td><a href=\"".$editUrl."\">Edit</a> | <a href=\"".$delUrl."\">Delete</a></td></tr>";
        }
    }
    ```

    This fetches all the movies in my DB; then I wish to get the count of actors for each film, so I pass in the get_actors($id) function, which takes the movie id and gives me the count of how many actors are related to a film. Here is the function for that:

    ```php
    function get_actors($movieId) {
        global $connection;
        $query = 'SELECT DISTINCT COUNT(*) FROM `'.TABLE_ACTORS.'` WHERE movieId = "'.$movieId.'"';
        $result = mysqli_query($connection, $query);
        $row = mysqli_fetch_array($result);
        return $row[0];
    }
    ```

    The functions both work perfectly when called separately; I just would like to understand why, when I call the one function inside the other, I get this warning:

    ```
    Warning: mysqli_fetch_array() expects parameter 1 to be mysqli_result, boolean given in /Applications/MAMP/htdocs/movie_db/includes/functions.inc.php on line 287
    ```

    Could anyone help me understand why? Many thanks.


  • exclude private property from print_r or object?

    - by Hailwood
    Basically I am using Code Igniter, and the Code Igniter base class is huge. When I print_r some of my objects they have the base class embedded inside them. This makes it a pain to get the information I actually wanted (the rest of the properties). So, I am wondering if there is a way I can hide, or remove, the base class object? I have tried:

    ```php
    clone $object;
    unset($object->ci);
    print_r($object);
    ```

    but of course the ci property is private. The actual function I am using for dumping is:

    ```php
    /**
     * Outputs the given variables with formatting and location. Huge props
     * out to Phil Sturgeon for this one (http://philsturgeon.co.uk/blog/2010/09/power-dump-php-applications).
     * To use, pass in any number of variables as arguments.
     * Optional pass in "true" as final argument to kill script after dump
     *
     * @return void
     */
    function dump() {
        list($callee) = debug_backtrace();
        $arguments = func_get_args();
        $total_arguments = count($arguments);
        if (end($arguments) === true)
            $total_arguments--;

        echo '<fieldset style="background: #fefefe !important; border:2px red solid; padding:5px">';
        echo '<legend style="background:lightgrey; padding:5px;">' . $callee['file'] . ' @ line: ' . $callee['line'] . '</legend><pre>';

        $i = 0;
        foreach ($arguments as $argument) {
            // if the last argument is true we don't want to display it.
            if ($i == ($total_arguments) && $argument === true)
                break;

            echo '<br/><strong>Debug #' . (++$i) . ' of ' . $total_arguments . '</strong>: ';

            if ((is_array($argument) || is_object($argument)) && count($argument)) {
                print_r($argument);
            } else {
                var_dump($argument);
            }
        }

        echo '</pre>' . PHP_EOL;
        echo '</fieldset>' . PHP_EOL;

        // if the very last argument is "true" then die
        if (end($arguments) === true)
            die('Killing Script');
    }
    ```


  • Is there a Silverlight equivalent to "Application.OpenForms"?

    - by lightmeetsdark
    Basically, I'm trying to take information entered by the user on one page and print it out on another page via a "printer friendly" version, or report, of something. I have a MainPage.xaml that is, as the name suggests, my main page, but in a window there is the subpage AdCalculator.xaml, where the user enters the information, and PrintEstimate.xaml, which is navigated to via a button on AdCalculator. I would like to be able to transfer the information entered in the textboxes on AdCalculator and print it out via text blocks in PrintEstimate. So in order to do that I have the following code:

    ```csharp
    Views.AdCalculator AdCalc = new Views.AdCalculator();
    string PrintCompanyName = AdCalc.CompanyName;
    string PrintContactName = AdCalc.txt_CustomerName.Text;
    string PrintBillingAddress1 = AdCalc.txt_BillingAddress.Text;
    string PrintBillingAddress2 = AdCalc.txt_BillingAddressLine2.Text;
    string PrintPhoneNumber = AdCalc.txt_PhoneNumber.Text;
    string PrintNumOfAds = AdCalc.txt_NumofAds.Text;
    string PrintRateOfPlay = AdCalc.Cmb_Rate.SelectedValue.ToString();
    string PrintNumOfMonths = AdCalc.txt_NumofMonths.Text;
    string PrintTotalDue = AdCalc.txt_InvoiceSummary_TotalDue.Text;

    PrintEstimate PrintEstimatePage = new PrintEstimate();
    PrintEstimatePage.txt_CompanyName.Text = PrintCompanyName;
    PrintEstimatePage.txt_CustomerName.Text = PrintContactName;
    PrintEstimatePage.txt_BillingAddress.Text = PrintBillingAddress1;
    PrintEstimatePage.txt_BillingAddressLine2.Text = PrintBillingAddress2;
    PrintEstimatePage.txt_PhoneNumber.Text = PrintPhoneNumber;
    PrintEstimatePage.txt_InvoiceSummary_NumofAds.Text = PrintNumOfAds;
    PrintEstimatePage.txt_InvoiceSummary_RateofPlay.Text = PrintRateOfPlay;
    PrintEstimatePage.txt_InvoiceSummary_NumOfMonths.Text = PrintNumOfMonths;
    PrintEstimatePage.txt_EstimateTotal.Text = PrintTotalDue;
    ```

    The only problem is that when I instantiate the new AdCalculator page, it clears the values, so nothing is actually retained as far as user input goes. Following a lead from a colleague, I believe all I need to do is change the line

    ```csharp
    Views.AdCalculator AdCalc = new Views.AdCalculator();
    ```

    to

    ```csharp
    Views.AdCalculator AdCalc = (AdCalculator)Application.OpenForms["AdCalculator"];
    ```

    except that Application.OpenForms doesn't register. I know there are a lot of differences in the way C# code-behind is laid out for Silverlight applications, so I didn't know if there was an equivalent to Application.OpenForms that anyone knew about that would help solve my issue, or if there was any other way to go about getting my task done. Thank you very much!


  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users.

    For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable.

    Our table to collect the responses looks something like this:

    ```sql
    CREATE TABLE [dbo].[results](
        [id] [bigint] IDENTITY(1,1) NOT NULL,
        [userid] [int] NULL,
        [variable] [varchar](8) NULL,
        [value] [tinyint] NULL,
        [submitted] [smalldatetime] NULL)
    ```

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

    ```sql
    SELECT t.id, t.variable, t.value
    FROM results t WITH (NOLOCK)
    WHERE t.userid = '2111846'
      AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
      AND t.id IN (SELECT MAX(id) AS id
                   FROM results WITH (NOLOCK)
                   WHERE userid = '2111846'
                     AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                   GROUP BY variable)
    ```

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846.

    We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides.

    So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
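    One rewrite worth benchmarking - an untested suggestion, and it assumes SQL Server 2005 or later given the T-SQL shown - is to replace the MAX(id) subquery with a ranking window function, so each user's latest response per variable comes back in a single pass. Sketched here as it might be issued from Java/JDBC (the connection URL is illustrative):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public final class LatestResponses {
        // Rank each of the user's responses per variable, newest first, and
        // keep only rank 1 - one scan instead of a MAX(id) IN-subquery.
        static final String SQL =
            "SELECT id, variable, value FROM ( "
          + "SELECT id, variable, value, "
          + "ROW_NUMBER() OVER (PARTITION BY variable ORDER BY id DESC) AS rn "
          + "FROM results WHERE userid = ? AND variable IN (?, ?, ?) "
          + ") AS latest WHERE rn = 1";

        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=app"); // illustrative URL
                 PreparedStatement ps = c.prepareStatement(SQL)) {
                ps.setInt(1, 2111846);
                ps.setString(2, "internat");
                ps.setString(3, "veteran");
                ps.setString(4, "athlete");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("variable") + " = " + rs.getInt("value"));
                    }
                }
            }
        }
    }
    ```

    Whether this beats the IN (SELECT MAX(id) ...) form depends on the indexes; a covering index on (userid, variable, id) is the usual companion to either shape.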


  • Are comonads a good fit for modeling the Wumpus world?

    - by Tim Stewart
    I'm trying to find some practical applications of a comonad, and I thought I'd try to see if I could represent the classical Wumpus world as a comonad. I'd like to use this code to allow the Wumpus to move left and right through the world, clean up dirty tiles, and avoid pits. It seems that the only comonad function that's useful is extract (to get the current tile), and that moving around and cleaning tiles would not be able to make use of extend or duplicate.

    I'm not sure comonads are a good fit, but I've seen a talk (Dominic Orchard: A Notation for Comonads) where comonads were used to model a cursor in a two-dimensional matrix. If a comonad is a good way of representing the Wumpus world, could you please show where my code is wrong? If it's wrong, could you please suggest a simple application of comonads?

    ```haskell
    module Wumpus where

    -- Incomplete model of a world inhabited by a Wumpus who likes a nice
    -- tidy world but does not like falling in pits.

    import Control.Comonad

    -- The Wumpus world is made up of tiles that can be in one of three
    -- states.
    data Tile = Clean | Dirty | Pit
        deriving (Show, Eq)

    -- The Wumpus world is a one-dimensional array partitioned into three
    -- values: the tiles to the left of the Wumpus, the tile occupied by
    -- the Wumpus, and the tiles to the right of the Wumpus.
    data World a = World [a] a [a]
        deriving (Show, Eq)

    -- Applies a function to every tile in the world
    instance Functor World where
        fmap f (World as b cs) = World (fmap f as) (f b) (fmap f cs)

    -- The Wumpus world is a Comonad
    instance Comonad World where
        -- get the part of the world the Wumpus currently occupies
        extract (World _ b _) = b

        -- not sure what this means in the Wumpus world. This type checks
        -- but does not make sense to me.
        extend f w@(World as b cs) = World (map world as) (f w) (map world cs)
            where world v = f (World [] v [])

    -- returns a world in which the Wumpus has either 1) moved one tile to
    -- the left or 2) stayed in the same place if the Wumpus could not move
    -- to the left.
    moveLeft :: World a -> World a
    moveLeft w@(World [] _ _) = w
    moveLeft (World as b cs) = World (init as) (last as) (b:cs)

    -- returns a world in which the Wumpus has either 1) moved one tile to
    -- the right or 2) stayed in the same place if the Wumpus could not move
    -- to the right.
    moveRight :: World a -> World a
    moveRight w@(World _ _ []) = w
    moveRight (World as b cs) = World (as ++ [b]) (head cs) (tail cs)

    initWorld = World [Dirty, Clean, Dirty] Dirty [Clean, Dirty, Pit]

    -- cleans the current tile
    cleanTile :: Tile -> Tile
    cleanTile Dirty = Clean
    cleanTile t = t
    ```

    Thanks!


  • Routing error when trying to use same view for update and create flows (Rails 3)

    - by Jamis Charles
    My overall use case: I have a Listing model that has many images. The listing detail page lists all the fields, which can be updated inline (through Ajax). I want to be able to use the same view for both "update listing" and "create new listing". My listings controller looks as follows:

    ```ruby
    def detail
      @listing = Listing.find(params[:id])
      @image = Image.new # should this link somewhere else?
      respond_to do |format|
        format.html # show.html.erb
        format.xml { render :xml => @listing }
      end
    end

    def create
      # create a new listing and save it immediately. Assign it to guest, with a status of "draft"
      @listing = Listing.new(:price_id => 1) # Default price id
      # save it to db
      # TODO add validation that it has to have a price ID, on record creation. So the view doesn't break.
      @listing.save
      @image = Image.new
      # redirect_to "/listings/detail/@listing.id" # this didn't work
      respond_to do |format|
        format.html # show.html.erb
        format.xml { render :xml => @listing }
      end
    end
    ```

    THE PROBLEM: I'm using a partial that shows the same form for the create view and the detail view. This works perfectly except for one thing. When I pull up http://0.0.0.0:3000/listings/detail/7, it works perfectly. When I pull up http://0.0.0.0:3000/listings/new, I get the following error:

    ```
    Showing /Applications/MAMP/htdocs/rails_testing/feedbackd/app/views/listings/_edit_form.html.erb where line #100 raised:
    No route matches {:action=>"show", :controller=>"images"}

    Extracted source (around line #100):
    97:  <!-- Form for new images -->
    98:  <div class="span-20 append-bottom">
    99:  <!-- <%# form_for :image, @image, :url => image_path, :html => { :multipart => true } do |f| %> -->
    100: <%= form_for @image, :url => image_path, :html => { :multipart => true } do |f| %>
    101:   <%= f.text_field :description %><br />
    102:   <%= f.file_field :photo %>
    103:   <%= submit_tag "Upload" %>
    ```

    What I think the issue is: when I upload a new image (I'm using Paperclip), it requires the listing_id to create the image record. Since the listing_id isn't passed in with listings/new, it can't find the listing_id. How can I pass in the id? Via a redirect? What's the best way to solve this? Thank you.


  • NVIDIA graphics driver in Ubuntu 12.04

    - by user924501
    So my overall goal is that I want to be able to code CUDA-enabled applications. However, after many days of searching, using installation walkthroughs, and reinstalling countless times after driver failure... I'm now here as a last resort. I cannot get Ubuntu 12.04 LTS to install the NVIDIA 295.59 driver for my GeForce GT 540M graphics card.

    My main system specs are as follows (I believe having the Intel processor may be the problem):

    - Dell laptop XPS L502X
    - Intel® Core™ i7-2620M CPU @ 2.70GHz × 4
    - Intel integrated graphics
    - 64 bit
    - NVIDIA GeForce GT 540M
    - Ubuntu 12.04 LTS

    All other specs are irrelevant, unless I forgot something?

    Methods tried:

    1. aptitude install nvidia-current (all packages). Results: Nothing really happened. Nothing in the Additional Drivers menu appeared, nor was the NVIDIA X Server Settings application allowing access, because it thought there was no NVIDIA X server installed.

    2. Downloaded the driver from nvidia.com. Set nomodeset in the grub boot menu through /boot/grub/grub.cfg. Went to the console and turned off lightdm. Installed the driver, but it said the pre-install failed? (Does that mean anything?) Started up lightdm again. Results: NVIDIA X Server Settings still didn't notice it. I even tried running nvidia-xconfig multiple times. I also went into the config file to make sure the driver setting said "nvidia".

    3. aptitude install nvidia-173 (all packages). Results: Couldn't find the xorg-video-abi-10 virtual package. It was nowhere to be found, and the Ubuntu forums everywhere had no answers. Lots of people were having this problem.

    This is easily done in Windows - simply download the driver and debug in Visual Studio with no problems at all. I'd like clear step-by-step instructions on how I should go about this. I'm relatively new to Linux, but I can find my way around pretty well, so you aren't talking to a straight-up beginner. Also, if you think another thread may have the answer, please post it, because I was having a hard time looking for my specific type of problem.

    TL;DR: I want to have access to my GPU so I can code with CUDA while in Ubuntu 12.04 on my 64 bit laptop that also has Intel integrated graphics on the processor.

    Solution:

    ```
    sudo apt-add-repository ppa:ubuntu-x-swat/x-updates && sudo apt-get update && sudo apt-get upgrade && sudo apt-get install nvidia-current
    ```


  • Xcode app crash when connecting to OData services [on hold]

    - by user3685677
    Can someone help me resolve the following issue? When trying to connect from an iPad app to an SAP ECC system through OData channel services via SUP, it allows me to log in the first time, and the data can be retrieved successfully from the SAP system. But when I log out and try logging in again with the same session, the application crashes. Below is the crash report for your reference. I am using the SDM parser to connect to the SAP system.

    ```objc
    SDMODataServiceDocumentParser *sdmDocParser = [[SDMODataServiceDocumentParser alloc] init];
    [sdmDocParser parse:aServiceDocument];
    m_serviceDocument = sdmDocParser.serviceDocument;

    // Load the object with metadata xml:
    SDMODataMetaDocumentParser *sdmMetadataParser = [[SDMODataMetaDocumentParser alloc] initWithServiceDocument:m_serviceDocument];
    [sdmMetadataParser parse:aMetadata];
    ```

    After initiating the service, setting the URL:

    ```objc
    [service setServiceDocumentUrl:m_serviceDocumentURL];
    ```

    Using SDMConnectivityHelper to connect to the URL:

    ```objc
    id<SDMRequesting> serviceDocumentRequest2 = [connectivityHelper executeBasicSyncRequestWithQuery3:[[ODataQuery alloc] initWithURL:[NSURL URLWithString:encodedStrUrl]]];

    - (id <SDMRequesting>)executeBasicSyncRequestWithQuery3:(ODataQuery *)aQuery {
        id<SDMRequesting> request = [self createRequestWithQuery:aQuery];
        [request setTimeOutSeconds:TIMEOUT_SEC];
        [request setRequestMethod:@"GET"];
        [request addRequestHeader:@"Content-Type" value:@"application/xml"];
        [request startSynchronous]; // [App getting CRASH on this line]
        return request;
    }

    - (id <SDMRequesting>)createRequestWithQuery:(ODataQuery *)aQuery {
        if (isSUPMode) {
            [SDMRequestBuilder setRequestType:SUPRequestType];
        } else {
            [SDMRequestBuilder setRequestType:HTTPRequestType];
        }
        id <SDMRequesting> request = [SDMRequestBuilder requestWithURL:aQuery.URL];
        request.username = self.username;
        request.password = self.password;
        return request;
    }
    ```

    Crash report:

    ```
    Incident Identifier: 347511BA-5F7F-45D4-8662-D5DCD2F88EA7
    CrashReporter Key:   9a4d38cf19b1a94476eb6b2170d4f56678d6ca60
    Hardware Model:      iPad3,4
    Path:                /var/mobile/Applications/F38AD64F-03F8-4A21-
    Exception Type:      EXC_BAD_ACCESS (SIGSEGV)
    Exception Subtype:   KERN_INVALID_ADDRESS at 0x00000000
    Triggered by Thread: 0

    Thread 0 Crashed:
    0  libsystem_platform.dylib  0x393a94c0 _platform_memmove$VARIANT$Swift + 160
    1  Eby Sales Order           0x0015a2c8 0xb7000 + 668360
    2  Eby Sales Order           0x0015a8b8 0xb7000 + 669880
    3  Eby Sales Order           0x003331ee 0xb7000 + 2605550
    4  Eby Sales Order           0x0031856e 0xb7000 + 2495854
    5  Eby Sales Order           0x00338454 0xb7000 + 2626644
    6  Eby Sales Order           0x000e6ad8 0xb7000 + 195288
    7  Eby Sales Order           0x000e99a0 0xb7000 + 207264
    8  Eby Sales Order           0x000ea442 0xb7000 + 209986
    9  Eby Sales Order           0x000eb0d6 0xb7000 + 213206
    10 Eby Sales Order           0x000c13d0 0xb7000 + 41936
    11 Foundation                0x2ec93112 __NSFireDelayedPerform + 410
    12 CoreFoundation            0x2e27ef4c __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 12
    13 CoreFoundation            0x2e27eb66 __CFRunLoopDoTimer + 790
    14 CoreFoundation            0x2e27ceee __CFRunLoopRun + 1214
    15 CoreFoundation            0x2e1e7764 CFRunLoopRunSpecific + 520
    16 CoreFoundation            0x2e1e7546 CFRunLoopRunInMode + 102
    17 GraphicsServices          0x331216ce GSEventRunModal + 134
    18 UIKit                     0x30b4688c UIApplicationMain + 1132
    19 Eby Sales Order           0x000bd8da 0xb7000 + 26842
    20 Eby Sales Order           0x000bd89c 0xb7000 + 26780
    ```


  • SQL version control methodology

    - by Tom H.
    There are several questions on SO about version control for SQL and lots of resources on the web, but I can't find something that quite covers what I'm trying to do. First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there, and I'm familiar with tools like Red Gate's SQL Compare, etc., and I know how to write an application to check things in and out of my source control system automatically. If there is a tool which would be particularly helpful in providing a whole new methodology, or which has a useful and uncommon functionality, then great, but for the tasks mentioned above I'm already set.

    The requirements that I'm trying to meet are:

    - The database schema and look-up table data are versioned.
    - DML scripts for data fixes to larger tables are versioned.
    - A server can be promoted from version N to version N + X, where X may not always be 1.
    - Code isn't duplicated within the version control system - for example, if I add a column to a table, I don't want to have to make sure that the change is in both a create script and an alter script.
    - The system needs to support multiple clients who are at various versions of the application (we are trying to get them all to within 1 or 2 releases, but we're not there yet).

    Some organizations keep incremental change scripts in their version control, and to get from version N to N + 3 you would have to run scripts for N to N+1, then N+1 to N+2, then N+2 to N+3. Some of these scripts can be repetitive (for example, a column is added but then later it is altered to change the data type). We're trying to avoid that repetitiveness, since some of the client DBs can be very large, so these changes might take longer than necessary.

    Some organizations will simply keep a full database build script at each version level, then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts can be a problem. Imagine a scenario where I add a column, use a DML script to fill said column, and then in a later version that column's name is changed.

    Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated though. If the moderators think that this would be more appropriate as a community wiki, please let me know. Thanks!


  • A Security (encryption) Dilemma

    - by TravisPUK
    I have an internal WPF client application that accesses a database. The application is a central resource for a support team and as such includes remote access/login information for clients. At the moment this database is not available via a web interface etc., but one day it is likely to be. The remote access information includes the usernames and passwords for the clients' networks so that our clients' software applications can be remotely supported by us.

    I need to store the usernames and passwords in the database and provide the support consultants access to them so that they can log in to the client's system and then provide support. Hope this is making sense.

    So the dilemma is that I don't want to store the usernames and passwords in cleartext in the database, to ensure that if the DB were ever compromised, I am not then providing access to our clients' networks to whomever gets the database. I have looked at two-way encryption of the passwords, but as they say, two-way is not much different from cleartext: if you can decrypt it, so can an attacker... eventually. The problem here is that I have set up a method to use a salt and a passcode that are stored in the application; I have also used a salt that is stored in the DB; but all have their weaknesses, i.e. if the app were reflected it would expose the salts, etc.

    How can I secure the usernames and passwords in my database, and yet still provide the ability for my support consultants to view the information in the application so they can use it to log in? This is obviously different from storing users' passwords, as those are one-way because I don't need to know what they are. But I do need to know what the clients' remote access passwords are, as we need to enter them at the time of remoting in to the clients. Anybody have some theories on what would be the best approach here?

    Update: The function I am trying to build is for our CRM application that will store the remote access details for the client. The CRM system provides call/issue tracking functionality, and during the course of investigating the issue, the support consultant will need to remote in. They will then view the client's remote access details and make the connection.
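    Since these credentials must come back out in plaintext, hashing is off the table; the usual compromise is authenticated encryption with the key kept outside the database - a configuration store, the OS key store (on Windows/WPF, DPAPI via ProtectedData plays this role), or an HSM - so a stolen DB dump alone recovers nothing. A minimal sketch of the encrypt side, written in Java for illustration rather than the application's own C#:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public final class CredentialVault {
        // Reversible but authenticated: AES-GCM with a fresh random nonce per
        // record. The key must live OUTSIDE the database (config service, OS
        // key store, HSM), which is what makes a DB-only breach survivable.
        public static String encrypt(byte[] key, String plaintext) throws Exception {
            byte[] nonce = new byte[12];
            new SecureRandom().nextBytes(nonce);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                    new GCMParameterSpec(128, nonce));
            byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(nonce) + ":"
                    + Base64.getEncoder().encodeToString(ciphertext);
        }
    }
    ```

    Auditing every decrypt (who viewed which client's credentials, and when) is the complementary control, since the consultants themselves must ultimately see the plaintext.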


  • Run CGI in IIS 7 to work with GET without Requiring POST Request

    - by Mohamed Meligy
    I'm trying to migrate an old CGI application from an existing Windows 2003 server (IIS 6.0), where it works just fine, to a new Windows 2008 server with IIS 7.0, where we're getting the following problem. After setting up the module handler and everything, I find that I can only access the CGI application (rdbweb.exe) if I'm calling it via a POST request (a form submit from another page). If I just try to type in the URL of the file (issuing a GET request), I get the following error:

    ```
    HTTP Error 502.2 - Bad Gateway
    The specified CGI application misbehaved by not returning a complete set of HTTP headers.
    The headers it did return are "Exception EInOutError in module rdbweb.exe at 00039B44. I/O error 6. ".
    ```

    This is a very old application for one of our clients. When we tried to call the vendor, they said we would need to pay an annual support fee of ~ $3000 in order to even start the conversation. Of course I'm trying to avoid that! Note that:

    - If we create a normal HTML form that submits to rdbweb.exe, the CGI works normally. We can't use this as a workaround, though, because some pages in the application link to rdbweb.exe with a normal link, not a form submit.
    - If we run rdbweb.exe from a console (Command Prompt) window, not IIS, we get the normal HTML we'd typically expect - no problem.

    We have tried the following:

    - Ensuring the CGI module mapped to rdbweb.exe in IIS has all permissions (read, write, execute) enabled and that all verbs are allowed, not just specific ones; we also tried allowing GET and POST explicitly.
    - Ensuring the application pool has "Enable 32-bit applications" set to true.
    - Ensuring the website runs with an account that has full permissions on the rdbweb.exe file and the whole website (although we know "read" and "execute" should be enough).
    - Ensuring the machine-wide IIS setting for "ISAPI and CGI Restrictions" has the full path to rdbweb.exe allowed.
    - Making sure we have the latest Windows updates (for IIS 6 we found Knowledge Base articles describing bugs that require hotfixes, but nothing similar was found for IIS 7).
    - Changing the module from CGI to FastCGI - not working either.

    Now the only remaining possibility we have investigated is the following Microsoft Knowledge Base article: http://support.microsoft.com/kb/145661, which is about: "CGI Error: The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are:". The article suggests the following solution - modify the source code for the CGI application's header output. The following is an example of a correct header:

    ```
    print "HTTP/1.0 200 OK\n";
    print "Content-Type: text/html\n\n\n";
    ```

    Unfortunately we do not have the source to try this out, and I'm not sure anyway whether this is the issue we're having. Can you help me with this problem? Is there a way to make the application work without requiring a POST request? Note that on the old IIS 6 server the application works just fine, and I could not find any special IIS configuration there that I might want to replicate on IIS 7.

