Search Results

Search found 3471 results on 139 pages for 'automated builds'.

Page 113/139 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • Why does the Java Collections Framework offer two different ways to sort?

    - by dvanaria
    If I have a list of elements I would like to sort, Java offers two ways to go about this. For example, let's say I have a list of Movie objects and I'd like to sort them by title. One way I could do this is by calling the one-argument version of the static java.util.Collections.sort( ) method with my movie list as the single argument, i.e. Collections.sort(myMovieList). In order for this to work, the Movie class would have to be declared to implement the java.lang.Comparable interface, and the required method compareTo( ) would have to be implemented inside this class.

    Another way to sort is by calling the two-argument version of the static java.util.Collections.sort( ) method with the movie list and a java.util.Comparator object as its arguments: Collections.sort(myMovieList, titleComparator). In this case, the Movie class wouldn't implement the Comparable interface. Instead, inside the main class that builds and maintains the movie list itself, I would create an inner class that implements the java.util.Comparator interface, and implement the one required method compare( ). Then I'd create an instance of this class and call the two-argument version of sort( ). The benefit of this second method is that you can create an unlimited number of these inner-class Comparators, so you can sort a list of objects in different ways. In the example above, you could have another Comparator to sort by the year a movie was made, for example.

    My question is, why bother to learn both ways to sort in Java, when the two-argument version of Collections.sort( ) does everything the one-argument version does, but with the added benefit of being able to sort the list's elements based on several different criteria? It would be one less thing to keep in mind while coding. You'd have one basic mechanism for sorting lists in Java to know.
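    As a rough illustration of the two mechanisms described above (the Movie fields and the sample titles here are invented for the sketch, not taken from the question):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        // Hypothetical Movie class: natural ordering by title via Comparable,
        // plus a separate Comparator for ordering by year.
        class Movie implements Comparable<Movie> {
            final String title;
            final int year;

            Movie(String title, int year) {
                this.title = title;
                this.year = year;
            }

            public int compareTo(Movie other) {
                return title.compareTo(other.title);        // the one "natural" ordering
            }

            static final Comparator<Movie> BY_YEAR = new Comparator<Movie>() {
                public int compare(Movie a, Movie b) {
                    return Integer.compare(a.year, b.year); // one of many alternative orderings
                }
            };
        }

        class SortDemo {
            public static void main(String[] args) {
                List<Movie> myMovieList = new ArrayList<Movie>();
                myMovieList.add(new Movie("Solaris", 1972));
                myMovieList.add(new Movie("Alien", 1979));

                Collections.sort(myMovieList);                // one-argument: uses Movie.compareTo (by title)
                Collections.sort(myMovieList, Movie.BY_YEAR); // two-argument: uses the Comparator (by year)
            }
        }

    The one-argument form gives a class a single default ordering that travels with it; the Comparator form keeps the orderings outside the class, which is why both mechanisms exist side by side.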

    Read the article

  • Calling method on category included from iPhone static library causes NSInvalidArgumentException

    - by Corey Floyd
    I have created a static library to house some of my code, like categories. I have a category for UIView in "UIView-Extensions.h" named Extensions. In this category I have a method called:

        - (void)fadeOutWithDelay:(CGFloat)delay duration:(CGFloat)duration;

    Calling this method works fine on the simulator in the Debug configuration. However, if I try to run the app on the device I get an NSInvalidArgumentException:

        [UIView fadeOutWithDelay:duration:]: unrecognized selector sent to instance 0x1912b0
        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason:
        '*** -[UIView fadeOutWithDelay:duration:]: unrecognized selector sent to instance 0x1912b0'

    It seems that for some reason UIView-Extensions.h is not being included in the device builds. What I have checked/tried:

    - I did try to include another category, for NSString, and had the same issue. Other files, like whole classes and functions, work fine. It is an issue that only happens with categories.
    - I did a Clean All Targets, which did not fix the problem.
    - I checked the static library project; the categories are included in the target's "copy headers" and "compile sources" groups.
    - The static library is included in the main project's "link binary with library" group.
    - Another project I have added the static library to works just fine.
    - I deleted and re-added the static library with no luck.
    - The -ObjC linker flag is set.

    Any ideas?

    nm output:

        libFJSCodeDebug.a(UIView-Extensions.o):
        000004d4 t -[UIView(Extensions) changeColor:withDelay:duration:]
        00000000 t -[UIView(Extensions) fadeInWithDelay:duration:]
        000000dc t -[UIView(Extensions) fadeOutWithDelay:duration:]
        00000abc t -[UIView(Extensions) firstResponder]
        000006b0 t -[UIView(Extensions) hasSubviewOfClass:]
        00000870 t -[UIView(Extensions) hasSubviewOfClass:thatContainsPoint:]
        000005cc t -[UIView(Extensions) rotate:]
        000002d8 t -[UIView(Extensions) shrinkToSize:withDelay:duration:]
        000001b8 t -[UIView(Extensions) translateToFrame:delay:duration:]
                 U _CGAffineTransformRotate
        000004a8 t _CGPointMake
                 U _CGRectContainsPoint
                 U _NSLog
                 U _OBJC_CLASS_$_UIColor
                 U _OBJC_CLASS_$_UIView
                 U ___CFConstantStringClassReference
                 U ___addsf3vfp
                 U ___divdf3vfp
                 U ___divsf3vfp
                 U ___extendsfdf2vfp
                 U ___muldf3vfp
                 U ___truncdfsf2vfp
                 U _objc_enumerationMutation
                 U _objc_msgSend
                 U _objc_msgSend_stret
                 U dyld_stub_binding_helper

    Read the article

  • How can I convert a bunch of files from ISO-8859-1 to UTF-8 using Perl?

    - by tau
    I have several documents I need to convert from ISO-8859-1 to UTF-8 (without the BOM of course). This is the issue though. I have so many of these documents (it is actually a mix of documents, some UTF-8 and some ISO-8859-1) that I need an automated way of converting them. Unfortunately I only have ActivePerl installed and don't know much about encoding in that language. I may be able to install PHP, but I am not sure as this is not my personal computer. Just so you know, I use Scite or Notepad++, but both do not convert correctly. For example, if I open a document in Czech that contains the character "ž" and go to the "Convert to UTF-8" option in Notepad++, it incorrectly converts it to an unreadable character. There is a way I CAN convert them, but it is tedious. If I open the document with the special characters and copy the document to Windows clipboard, then paste it into a UTF-8 document and save it, it is okay. This is too tedious (opening every file and copying/pasting into a new document) for the amount of documents I have. Any ideas? Thanks!!!
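    Since any automated route would do here, a minimal sketch in Java (rather than Perl or PHP) of the batch conversion follows. It assumes a simple heuristic: if a file's bytes decode strictly as UTF-8 it is left alone, otherwise it is treated as ISO-8859-1 and rewritten as UTF-8 without a BOM. File names are passed on the command line.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.charset.CharacterCodingException;
        import java.nio.charset.CodingErrorAction;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class ToUtf8 {
            public static void main(String[] args) throws IOException {
                for (String name : args) {
                    Path file = Paths.get(name);
                    byte[] raw = Files.readAllBytes(file);
                    if (isValidUtf8(raw)) {
                        continue;                                                  // already UTF-8, leave it alone
                    }
                    String text = new String(raw, StandardCharsets.ISO_8859_1);
                    Files.write(file, text.getBytes(StandardCharsets.UTF_8));      // written without a BOM
                }
            }

            // Strict decode: any malformed byte sequence means "not UTF-8".
            static boolean isValidUtf8(byte[] raw) {
                try {
                    StandardCharsets.UTF_8.newDecoder()
                            .onMalformedInput(CodingErrorAction.REPORT)
                            .onUnmappableCharacter(CodingErrorAction.REPORT)
                            .decode(ByteBuffer.wrap(raw));
                    return true;
                } catch (CharacterCodingException e) {
                    return false;
                }
            }
        }

    The blind spot of this heuristic is an ISO-8859-1 file whose bytes also happen to be valid UTF-8; plain ASCII files fall into that category, but they are byte-identical in both encodings, so skipping them is harmless.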

    Read the article

  • Allowing Google to bypass CAPTCHA verification - sensible or not?

    - by edanfalls
    My web site has a database lookup; filling out a CAPTCHA gives you 5 minutes of lookup time. There is also some custom code to detect any automated scripts. I do this as I don't want someone data mining my site. The problem is that Google does not see the lookup results when it crawls my site. If someone is searching for a string that is present in the result of a lookup, I would like them to find this page by Googling it. The obvious solution to me is to use the PHP variable $_SERVER['HTTP_USER_AGENT'] to bypass the CAPTCHA and custom security code for the Google bots. My question is whether this is sensible or not. People could then use Google's cache to view the lookup results without having to fill out the CAPTCHA, but would Google's own script detection methods prevent them from data mining these pages? Or would there be some way for people to make $_SERVER['HTTP_USER_AGENT'] appear as Google to bypass the security measures? Thanks in advance.
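    On the spoofing concern: the User-Agent header alone is trivial to fake, so a common companion check is to verify a claimed Googlebot with a reverse DNS lookup followed by a confirming forward lookup. A minimal sketch of that check, written in Java here rather than PHP, under the assumption that genuine Googlebot addresses resolve to hosts under googlebot.com or google.com:

        import java.net.InetAddress;
        import java.net.UnknownHostException;

        public class GooglebotCheck {
            // Reverse-resolve the client IP, check the hostname suffix, then
            // forward-resolve that hostname and make sure it maps back to the IP.
            static boolean isGooglebot(String clientIp) {
                try {
                    InetAddress addr = InetAddress.getByName(clientIp);
                    String host = addr.getCanonicalHostName();              // reverse lookup
                    if (!host.endsWith(".googlebot.com") && !host.endsWith(".google.com")) {
                        return false;
                    }
                    for (InetAddress forward : InetAddress.getAllByName(host)) {
                        if (forward.getHostAddress().equals(clientIp)) {    // forward lookup confirms
                            return true;
                        }
                    }
                } catch (UnknownHostException e) {
                    // unresolvable address: treat as not Googlebot
                }
                return false;
            }
        }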

    Read the article

  • OpenSSL certificate lacks key identifiers

    - by 0xDEAD BEEF
    How do I add these sections to a certificate (I am manually building it using C++)?

        X509v3 Subject Key Identifier:
            A4:F7:38:55:8D:35:1E:1D:4D:66:55:54:A5:BE:80:25:4A:F0:68:D0
        X509v3 Authority Key Identifier:
            keyid:A4:F7:38:55:8D:35:1E:1D:4D:66:55:54:A5:BE:80:25:4A:F0:68:D0

    Currently my code builds the certificate well, except for those keys:

        static X509 * GenerateSigningCertificate(EVP_PKEY* pKey)
        {
            X509 *x;
            x = X509_new();                                            // create x509 certificate
            X509_set_version(x, NID_X509);
            ASN1_INTEGER_set(X509_get_serialNumber(x), 0x00000000);    // set serial number
            X509_gmtime_adj(X509_get_notBefore(x), 0);
            X509_gmtime_adj(X509_get_notAfter(x), (long)60*60*24*365); // 1 year
            X509_set_pubkey(x, pKey);                                  // set pub key from just generated rsa

            X509_NAME *name;
            name = X509_get_subject_name(x);
            NAME_StringField(name, "C", "LV");
            NAME_StringField(name, "CN", "Point");                     // common name
            NAME_StringField(name, "O", "Point");                      // organization
            X509_set_subject_name(x, name);                            // save name fields to certificate
            X509_set_issuer_name(x, name);                             // save name fields to certificate

            X509_EXTENSION *ex;
            ex = X509V3_EXT_conf_nid(NULL, NULL, NID_netscape_cert_type, "server");
            X509_add_ext(x, ex, -1);
            X509_EXTENSION_free(ex);

            ex = X509V3_EXT_conf_nid(NULL, NULL, NID_netscape_comment, "example comment extension");
            X509_add_ext(x, ex, -1);
            X509_EXTENSION_free(ex);

            ex = X509V3_EXT_conf_nid(NULL, NULL, NID_netscape_ssl_server_name, "www.lol.lv");
            X509_add_ext(x, ex, -1);
            X509_EXTENSION_free(ex);

            ex = X509V3_EXT_conf_nid(NULL, NULL, NID_basic_constraints, "critical,CA:TRUE");
            X509_add_ext(x, ex, -1);
            X509_EXTENSION_free(ex);

            X509_sign(x, pKey, EVP_sha1());                            // sign x509 certificate
            return x;
        }

    Read the article

  • Winsock tcp/ip Socket listening but connection refused, race condition?

    - by Wayne
    Hello folks. This involves two automated unit tests which each start up a TCP/IP server that creates a non-blocking socket, then bind()s and listen()s in a loop on select() for a client that connects and downloads some data. The catch is that they work perfectly when run separately, but when run as a test suite, the second test's client fails to connect with WSAECONNREFUSED... UNLESS there is a Thread.Sleep() of several seconds between them??!!!

    Interestingly, there is a retry loop every 1 second for connecting after any failure, so the second test loops for a while until it times out after 10 minutes. During that time, netstat -na shows the correct port number is in the LISTEN state for the server socket. So if it is in the LISTEN state, why won't it accept the connection? In the code, there are log messages that show the select NEVER even gets a socket ready to read (which, for a listening socket, means ready to accept a connection).

    Obviously the problem must be related to some race condition between finishing one test (which means close() and shutdown() on each end of the socket) and the start-up of the next. This wouldn't be so bad if the retry logic allowed it to connect eventually after a couple of seconds. However, it seems to get "gummed up" and won't even retry. And yet for some strange reason the listening socket SAYS it's in the LISTEN state even though it keeps refusing connections. So that means it's the Windoze O/S which is actually catching the SYN packet and returning a RST packet (which means "Connection Refused").

    The only other time I ever saw this error was when the code had a problem that caused hundreds of sockets to get stuck in the TIME_WAIT state. But that's not the case here; netstat shows only about a dozen sockets, with only 1 or 2 in TIME_WAIT at any given moment. Please help.

    Read the article

  • Gradle fails injecting dependencies into subprojects using fileTree

    - by Matthias
    Maybe I'm missing something about the way Gradle works. What I have here is a parent project which only contains configuration, i.e. there won't be any artifact built when building it; it merely manages and builds all its subprojects. Now the subprojects share some dependency configuration, so I figured what I would do in the root project's build.gradle is:

        subprojects {
            dependencies {
                compile fileTree(dir: 'lib', includes: ['*.jar'])
            }
        }

    That, however, does not work; it fails with a rather obscure error message:

        A problem occurred evaluating root project 'qype-android'
        Cause: No signature of method: org.gradle.api.internal.artifacts.dsl.dependencies.DefaultDependencyHandler.compile()
        is applicable for argument types: (org.gradle.api.internal.file.DefaultConfigurableFileTree) values: [file set 'lib']
        Possible solutions: module(java.lang.Object)

    After some trial and error, I could "fix" this issue by applying the 'java' plugin to the parent project. How come? I don't see anywhere in the Gradle docs that a fileTree dependency requires the Java plugin. Even so, why would I need it on the project that is injecting the configuration, as opposed to on the project that is being configured (note that the subprojects all apply the Java plugin themselves)? Does this mean that if I have N different subprojects that are all of varying natures, and apply different plugins, the parent project must always apply the set of all plugins being used somewhere to itself, too?

    Read the article

  • Is a "factory" method the right pattern?

    - by jdt141
    Hey all. So I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher-level container class. The problem I'm dealing with at the moment is that the higher-level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...

        /*
         * class1 and class2 derive from the same superclass
         */
        class Container {
        public:
            boost::shared_ptr<ComposedClass1> class1;
            boost::shared_ptr<ComposedClass2> class2;
        private:
            ...
        };

        /*
         * Constructor - builds the objects that we need in this container.
         */
        Container::Container(some params)
        {
            class1.reset(new ComposedClass1(...));
            class2.reset(new ComposedClass2(...));
        }

    What I really need is to make this container class more re-usable. By hard-coding the member objects and instantiating them, it basically isn't, and can only be used once. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?). Other ways to get around this problem? Seems like someone should have solved it before... Thanks!
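    For what the factory route might look like, here is a minimal sketch, written in Java rather than C++ for brevity; Part, PartFactory and the concrete classes are hypothetical stand-ins for the common superclass and the ComposedClass* types. The container asks the supplied factories for its members instead of hard-coding them:

        import java.util.ArrayList;
        import java.util.List;

        interface Part { }                       // stand-in for the common superclass

        interface PartFactory {
            Part create();                       // each factory knows one concrete type to build
        }

        class ConcretePartA implements Part { }
        class ConcretePartB implements Part { }

        class Container {
            final List<Part> parts = new ArrayList<Part>();

            // The container no longer names its members' concrete types;
            // callers decide what gets built by choosing the factories.
            Container(List<PartFactory> factories) {
                for (PartFactory f : factories) {
                    parts.add(f.create());
                }
            }
        }

    The same shape carries over to C++ with an abstract factory base class (or function objects) in place of the PartFactory interface.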

    Read the article

  • Best way to test a Delphi application

    - by Osama ALASSIRY
    I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI. My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually) and used to work. I have set up the application to use command-line switches to open a specific form that can be tested, and I can create the set of values and clicks that need to be done. But I have a few questions before I do anything drastic (and before purchasing anything):

    - Is it worth it? Would this be a good way to test?
    - The result of a test ends up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
    - I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test db, other than drop user cascade, create user, ..., impdp?
    - Is there a way in TestComplete to specify command-line parameters for an exe?

    Does anybody have any similar experiences?

    Read the article

  • Side by side madness - running binaries on different computer (with a twist)

    - by sbk
    Here's my configuration:

    - Computer A: Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
    - Computer B: Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

    My project has some external dependencies (prebuilt DLLs, either built on A or downloaded from the Internet), a couple of DLLs built from sources, and one executable. I am mostly developing on A and all is fine there. At some point I try to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)...". The event log contains two of these:

        Dependent Assembly Microsoft.VC80.CRT could not be found and
        Last Error was The referenced assembly is not installed on your system.

    plus the slightly more amusing:

        Generate Activation Context failed for some.dll.
        Reference error message: The operation completed successfully.

    At this point I'm trying my Google-Fu, but in vain: virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the computer they are built on. The next step was to try Dependency Walker, and it baffled me even more: my DLLs built from sources on the same box cannot find MSVCR80.DLL and MSVCP80.DLL; however, the executable seems to be alright in respect to those two DLLs, i.e. when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot. That's where I am completely out of ideas what to do, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.

    Read the article

  • Idiomatic way to build a custom structure from XML zipper in Clojure

    - by Checkers
    Say I'm parsing an RSS feed and want to extract a subset of information from it.

        (def feed (-> "http://..." clojure.xml/parse clojure.zip/xml-zip))

    I can get links and titles separately:

        (xml-> feed :channel :item :link text)
        (xml-> feed :channel :item :title text)

    However I can't figure out the way to extract them at the same time without traversing the zipper more than once, e.g.

        (let [feed (-> "http://..." clojure.xml/parse clojure.zip/xml-zip)]
          (zipmap
            (xml-> feed :channel :item :link text)
            (xml-> feed :channel :item :title text)))

    ...or a variation thereof, involving mapping multiple sequences to a function that incrementally builds a map with, say, assoc. Not only do I have to traverse the sequence multiple times, the sequences also have separate states, so elements must be "aligned", so to speak. That is, in a more complex case than RSS, a sub-element may be missing in a particular element, making one of the sequences shorter by one (there are no gaps). So the result may actually be incorrect. Is there a better way, or is it, in fact, the way you do it in Clojure?

    Read the article

  • SAML Identity Provider based on Active Directory

    - by Jarret
    I have a 3rd party program that supports web SSO using SAML 1.1 (it is ready to serve as the Service Provider, in other words). We would like to implement this SSO for our intranet users based on their Active Directory credentials. In other words, they've already logged on to their system, so let's simply use those credentials to facilitate an SSO. I am a little overwhelmed at where to begin, though. My initial thought is that IIS / Active Directory could easily serve as the Identity Provider since IIS gives us "Integrated Windows Authentication" abilities. I would think we could just create a .NET web app that requires Integrated Authentication which simply extracts the current user ID, builds the SAML response, and re-directs the user back to the Service Provider with this SAML response to complete the SSO. But then, my problem is that I simply have no real idea of how to go about creating this SAML response, the X.509 certs involved, etc... I am wondering if I am in over my head on this, or if creating this SAML response should be relatively easy. Note this SSO is to be used by intranet users only, so no need to worry about federating with other companies / domains.

    Read the article

  • TeamCity output artifacts not published to IIS7 folder

    - by clausas
    I am trying to set up TeamCity to build and deploy an ASP.NET MVC application. I have the setup running successfully on other servers using TeamCity 4.5, but the new server is running TeamCity 6, and I am having trouble getting it to work as expected. TeamCity manages to get the files from source control, and the project (Visual Studio Solution 2008 set to "Build") builds and outputs the necessary files as expected. The problem seems to be with my artifact paths, as the output files are not copied to the website folder.

    My solution consists of a dozen projects, of which the "Web" project is the interesting one in this case. The build checkout directory is C:\TeamCity\buildAgent\work\7da320cebf0ee541, and the "Web" project is found in C:\TeamCity\buildAgent\work\7da320cebf0ee541\Web. I have set up my build configuration with the following artifact paths (relative from the checkout directory to the folder containing the website):

        Web/bin=>../../../../inetpub/wwwroot/staging/bin
        Web/Content=>../../../../inetpub/wwwroot/staging/Content
        Web/Views=>../../../../inetpub/wwwroot/staging/Views
        Web/Media=>../../../../inetpub/wwwroot/staging/Media
        Web/*.aspx=>../../../../inetpub/wwwroot/staging
        Web/*.asax=>../../../../inetpub/wwwroot/staging

    (I've tried with more ../ just in case, but it didn't make a difference.) This is the output I get from the log:

        [19:35:29]: Publishing artifacts (1s)
        [19:35:29]: [Publishing artifacts] Paths to publish: [Web/bin=../../../../inetpub/wwwroot/staging/bin, Web/Content=../../../../inetpub/wwwroot/staging/Content, Web/obj=../../../../inetpub/wwwroot/staging/obj, Web/Views=../../../../inetpub/wwwroot/staging/Views, Web/Media=../../../../inetpub/wwwroot/staging/Media, Web/*.aspx=../../../../inetpub/wwwroot/staging, Web/*.asax=../../../../inetpub/wwwroot/staging, teamcity-info.xml]
        [19:35:30]: [Publishing artifacts] Sending files
        [19:35:32]: Build finished

    Logs from some of the other servers running TeamCity 4.5 use a different format, with a line for each of the artifacts being published; I'm not sure if this is relevant or only due to a different logging format. Everything seems to be working, but no files are put in my website folder after a build. Am I missing something here? Any help will be much appreciated :)

    Read the article

  • backbone.js Model.get() returns undefined, scope using coffeescript + coffee toaster?

    - by benipsen
    I'm writing an app using CoffeeScript with coffee toaster (an awesome NPM module for stitching) that builds my app.js file. Lots of my application classes and templates require info about the current user, so I have an instance of class User (extends Backbone.Model) stored as a property of my main Application class (extends Backbone.Router). As part of the initialization routine I grab the user from the server (which takes care of authentication, roles, account switching etc.). Here's that CoffeeScript:

        @user = new models.User
        @user.fetch()
        console.log(@user)
        console.log(@user.get('email'))

    The first logging statement outputs the correct Backbone.Model attributes object in the console just as it should:

        User
        _changing: false
        _escapedAttributes: Object
        _pending: Object
        _previousAttributes: Object
        _silent: Object
        attributes: Object
            account: Object
            created_on: "1983-12-13 00:00:00"
            email: "[email protected]"
            icon: "0"
            id: "1"
            last_login: "2012-06-07 02:31:38"
            name: "Ben Ipsen"
            roles: Object
            __proto__: Object
        changed: Object
        cid: "c0"
        id: "1"
        __proto__: ctor
        app.js:228

    However, the second returns undefined despite the model attributes clearly being there in the console when logged. And just to make things even more interesting, typing "window.app.user.get('email')" into the console manually returns the expected value of "[email protected]"... ?

    Just for reference, here's how the initialize method compiles into my app.js file:

        Application.prototype.initialize = function() {
          var isMobile;
          isMobile = navigator.userAgent.match(/(iPhone|iPod|iPad|Android|BlackBerry)/);
          this.helpers = new views.DOMHelpers().initialize().setup_viewport(isMobile);
          this.user = new models.User();
          this.user.fetch();
          console.log(this.user);
          console.log(this.user.get('email'));
          return this;
        };

    I initialize the Application controller in my static HTML like so:

        jQuery(document).ready(function(){
          window.app = new controllers.Application();
        });

    Suggestions please and thank you!

    Read the article

  • How do I merge multiple PDB files?

    - by blue.tuxedo
    We are currently using a single command-line tool to build our product on both Windows and Linux. So far it works nicely, allowing us to build out of source and with finer dependencies than any of our previous build systems allowed. This buys us great incremental and parallel build capabilities. To describe the build process shortly, we get the usual:

        .cpp  -- cl.exe -->  .obj and .pdb
        multiple .obj and .pdb  -- cl.exe -->  single .dll, .lib and .pdb
        multiple .obj and .pdb  -- cl.exe -->  single .exe and .pdb

    The MSVC C/C++ compiler supports this adequately. Recently the need to build a few static libraries emerged. From what we gathered, the process to build a static library is:

        multiple .cpp  -- cl.exe -->  multiple .obj and a single .pdb
        multiple .obj  -- lib.exe -->  a single .lib

    The single .pdb means that cl.exe should only be executed once for all the .cpp sources. This single execution means that we can't parallelize the build for this static library, which is really unfortunate. We investigated a bit further, and according to the documentation (and the available command-line options):

    - cl.exe does not know how to build static libraries
    - lib.exe does not know how to build .pdb files

    Does anybody know a way to merge multiple PDB files? Are we doomed to have slow builds for static libraries? How do tools like IncrediBuild work around this issue?

    Read the article

  • super light software development process

    - by Walty
    Hi, in the development processes I have been involved with so far, most have had teams of a SINGLE member, or occasionally two. We used Python + Django for the major development; the development process is actually very fast, and we do have code reviews, design-pattern discussions, and constant refactoring. Though the team size is small, I do think there are some development processes / best practices that could be enforced. For example, using svn would definitely be better than regular copy backups.

    I did read some articles & books about Agile, XP & continuous integration. I think they are nice, but still too heavy for this case (a team of 1 or 2, and fast coding). For example, IMHO, with nice design patterns and iterative development + refactoring, TDD MIGHT be overkill, or at least the overhead does not outweigh the advantages. And so is pair programming. Automated testing is a nice idea, but it seems not technically feasible for every project.

    Our current practices are: svn + milestones + code review. I wonder if there are development processes / best practices specifically targeted at such super-light teams? Thanks.

    Read the article

  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere along 256^32 subdirectories with 256 files at each end, which are only a few bytes long. I did this so I would have fast access to these files (since I'm not searching, I'm just directly accessing them via a known file path). I have a Python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1 GB of total file size, the read and write methods become extremely slow.

    Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        import shutil

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing Linux console commands, but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than directly accessing the file. I think the cause of the slowdown is that there are tons of files: performing this function 1000 times sometimes reaches 1 minute now, whereas before (when there were only a few files) 1000 calls completed in less than 1 second. How do you suggest I fix this?

    Read the article

  • Managing large binary files with git

    - by pi
    Hi there. I am looking for opinions on how to handle large binary files on which my source code (web application) is dependent. We are currently discussing several alternatives:

    - Copy the binary files by hand. Pro: not sure. Contra: I am strongly against this, as it increases the likelihood of errors when setting up a new site / migrating the old one. Builds up another hurdle to take.
    - Manage them all with git. Pro: removes the possibility to 'forget' to copy an important file. Contra: bloats the repository and decreases flexibility to manage the code-base, and checkouts/clones/etc. will take quite a while.
    - Separate repositories. Pro: checking out/cloning the source code is fast as ever, and the images are properly archived in their own repository. Contra: removes the simplicity of having the one and only git repository on the project. Surely introduces some other things I haven't thought about.

    What are your experiences/thoughts regarding this? Also: does anybody have experience with multiple git repositories and managing them in one project?

    Update: The files are images for a program which generates PDFs with those files in it. The files will not change very often (as in years), but they are very relevant to the program. The program will not work without the files.

    Update 2: I found a really nice screencast on using git-submodule at GitCasts.

    Read the article

  • Stored Procedure: Variable passed from PHP alters second half of query string

    - by Stephanie
    Hello everyone. Basically, we have an ASP website right now that I'm converting to PHP. We're still using the MSSQL server for the DB -- it's not moving. In the ASP there's an include file with a giant SQL query that gets executed. This include sits on a lot of pages, and this is a simple version of what happens: pages A, B and C all use this include file to return a listing. Page A passes through variable A to the include file, page B passes through variable B, page C passes through variable C, and so on. The include file builds the SQL query like this:

        sql = "SELECT * from table_one LEFT OUTER JOIN table_two ON table_one.id = table_two.id"

    then adds, based on the variable passed through from the parent page (remember, this is ASP):

        Select Case sType
            Case "A"
                sql = sql & " WHERE LOWER(column_a) <> 'no' AND LTRIM(ISNULL(column_b,'')) <> '' ORDER BY column_a"
            Case "B"
                sql = sql & " WHERE LOWER(column_c) <> 'no' ORDER BY lastname, firstname"
            Case "C"
                sql = sql & " WHERE LOWER(column_f) <> 'no' OR LOWER(column_g) <> 'no' ORDER BY column_g"
        End Select

    As you notice, every string that's appended as the second part of the SQL query is different from the previous one; it isn't just a single variable that could be substituted out, which is what has me stumped. How do I translate this case/switch into the stored procedure, based on the varchar input that I pass to the stored procedure via PHP? This stored procedure will actually handle a query listing for about 20 pages, so it's a hefty one, and this is my first major complicated one. I'm getting there, though! I'm also just more used to MySQL, not that they're that different. :P Thank you very much for your help in advance. Stephanie

    Read the article

  • problems with Console.SetOut in Release Mode?

    - by Matt Jacobsen
    I have a bunch of Console.WriteLines in my code that I can observe at runtime. I communicate with a native library that I also wrote. I'd like to stick some printf's in the native library and observe them too, but I don't see them at runtime. I've created a convoluted hello-world app to demonstrate my problem. When the app runs, I can debug into the native library and see that the hello world is called. The output never lands in the TextWriter though. Note that if the same code is run as a console app then everything works fine.

    C#:

        [DllImport("native.dll")]
        static extern void Test();

        StreamWriter writer;

        public Form1()
        {
            InitializeComponent();
            writer = new StreamWriter(@"c:\output.txt");
            writer.AutoFlush = true;
            System.Console.SetOut(writer);
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Test();
        }

    and the native part:

        __declspec(dllexport) void Test()
        {
            printf("Hello World");
        }

    Update: hamishmcn below started talking about debug/release builds. I removed the native call in the above button1_Click method and just replaced it with a standard Console.WriteLine .NET call. When I compiled and ran this in debug mode, the messages were redirected to the output file. When I switched to release mode, however, the calls weren't redirected. Console redirection only seems to work in debug mode. How do I get around this?

    Read the article

  • How do you make a static sprite be a child of another sprite in cocos2D while using SpaceManager

    - by JJBigThoughts
    I have two static (STATIC_MASS) SpaceManager sprites. One is a child of the other, by which I mean that one sort of builds up the other one; but although the child's image shows up in the right place, the child doesn't seem to exist in the Chipmunk physics engine like I would expect. In my case, I have a backboard (rectangular sprite) and a hoop (a circular sprite). Since I might want to move the backboard, I'd like to attach the hoop to the backboard so that the hoop automatically moves right along with it. Here, we see a rotating backboard with an attached hoop. It looks OK on the screen, but other objects only bounce off the backboard and pass right through the hoop (in a bad sense of the term). Why doesn't my child sprite seem to exist in the physics engine?

        // Add Backboard
        cpShape *shapeRect = [smgr addRectAt:cpvWinCenter mass:STATIC_MASS width:200 height:10 rotation:0.0f]; // We're upgrading this
        cpCCSprite *cccrsRect = [cpCCSprite spriteWithShape:shapeRect file:@"rect_200x10.png"];
        [self addChild:cccrsRect];

        // Spin the static backboard: http://stackoverflow.com/questions/2691589/how-do-you-make-a-sprite-rotate-in-cocos2d-while-using-spacemanager
        // Make static object update moves in chipmunk
        // Since Backboard is static, and since we're going to move it, it needs to know about spacemanager
        // so its position gets updated inside chipmunk.
        // Setting this would make the smgr recalculate all static shapes positions every step
        // cccrsRect.integrationDt = smgr.constantDt;
        // cccrsRect.spaceManager = smgr;
        // Alternative method: smgr.rehashStaticEveryStep = YES;
        smgr.rehashStaticEveryStep = YES;

        // Spin the backboard
        [cccrsRect runAction:[CCRepeatForever actionWithAction:
            [CCSequence actions:
                [CCRotateTo actionWithDuration:2 angle:180],
                [CCRotateTo actionWithDuration:2 angle:360],
                nil]
        ]];

        // Add the hoop
        cpShape *shapeHoop = [smgr addCircleAt:ccp(100,-45) mass:STATIC_MASS radius:50];
        cpCCSprite *cccrsHoop = [cpCCSprite spriteWithShape:shapeHoop file:@"hoop_100x100.png"];
        [cccrsRect addChild:cccrsHoop];

    This is only half working for me. Note: SpaceManager is a toolkit for working with cocos2d-iphone.

    Read the article

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application is coming from JSON API's. So right now when a VC needs some models, it does the JSON call itself (asynch) and when it receives the data, it builds the models. It works, but I'm trying to think of a cleaner method whereby the DAO's retrieve the information for me and return the models, all in an async manner. My initial thought is build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When you requested data [DAOinstance getAllUsers] the DAO would do all the network request stuff, and then when it had the data, it would call a method on its delegate (the VC) to pass the data. So I think that's a cool solution, but realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch logic depending on which DAO instance initiated the request. So my second thought was to be able to pass 'handler' selectors to the DAO object a la typical javascript patterns. So instead of an official protocol, I would say something like [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"] Then when the DAO completed its network activities, it would call the passed selector on the VC, and pass the data back. So am I headed in the wrong direction entirely here? Seems like maybe an ok way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob
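    The "pass a handler per request" idea can also be expressed as a small callback interface rather than raw selectors; here is a minimal sketch in Java for illustration (the types are hypothetical stand-ins, not part of any iOS API), just to show the shape of a DAO whose caller supplies a handler with each call:

        import java.util.List;

        // Hypothetical model object standing in for the app's User model.
        class User { }

        interface UserDao {

            // Each call carries its own handler, so one caller can issue several
            // different requests without branching inside a single shared delegate method.
            void getAllUsers(Callback<List<User>> callback);

            interface Callback<T> {
                void onSuccess(T result);      // invoked with the parsed models when the JSON arrives
                void onError(Exception error); // invoked if the network call or parsing fails
            }
        }

    In Objective-C terms this corresponds to passing a block (or a selector, as described above) with each request rather than registering the view controller as the DAO's single delegate.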

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating code..." message. I do not have "Whole Program Optimization" nor "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server would then serialize the graph, send it to the client (presumably using WCF), the server then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server. However I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parametrised-constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object-graph (immutable data seems right-out, it also requires parameter-less constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me. I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object-graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets. So my questions are: Has anyone faced a similar problem before and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends only bits that have changed (thus reducing concurrency-related risks?)? If sending the object-graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialise my object graph?

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches of the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features that are accepted and are committed. But now we have numerous feature branches. Each are focused, have a short life cycle, and are pushed to the trunk as they are completed, so we not debating the need for the branches and trying very much to be Agile. My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete the validation, regression testing and handle all configuration issues before pushing to the trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already complete the work in the feature branch. Therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Thier solution is to instead of build and regression testing to have the developer diff the Feature branch and the newly merged Trunk. That process in thier mind would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What are the issues that we will encounter if we remove this step and replace with the diff? Is the cost of staying Agile the additional work of the intergration of the branches? Thanks for any input. LoneCM

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >