Search Results

Search found 4561 results on 183 pages for 'production'.

Page 149/183 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • What does stdole.dll do?

    - by rc1
    We have a large C# (.NET 2.0) app which uses our own C++ COM component and a third-party fingerprint scanner library, also accessed via COM. We ran into an issue where, in production, some events from the fingerprint library do not get fired into the C# app, although events from our own C++ COM component fired and were received just fine. Using MSINFO32 to compare the loaded modules on a working system to those on a failing system, we determined that this was caused by STDOLE.DLL not being in the GAC and hence not being loaded into the faulty process. Dragging this file into the GAC caused events to come back fine from the fingerprint COM library.

    So what does stdole.dll do? It's 16k in size, so it can't be much... is it some sort of link to another library like STDOLE32? How come its absence causes such odd behavior? And how do we distribute stdole.dll? This is an XCOPY-deploy app and we don't use the GAC. Should we package it as a resource and use System.EnterpriseServices.Internal.Publish.GacInstall to ensure it's in the GAC?

    Read the article

  • dynamically horizontal scalable key value store

    - by Zubair
    Hi, is there a key value store that will give me the following:

    - Allow me to simply add and remove nodes, and redistribute the data automatically
    - Allow me to remove nodes and still have 2 extra data nodes to provide redundancy
    - Allow me to store text or images up to 1GB in size
    - Can store small-size data up to 100TB of data
    - Fast (so will allow queries to be performed on top of it)
    - Make all this transparent to the client
    - Works on Ubuntu/FreeBSD or Mac
    - Free or open source

    I basically want something I can use as a "single" system, and not have to worry about having memcached, a db, and several storage components - so yes, I do want a database "silver bullet", you could say. Thanks, Zubair

    Answers so far:

    - MogileFS on top of BackBlaze - As far as I can see this is just a filesystem, and after some research it only seems to be appropriate for large image files
    - Tokyo Tyrant - Needs lightcloud. This doesn't auto-scale as you add new nodes. I did look into this and it seems it is very fast for queries which fit onto a single node, though
    - Riak - This is one I am looking into myself, but I don't have any results yet
    - Amazon S3 - Is anyone using this as their sole persistence layer in production? From what I have seen it seems to be used for storage of images, as complex queries are too expensive
    - Cassandra (suggested by @shaman) - definitely one I am looking into

    So far it seems that there is no database or key value store that fulfills the criteria I mentioned - not even after offering a bounty of 100 points did the question get answered!

    Read the article

  • Installing mongrel service on Windows 2008

    - by akirekadu
    We use InstallAnywhere to install our product. One of the components that it needs to install is mongrel. IA invokes the following command line during installation:

        mongrel_rails service::install -N service-1 -D "Service 1" -c "C:\app_dir\\rails\rails_apps\service-1" -p 19000 -e production

    Apparently, under the hood, "sc create..." is used. The installation works great on Windows 2003. On Windows 2008, though, this operation requires elevated privileges. When I log in as the local administrator (i.e. the 'local-machine\administrator' user), the installation works just fine. However, when I log in as a domain user that is part of the local administrators group, the service fails to install with the error "access is denied". How can I make it possible to install the product without having to log in as the local administrator? Thanks!

    A couple of notes I would like to add: one solution I tried is to execute the installer as administrator. The service does get installed. However, it creates another problem: an embedded third-party product and its files get installed with admin-only rights. So we do need to run the installer as the logged-in user.

    Read the article

  • Full-text search error during full-text index population : Error Code '0x80092003'

    - by user360074
    Dear All, I have a problem with the Full-Text Search service in our production environment. Each time I rebuild the full-text catalog, there is no error in the user interface, but there is no data in the full-text catalog:

        Item Count : 0
        Catalog size : 0 MB

    OS: Windows Server 2003 R2 Standard Edition, Service Pack 2
    SQL Server version: Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Oct 14 2005 00:33:37 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

    It works on the dev server (Windows XP Professional Version 2002, Service Pack 3) but errors on the prod server (Windows Server 2003 R2 Standard Edition, Service Pack 2). This is the error in the crawl log:

        2010-06-02 03:51:31.06 spid24s Informational: Full-text Full population initialized for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'). Population sub-tasks: 1.
        2010-06-02 03:51:31.06 spid24s Error '0x80092003' occurred during full-text index population for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'), full-text key value 0x00000006. Attempt will be made to reindex it.
        2010-06-02 03:51:31.06 spid24s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'.
        2010-06-02 03:51:31.06 spid24s Error '0x80092003' occurred during full-text index population for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'), full-text key value 0x00000005. Attempt will be made to reindex it.

    Read the article

  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use for some simple code. I noticed that the Clojure implementations were the slowest in each case. For instance:

    Python - hello.py

        def hello_world(name):
            print "Hello, %s" % name

        hello_world("world")

    and the result:

        $ time python hello.py
        Hello, world
        real    0m0.027s
        user    0m0.013s
        sys     0m0.014s

    Java - hello.java

        import java.io.*;

        public class hello {
            public static void hello_world(String name) {
                System.out.println("Hello, " + name);
            }

            public static void main(String[] args) {
                hello_world("world");
            }
        }

    and the result:

        $ time java hello
        Hello, world
        real    0m0.324s
        user    0m0.296s
        sys     0m0.065s

    and finally, Clojure - hellofun.clj

        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        (hello-world "world")

    and the results:

        $ time clj hellofun.clj
        Hello, world
        real    0m1.418s
        user    0m1.649s
        sys     0m0.154s

    That's a whole, gargantuan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al. that need to be used in order to speed up execution? More importantly - isn't this huge difference in performance going to be an issue at some point? (I mean, let's say I was using Clojure for a production system - the gain I get in using a Lisp seems completely offset by the performance issues I can see here.)

    The machine used here is a 2007 MacBook Pro running Snow Leopard, with a 2.16GHz Intel C2D and 2GB DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like:

        #!/bin/bash
        JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java
        CLJ_DIR=/opt/jars
        CLOJURE=$CLJ_DIR/clojure.jar
        CONTRIB=$CLJ_DIR/clojure-contrib.jar
        JLINE=$CLJ_DIR/jline-0.9.94.jar
        CP=$PWD:$CLOJURE:$JLINE:$CONTRIB

        # Add extra jars as specified by `.clojure` file
        if [ -f .clojure ]
        then
          CP=$CP:`cat .clojure`
        fi

        if [ -z "$1" ]; then
            $JAVA -server -cp $CP \
                jline.ConsoleRunner clojure.lang.Repl
        else
            scriptname=$1
            $JAVA -server -cp $CP clojure.main $scriptname -- $*
        fi
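
    The 1.4 s here is almost certainly dominated by JVM start-up plus loading the Clojure runtime, not by the program itself. A quick way to see the difference (my own sketch, not part of the original post) is to time just the call inside an already-running JVM and compare that with the wall-clock `time` output above:

        // Illustrative sketch: measures only the method call, so the gap between
        // this number and the `time` output above is start-up overhead.
        public class HelloTiming {
            static void helloWorld(String name) {
                System.out.println("Hello, " + name);
            }

            public static void main(String[] args) {
                long start = System.nanoTime();
                helloWorld("world");
                long elapsed = System.nanoTime() - start;
                System.out.printf("call took %.3f ms%n", elapsed / 1e6);
            }
        }

    The same experiment with the Clojure function inside an already-running REPL gives a similarly tiny number, which is why the start-up cost matters mostly for short-lived scripts rather than for a long-running production service.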

    Read the article

  • Are there size limitations to the .NET Assembly format?

    - by McKAMEY
    We ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web Application Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Web Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors are coming from areas of the app that weren't even changed.

    Using Reflector we inspected the offending methods and found that there were garbage strings in the code (which .NET Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    aspnet_merge.exe / Web Deployment Project "Output Assemblies" properties:

    - Merge all outputs to a single assembly
    - Merge each individual folder output to its own assembly
    - Merge all pages and control outputs to a single assembly
    - Create a separate assembly for each page and control output

    In the web deployment project properties, if we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly!

    My question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any assembly format / aspnet_merge.exe gurus know about any such limitations. Seems to me like a 25 MB assembly, while big, isn't outrageous.

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :()

    "Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have:

    - clientdb.prod.example.com for Production
    - clientdb.perf.example.com for Performance Testing
    - clientdb.qa.example.com for QA
    - clientdb.dev.example.com for Development

    Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com first and then if that lookup failed they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain."

    Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip

    This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP? Also, cross-posted on ServerFault.
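
    For what it's worth, the per-environment resolution in the quoted scheme is normally just the host's DNS suffix search list doing the work; application code only ever asks for the bare name. A minimal sketch of the client side (mine, not from the post) showing what the "single configuration entry" looks like in practice:

        import java.net.InetAddress;
        import java.net.UnknownHostException;

        public class ClientDbLookup {
            public static void main(String[] args) throws UnknownHostException {
                // "clientdb" is deliberately unqualified; the operating system's
                // resolver appends its configured search suffixes in order
                // (e.g. qa.example.com, then example.com, on a QA box).
                InetAddress addr = InetAddress.getByName("clientdb");
                System.out.println("clientdb resolved to " + addr.getHostAddress());
            }
        }

    On Windows the suffix search order is set per machine (advanced TCP/IP settings or group policy), so for this part at least a separate DNS server per environment shouldn't be strictly required.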

    Read the article

  • Rails 3 with Ruby 1.9.1 on Heroku

    - by stephen murdoch
    I've decided that I am going to man up and start using Rails 3 from now on, but the following note found here puts me off a bit:

    "Note that Ruby 1.8.7 has marshaling bugs that crash both Rails 2.3.x and Rails 3.0.0. Ruby 1.9.1 outright segfaults on Rails 3.0.0, so if you want to use Rails 3 with 1.9.x, jump on 1.9.2 trunk for smooth sailing."

    I use Heroku for my deployment and, as far as I am aware, they do not plan to add 1.9.2 to the stack until it's stable (which might be in August), so I was thinking of doing it with 1.9.1 and seeing what happens. I know that there is a third beta release now, but the comments on the blog imply that it's still a little bit buggy. Is DHH implying that you shouldn't touch Rails 3 at all if you are on 1.9.1? What are other Heroku-ists doing regarding Rails 3? Anyone using it for any production apps? I guess I'll only know once I've tried, but any advice would be nice.

    Read the article

  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
    Hi all, after much time and effort, we're finally using Maven to manage our application lifecycle for development. We still, unfortunately, use Ant to build an EAR before deploying to Test / QA / Staging. The problem is that, while we made that leap forward, developers are still free to do as they please when testing their code. One issue that we have is that half our team is using Tomcat to test on and the other half is using Jetty. I prefer Jetty slightly over Tomcat, but regardless, we use WAS for all the other environments.

    My question is: should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments. Tomcat, Jetty, and WAS are different under the hood. My opinion is that we should all develop on what we're deploying to production with, so we don't have the problem of "well, it worked fine on my machine." While I prefer Jetty, I would rather we all work in the same environment, even if it means deploying to WAS, which is slow and cumbersome.

    What are your team dynamics like? Our lead developers stepped down from the team and development has been a free-for-all since then. Walter

    Read the article

  • NMock2.0 - how to stub a non interface call?

    - by dferraro
    Hello, I have a class API which has full code coverage and uses DI to mock out all the logic in the main class function (Job.Run), which does all the work. I found a bug in production where we weren't doing some validation on one of the data input fields. So, I added a stub function called ValidateFoo() and wrote a unit test against this function to expect a JobFailedException. I ran the test - it failed, obviously, because that function was empty. I added the validation logic, and now the test passes. Great, now we know the validation works.

    The problem is: how do I write a test to make sure that ValidateFoo() is actually called inside Job.Run()? ValidateFoo() is a private method of the Job class, so it's not on an interface. Is there any way to do this with NMock 2.0? I know TypeMock supports fakes of non-interface types, but changing mock libraries right now is not an option. At this point, if NMock can't support it, I will simply add the ValidateFoo() call to the Run() method and test things manually - which obviously I'd prefer not to do, considering my Job.Run() method has 100% coverage right now. Any advice? Thanks very much, it is appreciated.

    EDIT: the other option I have in mind is to just create an integration test for my Job.Run() functionality (injecting into it true implementations of the composite objects instead of mocks). I will give it a bad input value for that field and then validate that the job failed. This works and covers my test - but it's not really a unit test, but rather an integration test that tests one unit of functionality... hmm.

    EDIT 2: Is there any way to do this? Anyone have ideas? Maybe TypeMock - or a better design?
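
    A common answer to the "better design?" part, independent of any particular mocking library, is to pull the validation out from behind the private method and inject it as its own small collaborator, so the interaction becomes observable from a test. A rough sketch of the shape (shown in Java purely for illustration; all the names here are invented, not from the post):

        // Sketch only: JobFailedException, JobInput and FooValidator are invented names.
        class JobFailedException extends RuntimeException {
            JobFailedException(String message) { super(message); }
        }

        class JobInput {
            final String foo;
            JobInput(String foo) { this.foo = foo; }
        }

        interface FooValidator {
            // Throws JobFailedException when the input is invalid.
            void validate(JobInput input);
        }

        class Job {
            private final FooValidator validator;

            Job(FooValidator validator) { this.validator = validator; }

            void run(JobInput input) {
                validator.validate(input); // the call a mock can now verify
                // ... the rest of the job's work ...
            }
        }

    With that shape, the unit test for Run() only has to assert that validate() was invoked, and the validation rules themselves keep their own dedicated tests.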

    Read the article

  • Which Django 1.2.x multilingual application to use?

    - by mawimawi
    There are a couple of different applications for internationalized content in Django. As of now I have only used http://code.google.com/p/django-multilingual/ in my production environments, but I wonder if there are "better" solutions for my needs. What my staff users need is the following:

    - An object is created by a staff user in any language (e.g. "de").
    - This object should be displayed in the German version of the website.
    - When a staff user translates the object into a different language (e.g. "fr"), then the page must be visible in the French version as well.
    - If an object is not translated into the visitor's currently selected language (e.g. "en"), then calling the object's URL shall raise a 404 error (or, even better, a notice that the object is only available in the languages "de" and "fr", so the visitor might be able to select one of those languages).

    My staff users are working in the admin interface, so the multilingual application must support this as well. I don't really care whether the multilingual app uses a single table with many fields (like title_en, title_de, title_fr) or a foreign key to a related table (as it is implemented in django-multilingual). I only want it to have a good admin interface and no "default" language, because some content might be available just in "de", and some other content just in "fr" and "en". And the most important issue, of course, is compatibility with Django 1.2.x. What are your experiences and preferred apps, and why?

    Read the article

  • Backing up my locally hosted rails apps in preparation for OS upgrade

    - by stephen murdoch
    I have some apps running on Heroku. I will be upgrading my OS in two weeks. The last time I upgraded, though (6 months ago), I ran into some problems. Here's what I did:

    - copied all my Rails apps onto DVD
    - upgraded the OS
    - transferred the Rails apps from DVD to the new OS

    Then, after setting up new SSH keys, I tried to push to some of my Heroku apps and, whilst I can't remember the exact error message off-hand, it more or less amounted to "fatal exception: the remote end hung up". So I know that I'm doing something wrong here.

    First of all, is there any need for me to be putting my Heroku-hosted Rails apps onto DVD? Would I be better just pulling all my apps from their Heroku repos once I've done the upgrade? What do others do here? The reason I stuck them on DVD is because I tend to push a specific production branch to Heroku and sometimes omit large development files from it...

    Secondly, was this problem caused by the SSH keys? Should I have backed up the old keys and transferred them from my old OS to my new one too, or is Heroku perfectly happy to let you change OSs like that? My solution in the end was to just create new Heroku apps and reassign the custom domain names in the Heroku add-ons menu... I never actually thought of pulling from the Heroku repos, as I tend to push a specific branch to Heroku and that branch doesn't always have all the development files in it...

    I realise that the error message I mentioned doesn't particularly help anyone, but I didn't think to remember it 6 months ago. Any advice would be appreciated.

    PS - when I say upgrade, I mean a full install of the new version with a full format of the HDD.

    Read the article

  • Listener error not connecting

    - by Sham
    I have two databases running on port 1521. When I connect to the ORCL db it gets connected, but when I try to connect to the other DB it gives me the following error:

        ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.

    My listener.ora:

        # listener.ora Network Configuration File:
        # C:\app\Administrator\product\11.2.0\dbhome_1\network\admin\listener.ora
        # Generated by Oracle configuration tools.

        ADMIN_RESTRICTIONS_LISTENER = ON

        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = 127.1.1.1)(PORT = 1521))
            )
          )

        ADR_BASE_LISTENER = C:\app\Administrator

    TNSNAMES.ora:

        ORCL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = 127.1.1.1)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

        PARIVARTAN =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = 127.1.1.1)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = Parivartan)
            )
          )

    lsnrctl result:

        STATUS of the LISTENER
        ------------------------
        Alias                     LISTENER
        Version                   TNSLSNR for 64-bit Windows: Version 11.2.0.1.0 - Production
        Start Date                14-DEC-2012 14:22:51
        Uptime                    0 days 0 hr. 19 min. 31 sec
        Trace Level               off
        Security                  ON: Local OS Authentication
        SNMP                      OFF
        Listener Parameter File   C:\app\Administrator\product\11.2.0\dbhome_1\network\admin\listener.ora
        Listener Log File         c:\app\administrator\diag\tnslsnr\127.1.1.1\listener\alert\log.xml
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.1.1.1)(PORT=1521)))
        Services Summary...
        Service "orcl" has 1 instance(s).
          Instance "orcl", status READY, has 1 handler(s) for this service...
        Service "orclXDB" has 1 instance(s).
          Instance "orcl", status READY, has 1 handler(s) for this service...
        The command completed successfully

    Please reply soon.

    Read the article

  • Generating a static website from a set of content data (possibly with webgen, webby or a similar tool)

    - by Darel
    My company (an engineering firm) is looking to redesign their website with some dynamic content. We have a nice portfolio of projects that we'd like to present on our site by category. To elaborate, I'd like to have a "Project Categories" menu, where you can choose a sub-project category (such as churches, schools, etc.) which links to a page with images of all projects that have been tagged with that category attribute. Clicking on an image would then take you to a detailed page for that project.

    I have done a good bit of ASP and JSP page development, but I've always worked on the front end in an enterprise environment - I've never built a production site from the back end. The advice I've gotten so far is that a full-blown CMS solution would be somewhat overkill, as we won't have a large hit count, and we'll be displaying a few hundred projects at most.

    One big-picture choice I appear to have is whether to dynamically generate the pages (with ASP or JSP) or to use a tool to generate a set of static HTML pages. The tool would build the menus, project summary pages, and individual project pages based on a set of data I could provide (in the form of a database or text file). I'm leaning towards trying to use a tool like webgen or webby to statically generate the site, due to our current web hosting situation. Any thoughts on which approach is more appropriate? Is webgen or webby capable of doing what I am trying to do? Or can anyone recommend other web authoring tools better equipped to accomplish this? Thanks for any feedback!
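
    Whatever tool ends up doing the work, the heart of the static approach is small: walk the project data and write one HTML file per project plus the category pages. A rough sketch of that idea (not webgen- or webby-specific; the project data and paths below are made up for illustration):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.Arrays;
        import java.util.List;

        public class ProjectSiteGenerator {
            static class Project {
                final String slug, title, category;
                Project(String slug, String title, String category) {
                    this.slug = slug; this.title = title; this.category = category;
                }
            }

            public static void main(String[] args) throws IOException {
                // Made-up sample data standing in for the firm's project database.
                List<Project> projects = Arrays.asList(
                    new Project("st-marys", "St. Mary's Church", "churches"),
                    new Project("lincoln-high", "Lincoln High School", "schools"));

                for (Project p : projects) {
                    Path dir = Paths.get("site", p.category);   // one folder per category
                    Files.createDirectories(dir);
                    String html = "<html><body><h1>" + p.title + "</h1></body></html>";
                    Files.write(dir.resolve(p.slug + ".html"), html.getBytes("UTF-8"));
                }
            }
        }

    A real generator layers templating, menus and category index pages on top of this loop, which is exactly what tools like webgen and webby package up.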

    Read the article

  • Glassfish: Defining Custom JNDI Names for Session Beans

    - by Adeel Ansari
    Background: I want to use GF3 in development, whereas the actual SIT, UAT, and production environments use WAS.

    Problem: With the remote session beans everything is good to go, as GF3 gives a non-standard JNDI name which is the same as what WAS suggests, i.e. an absolute class name. Now, for the local session beans, WAS uses the same absolute class name but with the prefix ejblocal:, whereas GF3 doesn't give any non-standard JNDI name for local session beans at all. GF3 only comes up with the portable name, java:global/... I need to find a way to use the same names for both.

    I am using EJB 3.0, WAS 7.9, and GlassFish 3. We don't have any XML configuration for the EJBs, and we're using Spring to inject the beans into Struts 2 actions. With remote interfaces both servers are okay and agree on a single convention, but for locals they differ. Is there any solution for this? Or will just adding a sun-ejb-jar.xml solve it? Thanks.
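
    One workaround, if no common name can be coaxed out of both containers, is to stop hard-coding the name at all and resolve whichever form the current container uses from per-environment configuration. A minimal sketch (mine, not from the post; the property name and JNDI strings are only examples):

        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class LocalBeanLocator {
            // Hypothetical property "foo.jndi.name", set per environment, e.g.:
            //   WAS:         ejblocal:com.example.FooLocal
            //   GlassFish 3: java:global/myapp/FooBean!com.example.FooLocal
            public static Object lookupFoo() throws NamingException {
                String jndiName = System.getProperty("foo.jndi.name");
                return new InitialContext().lookup(jndiName);
            }
        }

    Since Spring is already doing the injection, the same idea can live in the Spring configuration instead (a JNDI lookup whose name comes from a per-environment properties file), which keeps the application code identical across GF3 and WAS.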

    Read the article

  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:

    - Yes, the SQL user is set up, using the correct username and password
    - Yes, the SQL user has execute permissions on the stored procedures in the database
    - Yes, I have updated the TypeLib references to the new UUID
    - Yes, I have logged into the database via SSMS with the SQL user id and it can see and execute the stored procedures just fine in SSMS, but not from the web app
    - Yes, the SQL user has the database set as its default database

    The most frustrating thing is it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs prod).

    EDIT: I have also tried pointing the prod web box at the dev database server and get the same error, so I'm fairly sure the issue isn't on the database side.

    Read the article

  • Deploying .NET COM dll, getting error (0x80070002)

    - by Brett
    I have a .NET COM assembly I am attempting to deploy to a web server (IIS 6, Windows 2003). We have successfully deployed this assembly to our test environment, but the production environment is not working. The assembly is being called from a classic ASP page. Every time that page tries to initialize the assembly with Set LTMRender = CreateObject("LTMRender.Render"), I get the error "Error Type:, (0x80070002)". This error seems to indicate a permission-denied or file-not-found type of problem.

    I created a test app to see if the assembly works outside of the web page. The .exe initializes the assembly, and then makes a call designed to fail, which in turn causes the assembly to produce a log file. It works if I run the .exe in the same folder as the assembly, but fails if I run it elsewhere. For some reason, the assembly is not accessible from outside its folder. I can't figure out why this won't work. Things I have confirmed:

    - The deployment folder has adequate permissions. We have confirmed that the folder the assembly is installed in has the correct permissions for all the necessary user accounts.
    - The assembly is signed with a strong name, and was registered with regasm.exe C:\_WebSites\LTMRender\LTMRender.dll /codebase /tlb:C:\_WebSites\LTMRender\LTMRender.tlb. Regasm reported success.
    - The assembly has the attribute and relevant GUIDs set correctly.

    Any tips?

    EDIT: We ran Filemon against my testapp.exe and it seems to have indicated what the problem is. When testapp.exe runs in the D:\_websites\DocWebV2\ or D:\_websites\DocWebV2\LTMRender\ folder, it succeeds and Filemon shows:

        D:\_websites\DocWebV2\LTMRender\pinPDF.dll SUCCESS

    If I run my testapp.exe in D:\_websites\DocWebV2\Client - where my ASP pages run - it shows:

        D:\_websites\DocWebV2\pinPDF.dll NAME NOT FOUND

    and then:

        D:\_websites\DocWebV2\pinPDF\pinPDF.dll FILE NOT FOUND

    I'm not sure why it is not looking in the correct folder when run anywhere other than that particular folder.

    Read the article

  • Avoiding shutdown hook

    - by meryl
    Through the following code I can play and cut an audio file. Is there any other way to do this that avoids using a shutdown hook? The problem is that whenever I push the cut button, the file doesn't get saved until I close the application. Thanks.

        void play_cut() {
            try {
                // First, we get the format of the input file
                final AudioFileFormat.Type fileType =
                    AudioSystem.getAudioFileFormat(inputAudio).getType();
                // Then, we get a clip for playing the audio.
                c = AudioSystem.getClip();
                // We get a stream for playing the input file.
                AudioInputStream ais = AudioSystem.getAudioInputStream(inputAudio);
                // We use the clip to open (but not start) the input stream
                c.open(ais);
                // We get the format of the audio codec (not the file format we got above)
                final AudioFormat audioFormat = ais.getFormat();
                // We add a shutdown hook, an anonymous inner class.
                Runtime.getRuntime().addShutdownHook(new Thread() {
                    public void run() {
                        // We're now in the hook, which means the program is shutting down.
                        // You would need to use better exception handling in a production application.
                        try {
                            // Stop the audio clip.
                            c.stop();
                            // Create a new input stream, with the duration set to the frame count
                            // we reached. Note that we use the previously determined audio format.
                            AudioInputStream startStream = new AudioInputStream(
                                new FileInputStream(inputAudio), audioFormat, c.getLongFramePosition());
                            // Write it out to the output file, using the same file type.
                            AudioSystem.write(startStream, fileType, outputAudio);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                });
                // After setting up the hook, we start the clip.
                c.start();
            } catch (UnsupportedAudioFileException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } catch (LineUnavailableException e) {
                e.printStackTrace();
            }
        } // end play_cut
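
    One straightforward way to avoid the hook entirely (my sketch, not from the original post; it assumes the same fields as the code above: the Clip c, inputAudio and outputAudio) is to do the stop-and-write immediately in the cut button's handler, so the file is written the moment the user cuts rather than at JVM shutdown:

        // Sketch: call this from the cut button's action listener instead of
        // registering a shutdown hook. Same assumptions as the code above.
        void cut_now() {
            try {
                AudioFileFormat.Type fileType =
                    AudioSystem.getAudioFileFormat(inputAudio).getType();
                AudioFormat audioFormat =
                    AudioSystem.getAudioInputStream(inputAudio).getFormat();

                c.stop(); // stop playback first so getLongFramePosition() is the cut point

                AudioInputStream cutStream = new AudioInputStream(
                    new FileInputStream(inputAudio), audioFormat, c.getLongFramePosition());
                AudioSystem.write(cutStream, fileType, outputAudio); // saved right away
                cutStream.close();
            } catch (UnsupportedAudioFileException | IOException e) {
                e.printStackTrace();
            }
        }

    With the write happening at cut time, nothing is deferred to application exit and the shutdown hook becomes unnecessary.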

    Read the article

  • Modelling Business Logic with NON-Techies

    - by cbmeeks
    The setup: Winform/ASP.NET MVC projects. Learning NHibernate SQL-Server driven apps I work with clients that have no idea how to model an application. That's what I'm for. However, we have lots of conflicts with validation, mis-understandings, etc. For example, the client will ask for an order entry screen. The screen should require a "product". That's fine and dandy. However, the client didn't know to tell me that the user can't order a product of "Class A" unless it's Tuesday. Or, they need a time entry screen. 2 days before it's rolled into production, they casually forgot to mention that certain activities are only valid for certain situations. These situations being a week of coding. That's of course, some crude examples (not by much!). But the problem is getting these non-technical clients to layout their business logic. They somehow didn't realize that the "Class A" problem would come up two weeks later, etc. I'm all for agile programming but is there an easy way to somehow make business logic like this extremely easy to implement and change on almost a daily basis? I of course am splitting the project into hopefully intelligent pieces, using NHibernate, etc. But making this BI logic so dynamic is really making it hard to project timelines, etc. Any suggestions? I know there will never be a perfect client (or a perfect provider) but how do you guys deal with the constant mis-understandings? Thanks.

    Read the article

  • A really smart rails helper needed

    - by Stefan Liebenberg
    In my Rails application I have a helper function:

        def render_page( permalink )
          page = Page.find_by_permalink( permalink )
          content_tag( :h3, page.title ) + inline_render( page.body )
        end

    If I called the "home" page with:

        <%= render_page :home %>

    and the "home" page's body was:

        <h1>Home</h1>
        bla bla
        <%= render_page :about %>
        <%= render_page :contact %>

    I would get "home" with "about" and "contact" - it's nice and simple... right up to where someone goes and changes the "home" page's content to:

        <h1>Home</h1>
        bla bla
        <%= render_page :home %>
        <%= render_page :about %>
        <%= render_page :contact %>

    which will result in an infinite loop (a segmentation fault on WEBrick)... How would I change the helper function to something that won't fall into this trap? My first attempt was along the lines of:

        @@list = []

        def render_page( permalink )
          unless @@list.include?(permalink)
            @@list += [ permalink ]
            page = Page.find_by_permalink( permalink )
            result = content_tag( :h3, page.title ) + inline_render( page.body )
            @@list -= [ permalink ]
            return result
          else
            content_tag :b, "this page is already being rendered"
          end
        end

    which worked in my development environment, but bombed out in production... any suggestions? Thank you, Stefan

    Read the article

  • A copy of ApplicationController has been removed from the module tree but is still active

    - by Matchu
    Whenever two concurrent HTTP requests go to my Rails app, the second always returns the following error:

        A copy of ApplicationController has been removed from the module tree but is still active!

    From there it gives an unhelpful stack trace to the effect of "we went through the standard server stuff, ran your first before_filter on ApplicationController (and I checked; it's just whichever filter runs first)", then offers the following:

        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant'
        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:96:in `const_missing'

    which I'm assuming is a generic response and doesn't really say much. Google seems to tell me that people developing Rails Engines will encounter this, but I don't do that. All I've done is upgrade my Rails app from 2.2 (2.1?) to 2.3. What are some possible causes for this error, and how can I go about tracking down what's really going on? I know this question is vague, so would any other information be helpful?

    More importantly: I tried doing a test run in a "production" environment just now, and the error doesn't seem to persist. Does this only affect development, then, and need I not worry too much?

    Read the article

  • Does overloading Grails static 'mapping' property to bolt on database objects violate DRY?

    - by mikesalera
    Does Grails' static 'mapping' property in domain classes violate DRY? Let's take a look at the canonical domain class:

        class Book {
            Long id
            String title
            String isbn
            Date published
            Author author

            static mapping = {
                id generator:'hilo', params:[table:'hi_value', column:'next_value', max_lo:100]
            }
        }

    or:

        class Book {
            ...
            static mapping = {
                id( generator:'sequence', params:[sequence_name: "book_seq"] )
            }
        }

    And let us say, continuing this thought, that I have my Grails application working with HSQLDB or MySQL, but the IT department says I must use a commercial database package (written by a large corp in Redwood Shores, Calif.). Does this change leave my web application nobbled in development and test environments? MySQL supports autoincrement on a primary key column but does not support sequence objects, for example. Is there a cleaner way to implement this sort of 'only when in production mode' mapping without loading up the domain class?

    Read the article

  • Bootstrapper (setup.exe) says ".NET 3.5 not found" but launching .msi directly installs application

    - by Marek
    Our installer generates a bootstrapper (setup.exe) and an MSI file - a pretty common scenario. One of the production machines reports a strange problem during install: if the user launches the bootstrapper (setup.exe), it reports that .NET 3.5 is not installed. This happens with an account in the administrators group; no matter whether they launch it as administrator or not, same behavior. The application installs fine when application.msi or OurInstallLauncher.exe (see below for an explanation) is started directly, no matter whether "run as administrator" is applied.

    We have checked that .NET is installed on the machine (both the 64-bit and 32-bit "versions"): under both C:\Windows\Microsoft.NET\Framework64 and C:\Windows\Microsoft.NET\Framework there is a folder named v3.5. This happens on 64-bit Windows 7. I cannot reproduce it on my development 64-bit Windows 7. On Windows XP and Vista it has worked without any problem for a long time.

    The part of our build script that declares the GenerateBootstrapper task (nothing special):

        <ItemGroup>
          <BootstrapperFile Include="Microsoft.Windows.Installer.3.1">
            <ProductName>Microsoft Windows Installer 3.1</ProductName>
          </BootstrapperFile>
          <BootstrapperFile Include="Microsoft.Net.Framework.3.5">
            <ProductName>Microsoft .NET Framework 3.5</ProductName>
          </BootstrapperFile>
        </ItemGroup>

        <GenerateBootstrapper ApplicationFile=".\Files\OurInstallLauncher.exe"
                              ApplicationName="App name"
                              Culture="en"
                              ComponentsLocation="HomeSite"
                              CopyComponents="True"
                              Validate="True"
                              BootstrapperItems="@(BootstrapperFile)"
                              OutputPath="$(OutSubDir)"
                              Path="$(SdkBootstrapperPath)" />

    Note: OurInstallLauncher.exe is a language selector that applies a transform to the MSI based on the user's selection. This is not relevant to the question at all, because the installer never gets as far as launching this exe! It displays that .NET 3.5 is missing right after starting setup.exe. Has anyone seen this behavior before?

    Read the article

  • Possible to recover mysql root pass with sudo server access?

    - by jonathonmorgan
    I've inherited development for a website on VPS hosting, and have login info for a user with sudo privileges, but don't have the password for the MySQL root user. After digging around a little, it looks like the only way to fix this is to stop MySQL (something like this: http://waoewaoe.wordpress.com/2010/02/03/recover-reset-mysql-root-password/). But because the website it's serving is currently in production, I'm hoping you guys can enlighten me about any potential consequences (or let me know if there's typically a file where the password would be accessible).

    a) During the time MySQL is stopped, information in the database won't be accessible, right - even by other users?

    b) Will resetting the root password have any impact on other users after MySQL has restarted? Will their usernames/passwords still be valid?

    The current application is using an account with limited privileges to read/write to the database, and while 5 minutes of downtime in the middle of the night would probably go unnoticed, half a day while I tie up loose ends / figure out what I screwed up will land me in hot water. Thanks in advance for your help!

    Read the article

  • PHPUnit and autoloaders: Determining whether code is running in test-scope?

    - by pinkgothic
    Premise I know that writing code to act differently when a test is run is hilariously bad practise, but I may've actually come across a scenario in which it may be necessary. Specifically, I'm trying to test a very specific wrapper for HTML Purifier in the Zend framework - a View Helper, to be exact. The HTML Purifier autoloader is necessary because it uses a different logic to the autoloaders we otherwise have. Problem require()-ing the autoloader at the top of my View Helper class, gives me the following in test-scope: HTML Purifier autoloader registrar is not compatible with non-static object methods due to PHP Bug #44144; Please do not use HTMLPurifier.autoload.php (or any file that includes this file); instead, place the code: spl_autoload_register(array('HTMLPurifier_Bootstrap', 'autoload')) after your own autoloaders. Replacing the require() with spl_autoload_register(array('HTMLPurifier_Bootstrap', 'autoload')) as advertised means the test runs fine, but the View Helper dies a terrible death claiming: Zend_Log[3707]: ErrorController caught LogicException "Passed array does not specify an existing static method (class 'HTMLPurifier_Bootstrap' not found)" (Our test folder structure is slightly different to our Zend folder structure by necessity.) Question(s) After tinkering with it, I'm thinking I'll need to pick an autoloader-loading depending on whether things are in the test scope or not. Do I have another option to include HTMLPurifier's autoloading routine in both cases that I'm not seeing due to tunnel vision? If not, do I have to find a means to differentiate between test-environment and production-environment this with my own code (e.g. APPLICATION_ENV) - or does PHPUnit support this godawful hackery of mine natively by setting a constant that I could check whether its been defined(), or similar shenanigans? (My Google-fu here is weak! I'm probably just doing it wrong.)

    Read the article
