Search Results

Search found 13713 results on 549 pages for 'production environment'.

Page 424/549 | < Previous Page | 420 421 422 423 424 425 426 427 428 429 430 431  | Next Page >

  • Reduce number of config files to as few as possible

    - by Scott
    For most of my applications I use iBatis.Net for database access/modeling and log4Net for logging. In doing this, I need a number of *.config files for each project. For example, for a simple application I need to have the following *.config files:

    - app.config ([AssemblyName].[Extension].config)
    - [AssemblyName].SqlMap.config
    - [AssemblyName].log4Net.config
    - [AssemblyName].SqlMapProperties.config
    - providers.config

    When these applications go from DEV to TEST to PRODUCTION environments, the settings contained in these files change depending on the environment. When the number of files gets compounded by having 5-10 (or more) supporting executables per project, the workload on the infrastructure team (the ones doing the roll-outs to the different environments) gets rather high. We also have a high risk of one of the config files being missed, or of a typo in a config file.

    What is the best way to avoid these risks? Should I combine all of the config files into one file? (Is that possible with iBatis?) I know that Visual Studio 2010 introduces transforms for these config files that allow the developer to set up all the settings for the different environments; then, depending on the build kicked off, the config files get updated to the correct versions (VS 2010 transforms). Thank you for any help that you can provide.
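    For reference, the VS 2010 transform mechanism works by layering a per-configuration file (e.g. Web.Release.config) over the base Web.config at build/publish time. A minimal sketch of such a transform, using a hypothetical connection string named "Main" and a made-up production server:

        <!-- Web.Release.config: applied on top of Web.config when publishing the Release configuration -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <!-- Replace the DEV connection string with the PRODUCTION one, matched by name -->
            <add name="Main"
                 connectionString="Data Source=PRODSQL01;Initial Catalog=AppDb;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>

    As far as I know, the built-in transforms only run against Web.config during publish, so the iBatis and log4Net files would still need to be consolidated or handled by a separate build step.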

    Read the article

  • Windows Azure WebRole stuck in a deployment loop

    - by Rob G
    I've been struggling with this one for a couple of days now. My current Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped. It never goes live, so I can never see the website.

    The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the MVC dll. I haven't even tried hooking up storage or a WorkerRole yet, and there is nothing really happening inside the Start method that I can see would crash. I've really tried going back to basics to ensure nothing can complicate the process, and the website launches without a problem on the Dev Fabric - and yes, it looks just like the standard "Home"/"About" MVC app - I just can't get it running in the cloud!

    Funny thing is, a few days ago this exact package worked in the staging area in the cloud, and I could even see it in the browser - but I could never get it swapped over to production, so I deleted everything and started from scratch, and now I can't even get it running on staging...

    Does anyone have any ideas on what I could do to diagnose this problem myself? Since logging this problem on the forums 2 days ago there has been no improvement or feedback. Any help appreciated.

    Regards,
    Rob G

    Read the article

  • Authenticating to Google Search Appliance using Basic HTTP auth and ASP.NET (VB)

    - by Chainlink
    I've run into a snag which has to do with authentication between the Google Search Appliance and ASP.NET. Normally, when asking for secure pages from the search appliance, the appliance asks for credentials, then uses these credentials to try to access the secure results. If this attempt is successful, the page shows up in the results list. Since ASP.NET is contacting the search appliance on the client's behalf, it needs to collect credentials and pass them along to the search appliance. I have tried a couple of different documented ways of accomplishing this, but they don't seem to work. Below is the code I have tried:

        'Bypass SSL since discovery.gov.mb.ca does not have valid SSL cert (NOT PRODUCTION SAFE)
        ServerCertificateValidationCallback = New System.Net.Security.RemoteCertificateValidationCallback(AddressOf customXertificateValidation)

        googleUrl = "https://removed.com"
        Dim rdr As New XmlTextReader(googleUrl)
        Dim resolver As New XmlUrlResolver()
        Dim myCred As New System.Net.NetworkCredential("USERNAME", "PASSWORD", Nothing)
        Dim credCache As New CredentialCache()
        credCache.Add(New Uri(googleUrl), "Basic", myCred)
        resolver.Credentials = credCache
        rdr.XmlResolver = resolver
        doc = New System.Xml.XPath.XPathDocument(rdr)
        path = doc.CreateNavigator()

        Private Function customXertificateValidation(ByVal sender As Object, ByVal certificate As System.Security.Cryptography.X509Certificates.X509Certificate, ByVal chain As System.Security.Cryptography.X509Certificates.X509Chain, ByVal sslPolicyErrors As Net.Security.SslPolicyErrors) As Boolean
            Return True
        End Function

    Read the article

  • General Drools Question

    - by El Guapo
    For the last few months my company has been using a product called RulePoint from a company called Informatica (previously AgentLogic). This product has proven itself very easy to use, with a well-developed and easy-to-use SDK for customization. The way we use the product for CEP is fairly trivial: we have 2 sources which we monitor for our rule data, the first being a JMS queue, the second being a Jabber IM account. The product runs on any Java-based application server (WebLogic, Tomcat, etc.) and runs just about flawlessly.

    Last week my boss says, "Hey, I've heard that we may be able to do the same thing we are doing with RulePoint with an open-source product called Drools. Check it out and let me know what you think." I've heard of people using Drools for flow-based operations (validation, etc.); however, I've never heard of anyone using their CEP product (Fusion) in practice.

    So, being the diligent worker, I have undertaken this task. I've downloaded all the files (version 5.0) and accompanying documentation and have started to read. I've read through just about all the docs and run most of the examples, but I still don't really see HOW Drools works for CEP. While there are examples for using data (or Facts, I guess) from JMS, I don't see how this thing stays "running", continuously monitoring a queue until the application is actually stopped. RulePoint pretty much just sits and listens; Drools seems not to. I could probably write a full-blown command-line application for our needs; however, I was hoping to leverage some of the benefits an application server provides.

    I guess I'm looking for some good tutorials or an example of how someone is using Drools and CEP in production. Thanks in advance for any information or advice you may be able to provide.
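    From what I've pieced together so far, the usual Drools 5 Fusion pattern seems to be a stateful session configured for STREAM mode, with fireUntilHalt() running on its own thread while a JMS listener (or Jabber handler) inserts events into an entry point. A rough sketch of my understanding - the rule file name and the "events" entry point are my own placeholders, not anything from the Drools docs:

        import org.drools.KnowledgeBase;
        import org.drools.KnowledgeBaseConfiguration;
        import org.drools.KnowledgeBaseFactory;
        import org.drools.builder.KnowledgeBuilder;
        import org.drools.builder.KnowledgeBuilderFactory;
        import org.drools.builder.ResourceType;
        import org.drools.conf.EventProcessingOption;
        import org.drools.io.ResourceFactory;
        import org.drools.runtime.StatefulKnowledgeSession;
        import org.drools.runtime.rule.WorkingMemoryEntryPoint;

        public class CepRunner {
            public static void main(String[] args) {
                KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
                kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"), ResourceType.DRL);

                // STREAM mode turns on the CEP/temporal behaviour of Fusion
                KnowledgeBaseConfiguration config = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
                config.setOption(EventProcessingOption.STREAM);

                KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase(config);
                kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

                final StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
                final WorkingMemoryEntryPoint events = ksession.getWorkingMemoryEntryPoint("events");

                // fireUntilHalt() blocks and keeps evaluating rules until halt() is called,
                // which is what keeps the engine "running" for the life of the application.
                new Thread(new Runnable() {
                    public void run() {
                        ksession.fireUntilHalt();
                    }
                }).start();

                // A JMS MessageListener would then feed the session from another thread, e.g.:
                // public void onMessage(Message msg) { events.insert(convert(msg)); }
            }
        }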

    Read the article

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC, and my database (which doesn't have any data in it yet, it only has the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database out of my application by using the Server Explorer, followed by LINQ to SQL mapping.

    Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned about whether using the SQL Server setup that is currently working on my development machine will be hard to do on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio. It is then stored in the App_Data directory. My questions are the following:

    - Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file?
    - If so, how can I move it? I believe this is called the Detach command, is it not?
    - Are there any performance/security issues that can occur with a .mdf file like this?
    - Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan.

    I hope this question isn't too broad. Thanks in advance!

    Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.
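    For what it's worth, an .mdf file under App_Data is normally referenced with an AttachDbFilename-style connection string rather than a server-level database name. A sketch of what that might look like in web.config, with a made-up file name (this relies on SQL Server Express user instances, which a virtual hosting plan may or may not support):

        <!-- Hypothetical web.config entry for an App_Data database file (SiteDb.mdf is a placeholder name) -->
        <connectionStrings>
          <add name="SiteDb"
               connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\SiteDb.mdf;Integrated Security=True;User Instance=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>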

    Read the article

  • Access denied error on select into outfile using Zend

    - by Peter
    Hi, I'm trying to make a dump of a MySQL table on the server and I'm trying to do this in Zend. I have a model/mapper/dbtable structure for all my connections to my tables and I'm adding the following code to the mappers:

        public function dumpTable()
        {
            $db   = $this->getDbTable()->getAdapter();
            $name = $this->getDbTable()->info('name');
            $backupFile = APPLICATION_PATH . '/backup/' . date('U') . '_' . $name . '.sql';
            $query = "SELECT * INTO OUTFILE '$backupFile' FROM $name";
            $db->query($query);
        }

    This should work peachy, I thought, but

        Message: Mysqli prepare error: Access denied for user 'someUser'@'localhost' (using password: YES)

    is what it results in. I checked the user rights for someUser and he has all the rights to the database and table in question. I've been looking around here and on the net in general, and usually turning on "all" the rights for the user seems to be the solution, but not in my case (unless I'm overlooking something right now with my tired eyes - and I don't want to turn on "all" the rights on my production server anyway). What am I doing wrong here? Or does anybody know a more elegant way to get this done in Zend?
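    One detail worth checking (an assumption on my part, since the grants aren't shown above): SELECT ... INTO OUTFILE requires the global FILE privilege, which is separate from database- and table-level rights and can only be granted at the *.* level. Something like:

        -- FILE is a global privilege; database/table grants alone are not enough for INTO OUTFILE
        GRANT FILE ON *.* TO 'someUser'@'localhost';
        FLUSH PRIVILEGES;

    Also note that the file is written by the mysqld server process itself, so the backup directory has to be writable by the MySQL server and the target file must not already exist.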

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate to implement the classic 3-tier architecture:

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - Questionable design solution; may encounter unknown problems.
    - Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
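    For context, a SQL CLR stored procedure is just a static .NET method decorated with [SqlProcedure] that runs inside SQL Server and talks to the data over the special context connection. A minimal sketch (the procedure, table and column names are invented for illustration):

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            // Deployed into SQL Server and callable as: EXEC ProcessStagedRows
            [SqlProcedure]
            public static void ProcessStagedRows()
            {
                // "context connection=true" reuses the caller's connection inside the server process
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    SqlCommand cmd = new SqlCommand(
                        "UPDATE dbo.StagedRows SET Processed = 1 WHERE Processed = 0", conn);
                    int rows = cmd.ExecuteNonQuery();

                    // Messages sent through the pipe surface to the client like PRINT output
                    SqlContext.Pipe.Send("Rows processed: " + rows);
                }
            }
        }

    The regex matching mentioned above is one of the classic arguments for SQL CLR, since .NET's System.Text.RegularExpressions is available inside such a procedure while T-SQL has no native regex support.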

    Read the article

  • 'memcache-client' problem - app can't load the gem

    - by Max Williams
    Hi all - I'm trying to get memcached and the Interlock plugin working with a new Rails app. The weird thing is that they both work fine in another app on the same machine, and I can't figure out the difference that's stopping this app. The new app is Rails 2.3.4 and the old one is 2.2.2, in case that's a factor. When the app starts, I get a warning from Interlock:

        `install_memcached': Interlock::ConfigurationError: 'memcache-client' client requested but not installed. Try 'sudo gem install memcache-client'.

    Now, I have memcache-client installed:

        $> gem list | grep memcache
        memcache-client (1.7.8)

    The gem is in /var/lib/gems/1.8, which is in my GEM_PATH variable. On a bit of further investigation, the above error is raised by Interlock when it refers to the MemCache class, which doesn't exist and so raises an 'anonymous module' error. So, ultimately, the problem is that MemCache isn't loaded. I do have a memcached.yml in my config folder (below), however. I'm stuck - any advice, anyone?

        # contents of config/memcached.yml
        defaults:
          namespace: millionaire
          #sessions: true
          sessions: false
          client: memcache-client
          with_finders: true

        development:
          servers:
            - 127.0.0.1:11211

        production:
          servers:
            - 127.0.0.1:11211

    Read the article

  • How should bug tracking and help tickets integrate?

    - by Max Schmeling
    I have a little experience with bug tracking systems such as FogBugz, where help tickets are (or can be) bugs, and I have some experience using a bug tracking system internally, completely separate from a help center system.

    My question is: in a company with an existing (home-grown) help center system where replacing it is not an option, how should a bug tracking system (probably Mantis) be integrated into the process?

    Right now help tickets get put in for issues, questions, etc. and they get assigned to the appropriate person (PC tech, help desk staff, or, if it's an application issue they can't solve in the help desk, a developer). A user can put a request for small modifications or fixes to an application in a help ticket, and the developer it gets assigned to will make the change at some point, apply their time to that ticket, and then close the ticket when it goes to production.

    We don't currently have a bug tracking system, so I'm looking into the best way to integrate one. Should we just take the help tickets and put them into the bug tracking system if they are bugs (or issues or feature requests) and then close the ticket if it's not an emergency fix? We probably don't want to expose the bug tracking system to anyone else, as they wouldn't know what to put in the help center system and what to put in the bug tracker... right?

    Any thoughts? Suggestions? Tips? Advice? To-dos? Not to-dos? etc.

    Read the article

  • iPhone SDK Push notification randomly fails

    - by Jameson
    I have a PHP file with the following content that works perfectly with development certificates, but when I switch to a production certificate PHP errors and gives the message below - and it only does this about 50% of the time. The other 50% it works. Anyone know why this might be happening?

        <?php
        // masked for security reason
        $deviceToken = 'xxxxxx'; // jq

        $ctx = stream_context_create();
        stream_context_set_option($ctx, 'ssl', 'local_cert', dirname(__FILE__)."/prod.pem");
        $number = 5;
        $fp = stream_socket_client('ssl://gateway.push.apple.com:2195', $err, $errstr, 60, STREAM_CLIENT_CONNECT, $ctx);

        if (!$fp) {
            print "Failed to connect $err $errstr\n";
        } else {
            print "Connection OK\n";
            $msg = $_GET['msg'];
            $payload['aps'] = array('alert' => $msg, 'badge' => 1, 'sound' => 'default');
            $payload = json_encode($payload);
            $msg = chr(0) . pack("n", 32) . pack('H*', str_replace(' ', '', $deviceToken)) . pack("n", strlen($payload)) . $payload;
            print "sending message :" . $payload . "\n";
            fwrite($fp, $msg);
            fclose($fp);
        }
        ?>

    The PHP error:

        Warning: stream_socket_client() [function.stream-socket-client]: Unable to set local cert chain file `/var/www/vhosts/thissite.com/httpdocs/prod.pem'; Check that your cafile/capath settings include details of your certificate and its issuer in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19

        Warning: stream_socket_client() [function.stream-socket-client]: failed to create an SSL handle in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19

        Warning: stream_socket_client() [function.stream-socket-client]: Failed to enable crypto in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19

        Warning: stream_socket_client() [function.stream-socket-client]: unable to connect to ssl://gateway.sandbox.push.apple.com:2195 (Unknown error) in /var/www/vhosts/thissite.com/httpdocs/pushMessageLive.php on line 19

        Failed to connect 0

    Read the article

  • Symfony/Propel NestedSet left/right ID corruption/adjustment

    - by Mike Crowe
    Hi folks,

    I have a nested set application that seems to be getting corrupted. Here's what I'm seeing: we're using nested sets for a binary tree (any node can have 2 children). It appears to be working fine, but some event causes a discrepancy. For instance, when I do a getNumberOfDescendants() for the root node, it slowly increases as this event happens. However, displaying the tree works fine, as does inserting (apparently). Has anybody seen anything like this before? For instance, my repair program shows this as the repairs that it makes:

        User pxxxxx left 0=>0, right 145=>129
        User axxxxx left 1=>1, right 124=>106
        User mxxxxx left 119=>117, right 120=>118
        User fxxxxx left 125=>107, right 144=>128
        User fxxxxx left 126=>108, right 131=>113
        User rxxxxx left 127=>109, right 128=>110
        User mxxxxx left 129=>111, right 130=>112
        User mxxxxx left 132=>114, right 143=>127
        User cxxxxx left 133=>115, right 142=>126
        User gxxxxx left 134=>116, right 137=>121
        User mxxxxx left 135=>119, right 136=>120
        User jxxxxx left 138=>122, right 141=>125
        User axxxxx left 139=>123, right 140=>124

    I thought at first it was when I deleted a user, but it has since occurred without that event. Anybody know of a cause that might generate this? I've tested ad nauseam on my local machine, but I can't duplicate it. I do have an issue where my production box is PHP 5.2.0, whereas my test device is 5.2.10. Could that be an issue?

    TIA
    Mike

    Read the article

  • Extending spring based app

    - by pitr
    I have a Spring-based web service. I now want to build a sort of plugin for it that extends it with beans. What I have now in web.xml is:

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/classes/*-configuration.xml</param-value>
        </context-param>

    My core app has main-configuration.xml, which declares its beans. My plugin app has plugin-configuration.xml, which declares additional beans. Now when I deploy, my build deploys plugin.jar into /WEB-INF/lib/ and copies plugin-configuration.xml into /WEB-INF/classes/, all under main.war.

    This is all fine (although I think there could be a better solution), but when I develop the plugin, I don't want to have two projects in Eclipse with dependencies. I wish to have a main.jar that I include as a library. However, web.xml from main.jar isn't automatically discovered. How can I do this? Bean injection? Bean discovery of some sort? Something else?

    Note: I expect to have multiple different plugins in production, but development of each of them will be against pure main.jar.

    Thank you.
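    One pattern that might fit (an assumption, not something already in place here): instead of relying on the web.xml inside main.jar, have the core context pull in any plugin context files it finds on the classpath, so each plugin jar only has to ship its own XML at an agreed location. A sketch with invented file names:

        <!-- main-configuration.xml (shipped inside main.jar); the plugin file location is a convention made up for this sketch -->
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans.xsd">

            <!-- classpath*: looks in every jar on the classpath, so each plugin jar can
                 contribute its own META-INF/spring/plugin-configuration.xml without touching web.xml -->
            <import resource="classpath*:META-INF/spring/plugin-configuration.xml" />

            <!-- core beans of the main application go here -->

        </beans>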

    Read the article

  • Where is my Drupal View pager?

    - by anotherthink
    Hi, I have a Drupal 6 site where I've created a view that shows a list of nodes. Nothing complicated - except that when I choose "use pager" - "yes" (and choose the "full pager" option), the pager doesn't show up on the page. The first page of nodes shows up, but there's no way to get to other pages. Through googling, I saw that some people had an issue with the "Pager Element" item, so I changed that from 0 to 1 - no luck. This shouldn't be very complicated, but I've been at it for a while! Help!?

    ETA: I've tracked it down to the following lines in /modules/views/theme/theme.inc:

        $pager_theme = views_theme_functions($pager_type, $view, $view->display_handler->display);
        $vars['pager'] = theme($pager_theme, $exposed_input, $view->pager['items_per_page'], $view->pager['element']);

    The first line returns an array; the second line returns nothing. I suspect now that this is a theming problem with the custom theme I'm using, which may not have been fully and correctly updated for Drupal 6 - like maybe I'm missing a pager template somehow? - however, I'm quite new to Drupal and don't really understand how to further track down and fix the issue. Any advice would be much appreciated!

    ETA yet again: The pager also doesn't show up when using Garland, so it's not a theme issue after all. ALSO: I have a copy of this site set up on a development server as well, and that copy has working pagination! I've checked what I thought might be different - files in the theme, which modules are enabled - and it seems like pretty much everything is the same. The one thing that I know is different, however, is that the production server has a lower version of MySQL (lower than recommended for Drupal 6 - we're waiting on the hosting company being able to change this later). Would it make sense that the old version of MySQL is unable to do pagination correctly in Drupal 6? If so, does anyone know a workaround I can do until we are able to update MySQL?

    Read the article

  • Vlad the deployer on Dreamhost - initial script

    - by xmariachi
    Hi, I'm trying to deploy an app with SVN and Vlad the Deployer. Vlad and its dependencies are installed and seem OK. I'm trying the following:

        rake prod vlad:update

    with this config/deploy.rb file:

        task :prod do
          set :application, "xxx"
          set :deploy_timestamped, "false"
          set :user, "username"
          set :scm_user, "scmusername"
          set :repository, "http://domain.com/svn/app"
          set :domain, "domain.com"
          set :deploy_to, "/home/username/deployments/app"
          puts "Production deployment to #{deploy_to}"
        end

    I have done "rake prod vlad:setup" already, that's fine. But when calling "rake prod vlad:update", I get the following:

        A ...file
        Exported revision 14.
        ln: creating symbolic link `/home/username/deployments/drupalgestalt/releases/20100503164225/public/system' to `/home/username/deployments/drupalgestalt/shared/system': No such file or directory
        rake aborted!
        execution failed with status 1: ssh domain.com ln -s /home/username/deployments/app/shared/log /home/username/deployments/app/releases/20100503164225/log && ln -s /home/username/deployments/app/shared/system /home/username/deployments/app/releases/20100503164225/public/system && ln -s /home/username/deployments/app/shared/pids /home/username/deployments/app/releases/20100503164225/tmp/pids

    Apparently it complains when creating the ln, but permissions are all set up fine. Am I doing anything wrong? I'm just starting with Vlad on the assumption it was super-easy to set up. Had played a bit with cap in the past, and I do like the Vlad idea.

    Read the article

  • rails, mysql charsets & encoding: binary

    - by Benjamin Vetter
    Hi, I've a Rails app that runs using UTF-8. It uses a MySQL database, all tables with MySQL's default charset and collation (i.e. latin1). Therefore the latin1 tables contain UTF-8 data. Sure, that's not nice, but I'm not really interested in that. Everything works fine, because the connection encoding is latin1 as well and therefore MySQL does not convert between charsets.

    Only one problem: I need a UTF-8 fulltext index for one table:

        mysql> show create table autocompletephrases;
        ...
        AUTO_INCREMENT=310095 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    But: I don't want to convert between charsets in my Rails app. Therefore I would like to know if I could just set, in config/database.yml:

        production:
          adapter: mysql
          encoding: binary   # <- the setting in question
          ...

    which just calls SET NAMES 'binary' when connecting to MySQL. It looks like it works for my case, because I guess it forces MySQL to not convert between charsets (MySQL docs). Does anyone know about problems with doing this? Any side effects? Or do you have any other suggestions? But I'd like to avoid converting my whole database to UTF-8.

    Many thanks!
    Benjamin

    Read the article

  • What are the limitations of the .NET Assembly format?

    - by McKAMEY
    We just ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web Application Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Web Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special, and the errors are coming from areas of the app that weren't even changed.

    Using Reflector we inspected the offending methods and found that there were garbage strings in the code (which Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines, so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    The Web Deployment Project's "Output Assemblies" property offers these options:

    - Merge all outputs to a single assembly
    - Merge each individual folder output to its own assembly
    - Merge all pages and control outputs to a single assembly
    - Create a separate assembly for each page and control output

    If we set the merge options in the web deployment project properties to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly!

    So my question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any assembly format / aspnet_merge gurus know about any such limitations like this. Seems to me like a 25 MB assembly, while big, isn't outrageous. Less disk to hit if it is all pregenerated stuff.

    Read the article

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application that I'm going to talk about in the next few lines: XYZ is a data masking workbench Eclipse RCP application. You give it a source table column and a target table column, and it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. When I mask n tables at a time, n threads are launched by this app.

    Here is the issue: I have run into a production issue on the first roll-out of the above app. Unfortunately, I don't have any logs to get to the root cause. However, I tried to run this app in the test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this prepared statement object and thereby n threads have n such objects. T4CPreparedStatement seemed to have character arrays lastBoundChars and bindChars, each of size char[300000].

    So, I researched a bit (Google!), obtained ojdbc6.jar and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars. So, my questions here are:

    - Have you ever run into an issue like this?
    - Do you know the significance of lastBoundChars / bindChars?
    - I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT, and this was the main identified issue - so I don't really think I could be wrong?)

    I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681

    Appreciate your suggestions / advice.

    Read the article

  • Shared Server Dreamhost

    - by Jseb
    I am trying to install an app on a shared server. If I understand properly, because I am using a shared server and Dreamhost doesn't support Rails 3.2.8, I must use FCGI, although I am not sure how to install it and make it run properly. I followed the tutorial at http://wiki.dreamhost.com/Rails_3. To my understanding, here is what I did:

    - In Dreamhost, activate PHP 5.x.x FastCGI and make sure Phusion Passenger is unchecked.
    - Create an app on my local machine.
    - Because Rails doesn't create a dispatch file and an access file, create the two following files in my /public folder.

    dispatch.fcgi:

        #!/home/username/.rvm/rubies/ruby-1.9.3-p327/bin/ruby
        ENV['RAILS_ENV'] ||= 'production'
        ENV['HOME'] ||= `echo ~`.strip
        ENV['GEM_HOME'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327')
        ENV['GEM_PATH'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327') + ":" + File.expand_path('~/.rvm/gems/ruby 1.9.3-p327@global')

        require 'fcgi'
        require File.join(File.dirname(__FILE__), '../config/environment')

        class Rack::PathInfoRewriter
          def initialize(app)
            @app = app
          end

          def call(env)
            env.delete('SCRIPT_NAME')
            parts = env['REQUEST_URI'].split('?')
            env['PATH_INFO'] = parts[0]
            env['QUERY_STRING'] = parts[1].to_s
            @app.call(env)
          end
        end

    .htaccess:

        <IfModule mod_fastcgi.c>
          AddHandler fastcgi-script .fcgi
        </IfModule>
        <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
        </IfModule>
        Options +FollowSymLinks +ExecCGI
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ dispatch.fcgi/$1 [QSA,L]
        ErrorDocument 500 "Rails application failed to start properly"

    - Uploaded to a folder and pointed to the public folder in Dreamhost.
    - Made sure dispatch.fcgi has 777 permissions.
    - SSHed in and ran the following command in the public folder: ./dispatch.fcgi

    Crossing my fingers, but it doesn't work. I get the following errors:

        ./dispatch.fcgi: line 1: ENV[RAILS_ENV]: command not found
        ./dispatch.fcgi: line 1: =: command not found
        ./dispatch.fcgi: line 2: ENV[HOME]: command not found
        ./dispatch.fcgi: line 2: =: command not found
        ./dispatch.fcgi: line 3: syntax error near unexpected token `('
        ./dispatch.fcgi: line 3: `ENV['GEM_HOME'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327')'

    What am I doing wrong??? Oh, and if I go to the site on the server I get this: "Rails application failed to start properly".

    Read the article

  • Unit tests for deep cloning

    - by Will Dean
    Let's say I have a complex .NET class with lots of arrays and other class object members. I need to be able to generate a deep clone of this object - so I write a Clone() method and implement it with a simple BinaryFormatter serialize/deserialize - or perhaps I do the deep clone using some other technique which is more error prone and which I'd like to make sure is tested.

    OK, so now (OK, I should have done it first) I'd like to write tests which cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly from the production code.

    What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite, once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? How would you test whether part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later?
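    For reference, the BinaryFormatter approach mentioned above usually looks something like this - a sketch only, assuming the class (and everything it references) is marked [Serializable]; the helper name is my own:

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class DeepCloner
        {
            // Serialize to a memory stream and deserialize back to get a deep copy.
            // Requires T and its entire object graph to be [Serializable].
            public static T DeepClone<T>(T source)
            {
                using (MemoryStream stream = new MemoryStream())
                {
                    BinaryFormatter formatter = new BinaryFormatter();
                    formatter.Serialize(stream, source);
                    stream.Position = 0;
                    return (T)formatter.Deserialize(stream);
                }
            }
        }

    A test built on top of this can mutate the original after cloning and assert that the clone is unaffected, which catches the "not deep enough" case without needing access to private members.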

    Read the article

  • asp.Net 2005 "Could not load type"

    - by Melody Friedenthal
    Although I have seen dozens of forum questions relating to "Could not load type", none of the advice in them seemed to apply to my situation. I wrote a new web application using ASP.NET VB 2005. It is tiny, with just 2 pages, 1 of which has no code-behind. It runs fine in the IDE, but when I installed it on my PC (and also when installed on the production server) and tried to run it, this error came up:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

        Parser Error Message: Could not load type 'EMTTrainingDatabase.pageMain'.

        Source Error:
        Line 1: <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="pageMain.aspx.vb" Inherits="EMTTrainingDatabase.pageMain" %>
        Line 2:
        Line 3: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

        Source File: /EMTTrainingDatabase/pageMain.aspx    Line: 1

        Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3082

    I checked the web site properties in IIS and the correct ASP.NET version is specified: 2.0.50727. I checked the virtual path and it looks correct too: /EMTTrainingDatabase. The pageMain source code header looks like:

        <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="pageMain.aspx.vb" Inherits="EMTTrainingDatabase.pageMain" %>

    Some posters suggest that the bin is in the wrong folder or that the bin doesn't contain the right contents. I don't have enough knowledge to evaluate this. Can someone help? Thank you.

    Read the article

  • How can I turn a column name into a result value in SQL Server?

    - by Brennan
    I have a table with essentially boolean values in a legacy database. The column names are stored as string values in another table, so I need to match the column names of one table to string values in the other table. I know there has to be a way to do this directly with SQL in SQL Server, but it is beyond me. My initial thought was to use PIVOT, but it is not enabled by default and enabling it would likely be a difficult process, given the need to push that change to the production database. I would prefer to use what is enabled by default.

    I am considering using COALESCE to translate each boolean value to the string value that I need. This will be a manual process. I think I will also use a table variable, insert the results of the first query into that variable, and use those results to do the second query. I still have the problem that the columns are on a single row, so I wish I could easily pivot the values to put the column names into the result set as strings. If I could easily do that, I could easily write the query with a sub-select. Any tips are welcome.
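    To make the intent concrete, here is roughly the kind of query I have in mind, written with invented names (a LegacyFlags table with bit columns IsActive and IsArchived, and a ColumnLabels table keyed by column name). A plain UNION ALL turns the set columns into their names, which can then be joined to the label table, and it needs nothing beyond default T-SQL:

        -- Hypothetical schema: LegacyFlags(Id, IsActive, IsArchived), ColumnLabels(ColumnName, Label)
        SELECT f.Id, l.Label
        FROM (
            SELECT Id, 'IsActive' AS ColumnName
            FROM LegacyFlags
            WHERE IsActive = 1
            UNION ALL
            SELECT Id, 'IsArchived'
            FROM LegacyFlags
            WHERE IsArchived = 1
        ) AS f
        JOIN ColumnLabels AS l
            ON l.ColumnName = f.ColumnName;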

    Read the article

  • How to organize the work when project needs to be re-implemented due to poor code quality?

    - by Dmitriy Nagirnyak
    Hi, I have joined a very small team where one main developer has been building the web app (.NET 4.0) for about 6 months. The project should be delivered within the next 2 months. After a first look at the code I can say that I would never allow it to go to production (things like catch { }, no tests at all, WebForms, etc.). So the code quality is incredibly low. My task is to improve that and still deliver the solution. So I plan to start with unit testing and reimplementing most of the functionality in MVC2 (though reusing some of the existing code). I estimate that I will need about 6 weeks to catch up with the current progress and be on the same functionality level as the application will be in 6 months.

    The problem is that the main developer who has been working on the project does not seem to be very 'professional' or skillful (he seems to be really just starting in IT and many basic things are unknown to him). It will take a significant amount of time and effort to educate him in how to do proper testing and development and apply some patterns. I am ready to take responsibility for reimplementing the application, but at the same time I don't want the main developer to be idle; as he won't be able to significantly contribute to the "better-world" project at this stage, I am not sure what would be the best way to keep productivity high for both of us.

    Currently I think the following solution is good enough: he proceeds doing what he does until I catch up with him, and then we start working on the new project together. The problem is that, of course, this approach is not very productive, as one developer will work on the "better-world" project while the other proceeds with what he did, effectively doing similar tasks.

    Can you suggest how we could better organise the work together in order to be most efficient for the overall project?

    Thanks,
    Dmitriy.

    Read the article

  • Implementing the ‘defer’ statement from Go in Objective-C?

    - by zoul
    Hello! Today I read about the defer statement in the Go language:

        A defer statement pushes a function call onto a list. The list of saved calls is executed after the surrounding function returns. Defer is commonly used to simplify functions that perform various clean-up actions.

    I thought it would be fun to implement something like this in Objective-C. Do you have some idea how to do it? I thought about dispatch finalizers, autoreleased objects and C++ destructors.

    Autoreleased objects:

        @interface Defer : NSObject {}
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        - (void) dealloc {
            block();
            [super dealloc];
        }
        @end

        #define defer(__x) [Defer withCode:^{__x}]

        - (void) function {
            defer(NSLog(@"Done"));
            …
        }

    Autoreleased objects seem like the only solution that would last at least to the end of the function, as the other solutions would trigger when the current scope ends. On the other hand they could stay in the memory much longer, which would be asking for trouble.

    Dispatch finalizers were my first thought, because blocks live on the stack and therefore I could easily make something execute when the stack unrolls. But after a peek in the documentation it doesn't look like I can attach a simple "destructor" function to a block, can I?

    C++ destructors are about the same thing: I would create a stack-based object with a block to be executed when the destructor runs. This would have the ugly disadvantage of turning the plain .m files into Objective-C++.

    I don't really think about using this stuff in production, I'm just interested in various solutions. Can you come up with something working, without obvious disadvantages?

    Read the article

  • Asynchronous background processes in Python?

    - by Geuis
    I have been using this as a reference, but have not been able to accomplish exactly what I need: http://stackoverflow.com/questions/89228/how-to-call-external-command-in-python/92395#92395

    I was also reading this: http://www.python.org/dev/peps/pep-3145/

    For our project, we have 5 svn checkouts that need to update before we can deploy our application. In my dev environment, where speedy deployments are a bit more important for productivity than in a production deployment, I have been working on speeding up the process.

    I have a bash script that has been working decently but has some limitations. I fire up multiple 'svn update' commands with the following bash command:

        (svn update /repo1) & (svn update /repo2) & (svn update /repo3) &

    These all run in parallel and it works pretty well. I also use this pattern in the rest of the build script for firing off each ant build, then moving the wars to Tomcat. However, I have no control over stopping deployment if one of the updates or a build fails.

    I'm re-writing my bash script in Python so I have more control over branches and the deployment process. I am using subprocess.call() to fire off the 'svn update /repo' commands, but each one acts sequentially. I tried '(svn update /repo) &' and those all fire off, but the result code returns immediately, so I have no way to determine whether a particular command fails in the asynchronous mode.

        import subprocess

        subprocess.call('svn update /repo1', shell=True)
        subprocess.call('svn update /repo2', shell=True)
        subprocess.call('svn update /repo3', shell=True)

    I'd love to find a way to have Python fire off each Unix command, and if any of the calls fails at any time, the entire script stops.
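    For what it's worth, the pattern I'm circling around looks roughly like this: subprocess.Popen starts each command without blocking, and wait() collects the exit codes afterwards so a failure can stop the deployment (the repo paths are placeholders):

        import subprocess
        import sys

        repos = ['/repo1', '/repo2', '/repo3']  # placeholder checkout paths

        # Popen returns immediately, so all updates run in parallel.
        procs = [(repo, subprocess.Popen(['svn', 'update', repo])) for repo in repos]

        # wait() blocks until each process finishes and returns its exit code.
        failed = [repo for repo, proc in procs if proc.wait() != 0]

        if failed:
            print("svn update failed for: %s" % ', '.join(failed))
            sys.exit(1)  # stop the deployment here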

    Read the article

  • MissingMethodException thrown when calling new form in Compact Framework

    - by Boerema
    I'm updating an old mobile device application for better flexibility. I had basically added the ability to configure the address of our SQL server in the case that we want to use our test server as opposed to our production server. I don't think this is causing the problem, but I wanted to state it. I also upgraded the project from a VS 2000 project to a VS 2005 project. The issue I am having is that when I try to run the program in the VS emulator for Pocket PC, I get an error. It occurs after our "main menu" form loads and the user selects the next form to load. The form is initialized without issue, but when we try to run the .ShowDialog() method, it throws a System.MissingMethodException. I don't have a lot of experience with the Compact Framework and really have no idea where to start looking for problems. I stepped the debugger through the entire initializing process for the new form and it ran without issue. But, again, when we come to the ShowDialog call, it throws the error. Any ideas in where to start looking or known issues would be greatly appreciated.

    Read the article

< Previous Page | 420 421 422 423 424 425 426 427 428 429 430 431  | Next Page >