Search Results

Search found 13713 results on 549 pages for 'production environment'.

Page 301/549

  • The way to deploy from repos in SVN

    - by fatnjazzy
    Hi, we are five developers working in an SVN environment. Every programmer can work on small bugs and commit whenever he wants. After the work is done, I want to give each of them a way to deploy to production without having to consider the other programmers and their deployments. For example: while I am committing, another user is committing too, but he has not finished his work. His revisions are 1 and 3; mine are 2 and 4. If I deploy HEAD (4), I also deploy his work, and if I deploy revisions 2 and 4 I include his files as well. How can I let every programmer deploy only his own files? Thanks
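
    One common way to get this per-developer isolation (a sketch, not something from the post itself) is to keep a separate deploy branch and cherry-pick only your own revisions into it with svn merge -c, then deploy that branch. The URLs and revision numbers below are placeholders.

        # create the deploy branch once
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/deploy -m "create deploy branch"

        # pull in only my revisions (2 and 4), not my colleague's 1 and 3
        svn checkout http://svn.example.com/repo/branches/deploy deploy-wc
        cd deploy-wc
        svn merge -c 2 http://svn.example.com/repo/trunk .
        svn merge -c 4 http://svn.example.com/repo/trunk .
        svn commit -m "cherry-pick r2 and r4 for deployment"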

    Read the article

  • Maintaining Logging and/or stdout/stderr in Python Daemon

    - by dave mankoff
    Every recipe that I've found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example). This is all simple enough, but I seem to have an issue. On the production machine that I am setting up, my daemon is aborting silently, since all open file descriptors were closed. I am having a tricky time debugging the issue currently and am wondering what the proper way to catch and log these errors is. What is the right way to set up logging such that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stdout and stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.
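
    A minimal sketch of one approach (not from the article above): reconfigure the root logger with a fresh FileHandler after the double fork, then dup2 its descriptor over stdout/stderr so stray prints and uncaught tracebacks land in the same file. The log path is a placeholder.

        import logging
        import os

        def setup_daemon_logging(log_path="/var/log/mydaemon.log"):
            # Call this after the double fork, once the inherited descriptors
            # have been closed: FileHandler opens a brand-new file descriptor.
            handler = logging.FileHandler(log_path)
            handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
            root = logging.getLogger()
            root.handlers[:] = [handler]      # drop any handlers tied to closed fds
            root.setLevel(logging.INFO)

            # Point fds 1 and 2 (stdout/stderr) at the same file so stray prints
            # and uncaught tracebacks are not silently lost.
            log_fd = handler.stream.fileno()
            os.dup2(log_fd, 1)
            os.dup2(log_fd, 2)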

    Read the article

  • Memcached - how to deal with adding/deploying servers

    - by Industrial
    Hi everybody, how do you handle replacing/adding/removing memcached nodes in your production applications? I will have a number of applications that are cloned and customized to each customer's needs, all running on one and the same web server, so I guess there will be a day when some of the nodes will be changed. Here's how memcached is normally populated:

        $m = new Memcached();
        $servers = array(
            array('mem1.domain.com', 11211, 33),
            array('mem2.domain.com', 11211, 67)
        );
        $m->addServers($servers);

    My initial idea is to have the $servers array populated from the database, but also cached, file-based, refreshed once a day or so, with the option to force an update on the next run of the function that holds the addServers() call. However, I am guessing that this might add some additional overhead since disks are quite slow storage... What do you think?
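
    A rough sketch of that idea (the table name, the PDO connection $pdo, and the cache path are assumptions, not from the post): read the node list from a small PHP file cache with a TTL and fall back to the database when it is stale or when a refresh is forced.

        <?php
        // Hypothetical helper: node list comes from a file cache with a one-day TTL,
        // refreshed from the database when stale or when $force is true.
        function get_memcached_servers(PDO $pdo, $cacheFile = '/tmp/memcached_nodes.php',
                                       $ttl = 86400, $force = false)
        {
            if (!$force && is_readable($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
                return include $cacheFile;                 // returns the cached array
            }
            $rows = $pdo->query('SELECT host, port, weight FROM memcached_nodes')
                        ->fetchAll(PDO::FETCH_NUM);
            file_put_contents($cacheFile, '<?php return ' . var_export($rows, true) . ';',
                              LOCK_EX);
            return $rows;
        }

        $m = new Memcached();
        $m->addServers(get_memcached_servers($pdo));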

    Read the article

  • Should we develop code on a local machine in a VLAN?

    - by red tiger
    For security reasons, we will not be able to use IIS on our local machines. I'm sure that many of you have faced the same problem, so how did you solve it? Here are the options we're looking at: (1) Create a VLAN that is isolated from the network for development. This would allow us to use any software we want, including IIS. A disadvantage is testing Web services with external organizations, which can be overcome by using stubs. (2) Don't use a VLAN and use only the ASP.NET Development Server that comes with Visual Studio, then deploy that code to the development server. This has the disadvantage of not being able to replicate the production environment during local development. In addition, at least one developer needs IIS for GIS development, so he couldn't develop locally. Thank you for any comments or suggestions you may have!

    Read the article

  • SSIS Script Component + Helper Assemblies (.dll's)

    - by Nev_Rahd
    I have a script component that does transformation, data-type conversion, and creation of some calculated columns. All of the transform validations, data-type conversion methods, and new-column generation code are put into a custom .dll. As this script component will be the same for all other tables, the only thing left to do is define the input/output columns and apply the validation methods to the required columns. This all works fine. On the production server, where do I need to deploy my .dll? Would just putting it into the GAC be enough, or do I need to do something else? Regards
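
    For SSIS script components the helper assembly generally does need to be resolvable on the server, and the GAC is the usual place. A sketch of registering it (assumes the assembly is strong-named and that gacutil from the SDK is available on the box; the path and name are placeholders):

        rem install the helper assembly into the GAC on the production server
        gacutil /i C:\Deploy\MyTransformHelpers.dll

        rem confirm it is registered
        gacutil /l MyTransformHelpers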

    Read the article

  • Easy plugin or procedure for SQL Server Management Studio to script row inserts

    - by Patrick Karcher
    I've never been able to find a good script or plugin for SQL Server Management Studio (2005 and/or 2008) for a very common scripting need: specifying a few (or all) rows in a table and scripting their inserts. You can guess my story: I've got some configuration data in my dev DB and I need to script it for deployment to UAT and then production. I've found a few kludgy systems in the past that were more trouble than they were worth. I need something free and unobtrusive. Once I find it, I'll share it with the other 20 developers in my shop who are annoyed by this. Aren't we all annoyed by this, by the way? What is the best, easiest, free way to specify a few (or all) rows in a table and get a script for their inserts?
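
    In the absence of a plugin, one low-tech sketch is to build the INSERT statements with a SELECT that concatenates literals (the table and column names below are made up; adjust the quoting per column type):

        SELECT 'INSERT INTO dbo.AppConfig (ConfigKey, ConfigValue) VALUES (N'''
               + REPLACE(ConfigKey, '''', '''''') + ''', N'''
               + REPLACE(ConfigValue, '''', '''''') + ''');'
        FROM dbo.AppConfig
        WHERE ConfigKey LIKE 'Feature%';   -- or drop the WHERE to script every row

    (SSMS 2008's Generate Scripts wizard also offers a data-scripting option, if that's available to you.)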

    Read the article

  • PDOException “could not find driver”

    - by Mike
    I have just installed Debian Lenny with Apache, MySQL, and PHP, and I am receiving a PDOException: “could not find driver”. This is the specific line of code it is referring to:

        $dbh = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB_NAME, DB_USER, DB_PASS)

    DB_HOST, DB_NAME, DB_USER, and DB_PASS are constants that I have defined. It works fine on the production server (and on my previous Ubuntu Server setup). Is this something to do with my PHP installation? Searching the internet has not helped; all I get is experts-exchange and examples, but no solutions.
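
    The usual cause on a fresh Debian Lenny box is that the pdo_mysql extension simply isn't installed or enabled. A quick check and fix might look like this (a sketch; the package names are the stock Debian ones):

        php -m | grep -i pdo                  # should list both PDO and pdo_mysql
        sudo apt-get install php5-mysql       # provides the pdo_mysql driver on Debian
        sudo /etc/init.d/apache2 restart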

    Read the article

  • What's the best way to do Ruby gemspec creation and dependency management?

    - by John Feminella
    Over the last few months there have been a number of rapid developments in the state of Ruby dependency management and gem creation, to the point where I've been having trouble keeping up with everything. If I'm writing a new gem, what's the best tool to use to create my gemspec? Are there disadvantages to using this tool over competitors? I've used Bundler a few times on applications and for me it's been a lifesaver. Is the consensus that it is suitable for use with production apps? Are there quirks or idiosyncrasies people should be aware of? Links to resources you've used and have found helpful would also be much appreciated.
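
    Whatever tool generates it, the gemspec itself stays small; a minimal hand-written one looks roughly like this (all names and versions are placeholders), and can be built with gem build my_gem.gemspec:

        # my_gem.gemspec
        Gem::Specification.new do |s|
          s.name     = 'my_gem'
          s.version  = '0.1.0'
          s.summary  = 'Example gem'
          s.authors  = ['Your Name']
          s.files    = Dir['lib/**/*.rb']
          s.add_dependency 'rack', '>= 1.0'   # runtime dependency example
        end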

    Read the article

  • Deploying Rails app over VPN

    - by DavidGouge
    You'll have to bear with me as I'm not a Ruby dev, but I have inherited a Ruby system. I need to deploy some changes to the app from my repository to the server. I've been instructed to run cap deploy and told that the script will get the latest code from my repository and deploy it to the server. My problem is that I have to VPN to get to the production server, and the VPN client then blocks access to my local network, cutting off the repository. So my question is: how can I change my deploy.rb so that I can deploy from my local machine instead? Or is there a better way? If you need to see the deploy.rb, please let me know. Thanks Dave
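
    One possibility in Capistrano 2.x (a sketch; whether it fits depends on the inherited deploy.rb) is the :copy deployment strategy, which builds the release from a checkout on your own machine and uploads it over the same SSH connection, so the server never has to reach the repository. All values below are placeholders.

        # deploy.rb additions
        set :scm,           :git                      # or :subversion, whatever the app uses
        set :repository,    "/path/to/local/clone"    # a repo your own machine can reach
        set :deploy_via,    :copy                     # check out locally, tar up, upload
        set :copy_strategy, :export                   # skip SCM metadata in the tarball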

    Read the article

  • No mapping for LONGVARCHAR in Hibernate 3.2

    - by jimbokun
    I am running Hibernate 3.2.0 with MySQL 5.1. After updating the group_concat_max_len in MySQL (because of a group_concat query that was exceeding the default value), I got the following exception when executing a SQLQuery with a group_concat clause: "No Dialect mapping for JDBC type: -1" -1 is the java.sql.Types value for LONGVARCHAR. Evidently, increasing the group_concat_max_len value causes calls to group_concat to return a LONGVARCHAR value. This appears to be an instance of this bug: http://opensource.atlassian.com/projects/hibernate/browse/HHH-3892 I guess there is a fix for this issue in Hibernate 3.5, but that is still a development version, so I am hesitant to put it into production, and don't know if it would cause issues for other parts of my code base. I could also just use JDBC queries, but then I have to replace every instance of a SQLQuery with a group_concat clause. Any other suggestions?
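
    Short of upgrading, a common workaround (a sketch, not an official fix) is a small dialect subclass that registers a Hibernate type for JDBC type -1 and is then referenced from the hibernate.dialect property:

        import java.sql.Types;
        import org.hibernate.Hibernate;
        import org.hibernate.dialect.MySQL5Dialect;

        // Set hibernate.dialect to this class instead of MySQL5Dialect.
        public class MySQL5LongVarcharDialect extends MySQL5Dialect {
            public MySQL5LongVarcharDialect() {
                super();
                // Map JDBC LONGVARCHAR (-1), which group_concat returns once
                // group_concat_max_len is raised, onto Hibernate's text type.
                registerHibernateType(Types.LONGVARCHAR, Hibernate.TEXT.getName());
            }
        }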

    Read the article

  • How can I pass in specific parameters to MSTest in Visual Studio

    - by Eric Langland
    I'm trying to modify my test project to allow for remote invocation of an API we're building. Right now the tests are hard-coded to run locally (against localhost), but I would like to be able to point the tests at any endpoint (even remote ones in production). Ideally there would be a place in the .testsettings file for config values to be stored; sadly, that doesn't seem to be the case. Alternatively, is there a way to pass parameters to MSTest that the tests could read? Any ideas? Thanks in advance.
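
    One workaround that sidesteps MSTest entirely (a sketch, with a made-up key and URL): keep the endpoint in the test project's app.config and read it with ConfigurationManager, overriding the file per environment.

        // app.config:
        //   <appSettings>
        //     <add key="ApiEndpoint" value="http://localhost/api" />
        //   </appSettings>
        using System.Configuration;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class ApiEndpointTests
        {
            private static readonly string Endpoint =
                ConfigurationManager.AppSettings["ApiEndpoint"] ?? "http://localhost/api";

            [TestMethod]
            public void Endpoint_IsConfigured()
            {
                // Real tests would call the API at Endpoint here.
                Assert.IsFalse(string.IsNullOrEmpty(Endpoint));
            }
        }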

    Read the article

  • Waiting for ServerSocket accept() to put socket into "listen" mode

    - by inazaruk
    I need simple client-server communication in order to implement a unit test. My steps: create a server thread; wait for the server thread to put the server socket into listen mode (serverSocket.accept()); create a client; make some requests and verify the responses. Basically, I have a problem with step #2: I can't find a way to be signaled when the server socket is put into the "listen" state. An asynchronous call to accept would do in this case, but Java doesn't seem to support this (it appears to support only asynchronous channels, and those are incompatible with the accept() method according to the documentation). Of course I could put in a simple sleep, but that is not really a solution for production code. So, to summarize, I need to detect when the ServerSocket has been put into listen mode without using sleeps and/or polling.
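
    One detail worth leaning on (sketch below, with placeholder names): a ServerSocket is already in the listen state as soon as its constructor binds it, before accept() is ever called, so the server thread can signal a CountDownLatch right after constructing it and the client side can connect without any sleeping or polling.

        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.CountDownLatch;

        public class ListenSignalExample {
            static volatile int port;

            public static void main(String[] args) throws Exception {
                final CountDownLatch listening = new CountDownLatch(1);

                Thread server = new Thread(new Runnable() {
                    public void run() {
                        try {
                            // Bound and listening right here, before accept().
                            ServerSocket ss = new ServerSocket(0);
                            port = ss.getLocalPort();
                            listening.countDown();
                            Socket client = ss.accept();
                            client.close();
                            ss.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                server.start();

                listening.await();                       // no sleep/polling needed
                Socket probe = new Socket("localhost", port);
                probe.close();
                server.join();
            }
        }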

    Read the article

  • CouchDB: one database per account vs. all in one database with a namespace / property

    - by thruflo
    I'm modelling a document generation system in CouchDB. It semi-automates the production of proposal and presentation documents from manageable document fragments. Much like, say, Basecamp, it breaks down very simply into self-contained data per 'account'. Each account has multiple users, projects, documents, etc. However, nothing should be shared between accounts. I can see two ways of doing this: one CouchDB database per account, or a namespace / property to identify the account. It seems to me that the first approach is conceptually sound and potentially has security and partitioning advantages. However, it also seems to restrict some cross-database querying (which I don't have a use case for now, but you never know...) and to make updating views potentially require an awful lot of writes. Does anyone experienced with this kind of decision have any advice?

    Read the article

  • Terminal asks for email and password: how do I fill it out programmatically (in Ruby)?

    - by viatropos
    I am running a command to push files to Google App Engine, and it may ask me for my email and password:

        $ appcfg.py update .
        Email: [email protected]
        Password:

    I am running that in Ruby right now using %x[appcfg.py update .]. How can I fill out the email and password? I have seen something like this done with Capistrano:

        %x[appcfg.py update .] do |channel, stream, data|
          channel.send_data "#{yaml['production']['email']}\n" if data =~ /^Email:/
        end

    ...but haven't figured out how to set that up without it. What's the best way to programmatically fill out things the command line asks for?
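
    Outside of Capistrano, Ruby's standard pty and expect libraries can drive the prompts directly; a sketch (the yaml lookups mirror the question and are assumed to hold the credentials):

        require 'pty'
        require 'expect'

        PTY.spawn('appcfg.py update .') do |reader, writer, pid|
          reader.expect(/Email:/)    { writer.puts yaml['production']['email'] }
          reader.expect(/Password:/) { writer.puts yaml['production']['password'] }
          begin
            reader.each_line { |line| puts line }   # echo the rest of appcfg's output
          rescue Errno::EIO
            # appcfg.py closed the pty; nothing more to read
          end
        end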

    Read the article

  • When is BIG, big enough for a database?

    - by David ???
    I'm developing a Java application that has performance at its core. I have a list of some 40,000 "final" objects, i.e., I have initialization input data of 40,000 vectors. This data is unchanged throughout the program's run. I am always performing lookups against a single ID property to retrieve the proper vectors. Currently I am using a HashMap over a sub-sample of 1,000 vectors, but I'm not sure it will scale to production. When is big actually big enough to warrant a database? One more thing: an SQLite DB is a viable option, as no concurrency is involved, so I guess the "threshold" for DB use is perhaps lower.

    Read the article

  • What is the most efficient way to pass data (list of pairs of [Integer + Double]) between two Google App Engine instances?

    - by ruslan
    What is the most efficient way to pass data (a list of pairs of [Integer, Double]) between two Google App Engine instances? Currently I use Java binary serialization: the frontend servlet receives data from the client in JSON format, I convert it to byte[] using ObjectOutput.writeObject, and then send it to the backend servlet via HTTP POST. It's not in production yet. Should I just pass the client's JSON as-is to the backend? It seems more logical, but it's bigger in size. Or should I use Google Protocol Buffers, as stated in this benchmark article? Thank you!!!
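
    For reference, the byte[] round trip the question describes looks roughly like this (a sketch; the pair type and values are illustrative), which gives a concrete payload size to compare against JSON or protobuf:

        import java.io.ByteArrayOutputStream;
        import java.io.ObjectOutputStream;
        import java.util.AbstractMap;
        import java.util.ArrayList;
        import java.util.Map;

        public class SerializePairs {
            public static void main(String[] args) throws Exception {
                ArrayList<Map.Entry<Integer, Double>> pairs =
                        new ArrayList<Map.Entry<Integer, Double>>();
                pairs.add(new AbstractMap.SimpleEntry<Integer, Double>(42, 3.14));

                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(bos);
                out.writeObject(pairs);              // ArrayList and SimpleEntry are Serializable
                out.close();
                byte[] payload = bos.toByteArray();  // body of the POST to the backend servlet
                System.out.println(payload.length + " bytes");
            }
        }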

    Read the article

  • How to use a self-signed SSL certificate when developing with Trigger.io?

    - by user610345
    Our backend is in Rails, and for several reasons the development environment has to be run with Rails using a self-signed SSL certificate. This works fine on the desktop after manually trusting the certificate. Using Trigger.io, we're developing a mobile application targeting iOS from the same backend. It would be ideal for us to be able to run the Rails server with SSL (so we can compare the browser output) and still have the iOS simulator connect properly without complaining about invalid certs. Production uses a proper SSL cert, but what's the best way to set up the simulator?

    Read the article

  • TSQL - create a stored proc inside a transaction statement

    - by Chris L
    I have a SQL script that is set to roll to production. I've wrapped the various projects into separate transactions, and in each of the transactions we create stored procedures. I'm getting error messages like: "Msg 156, Level 15, State 1, Line 4: Incorrect syntax near the keyword 'procedure'." I created this example script to illustrate:

        Begin Try
            Begin Transaction
            -- do a bunch of add/alter tables here
            -- do a bunch of data manipulation/population here

            -- create a stored proc
            create procedure dbo.test as
            begin
                select * from some_table
            end

            Commit
        End Try
        Begin Catch
            Rollback
            Declare @Msg nvarchar(max)
            Select @Msg = Error_Message();
            RaisError('Error Occured: %s', 20, 101, @Msg) With Log;
        End Catch

    The error seems to imply that I can't create stored procs inside a transaction, but I'm not finding any docs that say otherwise (maybe Google isn't being friendly today).
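
    The actual restriction is that CREATE PROCEDURE must be the only statement in its batch. A common workaround inside a transactional script (a sketch using the example above) is to wrap the creation in dynamic SQL, which runs in its own batch:

        -- inside the Begin Try / Begin Transaction block
        Exec('
        create procedure dbo.test as
        begin
            select * from some_table
        end
        ');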

    Read the article

  • Deployment process

    - by Balaji
    We have a massive system of around 15 servers hosting .NET WCF services, MVC applications, etc. When we do a deployment (out of office hours) we have to uninstall and install everything on the live servers. This takes a lot of time, and if something goes wrong we have to roll back everything. Can you please suggest something different? For example: deploy into another environment (whenever you like) and switch the URL to point to the new servers [this comes with the overhead of maintaining two copies of production (active and passive)]. Any other ideas, please.

    Read the article

  • Implicit type conversion in DB/2 inserts?

    - by IronGoofy
    We're using SQL inserts to insert some data via a script into DB/2 tables, e.g.:

        CREATE TABLE TICKETS (TICKETID VARCHAR(10) NOT NULL);

    On my home installation, this statement works fine (note that I'm using an integer which is automatically cast into a VARCHAR):

        INSERT INTO TICKETS (TICKETID) VALUES (1);

    while at my customer's site I get a type error. My questions: Is this behavior version dependent? (I use DB2 Express V9.7, while the customer has Enterprise V9.5.) Is there a config option to change the behavior? (I would like my home install to behave as closely as possible to what the production environment is going to be.)
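
    For what it's worth, DB2 9.7 introduced implicit casting between numeric and string types that 9.5 lacks, so regardless of any config switch the portable fix is to make the value (or the cast) explicit:

        -- works on both servers regardless of implicit-casting support
        INSERT INTO TICKETS (TICKETID) VALUES ('1');
        -- or, keeping the integer source value
        INSERT INTO TICKETS (TICKETID) VALUES (CAST(1 AS VARCHAR(10)));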

    Read the article

  • Is there a Ruby on Rails framework like equivalent for .NET development?

    - by wgpubs
    Answers like ASP.NET MVC or Entity Framework really aren't acceptable, as they address just one aspect of the problem domain. I'm looking for a framework ... a REAL framework that gives me the same features out of the box that Rails does. As such, it should include at minimum:

        - MVC for presentation
        - ORM
        - Ability to provide simple configuration for whatever environment (dev, QA, production, etc...)
        - Migration-like functionality
        - Ability to generate code in all layers (similar to scaffolding-like behavior, etc...)
        - Project template so as to create similar functionality as the "rails my_app" command

    Thanks.

    Read the article

  • Date comparison inside a returned list

    - by rob
    I have an ArrayList returned from a service which contains date timestamps as String values (specifically: 2010-05-06T23:38:18, 2010-05-06T23:32:52, 2010-04-28T18:23:06, 2010-04-27T20:34:02, 2010-04-27T20:37:02). To be more specific, this is part of a parent ArrayList, ObjectHistory; each entry contains the timestamp and a serial number, and I need to pick the serial number belonging to the latest timestamp. How should I be doing this in Java 6? Should I convert these values into Calendar/Date objects? I am in panic mode as this has to be done directly in production.
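
    One shortcut worth noting (sketch below, using the values from the question): strings in this yyyy-MM-dd'T'HH:mm:ss form sort lexicographically in chronological order, so finding the latest one needs no Calendar parsing at all; the winning index then identifies which entry's serial number to use.

        import java.util.Arrays;
        import java.util.List;

        public class LatestTimestamp {
            public static void main(String[] args) {
                List<String> timestamps = Arrays.asList(
                        "2010-05-06T23:38:18", "2010-05-06T23:32:52",
                        "2010-04-28T18:23:06", "2010-04-27T20:34:02", "2010-04-27T20:37:02");
                String latest = null;
                int latestIndex = -1;
                for (int i = 0; i < timestamps.size(); i++) {
                    if (latest == null || timestamps.get(i).compareTo(latest) > 0) {
                        latest = timestamps.get(i);
                        latestIndex = i;
                    }
                }
                // latestIndex identifies the entry whose serial number to use
                System.out.println(latest + " at index " + latestIndex);
            }
        }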

    Read the article

  • Common strategies to deal with rounding errors in currency-intensive software?

    - by Max
    What is your advice on: (1) compensating for accumulated error in bulk math operations on collections of Money objects, and how this is implemented in your production code (things like variable rounding, etc.); (2) the theory behind rounding in accountancy; (3) any literature on the topic? I am currently reading Fowler; he mentions a Money type but says nothing about strategies. Older posts on money rounding (here, and here) do not provide the detail and formality I need. Thanks for the help.
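
    As a baseline (a sketch, not a full strategy): keeping amounts in BigDecimal and rounding explicitly at well-defined boundaries, typically HALF_EVEN ("banker's rounding") to the currency's scale, is the usual starting point before any error-allocation scheme is layered on top.

        import java.math.BigDecimal;
        import java.math.RoundingMode;

        public class MoneyRounding {
            public static void main(String[] args) {
                BigDecimal net  = new BigDecimal("100.005");   // construct from Strings, never doubles
                BigDecimal rate = new BigDecimal("0.19");
                // Round only at the boundary where a real monetary amount is produced.
                BigDecimal tax  = net.multiply(rate).setScale(2, RoundingMode.HALF_EVEN);
                System.out.println(tax);                       // 19.00
            }
        }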

    Read the article

  • Is there any way around the Chrome 5 breakage of Ajax for local files?

    - by nikow
    The recent Chrome 5.0 release completely blocks XMLHttpRequest requests for local files coming from a local file. Here is just one of the many related bug reports, and here is the code change (there is also a SO question caused by this). This breaks a lot of production code, e.g. for documentation systems. Users must be able to browse local documentation without the need to install anything or run executables. My question is whether there is any way around this restriction. I'm only interested in solutions that don't require any fancy actions on the user's side (nothing beyond a confirmation dialog). Is there any way the HTML5 File API could be used, or maybe postMessage()? Of course this whole issue is very frustrating for many people. Firefox takes a far more reasonable approach and allows requests inside the directory, so it seems unlikely that other browser vendors will follow Chrome.
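
    Neither of these meets the "no user action" bar, but for completeness the commonly cited workarounds are a launch flag and serving the docs from a tiny local web server (a sketch; the flag must be passed when Chrome starts):

        # relax the file:// restriction for this Chrome session
        google-chrome --allow-file-access-from-files

        # or browse the docs over HTTP instead of file://
        cd /path/to/docs && python -m SimpleHTTPServer 8000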

    Read the article
