Search Results

Search found 13713 results on 549 pages for 'production environment'.


  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications; others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production. What is the best way to deploy these applications to a production server?

    In development we simply get the source tree into any directory, set up a virtualenv and run - easy enough. We could do the same in production, and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works the first time; for every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process:

    - We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity, that's fine.)
    - When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production versions of the applications on the same server, so they need to be kept separate somehow. If it installs into site-packages, that's fine too, as long as it works.
    - We have some configuration files on the server that should be preserved (i.e. not overwritten or deleted by the deployment process).
    - Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages or whatever, as long as it works reliably in both development and production.
    - Ideally the deployment process should automatically install external packages that our applications depend on (e.g. psycopg2).

    That's really it! How hard can it be?
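
    A minimal sketch of the one-command deploy described above, assuming each application is packaged with a setup.py that declares its dependencies and that the virtualenv in use bundles pip; every name, path and URL below is hypothetical:

        #!/usr/bin/env python
        # deploy.py - one command per environment; names/paths are hypothetical.
        import subprocess

        APP = "myapp"
        ENV = "/srv/envs/%s-staging" % APP              # one virtualenv per environment
        REPO = "http://svn.example.com/%s/trunk" % APP  # or a local working copy

        def run(*cmd):
            print " ".join(cmd)
            subprocess.check_call(cmd)

        # Create the environment; harmless if it already exists.
        run("virtualenv", "--no-site-packages", ENV)
        # Export (rather than check out) so no .svn metadata lands on the server.
        run("svn", "export", "--force", REPO, ENV + "/src")
        # Install the app and its declared dependencies (e.g. psycopg2) into the
        # environment's site-packages; server-side config files are untouched.
        run(ENV + "/bin/pip", "install", ENV + "/src")

    Because each application ends up in a virtualenv's site-packages, applications that depend on one another can declare that dependency in setup.py and import each other as ordinary packages, in development and production alike.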

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below), upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 |   2120 |  930 |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do would transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of the existing table row would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:20  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:21  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:22  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:23  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:24  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:25  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:26  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:27  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:28  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:29  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:30  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:31  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:32  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:33  |    133 |
        | 080025  | ABC123  | M01        | 2008-01-24 16:34  |    133 |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at production-event granularity and modify the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns as it goes. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
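
    Purely in MySQL this kind of expansion is usually done by joining the event table against a pre-built integers (tally) table of minute offsets; the expansion logic itself, sketched here in Python for clarity, with hypothetical table and column names and the MySQLdb driver assumed:

        # Expand each production event into per-minute rows, assuming a constant
        # production rate. Table and column names here are hypothetical.
        import datetime
        import MySQLdb  # assumes the MySQLdb driver is installed

        SELECT = ("SELECT user_id, work_id, machine_id, start_time, end_time, output "
                  "FROM production_events")
        INSERT = ("INSERT INTO production_minutes "
                  "(user_id, work_id, machine_id, production_minute, output) "
                  "VALUES (%s, %s, %s, %s, %s)")

        def expand(user, work, machine, start, end, output):
            start = start.replace(second=0)
            end = end.replace(second=0)
            delta = end - start
            minutes = delta.days * 1440 + delta.seconds // 60 + 1  # "difference plus one"
            for i in range(minutes):
                yield (user, work, machine,
                       start + datetime.timedelta(minutes=i),
                       output / float(minutes))

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="warehouse")
        src, dst = conn.cursor(), conn.cursor()
        src.execute(SELECT)
        for row in src.fetchall():
            dst.executemany(INSERT, list(expand(*row)))
        conn.commit()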

    Read the article

  • Union on two tables with a where clause in the one

    - by Lostdrifter
    Currently I have two tables, both with the same structure, that are going to be used in a web application. The two tables are production and temp; the temp table contains one additional column called [signed up]. Currently I generate a single list using two columns that are found in each table (recno and name). Using these two fields I'm able to support my web application's search function. Now what I need to do is limit which items from the second table can be used in the search. The reason for this is that once a person is "signed up", a similar record is created in the production table and will have its own recno. Doing:

        SELECT recno, name FROM production
        UNION ALL
        SELECT recno, name FROM temp

    ...will show me everyone. I have tried:

        SELECT recno, name FROM production
        UNION ALL
        SELECT recno, name FROM temp WHERE signup <> 'Y'

    But this returns nothing? Can anyone help?
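
    One thing worth checking, since NULL comparisons silently fail under SQL's three-valued logic: rows where the sign-up column is NULL do not satisfy signup <> 'Y' and vanish from the result. A self-contained sketch (sqlite3 standing in for the real database; the table and column names here are hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE temp_tbl (recno INTEGER, name TEXT, signup TEXT)")
        conn.executemany("INSERT INTO temp_tbl VALUES (?, ?, ?)",
                         [(1, "Ann", None), (2, "Bob", "Y"), (3, "Cal", "N")])

        # NULL <> 'Y' evaluates to NULL, not true, so Ann is filtered out:
        print conn.execute("SELECT recno, name FROM temp_tbl "
                           "WHERE signup <> 'Y'").fetchall()
        # [(3, u'Cal')]

        # Treat NULL explicitly as "not signed up":
        print conn.execute("SELECT recno, name FROM temp_tbl "
                           "WHERE signup IS NULL OR signup <> 'Y'").fetchall()
        # [(1, u'Ann'), (3, u'Cal')]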

    Read the article

  • Cherrypicking versus Rebasing

    - by Lakshman Prasad
    The following is a scenario I commonly face: I have a set of commits on master or design that I want to put on top of the production branch. I tend to create a new branch based on production, cherry-pick these commits onto it, and merge it to production. Then, when I merge master to production, I face merge conflicts because even though the changes are the same, they are registered as different commits because of the cherry-pick. I have found some workarounds to deal with this, all of which are laborious and could be termed "hacks". Although I haven't done much rebasing, I believe that too creates a new commit hash. Should I be using rebase where I am cherry-picking? What other advantages does it have over this approach?

    Read the article

  • advice on setting up SVN

    - by Vivek Chandraprakash
    I'm trying to set up an SVN server. I maintain a couple of websites based on ASP. There are currently three environments:

    - Development: any new modules/enhancements are done in this environment
    - Staging: a mirror of production
    - Production: the public-facing website

    Currently, when there's an update to the website, this is what we do:

    1. Make the update in development
    2. Copy the file to staging
    3. Copy the file to production

    In production we take a backup of the old file by renaming it. I would like to make this simpler by installing SVN and stopping the file-renaming business. But I'm not sure how many repositories to have per website - should it be three or two? I'm absolutely new to SVN; I've just installed it on a Linux-based server (Ubuntu). Can you please advise how to go about it? Thanks -Vivek

    Read the article

  • Debugging CodeIgniter's 404 errors

    - by Alex
    I'm having a nightmare uploading my site to the production server. The site runs fine locally and on a staging server (exactly the same server and settings as the production site). However, when I deploy to production I'm getting a 404 error from CI. CodeIgniter's 404 error pages are frustrating because it seems I can't access other libraries from them. How can I go about debugging the error - seeing which controller it is trying to call, etc.?

    Read the article

  • Compare structures of two databases?

    - by streetparade
    Hello, I wanted to ask whether it is possible to compare the complete database structure of two huge databases. We have two databases: one is a development database, the other a production database. I've sometimes forgotten to make changes to the production database before we released some parts of our code, with the result that the production database doesn't have the same structure; so when we release something, we get errors. Is there a way to compare the two, or synchronize them?
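
    A quick way to spot drift, sketched below under the assumption that both servers expose information_schema and that the MySQLdb driver is available (host, credential and schema names are hypothetical); in practice, diffing the output of mysqldump --no-data from each server works too:

        import MySQLdb

        QUERY = ("SELECT table_name, column_name, column_type, is_nullable "
                 "FROM information_schema.columns WHERE table_schema = %s")

        def columns(host, schema):
            conn = MySQLdb.connect(host=host, user="user", passwd="secret")
            cur = conn.cursor()
            cur.execute(QUERY, (schema,))
            return set(cur.fetchall())

        dev = columns("dev-db", "myapp")
        prod = columns("prod-db", "myapp")

        for row in sorted(dev - prod):
            print "only in dev:  %s.%s %s nullable=%s" % row
        for row in sorted(prod - dev):
            print "only in prod: %s.%s %s nullable=%s" % row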

    Read the article

  • Deploying Multiple Environments in Spring-MVC

    - by jboyd
    Currently all web apps are deployed using separate config files:

        <!-- <import bean.... production/> -->
        <import bean... development/>

    This has disadvantages that I'm sure everyone is familiar with, even if you only need to swap out one config file (wondering what was just deployed without searching through XML is one of them). I want to add logging to my application that basically says 'RUNNING IN PRODUCTION MODE', with a description of the services deployed and what mode they are working in:

        RUNNING IN PRODUCTION MODE
        Client Service - Production
        Messaging Service - Local

    and so on... Is this possible in Spring using a conventional deployment (putting a war on a server)? If not, what other ways could you achieve something similar? And what other things do people do to manage deployments and software configurations?

    Read the article

  • "One or more breakpoints cannot be set and have been disabled. Execution will stop at the beginning

    - by sam
    I set a breakpoint in my code in Visual C++, but when I run, I see the error mentioned in the title. I know this question has been asked before on Stack Overflow (http://stackoverflow.com/questions/657470/breakpoints-cannot-be-set-and-have-been-disabled-problem), but none of the answers there fully explained the problem I'm seeing. The closest I can see is something about the linker, but I don't understand that - so if someone could explain in more detail, that would be great.

    In my case, I have two projects in Visual C++ - the production dsw and the test-code dsw. I have loaded and rebuilt both dsws in debug mode. I want a breakpoint in the production code, which is run via the test scripts. My issue is that I get the error message when I run the test code, because the breakpoint is in the production code, which isn't loaded when the test starts. Near the beginning of the test script there is a mytest_initialize() command; I imagine this goes off and loads the production dll. Once this line has executed, I can put the breakpoint in my production code and run until I hit it. But it's quite annoying to have to run to this line, set the breakpoint and continue every time I want to run the test.

    So I think the problem is that Visual C++ doesn't realise the two projects are related. Is this a linker issue? What does the linker do, and what settings should I change to make this work? Thanks in advance. Apologies if I should instead be appending this question to the existing one; this is my first post, so I'm not quite sure how this should work.

    Read the article

  • How do I get Phusion Passenger to work with Django for App Engine?

    - by Mike
    I'm having a devil of a time getting Phusion Passenger to work with django-nonrel for Google's App Engine. I can get it to work for GoogleAppEngineLauncher and for the production server, but not Passenger; or for Passenger and GoogleAppEngineLauncher, but not the production server; or for Passenger and the production server, but not GoogleAppEngineLauncher. How do I get my app to deploy on all three?
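
    For reference, the Passenger side of this usually comes down to a passenger_wsgi.py at the application root exposing a module-level callable named application; a minimal sketch of the era, with hypothetical project and settings-module names (the App Engine side still needs its own app.yaml handlers, which is why keeping all three entry points in sync is fiddly):

        # passenger_wsgi.py - minimal sketch; project and module names hypothetical
        import os
        import sys

        sys.path.insert(0, os.path.dirname(__file__))           # the project checkout
        os.environ["DJANGO_SETTINGS_MODULE"] = "myapp.settings"

        import django.core.handlers.wsgi
        # Passenger looks for a module-level callable named 'application'.
        application = django.core.handlers.wsgi.WSGIHandler()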

    Read the article

  • RMO on Windows 7 - The specified module could not be found

    - by AGoodDisplayName
    My machine: Windows XP (x86), VS2008, .NET 3.5, SQL Server 2005, WinForms - app works fine.
    Production machine: Windows 7 (x64), SQL Server 2005 Express - app starts but throws an exception.

    Visual Studio is targeting x86 in both the setup project and the RMO project. Visual Studio gives me a couple of warnings, but I can still build:

        Unable to find dependency 'MICROSOFT.SQLSERVER.MANAGEMENT.SQLPARSER' (Signature='89845DCD8080CC91' Version='10.0.0.0') of assembly 'Microsoft.SqlServer.Smo.dll'
        Unable to find dependency 'MICROSOFT.SQLSERVER.MANAGEMENT.SQLPARSER' (Signature='89845DCD8080CC91' Version='10.0.0.0') of assembly 'Microsoft.SqlServer.Management.SmoMetadataProvider.dll'

    This is a simple RMO (Replication Management Objects) app that initiates a pull subscription in SQL Server 2005 and displays its status. It works fine on my machine but fails on the production machine. I'm using a setup project to install the app on the production machine; I think I'm missing a dependency somewhere, but I can't figure out which. On the production machine the app starts fine, but when I try to synchronize the subscription I get:

        System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)

    Read the article

  • SQL Server job (stored proc) trace

    - by Jit
    Hi friends, I need your suggestions on tracing this issue. We run data-load jobs in the early morning, loading data from an Excel file into a SQL Server 2005 database. When the job runs on the production server, it often takes 2 to 3 hours to complete. We have drilled down to one job step that takes 99% of the total time. Running that job step (stored procs) on the staging environment (with the same production database restored) takes 9 to 10 minutes, yet it takes hours on the production server when run in the early morning as part of the job. The production server always gets stuck at this particular job step.

    I would like to run a trace on just this job step (around 10 stored procs run for each user in a while loop within the step) and collect information to figure out the issue. What are the ways available in SQL Server 2005 to achieve this? I want to run the trace only for these SPs, not for a whole time period on the production server, as a trace produces a lot of information and it becomes very difficult for me (not being a DBA) to analyze that much trace data and find the issue. So I want to collect information about these specific SPs only. Let me know what you suggest. I appreciate your time and help. Thanks.
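
    Server-side, SQL Server Profiler can filter a trace down to just these procedures (for example, SP:Completed events filtered on ObjectName). As a lighter-weight alternative, the calls can be timed from outside, as in this sketch, which assumes the pyodbc driver and uses hypothetical procedure names and user ids:

        import time
        import pyodbc

        PROCS = ["dbo.usp_LoadStep1", "dbo.usp_LoadStep2"]   # the ~10 suspect procs
        USERS = [101, 102]                                   # ids the while loop iterates over

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=prodserver;"
                              "DATABASE=warehouse;Trusted_Connection=yes")
        cur = conn.cursor()
        for user in USERS:
            for proc in PROCS:
                start = time.time()
                cur.execute("EXEC %s ?" % proc, user)  # proc names are our own constants
                conn.commit()
                print "%-25s user=%s %8.1fs" % (proc, user, time.time() - start)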

    Read the article

  • SQL Server insert slow

    - by andrew007
    Hi, I have two servers on which I installed SQL Server 2008:

    - Production: RAID 1 on SCSI disks
    - Test: IDE disk

    When I execute a script with about 35,000 inserts, the test server needs 30 seconds while the production server needs more than 2 minutes! Does anybody know why there is such a difference? I mean, the DB is configured the same way, and the production server also has a RAID config, a better processor and more memory... THANKS!
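
    One plausible explanation worth ruling out: if each INSERT commits individually, every statement waits for the transaction log to flush to disk, and consumer IDE drives often acknowledge writes from their volatile cache while a SCSI RAID 1 honours the flush - so the "slower" result may be the honest one. Wrapping the script in a single BEGIN TRANSACTION ... COMMIT pays that cost once; the same idea from client code, sketched with the pyodbc driver and hypothetical names:

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=prodserver;"
                              "DATABASE=mydb;Trusted_Connection=yes")
        conn.autocommit = False            # one transaction for the whole batch
        cur = conn.cursor()
        for i in range(35000):
            cur.execute("INSERT INTO t (id, val) VALUES (?, ?)", i, "x")
        conn.commit()                      # a single log flush at the end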

    Read the article

  • Remove redundant entries, scala way

    - by andersbohn
    Edit: added the fact that the list is sorted and, realizing that 'duplicate' was misleading, replaced it with 'redundant' in the title.

    I have a sorted list of entries stating a production value in a given interval. Entries stating the exact same value at a later time add no information and can safely be left out.

        case class Entry(minute: Int, production: Double)

        val entries = List(Entry(0, 100.0), Entry(5, 100.0), Entry(10, 100.0),
                           Entry(20, 120.0), Entry(30, 100.0), Entry(180, 0.0))

    Experimenting with the Scala 2.8 collection functions, so far I have this working implementation:

        entries.foldRight(List[Entry]()) { (entry, list) =>
          list match {
            case head :: tail if (entry.production == head.production) => entry :: tail
            case head :: tail => entry :: list
            case List() => entry :: List()
          }
        }
        // res0: List[Entry] = List(Entry(0,100.0), Entry(20,120.0), Entry(30,100.0), Entry(180,0.0))

    Any comments? Am I missing out on some Scala magic?

    Read the article

  • How do I effectively store a connection string in machine.config only?

    - by Scott Bedwell
    We are moving to an environment with multiple engines of MS SQL running on the same server (a test engine and a production engine). We also have separate test and production web servers, and would like our ASP.NET applications to "magically" use the test database engine on the test web server and the production database engine on the production web servers. We would like to store the connection strings in machine.config rather than in web.config, but when we put them in machine.config, Visual Studio's IDE (particularly with datasets) does not recognize that machine.config contains the connection. Does anyone know of a solution for displaying these machine.config connection strings in Visual Studio, or of a different solution that would accommodate this? Thanks.

    Read the article

  • How to disable ActiveRecord cache logging in Rails

    - by user1508459
    I'm trying to disable the logging of caching in production. I have succeeded in getting SQL queries to stop being logged, but have had no luck with the caching log entries. An example line in the production log:

        CACHE (0.0ms) SELECT merchants.* FROM merchants WHERE merchants.id = 1 LIMIT 1

    I do not want to disable all logging, since I want logger.debug statements to show up in the production log. I'm using Rails 3.2.1 with MySQL and Apache. Any suggestions?

    Read the article

  • Coherence - How to develop a custom push replication publisher

    - by cosmin.tudor(at)oracle.com
    Attachment: CoherencePushReplicationDB.zip

    In the example below I'm describing a way of developing a custom push replication publisher that publishes data to a database via JDBC. This example can easily be changed to publish data to other receivers (JMS, ...) by performing changes to step 2 and small changes to step 3, both presented below. I've used Eclipse as the development tool.

    To develop a custom push replication publisher we will need to go through 6 steps:

    Step 1: Create a custom publisher scheme class.
    Step 2: Create a custom publisher class that defines what the publisher does.
    Step 3: Create a class that performs the actions (publish to JMS, DB, etc.) for the custom publisher.
    Step 4: Register the new publisher against a ContentHandler.
    Step 5: Add the new custom publisher to the cache configuration file.
    Step 6: Add the custom publisher scheme class to the POF configuration file.

    All these steps are detailed below. The Coherence project is attached and conclusions are presented at the end.

    Step 1: In the Coherence Eclipse project create a class called CustomPublisherScheme that extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme. In this class define the elements of the custom-publisher-scheme element. For instance, for a CustomPublisherScheme that looks like this:

        <sync:publisher>
            <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name>
            <sync:publisher-scheme>
                <sync:custom-publisher-scheme>
                    <sync:jdbc-string>jdbc:oracle:thin:@machine-name:1521:XE</sync:jdbc-string>
                    <sync:username>hr</sync:username>
                    <sync:password>hr</sync:password>
                </sync:custom-publisher-scheme>
            </sync:publisher-scheme>
        </sync:publisher>

    the code is:

        package com.oracle.coherence;

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;

        import com.oracle.coherence.patterns.pushreplication.Publisher;
        import com.oracle.coherence.configuration.Configurable;
        import com.oracle.coherence.configuration.Mandatory;
        import com.oracle.coherence.configuration.Property;
        import com.oracle.coherence.configuration.parameters.ParameterScope;
        import com.oracle.coherence.environment.Environment;
        import com.tangosol.io.pof.PofReader;
        import com.tangosol.io.pof.PofWriter;
        import com.tangosol.util.ExternalizableHelper;

        @Configurable
        public class CustomPublisherScheme extends
                com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme {

            private static final long serialVersionUID = 1L;

            private String jdbcString;
            private String username;
            private String password;

            public String getJdbcString() {
                return this.jdbcString;
            }

            @Property("jdbc-string")
            @Mandatory
            public void setJdbcString(String jdbcString) {
                this.jdbcString = jdbcString;
            }

            public String getUsername() {
                return username;
            }

            @Property("username")
            @Mandatory
            public void setUsername(String username) {
                this.username = username;
            }

            public String getPassword() {
                return password;
            }

            @Property("password")
            @Mandatory
            public void setPassword(String password) {
                this.password = password;
            }

            public Publisher realize(Environment environment, ClassLoader classLoader,
                    ParameterScope parameterScope) {
                return new CustomPublisher(getJdbcString(), getUsername(), getPassword());
            }

            public void readExternal(DataInput in) throws IOException {
                super.readExternal(in);
                this.jdbcString = ExternalizableHelper.readSafeUTF(in);
                this.username = ExternalizableHelper.readSafeUTF(in);
                this.password = ExternalizableHelper.readSafeUTF(in);
            }

            public void writeExternal(DataOutput out) throws IOException {
                super.writeExternal(out);
                ExternalizableHelper.writeSafeUTF(out, this.jdbcString);
                ExternalizableHelper.writeSafeUTF(out, this.username);
                ExternalizableHelper.writeSafeUTF(out, this.password);
            }

            public void readExternal(PofReader reader) throws IOException {
                super.readExternal(reader);
                this.jdbcString = reader.readString(100);
                this.username = reader.readString(101);
                this.password = reader.readString(102);
            }

            public void writeExternal(PofWriter writer) throws IOException {
                super.writeExternal(writer);
                writer.writeString(100, this.jdbcString);
                writer.writeString(101, this.username);
                writer.writeString(102, this.password);
            }
        }

    Step 2: Define what the CustomPublisher should do by creating a new Java class called CustomPublisher that implements com.oracle.coherence.patterns.pushreplication.Publisher:

        package com.oracle.coherence;

        import com.oracle.coherence.patterns.pushreplication.EntryOperation;
        import com.oracle.coherence.patterns.pushreplication.Publisher;
        import com.oracle.coherence.patterns.pushreplication.exceptions.PublisherNotReadyException;

        import java.io.BufferedWriter;
        import java.util.Iterator;

        public class CustomPublisher implements Publisher {

            private String jdbcString;
            private String username;
            private String password;
            private transient BufferedWriter bufferedWriter;

            public CustomPublisher() {
            }

            public CustomPublisher(String jdbcString, String username, String password) {
                this.jdbcString = jdbcString;
                this.username = username;
                this.password = password;
                this.bufferedWriter = null;
            }

            public String getJdbcString() {
                return this.jdbcString;
            }

            public String getUsername() {
                return username;
            }

            public String getPassword() {
                return password;
            }

            public void publishBatch(String cacheName, String publisherName,
                    Iterator<EntryOperation> entryOperations) {
                DatabasePersistence databasePersistence = new DatabasePersistence(
                        jdbcString, username, password);
                while (entryOperations.hasNext()) {
                    EntryOperation entryOperation = (EntryOperation) entryOperations.next();
                    databasePersistence.databasePersist(entryOperation);
                }
            }

            public void start(String cacheName, String publisherName)
                    throws PublisherNotReadyException {
                System.err.printf("Started: Custom JDBC Publisher for Cache %s with Publisher %s\n",
                        new Object[] { cacheName, publisherName });
            }

            public void stop(String cacheName, String publisherName) {
                System.err.printf("Stopped: Custom JDBC Publisher for Cache %s with Publisher %s\n",
                        new Object[] { cacheName, publisherName });
            }
        }

    In the publishBatch method above we inform the publisher that it is supposed to persist data to a database:

        DatabasePersistence databasePersistence = new DatabasePersistence(
                jdbcString, username, password);
        while (entryOperations.hasNext()) {
            EntryOperation entryOperation = (EntryOperation) entryOperations.next();
            databasePersistence.databasePersist(entryOperation);
        }

    Step 3: The class that deals with the persistence is a very basic one that uses JDBC to perform inserts/updates against a database.

        package com.oracle.coherence;

        import com.oracle.coherence.patterns.pushreplication.EntryOperation;

        import java.sql.*;
        import java.text.SimpleDateFormat;

        import com.oracle.coherence.Order;

        public class DatabasePersistence {

            public static String INSERT_OPERATION = "INSERT";
            public static String UPDATE_OPERATION = "UPDATE";

            public Connection dbConnection;

            public DatabasePersistence(String jdbcString, String username, String password) {
                this.dbConnection = createConnection(jdbcString, username, password);
            }

            public Connection createConnection(String jdbcString, String username,
                    String password) {
                Connection connection = null;
                System.err.println("Connecting to: " + jdbcString + " Username: "
                        + username + " Password: " + password);
                try {
                    // Load the JDBC driver
                    String driverName = "oracle.jdbc.driver.OracleDriver";
                    Class.forName(driverName);
                    // Create a connection to the database
                    connection = DriverManager.getConnection(jdbcString, username, password);
                    System.err.println("Connected to:" + jdbcString + " Username: "
                            + username + " Password: " + password);
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                } // driver
                catch (SQLException e) {
                    e.printStackTrace();
                }
                return connection;
            }

            public void databasePersist(EntryOperation entryOperation) {
                if (entryOperation.getOperation().toString()
                        .equalsIgnoreCase(INSERT_OPERATION)) {
                    insert(((Order) entryOperation.getPublishableEntry().getValue()));
                } else if (entryOperation.getOperation().toString()
                        .equalsIgnoreCase(UPDATE_OPERATION)) {
                    update(((Order) entryOperation.getPublishableEntry().getValue()));
                }
            }

            public void update(Order order) {
                String update = "UPDATE Orders set QUANTITY= '" + order.getQuantity()
                        + "', AMOUNT='" + order.getAmount() + "', ORD_DATE= '"
                        + (new SimpleDateFormat("dd-MMM-yyyy")).format(order.getOrdDate())
                        + "' WHERE SYMBOL='" + order.getSymbol() + "'";
                System.err.println("UPDATE = " + update);
                try {
                    Statement stmt = getDbConnection().createStatement();
                    stmt.execute(update);
                    stmt.close();
                } catch (SQLException ex) {
                    System.err.println("SQLException: " + ex.getMessage());
                }
            }

            public void insert(Order order) {
                String insert = "insert into Orders values('" + order.getSymbol() + "',"
                        + order.getQuantity() + "," + order.getAmount() + ",'"
                        + (new SimpleDateFormat("dd-MMM-yyyy")).format(order.getOrdDate())
                        + "')";
                System.err.println("INSERT = " + insert);
                try {
                    Statement stmt = getDbConnection().createStatement();
                    stmt.execute(insert);
                    stmt.close();
                } catch (SQLException ex) {
                    System.err.println("SQLException: " + ex.getMessage());
                }
            }

            public Connection getDbConnection() {
                return dbConnection;
            }

            public void setDbConnection(Connection dbConnection) {
                this.dbConnection = dbConnection;
            }
        }

    Step 4: Now we need to register our publisher against a ContentHandler. To achieve that, we create in our Eclipse project a new class called CustomPushReplicationNamespaceContentHandler that extends com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler. In the constructor of the new class we define a new handler for our custom publisher.

        package com.oracle.coherence;

        import com.oracle.coherence.configuration.Configurator;
        import com.oracle.coherence.environment.extensible.ConfigurationContext;
        import com.oracle.coherence.environment.extensible.ConfigurationException;
        import com.oracle.coherence.environment.extensible.ElementContentHandler;
        import com.oracle.coherence.patterns.pushreplication.PublisherScheme;
        import com.oracle.coherence.environment.extensible.QualifiedName;
        import com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler;
        import com.tangosol.run.xml.XmlElement;

        public class CustomPushReplicationNamespaceContentHandler extends
                PushReplicationNamespaceContentHandler {

            public CustomPushReplicationNamespaceContentHandler() {
                super();
                registerContentHandler("custom-publisher-scheme",
                        new ElementContentHandler() {
                            public Object onElement(ConfigurationContext context,
                                    QualifiedName qualifiedName, XmlElement xmlElement)
                                    throws ConfigurationException {
                                PublisherScheme publisherScheme = new CustomPublisherScheme();
                                Configurator.configure(publisherScheme, context,
                                        qualifiedName, xmlElement);
                                return publisherScheme;
                            }
                        });
            }
        }

    Step 5: Now we define our CustomPublisher in the cache configuration file according to the documentation:

        <cache-config
            xmlns:sync="class:com.oracle.coherence.CustomPushReplicationNamespaceContentHandler"
            xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler">

            <caching-schemes>
                <sync:provider pof-enabled="false">
                    <sync:coherence-provider/>
                </sync:provider>

                <caching-scheme-mapping>
                    <cache-mapping>
                        <cache-name>publishing-cache</cache-name>
                        <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name>
                        <autostart>true</autostart>
                        <sync:publisher>
                            <sync:publisher-name>Active2 Publisher</sync:publisher-name>
                            <sync:publisher-scheme>
                                <sync:remote-cluster-publisher-scheme>
                                    <sync:remote-invocation-service-name>remote-site1</sync:remote-invocation-service-name>
                                    <sync:remote-publisher-scheme>
                                        <sync:local-cache-publisher-scheme>
                                            <sync:target-cache-name>publishing-cache</sync:target-cache-name>
                                        </sync:local-cache-publisher-scheme>
                                    </sync:remote-publisher-scheme>
                                    <sync:autostart>true</sync:autostart>
                                </sync:remote-cluster-publisher-scheme>
                            </sync:publisher-scheme>
                        </sync:publisher>
                        <sync:publisher>
                            <sync:publisher-name>Active2-Output-Publisher</sync:publisher-name>
                            <sync:publisher-scheme>
                                <sync:stderr-publisher-scheme>
                                    <sync:autostart>true</sync:autostart>
                                    <sync:publish-original-value>true</sync:publish-original-value>
                                </sync:stderr-publisher-scheme>
                            </sync:publisher-scheme>
                        </sync:publisher>
                        <sync:publisher>
                            <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name>
                            <sync:publisher-scheme>
                                <sync:custom-publisher-scheme>
                                    <sync:jdbc-string>jdbc:oracle:thin:@machine_name:1521:XE</sync:jdbc-string>
                                    <sync:username>hr</sync:username>
                                    <sync:password>hr</sync:password>
                                </sync:custom-publisher-scheme>
                            </sync:publisher-scheme>
                        </sync:publisher>
                    </cache-mapping>
                </caching-scheme-mapping>

                <!-- The following scheme is required for each remote-site when using a
                     RemoteInvocationPublisher -->
                <remote-invocation-scheme>
                    <service-name>remote-site1</service-name>
                    <initiator-config>
                        <tcp-initiator>
                            <remote-addresses>
                                <socket-address>
                                    <address>localhost</address>
                                    <port>20001</port>
                                </socket-address>
                            </remote-addresses>
                            <connect-timeout>2s</connect-timeout>
                        </tcp-initiator>
                        <outgoing-message-handler>
                            <request-timeout>5s</request-timeout>
                        </outgoing-message-handler>
                    </initiator-config>
                </remote-invocation-scheme>
                <!-- END: com.oracle.coherence.patterns.pushreplication -->

                <proxy-scheme>
                    <service-name>ExtendTcpProxyService</service-name>
                    <acceptor-config>
                        <tcp-acceptor>
                            <local-address>
                                <address>localhost</address>
                                <port>20002</port>
                            </local-address>
                        </tcp-acceptor>
                    </acceptor-config>
                    <autostart>true</autostart>
                </proxy-scheme>
            </caching-schemes>
        </cache-config>

    As you can see in the configuration above, I've:

    - set the new namespace content handler
    - defined the new custom publisher, which works together with other publishers (stderr and remote publishers in our case)

    Step 6: Add the com.oracle.coherence.CustomPublisherScheme to your custom POF configuration file:

        <pof-config>
            <user-type-list>
                <!-- Built in types -->
                <include>coherence-pof-config.xml</include>
                <include>coherence-common-pof-config.xml</include>
                <include>coherence-messagingpattern-pof-config.xml</include>
                <include>coherence-pushreplicationpattern-pof-config.xml</include>
                <!-- Application types -->
                <user-type>
                    <type-id>1901</type-id>
                    <class-name>com.oracle.coherence.Order</class-name>
                    <serializer>
                        <class-name>com.oracle.coherence.OrderSerializer</class-name>
                    </serializer>
                </user-type>
                <user-type>
                    <type-id>1902</type-id>
                    <class-name>com.oracle.coherence.CustomPublisherScheme</class-name>
                </user-type>
            </user-type-list>
        </pof-config>

    CONCLUSIONS

    This approach allows publishers to publish data to almost any receiver (database, JMS, MQ, ...). The only thing that needs to be changed is the DatabasePersistence.java class, which should be adapted to the chosen receiver. Only minor changes are needed to the rest of the code (to the publishBatch method of the CustomPublisher class).

    Read the article

  • VMRC equivalent for Hyper-V?

    - by Ian Boyd
    VMRC was the client tool used to connect to virtual machines running on Virtual Server. Having upgraded to Windows Server 2008 R2 with the Hyper-V role, I need a way for people to be able to use the virtual machines. Note:

    - not all virtual machines will have network connectivity
    - not all virtual machines will be running Windows
    - some people needing to connect to a virtual machine will be running Windows XP

    Hyper-V Manager, which allows management of the Hyper-V server, is less desirable (since it allows management of the Hyper-V server, and doesn't work on all operating systems). What is the Windows Server 2008 R2 equivalent of VMRC, to "vnc" to a virtual server?

    Update: I think Tatas was suggesting Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 (?), which requires:

    - SQL Server
    - IIS

    Installing those would unfortunately violate our Windows Server 2008 R2 license. I might be looking at the wrong product link, since a commenter said there is a version that doesn't require "System Center".

    Update 2: The Windows Server 2008 R2 running Hyper-V is licensed with the understanding that it only be used to host Hyper-V. From the Windows Server 2008 R2 Licensing FAQ:

        Q. If I have one license for Windows Server 2008 R2 Standard and want to run it in a virtual operating system environment, can I continue running it in the physical operating system environment?

        A. Yes, with Windows Server 2008 R2 Standard, you may run one instance in the physical operating system environment and one instance in the virtual operating system environment; however, the instance running in the physical operating system environment may be used only to run hardware virtualization software, provide hardware virtualization services, or to run software to manage and service operating system environments on the licensed server.

    This is why I'm wary about installing IIS or SQL Server.

    Read the article

  • Automated software installation for MS Windows?

    - by Duncan Bayne
    I am currently setting up a Windows development environment (the whole Visual Studio 2010 stack plus plugins on top of Windows 7). This has got me wondering whether there's a Windows equivalent to what I do for dev environment setup in Ubuntu. It takes literally hours to get a dev environment set up in Windows, involving a lot of manual intervention.

    On Ubuntu, I have two shell scripts: one I run as root, which configures the system using apt-get (amongst other things), and one I run as me, which configures my user account. Those scripts live in my private Subversion repository. Setting up a dev environment from scratch requires five commands:

        sudo apt-get install -y subversion
        svn co http://svn.XXXX.XXX/personal/
        cd personal
        sudo ./ubuntu_setup_root.sh
        ./ubuntu_setup_user.sh

    The only human intervention required is to pick a root password for MySQL. So it takes only a few minutes of human attention to go from a vanilla Ubuntu installation to a full development environment with the latest builds of everything, perfectly tailored down to shortcut keys and wallpaper.

    Is there an equivalent process for Windows? In an ideal world it'd be something trivially scriptable using C# Script or PowerShell, which could live in source control and make use of a repository of ISOs downloaded from MSDN ...
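
    For the subset of software that ships as MSI packages, the unattended part is at least scriptable with msiexec's quiet-install switches; a minimal sketch (the share path and package list are hypothetical, and large .exe installers such as Visual Studio have their own per-product unattended switches):

        # install_all.py - silent MSI installs; run from an elevated prompt.
        import subprocess

        PACKAGES = [
            r"\\fileserver\installers\7z920-x64.msi",
            r"\\fileserver\installers\python-2.6.5.msi",
        ]

        for msi in PACKAGES:
            print "installing", msi
            subprocess.check_call(["msiexec", "/i", msi, "/qn", "/norestart"])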

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of different approaches, e.g. splitting up over different machines versus packing everything onto one machine per person? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person?

    I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment; it's more a tier-2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are:

    - creating one all-inclusive machine for each dev team member, giving them freedom to manage it
    - creating a shared environment managed by one person, with a somewhat formalized change-request process

    What golden mean would you recommend, and why?

    Read the article

  • Industry Standard DNS & Authentication?

    - by James Murphy
    I'm just curious as to what is considered industry standard when it comes to doing DNS and authentication in an environment with mainly Linux machines. Do people use Windows DNS and Windows AD to do it all if they have at least one Windows server (well, a lot might, but should they)? Does anyone use hosts files or local-only user accounts on each server? What would companies like Facebook or Google use for DNS and authentication on their servers?

    We have an environment with about 10-15 Linux servers and 1-2 Windows servers. We are currently using Windows AD and Windows DNS, but it doesn't seem like the most secure/stable/scalable way to do it for a mainly Linux environment. We use RHEL for our Linux environment.

    Read the article

  • CentOS VPS is randomly rebooting

    - by develroot
    I have a CentOS VPS (Parallels Virtuozzo container) which has been running for months. However, a few days ago it started to randomly reboot itself, and I can't find out why. The biggest thing I don't understand is that it takes 40 minutes to reboot (as far as I can see in the logs):

        root ~ # cat /var/log/messages | grep shutdown
        Oct 11 13:52:11 vps27 shutdown[23968]: shutting down for system halt
        Oct 14 14:55:17 vps27 shutdown[30662]: shutting down for system halt
        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt

    And notice the time difference between the shutdown and xinetd's start:

        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt
        Oct 15 06:21:24 vps27 init: Switching to runlevel: 0
        Oct 15 06:21:27 vps27 saslauthd[30614]: server_exit : master exited: 30614
        Oct 15 06:21:38 vps27 named[30661]: shutting down
        Oct 15 06:21:47 vps27 exiting on signal 15
        Oct 15 07:04:34 vps27 syslogd 1.4.1: restart.
        Oct 15 07:05:06 vps27 xinetd[1471]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Oct 15 07:05:06 vps27 xinetd[1471]: Started working: 0 available services

    And here's what Parallels Power Panel says in terms of status changes:

        Time                      Old Status   Status Obtained
        Oct 15, 2011 06:23:46 AM  Mounted      Down
        Oct 15, 2011 06:22:31 AM  Running      Mounted
        Oct 14, 2011 03:06:48 PM  Starting     Running
        Oct 14, 2011 03:06:23 PM  Down         Starting
        Oct 14, 2011 03:06:08 PM  Mounted      Down
        Oct 14, 2011 02:58:24 PM  Running      Mounted

    For some reason it's getting into Mounted mode and then restarts itself. The only problem I can imagine is disk space utilization, which is now 84%. But can that be a reason for a system halt?

        Oct 15, 2011 07:08:33 AM  Resource counter_disk_share_used yellow alert on environment vps27  current value: 82  soft limit: 85  hard limit: 95
        Oct 15, 2011 06:27:23 AM  Resource counter_disk_share_used yellow alert on environment vps27  current value: 82  soft limit: 85  hard limit: 95
        Oct 15, 2011 06:23:50 AM  Resource counter_disk_share_used green alert on environment vps27   current value: 0   soft limit:     hard limit: 0
        Oct 14, 2011 03:06:24 PM  Resource counter_disk_share_used yellow alert on environment vps27  current value: 83  soft limit: 85  hard limit: 95
        Oct 14, 2011 03:05:50 PM  Resource counter_disk_share_used green alert on environment vps27   current value: 0   soft limit:     hard limit: 0

    Read the article

  • Installation of Access Database Engine 32-bit Fails

    - by Rayzor78
    I am trying to install Access Database Engine 2007 32-bit. The splash screen comes up, you click "Next", then it fails with the error:

        Installation ended prematurely because of an error

    You click "OK" and another error window says:

        The installation of the package failed.

    The exact same thing happens when I try Access Database Engine 2010 32-bit. The production server is running Windows Server 2008 R2 SP1 64-bit. Before trying to install Access Database Engine 32-bit, I first needed to install Microsoft Office 2010 Pro (Excel and Office Tools only). I tried the 32-bit version on the production server, since that is how I set it up in our dev environment. No luck - the 32-bit version would not install. I did NOT get the error "You have 64-bit components of Office installed"; I simply received the exact same two errors listed above. Since 32-bit vs. 64-bit did not really matter for the Office install in my project, I installed the 64-bit version of Office Pro 2010 (Excel and Office Tools only) with no problems.

    I have a requirement for the 32-bit version of the Access Database Engine - 2007 or 2010, it doesn't matter. I cannot use the 64-bit version of Access Database Engine 2010 because my SSIS package will not work with it; I require the 32-bit version. I've tried several things to get it installed, and I seriously think the production server has some aversion to installing 32-bit applications. Here's what I've tried:

    - Installing via the command line with the "/passive" switch... no luck.
    - Numerous ways of getting the install file onto the server (downloaded a fresh copy directly to the server, downloaded a fresh copy to my local machine then copied it over, copied it over zipped up) (http://social.msdn.microsoft.com/Forums/en-US/sqldataaccess/thread/efd3c1f0-07cd-45ca-a626-2dd0c7ac3e9f).
    - Method 1 from this link. I could not try Method 2 because it requires a server reboot, and in my environment that requires a long change-management process.
    - Verified that I am a local administrator on the server (evidence: I am able to install other applications, such as the 64-bit Office above).
    - Verified that there are no other Office products that should be blocking the installation. The aforementioned install of Excel 2010 64-bit was the first Office product installed on the server.
    - VERY ODD: to test my theory that the production server does not like 32-bit applications, I installed something lightweight: 7-Zip 32-bit installed on the production server with no problems whatsoever.

    Here are some things that I have not tried (I will follow up once I do):

    - Method 2 (as mentioned above). Requires a server reboot.
    - Verifying that the dev and production environments are 100% identical. I've done a cursory check and on the surface they appear to be the same (same OS and SP version), but I need to do a deeper dive to be 100% certain. I had no problems in my dev environment: there, I installed Office 2010 Pro 64-bit (Excel & Office Tools only), then, via the command line with the "/passive" switch, installed Access Database Engine 2010 32-bit.

    I don't know what else to try. Any suggestions or comments?

    Read the article
