Search Results

Search found 13713 results on 549 pages for 'production environment'.

Page 76/549 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • In a web farm environment, should we base the system date/time on the web servers or the database server?

    - by leypascua
    Assuming there are a number of load-balanced web servers in a web farm, is it safe to use DateTime.Now in the application code to get the system date/time, or should we leave this responsibility to the database server? Is there a chance that the machine date/time settings on the servers in the web farm will drift out of sync? And if date/time becomes the responsibility of the DBMS, how does that strategy work when we have load-balanced, replicated databases?
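
    One common pattern, sketched below under the assumption that the back end is SQL Server, is to treat the database as the single source of truth for any timestamp that gets stored or compared, and to rely on NTP only to keep the web servers' own clocks roughly in step for logging:

        -- Read the authoritative clock from the database server, so web-server
        -- clock drift never leaks into stored data.
        SELECT GETUTCDATE() AS current_utc_time;

    With replicated databases the same idea still applies as long as writes (and therefore their timestamps) originate on the primary; the replicas then simply carry those values along.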

    Read the article

  • asp.net and Visual Studio root directory question

    - by Mark Kadlec
    I am seeing something very odd and thought I would ask the Stack Overflow community if they knew the answer. I have an ASP.NET project that runs fine in one environment, but I couldn't figure out what happened to the styles in another. In the first environment (Windows Server 2008), the following link worked fine: <link href="/Styles/09/style.css" rel="stylesheet" type="text/css" /> but in the other environment (Windows 7), I had to change it to this to make it work: <link href="../Styles/09/style.css" rel="stylesheet" type="text/css" /> Notice that the path seems to have shifted up one directory on Windows 7. What's going on? It's as if the "running" directory is now the \bin directory instead of the application root! Which environment is configured correctly, and how do I determine the directory the page executes from? My concern going forward is pushing to a production environment and having to guess which configuration is correct. Any insight would be appreciated!

    Read the article

  • Is it possible to set a parameterized build or pass an environment variable via a hudson build trigger?

    - by Tim
    I'd like to use Hudson to trigger a "promotion" of a build. The build would be a simple script that just copies a release-candidate installer file from one location to another (development dir to release/stable dir). Our build server is not on the public internet, but I want to be able to send an email, a text message or an IM with the build number to Hudson, which will parse the build number and then do the copy/move. Looking at the Jabber/IM plugin, it does not look like this is possible (the parameter part). Has anyone solved this in some way? Should I use some other mechanism? I would prefer not to have to do the manual steps each time (SCP/FTP, etc.); I just want any QA team member to be able to trigger the build server to do the promotion.
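
    If the promotion job can be marked "This build is parameterized", one lightweight route (a sketch only; the host, job, token and parameter names below are invented, and it assumes your Hudson version supports remote triggering of parameterized builds) is to fire it with a single HTTP request, which a small mail/IM gateway script or any QA member's browser could do:

        # Trigger the hypothetical "promote-installer" job, passing the build number
        # to copy from the development directory to the release/stable directory.
        curl "http://hudson.internal:8080/job/promote-installer/buildWithParameters?token=PROMOTE&RELEASE_BUILD=1.2.3.456"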

    Read the article

  • Store code line in a string?

    - by user1342164
    I need to store a line of code in a string so that, if a value is present, the line is included in the generated text, and if it is not present, the line is left out. When I populate the summary text box, if the consulting amount is "" then I don't want to use that line; if it does have an amount, I want to include it. Is this possible? Otherwise I would have to write a bunch of If/Then statements. When I run the code below I get a "can't convert to Double" error.

        Dim ConsultingFee As String

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Session("ConsultingFeeAmount") = "" Then
            Else
                'Store the following line in a string????
                ConsultingFee = +Environment.NewLine + Session("ConsultingFee") + " Amount: " + Session("ConsultingFeeAmount")
            End If
            SummaryTextBox.Text = Session("TeachingHospital") + Environment.NewLine + Session("HospitalAddress") + Environment.NewLine + Session("HospitalCity") + Environment.NewLine + Session("HospitalState") + Environment.NewLine + Session("HospitalZip") + Environment.NewLine + Session("HospitalTIN") + ConsultingFee
        End Sub

    Read the article

  • Is there anything on a local network or desktop environment that could affect JScript execution?

    - by uku
    I know this sounds odd. The JS on my project functions perfectly, except when the web site is accessed using computers at one specific company. To make things even more difficult, the JS fails only about 50% of the time when run from that company. The JS failure occurs with Firefox, Chrome, and IE. I have tested this myself using FF and Chrome on a thumb drive. The browsers on my thumb drive always display my project site perfectly, except when run from a computer on that company's network, where they fail at the same rate as the installed browsers. My JS is using jQuery and making some Ajax calls. The Ajax calls are where the failure is occurring. To diagnose the problem I created a logging function for my Ajax calls and recorded success and failure. Over a one-month period, there were only a handful of failures (about 1%) from all access points other than this company. Oddly enough, the Ajax calls in the logging function are not failing. There is nothing exotic on their machines, just Windows XP SP3. I have never noticed any other unusual behavior from their network. The company is a division of a mega ISP and is on their corporate network. Any suggestions for troubleshooting would be welcome.
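
    One thing that may be worth ruling out (speculative; the URL and the logging helper name below are placeholders) is a caching or filtering proxy on that company's network interfering with Ajax GET requests. Disabling jQuery's Ajax cache makes every request URL unique, so an intermediate proxy cannot serve a stale or truncated copy:

        // Append a cache-busting "_=<timestamp>" parameter to every Ajax request.
        $.ajaxSetup({ cache: false });

        // Shape of one of the failing calls, with the error callback wired into the
        // existing logging so proxy-related failures are recorded with their status.
        $.ajax({
          url: "/status/check",                       // placeholder endpoint
          type: "GET",
          success: function (data) { /* handle the response */ },
          error: function (xhr, textStatus) { logAjaxResult("failed: " + textStatus); }
        });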

    Read the article

  • Best ways to write a method that updates two objects in a multithreaded java environment?

    - by DanielHonig
    Suppose we have a class called AccountService that manages the state of accounts. AccountService is defined as:

        interface AccountService {
            public void debit(Account account);
            public void credit(Account account);
            public void transfer(Account account, Account account1);
        }

    Given this definition, what is the best way to implement transfer() so that you can guarantee that the transfer is an atomic operation? I'm interested in answers that reference plain Java 1.4 code as well as answers that use resources from java.util.concurrent in Java 5.
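
    A minimal sketch of the classic pre-java.util.concurrent approach (not from the question; it assumes a hypothetical Account.getId() giving every account a stable, unique number): always acquire the two account monitors in the same global order, so two concurrent transfers between the same pair of accounts cannot deadlock each other.

        // Sketch: lock both accounts in id order, then apply both sides of the transfer.
        class LockOrderingAccountService implements AccountService {
            public void debit(Account account)  { /* subtract the pending amount */ }
            public void credit(Account account) { /* add the pending amount */ }

            public void transfer(Account from, Account to) {
                Account first  = (from.getId() < to.getId()) ? from : to;   // getId() is an assumption
                Account second = (first == from) ? to : from;
                synchronized (first) {
                    synchronized (second) {
                        debit(from);
                        credit(to);
                    }
                }
            }
        }

    On Java 5, a java.util.concurrent.locks.ReentrantLock held per account supports the same ordered-locking idea, and tryLock lets you back off and retry instead of blocking.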

    Read the article

  • Is it possible to use the ScalaTest BDD syntax in a JUnit environment?

    - by ebruchez
    I would like to describe tests in BDD style e.g. with FlatSpec but keep JUnit as a test runner. The ScalaTest Quick Start does not seem to show any example of this: http://www.scalatest.org/getting_started_with_junit_4 I first tried naively to write tests within @Test methods, but that doesn't work and the assertion is never tested: @Test def foobarBDDStyle { "The first name control" must "be valid" in { assert(isValid("name·1")) } // etc. } Is there any way to achieve this? It would be even better if regular tests can be mixed and matched with BDD-style tests.
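
    One approach (a sketch, assuming ScalaTest's JUnit integration is on the classpath; the class name and the isValid stand-in below are made up) is to keep the spec in FlatSpec style and let JUnit 4 run it through ScalaTest's JUnitRunner, rather than putting the BDD syntax inside @Test methods:

        import org.junit.runner.RunWith
        import org.scalatest.FlatSpec
        import org.scalatest.junit.JUnitRunner

        // JUnit 4 discovers this class via the ScalaTest runner; the body stays BDD-style.
        @RunWith(classOf[JUnitRunner])
        class FirstNameControlSpec extends FlatSpec {

          // Stand-in for the poster's existing helper.
          private def isValid(name: String): Boolean = name.trim.nonEmpty

          "The first name control" must "be valid" in {
            assert(isValid("name·1"))
          }
        }

    Ordinary JUnit test classes and JUnitRunner-annotated specs can then run side by side in the same suite.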

    Read the article

  • How do I set bash environment variables from a script?

    - by James A. Rosen
    I have some proxy settings that I only occasionally want to turn on, so I don't want to put them in my ~/.bash_profile. I tried putting them directly in ~/bin/set_proxy_env.sh, adding ~/bin to my PATH, and chmod +xing the script but though the script runs, the variables don't stick in my shell. Does anyone know how to get them to stick around for the rest of the shell session?
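
    A minimal sketch of why this happens and the usual fix: executing the script starts a child shell, so its exports disappear when that shell exits; sourcing the script runs it in the current shell instead (the proxy host below is a placeholder):

        # ~/bin/set_proxy_env.sh  (example contents)
        export http_proxy="http://proxy.example.com:8080"
        export https_proxy="$http_proxy"

        # In the interactive shell, source it rather than executing it:
        source ~/bin/set_proxy_env.sh      # or the portable form:  . ~/bin/set_proxy_env.sh

    A shell alias or function (e.g. proxy_on) wrapping the source command makes it a one-word toggle.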

    Read the article

  • Can I install Whizzy for Emacs on a Mac (is Mac OS X a unix environment)?

    - by Vivi
    I think my question is pretty stupid, but here it goes: I am using Aquamacs, and I want to install the Whizzy mode. The website for Whizzy says that "it is designed for Unix platforms". I read that Mac OS X is unix certified, but does that mean I can install Whizzy on my mac? If yes, can I install and use it with Aquamacs or do I have to use the Emacs running from the terminal? PS: I don't know whether this question should be posted here or on SuperUser, but as Emacs users seem to hang out here more often, this is the place I chose.

    Read the article

  • Convert IIS / Tomcat Web Application to a multi-server environment.

    - by bill_the_loser
    I have an existing web application built in .NET, running on IIS, that leverages a Java servlet we have running on Tomcat 5.5. We need to scale the application, and I'm confused about what relates to our situation and what we need to do to get the servlet running on multiple servers. Right now I have 4 servers that can each individually process results; it almost seems like all I should have to do is add the ajp13 worker processes from the three additional machines to the machine hosting the load-balancer worker. But I can't imagine it should be that easy. What do I need to do to distribute the Tomcat load to the extra three machines? Thanks.

    Update: The current configuration is using a workers2.properties configuration file. From all of the documentation online I have not been able to determine the distinction between workers.properties and workers2.properties. Most of the examples that I have found configure workers.properties and revolve around adding workers and registering them in the worker.list element. workers2.properties does not appear to have a worker.list element, and the syntax is different enough between the two files that I'm doubtful I can add that element. If I just add my multiple AJP workers to the workers2.properties file, do I need to worry about the apparent lack of a worker.list element?

        [ajp13:localhost:8009]
        channel=channel.socket:localhost:8009
        group=lb

        [ajp13:host2.mydomain.local:8009]
        channel=channel.socket:host2.mydomain.local:8009
        group=lb

        [ajp13:host3.mydomain.local:8009]
        channel=channel.socket:host3.mydomain.local:8009
        group=lb

    A couple of side notes: first, I've noticed that sometimes Tomcat doesn't seem to reload my changes, and I don't know why. Also, I have no idea why this configuration has a workers2.properties and not a workers.properties; I've been assuming that it's based on version, but I haven't seen anything to back up that assumption.

    Read the article

  • Opening a streaming connection to an HTTP server in an iPhone/Cocoa environment?

    - by Paul
    I've been using NSURLConnection to do an HTTP POST to establish the connection. I've also implemented the didReceiveData delegate method to process incoming bytes as they become available. As incoming data comes in via didReceiveData, I add the NSData to a data buffer and try parsing the byte stream if enough data has come in to complete a message segment. I'm having a hard time managing the data buffer (NSMutableData object) to remove bytes that have been parsed into structs. I was curious if there's an easier way. Thanks

    Read the article

  • WCF or Sockets for Communication in a real-time environment?

    - by PieterG
    I have a scenario where I need to send a series of data between 2 clients. The data includes serialized XML containing commands that the other client will need to react to. I will also need to send images across the wire, as I need to provide a chat facility in the form of video/audio chat. I would like a single communication medium for both, as the number of messages/commands might be a few. WCF or sockets?

    Read the article

  • How to manage the CSS of big websites in a team environment without a mess?

    - by jitendra
    Where multiple people can work on the same CSS, is it possible to follow semantic naming rules even on large websites? If I write all the main CSS with semantic names the first time, what guidelines or instructions should I give the other developers to maintain CSS readability and validation, and to let everyone see quickly where others are adding their own CSS when required? Right now everyone just goes to the bottom of the file and writes the classes or IDs they need there, and most of the time they don't use semantic names.

    Read the article

  • Fill data gaps without UNION

    - by Dave Jarvis
    Problem: There are data gaps that need to be filled, possibly using PARTITION BY.

    Query Statement: The select statement reads as follows:

        SELECT count( r.incident_id ) AS incident_tally, r.severity_cd, r.incident_typ_cd
        FROM report_vw r
        GROUP BY r.severity_cd, r.incident_typ_cd
        ORDER BY r.severity_cd, r.incident_typ_cd

    Code Tables: The severity codes and incident type codes are from severity_vw and incident_type_vw.

    Actual Result Data:

        36 0 ENVIRONMENT
        1  1 DISASTER
        27 1 ENVIRONMENT
        4  2 SAFETY
        1  3 SAFETY

    Required Result Data:

        36 0 ENVIRONMENT
        0  0 DISASTER
        0  0 SAFETY
        27 1 ENVIRONMENT
        0  1 DISASTER
        0  1 SAFETY
        0  2 ENVIRONMENT
        0  2 DISASTER
        4  2 SAFETY
        0  3 ENVIRONMENT
        0  3 DISASTER
        1  3 SAFETY

    Any ideas how to use PARTITION BY (or JOINs) to fill in the zero counts?
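
    One way to get the zero rows (a sketch that assumes severity_vw exposes a severity_cd column and incident_type_vw exposes an incident_typ_cd column) is to build every severity/type combination with a CROSS JOIN and then LEFT JOIN the report view onto it, so combinations with no reports still appear:

        SELECT count( r.incident_id ) AS incident_tally,
               s.severity_cd,
               t.incident_typ_cd
        FROM severity_vw s
        CROSS JOIN incident_type_vw t
        LEFT JOIN report_vw r
               ON  r.severity_cd     = s.severity_cd
               AND r.incident_typ_cd = t.incident_typ_cd
        GROUP BY s.severity_cd, t.incident_typ_cd
        ORDER BY s.severity_cd, t.incident_typ_cd

    count(r.incident_id) ignores the NULLs produced by the outer join, so unmatched combinations come back as 0 rather than 1.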

    Read the article

  • Server-environment and configuration: How bad is fread() etc?

    - by zero
    Hello dear community, good day! I run a little site (now for several months) where users access big files, both for download and for streaming to the browser. It's fairly active, so, assuming the worst, how bad is it to have PHP read files stored outside the webroot and then echo them dynamically for the browser to read? My question is: how bad is fread() etc. in this context? zero
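
    For what it's worth, the usual low-memory pattern (a sketch only; the path and content type are placeholders, and any permission checks are assumed to happen before this point) streams the file in fixed-size chunks, so fread() itself is rarely the problem compared with the bandwidth and time each download ties up:

        <?php
        // Stream a file that lives outside the webroot in 8 KB chunks.
        $path = '/data/protected/big-download.zip';   // placeholder path
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . filesize($path));

        $fp = fopen($path, 'rb');
        while (!feof($fp)) {
            echo fread($fp, 8192);   // keeps memory use flat regardless of file size
            flush();                 // push the chunk out to the client
        }
        fclose($fp);

    readfile() performs a similar chunked loop internally and is often simpler when no throttling or access logic is needed; either way, avoid output buffering on these requests so the chunks are not accumulated in memory.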

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs work on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way?

    In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "Looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between of course.

    Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful". We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are nowhere near as far down the road of mature database change management as they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move in to areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.

    But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - Attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing in process

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - Production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - Production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters

    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research

    Research the tool(s) that you want to use. Some tools provide all of the features you would need while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes.

    Don't settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:

    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables
    - Add/drop procedures/functions/views
    - Alter tables (rename columns, add columns, remove columns)
    - Massage data – migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.

    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL-based migration platforms. In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren't cross-platform anyways. IMO stick to SQL-based migrations.

    Reconciling Production

    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod as they are now in sync. Commit this to your VCS of choice.

    Add whatever schema changes tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database. Your tool should support doing this for you. You can add this table to production when you do your next release.

    Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.
    Say No to Shared Dev DBs

    Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes from those that others are working on.

    First prod release

    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation

    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone; a good example was adding transactional support to TSqlMigrations. We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors! Another good example is generating change scripts. We have been manually making these for months now. I found an open source project called Open DB Diff and integrated this with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier!

    Enjoy -Wes

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the respective service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So, we need to populate the database using the post deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS, and you can even have your own method plugged in as part of the post deployment script option.

    Using Data Pump to populate databases

    These are the steps to be followed to implement extending DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

        Figure-1: Full export of database using Data Pump

    2. Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running import
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

        Figure-2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the Customize "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure. [Figure-3: Using the Custom Script option for calling the import SQL]

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    Using Transportable Tablespaces to populate databases

    A copy of all user/application tablespaces will enable this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from Big Endian to Little Endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required and describes the steps required to convert the datafiles and back up the tablespaces.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available in a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: do NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post deployment step.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

    More Information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • Typical size of unit tests compared to production code

    - by Frank Schwieterman
    I'm curious what a reasonable / typical value is for the ratio of test code to production code when people are doing TDD. Looking at one component, I have 530 lines of test code for 130 lines of production code. Another component has 1000 lines of test code for 360 lines of production code. So the unit tests require roughly 3x to 5x as much code. This is for JavaScript code. I don't have much tested C# code handy, but I think for another project I was looking at 2x to 3x as much test code as production code. It would seem to me that the lower that value is (assuming the tests are sufficient), the higher the quality of the tests. Pure speculation; I just wonder what ratios other people see. I know lines of code is a loose metric, but since I code in the same style for both test and production (same spacing format, same amount of comments, etc.) the values are comparable.

    Read the article

  • asp.net mvc UserInteractive mode problem

    - by niao
    Greetings, in my MVC application, when I try to connect to a WCF service I get this error: "It is invalid to show a modal dialog or form when the application is not running in UserInteractive mode. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application." The problem only occurs on the production server (Windows Server 2008, IIS 7); on my development server (Windows XP, IIS 6) it works OK. Additionally, I can connect from ASP.NET MVC development to the production WCF service, but I can't do this on the production server (ASP.NET MVC production to production WCF service).

    Read the article

  • Sqlite3 activerecord :order => "time DESC" doesn't sort

    - by Ole Morten Amundsen
    rails 2.3.4, sqlite3

    I'm trying this:

        Production.find(:all, :conditions => ["time > ?", start_time.utc], :order => "time DESC", :limit => 100)

    The condition works perfectly, but I'm having problems with the :order => "time DESC". By chance, I discovered that it worked on Heroku (testing with heroku console), which runs PostgreSQL. However, locally, using sqlite3, new entries are sorted after old ones, no matter what I set time to. Like this (output has been manually stripped; the second entry is new):

        Production id: 2053939460, time: "2010-04-24 23:00:04", created_at: "2010-04-24 23:00:05"
        Production id: 2053939532, time: "2010-04-25 10:00:00", created_at: "2010-04-27 05:58:30"
        Production id: 2053939461, time: "2010-04-25 00:00:04", created_at: "2010-04-25 00:00:04"
        Production id: 2053939463, time: "2010-04-25 01:00:04", created_at: "2010-04-25 01:00:04"

    It seems like it sorts on the primary key, id, not time. Note that the query works fine on Heroku, returning a correctly ordered list! I like sqlite, it's so KISS, I hope you can help me... Any suggestions?

    Read the article
