Search Results

Search found 4561 results on 183 pages for 'production'.

Page 9/183 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries. For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample shown below). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join. The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten.

    This ‘fully-optimized’ plan has an estimated cost of around 0.33 units. The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too? To answer that, let’s look at some other ways to formulate this query. This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones. The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above. It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan: the results are the same (23 rows) but this time the Filter is pushed below the join! The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join. In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above.

    The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive.

    Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows). The problem is that it lists products with no history rows, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.

    In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL). To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:

        -- Scalar aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709;

        -- Vector aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709
        GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case. The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away.

    In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate. This is something I would like to see exposed in show plan so I suggested it on Connect. Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.

    Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) BETWEEN 1 AND 9
        );

    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans. Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY th.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :) The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier. What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too). Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless of which ProductID is logically being evaluated).

    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

    WHERE Clause

        SELECT p.Name
        FROM Production.Product AS p
        WHERE
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        ) < 10;

    APPLY

        SELECT p.Name
        FROM Production.Product AS p
        CROSS APPLY
        (
            SELECT NULL
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        ) AS ca (dummy);

    FROM Clause

        SELECT q1.Name
        FROM
        (
            SELECT p.Name,
                cnt =
                (
                    SELECT COUNT_BIG(*)
                    FROM Production.TransactionHistory AS th
                    WHERE th.ProductID = p.ProductID
                    GROUP BY ()
                )
            FROM Production.Product AS p
        ) AS q1
        WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :)

        SELECT q.Name
        FROM
        (
            SELECT p.Name,
                cnt =
                (
                    SELECT SUM(1)
                    FROM Production.TransactionHistory AS th
                    WHERE th.ProductID = p.ProductID
                )
            FROM Production.Product AS p
        ) AS q
        WHERE q.cnt < 10;

    The semantics of SQL aggregates are rather odd in places. It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates. As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White
    Twitter: @SQL_Kiwi
    email: [email protected]

    Read the article

  • Node.js server crashes, database operations halfway?

    - by Ranadeep
    I have a node.js app with a MongoDB backend going to production in a week, and I have a few doubts about how to handle app crashes and restarts. Say I have a simple route /followUser in which I have two database operations:

        /followUser
        -----> Update User1 Document.followers = User2
        -----> Update User2 Document.followers = User1
        -----> Some other MongoDB (via Mongoose) operation

    What happens if there is a server crash (due to power failure, or maybe the remote MongoDB server being down), like this scenario:

        -----> Update User1 Document.followers = User2
        SERVER CRASHED, FOREVER RESTARTS NODE

    What happens to the operations below? The system is now in an inconsistent state and I may get an error every time I ask for User2's followers.

        -----> Update User2 Document.followers = User1
        -----> Some other MongoDB (via Mongoose) operation

    Also, please recommend good logging and restart/monitor modules for apps running on Linux.

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in hopes of making clearer what I am after: I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC program, it goes to production. Recently, however, we had a bug in production which required an immediate production fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA the code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?

    Read the article

  • What to do with database in dev/production phases of a website?

    - by TheLQ
    For a while now I've been keeping a website I'm developing in the standard dev/production phases. It's been pretty simple: a Mercurial repo for dev, a repo for production. Do work in dev, get it approved, push to production. But now I'm trying to apply this process to a new website that has a database, and I am struggling to figure out a development strategy. What I didn't mention above is that I do all my work in my own repo, push it to dev, then later push it to production, so it's 3 different servers. So how do I manage my database? The obvious solution of a mysqldump on every commit isn't going to happen, and a dump at the end of the day isn't all that helpful when you later want to undo one change that happened in the middle of the day. What is the best way to accomplish this?
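    One common approach (offered here only as a sketch, not a prescription) is to keep numbered migration scripts in the same Mercurial repo as the code and record which ones have been applied in a small bookkeeping table, so each environment can be brought up to date by running only the scripts it has not seen yet. The table and file names below are illustrative, not taken from the question:

        -- Run once per environment to create the bookkeeping table
        CREATE TABLE schema_version (
            version     INT        NOT NULL PRIMARY KEY,
            applied_at  TIMESTAMP  NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- Example migration file: 0002_add_last_login.sql
        ALTER TABLE users ADD COLUMN last_login DATETIME NULL;
        INSERT INTO schema_version (version) VALUES (2);

    Undoing a single mid-day change then means writing and applying a reverse migration, rather than restoring a full mysqldump.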

    Read the article

  • SSL Certificates, two-way authentication and loadbalancers

    - by 5arx
    We're looking to implement two-way authentication with client certificates for a privileged subset of our application users. The idea is that if a certificate is detected, the user will be asked for an additional password/PIN, and that will be used to verify the certificate and user. Ordinary users will continue to authenticate themselves via the standard login mechanism. Our production environment (hosted by a well-known company) comprises load-balanced application servers, and I'm unclear as to how this set-up will handle the certificates and whether there are any pitfalls I should be aware of. I would very much appreciate some thoughts, comments or real-world advice on the subject.

    Read the article

  • Creating a Sharepoint Development Environment from an Existing Production Environment

    - by Starky
    I have very little experience using SharePoint but a good amount using Visual Studio 2008, SQL Server 2005, Windows Server 2003 and IIS6. I need to create a development environment for a SharePoint 2007 system that will be used internally. The system is already deployed over two servers - one of the servers simply holds the database and everything else is on the other server. We are also using WSS 3.0. I have created a Virtual Machine with all the required software, including a clean installation of SharePoint Server 2007, and I wish to use this single Virtual Machine as the development environment. Right now there are no custom assemblies being used on the production server as far as I am aware. There are 3 websites: one over port 80 for user access, one over a custom port for central administration, and one over another custom port. Not sure what the last one is for, but my blank instance of SharePoint on my Virtual Machine also has something similar. I attempted to use the STSADM tool to backup and restore these 3 sites from my production environment to my development environment, and while the operations completed successfully, the central administration site in my development environment acted strangely and I could not access port 80 - I did not seem to have the correct credentials for it. I suspected that it would not be so simple, so could I please have advice on how to create my development environment so that I can use it to deploy updates to the production one.

    Read the article

  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a customer's production environment. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server. The data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server). In our local test environment, there's one particular query that executes with these durations (first we clear the cache; the first run takes longer, but from then on it's cached):

        300ms
        20ms
        15ms
        17ms

    In the customer's production environment, the SQL Server is more powerful, and these are the durations (I didn't have the rights to clear the cache; I will try this tomorrow):

        2500ms
        2600ms
        2400ms

    The servers in the customer's production environment are more powerful, but they do have virtual servers (we don't). What could be the cause... Not enough memory? Fragmentation? Physical storage? How would you tackle this performance problem? EDIT: Some people have asked me if the data set is equal, and it is. I restored their database on our environment. It's true that this was the first thing I looked at. (@Everyone: I added the edit because it will be the first thing that many will think of.)
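    A few standard T-SQL diagnostics may help narrow down where the extra time goes; they require VIEW SERVER STATE / sysadmin rights, and DBCC FREEPROCCACHE flushes all cached plans, so it should only be run on the customer's instance with their agreement:

        -- Compare compile vs. run time and I/O for the slow query
        SET STATISTICS IO, TIME ON;
        -- ... run the query here ...
        SET STATISTICS IO, TIME OFF;

        -- Top waits since the last restart: is the time going to I/O, CPU or blocking?
        SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        -- Clear the plan cache (test systems only, or with explicit approval)
        DBCC FREEPROCCACHE;

    Comparing the actual execution plans captured in both environments is usually the quickest way to spot a difference in estimates or storage behaviour.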

    Read the article

  • SVN marking file production ready

    - by dan.codes
    I am kind of new to SVN; I haven't used it in detail, basically just checking out trunk, committing, then exporting and deploying. I am working on a big project now with several developers and we are looking for the best deployment process. The one thing we are hung up on is the best way to tag, branch and so on. We are used to CVS, where all you have to do is commit a file and tag it as production ready, and if not, that code will not get deployed. I see that SVN handles tagging differently than CVS. I figure I am looking at this and making it overly complex. It seems the only way to work on a project and commit files without them being in the production code is to do it in a branch and then merge those changes when you are ready for them to be deployed. I am assuming you could also be working on other code that should be deployed, so you would have to be switching between working copies, because otherwise you are working on a branch that isn't getting mixed in with the trunk or production branch? This process seems overly complex, and I was wondering if anyone could give me what you think is the best process for managing this.

    Read the article

  • Determine whether app is communicating with APNS sandbox or production environment

    - by goldierox
    I have push notifications set up in my app. I'm trying to determine whether the device token I've received from APNS in the application:didRegisterForRemoteNotificationsWithDeviceToken: method came from the sandbox or the production environment. If I can distinguish which environment initialized the token, I'll be able to tell my server which environment to send the push notification to. I've tried using the DEBUG macro to determine this, but I've seen some strange behavior with it and don't trust it to be 100% correct.

        #ifdef DEBUG
            BOOL isProd = NO;   // debug builds register against the APNS sandbox
        #else
            BOOL isProd = YES;  // release builds use the production environment
        #endif

    Ideally, I'd be able to examine the aps-environment entitlement (value is Development or Production) in code, but I'm not sure if this is even possible. What's the proper way to determine whether your app is communicating with the APNS sandbox or production environment? I'm assuming that the server needs to know this in the first place; please correct me if this assumption is incorrect. Edited: Apple's documentation on Provider Communication with APNS details the difference between communicating with the sandbox and production. However, the documentation doesn't give information on how to keep the token registration (from the iOS client app) and the server communication consistent.

    Read the article

  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction

    We are adopting Click Once as a deployment standard for thick .NET application clients. The latest version of this tool has matured to a point where it can be used in an enterprise environment. This guide will identify how to use Click Once deployment and promote code through the dev, test and production environments.

    Why Use Click Once over SCCM

    If we already use SCCM, why add Click Once to the deployment options? The advantage of Click Once is its ability to update the code in a single location and have the update flow automatically down to the user community. There have been challenges in the past with getting configuration updates to download, but these can now be achieved. With SCCM you can do the same thing, but it then needs to be packaged and pushed out to users. Each time a new user is added to an application, time needs to be spent by an administrator to push out any required application packages. With Click Once the user would go to a web link and the application and prerequisites will automatically get installed.

    New Deployment Steps Overview

    The deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production. To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools. Although this is the easiest path, it can introduce untested code into production and lead to unexpected results.

    1. Deploy the client application to a development web server using the Visual Studio 2008 Click Once deployment tools. Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio. (For details see ‘Deploying Click Once Code from Visual Studio’.)

    2. xcopy the code to the test server. Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see ‘Migrating Click Once Code to a new Server without using Visual Studio’.)

    3. xcopy the code to the production server. Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines. Finally, make sure the setup.exe contains the production install URL. If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xcopy the setup.exe file from dev to production. (For details see ‘Migrating Click Once Code to a new Server without using Visual Studio’.)

    Detailed Deployment Steps

    Deploying Click Once Code from Visual Studio

    Open Visual Studio and create a new WinForms or WPF project. In the Solution Explorer, right-click on the project and select ‘Publish’ in the context menu. The ‘Publish Wizard’ will start. Enter the development deployment path. This could be a local directory or web site. When first publishing the solution, set this to a development web site and Visual Studio will create a site with an install.htm page. Click Next. Select whether the application will be available both online and offline, then click Finish. Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published. This time the Publish Wizard contains an additional option.

    The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should be pointing to the development site to avoid accidentally installing the production application.

    Visual Studio will publish the application to the desired location; in the process it will create an anonymous ‘pfx’ certificate to sign the deployment configuration files. A production certificate should be acquired in preparation for deployment to production.

    Directory structure created by Visual Studio
    Application files created by Visual Studio
    Development web site (install.htm) created by Visual Studio

    Migrating Click Once Code to a new Server without using Visual Studio

    To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files. The MageUI tool is usually located in the ‘C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin’ folder, or it can be downloaded from the web.

    When deploying to a new environment, copy all files in the project folder to the new server - in this case the ‘ClickOnceSample’ folder and contents. The old application versions can be deleted, in this case ‘ClickOnceSample_1_0_0_0’ and ‘ClickOnceSample_1_0_0_1’. Open IIS Manager and create a virtual directory that points to the project folder. Also make the publish.htm the default web page.

    Run the MageUI tool and then open the .application file in the root project folder (in this case in the ‘ClickOnceSample’ folder). Click on Deployment Options in the left-hand list, update the URL to the new server URL and save the changes. When MageUI tries to save the file it will prompt for the file to be signed. This step cannot be bypassed if you want the Click Once deployment to work from a web site. The easiest solution to this for test is to use the auto-generated certificate that Visual Studio created for the project. This certificate can be found with the project source code. To save time, go to File > Preferences and configure the ‘Use default signing certificate’ fields.

    Future deployments will only require application files to be transferred to the new server. The only difference is that when updating the .application file, the ‘Version’ must be updated to match the new version and the ‘Application Reference’ has to be updated to point to the new .manifest file.

    Updating the Configuration File of a Click Once Deployment Package without using Visual Studio

    When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration. We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes. A new version of the application can be created by copying the latest version folder (in this case ClickOnceSample_1_0_0_2), pasting it into the Application Files directory and renaming the copy ‘ClickOnceSample_1_0_0_3’. In the new folder, open the configuration file in Notepad and make the configuration changes.

    Run MageUI and open the manifest file in the newly copied directory (ClickOnceSample_1_0_0_3). Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3), then save the file.

    Open the .application file in the root folder. Again, update the version to 1.0.0.3. Since the file has not changed, the Deployment Options/Start Location URL should still be correct. The Application Reference needs to be updated to point to the new version's .manifest file. Save the file.

    The next time a user runs the application, the new version of the configuration file will be downloaded. It is worth noting that there are two different types of configuration parameters: application and user. With Click Once deployment the difference is significant. When an application is downloaded, the configuration file is also brought down to the client machine. The developer may have written code to update the user parameters in the application. As a result, each time a new version of the application is downloaded the user parameters are at risk of being overwritten. With Click Once deployment the system knows if the user parameters are still the default values. If they are, they will be overwritten with the new default values in the configuration file. If they have been updated by the user, they will not be overwritten.

    Settings configuration view in Visual Studio

    Production Deployment

    When deploying the code to production it is prudent to disable the development and test deployment sites. This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment. If the sites are active there is no way to know if the application was downloaded from the production deployment and not redirected to test or dev.

    Troubleshooting

    Clicking the install button on the install.htm page fails with:

    Error: URLDownloadToCacheFile failed with HRESULT '-2146697210'
    Error: An error occurred trying to download <file>

    This is due to the setup.exe file pointing to the wrong location. ‘The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should be pointing to the development site to avoid accidentally installing the production application.’

    Read the article

  • Bad value for type timestamp on production server

    - by Juan Javaloyes
    I'm working with: Seam 2.2.2 + Hibernate + RichFaces + JBoss 5.1 + PostgreSQL. I have a module which needs to load some data from the database. Easy. The problem is that in development it works fine, 100%, but when I deploy to my production server and try to get the data, an error is raised:

        could not read column value from result set: fechahor9_504_; Bad value for type timestamp : [C@122e5cf
        SQL Error: 0, SQLState: 22007
        Bad value for type timestamp : [C@122e5cf
        javax.persistence.PersistenceException: org.hibernate.exception.DataException: could not execute query
        [more errors]
        Caused by: org.postgresql.util.PSQLException: Bad value for type timestamp : [C@122e5cf
        at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:232)
        [more errors]
        Caused by: java.lang.NumberFormatException: Trailing junk on timestamp: ''
        at org.postgresql.jdbc2.TimestampUtils.loadCalendar(TimestampUtils.java:226)

    I can't understand why it works on my machine (development) and not in production. Any clues? Has anyone gone through the same problem? It is exactly the same compilation.
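    Since the same build works against the development database but not the production one, a cheap first check is to confirm that the column really is a timestamp in production and that the two servers run comparable PostgreSQL versions. The table and column names below are placeholders for whatever entity maps the fechahor... field:

        -- Compare server versions between dev and production
        SELECT version();

        -- Confirm the column type in production matches dev
        SELECT column_name, data_type
        FROM information_schema.columns
        WHERE table_name = 'your_table'
          AND column_name LIKE 'fecha%';

    A type mismatch (for example a character column in production where dev has timestamp) can produce exactly this kind of "Bad value for type timestamp" error from the JDBC driver.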

    Read the article

  • Play Framework: Real-world production experiences?

    - by Rob
    Has anyone used the Play framework for a reasonably complex or large, deployed production app yet? If so, I would like to hear what the pros and cons of that experience were and what you might do differently if you could start over. In particular, I'm interested in how well it worked for projects that are big enough that it requires a small team and/or apps that had requirements that go beyond what demo/test projects can (e.g. scalability requirements). The other question on this topic does not cover use in production, and as we all know, the true wins (and gotchas) of platforms often don't show up until used in battle.

    Read the article

  • C# / Visual Studio: production and test code placement

    - by Patrick Linskey
    Hi, in JavaLand I'm used to creating projects that contain both production and test code. I like this practice because it simplifies testing of internal code without artificially exposing the internals in a project's published API. So far, in my experiences with C# / Visual Studio / ReSharper / NUnit, I've created separate projects (i.e., separate DLLs) for production and test code. Is this the idiom, or am I off base? If this is idiomatically correct, what's the right way to deal with exposing classes and methods for test purposes? Thanks, -Patrick

    Read the article

  • RoR app running on mongrel development but not production

    - by ANaimi
    Hello all, this is my first stab at Ruby on Rails. I just deployed a very simple app to Heroku. The thing is that my app runs flawlessly on mongrel in development; when I run it with "mongrel_rails start -e production", however, I get the error "We're sorry, but something went wrong." For the life of me, I couldn't debug this. Heroku logs are not returning anything, the Exceptional addon in Heroku is not returning anything, and I cannot find mongrel.log on my Windows machine (when I run mongrel using: mongrel_rails start -e production -d). I'm using Rails 2.3.5 and sqlite3, with Bundler to pack my gems. I was told that probably Rails is not booting up correctly. I can't find any other way to diagnose this. Any ideas? Thanks, ANaimi

    Read the article

  • MongoDB or CouchDB - fit for production?

    - by Alan
    I was wondering if anyone can tell me if MongoDB or CouchDB are ready for a production environment. I'm now looking at these storage solutions (I'm favouring MongoDB at the moment), however these projects are quite young and so I foresee that I'm going to have to work quite hard to convince my manager that we should adopt this new technology. What I'd like to know is:

    1) Who is using MongoDB or CouchDB today in a production environment?
    2) How are you using MongoDB/CouchDB?
    3) What problems (if any) did you come across when you adopted this new storage mechanism (and how did you overcome them)?
    4) How did you deal with any migration issues that you had to deal with?
    5) Do you have any good/bad experiences with either of these solutions that you'd like to share?

    Thanks.

    Read the article

  • Rails passenger production cache definition

    - by mark
    Hi, I'm having a bit of a problem with storing data in the Rails cache under production. What I currently have is this, though I have been trying to work this out for hours already:

        # production.rb
        config.cache_store = :mem_cache_store

        if defined?(PhusionPassenger)
          PhusionPassenger.on_event(:starting_worker_process) do |forked|
            if forked
              Rails.cache.instance_variable_get(:@data).reset
            end
          end
        end

    I am using a cron job to (try to) save remote data to the cache for display. It is logged as being written to the cache, but reportedly null. If anyone could point me toward a decent current tutorial on the subject or offer guidance I'd be extremely grateful. This is really, really frustrating me. :(

    Read the article

  • Uploading files won't work on production server

    - by Camran
    I have a PHP script which uploads images to a temporary folder on the server. This works on my local computer with the local (virtual) server (WampServer). However, on the production server (Linux) I can't get it working. Everything loads as it should, except that the file doesn't show up on the server. My question is, is there anything I should think about when moving to a production server with uploading images or files? I have set permissions to 777 on the folder where the images are supposed to go, and also on the PHP code which uploads them. Thanks

    Read the article

  • Correct way to get absolute url in django

    - by dreamiurg
    A problem I stumbled upon recently; even though I solved it, I would like to hear your opinion on what the correct/simple/adopted solution would be. I'm developing a website using Django + Python. When I run it on my local machine with "python manage.py runserver", the local address is http://127.0.0.1:8000/ by default. However, on the production server my app has a different URL, with a path - like "http://server.name/myproj/". I need to generate and use permanent URLs. If I'm using {% url view params %}, I'm getting paths that are relative to /, since my urls.py contains this:

        urlpatterns = patterns('',
            (r'^(\d+)?$', 'myproj.myapp.views.index'),
            (r'^img/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/img' }),
            (r'^css/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/css' }),
        )

    So far, I see 2 solutions:

    modify urls.py to include '/myproj/' in the case of a production run
    use request.build_absolute_uri() for creating the link in views.py, or pass some variable with 'hostname:port/path' in templates

    Are there prettier ways to deal with this problem? Thank you.

    Read the article

  • PHP Can't connect to MySQL on the production server

    - by Jairo Santos
    I'm having problems with connections to MySQL through a PHP script. The MySQL user is root and I added GRANTs to root@'%' so I can connect from anywhere. Let's assume my MySQL host is "bigboy.com.br". The funny part is that from my local machine, on my test server, the script can connect to the MySQL server normally. But on the dedicated server where MySQL is running, the same PHP script gives me an "Access denied for 'root'@'bigboy.com.br'" error.
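    One thing worth checking: when the script runs on the server itself, MySQL may match the connection against 'root'@'localhost' or 'root'@'bigboy.com.br' rather than 'root'@'%', and those are separate accounts with their own passwords and grants. A quick way to see what exists (run as an account that can read the mysql schema):

        -- List the root accounts and the hosts they may connect from
        SELECT user, host FROM mysql.user WHERE user = 'root';

        -- Show the privileges actually granted to each candidate account
        SHOW GRANTS FOR 'root'@'%';
        SHOW GRANTS FOR 'root'@'bigboy.com.br';

    If a host-specific account is the one being matched, it needs the same password and privileges as 'root'@'%', or the PHP script should connect to whichever host name maps to the intended account.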

    Read the article

  • Celery and Django : How to start at boot in production env (linux)

    - by llazzaro
    Hello, I have an app that uses Celery and Django to run distributed tasks (like sending email, crawling the web, etc). The app was never in prod, so I always start celeryd with ./manage.py celeryd. I want to set up a pre-prod env on Linux, and I will need information on how to make an init.d script to start celeryd for Django. (I have made some init.d scripts before; no need for a complete script, just the relevant part.) Thanks!

    Read the article

  • Is virtualenv suitable for a production server?

    - by gnufs
    I'm planning to set up a Python app (Pyblosxom) on my server and am considering running it in its own virtualenv sandbox with --no-site-packages. I'm hoping that such a setup would be easily portable and maintainable over the years. However, I've only used virtualenv for development environments that recreate a certain server setup locally, and most sources about virtualenv seem to mention it only for such a use. Is there any drawback to running a Python app from a virtualenv on a live server?

    Read the article

  • Issues with Nginx + Passenger production setup - loading time/request time delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have Nginx as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server which is running on its own VM, separate from my Nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, the query takes maybe 3-4 seconds. The second I flood the web server with requests, I will choke out the web server and have requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and the usage is not that high. Each server has more than enough memory; even CPU usage on the Rails server isn't more than 85%, which is high but not maxed out. Is my problem related to my Nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean. EDIT: to see exactly what I mean relating to the database query, see http://207.245.4.215/products
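    Before looking at the proxy, it may be worth confirming how much of the 3-4 seconds is spent inside PostgreSQL itself, since a query that slow ties up a Passenger worker for its full duration and will quickly exhaust the worker pool under load. A rough sketch of the checks (the products query below is a placeholder for whatever the page actually executes):

        -- Show where the time goes for the products query
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM products ORDER BY created_at DESC;

        -- Check for sequential scans / missing indexes on the hot tables
        SELECT relname, seq_scan, idx_scan
        FROM pg_stat_user_tables
        ORDER BY seq_scan DESC
        LIMIT 10;

    If the query itself is the bottleneck, adding an index or caching the rendered page will do far more than tuning Nginx.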

    Read the article

  • Add a server between router and switch (production)

    - by Kossel
    I have a small office network basically like below; there are more routers/PCs connected to S1. As you can see, the router is doing the job of DHCP and DNS, but now I wish to add a Linux server between R1 and S1, so I can monitor the network traffic and do other more advanced server admin stuff. The whole office network is 192.168.1.x and people are using their computers every day. What network configuration should the new Linux server have (both interfaces) in order to minimize the changes needed in the network? I tried to change R1's IP to 192.168.100.1, then add the server with FE0/0 192.168.100.1 and FE0/1 192.168.1.1, but it looks like I cannot ping the original router.

    Read the article

  • Production LAMP server

    - by user36996
    Hi, I want to set up an internal development server (LAMP). I need the web team to be able to access different development sites, i.e. example1.local, example2.local, example3.local, etc., from within the network. I believe it would be something to do with DNS? Any help would be appreciated. Kyle

    Read the article
