Search Results

Search found 409 results on 17 pages for 'prod'.

Page 6 of 17

  • Is this a dual monitor reset bug?

    - by Tentresh
    My two displays are: Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display) External display (1920x1080) A few minutes after I login to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log: [ 60.852] (II) PM Event received: Capability Changed [ 60.852] I830PMEvent: Capability change [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 132.920] (II) intel(0): Printing DDC gathered Modelines: [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled Whereas right on startup or manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation: [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 1562.382] (II) intel(0): Printing DDC gathered Modelines: [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with resolution on every login is not bearable.
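
    A workaround until the underlying reset is fixed: re-apply the layout with xrandr from a small script (run it after login or bind it to a key). This is only a hedged sketch — the output names LVDS1 and HDMI1 and the vertical offset are assumptions; check the output of xrandr -q for the real names on this machine.

        #!/bin/sh
        # Hedged sketch: restore the side-by-side dual-monitor layout after a reset.
        # Output names (LVDS1, HDMI1) and positions are assumptions; adjust to xrandr -q.
        xrandr --output LVDS1 --mode 1280x800  --pos 0x280 \
               --output HDMI1 --mode 1920x1080 --pos 1280x0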

    Read the article

  • Why is a software development life-cycle so inefficient?

    - by user87166
    Currently the software development lifecycle followed in the IT company I work at is: The "Business" works with a solution manager to build a Business Requirement document. The solution manager works with the Program Manager to build a Functional Spec. The PM works with the engineering lead to develop a release plan, and with the engineering team to develop technical specifications. If there are any clarifications required, developers contact the PM, who contacts the solution manager, who contacts the business, and all the way back, introducing a latency of nearly 24 hours and massive email chains for any clarification. By the time the tech spec is made, nearly 1 month has passed in back and forth. Now, 2 weeks go to development while the test team writes test cases. Code is dropped formally to test, and test starts raising bugs. Even if there is 1 root cause for 10 different issues, and it's an easily fixed one, developers are not allowed to give fresh code to test for the next 1 week. After 2-3 such drops to test, the code is given to the ops team as a "golden drop" (2 months have passed from the beginning). The ops team will now deploy the code in a staging environment. If it runs stable for a week, it will be promoted to UAT, and after 2 weeks of that it will be promoted to prod. If there are any bugs found here, well, applying for a visa requires less paperwork. This entire process is followed even if a single SSRS report is to be released. How do other companies process such requirements? I'm wondering why the business cannot just drop the requirements to developers, who build and deploy to UAT themselves, expose it to the business who raise functional bugs, and after fixing those promote to prod (even for more complex stuff).

    Read the article

  • Ubuntu 12.04 dual monitor reset bug

    - by Tentresh
    My two displays are: Intel GMA x4500 Laptop (1280x800 native resolution of the built-in display) External display (1920x1080) A few minutes after I log in to my dual monitor setup, it gets reset to mirror screens. If I restore the settings via the Displays application, everything is fine. On each reset, the following messages are written into /var/log/Xorg.0.log: [ 60.852] (II) PM Event received: Capability Changed [ 60.852] I830PMEvent: Capability change [ 132.920] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 132.920] (II) intel(0): Printing DDC gathered Modelines: [ 132.920] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 134.228] (II) intel(0): Allocated new frame buffer 1280x800 stride 5120, tiled Whereas right on startup or after a manual resolution reset, /var/log/Xorg.0.log reports the expected frame buffer allocation: [ 1562.382] (II) intel(0): EDID vendor "SEC", prod id 12869 [ 1562.382] (II) intel(0): Printing DDC gathered Modelines: [ 1562.382] (II) intel(0): Modeline "1280x800"x0.0 68.94 1280 1296 1344 1408 800 801 804 816 -hsync -vsync (49.0 kHz) [ 1576.740] (II) intel(0): Allocated new frame buffer 3200x1080 stride 12800, tiled Is Ubuntu 12.04 not compatible with my video card? Can this be solved within Ubuntu? I like its interface, but manually fiddling with resolution on every login is not bearable.

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.   First prod release Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.   Evaluation After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!   Enjoy -Wes
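
    The mechanics described above boil down to a version/change-log table plus guarded change scripts. The sketch below is only illustrative — real tools (TSqlMigrations, RoundHouse, etc.) create their own variant of this table, and dbo.Customer and the script name are made-up examples.

        -- Hedged sketch of a schema change log and a rerunnable change script.
        CREATE TABLE dbo.SchemaChangeLog (
            ChangeId   INT IDENTITY(1,1) PRIMARY KEY,
            ScriptName NVARCHAR(255) NOT NULL,
            AppliedOn  DATETIME      NOT NULL DEFAULT GETDATE()
        );
        GO

        -- A change script guarded so an accidental re-run is a no-op.
        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog
                       WHERE ScriptName = '0001_add_customer_email.sql')
        BEGIN
            ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
            INSERT INTO dbo.SchemaChangeLog (ScriptName) VALUES ('0001_add_customer_email.sql');
        END
        GO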

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production, and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, then making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.
    And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, do the change, save it, and then go back to source control to check it in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that the SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it?) and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier, or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that is followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script was only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.
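
    For readers who want to see the footnote's pattern concretely, here is a minimal T-SQL sketch of such a rerunnable stored procedure script. The object name dbo.GetOrders, its body, and the AppUser role are placeholders, not anything from the article.

        -- Hedged sketch of the drop-and-recreate script pattern described above.
        IF EXISTS (SELECT 1 FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.GetOrders') AND type = N'P')
            DROP PROCEDURE dbo.GetOrders;
        GO

        CREATE PROCEDURE dbo.GetOrders
            @CustomerId INT
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT OrderId, OrderDate, Total
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId;
        END
        GO

        GRANT EXECUTE ON dbo.GetOrders TO AppUser;   -- the permissions section
        GO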

    Read the article

  • Sporadic name resolution failure happening on web service call

    - by ansleygal
    One of our WCF service applications calls a separate third-party web service to submit information. We are getting the following error every so often, but not all the time: System.Net.WebException: The remote name could not be resolved: 'ws.examplesite.net' at System.Net.HttpWebRequest.GetRequestStream() at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) The weird thing is that after the error happens, we can hit "Submit" again a second later and it will go through just fine. We have checked and double checked with our network guys and they have confirmed that DNS is correct, and they have done multiple nslookups in a row to confirm. This is happening in all environments (dev, test and prod). We use the third party's test and prod URLs, and it is happening when we point to both. Does anyone have any other troubleshooting techniques for this, or any reason this would happen? Much thanks, ~Ansley
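
    Not a root-cause fix, but since a manual resubmit a second later succeeds, a bounded retry around the proxy call can paper over the transient resolution failure while the DNS issue is chased down. A hedged C# sketch — ThirdPartyClient, SubmitInfo, SubmitRequest and SubmitResult are placeholder names for the generated proxy types.

        // Hedged sketch: retry only the transient name-resolution failure, a few times.
        public static SubmitResult SubmitWithRetry(ThirdPartyClient client, SubmitRequest request)
        {
            const int maxAttempts = 3;
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return client.SubmitInfo(request);
                }
                catch (System.Net.WebException ex)
                {
                    bool transient = ex.Status == System.Net.WebExceptionStatus.NameResolutionFailure;
                    if (!transient || attempt >= maxAttempts)
                        throw;                                      // give up, surface the original error
                    System.Threading.Thread.Sleep(1000 * attempt);  // brief backoff before retrying
                }
            }
        }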

    Read the article

  • python pdb not breaking in files properly?

    - by YGA
    Hi Folks, I wish I could provide a simple sample case that occurs using standard library code, but unfortunately it only happens when using one of our in-house libraries that in turn is built on top of sql alchemy. Basically, the problem is that this break command: (Pdb) print sqlalchemy.engine.base.__file__ /prod/eggs/SQLAlchemy-0.5.5-py2.5.egg/sqlalchemy/engine/base.py (Pdb) break /prod/eggs/SQLAlchemy-0.5.5-py2.5.egg/sqlalchemy/engine/base.py:946 Is just being totally ignored, it seems, by pdb. As in, even though I am positive the code is being hit (both because I can see log messages, and because I've used sys.settrace to check which lines in which files are being hit), pdb is just not breaking there. I suspect that somehow the use of an egg is confusing pdb as to what files are being used (I can't reproduce the error if I use a non-egg'ed library, like pickle; there everything works fine). It's a shot in the dark, but has anyone come across this before? Thanks, /YGA
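
    Two workarounds worth trying, offered as guesses since the egg path handling is the suspect: break on the function object instead of a file path (pdb then takes the filename from the function's own code object), or drop a hard breakpoint into your own calling code. run_query below is a placeholder for wherever your code calls into SQLAlchemy.

        # Hedged sketch of both workarounds.
        import pdb
        import sqlalchemy.engine.base  # imported so the dotted name below resolves

        # Option 1: at the (Pdb) prompt, break by dotted function name instead of a path:
        #   (Pdb) break sqlalchemy.engine.base.Connection.execute
        # pdb reads the filename from the function's code object, sidestepping any
        # mismatch between the path you typed and the path Python recorded for the egg.

        # Option 2: a hard breakpoint at the call site in your own code.
        def run_query(conn, stmt):
            pdb.set_trace()           # always stops here, however sqlalchemy is packaged
            return conn.execute(stmt)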

    Read the article

  • Unix shell script with Iseries command

    - by user293058
    I am trying to FTP a file from Unix to an AS/400 and execute an iSeries command in the script. The FTP transfer itself is working fine, but I am getting an error on the SBMJOB command: HOST=KCBNSXDD.svr.us.bank.net USER=test PASS=1234 #This is the password for the FTP user. ftp -env $HOST << EOF # Call 2. Here the login credentials are supplied by calling the variables. user $USER $PASS # Call 3. Here you will change to the directory where you want to put or get cd "\$QARCVBEN" # Call4. Here you will tell FTP to put or get the file. #Ebcdic #Mode b quote site crtccsid *user quote site crtccsid *sysval put prod.txt quote rcmd sbmjob cmd(call pgm(pmtiprcc0) parm('prod' 'DEV')) job(\$pmtiprcc) jobd(orderbatch) 550-Error occurred on command SBMJOB cmd(call pgm(pmtiprcc0)) job($pmtiprcc) jobd(orderbatch). 550 Errors occurred on SBMJOB command.. 221 QUIT subcommand received.
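
    One thing to rule out before blaming the iSeries side: with an unquoted here-document delimiter, the local shell expands (or empties) anything like $pmtiprcc before ftp ever sees it, and the backslash escapes have to be exactly right. Quoting the delimiter sends every line through literally. A hedged sketch of just that change, assuming HOST is set as in the script above — credentials are inlined because a quoted here-doc does not expand $USER/$PASS either, and the crtccsid lines are omitted for brevity; whether the SBMJOB itself then succeeds is a separate question for the 400 side.

        # Hedged sketch: quote the here-doc delimiter so "$QARCVBEN" and $pmtiprcc
        # reach the FTP server exactly as written, with no local shell expansion.
        ftp -env "$HOST" <<'EOF'
        user test 1234
        cd "$QARCVBEN"
        put prod.txt
        quote rcmd sbmjob cmd(call pgm(pmtiprcc0) parm('prod' 'DEV')) job($pmtiprcc) jobd(orderbatch)
        quit
        EOF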

    Read the article

  • Sub-query problem on Oracle 10g

    - by Eric
    The following query works on Oracle 10.2.0.1.0 on Windows, but doesn't work on Oracle 10.2.0.2.0 on Linux. What's the problem? How can I make it work? Thanks! CREATE TABLE AUDITHISTORY( CASENUM numeric(20, 0) NOT NULL, AUDIT_DATE date NOT NULL, USER_NAME varchar(255) NULL, AUDIT_USECS numeric(6, 0) NOT NULL ) Query: SELECT T.CASENUM, T.USER_NAME, T.AUDIT_DATE AS STARTED, (SELECT * FROM (SELECT S.AUDIT_DATE FROM AUDITHISTORY S WHERE S.CASENUM=T.CASENUM AND S.USER_NAME=T.USER_NAME AND (S.AUDIT_DATE > T.AUDIT_DATE OR (S.AUDIT_DATE = T.AUDIT_DATE AND S.AUDIT_USECS > T.AUDIT_USECS)) ORDER BY S.AUDIT_DATE ASC,S.AUDIT_USECS ASC ) WHERE rownum <= 1) AS ENDED FROM AUDITHISTORY T BANNER Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod PL/SQL Release 10.2.0.1.0 - Production CORE 10.2.0.1.0 Production TNS for 32-bit Windows: Version 10.2.0.1.0 - Production NLSRTL Version 10.2.0.1.0 - Production BANNER Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Prod PL/SQL Release 10.2.0.2.0 - Production CORE 10.2.0.2.0 Production TNS for Linux: Version 10.2.0.2.0 - Production NLSRTL Version 10.2.0.2.0 - Production
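
    Correlating T.CASENUM and T.AUDIT_DATE into a subquery nested two levels down is shaky territory on 10g, and is a likely reason the two patch levels behave differently. If the intent of ENDED is "the next audit entry for the same CASENUM and USER_NAME, ordered by AUDIT_DATE then AUDIT_USECS", an analytic rewrite avoids the nesting entirely. A hedged equivalent — verify it returns the same rows on your data:

        -- Hedged rewrite using LEAD, available in Oracle 10g.
        SELECT CASENUM,
               USER_NAME,
               AUDIT_DATE AS STARTED,
               LEAD(AUDIT_DATE) OVER (PARTITION BY CASENUM, USER_NAME
                                      ORDER BY AUDIT_DATE, AUDIT_USECS) AS ENDED
        FROM   AUDITHISTORY;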

    Read the article

  • Git repo planning questions

    - by masonk
    At work, development uses perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the perforce depot as a git repo. There are some difficulties in doing this, however. The client view lives in a subfolder of ~, (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to go make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that initing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.) edit: Is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not. I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch? This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start the with the p4 head revision as the inital commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initalizing it, the initial commit of remotes/p4/master branch will contain: revision 1 of //depot/prod/Foo/Bar/* revision X of other files in //depot/prod/*, where X is the head revision and the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.
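
    A hedged sketch of the layout described above — a home repo that ignores p4/, and a separate git-p4 clone nested inside it. The depot path is copied from the question, @all imports full history (drop it to start from the head revision only), and depending on the git build the command may be "git p4" or the contrib git-p4 script.

        cd ~
        git init
        echo 'p4/' >> .gitignore      # keep the nested repo out of the home repo's status
        git add .gitignore
        git commit -m 'home repo; ignore the p4 client view'

        # A nested, independent repo works: git stops at the first .git directory it
        # finds when run from inside ~/p4, and the outer repo ignores that directory.
        git p4 clone //depot/prod/Foo/Bar@all ~/p4
        cd ~/p4 && git branch -a      # remotes/p4/master tracks the depot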

    Read the article

  • Problem with MultiColumn Primary Key

    - by Mike
    DataTable NetPurch = new DataTable(); DataColumn[] Acct_n_Prod = new DataColumn[2]; DataColumn Account; Account = new DataColumn(); Account.DataType = typeof(string); Account.ColumnName = "Acct"; DataColumn Product; Product = new DataColumn(); Product.DataType = typeof(string); Product.ColumnName = "Prod"; NetPurch.Columns.Add(Account); NetPurch.Columns.Add(Product); Acct_n_Prod[0] = Account; Acct_n_Prod[1] = Product; NetPurch.PrimaryKey = Acct_n_Prod; NetPurch.Columns.Add(MoreColumns); The code is based on the example here. When it is compiled and run, I get an error saying: "Expecting 2 values for the key being indexed but received only one". If I make Acct_n_Prod = new DataColumn[1] and comment out the line adding Product to the Acct_n_Prod array, then it runs fine. I'm fairly new to this, so I'm not sure where the error is. Thanks, -Mike
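
    The table and key setup shown above runs on its own; that error text usually comes from a later lookup that hands the two-column key a single value. A hedged sketch, continuing from the snippet above, of the lookup a composite key expects — the values are made up:

        // Hedged sketch: with a two-column primary key, Rows.Find needs both values,
        // passed as an object[] in the same order as the PrimaryKey array.
        DataRow row = NetPurch.Rows.Find(new object[] { "ACCT-001", "PROD-42" });

        // A single-value call like the following raises the error quoted above:
        // DataRow bad = NetPurch.Rows.Find("ACCT-001");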

    Read the article

  • ASP.NET site sometimes freezing up and/or showing odd text at the top of the page on load

    - by MGOwen
    I have various servers (dev, 2 x test, 2 x prod) running the same asp.net site. The test and prod servers are in load-balanced pairs (prod1 with prod2, and test1 with test2). The test server pair is exhibiting some kind of (super) slowdown or freezing during about one in ten page loads. Sometimes a line of text appears at the very top of the page which looks something like: 00 OK Date: Thu, 01 Apr 2010 01:50:09 GMT Server: Microsoft-IIS/6.0 X-Powered_By: ASP.NET X-AspNet-Version:2.0.50727 Cache-Control:private Content-Type:text/html; charset=ut (the beginning and end are "cut off".) Has anyone seen anything like this before? Any idea what it means or what's causing it? Edit: I often see this too when clicking something - it comes up as red text on a yellow page: XML Parsing Error: not well-formed Location: http://203.111.46.211/3DSS/CompanyCompliance.aspx?cid=14 Line Number 1, Column 24:2mMTehON9OUNKySVaJ3ROpN" / -----------------------^ If I go back and click again, it works (I see the page I clicked on, not the above error message). Update: ...And, instead of the page loading, I sometimes just get a white screen with text like this in black (looks a lot like the above text): HTTP/1.1 302 Found Date: Wed, 21 Apr 2010 04:53:39 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 Location: /3DSS/EditSections.aspx?id=3&siteId=56&sectionId=46 Set-Cookie: .3DSS=A6CAC223D0F2517D77C7C68EEF069ABA85E9392E93417FFA4209E2621B8DCE38174AD699C9F0221D30D49E108CAB8A828408CF214549A949501DAFAF59F080375A50162361E4AA94E08874BF0945B2EF; path=/; HttpOnly Cache-Control: private Content-Type: text/html; charset=utf-8 Content-Length: 184 object moved here Where "here" is a link that points to a URL just like the one I'm requesting, except with an extra folder in it, meaning something like: http://123.1.2.3/MySite//MySite/Page.aspx?option=1 instead of: http://123.1.2.3/MySite/Page.aspx?option=1 Update: A colleague of mine found some info saying it might be because the test servers are running IIS in 64-bit (64-bit Win 2003) (prod servers are 32-bit Win 2003). So we tried telling IIS to use 32 bit: cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1 %SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i (from this MS support page) But IIS stopped working altogether (got "server unavailable" on a white page instead of web sites). Reversing the above (see the link) didn't work at first either. The ASP.NET tab disappeared from our IIS web site properties and we had to mess around for an hour uninstalling (aspnet_regiis.exe -u) and reinstalling 32-bit ASP.NET and adding Default.aspx manually back into default documents. We'll probably try again in a few days; if anyone has anything to add in the meantime, please do.

    Read the article

  • ASP.NET site freezing up, showing odd text at top of the page while loading, on one server

    - by MGOwen
    I have various servers (dev, 2 x test, 2 x prod) running the same asp.net site. The test and prod servers are in load-balanced pairs. One of these pairs is exhibiting some kind of (super) slowdown or freezing every other page load or so. Sometimes a line of text appears at the very top of the page which looks something like: 00 OK Date: Thu, 01 Apr 2010 01:50:09 GMT Server: Microsoft-IIS/6.0 X-Powered_By: ASP.NET X-AspNet-Version:2.0.50727 Cache-Control:private Content-Type:text/html; charset=ut (the beginning and end are "cut off".) Has anyone seen anything like this before? Any idea what it means or what's causing it?

    Read the article

  • Custom Grails Environments?

    - by tinny
    I have been developing several Grails applications over the past couple of years. I am increasingly finding that the three grails environments (dev, test, prod) aren't enough to satisfy my needs. The more "enterprisey" your application gets, the more environments you tend to have. I tend to use 6 environments for my development cycle... DEVA //My dev DEVB //Team mates dev CI_TEST //CI like Hudson QA_TEST //Testing team environment UAT_TEST //Customers testing environment PROD //Production I'm wondering if there is a way to define custom Grails environments? I don't think there is, but the feature could be handy. The way I am getting around this right now is by externalising the config to a properties file. I'd imagine that this is a pretty common requirement, so how have you been dealing with your environments?
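
    Grails does let you name your own environments: anything outside dev/test/prod just has to be passed explicitly via the grails.env system property and given a block in Config.groovy (or DataSource.groovy). A hedged sketch — the environment names mirror the list above, and the property inside each block is a placeholder:

        // Hedged sketch, e.g. in grails-app/conf/Config.groovy.
        environments {
            development { serviceUrl = "http://localhost:8080/api" }
            ci_test     { serviceUrl = "http://ci.internal/api" }
            qa_test     { serviceUrl = "http://qa.internal/api" }
            uat_test    { serviceUrl = "http://uat.example.com/api" }
            production  { serviceUrl = "https://api.example.com" }
        }

        // Run or package against a custom environment by naming it explicitly:
        //   grails -Dgrails.env=qa_test run-app
        //   grails -Dgrails.env=uat_test war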

    Read the article

  • How to create a test environment on the production FTP

    - by Clement Herreman
    Hello, I'm currently working on a symfony webapp, which is already in production. To develop and add/delete/modify functionality of the model, I work on my laptop, using symfony's 'dev' environment. I test that everything works fine, then I pray a little and deploy it to the prod server (with all the risk of data errors, like when I add new NOT NULL attributes to the model, special prod server configuration, PHP/Apache versions, etc.). The problem is that I would like to set up a "staging" server, which would be a copy of the production server (same database, same Apache/PHP configuration), so that, if the deployment goes bad, the production users stay untouched and working, and only the staging server is down. But my client has only 1 FTP available. So, the question is: can I run 2 symfony projects, with different models, on the same FTP? Or is there another way to do what I want to do? Thank you!

    Read the article

  • IIS 6 session timing out a lot quicker than expected

    - by Echiban
    I am working with a web application that has its sessions timing out a lot quicker than expected. We expected a timeout of 15 minutes, but it's timing out at 3-4 minutes. Info about the environment: IIS6, classic ASP / COM+ app; the timeout is OK on current PROD, much quicker in dev / QA environments. We already disabled app pool recycling, and even put IIS in isolation mode - no effect. The HTTP err log doesn't display any lines when a session times out. We've done a close comparison of PROD and DEV / QA environments, and given we use virtual machines on all of them, settings should be preserved. I tried to find IIS blog notes from David Wang, but many of them now have HTTP 404 errors, and I don't know what else to do. Please help! At the very least, is there a way to get IIS to log every time a session expires? Failing that, some means of logging / debugging IIS would be useful. Thanks in advance.

    Read the article

  • Generate lags R

    - by Btibert3
    Hi All, I hope this is basic; just need a nudge in the right direction. I have read in a database table from MS Access into a data frame using RODBC. Here is a basic structure of what I read in: PRODID PROD Year Week QTY SALES INVOICE Here is the structure: str(data) 'data.frame': 8270 obs. of 7 variables: $ PRODID : int 20001 20001 20001 100001 100001 100001 100001 100001 100001 100001 ... $ PROD : Factor w/ 1239 levels "1% 20qt Box",..: 335 335 335 128 128 128 128 128 128 128 ... $ Year : int 2010 2010 2010 2009 2009 2009 2009 2009 2009 2010 ... $ Week : int 12 18 19 14 15 16 17 18 19 9 ... $ QTY : num 1 1 0 135 300 270 300 270 315 315 ... $ SALES : num 15.5 0 -13.9 243 540 ... $ INVOICES: num 1 1 2 5 11 11 10 11 11 12 ... Here are the top few rows: head(data, n=10) PRODID PROD Year Week QTY SALES INVOICES 1 20001 Dolie 12" 2010 12 1 15.46 1 2 20001 Dolie 12" 2010 18 1 0.00 1 3 20001 Dolie 12" 2010 19 0 -13.88 2 4 100001 Cage Free Eggs 2009 14 135 243.00 5 5 100001 Cage Free Eggs 2009 15 300 540.00 11 6 100001 Cage Free Eggs 2009 16 270 486.00 11 7 100001 Cage Free Eggs 2009 17 300 540.00 10 8 100001 Cage Free Eggs 2009 18 270 486.00 11 9 100001 Cage Free Eggs 2009 19 315 567.00 11 10 100001 Cage Free Eggs 2010 9 315 569.25 12 I simply want to generate lags for QTY, SALES, INVOICE for each product but I am not sure where to start. I know R is great with Time Series, but I am not sure where to start. I have two questions: 1- I have the raw invoice data but have aggregated it for reporting purposes. Would it be easier if I didn't aggregate the data? 2- Regardless of aggregation or not, what functions will I need to loop over each product and generate the lags as I need them? In short, I want to loop over a set of records, calculate lags for a product (if possible), append the lags (as they apply) to the current record for each product, and write the results back to a table in my database for my reporting software to use. Any help you can provide will be greatly appreciated! Many thanks in advance, Brock
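
    On the second question: with the data kept in this aggregated weekly form, one-period lags per product can be built without an explicit loop by sorting within product and shifting each measure down one row. A minimal base-R sketch, assuming the frame is called data as above and that a one-row shift is an acceptable definition of "previous period" (it ignores gaps in the week numbering):

        # Hedged sketch: one-period lags of QTY, SALES and INVOICES within each PRODID.
        data <- data[order(data$PRODID, data$Year, data$Week), ]

        lag1 <- function(x) c(NA, head(x, -1))   # previous value within a group, NA for the first row

        data$QTY_lag1      <- ave(data$QTY,      data$PRODID, FUN = lag1)
        data$SALES_lag1    <- ave(data$SALES,    data$PRODID, FUN = lag1)
        data$INVOICES_lag1 <- ave(data$INVOICES, data$PRODID, FUN = lag1)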

    Read the article

  • How to nest a Location directive inside a virtual host config?

    - by Josh
    I am trying to nest a Location directive inside a virtual host config like this: <VirtualHost *:80> ServerName mysite.com DocumentRoot /home/deployer/apps/mysite/current/public ErrorLog /var/log/prod.log <Location "/shop"> DocumentRoot /home/deployer/apps/mysite_shop/current/public ErrorLog /var/log/prod.log </Location> </VirtualHost> What I want to do is go to mysite.com/shop, and point it to another application. Is this possible? Is there another method of doing this? I get an error because apparently Location directives do not accept DocumentRoot. Thanks.
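
    DocumentRoot is a server/virtual-host-level directive, which is why Apache rejects it inside <Location>; mapping a URL prefix to a second directory is what Alias (mod_alias) is for. A hedged sketch using the paths from the question — the <Directory> options are assumptions for an Apache 2.2-style setup, and if the second app is served by something like Passenger or a proxied app server, that layer needs its own per-path configuration too:

        <VirtualHost *:80>
            ServerName mysite.com
            DocumentRoot /home/deployer/apps/mysite/current/public
            ErrorLog /var/log/prod.log

            # Map /shop to the second application's public directory.
            Alias /shop /home/deployer/apps/mysite_shop/current/public
            <Directory /home/deployer/apps/mysite_shop/current/public>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>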

    Read the article

  • What's the best way to handle web.config file versions in ASP.Net?

    - by MusiGenesis
    I have an ASP.Net web site (ASPX and ASMX pages) with a single web.config file. We have a development version and a production version. Over time, the web.config files for development and production have diverged substantially. What is the best practice for keeping both versions of web.config in source control (we use TortoiseSVN but I don't think that matters)? It seems like I could add the production web.config file with a name like "web.config.prod", and then when we turn over all the files we would just add the step of deleting the existing web.config and renaming web.config.prod to web.config. This seems hackish, although I'm sure it would work. Is there not some mechanism for dealing with this built into Visual Studio? It seems like this would be a common issue, but I haven't found any questions (with answers) about this.
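
    One built-in option worth a look, with the caveat that it needs a Web Application project rather than a Web Site project: Visual Studio 2010's web.config transforms keep a single base web.config plus a small per-configuration transform in source control, and the packaging/publish step produces the merged file. A hedged sketch of a Web.Release.config — the connection string name and value are placeholders:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <!-- Overwrite the dev connection string for Release builds. -->
            <add name="MainDb"
                 connectionString="Data Source=PRODSQL;Initial Catalog=AppDb;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
          <system.web>
            <compilation xdt:Transform="RemoveAttributes(debug)" />
          </system.web>
        </configuration>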

    Read the article

  • bash check if value in list

    - by tkoomzaaskz
    I've got a script with a variable taken from command line parameters. I want to check if its value is one of dev, beta or prod. I've got the following code snippet: #!/usr/bin/env bash ENV_NAME=$1 echo "env name = $ENV_NAME" ENVIRONMENTS=('dev','beta','prod') if [[ $ENVIRONMENTS =~ $ENV_NAME ]]; then echo 'correct' exit else echo 'incorrect' exit fi When I run my script, it doesn't matter which parameters I pass: ./script.sh beta or ./script.sh or ./script.sh whatever, I always get correct echoed. What is wrong with my script?
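
    The usual suspects here: bash array elements are separated by whitespace, not commas (so the array above is the single string dev,beta,prod), a bare $ENVIRONMENTS expands only to element 0, and =~ against an empty $ENV_NAME matches anything. A hedged corrected sketch that does a whole-word membership test instead:

        #!/usr/bin/env bash
        # Hedged sketch: whitespace-separated array, and a whole-word test against the
        # full expansion rather than a regex test against ${ENVIRONMENTS[0]}.
        ENV_NAME=$1
        ENVIRONMENTS=(dev beta prod)

        if [[ " ${ENVIRONMENTS[*]} " == *" ${ENV_NAME} "* ]]; then
            echo 'correct'
        else
            echo 'incorrect'
        fi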

    Read the article

  • How to use unlinked result of linq?

    - by user46503
    Hello, for example I'm trying to get data from the database like this: using (ExplorerDataContext context = new ExplorerDataContext()) { ObjectQuery<Store> stores = context.Store; ObjectQuery<ProductPrice> productPrice = context.ProductPrice; ObjectQuery<Product> products = context.Product; res = from store in stores join pp in productPrice on store equals pp.Store join prod in products on pp.Product equals prod select store; } After this code I cannot pass the result to another method, because the context no longer exists. How could I get a detached result that is independent of the context? Thanks
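
    A hedged sketch of the usual fix: execute the query (ToList) and, ideally, project into a plain object inside the using block, so nothing handed to the caller still depends on the disposed context. StoreDto and its Id/Name properties are made-up names for whatever the caller actually needs; System.Linq and System.Collections.Generic are assumed to be imported.

        // Hedged sketch: materialize and detach the results before the context is disposed.
        public class StoreDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static List<StoreDto> LoadStores()
        {
            using (ExplorerDataContext context = new ExplorerDataContext())
            {
                return (from store in context.Store
                        join pp in context.ProductPrice on store equals pp.Store
                        join prod in context.Product on pp.Product equals prod
                        select new StoreDto { Id = store.Id, Name = store.Name })
                       .ToList();   // the query runs here, while the context is still alive
            }
        }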

    Read the article

  • One codebase on the dev server and the production server - how to deal with links?

    - by Yegor
    I have a copy of the code running on the prod server, and I use my local machine(s) running XAMPP as a dev server. I have several websites that I actively develop, so I'm forced to use http://localhost/sitename. All my URLs are relative to the domain (/file.php). They work fine on the prod server, but on the local server they all point to the localhost root, whereas I want them to work relative to the site folder they are in. Is there anything I could do, other than what I do now, which is this: if($_SERVER['SERVER_NAME'] == "localhost") { $path_to = "http://" . $_SERVER['SERVER_NAME'] . "/folder"; $path_to_files = $_SERVER['DOCUMENT_ROOT'] . "/folder"; } else { $path_to = "http://" . $_SERVER['SERVER_NAME']; $path_to_files = $_SERVER['DOCUMENT_ROOT']; } and simply putting $path_to before each link on the site.

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here. Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I’ve been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn’t know I needed. Not to say that I wasn’t previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup – well, that’s just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of. This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a “Hello World” example. Part 1: Good Enough Is Not Great Part 2: A Model That Works: Branching and Multiple Deployment Environments Part 3: Other Considerations: SQL, Custom Tasks, Etc Good Enough Is Not Great There. I’ve said it. I certainly hope no one on the TFS team is offended, but it’s the truth. Let’s take a look under the hood and understand how it works, and then why it’s not enough to handle real world CD as-is. How it works. (note that I’ve skipped a couple of steps; I already have my accounts set up and something deployed to Azure) The first step is to establish some oAuth magic between your Azure management portal and your TFS Instance. You do this via the management portal. Once it’s done, you have a new build process template in your TFS instance. (Image lifted from the documentation) From here, you’ll get the usual prompts for security, allowing access, etc. But you’ll also get to pick which Solution in your source control to build. Here’s what the bulk of the build definition looks like. All I’ve had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and I’ve changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it’s all in the cloud and I’m not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You’ll notice also that this build definition also automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and “production” and “staging” in Azure. 
More on that later too.) That’s it. That’s the default out-of-box experience. Easy, right? But it’s full of room for improvement, so let’s get into that….   The Problems Nothing is perfect (except my code – it’s always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team’s process. So what are the issues? Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production) I likely have configuration changes between each environment such as database connection strings and this process (and the VIP swap) doesn’t account for this. Yet. Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the “basic” branching strategy as defined by the ALM Rangers. This basically establishes a “Main” trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don’t want to roll to production automatically when you merge to the Release branch and check-in (unless you like the thrill of it, and in that case, I like your style, cowboy….). Rather, you have nightly build and QA environments, or if you’ve adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution to Azure deployment target. Issue 3: SQL and other custom tasks. Let’s be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can’t think of an application I have touched in the last 10 years that doesn’t need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they’ve added Data Tier Applications. But let’s be honest with ourselves again: no one really uses it, and it’s not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn’t fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it’s completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I’m really picking on SQL because that’s generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.   The Solutions… ? We’ve taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I’ve overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts of installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was: Run setup.exe from patched media, selecting appropriate options After the install fails, I'd choose "Remove node from a SQL Server failover cluster" and remove the single, failed node Attempt to diagnose problem, inspect event logs, etc. Delete the computer account that was created for the SQL Service from Active Directory Delete the MSSQL10.MSSQLSERVER folder from the shared data drive The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64 SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media Dell PowerEdge servers Fibre attached storage

    Read the article
