Search Results

Search found 10921 results on 437 pages for 'latex environment'.


  • To Virtual or Not to Virtual

    - by Kevin Shyr
    I recently made the comment "I hate everything virtual" while responding to a SQL Server performance question.  I then promptly fired up my Hyper-V development environment to do my proof-of-concept work, and realized that I had committed the cardinal sin of making a generalized comment about something instead of saying "It depends". The bottom line is that if the virtual environment gives the throughput the server needs, then it is not that big of a deal.  I have just seen so many environments set up with SQL Server sitting in a virtual environment sitting on a SAN, so on top of having to plan for data loss, I now have to plan for my virtual environment failing for many different reasons, though SQL Server 2012 Availability Groups should make that easier.  To me, a virtual environment makes sense for a stateless application with big scalability requirements, but doesn't give much benefit to an application where performance and data integrity are both important.  If security is not a concern, I would just build servers with multiple instances on them to balance the workload. Maybe this is also too generalized a comment, and I'll confess that I'm not a DBA by trade.  I'd love to hear the pros and cons of virtualizing a SQL Server, or other examples where virtualization makes total sense (not just money, but recovery, rollback, etc.).

    Read the article

  • Unable to fetch initial output of "defrag" command in Windows Server 2008 R2 in a WOW64 environment

    - by Ganesh
    Hi All, [Application & code background] I have an MFC application executing on Windows Server 2008 R2 in a WOW64 environment, which, on user input, defragments the selected drive on the disk. I initiated the defragmentation process (i.e. cmd /c defrag -v c:) using the CreateProcess() API, and along with this, to display the output of the process on the main screen, I created a pipe using the CreatePipe() API. I used the PeekNamedPipe() & ReadFile() APIs to get the output and display it. [Problem Area] When the process is launched I am not getting the initial output shown below: Microsoft Disk Defragmenter Copyright © 2007 Microsoft Corp. Invoking defragmentation on (C:)... I constantly monitor the output of the process while it is in progress, but I am not able to get anything as output in the pipe. It seems the process is not doing anything, and it appears as if the application is not responding. But after a certain period of time, once the process is about to complete, I get the result along with the initial data. [Sample code] //Pipe created if(0 == ::CreatePipe(&l_hStdOutRead, &l_hStdOutWrite, &l_SecurityAttribute, (DWORD)NULL)) { //Error code } //Process created/launched if (0 == ::CreateProcess(NULL, (LPTSTR)f_csProcessName, &l_stSecurityAttributes, NULL, TRUE, CREATE_NO_WINDOW, NULL, NULL, &l_StartupInfo, &l_CmdPI)) { //Error code } //Read output if (0 == ::PeekNamedPipe(m_hStdOutRead, l_cArrPeekBuffer, (DWORD)NULL, (LPDWORD)NULL, &l_dwAvailable, (LPDWORD)NULL)) { //Return to read again } if (MPLUSFALSE == ::ReadFile(m_hStdOutRead, l_cArrOutput, min(BUFFER_SIZE - 2, l_dwAvailable), &l_dwRead, NULL) || !l_dwRead) { //error code } //Display data. If anyone is aware of a similar problem or has worked on a similar issue, please let me know the solution. Thanks in advance, Ganesh
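
    For comparison only, here is a minimal .NET sketch (my own illustration, not the original MFC code) that launches defrag with redirected output and prints it as it arrives. One thing worth knowing: many console tools switch to fully buffered output when stdout is a pipe rather than a console, so little or nothing may appear until the process is nearly done, which matches the behaviour described above regardless of which API reads the pipe.

        using System;
        using System.Diagnostics;

        class DefragOutputSketch
        {
            static void Main()
            {
                // Illustrative arguments; they mirror the command from the question.
                var psi = new ProcessStartInfo("defrag.exe", "c: -v")
                {
                    RedirectStandardOutput = true,  // capture stdout through a pipe
                    RedirectStandardError = true,
                    UseShellExecute = false,        // required for redirection
                    CreateNoWindow = true
                };

                using (var process = Process.Start(psi))
                {
                    // Raise events as lines arrive instead of polling the pipe.
                    process.OutputDataReceived += (s, e) => { if (e.Data != null) Console.WriteLine(e.Data); };
                    process.ErrorDataReceived  += (s, e) => { if (e.Data != null) Console.Error.WriteLine(e.Data); };
                    process.BeginOutputReadLine();
                    process.BeginErrorReadLine();
                    process.WaitForExit();
                }
            }
        }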

    Read the article

  • IBM Worklight v6.1.0.1: Error when using Ionic Framework with Worklight and running in an iOS environment

    - by NickNguyen
    I have created a demo app using Ionic with Worklight and it worked on Android but got an error on iOS. When I used the mobile browser simulator and debugged in the iOS environment, I got the following error message: Uncaught InvalidCharacterError: Failed to execute 'add' on 'DOMTokenList': The token provided ('platform-ios - iphone') contains HTML space characters, which are not valid in tokens. I just added the Ionic files in index.html: <!DOCTYPE HTML> <html> <head> <meta charset="UTF-8"> <title>index</title> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0"> <link rel="shortcut icon" href="images/favicon.png"> <link rel="apple-touch-icon" href="images/apple-touch-icon.png"> <link rel="stylesheet" href="css/main.css"> <link rel="stylesheet" href="ionic/css/ionic.css"> <script src="ionic/js/ionic.bundle.js"></script> </head> <body style="display: none;"> <!--application UI goes here--> <div class="bar bar-header bar-positive"> <h1 class="title">bar-positive</h1> </div> <script src="js/initOptions.js"></script> <script src="js/main.js"></script> <script src="js/messages.js"></script> </body> </html> I also tested on mobile devices on both Android and iOS and only got the error on the iOS device. I don't know how to fix this. Can anyone help? Thanks.

    Read the article

  • Getting a CS1502 compiler error in the dev environment but not in production

    - by nw
    When I try to run my ASP.NET app from my development environment I get the following error message: Compiler Error Message: CS1502: The best overloaded method match for 'mmars.Printing.printFunctions.SetPrintSummaryProperties(mmars.contextInfo, ref mmars.Printing.printObjSummary)' has some invalid arguments. When I publish and run on our production server I don't get this error. It seems to compile fine when I build from the build menu (in fact, if I change the second argument of the function call below, I get a compiler error in Visual Studio), but now I've suddenly started getting this error message at runtime. So another question I have, in addition to getting rid of the error, is why is the .NET development server even trying to do JIT compilation on my project if it is already compiled into a DLL? Printing.printObjSummary myPrintObj = new Printing.printObjSummary(); Printing.printFunctions.SetPrintSummaryProperties(ci, ref myPrintObj); printObjects.Add(myPrintObj); This seems to have appeared from nowhere today and it's extremely frustrating. Also, though there are no warnings at compile time, when I get redirected to the page with that first compilation error there are many warnings like the following: Warning: CS0436: The type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs' conflicts with the imported type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\assembly\dl3\7179c19a\345f948c_ece7ca01\mmars.DLL'. Using the type defined in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs'. What's the deal with that? Is the web server complaining about name conflicts between the source file and the DLL built from that source file?

    Read the article

  • Why does an MFC application behave mysteriously in an encrypted hard drive environment?

    - by MauriceL
    I'm working on a bug where I have an MFC application that does weird things when installed on a machine where Sophos SafeGuard hard drive encryption is installed. I'm sorry to be so vague here, but I'm writing this away from the office so this is all from my (poor) memory. Three things I've noticed: AfxGetResourceHandle() doesn't return a consistent resource handle. There is a single case where we try to load a string resource, and for some reason, the resource handle that we get from this method is different from all the other strings. Can't construct a CDocumentTemplate. There is a trace error which I can't seem to recall. Will edit and post when I'm in tomorrow. This behaviour appears to manifest in a Visual Studio 2005 version of the project, but not a Visual Studio 2008 version. Unfortunately moving to the 2008 version is not an option. The bug is not always reproducible if I step through with a debugger. Also, bringing up debug message boxes changes the behaviour, which leads me to think that there is some kind of race condition in the way MFC events are being handled, but I'm not sure how I'll ever know for sure, or even what I could do about it if I did. I think there's some underlying reason that the app is behaving weirdly, but what I've posted are more symptoms. Can anyone think of what I should check for? I've run Windows Update on the test environment to ensure everything was up to date, and I've examined the process in procmon to see if the disk encryption stuff was getting in the way and conflicting with files - it didn't appear to be, but it does appear to be involved in some way, as our app accesses Sophos-related paths in the temp directory.

    Read the article

  • .NET Development of iPhone App with MonoTouch - which development environment?

    - by Click Ahead
    Hi All, I'm a .NET developer (C#) with several years' experience developing Windows Mobile apps. I would like to get into developing iPhone apps, and MonoTouch looks good based on reviews I've read. So I'm going to go with MonoTouch. My understanding is that I'll need a new Mac, but as it happens I also need a new PC for my .NET Windows development. My question is should I (a) Purchase a MacBook Pro and dual boot with Windows 7 (b) Purchase a Mac Pro and dual boot with Windows 7 (c) Purchase a good dev PC and a slightly less well spec'd MacBook Pro or Mac Pro Bear in mind I'm only doing MonoTouch development with the Mac; most of my development (approx. 80% initially) will be done on the Windows side. My budget is approx. €3,000 / $4,000 and I'd like a good, fast development environment. It's purely for development, so on the Windows side installing SQL 2008/VS 2010/Office and on the OS X side installing MonoTouch. BTW - my budget excludes licensing for VS/MonoTouch/etc, I have a MonoTouch and MSDN license. Any opinions are greatly appreciated. I'm a newbie to Macs!

    Read the article

  • Compare those hard-to-reach servers with SQL Snapper

    - by Michelle Taylor
    If you’ve got an environment which is at the end of an unreliable or slow network connection, or isn’t connected to your network at all, and you want to do a deployment to that environment – then pointing SQL Compare at it directly is difficult or impossible. While you could run SQL Compare locally on that environment, if it’s a server – especially if it’s a locked-down server – you probably don’t want to go through the hassle of using another activation on it. Or possibly you’re not allowed to install software at all, because you don’t have admin rights – but you can run user-mode software. SQL Snapper is a standalone, licensing-free program which takes SQL Compare snapshots of a database. It can create a snapshot within the context of that environment which can then be moved to your working environment to run SQL Compare against, allowing you to create a deployment script for environments you can’t get SQL Compare into. Where can I find it? You can find RedGate.SQLSnapper.exe in your SQL Compare installation directory – if you haven’t changed it, that will be something like C:\Program Files (x86)\Red Gate\SQL Compare 10 (or 11 if you’re using our SQL Server 2014 support beta). As well as copying the executable, you’ll also currently need to copy the System.Threading.dll and RedGate.SOCCompareInterface.dll files from the same directory alongside it. How do I use it? SQL Snapper’s UI is just a cut-down version of the snapshot creation UI in SQL Compare – just fill in the boxes and create your snapshot, then bring it back to the place you use SQL Compare to compare against your difficult-to-reach environment. SQL Snapper also has a command-line mode if you can’t run the UI in your target environment – just specify the server, database and output location with the /server, /database and /mksnap arguments, and optionally the username and password if you’re using SQL security, e.g.: RedGate.SQLSnapper.exe /database:yourdatabase /server:yourservername /username:youruser /password:yourpassword /mksnap:filename.snp What’s the catch? There are a few limitations of SQL Snapper in its current form – notably, it can’t read encrypted objects, and you’ll also currently need to copy the System.Threading.dll and RedGate.SOCCompareInterface.dll files alongside it, which we recognise is a little awkward in some environments. If you use SQL Snapper and want to share your experiences, or help us work on improving the experience in future, please comment here or leave a request on the SQL Compare UserVoice at https://redgate.uservoice.com/forums/141379-sql-compare.
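
    If you want to script the snapshot step from .NET rather than a batch file, here is a minimal C# sketch of shelling out to the command line described above (the install path and argument values are placeholders; the switches are the ones listed in this post):

        using System.Diagnostics;

        class SnapperRunner
        {
            static void Main()
            {
                // Illustrative path and credentials - substitute your own.
                var psi = new ProcessStartInfo
                {
                    FileName = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\RedGate.SQLSnapper.exe",
                    Arguments = "/server:yourservername /database:yourdatabase " +
                                "/username:youruser /password:yourpassword /mksnap:filename.snp",
                    UseShellExecute = false
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();  // the snapshot file is written by the tool itself
                }
            }
        }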

    Read the article

  • SQLAuthority News – A Quick Note on @Pluralsight Video – Call Me Maybe Developer Way

    - by pinaldave
    I write a lot about how important learning and training are.  Any of my readers will know that I think the key to success is staying current with your education and taking every opportunity to increase your “tool kit” of skills.  I hope that I have not given the impression that it is all in the employees’ hands to make sure they are happy and satisfied at their jobs. I also firmly believe that a good boss will make good employees.  A boss who is good at communicating and leading, who knows how to nip problems in the bud and allocate resources wisely, will have a well-oiled machine.  This means happy employees and a great work environment. It is important to have a healthy work environment because you will not succeed without one.  Successful businesses will always have the type of environment that fosters creativity and has efficient employees.  A healthy environment doesn’t force employees to produce results, but allows them to progress and create the results themselves. The result of a healthy work environment is that employees will enjoy their work and then work harder.  This can bring the company more revenue, and hopefully the employees will see the result of their hard work in bonuses and raises.  Money is important, but it is certainly secondary – the important part is the dedication of the employees to their work and to their company.  This is the true key to success. Any employee who recognizes this description as their working environment should consider themselves fortunate.  They are allowed to grow and do better, and employees being treated fairly can be a rarity in this world.  One company that I believe adheres to this principle is Pluralsight – as evidenced by this fun video, which I have blogged about earlier (check out my cameo at 0:37). It was great fun to work with the employees at Pluralsight while making this video.  They are a great bunch and clearly have a great work environment – we wouldn’t have had this much fun otherwise!  I have to tell you a little bit about making this video.  My wife shot it with her mobile phone, which was certainly a different but exciting experience!  It was hard to get the look of the video right, since I was trying to portray a bodybuilder – this was a little outside of my own personal experience.  I have what I like to call a “healthy” body type, so trying to look extremely fit like some of the other “actors” in this video was a challenge – but I do hope that you all think I succeeded.  All in all, it was great fun to participate in this video and I hope to see my friends at Pluralsight again soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • R: getting "inside" environments

    - by Mike Dewar
    Given an environment object e: > e <environment: 0x10f0a6e98> > class(e) [1] "environment" How do you access the variables inside the environment? Just in case you're curious, I have found myself with this environment object. I didn't make it, a package in Bioconductor made it. You can make it, too, using these commands: library('GEOquery') eset <- getGEO("GSE4142")[[1]] e <- assayData(eset)

    Read the article

  • How can I strip line breaks from my XML with XSLT?

    - by Eric
    I have this XSLT: <xsl:strip-space elements="*" /> <xsl:template match="math"> <img class="math"> <xsl:attribute name="src">http://latex.codecogs.com/gif.latex?<xsl:value-of select="text()" /></xsl:attribute> </img> </xsl:template> Which is being applied to this XML (notice the line break): <math>\text{average} = \alpha \times \text{data} + (1-\alpha) \times \text{average}</math> Unfortunately, the transform creates this: <img class="math" src="http://latex.codecogs.com/gif.latex?\text{average} = \alpha \times \text{data} + (1-\alpha) \times&#10;&#9;&#9;&#9;&#9;&#9;\text{average}" /> Notice the whitespace character literals. Although it works, it's awfully messy. How can I prevent this?

    Read the article

  • STORED PROCEDURE working on my local test machine cannot be created in the production environment

    - by Marcos Buarque
    Hi, I have an SQL CREATE PROCEDURE statement that runs perfectly in my local SQL Server, but cannot be recreated in production environment. The error message I get in production is Msg 102, Level 15, State 1, Incorrect syntax near '='. It is a pretty big query and I don't want to annoy StackOverflow users, but I simply can't find a solution. If only you could point me out what settings I could check in the production server in order to enable running the code... I must be using some kind of syntax or something that is conflicting with some setting in production. This PROCEDURE was already registered in production before, but when I ran a DROP - CREATE PROCEDURE today, the server was able to drop the procedure, but not to recreate it. I will paste the code below. Thank you! =============== USE [Enorway] GO /****** Object: StoredProcedure [dbo].[Spel_CM_ChartsUsersTotals] Script Date: 03/17/2010 11:59:57 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE PROC [dbo].[Spel_CM_ChartsUsersTotals] @IdGroup int, @IdAssessment int, @UserId int AS SET NOCOUNT ON DECLARE @RequiredColor varchar(6) SET @RequiredColor = '3333cc' DECLARE @ManagersColor varchar(6) SET @ManagersColor = '993300' DECLARE @GroupColor varchar(6) SET @GroupColor = 'ff0000' DECLARE @SelfColor varchar(6) SET @SelfColor = '336600' DECLARE @TeamColor varchar(6) SET @TeamColor = '993399' DECLARE @intMyCounter tinyint DECLARE @intManagersPosition tinyint DECLARE @intGroupPosition tinyint DECLARE @intSelfPosition tinyint DECLARE @intTeamPosition tinyint SET @intMyCounter = 1 -- Table that will hold the subtotals... DECLARE @tblTotalsSource table ( IdCompetency int, CompetencyName nvarchar(200), FunctionRequiredLevel float, ManagersAverageAssessment float, SelfAssessment float, GroupAverageAssessment float, TeamAverageAssessment float ) INSERT INTO @tblTotalsSource ( IdCompetency, CompetencyName, FunctionRequiredLevel, ManagersAverageAssessment, SelfAssessment, GroupAverageAssessment, TeamAverageAssessment ) SELECT e.[IdCompetency], dbo.replaceAccentChar(e.[Name]) AS CompetencyName, (i.[LevelNumber]) AS FunctionRequiredLevel, ( SELECT ROUND(avg(CAST(ac.[LevelNumber] AS float)),0) FROM Spel_CM_AssessmentsData aa INNER JOIN Spel_CM_CompetenciesLevels ab ON aa.[IdCompetencyLevel] = ab.[IdCompetencyLevel] INNER JOIN Spel_CM_Levels ac ON ab.[IdLevel] = ac.[IdLevel] INNER JOIN Spel_CM_AssessmentsEvents ad ON aa.[IdAssessmentEvent] = ad.[IdAssessmentEvent] WHERE aa.[EvaluatedUserId] = @UserId AND aa.[AssessmentType] = 't' AND aa.[IdGroup] = @IdGroup AND ab.[IdCompetency] = e.[IdCompetency] AND ad.[IdAssessment] = @IdAssessment ) AS ManagersAverageAssessment, ( SELECT bc.[LevelNumber] FROM Spel_CM_AssessmentsData ba INNER JOIN Spel_CM_CompetenciesLevels bb ON ba.[IdCompetencyLevel] = bb.[IdCompetencyLevel] INNER JOIN Spel_CM_Levels bc ON bb.[IdLevel] = bc.[IdLevel] INNER JOIN Spel_CM_AssessmentsEvents bd ON ba.[IdAssessmentEvent] = bd.[IdAssessmentEvent] WHERE ba.[EvaluatedUserId] = @UserId AND ba.[AssessmentType] = 's' AND ba.[IdGroup] = @IdGroup AND bb.[IdCompetency] = e.[IdCompetency] AND bd.[IdAssessment] = @IdAssessment ) AS SelfAssessment, ( SELECT ROUND(avg(CAST(cc.[LevelNumber] AS float)),0) FROM Spel_CM_AssessmentsData ca INNER JOIN Spel_CM_CompetenciesLevels cb ON ca.[IdCompetencyLevel] = cb.[IdCompetencyLevel] INNER JOIN Spel_CM_Levels cc ON cb.[IdLevel] = cc.[IdLevel] INNER JOIN Spel_CM_AssessmentsEvents cd ON ca.[IdAssessmentEvent] = cd.[IdAssessmentEvent] WHERE ca.[EvaluatedUserId] = @UserId AND 
ca.[AssessmentType] = 'g' AND ca.[IdGroup] = @IdGroup AND cb.[IdCompetency] = e.[IdCompetency] AND cd.[IdAssessment] = @IdAssessment ) AS GroupAverageAssessment, ( SELECT ROUND(avg(CAST(dc.[LevelNumber] AS float)),0) FROM Spel_CM_AssessmentsData da INNER JOIN Spel_CM_CompetenciesLevels db ON da.[IdCompetencyLevel] = db.[IdCompetencyLevel] INNER JOIN Spel_CM_Levels dc ON db.[IdLevel] = dc.[IdLevel] INNER JOIN Spel_CM_AssessmentsEvents dd ON da.[IdAssessmentEvent] = dd.[IdAssessmentEvent] WHERE da.[EvaluatedUserId] = @UserId AND da.[AssessmentType] = 'm' AND da.[IdGroup] = @IdGroup AND db.[IdCompetency] = e.[IdCompetency] AND dd.[IdAssessment] = @IdAssessment ) AS TeamAverageAssessment FROM Spel_CM_AssessmentsData a INNER JOIN Spel_CM_AssessmentsEvents c ON a.[IdAssessmentEvent] = c.[IdAssessmentEvent] INNER JOIN Spel_CM_CompetenciesLevels d ON a.[IdCompetencyLevel] = d.[IdCompetencyLevel] INNER JOIN Spel_CM_Competencies e ON d.[IdCompetency] = e.[IdCompetency] INNER JOIN Spel_CM_Levels f ON d.[IdLevel] = f.[IdLevel] -- This will link with user's assigned functions INNER JOIN Spel_CM_FunctionsCompetenciesLevels g ON a.[IdFunction] = g.[IdFunction] INNER JOIN Spel_CM_CompetenciesLevels h ON g.[IdCompetencyLevel] = h.[IdCompetencyLevel] AND e.[IdCompetency] = h.[IdCompetency] INNER JOIN Spel_CM_Levels i ON h.[IdLevel] = i.[IdLevel] WHERE (NOT c.[EndDate] IS NULL) AND a.[EvaluatedUserId] = @UserId AND c.[IdAssessment] = @IdAssessment AND a.[IdGroup] = @IdGroup GROUP BY e.[IdCompetency], e.[Name], i.[LevelNumber] ORDER BY e.[Name] ASC -- This will define the position of each element (managers, group, self and team) SELECT @intManagersPosition = @intMyCounter FROM @tblTotalsSource WHERE NOT ManagersAverageAssessment IS NULL IF IsNumeric(@intManagersPosition) = 1 BEGIN SELECT @intMyCounter += 1 END SELECT @intGroupPosition = @intMyCounter FROM @tblTotalsSource WHERE NOT GroupAverageAssessment IS NULL IF IsNumeric(@intGroupPosition) = 1 BEGIN SELECT @intMyCounter += 1 END SELECT @intSelfPosition = @intMyCounter FROM @tblTotalsSource WHERE NOT SelfAssessment IS NULL IF IsNumeric(@intSelfPosition) = 1 BEGIN SELECT @intMyCounter += 1 END SELECT @intTeamPosition = @intMyCounter FROM @tblTotalsSource WHERE NOT TeamAverageAssessment IS NULL -- This will render the final table for the end user. The tabe will flatten some of the numbers to allow them to be prepared for Google Graphics. 
SELECT SUBSTRING( ( SELECT ( '|' + REPLACE(ma.[CompetencyName],' ','+')) FROM @tblTotalsSource ma ORDER BY ma.[CompetencyName] DESC FOR XML PATH('') ), 2, 1000) AS 'CompetenciesNames', SUBSTRING( ( SELECT ( ',' + REPLACE(ra.[FunctionRequiredLevel]*10,' ','+')) FROM @tblTotalsSource ra FOR XML PATH('') ), 2, 1000) AS 'FunctionRequiredLevel', SUBSTRING( ( SELECT ( ',' + CAST(na.[ManagersAverageAssessment]*10 AS nvarchar(10))) FROM @tblTotalsSource na FOR XML PATH('') ), 2, 1000) AS 'ManagersAverageAssessment', SUBSTRING( ( SELECT ( ',' + CAST(oa.[GroupAverageAssessment]*10 AS nvarchar(10))) FROM @tblTotalsSource oa FOR XML PATH('') ), 2, 1000) AS 'GroupAverageAssessment', SUBSTRING( ( SELECT ( ',' + CAST(pa.[SelfAssessment]*10 AS nvarchar(10))) FROM @tblTotalsSource pa FOR XML PATH('') ), 2, 1000) AS 'SelfAssessment', SUBSTRING( ( SELECT ( ',' + CAST(qa.[TeamAverageAssessment]*10 AS nvarchar(10))) FROM @tblTotalsSource qa FOR XML PATH('') ), 2, 1000) AS 'TeamAverageAssessment', SUBSTRING( ( SELECT ( '|t++' + CAST([FunctionRequiredLevel] AS varchar(10)) + ',' + @RequiredColor + ',0,' + CAST(ROW_NUMBER() OVER(ORDER BY CompetencyName) - 1 AS varchar(2)) + ',9') FROM @tblTotalsSource FOR XML PATH('') ), 2, 1000) AS 'FunctionRequiredAverageLabel', SUBSTRING( ( SELECT ( '|t++' + CAST([ManagersAverageAssessment] AS varchar(10)) + ',' + @ManagersColor + ',' + CAST(@intManagersPosition AS varchar(2)) + ',' + CAST(ROW_NUMBER() OVER(ORDER BY CompetencyName) - 1 AS varchar(2)) + ',9') FROM @tblTotalsSource FOR XML PATH('') ), 2, 1000) AS 'ManagersLabel', SUBSTRING( ( SELECT ( '|t++' + CAST([GroupAverageAssessment] AS varchar(10)) + ',' + @GroupColor + ',' + CAST(@intGroupPosition AS varchar(2)) + ',' + CAST(ROW_NUMBER() OVER(ORDER BY CompetencyName) - 1 AS varchar(2)) + ',9') FROM @tblTotalsSource FOR XML PATH('') ), 2, 1000) AS 'GroupLabel', SUBSTRING( ( SELECT ( '|t++' + CAST([SelfAssessment] AS varchar(10)) + ',' + @SelfColor + ',' + CAST(@intSelfPosition AS varchar(2)) + ',' + CAST(ROW_NUMBER() OVER(ORDER BY CompetencyName) - 1 AS varchar(2)) + ',9') FROM @tblTotalsSource FOR XML PATH('') ), 2, 1000) AS 'SelfLabel', SUBSTRING( ( SELECT ( '|t++' + CAST([TeamAverageAssessment] AS varchar(10)) + ',' + @TeamColor + ',' + CAST(@intTeamPosition AS varchar(2)) + ',' + CAST(ROW_NUMBER() OVER(ORDER BY CompetencyName) - 1 AS varchar(2)) + ',10') FROM @tblTotalsSource FOR XML PATH('') ), 2, 1000) AS 'TeamLabel', (Count(src.[IdCompetency]) * 30) + 100 AS 'ControlHeight' FROM @tblTotalsSource src SET NOCOUNT OFF GO

    Read the article

  • Suppose there is a class which contains 4 data fields. I have to read these values from an XML file and set them on the fields

    - by SunilRai86
    suppose there is a class which contains 4 fields.i have to read these value from xml file and set that value to fields the xml file is like that <Root> <Application > <AppName>somevalue</AppName> <IdMark>somevalue</IdMark> <ClassName>ABC</ClassName> <ExecName>XYZ</ExecName> </Application> <Application> <AppName>somevalue</AppName> <IdMark>somevalue</IdMark> <ClassName>abc</ClassName> <ExecName>xyz</ExecName> </Application> </Root> now i have to read all the values from xml file and set each value to particular fields. i hav done reading of the xml file and i saved the retrieved value in arraylist. the code is like that public class CXmlFileHook { string appname; string classname; string idmark; string execname; string ctor; public CXmlFileHook() { this.appname = "Not Set"; this.idmark = "Not Set"; this.classname = "Not Set"; this.execname = "Not Set"; this.ctor = "CXmlFileHook()"; } public void readFromXmlFile(string path) { XmlTextReader oRreader = new XmlTextReader(@"D:\\Documents and Settings\\sunilr\\Desktop\\MLPACK.xml"); //string[] strNodeValues = new string[4] { "?","?","?","?"}; ArrayList oArrayList = new ArrayList(); while (oRreader.Read()) { if (oRreader.NodeType == XmlNodeType.Element) { switch (oRreader.Name) { case "AppName": oRreader.Read(); //strNodeValues[0] =oRreader.Value; oArrayList.Add(oRreader.Value); break; case "IdMark": oRreader.Read(); //strNodeValues[1] = oRreader.Value; oArrayList.Add(oRreader.Value); break; case "ClassName": oRreader.Read(); //strNodeValues[2] = oRreader.Value; oArrayList.Add(oRreader.Value); break; case "ExecName": oRreader.Read(); //strNodeValues[3] = oRreader.Value; oArrayList.Add(oRreader.Value); break; } } } Console.WriteLine("Reading from arraylist"); Console.WriteLine("-------------------------"); for (int i = 0; i < oArrayList.Count; i++) { //Console.WriteLine("Reading from Sting[]"+ strNodeValues[i]); Console.WriteLine(oArrayList[i]); } //this.appname = strNodeValues[0]; //this.idmark = strNodeValues[1]; //this.classname = strNodeValues[2]; //this.execname = strNodeValues[3]; this.appname = oArrayList[0].ToString(); this.idmark = oArrayList[1].ToString(); this.classname = oArrayList[2].ToString(); this.execname = oArrayList[3].ToString(); } static string vInformation; public void showCurrentState(string path) { FileStream oFileStream = new FileStream(path, FileMode.Append, FileAccess.Write); StreamWriter oStreamWriter = new StreamWriter(oFileStream); oStreamWriter.WriteLine("****************************************************************"); oStreamWriter.WriteLine(" Log File "); oStreamWriter.WriteLine("****************************************************************"); CXmlFileHook oFilehook = new CXmlFileHook(); //Type t = Type.GetType(this._classname); //Type t = typeof(CConfigFileHook); DateTime oToday = DateTime.Now; vInformation += "Logfile created on : "; vInformation += oToday + Environment.NewLine; vInformation += "Public " + Environment.NewLine; vInformation += "----------------------------------------------" + Environment.NewLine; vInformation += "Private " + Environment.NewLine; vInformation += "-----------------------------------------------" + Environment.NewLine; vInformation += "ctor = " + this.ctor + Environment.NewLine; vInformation += "appname = " + this.appname + Environment.NewLine; vInformation += "idmark = " + this.idmark + Environment.NewLine; vInformation += "classname = " + this.classname + Environment.NewLine; vInformation += "execname = " + this.execname + Environment.NewLine; vInformation += 
"------------------------------------------------" + Environment.NewLine; vInformation += "Protected" + Environment.NewLine; vInformation += "------------------------------------------------" + Environment.NewLine; oStreamWriter.WriteLine(vInformation); oStreamWriter.Flush(); oStreamWriter.Close(); oFileStream.Close(); } } here i set set the fields according to arraylist index but i dont want is there any another solution for this....

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS! Introduction Recently a client I worked for had 2 major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the production environment. This didn't seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, which was provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the 'test rig' proved to be less than if it had been hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post. In this series of posts, I'll try to cover, start to end, everything you need to know to use Azure to build your test rig in the cloud. But Why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre: - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched - How fast can you scale the environments up or down (keeping the enterprise processes in mind)? Administration, cost, flexibility and scalability are the areas you would want to think about when deciding between your own data centre and Azure! How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering? Microsoft's cloud offering is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS). PaaS – Platform as a Service: Fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications. IaaS – Infrastructure as a Service: Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.
The biggest advantage with PaaS is that you do not have to worry about maintaining the environment, you can focus all your time in solving the business problems with your solution rather than worrying about maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS, more on this later. A nice blog post here on the difference between Saas, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud and why in specific Azure, let’s discuss about the Test Rig. The Load Test Rig – Topology Now the moment of truth, Of course a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I’ve decided to try to make a Load Testing rig where the Agents are running on Windows Azure.   I’ll talk you through the above Topology, - User: User kick starts the load test run from the developer workstation on premise. This passes the request to the Test Controller. - Test Controller: The Test Controller is on premise connected to the same domain as the developer workstation. As soon as the Test Controller receives the request it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send work loads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms. - Test Agents: The Test Agents are on the Windows Azure Public Cloud, as soon as the test controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the test agents. And finally the results are passed over to the controller. - Servers: The Web Server and DB Server are hosted on premise in the datacentre, this is usually the case with business critical applications, you probably want to manage them your self. Recap and What’s next? So, in the introduction in the series of blog posts on Load Testing in the cloud I highlighted why creating a test rig in the cloud is a good idea, what advantages does Windows Azure offer and the Test Rig topology that I will be using. I would also like to mention that i stumbled upon this [Video] on Azure in a nutshell, great watch if you are new to Windows Azure. In the next post I intend to start setting up the Load Test Environment and discuss pricing with respect to test agent machine types that will be used in the test rig. Hope you enjoyed this post, If you have any recommendations on things that I should consider or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora.  See you in Part II.   Share this post : CodeProject

    Read the article

  • Creating packages in code - Package Configurations

    Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally, it shows you how to add a variable and a connection too. It covers the five most common configurations: Configuration File, Indirect Configuration File, SQL Server, Indirect SQL Server, and Environment Variable. For a general overview try the SQL Server Books Online Package Configurations topic. The sample uses a simple helper function ApplyConfig to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set. Configuration Type / Configuration String Description: Configuration File - The full path and file name of an XML configuration file. The file can contain one or more configurations and includes the target path and new value to set. Indirect Configuration File - An environment variable, the value of which contains the full path and file name of an XML configuration file as per the Configuration File type described above. SQL Server - A three-part configuration string, with each part being quote delimited and separated by a semi-colon. -- The first part is the connection manager name. The connection tells you which server and database to look in for the configuration table. -- The second part is the name of the configuration table. The table is of a standard format; use the Package Configuration Wizard to help create an example, or see the sample script files below. The table contains one or more rows, or configuration items, each with a target path and new value. -- The third and final part is the optional filter name. A configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty. Indirect SQL Server - An environment variable, the value of which is the three-part configuration string as per the SQL Server type described above. Environment Variable - An environment variable, the value of which is the value to set in the package. This is slightly different to the other examples, as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], the equivalent of which can be seen in the screenshot below, with the object being our variable called Variable, and the property to set being the Value property of that variable object. The configurations as seen when opening the generated package in BIDS: The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.
namespace Konesans.Dts.Samples { using System; using Microsoft.SqlServer.Dts.Runtime; public class PackageConfigurations { public void CreatePackage() { // Create a new package Package package = new Package(); package.Name = "ConfigurationSample"; // Add a variable, the target for our configurations package.Variables.Add("Variable", false, "User", 0); // Add a connection, for SQL configurations // Add the SQL OLE-DB connection ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB"); connectionManagerOleDb.Name = "SQLConnection"; connectionManagerOleDb.ConnectionString = "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;"; // Add our example configurations, first must enable package setting package.EnableConfigurations = true; // Direct configuration file, see sample file this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile, "C:\\Temp\\XmlConfig.dtsConfig", string.Empty); // Indirect configuration file, the emvironment variable XmlConfigFileEnvironmentVariable // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile, "XmlConfigFileEnvironmentVariable", string.Empty); // Direct SQL Server configuration, uses the SQLConnection package connection to read // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter" this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer, "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty); // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable" // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter"; this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer, "SQLServerEnvironmentVariable", string.Empty); // Direct environment variable, the value of the EnvironmentVariable environment variable is // applied to the target property, the value of the "User::Variable" package variable this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable, "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]"); #if DEBUG // Save package to disk, DEBUG only new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null); Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name); #endif // Execute package package.Execute(); // Basic check for warnings foreach (DtsWarning warning in package.Warnings) { Console.WriteLine("WarningCode : {0}", warning.WarningCode); Console.WriteLine(" SubComponent : {0}", warning.SubComponent); Console.WriteLine(" Description : {0}", warning.Description); Console.WriteLine(); } // Basic check for errors foreach (DtsError error in package.Errors) { Console.WriteLine("ErrorCode : {0}", error.ErrorCode); Console.WriteLine(" SubComponent : {0}", error.SubComponent); Console.WriteLine(" Description : {0}", error.Description); Console.WriteLine(); } package.Dispose(); } /// <summary> /// Add or update an package configuration. 
/// </summary> /// <param name="package">The package.</param> /// <param name="name">The configuration name.</param> /// <param name="type">The type of configuration</param> /// <param name="setting">The configuration setting.</param> /// <param name="target">The target of the configuration, leave blank if not required.</param> internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target) { Configurations configurations = package.Configurations; Configuration configuration; if (configurations.Contains(name)) { configuration = configurations[name]; } else { configuration = configurations.Add(); } configuration.Name = name; configuration.ConfigurationType = type; configuration.ConfigurationString = setting; configuration.PackagePath = target; } } } The following table lists the environment variables required for the full example to work along with some sample values. Variable Sample value EnvironmentVariable 1 SQLServerEnvironmentVariable "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter"; XmlConfigFileEnvironmentVariable C:\Temp\XmlConfig.dtsConfig Sample code, package and configuration file. ConfigurationApplication.cs ConfigurationSample.dtsx XmlConfig.dtsConfig
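
    For anyone wanting to try the sample, an assumed console entry point (not part of the original article) that simply calls the class above, from a project referencing the SSIS managed runtime (Microsoft.SqlServer.ManagedDTS):

        namespace Konesans.Dts.Samples
        {
            internal static class Program
            {
                private static void Main()
                {
                    // Runs the sample: builds the package, adds the example
                    // configurations, saves it to C:\Temp and executes it.
                    new PackageConfigurations().CreatePackage();
                }
            }
        }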

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback of running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment, should also be happen during an automated deployment. 
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating an newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Centers advanced inventory and reporting features assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementationImportant distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot; instead of spending long periods installing + upgrading packages in single user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straight-forward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g. 
discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (out side of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below. NOTE: There is no magic!  If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10.Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements(Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents, accordingly.IMPORTANT:  In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time.HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation:  Begin by using OCDoctor --agent-prereq to determine whether OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
    Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)
    Overview:
      - Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements previously mentioned.
      - SIMULATE the deployment of the LU-related pre-patches.
      - Observe whether any of the LU-related pre-patches will require a reboot; the job details for each Global Zone will advise whether a reboot step will be required.
      - ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot is needed, it may be okay to do now; if a reboot is needed, it must be done later). You can schedule the job to occur later, during a maintenance window.
      - Check the job status for each node, resolving any issues found.
      - Once the LU-related pre-patches are applied, you can use Ops Center to patch using Live Upgrade on Solaris 10.

    Using Ops Center to patch Solaris 10 with LU/ABEs -- the GOODS! (This is the heart of the tip.)
      - Create an OS Update Profile containing the patches that make up your standard build.
          - Use Solaris Baselines when possible.
          - Add other individual patches as needed.
      - ACTUALLY deploy the OS Update Profile, specifying the appropriate Live Upgrade options, e.g.:
          - Synchronize the active BE to the alternate BE before patching.
          - Do not activate the BE after patching.
      - Check the job status for each node, resolving any issues found.
      - Activate the newly patched BE according to your change control process.
          - Activate = reboot to the ABE, making the ABE the new active BE.
          - Ops Center does not separate LU activate from reboot, so expect a reboot!
      - Check the job status for each node, resolving any issues found.

    Examples (w/Screenshots)

    Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only)
      - ABE to be created on ZFS with the name S10_12_07REC (example).
      - Uses the built-in feature to call "lucreate -n S10_12_07REC" behind the scenes if the ABE is not already present.
      - NOTE: Leave the "lucreate" params blank (if you do specify options, they will be appended after -n $ABEName).

    Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script)
    The Alternate Boot Environment is to be created via a custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used.
    Operational Profile, which provides the script to create an ABE:
      - Very similar to the automatic case, but with a script (Operational Profile), which is used to create the ABE.
      - Relies on a user-supplied script in the form of an Operational Profile.
      - Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) -- it is up to the script author to do the right thing!
      - EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (i.e. script) approach to call:
          # lucreate -n S10_1207REC
      - NOTE: The OC special variable is $ABEName.
    Boot Environment Profile, which references the Operational Profile:
      - Script = Operational Profile on this screen.
      - Refers to the Operational Profile shown in the previous section.
      - The user-supplied S10_Create_BE Operational Profile will be run.
      - The Operational Profile must return a non-zero exit code if there is a problem (so that the OS Update job will not proceed). (A sketch of such a script is shown below, after the profile and plan notes.)
    Solaris 10 OS Update Profile (to provide the actual patch specifications)
      - Solaris 10 Baseline "Recommended" chosen for "Install".
    Solaris 10 OS Update Plan (two steps in this case)
      - "Create a Boot Environment" + "Update OS" are chosen.
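    A minimal sketch of what such an ABE-creation Operational Profile script might look like follows. It assumes the simple ZFS-root case; the only requirements taken from the text above are the use of the $ABEName variable and a non-zero exit code on failure -- everything else is illustrative, not a supported example.

      #!/bin/sh
      # Create the alternate boot environment named by Ops Center ($ABEName)
      if lustatus 2>/dev/null | grep -w "$ABEName" >/dev/null ; then
          echo "Boot environment $ABEName already exists"
          exit 0
      fi
      lucreate -n "$ABEName"
      if [ $? -ne 0 ] ; then
          echo "lucreate failed for $ABEName" >&2
          exit 1    # non-zero exit code stops the OS Update job
      fi
      exit 0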
    Using Ops Center to patch Solaris 11 with Boot Environments (as needed)
      - Create a Solaris 11 OS Update Profile containing the packages that make up your standard build.
      - ACTUALLY deploy the Solaris 11 OS Update Profile.
          - A BE will be created if needed (or you can stipulate no BE).
          - The BE name will be auto-generated (if needed), or you may specify a BE name.
      - Check the job status for each node, resolving any issues found.
      - Check whether a BE was created; if so, activate the new BE.
          - Activate = reboot to the BE, making the new BE the active BE.
          - Ops Center does not separate BE activate from reboot.
          - NOTE: Not every Solaris 11 OS update will require a new BE, so a reboot may not be necessary.
    (A sketch of the manual Solaris 11 equivalent of these steps is shown below, after the screenshot notes.)

    Solaris 11: Auto BE Create (as needed -- let Ops Center decide)
      - BE to be created as needed.
      - BE to be named automatically.
      - Reboot (if necessary) deferred to a separate step.
    Solaris 11: OS Profile
      - Solaris 11 "entire" chosen for a particular SRU.
    Solaris 11: OS Update Plan (w/BE)
      - "Create a Boot Environment" + "Update OS" are chosen.
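    As a point of comparison, the manual Solaris 11 equivalent of the steps above is roughly the following. This is a sketch only; it assumes the IPS publisher/repository is already configured, and the BE name is auto-generated unless you choose one:

      # pkg update entire
        (updates the "entire" incorporation; pkg creates a new BE when the update requires one)
      # beadm list
        (shows which BE is active now and which will be active at the next reboot)
      # init 6
        (the reboot activates the new BE; "beadm activate <previous-BE>" plus another reboot is the fallback)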
    Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers.  For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward.  For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straightforward way to patch, with far fewer prerequisites to satisfy in getting there.  Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall uptime across all the Solaris systems in your environment.

    Please let us know what you think.  Until next time... \Leon

    --
    Leon Shaner | Senior IT/Product Architect
    Systems Management | Ops Center Engineering @ Oracle

    The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • How do I work around this problem creating a virtualenv environment with a custom-built Python?

    - by Daryl Spitzer
    I need to run some code on a Linux machine with Python 2.3.4 pre-installed. I'm not on the sudoers list for that machine, so I built Python 2.6.4 into (a subdirectory in) my home directory. Then I attempted to use virtualenv (for the first time), but got: $ Python-2.6.4/python virtualenv/virtualenv.py ENV New python executable in ENV/bin/python Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Installing setuptools......... Complete output from command /apps/users/dspitzer/ENV/bin/python -c "#!python \"\"\"Bootstrap setuptoo... " /apps/users/dspitzer/virtualen...6.egg: Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] 'import site' failed; use -v for traceback Traceback (most recent call last): File "<string>", line 67, in <module> ImportError: No module named md5 ---------------------------------------- ...Installing setuptools...done. Traceback (most recent call last): File "virtualenv/virtualenv.py", line 1488, in <module> main() File "virtualenv/virtualenv.py", line 529, in main use_distribute=options.use_distribute) File "virtualenv/virtualenv.py", line 619, in create_environment install_setuptools(py_executable, unzip=unzip_setuptools) File "virtualenv/virtualenv.py", line 361, in install_setuptools _install_req(py_executable, unzip) File "virtualenv/virtualenv.py", line 337, in _install_req cwd=cwd) File "virtualenv/virtualenv.py", line 590, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /apps/users/dspitzer/ENV/bin/python -c "#!python \"\"\"Bootstrap setuptoo... " /apps/users/dspitzer/virtualen...6.egg failed with error code 1 Should I be setting PYTHONHOME to some value? (I intentionally named my ENV "ENV" for lack of a better name.) Not knowing if I can ignore those errors, I tried installing nose (0.11.1) into my ENV: $ cd nose-0.11.1/ $ ls AUTHORS doc/ lgpl.txt nose.egg-info/ selftest.py* bin/ examples/ MANIFEST.in nosetests.1 setup.cfg build/ functional_tests/ NEWS PKG-INFO setup.py CHANGELOG install-rpm.sh* nose/ README.txt unit_tests/ $ ~/ENV/bin/python setup.py install Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "setup.py", line 1, in <module> from nose import __version__ as VERSION File "/apps/users/dspitzer/nose-0.11.1/nose/__init__.py", line 1, in <module> from nose.core import collector, main, run, run_exit, runmodule File "/apps/users/dspitzer/nose-0.11.1/nose/core.py", line 3, in <module> from __future__ import generators ImportError: No module named __future__ Any advice?
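    One thing that commonly produces the "Could not find platform dependent libraries" message is running a freshly built python binary straight out of its build tree rather than from an installed prefix, so the interpreter cannot locate its own standard library. The following is only a sketch of that guess, not something stated in the question, and the prefix path is a placeholder:

    $ cd Python-2.6.4
    $ ./configure --prefix=$HOME/python2.6.4
    $ make && make install
    $ $HOME/python2.6.4/bin/python virtualenv/virtualenv.py ENV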

    Read the article

  • How can I use fossil (DVCS) in a home environment?

    - by Mosh
    I'm trying fossil as my new VCS, since I'm a lone developer working on small projects. I started testing fossil but I encountered a (probably major newbie) problem: how does one push or pull to another directory (which is easy with Hg)? The fossil pull and push commands expect a URL and not a directory. When I start a server in one directory and try to push from another directory I get the "server loop" error message. Any ideas?
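    For what it's worth, fossil's sync commands want a URL, and a local repository file can usually be addressed with a file: URL. The paths below are placeholders, and this is only a guess at the intended workflow, not something stated in the question:

    $ cd ~/checkouts/project          # an open checkout of one clone
    $ fossil pull file:///home/mosh/repos/project.fossil
    $ fossil push file:///home/mosh/repos/project.fossil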

    Read the article

  • In a distributed environment, how can I configure log4j to log to different files for each JVM instance?

    - by Renan Mozone
    My application runs on IBM WebSphere 6.1 Network Deployment. The application has several JSP files and Java classes. Today each host has only one JVM instance, but my intention is to start another instance on each host. How can I configure log4j to log to different files for each JVM instance on the same host? I thought of using variable substitution in the log4j XML configuration file, but it only works with system properties. So, is it safe and recommended to set a custom system property just to store the JVM name? Does anyone know another strategy to achieve this in an 'elegant' way?

    Read the article

  • Custom ASP.NET MVC cache controllers in a shared hosting environment?

    - by Daniel Crenna
    I'm using custom controllers that cache static resources (CSS, JS, etc.) and images. I'm currently working with a hosting provider that has set me up under a full trust profile. Despite being in full trust, my controllers fail because the caching strategy relies on the File class to directly open a resource file prior to treatment and storage in memory. Is this something that would likely occur in all full trust shared hosting environments or is this specific to my host? The static files live within my application's structure and not in an arbitrary server path. It seems to me that custom caching would require code to access the file directly, and am hoping someone else has dealt with this issue.

    Read the article

  • Pros/Cons of MySQL vs Postgresql for production Ruby on Rails environment?

    - by cakeforcerberus
    I will soon be switching from sqlite3 to either postgres or mysql. What should I consider when making this decision? Is mysql more suited for Rails than postgres in some areas and/or vice versa? Or, as I somewhat suspect, does it not really matter either way? Another factor that might play into my decision is the availability of tools to data pump my test data from the sqlite3 db to my new one. Is there anything that ActiveRecord provides natively to do this or any decent plugins/gems to help with this task? BONUS: How do I pronounce "Postgresql" and sound like I know what I'm talking about? :) Thanks Greg Smith for providing the following link that shows the most common pronunciations: http://www.postgresql.org/community/survey.33 UPDATE: Reference this question for more: http://stackoverflow.com/questions/110927/do-you-recommend-postgresql-over-mysql FYI: I ended up using MySQL. There is a neat plugin called yamldb that really saved me some time with the data transfer from my sqlite db to my new mysql one. Instructions on how to install and use it can be found here: http://accidentaltechnologist.com/ruby/change-databases-in-rails-with-yamldb/ Thanks Tom

    Read the article

  • How to set SQL_BIG_SELECTS = 1 from VB (legacy ASP) with an ADODB environment?

    - by conecon
    I encountered the "The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT is okay" error with my ASP code. The ASP code has a server-side ADODB connection to MySQL, and the connection does not seem to be able to execute multiple queries. How do I implement SQL_BIG_SELECTS = 1 in my code?

    Set obj_db = Server.CreateObject("ADODB.Connection")
    Session("ConnectionString") = "dsn=dsn1016189_mysql;uid=apns;pwd=mypassword;DATABASE=mydb;APP=ASP Script;STMT=SET CHARACTER SET SJIS"
    obj_db.Open Session("ConnectionString")
    Set obj_ret = Server.CreateObject("ADODB.Recordset")
    obj_ret.CursorLocation = 3

    and executing SQL...

    SQL_BIG_SELECTS = 1; SELECT pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_admin, pu.attendance, pu.invited, pu.reason, qaa1.answer AS qaa1_answer, COUNT(pu2.p_login_id) AS companion FROM party_user pu LEFT OUTER JOIN party_user pu2 ON pu2.p_login_id = pu.login_id LEFT OUTER JOIN qa_answer qaa1 ON qaa1.login_id = pu.login_id AND qaa1.party_id = pu.party_id AND qaa1.sort_num = '1' WHERE pu.party_id = '92' AND pu.p_login_id = '' GROUP BY pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_admin, pu.attendance, pu.reason, qaa1.answer, pu.invited ORDER BY pu.login_id ASC;

    I can't execute multiple queries, and the above query becomes an error: "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_ad' at line 1"

    Read the article

  • 'script/console test' with spork and rspec not loading the whole environment?

    - by TheDeeno
    I'm trying to load up console to interact with some of my rspec mocking helpers. I expected that running script/console test would load an env similar to when I run spec. However, this doesn't appear to be the case. It looks like spec_helper is never loaded. Or, if it is, it's not actually running through the logic because spork has polluted it a bit. In short, is there a quick and easy way to get an interactive rspec party going?

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >