Search Results

Search found 33316 results on 1333 pages for 'sql team'.


  • MOSS 2007 team site page title

    - by nav
Hi, I'm trying to display the page title (HTML title) on the default.aspx page of a custom site template. The template is based on a MOSS team site template. All that displays as the page title is the URL of the page. Can I change the code in default.aspx and/or the site's master page to define the title myself? Details of the default.aspx and default.master pages are below. Thanks. Default.aspx: <asp:Content ContentPlaceHolderId="PlaceHolderPageTitle" runat="server"> <SharePoint:EncodedLiteral runat="server" text="<%$Resources:wss,multipages_homelink_text%>" EncodeMethod="HtmlEncode"/> - <SharePoint:ProjectProperty Property="Title" runat="server"/> </asp:Content> default.master: <Title ID=onetidTitle><asp:ContentPlaceHolder id=PlaceHolderPageTitle runat="server"/></Title>
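One way to attack this, sketched below, is to replace the contents of the PlaceHolderPageTitle content control in default.aspx with your own literal text instead of the ProjectProperty control; the title string here is an invented example, not the poster's markup:

    <asp:Content ContentPlaceHolderId="PlaceHolderPageTitle" runat="server">
        My Team Site - Home
    </asp:Content>

Because default.master renders whatever this placeholder supplies inside the <Title> element, the literal text becomes the HTML title.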

    Read the article

  • TFS and team project portal question

    - by DotnetDude
The team explorer for my project looks like this: mytfsserver\mycollection -- Project 1 solution -- Project 2 solution -- Project 3 solution. When I right-click on one of the solutions and do a "Show Project Portal", I see the following hierarchy: mycollection - WSS site; Project 1 site with dashboard (appears to be a MOSS site); Project 2 site with dashboard (appears to be a MOSS site); Project 3 site with dashboard (appears to be a MOSS site). Are the dashboard sites MOSS sites? If I want to create a wiki, do I have to create a subsite with the wiki template under each of the Project sites? Can someone point me to a document/video that talks about the sites that are created by TFS by default?

    Read the article

  • Looking for Team Development type Software like SVN

    - by SoLoGHoST
I am in need of software that is PHP-based, or similar, that can be installed on a server that doesn't offer SVN support. It should be somewhat similar to an SVN; since the server doesn't support SVN, we'll need another means of doing roughly the same thing. We have a team of developers and need to track progress in the same way that an SVN does, but without that type of server support. Is there any software that could be installed via ordinary web hosting that would be somewhat, if not exactly, similar to an SVN? Please help, Thanks :) P.S. - This is related to development AFAIK, but not exactly code-related.

    Read the article

  • Setting up Maven for a team

    - by Bytecode Ninja
I have used Maven extensively for my personal projects, but now I have to set it up for a small team of 5-7 developers. My main question is: what's the recommended approach for managing dependencies on commercial libraries that are not available in the Maven repos? How can I manage the dependency of these commercial libraries on the open source libraries that they use, which are available in Maven repos? Also, are there any other recommendations and advice that I should know about or implement to make the setup as efficient, effective, and smooth as possible? Thanks in advance.
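One common way to handle JARs that aren't in any public repository, sketched below with invented coordinates, is to install each commercial JAR into a repository the whole team can resolve from (an internal repository manager, or at minimum each developer's local repository) and then declare it as a normal dependency:

    # Install a commercial JAR into the local repository (file name and coordinates are examples)
    mvn install:install-file -Dfile=vendor-lib-2.1.jar \
        -DgroupId=com.vendor -DartifactId=vendor-lib \
        -Dversion=2.1 -Dpackaging=jar

    <!-- Then reference it in the POM like any other artifact -->
    <dependency>
        <groupId>com.vendor</groupId>
        <artifactId>vendor-lib</artifactId>
        <version>2.1</version>
    </dependency>

A shared repository manager avoids repeating the install step on every machine and gives the team one place to model how the commercial JARs relate to the open source artifacts they use.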

    Read the article

  • How to handle multiple projects in a small team

    - by meo
We just started to use Scrum for our project management. We are a very small team (2 developers, 1 UI/web designer) and we have a lot of projects running at once. How do you handle having multiple projects running at once in the Scrum model? Most of the time we have a main project and some small ones. How do you combine multiple sprints efficiently? PS: I'm not sure Stack Overflow is the right place to ask this kind of question; I hope there is a Scrum master out there reading this.

    Read the article

  • TFS 2008 ignores team project check-in settings

    - by JoshEarl
    I'm trying to set up our TFS 2008 instance to require that projects build before they can be checked in. I have created a check-in policy using the out of the box "Builds" policy, but I'm still able to check broken projects in after mangling the code and attempting to build the project. We're a small shop, and TFS was originally set up with our team's Active Directory group listed as TFS admins. Is this the problem? Do check-in policies apply to TFS admins? Any other suggestions?

    Read the article

  • Code reviews for larger ASP.NET MVC team using TFS

    - by Parrots
I'm trying to find a good code review workflow for my team. Most questions similar to this on SO revolve around using shelved changes for the review; however, I'm curious how this works for people with larger teams. We usually have 2-3 people working a story (UI person, Domain/Repository person, sometimes DB person). I've recommended the shelveset idea, but we're all concerned about how to manage that with multiple people working the same feature. How could you share a shelveset between multiple programmers at that point? We worry it would be clunky and we might easily have unintended consequences moving to this workflow. Of course, moving to shelvesets for each feature avoids having 10 or so check-ins per feature (as developers need to share code), which makes seeing the diffs at code review time painful. Has anyone else been able to deal with this successfully? Are there any tools out there people have found useful aside from shelvesets in TFS (preferably open source)?

    Read the article

  • Standardize word macro usage within the team

    - by user138010
Hi, I have a team of 10 people who work on Word documents, formatting them per our defined guidelines. To complete the work fast we have created a macro that has been deployed to all the machines. This macro corrects the font, size and formatting. How can I ensure that nobody can change, replace or delete this macro from their system? If this happens, I should get an alert. Can something be done at the system or program level? Thanks, PK

    Read the article

  • Microsoft 2010 Product Tour

    - by dmccollough
Randy Walker, Co-Founder of the Northwest Arkansas .Net User Group and Microsoft MVP, has arranged for a couple of Microsoft experts, Sarika Calla (Team Lead on the IDE Team) and Kevin Halverson, to give presentations on the newly released Visual Studio 2010.

June 1 – Bentonville, Arkansas: Wal-Mart .Net User Group
June 1 – Rogers, Arkansas: Northwest Arkansas SQL Server User Group (lunch meeting)
June 1 – Springdale, Arkansas: Tyson devLoop
June 1 – Fayetteville, Arkansas: Northwest Arkansas .Net User Group
June 2 – Fort Smith, Arkansas: Datatronics
June 2 – Little Rock, Arkansas: Little Rock .Net User Group
June 3 – Fort Worth, Texas: Fort Worth .Net User Group

Please contact Randy Walker with questions at [email protected].

    Read the article

  • [Visual Studio Extension Of The Day] Test Scribe for Visual Studio Ultimate 2010 and Test Professional 2010

    - by Hosam Kamel
Test Scribe is a documentation power tool designed to construct documents directly from TFS test plan and test run artifacts, for the purposes of discussion, reporting, etc.

Known issues/limitations:
- Customizing the generated report by changing the template, adding comments, including attachments, etc. is not supported.
- While opening a test plan summary document in Office 2007, if you get the warning "The file Test Plan Summary cannot be opened because there are problems with the contents" (with details: "The file is corrupt and cannot be opened"), click 'OK', then click 'Yes' to recover the contents of the document. This will then open the document in Office 2007. The same problem is not found in Office 2010.
- Generated documents are stored by default in the "My Documents" folder. The output path of the generated report cannot be modified.
- Exporting Word documents for individual test suites or test cases in a test plan is not supported.

Download it from the Visual Studio Extension Manager. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Parsing flat files using SSIS : SSIS Nugget

    - by jamiet
Often when using SQL Server Integration Services (SSIS) you will find there is more than one way of accomplishing a task, and that the most obvious method might not be the optimal one. In the video below I demonstrate this by way of an experiment using SSIS's Flat File Source component; I show different ways that you can pull data from a flat file into the SSIS dataflow, and also how the nature of the data itself can influence your choice as to how this task should be accomplished. If you are having trouble viewing the video in your blog reader then head to http://sqlblog.com/blogs/jamie_thomson/archive/2010/03/25/parsing-flat-files-using-ssis-ssis-nugget.aspx to see it as it is hosted on my blog! The main point I want to get across from this video is that a little bit of creative thinking when building your dataflows can sometimes be very beneficial for performance; quite often a solution that isn't the most obvious might actually turn out to be the best one. You'll notice, if you have watched the video, that my editing skills weren't quite up to snuff and I cut off the final few words; however, all I was saying was that if you have any feedback on this video then I would love to hear it, either via email or preferably via the comments section below. I hope this turns out to be useful to some of you. @Jamiet P.S. Incidentally, the parsing that we do using SSIS expressions in the video would be much easier if we had a TOKENISE function in SSIS's expression language, and I have asked for the introduction of such a function on Connect at [SSIS] TOKEN(string, tokeniser_string, occurence) function. Feel free to go and vote that up if you think this feature would be useful!

    Read the article

  • A whole site for reviewing of SQL Server MVP Deep Dives

    - by Rob Farley
This book just keeps amazing me. Not only as I read through some chapters for the first time, and others for the second and third times, but also as I read reviews of it written by other people. The guys over at http://sqlperspectives.wordpress.com are a prime example. They’ve been going through each chapter, each writing a review on it, and often getting a guest blogger to write something as well – and they’re clearly getting a lot of stuff out of this brilliant book. Back when I first heard about them doing this, I had offered to be involved, and recently did an interview with them about my chapters (chapter seven and chapter forty). That interview can be found at http://sqlperspectives.wordpress.com/2010/03/20/interview-with-rob-farley/ – and covers how I got into databases, and how I think the database roles in the IT industry are changing. If you don’t have a copy of SQL Server MVP Deep Dives yet, why not get a copy from http://www.sqlservermvpdeepdives.com (or persuade your local bookstore to get some copies in), and read through chapters with these guys? Treat it like a book club, discussing each chapter with others (guest blogging perhaps?), and you’ll probably end up getting even more out of it. Remember that the proceeds of the book go to charity (instead of the authors – we get nothing), so you don’t need to consider that you’re splashing out on a treat for yourself. Think of the kids helped by War Child instead.

    Read the article

  • Part 14: Execute a PowerShell script

In this series the following parts have been published:

Part 1: Introduction
Part 2: Add arguments and variables
Part 3: Use more complex arguments
Part 4: Create your own activity
Part 5: Increase AssemblyVersion
Part 6: Use custom type for an argument
Part 7: How is the custom assembly found
Part 8: Send information to the build log
Part 9: Impersonate activities (run under other credentials)
Part 10: Include Version Number in the Build Number
Part 11: Speed up opening my build process template
Part 12: How to debug my custom activities
Part 13: Get control over the Build Output
Part 14: Execute a PowerShell script
Part 15: Fail a build based on the exit code of a console application

With PowerShell you can add powerful scripting to your build, for example to execute a deployment. If you want more information on PowerShell, please refer to http://technet.microsoft.com/en-us/library/aa973757.aspx

For this example we will create a simple PowerShell script that prints "Hello World!". To create the script, create a new text file, name it "HelloWorld.ps1" and add to its contents:

    Write-Host "Hello World!"

To test the script, do the following:

1. Open the command prompt.
2. To run the script you must change the execution policy. To do this, execute in the command prompt:
       powershell set-executionpolicy remotesigned
3. Go to the directory where you have saved the PowerShell script.
4. Execute the following command:
       powershell .\HelloWorld.ps1

In this example I use a relative path, but when the path to the PowerShell script contains spaces, you need to change the syntax to powershell "& '<full path to script>' ", for example:

    powershell "& 'C:\sources\Build Customization\SolutionToBuild\PowerShell Scripts\HelloWorld.ps1' "

In this blog post, I create a new solution that also includes this PowerShell script. I want to create an argument on the Build Process Template that holds the path to the PowerShell script. In the Build Process Template I will add an InvokeProcess activity to execute the PowerShell command. This InvokeProcess activity needs the location of the script as an argument for the PowerShell command. Since you don't know the full path of this script on the build server, you could specify the relative path of the script in the argument, but it is hard to find out what the relative path is. I prefer to specify the location of the script in source control and then convert that server path to a local path. To do this conversion you can use the ConvertWorkspaceItem activity.

So to complete the task, open the Build Process Template CustomTemplate.xaml that we created in earlier parts and follow these steps:

1. Add a new argument called "DeploymentScript" and set the appropriate settings in the metadata (see Part 2: Add arguments and variables for more information).
2. Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items".
3. Add a new If activity and set the condition to Not String.IsNullOrEmpty(DeploymentScript) to ensure it will only run when the argument is passed.
4. In the Then branch of the If activity, add a new Sequence activity and rename it to "Start deployment".
5. Click on the activity and add a new variable called DeploymentScriptFilename (scoped to the "Start deployment" Sequence).
6. Add a ConvertWorkspaceItem activity to the "Start deployment" Sequence.
7. Add an InvokeProcess activity beneath the ConvertWorkspaceItem activity in the "Start deployment" Sequence.
8. Click on the ConvertWorkspaceItem activity and change the properties:
       DisplayName = Convert deployment script filename
       Input = DeploymentScript
       Result = DeploymentScriptFilename
       Workspace = Workspace
9. Click on the InvokeProcess activity and change the properties:
       Arguments = String.Format(" ""& '{0}' "" ", DeploymentScriptFilename)
       DisplayName = Execute deployment script
       FileName = "PowerShell"
10. To see results from the PowerShell command, drop a WriteBuildMessage activity on the "Handle Standard Output" section and pass the stdOutput variable to the Message property. Do the same with a WriteBuildError activity on the "Handle Error Output" section.
11. To publish it, check in the Build Process Template.

This leads to the following result. We now go to the build definition that depends on the template and set the path of the deployment script to the server path of HelloWorld.ps1. (If you want to see the result of the PowerShell script, change the Logging verbosity to Detailed or Diagnostic.) Save and run the build.

A lot of the deployment scripts you have will take some kind of arguments (like username/password or environment variables) that you want to define in the Build Definition. To make the PowerShell script configurable, follow these steps. Create a new script and name it "HelloWho.ps1". Add the following lines as its contents:

    param (
        $person
    )
    $message = [System.String]::Format("Hello {0}!", $person)
    Write-Host $message

When you now run the script on the command prompt, you will see the greeting with the name you passed. So let's change the Build Process Template to accept one parameter for the deployment script. You can of course make it more configurable by adding a for-loop that reads through a collection of parameters, but that is out of scope for this blog post.

1. Add a new argument called DeploymentScriptParameter.
2. In the InvokeProcess activity where the PowerShell command is executed, modify the Arguments property to String.Format(" ""& '{0}' '{1}' "" ", DeploymentScriptFilename, DeploymentScriptParameter)
3. Check in the Build Process Template.

Now modify the build definition, set the parameter of the deployment script to any value and run the build.

You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
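For reference, with the Arguments format string above, the command line that the InvokeProcess activity ends up executing for the parameterized script looks roughly like this (the local path and the parameter value are invented examples):

    powershell "& 'C:\Builds\1\Sources\PowerShell Scripts\HelloWho.ps1' 'World' "

which prints: Hello World!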

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Scripting Asset Creation

    - by The Old Toxophilist
So far in this series we have looked at creating assets within the EMOC BUI, but the Exalogic 2.0.1 installation also provides the IaaS CLI as an alternative to most of the common functionality available within EMOC. The IaaS CLI provides access to the functions that are available to a user logged into the BUI with the CloudUser role. As such, not all functionality is available from the command line interface; having said that, the IaaS CLI provides all the functionality required to create the assets within a specific Account (tenure). Because these actions are common and repeatable, I decided to wrap the functionality within a simple script that takes a simple input file and creates the assets. Following the script through will show us the steps required to create the various assets within an Account, and hence I will work through the various functions within the script below, describing the steps. You will note from the various steps within the script that it is designed to pause between actions, allowing the preceding action to complete. The reason for this is that we could otherwise swamp EMOC with a series of actions and end up in a situation where we are trying to attach a Volume before the creation of the vServer and Volume have completed.

processAssets()
This function simply reads through the passed input file, identifying which assets need to be created. An example of the input file can be found below. It can be seen that the input file can be used to create assets in multiple Accounts during a single run. The order of the entries defines the functions that need to be actioned, as follows:

Production:Connect
    IaaS actions: akm-describe-accounts, akm-create-access-key, iaas-create-key-pair, iaas-describe-vnets, iaas-describe-vserver-types, iaas-describe-server-templates
    Parameters: Username, Password

Production:Create|vServer
    IaaS action: iaas-run-vserver
    Parameters: vServer Name, vServer Type Name, Template Name, comma-separated list of network names which the vServer will connect to, comma-separated list of IPs for the specified networks

Production:Create|Volume
    IaaS action: iaas-create-volume
    Parameters: Volume Name, Volume Size

Production:Attach|Volume
    IaaS action: iaas-attach-volumes-to-vserver
    Parameters: vServer Name, comma-separated list of volume names

Production:Disconnect
    IaaS actions: iaas-delete-key-pair, akm-delete-access-key
    Parameters: None

connectToAccount()
Before we can execute any asset creation we must first connect to the appropriate account. To do this we need the ID associated with the Account, which can be found by executing the akm-describe-accounts CLI command; this returns a list of all Accounts and their IDs. Once we have the Account ID, we generate an access key using the akm-create-access-key command and then a key pair with the iaas-create-key-pair command. At this point we have all the information we need to access the specific named account.

createVServer()
This function simply retrieves the information from the input line and then creates the vServer using the iaas-run-vserver CLI command. Reading the function you will notice that it takes the various input names for vServer Type, Template and Networks and converts them into the appropriate IDs. The IaaS CLI will not work directly with component names, and hence all IDs need to be found.

createVolume()
A function that simply takes the Volume name and size, then executes the iaas-create-volume command to create the volume.

attachVolume()
Takes the name of a vServer and of a Volume (which we may have just created), identifies the appropriate IDs and then assigns the Volume to the vServer with iaas-attach-volumes-to-vserver.

disconnectFromAccount()
Once we have finished with the Account, we simply remove the key pair with iaas-delete-key-pair and the access key with akm-delete-access-key, although it may be useful to keep these if ssh access is required and you have not modified the sshd configuration to allow unsecured access. By default the key is required for ssh access when a vServer is created from the command line.

CreateAssets.sh

    export OCCLI=/opt/sun/occli/bin
    export IAAS_HOME=/opt/oracle/iaas/cli
    export JAVA_HOME=/usr/java/latest
    export IAAS_BASE_URL=https://127.0.0.1
    export IAAS_ACCESS_KEY_FILE=iaas_access.key
    export KEY_FILE=iaas_access.pub
    # CloudUser used to create vServers & Volumes
    export IAAS_USER=exaprod
    export IAAS_PASSWORD_FILE=root.pwd
    export KEY_NAME=cli.recreate
    export INPUT_FILE=CreateAssets.in

    export ACCOUNTS_FILE=accounts.out
    export VOLUMES_FILE=volumes.out
    export DISTGRPS_FILE=distgrp.out
    export VNETS_FILE=vnets.out
    export VSERVER_TYPES_FILE=vstype.out
    export VSERVER_FILE=vserver.out
    export VSERVER_TEMPLATES=template.out
    export KEY_PAIRS=keypairs.out

    PROCESSING_ACCOUNT=""

    function cleanTempFiles() {
        rm -f $ACCOUNTS_FILE $VOLUMES_FILE $DISTGRPS_FILE $VNETS_FILE $VSERVER_TYPES_FILE $VSERVER_FILE $VSERVER_TEMPLATES $KEY_PAIRS $IAAS_PASSWORD_FILE $KEY_FILE $IAAS_ACCESS_KEY_FILE
    }

    function connectToAccount() {
        if [[ "$ACCOUNT" != "$PROCESSING_ACCOUNT" ]]
        then
            if [[ "" != "$PROCESSING_ACCOUNT" ]]
            then
                $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE
                $IAAS_HOME/bin/akm-delete-access-key $AK
            fi
            PROCESSING_ACCOUNT=$ACCOUNT
            IAAS_USER=$ACCOUNT_USER
            echo "$ACCOUNT_PASSWORD" > $IAAS_PASSWORD_FILE
            $IAAS_HOME/bin/akm-describe-accounts --sep "|" > $ACCOUNTS_FILE
            while read line
            do
                ACCOUNT_ID=${line%%|*}
                line=${line#*|}
                ACCOUNT_NAME=${line%%|*}
                # echo "Id = $ACCOUNT_ID"
                # echo "Name = $ACCOUNT_NAME"
                if [[ "$ACCOUNT_NAME" == "$ACCOUNT" ]]
                then
                    echo "Found Production Account $line"
                    AK=`$IAAS_HOME/bin/akm-create-access-key --account $ACCOUNT_ID --access-key-file $IAAS_ACCESS_KEY_FILE`
                    KEYPAIR=`$IAAS_HOME/bin/iaas-create-key-pair --key-name $KEY_NAME --key-file $KEY_FILE`
                    echo "Connected to $ACCOUNT_NAME"
                    break
                fi
            done < $ACCOUNTS_FILE
        fi
    }

    function disconnectFromAccount() {
        $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE
        $IAAS_HOME/bin/akm-delete-access-key $AK
        PROCESSING_ACCOUNT=""
    }

    function getNetworks() {
        $IAAS_HOME/bin/iaas-describe-vnets --sep "|" > $VNETS_FILE
    }

    function getVSTypes() {
        $IAAS_HOME/bin/iaas-describe-vserver-types --sep "|" > $VSERVER_TYPES_FILE
    }

    function getTemplates() {
        $IAAS_HOME/bin/iaas-describe-server-templates --sep "|" > $VSERVER_TEMPLATES
    }

    function getVolumes() {
        $IAAS_HOME/bin/iaas-describe-volumes --sep "|" > $VOLUMES_FILE
    }

    function getVServers() {
        $IAAS_HOME/bin/iaas-describe-vservers --sep "|" > $VSERVER_FILE
    }

    function getNetworkId() {
        while read line
        do
            NETWORK_ID=${line%%|*}
            line=${line#*|}
            NAME=${line%%|*}
            if [[ "$NAME" == "$NETWORK_NAME" ]]
            then
                break
            fi
        done < $VNETS_FILE
    }

    function getVSTypeId() {
        while read line
        do
            VSTYPE_ID=${line%%|*}
            line=${line#*|}
            NAME=${line%%|*}
            if [[ "$VSTYPE_NAME" == "$NAME" ]]
            then
                break
            fi
        done < $VSERVER_TYPES_FILE
    }

    function getTemplateId() {
        while read line
        do
            TEMPLATE_ID=${line%%|*}
            line=${line#*|}
            NAME=${line%%|*}
            if [[ "$TEMPLATE_NAME" == "$NAME" ]]
            then
                break
            fi
        done < $VSERVER_TEMPLATES
    }

    function getVolumeId() {
        while read line
        do
            export VOLUME_ID=${line%%|*}
            line=${line#*|}
            NAME=${line%%|*}
            if [[ "$NAME" == "$VOLUME_NAME" ]]
            then
                break;
            fi
        done < $VOLUMES_FILE
    }

    function getVServerId() {
        while read line
        do
            VSERVER_ID=${line%%|*}
            line=${line#*|}
            NAME=${line%%|*}
            if [[ "$VSERVER_NAME" == "$NAME" ]]
            then
                break;
            fi
        done < $VSERVER_FILE
    }

    function getVServerState() {
        getVServers
        while read line
        do
            VSERVER_ID=${line%%|*}
            line=${line#*|}
            NAME=${line%%|*}
            line=${line#*|}
            line=${line#*|}
            VSERVER_STATE=${line%%|*}
            if [[ "$VSERVER_NAME" == "$NAME" ]]
            then
                break;
            fi
        done < $VSERVER_FILE
    }

    function pauseUntilVServerRunning() {
        # Wait until the vServer is running before creating the next
        getVServerState
        while [[ "$VSERVER_STATE" != "RUNNING" ]]
        do
            getVServerState
            echo "$NAME $VSERVER_STATE"
            if [[ "$VSERVER_STATE" != "RUNNING" ]]
            then
                echo "Sleeping......."
                sleep 60
            fi
            if [[ "$VSERVER_STATE" == "FAILED" ]]
            then
                echo "Will Delete $NAME in 5 Minutes....."
                sleep 300
                deleteVServer
                echo "Deleted $NAME waiting 5 Minutes....."
                sleep 300
                break
            fi
        done
        # Lets pause for a minute or two
        echo "Just Chilling......"
        sleep 60
        echo "Ahhhhh we're getting there......."
        sleep 60
        echo "I'm almost at one with the universe......."
        sleep 60
        echo "Bong Reality Check !"
    }

    function deleteVServer() {
        $IAAS_HOME/bin/iaas-terminate-vservers --force --vserver-ids $VSERVER_ID
    }

    function createVServer() {
        VSERVER_NAME=${ASSET_DETAILS%%|*}
        ASSET_DETAILS=${ASSET_DETAILS#*|}
        VSTYPE_NAME=${ASSET_DETAILS%%|*}
        ASSET_DETAILS=${ASSET_DETAILS#*|}
        TEMPLATE_NAME=${ASSET_DETAILS%%|*}
        ASSET_DETAILS=${ASSET_DETAILS#*|}
        NETWORK_NAMES=${ASSET_DETAILS%%|*}
        ASSET_DETAILS=${ASSET_DETAILS#*|}
        IP_ADDRESSES=${ASSET_DETAILS%%|*}
        # Get Ids associated with names
        getVSTypeId
        getTemplateId
        # Convert Network Names to Ids
        NETWORK_IDS=""
        while true
        do
            NETWORK_NAME=${NETWORK_NAMES%%,*}
            NETWORK_NAMES=${NETWORK_NAMES#*,}
            getNetworkId
            if [[ "$NETWORK_IDS" != "" ]]
            then
                NETWORK_IDS="$NETWORK_IDS,$NETWORK_ID"
            else
                NETWORK_IDS=$NETWORK_ID
            fi
            if [[ "$NETWORK_NAME" == "$NETWORK_NAMES" ]]
            then
                break
            fi
        done
        # Create vServer
        echo "About to execute : $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES"
        $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES
        pauseUntilVServerRunning
    }

    function createVolume() {
        VOLUME_NAME=${ASSET_DETAILS%%|*}
        ASSET_DETAILS=${ASSET_DETAILS#*|}
        VOLUME_SIZE=${ASSET_DETAILS%%|*}
        # Create Volume
        echo "About to execute : $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE"
        $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE
        # Lets pause
        echo "Just Waiting 30 Seconds......"
        sleep 30
    }

    function attachVolume() {
        VSERVER_NAME=${ASSET_DETAILS%%|*}
        ASSET_DETAILS=${ASSET_DETAILS#*|}
        VOLUME_NAMES=${ASSET_DETAILS%%|*}
        # Get vServer Id
        getVServerId
        # Convert Volume Names to Ids
        VOLUME_IDS=""
        while true
        do
            VOLUME_NAME=${VOLUME_NAMES%%,*}
            VOLUME_NAMES=${VOLUME_NAMES#*,}
            getVolumeId
            if [[ "$VOLUME_IDS" != "" ]]
            then
                VOLUME_IDS="$VOLUME_IDS,$VOLUME_ID"
            else
                VOLUME_IDS=$VOLUME_ID
            fi
            if [[ "$VOLUME_NAME" == "$VOLUME_NAMES" ]]
            then
                break
            fi
        done
        # Attach Volumes
        echo "About to execute : $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS"
        $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS
        # Lets pause
        echo "Just Waiting 30 Seconds......"
        sleep 30
    }

    function processAssets() {
        while read line
        do
            ACCOUNT=${line%%:*}
            line=${line#*:}
            ACTION=${line%%|*}
            line=${line#*|}
            if [[ "$ACTION" == "Connect" ]]
            then
                ACCOUNT_USER=${line%%|*}
                line=${line#*|}
                ACCOUNT_PASSWORD=${line%%|*}
                connectToAccount
                ## Account Info
                getNetworks
                getVSTypes
                getTemplates
                continue
            fi
            if [[ "$ACTION" == "Create" ]]
            then
                ASSET=${line%%|*}
                line=${line#*|}
                ASSET_DETAILS=$line
                if [[ "$ASSET" == "vServer" ]]
                then
                    createVServer
                    continue
                fi
                if [[ "$ASSET" == "Volume" ]]
                then
                    createVolume
                    continue
                fi
            fi
            if [[ "$ACTION" == "Attach" ]]
            then
                ASSET=${line%%|*}
                line=${line#*|}
                ASSET_DETAILS=$line
                if [[ "$ASSET" == "Volume" ]]
                then
                    getVolumes
                    getVServers
                    attachVolume
                    continue
                fi
            fi
            if [[ "$ACTION" == "Connect" ]]
            then
                disconnectFromAccount
                continue
            fi
        done < $INPUT_FILE
    }

    # Should Parameterise this
    while [ $# -gt 0 ]
    do
        case "$1" in
            -a) INPUT_FILE="$2"; shift;;
            *)  echo ""; echo >&2 \
                "usage: $0 [-a <Asset Definition File>] (Default is CreateAssets.in)"
                echo ""; exit 1;;
        esac
        shift
    done

    processAssets

    echo "**************************************"
    echo "*****  Finished Creating Assets  *****"
    echo "**************************************"

CreateAssetsProd.in

    Production:Connect|exaprod|welcome1
    Production:Create|vServer|VS006|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.13,192.168.0.13,10.117.81.67,172.17.0.14
    Production:Create|vServer|VS007|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.14,192.168.0.14,10.117.81.68,172.17.0.15
    Production:Create|vServer|VS008|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.61,192.168.0.61,10.117.81.61,172.17.0.16
    Production:Create|vServer|VS009|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.62,192.168.0.62,10.117.81.62,172.17.0.17
    Production:Create|vServer|VS000|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.63,192.168.0.63,10.117.81.63,172.17.0.18
    Production:Create|vServer|VS001|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.64,192.168.0.64,10.117.81.64,172.17.0.19
    Production:Create|vServer|VS002|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.65,192.168.0.65,10.117.81.65,172.17.0.20
    Production:Create|vServer|VS003|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.66,192.168.0.66,10.117.81.66,172.17.0.21
    Production:Create|Volume|VS006|50
    Production:Create|Volume|VS007|50
    Production:Create|Volume|VS008|50
    Production:Create|Volume|VS009|50
    Production:Create|Volume|VS000|50
    Production:Create|Volume|VS001|50
    Production:Create|Volume|VS002|50
    Production:Create|Volume|VS003|50
    Production:Attach|Volume|VS006|VS006
    Production:Attach|Volume|VS007|VS007
    Production:Attach|Volume|VS008|VS008
    Production:Attach|Volume|VS009|VS009
    Production:Attach|Volume|VS000|VS000
    Production:Attach|Volume|VS001|VS001
    Production:Attach|Volume|VS002|VS002
    Production:Attach|Volume|VS003|VS003
    Production:Disconnect
    Development:Connect|exadev|welcome1
    Development:Create|vServer|VS014|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.24,10.117.81.71,172.17.0.24
    Development:Create|vServer|VS015|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.25,10.117.81.72,172.17.0.25
    Development:Create|vServer|VS016|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.26,10.117.81.73,172.17.0.26
    Development:Create|vServer|VS017|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.27,10.117.81.74,172.17.0.27
    Development:Create|vServer|VS018|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.28,10.117.81.75,172.17.0.28
    Development:Create|vServer|VS019|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.29,10.117.81.76,172.17.0.29
    Development:Create|vServer|VS020|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.30,10.117.81.77,172.17.0.30
    Development:Create|vServer|VS021|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.31,10.117.81.78,172.17.0.31
    Development:Create|vServer|VS022|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.32,10.117.81.79,172.17.0.32
    Development:Create|vServer|VS023|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.33,10.117.81.80,172.17.0.33
    Development:Create|vServer|VS024|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.34,10.117.81.81,172.17.0.34
    Development:Create|vServer|VS025|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.35,10.117.81.82,172.17.0.35
    Development:Create|vServer|VS026|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.36,10.117.81.83,172.17.0.36
    Development:Create|vServer|VS027|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.37,10.117.81.84,172.17.0.37
    Development:Create|Volume|VS014|50
    Development:Create|Volume|VS015|50
    Development:Create|Volume|VS016|50
    Development:Create|Volume|VS017|50
    Development:Create|Volume|VS018|50
    Development:Create|Volume|VS019|50
    Development:Create|Volume|VS020|50
    Development:Create|Volume|VS021|50
    Development:Create|Volume|VS022|50
    Development:Create|Volume|VS023|50
    Development:Create|Volume|VS024|50
    Development:Create|Volume|VS025|50
    Development:Create|Volume|VS026|50
    Development:Create|Volume|VS027|50
    Development:Attach|Volume|VS014|VS014
    Development:Attach|Volume|VS015|VS015
    Development:Attach|Volume|VS016|VS016
    Development:Attach|Volume|VS017|VS017
    Development:Attach|Volume|VS018|VS018
    Development:Attach|Volume|VS019|VS019
    Development:Attach|Volume|VS020|VS020
    Development:Attach|Volume|VS021|VS021
    Development:Attach|Volume|VS022|VS022
    Development:Attach|Volume|VS023|VS023
    Development:Attach|Volume|VS024|VS024
    Development:Attach|Volume|VS025|VS025
    Development:Attach|Volume|VS026|VS026
    Development:Attach|Volume|VS027|VS027
    Development:Disconnect

This entry was originally posted on The Old Toxophilist site.

    Read the article

  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
A few weeks ago I was working on a small internal project that involved importing a CSV file into a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file into a SQL Server database. As some may already know, importing a CSV file into SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly considers them to be a data type based on the first few rows and leaves out all the data which does not match that data type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read that and recognize the exact data types of each column.

Now what is schema.ini? Taken from the documentation: the schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

Points to remember before creating schema.ini:
1. The schema information file must always be named 'schema.ini'.
2. The schema.ini file must be kept in the same directory where the CSV file exists.
3. The schema.ini file must be created before reading the CSV file.
4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

Here's an example of how the schema looks:

    [Employee.csv]
    ColNameHeader=False
    Format=CSVDelimited
    DateTimeFormat=dd-MMM-yyyy
    Col1=EmployeeID Long
    Col2=EmployeeFirstName Text Width 100
    Col3=EmployeeLastName Text Width 50
    Col4=EmployeeEmailAddress Text Width 50

To get started, let's go ahead and create a simple blank database; just for the purpose of this demo I created a database called TestDB. After creating the database, fire up Visual Studio and create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and then place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project, set up the HTML markup, and add one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file into the SQL Server database. Here is the full code block:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.OleDb;
    using System.IO;
    using System.Text;

    namespace WebApplication1
    {
        public partial class CSVToSQLImporting : System.Web.UI.Page
        {
            private string GetConnectionString()
            {
                return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
            }
            private void CreateDatabaseTable(DataTable dt, string tableName)
            {
                string sqlQuery = string.Empty;
                string sqlDBType = string.Empty;
                string dataType = string.Empty;
                int maxLength = 0;
                StringBuilder sb = new StringBuilder();

                sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                for (int i = 0; i < dt.Columns.Count; i++)
                {
                    dataType = dt.Columns[i].DataType.ToString();
                    if (dataType == "System.Int32")
                    {
                        sqlDBType = "INT";
                    }
                    else if (dataType == "System.String")
                    {
                        sqlDBType = "NVARCHAR";
                        maxLength = dt.Columns[i].MaxLength;
                    }

                    if (maxLength > 0)
                    {
                        sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                    }
                    else
                    {
                        sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                    }
                }

                sqlQuery = sb.ToString();
                sqlQuery = sqlQuery.Trim().TrimEnd(',');
                sqlQuery = sqlQuery + " )";

                using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                {
                    sqlConn.Open();
                    SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                    sqlCmd.ExecuteNonQuery();
                    sqlConn.Close();
                }
            }
            private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
            {
                string sqlQuery = string.Empty;
                StringBuilder sb = new StringBuilder();

                sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                sqlQuery = sb.ToString();

                using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                {
                    sqlConn.Open();
                    SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                    sqlCmd.ExecuteNonQuery();
                    sqlConn.Close();
                }
            }
            protected void Page_Load(object sender, EventArgs e)
            {
            }
            protected void BTNImport_Click(object sender, EventArgs e)
            {
                if (FileUpload1.HasFile)
                {
                    FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                    if (fileInfo.Name.Contains(".csv"))
                    {
                        string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                        string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                        // Save the CSV file on the server inside 'UploadedCSVFiles'
                        FileUpload1.SaveAs(csvFilePath);

                        // Fetch the location of the CSV file
                        string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                        string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                        string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                        // Load the data from the CSV into a DataTable
                        OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                        DataTable dtCSV = new DataTable();
                        DataTable dtSchema = new DataTable();

                        adapter.FillSchema(dtCSV, SchemaType.Mapped);
                        adapter.Fill(dtCSV);

                        if (dtCSV.Rows.Count > 0)
                        {
                            CreateDatabaseTable(dtCSV, fileName);
                            Label2.Text = string.Format("The table ({0}) has been successfully created in the database.", fileName);

                            string fileFullPath = filePath + fileInfo.Name;
                            LoadDataToDatabase(fileName, fileFullPath, ",");

                            Label1.Text = string.Format("({0}) records have been loaded to the table {1}.", dtCSV.Rows.Count, fileName);
                        }
                        else
                        {
                            LBLError.Text = "File is empty.";
                        }
                    }
                    else
                    {
                        LBLError.Text = "Unable to recognize file.";
                    }
                }
            }
        }
    }

The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() is a method that returns a string; it gets the connection string that is configured in the web.config file. CreateDatabaseTable() is a method that accepts two (2) parameters, the DataTable and the file name; as the method name suggests, it automatically creates a table in the database based on the source DataTable and the file name of the CSV file. LoadDataToDatabase() is a method that accepts three (3) parameters, the tableName, fileFullPath and delimeter value; this method is where the actual saving or importing of data from CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location and, at the same time, is where CreateDatabaseTable() and LoadDataToDatabase() are called. If you notice, I also added some basic trappings and validations within that event.

Now, to test the import utility, let's create some data in CSV format. Just for the simplicity of this demo, create a CSV file named "Employee" and add some data to it. Here's an example below:

    1,VMS,Durano,[email protected]
    2,Jennifer,Cortes,[email protected]
    3,Xhaiden,Durano,[email protected]
    4,Angel,Santos,[email protected]
    5,Kier,Binks,[email protected]
    6,Erika,Bird,[email protected]
    7,Vianne,Durano,[email protected]
    8,Lilibeth,Tree,[email protected]
    9,Bon,Bolger,[email protected]
    10,Brian,Jones,[email protected]

Now save the newly created CSV file somewhere on your hard drive. Okay, let's run the application and browse to the CSV file that we have just created. Take a look at the sample screen shots below: after browsing the CSV file; after clicking the Import button. Now if you look at the database that we created earlier, you'll notice that the Employee table was created with the imported data in it. See the screen shot below.

That's it! I hope someone finds this post useful! Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET
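The GetConnectionString() method above assumes a connection string named DBConnectionString in web.config; a minimal sketch of that entry (the server name is an invented example) might look like:

    <connectionStrings>
        <add name="DBConnectionString"
             connectionString="Data Source=MYSERVER;Initial Catalog=TestDB;Integrated Security=True"
             providerName="System.Data.SqlClient" />
    </connectionStrings>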

    Read the article

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
Sometimes an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a mail server is down, and trying to work out a way of running each one again can be painful. It's almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn't want to be re-run. Luckily, there's a table in the ReportServer database called dbo.Subscriptions, which is where the LastStatus of the subscription is stored. Having found the subscriptions that you're interested in, finding the SQL Agent jobs that correspond to them can be frustrating. Luckily, the job step command contains the subscriptionid, so it's possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them.

    select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + ''''
    from msdb.dbo.sysjobs j
    join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
    join [ReportServer].[dbo].[Subscriptions] s
      on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
    where s.LastStatus like 'Failure sending mail%';

Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.
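If you first want to see what failed before generating the job commands, a quick look at the Subscriptions table works; this is a sketch, and the exact LastStatus text varies with the failure type:

    select s.SubscriptionID, s.LastStatus, s.LastRunTime
    from ReportServer.dbo.Subscriptions s
    where s.LastStatus like 'Failure%';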

    Read the article

  • I want to hit Apex SQL with a big stick

    - by Michael Stephenson
<Whinge> Thought I'd just have a little whinge about this product, which caused me a load of grief the other day..... So the background was that my development machine had a completely full hard disk which I needed to sort out. Upon investigation I found the issue was that the msdb database had managed to get very large. This was caused because a long time ago (and I can't even remember why) I tried out Apex SQL. After a few days I decided to uninstall it and thought nothing more of it. What I didn't realise was that uninstalling it doesn't actually uninstall it (and it doesn't inform you about this); there were still some assemblies left on my machine. Every time SQL Server was running it was starting the Apex SQL connection monitor, which was then running in the background and regularly recording information in the msdb database. Over time it had recorded enough to fill the disk. The below article advises how to remove this fully, so if you're having the problem then try this out: http://knowledgebase.apexsql.com/2007/08/how-to-uninstall-apexsqlconnectionmonit_09.htm Once this was sorted out, it's interesting to read the above article, because I just don't think the approach used by the vendor of this software is a very good one. So for the Apex team I just wanted to pass on a thought: if I want to uninstall your product, you should tell me if stuff is left on the machine, especially if a process will be left running which is going to fill my machine with useless data. </Whinge>

    Read the article

  • After restoring a SQL Server database from another server - get login fails

    - by Renso
Issue: After you have restored a SQL Server database from another server, let's say from production to a Q/A environment, you get the "Login failed" message for your service account.

Reason: User logon information is stored in the syslogins table in the master database. By changing servers, or by altering this information by rebuilding or restoring an old version of the master database, the information may be different from when the user database dump was created. If logins do not exist for the users, they will receive an error indicating "Login failed" while attempting to log on to the server. If the user logins do exist, but the SUID values (for 6.x) or SID values (for 7.0) in master..syslogins and the sysusers table in the user database differ, the users may have different permissions than expected in the user database.

Solution: Link a user entry in the sys.database_principals system catalog view in the current database to a SQL Server login of the same name. If a login with the same name does not exist, one will be created. Examine the result from the Auto_Fix statement to confirm that the correct link is in fact made. Avoid using Auto_Fix in security-sensitive situations. When you use Auto_Fix, you must specify user and password if the login does not already exist; otherwise you must specify user, and password will be ignored. login must be NULL. user must be a valid user in the current database. The login cannot have another user mapped to it. Execute the following stored procedure; in this example the login user name is "MyUser":

    exec sp_change_users_login 'Auto_Fix', 'MyUser'

NOTE: sp_change_users_login cannot be used with a SQL Server login created from a Windows principal or with a user created by using CREATE USER WITHOUT LOGIN.
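To see all orphaned users in the restored database before fixing them one by one, the same procedure has a reporting mode:

    -- Lists users in the current database that are not linked to a server login
    exec sp_change_users_login 'Report'

Running this right after the restore gives you the list of user names to feed into the 'Auto_Fix' call above.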

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let's run a query against this DMV so we can analyze the results:

    SELECT * FROM sys.dm_tran_locks

As we can see, a lot of lock information is returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the ID of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition; when you have a table that is partitioned, locks can be escalated to the partition level before going to a table-level lock. The request_mode column gives us information about the type of lock that is being requested. From the screenshots above we see RangeS-S locks, which represent shared range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
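When chasing a blocking problem, it is usually more useful to narrow the output to the requests that are still waiting. A sketch using the columns described above (a CONVERT status would also indicate a session waiting to upgrade a lock it already holds):

    SELECT resource_type,
           resource_database_id,
           request_mode,
           request_status,
           request_session_id
    FROM   sys.dm_tran_locks
    WHERE  request_status = 'WAIT';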

    Read the article

  • Commit in SQL

    - by PRajkumar
SQL Transaction Control Language (TCL) Commands: COMMIT

Commit Transaction

As SQL developers we use the transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.

Until you commit a transaction:
·         You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
·         You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.

Note: Many people think that when you type COMMIT, the changes you have made have been written to the data files, but this is wrong. When you type COMMIT you are saying that your job is complete; the Oracle engine then verifies that the transaction achieved consistency and, when everything is OK, sends a commit message to the user once the change is recorded in the log buffer, not the data buffer. After the data is written to the log buffer, the data buffers are written to the data files later. This is how it works.

Before a transaction that modifies data is committed, the following has occurred:
·         Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction.
·         Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed.
·         The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed.

Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, some time after the transaction commits.

When a transaction is committed, the following occurs:
1.      The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
2.      The log writer process (LGWR) writes redo log entries from the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
3.      Oracle releases locks held on rows and tables.
4.      Oracle marks the transaction complete.

Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously and that transactions do not need to wait for the redo to be on disk.

The syntax of the COMMIT statement is:

    COMMIT [WORK] [COMMENT 'your comment'];

·         WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent.

Example: committing an insert.

    INSERT INTO table_name VALUES (val1, val2);
    COMMIT WORK;

·         COMMENT is also optional. This clause is supported for backward compatibility; Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction.

Example: the following statement commits the current transaction and associates a comment with it:

    COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';

·         WRITE Clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order.

Example: to commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement:

    COMMIT WRITE BATCH;

Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user.

WAIT | NOWAIT. Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput.
    IMMEDIATE | BATCH: use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, the transaction commits with IMMEDIATE behavior.

    - FORCE clause: use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.
      - In a distributed database system, the FORCE 'string' [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to assign the transaction a specific system change number (SCN); if you omit integer, the transaction is committed using the current SCN.
      - The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where 'string' is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view V$CORRUPT_XID_LIST and to specify this clause.
      - Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.

    Example, forcing an in-doubt transaction. The following statement manually commits a hypothetical in-doubt distributed transaction; you must have DBA privileges to issue it:

        COMMIT FORCE '22.57.53';
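
    Finally, a minimal end-to-end sketch of how COMMIT interacts with savepoints and ROLLBACK. The accounts table and its values are hypothetical, used only for illustration:

        UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;

        SAVEPOINT after_debit;                 -- a point we can roll back to

        UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;

        ROLLBACK TO SAVEPOINT after_debit;     -- undoes only the second update

        UPDATE accounts SET balance = balance + 100 WHERE account_id = 3;

        COMMIT;                                -- makes the remaining changes permanent,
                                               -- releases locks, and erases the savepoint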

    Read the article

  • Part 15: Fail a build based on the exit code of a console application

    In this series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    When a console application or a batch file fails, its exit code is set to a value other than 0. You would expect the build to see this and report an error. This is not true, however. First we set up the scenario. Add a ConsoleApplication project to the solution you are building and, in the Main method, set the exit code to 1:

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("This is an error in the script.");
                // A non-zero exit code tells the calling process that something failed
                Environment.ExitCode = 1;
            }
        }

    Check in the code. You can choose to include this console application in the build, or you can add the exe to source control.

    Now modify the build process template CustomTemplate.xaml:
    1. Add an argument ErrornousScript.
    2. Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items".
    3. Add a Sequence activity to the template.
    4. In the Sequence, add a ConvertWorkspaceItem and an InvokeProcess activity (see Part 14: Execute a PowerShell script for more detailed steps).
    5. In the FileName property of the InvokeProcess, use ErrornousScript so the console application will be called.

    Modify the build definition and make sure that ErrornousScript executes the exe that sets the exit code to 1. You have now set up a build definition that will execute the failing console application. When you run it, you will see that the build succeeds. This is not what you want!

    To solve this, you can make use of the Result property on the InvokeProcess activity. So let's change our build process template:
    1. Add new variables (scoped to the sequence where you run the console application) called ExitCode (type = Int32) and ErrorMessage.
    2. Click on the InvokeProcess activity and change the Result property to ExitCode.
    3. In the Handle Standard Output section of the InvokeProcess, add a Sequence activity.
    4. In the Sequence activity, add an Assign primitive and set the following properties:
       To = ErrorMessage
       Value = If(Not String.IsNullOrEmpty(ErrorMessage), Environment.NewLine + ErrorMessage, "") + stdOutput
    5. Add the default BuildMessage to the sequence that outputs stdOutput.
    6. Beneath the InvokeProcess activity, add an If activity with the condition ExitCode <> 0.
    7. In the Then section, add a Throw activity and set the Exception property to New Exception(ErrorMessage).

    When you now check in the build process template and run the build, the build fails whenever the console application returns a non-zero exit code. And that is exactly what we want.

    You can download the full solution at BuildProcess.zip. It includes the sources of every part and will continue to evolve.

    Read the article

  • Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model #ssas #tabular #bism

    - by Marco Russo (SQLBI)
    Alberto, Chris, and I spent many months (many nights, holidays, and also working days of the last months) writing the book we would have liked to read when we started working with Analysis Services Tabular: a book that explains how to use Tabular, how to model data with Tabular, how Tabular works internally, and how to optimize a Tabular model. All the things you need to start on a real project and make a happy customer. You know, we're all consultants after all, so customer satisfaction is really important if we want to be paid for our job! Now the writing is finished, we're in the final stage of editing and reviews, and we look forward to getting our print copy. Its title is very long: Microsoft SQL Server 2012 Analysis Services – The BISM Tabular Model. But the important thing is that you can already (pre)order it. This is the list of chapters:

    01. BISM Architecture
    02. Guided Tour on Tabular
    03. Loading Data Inside Tabular
    04. DAX Basics
    05. Understanding Evaluation Contexts
    06. Querying Tabular
    07. DAX Advanced
    08. Understanding Time Intelligence in DAX
    09. Vertipaq Engine
    10. Using Tabular Hierarchies
    11. Data modeling in Tabular
    12. Using Advanced Tabular Relationships
    13. Tabular Presentation Layer
    14. Tabular and PowerPivot for Excel
    15. Tabular Security
    16. Interfacing with Tabular
    17. Tabular Deployment
    18. Optimization and Monitoring

    Have a good read!

    Read the article

  • Runaway version store in tempdb

    - by DavidWimbush
    Today was really a new one. I got back from a week off and found our main production server's tempdb had gone from its usual 200MB to 36GB. Ironically, I spent Friday at the most excellent SQLBits VI, and one of the sessions I attended was Christian Bolton talking about tempdb issues - including runaway tempdb databases. How just-in-time was that?!

    I looked into the file growth history and it looks like the problem started when my index maintenance job was chosen as the deadlock victim. (Funny how they almost make it sound like you've won something.) That left tempdb pretty big, but for some reason it grew several more times. And since I'd left the file growth at the default 10% (aaargh!), the bigger it got the worse it got. The last regrowth event was 2.6GB. Good job I've got Instant Initialization on.

    Since the Disk Usage report showed it was 99% unallocated, I went into the Shrink Files dialogue, which helpfully informed me the data file was 250MB. I'm afraid I've got a life (allegedly), so I restarted the SQL Server service and then immediately ran a script to make the initial size bigger and change the file growth to a fixed number of MB. The script complained that the specified size was smaller than the current size. Within seconds! WTF?

    Now I had to find out what was using so much of it. By using the DMV sys.dm_db_file_space_usage I found the problem was in the version store, and using the DMV sys.dm_db_task_space_usage and the Top Transactions by Age report I found that the culprit was a 3rd party database where I had turned on read_committed_snapshot and then not bothered to monitor things properly. Just because something has always worked before doesn't mean it will work in every future case. This application had an implicit transaction that had been running for over 2 hours.
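
    For anyone chasing the same problem, here is a minimal sketch of the kind of queries involved. The two DMVs are the ones named above; the column arithmetic (8KB pages converted to MB) and the choice of columns to sum are my own, so treat this as a starting point rather than a finished diagnostic:

        -- Run against tempdb: how much space is the version store holding?
        SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb,
               SUM(unallocated_extent_page_count) * 8 / 1024     AS unallocated_mb
        FROM   tempdb.sys.dm_db_file_space_usage;

        -- Which sessions are allocating the most tempdb pages right now?
        SELECT TOP (10)
               session_id,
               SUM(user_objects_alloc_page_count + internal_objects_alloc_page_count) AS pages_allocated
        FROM   tempdb.sys.dm_db_task_space_usage
        GROUP  BY session_id
        ORDER  BY pages_allocated DESC;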

    Read the article

  • Cool SQL formatter tool

    - by AndyScott
    I have to deal with all types of code written by people from different organizations and different countries, using different languages, so obviously standards differ across these sources. One of the biggest headaches I have come across is how much people differ in the formatting of their SQL statements, specifically stored procs. When you regularly get over 500 lines in a sproc, if the code is not formatted correctly you can get lost trying to figure out where one nested BEGIN begins and another nested END ends (a small illustration follows below). One of my co-workers showed me this site today that does a pretty damn good job of making sense of that type of code: http://www.dpriver.com/pp/sqlformat.htm. This is a free website that offers a box to enter your nasty code; click "Format SQL" and it cleans it for you. I am sure there are situations where this may not work, but given the code I have been working with recently, it does a really good job. There is a pay version with more options, including a VS add-in, a desktop component (with quick keys to clean text in programs like Notepad), the ability to output in HTML, and other stuff. Heck, I watched a demo where the purchased version took formatted SQL code and turned it into a generic StringBuilder object. Yes, this seems like a shameless plug, but no, I have no relation to or knowledge of anyone involved in the development of this product; it just seems useful. Either way, I recommend checking out the free version.
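
    To show the kind of mess being described, here is an invented before-and-after (the SQL is made up for the example; it is not output from the site):

        -- Before: the nesting is easy to misread
        IF @status = 1 BEGIN UPDATE orders SET state = 'open' WHERE id = @id IF @notify = 1 BEGIN EXEC send_alert @id END END

        -- After formatting: each BEGIN lines up with its END
        IF @status = 1
        BEGIN
            UPDATE orders SET state = 'open' WHERE id = @id;

            IF @notify = 1
            BEGIN
                EXEC send_alert @id;
            END
        END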

    Read the article

  • Code formatter for SSMS

    - by blakmk
    I was searching recently for a code formatter for T-SQL and I came across this nice little utility that I wanted to share: http://www.wangz.net/cgi-bin/pp/gsqlparser/sqlpp/sqlformat.tpl. I've been dealing with a lot of legacy code lately, and there is nothing I find more infuriating than unformatted code. This tool seems to work quite well: just one click and it formats everything nicely. There is also a free web version.

    Read the article

< Previous Page | 396 397 398 399 400 401 402 403 404 405 406 407  | Next Page >