Search Results

Search found 10921 results on 437 pages for 'latex environment'.

  • FREE eBook: .NET Performance Testing and Optimization (Part 1)

    In this first part of a complete guide to performance profiling, Paul Glavich and Chris Farrell explain why performance testing is a good idea and walk you through everything you need to know to set up a test environment. This comprehensive guide to getting started is an essential handbook for any programmer looking to set up a .NET testing environment and get the best results out of it. Download your free copy now.

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB) OR Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word Document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of a file uploaded, just a stripped text version in the textbox. What I'm wondering now is, are they only going to look at the text version of my resume? If that's the case then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such atrocities from the conversion as "Other Skills: LTEX". I can pretty up text files without too much effort, so this isn't that big of a deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of that, then I certainly don't want to overwrite it. Has anyone here gone through the Amazon hiring process or interviewed candidates and can say how this works? (i.e. when on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar?)

    Read the article

  • New Exadata Customer Cases

    - by Javier Puerta
    New reference stories available for Exadata:
    - Procter & Gamble Completes Point-of-Sale Data Queries up to 30 Times Faster, Reduces IT Costs, and Improves Insight with Engineered Data Warehouse Solution
    - ZLM Verzekeringen Improves Customer Service with Integrated Back-Office Environment on Exadata
    - KyivStar, JSC Reduces Storage Volumes to 15% of Its Legacy Environment and Increases System Productivity by 500% with High-Performance IT Infrastructure
    - GfK Group Retail and Technology Ensures Successful Growth with Exadata Consolidation

    Read the article

  • Through the Virtual Microscope–SQL Server and Hyper-V

    - by GavinPayneUK
    In my recent SQLBits conference session, Through the Virtual Looking Glass (available to watch here), I spoke about monitoring SQL Server in a virtualised environment. We looked at good and bad contention, where the resource pressures that can adversely affect SQL Server might come from, and most importantly how we can monitor the environment to detect them. Since then, I’ve been in discussions with some of the Microsoft product team who are focussed on optimising Windows 8 Hyper-V and SQL Server...(read more)

    Read the article

  • Can anyone help with this exception, please? (2 replies)

    Hi all, The following stack trace is generated in the environment and by the action detailed below. Environment: VS2005, C#, WinForms, DataSet, detail view, BindingNavigator control. User action: load form, click the AddNew button, then click the MoveFirst button. I have tried using the BindingNavigator MoveFirstItem Click event to check for null values, but the event is not raised before this exception, and I cannot find...

    Read the article

  • C++ Without Source Files

    - by Snowman
    Bjarne Stroustrup mentions in his book "The C++ Programming Language, 4th Edition" that not all C++ implementations use files to store and compile code: There are systems that do not store, compile, and present C++ programs to the programmer as sets of files. (Chapter 15, page 419) Later in the chapter, he reiterates that certain implementations do not use files but he does not give any examples. How would such an environment function compared to a more common file-based environment?

    Read the article

  • Vim: Bind <C-Enter>

    - by mtkoan
    I'm using vim for editing LaTeX; how can I bind <C-Enter>? I tried the following, which does not work: imap <C-Enter> \\<CR> and imap <C-Return> \\<CR> However, something like imap <C-i> \\<CR> does work. Any ideas or suggestions for vim LaTeX add-ons?

    Read the article

  • How to Submit to an Environmental Directory

    An environmental directory is a niche directory that accepts sites in the environment niche, such as agriculture, environmental issues, oceans, sciences, climate change, energy and wildlife. Most environment directories are maintained by non-profit organizations. The main purpose of an environmental directory is to give the public access to environmental resources.

    Read the article

  • SQL Azure - Creating backups and copies of your databases

    As a DBA you have always followed the practice of backing up your database (or taking a snapshot of it) before making any changes, so that you can revert to the old database state if something goes wrong. Also, to set up a development or test environment, you use a backup of your database and restore it in the respective environment. If you are moving to SQL Azure, what would you do in these cases, given that backup/restore and database snapshots are not supported as of now?
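
    One commonly used workaround in SQL Azure is the documented CREATE DATABASE ... AS COPY OF statement, which produces a transactionally consistent copy of a database on the same logical server. Below is a minimal sketch of that idea driven from Python with pyodbc; the server, credentials and database names are placeholders and are not from the original post.

        # Sketch: copy a SQL Azure database before a risky change (placeholder names).
        import pyodbc

        # Database copies are created from the destination server's master database.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=tcp:myserver.database.windows.net;"
            "DATABASE=master;UID=myuser@myserver;PWD=mypassword",
            autocommit=True,  # CREATE DATABASE cannot run inside a user transaction
        )
        cursor = conn.cursor()

        # Transactionally consistent copy of MyAppDb on the same logical server.
        cursor.execute("CREATE DATABASE MyAppDb_copy AS COPY OF MyAppDb")

        # The copy runs asynchronously; poll sys.dm_database_copies on master
        # until it finishes before treating MyAppDb_copy as your snapshot.
        conn.close()

    The resulting copy can then serve as the pre-change fallback, or as the source for a development or test environment.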

    Read the article

  • Steps to Apply a Service Pack or Patch to Mirrored SQL Server Databases

    Planning on patching my SQL Servers to the latest service pack, but not sure how to complete this for an environment that is using database mirroring? In this tip, we will outline the environment and then walk through the process of patching mirrored servers.

    Read the article

  • View, Control, Instruct with iTalc

    Linux.com: "If you work in an educational or training environment where you instruct users on the ins and outs of using computers, or you need to be able to (for whatever reason) control the PC user's use of a machine, the tools available are often quite expensive or quite difficult to use. Neither is the case in the Linux environment, where tools like iTalc are available."

    Read the article

  • Adding a simple .js redirect through RoR? [closed]

    - by user18294
    I've just been tasked with adding a .js redirect from an RoR site to another domain. The files are being pulled from GitHub, which means I obviously need to edit the files there. The problem is I know nothing about RoR and coming from an FTP environment, this simple task is proving quite confusing. Can someone please guide me step-by-step on how I would redirect one site to another using JS in this environment? Thanks for your help!

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing:
    - RDA: Remote Diagnostic Agent
    - DFW: Diagnostic Framework
    - Selective Tracing
    - DMS: Dynamic Monitoring Service
    - ODL: Oracle Diagnostic Logging
    - ADR: Automatic Diagnostics Repository
    - ADRCI: Automatic Diagnostics Repository Command Interpreter
    - WLDF: WebLogic Diagnostic Framework
    This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control.
    RDA (Remote Diagnostic Agent) RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc. When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration or to diagnose an issue on your own. How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs and these may contain information from Selective Tracing sessions.
Examples of what it currently collects: (for details please see the links in the Resources section) Diagnostic Logs (ODL) Diagnostic Framework Incidents (DFW) SOA MDS Deployment Descriptors SOA Repository Summary Statistics Thread Dumps Complete Domain Configuration RDA Resources: Webcast Recording: Using RDA with Oracle SOA Suite 11g Blog Post: Diagnose SOA Suite 11g Issues Using RDA Download RDA How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1] How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1] Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1] top DFW (Diagnostic Framework) DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW. Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps. Incident: A collection of Diagnostic Dumps associated with a particular problem Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created. WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW, however, it can be a source of DFW Incident creation through the use of a 'Watch'. WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution. Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered the specified dumps will be collected and added to the Incident Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule ADR: Automatic Diagnostics Repository; Defined for every server in a domain. This is where Incidents are stored Now let's walk through a simple flow: Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1 DFW Log Condition for OWS-04086 evaluates to TRUE DFW creates a new Incident in the ADR for managed server 1 DFW executes the specified Diagnostic Dumps and adds the output to the Incident In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request. How is it related to the other tools? DFW generates Incidents which are collections of Diagnostic Dumps. One of the system level Diagonstic Dumps collects the current server diagnostic log which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated and the below resources go into more detail. 
A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool) where there are commands to do the following: Create an Incident Execute a single Diagnostic Dump Describe a Diagnostic Dump List the available Diagnostic Dumps The WLST option offers greater control in what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA, however, DFW is geared towards collecting specific runtime information when an issue occurs while existing Incidents are collected by RDA. There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds so they have the potential to create multiple Incidents if these conditions are consistent. The Incidents generated by these Watches will only contain System level Diagnostic Dumps. These same System level Diagnostic Dumps will be included in any application scoped Incident as well. Starting in 11.1.1.6, SOA Suite is including its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident such as in the earlier example using the error code OWS-04086. soa.config: MDS configuration files and deployed-composites.xml soa.composite: All artifacts related to the deployed composite soa.wsdl: Summary of endpoints configured for the composite soa.edn: EDN configuration summary if applicable soa.db: Summary DB information for the SOA repository soa.env: Coherence cluster configuration summary soa.composite.trail: Partial audit trail information for the running composite The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases along with enhancements to DFW itself. DFW Resources: Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework Diagnostic Framework Documentation DFW WLST Command Reference Documentation for SOA Diagnostic Dumps in 11.1.1.6 top Selective Tracing Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run. When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criteria and increasing the log level globally results in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. 
This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session. Available Context Criteria: Application Name Client Address Client Host Composite Name User Name Web Service Name Web Service Port Selective Tracing Resources: Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues How to Use Selective Tracing for SOA [ID 1367174.1] Selective Tracing WLST Reference top DMS (Dynamic Monitoring Service) DMS exposes runtime information for monitoring. This information can be monitored in two ways: Through the DMS servlet As exposed MBeans The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident. When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system. When you want to trigger a WLDF Watch based on a metric exposed through DMS. How is it related to the other tools? DMS metrics can be monitored with WLDF Watches which can in turn notify DFW to create an Incident. DMS Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How to Reset a SOA 11g DMS Metric DMS Documentation top ODL (Oracle Diagnostic Logging) ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server which uses its own log format / file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry: [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[ When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log. How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. 
Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW log conditions are triggered by ODL log events. ODL Resources: ODL Documentation top ADR (Automatic Diagnostics Repository) ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location which can be found under <SERVER_HOME>/adr. This is referred to the as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer' and here is where you'll find the 'adr' directory which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/ In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW and this ADR home contains SOA Suite related Incidents Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored. When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working with Oracle you can simply zip it up and provide it. How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default and ADRCI relies on the ADR locations to help manage the contents. top ADRCI (Automatic Diagnostics Repository Command Interpreter) ADRCI is a command line tool for packaging and managing Incidents. When would you use it? When purging Incidents from an ADR Home location or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support. How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6 IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path, otherwise it will give a warning and package only the Incident files. ADRCI Resources: How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1] ADRCI Documentation top WLDF (WebLogic Diagnostic Framework) WLDF is functionality available in WebLogic Server since version 9. Starting with FMw 11g a link has been added between WLDF and the pre-existing DFW, the WLDF Watch Notification. Let's take a closer look at the flow: There is a need to monitor the performance of your SOA Suite message processing A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance. 
The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX. The Watch is triggered when the threshold is exceeded and fires the Notification. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time. When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered. How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification. WLDF Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1] WLDF Documentation top
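
    A quick way to try the WLST route mentioned in the DFW section above is to run the diagnostic dump commands from an online WLST session. The sketch below assumes the standard Fusion Middleware diagnostic commands (listDumps, describeDump, executeDump) are available in your 11g install; the connection details and output path are placeholders, and the dump name soa.config is taken from the list above.

        # Minimal online WLST (Jython) session -- connection details are placeholders.
        connect('weblogic', 'welcome1', 't3://adminhost:7001')

        # List the Diagnostic Dumps (system and application scoped) registered on the server.
        listDumps()

        # Show the arguments a particular dump accepts, e.g. the SOA MDS/config dump.
        describeDump(name='soa.config')

        # Execute the dump and write its output to a file that can be attached to an SR.
        executeDump(name='soa.config', outputFile='/tmp/soa_config_dump.txt')

        disconnect()

    Typically you would launch wlst.sh from the SOA Oracle home's common/bin directory so the custom diagnostic commands are loaded; if the dump names differ in your version, listDumps() shows what is actually registered.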

    Read the article

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time. Especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increase, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.

    Read the article

  • Application to automate Windows software installation in a test lab

    - by Marc
    I have several test environments (hyper-V) which contain a variety of windows servers. Each machine needs periodically rolling back to a given snapshot and then re-installing with the latest version of our software to test. The software installs are quite complex MSI's with a fair few option screens. I know that the installs can be driven from the command line, passing in parameters to override the wizard options. At the simplest level I suppose I could just write a batch file to kick off each install with the required parameters, however the values that are passed in do need to change from time to time (and environment to environment) so a tool with a config file and simple GUI seems like a better idea. I think what makes it slightly more painful is the multiple environments. For example one environment might contain 4 servers and need a config file with all the server names, service endpoints etc. Another environment might be a 1-box install with all names and endpoints set to localhost. So, ideally I want to be able to store different setup configurations and use them to run all the required installers with the relevant settings against the relevant machines. Before I go off to write the thing, does anyone know of an existing, simple, free tool that will let me achieve this?
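
    As a rough sketch of the batch-file-plus-parameters idea described above, a small script driven by a per-environment config file already gets most of the way there (minus the GUI). Everything below -- the JSON layout, the MSI property names and the paths -- is invented for illustration; real installers will have their own parameters.

        # install.py -- run an MSI silently on a target box using one environment's settings.
        # The config layout and MSI property names here are illustrative only.
        import json
        import subprocess
        import sys

        def install(config_path):
            with open(config_path) as f:
                cfg = json.load(f)
            args = ["msiexec", "/i", cfg["msi_path"], "/qn",
                    "/l*v", cfg.get("log_path", "install.log")]
            # Each key/value pair becomes a public MSI property, e.g. SERVICE_ENDPOINT=...
            args += ["{0}={1}".format(key, value) for key, value in cfg["properties"].items()]
            subprocess.check_call(args)

        if __name__ == "__main__":
            # e.g. python install.py dev-4box-server1.json
            install(sys.argv[1])

    Keeping one JSON file per environment (and per server, where the values differ) covers both the four-server and the one-box cases with the same script; a GUI could simply be a front end that edits those files.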

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710's with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VM's will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • knife on Windows inconsistently reads ~\.ssh\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of (Open-source v10.12) Chef in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, however that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use Powershell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results. I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem Check your configuration file and ensure that your private key is readable (Note that the path is incorrect) This is the progression as I walk up the tree towards my ~ running knife client list:
    C:\Users\waldo\Dropbox\_company\projects\ => Above error
    C:\Users\waldo\Dropbox\_company\ => Above error
    C:\Users\waldo\Dropbox\ => It works! (Expected results)
    C:\Users\waldo\ => Expected results
    C:\Users\waldo\Documents\ => Expected Results
    C:\Users\waldo\Documents\GitHub => Expected Results
    C:\Users\waldo\Documents\GitHub\aProject\ => Expected Results
    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. Question is: Why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:
    # nodes.pp
    node base_dev { $service_env = 'dev' }
    node 'service1.ownij.lan' inherits base_dev { include global_env_specific }
    class global_env_specific { include shell::bash }
    # modules/shell/bash.pp
    class shell::bash inherits shell {
      notify{"Service env: ${service_env}": }
      file { '/etc/profile.d/custom_test.sh':
        content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
        mode => 644
      }
    }
    But every time I run puppet agent --test, puppet complains that it can't find the shell/bash/$service_env.erb file, but I double checked that it exists. I know the var is accessible due to the notify statement outputting the expected value, so I suspect I am doing something which is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and there are not many changes across environments, but for more complex configs (httpd, solr, etc) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to just handle this behavior at the template level, instead of having several, closely named directories. Thanks.

    Read the article

  • Best Practice - SQL 2012 & IIS in VMWare

    - by Dan Ribar
    We are pretty new to VMware and looking for some thoughts on our environment. We have a VMware cluster that has on one host:
    - VM#1: MS Windows 2008 R2 Enterprise & SQL Server 2012
    - VM#2: MS Windows 2008 R2 Standard & IIS
    The IIS ASP.NET app talks directly to the SQL Server. We had a similar environment on physical servers a few months ago and just recently moved to the virtualized environment. Regarding the setup, we have not tweaked any of the VM resource parameters -- all is set as standard and all is working. What is observed is that the VMs seem to spool down and we get lags in response. Of course this isn't as fast as the old physical environment, but I am wondering:
    - Is it a good idea to run the SQL Server and the IIS server on the same host? They are the only two VMs on it. The host is a new Dell R620 with 192 GB mem.
    - Does it make sense to change any CPU or memory reservations when it doesn't seem like there is any contention?
    - Is there a way to keep the VMs spooled up to eliminate delays?
    This is a brand new squeaky clean vanilla install. What are your thoughts?

    Read the article

  • Remote mouse pointer not visible in VNC

    - by aef
    I used VNC desktops as a kind of collaboration server, as shared planning and pair programming environment for a long time. Now my latest iteration uses a KVM guest running Fedora 17 "Beefy Miracle", the Cinnamon desktop environment and an X11VNC server. The X11VNC server is automatically started with the desktop environment using the following command: x11vnc -localhost -many -shared -display :0 -bg My problem is that depending on the VNC client, the mouse pointer of the remote system which is shown through VNC is not synchronized to my client. I really need this, so I can see what my partner is doing on the desktop. When using Vinagre 3.2.1 on Ubuntu Oneiric Ocelot (11.10) or Vinagre 2.3.0.3 on Debian Squeeze (6.0) and I don't have my local mouse pointer inside the VNC view, I cannot see the mouse pointer of my remote system, nor its movement. When using TightVNC on Windows 7, I can recognize a mouse pointer trace for very short amounts of time after moving the mouse, but it is not clearly visible. Using UltraVNC on Windows 7 the mouse pointer is clearly visible all the time. With Gnome 2 I never had any problems with remote pointer synchronization, using exactly the same clients. I suspect this could have something to do with Cinnamon's dependency on 3D acceleration. On the other hand, it doesn't change anything to start Cinnamon's fallback environment Cinnamon 2D. Update: Same effect when I use Gnome 3.

    Read the article

  • SSO to multiple websites from Sharepoint website

    - by Aico
    We have an intranet based on Sharepoint 2010. In this intranet we have several links to other webservers within the same Active Directory, for example a link to our Outlook Web Access site on our Exchange 2010 environment. We have three different setups which visit this Sharepoint environment and the other webservers: Windows 7 clients that are a member of the Active Directory Home pc's that connect through a SSL VPN appliance Standalone thin clients (Windows 7 embedded) within the corporate network The goal is to let people only sign in once. In the first group this isn't a problem because the AD Integrated Authentication works fine and the Windows logon is passed on to Sharepoint and the other webservers. The second group is also working fine because of the LDAP integration that the SSL VPN appliance uses. The third group is however experiencing issues. They need to enter their credentials everytime they click a link to another webserver. They first need to enter credentials for accessing the Sharepoint environment. When clicking the link for their webmail they have to re-enter their credentials, and so on. Can someone tell me what the best solution would be to also get SSO working fine for the third group? Some extra information: We also have a Forefront TMG server in our environment. I read somewhere that Forefront might be part of a solution for this problem, but not sure how. Maybe someone here can help me? Look forward to some help. Best regards, Aico

    Read the article

  • jenkins-maven-android when running throwing the error "android-sdk-linux/platforms" is not a directory"

    - by Sam
    I start setting up the jenkins-maven-android and i'm facing an issue when running the jenkin job. My Machine Details $uname -a Linux development2 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux Steps to install the Android SDK in Ubuntu https://help.ubuntu.com/community/AndroidSDK since i'm working on headless env (ssh to client machine) i used following command to install the platform tools android update sdk --no-ui download apache maven and install on http://maven.apache.org/download.html mvn -version output root@development2:/opt/android-sdk-linux/tools# mvn -version Apache Maven 3.0.4 (r1232337; 2012-01-17 08:44:56+0000) Maven home: /opt/apache-maven-3.0.4 Java version: 1.6.0_24, vendor: Sun Microsystems Inc. Java home: /usr/lib/jvm/java-6-openjdk/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.0.0-12-virtual", arch: "amd64", family: "unix" root@development2:/opt/android-sdk-linux/tools# ran the following two command as mention in below sudo apt-get update sudo apt-get install ia32-libs Problems with Eclipse and Android SDK http://developer.android.com/sdk/installing/index.html As error suggest i gave the path to android SDK in jenkins build config still im getting the error clean install -Dandroid.sdk.path=/opt/android-sdk-linux Can someone help me to resolve this. Thanks Error I'm Getting Waiting for Jenkins to finish collecting data mavenExecutionResult exceptions not empty message : Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. cause : Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. Stack trace : org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources (default-generate-sources) on project base-template: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. 
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156) at org.jvnet.hudson.maven3.launcher.Maven3Launcher.main(Maven3Launcher.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239) at org.jvnet.hudson.maven3.agent.Maven3Main.launch(Maven3Main.java:158) at hudson.maven.Maven3Builder.call(Maven3Builder.java:98) at hudson.maven.Maven3Builder.call(Maven3Builder.java:64) at hudson.remoting.UserRequest.perform(UserRequest.java:118) at hudson.remoting.UserRequest.perform(UserRequest.java:48) at hudson.remoting.Request$2.run(Request.java:326) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:679) Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-generate-sources of goal com.jayway.maven.plugins.android.generation2:android-maven-plugin:3.1.1:generate-sources failed: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 27 more Caused by: com.jayway.maven.plugins.android.InvalidSdkException: Path "/opt/android-sdk-linux/platforms" is not a directory. Please provide a proper Android SDK directory path as configuration parameter <sdk><path>...</path></sdk> in the plugin <configuration/>. As an alternative, you may add the parameter to commandline: -Dandroid.sdk.path=... or set environment variable ANDROID_HOME. 
at com.jayway.maven.plugins.android.AndroidSdk.assertPathIsDirectory(AndroidSdk.java:125) at com.jayway.maven.plugins.android.AndroidSdk.getPlatformDirectories(AndroidSdk.java:285) at com.jayway.maven.plugins.android.AndroidSdk.findAvailablePlatforms(AndroidSdk.java:260) at com.jayway.maven.plugins.android.AndroidSdk.<init>(AndroidSdk.java:80) at com.jayway.maven.plugins.android.AbstractAndroidMojo.getAndroidSdk(AbstractAndroidMojo.java:844) at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.generateR(GenerateSourcesMojo.java:329) at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.execute(GenerateSourcesMojo.java:102) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) ... 28 more channel stopped Finished: FAILURE* android home Echo root@development2:~# echo $ANDROID_HOME /opt/android-sdk-linux

    Read the article

  • Answers to “What source control system do you use?” (and some winners)

    - by jamiet
    About a month ago I posed a question here on my blog, SQL Server devs – what source control system do you use, if any? (answer and maybe win free stuff), in which I asked SQL Server developers to answer the following questions:

    - Are you putting your SQL Server code into a source control system?
    - If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using?
    - What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc.)?
    - Why did you make those particular software choices?
    - Any interesting anecdotes to share in regard to your use of source control and SQL Server?

    I had some really great responses (I highly recommend going and reading them). I promised that the five best, most thought-provoking responses (as determined by me) would win one of five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect; here are the five that I chose (note that if you responded but did not leave a means of getting in touch then you weren't considered for one of the prizes – sorry):

    "In general, I don't think the management overhead and licensing cost associated with TFS is worthwhile if all you're doing is using source control. To get value from TFS, at a minimum you need to be using team build, and possibly other stuff as well, such as the SharePoint integration. If that's all you need, then SVN with Tortoise would be my first choice. If you want to add build automation later, you can do this with CruiseControl (is it still called that?), JetBrains, etc. For a long time I thought that Red Gate's claims about "bridging the SSMS–VS divide" were a load of hot air, since in my experience anyone who knew what they were doing was using Visual Studio, in particular SSDT and its predecessors. However, on a recent client I was putting in source control for the first time, and I discovered that the "divide" really does exist. That client has ended up using SVN with Red Gate SQL Source Control, with no build automation, but with scope to add it in the future." – Gavin Campbell

    "I think putting the DB under source control is a great idea. I have issues with the earlier versions of SQL Source Control in that they provided little help in versioning the DB. I think the latest version merges SQL Compare and SQL Source Control together, which is how it should have been all along. Sure, I have the DB scripts in SVN, but I can't automate DB builds and changes without more tools. Frankly, I'm surprised databases don't have some sort of versioning built into them." – Nick Portelli

    "Source control has been immensely useful and saved me from a lot of rework on more than one occasion. I have learned that you have to be extremely careful checking in data. Our system is internal only, so during the system production run once a week, if there is a problem that I can fix easily (for example, a control table points to a file in the wrong environment), I'll do it directly in production so the run can continue as soon as possible, since we have a specified time window. We do full test runs to minimize this, but it has come up once or twice. We use Red Gate source control to "push" from the test environment to the production environment. There have been a couple of occasions where the test environment with the wrong setting was pushed back over the production environment because the change was made only in production. Gotta keep an eye on that." – Alan Dykes

    "Goodness, is it manual. And it can be extremely painful at times. Not only are we running thin, we are constrained on the tools we can get ($$ must mean free). Certainly no excuse, and a great opportunity to improve my skills by learning new things. But... getting buy-in on a proven process or methodology is hard, takes time, and diverts us from development. If SQL Source Control is easy to use and proven, oh boy could you get some serious fans around here! Seriously though, as the "accidental DBA" of this shop, any new ideas / easy-to-implement tools can make a world of difference in productivity and, most importantly, accuracy. Manual = bad. :)" – John Hennesey (who left his email address)

    "The one thing I would love to know more about is the unique challenges of working with databases as source code - you can store scripts, but are they written as deployment scripts with all the logic about how to apply them to an existing DB? Where is that baseline DB? Where's the data? How does a team share the data and the code? It's a real challenge." – Merrill Aldrich

    Congratulations to the five of you. Red Gate will be in touch with you soon about your free licenses. Thank you to all those that responded. And again, go and check out all the responses – those above are only a small proportion of what is a very interesting comment thread. @Jamiet
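    A couple of the responses above come down to the same practical question: how do you get database objects into files that a source control system can track? As a rough, hedged illustration of one way to do that, the sketch below uses SQL Server Management Objects (SMO) to script each user table out to its own .sql file, ready to be committed to SVN, Git, or TFS. The server name, database name, and output folder are placeholders, and a real job would also cover views, procedures, and the rest - this is a sketch, not the approach any of the respondents actually use.

        using System;
        using System.IO;
        using System.Linq;
        using Microsoft.SqlServer.Management.Smo;

        class ScriptTablesToDisk
        {
            static void Main()
            {
                // Illustrative names only - point these at your own instance, database, and working copy
                var server = new Server("localhost");
                var database = server.Databases["AdventureWorks"];
                var outputFolder = @"C:\src\MyDatabase\Tables";
                Directory.CreateDirectory(outputFolder);

                var scripter = new Scripter(server)
                {
                    Options = { ScriptDrops = false, WithDependencies = false, Indexes = true, DriAll = true }
                };

                foreach (Table table in database.Tables)
                {
                    if (table.IsSystemObject) continue;

                    // Scripter.Script returns the CREATE batches for the object as a StringCollection
                    var batches = scripter.Script(new[] { table.Urn }).Cast<string>();
                    var fileName = Path.Combine(outputFolder, table.Schema + "." + table.Name + ".sql");
                    File.WriteAllText(fileName, string.Join(Environment.NewLine + "GO" + Environment.NewLine, batches));
                }
            }
        }

    Run on a schedule or as part of a build, something along these lines gives you plain .sql files whose diffs are easy to review, which is most of what the commenters above are after; tools like Red Gate SQL Source Control or SSDT do the same job with far less ceremony.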

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    - by Elton Stoneman
    This is the second in the IPASBR series; see also Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service. Part 2 is nice and easy. In Part 1 we exposed our service over the Azure Service Bus Relay using the netTcpRelayBinding and verified we could set up our network to listen for relayed messages. Assuming we want to consume that service in .NET from an environment which is fairly unrestricted for us, but quite restricted for attackers, we can use netTcpRelay and shared secret authentication.

    Pattern applicability

    This is a good fit for scenarios where:

    - the consumer can run .NET in full trust
    - the environment does not restrict use of external DLLs
    - the runtime environment is secure enough to keep shared secrets
    - the service does not need to know who is consuming it
    - the service does not need to know who the end-user is

    So, for example, the consumer is an ASP.NET website sitting in a cloud VM or Azure worker role, where we can keep the shared secret in web.config and we don't need to flow any identity through to the on-premise service. The service doesn't care who the consumer or end-user is - say it's a reference data service that provides a list of vehicle manufacturers. Provided you can authenticate with ACS and have access to the Service Bus endpoint, you can use the service and it doesn't care who you are. In this post, we'll consume the service from Part 1 in ASP.NET using netTcpRelay. The code for Part 2 (+ Part 1) is on GitHub here: IPASBR Part 2.

    Authenticating and authorizing with ACS

    In this scenario the consumer is a server in a controlled environment, so we can use a shared secret to authenticate with ACS, assuming that there is governance around the environment and the codebase which will prevent the identity being compromised. From the provider's side, we will create a dedicated service identity for this consumer, so we can lock down its permissions. The provider controls the identity, so the consumer's rights can be revoked. We'll add a new service identity for the namespace in ACS, just as we did for the serviceProvider identity in Part 1. I've named the identity fullTrustConsumer. We then need to add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus (see Part 1 for a walkthrough creating Service Identities):

    - Issuer: Access Control Service
    - Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
    - Input claim value: fullTrustConsumer
    - Output claim type: net.windows.servicebus.action
    - Output claim value: Send

    This sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener or manage the namespace.

    Adding a Service Reference

    The Part 2 sample client code is ready to go, but if you want to replicate the steps, you're going to add a WSDL reference, add a reference to Microsoft.ServiceBus, and sort out the ServiceModel config. In Part 1 we exposed metadata for our service, so we can browse to the WSDL locally at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc?wsdl. If you add a Service Reference to that in a new project, you'll get a confused config section with a customBinding and a set of unrecognized policy assertions in the namespace http://schemas.microsoft.com/netservices/2009/05/servicebus/connect. If you NuGet the ASB package ("windowsazure.servicebus") first and add the service reference, you'll get the same messy config.
    Either way, the WSDL should have downloaded and you should have the proxy code generated. You can delete the customBinding entries and copy your config from the service's web.config (this is already done in the sample project in Sixeyed.Ipasbr.NetTcpClient), specifying details for the client:

        <client>
          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                    behaviorConfiguration="SharedSecret"
                    binding="netTcpRelayBinding"
                    contract="FormatService.IFormatService" />
        </client>
        <behaviors>
          <endpointBehaviors>
            <behavior name="SharedSecret">
              <transportClientEndpointBehavior credentialType="SharedSecret">
                <clientCredentials>
                  <sharedSecret issuerName="fullTrustConsumer"
                                issuerSecret="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>
                </clientCredentials>
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>

    The proxy is straight WCF territory, and the same client can run against Azure Service Bus through any relay binding, or directly to the local network service using any WCF binding - the contract is exactly the same. The code is simple, standard WCF stuff:

        using (var client = new FormatService.FormatServiceClient())
        {
            outputString = client.ReverseString(inputString);
        }

    Running the sample

    First, update Solution Items\AzureConnectionDetails.xml with your Service Bus namespace and your service identity credentials for the netTcpClient and the provider:

        <!-- ACS credentials for the full trust consumer (Part2): -->
        <netTcpClient identityName="fullTrustConsumer"
                      symmetricKey="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>

    Then rebuild the solution and verify the unit tests work. If they're green, your service is listening through Azure. Check out the client by navigating to http://localhost:53835/Sixeyed.Ipasbr.NetTcpClient. Enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure.

    Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").
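    If you prefer to keep the relay details out of config, the same client can be wired up entirely in code. The sketch below is a minimal, assumption-laden version: it declares a stand-in for the Part 1 contract (the real one lives in the Sixeyed.Ipasbr solution), uses a placeholder issuer secret rather than the real key, and supplies the shared secret through a TokenProvider, which is how newer Microsoft.ServiceBus SDKs express the credentialType="SharedSecret" behavior shown above.

        using System;
        using System.ServiceModel;
        using Microsoft.ServiceBus;

        // Stand-in for the Part 1 contract - the generated proxy targets the real FormatService.IFormatService
        [ServiceContract]
        public interface IFormatService
        {
            [OperationContract]
            string ReverseString(string input);
        }

        class RelayClient
        {
            static void Main()
            {
                // Same relay address as the <endpoint address="sb://..."> entry in config
                var address = new EndpointAddress(
                    ServiceBusEnvironment.CreateServiceUri("sb", "sixeyed-ipasbr", "net"));

                var factory = new ChannelFactory<IFormatService>(new NetTcpRelayBinding(), address);

                // Equivalent of the SharedSecret endpoint behavior in web.config;
                // the issuer secret here is a placeholder, not a real key
                factory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
                {
                    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                        "fullTrustConsumer", "your-issuer-secret")
                });

                var channel = factory.CreateChannel();
                Console.WriteLine(channel.ReverseString("hello relay"));

                ((IClientChannel)channel).Close();
                factory.Close();
            }
        }

    Functionally this does the same job as the config-driven proxy; whether you prefer it is mostly a question of where you want the namespace and secret to live.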

    Read the article
