Search Results

Search found 42321 results on 1693 pages for 'sql reporting services 05'.


  • Why don't I have a "Web Service References" menu item in excel/VBA?

    - by Draemon
    I'm trying to consume a SOAP web service from Excel. Now according to this article (and confirmed by other articles and MSDN), if I do the following: (1) install the Web Services Toolkit (I've installed v2.01), (2) install SOAP Toolkit 3.0, and (3) add a reference to the Microsoft SOAP Type Library (I've tried v3.0 and an older one), I should get a "Web Service References" menu item in the Tools menu, but I don't. I've also tried adding every reference that seemed to have anything to do with SOAP or XML, but it hasn't helped. Any ideas?

    Read the article

  • Windows 2008 R2 AWS CloudFormation Elastic beanstalk configuration

    - by Webmonger
    I'm looking for some configuration advice. I have a need for a load-balanced Windows environment with media shared across all instances that are hosting the app. The best explanation I can give is that there will be multiple Windows 2008 servers with IIS hosting the app, going through an ELB to load balance. Users must be able to upload content (images, video etc...) to the site that will be hosted. When a user uploads media it needs to be kept in a shared location so all Windows IIS instances can access the files; I can't host the files on S3 because of the app architecture, so they need to be in a place where all IIS servers will have access. In addition, I need to run an update on each IIS server instance that updates a local memory cache when SQL data is updated. I was thinking of a configuration like this: [ELB] - [Win 2008 IIS (multiple servers)] - [Win 2008 File & SQL Server (possibly RDS?)] Does this configuration make sense? If not, could you provide an idea of how I should configure it? Thanks in advance.

    Read the article

  • Exchange 2010 + Sharepoint on single server

    - by ct2k7
    I seem to have the least ideal server setup, so here we go. Situation: one server (2008 Std) with Exchange 2010 (CAS + HUB) and SharePoint Services 3.0 installed on it. Mission: to get OWA working at mail.systems.com and SharePoint at intranet.systems.net. Execution: you tell me how, because I do not know where to start :( Shamil

    Read the article

  • Is it possible to install custom software on Amazon EC2

    - by quickquestioner
    I'm trudging through the Amazon docs for a quick answer, but while I'm looking I thought it wouldn't hurt to ask here. My client uses custom software that uses (wait for it) Microsoft Excel to store data, as opposed to an RDBMS. Either way, their server is outdated and they are interested in using Amazon's cloud services, but would installing this software be possible, or am I barking up the wrong tree? Links are welcome! Thanks for your help!

    Read the article

  • Run Tomcat Service as Different User on Windows 7

    - by sdoca
    I have installed Tomcat6 using the 32-bit/64-bit Windows Service Installer download version. In the setup instructions, it is recommended that "For optimal security, the service should be run as a separate user, with reduced permissions". I created a new local/standard user (Tomcat) to run the service. The Tomcat service is listed in my list of Services and it's running under my user profile. However, I can't figure out how to set/change which user to start it as.

    Read the article

  • Monitoring the status of accounts with IT Service providers (ISP, Domain Registrar etc.)

    - by Sholom
    Hi All, Short version: You have software that tells you when your server's power outlet is down. It monitors multiple servers from one management console, alerts you when something is wrong, etc. Does anyone know of software that will let me take the same approach to monitor whether the money outlet (the bill!) is down (not paid) to my IT service providers (ISP, domain registrar, MX backup service etc.)? I need a top-down, centrally managed service that is capable of sending out alerts, just like the one that monitors my own Exchange server etc. I don't mind if I have to manually enter every payment. Long version: Our very likable but absent-minded bookkeeper keeps neglecting to pay our IT vendors on time. Just this past week our internet service was disconnected. The same could happen to many other mission-critical accounts (domain registrar, backup MX, anti-virus license, HackerSafe (McAfee Secure) service and even an 800 number, to name a few). As the sysadmin, I monitor my servers to make sure they are plugged into the power outlet. I believe I should also monitor my services to make sure they are plugged into their money outlet. To compound the problem, when the power goes out someone else will likely notice and notify me. But if a bill is not paid, no one will ever notice until service is lost. Lost as in losing our domain name, which would cause a lot more damage than the power failing on our server. [Solution] = [Doesn't work because]: Retrain the bookkeeper = wishful thinking. Notify my manager = already have (via email); protects me, does not solve the problem. Fire the bookkeeper = what makes you so sure the next one will never forget? Bottom line: humans are humans and sooner or later something critical will be royally messed up. We need to partner with a machine to help us out here. Anybody have the same problem? What software/solution do you use? I would like software that emails me when a bill is past due, just like I get an email when the power outlet fails. Anyone hear of anything like that? Thanks

    Read the article

  • windows server 2003 cannot accept connections

    - by Seb
    Hi everyone, I am running a Windows Server 2003 OS and am noticing that no one is able to connect to the machine through Remote Desktop. I have gone through the Terminal Services Configuration to make sure that the RDP-Tcp connection is enabled, and I've checked that the server is listening on port 3389. Are there any other options? I've also tried to ping our host server with no results. Thanks in advance.

    Read the article

  • How can I find out which service is keeping Vista from restarting?

    - by Jeff
    Running Vista on my Toshiba laptop for several years now. Recently, I noticed it will not restart - I have to power cycle it. I have enabled Verbose Status messages, so I know it's stuck at "Stopping services". Is there a way to figure out which service is not stopping? I'm hoping for some kind of log like the bootlog. I've tried looking through the event viewer - no luck there.

    Read the article

  • Windows 7: How to stop/start service from commandline (like services.msc does it)?

    - by john
    I have developed a program in Java that uses a local SQL Server instance to store its data. On some installations the SQL Server instance is not running sometimes. Users can fix this problem by manually starting the SQL Server instance (via services.msc). I am thinking about automating this task: the software would check if the database server is reachable and, if not, try to (re)start it. The problem is that under the same user account the services can be stopped/started via services.msc (without any UAC prompt), but not via the (non-elevated) command line. The operating system seems to treat services.msc differently:

        c:\>sc start mssql$db1
        [SC] StartService: OpenService FEHLER 5: Zugriff verweigert (Access denied)

        c:\>net start mssql$db1
        Systemfehler 5 aufgetreten. (System error 5 has occurred.)
        Zugriff verweigert (Access denied)

    So the question is: how can I stop/start the service from a Java program / the command line without having my users use services.msc (preferably via on-board tools)?
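    One possible direction (an assumption on my part, not something from the question): since sc.exe and net.exe fail for a non-elevated user, ship a tiny helper executable whose manifest requests elevation (requestedExecutionLevel level="requireAdministrator") and have the Java application launch it when the database is unreachable; the user then only has to confirm a UAC prompt. A minimal C# sketch of such a helper follows; the name StartDbService and the default service name are made up, and it will still fail with access denied if run without elevation.

        // StartDbService.cs - hypothetical elevated helper. Requires a reference to
        // System.ServiceProcess.dll and an application manifest requesting elevation.
        using System;
        using System.ServiceProcess;

        class StartDbService
        {
            static int Main(string[] args)
            {
                string serviceName = args.Length > 0 ? args[0] : "MSSQL$DB1"; // assumed default
                try
                {
                    var sc = new ServiceController(serviceName);
                    if (sc.Status != ServiceControllerStatus.Running)
                    {
                        sc.Start();
                        // Wait up to 30 seconds for the service to report "Running".
                        sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                    }
                    return 0;
                }
                catch (Exception ex)
                {
                    // Without elevation this is the same error 5 / access denied as sc.exe and net.exe.
                    Console.Error.WriteLine(ex.Message);
                    return 1;
                }
            }
        }

    The Java side would then just Runtime.getRuntime().exec() this helper and check its exit code.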

    Read the article

  • System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse request failed with HTTP status 40

    - by John Galt
    I am trying to make some enhancements to a production web app. After quite a bit of unit testing on my WinXP IIS 5.1 development machine, everything works on my localhost so I used the Visual Studio 2008 PUBLISH dialog on my Dev PC to push the following projects to a staging server: the primary web app the "primary" webservice (the home page tries to invoke this WS) a "secondary" webservice (not yet a problem because home page does not invoke this WS) I get the following when I try to browse to the home page of the web app typing this into my browser: link text Server Error in '/zVersion2' Application. The request failed with HTTP status 404: Not Found. Description: An unhandled exception occurred during the execution of the current web request.Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Net.WebException: The request failed with HTTP status 404: Not Found. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [WebException: The request failed with HTTP status 404: Not Found.] System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431289 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204 ProxyZipeeeService.WSZipeee.Zipeee.GetMessageByType(Int32 iMsgType) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\ProxyZipeeeService\ProxyZipeeeService\Web References\WSZipeee\Reference.vb:2168 Zipeee.frmZipeee.LoadMessage() in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:43 Zipeee.frmZipeee.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:33 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 Version Information: Microsoft .NET Framework Version:2.0.50727.3607; ASP.NET Version:2.0.50727.3082 Here is a bit of the corresponding source code: Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee Dim dsStandardMsg As DataSet Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load If Not Page.IsPostBack Then LoadMessage() End If End Sub Private Sub LoadMessage() Dim iCnt As Integer Dim iValue As Integer dsStandardMsg = wsZipeee.GetMessageByType(BizConstants.MsgType.Standard) End Sub I suspect I may have configured things incorrectly on the staging server. The staging server is Win Server 2003 ServicePack 2 running IIS 6.0. When I published the primary site and the 2 webservices on the staging server called MOJITO I created the physical directories for each on the D drive. Then using INETMGR, I configured the following virtual directories: zVersion2 zVersion2wsSQL zVersion2wsEmergency All of the above are configured to use a new application pool I setup and named zVersion2aspNet20. The default web site for this machine MOJITO is configured to use ASP.NET 1.1 and the IP address is set to (All Unassigned). 
    The production versions of the latter two webservices run on the MOJITO machine (named ZipeeeService and EmergencyService respectively). Can my staging versions of the above webservices (named zVersion2wsSQL and zVersion2wsEmergency respectively) co-exist on the same web server with the same IP address? Please note that when I test the zVersion2wsSQL webservice independently (from INETMGR, right-click and Browse) it works as expected (i.e. presenting all the methods of the webservice), like this snippet: GetMessageByType MessageName="Get_x0020_Message_x0020_By_x0020_Type" I can test this webmethod by clicking on it, and it presents the Test dialog (because it takes a simple datatype and I am invoking it on localhost, i.e. MOJITO): **Get Message By Type** **Test** To test the operation using the HTTP POST protocol, click the 'Invoke' button. Parameter Value iMsgType: _______ [INVOKE button] SOAP 1.1 ....etc. I fear I may have rambled with too much information, so I will stop, but I hope someone can help me, as I cannot understand why this request results in a "not found". Thanks.
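    One thing worth ruling out here (my suggestion, not something stated in the post): a 404 thrown from SoapHttpClientProtocol.ReadResponse usually means the proxy is calling a URL that does not exist on the target server, often the localhost address that was baked in when the web reference was added on the dev PC rather than the staging virtual directory. A minimal C# check (the original project is VB.NET; the .asmx name and staging URL below are assumptions):

        // Inspect and, if necessary, override the endpoint the generated proxy will call.
        var ws = new ProxyZipeeeService.WSZipeee.Zipeee();
        Console.WriteLine(ws.Url); // the URL captured when the web reference was added

        // Point the proxy at the staging virtual directory instead (assumed URL).
        ws.Url = "http://mojito/zVersion2wsSQL/Zipeee.asmx";
        var ds = ws.GetMessageByType(1); // should no longer 404 if the URL was the problem

    Setting the web reference's "URL Behavior" to Dynamic (so the URL comes from web.config) achieves the same thing without code.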

    Read the article

  • Opening an SQL CE file at runtime with Entity Framework 4

    - by David Veeneman
    I am getting started with Entity Framework 4, and I am creating a demo app as a learning exercise. The app is a simple documentation builder, and it uses a SQL CE store. Each documentation project has its own SQL CE data file, and the user opens one of these files to work on a project. The EDM is very simple. A documentation project is comprised of a list of subjects, each of which has a title, a description, and zero or more notes. So, my entities are Subject, which contains Title and Text properties, and Note, which has Title and Text properties. There is a one-to-many association from Subject to Note. I am trying to figure out how to open an SQL CE data file. A data file must match the schema of the SQL CE database created by EF4's Create Database Wizard, and I will implement a New File use case elsewhere in the app to meet that requirement. Right now, I am just trying to get an existing data file open in the app. I have reproduced my existing 'Open File' code below. I have set it up as a static service class called FileServices. The code isn't working quite yet, but there is enough to show what I am trying to do. I am trying to hold the ObjectContext open for entity object updates, disposing it when the file is closed. So, here is my question: Am I on the right track? What do I need to change to make this code work with EF4? Is there an example of how to do this properly? Thanks for your help. My existing code:

        public static class FileServices
        {
            #region Private Fields

            // Member variables
            private static EntityConnection m_EntityConnection;
            private static ObjectContext m_ObjectContext;

            #endregion

            #region Service Methods

            /// <summary>
            /// Opens an SQL CE database file.
            /// </summary>
            /// <param name="filePath">The path to the SQL CE file to open.</param>
            /// <param name="viewModel">The main window view model.</param>
            public static void OpenSqlCeFile(string filePath, MainWindowViewModel viewModel)
            {
                // Configure an SQL CE connection string
                var sqlCeConnectionString = string.Format("Data Source={0}", filePath);

                // Configure an EDM connection string
                var builder = new EntityConnectionStringBuilder();
                builder.Metadata = "res://*/EF4Model.csdl|res://*/EF4Model.ssdl|res://*/EF4Model.msl";
                builder.Provider = "System.Data.SqlServerCe";
                builder.ProviderConnectionString = sqlCeConnectionString;
                var entityConnectionString = builder.ToString();

                // Connect to the model
                m_EntityConnection = new EntityConnection(entityConnectionString);
                m_EntityConnection.Open();

                // Create an object context
                m_ObjectContext = new Model1Container();

                // Get all Subject data
                IQueryable<Subject> subjects = from s in Subjects
                                               orderby s.Title
                                               select s;

                // Set view model data property
                viewModel.Subjects = new ObservableCollection<Subject>(subjects);
            }

            /// <summary>
            /// Closes an SQL CE database file.
            /// </summary>
            public static void CloseSqlCeFile()
            {
                m_EntityConnection.Close();
                m_ObjectContext.Dispose();
            }

            #endregion
        }
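    A minimal sketch of the two changes that usually matter in this situation (assuming the EF4-generated context exposes an ObjectSet<Subject> named Subjects and has the standard generated constructor overloads; both are assumptions on my part, not confirmed by the post):

        // 1. Hand the opened EntityConnection to the generated context; the parameterless
        //    Model1Container() constructor ignores it and reads the connection string from app.config.
        m_EntityConnection = new EntityConnection(entityConnectionString);
        m_EntityConnection.Open();
        var context = new Model1Container(m_EntityConnection);
        m_ObjectContext = context;

        // 2. Query through the context's entity set rather than a bare "Subjects" identifier.
        IQueryable<Subject> subjects = from s in context.Subjects
                                       orderby s.Title
                                       select s;
        viewModel.Subjects = new ObservableCollection<Subject>(subjects);

    Holding the context open for the lifetime of the file and disposing it when the file closes (as in CloseSqlCeFile) is a workable pattern for a single-user desktop app.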

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – Stage 4: Automated Deployment

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users". There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready! If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

    You and Your Team

    It's a cliché in the DevOps community that "It's not all about processes and tools, really it's all about a culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group". Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do or, if you're the DBA yourself, get the dev team up-to-speed with your plans,
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "What we want, to do continuous delivery properly" and "What we're currently stuck with". Some of these differences are:

    - What we want: Each developer with their own dedicated database environment. What we've got: A single shared "development" environment, used by everyone at once.
    - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we've got: In fact if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: Separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
    - What we want: A proper pre-production (or "staging") box that matches production as closely as possible. What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we've got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production. The environment you use to test the production release process,
    - Production.

    * A note on the use of the word "automatic" – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you're interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live (a rough sketch of what such a gate might look like appears at the end of this article). And anything in between of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?"

    NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?"
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't – and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move in to a more automated database deployment process, you're far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, things like:

    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "Branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to "best practice". It's a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly in the areas of "Getting the team ready for the changes that are coming" and "Planning out your pipeline, environments, patterns and practices for development", though there will be more detail, depending on where you're coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
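    As referenced above, here is a rough, hypothetical sketch of the semi-automated gate described in "The Deployment Process" section: the generated upgrade script is applied to pre-production automatically, and production is only touched after an explicit human approval. It is only an illustration of the shape of such a gate; the server names, database name, script path and the use of sqlcmd are all assumptions, not part of the article.

        // DeployGate.cs - illustrative semi-automated deployment gate (names and paths are made up).
        using System;
        using System.Diagnostics;

        class DeployGate
        {
            // Runs a SQL script with sqlcmd; -b makes sqlcmd return a non-zero exit code on error.
            static int RunScript(string server, string database, string scriptPath)
            {
                var psi = new ProcessStartInfo("sqlcmd",
                    string.Format("-S {0} -d {1} -i \"{2}\" -b", server, database, scriptPath))
                {
                    UseShellExecute = false
                };
                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                    return process.ExitCode;
                }
            }

            static void Main()
            {
                const string script = @"C:\releases\upgrade-v42.sql"; // assumed output of the schema compare step

                // Automated part: apply the release to pre-production and stop on any error.
                if (RunScript("PREPROD01", "AppDb", script) != 0)
                {
                    Console.WriteLine("Pre-production upgrade failed - stopping.");
                    return;
                }

                // Manual gate: a human explicitly approves the production run.
                Console.Write("Pre-production upgrade succeeded. Type DEPLOY to run the same script on production: ");
                if (Console.ReadLine() != "DEPLOY")
                {
                    Console.WriteLine("Production deployment not approved.");
                    return;
                }

                Console.WriteLine(RunScript("PROD01", "AppDb", script) == 0
                    ? "Production upgrade succeeded."
                    : "Production upgrade failed - fall back to your rollback/recovery plan.");
            }
        }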

    Read the article

  • WinQual: Why would WER not accept code-signing certificates?

    - by Ian Boyd
    In 2005 I tried to establish a WinQual account with Microsoft, so I could pick up our (if any) crash dump files submitted automatically through Windows Error Reporting (WER). I was not allowed to have my crash dumps, because I don't have a Verisign certificate. Instead I have a cheaper one, generated by a Verisign subsidiary: Thawte. The method by which you join is: you digitally sign a sample exe they provide. This proves that you are the same signer that signed the apps that they got crash dumps from in the wild. Cryptographically, the private key is needed to generate a digital signature on an executable. Only the holder of that private key can create a signature for the matching public key. It doesn't matter who generated that private key. That includes certificates generated by: self-signing, Wells Fargo, DigiCert, SecureTrust, Trustware, QuoVadis, GoDaddy, Entrust, Cybertrust, GeoTrust, GlobalSign, Comodo, Thawte, Verisign. Yet Microsoft's WinQual only accepts digital certificates generated by Verisign. Not even Verisign's subsidiaries are good enough (Thawte). Can anyone think of any technical, legal or ethical reason why Microsoft doesn't want to accept code-signing certificates? The WinQual site says: "Why Is a Digital Certificate Required for Winqual Membership? A digital certificate helps protect your company from individuals who seek to impersonate members of your staff or who would otherwise commit acts of fraud against your company. Using a digital certificate enables proof of an identity for a user or an organization." Is a Thawte digital certificate somehow not secure?

    Two years later, I sent a reminder notice to WinQual that I've been waiting to be able to get at my crash dumps. The response from the WinQual team was: "Hello, Thanks for the reminder. We have notified the appropriate people that this is still a request." In 2008 I asked this question in a Microsoft support forum, and the response was: "We are only setup to accept VeriSign Certificates at this point. We have not had an overwhelming demand to support other types of certificates." What can it possibly mean to not be "setup" to accept other kinds of certificates? If the thumbprint of the key that signed the WinQual.exe test app is the same as the thumbprint that signed the executable whose crash dump you got in the wild: it is proven - they are my crash dumps, give them to me. And it's not like there's a special API to check if a Verisign digital signature is valid, as opposed to all other digital signatures. A valid signature is valid no matter who generated the key. Microsoft is free to not trust the signer, but that's not the same as identity.

    So that is my question: can anyone think of any practical reason why WinQual isn't set up to support digital signatures? One person theorized that the answer is that they're just lazy: "Not that I know but I would assume that the team running the WinQual system is a live team and not a dev team - as in, personality and skillset geared towards maintenance of existing systems. I could be wrong though." They don't want to do work to change it. But can anyone think of anything that would need to be changed? It's the same logic no matter what generated the key: "does the thumbprint match". What am I missing?

    Read the article

  • php web services not getting data from iphone application

    - by user317192
    Hi, I am connecting to a PHP web service from my iPhone application. I am doing a simple thing, i.e. getting user input for username and password in text fields on the iPhone form and sending it to the PHP POST-request web service. At the web service end I receive nothing other than blank fields, which are inserted into the MySQL database. The code for the sample web service is:

        *********** SAMPLE CODE FOR WEB SERVICES (PHP) ***********
        <?php
        mysql_select_db("eventsfast", $con);

        $username = $_REQUEST['username'];
        $password = $_REQUEST['password'];
        echo $username;
        echo $password;

        $data = $_REQUEST;
        $fp = fopen("log.txt", "w");
        fwrite($fp, $data['username']);
        fwrite($fp, $data['password']);

        $sql = "INSERT INTO users(username,password) VALUES('{$username}','{$password}')";
        if (!mysql_query($sql, $con)) {
            die('Error:' . mysql_error());
        }
        echo json_encode("1 record added to users table");
        mysql_close($con);
        echo "test";
        ?>

        ************** IPHONE EVENT CODE **************
        #import "postdatawithphpViewController.h"

        @implementation postdatawithphpViewController
        @synthesize userName, password;

        - (IBAction)postdata:(id)sender {
            NSLog(userName.text);
            NSLog(password.text);

            NSString *dataTOB = [userName.text stringByAppendingString:password.text];
            NSLog(dataTOB);

            NSData *postData = [dataTOB dataUsingEncoding:NSUTF8StringEncoding allowLossyConversion:YES];
            NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];
            NSLog(postLength);

            NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
            NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"http://localhost:8888/write.php"]];
            [request setURL:url];
            [request setHTTPMethod:@"POST"];
            [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
            [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
            [request setHTTPBody:postData];

            NSURLResponse *response;
            NSError *error;
            [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
            if (error == nil)
                NSLog(@"Error is nil");
            else
                NSLog(@"Error is not nil");
            NSLog(@"success!");
        }

    Please help.

    Read the article

  • Does GoDaddy support RESTful services via WCF?

    - by Amir Naor
    After deploying a WCF RESTful service that I created using the REST Starter Kit, I got several errors that I managed to solve following this post: http://www.edoverip.com/edoverip/index.php/2009/01/30/running-wcf-on-godaddy Now I'm stuck with this error: "IIS specified authentication schemes 'Basic, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used." I saw that others got to this point without a solution. GoDaddy support doesn't know anything about it. Is it possible at all? Are there any web hosting services that you know of that support this?
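    For what it's worth, the error itself points at the usual general WCF fix (this is not a confirmed GoDaddy solution): since shared hosts rarely let you turn off one of the IIS authentication schemes, the binding is normally pinned to exactly one of them instead. A minimal C# sketch of the binding side follows; the contract, address and chosen scheme are placeholder assumptions, and on GoDaddy the same settings would go in web.config rather than a self-host.

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        interface IPing
        {
            [OperationContract, WebGet]
            string Ping();
        }

        class PingService : IPing
        {
            public string Ping() { return "pong"; }
        }

        class Program
        {
            static void Main()
            {
                // Pin the REST binding to a single authentication scheme so it no longer
                // clashes with IIS offering both Basic and Anonymous.
                var binding = new WebHttpBinding();
                binding.Security.Mode = WebHttpSecurityMode.TransportCredentialOnly;
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

                var host = new WebServiceHost(typeof(PingService), new Uri("http://localhost:8080/ping"));
                host.AddServiceEndpoint(typeof(IPing), binding, "");
                host.Open();
                Console.WriteLine("Listening - press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }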

    Read the article

  • ISA Server 2006 "Global denied packets rate limit"

    - by lofi42
    Does anyone know how to change the "global denied packets rate limit" on an ISA Server 2006 (SP1) on Windows 2003? We have a strange piece of software which issues multiple SQL queries and reaches this limit, so the ISA server blocks the traffic. The flood protection option is already disabled on the ISA. SQLDB <= ISA <= SQL-Client

    Read the article

  • should I use Entity Framework instead of raw ADO.NET

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system -- they must coexist against the same database for years to come. At first I thought EF was the way to go since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables) I am seriously questioning the use of EF. In the current system I often need to do fine-grained performance tuning of the queries, which I can do because I have 100% control of the generated SQL. But it seems that in EF so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria are passed between client and server as criteria objects, it seems like I could build queries just as easily without LINQ. I understand that in WCF RIA there is query projection (or something like that) where I can do client-side LINQ which moves to the server before translation into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?
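    On the "losing control of the generated SQL" point, one small mitigation (a general EF4 illustration using an invented entity, not something from the post): a criteria-driven LINQ query can be inspected before it runs, because ObjectQuery<T>.ToTraceString() returns the store SQL that EF will issue, which can be logged or reviewed during tuning.

        using System;
        using System.Data.Objects;
        using System.Linq;

        // Stand-in for an entity that is mapped in the EDM (invented for this sketch).
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CriteriaQueries
        {
            public static void DumpCustomerQuery(ObjectContext context, string nameFilter, int maxRows)
            {
                IQueryable<Customer> query = context.CreateObjectSet<Customer>()
                                                    .Where(c => c.Name.StartsWith(nameFilter))
                                                    .OrderBy(c => c.Name)
                                                    .Take(maxRows);

                // LINQ to Entities queries are ObjectQuery<T> instances underneath, so the
                // generated store SQL can be examined without executing the query.
                var objectQuery = (ObjectQuery<Customer>)query;
                Console.WriteLine(objectQuery.ToTraceString());
            }
        }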

    Read the article

  • Best way to learn iphone audio queue services, step by step tutorial

    - by optician
    Hi Everyone, I'm trying to learn how to handle audio at a fairly low level with Audio Queue Services. I have been programming in memory-managed languages for quite a while, and have just completed the C programming tutorial by VTC (2007). This has left me comfortable with my understanding of pointers and memory allocation, but the Apple documentation still leaves me wanting a simpler implementation and explanation. Maybe I need to learn Objective-C and Cocoa better. I have heard that this book is good: Cocoa(R) Programming for Mac(R) OS X (3rd Edition). Could someone suggest a learning path that is going to help me get a better understanding of working with audio on an iPhone? I want to be able to play MP3 files back and also alter their pitch as they are playing. I am prepared for the possibility that I may have to temporarily convert the MP3 files into PCM files to do things like that to them. Thanks everyone.

    Read the article

  • Editing Data in Child Window with RIA Services and Silverlight 4

    - by Rick Arthur
    Is it possible to edit data in a Silverlight child window when using RIA Services and Silverlight 4? It sounds like a simple enough question, but I have not been able to get any combination of scenarios to work. Simply put, I am viewing data in a grid that was populated through a DomainDataSource. Instead of editing the data on the same screen (this is the pattern that ALL of the Microsoft samples seem to use), I want to open a child window, edit the data there and return. Surely this is a common design pattern. If anyone knows of a sample out there that uses this pattern, a link would be much appreciated. Thanks, Rick Arthur
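    A rough sketch of one way this pattern is commonly wired up (all names here, including the grid, the ChildWindow class, the entity type and the DomainDataSource, are invented for illustration; this is not from a Microsoft sample): hand the selected entity to a ChildWindow, let the bound controls edit it in place, and submit or reject the changes through the DomainContext when the window closes.

        // Code-behind of the page hosting the grid (Silverlight 4 / RIA Services).
        private void EditButton_Click(object sender, RoutedEventArgs e)
        {
            var selected = (MyEntity)myGrid.SelectedItem;   // entity currently shown in the grid
            var dialog = new EditEntityWindow();            // a ChildWindow with TextBoxes bound to the entity
            dialog.DataContext = selected;

            dialog.Closed += (s, args) =>
            {
                if (dialog.DialogResult == true)
                {
                    // The DomainContext behind the DomainDataSource has tracked the edits,
                    // so submitting pushes them back through RIA Services.
                    myDomainDataSource.DomainContext.SubmitChanges();
                }
                else
                {
                    myDomainDataSource.DomainContext.RejectChanges();
                }
            };

            dialog.Show();
        }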

    Read the article

  • Win CE 6.0 client using WCF Services

    - by Sean
    We have a Win CE 6.0 device that is required to consume services that will be provided using WCF. We are attempting to reduce bandwidth usage as much as possible and with a simple test we have found that using UDP instead of HTTP saved significant data usage. I understand there are limitations regarding WCF on .NET Compact Framework 3.5 devices and was curious what people thought would be the appropriate way forward. Would it make sense to develop a custom UDP binding, and would that work for both sides? Any feedback would be appreciated. Thanks.
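    The kind of quick comparison the post alludes to can be reproduced with a raw UdpClient, which is available on .NET Compact Framework 3.5; this is a hedged sketch only, since whether a full custom UDP transport binding is feasible for WCF on the Compact Framework is exactly the open question (host name, port and payload below are placeholders).

        using System;
        using System.Net.Sockets;
        using System.Text;

        class UdpProbe
        {
            static void Main()
            {
                // A compact, headerless payload - no HTTP headers, no SOAP envelope.
                byte[] payload = Encoding.UTF8.GetBytes("deviceId=42;reading=17.3");

                var client = new UdpClient();
                int sent = client.Send(payload, payload.Length, "server.example.com", 9050);
                client.Close();

                Console.WriteLine("Sent {0} bytes over UDP.", sent);
            }
        }

    Anything built this way also has to absorb UDP's lack of delivery guarantees, which is part of what a WCF binding would otherwise manage for you.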

    Read the article
