Search Results

Search found 4685 results on 188 pages for 'proper'.


  • SQL SERVER – Mirroring Configured Without Domain – The server network address TCP://SQLServerName:50

    - by pinaldave
    Regular readers of my blog will remember my friend who called me a few days ago with a very funny SQL problem: SQL SERVER – SSMS Query Command(s) completed successfully without ANY Results. This time, it did not take long before he called me up with another interesting problem. The issue he was facing was not that interesting, and was very specific to him, but he insisted that I share it with all of you. Let us understand his situation first.

    My friend is preparing for the DBA exam Exam 70-450: PRO: Designing, Optimizing and Maintaining a Database Server Infrastructure using Microsoft SQL Server 2008, and for that he was trying to set up database mirroring on his laptop. He had installed two different instances of SQL Server on his computer, and every time he started the mirroring it failed with a common error message:

    The server network address "TCP://SQLServer:5023" cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational. (Microsoft SQL Server, Error: 1418)

    Before he contacted me, he had searched online and checked my article written on this mirroring error. He tried all of the suggestions there, but they did not solve his problem. He called me at a reasonable time of late evening (unlike last time, which was midnight!). I tried all seven suggestions previously proposed in my article myself; none of them worked.

    While looking closely at the services, I noticed something very simple: he was running all the instances as 'Network Service'. In fact, his computer was a stand-alone computer. There was no network at all, no domain, and no other advanced network setup. Since his SQL Server was running on his local system only, I changed the service account from 'Network Service' to 'Local System', which required restarting the services. As this was his development laptop and not a production server, we restarted the services right away (do not restart services on a production server without proper planning). After changing the service log-on account to Local System, he attempted to reconfigure the mirroring, and it worked right away.

    Because production servers usually have proper domains configured and advanced network concepts implemented, I had never faced this type of problem before. My friend insisted that I post this solution for his situation, where no domain was configured and setting up mirroring was throwing an error. According to him, it is bound to help people who, like him, are preparing for certification on a single system.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • 5 Useful Wordpress Plugins For Google Adsense

    - by Jyoti
    Google AdSense has become the most popular online contextual advertising program, and proper custom integration with WordPress can help increase AdSense earnings. In this post we describe 5 useful WordPress plugins for Google AdSense. A few weeks ago we did a "10 Wordpress Plugins For Google Adsense" roundup. WordPress allows bloggers to easily [...]

    Read the article

  • pingback / trackback support for photo sharing?

    - by cboettig
    Is there a photo sharing service, such as Flickr or Picasa, that will collect the URLs of the locations where a photo has been posted on other blogs (or mentioned in tweets, etc.)? This could be accomplished by posting each photo as a blog entry on a site like WordPress, which would then automatically handle pingbacks, but of course a blog doesn't perform quite like a proper photo service. Perhaps this could be done with a private photo hosting server like Zenphoto by editing the PHP, but that seems rather involved. Does such a service already exist?
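
    For context, "handling pingbacks" boils down to a small XML-RPC exchange defined by the Pingback 1.0 spec. Here is a minimal client-side sketch in TypeScript; endpoint discovery (the X-Pingback header or link element on the target page) and URI escaping are omitted for brevity:

    ```typescript
    // POST an XML-RPC pingback.ping call to a page's advertised pingback endpoint.
    async function sendPingback(endpoint: string, sourceURI: string, targetURI: string): Promise<string> {
      const body =
        `<?xml version="1.0"?>` +
        `<methodCall><methodName>pingback.ping</methodName><params>` +
        `<param><value><string>${sourceURI}</string></value></param>` +
        `<param><value><string>${targetURI}</string></value></param>` +
        `</params></methodCall>`;
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "text/xml" },
        body,
      });
      return res.text(); // XML-RPC response; a <fault> element means it was rejected
    }
    ```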

    Read the article

  • Get Professional SEO Service From SEO Experts

    SEO services are imperative to online business. Business today has gone digital. With its power to enable international promotion, almost every brand is striving to establish an online presence. However, it's not easy to gain an online presence without proper SEO services.

    Read the article

  • How to Use and Manage Extensions to Safari 5

    - by Mysticgeek
    While there have been hacks to include extensions in Safari for some time now, Safari 5 offers proper support for them. Today we take a look at managing extensions in the latest version of Safari.

    Installation and Setup

    Download and install Safari 5 (link below). Make sure to download the installer that doesn't include QuickTime if you don't want it. Also, uncheck getting Apple updates and news in your email. Then decide whether you want to install Bonjour for Windows and whether Safari should update automatically. Once it's installed, launch Safari and select Show Menu Bar from the Settings Menu. Then go into Preferences \ Advanced and check the box Show Develop menu in the menu bar. Develop will now appear on the Menu Bar… click on it and select Enable Extensions.

    Using Extensions

    Now you can find and start using extensions (link below) that work with Safari 5. In this example we're installing PageSaver, which takes an image of what is showing in your browser. Click on the link for the extension you want to install. You'll get a confirmation asking if you want to open or save it; opening it will install it right away. Click Install in the dialog that asks if you're sure. Here we see the extension was successfully installed, and you can see the camera icon on the Toolbar. When you're on a portion of a webpage you want to take an image of, click the camera icon and the image will be saved to your Downloads folder. Then you can open it in a browser or image editor.

    Go into Preferences \ Extensions, and from here you can turn extensions on or off, uninstall them, or check for updates. If you're a Safari user, or thinking about trying it, you'll enjoy proper support for extensions in version 5. At the time of this writing we couldn't find any extensions on the Apple site, but you may want to keep an eye on it to see if they start listing them.

    Download Safari 5 for Mac & PC
    Safari Extensions

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage it for building enterprise mobile apps.

    Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps is increasing rapidly in day-to-day business. An enterprise can't survive without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give your business and IT strategy extra mileage. So organizations and mobile developers are looking for different strategies for meeting this demand and adopting different development approaches for their mobile apps. Some developers are adopting hybrid mobile app development platforms for delivering their products on multiple platforms with fast time-to-market. Others are adopting a Mobile Enterprise Application Platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration.

    The Challenges of Enterprise Mobility

    The real challenge of enterprise mobile apps is not creating the front-end, or even developing the front-end for multiple platforms. The most important part of an enterprise mobile app is exposing your enterprise data to mobile devices, and the real pain is that your business data may reside in many different systems, including legacy systems and ERP systems, deployed with a lot of security restrictions. Exposing data from on-premises servers is not an easy thing for most business organizations. Many organizations spend too much time on their front-end development strategy but lack a strategy for exposing business data to mobile apps from the back-end. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware is the key part of most enterprise mobile apps; multiple mobile platforms can then easily consume these REST services and other mobile back-end services.

    For some mobile apps, we can't predict the user base, especially for products where customers can grow at any time. And for today's mobile apps, fast time-to-market is critical, so spending too much time up front on a mobile app's scalability is not worthwhile. The real power of the Cloud is agility and on-demand scalability, where we can scale applications up and down very easily. It would be great if we could bring that power to mobile apps. Using the Cloud for mobile apps is a natural fit: we can use the Cloud as the storage for mobile apps and as the hosting mechanism for mobile back-end services, and enjoy a great level of on-demand scalability and operational agility. So a Cloud-based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy massive scalability for their mobile apps, provided by public cloud vendors such as Microsoft Windows Azure.

    Mobile Backend as a Service (MBaaS)

    We have discussed the key challenges of enterprise mobile apps and how we can leverage the Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and HTML5, which can be used as a backend for your mobile apps with the scalability of the Cloud.
    A typical MBaaS platform provides the following key features:

    - Cloud-based storage for your application data.
    - Automatic REST API services over the application data, for CRUD operations.
    - Native push notification services with massive scalability.
    - User management services for authenticating users.
    - User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
    - Scheduler services for periodically sending data to mobile devices.
    - Native SDKs for multiple mobile platforms such as Windows Phone, Windows Store, Android, Apple iOS, and HTML5, for easily and securely accessing the mobile services from mobile apps.

    Typically, an MBaaS platform provides native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS REST APIs can be used for integrating with enterprise backend systems, and the same mobile services can serve multiple platforms, so application logic is reused across them. Public cloud vendors are building mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great example of an MBaaS offering that leverages the Windows Azure cloud platform's PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS offering from the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms as well as on-premises servers.

    The Challenges of MBaaS Solutions

    If you are building your mobile apps against brand-new data storage, things are easy, since there are no integration challenges. But in most use cases you have to expose application data that is stored on on-premises servers, possibly behind VPNs and firewalls. Exposing this data to your MBaaS solution with proper security can be a big challenge, and the capability of your MBaaS vendor matters, because many enterprise mobile apps have to interact with legacy systems. So be very careful when choosing an MBaaS vendor. At the same time, you should have a proper strategy for mobilizing application data stored in on-premises legacy systems; here, your solution architecture and strategy are more important than platforms and tools.

    Windows Azure Mobile Services

    Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices, acting as a cloud backend for your mobile apps that provides global availability and reach. It provides storage services, user management with social network integration, push notification services, and scheduler services, along with native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules in your server-side scripts. In the previous section, we discussed some challenges of MBaaS solutions; you can leverage the Windows Azure cloud platform to solve many of these enterprise mobility challenges, and the entire Windows Azure platform can play a key role as the backend for your mobile apps.
    With Windows Azure, you can easily connect to your on-premises systems, which is a key requirement for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes it the de facto platform for enterprise mobility for enterprises who have been leveraging the Microsoft ecosystem for their applications and IT infrastructure. Windows Azure Mobile Services keeps evolving, and you can expect some exciting features in the near future.

    One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose from multiple storage options when creating a new mobile service instance. Say there were different storage providers, such as a SQL Server storage provider and a Table storage provider; developers could then pick their preferred provider when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.

    MBaaS Over MEAP

    Recently, many larger enterprises have adopted a Mobile Enterprise Application Platform (MEAP) for their mobile apps. I haven't worked on any production MEAP solution, but I hear that developers really struggle with MEAP in different ways. The learning curve for a proprietary MEAP platform is very high, and I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end, with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used to integrate with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems; these REST APIs can be hosted in the Cloud, where we can enjoy the power of the Cloud for our services. If you have REST APIs for your enterprise data, you can easily build mobile front-ends for multiple platforms.

    You can follow me on Twitter @shijucv
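
    To make the "automatic REST API over your data" idea concrete, here is a rough client-side sketch. The endpoint shape, table name, and auth header are hypothetical placeholders rather than any specific vendor's API; check your MBaaS vendor's documentation for the real contract:

    ```typescript
    const BASE_URL = "https://myapp.example-mbaas.net/tables/orders"; // hypothetical endpoint
    const APP_KEY = "my-application-key";                             // hypothetical credential

    // Create a row via the MBaaS-generated REST API for the "orders" table.
    async function createOrder(item: { product: string; quantity: number }) {
      const res = await fetch(BASE_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-Application-Key": APP_KEY, // vendor-specific auth header, hypothetical
        },
        body: JSON.stringify(item),
      });
      if (!res.ok) throw new Error(`create failed: ${res.status}`);
      return res.json(); // typically echoes the item back with a server-assigned id
    }
    ```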

    Read the article

  • The Sitemap Paradox

    - by Jeff Atwood
    We use a sitemap on Stack Overflow, but I have mixed feelings about it. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site.

    Based on our two years' experience with sitemaps, there's something fundamentally paradoxical about the sitemap: Sitemaps are intended for sites that are hard to crawl properly, yet if Google can't successfully crawl your site to find a link, but is able to find it in the sitemap, it gives the sitemap link no weight and will not index it! That's the sitemap paradox – if your site isn't being properly crawled (for whatever reason), using a sitemap will not help you!

    Google goes out of their way to make no sitemap guarantees:

    - "We cannot make any predictions or guarantees about when or if your URLs will be crawled or added to our index" (citation)
    - "We don't guarantee that we'll crawl or index all of your URLs. For example, we won't crawl or index image URLs contained in your Sitemap." (citation)
    - "submitting a Sitemap doesn't guarantee that all pages of your site will be crawled or included in our search results" (citation)

    Given that links found in sitemaps are merely recommendations, whereas links found on your own website proper are considered canonical, it seems the only logical thing to do is avoid having a sitemap and make damn sure that Google and any other search engine can properly spider your site using the plain old standard web pages everyone else sees. By the time you have done that, and are getting spidered nice and thoroughly so Google can see that your own site links to these pages, and would be willing to crawl the links – uh, why do we need a sitemap, again? The sitemap can be actively harmful, because it distracts you from ensuring that search engine spiders are able to successfully crawl your whole site. "Oh, it doesn't matter if the crawler can see it, we'll just slap those links in the sitemap!" Reality is quite the opposite in our experience.

    That seems more than a little ironic considering sitemaps were intended for sites that have a very deep collection of links or a complex UI that may be hard to spider. In our experience, the sitemap does not help, because if Google can't find the link on your site proper, it won't index it from the sitemap anyway. We've seen this proven time and time again with Stack Overflow questions. Am I wrong? Do sitemaps make sense, and are we somehow just using them incorrectly?
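
    For reference, the "associated metadata" the protocol carries per URL is small. Here is a sketch of generating a minimal sitemap (field names and values follow the sitemaps.org 0.9 protocol; the TypeScript is illustrative, not any particular library):

    ```typescript
    // Per-URL entry: loc is required; the rest are the optional crawler "hints".
    type SitemapEntry = {
      loc: string;
      lastmod?: string;     // W3C date, e.g. "2010-06-24"
      changefreq?: "always" | "hourly" | "daily" | "weekly" | "monthly" | "yearly" | "never";
      priority?: number;    // 0.0 to 1.0, relative only to other URLs on your own site
    };

    function renderSitemap(entries: SitemapEntry[]): string {
      const urls = entries.map((e) =>
        [
          "  <url>",
          `    <loc>${e.loc}</loc>`,
          e.lastmod ? `    <lastmod>${e.lastmod}</lastmod>` : "",
          e.changefreq ? `    <changefreq>${e.changefreq}</changefreq>` : "",
          e.priority !== undefined ? `    <priority>${e.priority}</priority>` : "",
          "  </url>",
        ].filter(Boolean).join("\n"),
      );
      return `<?xml version="1.0" encoding="UTF-8"?>\n` +
        `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
        urls.join("\n") + `\n</urlset>`;
    }

    // Example: renderSitemap([{ loc: "https://example.com/questions/1", changefreq: "daily" }])
    ```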

    Read the article

  • Internet Protocol Suite: Transmission Control Protocol (TCP) vs. User Datagram Protocol (UDP)

    How do we communicate over the Internet? How is data transferred from one machine to another? These activities rely on one of two core transport protocols. The Internet protocol suite includes the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols are used to send data between two network endpoints, but they have very distinct ways of transporting data from one endpoint to another.

    If reliability is the primary concern when transferring data between two network endpoints, then TCP is the proper choice. When a device sends data to another endpoint using TCP, it first establishes a connection between the two devices, which is maintained until the transmission has completed. This logical end-to-end connection ensures the reliability of the transmission: each piece of data is acknowledged by the receiver and retransmitted if lost. Because both devices must maintain and monitor the connection until the transmission has completed, TCP requires more resources to perform the transmission. An example of this type of direct communication can be seen when a teacher tells students to do their homework. The teacher is talking directly to the students in order to communicate that the homework needs to be done, and the students can ask questions to ensure that they have received the proper instructions for the assignment.

    UDP is a less resource-intensive approach to sending data between two network endpoints. When a device uses UDP to send data across a network, the data is broken up and packaged into datagrams with the destination address. The sending device then releases the datagrams to the network, but cannot ensure when, or even if, the receiving device will actually get them. The sending device depends on the other devices on the network to forward the datagrams to the destination in order to complete the transmission. This type of transmission is less resource intensive because no connection state must be maintained, but it should not be used when delivery must be guaranteed, since the sending device cannot confirm that the transmission was received. An example of this type of communication can be seen when a teacher tells a student that they would like to speak with their parents. The teacher is relying on the student to complete the transmission to the parents, and has no guarantee that the student will actually inform them about the request.

    Both TCP and UDP are invaluable when sending data across a network, but depending on the situation, one protocol may be better than the other. Before deciding which protocol to use, evaluate the transmission's requirements for speed, reliability, latency, and overhead in order to pick the best protocol for the situation.
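
    To make the contrast concrete, here is a minimal sketch using Node.js's built-in net (TCP) and dgram (UDP) modules, written in TypeScript. Ports and messages are arbitrary, and error handling is omitted:

    ```typescript
    import * as net from "net";
    import * as dgram from "dgram";

    // TCP: a connection is established first; data arrives reliably and in order.
    const tcpServer = net.createServer((socket) => {
      socket.on("data", (data) => console.log(`TCP received: ${data}`));
    });
    tcpServer.listen(4000, () => {
      const client = net.connect(4000, "127.0.0.1", () => {
        client.end("do your homework"); // send, then close the connection cleanly
      });
    });

    // UDP: datagrams are released toward an address; no connection, no guarantee.
    const udpServer = dgram.createSocket("udp4");
    udpServer.on("message", (msg) => console.log(`UDP received: ${msg}`));
    udpServer.bind(4001, () => {
      const client = dgram.createSocket("udp4");
      client.send(Buffer.from("please have your parents call me"), 4001, "127.0.0.1", () => {
        client.close(); // fire and forget: we never learn whether it arrived
      });
    });
    ```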

    Read the article

  • Book Review: Pro SQL Server 2008 Relational Database Design and Implementation

    - by Alexander Kuznetsov
    Investing in proper database design is a very efficient way to cut maintenance costs. If we expect a system to last, we need to make sure it has a good solid foundation: high-quality database design. Surely we can, and sometimes do, cut corners and save on database design to get things done faster. Unfortunately, such corner-cutting frequently comes back to bite us: we may end up spending a lot of time solving issues caused by poor design. So, a solid understanding of relational database design is...(read more)

    Read the article

  • MSDN Webcast: Security Talk - Protecting Your Data from the Application to the Database

    Securing a database is a difficult task. Learn about areas that application developers and database administrators should consider to help increase data security. Hear about topics ranging from securing the network channel to using proper authentication to new authorization features introduced in Microsoft SQL Server 2005.

    Read the article

  • Should I refer to browser-based games as HTML5 games or Javascript games?

    - by Bane
    First of all, I know that there are alternatives to both HTML5 and Javascript, but I worded the question so generally ("browser-based") because if I had said "HTML5" or "Javascript" games that would already imply an answer to the question. When writing wiki posts or discussing, I usually call these games "HTML5/Javascript" games. They are written in Javascript, using the new HTML5 technology. What is the proper way to call them: HTML5 or Javascript games? I see that most people opt for HTML5, why?

    Read the article

  • Kiss your MySQL website goodbye

    Really Linux: "In this article Mark Rais offers beginning Linux web administrators some guidance on proper backup and restore of a dynamic website, to help prevent a catastrophic moment."

    Read the article

  • Should I be using a JavaScript SPA design when security is important?

    - by ryanzec
    I asked something similar on Stack Overflow about a particular piece of code, but I want to ask this in a broader sense. I have a web application that I have started to write in Backbone using a Single Page Architecture (SPA), and I am starting to second-guess myself because of security.

    We are not storing or sending credit card information through this web application, but we are storing sensitive information that people upload to us and will have the ability to re-download. The obvious security concern with JavaScript is that you can't trust anything that comes from it, yet in a Backbone SPA application, everything is sent through JavaScript. There are two security features that I have to build in JavaScript: permissions and authentication.

    The authentication piece is just me overriding the Backbone.Router.prototype.navigate method to check the fragment it is trying to load; if application.session.loggedIn is not set to true (and they are not viewing an unauthenticated page), they are redirected to the login page automatically. A user could easily set application.session.loggedIn to true (or modify the Backbone.Router.prototype.navigate method), but they would then also have to dynamically embed a link into the page (or modify a current one) with the proper classes, data-* attributes, and href values to load a page that should only be loaded when the user has logged in (and has the permissions).

    For permissions, I have an acl object. All someone would have to do to view pages, or parts of pages, they should not be able to see is call acl.addPermission(resource, permission) with the proper permissions, or modify acl.hasPermission() to always return true, and then navigate away and back to the page. Certain things in ECMAScript 5, like Object.seal() or Object.freeze(), would help with some of this, but we have to support IE 8, which does not support them. The REST API also performs security checks on every request, so even if users can see parts of the interface they should not be able to, they still should not be able to actually affect any data.

    The main benefits for me in developing a JavaScript SPA application are that the application is much more responsive, since it transfers only the minimum amount of JSON data for the requested action and performs the minimum amount of work; that you end up developing an API for the data (which is good if you want to expand your application to different platforms/technologies); and that there is more separation between front-end and back-end. However, if security is a concern, is it really wise to go down the road of a JavaScript SPA application for the front-end?
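
    For reference, a minimal sketch of the navigate override described above, in TypeScript. The application.session.loggedIn flag comes from the question itself; the set of public fragments is a hypothetical name. As the question notes, a check like this is client-side convenience only, and the REST API must still authorize every request:

    ```typescript
    import * as Backbone from "backbone";

    // Hypothetical application state and public routes, per the question's description.
    const application = { session: { loggedIn: false } };
    const publicFragments = new Set(["login", "signup"]);

    const originalNavigate = Backbone.Router.prototype.navigate;
    Backbone.Router.prototype.navigate = function (
      this: Backbone.Router,
      fragment: string,
      options?: any
    ) {
      // Redirect unauthenticated users to the login page for protected fragments.
      // Anyone can flip loggedIn in a console, so this is a UX guard, not security.
      if (!application.session.loggedIn && !publicFragments.has(fragment)) {
        return originalNavigate.call(this, "login", { trigger: true });
      }
      return originalNavigate.call(this, fragment, options);
    };
    ```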

    Read the article

  • Minimal set of critical database operations

    - by Juan Carlos Coto
    In designing the data layer code for an application, I'm trying to determine if there is a minimal set of database operations (both single and combined) that are essential for proper application function (i.e. the database is left in an expected state after every data access call). Is there a way to determine the minimal set of database operations (functions, transactions, etc.) that are critical for an application to function correctly? How do I find it? Thanks very much!
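
    The invariant in the parenthetical (the database is left in an expected state after every data access call) is usually what transactions provide, whatever the minimal operation set turns out to be. A sketch in TypeScript, assuming a hypothetical db client exposing query(sql, params); the $1/$2 placeholder style follows PostgreSQL drivers, so adjust for your own:

    ```typescript
    async function transferCredits(db: any, from: number, to: number, amount: number): Promise<void> {
      await db.query("BEGIN");
      try {
        await db.query("UPDATE accounts SET credits = credits - $1 WHERE id = $2", [amount, from]);
        await db.query("UPDATE accounts SET credits = credits + $1 WHERE id = $2", [amount, to]);
        await db.query("COMMIT"); // both changes become visible together, or...
      } catch (err) {
        await db.query("ROLLBACK"); // ...neither does: the expected state survives
        throw err;
      }
    }
    ```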

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup (Key Lookup)

    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup (key lookup) is. An increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this [...]

    Read the article

  • Friday Fun: Shape Fold

    - by Asian Angel
    This week’s game comes with lots of puzzle solving, brain-teasing goodness to keep you busy. On each level you will need to rotate, twist, and/or move the hinged puzzle pieces into their proper shape. Do you have the patience and skill to solve all the puzzles or will you be forced to admit defeat?

    Read the article

  • Design Issues With Forms

    - by ultan o'broin
    Interesting article on UX Matters, well worth reading, especially for the idea that global design research can make for a better user experience in all languages: Label Placement in Austrian Forms, with Some Lessons for English Forms. What is perhaps underplayed here is the cultural influence of how people worked with forms in the past, and how a proper global user-centered design process needs to address this issue and move usability gains (in the enterprise space, productivity especially) in the right direction.

    Read the article

  • Keyword Research - Why Bother?

    "Every long journey, starts with a single step" says Confucius. And with every successful SEO campaign, one must start with proper keyword research. Keyword research is more than just knowing which keywords to use for a campaign, it also determines the direction of your campaign in relation to whatever key phrase you are targeting.

    Read the article

  • What would a start-to-finish development procedure look like?

    - by Tom Busby
    I have a problem that my developer friends share. We recently left university and find ourselves working either for a firm that already has good procedures (TDD, automated testing, proper agile development, etc.) or for a firm that doesn't. I want to learn some of these vital skills and get a grip on what a complete start-to-finish development procedure would look like, and what the differences would be between a smaller project and a long-term project with many team members.

    Read the article

  • upstart-supervised init script for Apache?

    - by Ben Williams
    I want to run apache on Ubuntu 10.04, and use the nice supervision stuff in upstart (I'm not just talking about the apache init script, but proper service supervision a la daemontools - which is to say, restarting apache when it dies, things like that). Does anyone have a running upstart config for supervising apache on ubuntu 10.04? The Googles have been no help to me, but it could be that my google-fu is weak.
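
    I can't vouch for a canonical job file, but a minimal upstart job along these lines is the usual starting point. This is an untested sketch: it assumes Apache can be kept in the foreground via the NO_DETACH define so upstart can track and respawn the process, and it should be tried on a non-production box first:

    ```
    # /etc/init/apache2-supervised.conf -- untested sketch for Ubuntu 10.04
    description "Apache httpd under upstart supervision"

    start on runlevel [2345]
    stop on runlevel [!2345]

    respawn            # restart apache when it dies
    respawn limit 10 5 # give up if it dies 10 times within 5 seconds

    script
        . /etc/apache2/envvars   # apache2 will not start without these variables
        exec /usr/sbin/apache2 -DNO_DETACH -k start
    end script
    ```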

    Read the article

  • How to use the Ajax ValidatorCalloutExtender

    - by SAMIR BHOGAYTA
    Steps:

    Step 1: Insert any validation control with a textbox.
    Step 2: Insert a ValidatorCalloutExtender from the Ajax Control Toolkit alongside the validation control.
    Step 3: Set the properties of the validation control: ControlToValidate, ErrorMessage, SetFocusOnError=True, Display=None, and give the validation control a proper name.
    Step 4: Set the validation control's ID as the TargetControlID property of the ValidatorCalloutExtender.

    Read the article

  • PHP Web Application Development - The Value of Smart Planning in Development

    If you've outsourced web application development, or worked as a programmer or project leader on a development team, you've definitely experienced the difficult push to meet a deadline. Time always seems to be a constraint. The client may bring up changes which he or she feels should have been understood by the development team (sometimes rightfully, other times not), which further pressures the team to deliver faster than it may be able to. At least without proper planning, that is.

    Read the article

  • PHP MVC error handling, view display and user permissions

    - by cen
    I am building a moderation panel from scratch using an MVC approach, and a lot of questions cropped up during development. I would like to hear how others handle these situations.

    Error handling. Should you handle an error inside the class method, or should the method return something anyway and the controller handle the error? What about PDO exceptions: how should they be handled? For example, say we have a method that returns true if the user exists in a table and false if he does not. What do you return in the catch statement? You can't just return false, because then the controller assumes that everything is all right while the truth is that something must be seriously broken. Displaying the error from inside the method completely breaks the design. Maybe a page redirect inside the method?

    The proper way to show a view. The controller right now looks something like this:

        include('view/header.php');
        if ($_GET['m']=='something')
            include('view/something.php');
        elseif ($_GET['m']=='somethingelse')
            include('view/somethingelse.php');
        include('view/footer.php');

    Each view also checks that it was included from the index page, to prevent it being accessed directly. There is a view file for each different document body. Is this way of including different views OK, or is there a more proper way?

    Managing user rights. Each user has his own rights determining what he can see and what he can do. Which part of the system should verify that the user has permission to see a view: the controller, or the view itself? Right now I do permission checks directly in the view, because each view can contain several forms that require different permissions, and I would need a separate file for each of them if the checks were in the controller. I also have to re-check permissions every time a form is submitted, because form data can be easily forged. The truth is, all this permission checking and input validation turns the controller into a huge if/then/else cluster. I feel like 90% of the time I am doing error checks/permissions/validations and very little actual logic. Is this normal, even for popular frameworks?
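
    On the error-handling question, the usual distinction is between expected outcomes (user not found: return false) and exceptional failures (lost connection: throw). Sketched in TypeScript for brevity; the same shape applies in PHP by catching the PDOException and rethrowing a domain exception:

    ```typescript
    // A domain-level error the controller can catch and turn into an error page.
    class DataUnavailableError extends Error {}

    class UserRepository {
      // query is injected; think of it as a thin wrapper around PDO or your driver.
      constructor(private query: (sql: string, params: unknown[]) => Promise<unknown[]>) {}

      async userExists(id: number): Promise<boolean> {
        try {
          const rows = await this.query("SELECT 1 FROM users WHERE id = ?", [id]);
          return rows.length > 0; // false means "no such user", a normal outcome
        } catch (err) {
          // Don't return false here: the controller would wrongly conclude the
          // user is absent. Translate the driver failure into a domain error.
          throw new DataUnavailableError(`user lookup failed: ${err}`);
        }
      }
    }
    ```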

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices. STAGE 4: AUTOMATED DEPLOYMENT

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.

    But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment, which also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users.”

    There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:

    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “it’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group.” Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing – that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process, and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do. Or, if you’re the DBA yourself, get the dev team up-to-speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “what we want, to do continuous delivery properly” and “what we’re currently stuck with”. Some of these differences are:

    - What we want: each developer with their own dedicated database environment. What we’ve got: a single shared “development” environment, used by everyone at once.
    - What we want: an integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: in fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: a separate QA environment used explicitly for manual testing prior to release. What we’ve got: “we just test on the dev environments, or maybe pre-production”.
    - What we want: a proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: a production environment reproducible from source control. What we’ve got: a production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases.
    - An integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
    - Pre-production – the environment you use to test the production release process.
    - Production.

    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready.
    - Set access permissions appropriately.
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod.
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and the target environment. This generates an upgrade script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.
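
    As a toy illustration of the drift problem in the footnote above, a pipeline can at least assert that two environments’ column metadata match before trusting the pre-production rehearsal. A minimal TypeScript sketch, with the per-environment query left to you; a real schema compare tool checks far more than columns:

    ```typescript
    // fetchColumns would run something like
    //   SELECT TABLE_NAME + '.' + COLUMN_NAME + ':' + DATA_TYPE
    //   FROM INFORMATION_SCHEMA.COLUMNS
    // against the named environment and return one string per column.
    async function assertNoDrift(
      fetchColumns: (env: "preprod" | "prod") => Promise<string[]>
    ): Promise<void> {
      const [pre, prod] = await Promise.all([fetchColumns("preprod"), fetchColumns("prod")]);
      const preSet = new Set(pre);
      const prodSet = new Set(prod);
      const drift = [
        ...pre.filter((c) => !prodSet.has(c)).map((c) => `missing in prod: ${c}`),
        ...prod.filter((c) => !preSet.has(c)).map((c) => `missing in preprod: ${c}`),
      ];
      if (drift.length > 0) {
        // Fail the pipeline: the pre-production rehearsal no longer proves anything.
        throw new Error(`environments have drifted:\n${drift.join("\n")}`);
      }
    }
    ```
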
    There are several variations on this manual process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme is pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.

    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?”

    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before – no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like:

    - Immediately restore from backup.
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!).
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment, and your requirements for a completely failsafe process.
    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to make some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier.

    A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner, so that you can always roll back without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – the release team has the opportunity to achieve zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns towards “best practice”. It’s a trade-off between implementing quality processes and the necessity of doing so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article
