Search Results

Search found 1297 results on 52 pages for 'disaster stories'.

Page 4/52

  • Planning for Disaster

    There is a certain paradox in being advised to expect the unexpected, but the DBA must plan and prepare in advance to protect their organization's data assets in the event of an unexpected crisis, and return them to normal operating conditions. Minimizing downtime in such circumstances should be the aim of every effective DBA. To plan for recovery, it pays to have the mindset of a pessimist.

    Read the article

  • Retrieving SQL Server Fixed Database Roles for Disaster Recovery

    We ran into a case recently where we had the logins and users scripted out on our SQL Server instances, but we didn't have the fixed database roles for a critical database. As a result, our recovery efforts were only partially successful. We ended up trying to figure out what the database role memberships were for the database we recovered, but we'd like not to be in that situation again. Is there an easy way to do this?
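
    The article will have its own answer, but one common approach is to script the memberships out of the live database on a schedule, so the script sits with the rest of your DR artifacts before anything fails. Below is a rough sketch using Python and pyodbc (the connection details and output path are placeholders): it reads sys.database_role_members, resolves names through sys.database_principals, and emits one re-runnable statement per membership.

        import pyodbc

        # Placeholder connection string; point this at the critical database.
        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=myserver;DATABASE=CriticalDB;Trusted_Connection=yes")

        # Catalog views: sys.database_role_members maps role principal ids to
        # member principal ids; sys.database_principals resolves the names.
        rows = conn.cursor().execute("""
            SELECT r.name AS role_name, m.name AS member_name
            FROM sys.database_role_members drm
            JOIN sys.database_principals r ON drm.role_principal_id = r.principal_id
            JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id
        """).fetchall()

        # Emit one statement per membership. ALTER ROLE ... ADD MEMBER is
        # SQL Server 2012+; on older versions use EXEC sp_addrolemember instead.
        with open("role_memberships.sql", "w") as f:
            for role, member in rows:
                f.write("ALTER ROLE [%s] ADD MEMBER [%s];\n" % (role, member))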

    Read the article

  • Encrypted USB stick not booting anymore: how can I resolve the disaster?

    - by statquant
    Guys, I am experiencing a massive problem here. I created a bootable USB stick, "full disk" encrypted, with Ubuntu 13.04. I was using it when it froze. I had to reboot it manually and now I cannot boot from it anymore (it is no longer listed in the boot menu). If I plug it in while running Ubuntu on my laptop, Ubuntu can see it and I am asked for a password, but the correct password does not seem to work (I assume this passphrase is the one I was using previously at startup). Can you please help me figure this out? This is a massive problem for me...

    Read the article

  • The Huge Disaster Within The Linux 2.6.35 Kernel

    Phoronix: "While the 2.6.35 code has not even seen its first release candidate yet, there are some massive performance drops in a variety of different tests that have yet to be corrected and are nothing like we have encountered with previous kernel release cycles, especially for a regression that has lived now for about one week."

    Read the article

  • Practically Cloudy: SQL Server Disaster Recovery to Microsoft Azure - Backups

    In the first in a series on the practicalities of using the Microsoft Azure Platform for the SQL Server professional, Buck Woody shows that, whatever your version of SQL Server, there is a way of storing offsite backups in the cloud.
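
    The mechanics differ by version (SQL Server 2012 SP1 and later can write a backup straight to blob storage with BACKUP ... TO URL), but for older versions the pattern is simply: take a normal .bak, then push it offsite to an Azure storage container. A hedged sketch of that upload step with the azure-storage-blob Python package (the current SDK; older azure-storage versions differ) is below; the connection string, container, and file names are all placeholders.

        from azure.storage.blob import BlobServiceClient

        # Placeholder connection string, from the storage account's access keys.
        service = BlobServiceClient.from_connection_string("DefaultEndpointsProtocol=https;...")
        blob = service.get_blob_client(container="sql-backups",
                                       blob="CriticalDB_20140101.bak")

        # Stream the local backup file up to the container, replacing any stale copy.
        with open("CriticalDB_20140101.bak", "rb") as f:
            blob.upload_blob(f, overwrite=True)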

    Read the article

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules.

    Picking the most important part of Scrum is difficult. All of the rules are required, but if there were one rule that is "more" required than every other rule, it's having a good Product Owner. Simply put, the Product Owner can make or break the project.

    Duties of the Product Owner

    A Product Owner has many duties and responsibilities. I'll talk about each of these duties in detail below. A Product Owner:

    - Discovers and records stories for the backlog.
    - Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog.
    - Determines release dates and iteration dates.
    - Develops story details and helps the team understand those details.
    - Helps QA to develop acceptance tests.
    - Interacts with the Customer to make sure that the product is meeting the customer's needs.

    Discovers and Records Stories for the Backlog

    When I do Scrum, I always use User Stories as the means for capturing functionality that's required in the system. Some people will use Use Cases, but the same rule applies. The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system. Many different mechanisms for capturing this input can be used. User interviews are great, but all sources should be considered, including talking with Customer Support types. Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use.

    Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them. They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user. Stories are about adding value to the company. If the stories don't have a direct benefit to the end user, the Product Owner should question whether or not the story should be implemented. In general, technical stories should be included as tasks in User Stories. Technical stories are often needed, but the ultimate value to the user is in user-based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality.

    Until the iteration prior to development, stories should be nothing more than short, one-line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories. I'll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn.

    Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog

    Prioritization of stories is one of the most difficult tasks that a Product Owner must do. A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories. If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization. The Product Owner is the ONLY person who has the responsibility to prioritize that list. The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct.

    Just listening will still not yield the proper priorities. Care must also be taken to ensure that Return on Investment is considered. Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities. Product Owners should be willing to look at cold, hard numbers to determine the order for stories. Even when many people want a feature, if that feature is costly to develop, it may not have as high a return on investment as features that are cheaper, but not as popular.

    The act of prioritization often causes conflict in an environment. Customer Service thinks that feature X is the most important, because it will stop people from calling. Operations thinks that feature Y is the most important, because it will stop servers from crashing. Developers think that feature Z is most important because it will make writing software much easier for them. All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories. The Product Owner will determine which feature gives the best return on investment, and the other features will have to wait their turn, which means that someone will not have their top-priority feature implemented first.

    A weak Product Owner will refuse to do prioritization. I've heard from multiple Product Owners the following phrase: "Well, it's all got to be done, so what does it matter what order we do it in?" If your Product Owner is using this phrase, you need a new Product Owner. Order is VERY important. In Scrum, every release is potentially shippable. If the wrong-priority items are developed, then the value added in each release isn't what it should be. Additionally, a Product Owner with this mindset doesn't understand Agile. A product is NEVER finished until the company has decided that it is no longer a going concern and is no longer going to sell the product. Therefore, prioritization isn't an event, it's something that continues every day. The logical extension of the phrase "it's all got to be done" is that you will never ship your product, since a product is never "done."

    Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple. The top-priority items are copied into the respective backlogs in order, and the task is complete. The team does have the right to shuffle things around a little in the iteration backlog. For example, they may determine that working on story C with story A is appropriate because they're related, even though story B is technically a higher priority than story C. Or they may decide that story B is too big to complete in the time available after story A has tasks created, so they'll work on story C since it's smaller. They can't, however, go deep into the backlog to pick stories to implement. The team and the Product Owner should work together to determine what's best for the company. Prioritization is time consuming, but it's one of the most important things a Product Owner does.

    Determines Release Dates and Iteration Dates

    Product Owners are responsible for determining release dates for a product. A common misconception among Product Owners is that every "release" needs to correspond with an actual release to customers. This is not the case. In general, releases should be no more than 3 months long. You may decide to release the product to the customers, and many companies do, but it may also be an internal release.

    If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency. The date is far enough away that they don't need to give the release their full attention. Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do. The more frequently you do these tasks, the easier they are to accomplish.

    The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers. The determination should be made on whether the features contained in the release are valuable enough and complete enough that the customers will see real value in the release. Often, some features will take more than three months to reach a state where they qualify for a release, or they need additional supporting features to be released. The Product Owner has the right to make this determination.

    In addition to release dates, the Product Owner will also help determine iteration dates. In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time. If the iteration length is changed every iteration, you're not doing Scrum. Iteration lengths help the team and company get into a rhythm of developing quality software. Iterations should be somewhere between 2 and 4 weeks in length. Any shorter, and significant software will likely not be developed. Any longer, and the team won't feel urgency and planning will become very difficult.

    Iterations may not be extended during the iteration. Companies where Scrum isn't really followed will often use this as a strategy to complete all stories. They don't want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team. Companies like this typically don't allow failure. This is unhealthy. Failure is part of life, and unless we learn from it, we can't improve. I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed than extend the iteration and not deal with the core problems.

    If iteration length varies, retrospectives become more difficult. For example, evaluating the performance of the team's estimation efforts becomes much more difficult if the iteration length varies. Also, the team must have a velocity measurement. If the iteration length varies, measuring velocity becomes impossible, and upper management no longer has the ability to evaluate the team's performance. People external to the team will no longer have the ability to determine when key features are likely to be developed. Variable iterations cause the entire company to fail and will likely cause Scrum to fail at an organization.

    Develops Story Details and Helps the Team Understand Those Details

    A key concept in Scrum is that a story is nothing more than a placeholder for a conversation. Stories should be nothing more than short, one-line statements about the functionality. The team will then converse with the Product Owner about the details of that story. The Product Owner needs to have a very good idea of what the details of the story are and needs to be able to help the team understand those details.

    Too often, we see this requirement translated into the need for comprehensive documentation about the story, including old-fashioned requirements documentation. The team should only develop the documentation that is required and should not develop documentation that is created only because there is a process that says to do so.

    In general, what we see work best is that, in the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, develops the details of that story. They'll figure out what business rules are required, potentially make paper prototypes or other lightweight mock-ups, and seek to understand the story and what is implied. Note that the time allowed for this task is deliberately short. The Product Owner only has a single iteration to develop all of the stories for the next iteration.

    If more than one iteration is used, I've found that teams end up with Big Design Up Front and traditional requirements documents. This is a waste of time, since the team will then need to have discussions with the Product Owner to figure out what the requirements document says. Instead, skip making the pretty pictures and detailing the nuances of the requirements, and build only what is minimally needed by the team to do development. If something comes up during development, you can address it at that time and figure out what you want to do. The goal is to keep things as lightweight as possible so that everyone can move as quickly as possible.

    Helps QA to Develop Acceptance Tests

    In Scrum, no story can be counted until it is accepted by QA. Because of this, acceptance tests are very important to the team. In general, acceptance tests need to be developed prior to the iteration, or at the very beginning of the iteration, so that the team can make sure that the tasks they develop will fulfill the acceptance criteria.

    The Product Owner will help the team, including QA, understand what will make the story acceptable. Note that the Product Owner needs to be careful about specifying that the feature will work "perfectly" at the end of the iteration. In general, features are developed a little bit at a time, so only the bit that is being developed should be considered necessary for acceptance.

    A weak Product Owner will make statements like "Do it right the first time." Not only are these statements damaging to the team (as if they would try to do it WRONG the first time...), they also ignore the iterative nature of Scrum. Additionally, a weak Product Owner will seek to add scope during acceptance testing. For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance. This often causes the team to miss the iteration because scope that wasn't planned on is included. There are ways that the team can mitigate this problem; for example, include extra "Product Owner" time to deal with the uncertainty that you know will be introduced by the Product Owner. This will slow the perceived velocity of the team and is not ideal, since they'll be doing more work than they get credit for.

    Interacts with the Customer to Make Sure the Product is Meeting the Customer's Needs

    Once development is complete, what the team has worked on should be put in front of real, live people to see if it meets the needs of the customer. One of the great things about Agile is that if something doesn't work, we can revisit it in a future iteration! This frees up the team to make the best decision now, knowing that if that decision proves to be incorrect, the team can revisit and change it.

    Features are about adding value to the customer, so if the customer doesn't find them useful, then having the team make tweaks is valuable. In general, most software will be 80 to 90 percent "right" after the initial round, and only minor tweaks are required. If proper coding standards are followed, these tweaks are usually minor and easy to accomplish. Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer.

    Poor Product Owners will think that they know the answers already, that their customers are silly and do stupid things, and that they don't need customer input. If you have a Product Owner that is afraid to show the team's work to real customers, you probably need a different Product Owner.

    Up next: "Who Makes a Good Product Owner," followed by "Messing with the Team."

    Technorati Tags: Scrum, Product Owner

    Read the article

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for:

    - Affordable: less than $100/yr or a one-time cost.
    - Reliable: at least a smaller chance of failing than there is of fire or flood.
    - Easy for the initial backup and for adding to it, and at least semi-easy to recover.

    I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above.

    - Hard drive: I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice; very easy to take out once a month and add files to.
    - DVDs: way too much of a hassle for both backup and restore.
    - Tape: no idea on the affordability of this option.
    - Online: given that I have at least 300GB already, ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2Mb at best, this just doesn't sound like a good option for me, but I could be convinced.
    - Other: have I missed something?

    Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two, to reduce the likelihood of failure?

    Read the article

  • Need some concrete examples of user stories, tasks, and how they relate to functional and technical specifications

    - by gideon
    A little heads-up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty, throwaway mockups over nicely abstracted and very minimal base code. In the end, I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile and Scrum. Here are my problems:

    I know what a functional spec is (sample). Do all user stories and/or scenarios become part of the functional spec? I know what user stories and tasks are, but are these really user stories? I don't see any business-value reason added to them. I made a mind map using FreeMind and had problems like this: Actor: Finance Manager. Can add a financial plan into the system because... well, that's the point of it? What business-value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because...?? It's the point of a blogger app; it centers around that feature.

    How do I go into the finer details and system definitions? Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it to the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must... But how do you create backlog lists/features out of this? Where are the user stories for what a blog article is and how one adds/removes it?

    Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec contain user stories with UI mockups? Will these user stories then branch out into tasks which will make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store the blog article. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical specs? Some concrete examples/answers to my questions would be nice!

    Read the article

  • What is the BRU Server Disaster Recovery Procedure?

    - by Jonskichov
    How do I go about performing a full, bare-metal disaster recovery from BRU Server backups? I have backed up the entire C:\ (including Windows, Program Files, etc.) of a test machine (using Open File Manager) and want to restore this to a new server. What is the procedure I need to go through to restore to a clean server using my backup? How does this work with various services such as DHCP, DNS, Active Directory, SQL Server, the Windows Registry, etc.? Thanks in advance.

    Read the article

  • So Your Laptop’s Fan Has Stopped Working Then? [Humorous Image]

    - by Asian Angel
    There is such a thing as dust build-up and then there are the odd cases of dust-ball evolution… What is the worst case of dust build-up that you have dealt with? Make sure to share your stories with your fellow readers in the comments! Help, my laptop’s fan is not working! [via Reddit Tech Support Gore]

    Read the article

  • Voting on Hacker News stories programmatically?

    - by igorgue
    I decided to write an app like http://michaelgrinich.com/hackernews/ but for Android devices. My idea is to use a web application backend (because I'd rather code in Python and for the web than completely in Java for Android devices). What I have implemented right now is something like this:

        $ curl -i http://localhost:8080/stories.json?page=1\&stories=1
        HTTP/1.0 200 OK
        Date: Sun, 25 Apr 2010 07:59:37 GMT
        Server: WSGIServer/0.1 Python/2.6.5
        Content-Length: 296
        Content-Type: application/json

        [{"title": "Don\u2019t talk to aliens, warns Stephen Hawking", "url": "http://www.timesonline.co.uk/tol/news/science/space/article7107207.ece?", "unix_time": 1272175177, "comments": 15, "score": 38, "user": "chaostheory", "position": 1, "human_time": "Sun Apr 25 01:59:37 2010", "id": "1292241"}]

    The next step (and final, I think) is voting. My design is something like this:

        $ curl -i http://localhost:8080/stories/1?vote=up -u username:password

    will vote up, and:

        $ curl -i http://localhost:8080/stories/1?vote=down -u username:password

    down. I have no idea how to do it, though... I was planning to use Twill, but the login link is always different, e.g.: http://news.ycombinator.com/x?fnid=7u89ccHKln Later the Android app will consume this API. Any experience with programmatically browsing Hacker News?
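
    Hacker News has no official voting API, so programmatic voting generally means acting like a browser: log in to get session cookies, fetch the story page, scrape its vote link (which embeds a per-session auth token, the same reason the login fnid keeps changing), and then request that link. Here is a rough Python sketch of that flow using the requests library; the endpoint, form field names, and link markup are assumptions that HN can (and does) change, so treat it as illustrative only.

        import re
        import requests

        BASE = "https://news.ycombinator.com"

        def vote(story_id, username, password, direction="up"):
            session = requests.Session()
            # Log in to pick up session cookies. The /login endpoint and the
            # acct/pw field names are assumptions about HN's form, not an API.
            session.post(BASE + "/login", data={"acct": username, "pw": password})
            # Scrape the story page for its vote link; the link carries a
            # per-session auth token, which is why a fixed URL can't be hardcoded.
            page = session.get(BASE + "/item", params={"id": story_id}).text
            match = re.search(
                r"href=['\"](vote\?id=%s[^'\"]*how=%s[^'\"]*)['\"]" % (story_id, direction),
                page)
            if match is None:
                raise RuntimeError("vote link not found: login failed or markup changed")
            # Following the scraped link (with HTML entities unescaped) casts the vote.
            session.get(BASE + "/" + match.group(1).replace("&amp;", "&"))

        vote("1292241", "myuser", "mypassword", "up")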

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third type. I proposed to use ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our Ops engineers do not have much expertise in it, so using OpenIndiana imposes another risk. Colleagues try to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in a Linux environment?
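
    For the bad-migration case specifically, PostgreSQL's built-in point-in-time recovery can stay on Linux and avoid ZFS entirely: the replay cost is bounded by how often you take base backups, not by the total WAL archive, so a nightly pg_basebackup plus WAL archiving typically means replaying hours of WAL rather than 50G of it. Below is a minimal sketch of the pre-PostgreSQL-12 recovery.conf approach, assuming continuous archiving (archive_command) is already configured and the base backup is a pg_basebackup tarball; all paths and timestamps are placeholders.

        import subprocess

        # Hypothetical paths; adjust to the real layout.
        BASE_BACKUP = "/backups/base/latest.tar.gz"
        PGDATA = "/var/lib/postgresql/9.2/main"
        WAL_ARCHIVE = "/backups/wal"

        def restore_to(target_time):
            # Unpack the most recent base backup into a fresh, empty data directory.
            subprocess.check_call(["tar", "-xzf", BASE_BACKUP, "-C", PGDATA])
            # Pre-PostgreSQL-12 PITR: recovery.conf tells the server where to find
            # archived WAL and when to stop replaying -- just before the bad DDL.
            with open(PGDATA + "/recovery.conf", "w") as f:
                f.write("restore_command = 'cp %s/%%f %%p'\n" % WAL_ARCHIVE)
                f.write("recovery_target_time = '%s'\n" % target_time)
                f.write("recovery_target_inclusive = false\n")
            # Starting the server replays WAL up to the target and then stops.
            subprocess.check_call(["pg_ctl", "-D", PGDATA, "start"])

        # Stop replay one second before the erroneous migration was executed.
        restore_to("2013-06-01 03:59:59")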

    Read the article

  • NTFS partition size not recognized after disaster recovery clone

    - by djechelon
    I'm in the middle of a disaster recovery of a 250GB hard disk that was "clicking". Obviously I didn't have a backup copy. I managed to salvage all the files thanks to GParted Live, which was able to read the disk without a single "click" sound. So I cloned the partition to a new drive sized 500GB. Unfortunately, the GParted process went into some kind of infinite loop, the disks stopped doing I/O, and after a couple of hours I interrupted the clone process. Now the problem is: when cloning the partition I also chose to expand the 250GB to the whole 500GB of the target disk. Windows sees the partition sized 500GB in Computer Management, but Windows Explorer only sees 250GB. chkdsk e: /f says the filesystem is OK. How can I repair the file system and let Windows see the full 500GB of the new partition? An alternate idea is to deep-copy the files from the backup disk to a newly formatted disk. That should definitely fix it. Any other ideas?

    Read the article

  • Wiki/CMS with synchronization?

    - by Clinton Blackmore
    We're looking into putting up a wiki or CMS for internal use by our IT department. One of the big things we want to use it for is disaster recovery procedures. Given that a disaster, such as a power or network outage, might render the wiki inaccessible, it seems sensible to host the wiki in two places so that if one is inaccessible, we can fall back to the other. Are there any wikis or CMSes that synchronize (or an alternate way to achieve a similar end)?

    Read the article

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: as part of a bigger product, my team is asked to realize a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put on the wall a certain number of user stories regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5.

    Coming to the problem: the team then moved on to the user stories regarding the other libraries, actually 10 stories, and added those 2 points of "function call mechanism" work to each of those user stories. This immediately raised the total points for the product by 20 points! Everyone on the team knows that each user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that part in one user story, but those 20 points feel so awfully unrealistic!

    I've proposed a solution, but I'm absolutely not satisfied with it: we created a "design story" and put those annoying 2 points on it. However, when we came to realize it and demonstrate it to our customers, we were unable to show them anything really valuable about that story! The problem here is whether we should ignore the principle of having isolated user stories (without any dependency between them). What would you do, or even better, what have you done, in situations like this? (A small footnote: following a suggestion, I've moved this question from Stack Overflow.)

    Read the article

  • A good data model for finding a user's favorite stories

    - by wings
    Original Design

    Here's how I originally had my Models set up:

        class UserData(db.Model):
            user = db.UserProperty()
            favorites = db.ListProperty(db.Key)  # list of story keys
            # ...

        class Story(db.Model):
            title = db.StringProperty()
            # ...

    On every page that displayed a story I would query UserData for the current user:

        user_data = UserData.all().filter('user =', users.get_current_user()).get()
        story_is_favorited = (story in user_data.favorites)

    New Design

    After watching this talk: Google I/O 2009 - Scalable, Complex Apps on App Engine, I wondered if I could set things up more efficiently.

        class FavoriteIndex(db.Model):
            favorited_by = db.StringListProperty()

    The Story Model is the same, but I got rid of the UserData Model. Each instance of the new FavoriteIndex Model has a Story instance as a parent, and each FavoriteIndex stores a list of user ids in its favorited_by property. If I want to find all of the stories that have been favorited by a certain user:

        index_keys = FavoriteIndex.all(keys_only=True).filter('favorited_by =', users.get_current_user().user_id())
        story_keys = [k.parent() for k in index_keys]
        stories = db.get(story_keys)

    This approach avoids the serialization/deserialization that's otherwise associated with the ListProperty.

    Efficiency vs. Simplicity

    I'm not sure how efficient the new design is, especially after a user decides to favorite 300 stories, but here's why I like it:

    - A favorited story is associated with a user, not with her user data.
    - On a page where I display a story, it's pretty easy to ask the story if it's been favorited (without calling up a separate entity filled with user data):

          fav_index = FavoriteIndex.all().ancestor(story).get()
          fav_of_current_user = users.get_current_user().user_id() in fav_index.favorited_by

    - It's also easy to get a list of all the users who have favorited a story (using the method in #2).

    Is there an easier way? Please help. How is this kind of thing normally done?

    Read the article

  • Problems hiring someone on elance or similar sites

    - by akim
    I am assigned to hire some developers/designers on these kinds of sites, and I'm looking for war stories about problems with this strategy. All I have found so far are tips about getting more jobs as a freelancer, but nothing about being on the other side. We need a web designer, and maybe a web programmer, for an internal, small, but really important site with a very short deadline. Any thoughts about this? Is it worth hiring someone? Or should we work extra hours and get this done by ourselves?

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight, so I thought I would push this post out VERY early. When you don't sleep, your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by" waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article

  • Recovery from URL structure change?

    - by Dejan Pelzel
    In July this year, we changed the URL structure of the website from:

    - Post: domain.com/blog/post/986/dance/heart-beats-dance-video-by-chinatsu/
    - Category: domain.com/blog/index/cosplay/

    to:

    - Post: domain.com/dance/heart-beats-dance-video-by-chinatsu-986/
    - Category: domain.com/cosplay/

    Everything was (supposedly) properly redirected with 301 redirects, and at first it seemed that the traffic returned after a couple of days, but it has now been close to 2 months and things keep getting worse, although Google is slowly indexing the changes. What is worrying me even more is that the pages crawled per day in Webmaster Tools started dropping drastically a few days ago and has just reached a new low in months (from over 2,000 to 700). Should I be worried, or will things sort themselves out eventually?

    Read the article

  • Backup those keys, citizen

    - by BuckWoody
    Periodically I back up the keys within my servers and databases, and when I do, I blog a reminder here. This should be part of your standard backup rotation – the keys should be backed up often enough to have at hand, and again when they change. The first key you need to back up is the Service Master Key, which each Instance already has built-in. You do that with the BACKUP SERVICE MASTER KEY command, which you can read more about here. The second set of keys are the Database Master Keys, stored per database, if you’ve created one. You can back those up with the BACKUP MASTER KEY command, which you can read more about here. Finally, you can use the keys to create certificates and other keys – those should also be backed up. Read more about those here. Anyway, the important part here is the backup. Make sure you keep those keys safe!
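
    To make that rotation automatic, the commands can be scheduled like any other backup job. A minimal sketch in Python with pyodbc follows; the driver name, file paths, passwords, and database names are all placeholders, and the T-SQL is the same BACKUP SERVICE MASTER KEY / BACKUP MASTER KEY syntax described above.

        import pyodbc

        # Hypothetical connection details; any scheduler (SQL Agent, cron) works here.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;Trusted_Connection=yes",
            autocommit=True,  # BACKUP commands shouldn't run inside a user transaction
        )
        cur = conn.cursor()

        # 1. Service Master Key: one per instance.
        cur.execute("BACKUP SERVICE MASTER KEY TO FILE = 'C:\\keybackups\\smk.bak' "
                    "ENCRYPTION BY PASSWORD = 'use-a-strong-password-here'")

        # 2. Database Master Keys: one per database that has one. This assumes each
        # DMK is encrypted by the SMK, so it opens automatically for the backup.
        for db in ["SalesDB", "HRDB"]:  # placeholder database names
            cur.execute("USE [%s]; BACKUP MASTER KEY TO FILE = 'C:\\keybackups\\%s_dmk.bak' "
                        "ENCRYPTION BY PASSWORD = 'use-a-strong-password-here'" % (db, db))

        # Store the resulting files (and the passwords) somewhere other than the server itself.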

    Read the article

  • How to recover organic position in Google results after server down?

    - by ElHaix
    I have several sites that were doing quite well in terms of organic SEO rankings, and I have the important sites set up in Google's Webmaster Tools. Long story short, the system was down for about two weeks. Now in AdSense and Analytics, I am seeing that the page views are SLOWLY increasing, and I would like to know if there is anything I can do now to try to expedite the process of regaining those positions. Since there were several errors from that server, is it possible that Google will now rank any site from that IP address lower due to those two weeks of errors? Or is this something that I just have to let ride out? Thanks.

    Read the article

  • Disaster After Removing Two HDD From LaCie RAID 0 Case

    - by John
    This is the second time this has happened. I own a LaCie IDE RAID 0 enclosure and the RAID went bad. The system gave me a warning that the data could be read from the RAID but that nothing could be written, and to remove the data ASAP. I did that, then erased and reinitialized the RAID. The system reported it was fine, no issues. I wrote to the RAID again and the system reported the same issue. So, I removed the drives and tested them individually, thinking one must have gone bad. Sure enough, one HDD reported all bad blocks, every single one after the Master Boot Record. I didn't think much about it because of the age of the drives, 5 years old. So, I bought two new drives, plugged them in, and started up the RAID again. Exactly the same thing happened. All was fine after initializing the RAID, and then the next day, after powering on the RAID, the exact same issue: the HDD sitting in the same position as the first "bad" HDD reported all bad blocks. Obviously, this is an issue with LaCie's bridge board, not with the drives. No utility I have used has been able to bring this HDD back to life. I thought I would just copy the MBR from the good drive to the new one using a sector editor, but I am hesitant. Is it possible the firmware on the HDD has been corrupted by the LaCie bridge board? What else could be the cause of such an issue? How can I fix this drive?

    Read the article
