Search Results

Search found 17622 results on 705 pages for 'external drive'.

Page 79/705 | < Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >

  • Will my RAID0 stay intact when I move it to a new computer?

    - by Jeremy H
    My primary drive is a 250GB WD SATA drive. So, I added 2x 500GB 7,200 RPM WD SATA drives to my Windows Vista box and created a 1TB RAID0. I then formatted the primary drive and installed Windows 7. To my pleasant surprise, when I booted into Windows 7 my RAID0 was still intact and I kept trotting along the same as I did before. Now I am replacing my motherboard, processor, and RAM, and plan on formatting the primary 250GB drive again and using it to boot a new clean install of Windows 7. My question is: if I move these two SATA drives, which are set up as RAID0, into the new system and install Windows 7 again, will the RAID0 remain? Edit: Software RAID. I created it within Windows. The RAID0 does NOT contain the system boot partition.

    Read the article

  • Computer hangs at BIOS screen. Cannot enter setup

    - by d2jxp
    I have an HP Pavilion a6500f (it's a year out of warranty) and it's hanging on the blue HP BIOS screen. If I mash F10 while it's starting up, it will say "Entering Setup..." but I will see no results. It will hang there and not do anything. If I actually wait until I can see the screen and then hit F10, there's no response at all and the computer will sit at the BIOS menu. I've dusted and cleaned it out, reseated the memory, switched the RAM slots, and reset the CMOS battery using the reset jumper. I'm out of ideas. I'm pretty sure it's not a hard drive issue, since my problem is at the BIOS. After this post, I'll disconnect the hard drive and try to just boot without it. Anyone have any other ideas? Edit: Okay, so I tried disconnecting the hard drive and now I can get back into the BIOS. I reconnected it and I'm locked out again. So the problem is my hard drive.. I guess I should delete this post unless someone has any ideas as to what's wrong with the drive?

    Read the article

  • Why does HDTune report better performing drives 2 months after installing them?

    - by Rolnik
    OK, so this is really weird. I ran HDTune on a newly set-up home-built computer and got the following readings from my drives in mid-November: SSD 154 MB/s, RAID1 87, RAID0 198 (software installs), RAID0 98 (swap drive). Today, in January, I ran HDTune (same version) and got these results, in MB/s: SSD 186, RAID1 98, RAID0 241, RAID0 98 (swap drive). Here are more details that HDTune reports on the SSD drive: HD Tune: OCZ-VERTEX Benchmark: Transfer Rate Minimum: 135.4 MB/sec, Transfer Rate Maximum: 219.4 MB/sec, Transfer Rate Average: 185.7 MB/sec, Access Time: 0.1 ms, Burst Rate: 187.3 MB/sec, CPU Usage: -1.0%. To get to my question: why are my hard drives improving in performance? Most of my logical drives are in some form of RAID, except for the SSD. Will this performance ever deteriorate? Note that none of my drives is a hybrid drive that uses some form of SSD to enhance writes/reads on actual platters.

    Read the article

  • Detour (2.1 Professional) - 64bit "unresolved external symbol"

    - by HJ
    Hi, I compiled the 64-bit Detours library using: nmake DETOURS_TARGET_PROCESSOR=X64. I'm using it in a simple component. The component builds fine in 32-bit, but in 64-bit I am getting the following linker errors: unresolved external symbol DetourAttach, unresolved external symbol DetourFindFunction, unresolved external symbol DetourDetach, unresolved external symbol DetourTransactionCommit, ... I have correctly set the linker directories and library options in the component's VC++ project file. Please help me resolve this issue.
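
    For context, here is a minimal sketch (not from the original post) of a 64-bit consumer of Detours; the hooked API and the names are illustrative only. Unresolved externals like those above usually mean the x64 detours.lib produced by the nmake build is not actually reaching the linker (for example, the 64-bit configuration still points at the 32-bit library directory), so the pragma below, or the equivalent linker input setting, is the piece to double-check:

        #include <windows.h>
        #include <detours.h>
        #pragma comment(lib, "detours.lib")   // the x64 library from the nmake build; path varies by Detours version
        #pragma comment(lib, "user32.lib")

        // Pointer to the original API; DetourAttach rewrites it to point at a trampoline.
        static int (WINAPI *TrueMessageBoxW)(HWND, LPCWSTR, LPCWSTR, UINT) = MessageBoxW;

        static int WINAPI HookedMessageBoxW(HWND hWnd, LPCWSTR /*text*/, LPCWSTR caption, UINT type)
        {
            return TrueMessageBoxW(hWnd, L"Hooked!", caption, type);
        }

        static bool InstallHook()
        {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID&)TrueMessageBoxW, HookedMessageBoxW);
            return DetourTransactionCommit() == NO_ERROR;
        }

        int main()
        {
            if (InstallHook())
                MessageBoxW(NULL, L"original text", L"Detours demo", MB_OK);   // displays "Hooked!"
            return 0;
        }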

    Read the article

  • Make exact copy of USB stick [closed]

    - by Andrius Palivonas
    There's this school software on a USB drive. It only runs when the stick they gave us is plugged in. Cloning the drive with the dd command didn't work. I'm guessing it checks the hardware ID of the flash drive. Is there any way to change a drive's information? I guess not, but is it possible to create a virtual flash drive with exactly the same hardware ID and all the other read-only information that the software is most probably checking? EDIT: The paper math books we have don't have answers, so when I'm doing homework I have no idea if I did it right. The electronic version does have the answers. The publisher didn't put them into the paper version for a simple reason: money. They would have to republish the book if some answers were found to be wrong. So I feel no shame trying to pirate that software, because the publishers are ruining our math education.

    Read the article

  • Installing Chrome OS on HP Pavilion

    - by Lenny K
    Trying to install Chrome OS on an HP Pavilion tx1000. I changed the BIOS to boot from USB before the hard drive, and created a bootable USB drive (SanDisk Cruzer 4GB). No matter how many ways I try to make the USB drive, the laptop hangs on the startup screen, and trying to open the Boot Order Options freezes on "Fixed disk 0: ...". On the other hand, without the USB drive plugged in, the computer continues, displays "Initialized Mouse", and then goes on to the Boot Order Options. If you let it boot normally from the beginning, it starts up fine into Windows Vista. I am using Hexxeh's Chrome OS build. Here are the different methods I've tried for making the bootable USB drive: (1) Hexxeh's Image Creator (for Mac); (2) making the USB device bootable (using these directions) and writing the image with Win32DiskImager; (3) writing the image with Win32DiskImager without making it bootable.

    Read the article

  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am just trying to work out the directory structure and where things should be saved to. I have had a look at the dedicated server I want to buy, and for storage it shows this: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not be on one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro, what is the path for the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives etc. Thanks all

    Read the article

  • Power outage during disk wipe. What do I do now?

    - by Mark Trexler
    I was using Roadkil Diskwipe on an external hard drive and the power went out. I had removed it from any outlet connection by the time power was restored to prevent power-spike damage (it's on a surge protector, but I didn't want to rely on that). My question is, where do I go from here? Obviously I don't care about preserving any data currently on it, I just want to make sure the drive itself is not terminally damaged. I'm running chkdsk (full), but I don't know if that's the correct step to assessing any damage. If it makes any difference, the hard drive was unallocated at the time of the outage, as Diskwipe requires that for it to run. Also, could something like this cause latent problems with the drive itself (i.e. serious issues that I won't be aware of when testing it now). I'd appreciate any program recommendations if chkdsk is not the most appropriate diagnostic route. Thank you.

    Read the article

  • Get drive type with SetupDiGetDeviceRegistryProperty

    - by new
    Dear all, I would like to know whether I can get the drive information using the SP_DEVICE_INTERFACE_DETAIL_DATA's DevicePath. My device path looks like this: "\\?\usb#vid_04f2&pid_0111#5&39fe81e&0&2#{a5dcbf10-6530-11d2-901f-00c04fb951ed}". Also, the WinAPI documentation says: "To determine whether a drive is a USB-type drive, call SetupDiGetDeviceRegistryProperty and specify the SPDRP_REMOVAL_POLICY property." I call SetupDiGetDeviceRegistryProperty like this: while ( !SetupDiGetDeviceRegistryProperty( hDevInfo, &DeviceInfoData, SPDRP_REMOVAL_POLICY, &DataT, (PBYTE)buffer, buffersize, &buffersize )) but I don't know how to get the drive type from this. Please help me out.
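
    For reference, here is a minimal sketch (not from the original post) of how the SPDRP_REMOVAL_POLICY check described in that WinAPI note is typically done: the property comes back as a single DWORD whose CM_REMOVAL_POLICY_* value says whether the device is expected to be removed, which is what separates USB-attached drives from fixed ones. hDevInfo and devInfoData are assumed to come from the SetupDiGetClassDevs / SetupDiEnumDeviceInfo loop the question already has:

        #include <windows.h>
        #include <setupapi.h>
        #include <cfgmgr32.h>        // CM_REMOVAL_POLICY_* values
        #pragma comment(lib, "setupapi.lib")

        // Returns true if the device behind devInfoData reports a removable policy
        // (orderly or surprise removal), which is how USB-attached drives show up.
        bool IsUsbStyleRemovable(HDEVINFO hDevInfo, SP_DEVINFO_DATA& devInfoData)
        {
            DWORD policy = 0;
            DWORD regType = 0;
            if (!SetupDiGetDeviceRegistryProperty(hDevInfo, &devInfoData,
                    SPDRP_REMOVAL_POLICY, &regType,
                    reinterpret_cast<PBYTE>(&policy), sizeof(policy), NULL))
            {
                return false;        // inspect GetLastError() for details
            }
            return policy == CM_REMOVAL_POLICY_EXPECT_ORDERLY_REMOVAL ||
                   policy == CM_REMOVAL_POLICY_EXPECT_SURPRISE_REMOVAL;
        }

    Note that this distinguishes removable from fixed devices rather than giving a full drive type, and mapping the device interface path back to a drive letter still requires relating the device to its volume.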

    Read the article

  • optimizing a windows server 2003 storage capacity

    - by Hosni
    I have a Windows Server 2003 machine with a hard drive partitioned into 10 GB and 80 GB partitions, and I want to improve the storage capacity, as the small 10 GB partition is almost full. So I have a choice between repartitioning the hard drive into equal parts, or setting up a new hard drive with a larger capacity, knowing that the server has to be back in service as soon as possible. Which is the better solution, taking less time and carrying less risk?

    Read the article

  • TrueCrypt partition will no longer mount

    - by sparkyuiop
    I am hoping for some advice to help me out of my situation, with luck. I have a computer running Windows 7 Ultimate x64 with 3 hard disks installed. On my 2TB hard disk 2 (non-system disk) I have 4 partitions. One is for music, another for video, a downloads partition, and a 500GB RAW TrueCrypt encrypted partition / volume that I had set up to mount with 4 photographs used as keyfiles. The 4 photographs are located in my 'Documents' partition, which is one of four partitions on my 1.5TB hard disk 1 (non-system disk). When I set up the disk encryption I did not (I'm 99% sure) create a password; I only used the 4 photograph keyfiles to mount the volume. Recently my 1TB hard disk 0 (system / boot) started to fail, so I decided to replace it. I was going to clone the old disk to a new disk but decided that a fresh installation would be more beneficial. Once I had transferred all the required 'User Data' from my old hard disk 0 (C: disk) I discarded it. I reinstalled TrueCrypt, pointed to the partition, selected my 4 keyfile photographs and mounted my encrypted volume with no issues. In fact I mounted it several times after re-installing Windows and after reboots. Now all of a sudden when I try to mount it I get the message "incorrect keyfile(s) and/or password or not a TrueCrypt volume". I am not sure why this happened, as I do not recall exactly what I did between last mounting the volume successfully and it not mounting. Here are some of the possible things I may have done to cause it to stop working, but I am at a loss as to where to start to try and resolve the problem: 1. I swapped the drive letters to a preferred order. 2. I possibly swapped the physical SATA connectors on the mainboard. 3. I enabled 'Hot Plugging' for the two non-system hard disk SATA ports and the DVD SATA port in the BIOS. I have tried changing the encrypted partition's drive letter as suggested in another post, but this does not help. On my old system the encrypted drive was drive "X". I have tried it with all the other free drive letters, but alas nothing changes. I do not recall what drive letter was allocated to the encrypted partition before I changed them all. I have not tried to change the letter back to what it possibly was to start with, as I am happy with the current layout. I will try this if anyone thinks it would be worthwhile though. I do hope I have managed to convey my situation in an understandable manner, and live in hope someone can help me recover years of personal files. Thank you very much for taking the time to read my post and for any suggestions you may offer. Regards, Phillip Thorne (UK) Anyone???

    Read the article

  • Which particular file caused "Delayed write failed" error?

    - by user35020
    I sometimes get this error when resuming from hibernation: "Delayed Write Failed: Windows was unable to save all the data for the file G:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere." I know this happens because the hard drive (G:, an external USB drive) was (a) plugged in when I hibernated and wasn't ready at wake-up, or (b) I simply forgot to plug it in when resuming from hibernation. My question is: is there any way to see which particular file/folder/etc. failed to be written? The hard drive functions correctly before and after, so there seems to be no permanent damage. Is there a detailed log someplace, or a utility? I've searched but found nothing. Thanks for any help!

    Read the article

  • USB 3.0 randomly disconnecting and reconnecting

    - by user1624552
    I had an old laptop that died, so I have bought a new one. I took the 2.5" SATA hard drive from the old laptop, put it into an external 2.5" SATA USB 3.0 enclosure, and connected it to the new laptop. My new laptop has Windows 8 64-bit installed. When I connect the external hard drive to the new laptop through a USB 3.0 port, it randomly disconnects and reconnects continuously, even when I am not using it. This also happens if I connect it to another USB 3.0 port. I have also observed that if I connect the external hard drive to a USB 2.0 port instead of a USB 3.0 port, everything works fine; no random disconnection and reconnection occurs. It only happens when I connect it to a USB 3.0 port. Any ideas how to solve this issue?

    Read the article

  • Symlink to /Documents /Pictures etc. in OS X Home Directory?

    - by Larry O'Brien
    I have just purchased a 120GB SSD with the intent of making it my boot drive. I'd like to keep it as lean as possible since, y'know, it's so small (Heaven help me). I've read "Can I move my home folder in Mac OS X?" and "Moving Mac OS X user folders?", which discourage moving the entire home dir to a data drive. Is it possible, and less dangerous, to leave the home directory on the boot drive but move the big data directories to a slower drive and symlink to them? I have the same thoughts about the /Applications directory, but maybe I should make that a separate question?

    Read the article

  • vmware takes too much time to shut down

    - by user33754
    Each time I shut down my virtual machine, it takes around 30 minutes for VMware to complete the process. The shutdown finishes, but it shows a black screen and doesn't respond to any command. I can't do a Power > Power Off or Power > Suspend; when I click it, nothing changes. When I try to close it, it says "still busy". This happens only when the virtual machine is stored on an external hard drive. When the virtual machine files are stored on a local drive, everything is OK. The hard drive is working fine. Any suggestions on how I can investigate this issue?

    Read the article

  • Western Digital Caviar SE16 not recognized

    - by NStorm
    Before I start, I have been looking at quite a few websites and I still have not found an answer to my problem. I have been building my own computer recently and I have just received the hard drive (WD Caviar SE16 WD5000AAKS) I was planning to put in it. After connecting the SATA power cable (99.99999% sure it is connected correctly) and the SATA cable to my motherboard (ASUS M5A78L-M/USB3), I booted the computer into a Linux Mint 13 XFCE 64-bit live USB, expecting to see a hard drive when I came to install. Sadly, when I checked, the only hard drive showing was /dev/sda, which was my USB stick with the Linux files on it. I also checked GParted, and no hard drive other than my USB stick was showing up there either. Lastly I checked my BIOS, and no matter which SATA port I connected the HDD to, it wouldn't show up there either. Does anyone have any advice? Some images of my setup which could help are below: Thanks in advance, Nick

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running fedora uses openvpn, and multiple developers were successfully able to connect to it via their client openvpn. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via vpn. I copied ca.crt, home.key, and home.crt from the server to my local machine to /etc/openvpn folder. My client.conf file looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. ;proto tcp proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. remote xx.xxx.xx.130 1194 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nogroup # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ca.crt cert home.crt key home.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. 
# Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 But when I start server and look in /var/log/syslog, I notice the following error: May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37 May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4 May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37 May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37 And I am unable to connect to the server via openvpn: $ ssh [email protected] ssh: connect to host xxx.xx.xx.130 port 22: No route to host What may I be doing wrong?

    Read the article

  • Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    As more and more business is being conducted via online channels, engaging users and making them more productive and efficient though these online channels is becoming critical. These users could be customers, partners or employees and while the respective channels through which they interact might be different, these users do increasingly interact with your business through the Web, or mobile devices or now through various social mediums.  Businesses need a user engagement strategy and solution that allows them to deliver targeted and personalized content and applications to users through the various online mediums and touch points.  The customer experience today is made up of an ongoing set of interactions with organizations across many channels, online and offline.  The Direct channel (including sales reps, email and mail) is an important point of contact, as is the Contact Center.  Contact Centers rely on the phone as a means of interacting with customers, and also more now than ever, the Web as well.  However, the online organization is often managed separately from the Contact Center organization within a business. In-store is an important channel for retailers, offering Point-of-Service for human interactions, and Kiosks which enable self-service. Kiosks are a Web-enabled touch point but in-store kiosks are often managed by the head of retail operations, rather than the online organization.  And of course, the online channel, including customer interactions with an organization via digital means -- on the website, mobile websites, and social networking sites, has risen to paramount importance in recent years in the customer experience. Historically all of these channels have been managed separately. The result of all of this fragmentation is that the customer touch points with an organization are siloed.  Their interactions online are not known and respected in their dealings in-store.  Their calls to the contact center are not taken as input into what the website offers them when they arrive. Think of how many times you’ve fallen victim to this. Your experience with the company call center is different than the experience in-store. Your experience with the company website on your desktop computer is different than your experience on your iPad. I think you get the point. But the customer isn’t the only one we need to look at here, as employees and the IT organization have challenges as well when it comes to online engagement. There are many common tools and technologies that organizations have been using to try and engage users, whether it’s customers, employees or partners. Some have adopted different blog and wiki technologies (some hosted, some open source, sometimes embedded in platforms), to things like tagging, file sharing and content management, or composite applications for self-service applications and activity streams. Basically, there are so many different tools & technologies that each address different aspects of user engagement. Now, one of the challenges with this, is that if we look at each individual tool, typically just implementing for example a file sharing and basic collaboration solution, may meet the needs of the business user for one aspect of user engagement, but it may not be the best solution to engage with customers and partners, or it may not fit with IT standards such as integrating with their single sign on tools or their corporate website. 
Often, the scenario is that businesses are having to acquire multiple pieces and parts as well as build custom applications to meet their needs. Leaving customers and partners with a more fragmented way of interacting with the company. Every organization has some sort of enterprise balancing act between the needs of the business user and the needs and restrictions enforced by enterprise IT groups. As we’ve been discussing, we all know that the expectations for online engagement have changed since the days of the static, one-size fits all website. With these changes have come some very difficult organizational challenges as well. Today, as a business user, you want to engage with your customers, and your customers expect you to know who they are. They expect you to recall the details they’ve provided to you on your website, to your CSRs and to your sales people. They expect you to remember their purchases, their preferences and their problems. And they expect you to know who they are, equally well, across channels, including your web presence. This creates a host of challenges for today’s business users. Delivering targeted, relevant content online is now essential for converting prospects into customers and for engendering long term loyalty. Business users need the ability to leverage customer data from different sources to fuel their segmentation and targeting strategies and to easily set-up, manage and optimize online campaigns. Also critical, they need the ability to accomplish these things on-the-fly, at the speed of the marketplace, while making iterative improvements.  These changing expectations put a host of demands on the IT organization as well. The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance. And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more. So then, how do you solve these challenges and meet the growing demands of your users?  You need a solution that: Unifies every customer interaction across all channels Personalizes the products and content that interest the customer and to the device Delivers targeted promotions to the right customer Engages and improve employee productivity Provides self-service access to applications Includes embedded in-context social   So how then do you achieve this level of online engagement, complete customer experience and engage your employees? The answer: Oracle WebCenter. If you want to learn how to get there, we encourage you to attend this webcast on Thursday Drive Online Engagement with Intuitive Portals and Websites, where we'll talk about how you are able to transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels. Register today!

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what *actually* being done a feature means. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here’s some examples: Code Reviews StyleCop FxCop User Manuals Updated Architecture Docs Updated Tested by QA Tested by UAT Installers Created Support Knowledge Base Updated Deployment Instructions (for Ops) written Automated Unit Tests Run Automated Integration Tests Run Then we start by arranging these activities into the place they occur today (e.g. Do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team, is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle, and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle s they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. 
Sure, there are other aspects to maturity outside of this, but it's a great technique, easy to visualize, for driving agility into the team. Just keep moving those activities (aka "gates") down the board from Release->Sprint->Story. I'll try to give an example of what a recent client of mine had for their DoD's (this is from memory, so probably not 100% accurate). Release: create/update deployment instructions for Ops; instructional videos updated; run manual regression test suite; UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn't break anything outside our system). Sprint: deploy to UAT environment (but not necessarily actually request that UAT testing occur); user guides updated; sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the Sprint Demo). User Story: manual test scripts developed and run; tested by BA; deployed in shared QA environment using the automated deployment process; peer code review. Code Check-In: compiled (warning-free); passes StyleCop; passes FxCop; create installer packages; run automated tests; run automated integration tests. PS: One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a sprint meaningless, since it's done on the end date regardless of whether those other activities are complete or not? My answer is that while that statement is true (the sprint is done regardless when the end date rolls around), if the DoD activities haven't been completed I would consider the Sprint a failure (similar to not completing what was committed/planned; failure may be too strong a word, but you get the idea). In the Retrospective that will become an agenda item, to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is included in the original post. We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB. The first chart (in the original post) illustrates some interesting points about the results: (1) when the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size); (2) for some of the moderately-sized source files, small blocks (256KB) are best; (3) as the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs); (4) once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant; (5) the 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. The second chart is another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
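
    As a rough illustration of the pattern described above (split the source file into blocks, upload the blocks in parallel, then commit the block list), here is a sketch in C++. The post's actual harness uses the .NET storage client's PutBlock()/PutBlockList() driven by Parallel.For; UploadBlock and CommitBlockList below are hypothetical stand-ins that only simulate those calls:

        #include <cstddef>
        #include <fstream>
        #include <functional>
        #include <iostream>
        #include <string>
        #include <thread>
        #include <vector>

        // Hypothetical stand-ins for the storage client calls; they only simulate the work.
        void UploadBlock(const std::string& blockId, const std::vector<char>& data)
        {
            std::cout << "uploaded " << blockId << " (" << data.size() << " bytes)\n";
        }

        void CommitBlockList(const std::vector<std::string>& blockIds)
        {
            std::cout << "committed " << blockIds.size() << " blocks\n";
        }

        void UploadBlocked(const std::string& path, std::size_t blockSize)
        {
            std::ifstream in(path, std::ios::binary);

            // 1. Split the source file into fixed-size blocks (1MB was the best
            //    general-purpose size in the tests above).
            std::vector<std::vector<char>> blocks;
            std::vector<std::string> blockIds;
            for (int i = 0; in; ++i) {
                std::vector<char> buf(blockSize);
                in.read(buf.data(), static_cast<std::streamsize>(blockSize));
                buf.resize(static_cast<std::size_t>(in.gcount()));
                if (buf.empty()) break;
                blocks.push_back(std::move(buf));
                blockIds.push_back("block-" + std::to_string(i));
            }

            // 2. Upload the blocks in parallel; one thread per block keeps the sketch
            //    short, while a real client would cap concurrency.
            std::vector<std::thread> workers;
            for (std::size_t i = 0; i < blocks.size(); ++i)
                workers.emplace_back(UploadBlock, blockIds[i], std::cref(blocks[i]));
            for (auto& w : workers) w.join();

            // 3. Commit the ordered block list so the service assembles the final file.
            CommitBlockList(blockIds);
        }

        int main()
        {
            UploadBlocked("sample.bin", 1024 * 1024);   // 1MB blocks
            return 0;
        }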

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
     Restoring databases to a set drive and directory Introduction Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple ‘ndf’ files, and having to restore them with different physical file names, drives and directories on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task and found some interesting snippets on Pinal Dave’s website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start. Creating a temp table to hold database file details First off, I created a temp table which would hold the details of the individual data files within the database. Although there are a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:   create table #tmp ( LogicalName nvarchar(128)  ,PhysicalName nvarchar(260)  ,Type char(1)  ,FileGroupName nvarchar(128)  ,Size numeric(20,0)  ,MaxSize numeric(20,0), Fileid tinyint, CreateLSN numeric(25,0), DropLSN numeric(25, 0), UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0), ReadWriteLSN numeric(25,0), BackupSizeInBytes bigint, SourceBlocSize int, FileGroupId int, LogGroupGUID uniqueidentifier, DifferentialBaseLSN numeric(25,0), DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit,  TDEThumbPrint varchar(50) )    We now declare and populate a variable(@path), setting the variable to the path to our SOURCE database backup. declare @path varchar(50) set @path = 'P:\DATA\MYDATABASE.bak'   From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement HOWEVER doing so in ‘filelistonly’ mode.   insert #tmp EXEC ('restore filelistonly from disk = ''' + @path + '''')   At this point, I depart from what I gleaned from Pinal Dave.   I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):   Declare @RestoreString as Varchar(max) Declare @NRestoreString as NVarchar(max) Declare @LogicalName  as varchar(75) Declare @counter as int Declare @rows as int set @counter = 1 select @rows = COUNT(*) from #tmp  -- Count the number of records in the temp                                    -- table   Declaring and populating the cursor At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor is from a local temp table and as such, any locking of the records (within the temp table) is not really an issue.   DECLARE MY_CURSOR Cursor  FOR  Select LogicalName  From #tmp   Parsing the logical names from within the cursor. A small caveat that works in our favour,  is that the first logical name (of our database) is the logical name of the primary data file (.mdf). Other files, except for the very last logical name, belong to secondary data files. The last logical name is that of our database log file.   I now open my cursor and populate the variable @RestoreString Open My_Cursor  set @RestoreString =  'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\ MYDATABASE.bak''' + ' with  '   We now fetch the first record from the temp table.   
Fetch NEXT FROM MY_Cursor INTO @LogicalName   While there are STILL records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create 'one big restore executable string'. Note also that the target physical file name is hardwired, as is the target directory.   While (@@FETCH_STATUS <> -1) BEGIN IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing select @RestoreString = case  when @counter = 1 then -- This is the mdf file    @RestoreString + 'move  N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.mdf' + '''' + ', '   -- OK, if it passes through here we are dealing with an .ndf file -- Note that Counter must be greater than 1 and less than the number of rows.   when @counter > 1 and @counter < @rows then -- These are the ndf file(s)    @RestoreString + 'move  N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ndf' + '''' + ', '   -- OK, if it passes through here we are dealing with the log file when @LogicalName like '%log%' then    @RestoreString + 'move  N''' + @LogicalName + '''' + ' TO N''X:\DATA1\' + @LogicalName + '.ldf' + '''' end --Increment the counter   set @counter = @counter + 1 FETCH NEXT FROM MY_CURSOR INTO @LogicalName END   At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is run the sp_executesql stored procedure to effect the restore.   First, we must place our 'concatenated string' into an nvarchar-based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.   set @NRestoreString = @RestoreString EXEC sp_executesql @NRestoreString   Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor and, to be clean, I would also drop my temp table.   CLOSE MY_CURSOR DEALLOCATE MY_CURSOR GO   Conclusion Restoring databases on different servers with different physical names and on different drives is a fact of life. Through the use of a few variables and a simple cursor, we have an efficient and effective way to achieve this task.

    Read the article

  • RAID-1 and regular drive removal (using RAID-1 as a backup measure)

    - by Vi
    Is using mdadm's RAID-1 across 2 partitions (one on the laptop's internal HDD, one on an external HDD) a good idea? I want the system to work as RAID-1 if both drives are present, work as a regular volume (degraded RAID-1) if the external HDD is unplugged, and quickly resync when I plug the external HDD in again. Questions: Is it a good idea? Will a write-intent bitmap be enough for this task, or do I need something else? Should I consider doing it at the filesystem level (and if yes, how?). Basic requirements are: quick resync when I re-add the external drive (provided I haven't changed that partition); more or less consistent data on the removed drive if I remove it outside of a write/resync operation. If I remove the drive during a resync I expect the data to be somewhat inconsistent, but expect quick resync completion when I re-add it again. E.g. I want the remaining drive to track what has changed (there can be a lot of changes) and then sync back only those parts that need it.

    Read the article
