Search Results

Search found 25718 results on 1029 pages for 'external hard drive'.

Page 100/1029 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • What's a worthwhile test for a new HD?

    - by Michael Kohne
    I work for a company that uses standard 2.5" SATA HDs in our product. We presently test them by running the Linux 'badblocks -w' command on them when we get them - but they are 160 gig drives, so that takes about 5 hours (we boot parted magic onto a PC to do the scan). We don't actually build that many systems at a time, so this is doable, but seriously annoying. Is there any research or anecdotal evidence on what a good incoming test for a hard drive should be? I'm thinking that we should just wipe them with all zeros, write out our image, and do a full drive read-back. That would end up being only about 1 hour 45 minutes total. Given that drives do block remapping on their own, would what I've proposed show up infant mortality just as well as running badblocks?
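    A minimal sketch of the proposed wipe/write/read-back pass (assuming GNU coreutils and smartmontools are available on the parted magic boot; /dev/sdX and image.bin are placeholders to verify before use):

      # zero pass, then write the production image, then verify by reading back
      dd if=/dev/zero of=/dev/sdX bs=1M oflag=direct status=progress
      dd if=image.bin of=/dev/sdX bs=1M oflag=direct conv=fsync status=progress
      cmp -n "$(stat -c %s image.bin)" image.bin /dev/sdX && echo "read-back OK"
      # see whether the drive quietly remapped anything during the test
      smartctl -A /dev/sdX | grep -Ei 'Reallocated|Pending|Uncorrect'

    If a single write pass feels too lenient, "badblocks -w -t random /dev/sdX" keeps the badblocks machinery but cuts the default four-pattern run down to one.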

    Read the article

  • Safest snapshot of a failing hard drive?

    - by ironfroggy
    I have a headless machine that stopped booting. When I pulled it out for diagnostics I got a message that one of the hard drives was about to fail, so I removed them all and now need to get everything off before figuring out which one to get rid of. I'm not sure which drive is failing, because the message only said "Harddrive 1" and I don't know which disk that refers to. I'm worried that if I just copy everything, I could pick up corrupt data and not realize some files are wrong until the drive is completely out of commission. What are my best options for getting everything off in a way I can safely move to new storage?
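    A commonly suggested approach for a suspect disk (a sketch only; it assumes the failing drive shows up as /dev/sdX, that GNU ddrescue is available, and that the target filesystem has room for a full image) is to image the whole drive read-only first and recover files from the image instead of the disk:

      # fast pass: grab the easy areas, skip around read errors
      ddrescue -n /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map
      # second pass: retry the bad areas a few times
      ddrescue -r3 /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map

    The map file lets an interrupted run resume where it left off, and the finished image can be mounted or fed to recovery tools without stressing the failing drive again.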

    Read the article

  • Get drive type with SetupDiGetDeviceRegistryProperty

    - by new
    Dear all, I would like to know whether I can get the drive information using the SP_DEVICE_INTERFACE_DETAIL_DATA's DevicePath. My device path looks like this: "\\?\usb#vid_04f2&pid_0111#5&39fe81e&0&2#{a5dcbf10-6530-11d2-901f-00c04fb951ed}". The WinAPI documentation also says: "To determine whether a drive is a USB-type drive, call SetupDiGetDeviceRegistryProperty and specify the SPDRP_REMOVAL_POLICY property." I call SetupDiGetDeviceRegistryProperty like this: while ( !SetupDiGetDeviceRegistryProperty( hDevInfo, &DeviceInfoData, SPDRP_REMOVAL_POLICY, &DataT, (PBYTE)buffer, buffersize, &buffersize )) but I don't know how to get the drive type from that. Please help me out.

    Read the article

  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basics of the pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher prices per GB). Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech? (Assume an average scattering of file sizes: program files, OS files, data, mp3s, text files, etc.)

    Read the article

  • Why do SSD drives get so much more expensive as they get larger?

    - by futuraprime
    Normal HDD costs go up very little as drives get larger. For example, an average 1TB drive costs a little under $90, 2TB costs a little over $100, and a 3TB drive costs close to $150. For HDDs, the cost per GB goes down as the number of GB goes up. SSD costs don't work like this: a 128GB SSD goes for $120ish, 256GB goes for $250ish, and 512GB drives get up to $600. The cost per GB goes up as the number of GB rises. What is it about SSDs that makes them so much costlier as they get larger?

    Read the article

  • Which particular file caused "Delayed write failed" error?

    - by user35020
    I sometimes get this error when resuming from hibernation: Delayed Write Failed: Windows was unable to save all the data for the file G:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. I know this happens because the hard drive (G:, an external USB drive) either (a) was plugged in when I hibernated and wasn't ready at wake-up, or (b) wasn't plugged in at all when resuming from hibernation because I simply forgot. My question is: is there any way to see which particular file/folder/etc. failed to be written? The hard drive functions correctly before and after, so there seems to be no permanent damage. Is there a detailed log someplace, or a utility? I've searched but found nothing. Thanks for any help!

    Read the article

  • Mount a remote Linux hard drive as another Windows 7 partition during boot?

    - by zhuanyi
    I would like to mount a hard drive on a remote computer (running CentOS 6) as a Windows drive so that I can install programs to that drive. The primary hard drive for my Windows machine (which is at home) is pretty small, but I have a Linux server sitting in a remote data center with a much larger hard drive that would let me install more. I know most of you are going to say Samba; unfortunately, the biggest problem in my case is that I cannot mount Samba as a network share unless I start OpenVPN or SSH tunneling first, which is no good for me because I will install some startup programs to the remote drive as well. Therefore, the remote drive has to be ready and working just like another drive BEFORE any of the startup programs start to load. Is that possible? My home PC has Windows 7 Professional 32-bit installed and the remote server is a Xen virtual server running CentOS 6. I have admin/root permissions for both. Thanks a lot!

    Read the article

  • Why is my computer running slower after I just installed more RAM and a new HDD?

    - by hopla
    I just bought 4 GB of RAM (2x2GB) and a 1TB hard drive and installed them, upgrading from my original 1GB RAM and 250GB HDD. I put the 2GB sticks in the 1st and 3rd slots and the 1GB stick in the 2nd. Now, with my new RAM and HDD, my computer is running MUCH slower and I don't know why. I've tried restarting just to see what happens and noticed that even the Windows XP startup music is lagging. If anyone could help that would be fantastic. It's hard even to type this out.

    Read the article

  • Edit write-protected files by breaking hard links

    - by Taymon
    A directory which I own and can write to contains hard links to files that I don't own and don't have write permission for. I want to open and edit these files in Emacs. When I save my changes, Emacs should rename the existing hard link by appending ~, then write my new version of the file as a new file owned by me. I was under the impression that Emacs could just do this (because of the way it does backups), but it's not working; when I save, it attempts to change the file's permissions in order to write to it (and fails because I don't own the file). How do I make this happen?

    Read the article

  • Fresh Vista install and the strange appearance of boxes/rectangles

    - by Nick
    I purchased a new hard drive, and after doing a fresh install of Vista and installing all updates and the latest drivers, I'm noticing that, all of a sudden and shortly after being powered on, I get black boxes/rectangles all over my screen. Any ideas as to what could be causing this? On a (maybe) related note, my brand new hard drive is squeaking. Not often, but it is there. Debating on whether or not it is worth the hassle of sending it back, or if this is related to the mysterious rectangles.

    Read the article

  • Windows shows incorrect free space on Raid 10 volume

    - by Adenverd
    I have four 1TB hard drives in a RAID 1+0 configuration. Theoretically, I should have ~2TB of available space. Windows says the drive has a total size of 1.81 TB, which I'm fine with. As far as files on the volume go, I used WinDirStat to determine that I have 552.8GB of files on the volume. This means that I should have somewhere around 1.3TB minimum of free space. Yet Windows shows the drive as only having 492GB of free space. Are there hidden files somewhere that I can't see (I have show hidden files/folders turned on)? Does Windows not recognize that old files have been deleted? Is there any way to correct this problem?

    Read the article

  • Effective system to backup this setup?

    - by user71785
    I currently have my development environment on a USB HD. It has things like portable XAMPP, VirtualBox with an Ubuntu guest, portable Firefox and other dev tools. It works fantastically! I can attach it to almost any computer and everything works fine. However, if this drive decides to commit suicide on me, I will be close behind it. The problem is that I use this portable HD almost all the time, so I need a fast way to back up the entire drive. It is around 400 GB. Any advice?
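    One way to keep the backup time down after the first full copy (a sketch, assuming a Linux or Cygwin-style environment is available and that /media/portable and /media/backup are placeholder mount points) is an incremental rsync, which only transfers what changed since the last run:

      # archive mode, keep hard links/ACLs/xattrs, mirror deletions,
      # show overall progress; later runs copy only changed files
      rsync -aHAX --delete --info=progress2 /media/portable/ /media/backup/portable/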

    Read the article

  • Mirroring hard drives of different sizes

    - by user30472
    I have a Dell server with two 18 gig drives in a RAID 1 config. I want to replace the drives with 400 gig drives. If I break the mirror, remove one of the 18g drives, insert a 400g drive and rebuild the mirror, then do the same process again with the other 400g drive, what will I have? I would like to think I would have a new 400g RAID 1 volume. But I seem to remember that this process might result in only 18g of usable space on the 400g drives. Anyone know for sure? Thanks!

    Read the article

  • RAID 0 disk failure, how to recover the RAID?

    - by user7985
    The situation is this: a PC with 2 hard disks in a RAID 0 array. The electronics on one of the disks have failed. I cannot find the same board for the disk (I've verified it is only the board: with the board from the OK disk attached, the damaged disk works fine). I've made an image with "dd" on Linux onto a new hard drive (same size, not the same model) and now I get "Offline member" in the RAID config screen. Will I be able to recover the data stored on the drives? Any help or experience with this kind of problem is welcome. And surely, I know it was stupid to put the disks in RAID 0 and store data on them :(
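    For reference, the cloning step described above usually looks something like this (a sketch only; /dev/sdb as the failed member and /dev/sdc as the same-size replacement are placeholders, so double-check device names first):

      # keep going past unreadable sectors and pad them with zeros so the
      # copy stays sector-aligned with the original disk
      dd if=/dev/sdb of=/dev/sdc bs=64K conv=noerror,sync status=progress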

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running fedora uses openvpn, and multiple developers were successfully able to connect to it via their client openvpn. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via vpn. I copied ca.crt, home.key, and home.crt from the server to my local machine to /etc/openvpn folder. My client.conf file looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. ;proto tcp proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. remote xx.xxx.xx.130 1194 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nogroup # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ca.crt cert home.crt key home.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. 
# Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 But when I start server and look in /var/log/syslog, I notice the following error: May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37 May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4 May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37 May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37 And I am unable to connect to the server via openvpn: $ ssh [email protected] ssh: connect to host xxx.xx.xx.130 port 22: No route to host What may I be doing wrong?

    Read the article

  • Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    As more and more business is being conducted via online channels, engaging users and making them more productive and efficient though these online channels is becoming critical. These users could be customers, partners or employees and while the respective channels through which they interact might be different, these users do increasingly interact with your business through the Web, or mobile devices or now through various social mediums.  Businesses need a user engagement strategy and solution that allows them to deliver targeted and personalized content and applications to users through the various online mediums and touch points.  The customer experience today is made up of an ongoing set of interactions with organizations across many channels, online and offline.  The Direct channel (including sales reps, email and mail) is an important point of contact, as is the Contact Center.  Contact Centers rely on the phone as a means of interacting with customers, and also more now than ever, the Web as well.  However, the online organization is often managed separately from the Contact Center organization within a business. In-store is an important channel for retailers, offering Point-of-Service for human interactions, and Kiosks which enable self-service. Kiosks are a Web-enabled touch point but in-store kiosks are often managed by the head of retail operations, rather than the online organization.  And of course, the online channel, including customer interactions with an organization via digital means -- on the website, mobile websites, and social networking sites, has risen to paramount importance in recent years in the customer experience. Historically all of these channels have been managed separately. The result of all of this fragmentation is that the customer touch points with an organization are siloed.  Their interactions online are not known and respected in their dealings in-store.  Their calls to the contact center are not taken as input into what the website offers them when they arrive. Think of how many times you’ve fallen victim to this. Your experience with the company call center is different than the experience in-store. Your experience with the company website on your desktop computer is different than your experience on your iPad. I think you get the point. But the customer isn’t the only one we need to look at here, as employees and the IT organization have challenges as well when it comes to online engagement. There are many common tools and technologies that organizations have been using to try and engage users, whether it’s customers, employees or partners. Some have adopted different blog and wiki technologies (some hosted, some open source, sometimes embedded in platforms), to things like tagging, file sharing and content management, or composite applications for self-service applications and activity streams. Basically, there are so many different tools & technologies that each address different aspects of user engagement. Now, one of the challenges with this, is that if we look at each individual tool, typically just implementing for example a file sharing and basic collaboration solution, may meet the needs of the business user for one aspect of user engagement, but it may not be the best solution to engage with customers and partners, or it may not fit with IT standards such as integrating with their single sign on tools or their corporate website. 
Often, the scenario is that businesses are having to acquire multiple pieces and parts as well as build custom applications to meet their needs. Leaving customers and partners with a more fragmented way of interacting with the company. Every organization has some sort of enterprise balancing act between the needs of the business user and the needs and restrictions enforced by enterprise IT groups. As we’ve been discussing, we all know that the expectations for online engagement have changed since the days of the static, one-size fits all website. With these changes have come some very difficult organizational challenges as well. Today, as a business user, you want to engage with your customers, and your customers expect you to know who they are. They expect you to recall the details they’ve provided to you on your website, to your CSRs and to your sales people. They expect you to remember their purchases, their preferences and their problems. And they expect you to know who they are, equally well, across channels, including your web presence. This creates a host of challenges for today’s business users. Delivering targeted, relevant content online is now essential for converting prospects into customers and for engendering long term loyalty. Business users need the ability to leverage customer data from different sources to fuel their segmentation and targeting strategies and to easily set-up, manage and optimize online campaigns. Also critical, they need the ability to accomplish these things on-the-fly, at the speed of the marketplace, while making iterative improvements.  These changing expectations put a host of demands on the IT organization as well. The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance. And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more. So then, how do you solve these challenges and meet the growing demands of your users?  You need a solution that: Unifies every customer interaction across all channels Personalizes the products and content that interest the customer and to the device Delivers targeted promotions to the right customer Engages and improve employee productivity Provides self-service access to applications Includes embedded in-context social   So how then do you achieve this level of online engagement, complete customer experience and engage your employees? The answer: Oracle WebCenter. If you want to learn how to get there, we encourage you to attend this webcast on Thursday Drive Online Engagement with Intuitive Portals and Websites, where we'll talk about how you are able to transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels. Register today!

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what *actually* being done a feature means. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here’s some examples: Code Reviews StyleCop FxCop User Manuals Updated Architecture Docs Updated Tested by QA Tested by UAT Installers Created Support Knowledge Base Updated Deployment Instructions (for Ops) written Automated Unit Tests Run Automated Integration Tests Run Then we start by arranging these activities into the place they occur today (e.g. Do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team, is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle, and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle s they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. 
Sure there’s other aspects to maturity outside of this, but it’s a great technique, that’s easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate): Release Create/Update deployment Instructions For Ops Instructional Videos Updated Run manual regression test suite UAT Testing In this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn’t break anything outside our system Sprint Deploy to UAT Environment But not necessarily actually request UAT testing occur User Guides updated Sprint Features Video Created In this case we decided to create a video each sprint showing off the progress (video version of Sprint Demo) User Story Manual Test scripts developed and run Tested by BA Deployed in shared QA environment Using automated deployment process Peer Code Review Code Check-In Compiled (warning-free) Passes StyleCop Passes FxCop Create installer packages Run Automated Tests Run Automated Integration Tests PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a sprint meaningless – it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the sprint is done regardless when the end date rolls around – if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative affects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) make short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator ODatas feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons

    Read the article

  • Restoring databases to a set drive and directory

    - by okeofs
     Restoring databases to a set drive and directory Introduction Often people say that necessity is the mother of invention. In this case I was faced with the dilemma of having to restore several databases, with multiple ‘ndf’ files, and having to restore them with different physical file names, drives and directories on servers other than the servers from which they originated. As most of us would do, I went to Google to see if I could find some code to achieve this task and found some interesting snippets on Pinal Dave’s website. Naturally, I had to take it further than the code snippet, HOWEVER it was a great place to start. Creating a temp table to hold database file details First off, I created a temp table which would hold the details of the individual data files within the database. Although there are a plethora of fields (within the temp table below), I utilize LogicalName only within this example. The temporary table structure may be seen below:   create table #tmp ( LogicalName nvarchar(128)  ,PhysicalName nvarchar(260)  ,Type char(1)  ,FileGroupName nvarchar(128)  ,Size numeric(20,0)  ,MaxSize numeric(20,0), Fileid tinyint, CreateLSN numeric(25,0), DropLSN numeric(25, 0), UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0), ReadWriteLSN numeric(25,0), BackupSizeInBytes bigint, SourceBlocSize int, FileGroupId int, LogGroupGUID uniqueidentifier, DifferentialBaseLSN numeric(25,0), DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit,  TDEThumbPrint varchar(50) )    We now declare and populate a variable(@path), setting the variable to the path to our SOURCE database backup. declare @path varchar(50) set @path = 'P:\DATA\MYDATABASE.bak'   From this point, we insert the file details of our database into the temp table. Note that we do so by utilizing a restore statement HOWEVER doing so in ‘filelistonly’ mode.   insert #tmp EXEC ('restore filelistonly from disk = ''' + @path + '''')   At this point, I depart from what I gleaned from Pinal Dave.   I now instantiate a few more local variables. The use of each variable will be evident within the cursor (which follows):   Declare @RestoreString as Varchar(max) Declare @NRestoreString as NVarchar(max) Declare @LogicalName  as varchar(75) Declare @counter as int Declare @rows as int set @counter = 1 select @rows = COUNT(*) from #tmp  -- Count the number of records in the temp                                    -- table   Declaring and populating the cursor At this point I do realize that many people are cringing about the use of a cursor. Being an Oracle professional as well, I have learnt that there is a time and place for cursors. I would remind the reader that the data that will be read into the cursor is from a local temp table and as such, any locking of the records (within the temp table) is not really an issue.   DECLARE MY_CURSOR Cursor  FOR  Select LogicalName  From #tmp   Parsing the logical names from within the cursor. A small caveat that works in our favour,  is that the first logical name (of our database) is the logical name of the primary data file (.mdf). Other files, except for the very last logical name, belong to secondary data files. The last logical name is that of our database log file.   I now open my cursor and populate the variable @RestoreString Open My_Cursor  set @RestoreString =  'RESTORE DATABASE [MYDATABASE] FROM DISK = N''P:\DATA\ MYDATABASE.bak''' + ' with  '   We now fetch the first record from the temp table.   
Fetch NEXT FROM MY_Cursor INTO @LogicalName   While there are STILL records left within the cursor, we dynamically build our restore string. Note that we are using concatenation to create ‘one big restore executable string’.   Note also that the target physical file name is hardwired, as is the target directory.   While (@@FETCH_STATUS <> -1) BEGIN IF (@@FETCH_STATUS <> -2) -- As long as there are no rows missing select @RestoreString = case  when @counter = 1 then -- This is the mdf file    @RestoreString + 'move  N''' + @LogicalName + '''' + ' TO N’’X:\DATA1\'+ @LogicalName + '.mdf' + '''' + ', '   -- OK, if it passes through here we are dealing with an .ndf file -- Note that Counter must be greater than 1 and less than the number of rows.   when @counter > 1 and @counter < @rows then -- These are the ndf file(s)    @RestoreString + 'move  N''' + @LogicalName + '''' + ' TO N’’X:\DATA1\'+ @LogicalName + '.ndf' + '''' + ', '   -- OK, if it passes through here we are dealing with the log file When @LogicalName like '%log%' then    @RestoreString + 'move  N''' + @LogicalName + '''' + ' TO N’’X:\DATA1\'+ @LogicalName + '.ldf' +'''' end --Increment the counter   set @counter = @counter + 1 FETCH NEXT FROM MY_CURSOR INTO @LogicalName END   At this point we have populated the varchar(max) variable @RestoreString with a concatenation of all the necessary file names. What we now need to do is to run the sp_executesql stored procedure, to effect the restore.   First, we must place our ‘concatenated string’ into an nvarchar based variable. Obviously this will only work as long as the length of @RestoreString is less than varchar(max) / 2.   set @NRestoreString = @RestoreString EXEC sp_executesql @NRestoreString   Upon completion of this step, the database should be restored to the server. I now close and deallocate the cursor, and to be clean, I would also drop my temp table.   CLOSE MY_CURSOR DEALLOCATE MY_CURSOR GO   Conclusion Restoration of databases on different servers with different physical names and on different drives are a fact of life. Through the use of a few variables and a simple cursor, we may achieve an efficient and effective way to achieve this task.

    Read the article

  • RAID-1 and regular drive removal (using RAID-1 as a backup measure)

    - by Vi
    Is using mdadm's RAID-1 of 2 partitions (one on the laptop's internal HDD, one on an external HDD) a good idea? I want the system to work as RAID-1 if both drives are present, work as a regular volume (degraded RAID-1) if the external HDD is unplugged, and quickly resync when I plug the external HDD in again. Questions: Is it a good idea? Will a write-intent bitmap be enough for this task or do I need something else? Should I consider doing it at the filesystem level (if yes, how?). Basic requirements are: Quick resync when I re-add the external drive (provided I haven't changed that partition). More or less consistent data on the removed drive if I remove it outside of a write/resync operation. If I remove the drive during a resync I expect the data to be somewhat inconsistent, but expect quick resync completion when I re-add it again. E.g. I want the remaining drive to track what has changed (there can be a lot of changes) and then sync back only those parts that need it.
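    A minimal sketch of that setup (assuming /dev/sda2 is the internal partition and /dev/sdb1 the external one; both names are placeholders). The internal write-intent bitmap is what limits the resync to the regions that changed while the external disk was away:

      # create the mirror with an internal write-intent bitmap; mark the
      # external disk write-mostly so reads favour the internal one
      mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
            /dev/sda2 --write-mostly /dev/sdb1

      # before unplugging the external drive
      mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

      # after plugging it back in: only bitmap-dirty chunks are resynced
      mdadm /dev/md0 --re-add /dev/sdb1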

    Read the article

  • kickstart: reference floppy drive via %ksappend or %include

    - by virtualeyes
    Having trouble getting %ksappend or %include to work when referencing a local floppy drive. Booting off the remote server's CD-ROM drive I am able to load the CentOS 6 minimal install image, and then add ks=hd:fd0/ks-jvm.cfg to the boot params to load the kickstart file from the floppy disk. That works fine. The problem is that I want to load a streamlined generic kickstart file off the floppy and then, within it, %ksappend or %include specific config files appropriate to the type of server I'm building (JVM, MySQL, Apache, etc.). I do not have DHCP and networking needs to be specified statically, so %ksappend and %include both fail when attempting to reference http://some-LAN-IP/foo.cfg, since networking has not yet been set up. The kickstart setup only works when I glob the entire config into a single file, which works, but is ugly and difficult to maintain when I return later, having forgotten the original setup. At this point I'd be happy if I could get %ksappend or %include working with a floppy drive reference in the %post section; that would consolidate a lot of common boilerplate that all the kickstarts rely on (sshd_config, rsync config, resolv.conf, and so on). Thanks for providing the magic floppy drive reference that is eluding me!
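    One workaround sketch for the %post part (the device node, mount options and file names here are assumptions to adapt, not a verified recipe): mount the floppy yourself inside %post and run the shared boilerplate as an ordinary shell script instead of relying on %include:

      %post --nochroot
      # pull the common fragments (sshd_config, rsync config, resolv.conf, ...)
      # straight off the kickstart floppy
      mkdir -p /tmp/ksfd
      mount -t vfat /dev/fd0 /tmp/ksfd
      sh /tmp/ksfd/common-post.sh
      umount /tmp/ksfd
      %end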

    Read the article

  • Windows 7 cannot view FAT32 formatted bootable usb drive

    - by NaimK
    I'm having an issue where, after I run bootsect with a command line of "bootsect.exe /nt52 : /force /mbr", Windows 7 (the computer I'm running bootsect on) can no longer view the contents of the USB drive. Explorer tries to open it and then fails, and I can't even eject the drive correctly; when I try, nothing happens until I yank it out, and then I get some errors. Bootsect reports success on writing the volume and the drive data to make it bootable, but it doesn't boot after copying on the necessary files (files from a created ISO; it works when it is created on XP). But this may be because I'm not following the same instructions as when building it on XP, since some of the commands don't seem to always work correctly. The drive is formatted to FAT32 (necessary, I think, because I'm installing a custom version of Win XP Embedded). Any ideas? Or perhaps a good or automated way to load a USB drive with a custom version of Win XP and make it bootable from Win 7? I am having some issues; for instance, "ufdprep.exe" rarely works when I'm running it from Windows 7, and I don't know why.

    Read the article

  • VoIP setup for one external PSTN line

    - by Jcl
    I'm completely new to VoIP and the like, and I'm trying to find out what the best setup for this could be. I need 4 (maybe more in the future, but a maximum of 5 or 6) wireless extensions, connected to 1 PSTN line, and maybe 2 in the future. I've been trying to gather information about the gear needed, but everything I find seems way over the top (and extremely expensive). The main problem is that the physical location we are in has no possibility of a decent internet connection, so using an external VoIP "virtual PBX" is not an option. The thing is, even though we are small, phone service is critical to this organization. I currently have an analog DECT/GAP PBX which does what I need; however, the PBX is very bad and the call quality is horrible, which is why I want to change it. The requirements would be: 4 wireless terminals (routing cable is not an option), all of them ringing on incoming PSTN calls. Ability to do internal calls (4 separate offices) and ability to pass calls between terminals. The 4 terminals should be able to access the external PSTN line without dialing any special codes. Very important: terminals should be able to issue commands on the PSTN line to the external operator in the form *nn*nnnnnnnn#. I don't know whether this could turn out to be a problem, but I've had problems with analog PBXs that would treat any * as a PBX command and wouldn't let terminals send it out on the external line. Not so important, but nice to have: call-waiting music. Could anyone recommend such a setup? I need to do this on an EXTREMELY LIMITED budget (that is, I don't have a hard limit, but everything should get as close to zero as possible). I have enough spare powerful computers and a 300 Mbps wireless network that works just fine, so those don't need to be included in the budget. I don't really know if this is the best place to ask, but it's the most relevant StackExchange site I've found for this subject.

    Read the article
