Search Results

Search found 18534 results on 742 pages for 'dave long'.


  • Annoying mistake when creating a web template in SharePoint 2010

    - by ybbest
    I get the error 'Error occurred in deployment step "Add Solution": Exception from HRESULT: 0x8107026E' when deploying my web template project. It turns out that the name you give the web template has to match the Name of the WebTemplate element in the Elements.xml file. Please see the screenshot below; the two highlighted values need to be the same. It took me a long time to figure this out, so I am recording it here in case anyone else struggles with it. If you are a newbie to web templates in SharePoint 2010, this blog post explains the details.
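    For anyone who wants to catch this before deploying, a tiny sanity-check script is one option. This is only an illustrative sketch (the file path and expected name are assumptions, not SharePoint tooling):

    ```python
    # Hypothetical pre-deployment check, not part of the SharePoint tooling:
    # parse Elements.xml and confirm the WebTemplate Name attribute matches
    # the template name the project expects.
    import xml.etree.ElementTree as ET

    EXPECTED_NAME = "MyWebTemplate"  # assumption: your template's name
    NS = "{http://schemas.microsoft.com/sharepoint/}"  # SharePoint feature schema

    root = ET.parse("Elements.xml").getroot()
    for tmpl in root.iter(NS + "WebTemplate"):
        name = tmpl.get("Name")
        status = "OK" if name == EXPECTED_NAME else "MISMATCH (0x8107026E ahead)"
        print(f"WebTemplate Name={name!r}: {status}")
    ```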

    Read the article

  • How to handle canonical url changes like Stack Overflow

    - by lulalala
    Stack Overflow sites all have pretty URLs which include the question title, and the HTML also contains a canonical URL for the page. I just found out that when I change the question title, the URL changes immediately, and the canonical URL is updated as well. Does this mean that, as long as the page at the old canonical URL redirects to the new canonical URL, search engines will update their records of the canonical URL too? Is there anything else one can actively do to make the URL change even smoother?
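    For illustration, a minimal Flask sketch of the pattern the question describes (routes, names and data are invented; Stack Overflow's actual implementation is not public): the numeric ID stays authoritative, and any request carrying a stale slug gets a 301 to the current canonical URL, which is exactly the signal that lets search engines update their records:

    ```python
    from flask import Flask, redirect

    app = Flask(__name__)

    # Hypothetical stand-in for a database lookup: question ID -> current slug.
    QUESTIONS = {42: "how-to-handle-canonical-url-changes"}

    @app.route("/questions/<int:qid>/")
    @app.route("/questions/<int:qid>/<slug>")
    def question(qid, slug=None):
        current = QUESTIONS.get(qid)
        if current is None:
            return "Not found", 404
        if slug != current:
            # Stale or missing slug: permanent redirect to the canonical URL.
            return redirect(f"/questions/{qid}/{current}", code=301)
        # The rendered page would also carry <link rel="canonical" ...>.
        return f"Question {qid}: {current}"
    ```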

    Read the article

  • A good software development book [closed]

    - by Mahmoud Hossam
    I've searched this website, as well as SO, for a question like this, and I still haven't found what I'm looking for. I'm looking for a book similar to Head First Software Development. I want to know more about the different stages of software development; I know about coding already, but I don't know much about unit testing, version control, integration, design, etc. P.S. It'd be nice if the book wasn't a thousand pages long. Edit: I'm looking for an introductory text, not a book about the latest trends in software development.

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, to verify first-hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:

      - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
      - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data (tens of gigabytes), processing it, and writing it back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible from the memory and IO subsystems: when possible, the data to process was placed in the ZFS filesystem cache, and two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks, and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

      customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
      1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
      2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
      3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
      4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
      […]

    The results: unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. the L2ARC), and once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline.

    For the database load tests, only the best IO configuration (using the external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing single-thread performance

    Seven different tests were performed on both servers. Since only one thread, and thus one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.

      - Read File → Filter → Write File: read a file, filter the data, and write the filtered data to a new file. The filter is set on the "Status" column: only lines with status "A" are selected, which limits each output file to about 500 MB.
      - Read File → Load Database Table: read a file and insert the rows into a single Oracle table.
      - Average: read a file, compute the average of a numeric column, and write the result to a new file.
      - Division & Square Root: read a file, perform a division and square root on a numeric column, and write the resulting data to a new file.
      - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
      - Transform: read a file, transform it, and write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
      - Sort: read a file, sort a numeric and an alphanumeric column, and write the resulting data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousand lines processed per second (K lines/second); the improvement is the percentage improvement of the T4-1 over the T5140.

      Test                    T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
      Read/Filter/Write       125             806              645%         100                16
      Read/Load Database      195             1111             570%         64                 11
      Average                 96              557              580%         130                22
      Division & Square Root  161             1054             655%         78                 12
      Oracle DB Dump          164             945              576%         76                 13
      Transform               159             1124             707%         79                 11
      Sort                    251             1336             532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion into an Oracle DB runs at more than 650,000 rows per second without using any bulk Oracle capabilities!" - Cedric Carbone, Talend CTO.
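    To make the first single-thread test concrete, here is a rough re-creation of Read File → Filter → Write File in plain Python. The benchmark itself used Talend-generated jobs on the JVM; this sketch and its file names are illustrative assumptions only:

    ```python
    import csv

    # Column layout follows the sample lines above; Cust_Status is column 7.
    STATUS = 7

    with open("customers.csv", newline="") as src, \
         open("customers_active.csv", "w", newline="") as dst:
        reader = csv.reader(src, delimiter=";")
        writer = csv.writer(dst, delimiter=";")
        for row in reader:
            if row[STATUS] == "A":  # the filter used in the benchmark
                writer.writerow(row)
    ```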

    Read the article

  • To maximize chances of functional programming employment

    - by Rob Agar
    Given that the future of programming is functional, at some point in the nearish future I want to be paid to code in a functional language, preferably Haskell. Assuming I have a firm grasp of the language, plus all the basic programmer attributes (good communication skills, sense of humour, hygiene, etc.), what should I concentrate on learning to maximize my chances? Are there any particularly sought-after libraries I should know? Alternatively, would another language be a better bet, say F#? (I'm not too fussed about the kind of programming work, so long as it's reasonably interesting, reasonably well paid, and with nice people.)

    Read the article

  • What You Said: Do You Use the Command Line?

    - by Jason Fitzpatrick
    Earlier this week we asked you to sound off with your love (or lack thereof) for the command line. You sounded off in force and now we're back with a comment roundup. It turns out you all pretty much love the command line, with that love ranging from not even liking Graphical User Interfaces (GUIs) to using the command line to get serious work done while having a long-standing affair with your OS's GUI. Many of you lamented the poor command line implementation in Windows, especially after you'd had experience with other operating systems. Mike writes: Of course. Some things are easier that way, like ping and ipconfig. With a strong Unix background I still write and use batch files. It would be nice if the command line included more nice things like grep, sleep, touch. Maybe, someday, Windows will mature into a full OS.

    Read the article

  • Pac-Man Hiding Spot Makes High Scores a Snap

    - by Jason Fitzpatrick
    This interesting bug (feature?) in the original Pac-Man game makes it easy to hide from the ghosts, ensuring a long-lived and well-fed Pac-Man. Check out the video above to see the black hole you can park Pac-Man in to avoid assault by the ghosts. There are two big caveats with this trick: first, it only works in the original game (spin-offs and modern adaptations won't necessarily have it, but the original machine and MAME implementations of it will). Second, it doesn't work if the ghosts see you park yourself there; you need to slip into the spot out of their direct line of sight. Still craving more Pac-Man goodness? Check out these cheat maps that lay out all the patterns you need to follow to sneak through every level unmolested by ghosts. [via Neatorama]

    Read the article

  • Square Reader Modified to Record Off Old Reel-to-Reel Tape [Video]

    - by Jason Fitzpatrick
    The Square Reader is a tiny magnetic credit card reader that has taken the mobile payment industry by storm. This clever hack dumps the credit card reading in favor of snagging the audio from old music reels. Evan Long was curious about whether the through-the-headphones interface of the Square Reader could be used to read audio data off old magnetic recordings. With a very small modification (he had to bend a metal tab inside the reader to allow the audio tape to slide through more easily) he was able to listen to and record audio off old reels. Watch the video above to see it in action or hit up the link below to read more about his project. iPod Meets Reel [via Make]

    Read the article

  • Generalize, or Fix The Problem?

    - by Droogans
    Which of these two programmers is "better", from a managerial standpoint? The first programmer is Albert. You tell Al to make a system that will pass you the salt at the dinner table. He does it in less than a day. It works fine. The second programmer is Ben. Ben is told to make a program to pass the salt, and after two days, he's still working on it. It will save time in the long run...if you need pepper, ketchup, etc. There isn't any clear indication that there will be a need for this, but it's not improbable. Who's the better programmer to have working under you, as a manager?

    Read the article

  • Is it safe to install Compiz Experimental Plugins 0.1.1 on Maverick?

    - by litvin05
    Does anyone have these plugins installed? Sorry, but I'm worried, because my past attempts to update Compiz have failed, and when I try to install these plugins it asks me to update these packages: compiz-dev compiz-fusion-bcop debhelper html2text intltool-debian libcairo-gobject2 libcairo2-dev libdecoration0-dev libdrm-dev libexpat1-dev libfontconfig1-dev libfreetype6-dev libgl1-mesa-dev libglu1-mesa-dev libice-dev libkms1 libmail-sendmail-perl libpango1.0-dev libpixman-1-dev libpng12-dev libsm-dev libstartup-notification0-dev libsys-hostname-long-perl libx11-xcb-dev libxcb-render0-dev libxcb-shm0-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxext-dev libxfixes-dev libxft-dev libxinerama-dev libxml2-dev libxrandr-dev libxrender-dev libxslt1-dev libxss-dev mesa-common-dev po-debconf x11proto-composite-dev x11proto-damage-dev x11proto-fixes-dev x11proto-randr-dev x11proto-render-dev x11proto-scrnsaver-dev x11proto-xext-dev x11proto-xinerama-dev Please answer my question, and I'll be very grateful! The plugins are here

    Read the article

  • GWB: 5 yr anniversary

    - by Theo Moore
    Wow, I just realized it's my 5-year anniversary on GeeksWithBlogs. Hard to believe so much time has passed. I paged back through some of my early posts, curious about what sorts of things I used to post. It's also interesting to see how my focus has changed and what really hasn't. I was also reminded that Chris Williams and I have been friends for that long. I don't blog nearly as often now as I used to, but I still really like the GWB community, and I am honoured to be allowed to continue to be a part of it. Another 5 years ahead (or more), I hope. :-)

    Read the article

  • Suggestions on managing social media accounts

    - by Rob
    As a company we now have Facebook, LinkedIn, Twitter and now Google+; is there a way to easily manage all these accounts without having to log into each one individually? Posting content to each one is becoming a full-time job in itself; is there a way to post once and have it post to all the other accounts? I used to use http://ping.fm/ a long time ago; have there been any advancements in something similar? With friend lists, news feeds, etc. for each one, I wish there were a way to manage them all in one place with a single service or tool!

    Read the article

  • "Linux has failed on the desktop," says GNOME's creator: a blunt opinion that divides the open source community

    Miguel de Icaza, one of the creators and development leaders of GNOME, the free desktop environment for Linux, argues in an article that "Linux is a failure as a mainstream consumer OS." It is a point of view that has not failed to spark a major controversy in the open source world, drawing scathing criticism from Linus Torvalds. Already known for his outspokenness and taste for controversy, Miguel de Icaza, in a long blog post entitled "What Killed the Linux Desktop", lambastes the Linux community and its development choices...

    Read the article

  • 302 or 301 redirect in a case where the redirect lasts 1-2 months

    - by Matt Helmick
    I have a newly built "author site" (promoting the author in general as a speaker and writer), and I need to temporarily redirect traffic to it from the author's "book site" (which focuses on advertising a specific book). Because of some upcoming publicity we want to redirect traffic from the book site to the author site as a truly temporary measure; the redirect would probably only last 1-2 months (until the flurry of activity around the publicity dies down, or until the author site has had an opportunity to rise in the search rankings). At first glance this seems to be exactly the situation a 302 redirect is designed for, but I'm worried about losing link juice for the original book site. Would a 301 redirect be better (keeping in mind that this would be temporary) as long as the 301 redirect was lifted after 1-2 months?
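    Whichever way the SEO question is answered, the mechanical difference is a single status code. A hedged Flask sketch (domain and routes invented) that forwards the whole book site and can be flipped from temporary to permanent:

    ```python
    from flask import Flask, redirect

    app = Flask(__name__)

    AUTHOR_SITE = "https://author.example.com"  # hypothetical destination
    PERMANENT = False  # False -> 302 (temporary); flip to True for a 301

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def forward(path):
        # Preserve the requested path so deep links to the book site land
        # on the matching page of the author site (assumed to exist).
        return redirect(f"{AUTHOR_SITE}/{path}", code=301 if PERMANENT else 302)
    ```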

    Read the article

  • TechEd 2010 Thanks and Demos

    - by Adam Machanic
    Thank you to everyone who attended my three sessions at this year's TechEd show in New Orleans. I had a great time presenting and answering the really great questions posed by attendees. My sessions were: DAT317 T-SQL Power! The OVER Clause: Your Key to No-Sweat Problem Solving Have you ever stared at a convoluted requirement, unsure of where to begin and how to get there with T-SQL? Have you ever spent three days working on a long and complex query, wondering if there might be a better way? Good...(read more)

    Read the article

  • Technologies similar to Flash and Silverlight for Desktop apps

    - by M.A. Hanin
    Long story short: we use Flash as a partial GUI in our .NET desktop applications. Normally, this means that the Flash player control sits in some WinForm, playing a movie file. Changes in the real world are presented in the movie (e.g., if a light-bulb turns on in the real world, a matching one lights up inside the Flash movie), and interaction with the instances in the movie affects the real world (clicked the light-bulb? the light-bulb in the real world turns on). My question is: which technologies or products can offer me similar capabilities? Of course, I'm looking for something that can compete with Flash/Silverlight: animations, object-oriented scripting and design, powerful tools allowing artists to design symbols conveniently, and so on. Static image objects won't cut it.

    Read the article

  • Partnering with a web designer to build and launch websites for fun. Where should I look for someone? [closed]

    - by FastCoder
    I have been working with websites and web development for a long time, and as a result I am able to build and launch complex websites (a social network, a dating site, a Stack Overflow clone, etc.) in little time (1-3 weeks). The problem is: I know very little about CSS, page layout, Photoshop and graphic design. Of course I know HTML, but when it comes to putting together something that looks good I am horrible; I just don't have the artistic skills. I want to launch some websites without any silly or naive intent to take over the world, just for fun and to improve my portfolio. How do you recommend I approach the problem of finding the right partner with the same mindset? Where should I look for this person? I have no idea... :(

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control.

    The UI and middle-tier stuff has long been easy to manage, as we mostly use Visual Studio, which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled and allowed me to do my database development in my tool of choice: Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and that I’ll probably outline in a future post), but it could use some improvement.

    When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get the same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea.

    Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I), so I had hope that maybe next year I could look at it again.

    But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive.

    I must say that development with SQL Source Control is very different from what I have been used to.  This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King, and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, this will actually make us more efficient.

    And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that.

    If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • How do I handle 3rd party search result data (via cache)

    - by reikyoushin
    I have a search function on my site that pulls data from six different third-party resources. The problem is, it takes too long to request the data over and over again on the results page. I've read questions like this on SO where sessions were said not to be a good choice, and for me memcache is not an option either, because the server doesn't have memcached installed and I have no way to install it now. Is there any other approach? Storing the results in the database seems inappropriate because the data depends on the search terms requested. What I've been thinking of is writing a file on the server that would act as a cache for these results, but I don't know how I would decide when to delete it afterwards.
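    A minimal sketch of that file-based cache idea (the directory, TTL, and JSON serialization are all assumptions): key each cache file on a hash of the normalized search terms, and use the file's modification time to decide when an entry is stale and can be deleted:

    ```python
    import hashlib
    import json
    import os
    import time

    CACHE_DIR = "/tmp/search-cache"  # assumed location
    TTL = 15 * 60                    # assume results stay fresh for 15 minutes

    def _cache_path(terms):
        key = hashlib.sha256(terms.strip().lower().encode("utf-8")).hexdigest()
        return os.path.join(CACHE_DIR, key + ".json")

    def get_cached(terms):
        path = _cache_path(terms)
        try:
            if time.time() - os.path.getmtime(path) < TTL:
                with open(path) as f:
                    return json.load(f)
            os.remove(path)  # stale entry: delete on next access
        except OSError:
            pass             # missing file or race: treat as a cache miss
        return None

    def put_cached(terms, results):
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(_cache_path(terms), "w") as f:
            json.dump(results, f)
    ```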

    Read the article

  • Screen goes nuts and unreadable

    - by ChazD
    I'm running Ubuntu 10.10 in its own partition, and also have Oracle VirtualBox 4 with Windows XP. Because I previously had a problem with a dark screen when I left the laptop on and didn't use it for a long while, I had set the Power settings to never sleep and never let the display sleep. The monitor was set on install to the laptop's resolution of 1400 x 1050, refresh rate 60 Hz, rotation Normal. The problem: when the computer was left on and I returned after about an hour, the screen was unreadable, similar to what used to happen to graphics cards before multisync monitors became the norm. So it appears as though the graphics card was asked to use a resolution it couldn't handle and went nuts. I had to power off the system; on restart everything was fine. Thanks for any suggestions. ChazD

    Read the article

  • How can I login to lightdm with password for fingerprint-enabled user after 12.10 upgrade?

    - by jxn
    Sorry for the long question. I have a laptop with Ubuntu Quantal 12.10, a fingerprint scanner, and a few active user accounts. When the machine boots up to lightdm, I get a prompt to enter my password or scan my fingerprint. Every now and then, fingerprint scanning just doesn't seem to work. Before the 12.10 upgrade, I was always able to enter my password for this user when the fingerprint failed. Now, no matter what, I have to scan my prints to log in as this user. If I try to log in as a different user (fingerprint is not enabled for any others), I can see the password is typed out (asterisks show in the password input box as I type) and I get in. Not so for the fingerprint user. Any clues on how to figure out what's gone wrong?

    Read the article

  • GPL code allowing non-GPL local copies of nondistributed code

    - by Jason Posit
    I have come across a book that claims that alterations and augmentations to GPL works can be kept closed-source as long as they are not redistributed into the wild. Therefore, customizations of websites derived from GPL packages need not be released under the GPL, and developers can profit from them by offering their services to clients while keeping their GPL-based code closed-source (cf. Chapter 17 of WordPress Plugin Development by Wrox Press). I had never realized this, but essentially, because the GPL's restrictions attach to redistribution, the license says nothing about what can and cannot be done with code that is kept private. Have I understood this correctly?

    Read the article

  • Improving performance of fuzzy string matching against a dictionary [closed]

    - by Nathan Harmston
    Hi, so I'm currently working with SecondString for fuzzy string matching, where I have a large dictionary to compare against (each entry in the dictionary has an associated non-unique identifier). I am currently using a HashMap to store this dictionary. When I want to do fuzzy string matching, I first check whether the string is in the HashMap, and then I iterate through all of the other potential keys, calculating the string similarity and storing the key/value pair(s) with the highest similarity. Depending on which dictionary I am using (12,330 to 1,800,035 entries), this can take a long time. Is there any way to speed this up? I am currently writing a memoization function/table as one way of doing so, but can anyone think of a better way to improve the speed? Maybe a different structure, or something else I'm missing. Many thanks in advance, Nathan
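    One structural trick that often helps with this kind of per-query scan (sketched in Python rather than SecondString's Java API): bucket the dictionary keys by length, so the expensive similarity call only runs on keys whose length is within the maximum edit distance of the query, since edit distance can never be smaller than the difference in lengths:

    ```python
    from collections import defaultdict

    def build_index(dictionary):
        """dictionary maps term -> identifier; returns terms bucketed by length."""
        buckets = defaultdict(list)
        for term in dictionary:
            buckets[len(term)].append(term)
        return buckets

    def candidates(query, buckets, max_dist=2):
        # Only terms within max_dist in length can be within max_dist in edit
        # distance, so every other bucket is skipped without a similarity call.
        for length in range(len(query) - max_dist, len(query) + max_dist + 1):
            yield from buckets.get(length, [])

    # Score only the surviving candidates with the real similarity function
    # (SecondString on the JVM; any scorer here) and keep the best pair(s).
    ```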

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability.

    If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification.

    LTFS is really a wonderful feature and we're proud to have taken part in its beginnings and, as you'll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment, not only on your LTO5 drives but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out.

    LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure so you can easily view the contents of the cartridge. The second partition holds the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations.

    You may be thinking all of this sounds nice, but asking, "when will I actually use it?" As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. Here are a few example use cases where LTFS can be utilized.

    Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions, and M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there's one place that has bandwidth, it's your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive.

    Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read.

    Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts.

    So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly, and it's free, which allows tape to maintain all of its cost advantages over disk!

    Note 1 - IDC Report, April 2011, "IDC's Archival Storage Solutions Taxonomy, 2011"

    - Brian Zents

    Read the article

  • March 2010 Chicago Architects Group Wrap Up

    - by Tim Murphy
    I would like to thank everyone who came out to last night's event, and especially thank Mike Vogt for the presentation. I think at first everyone glazed over, since very few of us spend a lot of time with integration architecture and most of us live more in the application architecture space. Learning about subjects like BPEL and BPMN was refreshing. The discussion after Mike's talk was lively, and I think everyone came away with a good idea of areas they might want to know more about. People stuck around long after the meeting was over. If you are interested in the topic you can find the slides here. Be sure to join us next month when Matt Hidinger talks about Onion Architecture. Details are coming soon.

    Read the article
