Search Results


  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem and endless loops of explaining the issue over and over again - I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans.   If you are considering using them now for an important trip - reconsider. Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)!  At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia.  I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours.  This is after 5 hours total at the airport.  If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding. Details below (times are approximate): First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok enough mushy - here are the dirty details. 2:30 AM - Early Check-in Attempt - we attempted to check in for our flight online. Some sort of technology error on the website; we were instructed to check in at the desk. 4:30 AM - Arrive at airport. Try to check in at the kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia-provided itinerary does not match the Expedia-provided tickets. We are informed that when that happens, American, JetBlue, and others that use the same software cannot check you in for the flight. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system. 4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary, who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining.
    Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated. 5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales, a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon, could they please help us. "Yes, I will help you". Again I give the phone to Mary who provides them with a call back number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated. 5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones. 5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us. 6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - it no longer matters why (not to mention the fact that we already knew why, have known why since 4:30 AM, and have known the solution since 4:30 AM). Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor but that will take 1.5 hours.
    6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which is now about 1 hour and 10 minutes away). 6:30 AM - I start tweeting my frustration with my iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right. 7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why, I'm not exactly sure). I say ok. 8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back. 9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh. 11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details".  If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response.  The other 4?  Ads.  Um - #MultiFAIL? To Expedia:  You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number.  You also know how upset you have made both me and my new bride by treating us in such a ... non-caring, scripted, uncooperative, argumentative, and possibly even deceitful manner.  In the wise words of the great Kenan Thompson of SNL: "FIX IT!".  And no, I'm NOT going away until you make this right. Period. 11:45 AM - Expedia corporate office called.
    The woman I spoke to was very nice and apologetic.  She listened to me tell the story again, she says she understands the problem and she is going to work to resolve it.  I don't have any details on what exactly that resolution might be; she said she will call me back in 20 minutes.  She found out about the problem via Twitter.  Thank you Twitter, and all of you who helped.  Hopefully social media will win my wife and me our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly. 12:22 PM - Spoke to Fran again from the Expedia corporate office.  She has a flight for us tonight.  She is booking it now.  We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late.  She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night.  Thank you everyone for your help.  I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed.  For now, I'm going to take a breather and go kiss my wonderful wife! 1:50 PM - Have not yet received the promised phone call.  We did receive an email with a new itinerary for a flight but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together.  With the original booking I carefully selected our seats for every segment of our trip.  I decided to call the phone number that Fran from the Expedia corporate office gave me.  Its automated voice system identified itself as "Tier 3 Support".  I am currently still on hold with them; I have not gotten through to a human yet. 1:55 PM - Fran from Expedia called me back.  She confirmed us as booked.  She called the airlines to confirm.  Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection.  It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back).  In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom.  Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade).  Since they didn't offer - I asked, and was rejected.  I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution.  If this works out favorably for us, great.  If not - I'm not done making noise, Expedia.  You owe us, and I expect you to make it right.  You haven't quite done that yet. Thanks - Thank you to Twitter.  Thanks to all those who sympathize with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help.  Thanks especially to my PowerShell and Sharepoint friends, my local friends, and those connectors who encouraged me and spread my story. 5:15 PM - Love Wins - After all this, Lauren and I are exhausted.  We both took a short nap, and when we woke up we talked about the last 24 hours.  It was a big, amazing, story-filled 24 hours.  I said that Expedia won, but Lauren said no.  She pointed out how lucky we are.  We are in love and married.
    We have wonderful family and friends.  We are both hard-working, successful people who love what we do.  We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives...  That's a lot of good.  Expedia didn't win.  This was (is) a big loss for Expedia.  It is a public blemish for all to see.  But Lauren and I did win, big time.  Expedia may not have made things right - but things are right for us.  Post in progress... I will relay any further comments (or lack thereof) from Expedia soon, as well as an update on confirmation of their repayment of our lost resort room rates.  I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
    [ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>. Because it derives from DataService<ReferenceDBEntities>, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent unauthenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: "/Services/EntryService.svc/Entries", success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records.
    According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); </script>   You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
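    To make that flow concrete, here is a rough sketch of what the startup check might look like on the client, using the Storage wrapper shown above. This is illustrative only and not the actual JavaScript Reference code; the fetchAllEntries and refreshEntries helper names are hypothetical. The else branch corresponds to the OData query and merge() method described next.
    // Illustrative sketch of the startup flow described above (not the
    // application's actual code). Uses the Storage wrapper shown earlier;
    // fetchAllEntries and refreshEntries are hypothetical helpers.
    function initializeEntries(onReady) {
        var store = new Storage();
        var cachedEntries = store.get("entries");
        var lastUpdated = store.get("entriesLastUpdated");
        if (cachedEntries === null || lastUpdated === null) {
            // First visit: download the whole database and remember its version.
            fetchAllEntries(function (entries, version) {
                store.set("entries", entries);
                store.set("entriesLastUpdated", version);
                onReady(entries);
            });
        } else {
            // Later visits: send the saved timestamp and fetch only the changes.
            refreshEntries(cachedEntries, lastUpdated, onReady);
        }
    }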
    The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove the URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.
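    To round out the picture, here is a rough sketch (again, illustrative rather than the application's actual code) of how the delta query and the merge() method above might be wired together on the client: request only the entries changed since the saved timestamp, unwrap the result, merge, and write the merged set back to local storage. The refreshEntries and entriesHelper names are hypothetical; in the real application the merge logic lives in EntriesHelper.js.
    // Illustrative sketch of a delta refresh (not the application's actual code).
    function refreshEntries(cachedEntries, lastUpdated, onReady) {
        $.ajax({
            dataType: "json",
            url: "/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20" + lastUpdated + "L)",
            success: function (result) {
                var changes = result["d"]; // unwrap, as shown earlier
                // merge() is the method shown above, assumed here to hang off an entriesHelper object.
                var merged = entriesHelper.merge(cachedEntries, changes);
                var store = new Storage();
                store.set("entries", merged);
                // Advance the version stamp to the newest LastUpdated value received.
                var newest = lastUpdated;
                for (var i = 0; i < changes.length; i++) {
                    if (changes[i].LastUpdated > newest) { newest = changes[i].LastUpdated; }
                }
                store.set("entriesLastUpdated", newest);
                onReady(merged);
            }
        });
    }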

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2 which then causes assumptions to be made such as one replacing the other etc... I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're up to still, how it is different from other options and how this really is a cool native Linux cluster filesystem that we worked on for many years and is still widely used. Work on a cluster filesystem at Oracle started many years ago, in the early 2000's when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and help customers with the deployment of Oracle Real Application Cluster (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a Shared Disk database design. There are many advantages doing this but I will not go into detail as that is not the purpose of my write up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way, across all the nodes in the cluster. To do that, there were/are a few options : 1) use raw disk devices that are shared, through SCSI, FC, or iSCSI 2) use a network filesystem (NFS) 3) use a cluster filesystem(CFS) which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) combining option 1 and 2 except that you don't do network access to the files, the files are effectively locally visible as if it was a local filesystem. So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux and thus the porting of OCFS/Windows started. The first version of OCFS was really primarily focused on replacing the use of Raw devices with a simple filesystem that lets you create files and provide direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant and it did not have any where near good/decent performance for regular file create/delete/access operations. Cache coherency was easy since it was basically always direct IO down to the disk device and this ensured that any time one issues a write() command it would go directly down to the disk, and not return until the write() was completed. Same for read() any sort of read from a datafile would be a read() operation that went all the way to disk and return. We did not cache any data when it came down to Oracle data files. So while OCFS worked well for that, since it did not have much of a normal filesystem feel, it was not something that could be submitted to the kernel mail list for inclusion into Linux as another native linux filesystem (setting aside the Windows porting code ...) it did its job well, it was very easy to configure, node membership was simple, locking was disk based (so very slow but it existed), you could create regular files and do regular filesystem operations to a certain extend but anything that was not database data file related was just not very useful in general. Logfiles ok, standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers. 
    Once OCFS (1) was out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were: Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk Make it a Linux native filesystem instead of a native shim layer and a portable core Support standard Posix compliancy and be fully cache coherent with all operations Support all the filesystem features Linux offers (ACL, extended Attributes, quotas, sparse files,...) Be modern, support large files, 32/64bit, journaling, data ordered journaling, endian neutral, we can mount on both endian /cross architecture,.. Needless to say, this was a huge development effort that took many years to complete. A few big milestones happened along the way... OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close for it to be included into the mainline kernel. Using this development model is standard practice for anyone that wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or... shall I say, flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel that didn't follow an open development model had a much harder time and received a lot harsher criticism. In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel. It was accepted a little earlier in the release candidates, but with 2.6.16 OCFS2 officially became part of the mainline Linux kernel tree as one of the many filesystems. It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85000 lines of code. We made OCFS2 production-ready with full support for customers that ran Oracle Database on Linux, no extra or separate support contract needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64. For RHEL5, we started with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available even prior to inclusion into mainline and as of 2.6.16, source code was just part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always in the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
Since the filesystem is in the Linux kernel it's released under the GPL v2 The release model has always been that new feature development happened in the mainline kernel and we then built consistent, well tested, snapshots that had versions, 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality. OCFS2 is very easy to use, there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts. It is very small, and very efficient. As Sunil Mushran wrote in the manual : OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system. Here is a list of some of the important features that are included : Variable Block and Cluster sizes Supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in power of 2). Extent-based Allocations Tracks the allocated space in ranges of clusters making it especially efficient for storing very large files. Optimized Allocations Supports sparse files, inline-data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage. File Cloning/snapshots REFLINK is a feature which introduces copy-on-write clones of files in a cluster coherent way. Indexed Directories Allows efficient access to millions of objects in a directory. Metadata Checksums Detects silent corruption in inodes and directories. Extended Attributes Supports attaching an unlimited number of name:value pairs to the file system objects like regular files, directories, symbolic links, etc. Advanced Security Supports POSIX ACLs and SELinux in addition to the traditional file access permission model. Quotas Supports user and group quotas. Journaling Supports both ordered and writeback data journaling modes to provide file system consistency in the event of power failure or system crash. Endian and Architecture neutral Supports a cluster of nodes with mixed architectures. Allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures. In-built Cluster-stack with DLM Includes an easy to configure, in-kernel cluster-stack with a distributed lock manager. Buffered, Direct, Asynchronous, Splice and Memory Mapped I/Os Supports all modes of I/Os for maximum flexibility and performance. Comprehensive Tools Support Provides a familiar EXT3-style tool-set that uses similar parameters for ease-of-use. The filesystem was distributed for Linux distributions in separate RPM form and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decide to make OCFS2 available as part of UEK. There was no more need for separate kernel modules, everything was built-in and a kernel upgrade automatically updated the filesystem, as it should. 
    UEK allowed us to avoid having to backport new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel did not contain OCFS2 as a kernel module (it is in the source tree but it is not built by the vendor in kernel module form), we stopped adding the extra packages for Oracle Linux with its RHEL-compatible kernel and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed by SuSE with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available. OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add - REFLINK was a good example - and as such we continue to enhance the filesystem where it makes sense. Bugfixes and any sort of code that goes into the mainline Linux kernel that affects filesystems automatically also modifies OCFS2, so it's in the kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really just is part of Linux, like EXT3 or BTRFS etc.; the OS distribution vendors decide. Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful Cluster Filesystem but it's not distributed as part of the Operating System; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, is also supported and continues to be supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS as it also provides storage/clustered volume management. Customers wanting to use a simple, easy to use generic Linux cluster filesystem should consider using OCFS2. To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source. One final, unrelated note - since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but if for some reason I cannot publicly comment on them, it becomes a very one-sided stream. So for now I am going to not publish comments from anyone, to be fair to all sides.
You are always welcome to email me and I will do my best to respond to technical questions, questions about strategy or direction are sometimes not possible to answer for obvious reasons.

    Read the article

  • SQLAuthority News – Job Interviewing the Right Way (and for the Right Reasons) – Guest Post by Feodor Georgiev

    - by pinaldave
    Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. Feodor has written an excellent article on Job Interviewing the Right Way. Here is his article in his own words. A while back I was thinking of starting a blog post series on interviewing and employing IT personnel. At that time I had just read the ‘Smart and gets things done’ book (http://www.joelonsoftware.com/items/2007/06/05.html) and I was hyped up on some debatable topics regarding finding and employing the best people in the industry. I have no problem with hiring the best of the best; it’s just the definition of ‘the best of the best’ that makes things a bit more complicated. One of the fundamental books one can read on the topic of interviewing is the one mentioned above. If you have not read it, then you must do so; not because it contains the ultimate truth, and not because it gives the answers to most questions on the subject, but because the book contains an extensive set of questions about interviewing and employing people. Of course, a big part of these questions have different answers, depending on location, culture, available funds and so on. (What works in the US may not necessarily work in the Nordic countries or India, or it may work in a different way). The only thing that is valid regardless of any external factor is this: curiosity. In my belief there are two kinds of people – curious and not-so-curious; regardless of profession. Think about it – professional success is directly proportional to the individual’s curiosity + time of active experience in the field. (I say ‘active experience’ because vacations and any distractions do not count as experience :)  ) So, curiosity is the factor which will distinguish a good employee from the not-so-good one. But let’s shift our attention to something else for now: a few tips and tricks for successful interviews. Tip and trick #1: get your priorities straight. Your status usually dictates your priorities; for example, if the person looking for a job has just relocated to a new country, they might tend to ignore some of their priorities and overload others. In other words, setting priorities straight means to define the personal criteria by which the interview process is led. For example, questions similar to the following can help define the criteria for someone looking for a job: How badly do I need a (any) job? Is it more important to work in a clean and quiet environment or is it important to get paid well (or both, if possible)? And so on… Furthermore, before going to the interview, the candidate should have a list of priorities, sorted by importance: e.g. I want a quiet environment, x amount of money, a great and helpful boss, a desk next to a window and so on. Also it is a good idea to be prepared and know which factors can be compromised and to what extent. Tip and trick #2: the interview is a two-way street. A job candidate should not forget that the interview process is not a one-way street. What I mean by this is that while the employer is interviewing the potential candidate, the job seeker should not miss the chance to interview the employer. Usually, the employer and the candidate will meet for an interview and talk about a variety of topics.
In a quality interview the candidate will be presented to key members of the team and will have the opportunity to ask them questions. By asking the right questions both parties will define their opinion about each other. For example, if the candidate talks to one of the potential bosses during the interview process and they notice that the potential manager has a hard time formulating a question, then it is up to the candidate to decide whether working with such person is a red flag for them. There are as many interview processes out there as there are companies and each one is different. Some bigger companies and corporates can afford pre-selection processes, 3 or even 4 stages of interviews, small companies usually settle with one interview. Some companies even give cognitive tests on the interview. Why not? In his book Joel suggests that a good candidate should be pampered and spoiled beyond belief with a week-long vacation in New York, fancy hotels, food and who knows what. For all I can imagine, an interview might even take place at the top of the Eifel tower (right, Mr. Joel, right?) I doubt, however, that this is the optimal way to capture the attention of a good employee. The ‘curiosity’ topic What I have learned so far in my professional experience is that opinions can be subjective. Plus, opinions on technology subjects can also be subjective. According to Joel, only hiring the best of the best is worth it. If you ask me, there is no such thing as best of the best, simply because human nature (well, aside from some physical limitations, like putting your pants on through your head :) ) has no boundaries. And why would it have boundaries? I have seen many curious and interesting people, naturally good at technology, though uninterested in it as one  can possibly be; I have also seen plenty of people interested in technology, who (in an ideal world) should have stayed far from it. At any rate, all of this sums up at the end to the ‘supply and demand’ factor. The interview process big-bang boils down to this: If there is a mutual benefit for both the employer and the potential employee to work together, then it all sorts out nicely. If there is no benefit, then it is much harder to get to a common place. Tip and trick #3: word-of-mouth is worth a thousand words Here I would just mention that the best thing a job candidate can get during the interview process is access to future team members or other employees of the new company. Nowadays the world has become quite small and everyone knows everyone. Look at LinkedIn, look at other professional networks and you will realize how small the world really is. Knowing people is a good way to become more approachable and to approach them. Tip and trick #4: Be confident. It is true that for some people confidence is as natural as breathing and others have to work hard to express it. Confidence is, however, a key factor in convincing the other side (potential employer or employee) that there is a great chance for success by working together. But it cannot get you very far if it’s not backed up by talent, curiosity and knowledge. Tip and trick #5: The right reasons What really bothers me in Sweden (and I am sure that there are similar situations in other countries) is that there is a tendency to fill quotas and to filter out candidates by criteria different from their skill and knowledge. In job ads I see quite often the phrases ‘positive thinker’, ‘team player’ and many similar hints about personality features. 
    So my guess here is that discrimination has evolved to a new level. Let me clear up the definition of discrimination: ‘unfair treatment of a person or group on the basis of prejudice’. And prejudice is the ‘partiality that prevents objective consideration of an issue or situation’. In other words, there is not much difference whether a job candidate is filtered out by race, gender or by personality features – it is all a bad habit. And in reality, there is no proven correlation between technology knowledge and skills and personal features (gender, race, age, optimism). It is true that a significantly greater number of Darwin awards were given to men than to women, but I am sure that somewhere there is a paper or theory explaining the genetics behind this. :) This topic actually brings to mind one of my favorite work related stories. A while back I was working for a big company with many teams involved in their processes. One of the teams was occupying 2 rooms – one had the team members and was full of light, colorful posters, chit-chats and giggles, whereas the other room was dark, lighted only by a single monitor with a quiet person in front of it. Later on I realized that the ‘dark room’ person was the guru and the ultimate problem-solving-brain who did not like the chats and giggles and hence was in a separate room. In reality, all severe problems which the chatty and cheerful team members could not solve and all emergencies were directed to ‘the dark room’. And thus all worked out well. The moral of the story: Personality has nothing to do with technology knowledge and skills. End of story. Summary: I’d like to stress the fact that there is no ultimately perfect candidate for a job, and there is no such thing as ‘best-of-the-best’. From my personal experience, the main criterion by which I measure people (co-workers and bosses) is the curiosity factor; I know from experience that the more curious and inventive a person is, the better chances there are for great achievements in their field. Related stories: (for extra credit) 1) Get your priorities straight. A while back as a consultant I was working for a few days at a time at different offices and for different clients, and so I was able to compare and analyze the work environments. There were two different places which I compared and recently I asked a friend of mine the following question: “Which one would you prefer as a work environment: a noisy office full of people, or a quiet office full of foul smells because the office is rarely cleaned?” My friend was puzzled for a while, thought about it and said: “Hmm, you are talking about two different kinds of pollution… I will probably choose the second, since I can clean the workplace myself a bit…” 2) The interview is a two-way street. One time, during a job interview, I met a potential boss that had a hard time phrasing a question. At that particular time it was clear to me that I would not have liked to work under this person. According to my work religion, a properly asked question contains at least half of the answer. And if I work with someone who cannot ask a question… then I’d be doing double or triple work. At another interview, after the technical part with the team leader of the department, I was introduced to one of the team members and we were left alone for 5 minutes.
    I immediately jumped on the occasion and asked the blunt question: ‘What have you learned here for the past year and how do you like your job?’ The team member looked at me and said ‘Nothing really. I like playing with my cats at home, so I am out of here at 5pm and I don’t have time for much.’ I was disappointed at the time and I did not take the job offer. I wasn’t that shocked a few months later when the company went bankrupt. 3) The right reasons to take a job: personality check. A while back I was asked to serve as a job reference for a coworker. I agreed, and after some weeks I got a phone call from the company where my colleague was applying for a job. The conversation started with the manager’s question about my colleague’s personality and about their social skills. (You can probably guess what my internal reaction was… :) ) So, after 30 minutes of pouring common sense into the interviewer’s head, we finally agreed on the fact that a shy or quiet personality has nothing to do with work skills and knowledge. Some years down the road my former colleague is taking the manager’s position as the manager is demoted to a different department. Reference: Feodor Georgiev, Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
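    To make the layout concrete, here is a small sketch in JavaScript (Node.js) that walks the member headers of an SVR4-style archive. It is not Solaris code, just an illustration of the structure described above, using the common SVR4 ar_hdr field widths (a 16 byte name, 12 byte date, 6 byte uid, 6 byte gid, 8 byte mode, 10 byte size and 2 byte trailer, 60 bytes in all); the libfoo.a file name is just an example.
    // Sketch only: walk the member headers of an SVR4-style archive.
    var fs = require("fs");
    function listArchiveMembers(path) {
        var data = fs.readFileSync(path);
        // Every archive starts with the 8 character magic number.
        if (data.toString("latin1", 0, 8) !== "!<arch>\n") {
            throw new Error(path + " is not an archive");
        }
        var offset = 8;
        var members = [];
        while (offset + 60 <= data.length) {
            // Header fields are fixed-width ASCII text, not binary integers.
            var name = data.toString("latin1", offset, offset + 16).replace(/ +$/, "");
            var size = parseInt(data.toString("latin1", offset + 48, offset + 58), 10);
            if (isNaN(size)) { break; }
            members.push({ name: name, size: size, dataOffset: offset + 60 });
            // Member data is padded with a newline so that members have even length.
            offset += 60 + size + (size % 2);
        }
        return members;
    }
    // Names starting with "/" are the special members discussed below.
    console.log(listArchiveMembers("libfoo.a"));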
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
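    To make the symbol table widening concrete, here is a companion sketch to the one earlier in this post. It assumes the conventional SVR4 symbol table layout (a symbol count, then one archive offset per symbol, then the NUL-terminated names) and assumes that the 64-bit variant simply doubles the width of the big-endian integers and is identified by its own slash-prefixed special member name. Treat both details as illustrative assumptions rather than a statement of the exact Solaris 11 on-disk format.

        import struct

        def read_ar_symbol_table(data, wide=False):
            """Map symbol names to member offsets from a symbol table member's data.

            Pass wide=False for the traditional 32-bit big-endian offsets and
            wide=True for the assumed 64-bit variant with 8-byte offsets."""
            fmt, width = (">Q", 8) if wide else (">I", 4)
            (count,) = struct.unpack_from(fmt, data, 0)
            offsets = struct.unpack_from(">%d%s" % (count, fmt[1]), data, width)
            names = data[width * (count + 1):].split(b"\0")[:count]
            return dict(zip((n.decode() for n in names), offsets))

    A reader that dispatches on the special member's name and calls this with the appropriate width gets the transparent either-format behavior described above for the ar command and libelf.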

    Read the article

  • The Changing Face of PASS

    - by Bill Graziano
    I’m starting my sixth year on the PASS Board.  I served two years as the Program Director, two years as the Vice-President of Marketing and I’m starting my second year as the Executive Vice-President of Finance.  There’s a pretty good chance that if PASS has done something you don’t like or is doing something you don’t like, that I’m involved in one way or another. Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input.  He asserted that it hadn’t.  I disagree.  I’m not going to try and list all the changes we make inside portfolios based on feedback from and meetings with the community.  I’m going to focus on major governance issues since I was elected to the Board. Management Company The first big change was our management company.  Our old management company had a standard approach to running a non-profit.  It worked well when PASS was launched.  Having a ready-made structure and process to run the organization enabled the organization to grow quickly.  As time went on we were limited in some of the things we wanted to do.  The more involved you were with PASS, the more you saw these limitations.  Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish.  The Board at that time wanted changes that were difficult or impossible to accomplish under that structure. This was not a simple change.  Imagine a $2.5 million dollar company letting all its employees go on a Friday and starting with a new staff on Monday.  We also had a very narrow window to accomplish that so that we wouldn’t affect the Summit – our only source of revenue.  We spent the year after the change rebuilding processes and putting on the Summit in Denver.  That’s a concrete example of a huge change that PASS made to better serve its members.  And it was a change that many in the community were telling us we needed to make. Financials We heard regularly from our members that they wanted our financials posted.  Today on our web site you can find audited financials going back to 2004.  We publish our budget at the start of each year.  If you ask a question about the financials on the PASS site I do my best to answer it.  I’m also trying to do a better job answering financial questions posted in other locations.  (And yes, I know I owe a few of you some blog posts.) That’s another concrete example of a change that our members asked for that the Board agreed was a good decision. Minutes When I started on the Board the meeting minutes were very limited.  The minutes from a two day Board meeting might fit on one page.  I think we did the bare minimum we were legally required to do.  Today Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about.  There are certain topics that are NDA but where possible we try to list the topic we discussed but that the actual discussion was under NDA.  We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision.  It was certainly easier to have limited minutes but I think the extra effort helps our members understand what’s going on. Board Q&A At the 2009 Summit the Board held its first public Q&A with our members.  We’d always been available individually to answer questions.  There’s a benefit to getting us all in one room and asking the really hard questions to watch us squirm.  We learn what questions we don’t have good answers for.  
We get to see how many people in the crowd look interested in the various questions and answers. I don’t recall the genesis of how this came about.  I’m fairly certain there was some community pressure though. Board Votes Until last November, the Board only reported the vote totals and not how individual Board members voted.  That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit.  That was also the topic of the first question asked at the Board Q&A by Kendal.  Kendal expressed his opposition to to anonymous votes clearly and passionately and without trying to paint anyone into a corner.  Less than 24 hours later the PASS Board voted to make individual votes public unless the topic was under NDA.  That’s another area where the Board decided to change based on feedback from our members. Summit Location While this isn’t actually a governance issue it is one of the more public decisions we make that has taken some public criticism.  There is a significant portion of our members that want the Summit near them.  There is a significant portion of our members that like the Summit in Seattle.  There is a significant portion of our members that think it should move around the country.  I was one that felt strongly that there were significant, tangible benefits to our attendees to being in Seattle every year.  I’m also one that has been swayed by some very compelling arguments that we need to have at least one outside Seattle and then revisit the decision.  I can’t tell you how the Board will vote but I know the opinion of our members weighs heavily on the decision. Elections And that brings us to the grand-daddy of all governance issues.  My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback.  It isn’t to defend or criticize our election process.  It’s just to say that is has been under going continuous change since I’ve been on the Board.  I ran for the Board in the fall of 2005.  I don’t know much about what happened before then.  I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee.  I don’t recall any complaints about elections but that doesn’t mean they didn’t occur.  The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (For example, “Tell us about your accomplishments”).  I don’t even remember who I ran against or how many other people ran.  I ran for the VP of Marketing in the fall of 2007.  I don’t recall any significant changes the Board made in the election process for that election.  I think a lot of the changes in 2007 came from us asking the management company to work on the election process.  I was expecting a similar set of puff ball questions from my previous election.  Boy, was I in for a shock.  The NomCom had found a much better set of questions and really made the interview portion difficult.  The questions were much more behavioral in nature.  I’d already written about my vision for PASS and my goals.  They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people and how I responded to motivation. And many, many other things. They grilled me for over an hour.  I’ve done a fair bit of technical sales in my time.  I feel I speak well under pressure addressing pointed questions.  This interview intentionally put me under pressure.  
In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience and my supervisory experience they wanted to see how I’d do under pressure.  They wanted to see who would respond under pressure and who wouldn’t.  It was a bit of a shock. That was the first big change I remember in the election process.  I know there were other improvements around the process but none of them stick in my mind quite like the unexpected hour-long grilling. The next big change I remember was after the 2009 elections.  Andy Warren was unhappy with the election process and wanted to make some changes.  He worked with Hannes at HQ and they came up with a better set of processes.  I think Andy moved PASS in the right direction.  Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process.  In August of 2010 we had a choice to make.  There were numerous bloggers criticizing the Board and our upcoming election.  The easy change would be to announce that we were changing the process in a way that would satisfy our critics.  I believe that a knee-jerk response to criticism is seldom correct. Instead the Board spent August and September and October and November listening to the community.  I visited two SQLSaturdays and asked questions of everyone I could.  I attended chapter meetings and asked questions of as many people as they’d let me.  At Summit I made it a point to introduce myself to strangers and ask them about the election.  At every breakfast I’d sit down at a table full of strangers and ask about the election.  I’m happy to say that I left most tables arguing about the election.  Most days I managed to get 2 or 3 breakfasts in. I spent less time talking to people that had already written about the election.  They were already expressing their opinion.  I wanted to talk to people that hadn’t spoken up.  I wanted to know what the silent majority thought.  The Board all attended the Q&A session where our members expressed their concerns about a variety of issues including the election. The PASS Board also chose to create the Election Review Committee.  We wanted people from the community that had been involved with PASS to look at our election process with fresh eyes while listening to what the community had to say and give us some advice on how we could improve the process.  I’m a part of this as is Andy Warren.  None of the other members are on the Board.  I’ve sat in numerous calls and interviews with this group and attended an open meeting at the Summit.  We asked anyone that wanted to discuss the election to come speak with us.  The ERC held an open meeting at the Summit and invited anyone to attend.  There are forums on the ERC web site where we’ve invited people to participate.  The ERC has reached to key people involved in recent elections.  The years that I haven’t mentioned also saw minor improvements in the election process.  Off the top of my head I don’t recall what exact changes were made each year.  Specifically since the 2010 election we’ve gone out of our way to seek input from the community about the process.  I’m not sure what more we could have done to invite feedback from the community. I think to say that we haven’t “fixed” the election process isn’t a fair criticism at this time.  We haven’t rushed any changes through the process.  
If you don’t see any changes in our election process in July or August then I think it’s fair to criticize us for ignoring the community or ask for an explanation for what we’ve done. In Summary Andy’s main point was that the PASS Board hasn’t changed in response to our members wishes.  I think I’ve shown that time and time again the PASS Board has changed in response to what our members want.  There are only two outstanding issues: Summit location and elections.  The 2013 Summit location hasn’t been decided yet.  Our work on the elections is also in progress.  And at every step in the election review we’ve gone out of our way to listen to the community and incorporate their feedback on the process. I also hope I’m not encouraging everyone that wants some change in the organization to organize a “blog rush” against the Board.  We take public suggestions very seriously but we also take the time to evaluate those suggestions and learn what the rest of our members think and make a measured decision.

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end to end OSCH external tables working and now you want to ramp up the size of the loads. Using OSCH External Tables for Access and Loading OSCH external tables are no different from any other Oracle external tables.  They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle. INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use.  The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section).  Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively. /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load.  In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks.  It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts.  The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes.  The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead.  The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table. Determine Your DOP It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user. 
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances. And suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when your are first seeding tables in a new Oracle database, or there is a time where normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.) Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal work loads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end to end processing time. So for this DOP using 8, 16, or 32 location files would be a good idea. 
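    The arithmetic in Rules 1 through 3 is simple enough to put in a small helper. This is only a sketch of the reasoning above with hypothetical inputs; the DOP you actually get to use is whatever the DBA grants you.

        def plan_osch_load(instances, cpus_per_instance, cores_per_cpu, allowed_dop=None):
            """Apply Rules 1-3: respect the DOP the DBA allows, then propose
            location-file counts that are small multiples of that DOP."""
            system_max_dop = instances * cpus_per_instance * cores_per_cpu
            dop = min(allowed_dop, system_max_dop) if allowed_dop else system_max_dop
            return dop, [dop * m for m in (1, 2, 4)]

        # The running example: a 2-instance RAC, 2 CPUs per instance, 32 cores per CPU,
        # but the DBA only allows a DOP of 8.
        print(plan_osch_load(2, 2, 32, allowed_dop=8))   # -> (8, [8, 16, 32])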
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N number of location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts in some other location file, and so on). The tools ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into160 files that were approximately 10 GB in size.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file.) As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB size. The end to end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop them up into the right number of files roughly the same size, derived from the DOP that you expect to use for loading. Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. 
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods.  This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
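    The divvying-up behavior described under Rule 4 can be sketched as a greedy balancer: repeatedly hand the largest remaining HDFS file to the location file with the lightest workload. This is my own approximation written to illustrate the rule, not the OSCH External Table tool's actual code.

        import heapq

        def balance_location_files(hdfs_file_sizes, num_location_files):
            """Greedy approximation of OSCH's balancing: the biggest remaining
            file goes to the least-loaded location file."""
            heap = [(0, i, []) for i in range(num_location_files)]   # (workload, id, files)
            heapq.heapify(heap)
            for size in sorted(hdfs_file_sizes, reverse=True):
                load, i, files = heapq.heappop(heap)
                files.append(size)
                heapq.heappush(heap, (load + size, i, files))
            return sorted(heap)

        # The skewed example from the text: one 900GB file plus seven 100GB files across
        # 8 location files is stuck with a 9:1 imbalance no matter how the tool divides them.
        for load, _, files in balance_location_files([900] + [100] * 7, 8):
            print(load, files)

    Splitting the same 1600GB payload into many similarly sized files, as Rule 4 recommends, lets the same greedy pass produce nearly equal workloads per location file.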

    Read the article

  • ????ASMM

    - by Liu Maclean(???)
    ???Oracle??????????????SGA/PGA???,????10g????????????ASMM????,????????ASMM?????????Oracle??????????,?ASMM??????DBA????????????;????????ASMM???????????????DBA???:????????????DB,?????????????DBA?????????????????????????????????,ASMM??????????,???????????,??????????,??????????????????;?10g release 1?10.2??????ASMM?????????????,???????ASMM????????ASMM?????startup???????????ASMM??AMM??,????????DBA????SGA/PGA?????????”??”??”???”???,???????????DBA????chemist(???????1??2??????????????)? ?????????????????ASMM?????,?????????????…… Oracle?SGA???????9i???????????,????: Buffer Cache ????????????,??????????????? Default Pool                  ??????,???DB_CACHE_SIZE?? Keep Pool                     ??????,???DB_KEEP_CACHE_SIZE?? Non standard pool         ???????,???DB_nK_cache_size?? Recycle pool                 ???,???db_recycle_cache_size?? Shared Pool ???,???shared_pool_size?? Library cache   ?????? Row cache      ???,?????? Java Pool         java?,???Java_pool_size?? Large Pool       ??,???Large_pool_size?? Fixed SGA       ???SGA??,???Oracle???????,?????????granule? ?9i?????ASMM,???????????SGA,??????MSMM??9i???buffer cache??????????,?????????????????????????,???9i?????????????,?????????????????????????? ????SGA?????: ?????shared pool?default buffer pool????????,??????????? ?9i???????????(advisor),?????????? ??????????????? ?????????,?????? ?????,?????ORA-04031?????????? ASMM?????: ?????????? ???????????????? ???????sga_target?? ???????????,??????????? ??MSMM???????: ???? ???? ?????? ???? ??????????,??????????? ??????????????????,??????????ORA-04031??? ASMM???????????:1.??????sga_target???????2.???????,???:????(memory component),????(memory broker)???????(memory mechanism)3.????(memory advisor) ASMM????????????(Automatically set),??????:shared_pool_size?db_cache_size?java_pool_size?large_pool _size?streams_pool_size;?????????????????,???:db_keep_cache_size?db_recycle_cache_size?db_nk_cache_size?log_buffer????SGA?????,????????????????,??log_buffer?fixed sga??????????????? ??ASMM?????????sga_target??,???????ASMM??????????????????db_cache_size?java_pool_size???,?????????????????????,????????????????????(???)????????,Oracle?????????(granule,?SGA<1GB?granule???4M,?SGA>1GB?granule???16M)???????,??????????????buffer cache,??????????????????(granule)??????????????????????sga_target??,???????????????????(dism,???????)???ASMM?????????????statistics_level?????typical?ALL,?????BASIC??MMON????(Memory Monitor is a background process that gathers memory statistics (snapshots) stores this information in the AWR (automatic workload repository). 
MMON is also responsible for issuing alerts for metrics that exceed their thresholds)?????????????????????ASMM?????,???????????sga_target?????statistics_level?BASIC: SQL> show parameter sga NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ lock_sga boolean FALSE pre_page_sga boolean FALSE sga_max_size big integer 2000M sga_target big integer 2000M SQL> show parameter sga_target NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ sga_target big integer 2000M SQL> alter system set statistics_level=BASIC; alter system set statistics_level=BASIC * ERROR at line 1: ORA-02097: parameter cannot be modified because specified value is invalid ORA-00830: cannot set statistics_level to BASIC with auto-tune SGA enabled ?????server parameter file?spfile??,ASMM????shutdown??????????????(Oracle???????,????????)???spfile?,?????strings?????spfile????????????????????,?: G10R2.__db_cache_size=973078528 G10R2.__java_pool_size=16777216 G10R2.__large_pool_size=16777216 G10R2.__shared_pool_size=1006632960 G10R2.__streams_pool_size=67108864 ???spfile?????????????????,???????????”???”?????,??????????”??”?? ?ASMM?????????????? ?????(tunable):????????????????????????????buffer cache?????????,cache????????????????,?????????? IO????????????????????????????Library cache????? subheap????,?????????????????????????????????(open cursors)?????????client??????????????buffer cache???????,???????????pin??buffer???(???????) ?????(Un-tunable):???????????????????,?????????????????,?????????????????????????large pool?????? ??????(Fixed Size):???????????,??????????????????????????????????????? ????????????????(memory resize request)?????????,?????: ??????(Immediate Request):???????????ASMM????????????????????????(chunk)?,??????OUT-OF-MEMORY(ORA-04031)???,????????????????????(granule)????????????????????granule,????????????,?????????????????????????????,????granule??????????????? ??????(Deferred Request):???????????????????????????,??????????????granule???????????????MMON??????????delta. ??????(Manual Request):????????????alter system?????????????????????????????????????????????????granule,??????grow?????ORA-4033??,?????shrink?????ORA-4034??? ?ASMM????,????(Memory Broker)????????????????????????????(Deferred)??????????????????????(auto-tunable component)???????????????,???????????????MMON??????????????????????????????????,????????????????;MMON????Memory Broker?????????????????????????MMON????????????????????????????????????????(resize request system queue)?MMAN????(Memory Manager is a background process that manages the dynamic resizing of SGA memory areas as the workload increases or decreases)??????????????????? ?10gR1?Shared Pool?shrink??????????,?????????????Buffer Cache???????????granule,????Buffer Cache?granule????granule header?Metadata(???buffer header??RAC??Lock Elements)????,?????????????????????shared pool????????duration(?????)?chunk??????granule?,????????????granule??10gR2????Buffer Cache Granule????????granule header?buffer?Metadata(buffer header?LE)????,??shared pool???duration?chunk????????granule,??????buffer cache?shared pool??????????????10gr2?streams pool?????????(???????streams pool duration????) ??????????(Donor,???trace????)???,?????????granule???buffer cache,????granule????????????: ????granule???????granule header ?????chunk????granule?????????buffer header ???,???chunk??????????????????????metadata? ???2-4??,???granule???? 
??????????????????,??buffer cache??granule???shared pool?,???????: MMAN??????????buffer cache???granule MMAN????granule??quiesce???(Moving 1 granule from inuse to quiesce list of DEFAULT buffer cache for an immediate req) DBWR???????quiesced???granule????buffer(dirty buffer) MMAN??shared pool????????(consume callback),granule?free?chunk???shared pool??(consume)?,????????????????????granule????shared granule??????,???????????granule???????????,??????pin??buffer??Metadata(???buffer header?LE)?????buffer cache??? ???granule???????shared pool,???granule?????shared??? ?????ASMM???????????,??????????: _enabled_shared_pool_duration:?????????10g????shared pool duration??,?????sga_target?0?????false;???10.2.0.5??cursor_space_for_time???true??????false,???10.2.0.5??cursor_space_for_time????? _memory_broker_shrink_heaps:???????0??Oracle?????shared pool?java pool,??????0,??shrink request??????????????????? _memory_management_tracing: ???????MMON?MMAN??????????(advisor)?????(Memory Broker)?????trace???;??ORA-04031????????36,???8?????????????trace,???23????Memory Broker decision???,???32???cache resize???;??????????: Level Contents 0×01 Enables statistics tracing 0×02 Enables policy tracing 0×04 Enables transfer of granules tracing 0×08 Enables startup tracing 0×10 Enables tuning tracing 0×20 Enables cache tracing ?????????_memory_management_tracing?????DUMP_TRANSFER_OPS????????????????,?????????????????trace?????????mman_trace?transfer_ops_dump? SQL> alter system set "_memory_management_tracing"=63; System altered Operation make shared pool grow and buffer cache shrink!!!.............. ???????granule?????,????default buffer pool?resize??: AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0 /* ???0xdc9c2628??????addr */ AUTO SGA: IMMEDIATE, FG request 0xdc9c2628 /* ???????????Immediate???? */ AUTO SGA: Receiver of memory is shared pool, size=16, state=3, flg=0 /* ?????????shared pool,???,????16?granule,??grow?? */ AUTO SGA: Donor of memory is DEFAULT buffer cache, size=106, state=4, flg=0 /* ???????Default buffer cache,????,????106?granule,??shrink?? */ AUTO SGA: Memory requested=3896, remaining=3896 /* ??immeidate request???????3896 bytes */ AUTO SGA: Memory received=0, minreq=3896, gransz=16777216 /* ????free?granule,??received?0,gransz?granule??? */ AUTO SGA: Request 0xdc9c2628 status is INACTIVE /* ??????????,??????inactive?? */ AUTO SGA: Init bef rsz for request 0xdc9c2628 /* ????????before-process???? */ AUTO SGA: Set rq dc9c2628 status to PENDING /* ?request??pending?? 
*/ AUTO SGA: 0xca000000 rem=3896, rcvd=16777216, 105, 16777216, 17 /* ???????0xca000000?16M??granule */ AUTO SGA: Returning 4 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 4, 1, a AUTO SGA: Resize done for pool DEFAULT, 8192 /* ???default pool?resize */ AUTO SGA: Init aft rsz for request 0xdc9c2628 AUTO SGA: Request 0xdc9c2628 after processing AUTO SGA: IMMEDIATE, FG request 0x7fff917964a0 AUTO SGA: Receiver of memory is shared pool, size=17, state=0, flg=0 AUTO SGA: Donor of memory is DEFAULT buffer cache, size=105, state=0, flg=0 AUTO SGA: Memory requested=3896, remaining=0 AUTO SGA: Memory received=16777216, minreq=3896, gransz=16777216 AUTO SGA: Request 0x7fff917964a0 status is COMPLETE /* shared pool????16M?granule */ AUTO SGA: activated granule 0xca000000 of shared pool ?????partial granule????????????trace: AUTO SGA: Request 0xdc9c2628 after pre-processing, ret=0 AUTO SGA: IMMEDIATE, FG request 0xdc9c2628 AUTO SGA: Receiver of memory is shared pool, size=82, state=3, flg=1 AUTO SGA: Donor of memory is DEFAULT buffer cache, size=36, state=4, flg=1 /* ????????shared pool,?????default buffer cache */ AUTO SGA: Memory requested=4120, remaining=4120 AUTO SGA: Memory received=0, minreq=4120, gransz=16777216 AUTO SGA: Request 0xdc9c2628 status is INACTIVE AUTO SGA: Init bef rsz for request 0xdc9c2628 AUTO SGA: Set rq dc9c2628 status to PENDING AUTO SGA: Moving granule 0x93000000 of DEFAULT buffer cache to activate list AUTO SGA: Moving 1 granule 0x8c000000 from inuse to quiesce list of DEFAULT buffer cache for an immediate req /* ???buffer cache??????0x8c000000?granule??????inuse list, ???????quiesce list? */ AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: activated granule 0x93000000 of DEFAULT buffer cache AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 / * ??dbwr??0x8c000000 granule????dirty buffer */ AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 AUTO SGA: Returning 0 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 0, 1, 20a AUTO SGA: NOT_FREE for imm req for gran 0x8c000000 ......................................... 
AUTO SGA: Rcv shared pool consuming 8192 from 0x8c000000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 90112 from 0x8c002000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 24576 from 0x8c01a000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 65536 from 0x8c022000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 131072 from 0x8c034000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 286720 from 0x8c056000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 98304 from 0x8c09e000 in granule 0x8c000000; owner is DEFAULT buffer cache AUTO SGA: Rcv shared pool consuming 106496 from 0x8c0b8000 in granule 0x8c000000; owner is DEFAULT buffer cache ..................... /* ??shared pool????0x8c000000 granule??chunk, ??granule?owner????default buffer cache */ AUTO SGA: Imm xfer 0x8c000000 from quiesce list of DEFAULT buffer cache to partial inuse list of shared pool /* ???0x8c000000 granule?default buffer cache????????shared pool????inuse list */ AUTO SGA: Returning 4 from kmgs_process for request dc9c2628 AUTO SGA: Process req dc9c2628 ret 4, 1, 20a AUTO SGA: Init aft rsz for request 0xdc9c2628 AUTO SGA: Request 0xdc9c2628 after processing AUTO SGA: IMMEDIATE, FG request 0x7fffe9bcd0e0 AUTO SGA: Receiver of memory is shared pool, size=83, state=0, flg=1 AUTO SGA: Donor of memory is DEFAULT buffer cache, size=35, state=0, flg=1 AUTO SGA: Memory requested=4120, remaining=0 AUTO SGA: Memory received=14934016, minreq=4120, gransz=16777216 AUTO SGA: Request 0x7fffe9bcd0e0 status is COMPLETE /* ????partial transfer?? */ ?????partial transfer??????DUMP_TRANSFER_OPS????0x8c000000 partial granule???????,?: SQL> oradebug setmypid; Statement processed. SQL> oradebug dump DUMP_TRANSFER_OPS 1; Statement processed. SQL> oradebug tracefile_name; /s01/admin/G10R2/udump/g10r2_ora_21482.trc =======================trace content============================== GRANULE SIZE is 16777216 COMPONENT NAME : shared pool Number of granules in partially inuse list (listid 4) is 23 Granule addr is 0x8c000000 Granule owner is DEFAULT buffer cache /* ?0x8c000000 granule?shared pool?partially inuse list, ?????owner??default buffer cache */ Granule 0x8c000000 dump from owner perspective gptr = 0x8c000000, num buf hdrs = 1989, num buffers = 156, ghdr = 0x8cffe000 / * ?????granule?granule header????0x8cffe000, ????156?buffer block,1989?buffer header */ /* ??granule??????,??????buffer cache??shared pool chunk */ BH:0x8cf76018 BA:(nil) st:11 flg:20000 BH:0x8cf76128 BA:(nil) st:11 flg:20000 BH:0x8cf76238 BA:(nil) st:11 flg:20000 BH:0x8cf76348 BA:(nil) st:11 flg:20000 BH:0x8cf76458 BA:(nil) st:11 flg:20000 BH:0x8cf76568 BA:(nil) st:11 flg:20000 BH:0x8cf76678 BA:(nil) st:11 flg:20000 BH:0x8cf76788 BA:(nil) st:11 flg:20000 BH:0x8cf76898 BA:(nil) st:11 flg:20000 BH:0x8cf769a8 BA:(nil) st:11 flg:20000 BH:0x8cf76ab8 BA:(nil) st:11 flg:20000 BH:0x8cf76bc8 BA:(nil) st:11 flg:20000 BH:0x8cf76cd8 BA:0x8c018000 st:1 flg:622202 ............... 
Address 0x8cf30000 to 0x8cf74000 not in cache Address 0x8cf74000 to 0x8d000000 in cache Granule 0x8c000000 dump from receivers perspective Dumping layout Address 0x8c000000 to 0x8c018000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c018000 to 0x8c01a000 not in this pool Address 0x8c01a000 to 0x8c020000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c020000 to 0x8c022000 not in this pool Address 0x8c022000 to 0x8c032000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c032000 to 0x8c034000 not in this pool Address 0x8c034000 to 0x8c054000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c054000 to 0x8c056000 not in this pool Address 0x8c056000 to 0x8c09c000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c09c000 to 0x8c09e000 not in this pool Address 0x8c09e000 to 0x8c0b6000 in sga heap(1,3) (idx=1, dur=4) Address 0x8c0b6000 to 0x8c0b8000 not in this pool Address 0x8c0b8000 to 0x8c0d2000 in sga heap(1,3) (idx=1, dur=4) ???????granule?????shared granule??????,?????????buffer block,????1?shared subpool??????durtaion?4?chunk,duration=4?execution duration;??duration?chunk???????????,??extent???quiesce list??????????????free?execution duration?????????????,??????duration???extent(??????extent????granule)??????? ?????????????ASMM?????????,????: V$SGAINFODisplays summary information about the system global area (SGA). V$SGADisplays size information about the SGA, including the sizes of different SGA components, the granule size, and free memory. V$SGASTATDisplays detailed information about the SGA. V$SGA_DYNAMIC_COMPONENTSDisplays information about the dynamic SGA components. This view summarizes information based on all completed SGA resize operations since instance startup. V$SGA_DYNAMIC_FREE_MEMORYDisplays information about the amount of SGA memory available for future dynamic SGA resize operations. V$SGA_RESIZE_OPSDisplays information about the last 400 completed SGA resize operations. V$SGA_CURRENT_RESIZE_OPSDisplays information about SGA resize operations that are currently in progress. A resize operation is an enlargement or reduction of a dynamic SGA component. V$SGA_TARGET_ADVICEDisplays information that helps you tune SGA_TARGET. ?????????shared pool duration???,?????????

    Read the article

  • Device Manager is constantly refreshing - what is wrong?

    - by Jook
    I ran into a quite odd problem after installing some drivers on my Lenovo N100 0768 (Windows 7) notebook and attaching a USB HDD. Now I have sound, but I constantly hear the device-disconnected sound - like every 2 seconds! I looked at Device Manager and it is flashing in sync with the sound. A quick search on the net pointed me towards driver issues or issues with attached USB devices. No big surprise there - but how can I solve this?

    Read the article

  • What’s the best way to label cables in a data center

    - by Ben
    We're in the midst of planning a big data center renovation at my office, which is going to result in a completely new power and network infrastructure. As part of this, I'd like to label all of our cables properly and sanely. What are your best practices for labeling patch panels, cables, power whips, and anything and everything else in a data center that you'd label?

    Read the article

  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign Pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me. And Google searches haven't returned much fruitful. Edit: I'm updating this to now include my present steps 1) I create a Master Page of my template 2) I add a bunch of text frames where I want the imported data from the XML file to be places 3) I open the "Tags" window and Import and XML file 4) I mark my text frames in the Master Document with the appropriate tags 5) I then add a lot of pages (like 200) to the document 6) Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail. So there's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Anyone have any good tips for mail-merge like functionality with an XML document and auto-generation of InDesign pages? BTW, here's an example of Adobe's great documentation for merging repeated XML elements. There's gotta be more...InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data EDIT: Here's some of the sample XML, notice the ITEM will repeat. I've also truncated the data in the "desc" tag: <output> <item> <user_name>taude</user_name> <date>2009-02-21</date> <title>Wishful Thinking</title> <desc>Skiing up in Vermont on a beautiful day. This photo of</desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail> </item> <item> <user_name>taude</user_name> <date>2009-02-22</date> <title>Skiing Self Portrait</title> <desc>I was inspired by ML's self-portrait while </desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail> </item> </output> Here's what my imported XML looks like with the InDesign Structure

    Read the article

  • avi to mpeg4 command line convertor

    - by Samvel Siradeghyan
    Hi all, I am writing a program for recording IP camera video. I use the AForge framework and can save video in AVI format, but its size is too big. I need a command-line program to convert videos from AVI to MPEG-4 format. Is there a free program that does this, and if so, where can I download it and how do I use it? Thanks.

    Read the article

  • Backup NAS to another NAS?

    - by Tronic
    Hey there, is there a way to back up / transfer a NAS (it's like a big hard drive with a web interface, but a closed Linux system, etc. - only two Ethernet ports) to another one? Or is the only way to connect both NASs to my computer and copy the data from one to the other? It's about 750 GB of data... Thanks in advance. Regards

    Read the article

  • Windows 7 search not finding files

    - by Rob Nicholson
    Can anyone please explain this quirk in Windows 7 search (not a big fan of it - I preferred the XP approach, or at least having both)? With Outlook, you sometimes have to find and delete your OST file, which resides in the user's profile folder. How come searching the entire C: drive for *.ost files works - they are somewhere under c:\Users\rob.nicholson\appdata - but starting the search from c:\Users\rob.nicholson fails to find the files? Cheers, Rob.
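
    Not an explanation of the search quirk itself, but as a workaround you can bypass Windows Search entirely and walk the profile folder yourself. A minimal sketch, with the profile path as a placeholder:

        # Sketch: list .ost files under a profile folder without relying on Windows
        # Search, whose index excludes AppData by default so indexed searches miss it.
        # The profile path is a placeholder; adjust it to the account in question.
        import os

        profile = r"C:\Users\rob.nicholson"
        for root, _dirs, files in os.walk(profile):
            for name in files:
                if name.lower().endswith(".ost"):
                    print(os.path.join(root, name))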

    Read the article

  • Is ext4 ready for production use?

    - by Konstantin
    Hi, what do you think about the ext4 filesystem in a production environment? We are very close to launching a project that will use tens of millions of fairly small, frequently updated files, and we need to decide which FS to use. So far our thoughts on the other Linux filesystems are: ext3 is rock stable, but not well suited to handling millions of small files; XFS looks very nice, and we'll probably use it; ReiserFS... well, its future is vague - who will end up fixing the bugs?

    Read the article

  • Debian (wheezy) force cache to RAM

    - by Marek Javurek
    I have a Linux server running about 6 game servers. I have 3 GB of RAM in total but only use about 500 MB. Is there a way to cache one of my game servers (all of its files, even maps that aren't currently in use - about 1.5 GB) in RAM? The reason I want to do this is that the server is virtual and its hard drives are very slow, so there is a really big I/O wait time. I/O graph: http://i.stack.imgur.com/7HLhB.png
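
    One simple approach - a sketch rather than a guarantee, since the kernel is free to evict the pages again under memory pressure - is just to read every file once so it lands in the Linux page cache; the directory path below is a placeholder. Dedicated tools such as vmtouch, or copying the files onto a tmpfs mount, are the more deliberate alternatives.

        # Sketch: warm the page cache by reading every file under a directory.
        # Pages stay cached only while RAM is free; the path is a placeholder.
        import os

        def warm_cache(root):
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as f:
                            while f.read(1024 * 1024):   # 1 MB chunks, data is discarded
                                pass
                    except OSError:
                        pass   # skip unreadable entries (permissions, special files, ...)

        warm_cache("/srv/gameserver1")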

    Read the article

  • OneNote 2007 - recommendation(s) of a good place/tutorials to learn features

    - by studiohack23
    I'm pretty familiar with Office 2007; however, I have only recently acquired OneNote 2007. It's such a big and powerful tool that I'm pretty much lost on its features and how to use it, and I don't really know where to start. I'm looking for recommendations on a good place to learn more about OneNote, its features, and what it can do. For what it's worth, I'm a student, so student perspectives on how to use and learn OneNote would be awesome! Thanks!

    Read the article

  • How to create a 1280x720 music video from a static picture

    - by monov
    I want to upload a song to YouTube and use a static 1280x720 picture as the video. I'd like the format to be one of the recommended ones. I tried Windows Movie Maker 2.6, but it only generated a 640x480 video. I also tried Windows Live Movie Maker, but it put a big black margin around my picture (and, inexplicably, produced a video with slightly lower volume). Do you know any way to do what I need?
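
    One free route is FFmpeg, which can loop a still image for the length of the audio track. A minimal sketch, assuming ffmpeg is installed and on the PATH; cover.jpg, song.mp3 and video.mp4 are placeholder file names.

        # Sketch: build a 1280x720 video from one still image plus an audio track.
        # Assumes ffmpeg is on the PATH; all file names are placeholders.
        import subprocess

        subprocess.run(
            ["ffmpeg",
             "-loop", "1", "-i", "cover.jpg",        # repeat the still image as the video stream
             "-i", "song.mp3",                       # the audio track
             "-c:v", "libx264", "-tune", "stillimage",
             "-vf", "scale=1280:720",                # force a 1280x720 frame
             "-pix_fmt", "yuv420p",                  # pixel format most players accept
             "-c:a", "aac",
             "-shortest",                            # stop when the audio ends
             "video.mp4"],
            check=True,
        )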

    Read the article

  • Australian Wifi Hotspot Providers

    - by Dan
    More informational than techy, but are there any Australians (or frequent visitors to Australia) who can tell me who the big players are in providing Wi-Fi hotspots over there? A colleague is going there for a few weeks and will need to connect back remotely, so I want to be able to steer him towards some of the bigger names. Thanks.

    Read the article

  • DNS down in Anonymous attack

    - by Tal Weiss
    As I'm writing this, our company website and the web service we developed are down in the big GoDaddy outage resulting from an Anonymous attack (or so Twitter says). We use GoDaddy as our registrar and for DNS on some domains. Tomorrow is a new day - what can we do to mitigate such outages? Simply moving to, say, Route 53 for DNS might not be enough. Is there any way to remove this single point of failure?
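
    The usual mitigation is secondary DNS: spread the zone across at least two independent providers so the NS records never all point at one network. As a quick check of where a delegation stands today, here is a small sketch that assumes the third-party dnspython package is installed and uses example.com as a placeholder domain.

        # Sketch: list a domain's name servers and group them by parent domain,
        # as a rough proxy for "provider". One group = a single point of failure.
        # Requires dnspython; example.com is a placeholder.
        import dns.resolver

        def delegation(domain):
            answers = dns.resolver.resolve(domain, "NS")
            hosts = sorted(str(r.target).rstrip(".") for r in answers)
            providers = {".".join(h.split(".")[-2:]) for h in hosts}
            return hosts, providers

        hosts, providers = delegation("example.com")
        print(hosts)
        if len(providers) < 2:
            print("All NS records sit with one provider - consider adding a second.")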

    Read the article

  • Cyberlink PowerDVD 9 on netbook

    - by marc_s
    I tried to install CyberLink's PowerDVD 9 on a friend's netbook. The installation went OK (even though the install screen is too big to fit, so you can't see the "Next" buttons etc.), but once installed, PowerDVD 9 refuses to launch. It claims it requires at least 1024x768 resolution, and the Acer netbook has 1024x600. :-( Is there any way / hack / trick to get PowerDVD 9 to work anyway? Couldn't it scale down to, e.g., 800x600?

    Read the article

  • Some European CDNs?

    - by ajsie
    Does anyone know of some CDN providers based in Europe? I keep seeing big names like Amazon, Google, Akamai, SimpleCDN and Yahoo in all my Google searches. Does it even matter where a CDN provider is located? Apparently it shouldn't, since their network grids are worldwide. :)

    Read the article
