Search Results

Search found 3207 results on 129 pages for 'david hyde'.

Page 6/129 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Choose a local file w/o uploading the chosen file

    - by Austin Hyde
    I am making a simple development tool for myself using PHP on my local development server. I would like a way to have a simple file-chooser to select a file without uploading it, but just retaining the file path. This is useful, because I will be the only one using the tool, and so PHP will have access to the chosen file without having it uploaded. My first thought is to have a <input type="file"...>, but as far as I know, there's no way to prevent the upload from happening. Is there a way to do this?
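
    One possible approach, sketched below under the assumption that the tool only ever runs against the local development box: skip <input type="file"> entirely and have PHP enumerate a directory, so the form posts back only the chosen path string (the $dir value and the "path" field name are placeholders).

        <?php
        // List files in a local directory; nothing is uploaded,
        // only the selected path string is posted back to the script.
        $dir = '/path/to/files';                         // placeholder directory
        $files = array_diff(scandir($dir), array('.', '..'));

        if (isset($_POST['path'])) {
            // The full path is immediately usable by PHP (fopen, include, etc.)
            echo 'You picked: ' . htmlspecialchars($_POST['path']);
        }
        ?>
        <form method="post">
          <select name="path">
            <?php foreach ($files as $f): ?>
              <option value="<?php echo htmlspecialchars("$dir/$f"); ?>">
                <?php echo htmlspecialchars($f); ?>
              </option>
            <?php endforeach; ?>
          </select>
          <input type="submit" value="Choose" />
        </form>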

    Read the article

  • Scripting Languages vs. Compiled Languages for web development

    - by Austin Hyde
    Though I come from a purely PHP background on the web development side of programming, I have also spent much time with C# and C++ on the desktop. I don't really want to spark any flame wars, but: When should you use scripting languages over compiled languages for website development? (and vice versa) Just to clarify, for the sake of this question, I define a "scripting language" to mean an interpreted language like PHP, Python, or Ruby, and a "compiled language" to mean a strongly typed, compiled language like C#, C++, Java, or VB.

    Read the article

  • "Undefined Symbols" when inheriting from stdexcept classes

    - by Austin Hyde
    Here is an exception defined in <stdexcept>: class length_error : public logic_error { public: explicit length_error(const string& __arg); }; Here is my exception: class rpn_expression_error : public logic_error { public: explicit rpn_expression_error(const string& __arg); }; Why do I get this error when <stdexcept> does not? Undefined symbols: rpn_expression_error::rpn_expression_error(/*string*/ const&), referenced from: ... ld: symbol(s) not found
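
    For context, <stdexcept> can get away with only declaring length_error's constructor because the definition is already compiled into the standard library the program links against; a user-defined exception has no such library behind it, hence the linker error. A minimal sketch of the usual fix is to supply the definition yourself, for example inline:

        #include <stdexcept>
        #include <string>

        class rpn_expression_error : public std::logic_error {
        public:
            // Defining (not merely declaring) the constructor gives the linker
            // a symbol to resolve; forwarding to logic_error does the rest.
            explicit rpn_expression_error(const std::string& arg)
                : std::logic_error(arg) {}
        };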

    Read the article

  • C# style Action<T>, Func<T,T>, etc in C++0x

    - by Austin Hyde
    C# has generic function types such as Action<T> or Func<T,U,V,...> With the advent of C++0x and the ability to have template typedef's and variadic template parameters, it seems this should be possible. The obvious solution to me would be this: template <typename T> using Action<T> = void (*)(T); however, this does not accommodate for functors or C++0x lambdas, and beyond that, does not compile with the error "expected unqualified-id before 'using'" My next attempt was to perhaps use boost::function: template <typename T> using Action<T> = boost::function<void (T)>; This doesn't compile either, for the same reason. My only other idea would be STL style template arguments: template <typename T, typename Action> void foo(T value, Action f) { f(value); } But this doesn't provide a strongly typed solution, and is only relevant inside the templated function. Now, I will be the first to admit that I am not the C++ wiz I prefer to think I am, so it's very possible there is an obvious solution I'm not seeing. Is it possible to have C# style generic function types in C++?
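
    For what it's worth, the alias-template syntax drops the angle brackets on the left-hand side, which is likely what triggers the "expected unqualified-id" error above. A sketch of the std::function-based version (this assumes a compiler with C++11 alias-template support, e.g. GCC 4.7 or later; earlier "C++0x" compilers may still reject it):

        #include <functional>

        // The new name appears bare on the left; the parameters stay on the template.
        template <typename T>
        using Action = std::function<void(T)>;

        template <typename T, typename U, typename V>
        using Func = std::function<V(T, U)>;

        // Usage: accepts function pointers, functors and lambdas alike, e.g.
        //   Action<int> print = [](int x) { std::cout << x << '\n'; };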

    Read the article

  • 5 Ways to Celebrate the Release of Internet Explorer 9

    - by David Wesst
    The day has finally come: Microsoft has released a web browser that is awesome. On Monday night, Microsoft officially introduced the world to the latest addition to its product family: Internet Explorer 9. That makes March 14, 2011 (also known as PI day) the official birthday of Microsoft’s rebirth in the world of web browsing. Just like any big event, you take some time to celebrate. Here are a few things that you can do to celebrate the return of Internet Explorer. 1. Download It If you’re not a big partier, that’s fine. The one thing you can do (and definitely should) is download it and give it a shot. Sure, IE may have disappointed you in the past, but believe me when I say they really put the effort in this time. The absolute least you can do is give it a shot to see how it stands up against your favourite browser. 2. Get yourself an HTML5 Shirt One of the coolest, if not best parts of IE9 being released is that it officially introduces HTML5 as a fully supported platform from Microsoft. IE9 supports a lot of what is already defined in the HTML5 technical spec, which really demonstrates Microsoft’s support of the new standard. Since HTML5 is cool on the web, it means that it is cool to wear it too. Head over to html5shirt.com and get yourself, or your staff, or your whole family, an HTML5 shirt to show the real world that you are ready for the future of the web. 3. HTML5-ify Something Okay, so maybe a shirt isn’t enough for you. Maybe you need to start using HTML5 for real. If you have a blog, or a website, or anything out there on the web, celebrate IE9 by adding some HTML5 to your site. Whether that is updating old code, adding something new, or just changing your WordPress theme, definitely take a look at what HTML5 can do for you. 4. Help Kill Old IE and Upgrade your Organization See this? This is sad. Upgrading web browsers in a large enterprise or organization is not a trivial task. A lot of companies will use the excuse of not having the resources to upgrade legacy web applications that were built for a specific version of IE and don’t render correctly in newer browsers. Well, it’s time to stop the excuses. IE9 allows you to define what version of Internet Explorer you would like it to emulate. It takes minimal effort for the developer, and will get rid of the excuses. Show your IT manager or software development team this link and show them how easy it is to make old code render right in the latest and greatest from the IE team. 5. Submit an Entry for DevUnplugged So, you’ve made it to number five, eh? Well then, you must be pretty hardcore to make it this far down the list. Fine, let’s take it to the next level and build an HTML5 game. That’s right. A game. Like a video game. HTML5 introduces some amazing new features that can let you build working video games using HTML5, CSS3, and JavaScript. Plus, Microsoft is celebrating the launch of IE9 with a contest where you can submit an HTML5 game (or audio application) and have a chance to win a whack of cash and other prizes. Head here for the full scoop and rules for the DevUnplugged. This post also appears at http://david.wes.st
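
    As a rough illustration of the emulation point in #4: the documented switch is the X-UA-Compatible value, shown here as a page-level meta tag (it can also be sent as an HTTP response header).

        <!-- Ask IE9 to render this page as IE7 would, until the app is modernised -->
        <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />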

    Read the article

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and step in the right direction for us .NET web developers as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short lived as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010. The Mystery After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the video codec I used to encode my video. Here's the code I used: <video width="854" height="480" id="myOtherVideo" autoplay="" controls=""> <source src="/DemoSite1/Media/big_buck_bunny.mp4"/> <div> <p>Your browser does not support HTML5 Video.</p> </div> </video> As you can see from the code, I had the "fail-safe" code inside the video tag. The idea there being that if the video tag, or the video files themselves, are not supported by the browser my video should fail gracefully. What was even more strange was the fact that it worked in all the other HTML5 browsers that supported video. The Investigation Whoa! DJ stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching on the web and asking around on the web, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) works on the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer's Tools in IE9 and use the new Network Tab. The Conclusion If you take a closer look at the results displayed from the Network tab, you can see that IE9 RC has interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then, I added the MIME types that I want to support (i.e. video/mp4, video/ogg, video/webm). Et voila! The Mystery of the Broken Video Tag is solved. After Thoughts After solving the mystery, I still had the question about why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer that I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading rather than trying to read the metadata about the data it is trying to download before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. Makes sense to me. In any case, that is just an educated guess. If you have any comments, feel free to post on them below. This post also appears at http://david.wes.st
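
    For anyone hitting the same thing, a sketch of the equivalent fix expressed in web.config (IIS 7+ static content settings; the extensions listed are the ones mentioned above, and this assumes they are not already registered at a higher level):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Serve HTML5 video with the correct Content-Type so IE9 will play it -->
              <mimeMap fileExtension=".mp4"  mimeType="video/mp4" />
              <mimeMap fileExtension=".ogv"  mimeType="video/ogg" />
              <mimeMap fileExtension=".webm" mimeType="video/webm" />
            </staticContent>
          </system.webServer>
        </configuration>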

    Read the article

  • Mercurial: pull changes from unversioned copy

    - by Austin Hyde
    I am currently maintaining a Mercurial repository of the project I am working on. The rest of the team, however, doesn't. There is a "good" (unversioned) copy of the code base that I can access by SSH. What I would like to do is be able to do something like an hg pull from that good copy into my master repository whenever it gets updated. As far as I can tell, there's no obvious way to do this, as hg pull requires that you have a source hg repository. I suppose I could use a utility like rsync to update my repository, then commit, but I was wondering: Is there an easier/less contrived way to do this?
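
    A sketch of the rsync-based approach, since hg pull really does need a repository on the other end (the host and paths are placeholders; --exclude keeps the local .hg directory intact):

        # Mirror the "good" copy over SSH into the repository's working directory,
        # then record whatever changed as an ordinary commit.
        rsync -av --delete --exclude='.hg' user@host:/path/to/good/copy/ .
        hg addremove                  # pick up files that were added or removed
        hg commit -m "Sync from unversioned master copy"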

    Read the article

  • CSS doesn't apply to dynamically created elements in IE 7?

    - by Austin Hyde
    In the project I am working on, I dynamically generate (with javascript) filters that look like this: <div class="filter"> <a ... class="filter_delete_link">Delete</a> <div class="filter_field"> ... </div> <div class="filter_compare"> ... </div> <div class="filter_constraint"> ... </div> <div class="filter_logic"> ... </div> </div> And I have CSS that applies to each filter (for example): .filter a.filter_delete_link{ display:block; height:16px; background: url('../images/remove_16.gif') no-repeat; padding-left:20px; } However, it seems in IE 7 (and probably 6 for that matter), these styles don't get applied to the new filters. Everything works perfectly in Firefox/Chrome/IE8. Using the IE8 developer tools, set to IE7 mode, the browser can see the new elements, and can see the CSS, but just isn't applying the CSS. Is there a way to force IE to reload styles, or perhaps is there a better way to fix this?
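
    One common culprit when this only bites IE6/7 is assigning the class with setAttribute('class', ...), which those versions do not map onto the element's className, so the CSS selectors never match the generated nodes. A small sketch of the difference (assuming the filters are built with createElement; the class name is taken from the markup above):

        // IE6/7 ignore setAttribute('class', ...) for styling purposes.
        var del = document.createElement('a');
        del.setAttribute('class', 'filter_delete_link');   // styled everywhere except IE6/7

        // Setting the DOM property (or using a library helper) works in all of them.
        del.className = 'filter_delete_link';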

    Read the article

  • Advice on a simple Windows Form

    - by Austin Hyde
    I have a VERY simple windows form that the user uses to manage "Stores". Each store has a name and number, and is kept in a corresponding DB table. The form has a listbox of stores, an add button that creates a new store, a delete button, and an edit button. Beside those I have text boxes for the name and number, and save/cancel buttons. When the user chooses a store from the list box, and clicks 'edit', the textboxes become populated and save/cancel become active. When the user clicks 'add', I create a new Store, add it to the listbox, activate the textboxes and save/cancel buttons, then commit it to the database when the user clicks 'save', or discards it when the user clicks 'cancel'. Right now, my event system looks like this (in pseudo-code. It's just shorter that way.) add->click: store = new Store() listbox.add(store) populateAndEdit(store) delete->click: store = listbox.selectedItem db.deleteOnSubmit(store) listbox.remove(store) db.submit() edit->click: populateAndEdit(listbox.selectedItem) save->click: parseAndSave(listbox.selectedItem) db.submit() disableTexts() cancel->click: disableTexts() The problem is in how I determine if we are inserting a new Store, or updating an existing one. The obvious solution to me would be to make it a "modal" process - that is, when I click edit, I go into edit mode, and the save button does things differently than if I were in add mode. I know I could make this more MVC-like, but I don't really think this simple form merits the added complexity. I'm not very experienced with winforms, so I'm not sure if I even have the right idea for how to tackle this. Is there a better way to do this? I would like to keep it simple, but usable.
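
    One low-ceremony alternative to full-blown modes is to remember which store the textboxes are editing and whether it came from 'add'; only the save handler then needs to care. A sketch in the spirit of the pseudo-code above (the db.Stores table and the InsertOnSubmit/SubmitChanges calls assume LINQ to SQL; control and helper names are placeholders):

        private Store current;   // the store the textboxes are currently editing
        private bool isNew;      // true when it came from the Add button

        private void add_Click(object sender, EventArgs e)
        {
            current = new Store();
            isNew = true;
            listbox.Items.Add(current);
            PopulateAndEdit(current);
        }

        private void edit_Click(object sender, EventArgs e)
        {
            current = (Store)listbox.SelectedItem;
            isNew = false;
            PopulateAndEdit(current);
        }

        private void save_Click(object sender, EventArgs e)
        {
            ParseAndSave(current);
            if (isNew)
                db.Stores.InsertOnSubmit(current);   // only brand-new stores are inserted
            db.SubmitChanges();
            DisableTexts();
        }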

    Read the article

  • How to resolve strange conflict between form post and ajax post?

    - by Oliver Hyde
    On the one page, I am trying to use ajax to edit existing values. I am doing this by using jQuery Inline Edit and posting away the new data, updating the record and returning with success. This is working fine. Next I have implemented the ability to add new records; to do this I have a form at the end of the table, which submits post data then redirects back to the original page. Each of them works individually, but after I have used the form to add a new record, the inline editing stops working. If I close the webpage and reopen it, it works fine again until I have used the form and it goes off the rails again. I have tried a number of solutions, clearing session data, giving the form a separate name, redirecting to an alternative page (which does work, but is not ideal as I want the form to redirect back to the original location). Here is a sample of the view form data: <?php foreach($week->incomes as $income):?> <tr> <td><?php echo $income->name;?></td> <td width="70" style="text-align:right;" class="editableSingle income id<?php echo $income->id;?>">$<?php echo $income->cost;?></td> </tr> <?php endforeach;?> <?php echo form_open('budget/add/'.$week->id.'/income/index', 'class="form-vertical" id="add_income"'); ?> <tr> <td> <input type="text" name="name" class="input-small" placeholder="Name"> <input type="text" name="cost" class="input-small" placeholder="Cost"> </td> <td> <button type="submit" class="btn btn-small pull-right"><i class="icon-plus "></i></button> </td> </tr> <?php echo form_close(); ?> This is the javascript initialisation code: $(function(){ $.inlineEdit({ income: 'budget/update_income/', expense: 'budget/update_expense/' }, { animate: false, filterElementValue: function($o){ if ($o.hasClass('income')) { return $o.html().match(/\$(.+)/)[1]; } else if ($o.hasClass('expense')) { return $o.html().match(/\$(.+)/)[1]; } else { return $o.html(); } }, afterSave: function(o){ if (o.type == 'income') { $('.income.id' + o.id).prepend('$'); } if (o.type == 'expense') { $('.expense.id' + o.id).prepend('$'); } }, colors: { error:'green' } }); }); If I can provide any more information to clarify what I have attempted etc., let me know. Temporary Fix It seems I have come up with a workaround, not ideal as I still am not sure what is causing the issue. I have created a method called redirect. public function redirect(){ redirect(''); } I am now calling that after the form submits, which temporarily allows my multiple post submits to work.

    Read the article

  • What is operator<< <> in C++?

    - by Austin Hyde
    I have seen this in a few places, and to confirm I wasn't crazy, I looked for other examples. Apparently this can come in other flavors as well, eg operator+ <>. However, nothing I have seen anywhere mentions what it is, so I thought I'd ask. It's not the easiest thing to google operator<< <>( :-)
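
    It usually turns up when a class template declares a specialisation of a free operator template as a friend: the empty angle brackets say "the matching specialisation of that existing template", rather than introducing a new non-template friend. A small sketch (the Wrapper class is just an illustration):

        #include <iostream>

        template <typename T> class Wrapper;             // forward declaration

        template <typename T>
        std::ostream& operator<<(std::ostream& os, const Wrapper<T>& w);

        template <typename T>
        class Wrapper {
            T value;
        public:
            explicit Wrapper(T v) : value(v) {}

            // "<>" = befriend the specialisation of the operator<< template that
            // matches this Wrapper<T>, not a brand-new non-template operator.
            friend std::ostream& operator<< <>(std::ostream& os, const Wrapper& w);
        };

        template <typename T>
        std::ostream& operator<<(std::ostream& os, const Wrapper<T>& w) {
            return os << w.value;
        }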

    Read the article

  • Renamed MySQL table not renamed for INSERT queries?

    - by Austin Hyde
    After renaming one of my MySQL MyISAM tables from test_tablename to tablename, I have found that if I try to execute an INSERT (or REPLACE) query, I get the following message: 1146: Table 'dbname.test_tablename' doesn't exist I have triple-checked my database abstraction code, and verified this by running the query directly on the server. According to the MySQL server, the CREATE TABLE syntax is tablename, as expected, and when I run SHOW TABLES, it lists tablename as expected. Is there any reason for this to happen? More importantly, is there an easier way to fix this than dumping, dropping, re-creating, and reloading the table?
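
    Without seeing the schema it is hard to be definitive, but a few cheap checks are worth running before dumping and reloading: a trigger or view can still reference the old table name, and a stale cached table definition can sometimes be cleared with FLUSH TABLES. For example:

        -- Look for anything still pointing at the old name
        SHOW TRIGGERS;
        SHOW CREATE TABLE tablename;
        SHOW FULL TABLES WHERE Table_type = 'VIEW';

        -- Clear cached table definitions, then retry the INSERT
        FLUSH TABLES;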

    Read the article

  • Silverlight Cream for March 26, 2010 -- #821

    - by Dave Campbell
    In this Issue: Max Paulousky, Christian Schormann, John Papa, Phani Raj, David Anson(-2-, -3-), Brad Abrams(-2-), and Jeff Wilcox(-2-, -3-). Shoutouts: Jeff Wilcox posted his material from mix and some preview TestFramework bits: Unit Testing Silverlight & Windows Phone Applications – talk now online At MIX10, Jeff Wilcox demo'd an app called "Peppermint"... here's the bleeding edge demo: “Peppermint” MIX demo sources Erik Mork and Co. have put out their weekly This Week In Silverlight 3.25.2010 Brad Abrams has all his materials posted for his MIX10 session Mix2010: Search Engine Optimization (SEO) for Microsoft Silverlight... including play-by-play of the demo and all source. Do you use Rooler? Well you should! Watch a video Pete Brown did with Pete Blois on Expression Blend, Windows Phone, Rooler Interested in Silverlight and XNA for WP7? Me too! Michael Klucher has a post outlining the two: Silverlight and XNA Framework Game Development and Compatibility From SilverlightCream.com: Modularity in Silverlight Applications - An Issue With ModuleInitializeException Max Paulousky has a truly ugly error trace listed by way of not having a reference listed, and the obvious simple solution. Next time he'll talk about the difficult situations. Using SketchFlow to Prototype for Windows Phone Christian Schormann has a tutorial up on using Expression Blend to develop for WP7 ... who better than Christian for that task?? Silverlight TV 18: WCF RIA Services Validation John Papa held forth with Nikhil Kothari on WCF RIA Services and validation just prior to MIX10, and was posted yesterday. Building SL3 applications using OData client Library with Vs 2010 RC Phani Raj walks through building an OData consumer in SL3, the first problem you're going to hit, and the easy solution to it. Tip: When creating a DependencyProperty, follow the handy convention of "wrapper+register+static+virtual" David Anson has a couple more of his 'Tips' up... this first is about Dependency Properties again... having a good foundation for all your Dependency Properties is a great way to avoid problems. Tip: Do not assign DependencyProperty values in a constructor; it prevents users from overriding them In the next post, David Anson talks about not assigning Dependency Property values in a constructor and gives one of the two ways to get around doing so. Tip: Set DependencyProperty default values in a class's default style if it's more convenient In his latest post, David Anson gives the second way to avoid setting a Dependency Property value in the constructor. Silverlight 4 + RIA Services - Ready for Business: Search Engine Optimization (SEO) Brad Abrams Abrams adds SEO to the tutorial series he's doing. He begins with his PDC09 session material on the subject and then takes off on a great detailed tutorial all with source. Silverlight 4 + RIA Services - Ready for Business: Localizing Business Application Brad Abrams then discusses localization and Silverlight in another detailed tutorial with all code included. Silverlight Toolkit and the Windows Phone: WrapPanel, and a few others Jeff Wilcox has a few WP7 posts I'm going to push today. 
This first is from earlier this week and is about using the Toolkit in WP7 and better than that, he includes the bits you need if all you want is the WrapPanel Data binding user settings in Windows Phone applications In the next one from yesterday, Jeff Wilcox demonstrates saving some user info in Isolated Storage to improve the user experience, and shares all the necessary plumbing files, and other external links as well. Displaying 2D QR barcodes in Windows Phone applications In a post from today, Jeff Wilcox ported his Silverlight 2D QR Barcode app from last year into WP7 ... just very cool... get the source and display your Microsoft Tag. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • Silverlight Cream for May 13, 2010 -- #861

    - by Dave Campbell
    In this Issue: Sigurd Snørteland, Jeff Prosise, DaveDev, Joe Zhou, Chris Eargle, John Papa(-2-, -3-), and David Anson(-2-). Shoutouts: In with the links I've listed below, Sigurd Snørteland also sent a link to this app he's working on which is actually pretty cool to see: ZuneLight. The code is not yet available. He also has a no-code demo of a Silverlight Media Center Pieter Voloshyn, Luiz Thadeu, and Jhun Iti have a very nice Silverlight image editor up: Thumba From SilverlightCream.com: WP7 - Silverlight on mobile Sigurd Snørteland submitted some links for me that have been translated to English from his blog. I hope the pages come out good because he's got a lot of good stuff on there. This one has a link to a presentation he did, and 4 projects you can load up in the emulator that he's converted to the phone: weather, worldclock, coverflow, and solitaire ... pretty cool... thanks for the links Sigurd! Understanding Page Orientation in Silverlight for Windows Phone Jeff Prosise has a really nice post up on page orientation in WP7 ... what it means to your app, how to detect it, and example code for what to do then... also love a quote by Jeff: "Silverlight for Windows Phone is the hottest thing since color TV" Why you should check out Expression Blend Behaviors when using Silverlight DaveDev has a post up describing Behaviors and why we should use them, plus tons of external links to resources, blogs, videos... all good stuff... Fiddler inspector for WCF Silverlight Polling Duplex and WCF RIA Joe Zhou announces and provides a link to a new Fiddler inspector that understands the framing in Polling Duplex and also raw binary xml and binary SOAP. Windows Phone Controls v0.7 Chris Eargle reports the release of Version 0.7 of the Windows Phone Controls project on CodePlex ... this includes a Pivot Control and a Panorama Control... both very nicely done. Binding to Silverlight ComboBox and Using SelectedValue, SelectedValuePath and DisplayMemberPath John Papa responds to a user question and put up a nice post about binding to a ComboBox and then go from the selected item to some other property ... code included No More Boxes! Exploring the PathListBox (Silverlight TV #25) Silverlight TV 25 went up on Tuesday ... thought it was going to be Thursday?? anyway ... John Papa and Adam Kinney are discussing the PathListBox and looking at some cool demos thereof. Exposing SOAP, OData, and JSON Endpoints for RIA Services (Silverlight TV 26) Since today IS Thursday, we have a new Silverlight TV, number 26, and John Papa is chatting with Deepesh Mohnani of the WCF RIA Services team about exposing all sorts of endpoints... should be something in there for everybody :) Workaround for a Silverlight data binding bug affecting various scenarios - including DataGrid+ContextMenu David Anson details the rabbit-trail he and others on the team followed in response to a problem reported via Twitter where the binding on a DataGrid seemed off by a row(!) ... weird but true, validated, and SL3/4 are bug-for-bug compatible with this too! ... But David wouldn't leave us there.. he also has a workaround. 
Sharing the code for a simple Silverlight 4 REST-based cloud-oriented file management app for Azure and S3 David Anson had an opportunity to build an app he's wanted to build for a while and shares it with us: Blobstore -- a small, lightweight Silverlight 4 application that acts as a basic front-end for the Windows Azure Simple Data Storage and the Amazon Simple Storage Service (S3) -- and remember I said he shared the source :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted. 2. It’s not just about the storage requirements, it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react. So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_Virtual FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak' WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf', MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf', NORECOVERY, STATS=1, REPLACE GO RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY   Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Installing Catalyst 11.6 for an ATI HD 6970

    - by David Oliver
    Ubuntu Maverick 10.10 is displaying the desktop okay (though limited to 1600x1200) after my having installed my new HD 6970 card, so I'm now trying to install the proprietary driver (I understand the open source one requires a more recent kernel than that in Maverick). The proprietary driver under 'Additional Drivers' resulted in a black screen on boot, so I deactivated and am trying to follow the manual install instructions at the cchtml Ubuntu Maverick Installation Guide. When I try to create the .deb packages with: sh ati-driver-installer-11-6-x86.x86_64.run --buildpkg Ubuntu/maverick I get: david@skipper:~/catalyst11.6$ sh ati-driver-installer-11-6-x86.x86_64.run --buildpkg Ubuntu/maverick Created directory fglrx-install.oLN3ux Verifying archive integrity... All good. Uncompressing ATI Catalyst(TM) Proprietary Driver-8.861......................... ===================================================================== ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager ===================================================================== Generating package: Ubuntu/maverick Package build failed! Package build utility output: ./packages/Ubuntu/ati-packager.sh: 396: debclean: not found dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.64Vzxk dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Cleaning in directory . /usr/bin/fakeroot: line 176: debian/rules: Permission denied debuild: fatal error at line 1319: couldn't exec fakeroot debian/rules: dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.QEmIld dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Cleaning in directory . Can't exec "debian/rules": Permission denied at /usr/bin/debuild line 1314. 
debuild: fatal error at line 1313: couldn't exec debian/rules: Permission denied dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.xtY6vC dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Cleaning in directory . /usr/bin/fakeroot: line 176: debian/rules: Permission denied debuild: fatal error at line 1319: couldn't exec fakeroot debian/rules: dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.oYWICI dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Removing temporary directory: fglrx-install.oLN3ux I've installed devscripts which has debclean in it. I've tried running the command with and without sudo. I'm not experienced with installing from downloads/source, but it seems like the file debian/source isn't being set to be executable when it needs to be. If I extract only, without using the package builder command, debian/rules is 744. As to what to do next, I'm stumped. Many thanks.

    Read the article

  • How do I block websites using Ruckus ZoneDirector?

    - by David A. Moody
    In my school we use Ruckus ZoneDirector to control our entire network. I have separate WLANs for faculty, elementary, and secondary. The elementary and secondary networks are set to go offline during recess/lunch breaks, and after school hours. This is working fine. What I need to be able to do is block Youtube access to students while leaving it accessible to teachers (faculty WLAN). Is it possible to do this? Thanks in advance. David.

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed. Downloading and Installing TeamCity TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with. 1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly. 2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease. 3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, if for example you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect. 4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup. Putting your database in source control using SQL Source Control 5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control. 6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local subversion repository for you behind the scenes. 7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit. 8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right click on the text to the right of 'Linked to', in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes. Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes. 9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create. 10. Click on Create Build Configuration, and name it something like "Integration build". 11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor. 12. In my case since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion. 13. In the URL field paste your repository location. 
In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment 14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save. 15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option. 16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe 17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing. 18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2 /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect. 19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save. 20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered. 21. Click on Projects in TeamCity to get back to the overview screen: The build log will show you the console output, which is useful for troubleshooting any issues: That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or by emailing me directly using the email address below. Technorati Tags: SQL Server
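
    Putting steps 16 and 18 together, the build step ends up running something equivalent to the command below; it can be worth executing it once by hand to confirm connectivity before wiring it into TeamCity (paths and server names as per the steps above).

        "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose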

    Read the article

  • Lenovo DVD Drive Disabled After Windows 7 Install

    - by David Lacher
    Upgraded hard drive in Lenovo T61P; decided to start fresh with Windows 7 Pro. Windows installed, so DVD drive was working. All of a sudden, driver is not recognized. Device is "HL-DT-ST DVDRAM GSA-U10N ATA Device". It appears on device manager but with the yellow tag; have tried uninstalling, searching for drivers, everything I can think of. Cannot even start over with Windows 7 installation disk because disk spins but then stops and My Computer does not recognize the drive. Help please. thank you. David Lacher

    Read the article

  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
    Cloud Computing is all the rage these days. There are many reasons why this is so. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion. It can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in.   Here's the National Institute of Standards and Technology Cloud definition: •          Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.   Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet. Oracle CRM On Demand is an example of a SaaS application. Today there are hundreds of SaaS providers covering a wide variety of applications including Salesforce.com, Workday, and Netsuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform. We call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service. They are often built on a grid computing architecture and include database and middleware. Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service.  This typically includes the associated software as well: operating systems, virtualization, clustering, etc.    Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use. This is especially critical for organizations with variable or seasonal usage. Companies don't have to invest to support peak computing periods. Costs are also more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up front investments.   While Cloud Computing is certainly very alluring with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:   Security - 74% identified security as an issue involving data privacy and resource access control. Integration - 61% found that it is hard to integrate Cloud Apps with in-house applications. Operational Costs - 50% are worried that On Demand will actually cost more given the impact of poor data quality on the rest of the enterprise. Compliance - 49% felt that compliance with required regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, the ability of a provider to directly manage how and where data is deployed, used and destroyed is negatively impacted.  There are others, but I singled out these four top issues because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of these problems. Cloud Computing can literally rain raw data across the enterprise.   
According to fellow blogger Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized." David Linthicum, CTO of Blue Mountain Labs, blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems." Left unmanaged, non-standard, inconsistent, ungoverned data with questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and On-Premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the above Cloud issues listed in the IDC survey. Security - MDM manages all master data attribute privacy and resource access control issues. Integration - MDM pre-integrates Cloud Apps with each other and with On Premise applications at the data level. Operational Costs - MDM significantly reduces operational costs by increasing data quality, thereby improving enterprise business process efficiency. Compliance - MDM, with its built-in Data Governance capabilities, ensures that the data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes. Oracle MDM creates governed high quality master data. A unified cleansed and standardized data view is produced. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of 'supplier exceptions'. Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and On Premise applications to MDM Hubs and brings high quality master data to your enterprise business processes. An independent analyst once said "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point, you either have to stop and clear the windshield or risk everything." Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can ensure that the data flowing into the enterprise from the Cloud is clean and governed. This will in turn ensure that expected returns on the investment in Cloud Computing will be realized. Oracle MDM has proven its mettle in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to ensure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you to join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager, as they review business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.
Hope to see you there.

    Read the article

  • What is the difference between DLNA and UPnP?

    - by David Michel
    Hi All, Can someone tell me what the difference is between DLNA and UPnP? I can see that some devices such as NAS have their specifications mentioning both (e.g. Iomega StorCenter) or only DLNA (e.g. Netgear Stora). Is this a synonym for the same thing, or are they actually 2 different protocols? Are they compatible, i.e. if a media server uses DLNA and the streaming device uses UPnP, will it work? I looked around but could not find any clear answer... Many thanks David

    Read the article

  • Firefox 3.6.3 on Snow Leopard 10.6.3 - symbolic link to command line binary doesn't work?

    - by David Watson
    I have Firefox 3.6.3 installed on Mac OS X Snow Leopard from the DMG. I can run firefox from the terminal using /Applications/Firefox.app/Contents/MacOS/firefox-bin. However, if I create a symbolic link: sudo ln -s /Applications/Firefox.app/Contents/MacOS/firefox-bin /bin/firefox then it refuses to run, or at least display. When I issue "firefox" from the terminal, I can see the process in top, but never get the GUI to appear. :/ ls -lr /bin/firefox lrwxr-xr-x 1 root wheel 52 May 5 15:19 /bin/firefox -> /Applications/Firefox.app/Contents/MacOS/firefox-bin Any ideas? Thanks, David
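
    One workaround that is commonly suggested (assuming the binary fails to locate its .app bundle when launched through a symlink) is to use a tiny wrapper script, or simply a shell alias, instead of the symlink; /usr/local/bin is also a safer home for it than /bin:

        #!/bin/sh
        # Save as e.g. /usr/local/bin/firefox and chmod +x it.
        exec /Applications/Firefox.app/Contents/MacOS/firefox-bin "$@"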

    Read the article
