Search Results

Search found 13042 results on 522 pages for 'integrated business plann'.


  • Sharp HealthCare Reduces Storage Requirements by 50% with Oracle Advanced Compression

    - by [email protected]
    Sharp HealthCare is an award-winning integrated regional health care delivery system based in San Diego, California, with 2,600 physicians and more than 14,000 employees. Sharp HealthCare's data warehouse forms a vital part of the information system's infrastructure and is used to separate business intelligence reporting from time-critical health care transactional systems. Faced with tremendous data growth, Sharp HealthCare decided to replace their existing Microsoft products with a solution based on Oracle Database 11g and to implement Oracle Advanced Compression. Join us to hear directly from Kim Nguyen, the primary DBA for the Data Warehouse Application Team, about how the new environment significantly reduced Sharp HealthCare's storage requirements and improved query performance.

    Read the article

  • Ubuntu 12.10 - Graphics driver shows Gallium 0.4 instead of intel HD

    - by nlooije
    I have a hybrid Intel/Nvidia system using bumblebee on Ubuntu 12.10 with specs: Clevo W150HR, SB i7-2720qm, 8GB RAM, 128GB Crucial M4 SSD + 500GB HDD. The system is reported to use software rendering (through the Gallium driver) rather than hardware acceleration through the Intel HD driver. Reinstalling the intel driver has no effect: sudo apt-get install --reinstall xserver-xorg-video-intel. The result is a rather sluggish desktop. Is there any way to blacklist the Gallium driver, or force the intel driver instead? Edit 1 - Ubuntu 12.04 shows the driver as Intel SandyBridge out of the box, but 12.10 does not, even after running the above command. Edit 2 - Xorg.0.log shows: (--) intel(0): Integrated Graphics Chipset: Intel(R) Sandybridge Mobile (GT1) (WW) intel(0): Disabling hardware acceleration on this pre-production hardware. Edit 3 - It turns out that Sandy Bridge rev. 7 systems are known to be unstable per this link. Accordingly, the xserver-xorg-video-intel driver disables hardware acceleration when it detects this revision, producing the warning shown in the log above.
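
    One workaround that is sometimes suggested (a sketch only; whether it helps here is uncertain, since Edit 2 shows the intel driver itself refusing acceleration on this hardware revision) is to pin the intel driver explicitly so X does not fall back to software rendering:

        # /etc/X11/xorg.conf.d/20-intel.conf  (path assumed; create the directory/file if missing)
        Section "Device"
            Identifier "Intel Graphics"
            Driver     "intel"              # force the intel DDX driver
            Option     "AccelMethod" "sna"  # acceleration method; "uxa" is the older alternative
        EndSection

    If Xorg.0.log still reports "Disabling hardware acceleration", the fallback comes from the driver's revision check rather than from driver selection, and no configuration change will re-enable it.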

    Read the article

  • When Your Boss Doesn't Want you to Succeed

    - by Phil Factor
    You're working hard to get an application finished. You are programming long into the evenings sometimes, and eating sandwiches at your desk instead of taking a lunch break. Then one day you glance up at the IT manager, serene in his mysterious round of meetings, and think 'Does he actually care whether this project succeeds or not?'. The question may seem absurd. Of course the project must succeed. The truth, as always, is often far more complex. Your manager may even be doing his best to make sure you don't succeed. Why? There have always been rich pickings for the unscrupulous in IT. In extreme cases, where administrators struggle with scarcely-comprehended technical issues, huge sums of money can be lost and gained without any perceptible results. In very few cases can fraud be proven: most of the time, the intricacies of the 'game' are such that one can do little more than harbor suspicion. Where does over-enthusiastic salesmanship end and fraud begin? The business of Information Technology provides rich opportunities for white-collar crime. The poor developer has his, or her, hands full with the task of wrestling with the sheer complexity of building an application. He, or she, has no time for following the complexities of the chicanery of the management that is directing affairs. Most likely, the developers wouldn't even suspect that their company management had ulterior motives. I'll illustrate what I mean with an entirely fictional, hypothetical, example. The Opportunist and the Aged Charities often do good, unexciting work that is funded by the income from a bequest that dates back maybe hundreds of years. In our example, it isn't exciting work, for it involves the welfare of elderly people who have fallen on hard times. Volunteers visit, giving a smile and a chat, and check that they are all right, but are able to spend a little money at their discretion to ameliorate any pressing needs for these old folk. The money is made to work very hard and the charity averts a great deal of suffering and eases the burden on the state. Daisy hears the garden gate creak as Mrs Rainer comes up the path. She looks forward to her twice-weekly visit from the nice lady from the trust. She always asks ‘is everything all right, Love’. Cheeky but nice. She likes her cheery manner. She seems interested in hearing her memories, and talking about her far-away family. She helps her with those chores in the house that she couldn’t manage and once even paid to fill the back-shed with coke, the other year. Nice, Mrs. Rainer is, she thinks as she goes to open the door. The trustees are getting on in years themselves, and worry about the long-term future of the charity: is it relevant to modern society? Is it likely to attract a new generation of workers to take it on? They are instantly attracted by the arrival on the board of a smartly dressed University lecturer with the ear of the present Government. Alain 'Stalin' Jones is earnest, persuasive and energetic. The trustees welcome him to the board and quickly forgive his humorless political-correctness. He talks of 'diversity', 'relevance', 'social change', 'equality' and 'communities', but his eye is on that huge bequest. Alain first came to notice as a Trotskyite union official, who insinuated himself into one of the duller Trades Unions and turned it, through his passionate leadership, into a radical, headline-grabbing organization.
Middle age, and the rise of European federal socialism, had brought him quiet prosperity and charcoal suits, an ear in the current government, and a wide influence as a member of various Quangos (government bodies staffed by well-paid unelected courtiers). He was employed as a 'consultant' by several organizations that relied on government contracts. After gaining the confidence of the trustees, and showing a surprising knowledge of mundane processes and the regulatory framework of charities, Alain launches his plan. The trust will expand their work by means of a bold IT initiative that will coordinate the interventions of several 'caring agencies', and provide emergency cover, a special Website so anxious relatives can see how their elderly charges are doing, and a vastly more efficient way of coordinating the work of the volunteer carers. It will also provide a special-purpose site that gives 'social networking' facilities, rather like Facebook, to the few elderly folk on the lists with access to the internet. The trustees perk up. Their own experience of the internet is restricted to the occasional scanning of railway timetables, but they can see that it is 'relevant'. In his next report to the other trustees, Alain proudly announces that all this glamorous and exciting technology can be paid for by a grant from the government. He admits darkly that he has influence. True to his word, the government promises a grant of a size that is an order of magnitude greater than any budget that the trustees had ever handled. There was the understandable proviso that the company that would actually do the IT work would have to be one of the government's preferred suppliers and the work would need to be tendered under EU competition rules. The only company that tenders, a multinational IT company with a long track record of government work, quotes ten million pounds for the work. A trustee questions the figure as it seems enormous for the reasonably trivial internet facilities being built, but the IT salesmen dazzle them with presentations and three-letter acronyms until they subside into quiescent acceptance. After all, they can’t stay locked in Twentieth-century practices, can they? The work is put in hand with a large project team, in a splendid glass building near west London. The trustees see rooms of programmers working diligently at screens, who talk with enthusiasm of the project. Paul, the project manager, looked through his resource schedule with growing unease. His initial excitement at being given his first major project hadn’t lasted. He’d been allocated a lackluster team of developers whose skills didn’t seem right, and he was allowed only a couple of contractors to make good the deficit. Strangely, the presentation he’d given to his management, where he’d saved time and resources with an OTS solution to a great deal of the development work, and a sound conservative architecture, hadn’t gone down nearly as well as he’d hoped. He almost got the feeling they wanted a more radical and ambitious solution. The project starts slipping its dates. The costs build rapidly. There are certain uncomfortable extra charges that appear, such as the £600-a-day charge by the 'Business Manager' appointed to act as a point of liaison between the charity and the IT Company. When he appeared, his face permanently split by a 'Mr Sincerity' smile, they'd thought he was provided at the cost of the IT Company.
Derek, the DBA, didn’t really have to go to the server room as much as he did, but it got him away from the poisonous despair of the development group. Wave after wave of events had conspired to delay the project. Why the management had imposed hideous extra bureaucracy to cover ISO 9000 and 9001:2008 accreditation just as the project was struggling to get back on-schedule was beyond belief. Then the Business Manager kept coming back with endless changes in scope, sorrowfully saying that the Trustees were very insistent, though hopelessly out of touch with the reality of the technical challenges. Suddenly, the costs mount to the point of consuming the government grant in its entirety. The project remains tantalizingly just out of reach. Alain Jones gives an emotional rallying speech at the trustees review meeting, urging them not to lose their nerve. Sadly, the trustees dip into the accumulated capital of the trust, the seed-corn of all their revenues, in order to save the IT project. A few months later it is all over. The IT project is never delivered, even though it had seemed so incredibly close. With the trust's capital all gone, the activities it funded have to be terminated and the trust becomes just a shell. There aren't even the funds to mount a legal challenge against the IT company, even had the trust's solicitor advised such a foolish thing. Alain leaves as suddenly as he had arrived, only to pop up a few months later, bronzed and rested, at another charity. The IT workers who were permanent employees are dispersed to other projects, and the contractors leave for other contracts. Within months the entire project is but a vague memory. One or two developers remain puzzled that their managers had been so obstructive when they should have welcomed progress toward completion of the project, but they put it down to incompetence and testosterone. Few suspected that they were actively preventing the project from getting finished. The relationships between the IT consultancy and the government of the day are intricate, and made more complex by the Private Finance Initiatives and political patronage. The losers in this case were the taxpayers, and the beneficiaries of the trust, and, perhaps, the soul of the original benefactor of the trust, whose bid to give his name some immortality had been scuppered by smooth-talking white-collar political apparatchiks. Even now, nobody is certain whether a crime was ever committed. The perfect heist, I guess. Where’s the victim? "I hear that Daisy’s cottage is up for sale. She’s had to go into a care home. She didn’t want to at all, but then there is nobody to keep an eye on her since she had that minor stroke a while back. A charity used to help out. The ‘social’ don’t have the funding for community care, evidently. Yes, her old cat was put down. There was a good clearout, and now the house is all scrubbed and cleared ready for sale. The skip was full of old photos and letters, memories. No room in her new ‘home’."

    Read the article

  • 555 Footstool Turns Tech into Mad Scientist Decor

    - by Jason Fitzpatrick
    If you just can’t find the appropriate footstool for your laboratory, this laser-cut footstool styled to look like the ubiquitous 555 Timer should fit the bill. At Evil Mad Scientist Laboratories they were in search of the perfect footstool. Never ones to do something halfway, they set out to build a footstool shaped like the famous 555 timer integrated circuit. The project involved computer design, CNC routers, laser engraving, lots of plywood and glue, and paint. Hit up the link below to see pictures of the entire build process. 555 Footstool [Evil Mad Scientist Laboratories]

    Read the article

  • Google I/O 2012 - Beyond Paper: Google Cloud Print and the Future of Printing

    Presented by Akshay Kannan. Use Google Cloud Print's API to send documents to a printer (or anywhere else) quickly and easily. We're currently integrated with Chrome, ChromeOS, mobile Gmail/Docs, and most new printers, and that's just the start. We provide a configurable JavaScript API, an Android Intent, as well as HTTP and XMPP interfaces for sending documents and receiving them in virtually any format. Come learn how to enable printing from your web and mobile apps on any device to any printer in the world, with just a few lines of code! For all I/O 2012 sessions, go to developers.google.com
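
    As a rough illustration of the HTTP interface mentioned in the abstract (a sketch only: the endpoint and field names reflect the Cloud Print submit API as commonly documented and should be verified; the printer ID and OAuth token are placeholders):

        // Submit a plain-text document to a cloud-connected printer over HTTP (field names assumed)
        var form = new FormData();
        form.append('printerid', 'YOUR_PRINTER_ID');            // placeholder
        form.append('title', 'Hello from Cloud Print');
        form.append('contentType', 'text/plain');
        form.append('content', 'Hello, printer!');
        var xhr = new XMLHttpRequest();
        xhr.open('POST', 'https://www.google.com/cloudprint/submit');
        xhr.setRequestHeader('Authorization', 'Bearer YOUR_OAUTH_TOKEN');  // token with the cloudprint scope (placeholder)
        xhr.onload = function () { console.log(xhr.responseText); };
        xhr.send(form);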

    Read the article

  • Recent Innovations to ILOM

    - by B.Koch
    by Josh Rosen If you are wondering how Oracle can make some of the most advanced, reliable, and fault tolerant servers on the market, look no further than Oracle Integrated Lights Out Manager, or ILOM. We build ILOM into every server we create, from Oracle x86 Systems such as X3-2 to the SPARC T-Series family. Oracle ILOM is an embedded service processor, but it's really more than that. It's a computer within a computer. It's smart, it's tightly integrated into all aspects of the server's operation, and it's a big reason why Oracle servers are used for some of the most mission-critical workloads out there. To understand the value of ILOM, there is no better place to start than its fault management capability. We have taken the sophisticated fault management architecture from Solaris, developed and refined over a decade, and built it into each and every ILOM. ILOM detects a potential issue at its earliest stage, watching low-level sensors. If the root cause of a problem is not clear from a single error reading, ILOM will look for other clues and combine multiple pieces of information to correctly identify a failing component. ILOM provides peace of mind. We tailor our fault management for each new server platform that we produce. You can rest assured that it's always actively keeping the server healthy. And if there is a problem, you can be confident it will let you know by sending you a notification by e-mail or trap. We also heard IT managers tell us they needed a Ph.D. in computer engineering to manage today's servers. It doesn't have to be that way. Thanks to the latest innovations to Oracle ILOM, we present hardware inventory and status in a way that makes sense – to anyone. Green means everything is healthy and red means something is wrong. When a component needs to be replaced, a clear message indicates where the problem is and points you at a knowledge article about that problem. It's that simple. Simpler management and simple interfaces mean reduced complexity and lower costs to manage. And we know that's really important. ILOM does all this while also providing the advanced service processor features you depend on for managing enterprise-class systems. You can remotely control the server power, interact with a virtual video console for the server, and mount media on the server remotely. There is no need to spend money on a KVM switch to get this functionality. And when people hear how advanced ILOM is, they can't believe ILOM is free. All features are enabled and included with each Oracle server that you buy. There are no advanced licenses you need to purchase or features to unlock. Configuring ILOM has also never been easier. It is now possible to configure almost all aspects of the server directly from ILOM. This includes changing BIOS settings, persistently modifying boot order, and optimizing power settings -- all directly from ILOM. But Oracle's innovation does not stop with ILOM. Oracle has engineered Oracle Enterprise Manager Ops Center to integrate directly with ILOM, providing centralized management across all of our servers. Ops Center will discover each of your Oracle servers over the network by searching for ILOMs. When it finds one, it knows how to communicate with ILOM to monitor and configure that server from application to disk. Since every server that Oracle produces, from x86 Systems to SPARC T-Series up and down the line, comes with Oracle ILOM, you can manage all Oracle servers in the same way.
And while all of our servers may have different components on the inside, each with their specialized functions, the way you integrate them and the way you monitor and manage them is exactly the same. Oracle ILOM is state-of-the-art. If you are looking for a server that makes systems management simple and is easy to integrate and maintain, check out the latest advances to Oracle ILOM. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.
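
    As a small illustration of the remote control capability described above: in addition to its own web and command-line interfaces, ILOM exposes the standard IPMI interface, so basic power control and event-log queries can be scripted with the common ipmitool utility. This is only a sketch; the address and credentials are placeholders, and the native ILOM CLI offers the same functions.

        # Query and control server power through the service processor's IPMI interface
        ipmitool -I lanplus -H 192.0.2.10 -U root chassis power status
        ipmitool -I lanplus -H 192.0.2.10 -U root chassis power on
        # List the hardware event log that the service processor maintains
        ipmitool -I lanplus -H 192.0.2.10 -U root sel list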

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Does HTML 5 Make “Rich vs. Reach” a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for development of line-of-business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note: although the instructions for each game tell you to press the A key to start, press the Z key instead.) Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, I can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible? The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order. Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5: 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script. As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.
They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate. Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternatively, player controls can be hidden and the content can play automatically. Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, and limiting valid input on a field to dates, email addresses or a list of values. There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly. Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available. User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with JavaScript code. This avoids having to embed that data in a separate file, or within script code. Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this? From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think. Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet. YouTube, on the other hand, already has an experimental HTML 5-based version of their site. TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices. And the New York Times’ Web site now embeds some video clips without resorting to Flash. They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this? I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design, whose primary focus has been to help marketing directors get traction online. The firm’s client roster includes the likes of Time, Inc., Scholastic and PBS. Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector. When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all! In particular, not for media and advertising customers! These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack. For whatever reason, each firm seems to see the other’s toolset as a more viable choice.
But both agree that the developer tool story around HTML 5 is deficient. Pinto explains, “What’s lost with [HTML 5 and JavaScript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.” Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash. If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript. It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.” Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees. And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development. In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others. Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate. In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.” I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time. And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms. HTML 5 looks to be disruptive, especially to the media world. And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world. If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers. Can this development in the digital arena save the titans of the print world? I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.
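
    To make the feature list above concrete, here is a minimal, self-contained sketch using only standard HTML 5 markup and JavaScript (the file name and values are made up): native video, a canvas drawing surface, form validation hints, local storage and user-defined data attributes in a single page, with no plug-in involved.

        <!DOCTYPE html>
        <html>
        <body>
          <!-- Embedded video: no plug-in required -->
          <video src="clip.mp4" controls width="320"></video>

          <!-- Form enhancements: a required field, a hint, input restricted to an email address -->
          <input type="email" placeholder="you@example.com" required>

          <!-- 2D drawing surface -->
          <canvas id="c" width="100" height="100"></canvas>

          <!-- User-defined data embedded statically in the markup -->
          <p id="score" data-level="3">Level three</p>

          <script>
            // Draw on the canvas from script
            var ctx = document.getElementById('c').getContext('2d');
            ctx.fillRect(10, 10, 50, 50);

            // Local storage persists across page loads (and, combined with the cache manifest, offline)
            localStorage.setItem('lastVisit', new Date().toString());

            // Read the user-defined data back with script
            console.log(document.getElementById('score').getAttribute('data-level'));
          </script>
        </body>
        </html>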

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I’ve been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it’s MY information. It’s MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it. 
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It’s interesting then that today when we think of search we think of search engines and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end, it will enable us to make sense of the wealth of information that we will collect day in day out. The future of search is personal, why would we be interested in anything else? @Jamiet

    Read the article

  • Ubuntu for Android on the ASUS Transformer Prime

    - by sola
    I would like to use Ubuntu on my Transformer Prime in parallel with Android (not as a dual-booting solution; I want to be able to switch between them instantaneously). I am aware of the traditional chrooting/VNC solution but I heard that it performs very poorly, so I would like to use Ubuntu for Android (UFA), which has been announced recently by Canonical. That looks like a polished, highly integrated solution for Android devices. The Prime would be the ideal device for Ubuntu for Android since it has a powerful processor (Tegra3) capable of running a lot of processes in parallel on its 4 cores. Does anyone know if Canonical or anybody else is working on supporting UFA on the ASUS Transformer Prime? As far as I understand, the X11 driver is available for Tegra3, so the biggest hurdle may be easily overcome.

    Read the article

  • 32 Stunning Movie Tributes in LEGO

    - by Jason Fitzpatrick
    These Sci-Fi LEGO tributes are an impressive combination of time, money, and a whole lot of LEGO bricks. Read on to see everything from Death Star hangars to adorable robots. Over at Dvice, a SyFy channel blog, they’ve rounded up 32 impressive movie tributes crafted entirely in LEGO bricks. The model seen above, for example, is composed of 30,000 bricks and is over six feet on a side. Planning on building your own? You’d better have $2,300 to blow on bricks and six months of spare time to invest. Hit up the link below for more LEGO tributes. 32 Fan-Built LEGO Tributes to Science Fiction [Dvice]

    Read the article

  • SQL Server Intellisense VS. Red Gate SQL Prompt

    Fabiano Amorim is hooked on today's Integrated Development Environments with built-in Intellisense, so he looked forward keenly to SQL Server 2008's native intellisense. He was disappointed at how it turned out, so turned instead to SQL Prompt. Fabiano explains why he prefers SQL Prompt, why he reckons it fits in with the way that database developers work, and goes on to describe some of the features he'd like to see in it.

    Read the article

  • Implementing Silverlight Coverflow with ADO.NET/WCF Data Services in SharePoint 2010

    - by Sahil Malik
    WOOHOO!! My next video is online. In this video, I show how you can implement a Silverlight Coverflow-like UI using the Telerik Silverlight toolset. In this demo, I talk to a picture library running in SharePoint 2010, and use ADO.NET Data Services to load up the various images in the picture library. I then use the Telerik Silverlight toolset, integrated with ADO.NET Data Services/WCF Data Services, to show a fancy Coverflow-like UI. As always, very few slides, completely hands-on, all code written in front of your eyes! Enjoy – the video. And yes, there are a couple more exciting videos coming! Stay tuned!
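
    For context on the data-access piece: the video uses Silverlight with the Telerik controls, but the same SharePoint 2010 WCF Data Services endpoint can be exercised from plain JavaScript. The sketch below is illustrative only; the list name 'Pictures' and the field names are assumptions that depend on how the picture library was created.

        // Read items from a SharePoint 2010 picture library via ListData.svc (OData)
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/_vti_bin/ListData.svc/Pictures', true);   // 'Pictures' is an assumed list name
        xhr.setRequestHeader('Accept', 'application/json');         // request JSON instead of Atom XML
        xhr.onload = function () {
          var items = JSON.parse(xhr.responseText).d.results;       // WCF Data Services wraps results in 'd'
          items.forEach(function (item) {
            console.log(item.Name, item.Path);                      // field names are assumptions
          });
        };
        xhr.send();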

    Read the article

  • HTG Explains: Why You Shouldn’t Disable UAC

    - by Chris Hoffman
    User Account Control is an important security feature in the latest versions of Windows. While we’ve explained how to disable UAC in the past, you shouldn’t disable it – it helps keep your computer secure. If you reflexively disable UAC when setting up a computer, you should give it another try – UAC and the Windows software ecosystem have come a long way from when UAC was introduced with Windows Vista.

    Read the article

  • Oracle presentations at the CIPS ICE Conference, November 5 - 7, Edmonton, Alberta, Canada

    - by Darin Pendergraft
    Oracle will be presenting at the CIPS ICE conference the last week of October in Calgary and the first week of November in Edmonton. Here is a list of the presentations for Edmonton: SHAW Conference Centre • Session Title: Identity and Access Management Integrated; Analyzing the Platform vs Point Solution Approach • Speaker: Darin Pendergraft • Monday, November 5th @ 10:45 AM - 12:00 PM • Session Title: Is Your IT Security Strategy Putting Your Institution at Risk? • Speaker: Spiros Angelopoulos • Monday, November 5th @ 1:45 PM - 3:00 PM Three sessions under the TRAIN: Practical Knowledge Track • Monday, November 5th @ 10:45 AM, 1:45 PM, 3:30 PM • Title: What's new in the Java Platform   Presenter: Donald Smith • Title: Java Enterprise Edition 6   Presenter: Shaun Smith • Title: The Road Ahead for Java SE, JavaFX and Java EE    Presenters: Donald Smith and Shaun Smith To learn more about the conference, and to see the other sessions go to the conference website.

    Read the article

  • Mark Hurd on Oracle's Strategy to Be the Best

    - by Tuula Fai
    Mark Hurd, President of Oracle, energized a packed audience this Monday morning at OpenWorld with his keynote outlining Oracle’s four-pillar strategy: (1) be the leader at every level of the technology stack—applications, middleware, database, operating system, virtual machine, servers, and storage; (2) vertically integrate these levels into differentiated solutions; (3) offer Fusion, the next generation of applications, which are modular and can run in the cloud, on-premise, or both (hybrid); and (4) deliver this technology portfolio through industry lenses to help Oracle customers solve their problems while innovating and becoming more efficient. Hurd’s message resonated throughout Monday’s Customer Experience (CX) sessions as we learned about Oracle’s investment in integrating its best-of-breed CX solutions to deliver an end-to-end suite that addresses every part of the customer lifecycle. For example, in the area of customer service, Oracle is developing enhancements to help contact center agents: better understand customer needs through social listening tools that are integrated with knowledge management; empower themselves with internal collaboration and mobility tools; and adapt to customer needs by engaging them through chat during a service or commerce interaction, so they can deliver a great customer experience while transforming from a cost center into a profit center.

    Read the article

  • Bring Gadgets back to Your Desktop in Windows 8 RTM with 8GadgetPack

    - by Asian Angel
    Are you someone who loved using desktop gadgets in Windows 7 and Vista, but felt disappointed when learning they were removed in Windows 8 RTM? Then 8GadgetPack is just the app to put those gadgets back on your desktop! The good folks over at 7 Tutorials have a nice little write-up about 8GadgetPack with all the details you need to get those gadgets up and running once again. Just browse on over using the link below… How to Use Desktop Gadgets in Windows 8 with 8GadgetPack [7 Tutorials]

    Read the article

  • Glassfish alive or dead? WebLogic SE cost is less than Glassfish!

    - by JuergenKress
    This has been a hot discussion in the community over the last few days! Send us your opinion on Twitter @wlscommunity #Glassfish #WebLogicCommunity We posted the GlassFishStrategy.pptx at our WebLogic Community Workspace (WebLogic Community membership required). Please also read the Java EE and GlassFish Server Roadmap Update. Bruno Borges: Another great article covering the story about #GlassFish. Comments starting to be reasonable ;-) 6 facts helped a lot http://adtmag.com/articles/2013/11/08/oracle-drops-glassfish.aspx … Adam Bien: What Oracle Could Do For GlassFish Now: Move the sources to GitHub (GitHub is the most popular collaboration p... http://bit.ly/1d1uo24 JAXenter.com: Oracle evangelist: “GlassFish Open Source Edition is not dead” http://jaxenter.com/oracle-evangelist-glassfish-open-source-edition-is-not-dead-48830.html … GlassFish: 6 Facts About #GlassFish Announcement and the Future of #JavaEE http://bit.ly/1bbSVPf via @brunoborges David Blevins: In support of our #GlassFish friends and open source in general: Feed the Fish http://www.tomitribe.com/blog/2013/11/feed-the-fish/ … #JavaEE #opensource #manifesto GlassFish: GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Version 5.0 as impl for #JavaEE8 https://blogs.oracle.com/theaquarium/entry/java_ee_and_glassfish_server … #Community focused C2B2 Consulting: C2B2 continues to offer support for your operational #JEE applications running on #GlassFish http://blog.c2b2.co.uk/2013/11/oracle-dropping-commercial-support-of.html … #Java Markus Eisele: RT @InfoQ: #GlassFish Commercial Edition is Dead http://bit.ly/17eFB0Z < at least they agree to my points... Adam Bien suggests: Move the sources to GitHub (GitHub is the most popular collaboration platform). It is more likely for an individual to contribute via GitHub than through the current infrastructure. Introduce a business-friendlier license, e.g. the Apache license. Companies interested in providing added value (and commercial support) on top of existing sources would appreciate it. Implement a GitHub-based, open source CI system with nightly builds. Introduce a transparent voting process / pull-request acceptance process. Release more frequently. Keep https://glassfish.java.net as the main hub. C2B2 offers GlassFish support, by Steve Millidge: Oracle have just announced that commercial support for GlassFish 4 will not be available from Oracle. In light of this announcement I thought I would put together some thoughts about how I see this development. I think the key word in this announcement is "commercial"; nowhere does Oracle announce the "death of GlassFish". On the contrary, Oracle reaffirms: GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition. So GlassFish is not about to go away soon. In a similar fashion, Red Hat do not provide commercial support for WildFly and only provide commercial support for JBoss EAP. Admittedly JBoss EAP and WildFly are much closer together than GlassFish and WebLogic, but WildFly and JBoss EAP are absolutely NOT the same thing. The key to the viability of GlassFish as a production platform going forward is how the GlassFish community develops: How often does the community release binary builds? How open is the community to bug fixes? How much engineering resource does Oracle commit to GlassFish?
At this stage we just don't know the answers to these questions. If the GlassFish open source project continues on its current trajectory without a commercial support offering then I don't see much of a problem. Oracle just have to work harder to sell migration paths to WebLogic, in the same way as Red Hat have to sell migration paths from WildFly to JBoss EAP. In the meantime C2B2 continues to offer support for your operational JEE applications running on GlassFish and we will endeavour to work with the community to get any bugs fixed. The key difference is we can no longer back our Expert Support with a support contract from Oracle for patches and fixes for any release greater than 3.x. Read the complete article here. 6 Facts About GlassFish Announcement, by Bruno Borges: Fact #1 - GlassFish Open Source Edition is not dead. GlassFish Server Open Source Edition will remain the reference implementation of Java EE. The current trunk is where an implementation for Java EE 8 will flourish, and this will become the future GlassFish 5.0. Calling "GlassFish is dead" does no good to the Java EE ecosystem. The GlassFish Community will remain strong towards the future of Java EE. Without a revenue-focused mind, this might actually help the GlassFish community shape the next version, set free from any ties with commercial decisions. Fact #2 - OGS support is not over. As I said before, GlassFish Server Open Source Edition will continue. The main change is that there will be no future commercial releases of Oracle GlassFish Server. New and existing OGS 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. In parallel, I believe there's no other company in the Java EE business that offers commercial support for more than one build of a Java EE application server. This new direction can actually help customers and partners, simplifying decisions in commercial negotiations. Fact #3 - WebLogic is not always more expensive than OGS. Oracle GlassFish Server ("OGS") is a build of GlassFish Server Open Source Edition bundled with a set of commercial features called GlassFish Server Control and license bundles such as Java SE Support. At the moment of this writing, OGS has a list price of US$ 5,000 per processor. One claim that some bloggers are making is that WebLogic is more expensive than this. Fact 3.1: that is not necessarily the case. The initial edition of WebLogic is called "Standard Edition" and falls into a policy where some “Standard Edition” products are licensed on a per-socket basis. As of the current price list, US$ 10,000 per socket. If you do the math, you will realize that WebLogic SE can actually be significantly more cost effective than OGS, and a customer can save money when running on a CPU with 4 cores or more, for example (a rough worked example follows). Quote from the price list: “When licensing Oracle programs with Standard Edition One or Standard Edition in the product name (with the exception of Java SE Support, Java SE Advanced, and Java SE Suite), a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket.” For more details speak to your Oracle sales representative - this is clearly at list price and every customer typically has a relationship with Oracle (like they do with other vendors) and different contractual details may apply.
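
    To illustrate the arithmetic with a rough worked example (list prices as quoted above; this assumes the published Oracle core factor of 0.5 that applies to most x86 processors, which should be verified for a specific chip): take a two-socket x86 server with 8 cores per socket.

        OGS:          16 cores x 0.5 core factor = 8 processor licenses x US$ 5,000  = US$ 40,000
        WebLogic SE:   2 occupied sockets                               x US$ 10,000 = US$ 20,000

    At 4 cores per socket the two figures meet (4 x 0.5 = 2 processor licenses at US$ 5,000 each, versus one socket at US$ 10,000), which is why the break-even point mentioned above sits at 4 cores.
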
    And although OGS has always been production-ready for Java EE applications, it is no secret that WebLogic has always been a more enterprise-grade, mission-critical application server than OGS, going back to BEA. Different editions of WLS provide features and options like the WebLogic Diagnostic Framework, Work Managers, Side by Side Deployment, ADF and TopLink bundled license, Web Tier (Oracle HTTP Server) bundled license, Fusion Middleware stack support, Oracle DB integration features, Oracle RAC features (such as GridLink), Coherence Management capabilities, Advanced HA (Whole Service Migration and Server Migration), Java Mission Control, Flight Recorder, Oracle JDK support, etc. Fact #4 - There’s no major vendor supporting community builds of Java EE app servers There are no major vendors providing support for community builds of any open source application server. For example, IBM used to provide community support for builds of Apache Geronimo, but no longer does. Red Hat does not commercially support builds of WildFly and, if I remember correctly, never supported community builds of the former JBoss AS. Oracle has never commercially supported GlassFish Server Open Source Edition builds. Tomitribe appears to be the exception to the rule, offering commercial support for Apache TomEE. Fact #5 - WebLogic and GlassFish share several Java EE implementations It has been no secret that although GlassFish and WebLogic share some JSR implementations (as stated in The Aquarium announcement: JPA, JSF, WebSockets, CDI, Bean Validation, JAX-WS, JAXB, and WS-AT) and WebLogic understands GlassFish deployment descriptors, they are not from the same codebase. Fact #6 - WebLogic is not for GlassFish what JBoss EAP is for WildFly WebLogic is a closed-source offering. It is commercialized through a license-based plus support fee model. OGS, although built from open source code, has had the same commercial model as WebLogic. Still, one cannot compare GlassFish/WebLogic to WildFly/JBoss EAP. It is simply not the same case, since Oracle has had two different products from different codebases. The comparison should be limited to GlassFish Open Source / Oracle GlassFish Server versus WildFly / JBoss EAP. Read the complete article here WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Glassfish,training,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Six Rubens’ Tubes Combined Into a Fire-Based Music Visualizer [Video]

    - by Jason Fitzpatrick
    The last Rubens’ Tube setup we shared with you was but a simple single tube. This impressive setup is six independent tubes that register distinct frequencies of sound in a musical composition as standing flames. Check out the video to see it in action. Curious about the Rubens’ Tubes? Read more about the phenomenon here. [via Design Boom]

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in, in the future, so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences. This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords. Search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs’ Pivot has given us a taste of what we can expect to see in the future. This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases, so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don’t have easy access to it. My utilities supplier knows how much electricity I’m using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I’ve been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it.
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. It's interesting then that today when we think of search we think of search engines, and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in, day out. The future of search is personal; why would we be interested in anything else? @Jamiet

    Read the article

  • PandoraBar Packs Pandora Radio Client into a Compact Case

    - by Jason Fitzpatrick
    This stylish and compact build makes it easy to enjoy streaming radio without the bulk and overhead of running your entire computer to do so. Check out the video to see the compact streaming radio box in action. Courtesy of the tinker blog Engscope, we find this clean Pandora-client-in-a-box build. Currently the project blog has a cursory overview of the project with the demo video but promises future updates detailing the software and hardware components of the build. If you can’t wait that long, make sure to check out some of the previous Wi-Fi radio builds we’ve shared: DIY Wi-Fi Radio Brings Wireless Tunes Anywhere in Your House and Wi-Fi Speakers Stream Music Anywhere. PandoraBar [via Hacked Gadgets]

    Read the article

  • AMD/AMD GPU Switching

    - by user73816
    I have two GPUs in my laptop, both of which are AMD. Whenever I use the Catalyst Control Centre to change the GPU, nothing changes after reboot. In fact, when I run the fglrxinfo command the terminal only reports seeing one GPU, the integrated one (HD 4250). The dedicated one (Mobility HD5470) goes unnoticed and I can't seem to use that GPU at all. I really don't want to use the open source drivers because I've found they're generally slower than the proprietary ones, but the proprietary driver doesn't seem to work either. Any help is appreciated.
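    For reference, these are the kinds of commands typically used to check which GPUs and drivers the system actually sees. This is a hedged sketch only; the aticonfig option may not exist on every fglrx release, and nothing below is output from this particular machine:

        # List every VGA-class device the kernel sees and which driver is bound to each.
        lspci -nnk | grep -iA3 vga

        # Report the renderer the running X server is using (requires the fglrx driver).
        fglrxinfo

        # On fglrx builds with PowerXpress support, list the adapters the driver knows about.
        aticonfig --list-adapters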

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
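    To make the idea concrete, here is a rough, hypothetical sketch of what this looks like in practice, using the -z stub form the feature eventually took (described later in this post); the library and file names are made up for illustration:

        # Build a stub shared object for a hypothetical libfoo purely from its mapfile.
        # No input objects are processed, so this is very fast and needs no dependencies.
        ld -64 -z stub -o stubs/libfoo.so.1 -G -hlibfoo.so.1 -M mapfile-vers -ztext -zdefs

        # Link a consumer against the stub instead of the real library. The stub presents
        # the same ELF linking interface, so the link step neither knows nor cares that it
        # is not the real libfoo; at runtime the real libfoo.so.1 is what gets loaded.
        cc -o consumer consumer.c -Lstubs -lfoo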
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 __iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax (a minimal example of the version 2 syntax appears at the end of this post). In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
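    As promised above, here is a minimal, hypothetical example of the general shape of a version 2 mapfile; the version name and symbol names are made up, and only the most basic constructs are shown:

        $mapfile_version 2

        SYMBOL_VERSION SUNW_1.1 {
            global:
                foo_init;
                foo_process;
            local:
                *;
        };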

    Read the article

  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the 3rd post of my Windows Azure Evolution series, focusing on the new features and enhancements which came along with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how it works for the existing features such as cloud services, storage and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this one I will begin to introduce some new features. Now let's have a look at the first of them, Windows Azure Web Sites.   Overview Windows Azure Web Sites (WAWS), also known as Antares, was a new feature still in preview stage in this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, using the languages and open source apps of their choice, and then deploy through FTP, Git and TFS. It can also be integrated with Windows Azure services like SQL Database, Caching, CDN and Storage easily. After reading its introduction we may have a question: since we can deploy a website from both a cloud service web role and a web site, what's the difference between them? So, let's have a quick comparison (Cloud Service / Web Site):
    OS: Windows Server / Windows Server
    Virtualization: Windows Azure Virtual Machine / Windows Azure Virtual Machine
    Host: IIS / IIS
    Platform: ASP.NET WebForm, ASP.NET MVC, WCF / ASP.NET WebForm, ASP.NET MVC, PHP
    Language: C#, VB.NET / C#, VB.NET, PHP
    Database: SQL Database / SQL Database, MySQL
    Architecture: Multi-layered, background worker, message queuing, etc. / Simple website with backend database
    VS Project: Windows Azure Cloud Service / ASP.NET Web Form, ASP.NET MVC, etc.
    Out-of-box Gallery: (none) / Drupal, DotNetNuke, WordPress, etc.
    Deployment: Package upload, Visual Studio publish / FTP, Git, TFS, WebMatrix
    Compute Mode: Dedicated VM / Shared across VMs, Dedicated VM
    Scale: Scale up, scale out / Scale up, scale out
    As you can see, there are many differences between the cloud service and the web site, but the main point is that the cloud service focuses on complex, multi-layered web applications. For example, if you want to build a website with a frontend layer, middle business layer and data access layer, with some background worker process connected through a message queue, then you had better use the cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a business portal, then you can use the web site. Since the web site has many galleries, you can create them even without any coding and configuration. David Pallmann has an awesome figure that explains the differences between the cloud service, web site and virtual machine.   Create a Personal Blog in Web Site from Gallery As I mentioned above, one of the big features in WAWS is to build a website from an existing gallery, which means we don't need to code or configure anything. What we need to do is open the Windows Azure developer portal and click the NEW button, then select WEB SITE and FROM GALLERY. In the pop-up window there are many websites we can choose to use. For example, for a personal blog there are Orchard CMS and WordPress; for CMS there are DotNetNuke, Drupal 7 and mojoPortal. Let's select WordPress and click the next button. The next step is to configure the web site. We will need to specify the DNS name and select the subscription and region. Since WordPress uses MySQL as its backend database, we also need to create a MySQL database as well.
Windows Azure Web Sites utilizes ClearDB to host the MySQL databases. You cannot create a MySQL database directly from the SQL Databases section. Finally, since we selected to create a new MySQL database, we need to specify the database name and region in the last step. We also need to accept ClearDB's terms. Then the Windows Azure platform will download the WordPress code and deploy the MySQL database and website. Then it will be ready to use. Select the website and click the BROWSE button, and the WordPress administration page will be shown. After configuring WordPress, here is my personal blog on the cloud. It took me no more than 10 minutes to establish without any coding.   Monitor, Configure, Scale and Linked Resources Let's click into the website I had just created in the portal and have a look at what we can do. In the website details page there are five sections. - Dashboard The overall information about this website, such as the basic usage status, public URL, compute mode, FTP address, subscription, and links where we can specify the deployment credentials, TFS and Git publish settings, etc. - Monitor Status information such as CPU usage, memory usage, errors, etc. We can add more metrics by clicking the ADD METRICS button at the bottom as well. - Configure Here we can set the configuration of our website, such as the .NET and PHP runtime version, diagnostics settings, application settings and the IIS default documents. - Scale This is something interesting. In WAWS there are two compute modes, also called web site modes. One is “shared”, which means our website will be shared with other web sites in a group of Windows Azure virtual machines. Each web site has its own process (w3wp.exe) with some sandbox technology to isolate it from the others. When we scale out our web site in shared mode, we actually increase the worker process count. Hence in shared mode we cannot specify the virtual machine size, since the machines are shared across all web sites. This is a little bit different from the scaling mode of the cloud service (hosted service web role and worker role). The other mode, called “dedicated”, means our web site will use the whole Windows Azure virtual machine. This is the same hosting behavior as the cloud service web role. In a web role the application is deployed on the virtual machines we specified, and all of them are used only by us. In web sites dedicated mode it's the same. In this mode, when we scale out our web site we use more virtual machines, and each of them will only host our own website. And we can specify the virtual machine size in this mode. In the developer portal we can select which mode we are using from the scale section. In shared mode we can only specify the instance count, but in dedicated mode we can specify the instance size as well as the instance count. - Linked Resources The MySQL database created along with the creation of our WordPress web site is a linked resource. We can add more linked resources in this section.   Pricing For the web site itself, since this feature is in its preview period, if you are using shared mode you will get up to 10 web sites for free. But if you are using dedicated mode, the price is that of the virtual machines you are using. For example, if you are using dedicated mode and configured two medium size virtual machines, then you will pay $230.40 per month. If there is a SQL Database linked to your web site then it will be charged separately based on the Pay-As-You-Go price.
For example, a 1GB web edition database costs $9.99 per month. And the bandwidth will be charged as well; for example, 10GB of outbound data transfer costs $1.20 per month. For more information about the pricing please have a look at the Windows Azure pricing page.   Summary Windows Azure Web Sites gives us an easier and quicker way to create, develop and deploy websites to the Windows Azure platform. Compared with the cloud service web role, WAWS has many out-of-box galleries we can use directly. So if you just want to build a blog, CMS or business portal you don't need to learn ASP.NET, you don't need to learn how to configure DotNetNuke, and you don't need to learn how to prepare PHP and MySQL. By using a WAWS gallery you can establish a website within 10 minutes without any lines of code. But in some cases we do need to code by ourselves. We may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application which needs to be migrated to the cloud. Besides the gallery, WAWS also provides many features to download and upload code. It also integrates with version control services such as TFS and Git, and it provides deployment approaches through FTP and Web Deploy (a rough sketch of the Git publishing flow appears after this post). In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically once our code changes are committed.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
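    As referenced in the summary above, here is a minimal, hypothetical sketch of what the Git publishing flow for a Web Site can look like. The site name, user name and remote URL below are placeholders and the URL shape is an assumption; the actual Git URL should be copied from the site's dashboard and Git publish settings:

        # One-time setup: put the site's content under version control.
        git init
        git add .
        git commit -m "initial version of the site"

        # Add the Web Site's Git endpoint as a remote (placeholder URL shown here;
        # use the URL displayed in the portal for your own site).
        git remote add azure https://someuser@mysite.scm.azurewebsites.net/mysite.git

        # Push to deploy; the portal then lists the deployment under the site.
        git push azure master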

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #039

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 FQL – Facebook Query Language Facebook lists the following advantages of FQL: Condensed XML reduces bandwidth and parsing costs. More complex requests can reduce the number of requests necessary. Provides a single consistent, unified interface for all of your data. It’s fun! UDF – Get the Day of the Week Function The day of the week can be retrieved in SQL Server by using the DatePart function. The value returned by the function is between 1 (Sunday) and 7 (Saturday). To convert this to a string representing the day of the week, use a CASE statement (a small sketch of this approach appears at the end of this post). UDF – Function to Get Previous And Next Work Day – Exclude Saturday and Sunday While reading Ben Nadel's ColdFusion blog post Getting the Previous Day In ColdFusion, Excluding Saturday And Sunday, I realized that I use a similar function on my SQL Server database. This function excludes the weekends (Saturday and Sunday), and it gets the previous as well as the next work day. Complete Series of SQL Server Interview Questions and Answers Data Warehousing Interview Questions and Answers – Introduction Data Warehousing Interview Questions and Answers – Part 1 Data Warehousing Interview Questions and Answers – Part 2 Data Warehousing Interview Questions and Answers – Part 3 Data Warehousing Interview Questions and Answers Complete List Download 2008 Introduction to Log Viewer In SQL Server all the Windows event logs can be seen along with the SQL Server logs. The interface for all the logs is the same and can be launched from the same place. This log can be exported and filtered as well. DBCC SHRINKFILE Takes Long Time to Run If you are a DBA involved with database maintenance and file group maintenance, you must have experienced that DBCC SHRINKFILE operations often take a long time while other database operations are relatively quicker. mssqlsystemresource – Resource Database The purpose of the resource database is to facilitate upgrading to the new version of SQL Server without any hassle. In previous versions, whenever the version of SQL Server was upgraded, all the previous version's system objects needed to be dropped and the new version's system objects created. 2009 Puzzle – Write Script to Generate Primary Key and Foreign Key In SQL Server Management Studio (SSMS), there is no option to script all the keys. If one is required to script keys they will have to manually script each key one at a time. If the database has many tables, generating one key at a time can be a very intricate task. I want to throw a question to all of you: do any of you have scripts for the same purpose? Maximizing View of SQL Server Management Studio – Full Screen – New Screen I had explained the following two different methods: 1) Open Results in Separate Tab - This is a very interesting method as the result pane shows up in a different tab instead of splitting the screen horizontally. 2) Open SSMS in Full Screen - This always works, and works well. Not many people are aware of this method; hence, very few people use it to enhance performance. 2010 Find Queries using Parallelism from Cached Plan This T-SQL script gets all the queries and their execution plans where parallelism operations kick in.
Note that TOP 10 is used; if you have lots of transactional operations, I suggest that you change TOP 10 to TOP 50. This is the list of all the articles in the series on computed columns. SQL SERVER – Computed Column – PERSISTED and Storage This article talks about how computed columns are created and why they take more storage space than before. SQL SERVER – Computed Column – PERSISTED and Performance This article talks about how PERSISTED columns give better performance than non-persisted columns. SQL SERVER – Computed Column – PERSISTED and Performance – Part 2 This article talks about how non-persisted columns give better performance than PERSISTED columns. SQL SERVER – Computed Column and Performance – Part 3 This article talks about how an index improves the performance of computed columns. SQL SERVER – Computed Column – PERSISTED and Storage – Part 2 This article talks about how creating an index on a computed column does not grow the row length of the table. SQL SERVER – Computed Columns – Index and Performance This article summarizes all the articles related to computed columns. 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 21 of 31 What is Data Warehousing? What is Business Intelligence (BI)? What is a Dimension Table? What is Dimensional Modeling? What is a Fact Table? What are the Fundamental Stages of Data Warehousing? What are the Different Methods of Loading Dimension Tables? Describe the Foreign Key Columns in the Fact Table and Dimension Table. What is Data Mining? What is the Difference between a View and a Materialized View? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 22 of 31 What is OLTP? What is OLAP? What is the Difference between OLTP and OLAP? What is ODS? What is an ER Diagram? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 23 of 31 What is ETL? What is VLDB? Is an OLTP Database Design Optimal for a Data Warehouse? If denormalizing improves Data Warehouse Processes, then why is the Fact Table in Normal Form? What are Lookup Tables? What are Aggregate Tables? What is Real-Time Data-Warehousing? What are Conformed Dimensions? What is a Conformed Fact? How do you Load the Time Dimension? What is the Level of Granularity of a Fact Table? What are Non-Additive Facts? What is a Factless Fact Table? What are Slowly Changing Dimensions (SCD)? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 24 of 31 What is a Hybrid Slowly Changing Dimension? What is a BUS Schema? What is a Star Schema? What is a Snowflake Schema? What are the Differences between the Star and Snowflake Schemas? What is the Difference between ER Modeling and Dimensional Modeling? What is a Degenerate Dimension Table? Why is Data Modeling Important? What is a Surrogate Key? What is a Junk Dimension? What is a Data Mart? What is the Difference between OLAP and a Data Warehouse? What is a Cube and a Linked Cube with Reference to a Data Warehouse? What is a Snapshot with Reference to a Data Warehouse? What is Active Data Warehousing? What is the Difference between Data Warehousing and Business Intelligence? What is MDS? Explain the Paradigm of Bill Inmon and Ralph Kimball. SQL SERVER – Azure Interview Questions and Answers – Guest Post by Paras Doshi – Day 25 of 31 Paras Doshi has submitted 21 interesting questions and answers for SQL Azure. 1. What is SQL Azure? 2. What is cloud computing?
3. How is SQL Azure different from SQL Server? 4. How many replicas are maintained for each SQL Azure database? 5. How can we migrate from SQL Server to SQL Azure? 6. Which tools are available to manage SQL Azure databases and servers? 7. Tell me something about security and SQL Azure. 8. What is the SQL Azure Firewall? 9. What is the difference between the web edition and the business edition? 10. How do we synchronize an on-premise SQL Server with SQL Azure? 11. How do we back up SQL Azure data? 12. What is the current pricing model of SQL Azure? 13. What is the current limitation on the size of a SQL Azure DB? 14. How do you handle datasets larger than 50 GB? 15. What happens when a SQL Azure database reaches its max size? 16. How many databases can we create in a single server? 17. How many servers can we create in a single subscription? 18. How do you improve the performance of a SQL Azure database? 19. What is the code-near application topology? 20. What were the latest updates to the SQL Azure service? 21. When does a workload on SQL Azure get throttled? SQL SERVER – Interview Questions and Answers – Guest Post by Malathi Mahadevan – Day 26 of 31 Malathi had asked a simple question which has several answers. Each answer makes you think and ponder about the reality of the IT world. Look at the simple question – ‘What is the toughest challenge you have faced in your present job and how did you handle it?’ – and its various answers. Each answer has its own story. SQL SERVER – Interview Questions and Answers – Guest Post by Rick Morelan – Day 27 of 31 Rick Morelan of Joes2Pros has written an excellent blog post on the subject of how to find top N values. Most people are fully aware of how the TOP keyword works with a SELECT statement. After years of preparing so many students to pass the SQL certification I noticed they were pretty well prepared for job interviews too. Yes, they would do well in the interview but not great. There seemed to be a few questions that would come up repeatedly for almost everyone. Rick addresses similar questions with his lucid writing style. 2012 Observation of Top with Index and Order of Resultset SQL Server has lots of things to learn and share. It is amazing to see how differently people evaluate and understand different techniques and styles when implementing them. The real reason may be something else entirely, but we may blame something totally different for the incorrect results. Read the blog post to learn more. How do I Record Video and Webcast How to Convert Hex to Decimal or INT Earlier I asked a question about how to convert Hex to Decimal. I promised that I would post an answer with due credit to the author but never got around to posting a blog post about it. Read the original post over here: SQL SERVER – Question – How to Convert Hex to Decimal. Query to Get Unique Distinct Data Based on Condition – Eliminate Duplicate Data from Resultset The natural reaction will be to suggest DISTINCT or GROUP BY. However, not all such questions can be solved by DISTINCT or GROUP BY. Let us see the following example, where a user wanted only the latest records to be displayed. Let us see the example to understand further. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
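    As a quick illustration of the DATEPART/CASE approach mentioned in the 2007 section above, here is a minimal sketch; it is not the original UDF from the article, and it assumes the default DATEFIRST setting under which Sunday is returned as 1:

        -- Day-of-week number and a CASE mapping from number to name (default DATEFIRST assumed).
        SELECT
            DATEPART(weekday, GETDATE()) AS DayNumber,
            CASE DATEPART(weekday, GETDATE())
                WHEN 1 THEN 'Sunday'
                WHEN 2 THEN 'Monday'
                WHEN 3 THEN 'Tuesday'
                WHEN 4 THEN 'Wednesday'
                WHEN 5 THEN 'Thursday'
                WHEN 6 THEN 'Friday'
                WHEN 7 THEN 'Saturday'
            END AS DayOfWeekName;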

    Read the article

< Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >