Search Results



  • SQL Developer: Why Do You Require Semicolons When Executing SQL in the Worksheet?

    - by thatjeffsmith
    There are many database tools out there that support the Oracle database. Oracle SQL Developer just happens to be the one that is produced and shipped by the same folks that bring you the database product. Several other 3rd party tools out there allow you to have a collection of SQL statements in their editor and execute them without requiring a statement delimiter (usually a semicolon). Let’s look at a quick example:

      select * from scott.emp

      select * from hr.employees

      delete from HR_COPY.BEER

      where HR_COPY.BEER.STATE like '%West Virginia%'

    In some tools, you can simply place your cursor on, say, the 2nd statement and ask to execute that statement. The tool assumes that the blank line between it and the next statement, a DELETE, serves as a statement delimiter. This is not bad in and of itself. However, it is very important to understand how your tools work. If you were to try the same trick by running the delete statement, it would empty my entire BEER table instead of just trimming out the breweries from my home state, because the blank line before the WHERE clause would be treated as the end of the statement and the unqualified DELETE would run. SQL Developer only executes what you ask it to execute. You can paste this same code into SQL Developer and run it without problems and without having to add semicolons to your statements. Highlight what you want executed, and hit Ctrl-Enter. If you don’t highlight the text, here’s what you’ll see: See the statement at the cursor vs what SQL Developer actually executed? The parser looks for a query and keeps going until the statement is terminated with a semicolon – UNLESS it’s highlighted, then it assumes you only want to execute what is highlighted. In both cases you are being explicit with what is being sent to the database. Again, there’s not necessarily a ‘right’ or ‘wrong’ debate here. What you need to be aware of is the differences, and to learn new workflows if you are moving from other database tools to Oracle SQL Developer. I say, when in doubt, back away from the tool, especially if you’re in production. Oh, and to answer the original question… Because we’re trying to emulate SQL*Plus behavior. You end statements in SQL*Plus with delimiters, and the default delimiter is a semicolon.
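
    For comparison, here is a minimal sketch (using the post's own example tables) of the same script written the way SQL Developer expects, with every statement explicitly terminated by a semicolon so that Run Script (F5), or Ctrl-Enter on a highlighted statement, behaves predictably:

      -- each statement carries its own terminator, so nothing depends on blank lines
      select * from scott.emp;

      select * from hr.employees;

      delete from HR_COPY.BEER
       where HR_COPY.BEER.STATE like '%West Virginia%';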


  • Should I be using a JavaScript SPA design when security is important?

    - by ryanzec
    I asked something kind of similar on stackoverflow with a particular piece of code; however, I want to try to ask this in a broader sense. So I have this web application that I have started to write in Backbone using a Single Page Architecture (SPA); however, I am starting to second-guess myself because of security. Now we are not storing and sending credit card information or anything like that through this web application, but we are storing sensitive information that people are uploading to us and will have the ability to re-download too. The obvious security concern that I have with JavaScript is that you can't trust anything that comes from JavaScript; however, in a Backbone SPA application, everything is being sent through JavaScript. There are two security features that I will have to build in JavaScript: permissions and authentication. The authentication piece is just me overriding the Backbone.Router.prototype.navigate method to check the fragment it is trying to load, and if the JavaScript application.session.loggedIn is not set to true (and they are not viewing a non-authenticated page), they are redirected to the login page automatically. The user could easily modify application.session.loggedIn to equal true (or modify the Backbone.Router.prototype.navigate method), but then they would also have to (not so easily) dynamically embed a link into the page (or modify a current one) that has the proper classes, data-* attributes, and href values to then load a page that should only be loaded when the user has logged in (and has the permissions). So I have an acl object that deals with the permissions stuff. All someone would have to do to view pages or parts of pages they should not be able to is call acl.addPermission(resource, permission) with the proper permissions, or modify acl.hasPermission() to always return true, and then navigate away and back to the page. Now, certain things in ECMAScript 5 like Object.seal() or Object.freeze() would help with some of this; however, we have to support IE 8, which does not support those pieces of functionality. Now the REST API also performs security checks on every request, so technically even if they are able to see parts of the interface that they should not be able to, they still should not be able to actually affect any data. The main benefits for me in developing a JavaScript SPA application are that the application is a lot more responsive, since it is only transferring the minimum amount of JSON data for the requested action and performing the minimum amount of work too. There are also other things that I think are beneficial, like the fact that you are going to have to develop an API for the data (which is good if you want to expand your application to different platforms/technologies), or that there is more of a separation between front-end and back-end. However, if security is a concern, is it really wise to go down the road of a JavaScript SPA application for the front-end?


  • FREE Windows Azure evening in London on April 15th including FREE access to Windows Azure

    - by Eric Nelson
    [Did I overdo the use of FREE in the title? :-)] April 12th to 16th is Microsoft Tech Days – 5 days of sessions on Visual Studio 2010 through to Windows 7 Phone Series. Many of these days are now full (Tip - Thursday still has room if rich client applications is your thing) but the good news is the development community in the UK has pulled together an awesome series of “fringe events” during April in London and elsewhere in the UK. There are sessions on Silverlight, SQL Server 2008 R2, Sharepoint 2010 and … the Windows Azure Platform. The UK AzureNET user group is planning to put on a great evening and AzureNET will be giving away hundreds of free subscriptions to the Windows Azure Platform during the evening. The subscription includes up to 20 Windows Azure Compute nodes and 3 SQL Azure databases for you to play with over the 2 weeks following the event. This is a great opportunity to really explore the Windows Azure Platform in detail – without a credit card! Register now! (and you might also want to join the UK Fans of Azure Community while I have your attention) FYI The Thursday day time event includes an introduction to Windows Azure session delivered by my colleague David – which would be an ideal session to attend if you are new to Azure and want to get the most out of the evening session. 7:00pm: See the difference: How Windows Azure helped build a new way of giving Simon Evans and James Broome (@broomej) They will cover the business context for Azure and then go into patterns used and lessons learnt from the project....as well as showing off the app of course! 8:00pm: UK AzureNET update 8:15pm: NoSQL databases or: How I learned to love the hash table Mark Rendle (@markrendle) In this session Mark will look at how Azure Table Service works and how to use it. We’ll look briefly at the high-level Data Services SDK, talk about its limitations, and then quickly move on to the REST API and how to use it to improve performance and reduce costs. We’ll make-up some pretend real-world problems and solve them in new and interesting ways. We’ll denormalise data (for fun and profit). We’ll talk about how certain social networking sites can deal with huge volumes of data so quickly, and why it sometimes goes wrong. Check out the complete list of fringe events which covers the UK fairly well:


  • Customer Experience Online Forum

    - by Christie Flanagan
    Missed Oracle’s Customer Experience Online Forum?  Don’t worry. You can still catch the sessions at your convenience. Watch the Customer Experience Online Forum on demand to hear from Bruce Temkin, a leading expert in customer experience, as well as other thought leaders, as they delve into topics such as the ROI of customer experience and strategies for winning over customers.  Simply register to gain access to these sessions and more:

    The Customer Experience Revolution
    Customer experience has become the most important and defensible differentiator for your business. The customer experience is a journey that transcends all customer touchpoints and stages of the customer lifecycle. Discover where you are in the journey, identify how to begin optimizing the experience you deliver your customers, and join the Revolution.

    The ROI of Customer Experience
    Bruce Temkin, Customer Experience Transformist & Managing Partner, Temkin Group
    Research of US and UK customers demonstrates a high correlation between a positive customer experience and loyalty. A successful customer interaction increases the willingness to buy more and to recommend your company. US companies can gain $380 million over three years by providing an optimized customer experience. This session will help companies determine the business impact that customer experience has on their specific business.

    Integrating Marketing and Loyalty to Deliver Great Customer Experiences
    New devices and channels, such as mobile, social and web, are creating radical shifts in the customer buying process and the ways your company can reach and communicate with existing and potential customers. Learn how leading brands are using Oracle's marketing solutions to harness big data and better understand their customers, extend their marketing reach into social channels, and retain their high value customers through a more rewarding customer experience.

    Where to Start Your Organization's Revolution
    The process of crafting a great customer experience starts with understanding customers and their goals. This session helps you to begin mapping a sound customer experience strategy, describing the intended experience and the kinds of processes that create differentiation.

    The ROI of Customer Experience: A Temkin Group Insight Report
    Did you know that customer experience leaders have more than a 16 percentage point advantage over customer experience laggards in consumers’ willingness to buy more, their reluctance to switch business away, and their likelihood to recommend? Did you know that even a modest increase in customer experience can translate into millions of dollars gained? Learn more about the ROI of customer experience in this free report.


  • The Whole Enchilada — Fusion Supply Chain in the Cloud

    - by Kathryn Perry
    A guest post by Tyra Crockett, Senior Manager at Oracle. No other vendor can offer everything in the cloud the way Oracle can. You can get HR from Workday and CRM from Salesforce, but you can get the whole enchilada—HCM, CRM and ERP—all from Oracle on one platform. If you’re thinking about using Oracle's Cloud Services to implement the newest Oracle Fusion Supply Chain applications, this post is for you.
    Point #1: The Oracle Cloud Applications Services portfolio includes ERP cloud services which are flexible and can adapt to fill your supply chain needs. For example, you might be opening a small distribution facility in California, but don’t have the time or IT resources to warrant a full scale supply chain implementation. You can use Oracle’s Cloud to implement the Oracle Fusion Supply Chain applications you need without an increase in IT staff or hardware. Then as your business grows, you can add more features and applications to your cloud.
    Point #2: Whether you’re implementing a slice of the Fusion Procurement pie, or the entire ERP portfolio, you want to be up and running fast with low upfront costs and investment risks. That’s where you can trust a world-class technology organization like Oracle. Your SaaS subscription-based deployment model will take away the headaches associated with determining your software costs. You also will be able to eliminate expensive customizations and configure your deployment as you like, saving you time and money during the initial stages and upon upgrade.
    Point #3: Another great benefit of operating your Oracle Fusion Supply Chain in the cloud is the opportunity to standardize your processes across your entire supply chain. You can institute processes in San Francisco and be confident they will be followed in Mexico City and Hong Kong.
    Point #4: If data security is a concern – and it is for most of us – Oracle-managed cloud services give you the comfort of knowing that your data will always be there when you need it. You will not have to manage the IT services associated with patching and upgrade. They will be taken care of automatically. This enables you to focus on what you do best: managing your business.
    Point #5: Cloud services aren’t an either/or proposition. You might have very good business reasons for choosing a hybrid model -- running some applications in the cloud and others on premise. That allows you to leverage your own IT department, when and where you need to, and shift focus when necessary. I urge you to take a hard look at the Oracle Fusion Supply Chain applications running in the cloud. These solutions running alongside your existing legacy systems can solve your toughest business challenges as you move forward in the 21st century.


  • Certificate Revocation checking affecting system performance [migrated]

    - by Colm Clarke
    I have a .NET 3.5 desktop application that had been showing periodic slowdowns in functionality whenever the test machine it was on was out of the office. I managed to replicate the error on a machine in the office without an internet connection, but it was only when I used ANTS performance profiler that I got a clearer picture of what was going on. In ANTS I saw a "Waiting for synchronization" taking up to 16 seconds that corresponded to the delay I could see in the application when NHibernate tried to load the System.Data.SqlServerCE.dll assembly. If I tried the action again immediately it would work with no delay, but if I left it for 5 minutes then it would be slow to load again the next time I tried it. From my research so far it appears to be because the SqlServerCE dll is signed and so the system is trying to connect to get the certificate revocation lists and timing out. Disabling the "Automatically detect settings" setting in the Internet Options LAN settings makes the problem go away, as does disabling the "Check for publisher's certificate revocation". But the admins where this application will be deployed are not going to be happy with the idea of disabling certificate checking on a per-machine or per-user basis, so I really need to get the application-level disabling of the CRL check working. There is the well-documented bug in .NET 2.0 which describes this behaviour, and offers a possible fix with a config file element:

      <?xml version="1.0" encoding="utf-8"?>
      <configuration>
        <runtime>
          <generatePublisherEvidence enabled="false"/>
        </runtime>
      </configuration>

    This is NOT working for me, however, even though I am using .NET 3.5. The SQLServerCE dll is being loaded dynamically by NHibernate and I wonder if the fact that it's dynamic could somehow be why the setting isn't working, but I don't know how I could check that. Can anyone offer suggestions as to why the config setting might not work? Or is there another way I could disable the check at the application level, perhaps a CAS policy setting that I can use to set an exception for the application when it's installed? Or is there something I can change in the application to up the trust level or something like that? I have also tried, to no advantage, ServicePointManager.CheckCertificateRevocationList = false; as suggested at http://rusanu.com/2009/07/24/fix-slow-application-startup-due-to-code-sign-validation/ I have also tried those registry settings out and unfortunately they didn't help. The dlls that appear to be the cause of the hold up are native SQL Server CE dlls, and looking at the stack traces in ProcMon mscorwks.dll doesn't appear to be involved even though the checks on crypto and cert registry keys are being done under the .NET application. It's definitely still something to do with publisher certificate checking, because unticking "Check for publisher's certificate revocation" still works, but something odd is going on.


  • Silverlight Cream for June 06, 2010 -- #876

    - by Dave Campbell
    In this Issue: Brian Genisio, Michael Washington, Fons Sonnemans, Don Burnett, Xianzhong Zhu, Mike Snow, Jesse Liberty, Victor Gaudioso, David Kelley(-2-), and Matias Bonaventura. Shoutout: Anoop has a good post up: MEF or Managed Extensibility Framework and Lazy – Being Lazy with MEF, Custom Export Attributes etc Jesse Liberty's got a good post up if you are just Getting Started With Silverlight: A Path Through The Learning Material John Papa reports Updates and New Home for Sticky Plugin Tim Heuer announced Silverlight 4 Theme refresh including RIA Services templates From SilverlightCream.com: Adventures in MVVM – ViewModel Location and Creation Brian Genisio has a post up about ViewModels and how he attaches them to his views. Some discussion of MVVMLight, and other external links plus the code for the project. Simplified MEF: Dynamically Loading a Silverlight .xap Michael Washington has a good tutorial up on MEF, Silverlight, and ViewModel. In Michael's words: The goal here is to give you a quick easy win. You will be able to understand this one. You will come away with something you can use, and you will be able to tell your fellow colleagues, "MEF? yeah I'm using that, good stuff" Touch Gesture Triggers for Windows Phone 7 projects in Blend 4.0 Fons Sonnemans has a post up about touch gestures for WP7 -- he's got 3 of them implemented using triggers, plus an external link to another, and the source. What the Heck is “MEF” for, and what Silverlight designers need to know about it? Don Burnett is also talking MEF... he does a good job of introducing MEF if you're not acquainted yet, plus some external information. Write Your Custom Effect Components in Silverlight 3 Xianzhong Zhu has a post up walking you through creating your own Custom Effect for Blend and Silverlight 3 ... lots of external links and the source project. Silverlight Tip of the Day #28 – Text Trimming Mike Snow's Tip #28 is about Text Trimming... what it does, and how it differs from WPF Windows Phone 7: Lists, Page Animation and oData Jesse Liberty called this a mini-tutorial, but it's not so mini... great tutorial on WP7, data, lists, and page transitions... oh, and the data is OData too... New Silverlight Video Tutorial: How to Do Hit Detection Victor Gaudioso's latest video tutorial is up and he's demonstrating how to do Silverlight HitTesting via code from Andy Beaulieu Dependency Properties Made Easy Need a quick pick-up on Dependency Properties? David Kelley has a short post about them on his blog. Isolated Storage Made Easy David Kelley also has a quick post up about Isolated Storage ... going to keep an eye out for more of these quick "Made Easy" posts from David. Prism 4.0 First Drop – MVVM Matias Bonaventura has a post up about the recent Prism 4.0 drop and highlights a bunch of the features/enhancements in this... some code snippets and a link out to the CodePlex drop. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10


  • Digital Storage for Airline Entertainment

    - by Bill Evjen
    by Thomas Coughlin Common flash memory cards The most common flash memory products currently in use are SD cards and derivative products (e.g. mini and micro-SD cards) Some compact flash used for professional applications (such as DSLR cameras) Evolution of leading flash formats Standardization –> market expansion Market expansion –> volume iNAND –> focus is on enabling embedded X3 iSSD –> ideal for thin form factor devices Flash memory applications Phones are the #1 user of flash memory Flash memory is used as embedded and removable storage in many mobile applications Flash memory is being used in computers as USB sticks and SSDs Possible use of flash memory in computer combined with HDDs (hybrid HDDs and paired or dual storage computers) It can be a removable card or an embedded card These devices can only handle a specific number of writes Flash memory reads considerably quicker than hard drives Hybrid and dual storage in computers SSDs can provide fast performance but they are expensive HDDs can provide cheap storage but they are relatively slow Combining some flash memory with a HDD can provide costs close to those of HDDs and performance close to flash memory Seagate Momentus XT hybrid HDD Various dual storage offerings putting flash memory with HDDs Other common flash memory devices USB sticks All forms and colors Used for moving files around Some sold with content on them (Sony Movies on USB sticks) Solid State Drives (SSDs) Floating Gate Flash Memory Cell When a bit is programmed, electrons are stored upon the floating gate This has the effect of offsetting the charge on the control gate of the transistor If there is no charge upon the floating gate, then the control gate’s charge determines whether or not a current flows through the channel A strong charge on the control gate assumes that no current flows. A weak charge will allow a strong current to flow through. Similar to HDDs, flash memory must provide: Bit error correction Bad block management NAND and NOR memories are treated differently when it comes to managing wear In many NOR-based systems no management is used at all, since the NOR is simply used to store code, and data is stored in other devices. In this case, it would take a near-infinite amount of time for wear to become an issue since the only time the chip would see an erase/write cycle is when the code in the system is being upgraded, which rarely if ever happens over the life of a typical system. NAND is usually found in very different application than is NOR Flash memory wears out This is expected to get worse over time Retention: Disappearing data Bits fade away Retention decreases with increasing read/writes Bits may change when adjacent bits are read Time and traffic are concerns Controllers typically groom read disturb errors Like DRAM refresh Increases erase/write frequency Application characteristics Music – reads high / writes very low Video – r high / writes very low Internet Cache – r high / writes low On airplanes Many consumers now have their own content viewing devices – do they need the airlines? Is there a way to offer more to consumers, especially with their own viewers Additional special content tie into airplane network access to electrical power, internet Should there be fixed embedded or removable storage for on-board airline entertainment? Is there a way to leverage personal and airline viewers and content in new and entertaining ways?


  • SQL SERVER – Identify Numbers of Non Clustered Index on Tables for Entire Database

    - by pinaldave
    Here is the script which will give you the number of non-clustered indexes on every table in the entire database.

      SELECT COUNT(i.TYPE) NoOfIndex,
             [schema_name] = s.name,
             table_name = o.name
      FROM sys.indexes i
      INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
      INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
      WHERE o.TYPE IN ('U')
      AND i.TYPE = 2
      GROUP BY s.name, o.name
      ORDER BY schema_name, table_name

    Here is the small story behind why this script was needed. I recently went to meet my friend in his office and he introduced me to his colleague in the office as someone who is an expert in SQL Server Indexing. I politely said I am still learning about Indexing and have a long way to go. My friend’s colleague right away said – he had a suggestion for me related to indexes. According to him, he was looking for a script which would count all the non-clustered indexes on all the tables in the database and he was not able to find that on SQLAuthority.com. I was a bit surprised, as I really do not remember all the details about what I have written so far. I quickly pulled up my phone and tried to look for the script on my custom search engine and he was correct. I never wrote a script which would count all the non-clustered indexes on tables in the whole database. Excessive indexing is not recommended in general. If you have too many indexes it will definitely negatively affect your performance. The above query will quickly give you details of the number of indexes on the tables in your entire database. You can quickly glance and use the numbers as a reference. Please note that the number of indexes is not an indication of bad indexes. There is a lot of wisdom I can write here but that is not the scope of this blog post. There are many different rules with Indexes and many different scenarios. For example – a table which is a heap (no clustered index) is often not recommended for an OLTP workload (here is the blog post to identify them), drop unused indexes with careful observation (here is the script for it), identify missing indexes and after careful testing add them (here is the script for it). Even though I have given a few links here it is just the tip of the iceberg. If you follow only the above four pieces of advice your ship may still sink. Those who want to learn the subject in depth can watch the videos here after logging in. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
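
    If you only care about a single table rather than the whole database, a minimal variant of the same script narrows the WHERE clause; the table name below is a hypothetical placeholder:

      SELECT COUNT(i.TYPE) NoOfIndex,
             [schema_name] = s.name,
             table_name = o.name
      FROM sys.indexes i
      INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
      INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
      WHERE o.TYPE IN ('U')
      AND i.TYPE = 2
      AND o.name = 'YourTableName' -- hypothetical table name
      GROUP BY s.name, o.name
      ORDER BY schema_name, table_name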


  • First steps into CSS - aligning data inside one DIV [on hold]

    - by Andrew
    I am trying to move away from tables, and start doing CSS. Here is my HTML code that I am currently trying to place into a nice looking container.

      <div>
        <div>
          <h2>ID: 4000 | SSN#: 4545</h2>
        </div>
        <div>
          <img src="./images/tenant/unknown.png">
        </div>
        <div>
          <h3>Names Used</h3>
          Will Smith<br> Bill Smmith<br> John Smith<br>
          Will Smith<br> Bill Smmith<br> John Smith<br>
          Will Smith<br> Bill Smmith<br> John Smith<br>
        </div>
        <div>
          <h3>Phones Used</h3>
          123456789<br> 123456789<br> 123456789<br> 123456789<br>
          123456789<br> 123456789<br> 123456789<br> 123456789<br>
        </div>
        <div>
          <h3>Addresses Used</h3>
          125 Main Evanston IL 60202<br>
          465 Greenwood St. Schaumburg null 60108<br>
          125 Main Evanston IL 60202<br>
          465 Greenwood St. Schaumburg null 60108<br>
          125 Main Evanston IL 60202<br>
          465 Greenwood St. Schaumburg null 60108<br>
          125 Main Evanston IL 60202<br>
          465 Greenwood St. Schaumburg null 60108<br>
          125 Main Evanston IL 60202<br>
          465 Greenwood St. Schaumburg null 60108<br>
        </div>
      </div>

    I now understand how to create classes and assign them to elements. I have no issues doing colors. But I am very confused about element alignment. Could you suggest a nice way to pack it together with some CSS which I can analyze and take as a starting point for learning CSS?


  • Pure Front end JavaScript with Web API versus MVC views with ajax

    - by eyeballpaul
    This is more of a discussion about what people's thoughts are these days on how to split a web application. I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass this back to the browser on a full page request, unless there were specific areas that I did not want to populate straight away and would then use DOM page load events to call the server to load other areas using AJAX. Also, when it came to partial page refreshing, I would call an MVC action method which would return the HTML fragment which I could then use to populate parts of the page. This would be for areas that I did not want to slow down initial page load, or areas that fitted better with AJAX calls. One example would be for table paging. If you want to move on to the next page, I would prefer it if an AJAX call got that info rather than using a full page refresh. But the AJAX call would still return an HTML fragment. My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front end background? An intelligent front end developer that I work with prefers to do more or less nothing in the MVC views, and would rather do everything on the front end. Right down to web API calls populating the page. So that rather than calling an MVC action method, which returns HTML, he would prefer to return a standard object and use JavaScript to create all the elements of the page. The front end developer way means that any benefits that I normally get with MVC model validation, including client side validation, would be gone. It also means that any benefits that I get with creating the views, with strongly typed HTML templates etc., would be gone. I believe this would mean I would need to write the same validation for both front end and back end validation. The JavaScript would also need to have lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use the MVC partial view for creating the row, and then return this as part of the AJAX call, which then gets injected into the table. By using a pure front end way, the JavaScript would take in an object (for, say, a product) for the row from the API call, and then create a row from that object, creating each individual part of the table row. The website in question will have lots of different areas, from administration, forms, product searching etc. A website that I don't think needs to be architected in a single page application way. What are everyone's thoughts on this? I am interested to hear from front end devs and back end devs.


  • VS.NET 2010 SP1, Win 7, Parallels, and an MBP – Hell, my friends…HELL!

    - by D'Arcy Lussier
    LightSwitch Beta 2 is out. That’s how all this started. All I wanted was to install it on my MBP’s Win7 Parallels VM. But as I’m finding with running a Win7 VM on an MBP, nothing is as easy as it should be. First my MBP froze during the SP1 installation. Not my VM crashing, the entire machine freezing…no mouse, nothing. Had to do a hard reset. BLECH. Then we’re back and I try to re-install SP1 (since the first try obviously failed). I get met with a dialog asking me where silverlight_sdk.msi was. It was *nowhere*! So I hit the net and download it from Microsoft’s site. Unfortunately, it only downloads an exe and not the individual files which would include the msi. Here’s what I did:
    - Download the tools for Silverlight 4 (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b3deb194-ca86-4fb6-a716-b67c2604a139&displaylang=en)
    - Run it, but don’t hit the install or next button when the dialog comes up
    - Look in your file structure for a folder with a weird name…bunch of numbers and letters. This is a temp folder that the exe creates and dumps all the necessary setup files into, and clears away after it’s done.
    - Inside this folder you’ll find the silverlight_sdk.msi (hooray!). Just copy it to a different location on the C drive. You can then cancel installation.
    Ok, so that takes care of that…but then running the SP1 installer I get hit with *another* dialog asking for the WCF RIA Services SP1 msi. Now it looks like this MSI is part of the Silverlight Tools package because you’ll see the MSI, but the VS.NET 2010 SP1 installer will thumb its nose at this unworthy msi…for whatever reason. So instead, go here: http://www.silverlight.net/getstarted/riaservices/ …and click on the “Install WCF Ria Services Sp1…” option. This downloads the msi, which you should save to your C drive and direct the VS.NET 2010 SP1 installer to. Then, if you’ve done all that, been good all year, and not made any little children cry, you *might* just be able to install VS.NET 2010 SP1 on your Parallels VM. If you were playing that “Take a shot every time he writes VS.NET 2010 Sp1” drinking game, then you’re drunk…which is a better place to be than where I am right now: watching the installation progress bar slowly creep to completion, hoping there’s no more surprises in store. D


  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it’s like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you’re told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data, and sticking to the workflow that the development team has implemented and tested. The application is ‘done’ only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to behavior from the end user that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small, but complete feature at a time, to find out how long it really takes for this feature to be "done done"; in other words done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases and doesn’t break…it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don’t deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world" then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, un-scalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.


  • IT Admin for Thrill Seekers

    - by Tony Davis
    A developer suggested to me recently that the life of the DBA was, surely, a dull one. My first reaction was indignation, but quickly followed by the thought that for many people excitement isn't necessarily the most desirable aspect of their job. It's true that some aspects of the DBA role seem guaranteed to quieten the pulse; in the days of tape backups, time must have slowed to eternity for the person whose job it was to oversee this process, placing tapes into secure containers, ensuring correct labeling, and…sorry, I drifted off there for a second. On the other hand, if you follow the adventures of the likes of Brent Ozar or Tom LaRock, you'd be forgiven for thinking that much of a database guy's time is spent, metaphorically, diving through plate glass windows in tight fitting underwear in order to extract grateful occupants from burning database applications. Alas it isn't true of the majority, but it isn't as dull as some people imagine, and is a helter-skelter ride compared with some other IT roles. Every IT department has people who toil away in shadowy corners doing quiet but mysterious tasks. When you ask them to explain what they do, you almost immediately want them to stop, but you hear enough to appreciate that these tasks are often absolutely vital to the smooth functioning of an IT organization. Compared with them, the DBAs are prima donnas. Here are a few nominations:
    Installation engineer - install all of the company's laptops and workstations, and software, deal with licensing, shipping and data entry…many organizations, especially those subject to tight regulation, would simply grind to a halt without their efforts.
    Localization engineer - Not quite software engineering, not quite translation, the job is to rebuild a product in a different language and make sure everything still works.
    QA Tester - firstly, I should say that the testers at Red Gate seem to me some of the most-fulfilled in the company. I refer here to the QA Tester whose job is more-or-less entirely to read a script, click some buttons and make sure the actual and expected values match.
    Configuration manager - for example, someone whose main job is to configure build environments so that devs can access their source code; assuredly necessary for the smooth functioning and productivity of the team, and hopefully well-paid.
    So what other sort of job in IT should one choose if the work of a DBA proves to be too exciting? Or are these roles secretly more exciting than many imagine? I invite you all to put forward your own suggestions. Cheers, Tony.


  • Introducing Glimpse – Firebug for your server

    - by Neil Davidson
    Here at Red Gate, we spend every waking hour trying to wow .NET and SQL developers with great products.  Every so often, though, we find something out in the wild which knocks our socks off by taking “ingeniously simple” to a whole new level.  That’s what a little community led by developers Nik Molnar and Anthony van der Hoorn has done with the open source tool Glimpse. Glimpse describes itself as ‘Firebug for the server.’  You drop the NuGet package into your ASP.NET project, and then — like magic* — your web pages will bare every detail of their execution.  Even by our high standards, it was trivial to get running: if you can use NuGet, you’re already there. You get all that lovely detail without changing any code. Our feelings go beyond respect for the developers who designed and wrote Glimpse; we’re thrilled that Nik and Anthony have come to work for Red Gate full-time. They’re going to stay in control of the project and keep doing open source development work on Glimpse.  In the medium term, we’re hoping to make paid-for products which plug into the free open source framework, especially in areas like performance profiling where we already have some deep technology.  First, though, Glimpse needs to get from beta to a v1. Given the breakneck pace of new development, this should only be a month or so away. Supporting an open source project is a first for Red Gate, so we’re going to be working with Nik and Anthony, with the Glimpse community and even with other vendors to figure out what ‘great’ looks like from a user perspective.  Only one thing is certain: this technology deserves a wider audience than the 40,000 people who have already downloaded it, so please have a look and tell us what you think. You can hear more about what the Glimpse developers think on the Glimpse blog, and there are plenty more technical facts over at our product manager’s blog. If you have any questions or queries, please tweet with the #glimpse hashtag or contact the Glimpse team directly on [email protected]. [*That’s “magic” in the Arthur C. Clarke “sufficiently advanced technology” sense, of course] Neil Davidson co-founder and Joint CEO Red Gate Software http://twitter.com/neildavidson


  • O the Agony - Merging Scrum and Waterfall

    - by John K. Hines
    If there's nothing else to know about Scrum (and Agile in general), it's this: You can't force a team to adopt Agile methods.  In all cases, the team must want to change. Well, sure, you could force a team.  But it's going to be a horrible, painful process with a huge learning curve made even steeper by the lack of training and motivation on behalf of the team.  On a completely unrelated note, I've spent the past three months working on a team that was formed by merging three separate teams.  One of these teams has been adopting and using Agile practices like Scrum since 2007, the other was in continuous bug fix mode, releasing on average one new piece of software per year using semi-Waterfall methods.  In particular, one senior developer on the Waterfall team didn't see anything in Agile but overhead. Fast forward through three months of tension, passive resistance, process pushback, and you have seven people who want to change and one who explicitly doesn't.  It took two things to make Scrum happen: The team manager took a class called "Agile Software Development using Scrum". The team lead explained the point of Agile was to reduce the workload of the senior developer, with another senior developer and the manager present. It's incredible to me how a single person can strongly influence the direction of an entire team.  Let alone if Scrum comes down as some managerial decree onto a functioning team who have no idea what it is.  Pity the fool. On the bright side, I am now an expert at drawing Visio process flows.  And I have some gentle advice for any first-level managers: If you preside over a team process change, it's beneficial to start the discussion on how the team will work as early as possible.  You should have a vision for this and guide the discussion, even if decisions are weeks away.  Don't always root for the underdog.  It's been my experience that managers who see themselves as compassionate and caring spend a great deal of time understanding and advocating for the one person on the team who feels left out.  Remember that by focusing on this one person you risk alienating the rest of the team, allow tension to build, and delay the resolution of the problem. My way would have been to decree Scrum, force all of my processes on everyone else, and use the past three months ironing out the kinks.  Which takes us all the way back to point number one. Technorati tags: Scrum Scrum Process Scrum and Waterfall


  • SQLAuthority News – New Book Released – SQL Server Interview Questions And Answers

    - by pinaldave
    Two days ago, on the birthday of my blog, I asked a simple question – Guess! What is in this box? I have received lots of interesting comments on the blog about what is in it. Many of you got it absolutely incorrect and many got it close to the right answer but no one got it 100% correct. Well, no issue at all, I am going to give away the prize to whoever had the closest answer first, in a personal email. Here is the answer to the question about what is in the box. Here it is – the box has my new book. In fact, I should say our new book as I co-authored this book with my very good friend Vinod Kumar. We had a real blast writing this book together and had lots of interesting conversations while we were writing it. This book has one simple goal – “master the basics.” This book is not only for people who are preparing for interviews. This book is for everyone who wants to revisit the basics and wants to prepare themselves for the technology. One always needs to have practical knowledge to do their duty efficiently. This book talks about more than the basics. There are multiple ways to present learning – either we can create a simple book or make it interesting. We have decided the learning should be interactive and have opted for an Interview Questions and Answers format. Here is a quick interview which we have done together. Details of the book are here. The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to us to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge the fact that you will help us keep this book alive forever with the latest updates. We want to thank everyone who participates in this journey with us. You can get the books from [Amazon] | [Flipkart]. Read Vinod’s blog post. Do not forget to wish him happy birthday as today is his birthday and also book release day – two reasons to wish him congratulations. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Data Warehousing, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, SQLServer, T SQL, Technology


  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how much you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 beta 2 upgrade and was really starting to annoy me. Every time I try to delete it I get the message: Controller cannot be deleted because there are build in progress -Manage Build Controller dialog   Figure: Deleting a ghost controller does not always work. I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the “tbl_BuildQueue” table in the team Project Collection database and sure enough there was the nasty little beggar. Figure: The ghost build was easily spotted Adam Cogan asked me: “Why did you suspect this one?” Well, there are a number of things that led me to suspect it: QueueId is very low: Look at the other items, they are in the thousands not single digits ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to “zzUnicorn” DefinitionId: This is a very low number and I looked it up in “tbl_BuildDefinition” and it did not exist QueueTime: As we did not upgrade to TFS 2010 until late 2009 a date of 2008 for a queued build is very suspect Status: A status of 2 means that it is still queued This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010 it would have created the “zzUnicorn” controller to handle any build servers that already exist. I had previously deleted the Agent, but leaving the controller just looks untidy. Now that the ghost build has been identified there are two options: Delete the row I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported. Set the Status to cancelled (Recommended) This is the best option as TFS will then clean it up itself So I set the Status of this build to 2 (cancelled) and sure enough it disappeared after a couple of minutes and I was then able to then delete the “zzUnicorn” controller. Figure: Almost completely clean Now all I have to do is get rid of that untidy “zzBunyip” agent, but that will require rewriting one of our build scripts which will have to wait for now.   Technorati Tags: ALM,TFBS,TFS 2010
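
    For illustration only, the change described above amounts to something like the following T-SQL against the Team Project Collection database. This is unsupported territory: the QueueId below is a hypothetical placeholder, and the post itself is ambiguous about the exact numeric status codes, so verify the "cancelled" value against your TFS version before touching anything.

      -- inspect the suspect row first (QueueId 1 is a placeholder for the ghost build's id)
      SELECT QueueId, ControllerId, DefinitionId, QueueTime, Status
      FROM tbl_BuildQueue
      WHERE QueueId = 1;

      -- mark the ghost build as cancelled so TFS cleans it up on its own
      -- (2 is the value the post reports using; confirm it means 'cancelled' in your version first)
      UPDATE tbl_BuildQueue
      SET Status = 2
      WHERE QueueId = 1;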


  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.
    The Long Road To Stubs
    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:
    4631488 lib/Makefile is too patient: .WAITs should be reduced
    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:
    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;
A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

- The use of stylized comments was fine for a prototype, but not nearly professional enough for a shipping product. The idea of having to document and support it was a large concern.
- The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
- A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
- A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that a stub wasn't a real object. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those ideas and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward.

The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

    We anticipate adding additional features to the new mapfile language
    that will be applicable to ON, and which will require all sharable
    object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
          -ztext -zdefs -Bdirect ...

    real        0.019708910
    user        0.010101680
    sys         0.008528431

In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss; sometimes all you can do is try it and see what happens.

And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

...and so, I backed away, put it down for a few months and did other work...

...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

Without stubs, the following gives a simplified high level view of how Solaris is built:

- An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
- A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
- Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
- Subsequent passes run lint, and do packaging.

Given this structure, the additions to use stub objects are:

- A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
- A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
- A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule.
- The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
- All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub-object-related code I'd added to ld.

With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday.

This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

- Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
- It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
- Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • SQLAuthority News – I am Presenting 2 Sessions at TechEd India

    - by pinaldave
    TechED is the event which I am always excited about. It is one of the largest technology events in India. Microsoft Tech Ed India 2011 is the premier technical education and networking event for tech professionals interested in learning, connecting and exploring a broad set of current and soon-to-be released Microsoft technologies, tools, platforms and services. I am going to speak at TechED on two very interesting and advanced subjects.

    Venue: The LaLiT Ashok, Kumara Krupa High Grounds, Bangalore – 560001, Karnataka, India
    Sessions Date: March 25, 2011

    Understanding SQL Server Behavioral Pattern – SQL Server Extended Events
    Date and Time: March 25, 2011, 12:00 PM to 01:00 PM
    History repeats itself! SQL Server 2008 has introduced a very powerful, yet rarely used, feature called Extended Events. This advanced session will teach experienced administrators capabilities that were not possible before. From T-SQL errors to CPU bottlenecks, from login errors to deadlocks – Extended Events can detect it for you. Understanding the pattern of events can prevent future mistakes.

    SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting
    Date and Time: March 25, 2011, 04:15 PM to 05:15 PM
    Just like a horoscope, SQL Server Waits and Queues can reveal your past, explain your present and predict your future. SQL Server Performance Tuning uses Waits and Queues as a proven method to identify the best opportunities to improve performance. A glance at Wait Types can tell where there is a bottleneck. Learn how to identify bottlenecks and potential resolutions in this fast paced, advanced performance tuning session.

    My sessions will be on the third day of the event and I am very sure that everybody will be in the groove to learn new and interesting subjects. I will have a few give-aways during and at the end of the session. I will not tell you what they will be, but it will for sure be something you will love to have. Please make a point to reserve the above time slots and attend my sessions.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • European e-government Action Plan all about interoperability

    - by trond-arne.undheim
    Yesterday, the European Commission released its European eGovernment Action Plan for 2011-2015. The plan includes measures on providing deeper user empowerment, enhancing the Internal Market, making public administrations more efficient and effective, and putting in place pre-conditions for developing e-government.

    The Good
    - Defines interoperability very clearly. Calls interoperability "a pre-condition for cross-border eGovernment services" (a very strong formulation) and says interoperability "is supported by open specifications".
    - Uses the terminology "open specifications" which, let's face it, is pretty close to "open standards", which is the term the rest of the world would use.
    - Confirms that Member States are fully committed to the political priorities of the Malmö Declaration (which was all about open standards), including the very strong action: by 2013, all Member States will have incorporated the political priorities of the Malmö Declaration in their national strategies. Such tight Action Plan integration between Commission and Member State priorities has seldom been attempted before, particularly not in a field where European legal competence is virtually non-existent. What we see now is the subtle force of soft power rather than the rough force of regulation. In this case, it is the Member States who want Europe to take the lead. Very refreshing!

    Some quotes that show the commitment to interoperability and open specifications:

    "The emergence of innovative technologies such as "service-oriented architectures" (SOA), or "clouds" of services, together with more open specifications which allow for greater sharing, re-use and interoperability reinforce the ability of ICT to play a key role in this quest for efficiency in the public sector." (p.4)

    "Interoperability is supported through open specifications" (p.13)

    Section 2.4.1, Open Specifications and Interoperability (p.13), is dedicated entirely to this important topic; open specifications and interoperability are nearly 100% interrelated:

    "Interoperability is the ability of systems and machines to exchange, process and correctly interpret information. It is more than just a technical challenge, as it also involves legal, organisational and semantic aspects of handling data" (p.13)

    "standards and open platforms offer opportunities for more cost-effective use of resources and delivery of services" (p.13).

    The Bad
    - Shies away from defining open standards, or even open specifications, the EU's preferred term for the key enabler of interoperability.

    Verdict
    90/100, a very respectable score.

    Read the article

  • Silverlight Cream for May 20, 2010 -- #866

    - by Dave Campbell
    In this Issue: Mike Snow, Victor Gaudioso, Ola Karlsson, Josh Twist(-2-), Yavor Georgiev, Jeff Wilcox, and Jesse Liberty.

    Shoutouts: Frank LaVigne has an interesting observation on his site: The Big Take-Away from MIX10. Rishi has updated all his work, including a release of nRoute to the latest bits: nRoute Samples Revisited. Looks like I posted one of Erik Mork's links two days in a row :) ... that's because I meant to post this one: Silverlight Week – How to Choose a Mobile Platform. Just in case you missed it (and for me to find it easily), Scott Guthrie has an excellent post up on Silverlight 4 Tools for VS 2010 and WCF RIA Services Released.

    From SilverlightCream.com:

    Silverlight Tip of the Day #23 – Working with Strokes and Shapes: Mike Snow's Silverlight Tip of the Day number 23 is up and is about Strokes and Shapes -- as in dotted and dashed lines.

    New Silverlight Video Tutorial: How to Fire a Visual State based upon the value of a Boolean Variable: Victor Gaudioso's latest video tutorial is up and is on selecting and firing a visual state based on a boolean... project included.

    Simultaneously calling multiple methods on a WCF service from Silverlight: Ola Karlsson details a problem he had where he was calling multiple WCF services to pull all his data and ran into trouble... turns out it was a blocking call, and he found the solution in the forums and details it all out for us... actually, a search at SilverlightCream.com would have found one of the better posts listed once you knew the problem :)

    Securing Your Silverlight Applications: Josh Twist has an article in MSDN on Silverlight security. He talks about Windows, forms, and .NET authorization, then WCF, WCF Data, cross domain, and XAP files. He also has some good external links.

    Template/View selection with MEF in Silverlight: Josh Twist points out that this next article is just a simple demonstration, but he's discussing, and provides code for, a MEF-driven ViewModel navigation scheme with animation on the navigation.

    Workaround for accessing some ASMX services from Silverlight 4: Are you having problems hitting your asmx web service with Silverlight 4? Yeah... others are too! Yavor Georgiev at the Silverlight Web Services Team blog has a post up about it... why it's a sometimes problem and a workaround for it.

    Using Silverlight 4 features to create a Zune-like context menu: Jeff Wilcox used Silverlight 4 and the Toolkit to create some samples of menus, then demonstrates a duplication of the Zune menu.

    You Already Are A Windows Phone 7 Programmer: Jesse Liberty is demonstrating the fact that Silverlight developers are WP7 developers by creating a Silverlight and a WP7 app side by side using the same code... this is a closer look at the Silverlight TV presentation he did.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Graphics card fan is loud (additional graphics card drivers cause problems)

    - by tk4muffin
    Okay, this explanation is a bit long, but I'll start at the beginning: I've been using Windows 7 for a very long time, and shortly after the release of v12.10 I installed Ubuntu via the Windows installer. Everything worked fine except the fan of the graphics card. After a bit of research I found out that I just had to select a different driver (nvidia-current (proprietary, tested) worked pretty well). This also fixed some graphical bugs that appeared when I logged into my account.

    Through my university I got an MSDNAA account (which allows me to download any Windows OS for free). I downloaded and installed Windows 8. After configuration I installed Ubuntu via the Windows installer once again, and the first couple of launches of Ubuntu went well. Suddenly Ubuntu didn't launch anymore... it was caused by some hard-disk errors, and I had no clue what to do. So I kept working on Windows 8 - unfortunately.

    After playing around with the new Windows, I put my PC into sleep mode. I couldn't wake the PC up and it wasn't responding to anything (neither mouse movement, clicks or keyboard strokes, nor the power button or the reset button worked), so I pulled the plug. Turns out this was a huge mistake. Somehow the BIOS broke, and after restarting a couple of times the BIOS repaired itself. Neither Windows 8 nor Ubuntu were bootable.

    I then had to install Ubuntu several times, because after rebooting Unity was hidden and I didn't know what the problem was or how to fix it. I finally realized that this problem was caused by the graphics card driver, which I had changed to nvidia-current (this driver worked fine before my PC "crashed"). So I installed Windows 8 again, and after a bit of usage I installed Ubuntu once again (via DVD). The booting of Ubuntu and Windows works fine - so far. But I'm still not able to change the graphics card driver without Unity hiding away after restarting the OS. The noisy fan is really disturbing my work...

    PC specs:
    Processor: Intel Core 2 Duo CPU E8400 @ 3GHz x2
    Memory: 7.8 GB
    OS type: 64-bit
    Graphics Card: GeForce 9600 GT
    Motherboard: Asus P5Q

    I hope the information given is enough.

    Read the article

  • Kickstarter "last minute cold feet"

    - by mm24
    Today I scheduled the publication of a video on Kickstarter requesting approximately $5,000 in order to complete the iPhone shooter game I started 1 year ago after quitting my job. I have invested more than $20,000 in the game so far (for artwork, music, legal and accountant expenses) and I am now getting cold feet about my decision to publish the video. The game is "nearly finished"; in other words, the game mechanics are working but I still have some bugs to fix. Once I have finished this (I hope it will take me 1 or 2 weeks) I plan to start working on the actual level balancing (e.g. deciding the order of appearance of enemies for each level and balancing the number of hit points and strength of bullets that the enemies have).

    Reasons for not publishing the video are:

    - Fear that the concept can be copied easily: the game is a shooter game set in a different environment (it's a pretty cool one, believe me :)) and I am worried that someone might copy the idea (I know, it's the usual "I am worried" story..). A shooter game is one of the easiest games to implement, and hence there will be hundreds of game developers able to copy it by just adapting their existing code and changing the graphics (not as straightforward). It took me one year to develop this because I was inexperienced, plus there are approximately 6-7 months of work from the illustrator and 8 unique music tracks composed.
    - The soundtrack of the video is the soundtrack of the game, which is not yet published and has not been deposited with a music society. I did create legally valid timestamps for the tracks, and I am considering uploading the album to iTunes before publishing the video so I can have a certain publication date. But overall I am a bit scared and worried because I have never done this before, and even the simple act of publishing an album requires me to read a long contract from the "aggregator company", which, even though I do have contracts with the musicians, does worry me as I am not a U.S. resident and I am not familiar with the U.S. legal system.

    Reasons for publishing the video are:

    - I have almost run out of money (but this is not a real reason, as I should have enough for one more month of development time)
    - ...I kind of need extra money as, even if I do have money for 1 month of development, I do not have money for marketing and for other expenses (e.g. accountant)
    - It will create a fan base
    - I could get some useful feedback from a wider range of beta testers
    - It might create some pre-release buzz in case some blogger or game magazine likes the concept

    Has anyone had similar experiences? Is there a real risk that someone will copy the concept and implement it in a couple of months? Will the Kickstarter campaign be good pre-release exposure for the game? Any references to similar projects/situations? Is it realistic that someone like ROVIO will copy the idea straight away?

    Read the article

  • Renault under threat from industrial espionage, intellectual property the target

    - by Simon Thorpe
    Last year we saw news of both General Motors and Ford losing a significant amount of valuable information to competitors overseas. Within weeks of the turn of 2011 we see the European car manufacturer Renault also suffering. In a recent news report, French Industry Minister Eric Besson warned the country was facing "economic war" and referenced a serious case of espionage concerning information pertaining to the development of electric cars.

    Renault senior vice president Christian Husson told the AFP news agency that the people concerned were in a "particularly strategic position" in the company. An investigation had uncovered a "body of evidence which shows that the actions of these three colleagues were contrary to the ethics of Renault and knowingly and deliberately placed at risk the company's assets", Mr Husson said. A source told Reuters on Wednesday the company is worried its flagship electric vehicle program, in which Renault with its partner Nissan is investing 4 billion euros ($5.3 billion), might be threatened. This casts a shadow over the estimated losses of Ford ($50 million) and General Motors ($40 million).

    One executive in the corporate intelligence-gathering industry, who spoke on condition of anonymity, said: "It's really difficult to say it's a case of corporate espionage ... It can be carelessness." He cited a hypothetical example of an enthusiastic employee giving away too much information about his job on an online forum. While information has always been passed and leaked, inadvertently or on purpose, the rise of the Internet and social media means corporate spies or careless employees are now more likely to be found out, he added.

    We are seeing more and more examples of companies like these needing to invest in technologies such as Oracle IRM to ensure such important information can be kept under control. It isn't just the recent release of information into the public domain via the Wikileaks website that is of concern, but also the increasing threat of industrial espionage in cases such as these. Information rights management doesn't totally remove the threat, but the ability to control documents no matter where they exist certainly increases an organization's capabilities significantly. Every single time someone opens a sealed document, the IRM system audits the activity. This makes identifying a potential source for a leak much easier when you have an absolute record of every person who has had access to the documents.

    Oracle IRM can also help with accidental or careless loss. Often people use very sensitive information all the time and forget the importance of handling it correctly. With the ability to protect the information from screen shots and to prevent people from copying and pasting document information into social networks and other, unsecured documents, Oracle IRM brings a totally new level of information security that would have a significant impact on reducing the risk these organizations face of losing their most valuable information.

    Read the article

< Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >