Search Results

Search found 13004 results on 521 pages for 'pretty printing'.


  • Silverlight Cream for June 16, 2010 -- #884

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Emiel Jongerius, Charles Petzold, Adam Kinney, Deepesh Mohnani, Timmy Kokke, and Damon Payne. Shoutouts: Andy Beaulieu reported his Coding4Fun: Shuffleboard Game for WP7 has been posted -- Big ol' Tutorial and 6 videos of WP7 goodness Karl Shifflett announced Three New WPF and Silverlight Designer Videos Posted Charles Petzold has a cool Flip-Number Clock in Silverlight posted... cool demo, and the source. From SilverlightCream.com: Data Driven Applications with MVVM Part II: Messaging, Unit Testing, and Live Data Sources Zoltan Arvai has part 2 of his Data-Driven Apps with MVVM up, and this one is also including Messaging, Unit Testing, and Live WCF Data... good tutorial and all the code. Silverlight DataContext Changed Event and Trigger Emiel Jongerius takes a hard swing at the lack of DataContextChanged... his solution involves two attached properties instead of one... check it out and see what you think! Orientation Strategies for Windows Phone 7 Charles Petzold is discussing WP7 Orientation... showing the problems you can get involved in, and how to work through them... and you might be surprised at how he does it :) ... pretty cool as usual, Charles! Debugging the TranslateZoomRotate WPF Behavior in Blend Adam Kinney talks through a bug reported about the WPF TranslateZoomRotate Behavior. Again, it's WPF, but it's in Blend, and ya never know when the solution might apply. I want my app to look like the Zune client Deepesh Mohnani demonstrates using the Cosmopolitan theme to get his app to have the same look as the Zune client. MVVM Project and Item Templates Timmy Kokke is continuing with his cool SilverAmp media player, using it to expand upon the new Blend and Silverlight 4 features. This episode touches very lightly on cranking up a new MVVM project in Blend. Great Features for MVVM Friendly Objects Part 0: Favor Composition Over Inheritance Damon Payne has the first part up of a series he's working on with 'MVVM Friendly' features... he's building out a lot of the infrastructure in this post for the ones that follow... all good stuff. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Open World Day 1 Continued

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee Part II A couple of things I forgot to mention about yesterday's OpenWorld. First I attended a presentation on SOA Suite and Virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure. OVAB provides the ability to introspect a deployed software component such as WebLogic Server, SOA Suite or other components and extract the configuration and package it up for rapid deployment into an Oracle Virtual Machine. OVAB allows multiple machines to be configured and connections made between the machines and outside resources such as databases. That by itself is pretty cool and has been available for a while in OVAB. What is new is that Oracle has done this for an EDG compliant installation and made it available as an OVAB assembly for customers to use, significantly accelerating the deployment of an EDG deployment. A real help for customers standing up EDG environments, particularly in test, dev and QA environments. The other thing I forgot to mention was the most memorable demo I saw at OpenWorld. This was done by my co-author Matt Wright who was showcasing the products of his company Rubicon Red. They showed a really cool application called OneSpot which puts all the information about a single user's business processes in one spot! Apparently a customer suggested the name. It allows business flows to be defined that map onto events. As events occur the status of the business flow is updated to reflect the change. The interface is strongly reminiscent of social media sites and provides a graphical view of business flows. So how does this differ from BPEL and BPM process flows? The OneSpot process flow is more like a BAM process flow: it is based on events arriving from multiple sources, and is focused on the client's view of the process, not the actual business process. This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him. This by itself is great, but better still is that OneSpot has a real time updating view of events that have occurred (BAM style, no need to refresh the browser). This means that as new events occur the end user can see them and jump to the business flow or take other appropriate actions. Under the covers OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know! The HWF GUI screens are much prettier and have more of a social media feel about them due to their use of images and pulling in relevant related information. If you are at OOW I strongly recommend you visit Matt or John at the Rubicon Red stand and ask, no, demand a demo of OneSpot!

    Read the article

  • Silverlight for Everyone!!

    - by subodhnpushpak
    Someone asked me to compare Silverlight / HTML development. I realized that the question can be answered in many ways. Below is a high level comparison between an HTML / JavaScript client and a Silverlight client, and why Silverlight was chosen over an HTML / JavaScript client (based on the type of users and the major functionality provided):

    1. For end users. Browser compatibility: Silverlight is a plug-in and requires installation first. However, it does provide a consistent look and feel across all browsers. For HTML / DHTML, there is a need to tweak JavaScript for each of the browsers supported. In fact, tags like <span> and <div> work differently across browsers and versions. So, HTML works on most systems but also requires a lot of coding effort to adhere to all standards, browsers and versions. Out of browser support: No support in HTML. Third party tools like Google Gears offer some functionality, but there are lots of issues around platform and accessibility. Silverlight has out-of-the-box support for running out of the browser and provides features like drag and drop onto the application surface. Cut, copy and paste: HTML is displayed in the browser, which in turn provides facilities for cut, copy and paste. Silverlight (especially 4) provides rich cut-copy-paste features along with full control over what can be cut, copied and pasted by end users, plus advanced features like visual tree printing. Rich user experience: HTML can provide some rich experience through JavaScript libraries like JQuery. However, extensive use of JavaScript combined with various browser versions and their supported JavaScript makes the solution cumbersome. Silverlight is meant for the RIA experience. User data storage on the client: In HTML only a small amount of data can be stored, and only in cookies. In Silverlight larger data can be stored, and in a secure way, which improves response times. Post back: In HTML / JavaScript the post back can be avoided by use of AJAX. Extensive use of AJAX can be a bottleneck as the browser stack is used for the calls, and both look and feel and data travel over the network. In Silverlight everything runs on the client side. Calls are made to the server ONLY for data, which also reduces network traffic in the long run.

    2. For developers. Coding effort: HTML / JavaScript can take a considerable amount of code if the features (requirements) are rich. For AJAX-like interfaces, knowledge of third party kits like DOJO / Yahoo UI / JQuery is required, and these have a steep learning curve. The ASP.NET coding world revolves mostly around <table> tags for alignment whereas most popular tools emit <div> tags, which requires lots of tweaking. AJAX calls can be a performance bottleneck if the calls are many. In Silverlight, coding is in C#, which is managed code. XAML is also very intuitive and Blend can be used to provide the look and feel. Event handling is much cleaner than in JavaScript. Silverlight provides for many clean patterns like MVVM and composable applications. Every call to the server is asynchronous in Silverlight, and AJAX-style calls are built in. Threading can be done on the client side itself to provide better responsiveness, etc. Debugging: Debugging in HTML / JavaScript is difficult. As JavaScript is interpreted, there is NO compile time error handling. Debugging in Silverlight is very helpful: as it is compiled, it provides rich features for both compile time and run time error handling. Multi-targeting browsers: HTML / JavaScript have different rendering behaviours in different browsers and their versions. JavaScript has to be written to smooth over the differences in browser behaviours. Silverlight works exactly the same in all browsers and works on almost all popular browsers. Multi-targeting the desktop: No support in HTML / JavaScript. Silverlight is very close to WPF, so both platforms may be easily targeted while maintaining the same source code. Rich toolkit: HTML / JavaScript have a limited toolkit of controls. Silverlight provides a rich set of controls including graphs, audio, video, layout, etc.

    3. For architects. Design patterns: Silverlight provides for patterns like MVVM (MVC) and a rich (fat) client architecture. This segregates the "separation of concerns" very clearly: the client (Silverlight) does what it is expected to do and the server does what is expected of it. In the HTML / JavaScript world most of the processing is done on the server side. Extensibility: Silverlight provides a great deal of extensibility as custom controls may be made. Extensibility is NOT restricted by the browser but by the plug-in Silverlight runs in. HTML / JavaScript works in a certain way and extensibility is generally done on the server side rather than the client end; the client side is restricted by the limitations of the browser. Performance: Silverlight provides local storage which may be used for caching data. This reduces the response time. As processing can be done on the client side itself, there is no need for server round trips, which decreases the round-trip time. The look and feel of the application is downloaded ONLY initially; afterwards ONLY data is fetched from the server. Security: Silverlight is compiled code downloaded as a .XAP; compared to HTML / JavaScript, it provides a more secure, sandboxed approach. Cross-scripting is inherently prohibited in Silverlight by default. If proper guidelines are followed, Silverlight provides a much more robust security mechanism than the HTML / JavaScript world. For example, finding a server address in obfuscated JavaScript is easier than in a compressed, compiled, obfuscated Silverlight .XAP file.

    Some of these features (like offline and Canvas support) will be available in HTML5. However, the timelines are not encouraging at all. According to Ian Hickson, editor of the HTML5 specification, the specification was expected to reach the W3C Candidate Recommendation stage during 2012, and W3C Recommendation in the year 2022 or later. See http://en.wikipedia.org/wiki/HTML5 for details. The above is MY opinion. I would love to hear yours; do let me know via comments. Technorati Tags: Silverlight
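    To make the "user data storage on the client" and caching points above concrete, here is a minimal sketch using Silverlight's isolated storage. It is illustrative only: the LocalCache class, the "LastQuoteList" key and the serializedQuotes value are made-up names, not something from the original post.

        // Minimal client-side cache backed by Silverlight isolated storage.
        using System.IO.IsolatedStorage;

        public static class LocalCache
        {
            public static void Save(string key, string value)
            {
                var settings = IsolatedStorageSettings.ApplicationSettings;
                settings[key] = value; // add or overwrite the cached value
                settings.Save();       // persist it to the user's isolated store
            }

            public static string Load(string key)
            {
                string value;
                return IsolatedStorageSettings.ApplicationSettings
                           .TryGetValue(key, out value) ? value : null;
            }
        }

        // Usage: cache data returned by an asynchronous service call so the next
        // visit can render immediately while a fresh copy is fetched in the background.
        // LocalCache.Save("LastQuoteList", serializedQuotes);
        // string cached = LocalCache.Load("LastQuoteList");

    The point is the architectural one made above: the look and feel ships once in the XAP, only data crosses the network, and the client is free to cache that data locally.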

    Read the article

  • Single Instance of Child Forms in MDI Applications

    - by Akshay Deep Lamba
    In an MDI application we can have multiple forms and can work with multiple forms, i.e. MDI children, at a time, but while developing applications we don't pay attention to the minute details of memory management. Take this as an example: when we develop an application, say an MDI application, we have multiple child forms inside one parent form. On the MDI parent form we would like to have a menu strip and tab strip which in turn call other forms which build the other parts of the application. This also makes our application look pretty and eye-catching (not much actually). Now on a first go, when a user clicks a menu item or a button on a tab strip, the application initializes a new instance of a form and shows it to the user inside the MDI parent; if the user clicks the same button again, the application creates another new instance of the form and presents it to the user, which results in unnecessary memory usage. Therefore, if you wish to prevent your application from generating new instances of the forms, use the method below. It first checks whether the form is visible among the list of all the child forms and then compares their types; if the form type matches the form we are trying to initialize, the form is activated, i.e. brought to the front, else it is initialized and made visible to the user in the MDI parent window. The method we are using:

        private bool CheckForDuplicateForm(Form newForm)
        {
            bool bValue = false;
            foreach (Form frm in this.MdiChildren)
            {
                if (frm.GetType() == newForm.GetType())
                {
                    frm.Activate();
                    bValue = true;
                }
            }
            return bValue;
        }

    Usage: First we need to initialize the form using the new keyword:

        ReportForm Reportfrm = new ReportForm();

    We can now check if there is another form present in the MDI parent. Here, we will use the above method to check for the presence of the form and set the result in a bool variable, as our method returns a bool value:

        bool frmPresent = CheckForDuplicateForm(Reportfrm);

    Once the above check is done, then depending on the value received from the method we can set our form:

        if (frmPresent)
            return;
        else if (!frmPresent)
        {
            Reportfrm.MdiParent = this;
            Reportfrm.Show();
        }

    In the end this is the code you will have at your menu item or tab strip click:

        ReportForm Reportfrm = new ReportForm();
        bool frmPresent = CheckForDuplicateForm(Reportfrm);
        if (frmPresent)
            return;
        else if (!frmPresent)
        {
            Reportfrm.MdiParent = this;
            Reportfrm.Show();
        }
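    The same idea can also be wrapped in a small generic helper on the MDI parent so you never construct a throwaway instance just to compare types. This is a minimal sketch rather than the article's code; ShowSingleInstance and reportMenuItem_Click are hypothetical names.

        // Activates an existing MDI child of type T, or creates and shows one.
        private void ShowSingleInstance<T>() where T : Form, new()
        {
            foreach (Form child in this.MdiChildren)
            {
                if (child is T)
                {
                    child.Activate(); // bring the existing instance to the front
                    return;
                }
            }

            T newChild = new T();     // no open instance of T, so create one
            newChild.MdiParent = this;
            newChild.Show();
        }

        // Usage from a menu item or tab strip click handler:
        // private void reportMenuItem_Click(object sender, EventArgs e)
        // {
        //     ShowSingleInstance<ReportForm>();
        // }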

    Read the article

  • Steps to deploying on Windows Azure

    - by Vincent Grondin
    Alright, these steps might be a little detailed and a few might not be necessary, but it's still a pretty accurate road map to deploying on Azure...
    1) Open your solution.
    2) Rebuild ALL.
    3) Right click on your Azure project and click "Publish".
    4) It should open a Windows Explorer window with your package to be uploaded (.cspkg) and its associated configuration file (.cscfg). Keep it open, you'll need that path later on...
    5) It should also open a browser asking you to log in to your passport account, please do so.
    6) After this you will be redirected to the Azure Portal where you will see your Azure project name below the « Project Name » section. Click on it.
    7) Then you should be redirected to a detailed view of your account on Azure where you will create a new service by clicking the hyperlink in the top right corner.
    8) Choose the right service type for you, most likely the "Hosted Service" type.
    9) Choose a « Label » name and click « Next ».
    10) Choose a name for your service and validate that the name is available in the cloud by clicking the "Check Availability" button.
    11) At the bottom of this same page, you can choose to create a group for your service, use no group or join an existing group. Creating a group means that all applications that belong to the same group incur no cost for exchanging data with other applications of the same group. Most of the time, when you create a single application, creating a group is not necessary. You should choose a region that's close to your own region.
    12) On the next window, you should see a "Production" environment and a "Staging" environment. Beware, because "Staging" and "Production" are two different environments in the cloud, and applications in "Staging", even when not running, do continue to rack up charges... Choose an environment and click "Deploy".
    13) In the following window, browse to the path where your cspkg resides and then do the same thing with your cscfg file. Choose a name for your Label, and click "Deploy"...
    14) From now on, the clock is ticking and unless you have free Azure hours, your credit card is being billed…
    15) Click on the « Run » button to start your application.
    16) Be patient.... be very patient…
    17) Once your application has finished starting, you should see a GREEN circle on the left side of the screen indicating that your application is READY. Click the URL to test your application and remember that if your application is a service, you have to hit the "svc" class behind the link you see there. Something along the lines of http://testvince2.cloudapp.net/service1.svc (this is a fictional link).
    18) Hopefully your application will show up or, in the case of a service, you will see your service's WSDL, meaning that everything is working fine. Happy cloud computing all!

    Read the article

  • Silverlight Cream for June 10, 2010 -- #879

    - by Dave Campbell
    In this Issue: Emiel Jongerius, Nokola, Christian Schormann, Tim Heuer, David Poll, Mike Snow(-2-), John Papa, and Charles Petzold. Shoutout: Viktor Larsson has a frank look at WP7 based on information from MIX10 and what was said this week in his post: Licking Windows Phone 7... yeah licking, not liking :) .. my guess is even that didn't allow him to keep it! If you haven't already noticed, the CodeProject reader's choice awards are out this week and Telerik won for their RadColorPicker and RadCalendar for Silverlight. Telerik also deserves congratulations for winning: Telerik wins “Best of TechEd” award in the “Components and Middleware” category... check out that trophy... Steven Forte has a picture up of the Telerikers after getting the award. Koen Zwikstra has a new release of Silverlight Spy up that supports the latest release: Silverlight Spy 3.0.0.12 From SilverlightCream.com: Localization of XAML files in Silverlight Emiel Jongerius is back with another post, this time discussing localizing XAML files... external links and source included. Coolest Silverlight Sound Library for Games I've Seen Yet Nokola talks up a Sound Library for Silverlight 4 Games ... and has links to a great demo, plus the source. SketchFlow: Firing Actions when a Storyboard is Complete Christian Schormann responded to some Twitter questions and demonstrates using the StoryboardCompleted trigger with a Navigate action. Hosting cross-domain Silverlight applications (XAP) Tim Heuer responds to a question from a reader and demonstrates how to host a XAP from a domain other than the one you're working on. Taking Microsoft Silverlight 4 Applications Beyond the Browser (TechEd WEB313) David Poll has all his material up from his TechEd presentation earlier this week on Silverlight OOB... and he covered some pretty extensive material ... check it out! Silverlight Tip of the Day #29 – Configuring Service Reference to Back to LocalHost Mike Snow has a couple new tips up... this first one is quick, but very useful... how to switch your service reference back to localhost without pulling out your hair. Silverlight Tip of the Day #30 – Sending Email from Silverlight In Mike Snow's latest tip, he shows how to send email from your Silverlight app... using a WCF service... and a step-by-step set of instructions. Creating Rich Interactions Using Blend 4: Transition Effects, Fluid Layout and Layout States (Silverlight TV #32) John Papa has Silverlight TV #32 up, and he's talking with Kenny Young of the Expression Blend team while Kenny uses some built-in effects and also creates some impressive examples from scratch -- code included. Simulating Touch Inertia on Windows Phone 7 Charles Petzold has a post up on simulating inertia on WP7... demos in WPF and then moves into WP7... math, source, and external links. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • Avoiding Flicker with JQuery Tabs

    - by Damon
    I am a huge fan of JQuery because it seems like every time I want to do something it has a plugin that already does it. Adding a tabbed interface to a web page was always quite an annoyance, but JQuery UI offers a pretty decent tabs solution (click here to see it). If you read through the documentation, you'll find that you can create a tabbed interface by calling the tabs() method on an element containing an unordered list. The only problem that I've experienced with the method is that on slower machines you can see the unordered list render out in its original state before being updated into the final tabbed interface. A quick way to fix that issue is to set the CSS display property of the element to none, then call the show() method directly after calling the tabs() method. This keeps the element completely hidden while JQuery sets up the tabs interface and eliminates the flicker.

        <SCRIPT type="text/javascript">
             $(function()
             {
                  $("#tabs").tabs();
                  $("#tabs").show();
             });
        </SCRIPT>

        <div id="tabs" style="display:none;">
            <ul>
                <li><a href="#tabs-1">First Tab</a></li>
                <li><a href="#tabs-2">Second Tab</a></li>
                <li><a href="#tabs-3">Third Tab</a></li>
            </ul>
            ...
        </div>

    Read the article

  • Step Away From That Computer! You’re Not Qualified to Use It!

    - by Michael Sorens
    Most things tend to come with warnings and careful instructions these days, but sadly not one of the most ubiquitous appliances of all, your computer. If a chainsaw is missing its instructions, you’re well advised not to use it, even though you probably know roughly how it’s supposed to work. I confess, there are days when I feel the same way about computers. Long ago, during the renaissance of the computer age, it was possible to know everything about computers. But today, it is challenging to be fully knowledgeable even in one small area, and most people aren’t as savvy as they like to think. And, if I may borrow from Edwin Abbott Abbott’s classic Flatland, that includes me. And you. Need an example of what I mean? Take a look at almost any recent month’s batch of Windows updates. Just two quick questions for you: Do you need all of those updates? Is it safe to install all of those updates? I do software design and development for a living on Windows and the .NET platform, but I will be quite candid: I often have little clue what the heck some of those updates are going to do or why they are needed. So, if you do not know why they are needed or what they do, how do you know if they are safe? Of course, one can sidestep both questions by accepting Microsoft’s recommended Windows Update setting of “install updates automatically”. That leads you to infer that you need all of them (which is not always the case) and, more significantly, that they are safe. Quite safe. Ah, lest reality intrude upon such a pretty picture! Sadly, there is no such thing as risk-free software installation, and payloads from Windows Update are no exception. Earlier this year, a Windows Secrets Patch Watch article touted this headline: Keep this troublesome kernel update on hold. It discusses KB 2862330, a security update originally published more than 4 months earlier, and yet the article still recommends not installing it! Most people simply do not have the time, resources, or interest, to go about figuring out which updates to install or postpone or skip for safety reasons. Windows Secrets Patch Watch is the best service I have encountered for getting advice, but it is still no panacea and using the service effectively requires a degree of computer literacy that I still think is beyond a good number of people. Which brings us full circle: Step Away From That Computer! You’re Not Qualified to Use It!

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs. This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced. This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax, PSARC/2009/688 Human readable and extensible ld mapfile syntax. In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax. That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects, PSARC/2010/397 ELF Stub Objects, followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects, 4631488 lib/Makefile is too patient: .WAITs should be reduced. This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward. Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Keep website and webservices warm with zero coding

    - by oazabir
    If you want to keep your websites or webservices warm and save users from seeing the long warm up time after an application pool recycle, an IIS restart, a new code deployment or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the site and services and keep them warm. Here’s how: First get tinyget from here. Download and install the IIS 6.0 Resource Kit on some PC. Then copy tinyget.exe from “c:\program files…\IIS 6.0 ResourceKit\Tools\tinyget” to the server where your IIS 6.0 or IIS 7 is running. Then create a batch file that will hit the pages and webservices. Something like this:

        SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
        "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

    First I am hitting the homepage to keep the webpage warm. Then I am hitting the webservice URL with the ?WSDL parameter, which allows ASP.NET to compile the service if not already compiled and walk through all the operations and reflect on them, thus loading all related DLLs into memory and reducing the warmup time when the service is first hit. Tinyget takes the server's name or IP in the –srv parameter and then the actual URI in –uri. I have specified the HTTP response code to expect in the –status parameter. It ensures the site is alive and is returning an HTTP 200 code. Besides just warming up a site, you can do some load testing on the site. Tinyget can run in multiple threads and run loops to hit some URL. You can literally blow up a site with commands like this:

        "%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

    Tinyget is also pretty useful for running automated tests. You can record http posts in a text file and then use it to make http posts to some page. Then you can put a matching clause to check for a certain string in the output to ensure the correct response is given. Thus with some simple command line commands, you can warm up, do some transactions, validate that the site is giving the correct response, as well as run a load test to ensure the server is performing well. Very cheap way to get a lot done.
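    If tinyget isn't handy on a particular machine, the same warm-up hits can be scripted in a few lines. Below is a minimal C# console sketch, not from the original post, that requests the same example URLs and reports the status code; GetResponse() throws for non-success codes, so any URL that prints here did warm up.

        using System;
        using System.Net;

        class WarmUp
        {
            static void Main()
            {
                // Same example URLs as the batch file above; replace with your own.
                string[] urls =
                {
                    "http://dropthings.omaralzabir.com/",
                    "http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL"
                };

                foreach (string url in urls)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0} -> {1}", url, (int)response.StatusCode);
                    }
                }
            }
        }

    The batch file plus a scheduled task remains the zero-coding route; this is only for cases where copying tinyget to the box isn't an option.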

    Read the article

  • Oracle’s Sun Server X4-8 with Built-in Elastic Computing

    - by kgee
    We are excited to announce the release of Oracle's new 8-socket server, Sun Server X4-8. It’s the most flexible 8-socket x86 server Oracle has ever designed, and also the most powerful. Not only does it use the fastest Intel® Xeon® E7 v2 processors, but also its memory, I/O and storage subsystems are all designed for maximum performance and throughput. Like its predecessor, the Sun Server X4-8 uses a “glueless” design that allows for maximum performance for Oracle Database, while also reducing power consumption and improving reliability. The specs are pretty impressive. Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB memory, 9.6 TB HDD capacity or 3.2 TB SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots, and allows for up to 6.4 TB Sun Flash Accelerator F80 PCIe Cards. The Sun Server X4-8 is also the most dense x86 server with its 5U chassis, allowing 60% higher rack-level core and DIMM slot density than the competition.  There has been a lot of innovation in Oracle’s x86 product line, but the latest and most significant is a capability called elastic computing. This new capability is built into each Sun Server X4-8.   Elastic computing starts with the Intel processor. While Intel provides a wide range of processors each with a fixed combination of core count, operational frequency, and power consumption, customers have been forced to make tradeoffs when they select a particular processor. They have had to make educated guesses on which particular processor (core count/frequency/cache size) will be best suited for the workload they intend to execute on the server.Oracle and Intel worked jointly to define a new processor, the Intel Xeon E7-8895 v2 for the Sun Server X4-8, that has unique characteristics and effectively combines the capabilities of three different Xeon processors into a single processor. Oracle system design engineers worked closely with Oracle’s operating system development teams to achieve the ability to vary the core count and operating frequency of the Xeon E7-8895 v2 processor with time without the need for a system level reboot.  Along with the new processor, enhancements have been made to the system BIOS, Oracle Solaris, and Oracle Linux, which allow the processors in the system to dynamically clock up to faster speeds as cores are disabled and to reach higher maximum turbo frequencies for the remaining active cores. One customer, a stock market trading company, will take advantage of the elastic computing capability of Sun Server X4-8 by repurposing servers between daytime stock trading activity and nighttime stock portfolio processing, daily, to achieve maximum performance of each workload.To learn more about Sun Server X4-8, you can find more details including the data sheet and white papers here.Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Interpolation using a sprite's previous frame and current frame

    - by user22241
Overview: I'm currently using a method which, it has been pointed out to me, is extrapolation rather than interpolation. As a result, I'm now also looking into another method which is based on a sprite's position at its last (rendered) frame and its current one. Assuming an interpolation value of 0.5 this is, (visually), how I understand it should affect my sprite's position.... This is how I'm obtaining an interpolation value:

    public void onDrawFrame(GL10 gl) {
        // Set/re-set loop back to 0 to start counting again
        loops = 0;
        while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
            SceneManager.getInstance().getCurrentScene().updateLogic();
            nextGameTick += skipTicks;
            timeCorrection += (1000d / ticksPerSecond) % 1;
            nextGameTick += timeCorrection;
            timeCorrection %= 1;
            loops++;
            tics++;
        }
        interpolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
        render(interpolation);
    }

I am then applying it like so (in my rendering call):

    void render(float interpolation) {
        spriteScreenX = (spriteScreenX - spritePreviousX) * interpolation + spritePreviousX;
        spritePreviousX = spriteScreenX; // update and store this for next time
    }

Results: This unfortunately does nothing to smooth the movement of my sprite. It's pretty much the same as without the interpolation code. I can't get my head around how this is supposed to work and I honestly can't find any decent resources which explain this in any detail. My understanding of extrapolation is that when we arrive at the rendering call, we calculate the time between the last update call and the render call, and then adjust the sprite's position to reflect this time (moving the sprite forward). And yet this (interpolation) is moving the sprite back, so how can it produce smooth results? Any advice on this would be very much appreciated.

Edit: I've implemented the code from OriginalDaemon's answer like so:

    @Override
    public void onDrawFrame(GL10 gl) {
        newTime = System.currentTimeMillis() * 0.001;
        frameTime = newTime - currentTime;
        if (frameTime > (dt * 25))
            frameTime = (dt * 25);
        currentTime = newTime;
        accumulator += frameTime;
        while (accumulator >= dt) {
            SceneManager.getInstance().getCurrentScene().updateLogic();
            previousState = currentState;
            t += dt;
            accumulator -= dt;
        }
        interpolation = (float) (accumulator / dt);
        render();
    }

Interpolation values are now being produced between 0 and 1 as expected (similar to how they were in my original loop); however, the results are the same as with my original loop (my original loop allowed frames to skip if they took too long to draw, which I think this loop is also doing). I appear to have made a mistake in my previous logging: it is logging as I would expect (the interpolated position does appear to be in between the previous and current positions); however, the sprites are most definitely choppy when the render() skipping happens.

    Read the article

  • Silverlight Cream for April 08, 2010 -- #834

    - by Dave Campbell
    In this Issue: Michael Washington, Phil Middlemiss, Yochay Kiriaty, Giorgetti Alessandro, Mike Snow, John Papa, SilverLaw, smartyP, and Pete Brown. Shoutouts: Steve Wortham sent me a link to his RegEx tool that is written in Silverlight... definitely worth a look: Introducing Code Hinting for Regular Expressions Joshua Blake posted his MIX10 materials: MIX10 NUI session sample code From SilverlightCream.com: Silverlight MVVM: An (Overly) Simplified Explanation Michael Washington has a tutorial up for getting your arms (and head) around MVVM and Silverlight, and Blend too. A Chrome and Glass Theme - Part 3 Phil Middlemiss has part 3 up of his tutorial series on building an awesome theme for Silverlight... he's styling the textbox and checkbox this time around, and improving the button too Automatic Rotation Support or Automatic Multi-Orientation Layout Support for Windows Phone Yochay Kiriaty is giving up some WP7 goodness with his post on Multi-Orientation Layout Support ... yeah I had to say it twice myself :) good links and all the code in addition to the good blog post Silverlight Navigation Framework: resolve the pages using an IoC container Giorgetti Alessandro has some pretty cool code up as a proof of concept of using an IoC container with the Navigation Framework of Silverlight 4. Silverlight Tip of the Day No. 109 – Attach to Process Debugging Mike Snow is back doing Tips of the Day... and number 109 is showing how to attach the debugger to a running Silverlight app. Silverlight TV 20: Community Driven Development with WCF RIA Services In his latest Silverlight TV episode, John Papa talks with Jeff Handley about RIA Services, and how feedback from the community helped shape the product. ChildWindowMouseScrollResizeBehavior - Silverlight 3 SilverLaw has a new Behavior up at the Expression Gallery that gives you resizing on a ChildWindow using the Mouse Wheel. Creating a Windows Phone 7 Metro Style Pivot Application [Part 3] smartyP has the 3rd and final episode for his WP7 Pivot up, and this one includes not only the source but a video tutorial. Layout Rounding Pete Brown talks about Layout Rounding and it has nothing to do with rounding corners... it has to do with rounding off where your objects get placed pixel-wise ... I've blogged about this seemingly-anti-aliasing more than once... Pete has the real answer Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Just another web startup - platform comparison

    - by Holland
I'm looking to do a web startup which involves something along the lines of an ecommerce site, yet a little more in depth than that. While it's something that I would rather not go into detail with in terms of the initial idea, I can specify (on a basic level) what would be required of the website. If you have any observations or opinions derived from personal experience which relate to what you see here, I'd appreciate it if you could share them.

PayPal API interaction (definitely). From what I've read about their API, integrating with it on a website is VERY expensive, so I'd probably hold off on that until I've (hopefully) generated money, and write my own simple credit-card interaction system.

SQL backend (obviously). PostgreSQL seems like a pretty good choice, as from what I've read, its structure is a bit more "object-oriented" than, say, MySQL's. Then again, I've used MySQL before and haven't had much problem with it whatsoever. Would it be worth learning PostgreSQL for this purpose?

Java or .NET implementation (preferably Mono, so I can use .NET while hosting the website with Apache). The reason for this is because, frankly, while I know PHP is a great platform to develop websites with, I hate developing with it. Before someone chimes in and flames me for saying that, note that I have nothing against the language, I just don't like it for my purposes. While Mono may be good to go with, I'm aware that ASP.NET MVC 3 hasn't been released for Mono yet, which may be a pain to work with without their Razor syntax. On top of that, it seems Java is completely FULL of class libraries which deal with web development that can be downloaded from the web. If anyone has any experience with these, I'd appreciate it if that were posted. From what I've read about Spring and Struts2, they seem to be the best out there - especially since they're (AFAIK) MVC. I've considered Python and Django, which do seem REALLY nice, but I don't know much Python, and I'd rather start with something that I already know (language-wise; not framework-wise) than dive into learning a language AND a new framework.

I'd REALLY like to be able to host my website via Apache, rather than using Windows Server or anything like that, as, frankly, I hate their setup. I'm not dissing it in any way, shape, or form, I'm just saying I dislike it. <3 terminal config. If there is a good reason to go with Windows Server, however, I'd be willing to learn it. C# has a lot of things that Java appears not to have, including delegates, unsigned types, and LINQ. Is there anything that Java has which can counter these?
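Since the question closes on a concrete language-feature comparison, here is a tiny illustrative C# sketch of the three features named (delegates, unsigned types, LINQ). It is my own example, not part of the original question, and all names in it are made up for illustration.

    using System;
    using System.Linq;

    class LanguageFeatureDemo
    {
        // A delegate type: a first-class, type-safe reference to a method.
        delegate bool Filter(uint value);

        static void Main()
        {
            uint threshold = 100;                 // unsigned integer type
            Filter isLarge = v => v > threshold;  // delegate instance created from a lambda

            var amounts = new uint[] { 5, 250, 42, 1000 };

            // LINQ: declarative querying over in-memory collections.
            var large = amounts.Where(a => isLarge(a))
                               .OrderBy(a => a)
                               .ToList();

            foreach (var a in large)
                Console.WriteLine(a);   // prints 250, then 1000
        }
    }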

    Read the article

  • Kickstarter "last minute cold feet"

    - by mm24
Today I scheduled the publication of a video on Kickstarter requesting approximately $5,000 in order to complete the iPhone shooter game I started 1 year ago after quitting my job. I have invested more than $20,000 in the game so far (for artwork, music, legal and accountant expenses) and I am now getting cold feet about my decision to publish the video. The game is "nearly finished"; in other words, the game mechanics are working but I still have some bugs to fix. Once I have finished this (I hope it will take me 1 or 2 weeks) I plan to start working on the actual level balancing (e.g. deciding the order of appearance of enemies for each level and balancing the number of hit points and strength of bullets that the enemies have).

Reasons for not publishing the video are: fear that the concept can be copied easily: the game is a shooter game set in a different environment (it's a pretty cool one, believe me :)) and I am worried that someone might copy the idea (I know, it's the usual "I am worried" story..). A shooter game is one of the easiest games to implement and hence there will be hundreds of game developers able to copy it by just adapting their existing code and changing the graphics (not as straightforward). It took me one year to develop this because I was inexperienced, plus there are approximately 6/7 months of work from the illustrator and 8 unique music tracks composed. The soundtrack of the video is the soundtrack of the game, which is not yet published and has not been deposited with a music society. I did create legally valid timestamps for the tracks and I am considering uploading the album to iTunes before publishing the video so I can have a certain publication date. But overall I am a bit scared and worried because I have never done this before, and even the simple act of publishing an album requires me to read a long contract from the "aggregator company" which, even though I do have contracts with the musicians, does worry me as I am not a U.S. resident and I am not familiar with the U.S. legal system.

Reasons for publishing the video are: I have almost run out of money (but this is not a real reason as I should have enough for one more month of development time) ...I kind of need extra money as, even if I do have money for 1 month of development, I do not have money for marketing and for other expenses (e.g. accountant). It will create a fan base. I could get some useful feedback from a wider range of beta testers. It might create some pre-release buzz in case some blogger or game magazine likes the concept.

Has anyone had similar experiences? Is there a real risk that someone will copy the concept and implement it in a couple of months? Will the Kickstarter campaign be good pre-release exposure for the game? Any references to similar projects/situations? Is it realistic that someone like ROVIO will copy the idea straight away?

    Read the article

  • SQL SERVER – Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1

    - by pinaldave
Earlier on this blog we had asked two puzzles. The response from all of you has been nothing but amazing. I have received 350+ responses. Many are valid and many were indeed something I had not thought of. I strongly suggest you read all the puzzles and their answers here - trust me, if you start reading the comments you will not stop till you have read every single comment. Seriously, trust me on it. Personally I have learned a lot from it. Let us recap the puzzles here quickly. Puzzle 1: Why does the following code, when executed in SSMS, display the result as a * (star)? SELECT CAST(634 AS VARCHAR(2)) Puzzle 2: Write the shortest code that produces the result 1 without using any numbers in the select statement. Bonus Q: How many different operating systems (OS) does NuoDB support? As I mentioned earlier, the participation was nothing but amazing. I will write about the winners and the best answers shortly. Meanwhile I will give to-the-point answers to the above puzzles. Solution 1: When you convert character or binary expressions (char, nchar, nvarchar, varchar, binary, or varbinary) to an expression of a different data type, data can be truncated, only partially displayed, or an error is returned because the result is too short to display. Conversions to char, varchar, nchar, nvarchar, binary, and varbinary are truncated, except for the conversions shown in the following table. Reference of the text and table from MSDN. Solution 2: The shortest code to produce the answer 1: SELECT EXP($) or SELECT COS($) or SELECT DAY($) When you SELECT $ it gives the result 0.00, and the EXP of that is 1. I believe it is pretty neat. There were plenty of other answers but this was the shortest. Another shorter answer would be PRINT EXP($), but no one proposed that, as in the original question I explicitly mentioned SELECT. Bonus Answer: 5 OS: Windows, MacOS, Linux, Solaris, Joyent SmartOS. Reference: Please do read every single comment here. Do leave a comment saying which one you think is the best comment of all. Meanwhile, if there is a better solution and I have missed it, do let me know as we still have time to correct it. I will be selecting the winner before the weekend as I go through each and every one of the 350 comments. I will be selecting the best comments along with the winning comment. If our selection matches – one of you may still win something cool.  Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB

    Read the article

  • European e-government Action Plan all about interoperability

    - by trond-arne.undheim
Yesterday, the European Commission released its European eGovernment Action Plan for 2011-2015. The plan includes measures on providing deeper user empowerment, enhancing the Internal Market, more efficiency and effectiveness of public administrations, and putting in place pre-conditions for developing e-government. The Good - Defines interoperability very clearly. Calls interoperability "a pre-condition for cross-border eGovernment services" (a very strong formulation) and says interoperability "is supported by open specifications". - Uses the terminology "open specifications" which, let's face it, is pretty close to "open standards", which is the term the rest of the world would use. - Confirms that Member States are fully committed to the political priorities of the Malmö Declaration (which was all about open standards), including the very strong action: by 2013, all Member States will have incorporated the political priorities of the Malmö Declaration in their national strategies. Such tight Action Plan integration between Commission and Member State priorities has seldom been attempted before, particularly not in a field where European legal competence is virtually non-existent. What we see now is the subtle force of soft power rather than the rough force of regulation. In this case, it is the Member States who want Europe to take the lead. Very refreshing! Some quotes that show the commitment to interoperability and open specifications: "The emergence of innovative technologies such as "service-oriented architectures" (SOA), or "clouds" of services, together with more open specifications which allow for greater sharing, re-use and interoperability reinforce the ability of ICT to play a key role in this quest for efficiency in the public sector." (p.4) "Interoperability is supported through open specifications" (p.13) 2.4.1. Open Specifications and Interoperability (p.13 has a whole section dedicated to this important topic; open specifications and interoperability are nearly 100% interrelated): "Interoperability is the ability of systems and machines to exchange, process and correctly interpret information. It is more than just a technical challenge, as it also involves legal, organisational and semantic aspects of handling data" (p.13) "standards and open platforms offer opportunities for more cost-effective use of resources and delivery of services" (p.13). The Bad - Shies away from defining open standards, or even open specifications, the EU's preferred term for the key enabler of interoperability. Verdict: 90/100, a very respectable score.

    Read the article

  • JavaScript Sucks.

    - by Matt Watson
JavaScript Sucks. Yes, I said it. Microsoft's announcement of TypeScript got me thinking today. Is this a step in the right direction? It sounds like it fixes a lot of problems with JavaScript development. But is it really just duct tape and super glue for a programming model that needs to be replaced? I have had a love-hate relationship with JavaScript, like most developers who would prefer to avoid client side code. I started doing web development over 10 years ago and I have done some pretty cool stuff with JavaScript. It has come a long way and is the universal standard these days for client side scripting in the web browser. Over the years the browsers have become much faster at processing JavaScript. Now people are even trying to use it on the server side via node.js. OK, so why do I think JavaScript sucks? Well, first off, as an enterprise web application developer, I don't like any scripting or dynamic languages. I like code that compiles, for lots of obvious reasons. JavaScript is messy to code with and lacks all kinds of modern programming features. We spend a lot of time trying to hack it to do things it was never really designed for. Ever try to use different jQuery based plugins that require conflicting jQuery versions? Yeah, that sucks. How about trying to figure out how to make 20 JavaScript include files load quicker as one request? Yeah, that sucks too. Performance? Let me just point to the old Facebook mobile app made with JS & HTML5. It sucked. Enough said. How about unit testing JavaScript? I've never tried it, but it sure sounds like fun. My biggest problem with JavaScript is code security. If I make some awesome product, there is no way to protect my code. How can we expect game makers to write apps in 100% JavaScript and HTML5 if they can't protect their intellectual property? There are compiling tools like Closure, unit test frameworks, minifiers, CoffeeScript, TypeScript and a bunch of other tools. But to me, they all try to make up for the weaknesses and problems of JavaScript. JavaScript is a mess and we spend a lot of time trying to work around all of its problems. It is possible to program in Silverlight, Java or Flash and run that in the browser instead of JavaScript, but they all have their own problems and lack universal mobile support. I believe Microsoft's new TypeScript is a step forward for JavaScript, but I think we need to start planning to go in a whole different direction. We need a new universal client side programming model, because JavaScript sucks.

    Read the article

  • MVC data binding

    - by user441521
I'm using MVC, but I've read that MVVM is sort of about data binding and having pure markup in your views that binds back to the backend via the data-* attributes. I've looked at Knockout, but it looks pretty low level, and I feel like I can make a library that does this and is much easier to use, where basically you only need to call one JavaScript function that will data bind your entire page because of the data-* attributes you assign to HTML elements. The benefit of this (that I see) is that your view is 100% decoupled from your back-end, so that a given view never has to be changed if your back-end changes (i.e., for ASP.NET people, no more Razor in your view that makes your view specific to MS). My question would be: I know Knockout is out there, but are there any others that provide this data binding functionality for MVC-type applications? I don't want to recreate something that may already exist, but I want to make something "better" and easier to use than Knockout. To give an example of what I mean, here is all the code one would need to get data binding in my library. This isn't final, but it shows the idea that all you have to do is call one JavaScript function and set some data-* attribute values and everything ties together. Is this worth seeing through?

    <script>
        $(function () {
            // this is all you have to call to make databinding for POST or GET to work
            DataBind();
        });
    </script>
    <form id="addCustomer" data-bind="Customer" data-controller="Home" data-action="CreateCustomer">
        Name: <input type="text" data-bind="Name" data-bind-type="text" />
        Birthday: <input type="text" data-bind="Birthday" data-bind-type="text" />
        Address: <input type="text" data-bind="Address" data-bind-type="text" />
        <input type="submit" value="Save" id="btnSave" />
    </form>

    =================================================

    // controller action
    [HttpPost]
    public string CreateCustomer(Customer customer)
    {
        if (customer.Name == "Rick")
            return "success";
        return "failure";
    }

    // model
    public class Customer
    {
        public string Name { get; set; }
        public DateTime Birthday { get; set; }
        public string Address { get; set; }
    }

    Read the article

  • Auto Mocking using JustMock

    - by mehfuzh
Auto mocking containers are designed to reduce the friction of keeping unit test beds in sync with the code being tested as systems are updated and evolve over time. That is the one-sentence, more or less formal, definition of auto mocking. Put informally, auto mocking containers are nothing but a tool to keep your tests in sync so that you don’t have to go back and change tests every time you add a new dependency to your SUT, or System Under Test. In Q3 2012 JustMock ships with a built-in auto mocking container. This will give developers all the fun they are already having with JustMock, plus they can now mock objects with dependencies in a more elegant way and without doing the homework of managing the dependency graph. If you are not familiar with auto mocking, I won't go ahead and educate you here, but rather ask you to do so from the content that is already available from the community, as this is beyond the scope of this post. Moving forward, getting started with JustMock auto mocking is pretty simple. First, I have to reference Telerik.JustMock.Container.DLL from the installation folder along with Telerik.JustMock.DLL (of course), which it uses internally, and next I will write my tests with the mocking container. It's that simple! In this post I will first mock the target with dependencies using the current method, and then do the same with the auto mocking container. In short, the sample is all about a report builder that will go through all the existing reports, send emails and log any exception in that process. This is roughly what my report builder class looks like: the Reporter class depends on the following interfaces: IReportBuilder: used to create and get the available reports; IReportSender: used to send the reports; ILogger: used to log any exception. Now, if I just write the test without using an auto mocking container, it might end up something like this: it looks fine, but the only issue is that I am creating the mock of each dependency, which is sort of grunt work, and if you have an ever-changing list of dependencies it becomes really hard to keep the tests in sync. The typical example is your ASP.NET MVC controller, where the number of service dependencies grows along with the project. The same test, written with the auto mocking container, would look quite different. Here are a few things to observe: I didn't create a mock for each dependency; there is no extra step creating the Reporter class and passing in the dependencies; and since ILogger is not required for the purpose of this test, I can be completely ignorant of it. How cool is that? Auto mocking in JustMock has just been released and we also want to extend it even further using the profiler, which will let me resolve not just interfaces but concrete classes as well. But that of course starts the debate of code smell vs. working with legacy code. Feel free to send in your expert opinion in that regard using one of Telerik’s official channels. Hope that helps
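The code listings from the original post were images and do not survive in this excerpt, so here is a minimal hedged sketch of what the class under test and the two tests could look like. The interface names come from the description above; the Reporter members, the Report type and the exact MockingContainer calls are my own assumptions based on the shipped JustMock API, not the author's original code.

    using System;
    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Telerik.JustMock;
    using Telerik.JustMock.AutoMock;

    public class Report { public string Name { get; set; } }
    public interface IReportBuilder { IList<Report> CreateReports(); } // create and get the available reports
    public interface IReportSender { void Send(Report report); }       // send the reports
    public interface ILogger { void Error(Exception ex); }             // log any exception

    public class Reporter
    {
        private readonly IReportBuilder builder;
        private readonly IReportSender sender;
        private readonly ILogger logger;

        public Reporter(IReportBuilder builder, IReportSender sender, ILogger logger)
        {
            this.builder = builder;
            this.sender = sender;
            this.logger = logger;
        }

        public void SendReports()
        {
            try
            {
                foreach (var report in builder.CreateReports())
                    sender.Send(report);
            }
            catch (Exception ex)
            {
                logger.Error(ex);
            }
        }
    }

    [TestClass]
    public class ReporterTests
    {
        [TestMethod]
        public void ShouldSendEveryReport_ManualMocks()
        {
            // Without the container: every dependency is mocked and wired by hand.
            var builder = Mock.Create<IReportBuilder>();
            var sender = Mock.Create<IReportSender>();
            var logger = Mock.Create<ILogger>();
            Mock.Arrange(() => builder.CreateReports()).Returns(new List<Report> { new Report() });
            Mock.Arrange(() => sender.Send(Arg.IsAny<Report>())).MustBeCalled();

            new Reporter(builder, sender, logger).SendReports();

            Mock.Assert(sender);
        }

        [TestMethod]
        public void ShouldSendEveryReport_AutoMocked()
        {
            // With the container: dependencies are resolved automatically and ILogger is never mentioned.
            var container = new MockingContainer<Reporter>();
            container.Arrange<IReportBuilder>(b => b.CreateReports()).Returns(new List<Report> { new Report() });
            container.Arrange<IReportSender>(s => s.Send(Arg.IsAny<Report>())).MustBeCalled();

            container.Instance.SendReports();

            container.Assert();
        }
    }

The second test is the point of the post: adding a fourth constructor dependency to Reporter later would not require touching it at all.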

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying a suggestion without proper testing can damage your system. Question: “Pinal, we have 80 GB of data including all the database files, and our data is on an NTFS file system. We have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise.” When I receive questions like the one I have just listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things I believe can be built from the community's knowledge base. Today I need you to help me complete this list. I will start the list and you help me complete it. NTFS File System Performance Best Practices for SQL Server:
- Disable indexing on disk volumes
- Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
- Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
- Keep some space empty (let us say 15% for reference) on the drive if possible (only on the Filestream data storage volume)
- Defragment the volume
- Add your suggestions here…
The one which often causes a pretty big debate is the NTFS allocation unit size. I have seen that on the disk volume which stores filestream data, increasing the allocation unit size from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. Here is where I would like to request that you share your experience related to NTFS allocation size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify it. Please note that the above list was prepared assuming SQL Server is the only application running on the computer system. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV

    - by wim.coekaerts
One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground. It allows me to study the products in great detail and put them to use in ways that individual product teams don't always intend them to be used for :) but that makes it fun. I have a lot of this set up at home because... work is sort of a hobby and I just like to tinker with it. Anyway, a few weeks ago I was looking at my Sun Ray rig at home and how well 3D works. Google Earth and some basic OpenGL tests like glxspheres combined with VirtualGL. It resulted in some very cool demos recorded with my little camera (sorry for the crappy quality of the video :-) : OVDC (soft client) on my mac Sun Ray 2FS Never mind the hiccups during zoom, that's because I was using the scroll wheel on my mouse and I can't scroll uninterrupted :) Anyway, this is quite cool! The setup for this was the following: Sun Ray on LAN, Sun Ray Server 5 latest installed on OL5.5 inside a VM running on Oracle VM 2.2 (hardware virt, with a virtual network (vif)), and the VirtualGL rendering happened on another box (wopr5) that runs Linux on a little Atom D520 with an ION2 GPU. So network goes from Sun Ray to Sun Ray Server to wopr5 and back. Given that this is full screen 3D it puts a good amount of load on the network and it's pretty cool that SRS was just a VM :) So, separately, I had written a little blog entry about using SR-IOV and Oracle VM a while back. link to sriov blog entry Last night when I came home I wanted to do some more playing around with SR-IOV and live migrate. To do this, I wanted to set up a VM with 2 network interfaces, one virtual network (vif) and then one that's one of the SR-IOV virtual functions from my network card. Inside the guest they show as eth0 and eth1, and then we bond them using a standard Linux bonding device (bond0 here) with active-active links. The goal here is that on live migrate, we detach the VF (eth1 in the guest in this case), the bond just hums along on eth0 (vif), we live migrate the VM, and then on the other server after the migrate completes we re-attach a VF to the VM there; eth1 pops up again and the bond uses both eth0/eth1 to do its work. So, to set this up, I figured, why not use my Sun Ray server VM, because the 3D work generates a nice network load and is very latency/timing sensitive. In the end, I ran glxspheres on my Sun Ray server (VM) displaying on my Sun Ray 2FS, and while that was running, I did my live migrate test of this VM (unplug PCI VF, migrate, reconnect VF) and guess what, it just kept running :) Veryyyyyy cool. Now, it was supposed to, but it's always nice to see it actually work, for real. Here's a diagram of it. No gimmicks - just real technology at work! Enjoy :)

    Read the article

  • Graphics card fan is loud (additional graphics card drivers cause problems)

    - by tk4muffin
Okay, this explanation is a bit longer... but I'll start at the beginning: I had been using Windows 7 for a very long time; shortly after the release of v12.10 I installed Ubuntu via the Windows installer. Everything worked fine but the fan of the graphics card. After a bit of research I found out that I just had to select a different driver (nvidia-current (proprietary, tested) worked pretty well). This also fixed some graphical bugs that appeared when I logged into my account. Through my university I got an MSDNAA account (which allows me to download every Windows OS for free). I downloaded and installed Windows 8. After configuration I installed Ubuntu via the Windows installer once again and the first couple of launches of Ubuntu went well. Suddenly Ubuntu didn't launch anymore... it was caused by some hard-disk errors and I had no clue what to do. So I kept working on Windows 8 - unfortunately. After playing around with the new Windows, I put my PC into sleep mode. I couldn't wake my PC up and it wasn't responding to anything (neither mouse movement, clicks or keyboard strokes, nor the power button and the reset button worked), so I pulled the plug. Turns out, this was a huge mistake. Somehow the BIOS broke and after restarting a couple of times, the BIOS repaired itself. Neither Windows 8 nor Ubuntu was bootable. Now I had to install Ubuntu several times, because after rebooting, Unity was hidden and I didn't know what the problem was or how to fix it. I finally realized that this problem was caused by the graphics card driver, which I had changed to nvidia-current (this driver worked fine before my PC "crashed"). So I installed Windows 8 again and after a bit of usage I installed Ubuntu once again (via DVD). The booting of Ubuntu and Windows works fine - so far. But I'm still not able to change the graphics card driver without Unity hiding away after restarting the OS. The noisy fan is really disturbing my work... PC Specs:
Processor: Intel Core 2 Duo CPU E8400 @ 3GHz x2
Memory: 7.8 GB
OS type: 64-bit
Graphics Card: GeForce 9600 GT
Motherboard: Asus P5Q
I hope the information given is enough.

    Read the article

  • My Body Summary template for Orchard

    - by Bertrand Le Roy
By default, when Orchard displays a content item such as a blog post in a list, it uses a very basic summary template that removes all markup and then extracts the first 200 characters. Removing the markup has the unfortunate effect of removing all styles and images, in particular the image I like to add to the beginning of my posts. Fortunately, overriding templates in Orchard is a piece of cake. Here is the Common.Body.Summary.cshtml file that I drop into the Views/Parts folder of pretty much all Orchard themes I build:

    @{
        Orchard.ContentManagement.ContentItem contentItem = Model.ContentPart.ContentItem;
        var bodyHtml = Model.Html.ToString();
        var more = bodyHtml.IndexOf("<!--more-->");
        if (more != -1) {
            bodyHtml = bodyHtml.Substring(0, more);
        }
        else {
            var firstP = bodyHtml.IndexOf("<p>");
            var firstSlashP = bodyHtml.IndexOf("</p>");
            if (firstP >= 0 && firstSlashP > firstP) {
                bodyHtml = bodyHtml.Substring(firstP, firstSlashP + 4 - firstP);
            }
        }
        var body = new HtmlString(bodyHtml);
    }
    <p>@body</p>
    <p>@Html.ItemDisplayLink(T("Read more...").ToString(), contentItem)</p>

This template does not remove any tags, but instead looks for an HTML comment delimiting the end of the post’s intro: <!--more--> This is the same convention that is used in WordPress, and it’s easy to add from the source view in TinyMCE or Live Writer. If such a comment is not found, the template will extract the first paragraph (delimited by <p> and </p> tags) as the summary. And if it finds neither, it will use the whole post. The template also adds a localizable link to the full post.

    Read the article

< Previous Page | 343 344 345 346 347 348 349 350 351 352 353 354  | Next Page >